Agents

Create agents with instructions, models, tools, and memory. Blocking and streaming execution.

An agent is a language model (LLM) configured with:

  • Instructions — the system prompt that tells the model who it is and how it should respond.
  • Model — which LLM to call, plus any optional model-tuning parameters.
  • Tools — a list of functions or APIs the LLM can invoke to accomplish a task.
  • Memory — optional persistent memory the agent can search and add to.

Basic Usage

import { Agent } from "kernl";
import { anthropic } from "@kernl-sdk/ai/anthropic";
import { github } from "@/toolkits/github";

const agent = new Agent({
  id: "jarvis",
  name: "Jarvis",
  model: anthropic("claude-sonnet-4-5"),
  instructions: "You are a helpful assistant.",
  toolkits: [github],
  memory: { enabled: true },
});

Constructor

import { Agent } from "kernl";

const agent = new Agent<TContext, TOutput>(config);

Config

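As a sketch of the config shape, inferred from the examples on this page (the AgentConfig name and the loose types are assumptions, not the library's published signature):

interface AgentConfig<TContext, TOutput> {
  id: string;                      // unique agent id, e.g. "jarvis"
  name: string;                    // display name
  model: unknown;                  // model handle, e.g. anthropic("claude-sonnet-4-5")
  instructions: string | ((ctx: { context: TContext }) => string);
  toolkits?: unknown[];            // e.g. [github]
  memory?: { enabled: boolean };   // enable built-in memory tools (see Memory)
  output?: unknown;                // Zod schema describing TOutput (see Structured Output)
}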

Methods

run()

Blocking execution — waits for the full response.

const result = await agent.run(input, options?);

Parameters: the input (a prompt string in the examples on this page) and optional execute options; see Execute Options below.

Returns: Promise<ThreadExecuteResult>

interface ThreadExecuteResult<TResponse> {
  response: TResponse;  // string or structured output
  state: any;           // Thread state at completion
}
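
For example, with a plain-text agent (the prompt is illustrative):

const result = await agent.run("Summarize the open pull requests.");

console.log(result.response); // final text, or structured output (see Structured Output)
console.log(result.state);    // thread state at completion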

stream()

Streaming execution — returns events as they happen.

const stream = agent.stream(input, options?);

for await (const event of stream) {
  if (event.kind === "text.delta") {
    process.stdout.write(event.text);
  }
}

Parameters: Same as run().

Yields: ThreadStreamEvent — includes text.delta, tool.call, tool.result, finish, etc.
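
A slightly fuller loop over the documented event kinds; only the text.delta payload is shown on this page, so the other handlers below just log the kind:

const stream = agent.stream("What's new in the repo this week?");

for await (const event of stream) {
  switch (event.kind) {
    case "text.delta":
      process.stdout.write(event.text);  // incremental text
      break;
    case "tool.call":
    case "tool.result":
      console.log(`\n[${event.kind}]`);  // payload shape not shown here
      break;
    case "finish":
      console.log("\n[finished]");
      break;
  }
}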

Execute Options

Options passed to run or stream:

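A sketch of the options shape, based on the example below (the ExecuteOptions name is assumed; threadId and context are the fields shown on this page):

interface ExecuteOptions<TContext> {
  threadId?: string;   // continue an existing thread, e.g. "thread_abc123"
  context?: TContext;  // per-run context, passed to dynamic instructions
}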

Example with options

const result = await agent.run("What's on my calendar?", {
  threadId: "thread_abc123",
  context: {
    userId: "user_456",
    timezone: "America/New_York",
  },
});

Dynamic Instructions

Instructions can be a function that receives context:

interface UserContext {
  user: { name: string; role: string };
}

const agent = new Agent<UserContext>({
  id: "assistant",
  name: "Assistant",
  model: anthropic("claude-sonnet-4-5"),
  instructions: (ctx) => `
    You are helping ${ctx.context.user.name}.
    Their role is ${ctx.context.user.role}.
    Today is ${new Date().toDateString()}.
  `,
});

await agent.run("Hello", {
  context: { user: { name: "Alice", role: "admin" } },
});

Structured Output

Use a Zod schema for typed responses:

import { z } from "zod";

const agent = new Agent({
  id: "extractor",
  name: "Extractor",
  model: anthropic("claude-sonnet-4-5"),
  instructions: "Extract structured data from text.",
  output: z.object({
    name: z.string(),
    email: z.string().email(),
    confidence: z.number(),
  }),
});

const result = await agent.run("Contact: Alice at alice@example.com");
// result.response is typed as { name: string; email: string; confidence: number }
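
Since the response is typed by the schema, its fields can be used directly (the confidence value is illustrative):

console.log(result.response.name);        // "Alice"
console.log(result.response.email);       // "alice@example.com"
console.log(result.response.confidence);  // e.g. 0.97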

Thread Management

Agents have scoped thread methods:

// Get a thread
const thread = await agent.threads.get("thread_123");

// List threads for this agent
const threads = await agent.threads.list({ limit: 10 });

// Get thread history
const history = await agent.threads.history("thread_123", {
  limit: 50,
  order: "desc",
});

// Delete a thread
await agent.threads.delete("thread_123");

Memory

When memory: { enabled: true } is set, the agent gets memory tools automatically:

  • memories.search — semantic search over stored memories
  • memories.create — store new memories
  • memories.list — list memories with filters
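
For example, an agent with memory enabled can store a fact in one run and recall it in a later one; the model decides when to call the memory tools (the ids and prompts below are illustrative):

const notes = new Agent({
  id: "notes",
  name: "Notes",
  model: anthropic("claude-sonnet-4-5"),
  instructions: "Remember facts the user shares and recall them when asked.",
  memory: { enabled: true }, // exposes memories.search / memories.create / memories.list
});

// The model can call memories.create to store the fact.
await notes.run("My favorite editor is Helix.", { threadId: "thread_notes" });

// Later, the model can call memories.search to recall it.
const answer = await notes.run("What's my favorite editor?", { threadId: "thread_notes" });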

See Memory for details.
