
Chatbot — AI SDK

Build a streaming chatbot UI with kernl and the Vercel AI SDK.

This guide assumes you've completed Getting Started and know how to create an agent with toolkits.

We'll wire up a Hono server that streams agent responses to a React frontend using the Vercel AI SDK and useChat.

For a complete working example, see Jarvis — a full server + client implementation using Hono + Next.js.

Server

The server exposes a streaming endpoint that converts between AI SDK types and kernl's, then streams the response from the agent. Because the client passes the threadId, we only need to send the latest message:

import { Hono } from "hono";
import { createUIMessageStreamResponse, type UIMessage } from "ai";
import { UIMessageCodec, toUIMessageStream } from "@kernl-sdk/ai";

import { jarvis } from "@/agents/jarvis";

const app = new Hono();

app.post("/jarvis/stream", async (c) => {
  const { tid, message } = await c.req.json();

  // convert AI SDK UIMessage -> kernl input
  const input = await UIMessageCodec.decode(message as UIMessage);

  // stream the agent response
  const stream = jarvis.stream(input, { threadId: tid });

  // convert back to AI SDK format
  return createUIMessageStreamResponse({
    stream: toUIMessageStream(stream),
  });
});

Key utilities:

  • UIMessageCodec.decode() — Converts a Vercel AI SDK UIMessage into the input format kernl agents expect
  • toUIMessageStream() — Converts the kernl agent stream into a format the AI SDK can consume
  • createUIMessageStreamResponse() — Wraps the stream in a proper HTTP response
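For reference, the JSON body the endpoint above reads can be sketched as a small helper. This is illustrative only (not part of kernl), with the message typed loosely to keep the sketch standalone:

```typescript
// Minimal shape of the body POSTed to /jarvis/stream. `tid` is the kernl
// thread id; `message` is the latest AI SDK UIMessage (typed loosely here).
type UIMessageLike = { id: string; role: "user" | "assistant" | "system" };

function buildStreamBody(threadId: string, messages: UIMessageLike[]) {
  return {
    tid: threadId,
    // only the latest message — kernl replays prior history from the thread
    message: messages[messages.length - 1],
  };
}
```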

Client

On the frontend, we use useChat from the Vercel AI SDK with a custom transport that sends only the latest message (kernl handles history via threads). For the UI, we're using AI Elements — a component library for building chat interfaces.

See agentic-chatbot for a minimal Next.js chatbot template.

import { useChat } from "@ai-sdk/react";
import { DefaultChatTransport } from "ai";

import { Message, MessageContent } from "@/components/ai-elements/message";
import { PromptInput } from "@/components/ai-elements/prompt-input";

export function Chat({ threadId }: { threadId: string }) {

  // ...

  const { messages, sendMessage, status } = useChat({
    id: threadId,
    transport: new DefaultChatTransport({
      prepareSendMessagesRequest: ({ id, messages }) => ({
        api: `/jarvis/stream`,
        body: {
          tid: id,
          message: messages[messages.length - 1],
        },
      }),
    }),
  });

  return (
    <...>
      {messages.map((m) => (
        <Message from={m.role} key={m.id}>
          <MessageContent>
            // ...
          </MessageContent>
        </Message>
      ))}

      <PromptInput
        onSubmit={handleSubmit}
        multiple
        globalDrop
        accept="image/*"
      >
        // ...
      </PromptInput>
    </...>
  );
}
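The elided body of MessageContent typically maps over each message's parts. As a standalone sketch (assuming the AI SDK v5 shape, where text parts carry { type: "text", text }), extracting the text of a message might look like:

```typescript
// Text parts of an AI SDK v5 UIMessage; other part types (tool calls,
// reasoning, etc.) are represented here only by their `type` field.
type TextPart = { type: "text"; text: string };
type Part = TextPart | { type: string };

// Concatenate the text parts of a message, skipping everything else.
function textOf(parts: Part[]): string {
  return parts
    .filter((p): p is TextPart => p.type === "text")
    .map((p) => p.text)
    .join("");
}
```

In the component above you would render `m.parts` part-by-part instead of flattening to a string, so tool calls and other part types can get their own UI.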

Full example

See the Jarvis microproject for a complete implementation with:

  • Thread creation and management
  • Auto-generated titles
  • Tool call rendering
  • Linear + GitHub toolkits
