The Vercel AI SDK provides a unified interface for building AI applications with support for multiple providers. When integrated with Acontext, you get persistent session management, automatic task extraction, and seamless conversation resumption across sessions.
What This Integration Provides
Unified AI Interface: Support for multiple AI providers (OpenAI, Anthropic, etc.) through a single API
Session Persistence: Store conversation history across multiple agent runs and resume sessions seamlessly
Tool Calling: Define tools with Zod schemas and handle tool execution with full control
Task Extraction: Automatically identify and track tasks from agent conversations with progress updates
Quick Start
Download Template
Use acontext-cli to quickly set up a Vercel AI SDK project with Acontext integration:
acontext create my-ai-project --template-path "typescript/vercel-ai-basic"
If you haven't installed acontext-cli yet, install it first:

curl -fsSL https://install.acontext.io | sh
Manual Setup
If you prefer to set up manually:
Install dependencies
Install the Vercel AI SDK and Acontext TypeScript packages:

npm install ai @ai-sdk/openai @acontext/acontext dotenv zod

Or with yarn:

yarn add ai @ai-sdk/openai @acontext/acontext dotenv zod
Configure environment
Create a .env file with your API credentials:

OPENAI_API_KEY=your_openai_key_here
ACONTEXT_API_KEY=sk-ac-your-root-api-bearer-token
ACONTEXT_BASE_URL=http://localhost:8029/api/v1
OPENAI_BASE_URL= # Optional, for custom OpenAI-compatible endpoints
Never commit API keys to version control. Always use environment variables or secure secret management.
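If your project uses git, also make sure the .env file itself stays out of the repository, for example:

echo ".env" >> .gitignore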
Initialize clients
Create Vercel AI SDK provider and Acontext client instances:

import { createOpenAI } from '@ai-sdk/openai';
import { AcontextClient } from '@acontext/acontext';
import dotenv from 'dotenv';

dotenv.config();

// Create OpenAI provider
const openaiProvider = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: process.env.OPENAI_BASE_URL,
});

// Initialize Acontext client
const acontextClient = new AcontextClient({
  apiKey: process.env.ACONTEXT_API_KEY || 'sk-ac-your-root-api-bearer-token',
  baseUrl: process.env.ACONTEXT_BASE_URL || 'http://localhost:8029/api/v1',
  timeout: 60000,
});
How It Works
The Vercel AI SDK integration works by sending conversation messages to Acontext in OpenAI message format. The SDK uses generateText for text generation and requires manual tool execution.
Message Flow
1. Create session: Initialize a new Acontext session for your agent
2. Generate text: Use generateText with tools to get model responses
3. Handle tool calls: Manually execute tools when the model requests them
4. Send messages: Append each message (user, assistant, and tool) to Acontext
5. Extract tasks: After the conversation, flush the session and retrieve extracted tasks
6. Resume sessions: Load previous conversation history to continue where you left off
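Sketched end to end (a rough outline only; the client, model, and tools referenced here are defined in the sections below):

// 1. Create session
const space = await acontextClient.spaces.create();
const session = await acontextClient.sessions.create({ spaceId: space.id });

// 4. Send the user message to Acontext in OpenAI format
const userMsg = { role: 'user', content: 'Hello!' };
await acontextClient.sessions.sendMessage(session.id, userMsg, { format: 'openai' });

// 2 + 3. Generate text; any tool calls are executed manually in a loop (see below)
const result = await generateText({ model, messages: [userMsg], tools });

// 5. Flush the session and retrieve extracted tasks
await acontextClient.sessions.flush(session.id);
const tasks = await acontextClient.sessions.getTasks(session.id);

// 6. Resume later by reloading the stored history
const history = await acontextClient.sessions.getMessages(session.id, { format: 'openai' });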
Important Notes
Message format limitations: Vercel AI SDK v5 only accepts 'user' and 'assistant' roles in the messages array. Tool results must be converted to user messages for the next iteration, but marked as internal so they aren't sent to Acontext.
Tool execution: Even though tools have execute functions defined, you still need to manually handle tool calls and pass results back to the model in a format it can understand.
Basic Integration Pattern
Here’s the core pattern for integrating Vercel AI SDK with Acontext:
import { generateText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { AcontextClient } from '@acontext/acontext';

// Create provider and model
const openaiProvider = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
const model = openaiProvider('gpt-4o-mini');

// Initialize Acontext
const acontextClient = new AcontextClient({
  apiKey: 'sk-ac-your-root-api-bearer-token',
  baseUrl: 'http://localhost:8029/api/v1',
});

// Create session
const space = await acontextClient.spaces.create();
const session = await acontextClient.sessions.create({ spaceId: space.id });

// Build conversation
let conversation: any[] = [];
const userMsg = { role: 'user', content: 'Hello!' };
conversation.push(userMsg);
await acontextClient.sessions.sendMessage(session.id, userMsg, {
  format: 'openai',
});

// Generate text
const result = await generateText({
  model,
  messages: conversation,
});

// Send assistant response to Acontext
const assistantMsg = {
  role: 'assistant',
  content: result.text,
};
conversation.push(assistantMsg);
await acontextClient.sessions.sendMessage(session.id, assistantMsg, {
  format: 'openai',
});
Tool Calling
The Vercel AI SDK uses Zod schemas to define tools. Tools must include an execute function, but you still need to manually handle tool calls in a loop.
Define your tools using the tool() function with Zod schemas:
import { tool } from 'ai';
import { z } from 'zod';

const tools = {
  get_weather: tool({
    description: 'Returns weather info for the specified city.',
    inputSchema: z.object({
      city: z.string().describe('The city to get weather for'),
    }),
    execute: async ({ city }: { city: string }) => {
      return `The weather in ${city} is sunny`;
    },
  }),
  book_flight: tool({
    description: 'Book a flight.',
    inputSchema: z.object({
      from_city: z.string().describe('The departure city'),
      to_city: z.string().describe('The destination city'),
      date: z.string().describe('The date of the flight'),
    }),
    execute: async ({ from_city, to_city, date }: { from_city: string; to_city: string; date: string }) => {
      return `Flight booked successfully for '${from_city}' to '${to_city}' on '${date}'`;
    },
  }),
};
Handle tool calls in a loop until the agent provides a final response:
async function runAgent(conversation: any[]): Promise<[string, any[]]> {
  const model = createModel();
  const newMessages: any[] = [];
  const maxIterations = 10;
  let iteration = 0;
  let finalContent = '';

  while (iteration < maxIterations) {
    iteration += 1;

    // Filter messages for Vercel AI SDK (only user and assistant)
    const messagesToSend = conversation
      .filter((msg: any) => {
        const role = msg.role;
        return (role === 'user' || role === 'assistant') && !msg._internal;
      })
      .map((msg: any) => {
        // Ensure content is a string
        let content = msg.content;
        if (Array.isArray(content)) {
          content = content.map((item: any) =>
            typeof item === 'string' ? item : item.text || item.content || ''
          ).join(' ');
        }
        if (typeof content !== 'string') {
          content = String(content || '');
        }
        return { role: msg.role, content };
      });

    const result = await generateText({
      model,
      system: 'You are a helpful assistant',
      messages: messagesToSend,
      tools,
    });

    const messageDict: any = {
      role: 'assistant',
      content: result.text,
    };

    // Handle tool calls
    const toolCallsWithFunction: Array<{
      id: string;
      function: { name: string; arguments: string };
    }> = [];
    if (result.toolCalls && result.toolCalls.length > 0) {
      messageDict.tool_calls = result.toolCalls.map((tc: any) => {
        // Get arguments from tool call
        let args = tc.args || tc.parameters || tc.input || {};
        if (typeof args === 'string') {
          try {
            args = JSON.parse(args);
          } catch {
            args = {};
          }
        }
        const argsString = JSON.stringify(args);
        toolCallsWithFunction.push({
          id: tc.toolCallId,
          function: {
            name: tc.toolName,
            arguments: argsString,
          },
        });
        return {
          id: tc.toolCallId,
          type: 'function',
          function: {
            name: tc.toolName,
            arguments: argsString,
          },
        };
      });
    }

    conversation.push(messageDict);
    newMessages.push(messageDict);

    // If there are no tool calls, we're done
    if (!result.toolCalls || result.toolCalls.length === 0) {
      finalContent = result.text || '';
      break;
    }

    // Execute tool calls manually
    const toolResults: Array<{ toolName: string; result: string; toolCallId: string }> = [];
    for (const toolCallInfo of toolCallsWithFunction) {
      const functionName = toolCallInfo.function.name;
      const functionArgs = JSON.parse(toolCallInfo.function.arguments);
      const functionResult = executeTool(functionName, functionArgs);
      toolResults.push({
        toolName: functionName,
        result: functionResult,
        toolCallId: toolCallInfo.id,
      });

      // Create tool message for Acontext
      const toolMessage = {
        role: 'tool' as const,
        tool_call_id: toolCallInfo.id,
        content: functionResult,
      };
      newMessages.push(toolMessage);
    }

    // Convert tool results to user message for next iteration
    // Mark as internal so it won't be sent to Acontext
    if (toolResults.length > 0) {
      const toolResultsText = toolResults
        .map(tr => `${tr.toolName} returned: ${tr.result}`)
        .join('\n');
      const toolResultUserMessage = {
        role: 'user' as const,
        content: `Tool execution results:\n${toolResultsText}`,
        _internal: true, // Mark as internal
      };
      conversation.push(toolResultUserMessage);
    }
  }

  return [finalContent, newMessages];
}
Send Messages to Acontext
Send all messages (excluding internal ones) to Acontext:
async function appendMessage(
  message: any,
  conversation: any[],
  sessionId: string
): Promise<any[]> {
  // Skip internal messages (tool results converted to user messages)
  if (message._internal) {
    conversation.push(message);
    return conversation;
  }
  conversation.push(message);
  await acontextClient.sessions.sendMessage(sessionId, message, {
    format: 'openai',
  });
  return conversation;
}

// After running agent
const [responseContent, newMessages] = await runAgent(conversation);
for (const msg of newMessages) {
  conversation = await appendMessage(msg, conversation, session.id);
}
Complete Example
This example demonstrates a multi-turn conversation with tool calling and task extraction:
import { generateText, tool } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { z } from 'zod';
import { AcontextClient } from '@acontext/acontext';
import dotenv from 'dotenv';

dotenv.config();

// Initialize Acontext
const acontextClient = new AcontextClient({
  apiKey: process.env.ACONTEXT_API_KEY || 'sk-ac-your-root-api-bearer-token',
  baseUrl: process.env.ACONTEXT_BASE_URL || 'http://localhost:8029/api/v1',
  timeout: 60000,
});

// Tool implementations
function getWeather(city: string): string {
  return `The weather in ${city} is sunny`;
}

function executeTool(toolName: string, toolArgs: Record<string, any>): string {
  if (toolName === 'get_weather') {
    return getWeather(toolArgs.city);
  } else {
    return `Unknown tool: ${toolName}`;
  }
}

// Tool definitions
const tools = {
  get_weather: tool({
    description: 'Returns weather info for the specified city.',
    inputSchema: z.object({
      city: z.string().describe('The city to get weather for'),
    }),
    execute: async ({ city }: { city: string }) => {
      return getWeather(city);
    },
  }),
};

// Create provider
const openaiProvider = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

function createModel() {
  return openaiProvider('gpt-4o-mini');
}

async function appendMessage(
  message: any,
  conversation: any[],
  sessionId: string
): Promise<any[]> {
  if (message._internal) {
    conversation.push(message);
    return conversation;
  }
  conversation.push(message);
  await acontextClient.sessions.sendMessage(sessionId, message, {
    format: 'openai',
  });
  return conversation;
}

async function main(): Promise<void> {
  // Create space and session
  const space = await acontextClient.spaces.create();
  const session = await acontextClient.sessions.create({ spaceId: space.id });
  let conversation: any[] = [];

  // First interaction
  const userMsg = { role: 'user', content: "What's the weather in Helsinki?" };
  conversation = await appendMessage(userMsg, conversation, session.id);

  // Run agent with tool calling
  const [responseContent, newMessages] = await runAgent(conversation);

  // Send all messages to Acontext
  for (const msg of newMessages) {
    conversation = await appendMessage(msg, conversation, session.id);
  }

  // Extract tasks
  await acontextClient.sessions.flush(session.id);
  const tasksResponse = await acontextClient.sessions.getTasks(session.id);
  console.log('Extracted tasks:');
  for (const task of tasksResponse.items) {
    console.log(`Task: ${task.data['task_description']}`);
    console.log(`Status: ${task.status}`);
  }
}

main().catch(console.error);
Key Features
Session Persistence
Resume conversations by loading previous messages from Acontext:
// Load previous conversation
const messages = await acontextClient.sessions.getMessages(sessionId, {
  format: 'openai',
});
const conversation: any[] = messages.items;

// Continue conversation
conversation.push({
  role: 'user',
  content: 'Summarize our conversation',
});
const [responseContent] = await runAgent(conversation);
console.log(responseContent);
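The example above assumes you still have the sessionId from a previous run. One simple way to carry it across process runs is to persist it locally (a sketch; the file name is arbitrary and any persistent store works):

import fs from 'node:fs';

const SESSION_FILE = '.acontext-session'; // hypothetical local state file

function saveSessionId(id: string): void {
  fs.writeFileSync(SESSION_FILE, id, 'utf8');
}

function loadSessionId(): string | null {
  return fs.existsSync(SESSION_FILE)
    ? fs.readFileSync(SESSION_FILE, 'utf8').trim()
    : null;
}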
Task Extraction
After completing a conversation, extract tasks with their status and metadata:
// Flush session to trigger task extraction
await acontextClient.sessions.flush(sessionId);

// Retrieve extracted tasks
const tasksResponse = await acontextClient.sessions.getTasks(sessionId);
for (const task of tasksResponse.items) {
  console.log(`Task: ${task.data['task_description']}`);
  console.log(`Status: ${task.status}`);

  // Access progress updates if available
  if ('progresses' in task.data) {
    for (const progress of task.data['progresses'] as any[]) {
      console.log(`  Progress: ${progress}`);
    }
  }

  // Access user preferences if available
  if ('user_preferences' in task.data) {
    for (const pref of task.data['user_preferences'] as any[]) {
      console.log(`  Preference: ${pref}`);
    }
  }
}
Message Format Handling
The Vercel AI SDK has specific requirements for message formats:
// Filter messages for Vercel AI SDK
const messagesToSend = conversation
  .filter((msg: any) => {
    // Only user and assistant roles, exclude internal messages
    const role = msg.role;
    return (role === 'user' || role === 'assistant') && !msg._internal;
  })
  .map((msg: any) => {
    // Ensure content is always a string
    let content = msg.content;
    if (Array.isArray(content)) {
      content = content.map((item: any) =>
        typeof item === 'string' ? item : item.text || item.content || ''
      ).join(' ');
    }
    if (typeof content !== 'string') {
      content = String(content || '');
    }
    return { role: msg.role, content };
  });
Best Practices
Internal messages: Mark tool results converted to user messages with _internal: true so they aren't sent to Acontext but are still used for the next model iteration.
Content format: Always ensure message content is a string. The Vercel AI SDK doesn't accept array content for user and assistant messages.
Tool execution: Even though tools have execute functions, you still need to manually handle tool calls in a loop and convert results to the appropriate format for the next iteration.
Message filtering: Filter out internal messages and ensure only 'user' and 'assistant' roles are sent to generateText, as the SDK doesn't support the 'tool' role in messages.
Format specification: Always specify format: 'openai' when sending messages to Acontext to ensure proper format handling.
In your production agent, you don't need to call the flush method after each conversation; Acontext automatically flushes the buffer when it is full or idle. To understand the buffer mechanism, refer to Session Buffer Mechanism.
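For example, a short-lived script can flush explicitly so extraction completes before it reads tasks, while a long-running agent can simply keep appending messages and rely on the automatic flush:

// Short-lived script: flush explicitly before reading tasks
await acontextClient.sessions.flush(session.id);
const tasks = await acontextClient.sessions.getTasks(session.id);

// Long-running agent: keep sending messages; the buffer flushes on its own
// when it fills up or the session goes idle (nextMsg is a placeholder here)
await acontextClient.sessions.sendMessage(session.id, nextMsg, { format: 'openai' });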
Differences from OpenAI SDK
The Vercel AI SDK differs from the basic OpenAI SDK in several key ways:
Unified Provider Interface
The Vercel AI SDK provides a unified interface for multiple AI providers (OpenAI, Anthropic, etc.) through a single API, making it easy to switch providers.
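For instance, assuming the @ai-sdk/anthropic package is installed, swapping the model in the examples above is a one-line change; the generateText calls, tools, and Acontext integration stay the same (the model name below is illustrative):

import { createAnthropic } from '@ai-sdk/anthropic';

const anthropicProvider = createAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

// Use this model anywhere the OpenAI model was used before
const model = anthropicProvider('claude-3-5-sonnet-20240620');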
Zod-Based Tool Definitions
Tools are defined using Zod schemas with the tool() function, providing type safety and validation.
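For comparison, a sketch of the same get_weather tool in raw OpenAI SDK style, where the JSON Schema is written by hand and no input types are inferred:

// OpenAI-style tool definition: hand-written JSON Schema, no inferred types
const openaiStyleTool = {
  type: 'function' as const,
  function: {
    name: 'get_weather',
    description: 'Returns weather info for the specified city.',
    parameters: {
      type: 'object',
      properties: {
        city: { type: 'string', description: 'The city to get weather for' },
      },
      required: ['city'],
    },
  },
};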
Message Format Limitations
Vercel AI SDK v5 only accepts 'user' and 'assistant' roles in messages. Tool results must be converted to user messages for the next iteration.
Content Type Requirements
Message content must be a string, not an array. Array content needs to be converted to a string before sending to the SDK.
Next Steps