The OpenAI TypeScript SDK provides direct access to OpenAI’s API for building AI applications in Node.js and TypeScript. When integrated with Acontext, you get persistent session management, automatic task extraction, and full observability of your agent’s tool usage and conversations.
What This Integration Provides
Session Persistence: Store conversation history across multiple agent runs and resume sessions seamlessly.
Manual Tool Calling: Full control over tool execution with explicit handling of function calls.
Task Extraction: Automatically identify and track tasks from agent conversations with progress updates.
Tool Observability: Track all tool calls and their results for complete visibility into agent behavior.
Quick Start
Download Template
Use acontext-cli to quickly set up an OpenAI TypeScript SDK project with Acontext integration:
acontext create my-openai-project --template-path "typescript/openai-basic"
If you haven't installed acontext-cli yet, install it first:

curl -fsSL https://install.acontext.io | sh
Manual Setup
If you prefer to set up manually:
Install dependencies
Install the OpenAI and Acontext TypeScript packages:

npm install openai @acontext/acontext dotenv

Or with yarn:

yarn add openai @acontext/acontext dotenv
Configure environment
Create a .env file with your API credentials:

OPENAI_API_KEY=your_openai_key_here
ACONTEXT_API_KEY=sk-ac-your-root-api-bearer-token
ACONTEXT_BASE_URL=http://localhost:8029/api/v1
OPENAI_BASE_URL= # Optional, for custom OpenAI-compatible endpoints
Never commit API keys to version control. Always use environment variables or secure secret management.
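One simple safeguard is to exclude the .env file from version control. For example, in your project's .gitignore:

```
# Keep local credentials out of the repository
.env
.env.local
```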
Initialize clients
Create OpenAI and Acontext client instances:

import OpenAI from 'openai';
import { AcontextClient } from '@acontext/acontext';
import dotenv from 'dotenv';

dotenv.config();

const openaiClient = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const acontextClient = new AcontextClient({
  apiKey: process.env.ACONTEXT_API_KEY || 'sk-ac-your-root-api-bearer-token',
  baseUrl: process.env.ACONTEXT_BASE_URL || 'http://localhost:8029/api/v1',
  timeout: 60000,
});
How It Works
The OpenAI TypeScript SDK integration works by sending conversation messages to Acontext in OpenAI message format. Since both use the same format, no conversion is needed.
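Because both sides accept the same OpenAI message format, the object literal you push into your local conversation array is exactly what you send to Acontext. A minimal sketch of that shared shape (the `ChatMessage` type name here is illustrative, not from either SDK):

```typescript
// Illustrative type for the shared OpenAI-format message shape;
// the name ChatMessage is ours, not from either SDK.
type ChatMessage = {
  role: 'system' | 'user' | 'assistant' | 'tool';
  content: string | null;
  tool_call_id?: string; // present on role: 'tool' messages
  tool_calls?: Array<{
    id: string;
    type: 'function';
    function: { name: string; arguments: string };
  }>;
};

// One object serves both APIs: push it into your local conversation
// array and send the same object to Acontext unchanged.
const userMsg: ChatMessage = { role: 'user', content: 'Hello!' };
```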
Message Flow
Create session: Initialize a new Acontext session for your agent.
Send messages: Append each message (user, assistant, and tool) to Acontext as the conversation progresses.
Handle tool calls: Manually execute tools when OpenAI requests them.
Extract tasks: After the conversation, flush the session and retrieve extracted tasks.
Resume sessions: Load previous conversation history to continue where you left off.
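The steps above can be condensed into a sketch. The `needsToolExecution` helper below is a pure function we introduce for illustration (it is not part of either SDK); it captures the loop's exit condition, while the commented outline mirrors the client calls shown later in this guide:

```typescript
// Outline of the message flow. The loop continues while the model
// keeps requesting tools and stops when it returns plain content.
type AssistantMessage = {
  content: string | null;
  tool_calls?: Array<{ id: string }>;
};

// Pure helper: does this assistant turn request more tool execution?
function needsToolExecution(message: AssistantMessage): boolean {
  return Boolean(message.tool_calls && message.tool_calls.length > 0);
}

// Flow sketch (client calls elided; see the full examples below):
// 1. const session = await acontextClient.sessions.create({ spaceId });
// 2. sessions.sendMessage(session.id, userMsg, { format: 'openai' });
// 3. while (needsToolExecution(reply)) { run tools, append results }
// 4. await acontextClient.sessions.flush(session.id); then getTasks()
// 5. sessions.getMessages(session.id, { format: 'openai' }) to resume
```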
Basic Integration Pattern
Here’s the core pattern for integrating OpenAI TypeScript SDK with Acontext:
import OpenAI from 'openai';
import { AcontextClient } from '@acontext/acontext';

// Initialize clients
const openaiClient = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const acontextClient = new AcontextClient({
  apiKey: 'sk-ac-your-root-api-bearer-token',
  baseUrl: 'http://localhost:8029/api/v1',
});

// Create Acontext session
const space = await acontextClient.spaces.create();
const session = await acontextClient.sessions.create({ spaceId: space.id });

// Build conversation
let conversation: any[] = [];
const userMsg = { role: 'user', content: 'Hello!' };
conversation.push(userMsg);
await acontextClient.sessions.sendMessage(session.id, userMsg, {
  format: 'openai',
});

// Call OpenAI API
const response = await openaiClient.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: conversation,
});

// Send assistant response to Acontext
const assistantMsg = {
  role: response.choices[0].message.role,
  content: response.choices[0].message.content,
};
conversation.push(assistantMsg);
await acontextClient.sessions.sendMessage(session.id, assistantMsg, {
  format: 'openai',
});
This integration demonstrates manual tool calling, giving you full control over tool execution:
Define your tools in OpenAI’s function calling format:
const tools = [
  {
    type: 'function' as const,
    function: {
      name: 'get_weather',
      description: 'Returns weather info for the specified city.',
      parameters: {
        type: 'object',
        properties: {
          city: {
            type: 'string',
            description: 'The city to get weather for',
          },
        },
        required: ['city'],
        additionalProperties: false,
      },
    },
  },
];
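The model returns function arguments as a JSON string, so it is worth validating them against the declared schema before executing a tool. The hand-rolled check below is a sketch (the `parseWeatherArgs` name is ours); in practice you might use a schema validator such as zod or ajv instead:

```typescript
// Parse and validate the model-supplied arguments string before
// executing the tool. Hypothetical helper, shown for illustration.
function parseWeatherArgs(argumentsJson: string): { city: string } {
  let parsed: unknown;
  try {
    parsed = JSON.parse(argumentsJson);
  } catch {
    throw new Error(`Tool arguments are not valid JSON: ${argumentsJson}`);
  }
  if (
    typeof parsed !== 'object' ||
    parsed === null ||
    typeof (parsed as { city?: unknown }).city !== 'string'
  ) {
    throw new Error('Missing required string property "city"');
  }
  return { city: (parsed as { city: string }).city };
}
```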
Handle tool calls in a loop until the agent provides a final response:
async function runAgent(
  client: OpenAI,
  conversation: any[]
): Promise<[string, any[]]> {
  const messagesToSend: any[] = [...conversation];
  const newMessages: any[] = [];
  const maxIterations = 10;
  let iteration = 0;
  let finalContent = '';

  while (iteration < maxIterations) {
    iteration += 1;

    // Call OpenAI API
    const response = await client.chat.completions.create({
      model: 'gpt-4o-mini',
      messages: messagesToSend,
      tools: tools,
      tool_choice: 'auto',
    });

    const message = response.choices[0].message;
    const messageDict: any = {
      role: message.role,
      content: message.content,
    };

    // Handle tool calls
    const toolCallsWithFunction: Array<{
      id: string;
      function: { name: string; arguments: string };
    }> = [];
    if (message.tool_calls) {
      messageDict.tool_calls = message.tool_calls.map((tc: any) => {
        toolCallsWithFunction.push({
          id: tc.id,
          function: {
            name: tc.function.name,
            arguments: tc.function.arguments,
          },
        });
        return {
          id: tc.id,
          type: 'function',
          function: {
            name: tc.function.name,
            arguments: tc.function.arguments,
          },
        };
      });
    }

    messagesToSend.push(messageDict);
    newMessages.push(messageDict);

    // If there are no tool calls, we're done
    if (!message.tool_calls || message.tool_calls.length === 0) {
      finalContent = message.content || '';
      break;
    }

    // Execute tool calls
    for (const toolCallInfo of toolCallsWithFunction) {
      const functionName = toolCallInfo.function.name;
      const functionArgsStr = toolCallInfo.function.arguments || '{}';
      const functionArgs = JSON.parse(functionArgsStr);
      const functionResult = executeTool(functionName, functionArgs);

      // Add tool response
      const toolMessage = {
        role: 'tool' as const,
        tool_call_id: toolCallInfo.id,
        content: functionResult,
      };
      messagesToSend.push(toolMessage);
      newMessages.push(toolMessage);
    }
  }

  return [finalContent, newMessages];
}
Send Messages to Acontext
Send all messages (including tool calls and tool responses) to Acontext:
async function appendMessage(
  message: any,
  conversation: any[],
  sessionId: string
): Promise<any[]> {
  conversation.push(message);
  await acontextClient.sessions.sendMessage(sessionId, message, {
    format: 'openai',
  });
  return conversation;
}

// After running agent
const [responseContent, newMessages] = await runAgent(openaiClient, conversation);
for (const msg of newMessages) {
  conversation = await appendMessage(msg, conversation, session.id);
}
Complete Example
This example demonstrates a multi-turn conversation with tool calling and task extraction:
import OpenAI from 'openai';
import { AcontextClient } from '@acontext/acontext';
import dotenv from 'dotenv';

dotenv.config();

// Initialize clients
const openaiClient = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const acontextClient = new AcontextClient({
  apiKey: process.env.ACONTEXT_API_KEY || 'sk-ac-your-root-api-bearer-token',
  baseUrl: process.env.ACONTEXT_BASE_URL || 'http://localhost:8029/api/v1',
  timeout: 60000,
});

// Tool definitions
const tools = [
  {
    type: 'function' as const,
    function: {
      name: 'get_weather',
      description: 'Returns weather info for the specified city.',
      parameters: {
        type: 'object',
        properties: {
          city: { type: 'string', description: 'The city to get weather for' },
        },
        required: ['city'],
      },
    },
  },
];

function getWeather(city: string): string {
  return `The weather in ${city} is sunny`;
}

function executeTool(toolName: string, toolArgs: Record<string, any>): string {
  if (toolName === 'get_weather') {
    return getWeather(toolArgs.city);
  } else {
    return `Unknown tool: ${toolName}`;
  }
}

async function appendMessage(
  message: any,
  conversation: any[],
  sessionId: string
): Promise<any[]> {
  conversation.push(message);
  await acontextClient.sessions.sendMessage(sessionId, message, {
    format: 'openai',
  });
  return conversation;
}

async function main(): Promise<void> {
  // Create space and session
  const space = await acontextClient.spaces.create();
  const session = await acontextClient.sessions.create({ spaceId: space.id });
  let conversation: any[] = [];

  // First interaction
  const userMsg = { role: 'user', content: "What's the weather in Helsinki?" };
  conversation = await appendMessage(userMsg, conversation, session.id);

  // Run agent with tool calling (runAgent is the loop defined earlier)
  const [responseContent, newMessages] = await runAgent(
    openaiClient,
    conversation
  );

  // Send all messages to Acontext
  for (const msg of newMessages) {
    conversation = await appendMessage(msg, conversation, session.id);
  }

  // Extract tasks
  await acontextClient.sessions.flush(session.id);
  const tasksResponse = await acontextClient.sessions.getTasks(session.id);
  console.log('Extracted tasks:');
  for (const task of tasksResponse.items) {
    console.log(`Task: ${task.data['task_description']}`);
    console.log(`Status: ${task.status}`);
  }
}

main().catch(console.error);
Key Features
Session Persistence
Resume conversations by loading previous messages from Acontext:
// Load previous conversation
const messages = await acontextClient.sessions.getMessages(sessionId, {
  format: 'openai',
});
const conversation: any[] = messages.items;

// Continue conversation
conversation.push({
  role: 'user',
  content: 'Summarize our conversation',
});
const response = await openaiClient.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: conversation,
});
After completing a conversation, extract tasks with their status and metadata:
// Flush session to trigger task extraction
await acontextClient.sessions.flush(sessionId);

// Retrieve extracted tasks
const tasksResponse = await acontextClient.sessions.getTasks(sessionId);
for (const task of tasksResponse.items) {
  console.log(`Task: ${task.data['task_description']}`);
  console.log(`Status: ${task.status}`);

  // Access progress updates if available
  if ('progresses' in task.data) {
    for (const progress of task.data['progresses'] as any[]) {
      console.log(`  Progress: ${progress}`);
    }
  }

  // Access user preferences if available
  if ('user_preferences' in task.data) {
    for (const pref of task.data['user_preferences'] as any[]) {
      console.log(`  Preference: ${pref}`);
    }
  }
}
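For reporting, it can help to group extracted tasks by status before printing them. The pure helper below is a sketch; the `ExtractedTask` shape mirrors only the fields used above (`data`, `status`) and is an assumption about the SDK's return type:

```typescript
// Hypothetical task shape, limited to the fields this guide uses.
type ExtractedTask = { status: string; data: Record<string, unknown> };

// Pure helper: bucket tasks by their status for summary output.
function groupTasksByStatus(
  tasks: ExtractedTask[]
): Map<string, ExtractedTask[]> {
  const groups = new Map<string, ExtractedTask[]>();
  for (const task of tasks) {
    const bucket = groups.get(task.status) ?? [];
    bucket.push(task);
    groups.set(task.status, bucket);
  }
  return groups;
}
```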
Acontext automatically tracks all tool calls and their results when messages are sent:
// Tool calls are automatically tracked when you send messages
const messageWithToolCall = {
  role: 'assistant',
  content: null,
  tool_calls: [
    {
      id: 'call_123',
      type: 'function',
      function: {
        name: 'get_weather',
        arguments: '{"city": "Helsinki"}',
      },
    },
  ],
};
await acontextClient.sessions.sendMessage(sessionId, messageWithToolCall, {
  format: 'openai',
});

// Tool results are also tracked
const toolResult = {
  role: 'tool',
  tool_call_id: 'call_123',
  content: 'The weather in Helsinki is sunny',
};
await acontextClient.sessions.sendMessage(sessionId, toolResult, {
  format: 'openai',
});
Best Practices
Message format: Always specify format: 'openai' when sending messages to Acontext to ensure proper message handling.
Tool execution: Execute tools in the order they appear in tool_calls, and include the tool_call_id in each tool response message for proper tracking.
Iteration limits: Set a reasonable maxIterations limit for tool-calling loops to prevent infinite loops if the agent keeps requesting tools.
Async/await: Use async/await consistently when working with both OpenAI and Acontext APIs, as both return Promises.
In a production agent, you don't need to call the flush method after each conversation; Acontext automatically flushes the buffer when it is full or idle. To understand the buffer mechanism, see Session Buffer Mechanism.
Next Steps