# Create agent skill
Source: https://docs.acontext.io/api-reference/agent_skills/create-agent-skill
api-reference/openapi.json post /agent_skills
Upload a zip file containing an agent skill and extract it to S3. The zip file must contain a SKILL.md file (case-insensitive) with YAML metadata that includes 'name' and 'description' fields; both are extracted from SKILL.md.
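For illustration, a minimal upload sketch using Python's `requests` (the multipart field name `file` and the self-hosted base URL are assumptions taken from the Quick Start, not confirmed by this reference):

```python theme={null}
import os

import requests

BASE_URL = "http://localhost:8029/api/v1"  # assumed self-hosted default
headers = {"Authorization": f"Bearer {os.environ['ACONTEXT_API_KEY']}"}

# Upload a skill zip; the "file" field name is an assumption.
with open("my-skill.zip", "rb") as f:
    resp = requests.post(
        f"{BASE_URL}/agent_skills",
        headers=headers,
        files={"file": ("my-skill.zip", f, "application/zip")},
    )
resp.raise_for_status()
print(resp.json())
```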
# Delete agent skill
Source: https://docs.acontext.io/api-reference/agent_skills/delete-agent-skill
api-reference/openapi.json delete /agent_skills/{id}
Delete agent skill and all extracted files from S3
# Get agent skill by ID
Source: https://docs.acontext.io/api-reference/agent_skills/get-agent-skill-by-id
api-reference/openapi.json get /agent_skills/{id}
Get agent skill by its UUID
# Get agent skill by name
Source: https://docs.acontext.io/api-reference/agent_skills/get-agent-skill-by-name
api-reference/openapi.json get /agent_skills/by_name
Get agent skill by its name (unique within project)
# Get file from agent skill
Source: https://docs.acontext.io/api-reference/agent_skills/get-file-from-agent-skill
api-reference/openapi.json get /agent_skills/by_name/{name}/file
Get file content or download URL from agent skill by skill name and file path. If the file is a text-based file (parseable), returns parsed content. Otherwise, returns a presigned download URL.
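A hedged sketch of the call (the `path` query parameter name and base URL are assumptions):

```python theme={null}
import os

import requests

BASE_URL = "http://localhost:8029/api/v1"  # assumed self-hosted default
headers = {"Authorization": f"Bearer {os.environ['ACONTEXT_API_KEY']}"}

# Text-based files return parsed content; other files return a presigned URL.
resp = requests.get(
    f"{BASE_URL}/agent_skills/by_name/my-skill/file",
    headers=headers,
    params={"path": "SKILL.md"},  # query parameter name is an assumption
)
print(resp.json())
```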
# Get presigned URL for a file
Source: https://docs.acontext.io/api-reference/agent_skills/get-presigned-url-for-a-file
api-reference/openapi.json get /agent_skills/{id}/file
Get a presigned URL to download a specific file from agent skill
# List agent skills
Source: https://docs.acontext.io/api-reference/agent_skills/list-agent-skills
api-reference/openapi.json get /agent_skills
List all agent skills under a project
# Update agent skill
Source: https://docs.acontext.io/api-reference/agent_skills/update-agent-skill
api-reference/openapi.json put /agent_skills/{id}
Update agent skill metadata (name, description, meta)
# Delete artifact
Source: https://docs.acontext.io/api-reference/artifact/delete-artifact
api-reference/openapi.json delete /disk/{disk_id}/artifact
Delete an artifact by path and filename
# Get artifact
Source: https://docs.acontext.io/api-reference/artifact/get-artifact
api-reference/openapi.json get /disk/{disk_id}/artifact
Get artifact information by path and filename. Optionally include a presigned URL for downloading and parsed file content.
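A hedged sketch (query parameter names mirror the description above; the exact names are assumptions):

```python theme={null}
import os

import requests

BASE_URL = "http://localhost:8029/api/v1"  # assumed self-hosted default
headers = {"Authorization": f"Bearer {os.environ['ACONTEXT_API_KEY']}"}
disk_id = "your-disk-uuid"  # hypothetical ID

resp = requests.get(
    f"{BASE_URL}/disk/{disk_id}/artifact",
    headers=headers,
    params={"path": "/reports", "filename": "summary.pdf"},  # assumed names
)
print(resp.json())
```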
# List artifacts
Source: https://docs.acontext.io/api-reference/artifact/list-artifacts
api-reference/openapi.json get /disk/{disk_id}/artifact/ls
List artifacts in a specific path or all artifacts in a disk
# Update artifact meta
Source: https://docs.acontext.io/api-reference/artifact/update-artifact-meta
api-reference/openapi.json put /disk/{disk_id}/artifact
Update an artifact's metadata (user-defined metadata only)
# Upsert artifact
Source: https://docs.acontext.io/api-reference/artifact/upsert-artifact
api-reference/openapi.json post /disk/{disk_id}/artifact
Upload a file and create or update an artifact record under a disk. File size must not exceed the configured maximum upload size limit (default: 16MB).
# Create block
Source: https://docs.acontext.io/api-reference/block/create-block
api-reference/openapi.json post /space/{space_id}/block
Create a new block (supports types: page, folder, sop). For page and folder types, parent_id is optional. For sop type, parent_id is required.
# Delete block
Source: https://docs.acontext.io/api-reference/block/delete-block
api-reference/openapi.json delete /space/{space_id}/block/{block_id}
Delete a block by its ID (works for block types: page, folder, sop)
# Get block properties
Source: https://docs.acontext.io/api-reference/block/get-block-properties
api-reference/openapi.json get /space/{space_id}/block/{block_id}/properties
Get a block's properties by its ID (works for block types: page, folder, sop)
# List blocks
Source: https://docs.acontext.io/api-reference/block/list-blocks
api-reference/openapi.json get /space/{space_id}/block
List blocks in a space. Use type query parameter to filter by block type (page, folder, sop). Use parent_id query parameter to filter by parent. If both type and parent_id are empty, returns top-level pages and folders.
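For example, a sketch of listing SOP blocks under a parent (the `type` and `parent_id` parameter names come from the description; the base URL is an assumption):

```python theme={null}
import os

import requests

BASE_URL = "http://localhost:8029/api/v1"  # assumed self-hosted default
headers = {"Authorization": f"Bearer {os.environ['ACONTEXT_API_KEY']}"}
space_id = "your-space-uuid"  # hypothetical ID

# Omit both params to get top-level pages and folders instead.
resp = requests.get(
    f"{BASE_URL}/space/{space_id}/block",
    headers=headers,
    params={"type": "sop", "parent_id": "parent-block-id"},
)
print(resp.json())
```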
# Move block
Source: https://docs.acontext.io/api-reference/block/move-block
api-reference/openapi.json put /space/{space_id}/block/{block_id}/move
Move block by updating its parent_id. Works for block types: page, folder, sop. For page and folder types, parent_id can be null (root level).
# Update block properties
Source: https://docs.acontext.io/api-reference/block/update-block-properties
api-reference/openapi.json put /space/{space_id}/block/{block_id}/properties
Update a block's title and properties by its ID (works for block types: page, folder, sop)
# Update block sort
Source: https://docs.acontext.io/api-reference/block/update-block-sort
api-reference/openapi.json put /space/{space_id}/block/{block_id}/sort
Update block sort value (works for block types: page, folder, sop)
# Create disk
Source: https://docs.acontext.io/api-reference/disk/create-disk
api-reference/openapi.json post /disk
Create a disk group under a project
# Delete disk
Source: https://docs.acontext.io/api-reference/disk/delete-disk
api-reference/openapi.json delete /disk/{disk_id}
Delete a disk by its UUID
# List disks
Source: https://docs.acontext.io/api-reference/disk/list-disks
api-reference/openapi.json get /disk
List all disks under a project
# Introduction
Source: https://docs.acontext.io/api-reference/introduction
Build AI agents with context storage, semantic search, and learning capabilities
## Overview
Acontext provides a REST API for building AI agents with persistent context, semantic search, and self-learning capabilities.
* RESTful endpoints with predictable URLs
* Full OpenAPI specification available
* OpenAI, Anthropic, Gemini, and native message formats
* Python and TypeScript SDK libraries
## Authentication
All API requests require a Bearer token in the `Authorization` header:
```bash theme={null}
Authorization: Bearer YOUR_API_KEY
```
Keep your API keys secure and never expose them in client-side code.
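For example, a raw HTTP call with the header set (a sketch assuming the self-hosted base URL from the Quick Start below):

```python theme={null}
import os

import requests

BASE_URL = "http://localhost:8029/api/v1"  # assumed self-hosted default

# List all spaces under the project, authenticating with the Bearer token.
resp = requests.get(
    f"{BASE_URL}/space",
    headers={"Authorization": f"Bearer {os.environ['ACONTEXT_API_KEY']}"},
)
resp.raise_for_status()
print(resp.json())
```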
## Install SDKs
```bash Python theme={null}
pip install acontext
```
```bash TypeScript theme={null}
npm install @acontext/acontext
```
## Quick Start
```python Python theme={null}
import os
from acontext import AcontextClient
# Initialize client
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# If you're using self-hosted Acontext:
# client = AcontextClient(
# base_url="http://localhost:8029/api/v1",
# api_key="sk-ac-your-root-api-bearer-token",
# )
# Create a knowledge space
space = client.spaces.create(configs={"name": "Customer Support KB"})
# Create a session connected to the space
session = client.sessions.create(space_id=space.id)
# Store a message
client.sessions.store_message(
session.id,
blob={'role': 'user', 'content': 'How do I reset my password?'}
)
# Search the knowledge base
results = client.spaces.experience_search(
space.id,
query='password reset process',
mode='agentic'
)
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
// Initialize client
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// If you're using self-hosted Acontext:
// const client = new AcontextClient({
// baseUrl: "http://localhost:8029/api/v1",
// apiKey: "sk-ac-your-root-api-bearer-token",
// });
// Create a knowledge space
const space = await client.spaces.create({
configs: { name: 'Customer Support KB' }
});
// Create a session connected to the space
const session = await client.sessions.create({ spaceId: space.id });
// Store a message
await client.sessions.storeMessage(
session.id,
{ role: 'user', content: 'How do I reset my password?' }
);
// Search the knowledge base
const results = await client.spaces.experienceSearch(space.id, {
query: 'password reset process',
mode: 'agentic'
});
```
## Core Concepts
Knowledge spaces store structured information with semantic search capabilities. Each space can contain pages, folders, and content blocks.
Sessions represent conversation threads with message history. Connect sessions to spaces to leverage stored knowledge.
Building units of knowledge: pages, folders, text blocks, and SOPs. Blocks support rich properties and nested structures.
Isolated storage groups for organizing multiple knowledge bases within a project.
Capabilities that AI agents can use. Manage and configure tools across your project.
## API Resources
* **Disks**: Manage storage groups
* **Sessions**: Handle conversations and messages
* **Spaces**: Create and search knowledge bases
* **Blocks**: Manage content structure
* **Tools**: Configure agent capabilities
* **Search**: Semantic and agentic search
## Message Formats
Acontext supports multiple message formats for compatibility:
* **OpenAI** - Compatible with OpenAI Chat Completion format (default)
* **Anthropic** - Compatible with Anthropic Messages format
* **Gemini** - Compatible with Google Gemini Messages format
Convert between formats when retrieving or storing messages using the `format` parameter.
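For example, a sketch of retrieving a session's messages in Anthropic format, continuing from the Quick Start above (assuming the Python SDK exposes the endpoint's `format` query parameter as a keyword argument, as the TypeScript SDK does with `{ format: ... }`):

```python theme={null}
# Retrieve stored messages converted to Anthropic format.
messages = client.sessions.get_messages(session.id, format="anthropic")
for m in messages.items:
    print(m)
```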
## Community
* Report issues and contribute
* Join our community
# Connect session to space
Source: https://docs.acontext.io/api-reference/session/connect-session-to-space
api-reference/openapi.json post /session/{session_id}/connect_to_space
Connect a session to a space by id
# Create session
Source: https://docs.acontext.io/api-reference/session/create-session
api-reference/openapi.json post /session
Create a new session under a space
# Delete session
Source: https://docs.acontext.io/api-reference/session/delete-session
api-reference/openapi.json delete /session/{session_id}
Delete a session by id
# Flush session
Source: https://docs.acontext.io/api-reference/session/flush-session
api-reference/openapi.json post /session/{session_id}/flush
Flush the session buffer for a given session
# Get learning status
Source: https://docs.acontext.io/api-reference/session/get-learning-status
api-reference/openapi.json get /session/{session_id}/get_learning_status
Get learning status for a session. Returns the counts of space-digested and not-yet-digested tasks. If the session is not connected to a space, both counts are 0.
# Get message observing status for a session
Source: https://docs.acontext.io/api-reference/session/get-message-observing-status-for-a-session
api-reference/openapi.json get /session/{session_id}/observing_status
Returns the count of observed, in_process, and pending messages
# Get messages from session
Source: https://docs.acontext.io/api-reference/session/get-messages-from-session
api-reference/openapi.json get /session/{session_id}/messages
Get messages from session. Default format is openai. Can convert to acontext (original), anthropic, or gemini format.
# Get session configs
Source: https://docs.acontext.io/api-reference/session/get-session-configs
api-reference/openapi.json get /session/{session_id}/configs
Get session configs by id
# Get sessions
Source: https://docs.acontext.io/api-reference/session/get-sessions
api-reference/openapi.json get /session
Get all sessions under a project, optionally filtered by space_id
# Get token counts for session
Source: https://docs.acontext.io/api-reference/session/get-token-counts-for-session
api-reference/openapi.json get /session/{session_id}/token_counts
Get total token counts for all text and tool-call parts in a session
# Store message to session
Source: https://docs.acontext.io/api-reference/session/store-message-to-session
api-reference/openapi.json post /session/{session_id}/messages
Supports JSON and multipart/form-data. In multipart mode, the message payload is a JSON string placed in a form field. The `format` parameter indicates the format of the input message (default: openai, same as GET). The `blob` field must be a complete message object: for openai, use the OpenAI ChatCompletionMessageParam format (with `role` and `content`); for anthropic, use the Anthropic MessageParam format (with `role` and `content`); for acontext (internal), use the `{role, parts}` format.
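A hedged multipart sketch with Python's `requests` (sending `format` as a form field and the base URL are assumptions; the `(None, value)` tuple form sends a plain form field rather than a file):

```python theme={null}
import json
import os

import requests

BASE_URL = "http://localhost:8029/api/v1"  # assumed self-hosted default
headers = {"Authorization": f"Bearer {os.environ['ACONTEXT_API_KEY']}"}
session_id = "your-session-id"  # hypothetical ID

# Multipart mode: the message payload is a JSON string in the "blob" form field.
resp = requests.post(
    f"{BASE_URL}/session/{session_id}/messages",
    headers=headers,
    files={
        "blob": (None, json.dumps({"role": "user", "content": "Hello"})),
        "format": (None, "openai"),
    },
)
resp.raise_for_status()
```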
# Update session configs
Source: https://docs.acontext.io/api-reference/session/update-session-configs
api-reference/openapi.json put /session/{session_id}/configs
Update session configs by id
# Confirm experience
Source: https://docs.acontext.io/api-reference/space/confirm-experience
api-reference/openapi.json put /space/{space_id}/experience_confirmations/{experience_id}
Confirm an experience confirmation. If `save` is false, the confirmation row is deleted. If `save` is true, the experience data is retrieved first and the row is then deleted.
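A minimal sketch (sending `save` in the JSON body is an assumption; check the OpenAPI spec for the exact request shape):

```python theme={null}
import os

import requests

BASE_URL = "http://localhost:8029/api/v1"  # assumed self-hosted default
headers = {"Authorization": f"Bearer {os.environ['ACONTEXT_API_KEY']}"}
space_id = "your-space-uuid"            # hypothetical ID
experience_id = "your-experience-uuid"  # hypothetical ID

# save=True keeps the experience; save=False discards it.
resp = requests.put(
    f"{BASE_URL}/space/{space_id}/experience_confirmations/{experience_id}",
    headers=headers,
    json={"save": True},  # body shape is an assumption
)
resp.raise_for_status()
```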
# Create space
Source: https://docs.acontext.io/api-reference/space/create-space
api-reference/openapi.json post /space
Create a new space under a project
# Delete space
Source: https://docs.acontext.io/api-reference/space/delete-space
api-reference/openapi.json delete /space/{space_id}
Delete a space by its ID
# Get experience confirmations
Source: https://docs.acontext.io/api-reference/space/get-experience-confirmations
api-reference/openapi.json get /space/{space_id}/experience_confirmations
Get all experience confirmations in a space with cursor-based pagination
# Get experience search
Source: https://docs.acontext.io/api-reference/space/get-experience-search
api-reference/openapi.json get /space/{space_id}/experience_search
Retrieve the experience search results for a given query within a space by its ID
# Get space configs
Source: https://docs.acontext.io/api-reference/space/get-space-configs
api-reference/openapi.json get /space/{space_id}/configs
Retrieve the configurations of a space by its ID
# Get spaces
Source: https://docs.acontext.io/api-reference/space/get-spaces
api-reference/openapi.json get /space
Get all spaces under a project
# Update space configs
Source: https://docs.acontext.io/api-reference/space/update-space-configs
api-reference/openapi.json put /space/{space_id}/configs
Update the configurations of a space by its ID
# Get tasks from session
Source: https://docs.acontext.io/api-reference/task/get-tasks-from-session
api-reference/openapi.json get /session/{session_id}/task
Get tasks from session with cursor-based pagination
# Get tool names
Source: https://docs.acontext.io/api-reference/tool/get-tool-names
api-reference/openapi.json get /tool/name
Get all tool names within a project
# Rename tool names
Source: https://docs.acontext.io/api-reference/tool/rename-tool-names
api-reference/openapi.json put /tool/name
Rename one or more tool names within a project
# Async Python Client
Source: https://docs.acontext.io/chore/async_python
You can use the async Python client:
```python Python theme={null}
import asyncio
import os

from acontext import AcontextAsyncClient

client = AcontextAsyncClient(
    api_key=os.getenv("ACONTEXT_API_KEY"),
)

# If you're using self-hosted Acontext:
# client = AcontextAsyncClient(
#     base_url="http://localhost:8029/api/v1",
#     api_key="sk-ac-your-root-api-bearer-token",
# )

async def main():
    print(await client.ping())
    session = await client.sessions.create()
    # ...

asyncio.run(main())
```
Every method on the sync client is also available on the async client; just `await` the call.
# Use Acontext Badge
Source: https://docs.acontext.io/chore/badge
Tag your projects with the Acontext badge to let the community know.

```md Markdown theme={null}
[](https://acontext.io)
```

# What is Acontext
Source: https://docs.acontext.io/index
A great agent application should be able to:
* serve millions of users, not just a local demo
* run reliably over long periods of time
* make agent behavior observable
* learn from its own successes and failures
Such an application is hard to build.
So we ask ourselves:
> “What's at the core of helping developers build a great agent application?”
That inspires us to create Acontext: **a context data platform for building cloud-native AI Agents**.
## Key Benefits
* **Short-term memory**: Does context engineering for you; builds a compact context for agents in one API call.
* **Short-term memory**: Store conversations and artifacts with text, images, and files across sessions.
* **Mid-term memory**: Monitor what your agent plans vs. what it actually executes.
* **Long-term memory**: Agents learn skills from completed tasks without manual training.
* **All-in-one view of context data**: View your agent's tasks, conversations, and learned skills in one place.
# Agno
Source: https://docs.acontext.io/integrations/agno
Integrate Agno multi-agent framework with Acontext for session persistence, task extraction, and automatic learning
Agno is a Python framework for building multi-agent systems.
When integrated with Acontext, you get persistent session management, automatic task extraction, and the ability for your agents to learn from completed interactions.
## What This Integration Provides
* Store conversation history across multiple agent runs and resume sessions seamlessly
* Automatically identify and track tasks from agent conversations with progress updates
* Capture and learn from user preferences expressed during conversations
* Enable agents to search and reuse learned skills from past successful interactions
## Quick Start
### Download Template
Use `acontext-cli` to quickly set up an Agno project with Acontext integration:
```bash theme={null}
acontext create my-agno-project --template-path "python/agno-basic"
```
If you haven't installed `acontext-cli` yet, install it first:
```bash theme={null}
curl -fsSL https://install.acontext.io | sh
```
### Manual Setup
If you prefer to set up manually:
Install Agno and Acontext Python packages:
```bash theme={null}
uv sync
```
Or with pip:
```bash theme={null}
pip install agno acontext python-dotenv
```
Create a `.env` file with your API credentials:
```env theme={null}
OPENAI_API_KEY=your_openai_key_here
ACONTEXT_API_KEY=sk-ac-your-root-api-bearer-token
ACONTEXT_BASE_URL=http://localhost:8029/api/v1
```
Never commit API keys to version control. Always use environment variables or secure secret management.
Create an Acontext client instance:
```python theme={null}
from acontext import AcontextClient
import os
acontext_client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# If you're using self-hosted Acontext:
# acontext_client = AcontextClient(
# base_url="http://localhost:8029/api/v1",
# api_key="sk-ac-your-root-api-bearer-token",
# )
```
## How It Works
The Agno integration works by storing conversation messages to Acontext in OpenAI message format. Agno's message format is compatible with Acontext, so no conversion is needed.
### Message Flow
1. **Create session**: Initialize a new Acontext session for your agent
2. **Store messages**: Append each message (user and assistant) to Acontext as the conversation progresses
3. **Extract tasks**: After the conversation, flush the session and retrieve extracted tasks
4. **Resume sessions**: Load previous conversation history to continue where you left off
### Basic Integration Pattern
Here's the core pattern for integrating Agno with Acontext:
```python theme={null}
import os

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from acontext import AcontextClient

# Initialize Acontext client
acontext_client = AcontextClient(
    api_key=os.getenv("ACONTEXT_API_KEY"),
)

# Create Agno agent
agent = Agent(
    name="Assistant",
    model=OpenAIChat(id="gpt-4"),
    instructions="You are a helpful assistant",
)

# Create Acontext session
space = acontext_client.spaces.create()
session = acontext_client.sessions.create(space_id=space.id)

# Build conversation and store to Acontext
conversation = []
user_msg = {"role": "user", "content": "Hello!"}
conversation.append(user_msg)
acontext_client.sessions.store_message(session_id=session.id, blob=user_msg)

# Run agent
response = agent.run(conversation)

# Store assistant response to Acontext
assistant_msg = {"role": "assistant", "content": response.content}
conversation.append(assistant_msg)
acontext_client.sessions.store_message(session_id=session.id, blob=assistant_msg)
```
## Complete Example
This example demonstrates a multi-turn conversation with task extraction:
```python theme={null}
import asyncio
import os

from agno.agent import Agent, RunOutput
from agno.models.openai import OpenAIChat
from agno.tools import tool
from acontext import AcontextClient

acontext_client = AcontextClient(
    api_key=os.getenv("ACONTEXT_API_KEY"),
)

@tool
def get_weather(city: str) -> str:
    """Returns weather info for the specified city."""
    return f"The weather in {city} is sunny"

def create_agno_agent() -> Agent:
    return Agent(
        name="Assistant",
        model=OpenAIChat(id="gpt-4"),
        instructions="You are a helpful assistant",
        tools=[get_weather],
    )

def append_message(message: dict, conversation: list[dict], session_id: str):
    conversation.append(message)
    acontext_client.sessions.store_message(session_id=session_id, blob=message)
    return conversation

async def main():
    # Create space and session
    space = acontext_client.spaces.create()
    session = acontext_client.sessions.create(space_id=space.id)
    agent = create_agno_agent()
    conversation = []

    # First interaction
    user_msg = {"role": "user", "content": "Plan a 3-day trip to Finland"}
    conversation = append_message(user_msg, conversation, session.id)
    response: RunOutput = agent.run(conversation)
    assistant_msg = {"role": "assistant", "content": response.content}
    conversation = append_message(assistant_msg, conversation, session.id)

    # Flush and extract tasks
    acontext_client.sessions.flush(session.id)
    tasks_response = acontext_client.sessions.get_tasks(session.id)
    print("Extracted tasks:")
    for task in tasks_response.items:
        print(f"Task: {task.data.task_description}")
        print(f"Status: {task.status}")

if __name__ == "__main__":
    asyncio.run(main())
```
## Key Features
### Session Persistence
Resume conversations by loading previous messages from Acontext:
```python theme={null}
# Load previous conversation
messages = acontext_client.sessions.get_messages(session_id)
conversation = messages.items
# Continue conversation
conversation.append({"role": "user", "content": "Summarize our conversation"})
response = agent.run(conversation)
```
### Task Extraction
After completing a conversation, extract tasks with their status and metadata:
```python theme={null}
# Flush session to trigger task extraction
acontext_client.sessions.flush(session_id)

# Retrieve extracted tasks
tasks_response = acontext_client.sessions.get_tasks(session_id)
for task in tasks_response.items:
    print(f"Task: {task.data.task_description}")
    print(f"Status: {task.status}")

    # Access progress updates if available
    if task.data.progresses:
        for progress in task.data.progresses:
            print(f"  Progress: {progress}")

    # Access user preferences if available
    if task.data.user_preferences:
        for pref in task.data.user_preferences:
            print(f"  Preference: {pref}")
```
### Experience Search
Search for learned experiences from past successful interactions:
```python theme={null}
from time import sleep

# Wait for learning to complete
while True:
    status = acontext_client.sessions.get_learning_status(session_id)
    if status.not_space_digested_count == 0:
        break
    sleep(1)

# Search for relevant experiences
experiences = acontext_client.spaces.experience_search(
    space_id=space_id,
    query="travel with flight",
    mode="fast",
)
print(experiences)
```
## Message Format Compatibility
Agno uses OpenAI-compatible message format, which works seamlessly with Acontext:
```python theme={null}
# Agno message format (compatible with Acontext)
message = {
"role": "user", # or "assistant"
"content": "Your message here"
}
# Store directly to Acontext - no conversion needed
acontext_client.sessions.store_message(session_id=session_id, blob=message)
```
Agno's `RunOutput.messages` can be converted to dictionaries using `[m.to_dict() for m in response.messages]`, which produces Acontext-compatible message format.
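For example, a short sketch that stores every message from an Agno run, continuing from the example above:

```python theme={null}
# Convert each Agno message to a dict and store it in order.
for m in response.messages:
    acontext_client.sessions.store_message(session_id=session.id, blob=m.to_dict())
```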
## Best Practices
**Batch message storing**: For better performance, you can batch multiple messages before storing them to Acontext, but ensure you store them in chronological order.
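A minimal batching sketch, continuing with the client and session from the examples above:

```python theme={null}
# Buffer messages locally, then store them in chronological order.
pending: list[dict] = [
    {"role": "user", "content": "Plan a trip"},
    {"role": "assistant", "content": "Here's a draft itinerary..."},
]
for msg in pending:  # order matters
    acontext_client.sessions.store_message(session_id=session.id, blob=msg)
```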
**Tool execution tracking**: Acontext automatically tracks tool calls and their results when messages are sent, providing full observability of your agent's actions.
In your production agent, you don't need to call the `flush` method after each conversation; Acontext automatically flushes the buffer when it is full or idle. To understand the buffer mechanism, see [Session Buffer Mechanism](/observe/buffer).
## Next Steps
* Monitor what your agent plans vs. what it executes
* Enable your agent to learn from completed tasks
* View all agent interactions in one place
* Explore the full Acontext API
# Vercel AI SDK
Source: https://docs.acontext.io/integrations/ai-sdk
Integrate Vercel AI SDK with Acontext for session persistence, tool calling, and task extraction
The Vercel AI SDK provides a unified interface for building AI applications with support for multiple providers. When integrated with Acontext, you get persistent session management, automatic task extraction, and seamless conversation resumption across sessions.
## What This Integration Provides
* Support for multiple AI providers (OpenAI, Anthropic, etc.) through a single API
* Store conversation history across multiple agent runs and resume sessions seamlessly
* Define tools with Zod schemas and handle tool execution with full control
* Automatically identify and track tasks from agent conversations with progress updates
## Quick Start
### Download Template
Use `acontext-cli` to quickly set up a Vercel AI SDK project with Acontext integration:
```bash theme={null}
acontext create my-ai-project --template-path "typescript/vercel-ai-basic"
```
If you haven't installed `acontext-cli` yet, install it first:
```bash theme={null}
curl -fsSL https://install.acontext.io | sh
```
### Manual Setup
If you prefer to set up manually:
Install Vercel AI SDK and Acontext TypeScript packages:
```bash theme={null}
npm install ai @ai-sdk/openai @acontext/acontext dotenv zod
```
Or with yarn:
```bash theme={null}
yarn add ai @ai-sdk/openai @acontext/acontext dotenv zod
```
Create a `.env` file with your API credentials:
```env theme={null}
OPENAI_API_KEY=your_openai_key_here
ACONTEXT_API_KEY=sk-ac-your-root-api-bearer-token
ACONTEXT_BASE_URL=http://localhost:8029/api/v1
OPENAI_BASE_URL= # Optional, for custom OpenAI-compatible endpoints
```
Never commit API keys to version control. Always use environment variables or secure secret management.
Create Vercel AI SDK provider and Acontext client instances:
```typescript theme={null}
import { createOpenAI } from '@ai-sdk/openai';
import { AcontextClient } from '@acontext/acontext';
import dotenv from 'dotenv';
dotenv.config();
// Create OpenAI provider
const openaiProvider = createOpenAI({
apiKey: process.env.OPENAI_API_KEY,
baseURL: process.env.OPENAI_BASE_URL,
});
// Initialize Acontext client
const acontextClient = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// If you're using self-hosted Acontext:
// const acontextClient = new AcontextClient({
// baseUrl: "http://localhost:8029/api/v1",
// apiKey: "sk-ac-your-root-api-bearer-token",
// });
```
## How It Works
The Vercel AI SDK integration works by storing conversation messages to Acontext in OpenAI message format. The SDK uses `generateText` for text generation and requires manual tool execution.
### Message Flow
1. **Create session**: Initialize a new Acontext session for your agent
2. **Generate text**: Use `generateText` with tools to get model responses
3. **Handle tool calls**: Manually execute tools when the model requests them
4. **Store messages**: Append each message (user, assistant, and tool) to Acontext
5. **Extract tasks**: After the conversation, flush the session and retrieve extracted tasks
6. **Resume sessions**: Load previous conversation history to continue where you left off
### Important Notes
**Message format limitations**: Vercel AI SDK v5 only accepts 'user' and 'assistant' roles in the messages array. Tool results must be converted to user messages for the next iteration, but marked as internal so they aren't sent to Acontext.
**Tool execution**: Even though tools have `execute` functions defined, you still need to manually handle tool calls and pass results back to the model in a format it can understand.
## Basic Integration Pattern
Here's the core pattern for integrating Vercel AI SDK with Acontext:
```typescript theme={null}
import { generateText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { AcontextClient } from '@acontext/acontext';
// Create provider and model
const openaiProvider = createOpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
const model = openaiProvider('gpt-4o-mini');
// Initialize Acontext
const acontextClient = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// Create session
const space = await acontextClient.spaces.create();
const session = await acontextClient.sessions.create({ spaceId: space.id });
// Build conversation
let conversation: any[] = [];
const userMsg = { role: 'user', content: 'Hello!' };
conversation.push(userMsg);
await acontextClient.sessions.storeMessage(session.id, userMsg, {
format: 'openai',
});
// Generate text
const result = await generateText({
model,
messages: conversation,
});
// Store assistant response to Acontext
const assistantMsg = {
role: 'assistant',
content: result.text,
};
conversation.push(assistantMsg);
await acontextClient.sessions.storeMessage(session.id, assistantMsg, {
format: 'openai',
});
```
## Tool Calling
The Vercel AI SDK uses Zod schemas to define tools. Tools must include an `execute` function, but you still need to manually handle tool calls in a loop.
### Define Tools
Define your tools using the `tool()` function with Zod schemas:
```typescript theme={null}
import { tool } from 'ai';
import { z } from 'zod';
const tools = {
get_weather: tool({
description: 'Returns weather info for the specified city.',
inputSchema: z.object({
city: z.string().describe('The city to get weather for'),
}),
execute: async ({ city }: { city: string }) => {
return `The weather in ${city} is sunny`;
},
}),
book_flight: tool({
description: 'Book a flight.',
inputSchema: z.object({
from_city: z.string().describe('The departure city'),
to_city: z.string().describe('The destination city'),
date: z.string().describe('The date of the flight'),
}),
execute: async ({ from_city, to_city, date }: { from_city: string; to_city: string; date: string }) => {
return `Flight booked successfully for '${from_city}' to '${to_city}' on '${date}'`;
},
}),
};
```
### Execute Tools Manually
Handle tool calls in a loop until the agent provides a final response:
```typescript theme={null}
async function runAgent(conversation: any[]): Promise<[string, any[]]> {
const model = createModel();
const newMessages: any[] = [];
const maxIterations = 10;
let iteration = 0;
let finalContent = '';
while (iteration < maxIterations) {
iteration += 1;
// Filter messages for Vercel AI SDK (only user and assistant)
const messagesToStore = conversation
.filter((msg: any) => {
const role = msg.role;
return (role === 'user' || role === 'assistant') && !msg._internal;
})
.map((msg: any) => {
// Ensure content is a string
let content = msg.content;
if (Array.isArray(content)) {
content = content.map((item: any) =>
typeof item === 'string' ? item : item.text || item.content || ''
).join(' ');
}
if (typeof content !== 'string') {
content = String(content || '');
}
return { role: msg.role, content };
});
const result = await generateText({
model,
system: 'You are a helpful assistant',
messages: messagesToStore,
tools,
});
const messageDict: any = {
role: 'assistant',
content: result.text,
};
// Handle tool calls
const toolCallsWithFunction: Array<{
id: string;
function: { name: string; arguments: string };
}> = [];
if (result.toolCalls && result.toolCalls.length > 0) {
messageDict.tool_calls = result.toolCalls.map((tc: any) => {
// Get arguments from tool call
let args = tc.args || tc.parameters || tc.input || {};
if (typeof args === 'string') {
try {
args = JSON.parse(args);
} catch {
args = {};
}
}
const argsString = JSON.stringify(args);
toolCallsWithFunction.push({
id: tc.toolCallId,
function: {
name: tc.toolName,
arguments: argsString,
},
});
return {
id: tc.toolCallId,
type: 'function',
function: {
name: tc.toolName,
arguments: argsString,
},
};
});
}
conversation.push(messageDict);
newMessages.push(messageDict);
// If there are no tool calls, we're done
if (!result.toolCalls || result.toolCalls.length === 0) {
finalContent = result.text || '';
break;
}
// Execute tool calls manually
const toolResults: Array<{ toolName: string; result: string; toolCallId: string }> = [];
for (const toolCallInfo of toolCallsWithFunction) {
const functionName = toolCallInfo.function.name;
const functionArgs = JSON.parse(toolCallInfo.function.arguments);
const functionResult = executeTool(functionName, functionArgs);
toolResults.push({
toolName: functionName,
result: functionResult,
toolCallId: toolCallInfo.id,
});
// Create tool message for Acontext
const toolMessage = {
role: 'tool' as const,
tool_call_id: toolCallInfo.id,
content: functionResult,
};
newMessages.push(toolMessage);
}
// Convert tool results to user message for next iteration
// Mark as internal so it won't be sent to Acontext
if (toolResults.length > 0) {
const toolResultsText = toolResults
.map(tr => `${tr.toolName} returned: ${tr.result}`)
.join('\n');
const toolResultUserMessage = {
role: 'user' as const,
content: `Tool execution results:\n${toolResultsText}`,
_internal: true, // Mark as internal
};
conversation.push(toolResultUserMessage);
}
}
return [finalContent, newMessages];
}
```
### Store Messages to Acontext
Store all messages (excluding internal ones) to Acontext:
```typescript theme={null}
async function appendMessage(
message: any,
conversation: any[],
sessionId: string
): Promise<any[]> {
// Skip internal messages (tool results converted to user messages)
if (message._internal) {
conversation.push(message);
return conversation;
}
conversation.push(message);
await acontextClient.sessions.storeMessage(sessionId, message, {
format: 'openai',
});
return conversation;
}
// After running agent
const [responseContent, newMessages] = await runAgent(conversation);
for (const msg of newMessages) {
conversation = await appendMessage(msg, conversation, session.id);
}
```
## Complete Example
This example demonstrates a multi-turn conversation with tool calling and task extraction:
```typescript theme={null}
import { generateText, tool } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { z } from 'zod';
import { AcontextClient } from '@acontext/acontext';
import dotenv from 'dotenv';
dotenv.config();
// Initialize Acontext
const acontextClient = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// Tool implementations
function getWeather(city: string): string {
return `The weather in ${city} is sunny`;
}
function executeTool(toolName: string, toolArgs: Record<string, any>): string {
if (toolName === 'get_weather') {
return getWeather(toolArgs.city);
} else {
return `Unknown tool: ${toolName}`;
}
}
// Tool definitions
const tools = {
get_weather: tool({
description: 'Returns weather info for the specified city.',
inputSchema: z.object({
city: z.string().describe('The city to get weather for'),
}),
execute: async ({ city }: { city: string }) => {
return getWeather(city);
},
}),
};
// Create provider
const openaiProvider = createOpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
function createModel() {
return openaiProvider('gpt-4o-mini');
}
async function appendMessage(
message: any,
conversation: any[],
sessionId: string
): Promise<any[]> {
if (message._internal) {
conversation.push(message);
return conversation;
}
conversation.push(message);
await acontextClient.sessions.storeMessage(sessionId, message, {
format: 'openai',
});
return conversation;
}
async function main(): Promise<void> {
// Create space and session
const space = await acontextClient.spaces.create();
const session = await acontextClient.sessions.create({ spaceId: space.id });
let conversation: any[] = [];
// First interaction
const userMsg = { role: 'user', content: "What's the weather in Helsinki?" };
conversation = await appendMessage(userMsg, conversation, session.id);
// Run agent with tool calling (runAgent as defined in the section above)
const [responseContent, newMessages] = await runAgent(conversation);
// Store all messages to Acontext
for (const msg of newMessages) {
conversation = await appendMessage(msg, conversation, session.id);
}
// Extract tasks
await acontextClient.sessions.flush(session.id);
const tasksResponse = await acontextClient.sessions.getTasks(session.id);
console.log('Extracted tasks:');
for (const task of tasksResponse.items) {
console.log(`Task: ${task.data.task_description}`);
console.log(`Status: ${task.status}`);
}
}
main().catch(console.error);
```
## Key Features
### Session Persistence
Resume conversations by loading previous messages from Acontext:
```typescript theme={null}
// Load previous conversation
const messages = await acontextClient.sessions.getMessages(sessionId, {
format: 'openai',
});
const conversation: any[] = messages.items;
// Continue conversation
conversation.push({
role: 'user',
content: 'Summarize our conversation',
});
const [responseContent] = await runAgent(conversation);
console.log(responseContent);
```
### Task Extraction
After completing a conversation, extract tasks with their status and metadata:
```typescript theme={null}
// Flush session to trigger task extraction
await acontextClient.sessions.flush(sessionId);
// Retrieve extracted tasks
const tasksResponse = await acontextClient.sessions.getTasks(sessionId);
for (const task of tasksResponse.items) {
console.log(`Task: ${task.data.task_description}`);
console.log(`Status: ${task.status}`);
// Access progress updates if available
if (task.data.progresses) {
for (const progress of task.data.progresses) {
console.log(` Progress: ${progress}`);
}
}
// Access user preferences if available
if (task.data.user_preferences) {
for (const pref of task.data.user_preferences) {
console.log(` Preference: ${pref}`);
}
}
}
```
### Message Format Handling
Vercel AI SDK has specific requirements for message formats:
```typescript theme={null}
// Filter messages for Vercel AI SDK
const messagesToStore = conversation
.filter((msg: any) => {
// Only user and assistant roles, exclude internal messages
const role = msg.role;
return (role === 'user' || role === 'assistant') && !msg._internal;
})
.map((msg: any) => {
// Ensure content is always a string
let content = msg.content;
if (Array.isArray(content)) {
content = content.map((item: any) =>
typeof item === 'string' ? item : item.text || item.content || ''
).join(' ');
}
if (typeof content !== 'string') {
content = String(content || '');
}
return { role: msg.role, content };
});
```
## Best Practices
**Internal messages**: Mark tool results converted to user messages with `_internal: true` so they aren't sent to Acontext but are still used for the next model iteration.
**Content format**: Always ensure message content is a string. Vercel AI SDK doesn't accept array content for user and assistant messages.
**Tool execution**: Even though tools have `execute` functions, you still need to manually handle tool calls in a loop and convert results to the appropriate format for the next iteration.
**Message filtering**: Filter out internal messages and ensure only 'user' and 'assistant' roles are sent to `generateText`, as the SDK doesn't support 'tool' role in messages.
**Format specification**: Always specify `format: 'openai'` when storing messages to Acontext to ensure proper format handling.
In your production agent, you don't need to call the `flush` method after each conversation; Acontext automatically flushes the buffer when it is full or idle. To understand the buffer mechanism, see [Session Buffer Mechanism](/observe/buffer).
## Differences from OpenAI SDK
The Vercel AI SDK differs from the basic OpenAI SDK in several key ways:
The Vercel AI SDK provides a unified interface for multiple AI providers (OpenAI, Anthropic, etc.) through a single API, making it easy to switch providers.
Tools are defined using Zod schemas with the `tool()` function, providing type safety and validation.
Vercel AI SDK v5 only accepts 'user' and 'assistant' roles in messages. Tool results must be converted to user messages for the next iteration.
Message content must be a string, not an array. Array content needs to be converted to a string before passing to the SDK.
## Next Steps
* Monitor what your agent plans vs. what it executes
* Enable your agent to learn from completed tasks
* View all agent interactions in one place
* Explore the full Acontext API
# Introduction
Source: https://docs.acontext.io/integrations/intro
Acontext Community provides a collection of integrations for popular AI frameworks and tools.
## The easiest way to get started
Download `acontext-cli` and create a new project with the template:
```bash theme={null}
curl -fsSL https://install.acontext.io | sh
acontext create my-proj --template-path "python/openai-agent-artifacts"
```
For more end-to-end integrations, see the [Acontext-Examples](https://github.com/memodb-io/Acontext-Examples) repository.
# OpenAI Python SDK
Source: https://docs.acontext.io/integrations/openai-python
Integrate OpenAI Python SDK with Acontext for session persistence, task extraction, and manual tool calling
The OpenAI Python SDK provides direct access to OpenAI's API for building AI applications. When integrated with Acontext, you get persistent session management, automatic task extraction, and full observability of your agent's tool usage and conversations.
## What This Integration Provides
* Store conversation history across multiple agent runs and resume sessions seamlessly
* Full control over tool execution with explicit handling of function calls
* Automatically identify and track tasks from agent conversations with progress updates
* Track all tool calls and their results for complete visibility into agent behavior
## Quick Start
### Download Template
Use `acontext-cli` to quickly set up an OpenAI Python SDK project with Acontext integration:
```bash theme={null}
acontext create my-openai-project --template-path "python/openai-basic"
```
If you haven't installed `acontext-cli` yet, install it first:
```bash theme={null}
curl -fsSL https://install.acontext.io | sh
```
### Manual Setup
If you prefer to set up manually:
Install OpenAI and Acontext Python packages:
```bash theme={null}
uv sync
```
Or with pip:
```bash theme={null}
pip install openai acontext python-dotenv
```
Create a `.env` file with your API credentials:
```env theme={null}
OPENAI_API_KEY=your_openai_key_here
ACONTEXT_API_KEY=sk-ac-your-root-api-bearer-token
ACONTEXT_BASE_URL=http://localhost:8029/api/v1
```
Never commit API keys to version control. Always use environment variables or secure secret management.
Create OpenAI and Acontext client instances:
```python theme={null}
from openai import OpenAI
from acontext import AcontextClient
import os
openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
acontext_client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# If you're using self-hosted Acontext:
# acontext_client = AcontextClient(
# base_url="http://localhost:8029/api/v1",
# api_key="sk-ac-your-root-api-bearer-token",
# )
```
## How It Works
The OpenAI Python SDK integration works by storing conversation messages to Acontext in OpenAI message format. Since both use the same format, no conversion is needed.
### Message Flow
1. **Create session**: Initialize a new Acontext session for your agent
2. **Store messages**: Append each message (user, assistant, and tool) to Acontext as the conversation progresses
3. **Handle tool calls**: Manually execute tools when OpenAI requests them
4. **Extract tasks**: After the conversation, flush the session and retrieve extracted tasks
5. **Resume sessions**: Load previous conversation history to continue where you left off
### Basic Integration Pattern
Here's the core pattern for integrating OpenAI Python SDK with Acontext:
```python theme={null}
from openai import OpenAI
from acontext import AcontextClient
# Initialize clients
openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
acontext_client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# Create Acontext session
space = acontext_client.spaces.create()
session = acontext_client.sessions.create(space_id=space.id)
# Build conversation
conversation = []
user_msg = {"role": "user", "content": "Hello!"}
conversation.append(user_msg)
acontext_client.sessions.store_message(session_id=session.id, blob=user_msg)
# Call OpenAI API
response = openai_client.chat.completions.create(
model="gpt-4o-mini",
messages=conversation,
)
# Store assistant response to Acontext
assistant_msg = {
"role": response.choices[0].message.role,
"content": response.choices[0].message.content
}
conversation.append(assistant_msg)
acontext_client.sessions.store_message(session_id=session.id, blob=assistant_msg)
```
## Tool Calling
This integration demonstrates manual tool calling, giving you full control over tool execution:
### Define Tools
Define your tools in OpenAI's function calling format:
```python theme={null}
tools = [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Returns weather info for the specified city.",
"parameters": {
"type": "object",
"properties": {
"city": {
"type": "string",
"description": "The city to get weather for",
}
},
"required": ["city"],
"additionalProperties": False,
},
},
},
]
```
### Execute Tools Manually
Handle tool calls in a loop until the agent provides a final response:
```python theme={null}
import json

def run_agent(client: OpenAI, conversation: list[dict]) -> tuple[str, list[dict]]:
    """Run the agent with tool calling support."""
    messages_to_store = list(conversation)
    new_messages = []
    max_iterations = 10
    iteration = 0
    final_content = ""
    while iteration < max_iterations:
        iteration += 1
        # Call OpenAI API
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages_to_store,
            tools=tools,
            tool_choice="auto",
        )
        message = response.choices[0].message
        message_dict = {"role": message.role, "content": message.content}
        # Handle tool calls
        if message.tool_calls:
            message_dict["tool_calls"] = [
                {
                    "id": tc.id,
                    "type": "function",
                    "function": {
                        "name": tc.function.name,
                        "arguments": tc.function.arguments,
                    },
                }
                for tc in message.tool_calls
            ]
        messages_to_store.append(message_dict)
        new_messages.append(message_dict)
        if not message.tool_calls:
            # No more tool calls, we're done
            final_content = message.content or ""
            break
        # Execute tools
        for tool_call in message.tool_calls:
            function_name = tool_call.function.name
            function_args = json.loads(tool_call.function.arguments)
            function_result = execute_tool(function_name, function_args)
            # Add tool response
            tool_message = {
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": function_result,
            }
            messages_to_store.append(tool_message)
            new_messages.append(tool_message)
    return final_content, new_messages
```
### Store Messages to Acontext
Store all messages (including tool calls and tool responses) to Acontext:
```python theme={null}
def append_message(message: dict, conversation: list[dict], session_id: str):
    """Append a message to conversation and store to Acontext."""
    conversation.append(message)
    acontext_client.sessions.store_message(session_id=session_id, blob=message)
    return conversation

# After running agent
response_content, new_messages = run_agent(openai_client, conversation)
for msg in new_messages:
    conversation = append_message(msg, conversation, session_id)
```
## Complete Example
This example demonstrates a multi-turn conversation with tool calling and task extraction:
```python theme={null}
import asyncio
import json
import os

from openai import OpenAI
from acontext import AcontextClient

# Initialize clients
openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
acontext_client = AcontextClient(
    api_key=os.getenv("ACONTEXT_API_KEY"),
)

# Tool definitions
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Returns weather info for the specified city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "The city to get weather for"}
                },
                "required": ["city"],
            },
        },
    },
]

def get_weather(city: str) -> str:
    """Returns weather info for the specified city."""
    return f"The weather in {city} is sunny"

def execute_tool(tool_name: str, tool_args: dict) -> str:
    """Execute a tool by name with given arguments."""
    if tool_name == "get_weather":
        return get_weather(**tool_args)
    else:
        return f"Unknown tool: {tool_name}"

def append_message(message: dict, conversation: list[dict], session_id: str):
    conversation.append(message)
    acontext_client.sessions.store_message(session_id=session_id, blob=message)
    return conversation

async def main():
    # Create space and session
    space = acontext_client.spaces.create()
    session = acontext_client.sessions.create(space_id=space.id)
    conversation = []

    # First interaction
    user_msg = {"role": "user", "content": "What's the weather in Helsinki?"}
    conversation = append_message(user_msg, conversation, session.id)

    # Run agent with tool calling (run_agent as defined in the section above)
    response_content, new_messages = run_agent(openai_client, conversation)

    # Store all messages to Acontext
    for msg in new_messages:
        conversation = append_message(msg, conversation, session.id)

    # Extract tasks
    acontext_client.sessions.flush(session.id)
    tasks_response = acontext_client.sessions.get_tasks(session.id)
    print("Extracted tasks:")
    for task in tasks_response.items:
        print(f"Task: {task.data.task_description}")
        print(f"Status: {task.status}")

if __name__ == "__main__":
    asyncio.run(main())
```
## Key Features
### Session Persistence
Resume conversations by loading previous messages from Acontext:
```python theme={null}
# Load previous conversation
messages = acontext_client.sessions.get_messages(session_id)
conversation = messages.items
# Continue conversation
conversation.append({"role": "user", "content": "Summarize our conversation"})
response = openai_client.chat.completions.create(
model="gpt-4o-mini",
messages=conversation,
)
```
### Task Extraction
After completing a conversation, extract tasks with their status and metadata:
```python theme={null}
# Flush session to trigger task extraction
acontext_client.sessions.flush(session_id)

# Retrieve extracted tasks
tasks_response = acontext_client.sessions.get_tasks(session_id)
for task in tasks_response.items:
    print(f"Task: {task.data.task_description}")
    print(f"Status: {task.status}")

    # Access progress updates if available
    if task.data.progresses:
        for progress in task.data.progresses:
            print(f"  Progress: {progress}")

    # Access user preferences if available
    if task.data.user_preferences:
        for pref in task.data.user_preferences:
            print(f"  Preference: {pref}")
```
### Tool Call Tracking
Acontext automatically tracks all tool calls and their results when messages are sent:
```python theme={null}
# Tool calls are automatically tracked when you store messages
message_with_tool_call = {
"role": "assistant",
"content": None,
"tool_calls": [
{
"id": "call_123",
"type": "function",
"function": {
"name": "get_weather",
"arguments": '{"city": "Helsinki"}'
}
}
]
}
acontext_client.sessions.store_message(session_id=session_id, blob=message_with_tool_call)
# Tool results are also tracked
tool_result = {
"role": "tool",
"tool_call_id": "call_123",
"content": "The weather in Helsinki is sunny"
}
acontext_client.sessions.store_message(session_id=session_id, blob=tool_result)
```
## Best Practices
**Message format**: OpenAI message format is compatible with Acontext, so you can store messages directly without conversion. This includes user, assistant, and tool messages. System prompts should be handled through session-level or skill-level configuration rather than as messages.
**Tool execution**: Always execute tools in the order they appear in `tool_calls`, and include the `tool_call_id` in tool response messages for proper tracking.
**Iteration limits**: Set a reasonable `max_iterations` limit for tool calling loops to prevent infinite loops if the agent keeps requesting tools.
In your production agent, you don't need to call the `flush` method after each conversation; Acontext automatically flushes the buffer when it is full or idle. To understand the buffer mechanism, see [Session Buffer Mechanism](/observe/buffer).
## Next Steps
* Monitor what your agent plans vs. what it executes
* Enable your agent to learn from completed tasks
* View all agent interactions in one place
* Explore the full Acontext API
# OpenAI TypeScript SDK
Source: https://docs.acontext.io/integrations/openai-typescript
Integrate OpenAI TypeScript SDK with Acontext for session persistence, task extraction, and manual tool calling
The OpenAI TypeScript SDK provides direct access to OpenAI's API for building AI applications in Node.js and TypeScript. When integrated with Acontext, you get persistent session management, automatic task extraction, and full observability of your agent's tool usage and conversations.
## What This Integration Provides
* Store conversation history across multiple agent runs and resume sessions seamlessly
* Full control over tool execution with explicit handling of function calls
* Automatically identify and track tasks from agent conversations with progress updates
* Track all tool calls and their results for complete visibility into agent behavior
## Quick Start
### Download Template
Use `acontext-cli` to quickly set up an OpenAI TypeScript SDK project with Acontext integration:
```bash theme={null}
acontext create my-openai-project --template-path "typescript/openai-basic"
```
If you haven't installed `acontext-cli` yet, install it first:
```bash theme={null}
curl -fsSL https://install.acontext.io | sh
```
### Manual Setup
If you prefer to set up manually:
Install OpenAI and Acontext TypeScript packages:
```bash theme={null}
npm install openai @acontext/acontext dotenv
```
Or with yarn:
```bash theme={null}
yarn add openai @acontext/acontext dotenv
```
Create a `.env` file with your API credentials:
```env theme={null}
OPENAI_API_KEY=your_openai_key_here
ACONTEXT_API_KEY=sk-ac-your-root-api-bearer-token
ACONTEXT_BASE_URL=http://localhost:8029/api/v1
OPENAI_BASE_URL= # Optional, for custom OpenAI-compatible endpoints
```
Never commit API keys to version control. Always use environment variables or secure secret management.
Create OpenAI and Acontext client instances:
```typescript theme={null}
import OpenAI from 'openai';
import { AcontextClient } from '@acontext/acontext';
import dotenv from 'dotenv';
dotenv.config();
const openaiClient = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
const acontextClient = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// If you're using self-hosted Acontext:
// const acontextClient = new AcontextClient({
// baseUrl: "http://localhost:8029/api/v1",
// apiKey: "sk-ac-your-root-api-bearer-token",
// });
```
## How It Works
The OpenAI TypeScript SDK integration works by storing conversation messages to Acontext in OpenAI message format. Since both use the same format, no conversion is needed.
### Message Flow
1. **Create session**: Initialize a new Acontext session for your agent
2. **Store messages**: Append each message (user, assistant, and tool) to Acontext as the conversation progresses
3. **Handle tool calls**: Manually execute tools when OpenAI requests them
4. **Extract tasks**: After the conversation, flush the session and retrieve extracted tasks
5. **Resume sessions**: Load previous conversation history to continue where you left off
### Basic Integration Pattern
Here's the core pattern for integrating OpenAI TypeScript SDK with Acontext:
```typescript theme={null}
import OpenAI from 'openai';
import { AcontextClient } from '@acontext/acontext';
// Initialize clients
const openaiClient = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
const acontextClient = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// Create Acontext session
const space = await acontextClient.spaces.create();
const session = await acontextClient.sessions.create({ spaceId: space.id });
// Build conversation
let conversation: any[] = [];
const userMsg = { role: 'user', content: 'Hello!' };
conversation.push(userMsg);
await acontextClient.sessions.storeMessage(session.id, userMsg, {
format: 'openai',
});
// Call OpenAI API
const response = await openaiClient.chat.completions.create({
model: 'gpt-4o-mini',
messages: conversation,
});
// Store assistant response to Acontext
const assistantMsg = {
role: response.choices[0].message.role,
content: response.choices[0].message.content,
};
conversation.push(assistantMsg);
await acontextClient.sessions.storeMessage(session.id, assistantMsg, {
format: 'openai',
});
```
## Tool Calling
This integration demonstrates manual tool calling, giving you full control over tool execution:
### Define Tools
Define your tools in OpenAI's function calling format:
```typescript theme={null}
const tools = [
{
type: 'function' as const,
function: {
name: 'get_weather',
description: 'Returns weather info for the specified city.',
parameters: {
type: 'object',
properties: {
city: {
type: 'string',
description: 'The city to get weather for',
},
},
required: ['city'],
additionalProperties: false,
},
},
},
];
```
### Execute Tools Manually
Handle tool calls in a loop until the agent provides a final response:
```typescript theme={null}
async function runAgent(
client: OpenAI,
conversation: any[]
): Promise<[string, any[]]> {
const messagesToStore: any[] = [
...conversation,
];
const newMessages: any[] = [];
const maxIterations = 10;
let iteration = 0;
let finalContent = '';
while (iteration < maxIterations) {
iteration += 1;
// Call OpenAI API
const response = await client.chat.completions.create({
model: 'gpt-4o-mini',
messages: messagesToStore,
tools: tools,
tool_choice: 'auto',
});
const message = response.choices[0].message;
const messageDict: any = {
role: message.role,
content: message.content,
};
// Handle tool calls
const toolCallsWithFunction: Array<{
id: string;
function: { name: string; arguments: string };
}> = [];
if (message.tool_calls) {
messageDict.tool_calls = message.tool_calls.map((tc: any) => {
toolCallsWithFunction.push({
id: tc.id,
function: {
name: tc.function.name,
arguments: tc.function.arguments,
},
});
return {
id: tc.id,
type: 'function',
function: {
name: tc.function.name,
arguments: tc.function.arguments,
},
};
});
}
messagesToStore.push(messageDict);
newMessages.push(messageDict);
// If there are no tool calls, we're done
if (!message.tool_calls || message.tool_calls.length === 0) {
finalContent = message.content || '';
break;
}
// Execute tool calls
for (const toolCallInfo of toolCallsWithFunction) {
const functionName = toolCallInfo.function.name;
const functionArgsStr = toolCallInfo.function.arguments || '{}';
const functionArgs = JSON.parse(functionArgsStr);
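      // executeTool is defined in the Complete Example section below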
const functionResult = executeTool(functionName, functionArgs);
// Add tool response
const toolMessage = {
role: 'tool' as const,
tool_call_id: toolCallInfo.id,
content: functionResult,
};
messagesToStore.push(toolMessage);
newMessages.push(toolMessage);
}
}
return [finalContent, newMessages];
}
```
### Store Messages to Acontext
Store all messages (including tool calls and tool responses) to Acontext:
```typescript theme={null}
async function appendMessage(
message: any,
conversation: any[],
sessionId: string
): Promise<any[]> {
conversation.push(message);
await acontextClient.sessions.storeMessage(sessionId, message, {
format: 'openai',
});
return conversation;
}
// After running agent
const [responseContent, newMessages] = await runAgent(openaiClient, conversation);
for (const msg of newMessages) {
conversation = await appendMessage(msg, conversation, session.id);
}
```
## Complete Example
This example demonstrates a multi-turn conversation with tool calling and task extraction:
```typescript theme={null}
import OpenAI from 'openai';
import { AcontextClient } from '@acontext/acontext';
import dotenv from 'dotenv';
dotenv.config();
// Initialize clients
const openaiClient = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
const acontextClient = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// If you're using self-hosted Acontext:
// const acontextClient = new AcontextClient({
// baseUrl: "http://localhost:8029/api/v1",
// apiKey: "sk-ac-your-root-api-bearer-token",
// });
// Tool definitions
const tools = [
{
type: 'function' as const,
function: {
name: 'get_weather',
description: 'Returns weather info for the specified city.',
parameters: {
type: 'object',
properties: {
city: { type: 'string', description: 'The city to get weather for' },
},
required: ['city'],
},
},
},
];
function getWeather(city: string): string {
return `The weather in ${city} is sunny`;
}
function executeTool(toolName: string, toolArgs: Record<string, any>): string {
if (toolName === 'get_weather') {
return getWeather(toolArgs.city);
} else {
return `Unknown tool: ${toolName}`;
}
}
async function appendMessage(
message: any,
conversation: any[],
sessionId: string
): Promise<any[]> {
conversation.push(message);
await acontextClient.sessions.storeMessage(sessionId, message, {
format: 'openai',
});
return conversation;
}
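// runAgent is defined in the "Execute Tools Manually" section above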
async function main(): Promise<void> {
// Create space and session
const space = await acontextClient.spaces.create();
const session = await acontextClient.sessions.create({ spaceId: space.id });
let conversation: any[] = [];
// First interaction
const userMsg = { role: 'user', content: "What's the weather in Helsinki?" };
conversation = await appendMessage(userMsg, conversation, session.id);
// Run agent with tool calling
const [responseContent, newMessages] = await runAgent(
openaiClient,
conversation
);
// Store all messages to Acontext
for (const msg of newMessages) {
conversation = await appendMessage(msg, conversation, session.id);
}
// Extract tasks
await acontextClient.sessions.flush(session.id);
const tasksResponse = await acontextClient.sessions.getTasks(session.id);
console.log('Extracted tasks:');
for (const task of tasksResponse.items) {
console.log(`Task: ${task.data.task_description}`);
console.log(`Status: ${task.status}`);
}
}
main().catch(console.error);
```
## Key Features
### Session Persistence
Resume conversations by loading previous messages from Acontext:
```typescript theme={null}
// Load previous conversation
const messages = await acontextClient.sessions.getMessages(sessionId, {
format: 'openai',
});
const conversation: any[] = messages.items;
// Continue conversation
conversation.push({
role: 'user',
content: 'Summarize our conversation',
});
const response = await openaiClient.chat.completions.create({
model: 'gpt-4o-mini',
messages: conversation,
});
```
### Task Extraction
After completing a conversation, extract tasks with their status and metadata:
```typescript theme={null}
// Flush session to trigger task extraction
await acontextClient.sessions.flush(sessionId);
// Retrieve extracted tasks
const tasksResponse = await acontextClient.sessions.getTasks(sessionId);
for (const task of tasksResponse.items) {
console.log(`Task: ${task.data.task_description}`);
console.log(`Status: ${task.status}`);
// Access progress updates if available
if (task.data.progresses) {
for (const progress of task.data.progresses) {
console.log(` Progress: ${progress}`);
}
}
// Access user preferences if available
if (task.data.user_preferences) {
for (const pref of task.data.user_preferences) {
console.log(` Preference: ${pref}`);
}
}
}
```
### Tool Call Tracking
Acontext automatically tracks all tool calls and their results when messages are sent:
```typescript theme={null}
// Tool calls are automatically tracked when you store messages
const messageWithToolCall = {
role: 'assistant',
content: null,
tool_calls: [
{
id: 'call_123',
type: 'function',
function: {
name: 'get_weather',
arguments: '{"city": "Helsinki"}',
},
},
],
};
await acontextClient.sessions.storeMessage(sessionId, messageWithToolCall, {
format: 'openai',
});
// Tool results are also tracked
const toolResult = {
role: 'tool',
tool_call_id: 'call_123',
content: 'The weather in Helsinki is sunny',
};
await acontextClient.sessions.storeMessage(sessionId, toolResult, {
format: 'openai',
});
```
## Best Practices
**Message format**: Always specify `format: 'openai'` when storing messages to Acontext to ensure proper message format handling.
**Tool execution**: Always execute tools in the order they appear in `tool_calls`, and include the `tool_call_id` in tool response messages for proper tracking.
**Iteration limits**: Set a reasonable `maxIterations` limit for tool calling loops to prevent infinite loops if the agent keeps requesting tools.
**Async/await**: Use async/await consistently when working with both OpenAI and Acontext APIs, as they both return Promises.
In your production agent, you don't need to call the `flush` method after each conversation; Acontext automatically flushes the buffer when it is full or idle. To understand the buffer mechanism, please refer to [Session Buffer Mechanism](/observe/buffer).
## Next Steps
* Monitor what your agent plans vs. what it executes
* Enable your agent to learn from completed tasks
* View all agent interactions in one place
* Explore the full Acontext API
# OpenAI Agents SDK
Source: https://docs.acontext.io/integrations/openai_agent
Integrate OpenAI Agents SDK with Acontext for session persistence, automatic tool execution, and task extraction
The OpenAI Agents SDK provides a high-level framework for building AI agents with automatic tool execution. When integrated with Acontext, you get persistent session management, automatic task extraction, and seamless conversation resumption across sessions.
## What This Integration Provides
* The Agents SDK handles tool calls automatically - no manual tool execution needed
* Store conversation history across multiple agent runs and resume sessions seamlessly
* Automatically identify and track tasks from agent conversations with progress updates
* Automatic conversion between Responses API and Chat Completions API formats
## Quick Start
### Download Template
Use `acontext-cli` to quickly set up an OpenAI Agents SDK project with Acontext integration:
```bash theme={null}
acontext create my-agent-project --template-path "python/openai-agent-basic"
```
If you haven't installed `acontext-cli` yet, install it first:
```bash theme={null}
curl -fsSL https://install.acontext.io | sh
```
### Manual Setup
If you prefer to set up manually:
Install OpenAI Agents SDK and Acontext Python packages:
```bash theme={null}
uv add openai-agents acontext python-dotenv
```
Or with pip:
```bash theme={null}
pip install openai-agents acontext python-dotenv
```
Create a `.env` file with your API credentials:
```env theme={null}
OPENAI_API_KEY=your_openai_key_here
ACONTEXT_API_KEY=sk-ac-your-root-api-bearer-token
ACONTEXT_BASE_URL=http://localhost:8029/api/v1
```
Never commit API keys to version control. Always use environment variables or secure secret management.
Create an OpenAI Agents SDK agent and Acontext client:
```python theme={null}
from agents import Agent, OpenAIChatCompletionsModel, AsyncOpenAI
from acontext import AcontextClient
import os
# Create agent
agent = Agent(
name="Assistant",
instructions="You are a helpful assistant",
model=OpenAIChatCompletionsModel(
model="gpt-4o-mini",
openai_client=AsyncOpenAI(),
),
)
# Initialize Acontext client
acontext_client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# If you're using self-hosted Acontext:
# acontext_client = AcontextClient(
# base_url="http://localhost:8029/api/v1",
# api_key="sk-ac-your-root-api-bearer-token",
# )
```
## How It Works
The OpenAI Agents SDK uses the Responses API format internally, while Acontext uses the Chat Completions API format. The integration handles conversion between these formats automatically.
### Message Flow
1. **Create session**: Initialize a new Acontext session for your agent
2. **Run agent**: Use `Runner.run()` to execute the agent with automatic tool handling
3. **Convert and store**: Convert Responses API format to Chat Completions format and store to Acontext
4. **Extract tasks**: After the conversation, flush the session and retrieve extracted tasks
5. **Resume sessions**: Load previous conversation history, convert back to Responses API format, and continue
### Format Conversion
The Agents SDK uses Responses API format (with `function_call` and `function_call_output` items), while Acontext uses Chat Completions API format (with `tool_calls` and `tool` messages). The integration provides conversion utilities:
* **To Acontext**: Use `Converter.items_to_messages()` to convert Responses API format to Chat Completions format
* **From Acontext**: Use `message_to_input_items()` to convert Chat Completions format back to Responses API format
## Basic Integration Pattern
Here's the core pattern for integrating OpenAI Agents SDK with Acontext:
```python theme={null}
from agents import Agent, Runner, OpenAIChatCompletionsModel, AsyncOpenAI, function_tool
from agents.models.chatcmpl_converter import Converter
from acontext import AcontextClient
import os
# Create agent with tools
@function_tool
def get_weather(city: str) -> str:
"""Returns weather info for the specified city."""
return f"The weather in {city} is sunny"
agent = Agent(
name="Assistant",
instructions="You are a helpful assistant",
model=OpenAIChatCompletionsModel(
model="gpt-4o-mini",
openai_client=AsyncOpenAI(),
),
tools=[get_weather],
)
# Initialize Acontext
acontext_client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# Create session
space = acontext_client.spaces.create()
session = acontext_client.sessions.create(space_id=space.id)
# Run agent
result = await Runner.run(agent, "What's the weather in Helsinki?")
# Convert to Chat Completions format and store to Acontext
messages = Converter.items_to_messages(result.to_input_list())
for msg in messages:
acontext_client.sessions.store_message(
session_id=session.id,
blob=msg,
format="openai"
)
```
## Function Tools
The Agents SDK uses the `@function_tool` decorator to define tools. The SDK automatically handles tool calls and execution:
### Define Tools
```python theme={null}
from agents import function_tool
@function_tool
def get_weather(city: str) -> str:
"""Returns weather info for the specified city."""
return f"The weather in {city} is sunny"
@function_tool
def book_flight(from_city: str, to_city: str, date: str) -> str:
"""Book a flight."""
return f"Flight booked successfully for '{from_city}' to '{to_city}' on '{date}'"
```
### Register Tools with Agent
```python theme={null}
agent = Agent(
name="Assistant",
instructions="You are a helpful assistant",
model=OpenAIChatCompletionsModel(
model="gpt-4o-mini",
openai_client=AsyncOpenAI(),
),
tools=[get_weather, book_flight],
)
```
The Agents SDK automatically:
* Parses tool calls from model responses
* Executes the appropriate tool functions
* Injects tool results back into the conversation
* Handles multi-turn tool calling workflows
## Complete Example
This example demonstrates a multi-turn conversation with automatic tool execution and task extraction:
```python theme={null}
import asyncio
import os
from agents import Agent, Runner, OpenAIChatCompletionsModel, AsyncOpenAI, function_tool
from agents.models.chatcmpl_converter import Converter
from acontext import AcontextClient
from helper import message_to_input_items
# Initialize Acontext
acontext_client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
@function_tool
def get_weather(city: str) -> str:
"""Returns weather info for the specified city."""
return f"The weather in {city} is sunny"
@function_tool
def book_flight(from_city: str, to_city: str, date: str) -> str:
"""Book a flight."""
return f"Flight booked successfully for '{from_city}' to '{to_city}' on '{date}'"
def create_agent():
return Agent(
name="Assistant",
instructions="You are a helpful assistant",
model=OpenAIChatCompletionsModel(
model="gpt-4o-mini",
openai_client=AsyncOpenAI(),
),
tools=[get_weather, book_flight],
)
async def session_1(session_id: str):
agent = create_agent()
# First interaction
result = await Runner.run(
agent,
"I'd like to have a 3-day trip in Finland. I like to see the nature. Give me the plan"
)
# Second interaction - continue conversation
user_msg_2 = {"role": "user", "content": "The plan sounds good, check the weather there"}
new_input = result.to_input_list() + [user_msg_2]
result = await Runner.run(agent, new_input)
# Convert to Chat Completions format and store to Acontext
messages = Converter.items_to_messages(result.to_input_list())
for msg in messages:
acontext_client.sessions.store_message(
session_id=session_id,
blob=msg,
format="openai"
)
# Extract tasks
acontext_client.sessions.flush(session_id)
tasks_response = acontext_client.sessions.get_tasks(session_id)
print("Extracted tasks:")
for task in tasks_response.items:
print(f"Task: {task.data.task_description}")
print(f"Status: {task.status}")
async def main():
space = acontext_client.spaces.create()
session = acontext_client.sessions.create(space_id=space.id)
await session_1(session.id)
if __name__ == "__main__":
asyncio.run(main())
```
## Key Features
### Session Persistence
Resume conversations by loading previous messages from Acontext and converting them back to Responses API format:
```python theme={null}
# Load previous conversation
messages = acontext_client.sessions.get_messages(session_id, format="openai")
# Convert back to Responses API format
conversation = []
for msg in messages.items:
items = message_to_input_items(msg)
conversation.extend(items)
# Continue conversation
conversation.append({"role": "user", "content": "Summarize our conversation"})
result = await Runner.run(agent, conversation)
print(result.final_output)
```
### Task Extraction
After completing a conversation, extract tasks with their status and metadata:
```python theme={null}
# Flush session to trigger task extraction
acontext_client.sessions.flush(session_id)
# Retrieve extracted tasks
tasks_response = acontext_client.sessions.get_tasks(session_id)
for task in tasks_response.items:
print(f"Task: {task.data.task_description}")
print(f"Status: {task.status}")
# Access progress updates if available
if task.data.progresses:
for progress in task.data.progresses:
print(f" Progress: {progress}")
# Access user preferences if available
if task.data.user_preferences:
for pref in task.data.user_preferences:
print(f" Preference: {pref}")
```
### Multi-turn Conversations
The Agents SDK makes it easy to build multi-turn conversations:
```python theme={null}
# First turn
result = await Runner.run(agent, "Plan a trip to Finland")
# Second turn - continue from previous result
user_msg = {"role": "user", "content": "Check the weather there"}
new_input = result.to_input_list() + [user_msg]
result = await Runner.run(agent, new_input)
# Third turn - continue again
user_msg_2 = {"role": "user", "content": "Book a flight"}
new_input = result.to_input_list() + [user_msg_2]
result = await Runner.run(agent, new_input)
```
## Message Format Conversion
The integration requires converting between Responses API format (used by Agents SDK) and Chat Completions API format (used by Acontext).
### Converting to Acontext Format
Use `Converter.items_to_messages()` to convert Responses API format to Chat Completions format:
```python theme={null}
from agents.models.chatcmpl_converter import Converter
# After running agent
result = await Runner.run(agent, "Hello!")
# Convert to Chat Completions format
messages = Converter.items_to_messages(result.to_input_list())
# Store to Acontext
for msg in messages:
acontext_client.sessions.store_message(
session_id=session_id,
blob=msg,
format="openai"
)
```
### Converting from Acontext Format
Use `message_to_input_items()` helper to convert Chat Completions format back to Responses API format:
```python theme={null}
from helper import message_to_input_items
# Load messages from Acontext
messages = acontext_client.sessions.get_messages(session_id, format="openai")
# Convert back to Responses API format
conversation = []
for msg in messages.items:
items = message_to_input_items(msg)
conversation.extend(items)
# Use with Agents SDK
result = await Runner.run(agent, conversation)
```
The `message_to_input_items()` helper function handles the following conversions (sketched below):
* User/system messages → `EasyInputMessageParam`
* Assistant messages → `EasyInputMessageParam` or `ResponseOutputMessageParam` with tool calls
* Tool messages → `FunctionCallOutput`
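A minimal sketch of such a converter is shown below. The item shapes follow the Responses API input format, but treat this as an illustration; the helper bundled with the template may differ in detail:
```python theme={null}
# Hypothetical sketch of a Chat Completions -> Responses API converter.
def message_to_input_items(msg: dict) -> list[dict]:
    items: list[dict] = []
    role = msg.get("role")
    if role in ("user", "system"):
        # Plain messages map directly to EasyInputMessageParam-style items
        items.append({"role": role, "content": msg.get("content") or ""})
    elif role == "assistant":
        if msg.get("content"):
            items.append({"role": "assistant", "content": msg["content"]})
        # Each tool call becomes a function_call item
        for tc in msg.get("tool_calls") or []:
            items.append({
                "type": "function_call",
                "call_id": tc["id"],
                "name": tc["function"]["name"],
                "arguments": tc["function"]["arguments"],
            })
    elif role == "tool":
        # Tool results become function_call_output items
        items.append({
            "type": "function_call_output",
            "call_id": msg["tool_call_id"],
            "output": msg.get("content") or "",
        })
    return items
```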
## Best Practices
**Batch message storing**: Convert the entire conversation at once using `Converter.items_to_messages(result.to_input_list())` rather than converting individual messages.
**Tool execution**: The Agents SDK handles tool execution automatically. You don't need to manually execute tools or handle tool responses.
**Conversation continuation**: Use `result.to_input_list()` to get the conversation history in Responses API format, then append new messages to continue the conversation.
**Format specification**: Always specify `format="openai"` when storing messages to Acontext to ensure proper format handling.
In your production agent, you don't need to call the `flush` method after each conversation; Acontext automatically flushes the buffer when it is full or idle. To understand the buffer mechanism, please refer to [Session Buffer Mechanism](/observe/buffer).
## Differences from OpenAI Python SDK
The OpenAI Agents SDK differs from the basic OpenAI Python SDK in several key ways:
* The Agents SDK automatically executes tools when the model requests them. You don't need to manually check for tool calls or execute tools yourself.
* The Agents SDK uses OpenAI's Responses API format internally, which uses `function_call` and `function_call_output` items instead of `tool_calls` and `tool` messages (illustrated below).
* The Agents SDK provides a higher-level API with `Runner.run()` and the `Agent` class, making it easier to build agents without managing API calls directly.
* Tools are defined using the `@function_tool` decorator, which automatically registers them with the agent.
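To make the format difference concrete, here is roughly how the same tool interaction looks in each of the two formats. The shapes below are illustrative examples of the two APIs' message and item structures, not output captured from the SDK:
```python theme={null}
# Chat Completions format (what Acontext stores): an assistant message with
# tool_calls, followed by a "tool" role message carrying the result.
chat_completions_messages = [
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_123",
            "type": "function",
            "function": {"name": "get_weather", "arguments": '{"city": "Helsinki"}'},
        }],
    },
    {"role": "tool", "tool_call_id": "call_123", "content": "The weather in Helsinki is sunny"},
]

# Responses API format (what the Agents SDK uses internally): separate
# function_call and function_call_output items.
responses_items = [
    {
        "type": "function_call",
        "call_id": "call_123",
        "name": "get_weather",
        "arguments": '{"city": "Helsinki"}',
    },
    {
        "type": "function_call_output",
        "call_id": "call_123",
        "output": "The weather in Helsinki is sunny",
    },
]
```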
## Next Steps
* Monitor what your agent plans vs. what it executes
* Enable your agent to learn from completed tasks
* View all agent interactions in one place
* Explore the full Acontext API
# Design a Complex Task Judgment
Source: https://docs.acontext.io/learn/advance/design-complex
Acontext allows you to add rules to judge what is a complex task.
Acontext won't learn every successful or failed task into the skill space. It only learns the complex ones.
## What is a Complex Task?
By default, the task complexity is judged based on these conditions:
* **(c.1)** Wrong tool parameter passing that could be avoided: **normal**
* **(c.2)** Back-and-forth retries due to wrong strategy (not errors): **normal**
* **(c.3)** User provides feedback/preferences to correct agent's wrong decision: **critical**
* **(c.4)** User explicitly emphasized saving this workflow or experience: **critical**
If one `critical` rule is satisfied, the task is considered complex.
If at least two `normal` rules are satisfied, the task is also considered complex.
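For intuition, the default rule combination can be sketched as a small function. This is a simplified illustration of the thresholds described above, not Acontext's actual implementation (the real judgment is made by an LLM from conversation context):
```python theme={null}
# Simplified sketch of the default complexity judgment, for intuition only.
RULE_LEVELS = {"c.1": "normal", "c.2": "normal", "c.3": "critical", "c.4": "critical"}

def is_complex_task(satisfied_rules: list[str]) -> bool:
    critical_hits = sum(1 for r in satisfied_rules if RULE_LEVELS.get(r) == "critical")
    normal_hits = sum(1 for r in satisfied_rules if RULE_LEVELS.get(r) == "normal")
    # One critical rule, or at least two normal rules, marks the task as complex
    return critical_hits >= 1 or normal_hits >= 2

print(is_complex_task(["c.1"]))         # False: one normal rule is not enough
print(is_complex_task(["c.1", "c.2"]))  # True: two normal rules
print(is_complex_task(["c.3"]))         # True: one critical rule
```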
## Design your own Complexity Scoring Rules
You can append more rules to enrich the complexity judgment by adding them to your `config.yaml`:
```yaml config.yaml theme={null}
sop_agent_custom_scoring_rules:
- description: "User wants another theme of landing page"
level: "normal"
- description: "When the word 'save this as sop' appears in the user's message, it's a complex task"
level: "critical"
```
# Understand the Experience Agent
Source: https://docs.acontext.io/learn/advance/experience-agent
We Give the Agent a Notion-like Workspace
We believe that an agent can learn from its past, just like humans, and become more intelligent.
However, memory mechanisms modeled on the human brain are vague. Humans also have a more concrete way of accumulating experience: note-taking systems.
Acontext draws inspiration from Notion's design philosophy, providing the Experience Agent with a similar environment where it can explore and store its own experiences.
## Enabling Self-Learning
```python Python theme={null}
space = client.spaces.create()
session = client.sessions.create(space_id=space.id)
```
```typescript TypeScript theme={null}
const space = await client.spaces.create();
const session = await client.sessions.create({
spaceId: space.id
});
```
You only need to connect the Session to a Space when creating it, and Acontext will automatically learn tasks as SOPs.
## Experience Agent's Tasks
* Determine whether tasks are difficult. Acontext is not a conversational memory solution, but rather collects difficult tasks and summarizes them into SOPs.
* Properly place these SOPs into the Space
* Search for relevant experiences from the Space
### WorkSpace Structure
Agent experience will be stored in a structured workspace, with folders and pages:
```txt Structure Example theme={null}
/
└── github_ops/ (folder)
    ├── GTM_sop (page)
    │   ├── find_trending_repos (sop block)
    │   └── find_contributor_emails (sop block)
    └── basic_ops (page)
        ├── create_repo (sop block)
        └── delete_repo (sop block)
```
### SOP with Conditions
How do we store SOPs?
Acontext automatically learns from context, determines the scenarios where this SOP should be used, and collects complete tool-call patterns:
```json Example theme={null}
{
"use_when": "star a repo on github.com",
"preferences": "use personal account. star but not fork",
"tool_sops": [
{"tool_name": "goto", "action": "goto the user given github repo url"},
{"tool_name": "click", "action": "find login button if any, and start to login first"},
...
]
}
```
For the detailed ideas of Acontext, check this blog: [Context Data Platform for Self-learning AI Agents](https://acontext.io/blog/introducing-acontext-context-data-platform-for-self-learning-ai-agents).
# Wait for User Confirmation
Source: https://docs.acontext.io/learn/advance/wait-user
Acontext can wait for user confirmation before actually learning skills
By default, Acontext will automatically learn skills from completed tasks.
But this might not be what the user wants; some agent apps, like Manus, let users decide whether or not to learn the skills.
Acontext also supports this.
## Enable User Confirmation
You can enable user confirmation by setting `project_enable_user_confirmation_on_new_experiences` to `true` in the project settings.
```yaml config.yaml theme={null}
project_enable_user_confirmation_on_new_experiences: true
```
## Obtain the Unconfirmed Experiences
Once you enable it, Acontext will wait for user confirmation on every experience worth learning before actually learning it as a skill.
You can obtain the unconfirmed experiences by calling the `get_unconfirmed_experiences` method.
```python Python theme={null}
experiences = client.spaces.get_unconfirmed_experiences(space_id=space.id)
for e in experiences.items:
print(f"{e.id}: {e.experience_data}")
```
```typescript TypeScript theme={null}
const experiences = await client.spaces.getUnconfirmedExperiences(spaceId);
for (const e of experiences.items) {
console.log(`${e.id}: ${JSON.stringify(e.experience_data)}`);
}
```
You can render those experiences in UI and let users decide whether to learn the skills or not.
Experience data consists of the following fields:
```json experience_data theme={null}
{
"type": "sop",
"data": {
"use_when": "Implement google authentication",
"preferences": "Use NextAuth;",
"tool_sops": [
{
"tool_name": "ls",
"action": "find the middleware folder",
},
...
]
  }
}
```
## Confirm the Experience
Once the user has decided, you can confirm the experience by calling the `confirm_experience` method.
```python Python theme={null}
# Use this Experience
client.spaces.confirm_experience(space_id=space.id, experience_id=experience.id, save=True)
# Reject this Experience
client.spaces.confirm_experience(space_id=space.id, experience_id=experience.id, save=False)
```
```typescript TypeScript theme={null}
// Use this Experience
await client.spaces.confirmExperience(spaceId, experienceId, { save: true });
// Reject this Experience
await client.spaces.confirmExperience(spaceId, experienceId, { save: false });
```
# Search Skills
Source: https://docs.acontext.io/learn/search-skills
Experience Search is Acontext's advanced search capability for finding relevant skills for your agent. It can use embeddings or an agent to find relevant content from the skill space.
## Quick Start
```python Python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# If you're using self-hosted Acontext:
# client = AcontextClient(
# base_url="http://localhost:8029/api/v1",
# api_key="sk-ac-your-root-api-bearer-token",
# )
client.ping()
# Fast mode - quick semantic search
result = client.spaces.experience_search(
space_id="space-uuid",
query="How to implement authentication?",
limit=10,
mode="fast"
)
# Access results
for block in result.cited_blocks:
print(f"{block.title} (distance: {block.distance})")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// If you're using self-hosted Acontext:
// const client = new AcontextClient({
// baseUrl: "http://localhost:8029/api/v1",
// apiKey: "sk-ac-your-root-api-bearer-token",
// });
// Fast mode - quick semantic search
const result = await client.spaces.experienceSearch('space-uuid', {
query: 'How to implement authentication?',
limit: 10,
mode: 'fast'
});
// Access results
for (const block of result.cited_blocks) {
console.log(`${block.title} (distance: ${block.distance})`);
}
```
## Search Modes
* **Fast**: Quick semantic search using embeddings.
* **Agentic**: Iterative search using the space agent.
### Fast Mode Details
* **Speed**: \~100-500ms response time
* **Output**: Relevant blocks only
* **Best for**: Simple lookups, known information retrieval
* **Use when**: You need to search skills quickly and latency matters.
### Agentic Mode Details
* **Speed**: \~2-10s depending on iterations
* **Output**: Relevant blocks with iterative refinement
* **Best for**: Complex queries, research, analysis
* **Use when**: You want the AI to deeply understand its own past and go through its skills like a human.
```python Python theme={null}
# Agentic mode with advanced parameters
result = client.spaces.experience_search(
space_id="space-uuid",
query="What are the best practices for API security?",
mode="agentic",
semantic_threshold=0.8,
max_iterations=20,
limit=15
)
```
```typescript TypeScript theme={null}
// Agentic mode with advanced parameters
const result = await client.spaces.experienceSearch('space-uuid', {
query: 'What are the best practices for API security?',
mode: 'agentic',
semanticThreshold: 0.8,
maxIterations: 20,
limit: 15
});
```
## Response Example
```json theme={null}
{
"cited_blocks": [
{
"block_id": "block-uuid",
"title": "Authentication Guide",
"type": "page",
"props": {
"use_when": "...",
"preferences": "...",
"tool_sops": [{"action": "...", "tool_name": "..."}]},
"distance": 0.23
}
]
}
```
## Parameters
* `query`: Your search question in natural language
* `mode`: Search mode, `"fast"` or `"agentic"`
* `limit`: Maximum results to return (1-50)
* `semantic_threshold`: Distance threshold for agentic mode (0=identical, 2=opposite)
* `max_iterations`: Maximum iterations for agentic mode (1-100)
## Distance Scores
* **0.0-0.5**: Very similar content
* **0.5-1.0**: Related content
* **1.0+**: Less relevant content
Lower distance scores indicate higher semantic similarity to your query.
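Continuing from the quick-start snippet above, you can use the distance to keep only strong matches before injecting skills into a prompt. The `1.0` cutoff below is an illustrative choice, not an Acontext default:
```python theme={null}
# Keep only blocks that are semantically close to the query.
MAX_DISTANCE = 1.0  # illustrative cutoff

relevant_blocks = [
    block for block in result.cited_blocks if block.distance <= MAX_DISTANCE
]
for block in relevant_blocks:
    print(f"{block.title} (distance: {block.distance:.2f})")
```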
## Using Results in Agent Prompts
Pack search results into prompts for your Agent:
```python Python theme={null}
def build_prompt_with_context(query: str, search_results):
"""Build a prompt with search context for Agent models."""
# Build context from search results
context_parts = []
for block in search_results.cited_blocks:
context_parts.append(f"""
Use when: {block.title}
Skill: {block.props}
""")
context = "\n---\n".join(context_parts)
# Build final prompt
prompt = f"""SKILLS REFERENCES:
{context}
USER REQUEST: {query}
Please Act to complete the request"""
return prompt
# Example usage
result = client.spaces.experience_search(
space_id="space-uuid",
query="How should I handle user authentication?",
mode="fast",
limit=5
)
prompt = build_prompt_with_context(
"How should I handle user authentication?",
result
)
# Send to your AI
# response = openai_client.chat.completions.create(...)
```
```typescript TypeScript theme={null}
function buildPromptWithContext(query: string, searchResults: any): string {
// Build context from search results
const contextParts = searchResults.cited_blocks.map((block: any) => `
Use when: ${block.title}
Skill: ${block.props}
`);
const context = contextParts.join('\n---\n');
// Build final prompt
const prompt = `SKILLS REFERENCES:
${context}
USER REQUEST: ${query}
Please Act to complete the request`;
return prompt;
}
// Example usage
const result = await client.spaces.experienceSearch('space-uuid', {
query: 'How should I handle user authentication?',
mode: 'fast',
limit: 5
});
const prompt = buildPromptWithContext(
'How should I handle user authentication?',
result
);
// Send to your AI model
// const response = await openai.chat.completions.create(...)
```
## Understanding SOP Blocks
SOP (Standard Operating Procedure) blocks are structured skill components that contain actionable procedures and tool usage patterns. When you search for skills, the results include SOP blocks that define specific workflows and tool interactions.
### SOP Block Structure
Each skill block can contain `tool_sops` - an array of standard operating procedures that define:
* **Action**: The specific action or step to perform
* **Tool Name**: The tool or system to use for this action
* **Context**: When and how to apply this procedure
```json sop block example theme={null}
{
"use_when": "star a repo on github.com",
"preferences": "use personal account. star but not fork",
"tool_sops": [
{"tool_name": "goto", "action": "goto the user given github repo url"},
{"tool_name": "click", "action": "find login button if any, and start to login first"},
...
]
}
```
An SOP block contains not only text summaries of the task and preferences, but also the exact tool chain used to achieve it.
We also provide [some util SDKs](/learn/tool) to manage tools, in case you change some tools and need to invalidate some SOPs.
# Self-learn Skills
Source: https://docs.acontext.io/learn/skill-space
Enable your agent to automatically learn skills and SOPs from completed tasks
Acontext can enable your agent to self-learn skills and standard operating procedures (SOPs) from successful task executions. When you attach a session to a **Space**, all successfully completed tasks are automatically analyzed and converted into reusable knowledge blocks that your agent can reference in future sessions.
## How Skill Learning Works
When a session is connected to a Space:
1. **Task Extraction**: As your agent works, Acontext automatically [extracts tasks](/observe/agent_tasks) from the conversation
2. **Task Completion**: When tasks are marked as `success`, they're queued for learning
3. **Automatic Learning**: Acontext analyzes the completed task's execution messages and extracts reusable patterns, approaches, and procedures
4. **Skill Storage**: Learned skills and user preferences are stored in the Space as searchable knowledge blocks
5. **Future Use**: Your agent can retrieve and apply these learned skills
Think of Acontext as giving your AI agent a library that remembers and learns, just like humans do.
For now, only tasks with `success` status are learned into skills. Failed or pending tasks are not processed.
## Quick Start: Enable Skill Learning
This example demonstrates how to create a Space and attach sessions to it so your agent can learn and reuse skills.
```python Python theme={null}
import os
import time
from acontext import AcontextClient
# Initialize client
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# If you're using self-hosted Acontext:
# client = AcontextClient(
# base_url="http://localhost:8029/api/v1",
# api_key="sk-ac-your-root-api-bearer-token",
# )
# Step 1: Create a Space for skill learning
space = client.spaces.create()
print(f"Created Space: {space.id}")
# Step 2: Create a session attached to the space
session = client.sessions.create(space_id=space.id)
print(f"Created Session: {session.id}")
print(f"Session is connected to Space: {session.space_id}")
# Step 3: Have your agent work and complete tasks
# This example shows a conversation where the agent outlines and completes a plan
messages = [
{
"role": "user",
"parts": [
{
"type": "text",
"text": "I need to write a landing page of iPhone 15 pro max"
}
]
},
{
"role": "assistant",
"parts": [
{
"type": "text",
"text": "Sure, my plan is below:\n1. Search for the latest news about iPhone 15 pro max\n2. Init Next.js project for the landing page\n3. Deploy the landing page to the website"
}
]
},
{
"role": "user",
"parts": [
{
"type": "text",
"text": "That sounds good. Let's first collect the message and report to me before any landing page coding."
}
]
},
{
"role": "assistant",
"parts": [
{
"type": "text",
"text": "Sure, I will first collect the message then report to you before any landing page coding."
}
]
},
{
"role": "assistant",
"parts": [
{
"type": "text",
"text": "I need to search the latest news about iPhone 15 pro max first"
}
]
},
{
"role": "assistant",
"parts": [
{
"type": "tool-call",
"text": "search_news",
"meta": {
"name": "search_news",
"arguments": {
"query": "iPhone 15 pro max launching"
}
}
}
]
},
{
"role": "assistant",
"parts": [
{
"type": "tool-call",
"text": "search_news",
"meta": {
"name": "search_news",
"arguments": {
"query": "iPhone 15 pro max price"
}
}
}
]
},
{
"role": "assistant",
"parts": [
{
"type": "tool-call",
"text": "search_news",
"meta": {
"name": "search_news",
"arguments": {
"query": "iPhone 15 pro max features specifications"
}
}
}
]
},
{
"role": "assistant",
"parts": [
{
"type": "text",
"text": "Based on my search results, here's what I found about the iPhone 15 Pro Max:"
}
]
},
{
"role": "assistant",
"parts": [
{
"type": "text",
"text": "## iPhone 15 Pro Max Key Information\n\n**Launch Details:**\n- Released in September 2023\n- Available in Natural Titanium, Blue Titanium, White Titanium, and Black Titanium\n\n**Key Features:**\n- 6.7-inch Super Retina XDR display\n- A17 Pro chip with 6-core GPU\n- Pro camera system with 48MP main camera\n- Action Button replacing the mute switch\n- USB-C connectivity\n- Titanium design\n\n**Pricing:**\n- Starting at $1,199 for 256GB\n- 512GB model at $1,399\n- 1TB model at $1,599\n\nThis information should be perfect for creating an engaging landing page. Should I proceed with initializing the Next.js project?"
}
]
},
{
"role": "user",
"parts": [
{
"type": "text",
"text": "Great research! Yes, please proceed with the Next.js project setup."
}
]
},
{
"role": "assistant",
"parts": [
{
"type": "text",
"text": "I'll now create the Next.js project for the iPhone 15 Pro Max landing page."
}
]
}
]
# Store messages to the session
for msg in messages:
client.sessions.store_message(
session_id=session.id,
blob=msg,
format="acontext"
)
# Wait for task extraction and completion processing
client.sessions.flush(session.id)
# Check extracted tasks
tasks_response = client.sessions.get_tasks(session.id)
print(f"\nExtracted {len(tasks_response.items)} tasks:")
for task in tasks_response.items:
print(f" Task #{task.order}: {task.status}")
print("\n✅ Tasks completed successfully will be learned into the Space as skills!")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
// Initialize client
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// If you're using self-hosted Acontext:
// const client = new AcontextClient({
// baseUrl: "http://localhost:8029/api/v1",
// apiKey: "sk-ac-your-root-api-bearer-token",
// });
async function demonstrateSkillLearning() {
// Step 1: Create a Space for skill learning
const space = await client.spaces.create();
console.log(`Created Space: ${space.id}`);
// Step 2: Create a session attached to the space
const session = await client.sessions.create({
spaceId: space.id
});
console.log(`Created Session: ${session.id}`);
console.log(`Session is connected to Space: ${session.space_id}`);
// Step 3: Have your agent work and complete tasks
// This example shows a conversation where the agent outlines and completes a plan
const messages = [
{
role: 'user',
parts: [
{
type: 'text',
text: 'I need to write a landing page of iPhone 15 pro max'
}
]
},
{
role: 'assistant',
parts: [
{
type: 'text',
text: 'Sure, my plan is below:\n1. Search for the latest news about iPhone 15 pro max\n2. Init Next.js project for the landing page\n3. Deploy the landing page to the website'
}
]
},
{
role: 'user',
parts: [
{
type: 'text',
text: "That sounds good. Let's first collect the message and report to me before any landing page coding."
}
]
},
{
role: 'assistant',
parts: [
{
type: 'text',
text: 'Sure, I will first collect the message then report to you before any landing page coding.'
}
]
},
{
role: 'assistant',
parts: [
{
type: 'text',
text: 'I need to search the latest news about iPhone 15 pro max first'
}
]
},
{
role: 'assistant',
parts: [
{
type: 'tool-call',
text: 'search_news',
meta: {
name: 'search_news',
arguments: {
query: 'iPhone 15 pro max launching'
}
}
}
]
},
{
role: 'assistant',
parts: [
{
type: 'tool-call',
text: 'search_news',
meta: {
name: 'search_news',
arguments: {
query: 'iPhone 15 pro max price'
}
}
}
]
},
{
role: 'assistant',
parts: [
{
type: 'tool-call',
text: 'search_news',
meta: {
name: 'search_news',
arguments: {
query: 'iPhone 15 pro max features specifications'
}
}
}
]
},
{
role: 'assistant',
parts: [
{
type: 'text',
text: "Based on my search results, here's what I found about the iPhone 15 Pro Max:"
}
]
},
{
role: 'assistant',
parts: [
{
type: 'text',
text: '## iPhone 15 Pro Max Key Information\n\n**Launch Details:**\n- Released in September 2023\n- Available in Natural Titanium, Blue Titanium, White Titanium, and Black Titanium\n\n**Key Features:**\n- 6.7-inch Super Retina XDR display\n- A17 Pro chip with 6-core GPU\n- Pro camera system with 48MP main camera\n- Action Button replacing the mute switch\n- USB-C connectivity\n- Titanium design\n\n**Pricing:**\n- Starting at $1,199 for 256GB\n- 512GB model at $1,399\n- 1TB model at $1,599\n\nThis information should be perfect for creating an engaging landing page. Should I proceed with initializing the Next.js project?'
}
]
},
{
role: 'user',
parts: [
{
type: 'text',
text: 'Great research! Yes, please proceed with the Next.js project setup.'
}
]
},
{
role: 'assistant',
parts: [
{
type: 'text',
text: "I'll now create the Next.js project for the iPhone 15 Pro Max landing page."
}
]
}
];
// Store messages to the session
for (const msg of messages) {
await client.sessions.storeMessage(session.id, msg, {
format: 'acontext'
});
}
// Wait for task extraction and completion processing
await client.sessions.flush(session.id);
// Check extracted tasks
const tasksResponse = await client.sessions.getTasks(session.id);
console.log(`\nExtracted ${tasksResponse.items.length} tasks:`);
for (const task of tasksResponse.items) {
console.log(` Task #${task.order}: ${task.status}`);
}
console.log('\n✅ Tasks completed successfully will be learned into the Space as skills!');
}
demonstrateSkillLearning();
```
Once tasks complete successfully, they'll be automatically processed and learned into the Space. This happens asynchronously in the background.
## Connecting Existing Sessions to a Space
If you have an existing session that wasn't created with a Space, you can connect it later:
```python Python theme={null}
# Create a space
space = client.spaces.create()
# Create a session without a space
session = client.sessions.create()
# Later, connect the session to the space
client.sessions.connect_to_space(
session_id=session.id,
space_id=space.id
)
print("Session is now connected to the Space!")
print("Future completed tasks will be learned into the Space")
```
```typescript TypeScript theme={null}
// Create a space
const space = await client.spaces.create();
// Create a session without a space
const session = await client.sessions.create();
// Later, connect the session to the space
await client.sessions.connectToSpace(session.id, {
spaceId: space.id
});
console.log('Session is now connected to the Space!');
console.log('Future completed tasks will be learned into the Space');
```
## Understanding the Learning Pipeline
When a task completes successfully in a session connected to a Space:
```mermaid theme={null}
graph TB
A[Task Completed] --> B[Task Extraction]
B --> C{Space Connected?}
C -->|Yes| D[Queue for Learning]
C -->|No| E[Skip Learning]
D --> F[Extract SOP]
F --> G{Hard Enough?}
G -->|No - Too Simple| H[Skip Learning]
G -->|Yes - Complex| I[Store as Skill Block]
I --> J[Available for Future Sessions]
```
If you ran the above examples, you can find the learned skills in the Space (view them through the dashboard or API).
You will see that the skill only has user preferences about collecting the product news, with no tool steps.
That's because Acontext judges the news-searching task to be too easy; only the user preferences from this task are worth learning.
If you want to learn more about how the self-learning works, please refer to [Experience Agent](/learn/advance/experience-agent).
### What Gets Learned
Acontext analyzes the task execution to extract:
* **Procedures**: Step-by-step approaches the agent took
* **Patterns**: Common problem-solving strategies
* **Context**: When and how to apply specific techniques
* **Preferences**: User requirements and constraints mentioned during the task
### Complexity-Based Filtering
Not all completed tasks result in learned skills. After extracting the SOP (Standard Operating Procedure), Acontext evaluates the complexity:
* **Simple tasks** (e.g., single-step operations, basic queries) are **skipped** to avoid cluttering the skill library
* **Complex tasks** (e.g., multi-step workflows, problem-solving patterns) are **stored** as reusable skills
This intelligent filtering ensures your Space contains only valuable, reusable knowledge that improves agent performance.
Tasks that complete **before** a session is connected to a Space will not be learned. Connect sessions to Spaces early to capture all learning opportunities.
## Use Cases
Connect all your agent sessions to a single Space to build a centralized knowledge base. As your agent solves more problems, it accumulates expertise that improves performance across all sessions.
```python theme={null}
# Create one space for your entire agent system
agent_space = client.spaces.create()
# Connect all agent sessions to this space
session1 = client.sessions.create(space_id=agent_space.id)
session2 = client.sessions.create(space_id=agent_space.id)
session3 = client.sessions.create(space_id=agent_space.id)
# Each successful task across all sessions builds the shared knowledge base
```
Multiple team members' agents can contribute to and benefit from a shared Space, enabling collaborative learning across your organization.
```python theme={null}
# Shared space for the entire team
team_space = client.spaces.create()
# Different team members create sessions connected to the shared space
alice_session = client.sessions.create(space_id=team_space.id)
bob_session = client.sessions.create(space_id=team_space.id)
carol_session = client.sessions.create(space_id=team_space.id)
# All team members benefit from collective learning
```
## Best Practices
Attach sessions to Spaces at creation time, not after tasks start executing, to ensure all completed tasks are learned.
Create separate Spaces for different users to maintain focused, relevant skill libraries.
## Next Steps
* Understand how the Experience Agent works in Acontext.
* Use the learned skills to empower your agent to perform more complex tasks.
# Edit Agent Tools
Source: https://docs.acontext.io/learn/tool
Manage and organize tools that agents have learned and used across your projects
The Tools API allows you to manage and organize the tools that agents have learned and used within your Acontext projects. You can view all available tools and rename them for better organization and consistency.
## List every tool your agent has used before
Retrieve a comprehensive list of all tools that have been used by agents in your project, along with usage statistics.
```python Python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# If you're using self-hosted Acontext:
# client = AcontextClient(
# base_url="http://localhost:8029/api/v1",
# api_key="sk-ac-your-root-api-bearer-token",
# )
# Get all tool names with usage statistics
tools = client.tools.get_tool_name()
for tool in tools:
print(f"{tool.name} (used in {tool.sop_count} SOPs)")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// If you're using self-hosted Acontext:
// const client = new AcontextClient({
// baseUrl: "http://localhost:8029/api/v1",
// apiKey: "sk-ac-your-root-api-bearer-token",
// });
// Get all tool names with usage statistics
const tools = await client.tools.getToolName();
for (const tool of tools) {
console.log(`${tool.name} (used in ${tool.sop_count} SOPs)`);
}
```
The `sop_count` field indicates how many Standard Operating Procedures (SOPs) reference each tool, helping you understand tool usage patterns across your project.
## Rename tools at once
Rename multiple tools simultaneously to maintain consistency and improve organization across your project. This is particularly useful when standardizing tool names or updating naming conventions.
```python Python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# Rename multiple tools at once
result = client.tools.rename_tool_name(
rename=[
{"old_name": "calculate", "new_name": "calculate_math"},
{"old_name": "search", "new_name": "search_web"},
{"old_name": "file_reader", "new_name": "read_file"}
]
)
if result.status == 0:
print("Tools renamed successfully")
else:
print(f"Error: {result.errmsg}")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// Rename multiple tools at once
const result = await client.tools.renameToolName({
rename: [
{ oldName: 'calculate', newName: 'calculate_math' },
{ oldName: 'search', newName: 'search_web' },
{ oldName: 'file_reader', newName: 'read_file' }
]
});
if (result.status === 0) {
console.log('Tools renamed successfully');
} else {
console.log(`Error: ${result.errmsg}`);
}
```
Renaming tools will update all references across your project's SOPs. Make sure the new names follow your project's naming conventions and don't conflict with existing tools.
## Best Practices
**Consistent Naming**: Use descriptive, consistent naming patterns for your tools. Consider prefixes like `web_`, `file_`, or `calc_` to group related functionality.
Tool names are case-sensitive and should follow standard identifier conventions (alphanumeric characters and underscores).
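As a quick sanity check before a batch rename, you can validate candidate names against that convention. This is a local helper sketch, not part of the Acontext SDK:
```python theme={null}
import re

# Matches the convention described above: alphanumeric characters and
# underscores, not starting with a digit.
TOOL_NAME_PATTERN = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def is_valid_tool_name(name: str) -> bool:
    return bool(TOOL_NAME_PATTERN.match(name))

print(is_valid_tool_name("search_web"))  # True
print(is_valid_tool_name("search web"))  # False: spaces are not allowed
```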
# Quickstart for AI
Source: https://docs.acontext.io/llm_quick
Get started with Acontext API
1. Copy all content (\~10k tokens) from the link: [https://docs.acontext.io/llms-full.txt](https://docs.acontext.io/llms-full.txt)
2. Paste the raw content into your prompt, or just paste this link.
3. Prompt your coding agent (Cursor, Claude, etc.) "Build a backend for agent application with Acontext"
# Run Acontext Locally
Source: https://docs.acontext.io/local
Use `acontext-cli` to run Acontext locally.
## Install `acontext-cli`
```bash theme={null}
curl -fsSL https://install.acontext.io | sh
```
## Start Acontext Server Locally
Acontext requires at least an OpenAI API Key to start.
```bash .env theme={null}
LLM_API_KEY="YOUR_OPENAI_API_KEY"
```
Place your `.env` file and run `acontext docker up` in the same directory:
```bash server theme={null}
mkdir acontext_server && cd acontext_server
acontext docker up
```
`acontext docker up` will create or reuse the existing `.env` and `config.yaml` for Acontext, and create a `db` folder to persist data.
Once it's done, you can access the following endpoints:
* Acontext Base URL: [http://localhost:8029/api/v1](http://localhost:8029/api/v1)
* Acontext Dashboard: [http://localhost:3000/](http://localhost:3000/)
To use different providers, check [Core Settings](/settings/core) for more details.
## Next Steps
* View your agent's tasks, conversations, and learned skills in one place
* Save agent conversations with context
* Let your agent learn from experience
* Configure Acontext to your needs
# Observe Agent Tasks
Source: https://docs.acontext.io/observe/agent_tasks
Automatically extract and monitor tasks from agent conversations to understand agent planning and execution
Acontext automatically extracts tasks from your agent's conversation messages. When an agent outlines a plan or breaks down work into steps, Acontext detects and tracks these tasks in the background, giving you visibility into what your agent is planning and executing.
## How Task Extraction Works
As your agent converses with users, Acontext analyzes the conversation context to identify planned tasks. For example, when an agent responds with "My plan is: 1. Search for data, 2. Create a project, 3. Deploy", Acontext extracts these as individual trackable tasks.
**Key capabilities:**
* **Automatic extraction**: Tasks are detected from conversation context without manual tracking
* **Status monitoring**: Track whether tasks are `pending`, `running`, `success`, or `failed`
* **Execution insights**: See what your agent planned versus what it actually completed
### Task Extraction Has a Delay
Task extraction happens asynchronously with a small delay to optimize costs and performance:
**Batch Processing for Cost Efficiency:**
* Acontext batches multiple messages together before analyzing them for tasks
* This reduces the number of LLM calls needed for extraction, saving costs
* The system waits a few seconds to collect messages before starting extraction
You can use the `flush` method to block and wait for all tasks to be extracted.
In production, you don't need to call this method; Acontext will automatically flush the buffer when it is full or idle. To understand the buffer mechanism, please refer to [Session Buffer Mechanism](/observe/buffer).
```python Python theme={null}
client.sessions.flush(session.id)
```
```typescript TypeScript theme={null}
await client.sessions.flush(session.id);
```
## Quick Start: Test Task Extraction
This example demonstrates how to verify that Acontext correctly extracts tasks from your agent's messages. You'll store a conversation where the agent outlines a plan, then retrieve the extracted tasks to confirm they were detected.
```python Python theme={null}
import os
import time
from acontext import AcontextClient
# Initialize client
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# If you're using self-hosted Acontext:
# client = AcontextClient(
# base_url="http://localhost:8029/api/v1",
# api_key="sk-ac-your-root-api-bearer-token",
# )
# Create a session
session = client.sessions.create()
# Conversation messages
messages = [
{
"role": "user",
"content": "I need to write a landing page of iPhone 15 pro max"
},
{
"role": "assistant",
"content": "Sure, my plan is below:\n1. Search for the latest news about iPhone 15 pro max\n2. Init Next.js project for the landing page\n3. Deploy the landing page to the website"
},
{
"role": "user",
"content": "That sounds good. Let's first collect the message and report to me before any landing page coding."
},
{
"role": "assistant",
"content": "Sure, I will first collect the message then report to you before any landing page coding."
}
]
# Store messages in a loop
for msg in messages:
client.sessions.store_message(
session_id=session.id,
blob=msg,
format="openai"
)
# Wait for task extraction to complete
client.sessions.flush(session.id)
# Display extracted tasks
tasks_response = client.sessions.get_tasks(session.id)
print(tasks_response)
for task in tasks_response.items:
print(f"\nTask #{task.order}:")
print(f" ID: {task.id}")
print(f" Title: {task.data.task_description}")
print(f" Status: {task.status}")
# Show progress updates if available
if task.data.progresses:
print(f" Progress updates: {len(task.data.progresses)}")
for progress in task.data.progresses:
print(f" - {progress}")
# Show user preferences if available
if task.data.user_preferences:
print(" User preferences:")
for pref in task.data.user_preferences:
print(f" - {pref}")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
// Initialize client
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// If you're using self-hosted Acontext:
// const client = new AcontextClient({
// baseUrl: "http://localhost:8029/api/v1",
// apiKey: "sk-ac-your-root-api-bearer-token",
// });
async function demonstrateTasks() {
  // Create a session
  const session = await client.sessions.create();
// Conversation messages
const messages = [
{
role: 'user',
content: 'I need to write a landing page of iPhone 15 pro max'
},
{
role: 'assistant',
content: 'Sure, my plan is below:\n1. Search for the latest news about iPhone 15 pro max\n2. Init Next.js project for the landing page\n3. Deploy the landing page to the website'
},
{
role: 'user',
content: "That sounds good. Let's first collect the message and report to me before any landing page coding."
},
{
role: 'assistant',
content: 'Sure, I will first collect the message then report to you before any landing page coding.'
}
];
// Store messages in a loop
for (const msg of messages) {
await client.sessions.storeMessage(session.id, {
blob: msg,
format: 'openai'
});
}
// Wait for task extraction to complete
await client.sessions.flush(session.id);
// Display extracted tasks
const tasksResponse = await client.sessions.getTasks(session.id);
console.log(tasksResponse);
for (const task of tasksResponse.items) {
console.log(`\nTask #${task.order}:`);
console.log(` ID: ${task.id}`);
console.log(` Title: ${task.data.task_description}`);
console.log(` Status: ${task.status}`);
// Show progress updates if available
if (task.data.progresses) {
console.log(` Progress updates: ${task.data.progresses.length}`);
task.data.progresses.forEach(progress => {
console.log(` - ${progress}`);
});
}
// Show user preferences if available
if (task.data.user_preferences) {
console.log(` User preferences:`);
task.data.user_preferences.forEach(pref => {
console.log(` - ${pref}`);
});
}
}
}
demonstrateTasks();
```
After running this code, you'll see the tasks that Acontext automatically extracted from the agent's planned steps, confirming the extraction is working correctly.
## Understanding Task Data
Each extracted task contains a `data` field with structured information captured from the conversation. The `data` field is a `TaskData` object with the following schema:
```python Python theme={null}
from acontext import TaskData
# TaskData structure
task_data = TaskData(
task_description="Search for the latest news about iPhone 15 pro max",
progresses=[
"I searched for iPhone 15 Pro Max specifications and found the latest features",
"I've initialized the Next.js project with the latest template"
],
user_preferences=[
"Focus on the camera capabilities and battery life",
"Make sure the landing page is mobile-responsive"
]
)
```
```typescript TypeScript theme={null}
import { TaskData } from '@acontext/acontext';
// TaskData structure
const taskData: TaskData = {
task_description: "Search for the latest news about iPhone 15 pro max",
progresses: [
"I searched for iPhone 15 Pro Max specifications and found the latest features",
"I've initialized the Next.js project with the latest template"
],
user_preferences: [
"Focus on the camera capabilities and battery life",
"Make sure the landing page is mobile-responsive"
]
};
```
### TaskData Fields
* **`task_description`** (string, required): A clear description of the task
* **`progresses`** (array of strings, optional): Agent's narrative updates as it works through the task
* **`user_preferences`** (array of strings, optional): Specific requirements or preferences mentioned by the user
### Progress Tracking
The `progresses` array captures the agent's narrative updates as it works through tasks. Each entry describes what the agent accomplished, written in first-person perspective.
```python Python theme={null}
# Access progress updates using structured TaskData
for task in tasks_response.items:
if task.data.progresses:
print(f"Task {task.order} progress:")
for progress in task.data.progresses:
print(f" - {progress}")
```
```typescript TypeScript theme={null}
// Access progress updates using structured TaskData
for (const task of tasksResponse.items) {
if (task.data.progresses) {
console.log(`Task ${task.order} progress:`);
task.data.progresses.forEach(progress => {
console.log(` - ${progress}`);
});
}
}
```
### User Preferences
The `user_preferences` array stores specific requirements or preferences the user mentioned for each task during the conversation.
```python Python theme={null}
# Check user preferences for a task using structured TaskData
for task in tasks_response.items:
if task.data.user_preferences:
print(f"Task {task.order} user preferences:")
for pref in task.data.user_preferences:
print(f" - {pref}")
```
```typescript TypeScript theme={null}
// Check user preferences for a task using structured TaskData
for (const task of tasksResponse.items) {
if (task.data.user_preferences) {
console.log(`Task ${task.order} user preferences:`);
task.data.user_preferences.forEach(pref => {
console.log(` - ${pref}`);
});
}
}
```
Progress and preferences are appended to tasks as the conversation continues. Early in a conversation, these arrays may be empty or contain only initial entries.
## View it in Dashboard
You can view the tasks in the UI by clicking on the "Tasks" tab in the session page.
## Use Cases for Agent Developers
Verify that Acontext correctly extracts tasks from your agent's conversation patterns. This is essential when developing or updating agent prompts.
```python theme={null}
# Get all extracted tasks
response = client.sessions.get_tasks(session_id, time_desc=False)
# Verify extraction worked
print(f"Expected 3 tasks, extracted {len(response.items)}")
for task in response.items:
print(f"Task {task.order}: {task.data}")
```
When your agent isn't completing work as expected, check extracted tasks to see if the agent is planning steps correctly or if it's getting stuck at a specific task.
```python theme={null}
response = client.sessions.get_tasks(session_id)
# Identify where the agent got stuck
for task in response.items:
if task.status == "pending":
print(f"Agent hasn't started: Task {task.order}")
elif task.status == "running":
print(f"Agent stuck on: Task {task.order}")
# Check last progress update
if task.data.progresses:
print(f" Last progress: {task.data.progresses[-1]}")
elif task.status == "failed":
print(f"Agent failed at: Task {task.order}")
# Check what was done before failure
if task.data.progresses:
print(f" Completed steps: {len(task.data.progresses)}")
```
Collect task data across multiple sessions to understand how your agent breaks down different types of requests. Use this for optimizing prompts or identifying common failure points.
```typescript theme={null}
const response = await client.sessions.getTasks(sessionId);
// Analyze planning vs execution
const planned = response.items.length;
const completed = response.items.filter(t => t.status === 'success').length;
const completionRate = (completed / planned) * 100;
console.log(`Agent completed ${completionRate.toFixed(0)}% of planned tasks`);
```
Generate reports on what agents are planning and executing to share with stakeholders or for compliance purposes.
```python theme={null}
# Get task history with timestamps
response = client.sessions.get_tasks(session_id, time_desc=True)
# Create activity report
print("=== Agent Activity Report ===")
for task in response.items:
print(f"\n{task.created_at} | Task {task.order} | {task.status}")
print(f" Description: {task.data.task_description}")
# Show progress summary
if task.data.progresses:
print(f" Progress entries: {len(task.data.progresses)}")
print(f" Latest: {task.data.progresses[-1]}")
# Show user requirements
if task.data.user_preferences:
print(f" User requirements: {', '.join(task.data.user_preferences)}")
```
## Best Practices
After storing messages, poll the tasks endpoint until tasks appear instead of relying on fixed delays, as in the sketch below.
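A minimal polling sketch (the 30-second timeout and 1-second interval are illustrative values):
```python theme={null}
import time

# Poll until tasks appear instead of sleeping for a fixed duration
deadline = time.time() + 30  # illustrative timeout
tasks = []
while time.time() < deadline:
    response = client.sessions.get_tasks(session.id)
    if response.items:
        tasks = response.items
        break
    time.sleep(1)  # illustrative polling interval
print(f"Extracted {len(tasks)} task(s)")
```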
When developing agents, test different conversation patterns to ensure Acontext reliably extracts the tasks you expect.
Regularly check for tasks stuck in `pending` or `running` status to identify where your agent needs improvement.
Track the ratio of `success` to `failed` tasks across sessions to measure and improve agent reliability.
## Next Steps
View Context, Artifacts and Tasks in one place.
Learn how to enable your agent to self-learn skills and SOPs.
# Session Buffer Mechanism
Source: https://docs.acontext.io/observe/buffer
Acontext will batch process the messages in the session to optimize costs.
## Wait until it's full
Acontext will start to process the messages once the buffer is full (*i.e.* unprocessed message turns >= `PROJECT_SESSION_MESSAGE_BUFFER_MAX_TURNS`).
So if your `PROJECT_SESSION_MESSAGE_BUFFER_MAX_TURNS` is 3:
```python Python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# If you're using self-hosted Acontext:
# client = AcontextClient(
# base_url="http://localhost:8029/api/v1",
# api_key="sk-ac-your-root-api-bearer-token",
# )
# this storing won't be processed immediately
client.sessions.store_message(
session_id="session-uuid",
blob=message1,
format="openai"
)
# this storing won't be processed immediately
client.sessions.store_message(
session_id="session-uuid",
blob=message2,
format="openai"
)
# will trigger the processing immediately
client.sessions.store_message(
session_id="session-uuid",
blob=message3,
format="openai"
)
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// this storing won't be processed immediately
await client.sessions.storeMessage('session-uuid', {
blob: message1,
format: 'openai'
});
// this storing won't be processed immediately
await client.sessions.storeMessage('session-uuid', {
blob: message2,
format: 'openai'
});
// will trigger the processing immediately
await client.sessions.storeMessage('session-uuid', {
blob: message3,
format: 'openai'
});
```
## Wait until it's IDLE
Acontext will start to process the messages once the buffer has been IDLE for a while (*i.e.* no new messages for `PROJECT_SESSION_MESSAGE_BUFFER_TTL_SECONDS` seconds).
So if your `PROJECT_SESSION_MESSAGE_BUFFER_TTL_SECONDS` is 3:
```python Python theme={null}
import os
import time
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# this storing won't be processed immediately
client.sessions.store_message(
session_id="session-uuid",
blob=message1,
format="openai"
)
time.sleep(3)
# Acontext will start to process the messages in background.
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// this storing won't be processed immediately
await client.sessions.storeMessage('session-uuid', {
blob: message1,
format: 'openai'
});
await new Promise(resolve => setTimeout(resolve, 3000));
// Acontext will start to process the messages in background.
```
## Force Buffer Processing
You can force the buffer processing by calling the `flush` method:
```python Python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
client.sessions.flush(session_id="session-uuid")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
await client.sessions.flush('session-uuid');
```
## Monitor Message Processing Status
You can check the observing status of messages in your session to understand how many are observed, in process, or pending:
```python Python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# Get message observing status for a session
status = client.sessions.messages_observing_status(session_id="session-uuid")
print(f"Observed: {status.observed}")
print(f"In Process: {status.in_process}")
print(f"Pending: {status.pending}")
print(f"Last Updated: {status.updated_at}")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// Get message observing status for a session
const status = await client.sessions.messagesObservingStatus('session-uuid');
console.log(`Observed: ${status.observed}`);
console.log(`In Process: ${status.in_process}`);
console.log(`Pending: ${status.pending}`);
console.log(`Last Updated: ${status.updated_at}`);
```
### Status Meanings
* **Observed**: Messages that have been fully processed and observed by the system. These messages are complete and available for analysis.
* **In Process**: Messages currently being processed by the system. These are actively being analyzed or transformed.
* **Pending**: Messages waiting in the buffer to be processed. These will be processed once the buffer conditions are met (full buffer or TTL expired).
Use `messages_observing_status()` to monitor buffer health and ensure messages are being processed as expected. A high number of pending messages may indicate you need to adjust buffer settings or call `flush()` manually.
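For example, a minimal sketch that flushes manually when the pending backlog grows (the threshold of 10 is an arbitrary example value):
```python theme={null}
# Flush manually if too many messages are waiting in the buffer
# (the threshold of 10 is an arbitrary example value)
status = client.sessions.messages_observing_status(session_id="session-uuid")
if status.pending > 10:
    client.sessions.flush(session_id="session-uuid")
```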
## Customization
Customize the behavior of the buffer.
# Dashboard
Source: https://docs.acontext.io/observe/dashboard
View Context, Artifacts, Tasks and Skills in one place
The Acontext dashboard provides a unified interface for monitoring and analyzing your AI agent's behavior. Access real-time insights into tasks, messages, sessions, and skills to understand how your agent processes information and executes operations.
## Overview
The dashboard gives you complete visibility into your agent's operations through six specialized views. Each view is designed to help you monitor different aspects of your agent's performance and debug issues quickly.
Use the dashboard filters to narrow down specific time ranges or search for particular operations when troubleshooting issues.
## Metrics
Gain high-level insights into your agent's overall performance through aggregated metrics and visualizations. The BI dashboard helps you identify trends, measure success, and make data-driven decisions about your agent's configuration.
The BI dashboard includes:
* **Performance trends**: Visualize task completion rates and execution times over time
* **Success metrics**: Track overall agent success rates and error patterns
* **Resource utilization**: Monitor system resource consumption
* **Custom analytics**: Create and save custom views for specific metrics
Export BI dashboard data to integrate with your existing analytics tools or create custom reports for stakeholders.
## Traces Viewer
Monitor distributed traces across your entire system using OpenTelemetry integration.
The traces viewer provides detailed visibility into request flows, service interactions, and performance bottlenecks across acontext-api and acontext-core services.
The traces viewer includes:
* **Time range filtering**: Filter traces by time ranges (15 minutes, 1 hour, 6 hours, 24 hours, or 7 days)
* **Auto-refresh**: Automatically refreshes trace data every 30 seconds to keep information up-to-date
* **Hierarchical span visualization**: Expand traces to view nested spans showing the complete request flow
* **Service identification**: Color-coded spans distinguish between acontext-api (teal) and acontext-core (blue) services
* **HTTP method badges**: Quickly identify request types with color-coded HTTP method indicators
* **Duration visualization**: Visual timeline bars show relative execution times for each span
* **Jaeger integration**: Click the external link icon to view detailed trace information in Jaeger UI
* **Pagination**: Load more traces to explore historical data beyond the initial results
Use the trace ID to correlate issues across logs and metrics. Click the trace ID to copy it to your clipboard for easy reference.
Traces are automatically collected when OpenTelemetry is enabled in your Acontext deployment. The traces viewer integrates with Jaeger for trace storage and visualization.
## Message Viewer
Examine all messages exchanged between your agent and external systems.
The message viewer provides detailed inspection of message content, enabling you to debug communication issues and verify data flow.
## Artifact Viewer
Examine all artifacts in a tree view.
## Task Viewer
Track user sessions and their associated tasks to understand the complete interaction flow.
This view correlates sessions with individual tasks, helping you analyze how your agent handles multi-step operations.
Monitor all tasks executed by your agent in real-time.
The task viewer displays task status, progresses and user preferences to help you identify bottlenecks and optimize your agent's workflow.
## Skill Viewer
Monitor your agent's available skills and their usage patterns.
The skill viewer helps you understand which capabilities your agent has self-learned.
## Next Steps
Learn how to obtain agent tasks in detail, from SDK.
Learn how to enable your agent to self-learn skills and SOPs.
# Disable Task Tracking
Source: https://docs.acontext.io/observe/disable_tasks
Learn how to disable automatic task tracking for specific sessions
## Overview
Task tracking may not always be desirable. For example, when you're launching a new sub-agent with simple tasks, or running test conversations, the session may not need to track tasks.
By default, all sessions have task tracking **enabled**.
When enabled, every message sent to a session is monitored for agent planning and user feedback.
## When to Disable Task Tracking
Consider disabling task tracking in these scenarios:
* **Testing and Development**: When you're testing your agent and don't want to clutter your task logs
* **Simple Conversations**: For basic Q\&A sessions that don't require task analysis
* **Sub-agents**: When running lightweight sub-agents for simple operations
## How to Disable Task Tracking
You can disable task tracking by setting the `disable_task_tracking` parameter to `true` when creating a session.
```python Python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# If you're using self-hosted Acontext:
# client = AcontextClient(
# base_url="http://localhost:8029/api/v1",
# api_key="sk-ac-your-root-api-bearer-token",
# )
# Create a session with task tracking disabled
session = client.sessions.create(
disable_task_tracking=True
)
print(f"Session ID: {session.id}")
print(f"Task tracking disabled: {session.disable_task_tracking}")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// If you're using self-hosted Acontext:
// const client = new AcontextClient({
// baseUrl: "http://localhost:8029/api/v1",
// apiKey: "sk-ac-your-root-api-bearer-token",
// });
// Create a session with task tracking disabled
const session = await client.sessions.create({
disableTaskTracking: true
});
console.log(`Session ID: ${session.id}`);
console.log(`Task tracking disabled: ${session.disable_task_tracking}`);
```
## What Happens When Task Tracking is Disabled
When `disable_task_tracking` is set to `true`:
* ✅ **Messages are saved**: Messages sent to the session are still saved to the database normally
* ❌ **No automatic tasks**: No automatic task extraction.
* ❌ **No task analysis**: The session won't appear in task-based analytics
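As a quick check, you can store a message to such a session and confirm no tasks come back (a minimal sketch building on the session created above; the message content is illustrative):
```python theme={null}
# Messages are still saved normally...
client.sessions.store_message(
    session_id=session.id,
    blob={"role": "user", "content": "Plan a launch checklist for me"},
    format="openai"
)
# ...but no tasks should be extracted for this session
tasks = client.sessions.get_tasks(session.id)
print(f"Tasks extracted: {len(tasks.items)}")  # expected: 0
```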
## Best Practices
1. **Use selectively**: Only disable task tracking when you specifically don't need it
2. **Document your decision**: Make it clear in your code why task tracking is disabled
3. **Testing environments**: Consider disabling task tracking in test/development environments
## Related
* [Understanding Tasks](/observe/agent_tasks)
* [Task Analytics](/observe/dashboard)
* [Session Management](/learn/skill-space)
# Distributed Tracing
Source: https://docs.acontext.io/observe/traces
Monitor request flows across services with OpenTelemetry integration for performance debugging and system observability
Acontext now includes comprehensive distributed tracing support through OpenTelemetry integration. This enables you to track requests as they flow through your entire system, from API endpoints through core services, database operations, and external service calls.
## Overview
Distributed tracing provides end-to-end visibility into how requests are processed across multiple services. When a request comes in, Acontext automatically creates a trace that follows the request through:
* **acontext-api**: HTTP API layer (Go service)
* **acontext-core**: Core business logic (Python service)
* **Database operations**: SQL queries and transactions
* **Cache operations**: Redis interactions
* **Storage operations**: S3 blob storage
* **Message queue**: RabbitMQ message processing
* **LLM operations**: Embedding and completion calls
Traces are automatically collected when OpenTelemetry is enabled in your deployment. The system uses Jaeger as the trace backend for storage and visualization.
## How It Works
Acontext uses OpenTelemetry to instrument both the API and Core services:
### Automatic Instrumentation
The following operations are automatically traced:
* **HTTP requests**: All API endpoints are instrumented with request/response details
* **Database queries**: SQL operations are traced with query details
* **Cache operations**: Redis get/set operations
* **Storage operations**: S3 upload/download operations
* **Message processing**: Async message queue operations
* **LLM calls**: Embedding and completion API calls
### Cross-Service Tracing
When a request flows from `acontext-api` to `acontext-core`, the trace context is automatically propagated using OpenTelemetry's trace context headers. This creates a unified trace showing the complete request flow across both services.
## Viewing Traces
### Dashboard Traces Viewer
Access the traces viewer from the dashboard to see all traces in your system:
* **Time range filtering**: Filter traces by time ranges (15 minutes, 1 hour, 6 hours, 24 hours, or 7 days)
* **Auto-refresh**: Automatically refreshes every 30 seconds
* **Hierarchical visualization**: Expand traces to view nested spans showing the complete request flow
* **Service identification**: Color-coded spans distinguish between services (acontext-api in teal, acontext-core in blue)
* **HTTP method badges**: Quickly identify request types
* **Duration visualization**: Visual timeline bars show relative execution times
* **Trace ID**: Copy trace IDs to correlate with logs and metrics
Click the external link icon next to a trace ID to open the detailed trace view in Jaeger UI for advanced analysis.
### Jaeger UI
For advanced trace analysis, you can access Jaeger UI directly. The traces viewer provides a link to open each trace in Jaeger, where you can:
* View detailed span attributes and tags
* Analyze trace dependencies and service maps
* Filter and search traces by various criteria
* Compare trace performance over time
## Configuration
Tracing is configured through environment variables. The following settings control tracing behavior:
### Core Service (Python)
```bash theme={null}
# Enable/disable tracing
TELEMETRY_ENABLED=true
# OTLP endpoint (Jaeger collector)
TELEMETRY_OTLP_ENDPOINT=http://localhost:4317
# Sampling ratio (0.0-1.0, default 1.0 = 100% sampling)
TELEMETRY_SAMPLE_RATIO=1.0
# Service name for tracing
TELEMETRY_SERVICE_NAME=acontext-core
```
### API Service (Go)
```yaml theme={null}
telemetry:
enabled: true
otlp_endpoint: "localhost:4317"
sample_ratio: 1.0
```
In production environments, consider using a sampling ratio less than 1.0 (e.g., 0.1 for 10% sampling) to reduce storage costs and overhead while still capturing representative traces.
## Understanding Traces
### Trace Structure
Each trace consists of:
* **Root span**: The initial request entry point (usually an HTTP endpoint)
* **Child spans**: Operations performed during request processing
* **Nested spans**: Operations that are part of larger operations
### Span Information
Each span contains:
* **Operation name**: The operation being performed (e.g., `GET /api/v1/session/:session_id/get_learning_status`)
* **Service name**: Which service performed the operation (`acontext-api` or `acontext-core`)
* **Duration**: How long the operation took
* **Tags**: Additional metadata (HTTP method, status codes, error information)
* **Timestamps**: When the operation started and ended
### Service Colors
In the traces viewer, spans are color-coded by service:
* **Teal**: `acontext-api` operations
* **Blue**: `acontext-core` operations
* **Gray**: Other services or unknown operations
## Use Cases
Identify slow operations and bottlenecks in your system by analyzing trace durations. Expand traces to see which specific operation is taking the most time.
```python theme={null}
# Traces automatically show up in the dashboard
# No code changes needed - just enable tracing in your configuration
```
1. Open the traces viewer in the dashboard
2. Filter by time range to focus on recent requests
3. Look for traces with long durations
4. Expand the trace to see which span is slow
5. Check the operation name and service to identify the bottleneck
When an error occurs, use the trace ID to correlate logs and understand the full request flow that led to the error.
1. Find the error in your logs and note the trace ID
2. Search for the trace ID in the traces viewer
3. Expand the trace to see the complete request flow
4. Identify which service and operation failed
5. Check span tags for error details
Understand how your services interact by analyzing trace flows. See which services call which other services and how frequently.
1. View traces in Jaeger UI for advanced analysis
2. Use Jaeger's service map view to visualize dependencies
3. Analyze trace patterns to understand service communication
Compare trace durations before and after optimizations to measure improvements.
1. Note trace durations for specific operations before optimization
2. Make your optimizations
3. Compare new trace durations to verify improvements
4. Use trace data to identify the next optimization target
## Best Practices
Configure a sampling ratio (e.g., 0.1 for 10%) to reduce storage costs while maintaining observability.
Use trace IDs from traces to find related log entries and get complete context for debugging.
Watch trace collection rates to ensure your sampling ratio is appropriate for your traffic volume.
Configure alerts based on trace durations to catch performance regressions early.
## Next Steps
View traces alongside other observability data in the unified dashboard.
Configure tracing settings and sampling ratios for your deployment.
# Quickstart
Source: https://docs.acontext.io/quick
Install Acontext SDK and run a demo script
## Start Acontext Server
### Use Hosted Acontext
Go to [Acontext Dashboard](https://dash.acontext.io) and sign up for a free account.
The onboarding process will guide you to get an API key.
### Use Self-hosted Acontext
Refer to [this doc](/local) to start acontext server in 2 commands.
It will launch Acontext API and Dashboard on your local machine:
* Acontext API Base URL: [http://localhost:8029/api/v1](http://localhost:8029/api/v1)
* Acontext Dashboard: [http://localhost:3000/](http://localhost:3000/)
The default API key is `sk-ac-your-root-api-bearer-token`.
## Install SDK
Acontext provides official SDKs for Python and TypeScript/JavaScript, making it easy to integrate Acontext into your applications. The SDKs handle authentication, request formatting, and provide type-safe interfaces to all Acontext features.
### Requirements
* Python 3.10 or newer
* pip (Python package installer)
### Install via pip
```bash theme={null}
pip install acontext
```
### Requirements
* Node.js 16.x or newer
* npm, yarn, or pnpm package manager
```bash theme={null}
npm install @acontext/acontext
```
```bash theme={null}
yarn add @acontext/acontext
```
```bash theme={null}
pnpm add @acontext/acontext
```
## Quick Start
After installation, verify your setup with a quick connection test:
```python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# If you're using self-hosted Acontext:
# client = AcontextClient(
# base_url="http://localhost:8029/api/v1",
# api_key="sk-ac-your-root-api-bearer-token",
# )
print(client.ping())
session = client.sessions.create()
client.sessions.store_message(
session_id=session.id,
blob={
"role": "assistant",
"content": """Here is my plan:
1. Use Next.js for the frontend
2. Use Supabase for the database
3. deploy to Cloudflare Pages
""",
},
)
client.sessions.store_message(
session_id=session.id,
blob={
"role": "user",
"content": "Confirm, go ahead. Use tailwind for frontend styling.",
},
)
messages = client.sessions.get_messages(session_id=session.id)
print(messages.items)
```
```typescript theme={null}
import { AcontextClient } from "@acontext/acontext";
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// If you're using self-hosted Acontext:
// const client = new AcontextClient({
// baseUrl: "http://localhost:8029/api/v1",
// apiKey: "sk-ac-your-root-api-bearer-token",
// });
async function main() {
console.log(await client.ping());
const session = await client.sessions.create();
await client.sessions.storeMessage(session.id, {
role: "assistant",
content: `Here is my plan:
1. Use Next.js for the frontend
2. Use Supabase for the database
3. deploy to Cloudflare Pages
`,
});
await client.sessions.storeMessage(session.id, {
role: "user",
content: "Confirm, go ahead. Use tailwind for frontend styling.",
});
const messages = await client.sessions.getMessages(session.id);
console.log(messages.items);
}
main();
```
If you see the success message, you're ready to start using Acontext!
## Troubleshooting
* Verify your Acontext server is running
* Check that the `base_url` is correct
* Ensure no firewall is blocking the connection
* For local development, confirm you're using `http://localhost:8029/api/v1`
* Verify your API key is correct
* Check that the API key has the necessary permissions
* Ensure you're passing the API key in the correct format
* Confirm the package is installed: `pip list | grep acontext` (Python) or `npm list @acontext/acontext` (TypeScript)
* Try reinstalling the package
* Check your Python version is 3.10+ or Node.js version is 16+
* Ensure you're using TypeScript 4.5 or newer
* Run `npm install` to ensure all type definitions are installed
* Check that your `tsconfig.json` has proper settings for module resolution
## Next Steps
Create and manage sessions for organizing your conversations
Let Acontext do context engineering for you.
Build a compact context for agents in one API call.
Store and manage files and artifacts in Acontext
Explore the complete API documentation
# Filesystem Tools
Source: https://docs.acontext.io/sdk/disk_tools
Enable LLMs to autonomously manage files on Acontext disks using function calling
Acontext provides pre-built filesystem tools that allow LLMs to read, write, and manage files on disks through function calling. You can integrate these tools with OpenAI or Anthropic APIs to create agents that persist data autonomously.
## Available Tools
The SDK includes five disk operation tools:
* **`write_file`** - Create or overwrite text files
* **`read_file`** - Read file contents with optional line offset and limit
* **`replace_string`** - Find and replace text in files
* **`list_artifacts`** - List files and directories in a path
* **`download_file`** - Get a public URL to download a file
These tools handle path normalization automatically and support nested directory structures like `/notes/`, `/documents/2024/`, etc.
## Building an Agent with Filesystem
You can build an agentic loop where the LLM autonomously calls disk tools to complete file-related tasks. Here's a complete example:
```python Python theme={null}
import json
import os
from acontext import AcontextClient
from acontext.agent.disk import DISK_TOOLS
from openai import OpenAI
# Initialize clients
acontext_client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# If you're using self-hosted Acontext:
# acontext_client = AcontextClient(
# base_url="http://localhost:8029/api/v1",
# api_key="sk-ac-your-root-api-bearer-token",
# )
openai_client = OpenAI()
# Create a disk and tool context
disk = acontext_client.disks.create()
ctx = DISK_TOOLS.format_context(acontext_client, disk.id)
# Get tool schemas for OpenAI
tools = DISK_TOOLS.to_openai_tool_schema()
print(tools)
# Simple agentic loop
messages = [
{
"role": "user",
"content": "Create a todo.md file with 3 tasks. Then give me the public download URL",
}
]
while True:
response = openai_client.chat.completions.create(
model="gpt-4.1",
messages=messages,
tools=tools,
)
message = response.choices[0].message
messages.append(message)
# Break if no tool calls
if not message.tool_calls:
print(f"🤖 Assistant: {message.content}")
break
# Execute each tool call
for tool_call in message.tool_calls:
print(f"⚙️ Called {tool_call.function.name}")
result = DISK_TOOLS.execute_tool(
ctx, tool_call.function.name, json.loads(tool_call.function.arguments)
)
print(f"🔍 Result: {result}")
messages.append(
{"role": "tool", "tool_call_id": tool_call.id, "content": result}
)
```
```typescript TypeScript theme={null}
import { AcontextClient, DISK_TOOLS } from '@acontext/acontext';
import OpenAI from 'openai';
// Initialize clients
const acontextClient = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// If you're using self-hosted Acontext:
// const acontextClient = new AcontextClient({
// baseUrl: "http://localhost:8029/api/v1",
// apiKey: "sk-ac-your-root-api-bearer-token",
// });
const openaiClient = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
// Create a disk and tool context
const disk = await acontextClient.disks.create();
const ctx = DISK_TOOLS.formatContext(acontextClient, disk.id);
// Get tool schemas for OpenAI
const tools = DISK_TOOLS.toOpenAIToolSchema();
console.log(tools);
// Simple agentic loop
const messages = [
{
role: 'user',
content: 'Create a todo.md file with 3 tasks. Then check the content in this file',
},
];
while (true) {
const response = await openaiClient.chat.completions.create({
model: 'gpt-4.1',
messages,
tools,
});
const message = response.choices[0].message;
messages.push(message);
// Break if no tool calls
if (!message.tool_calls) {
console.log(`🤖 Assistant: ${message.content}`);
break;
}
// Execute each tool call
for (const toolCall of message.tool_calls) {
console.log(`⚙️ Called ${toolCall.function.name}`);
const result = await DISK_TOOLS.executeTool(
ctx,
toolCall.function.name,
JSON.parse(toolCall.function.arguments)
);
console.log(`🔍 Result: ${result}`);
messages.push({
role: 'tool',
tool_call_id: toolCall.id,
content: result,
});
}
}
```
The agent will automatically call the appropriate tools (`write_file`, `read_file`, etc.) to complete your request, creating a fully autonomous file management system.
## How It Works
Set up both the Acontext client and your LLM client (OpenAI or Anthropic). Create a disk to store files.
```python Python theme={null}
import os
from acontext import AcontextClient
from acontext.agent.disk import DISK_TOOLS
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
disk = client.disks.create()
ctx = DISK_TOOLS.format_context(client, disk.id)
```
```typescript TypeScript theme={null}
import { AcontextClient, DISK_TOOLS } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
const disk = await client.disks.create();
const ctx = DISK_TOOLS.formatContext(client, disk.id);
```
Convert the disk tools to the schema format your LLM provider expects.
```python Python theme={null}
# For OpenAI
tools = DISK_TOOLS.to_openai_tool_schema()
# For Anthropic
tools = DISK_TOOLS.to_anthropic_tool_schema()
```
```typescript TypeScript theme={null}
// For OpenAI
const openaiTools = DISK_TOOLS.toOpenAIToolSchema();
// For Anthropic
const anthropicTools = DISK_TOOLS.toAnthropicToolSchema();
```
Create a loop that stores messages to Acontext, executes tool calls, and feeds results back until the task is complete.
**Executing Tools After LLM Response:**
When the LLM responds with tool calls, iterate through each one and execute them using `DISK_TOOLS.execute_tool()` (Python) or `DISK_TOOLS.executeTool()` (TypeScript):
```python Python theme={null}
# Execute each tool call from the LLM response
for tool_call in message.tool_calls:
result = DISK_TOOLS.execute_tool(
ctx, # Tool context with disk ID
tool_call.function.name, # Tool name (e.g., "write_file")
json.loads(tool_call.function.arguments) # Parse arguments
)
# Add tool result back to message history
messages.append({
"role": "tool",
"tool_call_id": tool_call.id,
"content": result
})
```
```typescript TypeScript theme={null}
// Execute each tool call from the LLM response
for (const toolCall of message.tool_calls) {
const result = await DISK_TOOLS.executeTool(
ctx, // Tool context with disk ID
toolCall.function.name, // Tool name (e.g., "write_file")
JSON.parse(toolCall.function.arguments) // Parse arguments
);
// Add tool result back to message history
messages.push({
role: 'tool',
tool_call_id: toolCall.id,
content: result,
});
}
```
The loop continues until the LLM returns a message without tool calls, indicating the task is complete.
## Tool Reference
### write\_file
Create or overwrite a text file on the disk.
**Parameters:**
* `filename` (required) - Name of the file, e.g., `"report.md"`
* `content` (required) - Text content to write to the file
* `file_path` (optional) - Directory path, e.g., `"/notes/"` (defaults to `"/"`)
**Returns:** Success message with file path
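For example, you can call it directly through the tool executor with the context created during setup (a sketch; the filename and content are illustrative):
```python theme={null}
# Execute write_file directly via the tool executor
result = DISK_TOOLS.execute_tool(ctx, "write_file", {
    "filename": "report.md",
    "content": "# Weekly Report\nAll tasks completed.",
    "file_path": "/notes/",  # optional, defaults to "/"
})
print(result)  # success message with the file path
```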
### read\_file
Read the contents of a text file from the disk.
**Parameters:**
* `filename` (required) - Name of the file to read
* `file_path` (optional) - Directory path where the file is located (defaults to `"/"`)
* `line_offset` (optional) - Starting line number (defaults to `0`)
* `line_limit` (optional) - Maximum number of lines to return (defaults to `100`)
**Returns:** File content with line range information
### replace\_string
Replace all occurrences of a string in a file.
**Parameters:**
* `filename` (required) - Name of the file to modify
* `old_string` (required) - String to be replaced
* `new_string` (required) - Replacement string
* `file_path` (optional) - Directory path where the file is located (defaults to `"/"`)
**Returns:** Number of replacements made
### list\_artifacts
List all files and directories at a specified path.
**Parameters:**
* `file_path` (required) - Directory path to list, e.g., `"/notes/"` or `"/"`
**Returns:** List of files and directories
### download\_file
Get a public presigned URL to download a file.
**Parameters:**
* `filename` (required) - Name of the file to get the download URL for
* `file_path` (optional) - Directory path where the file is located (defaults to `"/"`)
* `expire` (optional) - URL expiration time in seconds (defaults to `3600`)
**Returns:** Presigned public URL for downloading the file
All file operations are scoped to the specific disk. Ensure you create and configure the disk context correctly before executing tools.
# Dependencies
Source: https://docs.acontext.io/settings/core
Core settings configure essential dependencies and services.
## Basic Environment Variables
Configure your acontext core services using these essential environment variables. All environment variables use uppercase field names corresponding to the configuration schema.
### LLM Configuration
* **`LLM_API_KEY`**: API key for your LLM provider (OpenAI or Anthropic). This is the primary authentication credential for AI model access.
* **`LLM_BASE_URL`**: Custom base URL for LLM API endpoints. Leave unset to use the provider's default endpoint.
* **`LLM_SDK`**: LLM provider to use. Supported values: `openai`, `anthropic`
* **`LLM_SIMPLE_MODEL`**: Default model identifier for LLM operations. Examples: `gpt-4`, `gpt-3.5-turbo`, `claude-3-sonnet`
* **`LLM_RESPONSE_TIMEOUT`**: Timeout in seconds for LLM API responses. Increase for longer operations.
### Embedding Configuration
* **`BLOCK_EMBEDDING_PROVIDER`**: Embedding provider for vector operations. Supported values: `openai`, `jina`
* **`BLOCK_EMBEDDING_MODEL`**: Embedding model to use for generating vectors. Examples: `text-embedding-3-small`, `text-embedding-ada-002`
* **`BLOCK_EMBEDDING_DIM`**: Dimension size for embedding vectors. Must match your chosen embedding model's output dimensions.
* **`BLOCK_EMBEDDING_API_KEY`**: Separate API key for the embedding service. If not set, uses `LLM_API_KEY`.
* **`BLOCK_EMBEDDING_BASE_URL`**: Custom base URL for embedding API endpoints. Leave unset to use the provider's default.
* **`BLOCK_EMBEDDING_SEARCH_COSINE_DISTANCE_THRESHOLD`**: Cosine distance threshold for embedding similarity searches. Lower values mean stricter matching.
Be careful when choosing your embedding model. Changing the embedding model after data has been stored will require you to clean and rebuild your databases, as existing vector embeddings will be incompatible with the new model's output format and dimensions.
## `.env` Examples
```bash .env theme={null}
# Required LLM Configuration
LLM_API_KEY=sk-your-openai-api-key-here
# Optional LLM Settings
LLM_SDK=openai
LLM_SIMPLE_MODEL=gpt-4
LLM_RESPONSE_TIMEOUT=60
# Embedding Configuration
BLOCK_EMBEDDING_PROVIDER=openai
BLOCK_EMBEDDING_MODEL=text-embedding-3-small
BLOCK_EMBEDDING_DIM=1536
BLOCK_EMBEDDING_SEARCH_COSINE_DISTANCE_THRESHOLD=0.8
```
```bash Anthropic Setup theme={null}
# Using Anthropic Claude
LLM_API_KEY=your-anthropic-api-key
LLM_SDK=anthropic
LLM_SIMPLE_MODEL=claude-3-sonnet-20240229
# Keep OpenAI for embeddings (recommended)
BLOCK_EMBEDDING_PROVIDER=openai
BLOCK_EMBEDDING_API_KEY=sk-your-openai-key-for-embeddings
```
```bash Custom Endpoints theme={null}
# Custom LLM endpoint (e.g., Azure OpenAI)
LLM_API_KEY=your-azure-key
LLM_BASE_URL=https://your-resource.openai.azure.com/
LLM_SDK=openai
# Custom embedding endpoint
BLOCK_EMBEDDING_API_KEY=your-embedding-key
BLOCK_EMBEDDING_BASE_URL=https://api.jina.ai/v1/embeddings
BLOCK_EMBEDDING_PROVIDER=jina
```
```bash Local LLM (Ollama) theme={null}
# Ollama server running locally
LLM_API_KEY=dummy-key-not-required
LLM_BASE_URL=http://localhost:11434/v1
LLM_SDK=openai
LLM_SIMPLE_MODEL=qwen3:8b
# Local embedding with Ollama
BLOCK_EMBEDDING_PROVIDER=openai
BLOCK_EMBEDDING_API_KEY=dummy-key
BLOCK_EMBEDDING_BASE_URL=http://localhost:11434/v1
BLOCK_EMBEDDING_MODEL=qwen3-embedding:0.6b
BLOCK_EMBEDDING_DIM=1024
```
## Appendix
Go to [Ollama](https://ollama.com/download) to download and install Ollama.
```bash theme={null}
# Pull and run a model
ollama pull qwen3:8b
ollama pull qwen3-embedding:0.6b
ollama serve
```
Ollama automatically provides OpenAI-compatible endpoints at `http://localhost:11434/v1`
Local LLM setups are perfect for development, privacy-sensitive applications, or when you want to avoid API costs. Ollama provides OpenAI-compatible APIs, making integration seamless.
# Runtime
Source: https://docs.acontext.io/settings/runtime
Runtime settings are used to configure the runtime behavior of the Acontext Agent.
### Session Message Buffer
Acontext tracks agent tasks and user feedback in the session; the following settings decide how and when that tracking is conducted.
* **`PROJECT_SESSION_MESSAGE_USE_PREVIOUS_MESSAGES_TURNS`**: Number of previous message turns to include in the context when processing new incoming messages. Higher values provide more context for task maintenance but consume more tokens.
* **`PROJECT_SESSION_MESSAGE_BUFFER_MAX_TURNS`**: Maximum number of untracked message turns to keep in the session buffer. This controls how many messages are processed at once: the larger the buffer, the longer the context available per task-maintenance pass, and the lower the total token cost.
* **`PROJECT_SESSION_MESSAGE_BUFFER_TTL_SECONDS`**: Time-to-live in seconds for the session buffer. If the buffer hasn't received any new messages within this TTL, Acontext starts processing the current messages. Together with the max-turns setting, this ensures the session buffer doesn't grow too large and messages don't go stale.
### Agent Iteration Limits
* **`DEFAULT_TASK_AGENT_MAX_ITERATIONS`**: Maximum number of iterations a task agent can perform before stopping. Prevents infinite loops in task execution.
* **`DEFAULT_SOP_AGENT_MAX_ITERATIONS`**: Maximum number of iterations for Standard Operating Procedure (SOP) agents. Controls how many steps an SOP agent can take.
* **`DEFAULT_SPACE_CONSTRUCT_AGENT_MAX_ITERATIONS`**: Maximum iterations for space construction agents. These agents typically need more iterations to build complex knowledge spaces.
## `.env` Examples
```bash Default Configuration theme={null}
# Session Management
PROJECT_SESSION_MESSAGE_USE_PREVIOUS_MESSAGES_TURNS=3
PROJECT_SESSION_MESSAGE_BUFFER_MAX_TURNS=16
PROJECT_SESSION_MESSAGE_BUFFER_MAX_OVERFLOW=16
PROJECT_SESSION_MESSAGE_BUFFER_TTL_SECONDS=4
# Agent Limits
DEFAULT_TASK_AGENT_MAX_ITERATIONS=4
DEFAULT_SOP_AGENT_MAX_ITERATIONS=4
DEFAULT_SPACE_CONSTRUCT_AGENT_MAX_ITERATIONS=16
```
```bash High-Context Setup theme={null}
# Increased context for complex conversations
PROJECT_SESSION_MESSAGE_USE_PREVIOUS_MESSAGES_TURNS=8
PROJECT_SESSION_MESSAGE_BUFFER_MAX_TURNS=32
PROJECT_SESSION_MESSAGE_BUFFER_MAX_OVERFLOW=24
PROJECT_SESSION_MESSAGE_BUFFER_TTL_SECONDS=10
# More iterations for complex tasks
DEFAULT_TASK_AGENT_MAX_ITERATIONS=8
DEFAULT_SOP_AGENT_MAX_ITERATIONS=6
DEFAULT_SPACE_CONSTRUCT_AGENT_MAX_ITERATIONS=24
```
```bash Performance Optimized theme={null}
# Reduced context for faster processing
PROJECT_SESSION_MESSAGE_USE_PREVIOUS_MESSAGES_TURNS=2
PROJECT_SESSION_MESSAGE_BUFFER_MAX_TURNS=8
PROJECT_SESSION_MESSAGE_BUFFER_MAX_OVERFLOW=8
PROJECT_SESSION_MESSAGE_BUFFER_TTL_SECONDS=2
# Lower iteration limits for speed
DEFAULT_TASK_AGENT_MAX_ITERATIONS=3
DEFAULT_SOP_AGENT_MAX_ITERATIONS=3
DEFAULT_SPACE_CONSTRUCT_AGENT_MAX_ITERATIONS=12
```
Setting iteration limits too high may lead to excessive API usage and longer response times. Setting them too low may prevent agents from completing complex tasks.
## Message Buffer Tuning
### Buffer Size Impact
* **Small buffers** (8-16 turns):
* ✅ **Lower update latency** - Tasks and skills update faster
* ❌ **Higher token cost** - More frequent processing with less context sharing
* **Large buffers** (32+ turns):
* ✅ **Lower token cost** - Batch processing with shared context
* ❌ **Higher update latency** - Tasks and skills update less frequently
### TTL Configuration Impact
* **Short TTL** (1-4 seconds):
* ✅ **Near real-time** task extraction and skill updates
* ❌ **Higher costs** due to frequent processing of smaller batches
* **Long TTL** (10+ seconds):
* ✅ **Cost efficient** batch processing
* ❌ **Delayed updates** for tasks and skills
For **development**: Use shorter TTL (2-4s) and smaller buffers (8-16) for faster feedback loops. For **production**: Use longer TTL (6-10s) and larger buffers (24-32) to optimize costs.
# Artifacts
Source: https://docs.acontext.io/store/disk
Acontext offers a filesystem-like disk for storing artifacts.
The Disk API provides persistent, S3-backed storage for file artifacts in your Acontext projects.
You can organize files using a filesystem-like structure with paths, store custom metadata, and generate secure download URLs.
## Setup Filesystem for your Agent
Bring a filesystem to your agent with one line of code.
If you'd like to know the details of the Disk APIs, please read the following documentation.
## What you'll build
In this quickstart, you'll learn how to:
* Create a disk for storing files
* Upload files with custom metadata
* Retrieve files with presigned URLs
* Organize files using paths
* Update metadata and manage artifacts
## Prerequisites
Before you begin, ensure you have:
* A running Acontext server ([run locally](/local))
* An Acontext API key (default is `sk-ac-your-root-api-bearer-token`)
## Initialize the client
First, create a client instance with your API key and base URL.
```python Python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# If you're using self-hosted Acontext:
# client = AcontextClient(
# base_url="http://localhost:8029/api/v1",
# api_key="sk-ac-your-root-api-bearer-token",
# )
client.ping()
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// If you're using self-hosted Acontext:
// const client = new AcontextClient({
// baseUrl: "http://localhost:8029/api/v1",
// apiKey: "sk-ac-your-root-api-bearer-token",
// });
```
Never hardcode API keys in production code. Use environment variables instead.
## Step-by-step tutorial
Create a new disk to store your artifacts. Each disk is an isolated storage container with a unique ID.
```python Python theme={null}
# Create a new disk
disk = client.disks.create()
print(f"Created disk: {disk.id}")
```
```typescript TypeScript theme={null}
// Create a new disk
const disk = await client.disks.create();
console.log(`Created disk: ${disk.id}`);
```
The disk object contains:
* `id`: Unique disk identifier
* `project_id`: Your project ID
* `created_at`: ISO 8601 timestamp
* `updated_at`: ISO 8601 timestamp
Save the disk ID - you'll need it to upload and retrieve files.
Upload your first file to the disk. You can specify a path to organize files and add custom metadata.
```python Python theme={null}
from acontext import FileUpload
# Create a file upload
file = FileUpload(
filename="sprint-plan.md",
content=b"# Sprint Plan\n\n## Goals\n- Complete user authentication\n- Fix critical bugs"
)
# Upload to disk with path and metadata
artifact = client.disks.artifacts.upsert(
disk.id,
file=file,
file_path="/documents/2024/",
meta={
"department": "engineering",
"author": "alice",
"version": "1.0"
}
)
print(f"Uploaded: {artifact.filename}")
print(f"Path: {artifact.path}")
```
```typescript TypeScript theme={null}
import { FileUpload } from '@acontext/acontext';
// Create a file upload
const file = new FileUpload({
filename: "sprint-plan.md",
content: Buffer.from("# Sprint Plan\n\n## Goals\n- Complete user authentication\n- Fix critical bugs"),
contentType: "text/markdown"
});
// Upload to disk with path and metadata
const artifact = await client.disks.artifacts.upsert(disk.id, {
file: file,
filePath: "/documents/2024/",
meta: {
department: "engineering",
author: "alice",
version: "1.0"
}
});
console.log(`Uploaded: ${artifact.filename}`);
console.log(`Path: ${artifact.path}`);
```
The `upsert` method creates a new artifact or updates an existing one if a file with the same path and filename already exists.
Use meaningful paths like `/images/`, `/documents/`, or `/reports/2024/` to organize your files logically.
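For example, uploading to the same path and filename again updates the artifact in place (a sketch reusing the disk from above; the new content is illustrative):
```python theme={null}
# Re-upload with the same path and filename to update the artifact
updated = client.disks.artifacts.upsert(
    disk.id,
    file=FileUpload(
        filename="sprint-plan.md",
        content=b"# Sprint Plan\n\n## Goals\n- Complete user authentication (done)"
    ),
    file_path="/documents/2024/",
    meta={"department": "engineering", "author": "alice", "version": "1.1"}
)
print(f"Updated: {updated.filename}")
```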
Get an artifact and generate a presigned URL for secure downloading. You can also retrieve the file content directly.
```python Python theme={null}
# Get artifact with public URL and content
result = client.disks.artifacts.get(
disk.id,
file_path="/documents/2024/",
filename="sprint-plan.md",
with_public_url=True,
with_content=True
)
print(f"Public URL: {result.public_url}")
print(f"Metadata: {result.artifact.meta}")
# Access file content if available
if result.content:
print(f"Content type: {result.content.type}")
print(f"Content: {result.content.raw[:100]}...") # First 100 chars
```
```typescript TypeScript theme={null}
// Get artifact with public URL and content
const result = await client.disks.artifacts.get(disk.id, {
filePath: "/documents/2024/",
filename: "sprint-plan.md",
withPublicUrl: true,
withContent: true
});
console.log(`Public URL: ${result.public_url}`);
console.log(`Metadata: ${result.artifact.meta}`);
// Access file content if available
if (result.content) {
console.log(`Content type: ${result.content.type}`);
console.log(`Content: ${result.content.raw.substring(0, 100)}...`);
}
```
The response includes:
* `artifact`: Full artifact object with metadata
* `public_url`: Presigned URL for downloading (if requested)
* `content`: Parsed file content for text, JSON, CSV, or code files (if requested)
Presigned URLs are temporary and expire after a default duration. You can customize expiration using the `expire` parameter (in seconds).
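For example, assuming `get` accepts the `expire` keyword in seconds as described above, a URL valid for one hour would look like this sketch:
```python theme={null}
# Request a presigned URL that expires after 3600 seconds (1 hour)
result = client.disks.artifacts.get(
    disk.id,
    file_path="/documents/2024/",
    filename="sprint-plan.md",
    with_public_url=True,
    expire=3600  # assumed keyword, per the expire parameter described above
)
print(result.public_url)
```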
View all files and subdirectories at a specific path.
```python Python theme={null}
# List artifacts in the documents folder
result = client.disks.artifacts.list(
disk.id,
path="/documents/2024/"
)
print("Files:")
for artifact in result.artifacts:
print(f" - {artifact.filename} ({artifact.meta.get('author', 'unknown')})")
print("\nDirectories:")
for directory in result.directories:
print(f" - {directory}")
```
```typescript TypeScript theme={null}
// List artifacts in the documents folder
const result = await client.disks.artifacts.list(disk.id, {
path: "/documents/2024/"
});
console.log("Files:");
result.artifacts.forEach(artifact => {
console.log(` - ${artifact.filename} (${artifact.meta.author || 'unknown'})`);
});
console.log("\nDirectories:");
result.directories.forEach(directory => {
console.log(` - ${directory}`);
});
```
Use this to browse your file structure and discover what's stored in each directory.
Modify an artifact's metadata without re-uploading the file.
```python Python theme={null}
# Update metadata
result = client.disks.artifacts.update(
disk.id,
file_path="/documents/2024/",
filename="sprint-plan.md",
meta={
"department": "engineering",
"author": "alice",
"version": "1.1",
"reviewed": True,
"reviewed_by": "bob"
}
)
print(f"Updated metadata: {result.artifact.meta}")
```
```typescript TypeScript theme={null}
// Update metadata
const result = await client.disks.artifacts.update(disk.id, {
filePath: "/documents/2024/",
filename: "sprint-plan.md",
meta: {
department: "engineering",
author: "alice",
version: "1.1",
reviewed: true,
reviewed_by: "bob"
}
});
console.log(`Updated metadata: ${result.artifact.meta}`);
```
Metadata updates are efficient - you don't need to re-upload the entire file.
Delete artifacts and disks when you're done.
```python Python theme={null}
# Delete an artifact
client.disks.artifacts.delete(
disk.id,
file_path="/documents/2024/",
filename="sprint-plan.md"
)
print("Artifact deleted")
# Delete the entire disk
client.disks.delete(disk.id)
print("Disk deleted")
```
```typescript TypeScript theme={null}
// Delete an artifact
await client.disks.artifacts.delete(disk.id, {
filePath: "/documents/2024/",
filename: "sprint-plan.md"
});
console.log("Artifact deleted");
// Delete the entire disk
await client.disks.delete(disk.id);
console.log("Disk deleted");
```
Deleting a disk removes all artifacts stored within it. This action cannot be undone.
You can skip this clean-up step and go to the dashboard to view the artifacts ([screenshot](/observe/dashboard#artifact-viewer)).
## Complete example
Here's a complete working example that demonstrates the full workflow:
```python Python theme={null}
import os
from acontext import AcontextClient, FileUpload
def main():
# Initialize client
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
try:
# Create disk
disk = client.disks.create()
print(f"✓ Created disk: {disk.id}")
# Upload file
artifact = client.disks.artifacts.upsert(
disk.id,
file=FileUpload(
filename="notes.md",
content=b"# Meeting Notes\nDiscussed Q4 goals."
),
file_path="/meetings/",
meta={"date": "2024-01-15", "attendees": 5}
)
print(f"✓ Uploaded: {artifact.filename}")
# Retrieve with URL
result = client.disks.artifacts.get(
disk.id,
file_path="/meetings/",
filename="notes.md",
with_public_url=True
)
print(f"✓ Download URL: {result.public_url}")
# List files
files = client.disks.artifacts.list(disk.id, path="/meetings/")
print(f"✓ Found {len(files.artifacts)} file(s)")
# Update metadata
client.disks.artifacts.update(
disk.id,
file_path="/meetings/",
filename="notes.md",
meta={"date": "2024-01-15", "attendees": 5, "status": "completed"}
)
print("✓ Updated metadata")
# Cleanup
client.disks.artifacts.delete(disk.id, file_path="/meetings/", filename="notes.md")
client.disks.delete(disk.id)
print("✓ Cleaned up resources")
except Exception as e:
print(f"✗ Error: {e}")
if __name__ == "__main__":
main()
```
```typescript TypeScript theme={null}
import { AcontextClient, FileUpload } from '@acontext/acontext';
async function main() {
// Initialize client
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
try {
// Create disk
const disk = await client.disks.create();
console.log(`✓ Created disk: ${disk.id}`);
// Upload file
const artifact = await client.disks.artifacts.upsert(disk.id, {
file: new FileUpload({
filename: "notes.md",
content: Buffer.from("# Meeting Notes\nDiscussed Q4 goals."),
contentType: "text/markdown"
}),
filePath: "/meetings/",
meta: { date: "2024-01-15", attendees: 5 }
});
console.log(`✓ Uploaded: ${artifact.filename}`);
// Retrieve with URL
const result = await client.disks.artifacts.get(disk.id, {
filePath: "/meetings/",
filename: "notes.md",
withPublicUrl: true
});
console.log(`✓ Download URL: ${result.public_url}`);
// List files
const files = await client.disks.artifacts.list(disk.id, {
path: "/meetings/"
});
console.log(`✓ Found ${files.artifacts.length} file(s)`);
// Update metadata
await client.disks.artifacts.update(disk.id, {
filePath: "/meetings/",
filename: "notes.md",
meta: { date: "2024-01-15", attendees: 5, status: "completed" }
});
console.log("✓ Updated metadata");
// Cleanup
await client.disks.artifacts.delete(disk.id, {
filePath: "/meetings/",
filename: "notes.md"
});
await client.disks.delete(disk.id);
console.log("✓ Cleaned up resources");
} catch (error) {
console.error(`✗ Error: ${error}`);
}
}
main();
```
## Advanced features
Both SDKs support multiple file upload formats for convenience:
```python Python theme={null}
from acontext import FileUpload

# Using FileUpload object (recommended)
file = FileUpload(filename="doc.txt", content=b"data")
# Using tuple with bytes
file = ("doc.txt", b"data")
# Using tuple with file handle
file = ("doc.txt", open("file.txt", "rb"))
# Using tuple with content type
file = ("doc.txt", b"data", "text/plain")
```
```typescript TypeScript theme={null}
import fs from 'fs';
import { FileUpload } from '@acontext/acontext';
// Using FileUpload object (recommended)
const fileUpload = new FileUpload({
  filename: "doc.txt",
  content: Buffer.from("data"),
  contentType: "text/plain"
});
// Using tuple with Buffer
const bufferTuple = ["doc.txt", Buffer.from("data")];
// Using tuple with stream
const streamTuple = ["doc.txt", fs.createReadStream("file.txt")];
// Using tuple with content type
const typedTuple = ["doc.txt", Buffer.from("data"), "text/plain"];
```
When you have many disks, use pagination to retrieve them in batches:
```python Python theme={null}
# List disks with pagination
result = client.disks.list(limit=10, time_desc=True)
for disk in result.items:
print(f"Disk: {disk.id}")
# Get next page if available
if result.has_more:
next_result = client.disks.list(
limit=10,
cursor=result.next_cursor,
time_desc=True
)
```
```typescript TypeScript theme={null}
// List disks with pagination
const result = await client.disks.list({
limit: 10,
timeDesc: true
});
result.items.forEach(disk => {
console.log(`Disk: ${disk.id}`);
});
// Get next page if available
if (result.has_more) {
const nextResult = await client.disks.list({
limit: 10,
cursor: result.next_cursor,
timeDesc: true
});
}
```
The Python SDK includes full async support for non-blocking operations:
```python theme={null}
import asyncio
import os
from acontext import AsyncAcontextClient, FileUpload
async def main():
async with AsyncAcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
) as client:
# All methods are async
disk = await client.disks.create()
artifact = await client.disks.artifacts.upsert(
disk.id,
file=FileUpload(
filename="async.txt",
content=b"Async content"
)
)
result = await client.disks.artifacts.get(
disk.id,
file_path="/",
filename="async.txt"
)
await client.disks.delete(disk.id)
asyncio.run(main())
```
Each artifact includes system-generated metadata in the `__artifact_info__` key:
```json theme={null}
{
"__artifact_info__": {
"size_bytes": 512,
"content_type": "text/markdown",
"last_modified": "2024-01-15T10:30:00Z"
},
"department": "engineering",
"author": "alice"
}
```
You can access this information alongside your custom metadata.
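For quick inspection, here's a minimal sketch of reading that block, assuming the artifact returned by `artifacts.get()` exposes its metadata as a plain `meta` dict shaped like the JSON above (the disk ID, path, and filename are placeholders):
```python Python theme={null}
import os
from acontext import AcontextClient

client = AcontextClient(api_key=os.getenv("ACONTEXT_API_KEY"))

artifact = client.disks.artifacts.get(
    "disk-uuid",
    file_path="/meetings/",
    filename="notes.md",
)
# `meta` is assumed to hold both the system block and your custom keys
info = artifact.meta.get("__artifact_info__", {})
print(f"Size: {info.get('size_bytes')} bytes ({info.get('content_type')})")
print(f"Author: {artifact.meta.get('author')}")
```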
## Common use cases
Store files generated by AI agents, such as reports, diagrams, or code snippets, with metadata tracking generation parameters.
Handle user-uploaded files with custom metadata for categorization, access control, and searchability.
Build a document repository with version tracking, metadata tagging, and organized folder structures.
Store build outputs, logs, and assets from CI/CD pipelines with metadata for traceability.
## Troubleshooting
**Problem**: Getting a 404 error when retrieving an artifact.
**Solution**:
* Verify the disk ID is correct
* Ensure the file path exactly matches (including leading/trailing slashes)
* Check that the filename is spelled correctly
* Use `artifacts.list()` to see what files actually exist
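When in doubt, list the disk's contents and compare against the path and filename you're requesting. A minimal sketch (the disk ID is a placeholder):
```python Python theme={null}
import os
from acontext import AcontextClient

client = AcontextClient(api_key=os.getenv("ACONTEXT_API_KEY"))

# List everything under the disk root to see the files that actually exist
files = client.disks.artifacts.list("disk-uuid", path="/")
for artifact in files.artifacts:
    print(artifact.filename)
```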
# Context Editing
Source: https://docs.acontext.io/store/editing
Acontext offers a few context editing methods for managing your agent's context.
Context Editing is a crucial part of context engineering. If you're not sure what context editing is, see the [Context Engineering](#context-engineering-and-editing) section.
## Count the Session Current Tokens
Before performing context editing, it's essential to know the current token count of your session. This helps you understand how much context is being consumed and when you need to apply editing strategies.
The `get_token_counts()` method returns the total token count for all text and tool-call parts in a session, giving you visibility into your context window usage.
```python Python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# If you're using self-hosted Acontext:
# client = AcontextClient(
# base_url="http://localhost:8029/api/v1",
# api_key="sk-ac-your-root-api-bearer-token",
# )
# Get token counts for a session
token_counts = client.sessions.get_token_counts(session_id="session-uuid")
print(f"Total tokens: {token_counts.total_tokens}")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// If you're using self-hosted Acontext:
// const client = new AcontextClient({
// baseUrl: "http://localhost:8029/api/v1",
// apiKey: "sk-ac-your-root-api-bearer-token",
// });
// Get token counts for a session
const tokenCounts = await client.sessions.getTokenCounts('session-uuid');
console.log(`Total tokens: ${tokenCounts.total_tokens}`);
```
You can use the token count to decide whether the context needs editing.
Acontext's token count is relative and proportional to the length of your session, so treat it as a signal that the current session is growing too long and needs to be edited.
Do not use it to calculate LLM costs, as the actual token consumption of each LLM can vary subtly.
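For example, a minimal sketch that only applies editing once the session exceeds a self-chosen budget (continuing with the client above; the 20,000-token budget is an arbitrary example, and `token_limit` is the strategy described below):
```python Python theme={null}
TOKEN_BUDGET = 20_000

token_counts = client.sessions.get_token_counts(session_id="session-uuid")
if token_counts.total_tokens > TOKEN_BUDGET:
    # Session has grown too long: fetch an edited view of the context
    result = client.sessions.get_messages(
        session_id="session-uuid",
        edit_strategies=[
            {"type": "token_limit", "params": {"limit_tokens": TOKEN_BUDGET}}
        ],
    )
else:
    result = client.sessions.get_messages(session_id="session-uuid")
```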
## Context Editing On-the-fly
Acontext can edit the session context on the fly when you retrieve the current messages.
The basic usage is to pass `edit_strategies` to the `get_messages` method, which returns the edited session messages without modifying the original session storage:
```python Python theme={null}
import os
from acontext import AcontextClient

client = AcontextClient(
    api_key=os.getenv("ACONTEXT_API_KEY"),
)
edited_session = client.sessions.get_messages(
    session_id="session-uuid",
    edit_strategies=[
{"type": "STRATEGY_NAME1", "params": {...}},
{"type": "STRATEGY_NAME2", "params": {...}},
...
],
)
original_session = client.sessions.get_messages(
session_id="session-uuid"
)
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
const editedSession = await client.sessions.getMessages('session-uuid', {
editStrategies: [
{ type: 'STRATEGY_NAME1' as const, params: {...} },
{ type: 'STRATEGY_NAME2' as const, params: {...} },
...
]
});
const originalSession = await client.sessions.getMessages('session-uuid');
```
### Token Limit
This strategy truncates messages based on token count, removing the oldest messages until the total token count is within the specified limit.
It's useful for managing context window limits and ensuring your session stays within model constraints.
It will:
* Remove messages from oldest to newest
* Maintain tool-call/tool-result pairing (when removing a tool call, its corresponding tool result is also removed)
**Example Output:**
```json Before (5 messages, ~25,000 tokens) theme={null}
[
{"role": "user", "content": "What's the weather in NYC?"},
{"role": "assistant", "content": "...", "tool_calls": [...]},
{"role": "tool", "content": "Temperature: 72°F"},
{"role": "assistant", "content": "The weather in NYC is 72°F and sunny."},
{"role": "user", "content": "What about Boston?"}
]
```
```json After (limit_tokens: 20000) theme={null}
[
{"role": "assistant", "content": "The weather in NYC is 72°F and sunny."},
{"role": "user", "content": "What about Boston?"}
]
```
**Usage:**
```python Python theme={null}
# Limit session to 20,000 tokens
edited_session = client.sessions.get_messages(
session_id="session-uuid",
edit_strategies=[
{
"type": "token_limit",
"params": {
"limit_tokens": 20000
}
}
],
)
```
```typescript TypeScript theme={null}
// Limit session to 20,000 tokens
const editedSession = await client.sessions.getMessages('session-uuid', {
editStrategies: [
{
type: 'token_limit' as const,
params: {
limit_tokens: 20000
}
}
],
});
```
### Remove Tool Result
This strategy replaces the content of older tool results with placeholder text to reduce the session context, while keeping the most recent N tool results intact.
**Parameters:**
* `keep_recent_n_tool_results` (optional, default: 3): Number of most recent tool results to keep with original content
* `tool_result_placeholder` (optional, default: "Done"): Custom text to replace old tool results with
**Example Output:**
```json Before (3 tool results with full content) theme={null}
[
{"role": "user", "content": "Check weather in 5 cities"},
{"role": "assistant", "tool_calls": [{"id": "1", "name": "get_weather", "arguments": "{\"city\":\"NYC\"}"}]},
{"role": "tool", "tool_call_id": "1", "content": "Temperature: 72°F, Humidity: 65%, Wind: 10mph..."},
{"role": "assistant", "tool_calls": [{"id": "2", "name": "get_weather", "arguments": "{\"city\":\"LA\"}"}]},
{"role": "tool", "tool_call_id": "2", "content": "Temperature: 85°F, Humidity: 45%, Wind: 5mph..."},
{"role": "assistant", "tool_calls": [{"id": "3", "name": "get_weather", "arguments": "{\"city\":\"Chicago\"}"}]},
{"role": "tool", "tool_call_id": "3", "content": "Temperature: 68°F, Humidity: 70%, Wind: 15mph..."}
]
```
```json After (keep_recent_n_tool_results: 1) theme={null}
[
{"role": "user", "content": "Check weather in 5 cities"},
{"role": "assistant", "tool_calls": [{"id": "1", "name": "get_weather", "arguments": "{\"city\":\"NYC\"}"}]},
{"role": "tool", "tool_call_id": "1", "content": "Done"},
{"role": "assistant", "tool_calls": [{"id": "2", "name": "get_weather", "arguments": "{\"city\":\"LA\"}"}]},
{"role": "tool", "tool_call_id": "2", "content": "Done"},
{"role": "assistant", "tool_calls": [{"id": "3", "name": "get_weather", "arguments": "{\"city\":\"Chicago\"}"}]},
{"role": "tool", "tool_call_id": "3", "content": "Temperature: 68°F, Humidity: 70%, Wind: 15mph..."}
]
```
**Usage:**
```python Python theme={null}
# With explicit parameters
edited_session = client.sessions.get_messages(
session_id="session-uuid",
edit_strategies=[
{
"type": "remove_tool_result",
"params": {
"keep_recent_n_tool_results": 5,
"tool_result_placeholder": "Tool output removed"
}
}
],
)
# Using defaults (keeps 3 most recent, replaces with "Done")
edited_session = client.sessions.get_messages(
session_id="session-uuid",
edit_strategies=[
{"type": "remove_tool_result"}
],
)
```
```typescript TypeScript theme={null}
// With explicit parameters
const editedSession = await client.sessions.getMessages('session-uuid', {
editStrategies: [
{
type: 'remove_tool_result' as const,
params: {
keep_recent_n_tool_results: 5,
tool_result_placeholder: 'Tool output removed'
}
}
],
});
// Using defaults (keeps 3 most recent, replaces with "Done")
const editedSession = await client.sessions.getMessages('session-uuid', {
editStrategies: [
{ type: 'remove_tool_result' as const }
],
});
```
### Remove Tool Call Params
This strategy removes parameters from old tool-call parts to reduce the session context, while keeping the most recent N tool calls with their full parameters intact.
This is particularly useful when you have many tool calls in your session history and want to reduce token usage by removing the detailed arguments from older tool calls, while still maintaining the tool call structure (ID and name) so that tool-results can still reference them.
**Parameters:**
* `keep_recent_n_tool_calls` (optional, default: 3): Number of most recent tool calls to keep with full parameters
**How it works:**
* Keeps the most recent N tool calls with their original parameters
* Replaces older tool call arguments with empty JSON `{}`
* Tool call ID and name remain intact so tool-results can still reference them
**Example Output:**
```json Before (3 tool calls with full arguments) theme={null}
[
{"role": "user", "content": "Analyze sales data for all regions"},
{"role": "assistant", "tool_calls": [
{"id": "call_1", "name": "query_database", "arguments": "{\"query\":\"SELECT * FROM sales WHERE region='North' AND date > '2024-01-01'\",\"limit\":1000}"}
]},
{"role": "tool", "tool_call_id": "call_1", "content": "Results: 1500 rows..."},
{"role": "assistant", "tool_calls": [
{"id": "call_2", "name": "query_database", "arguments": "{\"query\":\"SELECT * FROM sales WHERE region='South' AND date > '2024-01-01'\",\"limit\":1000}"}
]},
{"role": "tool", "tool_call_id": "call_2", "content": "Results: 1200 rows..."},
{"role": "assistant", "tool_calls": [
{"id": "call_3", "name": "calculate_metrics", "arguments": "{\"data\":[...],\"metrics\":[\"average\",\"total\",\"growth\"]}"}
]},
{"role": "tool", "tool_call_id": "call_3", "content": "Average: $5000, Total: $2.5M..."}
]
```
```json After (keep_recent_n_tool_calls: 1) theme={null}
[
{"role": "user", "content": "Analyze sales data for all regions"},
{"role": "assistant", "tool_calls": [
{"id": "call_1", "name": "query_database", "arguments": "{}"}
]},
{"role": "tool", "tool_call_id": "call_1", "content": "Results: 1500 rows..."},
{"role": "assistant", "tool_calls": [
{"id": "call_2", "name": "query_database", "arguments": "{}"}
]},
{"role": "tool", "tool_call_id": "call_2", "content": "Results: 1200 rows..."},
{"role": "assistant", "tool_calls": [
{"id": "call_3", "name": "calculate_metrics", "arguments": "{\"data\":[...],\"metrics\":[\"average\",\"total\",\"growth\"]}"}
]},
{"role": "tool", "tool_call_id": "call_3", "content": "Average: $5000, Total: $2.5M..."}
]
```
**Usage:**
```python Python theme={null}
# With explicit parameters
edited_session = client.sessions.get_messages(
session_id="session-uuid",
edit_strategies=[
{
"type": "remove_tool_call_params",
"params": {
"keep_recent_n_tool_calls": 5
}
}
],
)
# Using defaults (keeps 3 most recent tool calls)
edited_session = client.sessions.get_messages(
session_id="session-uuid",
edit_strategies=[
{"type": "remove_tool_call_params"}
],
)
```
```typescript TypeScript theme={null}
// With explicit parameters
const editedSession = await client.sessions.getMessages('session-uuid', {
editStrategies: [
{
type: 'remove_tool_call_params' as const,
params: {
keep_recent_n_tool_calls: 5
}
}
],
});
// Using defaults (keeps 3 most recent tool calls)
const editedSession = await client.sessions.getMessages('session-uuid', {
editStrategies: [
{ type: 'remove_tool_call_params' as const }
],
});
```
## Context Engineering and Editing
Context Engineering is an emerging discipline focused on designing, managing, and optimizing the information provided to large language models (LLMs) and AI agents to enhance their performance, reliability, and consistency.
While prompt engineering concentrates on crafting specific inputs, context engineering encompasses the entire spectrum of contextual elements that influence an AI system's behavior and outputs.
**Context Editing** is the most important part of context engineering for keeping your agent running smoothly.
### Further Reading
* [Context Editing by Anthropic](https://platform.claude.com/docs/en/build-with-claude/context-editing)
* [Context Engineering Handbook](https://github.com/davidkimai/Context-Engineering)
* [Context Engineering by LangGraph](https://blog.langchain.com/context-engineering-for-agents/)
# Multi-modal Messages
Source: https://docs.acontext.io/store/messages/multi-modal
Store and retrieve messages with images, audio, and documents in OpenAI and Anthropic formats
Acontext supports multi-modal messages that include text, images, audio, and PDF documents. You can store and retrieve these messages in both OpenAI and Anthropic formats, with automatic format conversion between providers.
## Prerequisites
Before working with multi-modal messages, ensure you have:
* A running Acontext server ([run locally](/local))
* An Acontext API key
Multi-modal content is stored as assets in S3, while message metadata is stored in PostgreSQL. Acontext automatically handles file uploads and generates presigned URLs for retrieval.
## Supported content types
Acontext supports the following multi-modal content types:
PNG, JPEG, GIF, WebP formats for visual content
WAV, MP3 formats for voice and sound
PDF documents for analysis and summarization
## Storing images
### Images with OpenAI format
OpenAI supports images through the `image_url` content part type, which accepts both external URLs and base64-encoded data URLs:
```python Python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# If you're using self-hosted Acontext:
# client = AcontextClient(
# base_url="http://localhost:8029/api/v1",
# api_key="sk-ac-your-root-api-bearer-token",
# )
client.ping()
session = client.sessions.create()
# Store a message with an image URL
message = client.sessions.store_message(
session_id=session.id,
blob={
"role": "user",
"content": [
{
"type": "text",
"text": "What's in this image?"
},
{
"type": "image_url",
"image_url": {
"url": "https://example.com/image.png",
"detail": "high" # Options: "low", "high", "auto"
}
}
]
},
format="openai"
)
print(f"Message with image sent: {message.id}")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// If you're using self-hosted Acontext:
// const client = new AcontextClient({
// baseUrl: "http://localhost:8029/api/v1",
// apiKey: "sk-ac-your-root-api-bearer-token",
// });
const session = await client.sessions.create();
// Store a message with an image URL
const message = await client.sessions.storeMessage(session.id, {
role: 'user',
content: [
{
type: 'text',
text: 'What\'s in this image?'
},
{
type: 'image_url',
image_url: {
url: 'https://example.com/image.png',
detail: 'high' // Options: "low", "high", "auto"
}
}
]
}, { format: 'openai' });
console.log(`Message with image sent: ${message.id}`);
```
```python Python theme={null}
import base64
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
session = client.sessions.create()
# Read and encode image as base64
with open("image.png", "rb") as image_file:
image_data = base64.b64encode(image_file.read()).decode("utf-8")
# Store message with base64 image (data URL format)
message = client.sessions.store_message(
session_id=session.id,
blob={
"role": "user",
"content": [
{
"type": "text",
"text": "What's in this image?"
},
{
"type": "image_url",
"image_url": {
"url": f"data:image/png;base64,{image_data}",
"detail": "high" # Options: "low", "high", "auto"
}
}
]
},
format="openai"
)
print(f"Message with base64 image sent: {message.id}")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
import fs from 'fs';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
const session = await client.sessions.create();
// Read and encode image as base64
const imageBuffer = fs.readFileSync('image.png');
const imageData = imageBuffer.toString('base64');
// Store message with base64 image (data URL format)
const message = await client.sessions.storeMessage(session.id, {
role: 'user',
content: [
{
type: 'text',
text: 'What\'s in this image?'
},
{
type: 'image_url',
image_url: {
url: `data:image/png;base64,${imageData}`,
detail: 'high' // Options: "low", "high", "auto"
}
}
]
}, { format: 'openai' });
console.log(`Message with base64 image sent: ${message.id}`);
```
The `detail` parameter controls image processing quality. Use `"high"` for detailed analysis, `"low"` for faster processing, or `"auto"` to let the system decide.
Base64-encoded images in OpenAI format use the data URL scheme: `data:image/[type];base64,[base64-data]`. The image data is stored within the message parts and returned as base64 when retrieved.
### Images with Anthropic format
Anthropic requires images to be base64-encoded:
```python Python theme={null}
import base64
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
session = client.sessions.create()
# Read and encode image as base64
with open("image.png", "rb") as image_file:
image_data = base64.b64encode(image_file.read()).decode("utf-8")
# Store message with base64 image
message = client.sessions.store_message(
session_id=session.id,
blob={
"role": "user",
"content": [
{
"type": "text",
"text": "Describe this image"
},
{
"type": "image",
"source": {
"type": "base64",
"media_type": "image/png",
"data": image_data
}
}
]
},
format="anthropic"
)
print(f"Message with image sent: {message.id}")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
import fs from 'fs';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
const session = await client.sessions.create();
// Read and encode image as base64
const imageBuffer = fs.readFileSync('image.png');
const imageData = imageBuffer.toString('base64');
// Store message with base64 image
const message = await client.sessions.storeMessage(session.id, {
role: 'user',
content: [
{
type: 'text',
text: 'Describe this image'
},
{
type: 'image',
source: {
type: 'base64',
media_type: 'image/png',
data: imageData
}
}
]
}, { format: 'anthropic' });
console.log(`Message with image sent: ${message.id}`);
```
Anthropic format requires images to be base64-encoded. The base64 data is stored within the message parts and returned as base64 when you retrieve the message.
## Storing audio
Audio content can be included in messages for speech-to-text or audio analysis use cases:
```python Python theme={null}
import base64
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
session = client.sessions.create()
# Read and encode audio file
with open("audio.wav", "rb") as audio_file:
audio_data = base64.b64encode(audio_file.read()).decode("utf-8")
# Store message with audio (OpenAI format)
message = client.sessions.store_message(
session_id=session.id,
blob={
"role": "user",
"content": [
{
"type": "text",
"text": "Transcribe this audio"
},
{
"type": "input_audio",
"input_audio": {
"data": audio_data,
"format": "wav"
}
}
]
},
format="openai"
)
print(f"Message with audio sent: {message.id}")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
import fs from 'fs';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
const session = await client.sessions.create();
// Read and encode audio file
const audioBuffer = fs.readFileSync('audio.wav');
const audioData = audioBuffer.toString('base64');
// Store message with audio (OpenAI format)
const message = await client.sessions.storeMessage(session.id, {
role: 'user',
content: [
{
type: 'text',
text: 'Transcribe this audio'
},
{
type: 'input_audio',
input_audio: {
data: audioData,
format: 'wav'
}
}
]
}, { format: 'openai' });
console.log(`Message with audio sent: ${message.id}`);
```
## Storing Files
You can store files for analysis and understanding using base64-encoded content. Different formats handle files differently:
Anthropic supports storing files using the `document` content type with base64-encoded data:
```python Python theme={null}
import base64
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
session = client.sessions.create()
# Read and encode PDF file as base64
with open("report.pdf", "rb") as pdf_file:
pdf_data = base64.b64encode(pdf_file.read()).decode("utf-8")
# Store message with PDF document (Anthropic format)
message = client.sessions.store_message(
session_id=session.id,
blob={
"role": "user",
"content": [
{
"type": "document",
"source": {
"type": "base64",
"media_type": "application/pdf",
"data": pdf_data
}
},
{
"type": "text",
"text": "Summarize the key findings in this report"
}
]
},
format="anthropic"
)
print(f"Message with PDF sent: {message.id}")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
import fs from 'fs';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
const session = await client.sessions.create();
// Read and encode PDF file as base64
const pdfBuffer = fs.readFileSync('report.pdf');
const pdfData = pdfBuffer.toString('base64');
// Store message with PDF document (Anthropic format)
const message = await client.sessions.storeMessage(session.id, {
role: 'user',
content: [
{
type: 'document',
source: {
type: 'base64',
media_type: 'application/pdf',
data: pdfData
}
},
{
type: 'text',
text: 'Summarize the key findings in this report'
}
]
}, { format: 'anthropic' });
console.log(`Message with PDF sent: ${message.id}`);
```
OpenAI format supports base64 file uploads using the `file` content type with embedded `file_data`:
```python Python theme={null}
import base64
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
session = client.sessions.create()
# Read and encode PDF file as base64
with open("document.pdf", "rb") as pdf_file:
pdf_data = base64.b64encode(pdf_file.read()).decode("utf-8")
# Store message with PDF file (OpenAI format)
message = client.sessions.store_message(
session_id=session.id,
blob={
"role": "user",
"content": [
{
"type": "text",
"text": "What's in this PDF document?"
},
{
"type": "file",
"file": {
"file_data": pdf_data,
"filename": "document.pdf"
}
}
]
},
format="openai"
)
print(f"Message with PDF sent: {message.id}")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
import fs from 'fs';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
const session = await client.sessions.create();
// Read and encode PDF file as base64
const pdfBuffer = fs.readFileSync('document.pdf');
const pdfData = pdfBuffer.toString('base64');
// Store message with PDF file (OpenAI format)
const message = await client.sessions.storeMessage(session.id, {
role: 'user',
content: [
{
type: 'text',
text: 'What\'s in this PDF document?'
},
{
type: 'file',
file: {
file_data: pdfData,
filename: 'document.pdf'
}
}
]
}, { format: 'openai' });
console.log(`Message with PDF sent: ${message.id}`);
```
When you store a PDF with base64 data, the base64 content is stored within the message parts JSON in S3. When you retrieve the message, the PDF is returned as base64 data again—not as a presigned URL. This keeps the PDF data inline with the message content.
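To reconstruct the original file on the client side, decode that base64 after retrieval. A minimal sketch, continuing with the client and session above and assuming the retrieved parts are plain dicts mirroring the OpenAI `file` shape that was stored:
```python Python theme={null}
import base64

result = client.sessions.get_messages(session_id=session.id, format="openai")
for msg in result.items:
    if not isinstance(msg.content, list):
        continue
    for part in msg.content:
        if part.get("type") == "file":
            # Decode the inline base64 payload back into a PDF on disk
            pdf_bytes = base64.b64decode(part["file"]["file_data"])
            with open(part["file"].get("filename", "download.pdf"), "wb") as f:
                f.write(pdf_bytes)
```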
### Supported document formats
Acontext does not limit document formats; you can store any file type in Acontext.
However, not every file type is supported by your LLM provider. Check their documentation to see whether a given file type is supported:
* [OpenAI Files](https://platform.openai.com/docs/guides/pdf-files?api-mode=chat#page-top)
* [Anthropic Files](https://docs.claude.com/en/docs/build-with-claude/pdf-support)
## Retrieving multi-modal messages
When retrieving messages, the content format depends on how the message was originally sent:
```python Python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# Retrieve messages
result = client.sessions.get_messages(
session_id="session_uuid",
format="anthropic", # or "openai"
)
print(f"Retrieved {len(result.items)} messages")
# Access messages
for msg in result.items:
for block in msg.content:
if block.get('type') == 'image':
# Images sent as base64 are returned as base64
print(f"Image source type: {block['source']['type']}")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// Retrieve messages
const result = await client.sessions.getMessages('session_uuid', {
format: 'anthropic', // or "openai"
limit: 50
});
console.log(`Retrieved ${result.items.length} messages`);
// Access messages
result.items.forEach(msg => {
msg.content.forEach(block => {
if (block.type === 'image') {
// Images sent as base64 are returned as base64
console.log(`Image source type: ${block.source.type}`);
}
});
});
```
**How content is returned:**
* Images/PDFs sent as **base64 data** are returned as **base64 data** (stored within the message parts)
* Images/PDFs sent as **URLs** in OpenAI format are stored as URLs in metadata
* Files uploaded via **multipart form-data** are stored as separate S3 assets (not covered in this guide)
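A minimal sketch that tells the first two cases apart when reading back in OpenAI format (continuing with the client above; the session ID is a placeholder, and parts are assumed to be plain dicts as in the examples above):
```python Python theme={null}
result = client.sessions.get_messages(session_id="session_uuid", format="openai")
for msg in result.items:
    if not isinstance(msg.content, list):
        continue
    for part in msg.content:
        if part.get("type") == "image_url":
            url = part["image_url"]["url"]
            if url.startswith("data:"):
                print("Inline base64 image (stored within the message parts)")
            else:
                print(f"External URL preserved: {url}")
```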
## Format conversion
Acontext automatically converts between formats when retrieving messages:
**Format conversion for images:**
* Images sent as **base64** are stored as base64 and returned as base64 in any format
* Images sent as **URLs** (in OpenAI format) are stored as URLs and can be:
* Retrieved as URLs in OpenAI format (URL is preserved)
* Retrieved as base64 in Anthropic format (URL is downloaded and converted on-the-fly)
```python Python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
session = client.sessions.create()
# Store in OpenAI format
client.sessions.store_message(
session_id=session.id,
blob={
"role": "user",
"content": [
{"type": "text", "text": "Analyze this image"},
{
"type": "image_url",
"image_url": {
"url": "https://example.com/photo.jpg"
}
}
]
},
format="openai"
)
# Retrieve in Anthropic format
result = client.sessions.get_messages(
session_id=session.id,
format="anthropic" # Different format!
)
# Image is automatically converted to Anthropic format
print("Message retrieved in Anthropic format")
for msg in result.items:
print(f"Role: {msg.role}")
for block in msg.content:
print(f" Block type: {block.type}")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
const session = await client.sessions.create();
// Store in OpenAI format
await client.sessions.storeMessage(session.id, {
role: 'user',
content: [
{ type: 'text', text: 'Analyze this image' },
{
type: 'image_url',
image_url: {
url: 'https://example.com/photo.jpg'
}
}
]
}, { format: 'openai' });
// Retrieve in Anthropic format
const result = await client.sessions.getMessages(session.id, {
format: 'anthropic' // Different format!
});
// Image is automatically converted to Anthropic format
console.log('Message retrieved in Anthropic format');
result.items.forEach(msg => {
console.log(`Role: ${msg.role}`);
msg.content.forEach(block => {
console.log(` Block type: ${block.type}`);
});
});
```
```python Python theme={null}
import base64
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
session = client.sessions.create()
# Read image
with open("chart.png", "rb") as f:
image_data = base64.b64encode(f.read()).decode("utf-8")
# Store in Anthropic format
client.sessions.store_message(
session_id=session.id,
blob={
"role": "user",
"content": [
{"type": "text", "text": "Explain this chart"},
{
"type": "image",
"source": {
"type": "base64",
"media_type": "image/png",
"data": image_data
}
}
]
},
format="anthropic"
)
# Retrieve in OpenAI format
result = client.sessions.get_messages(
session_id=session.id,
format="openai" # Different format!
)
# Image is automatically converted to OpenAI format
print("Message retrieved in OpenAI format")
for msg in result.items:
if hasattr(msg, 'content') and isinstance(msg.content, list):
for part in msg.content:
print(f" Part type: {part.get('type')}")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
import fs from 'fs';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
const session = await client.sessions.create();
// Read image
const imageBuffer = fs.readFileSync('chart.png');
const imageData = imageBuffer.toString('base64');
// Store in Anthropic format
await client.sessions.storeMessage(session.id, {
role: 'user',
content: [
{ type: 'text', text: 'Explain this chart' },
{
type: 'image',
source: {
type: 'base64',
media_type: 'image/png',
data: imageData
}
}
]
}, { format: 'anthropic' });
// Retrieve in OpenAI format
const result = await client.sessions.getMessages(session.id, {
format: 'openai' // Different format!
});
// Image is automatically converted to OpenAI format
console.log('Message retrieved in OpenAI format');
result.items.forEach(msg => {
if (Array.isArray(msg.content)) {
msg.content.forEach(part => {
console.log(` Part type: ${part.type}`);
});
}
});
```
Format conversion is bidirectional and lossless for common content types. Use the format that best matches your workflow when retrieving messages.
## Best practices
* Compress images and PDFs to reduce storage costs and improve performance
* Use appropriate resolutions (e.g., 2048px max for most image analysis tasks)
* Consider using `"detail": "low"` in OpenAI format for simple image understanding tasks
* Base64-encoded content increases message size by \~33%, so optimization is important
* Base64 encoding works well for files under 10MB
* For very large files (>10MB), consider using multipart file uploads instead
* Monitor your storage usage and clean up old sessions regularly
* Use **OpenAI format** for GPT-4 Vision and similar models
* Use **Anthropic format** for Claude with vision and document analysis capabilities
* Format conversion is automatic and lossless for common content types
* Base64 data (images, PDFs, audio) is stored within the message parts JSON
* Message parts are stored in S3, with metadata in PostgreSQL
* When retrieving, base64 content is returned as-is (not converted to URLs)
## Complete workflow example
Here's a complete example that demonstrates storing and retrieving multi-modal messages:
```python Python theme={null}
import base64
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
try:
# Create session
session = client.sessions.create()
print(f"Session created: {session.id}")
# Store text + image message
with open("screenshot.png", "rb") as f:
image_data = base64.b64encode(f.read()).decode("utf-8")
message = client.sessions.store_message(
session_id=session.id,
blob={
"role": "user",
"content": [
{
"type": "text",
"text": "What UI improvements would you suggest for this design?"
},
{
"type": "image",
"source": {
"type": "base64",
"media_type": "image/png",
"data": image_data
}
}
]
},
format="anthropic"
)
print(f"Message sent: {message.id}")
# Retrieve messages
result = client.sessions.get_messages(
session_id=session.id,
format="openai" # Convert to OpenAI format
)
print(f"\nRetrieved {len(result.items)} messages:")
for msg in result.items:
print(f" Role: {msg.role}")
if isinstance(msg.content, list):
for part in msg.content:
if part.get('type') == 'text':
print(f" Text: {part.get('text')[:50]}...")
finally:
client.close()
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
import fs from 'fs';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
async function processMultiModalMessage() {
try {
// Create session
const session = await client.sessions.create();
console.log(`Session created: ${session.id}`);
// Store text + image message
const imageBuffer = fs.readFileSync('screenshot.png');
const imageData = imageBuffer.toString('base64');
const message = await client.sessions.storeMessage(session.id, {
role: 'user',
content: [
{
type: 'text',
text: 'What UI improvements would you suggest for this design?'
},
{
type: 'image',
source: {
type: 'base64',
media_type: 'image/png',
data: imageData
}
}
]
}, { format: 'anthropic' });
console.log(`Message sent: ${message.id}`);
// Retrieve messages
const result = await client.sessions.getMessages(session.id, {
format: 'openai' // Convert to OpenAI format
});
console.log(`\nRetrieved ${result.items.length} messages:`);
result.items.forEach(msg => {
console.log(` Role: ${msg.role}`);
if (Array.isArray(msg.content)) {
msg.content.forEach(part => {
if (part.type === 'text') {
console.log(` Text: ${part.text.substring(0, 50)}...`);
}
});
}
});
} catch (error) {
console.error(`Error: ${error}`);
}
}
processMultiModalMessage();
```
## Next steps
Store and manage file artifacts alongside messages
View your messages in the dashboard
# Multi-provider Messages
Source: https://docs.acontext.io/store/messages/multi-provider
Store and retrieve messages from multiple LLM providers
Acontext provides a flexible message storage system that supports multiple LLM providers (OpenAI, Anthropic, Google Gemini). You can store messages and retrieve them later for analysis, debugging, or continuing conversations.
## Prerequisites
Before storing messages, you need:
* A running Acontext server ([run locally](/local))
* An Acontext API key (default is `sk-ac-your-root-api-bearer-token`)
Messages are stored within sessions. Create a session first, then store messages to it.
## Storing messages
### Create a session first
Before storing messages, create a session:
```python Python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# If you're using self-hosted Acontext:
# client = AcontextClient(
# base_url="http://localhost:8029/api/v1",
# api_key="sk-ac-your-root-api-bearer-token",
# )
client.ping()
# Create a session
session = client.sessions.create()
print(f"Session created: {session.id}")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// If you're using self-hosted Acontext:
// const client = new AcontextClient({
// baseUrl: "http://localhost:8029/api/v1",
// apiKey: "sk-ac-your-root-api-bearer-token",
// });
// Create a session
const session = await client.sessions.create();
console.log(`Session created: ${session.id}`);
```
### Store a single message
Store messages in OpenAI, Anthropic, or Gemini format:
```python Python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
session = client.sessions.create()
# Store a message in OpenAI format
message = client.sessions.store_message(
session_id=session.id,
blob={
"role": "user",
"content": "What is the capital of France?"
},
format="openai"
)
print(f"Message stored: {message.id}")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
const session = await client.sessions.create();
// Store a message in OpenAI format
const message = await client.sessions.storeMessage(session.id, {
blob: {
role: 'user',
content: 'What is the capital of France?'
},
format: 'openai'
});
console.log(`Message stored: ${message.id}`);
```
```python Python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
session = client.sessions.create()
# Store a message in Anthropic format
message = client.sessions.store_message(
session_id=session.id,
blob={
"role": "user",
"content": [
{"type": "text", "text": "Explain quantum computing"}
]
},
format="anthropic"
)
print(f"Message stored: {message.id}")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
const session = await client.sessions.create();
// Store a message in Anthropic format
const message = await client.sessions.storeMessage(session.id, {
blob: {
role: 'user',
content: [
{ type: 'text', text: 'Explain quantum computing' }
]
},
format: 'anthropic'
});
console.log(`Message stored: ${message.id}`);
```
```python Python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
session = client.sessions.create()
# Store a message in Gemini format
message = client.sessions.store_message(
session_id=session.id,
blob={
"role": "user",
"parts": [
{"text": "Explain quantum computing"}
]
},
format="gemini"
)
print(f"Message stored: {message.id}")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
const session = await client.sessions.create();
// Store a message in Gemini format
const message = await client.sessions.storeMessage(session.id, {
blob: {
role: 'user',
parts: [
{ text: 'Explain quantum computing' }
]
},
format: 'gemini'
});
console.log(`Message stored: ${message.id}`);
```
### Store multiple messages
You can store multiple messages sequentially to build a conversation:
```python Python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
session = client.sessions.create()
# Store multiple messages to build a conversation
messages_to_store = [
{"role": "user", "content": "Hello! Can you help me with Python?"},
{"role": "assistant", "content": "Of course! I'd be happy to help with Python. What would you like to know?"},
{"role": "user", "content": "How do I read a JSON file?"},
{"role": "assistant", "content": "You can use the json module: import json; with open('file.json') as f: data = json.load(f)"}
]
for msg in messages_to_store:
message = client.sessions.store_message(
session_id=session.id,
blob=msg,
format="openai"
)
print(f"Stored message: {message.id}")
print(f"Total messages stored: {len(messages_to_store)}")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
const session = await client.sessions.create();
// Store multiple messages to build a conversation
const messagesToStore = [
{ role: 'user', content: 'Hello! Can you help me with Python?' },
  { role: 'assistant', content: 'Of course! I\'d be happy to help with Python. What would you like to know?' },
{ role: 'user', content: 'How do I read a JSON file?' },
{ role: 'assistant', content: 'You can use the json module: import json; with open("file.json") as f: data = json.load(f)' }
];
for (const msg of messagesToStore) {
const message = await client.sessions.storeMessage(session.id, {
blob: msg,
format: 'openai'
});
console.log(`Stored message: ${message.id}`);
}
console.log(`Total messages stored: ${messagesToStore.length}`);
```
Each message receives a unique ID upon creation. You can use these IDs to reference specific messages later.
## Retrieving messages
### Get all messages from a session
Retrieve messages from a session:
```python Python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# Get messages from a session
result = client.sessions.get_messages(
session_id="session_uuid",
format="openai"
)
print(f"Retrieved {len(result.items)} messages")
for msg in result.items:
print(f"- {msg.role}: {msg.content[:50]}...")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// Get messages from a session
const result = await client.sessions.getMessages('session_uuid', {
format: 'openai'
});
console.log(`Retrieved ${result.items.length} messages`);
result.items.forEach(msg => {
console.log(`- ${msg.role}: ${msg.content.substring(0, 50)}...`);
});
```
## Session observability
Acontext provides built-in observability features to help you monitor and analyze your message sessions in real-time.
You can track various metrics and insights about your sessions:
Monitor the processing status of messages in your session: observed, in\_process, and pending counts.
View all tasks and agent operations associated with a session for debugging and analysis.
Access the Acontext dashboard to visualize message flows, traces, and session analytics.
Disable automatic task tracking for sessions that don't need task analysis or monitoring.
## Message format compatibility
Acontext supports three message formats for maximum compatibility:
Compatible with OpenAI's Chat Completion API format. Use for GPT-3.5, GPT-4, and similar models.
Compatible with Anthropic's Messages API format. Use for Claude models.
Compatible with Google's Gemini API format. Use for Gemini Pro, Gemini Flash, and other Gemini models.
You can store messages in one format and retrieve them in another. Acontext automatically handles format conversion.
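For example, a minimal sketch that stores a message in Gemini format and reads the same session back in OpenAI format:
```python Python theme={null}
import os
from acontext import AcontextClient

client = AcontextClient(api_key=os.getenv("ACONTEXT_API_KEY"))
session = client.sessions.create()

# Store in Gemini format...
client.sessions.store_message(
    session_id=session.id,
    blob={"role": "user", "parts": [{"text": "Explain quantum computing"}]},
    format="gemini",
)

# ...then retrieve the same session converted to OpenAI format
result = client.sessions.get_messages(session_id=session.id, format="openai")
for msg in result.items:
    print(f"{msg.role}: {msg.content}")
```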
## Complete workflow example
Here's a complete example showing how to store and retrieve a conversation:
```python Python theme={null}
import os
from acontext import AcontextClient
# Initialize client
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
try:
# 1. Create a session
session = client.sessions.create()
# 2. Store multiple messages
conversation = [
{"role": "user", "content": "I'm having trouble logging in"},
{"role": "assistant", "content": "I can help with that. What error are you seeing?"},
{"role": "user", "content": "It says 'Invalid credentials'"},
{"role": "assistant", "content": "Try resetting your password using the forgot password link."}
]
for msg in conversation:
client.sessions.store_message(
session_id=session.id,
blob=msg,
format="openai"
)
print(f"Stored {len(conversation)} messages in session {session.id}")
# 3. Retrieve messages later
result = client.sessions.get_messages(
session_id=session.id,
format="openai"
)
print(f"\nRetrieved conversation ({len(result.items)} messages):")
for msg in result.items:
print(f"{msg.role}: {msg.content}")
finally:
client.close()
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
// Initialize client
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
async function storeAndRetrieveConversation() {
try {
// 1. Create a session
const session = await client.sessions.create();
// 2. Store multiple messages
const conversation = [
{ role: 'user', content: 'I\'m having trouble logging in' },
{ role: 'assistant', content: 'I can help with that. What error are you seeing?' },
{ role: 'user', content: 'It says \'Invalid credentials\'' },
{ role: 'assistant', content: 'Try resetting your password using the forgot password link.' }
];
for (const msg of conversation) {
await client.sessions.storeMessage(session.id, {
blob: msg,
format: 'openai'
});
}
console.log(`Stored ${conversation.length} messages in session ${session.id}`);
// 3. Retrieve messages later
const result = await client.sessions.getMessages(session.id, {
format: 'openai'
});
console.log(`\nRetrieved conversation (${result.items.length} messages):`);
result.items.forEach(msg => {
console.log(`${msg.role}: ${msg.content}`);
});
} finally {
await client.close();
}
}
storeAndRetrieveConversation();
```
## Managing sessions
### Delete a session
When you're done with a conversation or workflow, you can delete the session to clean up storage:
```python Python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# Delete a session when you're done with it
session_id = "session_uuid"
client.sessions.delete(session_id)
print(f"Session {session_id} deleted successfully")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// Delete a session when you're done with it
const sessionId = 'session_uuid';
await client.sessions.delete(sessionId);
console.log(`Session ${sessionId} deleted successfully`);
```
Deleting a session permanently removes all associated messages. This action cannot be undone.
### List and clean up old sessions
You can list sessions and delete the ones you no longer need:
```python Python theme={null}
import os
from acontext import AcontextClient
client = AcontextClient(
api_key=os.getenv("ACONTEXT_API_KEY"),
)
# List all sessions
sessions = client.sessions.list(limit=100, time_desc=True)
print(f"Found {len(sessions.items)} sessions")
# Delete old sessions (example: delete all except the most recent 10)
for session in sessions.items[10:]:
client.sessions.delete(session.id)
print(f"Deleted session {session.id}")
```
```typescript TypeScript theme={null}
import { AcontextClient } from '@acontext/acontext';
const client = new AcontextClient({
apiKey: process.env.ACONTEXT_API_KEY,
});
// List all sessions
const sessions = await client.sessions.list({ limit: 100, timeDesc: true });
console.log(`Found ${sessions.items.length} sessions`);
// Delete old sessions (example: delete all except the most recent 10)
for (const session of sessions.items.slice(10)) {
await client.sessions.delete(session.id);
console.log(`Deleted session ${session.id}`);
}
```
## Best practices
* Use **sessions** to group related messages (e.g., one conversation, one task)
* Create a new session for each distinct conversation or workflow
* Use **OpenAI format** if you're working with OpenAI models or want broad compatibility
* Use **Anthropic format** if you're primarily using Claude models
* Use **Gemini format** if you're working with Google Gemini models (Gemini Pro, Gemini Flash, etc.)
* You can convert between formats when retrieving messages
* Use **message observing status** to track message processing states in real-time
* Monitor **token counts** to stay within context limits and manage API costs
* Review **session tasks** to understand agent behavior and debug issues
* Leverage the **dashboard** for visual insights into message flows and patterns
## Next steps
Learn how to store files and documents alongside your messages
View and analyze your messages with visual insights and metrics
Debug and monitor message flows with detailed execution traces
Track and observe agent operations and task executions
# Store Special Flag of Anthropic Messages
Source: https://docs.acontext.io/store/messages/special/anthropic
## Prompt Cache
[Prompt caching](https://platform.claude.com/docs/en/build-with-claude/prompt-caching) is a powerful feature that optimizes your API usage by letting you resume from specific prefixes in your prompts.
This approach significantly reduces processing time and costs for repetitive tasks or prompts with consistent elements.
Acontext will automatically store the prompt cache flag in the message metadata.
```python Python theme={null}
client.sessions.store_message(
session_id="session-uuid",
blob={
"role": "user",
"content": [
{
"type": "text",
"text": "",
"cache_control": {"type": "ephemeral"}
}
]
},
format="anthropic"
)
# ...
messages = client.sessions.get_messages(session_id="session-uuid")
# has cache_control flag in the message metadata
```