The OpenAI Python SDK provides direct access to OpenAI’s API for building AI applications. When integrated with Acontext, you get persistent session management, automatic task extraction, and full observability of your agent’s tool usage and conversations.

What This Integration Provides

Session Persistence

Store conversation history across multiple agent runs and resume sessions seamlessly

Manual Tool Calling

Full control over tool execution with explicit handling of function calls

Task Extraction

Automatically identify and track tasks from agent conversations with progress updates

Tool Observability

Track all tool calls and their results for complete visibility into agent behavior

Quick Start

Download Template

Use acontext-cli to quickly set up an OpenAI Python SDK project with Acontext integration:
acontext create my-openai-project --template-path "python/openai-basic"
If you haven’t installed acontext-cli yet, install it first:
curl -fsSL https://install.acontext.io | sh

Manual Setup

If you prefer to set up manually:
1. Install dependencies

Install OpenAI and Acontext Python packages:
uv sync
Or with pip:
pip install openai acontext python-dotenv
2. Configure environment

Create a .env file with your API credentials:
OPENAI_API_KEY=your_openai_key_here
ACONTEXT_API_KEY=sk-ac-your-root-api-bearer-token
ACONTEXT_BASE_URL=http://localhost:8029/api/v1
Never commit API keys to version control. Always use environment variables or secure secret management.
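Since python-dotenv is installed in the previous step, you can load these variables at startup. A minimal sketch, assuming a `.env` file like the one above sits in the working directory (the try/except fallback is only for environments where python-dotenv is absent):

```python
import os

# Load variables from .env if python-dotenv is available; otherwise fall back
# to whatever is already in the process environment.
try:
    from dotenv import load_dotenv
    load_dotenv()
except ImportError:
    pass

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
ACONTEXT_API_KEY = os.getenv("ACONTEXT_API_KEY")
# Default to a local Acontext server when no base URL is configured.
ACONTEXT_BASE_URL = os.getenv("ACONTEXT_BASE_URL", "http://localhost:8029/api/v1")
```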
3. Initialize clients

Create OpenAI and Acontext client instances:
from openai import OpenAI
from acontext import AcontextClient
import os

openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

acontext_client = AcontextClient(
    api_key=os.getenv("ACONTEXT_API_KEY", "sk-ac-your-root-api-bearer-token"),
    base_url=os.getenv("ACONTEXT_BASE_URL", "http://localhost:8029/api/v1"),
)

How It Works

The OpenAI Python SDK integration works by sending conversation messages to Acontext in OpenAI message format. Since both use the same format, no conversion is needed.

Message Flow

  1. Create session: Initialize a new Acontext session for your agent
  2. Send messages: Append each message (user, assistant, and tool) to Acontext as the conversation progresses
  3. Handle tool calls: Manually execute tools when OpenAI requests them
  4. Extract tasks: After the conversation, flush the session and retrieve extracted tasks
  5. Resume sessions: Load previous conversation history to continue where you left off

Basic Integration Pattern

Here’s the core pattern for integrating OpenAI Python SDK with Acontext:
import os

from openai import OpenAI
from acontext import AcontextClient

# Initialize clients
openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
acontext_client = AcontextClient(
    api_key="sk-ac-your-root-api-bearer-token",
    base_url="http://localhost:8029/api/v1"
)

# Create Acontext session
space = acontext_client.spaces.create()
session = acontext_client.sessions.create(space_id=space.id)

# Build conversation
conversation = []
user_msg = {"role": "user", "content": "Hello!"}
conversation.append(user_msg)
acontext_client.sessions.send_message(session_id=session.id, blob=user_msg)

# Call OpenAI API
response = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=conversation,
)

# Send assistant response to Acontext
assistant_msg = {
    "role": response.choices[0].message.role,
    "content": response.choices[0].message.content
}
conversation.append(assistant_msg)
acontext_client.sessions.send_message(session_id=session.id, blob=assistant_msg)

Tool Calling

This integration demonstrates manual tool calling, giving you full control over tool execution:

Define Tools

Define your tools in OpenAI’s function calling format:
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Returns weather info for the specified city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city to get weather for",
                    }
                },
                "required": ["city"],
                "additionalProperties": False,
            },
        },
    },
]

Execute Tools Manually

Handle tool calls in a loop until the agent provides a final response:
import json

def run_agent(client: OpenAI, conversation: list[dict]) -> tuple[str, list[dict]]:
    """Run the agent with tool calling support."""
    messages_to_send = list(conversation)
    
    new_messages = []
    max_iterations = 10
    iteration = 0
    
    while iteration < max_iterations:
        iteration += 1
        
        # Call OpenAI API
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages_to_send,
            tools=tools,
            tool_choice="auto",
        )
        
        message = response.choices[0].message
        message_dict = {"role": message.role, "content": message.content}
        
        # Handle tool calls
        if message.tool_calls:
            message_dict["tool_calls"] = [
                {
                    "id": tc.id,
                    "type": "function",
                    "function": {
                        "name": tc.function.name,
                        "arguments": tc.function.arguments,
                    },
                }
                for tc in message.tool_calls
            ]
            messages_to_send.append(message_dict)
            new_messages.append(message_dict)
            
            # Execute tools
            for tool_call in message.tool_calls:
                function_name = tool_call.function.name
                function_args = json.loads(tool_call.function.arguments)
                function_result = execute_tool(function_name, function_args)
                
                # Add tool response
                tool_message = {
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "content": function_result,
                }
                messages_to_send.append(tool_message)
                new_messages.append(tool_message)
        else:
            # No more tool calls, we're done
            break
    
    return message.content, new_messages

Send Messages to Acontext

Send all messages (including tool calls and tool responses) to Acontext:
def append_message(message: dict, conversation: list[dict], session_id: str):
    """Append a message to conversation and send to Acontext."""
    conversation.append(message)
    acontext_client.sessions.send_message(session_id=session_id, blob=message)
    return conversation

# After running agent
response_content, new_messages = run_agent(openai_client, conversation)
for msg in new_messages:
    conversation = append_message(msg, conversation, session_id)

Complete Example

This example demonstrates a multi-turn conversation with tool calling and task extraction:
import asyncio
import json
import os

from openai import OpenAI
from acontext import AcontextClient

# Initialize clients
openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
acontext_client = AcontextClient(
    api_key="sk-ac-your-root-api-bearer-token",
    base_url="http://localhost:8029/api/v1"
)

# Tool definitions
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Returns weather info for the specified city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "The city to get weather for"}
                },
                "required": ["city"],
            },
        },
    },
]

def get_weather(city: str) -> str:
    """Returns weather info for the specified city."""
    return f"The weather in {city} is sunny"

def execute_tool(tool_name: str, tool_args: dict) -> str:
    """Execute a tool by name with given arguments."""
    if tool_name == "get_weather":
        return get_weather(**tool_args)
    else:
        return f"Unknown tool: {tool_name}"

def append_message(message: dict, conversation: list[dict], session_id: str):
    conversation.append(message)
    acontext_client.sessions.send_message(session_id=session_id, blob=message)
    return conversation

async def main():
    # Create space and session
    space = acontext_client.spaces.create()
    session = acontext_client.sessions.create(space_id=space.id)
    
    conversation = []
    
    # First interaction
    user_msg = {"role": "user", "content": "What's the weather in Helsinki?"}
    conversation = append_message(user_msg, conversation, session.id)
    
    # Run agent with tool calling
    response_content, new_messages = run_agent(openai_client, conversation)
    
    # Send all messages to Acontext
    for msg in new_messages:
        conversation = append_message(msg, conversation, session.id)
    
    # Extract tasks
    acontext_client.sessions.flush(session.id)
    tasks_response = acontext_client.sessions.get_tasks(session.id)
    
    print("Extracted tasks:")
    for task in tasks_response.items:
        print(f"Task: {task.data['task_description']}")
        print(f"Status: {task.status}")

if __name__ == "__main__":
    asyncio.run(main())

Key Features

Session Persistence

Resume conversations by loading previous messages from Acontext:
# Load previous conversation
messages = acontext_client.sessions.get_messages(session_id)
conversation = messages.items

# Continue conversation
conversation.append({"role": "user", "content": "Summarize our conversation"})
response = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=conversation,
)

Task Extraction

After completing a conversation, extract tasks with their status and metadata:
# Flush session to trigger task extraction
acontext_client.sessions.flush(session_id)

# Retrieve extracted tasks
tasks_response = acontext_client.sessions.get_tasks(session_id)

for task in tasks_response.items:
    print(f"Task: {task.data['task_description']}")
    print(f"Status: {task.status}")
    
    # Access progress updates if available
    if "progresses" in task.data:
        for progress in task.data["progresses"]:
            print(f"  Progress: {progress}")
    
    # Access user preferences if available
    if "user_preferences" in task.data:
        for pref in task.data["user_preferences"]:
            print(f"  Preference: {pref}")

Tool Call Tracking

Acontext automatically tracks all tool calls and their results when messages are sent:
# Tool calls are automatically tracked when you send messages
message_with_tool_call = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_123",
            "type": "function",
            "function": {
                "name": "get_weather",
                "arguments": '{"city": "Helsinki"}'
            }
        }
    ]
}
acontext_client.sessions.send_message(session_id=session_id, blob=message_with_tool_call)

# Tool results are also tracked
tool_result = {
    "role": "tool",
    "tool_call_id": "call_123",
    "content": "The weather in Helsinki is sunny"
}
acontext_client.sessions.send_message(session_id=session_id, blob=tool_result)

Best Practices

Message format: OpenAI message format is compatible with Acontext, so you can send messages directly without conversion. This includes user, assistant, and tool messages. System prompts should be handled through session-level or skill-level configuration rather than as messages.
Tool execution: Always execute tools in the order they appear in tool_calls, and include the tool_call_id in tool response messages for proper tracking.
Iteration limits: Set a reasonable max_iterations limit for tool calling loops to prevent infinite loops if the agent keeps requesting tools.
In a production agent, you don't need to call flush after each conversation; Acontext automatically flushes the buffer when it is full or idle. To understand the buffer mechanism, refer to Session Buffer Mechanism.
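To enforce the tool-execution practice above, it can help to check that every tool call a conversation requested got a matching tool response. This is a hypothetical helper, not part of the SDK; it only inspects the OpenAI-format message dicts shown throughout this page:

```python
def unanswered_tool_calls(conversation: list[dict]) -> set[str]:
    """Return tool_call_ids that have no corresponding tool response message."""
    # Every id the assistant requested via tool_calls.
    requested = {
        tc["id"]
        for msg in conversation
        for tc in (msg.get("tool_calls") or [])
    }
    # Every id answered by a role="tool" message.
    answered = {
        msg["tool_call_id"]
        for msg in conversation
        if msg.get("role") == "tool"
    }
    return requested - answered
```

Running this before sending messages to Acontext catches a dropped `tool_call_id` early, which would otherwise surface as an incomplete tool trace in observability.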

Next Steps