Integrate OpenAI Agents SDK with Acontext for session persistence, automatic tool execution, and task extraction
The OpenAI Agents SDK provides a high-level framework for building AI agents with automatic tool execution. When integrated with Acontext, you get persistent session management, automatic task extraction, and seamless conversation resumption across sessions.
The OpenAI Agents SDK internally uses the Responses API format (with `function_call` and `function_call_output` items), while Acontext uses the Chat Completions API format (with `tool_calls` and `tool` messages). The integration bridges the two with conversion utilities:
- **To Acontext**: Use `Converter.items_to_messages()` to convert Responses API format to Chat Completions format
- **From Acontext**: Use `message_to_input_items()` to convert Chat Completions format back to Responses API format
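To make the difference concrete, here is an illustrative, hand-written example of the same tool exchange in both formats (the `call_id` and weather values are made up for illustration):

```python
# The exchange as Responses API input items (what the Agents SDK produces):
responses_items = [
    {"type": "function_call", "call_id": "call_1",
     "name": "get_weather", "arguments": '{"city": "Helsinki"}'},
    {"type": "function_call_output", "call_id": "call_1",
     "output": "The weather in Helsinki is sunny"},
]

# The same exchange as Chat Completions messages (what Acontext stores):
chat_messages = [
    {"role": "assistant", "content": None,
     "tool_calls": [{"id": "call_1", "type": "function",
                     "function": {"name": "get_weather",
                                  "arguments": '{"city": "Helsinki"}'}}]},
    {"role": "tool", "tool_call_id": "call_1",
     "content": "The weather in Helsinki is sunny"},
]
```

Note that both formats tie the tool result back to the request via the same call ID; the conversion utilities preserve this pairing.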
```python
from agents import function_tool

@function_tool
def get_weather(city: str) -> str:
    """Returns weather info for the specified city."""
    return f"The weather in {city} is sunny"

@function_tool
def book_flight(from_city: str, to_city: str, date: str) -> str:
    """Book a flight."""
    return f"Flight booked successfully for '{from_city}' to '{to_city}' on '{date}'"
```
This example demonstrates a multi-turn conversation with automatic tool execution and task extraction:
```python
import asyncio

from agents import Agent, Runner, OpenAIChatCompletionsModel, AsyncOpenAI, function_tool
from agents.models.chatcmpl_converter import Converter
from acontext import AcontextClient
from helper import message_to_input_items

# Initialize Acontext
acontext_client = AcontextClient(
    api_key="sk-ac-your-root-api-bearer-token",
    base_url="http://localhost:8029/api/v1",
)

@function_tool
def get_weather(city: str) -> str:
    """Returns weather info for the specified city."""
    return f"The weather in {city} is sunny"

@function_tool
def book_flight(from_city: str, to_city: str, date: str) -> str:
    """Book a flight."""
    return f"Flight booked successfully for '{from_city}' to '{to_city}' on '{date}'"

def create_agent():
    return Agent(
        name="Assistant",
        instructions="You are a helpful assistant",
        model=OpenAIChatCompletionsModel(
            model="gpt-4o-mini",
            openai_client=AsyncOpenAI(),
        ),
        tools=[get_weather, book_flight],
    )

async def session_1(session_id: str):
    agent = create_agent()

    # First interaction
    result = await Runner.run(
        agent,
        "I'd like to have a 3-day trip in Finland. I like to see the nature. Give me the plan",
    )

    # Second interaction - continue conversation
    user_msg_2 = {"role": "user", "content": "The plan sounds good, check the weather there"}
    new_input = result.to_input_list() + [user_msg_2]
    result = await Runner.run(agent, new_input)

    # Convert to Chat Completions format and send to Acontext
    messages = Converter.items_to_messages(result.to_input_list())
    for msg in messages:
        acontext_client.sessions.send_message(
            session_id=session_id,
            blob=msg,
            format="openai",
        )

    # Extract tasks
    acontext_client.sessions.flush(session_id)
    tasks_response = acontext_client.sessions.get_tasks(session_id)
    print("Extracted tasks:")
    for task in tasks_response.items:
        print(f"Task: {task.data['task_description']}")
        print(f"Status: {task.status}")

async def main():
    space = acontext_client.spaces.create()
    session = acontext_client.sessions.create(space_id=space.id)
    await session_1(session.id)

if __name__ == "__main__":
    asyncio.run(main())
```
After completing a conversation, extract tasks with their status and metadata:
```python
# Flush session to trigger task extraction
acontext_client.sessions.flush(session_id)

# Retrieve extracted tasks
tasks_response = acontext_client.sessions.get_tasks(session_id)

for task in tasks_response.items:
    print(f"Task: {task.data['task_description']}")
    print(f"Status: {task.status}")

    # Access progress updates if available
    if "progresses" in task.data:
        for progress in task.data["progresses"]:
            print(f"  Progress: {progress}")

    # Access user preferences if available
    if "user_preferences" in task.data:
        for pref in task.data["user_preferences"]:
            print(f"  Preference: {pref}")
```
Use the `message_to_input_items()` helper to convert Chat Completions format back to Responses API format:
```python
from helper import message_to_input_items

# Load messages from Acontext
messages = acontext_client.sessions.get_messages(session_id, format="openai")

# Convert back to Responses API format
conversation = []
for msg in messages.items:
    items = message_to_input_items(msg)
    conversation.extend(items)

# Use with Agents SDK
result = await Runner.run(agent, conversation)
```
The `message_to_input_items()` helper function handles conversion of:
- User/system messages → `EasyInputMessageParam`
- Assistant messages → `EasyInputMessageParam`, or `ResponseOutputMessageParam` plus `function_call` items when tool calls are present
- Tool messages → `function_call_output` items
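The `helper` module is user-supplied code rather than part of either SDK. As a rough, hypothetical sketch of the mapping above (using plain dicts instead of the SDK's typed params, with edge cases omitted), such a converter might look like:

```python
def message_to_input_items(msg: dict) -> list[dict]:
    """Simplified sketch: one Chat Completions message -> Responses API input items."""
    role = msg.get("role")
    items = []
    if role in ("user", "system"):
        # Plain text messages map one-to-one
        items.append({"role": role, "content": msg.get("content") or ""})
    elif role == "assistant":
        if msg.get("content"):
            items.append({"role": "assistant", "content": msg["content"]})
        # Each tool call becomes a separate function_call item
        for tc in msg.get("tool_calls") or []:
            items.append({
                "type": "function_call",
                "call_id": tc["id"],
                "name": tc["function"]["name"],
                "arguments": tc["function"]["arguments"],
            })
    elif role == "tool":
        # Tool results become function_call_output items
        items.append({
            "type": "function_call_output",
            "call_id": msg["tool_call_id"],
            "output": msg.get("content") or "",
        })
    return items
```

This is a mental model, not the actual helper: the real implementation builds the SDK's typed param objects rather than raw dicts.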
- **Batch message sending**: Convert the entire conversation at once using `Converter.items_to_messages(result.to_input_list())` rather than converting individual messages.
- **Tool execution**: The Agents SDK handles tool execution automatically. You don't need to manually execute tools or handle tool responses.
- **Conversation continuation**: Use `result.to_input_list()` to get the conversation history in Responses API format, then append new messages to continue the conversation.
- **Format specification**: Always specify `format="openai"` when sending messages to Acontext to ensure proper format handling.
In a production agent, you don't need to call the flush method after each conversation: Acontext automatically flushes the buffer when it is full or idle. To understand how this works, see Session Buffer Mechanism.
The OpenAI Agents SDK differs from the basic OpenAI Python SDK in several key ways:
Automatic Tool Execution
The Agents SDK automatically executes tools when the model requests them. You don’t need to manually check for tool calls or execute tools yourself.
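For contrast, here is a hypothetical sketch of the manual loop that a basic Chat Completions workflow would require, and that the Agents SDK performs for you (the `TOOLS` registry and message shapes are made up for illustration):

```python
import json

# Hypothetical registry mapping tool names to local functions
TOOLS = {"get_weather": lambda city: f"The weather in {city} is sunny"}

def run_tool_calls(assistant_msg: dict) -> list[dict]:
    """Execute each requested tool and build the 'tool' messages that
    would otherwise have to be appended to the conversation by hand."""
    tool_messages = []
    for tc in assistant_msg.get("tool_calls") or []:
        fn = TOOLS[tc["function"]["name"]]
        args = json.loads(tc["function"]["arguments"])
        tool_messages.append({
            "role": "tool",
            "tool_call_id": tc["id"],
            "content": fn(**args),
        })
    return tool_messages
```

With the Agents SDK, this check-execute-append cycle (including looping back to the model until no tool calls remain) happens inside `Runner.run()`.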
Responses API Format
The Agents SDK uses OpenAI's Responses API format internally, which uses `function_call` and `function_call_output` items instead of `tool_calls` and `tool` messages.
Higher-Level API
The Agents SDK provides a higher-level API with Runner.run() and Agent class, making it easier to build agents without managing API calls directly.
Function Tool Decorator
Tools are defined using the @function_tool decorator, which automatically registers them with the agent.
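As a rough mental model only (the real decorator also derives a JSON schema for the tool's parameters from the function signature, which is omitted here), a hypothetical simplified version might capture metadata like this:

```python
def function_tool(fn):
    """Hypothetical, simplified stand-in for the real decorator:
    records the name and description the SDK derives from the function."""
    fn.tool_name = fn.__name__
    fn.tool_description = (fn.__doc__ or "").strip()
    return fn

@function_tool
def get_weather(city: str) -> str:
    """Returns weather info for the specified city."""
    return f"The weather in {city} is sunny"
```

The point is that the function name, docstring, and type hints become the tool's interface; the actual SDK wraps the function in a tool object that the `Agent` can invoke.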