Acontext automatically extracts tasks from your agent’s conversation messages. When an agent outlines a plan or breaks down work into steps, Acontext detects and tracks these tasks in the background, giving you visibility into what your agent is planning and executing.

How Task Extraction Works

As your agent converses with users, Acontext analyzes the conversation context to identify planned tasks. For example, when an agent responds with “My plan is: 1. Search for data, 2. Create a project, 3. Deploy”, Acontext extracts these as individual trackable tasks. Key capabilities:
  • Automatic extraction: Tasks are detected from conversation context without manual tracking
  • Status monitoring: Track whether tasks are pending, running, success, or failed
  • Execution insights: See what your agent planned versus what it actually completed

Task Extraction Has a Delay

Task extraction happens asynchronously, with a small delay, to optimize costs and performance. Batch processing keeps extraction cost-efficient:
  • Acontext batches multiple messages together before analyzing them for tasks
  • This reduces the number of LLM calls needed for extraction, saving costs
  • The system waits a few seconds to collect messages before starting extraction
You can use the flush method to block until all pending tasks have been extracted:
client.sessions.flush(session.id)

Quick Start: Test Task Extraction

This example demonstrates how to verify that Acontext correctly extracts tasks from your agent’s messages. You’ll send a conversation where the agent outlines a plan, then retrieve the extracted tasks to confirm they were detected.
from acontext import AcontextClient

# Initialize client
client = AcontextClient(
    api_key="sk-ac-your-root-api-bearer-token"
)

# Create a session
session = client.sessions.create()

# Conversation messages
messages = [
    {
        "role": "user",
        "content": "I need to build a landing page for the iPhone 15 Pro Max"
    },
    {
        "role": "assistant",
        "content": "Sure, my plan is below:\n1. Search for the latest news about the iPhone 15 Pro Max\n2. Initialize a Next.js project for the landing page\n3. Deploy the landing page to the website"
    },
    {
        "role": "user",
        "content": "That sounds good. First collect the information and report back to me before any landing page coding."
    },
    {
        "role": "assistant",
        "content": "Sure, I will first collect the information and report to you before writing any landing page code."
    }
]

# Send messages in a loop
for msg in messages:
    client.sessions.send_message(
        session_id=session.id,
        blob=msg,
        format="openai"
    )

# Wait for task extraction to complete
client.sessions.flush(session.id)
# Display extracted tasks
tasks_response = client.sessions.get_tasks(session.id)
print(tasks_response)
for task in tasks_response.items:
    print(f"\nTask #{task.order}:")
    print(f"  ID: {task.id}")
    print(f"  Title: {task.data['task_description']}")
    print(f"  Status: {task.status}")

    # Show progress updates if available
    if "progresses" in task.data:
        print(f"  Progress updates: {len(task.data['progresses'])}")
        for progress in task.data["progresses"]:
            print(f"    - {progress}")

    # Show user preferences if available
    if "user_preferences" in task.data:
        print("  User preferences:")
        for pref in task.data["user_preferences"]:
            print(f"    - {pref}")
After running this code, you’ll see the tasks that Acontext automatically extracted from the agent’s planned steps, confirming the extraction is working correctly.

Understanding Task Data

Each extracted task contains a data field with structured information captured from the conversation:
{
    "progresses": [
        "I searched for iPhone 15 Pro Max specifications and found the latest features",
        "I've initialized the Next.js project with the latest template"
    ],
    "user_preferences": [
        "Focus on the camera capabilities and battery life",
        "Make sure the landing page is mobile-responsive"
    ]
}

Progress Tracking

The progresses array captures the agent’s narrative updates as it works through tasks. Each entry describes what the agent accomplished, written in first-person perspective.
# Access progress updates
for task in tasks_response.items:
    if "progresses" in task.data:
        print(f"Task {task.order} progress:")
        for progress in task.data["progresses"]:
            print(f"  - {progress}")

User Preferences

The user_preferences array stores specific requirements or preferences the user mentioned for each task during the conversation.
# Check user preferences for a task
for task in tasks_response.items:
    if "user_preferences" in task.data:
        print(f"Task {task.order} user preferences:")
        for pref in task.data["user_preferences"]:
            print(f"  - {pref}")
Progress and preferences are appended to tasks as the conversation continues. Early in a conversation, these arrays may be empty or contain only initial entries.
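Because either array may be absent early in a conversation, reading task.data with dict.get defaults avoids repeated key checks. A minimal sketch (summarize_task_data is our own helper name, not part of the SDK):

```python
def summarize_task_data(data: dict) -> dict:
    """Count progress and preference entries, tolerating missing keys."""
    return {
        "progress_count": len(data.get("progresses", [])),
        "preference_count": len(data.get("user_preferences", [])),
    }

# Works whether or not extraction has populated the arrays yet
summarize_task_data({})  # {'progress_count': 0, 'preference_count': 0}
```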

View it in Dashboard

You can view extracted tasks in the UI by opening the “Tasks” tab on the session page.

Task list showing all extracted tasks with their status


Detailed task view showing progress updates and user preferences

Use Cases for Agent Developers

Verify that Acontext correctly extracts tasks from your agent’s conversation patterns. This is essential when developing or updating agent prompts.
# Get all extracted tasks
response = client.sessions.get_tasks(session_id, time_desc=False)

# Verify extraction worked
print(f"Expected 3 tasks, extracted {len(response.items)}")
for task in response.items:
    print(f"Task {task.order}: {task.data}")
When your agent isn’t completing work as expected, check extracted tasks to see if the agent is planning steps correctly or if it’s getting stuck at a specific task.
response = client.sessions.get_tasks(session_id)

# Identify where the agent got stuck
for task in response.items:
    if task.status == "pending":
        print(f"Agent hasn't started: Task {task.order}")
    elif task.status == "running":
        print(f"Agent stuck on: Task {task.order}")
        # Check last progress update
        if "progresses" in task.data and task.data["progresses"]:
            print(f"  Last progress: {task.data['progresses'][-1]}")
    elif task.status == "failed":
        print(f"Agent failed at: Task {task.order}")
        # Check what was done before failure
        if "progresses" in task.data:
            print(f"  Completed steps: {len(task.data['progresses'])}")
Collect task data across multiple sessions to understand how your agent breaks down different types of requests. Use this for optimizing prompts or identifying common failure points.
response = client.sessions.get_tasks(session_id)

# Analyze planning vs execution
planned = len(response.items)
completed = sum(1 for t in response.items if t.status == "success")
completion_rate = completed / planned * 100

print(f"Agent completed {completion_rate:.0f}% of planned tasks")
Generate reports on what agents are planning and executing to share with stakeholders or for compliance purposes.
# Get task history with timestamps
response = client.sessions.get_tasks(session_id, time_desc=True)

# Create activity report
print("=== Agent Activity Report ===")
for task in response.items:
    print(f"\n{task.created_at} | Task {task.order} | {task.status}")
    
    # Show progress summary
    if "progresses" in task.data:
        print(f"  Progress entries: {len(task.data['progresses'])}")
        if task.data["progresses"]:
            print(f"  Latest: {task.data['progresses'][-1]}")
    
    # Show user requirements
    if "user_preferences" in task.data and task.data["user_preferences"]:
        print(f"  User requirements: {', '.join(task.data['user_preferences'])}")

Best Practices

Poll for extraction

After sending messages, poll the tasks endpoint in a loop until tasks are extracted rather than relying on fixed delays; alternatively, call sessions.flush to block until extraction completes.
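A polling loop can look like the sketch below; wait_for_tasks and its parameters are our own names, wrapping the get_tasks call shown earlier:

```python
import time

def wait_for_tasks(client, session_id, expected=1, timeout=30.0, interval=2.0):
    """Poll get_tasks until at least `expected` tasks appear or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        response = client.sessions.get_tasks(session_id)
        if len(response.items) >= expected:
            return response.items
        time.sleep(interval)
    raise TimeoutError(f"No tasks extracted within {timeout}s")
```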

Test extraction patterns

When developing agents, test different conversation patterns to ensure Acontext reliably extracts the tasks you expect.

Monitor execution gaps

Regularly check for tasks stuck in pending or running status to identify where your agent needs improvement.

Analyze completion rates

Track the ratio of success to failed tasks across sessions to measure and improve agent reliability.
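The ratio itself is simple to compute once you have collected statuses (via get_tasks, as in the examples above). A minimal sketch, with completion_rate as our own helper:

```python
from collections import Counter

def completion_rate(statuses):
    """Percentage of tasks with status 'success'; 0.0 for an empty list."""
    if not statuses:
        return 0.0
    return 100.0 * Counter(statuses)["success"] / len(statuses)

completion_rate(["success", "failed", "success", "running"])  # 50.0
```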

Next Steps