Acontext provides pre-built filesystem tools that allow LLMs to read, write, and manage files on disks through function calling. You can integrate these tools with OpenAI or Anthropic APIs to create agents that persist data autonomously.

Available Tools

The SDK includes seven disk operation tools:
  • write_file - Create or overwrite text files
  • read_file - Read file contents with optional line offset and limit
  • replace_string - Find and replace text in files
  • list_artifacts - List files and directories in a path
  • download_file - Get a public URL to download a file
  • grep_artifacts - Search file contents using regex patterns
  • glob_artifacts - Find files by path pattern using glob syntax
These tools handle path normalization automatically and support nested directory structures like /notes/, /documents/2024/, etc.
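To illustrate the kind of normalization you can expect, paths like "notes", "/notes", and "/notes/" all refer to the same directory. The following is a hypothetical sketch of such a rule for intuition only, not the SDK's actual implementation:

```python
import posixpath


def normalize_dir(path: str) -> str:
    """Hypothetical sketch: coerce a directory path into the "/notes/" form."""
    # Anchor at the root, then collapse things like "/notes/./2024" into "/notes/2024"
    cleaned = posixpath.normpath("/" + path.strip().lstrip("/"))
    # Root stays "/"; every other directory gets a trailing slash
    return "/" if cleaned == "/" else cleaned + "/"


print(normalize_dir("notes"))            # /notes/
print(normalize_dir("/documents/2024"))  # /documents/2024/
print(normalize_dir("/"))                # /
```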

Building an Agent with Filesystem

You can build an agentic loop where the LLM autonomously calls disk tools to complete file-related tasks. Here’s a complete example:
import json
import os

from acontext import AcontextClient
from acontext.agent.disk import DISK_TOOLS
from openai import OpenAI

# Initialize clients
acontext_client = AcontextClient(
    api_key=os.getenv("ACONTEXT_API_KEY"),
)

# If you're using self-hosted Acontext:
# acontext_client = AcontextClient(
#     base_url="http://localhost:8029/api/v1",
#     api_key="sk-ac-your-root-api-bearer-token",
# )
openai_client = OpenAI()

# Create a disk and tool context
disk = acontext_client.disks.create()
ctx = DISK_TOOLS.format_context(acontext_client, disk.id)

# Get tool schemas for OpenAI
tools = DISK_TOOLS.to_openai_tool_schema()
print(tools)  # optional: inspect the generated schemas
# Simple agentic loop
messages = [
    {
        "role": "user",
        "content": "Create a todo.md file with 3 tasks. Then give me the public download URL",
    }
]

while True:
    response = openai_client.chat.completions.create(
        model="gpt-4.1",
        messages=messages,
        tools=tools,
    )

    message = response.choices[0].message
    messages.append(message)

    # Break if no tool calls
    if not message.tool_calls:
        print(f"🤖 Assistant: {message.content}")
        break

    # Execute each tool call
    for tool_call in message.tool_calls:
        print(f"⚙️ Called {tool_call.function.name}")
        result = DISK_TOOLS.execute_tool(
            ctx, tool_call.function.name, json.loads(tool_call.function.arguments)
        )
        print(f"🔍 Result: {result}")
        messages.append(
            {"role": "tool", "tool_call_id": tool_call.id, "content": result}
        )

The agent will automatically call the appropriate tools (write_file, read_file, etc.) to complete your request, creating a fully autonomous file management system.

How It Works

1. Initialize clients and create a disk

Set up both the Acontext client and your LLM client (OpenAI or Anthropic). Create a disk to store files.
import os

from acontext import AcontextClient
from acontext.agent.disk import DISK_TOOLS

client = AcontextClient(
    api_key=os.getenv("ACONTEXT_API_KEY"),
)
disk = client.disks.create()
ctx = DISK_TOOLS.format_context(client, disk.id)
2. Get tool schemas for your LLM

Convert the disk tools to the schema format your LLM provider expects.
# For OpenAI
tools = DISK_TOOLS.to_openai_tool_schema()

# For Anthropic
tools = DISK_TOOLS.to_anthropic_tool_schema()
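The two providers wrap the same JSON Schema parameters differently: OpenAI nests each tool under a "function" key, while Anthropic expects flat entries with an "input_schema" key. As an illustration of the difference (the exact output of the to_*_tool_schema() helpers may vary), one shape can be mapped to the other like this:

```python
# OpenAI wraps each tool as {"type": "function", "function": {...}};
# Anthropic expects flat {"name", "description", "input_schema"} entries.
openai_tool = {
    "type": "function",
    "function": {
        "name": "write_file",
        "description": "Create or overwrite a text file on the disk.",
        "parameters": {
            "type": "object",
            "properties": {
                "filename": {"type": "string"},
                "content": {"type": "string"},
                "file_path": {"type": "string"},
            },
            "required": ["filename", "content"],
        },
    },
}


def to_anthropic(tool: dict) -> dict:
    fn = tool["function"]
    return {
        "name": fn["name"],
        "description": fn["description"],
        "input_schema": fn["parameters"],  # same JSON Schema, different key
    }


anthropic_tool = to_anthropic(openai_tool)
print(anthropic_tool["name"])  # write_file
```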
3. Implement the agentic loop

Create a loop that appends messages to the conversation history, executes any tool calls, and feeds the results back to the LLM until the task is complete.

Executing tools after an LLM response: when the LLM responds with tool calls, iterate through each one and execute it with DISK_TOOLS.execute_tool() (Python) or DISK_TOOLS.executeTool() (TypeScript):
# Execute each tool call from the LLM response
for tool_call in message.tool_calls:
    result = DISK_TOOLS.execute_tool(
        ctx,                                      # Tool context with disk ID
        tool_call.function.name,                  # Tool name (e.g., "write_file")
        json.loads(tool_call.function.arguments)  # Parse arguments
    )
    # Add tool result back to message history
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": result
    })
The loop continues until the LLM returns a message without tool calls, indicating the task is complete.
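Note that tool_call.function.arguments arrives as a JSON-encoded string, not a dict, which is why the loop runs it through json.loads() before execution. A quick illustration with a made-up payload:

```python
import json

# The model returns tool arguments as a JSON string ...
raw_arguments = '{"filename": "todo.md", "content": "- buy milk", "file_path": "/notes/"}'

# ... which must be decoded into a dict before being passed to execute_tool
arguments = json.loads(raw_arguments)
print(arguments["filename"])  # todo.md
```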

Tool Reference

write_file

Create or overwrite a text file on the disk. Parameters:
  • filename (required) - Name of the file, e.g., "report.md"
  • content (required) - Text content to write to the file
  • file_path (optional) - Directory path, e.g., "/notes/" (defaults to "/")
Returns: Success message with file path

read_file

Read the contents of a text file from the disk. Parameters:
  • filename (required) - Name of the file to read
  • file_path (optional) - Directory path where the file is located (defaults to "/")
  • line_offset (optional) - Starting line number (defaults to 0)
  • line_limit (optional) - Maximum number of lines to return (defaults to 100)
Returns: File content with line range information

replace_string

Replace all occurrences of a string in a file. Parameters:
  • filename (required) - Name of the file to modify
  • old_string (required) - String to be replaced
  • new_string (required) - Replacement string
  • file_path (optional) - Directory path where the file is located (defaults to "/")
Returns: Number of replacements made
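To make the "replace all occurrences" behavior concrete, here is the equivalent operation in plain Python. This illustrates the semantics only, not the SDK's implementation:

```python
content = "TODO: write tests\nTODO: write docs\n"

old_string, new_string = "TODO", "DONE"
replacements = content.count(old_string)          # every occurrence counts
content = content.replace(old_string, new_string)  # and every one is replaced

print(replacements)  # 2
print(content)
```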

list_artifacts

List all files and directories at a specified path. Parameters:
  • file_path (required) - Directory path to list, e.g., "/notes/" or "/"
Returns: List of files and directories

download_file

Get a public presigned URL to download a file. Parameters:
  • filename (required) - Name of the file to get the download URL for
  • file_path (optional) - Directory path where the file is located (defaults to "/")
  • expire (optional) - URL expiration time in seconds (defaults to 3600)
Returns: Presigned public URL for downloading the file

grep_artifacts

Search for text patterns within file contents using regex. Only searches text-based files (code, markdown, JSON, CSV, etc.). Parameters:
  • query (required) - Regex pattern to search for (e.g., "TODO.*", "function.*calculate", "import.*pandas")
  • limit (optional) - Maximum number of results to return (defaults to 100)
Returns: List of files matching the pattern
This uses regex syntax, not glob. In regex:
  • .* means “any characters” (zero or more)
  • * alone means “zero or more of the preceding character”
For example, "test*" matches “tes”, “test”, “testt” — NOT “test-file”. Use "test.*" to match “test-file”.
Common patterns: "TODO.*" (TODO comments), "#.*Summary" (markdown headers), "error" (case-sensitive match), "(?i)error" (case-insensitive).
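You can verify the regex-vs-glob distinction with Python's re module, checking which strings each pattern matches as a whole:

```python
import re

# "test*" means "tes" followed by zero or more "t"s -- it does NOT
# match arbitrary suffixes the way a shell glob would.
print(bool(re.fullmatch(r"test*", "tes")))        # True
print(bool(re.fullmatch(r"test*", "testt")))      # True
print(bool(re.fullmatch(r"test*", "test-file")))  # False

# "test.*" is the regex way to say "test followed by anything"
print(bool(re.fullmatch(r"test.*", "test-file")))  # True

# (?i) makes the match case-insensitive
print(bool(re.search(r"(?i)error", "ERROR: disk full")))  # True
```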

glob_artifacts

Find files by path pattern using glob syntax. Use * for any characters, ? for single character, ** for recursive directories. Parameters:
  • query (required) - Glob pattern (e.g., "**/*.md" for all markdown files, "*.txt" for text files in root, "/docs/**/*.md" for markdown in docs)
  • limit (optional) - Maximum number of results to return (defaults to 100)
Returns: List of files matching the glob pattern
Use glob_artifacts to find files by extension or location, perfect for discovering files without knowing exact names.
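These patterns behave like pathlib's glob, so you can sanity-check a pattern against a local directory tree before sending it to glob_artifacts (assuming the SDK follows standard glob semantics):

```python
import pathlib
import tempfile

# Build a small throwaway tree to test patterns against
root = pathlib.Path(tempfile.mkdtemp())
(root / "docs" / "2024").mkdir(parents=True)
(root / "readme.md").write_text("root-level markdown")
(root / "notes.txt").write_text("root-level text")
(root / "docs" / "2024" / "plan.md").write_text("nested markdown")

# "**/*.md" walks every directory level; "*.txt" stays in the root
md_files = sorted(p.relative_to(root).as_posix() for p in root.glob("**/*.md"))
txt_files = sorted(p.relative_to(root).as_posix() for p in root.glob("*.txt"))

print(md_files)   # ['docs/2024/plan.md', 'readme.md']
print(txt_files)  # ['notes.txt']
```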
All file operations are scoped to the specific disk. Ensure you create and configure the disk context correctly before executing tools.
For async Python usage, see Async Python Client, which covers the async_format_context() and async_execute_tool() methods.