Store and retrieve messages with images, audio, and documents in OpenAI and Anthropic formats
Acontext supports multi-modal messages that include text, images, audio, and PDF documents. You can store and retrieve these messages in both OpenAI and Anthropic formats, with automatic format conversion between providers.
Multi-modal content is stored as assets in S3, while message metadata is stored in PostgreSQL. Acontext automatically handles file uploads and generates presigned URLs for retrieval.
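Because the stored message is provider-agnostic, a message written in OpenAI format can be read back in Anthropic format. Below is a minimal sketch of that round trip, assuming a local image.png and reusing the store_message and get_messages calls covered later in this guide:

```python
import base64

from acontext import AcontextClient

client = AcontextClient(
    api_key="sk-ac-your-root-api-bearer-token",
    base_url="http://localhost:8029/api/v1"
)

session = client.sessions.create()

with open("image.png", "rb") as image_file:
    image_data = base64.b64encode(image_file.read()).decode("utf-8")

# Store the image in OpenAI format (base64 data URL)...
client.sessions.store_message(
    session_id=session.id,
    blob={
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{image_data}"},
            },
        ],
    },
    format="openai",
)

# ...then read the same session back in Anthropic format
result = client.sessions.get_messages(session_id=session.id, format="anthropic")
for block in result.items[0].content:
    print(block.get("type"))  # e.g. "text" and "image"
```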
OpenAI supports images through the image_url content part type, which accepts both external URLs and base64-encoded data URLs:
Image URL
```python
from acontext import AcontextClient

client = AcontextClient(
    api_key="sk-ac-your-root-api-bearer-token",
    base_url="http://localhost:8029/api/v1"
)
client.ping()

session = client.sessions.create()

# Store a message with an image URL
message = client.sessions.store_message(
    session_id=session.id,
    blob={
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What's in this image?"
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://example.com/image.png",
                    "detail": "high"  # Options: "low", "high", "auto"
                }
            }
        ]
    },
    format="openai"
)

print(f"Message with image sent: {message.id}")
```

Base64-encoded

```python
import base64

from acontext import AcontextClient

client = AcontextClient(
    api_key="sk-ac-your-root-api-bearer-token",
    base_url="http://localhost:8029/api/v1"
)

session = client.sessions.create()

# Read and encode image as base64
with open("image.png", "rb") as image_file:
    image_data = base64.b64encode(image_file.read()).decode("utf-8")

# Store message with base64 image (data URL format)
message = client.sessions.store_message(
    session_id=session.id,
    blob={
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What's in this image?"
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": f"data:image/png;base64,{image_data}",
                    "detail": "high"  # Options: "low", "high", "auto"
                }
            }
        ]
    },
    format="openai"
)

print(f"Message with base64 image sent: {message.id}")
```
The detail parameter controls image processing quality. Use "high" for detailed analysis, "low" for faster processing, or "auto" to let the system decide.
Base64-encoded images in OpenAI format use the data URL scheme: data:image/[type];base64,[base64-data]. The image data is stored within the message parts and returned as base64 when retrieved.
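If you work with more than one image format, you can derive the media type from the file itself when building the data URL. A small sketch; the build_image_data_url helper is illustrative, not part of the Acontext SDK:

```python
import base64
import mimetypes


def build_image_data_url(path: str) -> str:
    """Encode a local image as a data URL: data:image/<type>;base64,<data>."""
    media_type, _ = mimetypes.guess_type(path)
    if media_type is None or not media_type.startswith("image/"):
        raise ValueError(f"Not a recognized image file: {path}")
    with open(path, "rb") as f:
        data = base64.b64encode(f.read()).decode("utf-8")
    return f"data:{media_type};base64,{data}"


# Pass the result as the image_url.url field shown above
print(build_image_data_url("photo.jpg")[:50])
```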
Anthropic format uses the image content type with a base64 source:

```python
import base64

from acontext import AcontextClient

client = AcontextClient(
    api_key="sk-ac-your-root-api-bearer-token",
    base_url="http://localhost:8029/api/v1"
)

session = client.sessions.create()

# Read and encode image as base64
with open("image.png", "rb") as image_file:
    image_data = base64.b64encode(image_file.read()).decode("utf-8")

# Store message with base64 image
message = client.sessions.store_message(
    session_id=session.id,
    blob={
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Describe this image"
            },
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": "image/png",
                    "data": image_data
                }
            }
        ]
    },
    format="anthropic"
)

print(f"Message with image sent: {message.id}")
```
Anthropic format requires images to be base64-encoded. The base64 data is stored within the message parts and returned as base64 when you retrieve the message.
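If your images start out as URLs, you need to download and encode them yourself before storing them in Anthropic format. A minimal sketch using only the standard library; the image_block_from_url helper is illustrative, not part of the Acontext SDK:

```python
import base64
import urllib.request


def image_block_from_url(url: str) -> dict:
    """Download an image and wrap it as an Anthropic-style base64 image block."""
    with urllib.request.urlopen(url) as response:
        media_type = response.headers.get_content_type()  # e.g. "image/png"
        data = base64.b64encode(response.read()).decode("utf-8")
    return {
        "type": "image",
        "source": {"type": "base64", "media_type": media_type, "data": data},
    }


# Append the block to the message content list before calling store_message
block = image_block_from_url("https://example.com/image.png")
print(block["source"]["media_type"])
```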
You can store files for analysis and understanding using base64-encoded content. Different formats handle files differently:
Anthropic Format
Anthropic supports storing files using the document content type with base64-encoded data:
```python
import base64

from acontext import AcontextClient

client = AcontextClient(
    api_key="sk-ac-your-root-api-bearer-token",
    base_url="http://localhost:8029/api/v1"
)

session = client.sessions.create()

# Read and encode PDF file as base64
with open("report.pdf", "rb") as pdf_file:
    pdf_data = base64.b64encode(pdf_file.read()).decode("utf-8")

# Store message with PDF document (Anthropic format)
message = client.sessions.store_message(
    session_id=session.id,
    blob={
        "role": "user",
        "content": [
            {
                "type": "document",
                "source": {
                    "type": "base64",
                    "media_type": "application/pdf",
                    "data": pdf_data
                }
            },
            {
                "type": "text",
                "text": "Summarize the key findings in this report"
            }
        ]
    },
    format="anthropic"
)

print(f"Message with PDF sent: {message.id}")
```
OpenAI Format

OpenAI format supports base64 file uploads using the file content type with embedded file_data:
```python
import base64

from acontext import AcontextClient

client = AcontextClient(
    api_key="sk-ac-your-root-api-bearer-token",
    base_url="http://localhost:8029/api/v1"
)

session = client.sessions.create()

# Read and encode PDF file as base64
with open("document.pdf", "rb") as pdf_file:
    pdf_data = base64.b64encode(pdf_file.read()).decode("utf-8")

# Store message with PDF file (OpenAI format)
message = client.sessions.store_message(
    session_id=session.id,
    blob={
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What's in this PDF document?"
            },
            {
                "type": "file",
                "file": {
                    "file_data": pdf_data,
                    "filename": "document.pdf"
                }
            }
        ]
    },
    format="openai"
)

print(f"Message with PDF sent: {message.id}")
```
When you store a PDF with base64 data, the base64 content is stored within the message parts JSON in S3. When you retrieve the message, the PDF is returned as base64 data again—not as a presigned URL. This keeps the PDF data inline with the message content.
Acontext does not restrict document formats; you can store any file type in Acontext.
However, not every file type is supported by your LLM provider, so check the provider's documentation to confirm a file type is supported before sending it to the model.
When retrieving messages, the content format depends on how the message was originally sent:
```python
from acontext import AcontextClient

client = AcontextClient(
    api_key="sk-ac-your-root-api-bearer-token",
    base_url="http://localhost:8029/api/v1"
)

# Retrieve messages
result = client.sessions.get_messages(
    session_id="session_uuid",
    format="anthropic",  # or "openai"
)

print(f"Retrieved {len(result.items)} messages")

# Access messages
for msg in result.items:
    for block in msg.content:
        if block.get('type') == 'image':
            # Images sent as base64 are returned as base64
            print(f"Image source type: {block['source']['type']}")
```
How content is returned:

- Images/PDFs sent as base64 data are returned as base64 data (stored within the message parts); see the decoding sketch below.
- Images/PDFs sent as URLs in OpenAI format are stored as URLs in metadata.
- Files uploaded via multipart form-data are stored as separate S3 assets (not covered in this guide).
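Because base64 content comes back inline with the message parts, you can decode it and write it to disk after retrieval. A minimal sketch, assuming a session that contains the Anthropic-format PDF document block from the example above (the session ID and output filenames are placeholders):

```python
import base64

from acontext import AcontextClient

client = AcontextClient(
    api_key="sk-ac-your-root-api-bearer-token",
    base_url="http://localhost:8029/api/v1"
)

result = client.sessions.get_messages(
    session_id="session_uuid",
    format="anthropic",
)

# Walk the content blocks and write any base64 documents back to disk
for i, msg in enumerate(result.items):
    for j, block in enumerate(msg.content):
        if block.get("type") == "document" and block["source"]["type"] == "base64":
            pdf_bytes = base64.b64decode(block["source"]["data"])
            with open(f"retrieved_{i}_{j}.pdf", "wb") as f:
                f.write(pdf_bytes)
            print(f"Wrote retrieved_{i}_{j}.pdf ({len(pdf_bytes)} bytes)")
```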