Basic Environment Variables

Configure your acontext core services using these essential environment variables. All environment variables use uppercase field names corresponding to the configuration schema.

LLM Configuration

LLM_API_KEY
string
required
API key for your LLM provider (OpenAI or Anthropic). This is the primary authentication credential for AI model access.
LLM_BASE_URL
string
default:"null"
Custom base URL for LLM API endpoints. Leave unset to use the provider’s default endpoint.
LLM_SDK
string
default:"openai"
LLM provider to use. Supported values: openai, anthropic
LLM_SIMPLE_MODEL
string
default:"gpt-4.1"
Default model identifier for LLM operations. Examples: gpt-4, gpt-3.5-turbo, claude-3-sonnet
LLM_RESPONSE_TIMEOUT
float
default:"60"
Timeout in seconds for LLM API responses. Increase for longer operations.
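Putting these fields together, a hypothetical Anthropic configuration (model name illustrative; check your provider's model list) might look like:

```
# Hypothetical Anthropic setup
LLM_API_KEY=sk-ant-your-anthropic-api-key-here
LLM_SDK=anthropic
LLM_SIMPLE_MODEL=claude-3-sonnet
LLM_RESPONSE_TIMEOUT=120
```

LLM_BASE_URL is left unset here, so the provider's default endpoint is used.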

Embedding Configuration

BLOCK_EMBEDDING_PROVIDER
string
default:"openai"
Embedding provider for vector operations. Supported values: openai, jina
BLOCK_EMBEDDING_MODEL
string
default:"text-embedding-3-small"
Embedding model to use for generating vectors. Examples: text-embedding-3-small, text-embedding-ada-002
BLOCK_EMBEDDING_DIM
integer
default:"1536"
Dimension size for embedding vectors. Must match your chosen embedding model’s output dimensions.
BLOCK_EMBEDDING_API_KEY
string
default:"null"
Separate API key for embedding service. If not set, uses LLM_API_KEY.
BLOCK_EMBEDDING_BASE_URL
string
default:"null"
Custom base URL for embedding API endpoints. Leave unset to use the provider’s default.
BLOCK_EMBEDDING_SEARCH_COSINE_DISTANCE_THRESHOLD
float
default:"0.8"
Cosine distance threshold for embedding similarity searches. Results whose distance exceeds this threshold are excluded, so lower values enforce stricter matching.
Be careful when choosing your embedding model. Changing the embedding model after data has been stored will require you to clean and rebuild your databases, as existing vector embeddings will be incompatible with the new model’s output format and dimensions.
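To make the threshold concrete, here is a minimal sketch (not acontext's actual implementation) of how a cosine-distance filter behaves, using plain Python lists as stand-in vectors:

```python
import math

def cosine_distance(a, b):
    """Cosine distance = 1 - cosine similarity; 0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

THRESHOLD = 0.8  # mirrors BLOCK_EMBEDDING_SEARCH_COSINE_DISTANCE_THRESHOLD

query = [1.0, 0.0]
candidates = {"close": [0.9, 0.1], "far": [-1.0, 0.2]}

# Keep only candidates within the distance threshold.
matches = [name for name, vec in candidates.items()
           if cosine_distance(query, vec) <= THRESHOLD]
```

Here the nearly parallel vector passes the filter while the opposing one does not; lowering `THRESHOLD` shrinks the set of accepted matches.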

.env Examples

# Required LLM Configuration
LLM_API_KEY=sk-your-openai-api-key-here

# Optional LLM Settings
LLM_SDK=openai
LLM_SIMPLE_MODEL=gpt-4
LLM_RESPONSE_TIMEOUT=60

# Embedding Configuration
BLOCK_EMBEDDING_PROVIDER=openai
BLOCK_EMBEDDING_MODEL=text-embedding-3-small
BLOCK_EMBEDDING_DIM=1536
BLOCK_EMBEDDING_SEARCH_COSINE_DISTANCE_THRESHOLD=0.8
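As a rough sketch of how a service might consume these variables, the loader below mirrors the defaults from the tables above (the function and dataclass names are illustrative, not acontext's actual code):

```python
import os
from dataclasses import dataclass
from typing import Optional


@dataclass
class LLMConfig:
    api_key: str
    base_url: Optional[str]
    sdk: str
    simple_model: str
    response_timeout: float


def load_llm_config(env=os.environ):
    """Read LLM_* variables, applying the documented defaults."""
    api_key = env.get("LLM_API_KEY")
    if not api_key:
        raise ValueError("LLM_API_KEY is required")
    return LLMConfig(
        api_key=api_key,
        base_url=env.get("LLM_BASE_URL") or None,
        sdk=env.get("LLM_SDK", "openai"),
        simple_model=env.get("LLM_SIMPLE_MODEL", "gpt-4.1"),
        response_timeout=float(env.get("LLM_RESPONSE_TIMEOUT", "60")),
    )
```

Passing a plain dict as `env` makes the defaults easy to verify without touching the real process environment.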

Appendix

1. Install Ollama

Download and install Ollama from the Ollama website.
2. Start Ollama

# Pull chat and embedding models, then start the server
ollama pull qwen3:8b
ollama pull qwen3-embedding:0.6b
ollama serve
3. Enable OpenAI compatibility

Ollama automatically provides OpenAI-compatible endpoints at http://localhost:11434/v1, so no additional configuration is needed on the Ollama side.
Local LLM setups are perfect for development, privacy-sensitive applications, or when you want to avoid API costs. Ollama provides OpenAI-compatible APIs, making integration seamless.
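A hypothetical .env pointing acontext at the local Ollama endpoint could look like the fragment below (the embedding dimension is illustrative; confirm it against your embedding model's card, and note that Ollama ignores the API key even though the field is required):

```
# Hypothetical local Ollama configuration
LLM_API_KEY=ollama
LLM_SDK=openai
LLM_BASE_URL=http://localhost:11434/v1
LLM_SIMPLE_MODEL=qwen3:8b

BLOCK_EMBEDDING_PROVIDER=openai
BLOCK_EMBEDDING_BASE_URL=http://localhost:11434/v1
BLOCK_EMBEDDING_MODEL=qwen3-embedding:0.6b
BLOCK_EMBEDDING_DIM=1024
```

Because the embedding model differs from the cloud defaults, remember the warning above: switching models after data has been stored requires rebuilding your vector databases.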