Aegra is configured through environment variables in your `.env` file. Copy `.env.example` as a starting point.
## Application

| Variable | Default | Description |
|---|---|---|
| PROJECT_NAME | Aegra | Application name |
| VERSION | 0.1.0 | Application version |
| DEBUG | false | Enable debug mode |
| AEGRA_CONFIG | aegra.json | Path to the configuration file |
## Database

There are two ways to configure the database connection.

### Option 1: Connection string

```
DATABASE_URL=postgresql://user:password@host:5432/aegra?sslmode=require
```

The URL is used by both SQLAlchemy (async) and LangGraph (sync), with the appropriate driver prefix applied automatically. Query parameters are preserved.
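The driver-prefix rewriting can be sketched with the standard library; the helper name and the exact drivers (`asyncpg` for SQLAlchemy's async engine, `psycopg` for a sync client) are illustrative assumptions, not Aegra's actual internals:

```python
from urllib.parse import urlsplit, urlunsplit

def with_driver(url: str, driver: str) -> str:
    """Rewrite a postgresql:// URL's scheme to name an explicit driver.

    Only the scheme changes; netloc, path, and query parameters
    (e.g. ?sslmode=require) pass through untouched.
    """
    parts = urlsplit(url)
    scheme = parts.scheme.split("+", 1)[0]  # drop any existing driver suffix
    return urlunsplit((f"{scheme}+{driver}",) + tuple(parts[1:]))

url = "postgresql://user:password@host:5432/aegra?sslmode=require"
print(with_driver(url, "asyncpg"))
# postgresql+asyncpg://user:password@host:5432/aegra?sslmode=require
```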
### Option 2: Individual fields

Used when DATABASE_URL is not set:

```
POSTGRES_DB=aegra
POSTGRES_HOST=localhost
POSTGRES_PASSWORD=password
POSTGRES_PORT=5432
POSTGRES_USER=user
```

DATABASE_URL takes precedence: when it is set, the individual POSTGRES_* variables are ignored.
| Variable | Default | Description |
|---|---|---|
| DATABASE_URL | — | Full PostgreSQL connection string |
| POSTGRES_DB | aegra | Database name |
| POSTGRES_HOST | localhost | Database host |
| POSTGRES_PASSWORD | password | Database password |
| POSTGRES_PORT | 5432 | Database port |
| POSTGRES_USER | user | Database user |
| DB_ECHO_LOG | false | Log all SQL statements |
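The precedence rule can be sketched as a small resolver (the function name is hypothetical; the field names and defaults come from the table above):

```python
def resolve_database_url(env: dict[str, str]) -> str:
    """Build the connection string, mirroring the precedence rule:
    DATABASE_URL wins outright; otherwise the URL is assembled from
    the individual POSTGRES_* fields and their documented defaults."""
    if env.get("DATABASE_URL"):
        return env["DATABASE_URL"]
    user = env.get("POSTGRES_USER", "user")
    password = env.get("POSTGRES_PASSWORD", "password")
    host = env.get("POSTGRES_HOST", "localhost")
    port = env.get("POSTGRES_PORT", "5432")
    db = env.get("POSTGRES_DB", "aegra")
    return f"postgresql://{user}:{password}@{host}:{port}/{db}"

print(resolve_database_url({"POSTGRES_HOST": "db.internal"}))
# postgresql://user:password@db.internal:5432/aegra
```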
### Connection pools

Aegra uses two connection pools: one for SQLAlchemy (metadata) and one for LangGraph (agent runtime).

| Variable | Default | Description |
|---|---|---|
| SQLALCHEMY_POOL_SIZE | 10 | SQLAlchemy connection pool size |
| SQLALCHEMY_MAX_OVERFLOW | 20 | Max overflow connections for SQLAlchemy |
| LANGGRAPH_MIN_POOL_SIZE | 5 | Minimum connections for LangGraph pool |
| LANGGRAPH_MAX_POOL_SIZE | 20 | Maximum connections for LangGraph pool |
## Server

| Variable | Default | Description |
|---|---|---|
| HOST | 0.0.0.0 | Server host |
| PORT | 2026 | Server port |
| SERVER_URL | http://localhost:2026 | Public-facing server URL |
## Authentication

| Variable | Default | Description |
|---|---|---|
| AUTH_TYPE | noop | Authentication mode: noop (no auth) or custom |
## Logging

| Variable | Default | Description |
|---|---|---|
| LOG_LEVEL | INFO | Logging level (DEBUG, INFO, WARNING, ERROR) |
| ENV_MODE | LOCAL | Environment mode: LOCAL, DEVELOPMENT, PRODUCTION (PRODUCTION outputs JSON logs) |
| LOG_VERBOSITY | standard | standard or verbose (verbose includes request-id) |
## LLM providers

| Variable | Description |
|---|---|
| OPENAI_API_KEY | OpenAI API key |
| ANTHROPIC_API_KEY | Anthropic API key |
| TOGETHER_API_KEY | Together AI API key |
## Redis

| Variable | Default | Description |
|---|---|---|
| REDIS_BROKER_ENABLED | false | Enable Redis for multi-instance SSE streaming and worker job dispatch |
| REDIS_URL | redis://localhost:6379/0 | Redis connection URL |
| REDIS_CHANNEL_PREFIX | aegra:run: | Prefix for Redis pub/sub channels |
| REDIS_MAX_CONNECTIONS | 250 | Maximum Redis connection pool size |
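As a sketch of how the channel prefix composes with a run identifier (the helper name and per-run channel layout are assumptions for illustration, not Aegra's actual API):

```python
def run_channel(run_id: str, prefix: str = "aegra:run:") -> str:
    """Build the pub/sub channel name for one run's events.

    The default mirrors REDIS_CHANNEL_PREFIX; giving each run its own
    channel lets any instance subscribe and relay that run's SSE events
    to the clients it serves.
    """
    return f"{prefix}{run_id}"

print(run_channel("1e4a2f"))  # aegra:run:1e4a2f
```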
## Workers
When REDIS_BROKER_ENABLED=true, runs are dispatched via a Redis job queue (BLPOP) and executed by concurrent asyncio worker tasks. Each instance runs multiple worker loops, each with a semaphore limiting concurrent jobs. Workers use lease-based crash recovery with heartbeats and a reaper process. See the worker architecture guide for the full design.
In dev mode (REDIS_BROKER_ENABLED=false), runs execute as in-process asyncio tasks with no Redis required.
| Variable | Default | Description |
|---|---|---|
| WORKER_COUNT | 3 | Number of worker loops per instance |
| N_JOBS_PER_WORKER | 10 | Maximum concurrent runs per worker loop |
| BG_JOB_TIMEOUT_SECS | 3600 | Maximum execution time per run (seconds) |
| BG_JOB_MAX_RETRIES | 3 | Maximum retry attempts before a crashed run is permanently failed |
| STUCK_PENDING_THRESHOLD_SECONDS | 120 | How long a pending run can sit before the reaper re-enqueues it |
| LEASE_DURATION_SECONDS | 30 | Lease TTL before a crashed run is reclaimed |
| HEARTBEAT_INTERVAL_SECONDS | 10 | How often workers extend their lease |
| REAPER_INTERVAL_SECONDS | 15 | How often the reaper scans for expired leases |
| POSTGRES_POLL_INTERVAL_SECONDS | 5 | Fallback poll interval when Redis is unavailable |
| WORKER_DRAIN_TIMEOUT | 30.0 | Graceful shutdown wait time (seconds) |
Total capacity per instance = WORKER_COUNT × N_JOBS_PER_WORKER (default: 3 × 10 = 30 concurrent runs).
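The dispatch model above can be sketched as a few asyncio worker loops, each gated by its own semaphore. This is a minimal toy, not Aegra's implementation; queue dispatch stands in for Redis BLPOP, and leases/heartbeats are omitted:

```python
import asyncio

WORKER_COUNT = 3        # default worker loops per instance
N_JOBS_PER_WORKER = 10  # default concurrent runs per loop

async def worker_loop(jobs: asyncio.Queue) -> None:
    """One dispatch loop: pull jobs off the queue and run them
    concurrently, with a semaphore capping in-flight jobs."""
    sem = asyncio.Semaphore(N_JOBS_PER_WORKER)

    async def run(job: int) -> None:
        async with sem:
            await asyncio.sleep(0)  # stand-in for executing the run
            jobs.task_done()

    while True:
        job = await jobs.get()
        asyncio.create_task(run(job))

async def main() -> int:
    jobs: asyncio.Queue = asyncio.Queue()
    for i in range(60):
        jobs.put_nowait(i)
    loops = [asyncio.create_task(worker_loop(jobs)) for _ in range(WORKER_COUNT)]
    await jobs.join()  # wait until every job has been processed
    for t in loops:
        t.cancel()
    return WORKER_COUNT * N_JOBS_PER_WORKER  # per-instance capacity

print(asyncio.run(main()))  # 30
```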
## Observability (OpenTelemetry)

| Variable | Default | Description |
|---|---|---|
| OTEL_SERVICE_NAME | aegra-backend | Service name for traces |
| OTEL_TARGETS | "" | Comma-separated list: LANGFUSE, PHOENIX, GENERIC |
| OTEL_CONSOLE_EXPORT | false | Log traces to console |
### Langfuse

| Variable | Description |
|---|---|
| LANGFUSE_BASE_URL | Langfuse API endpoint (e.g., https://cloud.langfuse.com) |
| LANGFUSE_PUBLIC_KEY | Langfuse public key |
| LANGFUSE_SECRET_KEY | Langfuse secret key |
### Arize Phoenix

| Variable | Default | Description |
|---|---|---|
| PHOENIX_COLLECTOR_ENDPOINT | http://127.0.0.1:6006/v1/traces | Phoenix OTLP endpoint |
| PHOENIX_API_KEY | — | Phoenix API key (optional) |
### Generic OTLP

| Variable | Description |
|---|---|
| OTEL_EXPORTER_OTLP_ENDPOINT | OTLP collector endpoint |
| OTEL_EXPORTER_OTLP_HEADERS | Headers as comma-separated key=value pairs |
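The comma-separated key=value header format can be parsed with a few lines of Python; the helper name is illustrative (OpenTelemetry SDKs parse this variable themselves):

```python
def parse_otlp_headers(raw: str) -> dict[str, str]:
    """Parse an OTEL_EXPORTER_OTLP_HEADERS-style "k=v,k2=v2" string.

    Entries without "=" are skipped; whitespace around keys and values
    is stripped; only the first "=" splits, so values may contain "=".
    """
    headers: dict[str, str] = {}
    for item in raw.split(","):
        if "=" in item:
            key, value = item.split("=", 1)
            headers[key.strip()] = value.strip()
    return headers

print(parse_otlp_headers("authorization=Bearer abc123,x-tenant=acme"))
# {'authorization': 'Bearer abc123', 'x-tenant': 'acme'}
```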
See the observability guide for configuration examples.