Create a stateless background run.
Generates an ephemeral thread, delegates to the threaded create_run
endpoint, and schedules cleanup as a background task (unless
on_completion="keep").
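A minimal sketch of a request body for this endpoint. The field names (`assistant_id`, `input`, `on_completion`) follow the request model described below; the assistant ID and input values are illustrative placeholders, not part of this spec:

```python
import json

# Hypothetical request body for a stateless background run.
# Setting on_completion="keep" skips the scheduled cleanup task,
# so the ephemeral thread survives after the run completes.
payload = {
    "assistant_id": "my-assistant",  # assistant to execute (assumed ID)
    "input": {"messages": [{"role": "user", "content": "hello"}]},
    "on_completion": "keep",         # preserve the ephemeral thread
}

body = json.dumps(payload)
print(body)
```

With the default `on_completion` of `'delete'`, the same request would instead tear down the ephemeral thread once the run finishes.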
Request model for creating runs.
Assistant to execute.
Input data for the run. Optional when resuming from a checkpoint.
Execution configuration.
Execution context.
Checkpoint configuration (e.g., {'checkpoint_id': '...', 'checkpoint_ns': ''}).
Enable streaming response.
Requested stream mode(s).
Behavior on client disconnect: 'cancel' (default) or 'continue'.
Behavior after the stateless run completes: 'delete' (default) removes the ephemeral thread, 'keep' preserves it.
Strategy for handling concurrent runs on the same thread: 'reject', 'interrupt', 'rollback', or 'enqueue'.
Command for resuming interrupted runs with state updates or navigation.
Nodes to interrupt immediately before they are executed. Use '*' for all nodes.
Nodes to interrupt immediately after they are executed. Use '*' for all nodes.
Whether to include subgraph events in streaming. When True, events from all subgraphs are included; when False or None, they are excluded. Defaults to False for backwards compatibility.
Request metadata (e.g., from_studio flag).
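The optional fields above can be combined in a single request. A sketch follows, assuming snake_case field names inferred from the descriptions (e.g. `stream_mode`, `on_disconnect`, `multitask_strategy`, `stream_subgraphs`); every value is a placeholder:

```python
# Illustrative request body exercising the optional fields.
# Field names are assumptions inferred from the model descriptions;
# IDs and values are made up for the example.
payload = {
    "assistant_id": "my-assistant",
    "input": None,  # may be omitted when resuming from a checkpoint
    "checkpoint": {"checkpoint_id": "abc123", "checkpoint_ns": ""},
    "stream": True,
    "stream_mode": ["values", "updates"],      # assumed mode names
    "on_disconnect": "continue",               # keep running if the client drops
    "on_completion": "delete",                 # default: remove the ephemeral thread
    "multitask_strategy": "enqueue",           # queue behind concurrent runs
    "command": {"resume": {"approved": True}}, # resume an interrupted run
    "interrupt_before": ["review"],            # pause before this node executes
    "interrupt_after": "*",                    # pause after every node
    "stream_subgraphs": True,                  # include subgraph events
    "metadata": {"from_studio": False},
}

valid_strategies = {"reject", "interrupt", "rollback", "enqueue"}
print(payload["multitask_strategy"] in valid_strategies)
```

Note the interplay of `checkpoint` and `input`: when resuming from a checkpoint, `input` may be left as `None` and the `command` field carries the resume payload instead.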
Successful Response
Run entity model
Status values: pending, running, error, success, timeout, interrupted
Unique identifier for the run.
Thread this run belongs to.
Assistant that is executing this run.
Input data provided to the run.
Identifier of the user who owns this run.
Timestamp when the run was created.
Timestamp when the run was last updated.
Current run status: pending, running, error, success, timeout, or interrupted.
Final output produced by the run, or null if not yet complete.
Error message if the run failed.
Configuration passed to the graph at runtime.
Context variables available during execution.
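Putting the fields together, a completed Run entity might look like the sketch below. The snake_case key names are assumptions inferred from the field descriptions, and all values are illustrative:

```python
# Illustrative Run entity; key names are inferred from the field
# descriptions above and every value is a placeholder.
run = {
    "run_id": "run-001",
    "thread_id": "thread-123",
    "assistant_id": "my-assistant",
    "input": {"messages": [{"role": "user", "content": "hello"}]},
    "user_id": "user-1",
    "created_at": "2024-01-01T00:00:00Z",
    "updated_at": "2024-01-01T00:00:05Z",
    "status": "success",  # pending | running | error | success | timeout | interrupted
    "output": {"messages": [{"role": "assistant", "content": "hi"}]},
    "error": None,        # populated only when status is "error"
    "config": {},
    "context": {},
}

# A run is finished once it leaves the pending/running states;
# output stays null until the run completes.
finished = run["status"] not in {"pending", "running"}
print(finished)
```

For a stateless run with the default `on_completion='delete'`, the `thread_id` here refers to the ephemeral thread, which is removed after completion.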