Integration packages bundle agents, functions, skills, components, templates, and other resources into a shareable YAML file that can be installed with one click.
How packages work:
Create: Select resources from your Sinas instance → export as SinasPackage YAML
Share: Distribute the YAML file (GitHub, email, package registry)
Install: Paste/upload the YAML → preview changes → confirm install
Uninstall: Removes all resources created by the package in one operation
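For orientation, a sketch of what an exported package file might look like. The field names below (`kind`, `metadata`, `resources`) are illustrative assumptions, not the actual SinasPackage schema:

```yaml
# Hypothetical SinasPackage export -- field names are illustrative
kind: SinasPackage
metadata:
  name: support-toolkit
  version: "1.0.0"
resources:
  agents:
    - namespace: support
      name: triage-agent
  functions:
    - namespace: support
      name: classify-ticket
```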
Manifests are application declarations that describe what resources, permissions, and stores an application built on Sinas requires. They enable a single API call to validate whether a user has everything needed to run the application.
Namespace filter per resource type (e.g., `{"agents": ["support"]}`)
`store_dependencies` — stores the application expects: `[{"store": "ns/name", "key": "optional_key"}]`
Endpoints:
```
POST   /api/v1/manifests                     # Create manifest
GET    /api/v1/manifests                     # List manifests
GET    /api/v1/manifests/{namespace}/{name}  # Get manifest
PUT    /api/v1/manifests/{namespace}/{name}  # Update manifest
DELETE /api/v1/manifests/{namespace}/{name}  # Delete manifest
```
Runtime status validation:
```
GET /api/runtime/manifests/{namespace}/{name}/status  # Validate dependencies
```
Returns ready: true/false with details on satisfied/missing resources, granted/missing permissions, and existing/missing store dependencies.
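An application can gate its startup on that payload. A minimal sketch of consuming the status response — note the grouping keys used here (`missing_resources`, `missing_permissions`, `missing_stores`) are illustrative assumptions, not the documented schema; only the `ready` flag is taken from the text:

```python
def summarize_status(status: dict) -> str:
    """Turn a manifest status payload into a short readiness report.

    Assumes the payload groups details under keys like 'missing_resources',
    'missing_permissions', and 'missing_stores' (illustrative names).
    """
    if status.get("ready"):
        return "ready"
    problems = []
    for key in ("missing_resources", "missing_permissions", "missing_stores"):
        for item in status.get(key, []):
            problems.append(f"{key}: {item}")
    return "not ready: " + "; ".join(problems)

# Example: a manifest blocked on a single missing permission
payload = {"ready": False, "missing_permissions": ["sinas.agents.chat:support/triage"]}
print(summarize_status(payload))
```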
Users & Roles
Users are identified by email. They can be created automatically on first login (OTP or OIDC) or provisioned via the management API or declarative config.
Roles group permissions together. Users can belong to multiple roles. All permissions across all of a user's roles are combined.
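The combination rule above amounts to a set union. A minimal sketch (role names and permission strings are illustrative):

```python
def effective_permissions(roles: dict[str, set[str]], user_roles: list[str]) -> set[str]:
    """Combine permissions across all of a user's roles (simple set union)."""
    combined: set[str] = set()
    for role in user_roles:
        combined |= roles.get(role, set())
    return combined

roles = {
    "viewer": {"sinas.agents.read:all"},
    "operator": {"sinas.functions.execute:all", "sinas.agents.read:all"},
}
print(effective_permissions(roles, ["viewer", "operator"]))
```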
User endpoints:
```
GET    /api/v1/users       # List users (admin)
POST   /api/v1/users       # Create user (admin)
GET    /api/v1/users/{id}  # Get user
PATCH  /api/v1/users/{id}  # Update user
DELETE /api/v1/users/{id}  # Delete user (admin)
```
Role endpoints: See Managing Roles.
Permissions
See Role-Based Access Control (RBAC) for the full permission system documentation, including format, matching rules, custom permissions, and the check-permissions endpoint.
Quick reference of action verbs:
| Verb | Usage |
| --- | --- |
| create | Create a resource |
| read | View/list a resource |
| update | Modify a resource |
| delete | Remove a resource |
| execute | Run a function or query |
| chat | Chat with an agent |
| render | Render a template |
| send | Send a rendered template |
| upload | Upload a file |
| download | Download a file |
| install | Approve a package |
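These verbs appear inside full permission strings. Judging from examples elsewhere in this document such as sinas.system.read:all, the shape looks like `sinas.<resource>.<verb>:<scope>`; the helper below is an inference from those examples, not an authoritative spec (see the RBAC documentation for the real format):

```python
def permission(resource: str, verb: str, scope: str = "all") -> str:
    """Build a permission string in the sinas.<resource>.<verb>:<scope>
    shape suggested by examples like sinas.system.read:all (inferred,
    not an authoritative spec)."""
    return f"sinas.{resource}.{verb}:{scope}"

print(permission("agents", "chat"))  # sinas.agents.chat:all
print(permission("templates", "render", "marketing/welcome"))
```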
System Workers & Containers
Sinas has a dual-execution model for functions, plus dedicated queue workers for async job processing.
Sandbox Containers
The sandbox container pool is a set of pre-warmed, generic Docker containers for executing untrusted user code. This is the default execution mode for all functions (shared_pool=false).
How it works:
On startup, the pool creates sandbox_min_size containers (default: 4) ready to accept work.
When a function executes, a container is acquired from the idle pool, used, and returned.
Containers are recycled (destroyed and replaced) after sandbox_max_executions uses (default: 100) to prevent state leakage between executions.
If a container errors during execution, it's marked as tainted and destroyed immediately.
A background replenishment loop monitors the idle count and creates new containers whenever it drops below sandbox_min_idle (default: 2), up to sandbox_max_size (default: 20).
Health checks run every 60 seconds to detect and replace dead containers.
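The acquire/recycle/replenish cycle above can be sketched as a toy in-memory model. Plain dicts stand in for real Docker containers, and the parameter names mirror the settings in the text; a real pool would also enforce an acquire timeout and run health checks:

```python
import itertools

class SandboxPool:
    """Toy model of the sandbox pool: acquire from idle, recycle after
    max_executions uses, replenish idle back up to min_idle."""

    def __init__(self, min_size=4, max_size=20, min_idle=2, max_executions=100):
        self._ids = itertools.count()
        self.max_size = max_size
        self.min_idle = min_idle
        self.max_executions = max_executions
        self.idle = [self._new() for _ in range(min_size)]
        self.in_use = {}

    def _new(self):
        return {"id": next(self._ids), "uses": 0}

    def acquire(self):
        # A real pool would block up to an acquire timeout instead of
        # creating on demand
        c = self.idle.pop() if self.idle else self._new()
        self.in_use[c["id"]] = c
        return c

    def release(self, c, tainted=False):
        del self.in_use[c["id"]]
        c["uses"] += 1
        # Tainted or worn-out containers are destroyed, not returned
        if not tainted and c["uses"] < self.max_executions:
            self.idle.append(c)
        self.replenish()

    def replenish(self):
        while (len(self.idle) < self.min_idle
               and len(self.idle) + len(self.in_use) < self.max_size):
            self.idle.append(self._new())

pool = SandboxPool(min_size=2, max_executions=2)
pool.release(pool.acquire())                # clean use: returned to idle
pool.release(pool.acquire(), tainted=True)  # error: destroyed, pool replenishes
```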
Isolation guarantees:
Each container runs with strict resource limits and security hardening:
| Constraint | Default |
| --- | --- |
| Memory | 512 MB (MAX_FUNCTION_MEMORY) |
| CPU | 1.0 cores (MAX_FUNCTION_CPU) |
| Disk | 1 GB (MAX_FUNCTION_STORAGE) |
| Execution time | 300 seconds (FUNCTION_TIMEOUT) |
| Temp storage | 100 MB tmpfs at /tmp |
| Capabilities | All dropped, only CHOWN/SETUID/SETGID added |
| Privilege escalation | Disabled (no-new-privileges) |
Runtime scaling:
The pool can be scaled up or down at runtime without restarting the application:
```
# Check current pool state
GET /api/v1/containers/stats
# → {"idle": 2, "in_use": 3, "total": 5, "max_size": 20, ...}

# Scale up for high load
POST /api/v1/containers/scale
{"target": 15}
# → {"action": "scale_up", "previous": 5, "current": 15, "added": 10}

# Scale back down (only removes idle containers; never interrupts running executions)
POST /api/v1/containers/scale
{"target": 4}
# → {"action": "scale_down", "previous": 15, "current": 4, "removed": 11}
```
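A scale request has to respect the pool bounds. The clamping logic below is an inference from the behavior described (never exceed max_size; never remove busy containers), not the documented server implementation:

```python
def plan_scale(target: int, idle: int, in_use: int, max_size: int) -> dict:
    """Clamp a scale request: never exceed max_size, and never count on
    removing busy containers (only idle ones can be reclaimed)."""
    current = idle + in_use
    goal = max(min(target, max_size), in_use)  # busy containers can't be removed
    if goal > current:
        return {"action": "scale_up", "added": goal - current}
    if goal < current:
        return {"action": "scale_down", "removed": current - goal}
    return {"action": "noop"}

# Mirrors the example responses above
print(plan_scale(target=15, idle=2, in_use=3, max_size=20))
# → {'action': 'scale_up', 'added': 10}
```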
Package installation:
When new packages are approved, existing containers don't have them yet. Use the reload endpoint to install approved packages into all idle containers:
```
POST /api/v1/containers/reload
# → {"status": "completed", "idle_containers": 4, "success": 4, "failed": 0}
```
Containers that are currently executing are unaffected. New containers created by the replenishment loop automatically include all approved packages.
Shared Containers
Functions marked shared_pool=true run in persistent shared containers instead of sandbox containers. This is an admin-only option for trusted code that benefits from longer-lived containers.
Differences from sandbox:
| Aspect | Sandbox Containers | Shared Containers |
| --- | --- | --- |
| Trust level | Untrusted user code | Trusted admin code only |
| Isolation | Per-request (recycled after N uses) | Shared (persistent containers) |
| Lifecycle | Created/destroyed automatically | Persist until explicitly scaled down |
| Scaling | Auto-replenishment + manual | Manual via API only |
| Load balancing | First available idle container | Round-robin across workers |
| Best for | User-submitted functions | Admin functions, long-startup libraries |
When to use shared_pool=true (shared containers):
Functions created and maintained by admins (not user-submitted code)
Functions that import heavy libraries (pandas, scikit-learn) where container startup cost matters
Performance-critical functions that benefit from warm containers
All function and agent executions are processed asynchronously through Redis-based queues (arq). Two separate worker types handle different workloads:
| Worker | Docker service | Queue | Concurrency | Retries |
| --- | --- | --- | --- | --- |
| Function workers | queue-worker | sinas:queue:functions | 10 jobs/worker | Up to 3 |
| Agent workers | queue-agent | sinas:queue:agents | 5 jobs/worker | None (not idempotent) |
Function workers dequeue function execution jobs, route them to either sandbox or shared containers, track results in Redis, and handle retries. Failed jobs that exhaust retries are moved to a dead letter queue (DLQ) for inspection and manual retry.
Agent workers handle chat message processing — they call the LLM, execute tool calls, and stream responses back via Redis Streams. Agent jobs don't retry because LLM calls with tool execution have side effects.
Scaling is controlled via Docker Compose replicas (see QUEUE_WORKER_REPLICAS and QUEUE_AGENT_REPLICAS in the configuration reference below).
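For example, a compose fragment along these lines (service names follow the worker table above; the replica counts are the defaults from the configuration reference):

```yaml
services:
  queue-worker:
    deploy:
      replicas: 2   # function workers (QUEUE_WORKER_REPLICAS)
  queue-agent:
    deploy:
      replicas: 2   # agent workers (QUEUE_AGENT_REPLICAS)
```

Counts can also be adjusted at deploy time with `docker compose up -d --scale queue-worker=4`.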
Each worker sends a heartbeat to Redis every 10 seconds (TTL: 30 seconds). If a worker dies, its heartbeat key auto-expires, making it easy to detect dead workers.
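The liveness rule reduces to a TTL check. A sketch using plain timestamps rather than a live Redis connection:

```python
import time

HEARTBEAT_INTERVAL = 10  # seconds between heartbeat refreshes
HEARTBEAT_TTL = 30       # key expiry; a missing key means the worker is gone

def is_alive(last_heartbeat: float, now: float, ttl: int = HEARTBEAT_TTL) -> bool:
    """A worker is considered alive while its heartbeat key would still
    exist in Redis, i.e. within TTL of the last refresh."""
    return (now - last_heartbeat) < ttl

now = time.time()
print(is_alive(now - 5, now))    # recently refreshed -> True
print(is_alive(now - 45, now))   # key would have expired -> False
```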
Jobs go through states: queued → running → completed or failed. Stale or orphaned jobs can be cancelled via the admin API:
```
# Cancel a running or queued job
POST /api/v1/queue/jobs/{job_id}/cancel
# → {"status": "cancelled", "job_id": "..."}
```
Cancellation updates the Redis status to cancelled and marks the DB execution record as CANCELLED. It also publishes to the done channel so any waiters unblock. This is a soft cancel — it does not kill running containers.
Results are stored in Redis with a 24-hour TTL.
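The lifecycle above can be modeled as a small transition table. The states come from the text; the exact transition rules (cancel allowed while queued or running, terminal states frozen) are an inference:

```python
# queued -> running -> completed | failed, with a soft cancel allowed
# while queued or running (it marks state; it does not kill containers)
TRANSITIONS = {
    "queued": {"running", "cancelled"},
    "running": {"completed", "failed", "cancelled"},
    "completed": set(),
    "failed": set(),
    "cancelled": set(),
}

def advance(state: str, new_state: str) -> str:
    """Apply a transition, rejecting anything the table doesn't allow."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = advance("queued", "running")
state = advance(state, "cancelled")
print(state)  # cancelled
```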
System Endpoints
Admin endpoints for monitoring and managing the Sinas deployment. All require sinas.system.read:all or sinas.system.update:all permissions.
Health check:
```
GET /api/v1/system/health
```
Returns a comprehensive health report:
services — All Docker Compose containers with status, health, uptime, CPU %, and memory usage. Infrastructure containers (redis, postgres, pgbouncer) are listed first, followed by application containers sorted alphabetically. Sandbox and shared worker containers are included.
host — Host-level CPU, memory, and disk usage (read from /proc on Linux).
warnings — Auto-generated alerts at three levels:
critical — No queue workers running, or infrastructure services (redis, postgres, pgbouncer) down
Optionally restrict which packages can be approved with a whitelist:
```
# In .env: only these packages can be approved
ALLOWED_PACKAGES=requests,pandas,numpy,redis,boto3
```
Configuration Reference
Container pool:
| Variable | Default | Description |
| --- | --- | --- |
| POOL_MIN_SIZE | 4 | Containers created on startup |
| POOL_MAX_SIZE | 20 | Maximum total containers |
| POOL_MIN_IDLE | 2 | Replenish when idle count drops below this |
| POOL_MAX_EXECUTIONS | 100 | Recycle container after this many uses |
| POOL_ACQUIRE_TIMEOUT | 30 | Seconds to wait for an available container |
Function execution:
| Variable | Default | Description |
| --- | --- | --- |
| FUNCTION_TIMEOUT | 300 | Max execution time in seconds |
| MAX_FUNCTION_MEMORY | 512 | Memory limit per container (MB) |
| MAX_FUNCTION_CPU | 1.0 | CPU cores per container |
| MAX_FUNCTION_STORAGE | 1g | Disk storage limit |
| FUNCTION_CONTAINER_IDLE_TIMEOUT | 3600 | Idle container cleanup (seconds) |
Workers and queues:
| Variable | Default | Description |
| --- | --- | --- |
| DEFAULT_WORKER_COUNT | 4 | Shared workers created on startup |
| QUEUE_WORKER_REPLICAS | 2 | Function queue worker processes |
| QUEUE_AGENT_REPLICAS | 2 | Agent queue worker processes |
| QUEUE_FUNCTION_CONCURRENCY | 10 | Concurrent jobs per function worker |
| QUEUE_AGENT_CONCURRENCY | 5 | Concurrent jobs per agent worker |
| QUEUE_MAX_RETRIES | 3 | Retry attempts before DLQ |
| QUEUE_RETRY_DELAY | 10 | Seconds between retries |
Packages:
| Variable | Default | Description |
| --- | --- | --- |
| ALLOW_PACKAGE_INSTALLATION | true | Enable pip in containers |
| ALLOWED_PACKAGES | (empty) | Comma-separated whitelist (empty = all allowed) |
Icons
Agents and functions support configurable icons via the icon field. Two formats are supported:
| Format | Example | Description |
| --- | --- | --- |
| `url:<url>` | `url:https://example.com/icon.png` | Direct URL to an image |
| `collection:<ns>/<coll>/<file>` | `collection:assets/icons/bot.png` | File stored in a Sinas collection |
Collection-based icons generate signed JWT URLs for private files and direct URLs for public collection files. Icons are resolved at read time via the icon resolver service.
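The two formats can be distinguished by prefix. A sketch of that parsing step (the return shapes are illustrative, not the resolver service's actual output):

```python
def parse_icon(icon: str) -> dict:
    """Split an icon field into its format and payload.

    Supports the two documented shapes:
      url:<url>                      -> direct image URL
      collection:<ns>/<coll>/<file>  -> file in a Sinas collection
    """
    if icon.startswith("url:"):
        return {"kind": "url", "url": icon[len("url:"):]}
    if icon.startswith("collection:"):
        ns, coll, file = icon[len("collection:"):].split("/", 2)
        return {"kind": "collection", "namespace": ns, "collection": coll, "file": file}
    raise ValueError(f"unsupported icon format: {icon!r}")

print(parse_icon("collection:assets/icons/bot.png"))
```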
Config Manager
The config manager supports GitOps-style declarative configuration. Define all your resources in a YAML file and apply it idempotently.
All sections are optional — include only what you need.
Key behaviors:
Idempotent — Applying the same config twice does nothing. Unchanged resources are skipped (SHA256 checksum comparison).
Config-managed tracking — Resources created via config are tagged with managed_by: "config". The system won't overwrite resources that were created manually (it warns instead).
Environment variable interpolation — Use ${VAR_NAME} in values (e.g., apiKey: "${OPENAI_API_KEY}").
Reference validation — Cross-references (e.g., an agent referencing a function) are validated before applying.
Dry run — Set dryRun: true to preview changes without applying.
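A sketch of such a config file. The section and field names below are illustrative assumptions; only the `${VAR}` interpolation syntax is taken from the text:

```yaml
# Hypothetical declarative config -- section names are illustrative
agents:
  - namespace: support
    name: triage-agent
    model:
      apiKey: "${OPENAI_API_KEY}"   # interpolated from the environment
functions:
  - namespace: support
    name: classify-ticket
```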
Endpoints (admin only):
```
POST /api/v1/config/validate  # Validate YAML syntax and references
POST /api/v1/config/apply     # Apply config (supports dryRun and force flags)
GET  /api/v1/config/export    # Export current configuration as YAML
```
Auto-apply on startup:
```
# In .env
CONFIG_FILE=config/production.yaml
AUTO_APPLY_CONFIG=true
```