Deployment

Prerequisites

  • A VPS or server with 2+ CPU cores, 4GB+ RAM, 50GB+ storage

  • A domain name pointed at the server (A record)

  • An SMTP service for login emails (SendGrid, Mailgun, AWS SES, etc.)
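Before installing, you can sanity-check that a server meets these requirements with standard Linux tools (a quick sketch; the thresholds mirror the list above):

```shell
# Check CPU cores, memory, and free root-disk space against the prerequisites
echo "CPU cores: $(nproc)"                        # want 2+
free -g | awk '/^Mem:/ {print "RAM (GB): " $2}'   # want 4+
df -BG --output=avail / | tail -1 | awk '{print "Free disk: " $1}'  # want 50G+
```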

Install

curl -fsSL https://raw.githubusercontent.com/sinas-platform/sinas/main/install.sh -o /tmp/sinas-install.sh && sudo bash /tmp/sinas-install.sh

The installer will:

  • Install Docker if needed

  • Generate secure keys (SECRET_KEY, ENCRYPTION_KEY, DATABASE_PASSWORD)

  • Prompt for your domain, admin email, and SMTP credentials

  • Create .env in /opt/sinas/

  • Pull pre-built images from the container registry and start all services

  • Configure Caddy to automatically provision TLS certificates via Let's Encrypt

All services start automatically: PostgreSQL, PgBouncer, Redis, ClickHouse, the backend API (port 8000), queue workers, the scheduler, and the web console (port 51245). Migrations run automatically on startup.
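The generated /opt/sinas/.env will look roughly like this. The values below are illustrative placeholders (the installer generates random keys); only the variable names mentioned above are taken from the installer, and the exact names of the domain and SMTP settings are assumptions that may differ in your install:

```shell
# /opt/sinas/.env (illustrative placeholder values)
SECRET_KEY=long-random-generated-string
ENCRYPTION_KEY=long-random-generated-string
DATABASE_PASSWORD=strong-generated-password
SUPERADMIN_EMAIL=admin@yourdomain.com
# Domain and SMTP credentials are also prompted for during install;
# their exact variable names are assumptions and may differ.
```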

Update

cd /opt/sinas
docker compose pull
docker compose up -d

Manual development installation

For local development, see INSTALL.md.

Log in

  1. Open the console at https://yourdomain.com:51245

  2. Enter your SUPERADMIN_EMAIL address

  3. Check your inbox for the 6-digit OTP code

  4. Enter the code to receive your access token
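The API examples below pass this access token in an Authorization header, so it helps to export it once in your shell (the token value here is a placeholder for the one you received):

```shell
# Paste the access token returned after OTP verification
export TOKEN="paste-your-access-token-here"

# Subsequent curl calls can then use: -H "Authorization: Bearer $TOKEN"
echo "Token set (${#TOKEN} characters)"
```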

Configure an LLM provider

Before agents can work, you need at least one LLM provider:

curl -X POST https://yourdomain.com/api/v1/llm-providers \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "openai",
    "provider_type": "openai",
    "api_key": "sk-...",
    "default_model": "gpt-4o",
    "is_default": true
  }'

Start chatting

A default agent is created on startup. Create a chat and send a message:

# Create a chat with the default agent
curl -X POST https://yourdomain.com/agents/default/default/chats \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{}'

# Send a message (use the chat_id from the response)
curl -X POST https://yourdomain.com/chats/{chat_id}/messages/stream \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"content": "Hello!"}'
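To script the two calls together, you need the chat_id out of the create-chat response. Assuming the response is JSON with a chat_id field (the field name is an assumption based on the placeholder above), it can be extracted without extra tooling:

```shell
# Stand-in for the JSON returned by the create-chat call above
RESPONSE='{"chat_id": "abc123"}'

# Extract chat_id using python3 (avoids a jq dependency)
CHAT_ID=$(printf '%s' "$RESPONSE" | python3 -c 'import json,sys; print(json.load(sys.stdin)["chat_id"])')
echo "$CHAT_ID"
```

In a real script you would replace RESPONSE with `$(curl -s -X POST ... /chats ...)` and then use $CHAT_ID in the messages/stream URL.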
