Announcement, February 2026

Introducing Sinas

An open source platform for building AI-powered applications. Self-hosted, integrated, no vendor lock-in.

We've been building AI applications for the past two years — customer support agents, internal tools, workflow automation, research assistants. Every time, we ran into the same problem: there's no good backend for this.

You can pick an LLM API. You can pick a vector database. You can wire up a task queue, a container runtime, an auth layer, a permissions system. But there's nothing that gives you all of this as one coherent platform — something you can self-host, configure declaratively, and build multiple applications on top of.

That's why we built Sinas.

What Sinas is

Sinas is an open source backend platform for AI-powered applications. It gives you six integrated subsystems behind a single REST API:

  • Agents — Multi-provider LLM agents (OpenAI, Anthropic, Mistral, Ollama) with tool calling, streaming, and agent-to-agent orchestration. Configure them with system prompts, input/output schemas, and fine-grained tool access.
  • Functions — Serverless Python execution in isolated Docker containers. Triggered by agents, webhooks, cron schedules, or the API. Every call is automatically tracked — including nested function calls that build execution trees.
  • Queries — SQL templates against your external databases (PostgreSQL, ClickHouse, Snowflake). Parameterized, validated against JSON Schema, and usable as agent tools. Your agents can look up customer data without you writing glue code.
  • Skills — Reusable instruction documents that agents retrieve on demand. Preload them into system prompts for always-on guidance, or expose them as tools for progressive disclosure. Think of them as expertise modules.
  • State — Persistent key-value storage with namespaces. Agents maintain memory and context across conversations. Private or shared visibility, optional TTL, and namespace-based access control.
  • Files & Templates — File collections with versioning, metadata validation, and content search. Jinja2 templates for emails and dynamic content with variable validation and XSS protection.

All of this sits behind role-based access control, with namespace isolation and wildcard pattern matching on permissions. Define everything in YAML for GitOps workflows. Deploy with Docker Compose.
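As an illustration of how wildcard permission matching works, here is a minimal Python sketch. The resource naming scheme and glob-style semantics are assumptions made for this example, not Sinas's actual permission model:

```python
from fnmatch import fnmatch

def allowed(grants: list[str], resource: str) -> bool:
    """True if any wildcard grant pattern matches the resource name."""
    return any(fnmatch(resource, pattern) for pattern in grants)

# Hypothetical grants for a support role, scoped to its own namespace.
support_role = ["agents/support/*", "state/support/*", "skills/tone-*"]

print(allowed(support_role, "agents/support/chat"))  # True
print(allowed(support_role, "queries/crm/lookup"))   # False
```

The appeal of pattern-based grants is that adding a new resource inside an already-granted namespace requires no permission change at all.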

Why not just use [X]?

We looked at the alternatives. LangChain and similar frameworks are libraries, not platforms — you still need to build and host everything yourself. Managed platforms like AWS Bedrock or Azure AI Studio lock you into a cloud vendor and don't give you fine-grained control over permissions, state, or function execution.

What we wanted was something closer to Supabase or Firebase, but for AI applications. A backend you deploy once and build many applications on top of. Something where adding a new agent or function is a YAML change, not a new microservice.

How it works in practice

Here's a concrete example. Say you're building a customer support system. You'd configure:

  • An agent with a system prompt, access to your CRM query, and your communication-guidelines skill
  • A query that looks up customer data from your PostgreSQL database
  • A function that sends follow-up emails via your SMTP server
  • A skill with your company's communication guidelines, preloaded into the system prompt
  • State namespaces for conversation memory and customer preferences

All of this is defined in a single YAML file. Apply it with one API call. The agent is immediately available via the chat API with streaming support. Your frontend just sends messages and renders responses.
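As a sketch of what such a file could look like (every field name below is an illustrative assumption, not the real Sinas schema):

```yaml
# Illustrative sketch only — field names are assumptions, not Sinas's schema.
agents:
  - name: support-agent
    provider: openai
    model: gpt-4o
    system_prompt: "You are a helpful customer support agent."
    skills:
      preload: [communication-guidelines]
    tools: [crm-lookup, send-follow-up]

queries:
  - name: crm-lookup
    datasource: crm-postgres
    sql: "SELECT * FROM customers WHERE id = :customer_id"
    params:
      customer_id: {type: string}

functions:
  - name: send-follow-up
    runtime: python
    entrypoint: follow_up.py

state:
  namespaces: [conversations, customer-preferences]
```

The point is the shape, not the exact keys: one file, five resources, each referring to the others by name.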

Need a second application — say, an internal research assistant? Add another agent definition to the same YAML file. It shares the same infrastructure but has its own permissions, tools, and state. One deployment, unlimited applications.

What makes it different

Self-hosted by design. Sinas runs on your infrastructure. Your data never leaves your environment unless you choose to send it to an LLM provider. Deploy in EU data centers for GDPR compliance. Use your existing identity provider via OIDC.

Declarative configuration. Everything is defined in YAML — agents, functions, queries, skills, permissions, users. Idempotent apply with SHA256 change detection, environment variable interpolation, reference validation, and dry-run mode. Treat your AI infrastructure like code.
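The change-detection idea behind idempotent apply can be sketched in a few lines of Python. This is a toy stand-in, not Sinas's implementation:

```python
import hashlib
import json

applied = {}  # name -> last applied config hash (stand-in for server-side state)

def config_hash(config: dict) -> str:
    # Canonicalize first so key order never looks like a change.
    canonical = json.dumps(config, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def apply(name: str, config: dict) -> str:
    digest = config_hash(config)
    if applied.get(name) == digest:
        return "unchanged"  # idempotent: same config, nothing to do
    applied[name] = digest
    return "applied"

print(apply("support-agent", {"model": "gpt-4o", "tools": ["crm-lookup"]}))  # applied
print(apply("support-agent", {"tools": ["crm-lookup"], "model": "gpt-4o"}))  # unchanged
```

Because only the hash is compared, re-applying an unchanged file is a no-op, which is what makes a GitOps-style apply-on-every-push loop safe.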

Real execution tracking. When a function runs, Sinas parses its Python source into an AST and injects tracking decorators. If your function calls another function, which calls another function, you get a complete execution tree with timing, inputs, outputs, and errors at every level. No manual instrumentation required.
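The decorator-injection technique can be sketched with Python's standard ast module. The decorator name and the tracking logic here are simplified assumptions; the real platform records far more than a call log:

```python
import ast

calls = []  # execution log the injected decorator writes into

def _track(fn):
    # Toy tracking decorator: records each call by function name.
    def wrapper(*args, **kwargs):
        calls.append(fn.__name__)
        return fn(*args, **kwargs)
    return wrapper

def inject_tracking(source: str) -> dict:
    """Parse the source, decorate every function definition, and exec it."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            node.decorator_list.insert(0, ast.Name(id="_track", ctx=ast.Load()))
    ast.fix_missing_locations(tree)
    namespace = {"_track": _track}
    exec(compile(tree, "<user_function>", "exec"), namespace)
    return namespace

user_code = """
def send_email(to):
    return f"sent to {to}"

def follow_up(customer):
    return send_email(customer)
"""

ns = inject_tracking(user_code)
ns["follow_up"]("alice@example.com")
print(calls)  # nested calls are captured too: ['follow_up', 'send_email']
```

Because the decorator is injected at the AST level before execution, user code never has to opt in, which is how nested calls end up in the tree automatically.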

Agent-to-agent orchestration. Agents can call other agents as tools. The calls go through an async queue via Redis Streams, so there's no recursive blocking. Results stream back in real time. Build complex multi-agent workflows where a coordinator agent delegates to specialized sub-agents.
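As a rough illustration of queue-based delegation, here is an in-process asyncio stand-in for the Redis Streams queue. The agent names, message shape, and reply mechanism are all assumptions for the example:

```python
import asyncio

async def sub_agent(name: str, task: str) -> str:
    # Stand-in for a specialized sub-agent handling a delegated task.
    await asyncio.sleep(0)
    return f"{name} handled: {task}"

async def worker(queue: asyncio.Queue) -> None:
    # Queue consumer: the coordinator never calls a sub-agent directly,
    # so a nested agent call can't block the caller recursively.
    while True:
        name, task, reply = await queue.get()
        reply.set_result(await sub_agent(name, task))
        queue.task_done()

async def main() -> list[str]:
    queue: asyncio.Queue = asyncio.Queue()
    consumer = asyncio.create_task(worker(queue))
    loop = asyncio.get_running_loop()
    replies = []
    for name, task in [("researcher", "find docs"), ("writer", "draft reply")]:
        reply = loop.create_future()
        await queue.put((name, task, reply))  # enqueue instead of calling
        replies.append(reply)
    results = [await r for r in replies]
    consumer.cancel()
    return results

results = asyncio.run(main())
print(results)
```

Swapping the in-memory queue for Redis Streams adds what this sketch lacks: persistence, consumer groups, and delegation across processes and machines.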

No vendor lock-in. Switch LLM providers per agent without code changes. Your functions are plain Python. Your queries are standard SQL. Your configuration is portable YAML. If you ever want to move away from Sinas, your code and data are yours.

Who it's for

We built Sinas for development teams that are serious about shipping AI applications — not just prototyping them. Teams that need proper permissions, execution tracking, and deployment workflows. Agencies delivering AI solutions to multiple clients. Companies that need to keep data on their own infrastructure.

If you're building one chatbot for fun, you probably don't need this. If you're building five to twenty AI-powered tools for your organization or your clients, Sinas saves you from reinventing the backend every time.

Get started

Sinas is open source under AGPL v3. Three commands to install:

git clone https://github.com/sinas-platform/sinas
cd sinas
./install.sh

That gives you the API server, management console, PostgreSQL, Redis, and workers — all running locally via Docker Compose. Read the documentation for the full guide, or browse the source on GitHub.

Sinas is an initiative of Pulsr. We'd love to hear what you build with it.