Self-hosted platform for discovering and running AI agents

Deploy specialized AI agents on your own infrastructure, without building each one from scratch or sending data to a SaaS vendor.
What is AgentSystems? A self-hosted platform that combines a federated agent ecosystem, provider portability, and container isolation. Browse community agents, deploy on your infrastructure, and switch between AI providers through configuration.

Why AgentSystems?

Teams want to use specialized AI agents but face a dilemma:
  • 🔒 SaaS agents require sending data to third parties
  • 🛠️ Building from scratch takes weeks of development per agent (most teams lack ML expertise)
  • 🐳 Manual Docker orchestration means configuring networks, volumes, proxies, and API keys for each agent
AgentSystems provides a standardized runtime and ecosystem:
  • Federated Agent Ecosystem: Git-based agent index where developers publish via GitHub forks
  • Provider Portability: Switch from OpenAI to Anthropic to Ollama through configuration
  • Container Isolation: Each agent runs in its own Docker container with configurable egress filtering
  • Audit Trail: Hash-chained logs for operation tracking

Key Capabilities

Community Indexes

Browse and add agents from community indexes

Local Execution

Deploy AI agents on your hardware with Docker
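As a rough sketch of what local execution can look like, the following Docker Compose fragment runs an agent on an isolated internal network with traffic routed through a gateway. All image names, service names, and environment variables here are hypothetical placeholders, not the platform's actual schema:

```yaml
# docker-compose.yml (illustrative sketch; images and names are hypothetical)
services:
  agent-gateway:
    image: example/agent-gateway:latest   # hypothetical proxy handling egress
    ports:
      - "8080:8080"
    networks: [agents]
  pdf-extractor:
    image: example/pdf-extractor:latest   # hypothetical community agent
    environment:
      - MODEL_PROVIDER=anthropic
    networks: [agents]                    # agent only reachable via the gateway
networks:
  agents:
    internal: true   # no direct internet access; egress goes through the proxy
```

The `internal: true` network setting is a standard Docker Compose mechanism for the kind of egress control described above: containers on the network can talk to each other but cannot reach the internet directly.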

Multiple Providers

Use OpenAI, Anthropic, AWS Bedrock, or local models
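To illustrate provider portability, switching models might look like editing a single configuration block. The file name and keys below are assumptions for the sketch, not AgentSystems' actual configuration format:

```yaml
# agentsystems-config.yml (hypothetical example; real schema may differ)
model_connections:
  default:
    provider: anthropic          # swap to openai, bedrock, etc.
    model: claude-sonnet-4
    api_key_env: ANTHROPIC_API_KEY
  local:
    provider: ollama             # local model, no external API calls
    model: llama3
    base_url: http://localhost:11434
```

The point of this pattern is that agents reference a named connection (`default`, `local`) rather than a hard-coded provider, so changing vendors is a config edit rather than a code change.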

Audit Logging

Operation logging to a PostgreSQL database
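The hash-chained audit trail mentioned above works by linking each log entry to the hash of the previous one, so tampering with any record invalidates the rest of the chain. A minimal sketch of the idea in Python (function names and record layout are illustrative, not AgentSystems' implementation):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(log, entry):
    """Append an operation record, chained to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = json.dumps({"entry": entry, "prev_hash": prev_hash}, sort_keys=True)
    log.append({
        "entry": entry,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })
    return log

def verify_chain(log):
    """Recompute every hash; any edited or reordered entry breaks the chain."""
    prev_hash = GENESIS
    for record in log:
        payload = json.dumps(
            {"entry": record["entry"], "prev_hash": prev_hash}, sort_keys=True
        )
        if record["prev_hash"] != prev_hash:
            return False
        if record["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = record["hash"]
    return True

log = []
append_entry(log, {"agent": "pdf-extractor", "op": "invoke"})
append_entry(log, {"agent": "pdf-extractor", "op": "complete"})
print(verify_chain(log))           # True: chain is intact
log[0]["entry"]["op"] = "tampered"
print(verify_chain(log))           # False: first hash no longer matches
```

In a real deployment the records would live in PostgreSQL rather than a Python list, but the verification logic is the same: recompute each hash from the stored entry and the previous hash, and compare.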

Use Cases

The platform targets scenarios requiring local AI execution:
  • Processing documents on your infrastructure
  • Running AI workloads on-premise
  • Keeping data within your own environment
  • Testing multiple AI providers
  • Building containerized AI agents