100% Self-Hosted & Open Source

AI Backends

A simple AI API server that supports multiple models and providers. It works with projects built with AI app builders and integrates easily with your business applications.

Enterprise Deployment: Interested in deploying AI Backends for your organization or business? Contact me

Quick Start

terminal
$ git clone https://github.com/donvito/ai-backends.git
$ cd ai-backends
$ bun install
$ bun run dev
Server running at: http://localhost:3000

Available Endpoints

Powerful AI capabilities accessible through simple REST API endpoints

Text Summarization

/api/summarize

Extract key insights and create concise summaries from long text content
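
A possible request, following the config/text shape documented under Unified Configuration below:

json request
{
  "config": {
    "provider": "openai",
    "model": "gpt-4"
  },
  "text": "Paste the long article or document to summarize here..."
}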

Language Translation

/api/translate

Translate text between multiple languages with high accuracy and context awareness
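
A sketch of a possible request; only config and text follow the documented shape, and the target-language field name is an illustrative assumption:

json request
{
  "config": {
    "provider": "openai",
    "model": "gpt-3.5-turbo"
  },
  "text": "Hello, how are you today?",
  "targetLanguage": "es"
}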

Sentiment Analysis

/api/sentiment

Analyze emotional tone and sentiment in text content with detailed confidence scores
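
A possible request using a self-hosted Ollama model, following the documented config/text shape:

json request
{
  "config": {
    "provider": "ollama",
    "model": "llama3.2"
  },
  "text": "The support team resolved my issue within an hour. Great experience!"
}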

Keyword Extraction

/api/keywords

Extract important keywords and phrases from text with relevance scoring
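
A possible request, again just config plus the text to analyze:

json request
{
  "config": {
    "provider": "ollama",
    "model": "mistral"
  },
  "text": "Paste the blog post or report to extract keywords from..."
}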

Email Reply Generation

/api/emailReply

Generate contextual and professional email responses based on conversation history
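
A sketch of a possible request; passing the email thread in the text field is an assumption, since only the config/text shape is documented:

json request
{
  "config": {
    "provider": "anthropic",
    "model": "claude-3-5-sonnet"
  },
  "text": "From: client@example.com\nSubject: Invoice question\n\nHi, could you resend last month's invoice? Thanks!"
}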

Image Analysis

/api/describeImage

Analyze and describe images with detailed visual understanding and context
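
A sketch of a possible request; how the image is supplied (URL vs. base64) and the field name are assumptions here:

json request
{
  "config": {
    "provider": "openai",
    "model": "gpt-4"
  },
  "imageUrl": "https://example.com/product-photo.jpg"
}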

Text Q&A

/api/askText

Answer questions based on provided text context using LLM comprehension
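
A sketch of a possible request; the question field name is illustrative, the rest follows the documented shape:

json request
{
  "config": {
    "provider": "anthropic",
    "model": "claude-3-haiku"
  },
  "text": "Paste the source document here...",
  "question": "What are the main conclusions?"
}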

Highlights Extraction

/api/highlights

Extract the most important highlights and key points from your text content
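
A possible request, using the documented config/text shape:

json request
{
  "config": {
    "provider": "ollama",
    "model": "llama3.2"
  },
  "text": "Paste the article or transcript to pull highlights from..."
}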

Project Planner

/api/project-planner

Generate structured project plans with tasks, timelines, and dependencies for complex projects
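
A possible request, assuming the project brief goes in the text field:

json request
{
  "config": {
    "provider": "openai",
    "model": "gpt-4"
  },
  "text": "Build a mobile app MVP with authentication, a dashboard, and push notifications in 8 weeks."
}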

Meeting Notes

/api/meeting-notes

Extract structured meeting notes with attendees, decisions, action items, and summaries
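
A possible request, assuming the raw transcript goes in the text field:

json request
{
  "config": {
    "provider": "anthropic",
    "model": "claude-3-5-sonnet"
  },
  "text": "Paste the raw meeting transcript here..."
}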

Supported LLM Providers

Choose from multiple AI providers with unified API access. Switch between providers without changing your code.

OpenAI

GPT Models

GPT-4 and GPT-5 with structured outputs & vision.

Function calling
Vision API

Ollama

Self-Hosted

Run open-source models locally with complete privacy.

100% private
JSON mode

Anthropic

Claude Models

Advanced reasoning AI with safety focus and large context windows.

200K+ context
Safety-focused

OpenRouter

Multiple Models

Access 480+ AI models through a single unified API.

480+ models
OpenAI-compatible

LMStudio

Local Desktop

Run LLMs locally with a user-friendly desktop interface.

Desktop GUI
Supports HuggingFace models and MLX format

Vercel AI Gateway

Multiple Models

Unified OpenAI-compatible gateway with routing, caching, and observability.

One endpoint, all your models
Eliminate overhead, ship faster
Intelligent failover, higher uptime

Unified Configuration

Switch between providers with a simple configuration change. No code refactoring required.

Request Configuration

json request
{
  "config": {
    "provider": "openai",
    "model": "gpt-4"
  },
  "text": "Your input text here..."
}
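
The same body can be POSTed to any endpoint; a minimal sketch with curl against the summarize endpoint on the quick-start server (the endpoint choice here is just an example):

terminal
$ curl -X POST http://localhost:3000/api/summarize \
    -H "Content-Type: application/json" \
    -d '{
      "config": { "provider": "openai", "model": "gpt-4" },
      "text": "Your input text here..."
    }'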

Provider Examples

"provider": "openai"
Models: gpt-4, gpt-3.5-turbo

"provider": "ollama"
Models: llama3.2, mistral, codellama

"provider": "anthropic"
Models: claude-3-5-sonnet, claude-3-haiku

"provider": "openrouter"
Models: any supported model

"provider": "lmstudio"
Models: any locally loaded model

"provider": "aigateway"
Models: route to upstream via gateway
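
Switching providers only means changing the config block; a sketch of the same request routed to a local Ollama model instead of OpenAI, with the rest of the body untouched:

json request
{
  "config": {
    "provider": "ollama",
    "model": "llama3.2"
  },
  "text": "Your input text here..."
}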