llm-server
Here are 12 public repositories matching this topic...
A lightweight LiteLLM server boilerplate pre-configured with uv and Docker for hosting your own OpenAI- and Anthropic-compatible endpoints. Includes LibreChat as an optional web UI.
Updated Dec 8, 2025 · Python
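As an illustration of what an OpenAI-compatible endpoint buys you, here is a minimal sketch of calling such a proxy with the official openai Python client; the base URL, API key, and model name are placeholder assumptions, not values from this repository.

```python
# Point the standard OpenAI client at a locally hosted proxy instead of
# api.openai.com. The base_url, api_key, and model name are illustrative
# assumptions, not values taken from this repository.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # assumed local proxy address
    api_key="sk-local-placeholder",    # many local proxies accept any key
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # whichever model the proxy is configured to route
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```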
Function-calling API for LLMs from multiple providers.
Updated Aug 10, 2024 · Go
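Function calling in OpenAI-style APIs means attaching a JSON Schema description of callable tools to the request. A hedged sketch of that request shape, assuming an OpenAI-compatible endpoint; the URL, model, and tool definition are illustrative.

```python
# Sketch of an OpenAI-style function-calling request; the endpoint,
# model, and tool definition are assumptions for illustration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
)
# If the model decided to call a function, the call arrives here:
print(response.choices[0].message.tool_calls)
```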
Self-hosted local LLM server & AI control plane — OpenAI-compatible proxy for Ollama, multi-agent orchestration, unified dashboard, and zero API bills.
Updated May 12, 2026 · Python
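Most OpenAI-compatible proxies also expose the standard /v1/models listing endpoint, which is handy for checking what the proxy actually routes to; the host and port below are assumptions.

```python
# List the models an OpenAI-compatible proxy exposes; /v1/models is part
# of the OpenAI API surface, but the host and port here are assumptions.
import requests

resp = requests.get("http://localhost:8000/v1/models", timeout=10)
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model["id"])
```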
A complete, menu-driven AI model interface for Windows that simplifies running local GGUF language models with llama.cpp. This tool automatically manages dependencies, provides multiple interaction modes, and prioritizes user privacy through fully offline operation.
Updated Jan 30, 2026 · PowerShell
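Tools like this ultimately drive llama.cpp's own HTTP server. A sketch of launching it from Python, assuming the llama-server binary from llama.cpp is on PATH and a GGUF file exists at the placeholder path.

```python
# Launch llama.cpp's llama-server for a local GGUF model; it then serves
# an OpenAI-compatible API on the chosen port. Paths are placeholders.
import subprocess

proc = subprocess.Popen([
    "llama-server",              # assumes llama.cpp binaries are on PATH
    "-m", "models/model.gguf",   # placeholder path to a GGUF model file
    "--port", "8080",
])
print(f"llama-server started with PID {proc.pid}")
```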
API server for the `llm` CLI tool.
Updated Aug 12, 2025 · Python
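The `llm` package also has a Python API that a server like this would presumably wrap; a minimal sketch, assuming a model alias that your local `llm` install actually has configured.

```python
# Prompt a model through the llm package's Python API; "gpt-4o-mini" is
# an assumed alias and only works if a matching model/key is configured.
import llm

model = llm.get_model("gpt-4o-mini")
response = model.prompt("Summarize what an LLM API server does.")
print(response.text())
```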
PHP front end for hosting local LLMs (run it via VS Code or basic PHP execution methods, or add it to an existing project).
Updated Jul 13, 2025 · PHP
OpenAI-compatible local inference server for Apple Silicon using MLX. FastAPI server with Chat Completions and Responses APIs, multi-turn conversations, and streaming support.
Updated Mar 7, 2026 · Python
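Streaming works the same way against any OpenAI-compatible Chat Completions endpoint; in this sketch the base URL and model id are assumptions.

```python
# Stream tokens from a local Chat Completions endpoint as they arrive;
# the base_url and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

stream = client.chat.completions.create(
    model="mlx-community/Llama-3.2-3B-Instruct-4bit",  # assumed model id
    messages=[{"role": "user", "content": "Write a haiku about Apple Silicon."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```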
A flexible FastAPI-based framework for handling AI tasks using Large Language Models (LLMs). Supports multiple providers, extensible tasks and routers, Redis caching, and OpenAI integration. Easily scalable for various LLM-based applications.
Updated Sep 3, 2024 · Python
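A minimal sketch of the task-endpoint pattern such a framework suggests, written directly in FastAPI; the route name and request schema are illustrative, not this project's actual interface.

```python
# Minimal FastAPI task endpoint in the spirit described above; the route
# and request schema are illustrative, not this framework's real API.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class TaskRequest(BaseModel):
    task: str  # e.g. "summarize", "translate"
    text: str

@app.post("/tasks")
def run_task(req: TaskRequest) -> dict:
    # A real handler would dispatch to an LLM provider here.
    return {"task": req.task, "result": f"(stub) processed {len(req.text)} chars"}

# Run with: uvicorn main:app --reload
```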
A simple, unified LLM server wrapper with intelligent routing based on model ID.
Updated May 11, 2026 · Python
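Routing on model ID typically reduces to matching the model name against a table of upstream endpoints. A toy sketch of that idea, with made-up prefixes and URLs.

```python
# Toy model-ID router: pick an upstream base URL by model-name prefix.
# The prefixes and URLs are made up for illustration.
UPSTREAMS = {
    "gpt-": "https://api.openai.com/v1",
    "claude-": "https://api.anthropic.com/v1",
    "llama": "http://localhost:11434/v1",  # e.g. a local Ollama endpoint
}

def route(model_id: str) -> str:
    for prefix, base_url in UPSTREAMS.items():
        if model_id.startswith(prefix):
            return base_url
    raise ValueError(f"no upstream configured for model {model_id!r}")

print(route("llama3.1:8b"))  # -> http://localhost:11434/v1
```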
Host an LLM and make it accessible on a network via API.
Updated May 12, 2026 · Python
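Making a locally hosted API reachable from other machines usually comes down to the bind address; a sketch using uvicorn, where the module path is a placeholder.

```python
# Bind to 0.0.0.0 so other machines on the LAN can reach the API;
# 127.0.0.1 would keep it local-only. "main:app" is a placeholder path.
import uvicorn

if __name__ == "__main__":
    uvicorn.run("main:app", host="0.0.0.0", port=8000)
```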
Run local AI models in VS Code with automatic model detection, server startup, and a built-in MCP endpoint; no cloud or manual setup required.
Updated May 13, 2026 · PHP
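MCP is JSON-RPC 2.0 under the hood. A sketch of the payload a client would send to enumerate a server's tools; the method name follows the MCP spec, but the endpoint URL is an assumption and real clients perform an initialize handshake first.

```python
# Shape of an MCP "tools/list" request (JSON-RPC 2.0). Real MCP clients
# run an initialize handshake before this; the URL below is assumed.
import json

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}
print(json.dumps(payload, indent=2))
# e.g. requests.post("http://localhost:3000/mcp", json=payload)  # assumed URL
```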