# Quick Start

Get the LLM Gateway API running locally in just a few steps.
## Prerequisites
- Docker and Docker Compose installed
- API keys for at least one LLM provider (OpenAI, Groq, or Google)
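As a quick sanity check before starting, a short Python sketch (illustrative, not part of the project) can confirm the Docker CLIs from the prerequisites are on your `PATH`:

```python
import shutil

def missing_tools(tools=("docker", "docker-compose"), which=shutil.which):
    """Return the prerequisite CLIs that are not found on PATH."""
    # Note: newer Docker installs ship Compose as the `docker compose`
    # plugin rather than a separate `docker-compose` binary, so a missing
    # `docker-compose` is not necessarily a problem.
    return [t for t in tools if which(t) is None]

if __name__ == "__main__":
    missing = missing_tools()
    print("All prerequisites found" if not missing else f"Missing: {missing}")
```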
## Setup

### 1. Build the Docker Image

```bash
bash docker/build.sh
```

### 2. Configure Environment
Create a `.env` file in the project root:

```bash
# Required: add keys for the providers you want to use
OPENAI_API_KEY=your_openai_api_key
GROQ_API_KEY=your_groq_api_key
GOOGLE_API_KEY=your_google_api_key

# Optional: for local LLM servers
STIP_API_KEY=your_local_api_key
LOCAL_LLM_API_URL=http://localhost:8001

# Optional: API authentication
SECRET_TOKEN=your_secret_token
```

You only need to provide keys for the providers you plan to use; the service works with any combination of providers.
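To see at a glance which providers your environment enables, here is a small Python sketch. The variable names come from the `.env` example above; the mapping of `STIP_API_KEY` to the `LOCAL` provider is an assumption based on the "local LLM servers" comment:

```python
import os

# Environment variable names as listed in the .env example above.
# The LOCAL mapping is an assumption, not confirmed by the docs.
PROVIDER_KEYS = {
    "OPENAI": "OPENAI_API_KEY",
    "GROQ": "GROQ_API_KEY",
    "GOOGLE": "GOOGLE_API_KEY",
    "LOCAL": "STIP_API_KEY",
}

def configured_providers(env=os.environ):
    """Return the providers that have an API key set."""
    return [name for name, var in PROVIDER_KEYS.items() if env.get(var)]

if __name__ == "__main__":
    print("Configured providers:", configured_providers())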
### 3. Start the Service

```bash
docker-compose -f docker-compose.dev.yaml up
```

The API will be available at http://localhost:8000.
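The container may take a moment to boot, so scripts that call the API right away can fail. A minimal polling sketch using only the standard library, assuming the root endpoint returns the status JSON shown below:

```python
import json
import time
import urllib.error
import urllib.request

def is_up(status_payload):
    """True when the gateway's status payload reports the API as up."""
    return status_payload.get("api_status") == "up"

def wait_for_api(base_url="http://localhost:8000", attempts=30, delay=1.0):
    """Poll the root endpoint until the gateway comes up, or give up."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(base_url, timeout=2) as resp:
                payload = json.loads(resp.read().decode("utf-8"))
                if is_up(payload):
                    return payload
        except (urllib.error.URLError, OSError):
            pass  # not listening yet; retry after a short delay
        time.sleep(delay)
    raise RuntimeError(f"API at {base_url} did not come up")
```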
## Verify Installation

### Check API Status

Visit http://localhost:8000 to see the API status and available providers:
```json
{
  "status": "success",
  "api_status": "up",
  "providers": ["OPENAI", "GROQ", "GOOGLE", "LOCAL"]
}
```

### View Interactive Documentation

Open http://localhost:8000/docs to explore the interactive API documentation, powered by Swagger UI.
## Your First Request

### Text Generation
```bash
curl -X POST "http://localhost:8000/llm/generation" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o",
    "prompt": "You are a helpful assistant.",
    "user_prompt": "Explain what an API is in one sentence.",
    "temperature": 0.7,
    "max_tokens": 100
  }'
```

### With Authentication (if SECRET_TOKEN is set)
```bash
curl -X POST "http://localhost:8000/llm/generation" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your_secret_token" \
  -d '{
    "model": "openai/gpt-4o",
    "prompt": "You are a helpful assistant.",
    "user_prompt": "What is machine learning?"
  }'
```

## Next Steps
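The same request can be made from Python with only the standard library. This sketch mirrors the curl examples above: the endpoint path, field names, and `X-API-Key` header come from those examples; everything else (function name, defaults) is illustrative:

```python
import json
import urllib.request

def build_generation_request(base_url, model, prompt, user_prompt,
                             temperature=0.7, max_tokens=100, api_key=None):
    """Build a POST to /llm/generation; fields mirror the curl examples."""
    payload = {
        "model": model,
        "prompt": prompt,
        "user_prompt": user_prompt,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    headers = {"Content-Type": "application/json"}
    if api_key:  # only needed when SECRET_TOKEN is set on the server
        headers["X-API-Key"] = api_key
    return urllib.request.Request(
        f"{base_url}/llm/generation",
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )

# Example usage (with the service running):
# req = build_generation_request(
#     "http://localhost:8000",
#     model="openai/gpt-4o",
#     prompt="You are a helpful assistant.",
#     user_prompt="Explain what an API is in one sentence.",
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read().decode("utf-8")))
```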
- Explore API endpoints for classification and summarization
- Learn about configuration options
- Set up streaming responses for real-time text generation
## Troubleshooting

### Port Already in Use
If port 8000 is already in use, change the port mapping in docker-compose.dev.yaml:

```yaml
ports:
  - "8080:8000"  # expose the API on host port 8080 instead of 8000
```

### View Logs
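To check whether a port is taken before picking a new mapping, a small stdlib Python sketch (illustrative, not part of the project):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 when the connection succeeds,
        # i.e. something is listening on that port.
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    print("port 8000 in use:", port_in_use(8000))
```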
Logs are written to the `logs/` directory:

```bash
tail -f logs/llm.log
```