# Requirements — External APIs & Dependencies

## Required External API

### OpenAI-compatible Chat Completions API

This application requires access to an AI model that supports the OpenAI Chat Completions API format. You have several options:
#### Option A: Google Gemini (Recommended)

- API: uses Google's OpenAI-compatible endpoint at `https://generativelanguage.googleapis.com/v1beta/openai/`
- Sign up: https://aistudio.google.com/apikey
- Recommended model: `gemini-3-pro`
- Auto-detected: if `GEMINI_API_KEY` is set in `.env`, the app automatically uses Gemini
- Configuration:

  ```
  GEMINI_API_KEY=your-gemini-key-here
  GEMINI_MODEL=gemini-3-pro
  ```
#### Option B: OpenAI (Paid)

- API: https://api.openai.com/v1
- Endpoint used: `POST /chat/completions`
- Sign up: https://platform.openai.com/signup
- Get API key: https://platform.openai.com/api-keys
- Recommended model: `gpt-4o-mini` (cheapest, good quality)
- Cost: ~$0.15 per 1M input tokens, ~$0.60 per 1M output tokens
- Configuration:

  ```
  OPENAI_API_KEY=sk-your-key-here
  OPENAI_MODEL=gpt-4o-mini
  ```
#### Option C: Ollama (Free, Local)

- Website: https://ollama.ai
- Install: `brew install ollama` (macOS) or download from the website
- Pull a model: `ollama pull llama3.2`
- Start server: `ollama serve` (runs on port 11434)
- Configuration:

  ```
  OPENAI_API_BASE=http://localhost:11434/v1
  OPENAI_API_KEY=ollama
  OPENAI_MODEL=llama3.2
  ```
#### Option D: Any OpenAI-compatible provider

- LM Studio, vLLM, Together AI, Groq, etc.
- Set `OPENAI_API_BASE` to the provider's base URL
- Set `OPENAI_API_KEY` to their API key
- Set `OPENAI_MODEL` to the model name
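Whichever option you choose, the app ultimately needs only three values: a base URL, an API key, and a model name. A minimal sketch of how the env-driven selection described above could work — the helper name `resolve_llm_config` is hypothetical, and the real app's detection logic may differ:

```python
import os

# Google's OpenAI-compatible endpoint (Option A above)
GEMINI_BASE = "https://generativelanguage.googleapis.com/v1beta/openai/"


def resolve_llm_config() -> dict:
    """Pick provider settings from environment variables.

    If GEMINI_API_KEY is set, prefer Gemini (Option A); otherwise fall
    back to the OPENAI_* variables, which cover OpenAI, Ollama, and any
    other OpenAI-compatible provider (Options B-D).
    """
    if os.environ.get("GEMINI_API_KEY"):
        return {
            "base_url": GEMINI_BASE,
            "api_key": os.environ["GEMINI_API_KEY"],
            "model": os.environ.get("GEMINI_MODEL", "gemini-3-pro"),
        }
    return {
        "base_url": os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1"),
        "api_key": os.environ.get("OPENAI_API_KEY", ""),
        "model": os.environ.get("OPENAI_MODEL", "gpt-4o-mini"),
    }
```

The returned dict maps directly onto the `base_url`, `api_key`, and `model` arguments of the `openai` Python client, which is why a single code path can serve all four provider options.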
The app sends requests in this format:

```json
{
  "model": "gpt-4o-mini",
  "messages": [
    {"role": "system", "content": "You are an expert AI cycling coach..."},
    {"role": "system", "content": "Current context: ...user schedule, workouts..."},
    {"role": "user", "content": "I want to train for a century ride"}
  ],
  "temperature": 0.7,
  "max_tokens": 2000
}
```
Expected response: a standard OpenAI chat completion object; the app reads the reply from `choices[0].message.content`.
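Extracting the reply from that response shape is straightforward. A sketch using a hand-built sample in the standard chat-completions shape (abridged, not a live API call):

```python
# Abridged sample response in the standard chat-completions shape.
sample_response = {
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Great goal! For a century ride, start with...",
            },
            "finish_reason": "stop",
        }
    ]
}


def extract_reply(response: dict) -> str:
    """Return the assistant text from choices[0].message.content."""
    return response["choices"][0]["message"]["content"]


print(extract_reply(sample_response))
```

Because every provider in Options A-D returns this same shape, the extraction code never needs provider-specific branches.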
## System Requirements

| Requirement | Minimum |
| --- | --- |
| Python | 3.11+ |
| Node.js | 18+ (20+ recommended) |
| npm | 9+ |
| Disk space | ~200MB (deps) |
| RAM | 2GB+ (8GB+ if running Ollama locally) |
## Python Dependencies (`backend/requirements.txt`)
- flask — Web framework
- flask-cors — Cross-origin resource sharing
- flask-sqlalchemy — ORM integration
- flask-migrate — Database migrations
- sqlalchemy — ORM
- python-dotenv — Environment variable loading
- openai — OpenAI Python client (works with any compatible API)
- gunicorn — Production WSGI server
- pytest / pytest-flask — Testing
## Node.js Dependencies (`frontend/package.json`)
- react / react-dom — UI framework
- react-router-dom — Client-side routing
- @fullcalendar/react — Calendar component
- @fullcalendar/daygrid — Month/day calendar view
- @fullcalendar/timegrid — Week/time calendar view
- @fullcalendar/interaction — Click/drag interactions
- vite — Build tool & dev server
- typescript — Type system
## No Other External APIs Required
- Database: SQLite (bundled with Python) — no external DB needed for dev
- No payment processing, no third-party auth, no external calendar sync
- The ONLY external dependency is an AI chat completions endpoint