Connecting an LLM Provider
RepoFlow allows you to integrate an external Large Language Model (LLM) provider to enable AI-powered features.
Configuration Options
The environment variables required to configure an LLM provider are listed in the AI Environment Variables section.
Setting Up an LLM Provider
To enable AI features in a self-hosted RepoFlow instance, follow these steps:
1. Choose an LLM Provider: select from the supported providers (openai, ollama, anthropic, mistral, cohere, huggingface, azure, bedrock, vertex).
2. Configure Environment Variables: set the appropriate values for the environment variables in your configuration file.
3. Ensure Your License Includes AI Features: AI functionality requires a valid license that includes AI support. If you are unsure whether your license includes AI features, please contact us at hello@repoflow.io.
4. Restart RepoFlow: restart the RepoFlow server to apply the changes.
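How you restart depends on how RepoFlow is deployed. For a Docker Compose deployment (the service name repoflow is an assumption; adjust it to your compose file), the restart step might look like:

```shell
# Re-create the container so the updated environment variables are picked up.
# A plain `docker compose restart` does not re-read changed values from the
# compose file or an updated .env file; `up --force-recreate` does.
docker compose up -d --force-recreate repoflow
```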
Once configured, your RepoFlow instance will utilize the selected LLM provider to enable AI-related features.
For more details on AI capabilities, check AI Features.
RepoFlow supports openai, ollama, anthropic, mistral, cohere, huggingface, azure, bedrock, and vertex.
In general, API-key providers use LLM_MODEL_NAME and LLM_API_KEY; Ollama and Hugging Face additionally set LLM_SERVER_URL; and Bedrock and Vertex require further provider-specific settings such as credentials and LLM_REGION.
Examples
General Environment Setup
For a basic configuration, you can start with the following environment variables. More details are available in the AI Environment Variables section.
ENABLE_AI_FEATURES=true
LLM_MAX_CONTEXT_SIZE=8000
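As a quick sanity check before starting the server, the shell that launches RepoFlow can verify that the required variables are non-empty. This is a minimal sketch, not part of RepoFlow itself; the variable names come from this page and the helper function is hypothetical:

```shell
#!/bin/sh
# is_set: succeed only when the given value is non-empty.
is_set() { [ -n "$1" ]; }

# Example values matching the snippet above.
ENABLE_AI_FEATURES=true
LLM_MAX_CONTEXT_SIZE=8000

if is_set "$ENABLE_AI_FEATURES" && is_set "$LLM_MAX_CONTEXT_SIZE"; then
  echo "AI configuration present"
fi
```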
API-Key Providers Setup
Use the same pattern for openai, anthropic, mistral, cohere, and azure.
Replace the provider, API key, and model name to match your setup.
LLM_PROVIDER=mistral # openai | anthropic | mistral | cohere | azure
LLM_API_KEY=<your_provider_api_key>
LLM_MODEL_NAME="mistral-small-2506"
Ollama Setup
Update the LLM_MODEL_NAME to match a model available in your Ollama setup.
LLM_PROVIDER=ollama
LLM_SERVER_URL=http://127.0.0.1:11434
LLM_MODEL_NAME="llama3.2:3b"
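If RepoFlow cannot reach Ollama, you can check the server and the model outside RepoFlow first. These are standard Ollama CLI and REST API commands, not RepoFlow commands; adjust the URL if your Ollama server is not local:

```shell
# Pull the model referenced in LLM_MODEL_NAME so Ollama can serve it.
ollama pull llama3.2:3b

# List the locally available models via Ollama's REST API
# to confirm the server is reachable and the model is present.
curl http://127.0.0.1:11434/api/tags
```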
Hugging Face Setup
Replace <your_huggingface_api_key> with your actual Hugging Face API key.
LLM_PROVIDER=huggingface
LLM_SERVER_URL=https://router.huggingface.co
LLM_MODEL_NAME=HuggingFaceTB/SmolLM3-3B
LLM_API_KEY=<your_huggingface_api_key>
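To verify the API key against the router before restarting RepoFlow, a quick check is possible, assuming the Hugging Face router exposes its OpenAI-compatible endpoints under /v1 (keep the placeholder key as in the example above):

```shell
# Should return a model list when the key is valid; an auth error otherwise.
curl -H "Authorization: Bearer <your_huggingface_api_key>" \
  https://router.huggingface.co/v1/models
```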
Vertex Setup
Use one of the supported Vertex authentication methods.
The example below uses LLM_API_KEY; LLM_GOOGLE_CLOUD_CREDENTIALS_JSON or ambient ADC / Workload Identity are also supported.
LLM_PROVIDER=vertex
LLM_MODEL_NAME=gemini-2.5-flash
LLM_REGION=global
LLM_API_KEY=<your_vertex_api_key>
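If you prefer ambient Application Default Credentials over LLM_API_KEY, credentials can be provided out of band with the standard gcloud CLI; this is a sketch of one of the alternatives mentioned above, not a RepoFlow command:

```shell
# Authenticate once on the host so ambient ADC is available to RepoFlow.
gcloud auth application-default login

# Alternatively, set LLM_GOOGLE_CLOUD_CREDENTIALS_JSON to the contents
# of a service-account key instead of using an API key or ambient ADC.
```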
AWS Bedrock Setup
Replace the placeholders below with your actual AWS credentials and region.
LLM_PROVIDER=bedrock
LLM_MODEL_NAME="us.anthropic.claude-3-5-haiku-20241022-v1:0"
LLM_ACCESS_KEY_ID=<your_aws_access_key_id>
LLM_SECRET_ACCESS_KEY=<your_aws_secret_access_key>
LLM_REGION=<your_aws_region> # e.g. us-east-1
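Before restarting RepoFlow, you can sanity-check the credentials and model access with the standard AWS CLI. These commands are a suggestion for troubleshooting, not part of RepoFlow:

```shell
# Confirm the access key pair resolves to a valid AWS identity.
aws sts get-caller-identity

# Confirm the model ID used above is available in your region.
aws bedrock list-foundation-models --region us-east-1 \
  --query "modelSummaries[?contains(modelId, 'claude-3-5-haiku')]"
```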