Bring Your Own LLM

Savvy works with any LLM that exposes an OpenAI-compatible API.

Let’s modify Savvy’s config file to use codellama:13b running locally via Ollama.

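If you haven’t already, pull the model so Ollama can serve it. The command below is a sketch that assumes a standard Ollama install listening on its default port, 11434:

ollama pull codellama:13b
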
~/.config/savvy/config.json
{
"token":"svy_F1H.....",
+ "llm_base_url": "http://localhost:11434/v1",
+ "llm_model_name" : "codellama:13b"
}

Save your changes and Savvy’s CLI will immediately start routing all LLM calls to the configured model and base URL.
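
If Savvy’s requests fail, you can sanity-check the local endpoint directly. The curl call below is only illustrative: it hits Ollama’s OpenAI-compatible chat completions route and assumes the default port and the model name configured above:

curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "codellama:13b", "messages": [{"role": "user", "content": "Say hello"}]}'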