Any server that implements the OpenAI-compatible API (Ollama, LM Studio, Groq, etc.) can be used.

General configuration:
| Field | Value |
| --- | --- |
| API Provider | OpenAI |
| Base URL | `http://<HOST>:<PORT>/v1` |
| API Key | Server token, or any value if not required |
| Skip Authentication | On (for local servers without auth) |
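The settings above map directly onto a plain HTTP request against the server's OpenAI-compatible endpoint. The sketch below builds such a request with only the standard library; the host, API key, and model name are placeholder assumptions, not values from this guide:

```python
import json
import urllib.request

# Placeholder values standing in for your own server details.
BASE_URL = "http://localhost:11434/v1"  # Base URL field (note the /v1 suffix)
API_KEY = "ollama"                      # API Key field (any value if auth is off)

payload = {
    "model": "qwen2.5:14b",
    "messages": [{"role": "user", "content": "Hello"}],
}

# The OpenAI-compatible chat endpoint lives under <Base URL>/chat/completions.
request = urllib.request.Request(
    BASE_URL + "/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)

print(request.full_url)                     # http://localhost:11434/v1/chat/completions
print(request.get_header("Authorization"))  # Bearer ollama
```

The API key is always sent as a `Bearer` token; servers that don't enforce authentication simply ignore it, which is why any value works there.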
Server-specific notes:
Ollama
Ollama is a local model server with OpenAI-compatible API.
| Field | Value |
| --- | --- |
| API Provider | OpenAI |
| Base URL | `http://<HOST>:11434/v1` |
| API Key | Any value (e.g. `ollama`) |
| Skip Authentication | On |
Always enable Skip Authentication for Ollama. Without it, the integration attempts to validate your API key against Ollama’s endpoint, which can cause misleading errors during setup.
The /v1 suffix in the Base URL is required. Using http://<HOST>:11434 without /v1 causes a 404 error during setup.
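Because a missing `/v1` suffix is such a common cause of setup failures, a small helper can normalize the URL before use. This is a hypothetical convenience function, not part of any integration:

```python
def normalize_base_url(url: str) -> str:
    """Ensure an OpenAI-compatible base URL ends with /v1 (hypothetical helper)."""
    url = url.rstrip("/")
    if not url.endswith("/v1"):
        url += "/v1"
    return url

print(normalize_base_url("http://localhost:11434"))      # http://localhost:11434/v1
print(normalize_base_url("http://localhost:11434/v1/"))  # http://localhost:11434/v1
```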
Network troubleshooting: If Ollama is unreachable from Home Assistant, verify that Ollama is bound to `0.0.0.0` and not just `127.0.0.1`. Test connectivity from the HA host with:
```shell
curl http://<HOST>:11434/v1/models
```
Model choice for function calling: Use a model with solid tool-calling support, such as qwen2.5:14b. Many older or smaller models do not reliably emit structured tool calls. If the assistant verbally acknowledges your request but nothing actually happens, add this to your system prompt:
```text
When you want to take action, you MUST use the execute_services function. You must wrap your commands in a list array.
```
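For context, "wrap your commands in a list array" refers to the shape of the tool-call arguments the model is expected to emit. The exact schema depends on the integration's function definition, but a call to an `execute_services`-style function typically looks something like the following sketch; the `light.kitchen` entity and the field names are illustrative assumptions:

```json
{
  "list": [
    {
      "domain": "light",
      "service": "turn_on",
      "service_data": { "entity_id": "light.kitchen" }
    }
  ]
}
```

Models that emit a bare object instead of an array here are the usual cause of silently failing commands.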
LM Studio
LM Studio exposes an OpenAI-compatible local API.
| Field | Value |
| --- | --- |
| API Provider | OpenAI |
| Base URL | `http://<HOST>:1234/v1` |
| API Key | Any value (e.g. `lm-studio`) |
| Skip Authentication | On |
The default port is 1234. Adjust the host/port if running LM Studio on a different machine or port.

Model name: Enter the model identifier shown in LM Studio's loaded model list (e.g. `qwen2.5-7b-instruct`).

The same function-calling limitations and prompt-engineering tips that apply to Ollama also apply to LM Studio. Use models explicitly trained for tool use.
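If you are unsure of the exact model identifier, the server's `/v1/models` endpoint returns it in the standard OpenAI list format. The sketch below parses a sample response with the standard library; the response body shown is illustrative, not real LM Studio output:

```python
import json

# Sample /v1/models response in the OpenAI list format (contents are illustrative).
response_body = json.dumps({
    "object": "list",
    "data": [
        {"id": "qwen2.5-7b-instruct", "object": "model"},
        {"id": "llama-3.1-8b-instruct", "object": "model"},
    ],
})

# Each entry's "id" is the value to enter as the model name.
model_ids = [m["id"] for m in json.loads(response_body)["data"]]
print(model_ids)  # ['qwen2.5-7b-instruct', 'llama-3.1-8b-instruct']
```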
Groq
Groq provides a cloud API with OpenAI-compatible endpoints.