Configuration Options

Prompt
Type: Template (Jinja2)
Default: System prompt with Home Assistant context

Custom system prompt to guide AI behavior. Supports Jinja2 templates with access to context variables and custom functions. See the default prompt for a complete example.
Available Variables
Context Variables:
- ha_name - Home Assistant location name
- exposed_entities - List of exposed entity objects
- current_device_id - Device ID where the conversation was initiated
- user_input - User input object with conversation context
- skills - List of enabled skill objects

Custom Template Functions:
- {{extended_openai.exposed_entities()}} - Get the list of exposed entities
- {{extended_openai.working_directory()}} - Get the working directory path
- {{extended_openai.skill_dir(name)}} - Get a skill directory path by name

Home Assistant Template Functions:
- {{now()}} - Current datetime
- {{area_id(entity_id)}} - Get the area ID for an entity
- {{states}} - Access all entity states
- ...and all other Home Assistant template functions
Example Usage
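A minimal custom prompt sketch using the variables and functions listed above. This is illustrative, not the shipped default; the attribute names on the entity and skill objects (`entity_id`, `name`, `state`) are assumptions to verify against the default prompt:

```jinja2
You are a helpful assistant for {{ ha_name }}.
The current time is {{ now() }}.

You can see and control these entities:
{% for entity in exposed_entities %}
- {{ entity.entity_id }} ({{ entity.name }}): {{ entity.state }}
{% endfor %}

{% if skills %}
Enabled skills: {{ skills | map(attribute='name') | join(', ') }}
{% endif %}
```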
Model

Type: String
Default: gpt-5-mini

Select the OpenAI model to use. Examples:
- gpt-5-mini - Fast and cost-effective
- gpt-4o-mini - Balanced performance
- Custom models from compatible providers
Max Tokens

Type: Integer
Default: 500

Maximum number of tokens in the AI response. Controls response length and API costs. For o1/o3/o4/gpt-5 models, this is sent as max_completion_tokens.
Maximum Function Calls Per Conversation

Type: Integer
Default: 10

Limit the number of function calls in a single conversation to prevent infinite loops. Recommended value: 5-10.
Skills
Type: Multi-select

Select which skills to enable for this assistant. Skills provide reusable AI capabilities and instructions. See Skills Overview for detailed information.

Skills are loaded from /config/extended_openai_conversation/skills/. Use the download_skill service to add new skills.
Functions
Type: YAML
Default: execute_services, get_attributes, load_skill, bash

Define custom functions that the AI can call. Each function consists of:
- spec: OpenAI function schema defining parameters
- function: Implementation details (type and configuration)
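As an illustrative sketch of the spec/function pairing, here is a hypothetical custom function (the function name, parameters, and implementation are examples, not part of the defaults; verify the supported function types against the Functions section):

```yaml
- spec:
    name: turn_on_light            # hypothetical function name
    description: Turn on the lights in a given area
    parameters:
      type: object
      properties:
        area:
          type: string
          description: The area containing the lights
      required:
        - area
  function:
    type: script                   # assumed implementation type
    sequence:
      - service: light.turn_on     # standard Home Assistant service call
        target:
          area_id: "{{ area }}"
```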
Context Threshold

Type: Integer
Default: 40000

Maximum number of tokens in the conversation history before context truncation is triggered. When exceeded, messages are cleared according to the Context Truncate Strategy.
Context Truncate Strategy

Type: Select
Default: clear
Options: clear

Strategy for handling context when the threshold is exceeded.
- clear: Remove all previous messages and start fresh
Working Directory: All file operations and bash commands use /config/extended_openai_conversation/ as the default working directory. See File Functions and Bash Functions for details.

Advanced Options
Enable Advanced Options in the configuration to access these model-specific parameters. The available options depend on the selected model.

Top P

Type: Number (0-1, step 0.05)
Default: 1
Available for: Standard models (gpt-4, gpt-4o, etc.)

Nucleus sampling parameter: controls diversity by sampling only from the most probable tokens. Lower values make responses more focused and deterministic. Not available for reasoning models (o1, o3, o4, gpt-5).
Temperature

Type: Number (0-2, step 0.05)
Default: 0.5
Available for: Standard models (gpt-4, gpt-4o, etc.)

Controls randomness in responses:
- 0.0 - Deterministic, consistent answers
- 0.5 - Balanced creativity
- 1.0 - Creative, varied responses
- 2.0 - Maximum creativity

Not available for reasoning models (o1, o3, o4, gpt-5).
Reasoning Effort

Type: Select
Default: low
Options: low, medium, high
Available for: Reasoning models (o1, o3, o4, gpt-5)

Controls the reasoning depth for reasoning models:
- low: Faster responses, basic reasoning
- medium: Balanced reasoning depth
- high: Deep reasoning, slower responses

Higher effort increases response time and token usage.
Service Tier

Type: Select
Default: flex
Options: auto, default, flex, priority
Available for: o3, o4, gpt-5 models

Controls the service tier affecting response latency and throughput:
- auto: Automatic selection based on load
- default: Standard tier
- flex: Balanced performance and cost
- priority: Faster responses, higher cost
Shorten Tool Call ID

Type: Boolean
Default: false

Enable to shorten tool call IDs for compatibility with certain providers (e.g., Mistral AI).

Functions Configuration
Functions are defined in YAML format. You can add multiple function definitions to extend the AI's capabilities. See the Functions section for examples.
Skills Configuration
Skills are enabled through a simple multi-select interface:
- Download skills using the download_skill service
- In Options, select which skills to enable
- Skills provide context-specific instructions to the AI
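A service call sketch for fetching a skill. The service domain and data field names are assumptions for illustration only; check the actual schema for download_skill in Developer Tools → Actions before using it:

```yaml
# Hypothetical example — domain and field names are assumptions
service: extended_openai_conversation.download_skill
data:
  url: "https://example.com/skills/my_skill.zip"
```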
Logging
Monitor API requests and responses by adding a debug logger entry for this integration to your configuration.yaml. Debug logging captures:
- Function calls and responses
- OpenAI API interactions
- Skill loading and execution
- Error messages and debugging info
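A typical logger entry sketch, assuming the integration is installed under custom_components/extended_openai_conversation (adjust the component path to match your installation):

```yaml
# configuration.yaml — enable debug logging for the integration
logger:
  default: warning
  logs:
    custom_components.extended_openai_conversation: debug
```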
Best Practices
- Limit function calls: Set a reasonable max function calls (5-10) to prevent loops
- Test functions: Test each custom function individually before deploying
- Monitor logs: Enable logging during initial setup to debug issues
- Start simple: Begin with basic functions, add complexity gradually