Configure Extended OpenAI Conversation through the Options menu in your Voice Assistant settings.

Access Configuration

  1. Navigate to Voice Assistants: go to Settings > Voice Assistants.
  2. Edit Assistant: click on your Extended OpenAI Conversation assistant.
  3. Open Options: click the Options button to access the configuration.

Configuration Options

Options

Type: Template (Jinja2)
Default: System prompt with Home Assistant context
Custom system prompt to guide AI behavior. Supports Jinja2 templates with access to context variables and custom functions.

Available Variables

Context Variables:
  • ha_name - Home Assistant location name
  • exposed_entities - List of exposed entity objects
  • current_device_id - Device ID where conversation was initiated
  • user_input - User input object with conversation context
  • skills - List of enabled skill objects
Extended OpenAI Functions:
  • {{extended_openai.exposed_entities()}} - Get list of exposed entities
  • {{extended_openai.working_directory()}} - Get working directory path
  • {{extended_openai.skill_dir(name)}} - Get skill directory path by name
Home Assistant Template Functions:
  • All standard Home Assistant template functions (for example now(), states(), area_id()) are also available in the prompt template.

Example Usage

You are a helpful assistant for {{ha_name}}.
Current time: {{now()}}
Current area: {{area_id(current_device_id)}}
Working directory: {{extended_openai.working_directory()}}

Available devices:
{% for entity in exposed_entities -%}
- {{entity.name}} ({{entity.entity_id}}): {{entity.state}}
{% endfor -%}

{% if skills %}
Enabled skills: {% for skill in skills %}{{skill.name}}{% if not loop.last %}, {% endif %}{% endfor %}
{% endif %}
See the default prompt for a complete example.
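
The skill_dir helper works the same way when you need to point the model at a specific skill's files from the prompt. A minimal sketch, assuming a hypothetical skill named weather is enabled:

{# "weather" is a hypothetical skill name; substitute one of your enabled skills #}
Weather skill files: {{extended_openai.skill_dir('weather')}}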

Type: String
Default: gpt-5-mini
Select the OpenAI model to use. Examples:
  • gpt-5-mini - Fast and cost-effective
  • gpt-4o-mini - Balanced performance
  • Custom models from compatible providers

Type: Integer
Default: 500
Maximum number of tokens in the AI response. Controls response length and API costs.
For o1/o3/o4/gpt-5 models, this becomes max_completion_tokens.

Type: Integer
Default: 10
Limit the number of function calls in a single conversation to prevent infinite loops.
Without this limit, functions might call each other repeatedly, consuming API credits.
Recommended value: 5-10

Type: Multi-select
Select which skills to enable for this assistant. Skills provide reusable AI capabilities and instructions.
Skills are loaded from /config/extended_openai_conversation/skills/. Use the download_skill service to add new skills.
See Skills Overview for detailed information.

Type: YAML
Default: execute_services, get_attributes, load_skill, bash
Define custom functions that the AI can call. Each function consists of:
  • spec: OpenAI function schema defining parameters
  • function: Implementation details (type and configuration)
See Functions Overview for detailed information.

Type: Integer
Default: 40000
Maximum number of tokens in conversation history before context truncation is triggered.
When exceeded, messages are cleared according to the Context Truncate Strategy.

Type: Select
Default: clear
Options: clear
Strategy for handling context when the threshold is exceeded.
  • clear: Remove all previous messages and start fresh
Working Directory: All file operations and bash commands use /config/extended_openai_conversation/ as the default working directory. See File Functions and Bash Functions for details.

Advanced Options

Enable Advanced Options in the configuration to access these model-specific parameters. The available options depend on the selected model.

Type: Number (0-1, step 0.05)
Default: 1
Available for: Standard models (gpt-4, gpt-4o, etc.)
Nucleus sampling parameter that controls response diversity. Lower values make responses more focused and deterministic.
Not available for reasoning models (o1, o3, o4, gpt-5).

Type: Number (0-2, step 0.05)
Default: 0.5
Available for: Standard models (gpt-4, gpt-4o, etc.)
Controls randomness in responses:
  • 0.0 - Deterministic, consistent answers
  • 0.5 - Balanced creativity
  • 1.0 - Creative, varied responses
  • 2.0 - Maximum creativity
Not available for reasoning models (o1, o3, o4, gpt-5).

Type: Select
Default: low
Options: low, medium, high
Available for: Reasoning models (o1, o3, o4, gpt-5)
Controls the reasoning depth for reasoning models:
  • low: Faster responses, basic reasoning
  • medium: Balanced reasoning depth
  • high: Deep reasoning, slower responses
Higher effort increases response time and token usage.

Type: Select
Default: flex
Options: auto, default, flex, priority
Available for: o3, o4, gpt-5 models
Controls the service tier affecting response latency and throughput:
  • flex: Balanced performance and cost
  • priority: Faster responses, higher cost
  • auto: Automatic selection based on load
  • default: Standard tier

Type: Boolean
Default: false
Enable to shorten tool call IDs for compatibility with certain providers (e.g., Mistral AI).
Only enable if your API provider explicitly requires shortened tool call IDs. Most providers work with standard IDs.

Functions Configuration

Functions are defined in YAML format. Here’s the default configuration:
- spec:
    name: execute_services
    description: Use this function to execute service of devices in Home Assistant.
    parameters:
      type: object
      properties:
        list:
          type: array
          items:
            type: object
            properties:
              domain:
                type: string
                description: The domain of the service
              service:
                type: string
                description: The service to be called
              service_data:
                type: object
                description: The service data object to indicate what to control.
                properties:
                  entity_id:
                    type: string
                    description: The entity_id retrieved from available devices. It must start with domain, followed by dot character.
                required:
                - entity_id
            required:
            - domain
            - service
            - service_data
  function:
    type: native
    name: execute_service
You can add multiple function definitions to extend the AI’s capabilities. See the Functions section for examples.
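
As an illustration, a custom function can wrap a single Home Assistant service call. The sketch below assumes the script function type and the built-in shopping_list integration are available; treat it as a starting point and see Functions Overview for the function types supported here:

- spec:
    name: add_item_to_shopping_cart
    description: Add an item to the shopping cart
    parameters:
      type: object
      properties:
        item:
          type: string
          description: The item to add to the cart
      required:
      - item
  function:
    type: script
    sequence:
    - service: shopping_list.add_item
      data:
        name: '{{item}}'

The AI supplies the item argument, and the script step forwards it to the shopping_list.add_item service.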

Skills Configuration

Skills are enabled through a simple multi-select interface:
  1. Download skills using the download_skill service (see the example after this list)
  2. In Options, select which skills to enable
  3. Skills provide context-specific instructions to the AI
Learn more in Skills Overview.
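
For reference, download_skill is invoked like any other Home Assistant service, for example from Developer Tools or an automation. The sketch below assumes the service is registered under the extended_openai_conversation domain and accepts a source URL; the url field name is a placeholder, so check Skills Overview for the exact schema:

service: extended_openai_conversation.download_skill
data:
  # Placeholder parameter; see Skills Overview for the actual service fields
  url: https://github.com/example/skills/my_skill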

Logging

Monitor API requests and responses by adding this to your configuration.yaml:
logger:
  logs:
    custom_components.extended_openai_conversation: info
This logs:
  • Function calls and responses
  • OpenAI API interactions
  • Skill loading and execution
  • Error messages and debugging info
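
If the info level does not surface enough detail while troubleshooting, the same logger entry can be raised to debug (debug output is verbose, so revert it once you are done):

logger:
  logs:
    custom_components.extended_openai_conversation: debug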

Best Practices

  • Limit function calls: set a reasonable maximum number of function calls (5-10) to prevent loops.
  • Test functions: test each custom function individually before deploying.
  • Monitor logs: enable logging during initial setup to debug issues.
  • Start simple: begin with basic functions and add complexity gradually.

Next Steps