API Keys and Model Configuration

Configure multi-LLM providers in PromptOwl - add API keys for OpenAI, Anthropic, Google, Groq, and customize model parameters.

This guide explains how to configure AI model providers, manage API keys, and customize model settings in PromptOwl.


Overview

PromptOwl connects to multiple AI providers, allowing you to use different models for different prompts. You bring your own API keys for each provider you want to use.

How It Works

Key Concepts

Concept      Description
Provider     Company offering AI models (OpenAI, Anthropic, etc.)
Model        Specific AI model version (GPT-4, Claude Sonnet, etc.)
API Key      Your authentication credential for the provider
Parameters   Settings like temperature and max tokens


Supported AI Providers

PromptOwl supports the following major AI providers:

OpenAI

  Models:  GPT-4, GPT-4o, GPT-3.5-turbo, O1, O3
  Billing: Pay-per-token usage

Anthropic (Claude)

  Models:  Claude Opus, Claude Sonnet, Claude Haiku
  Billing: Pay-per-token usage

Google (Gemini)

  Models:  Gemini Pro, Gemini Ultra
  Billing: Free tier available, then pay-per-use

Groq

  Models:  LLaMA, Mixtral (fast inference)
  Billing: Free tier available

Grok (xAI)

  Models:  Grok models
  Billing: Varies

Screenshot: Supported Providers

Adding API Keys

Accessing API Key Settings

  1. Click Settings in the sidebar (or your profile)

  2. Navigate to API Keys section

  3. View all available providers

Adding a Key

  1. Find the provider section (e.g., OpenAI)

  2. Paste your API key in the input field

  3. Click Save

  4. System validates the key automatically

  5. Status shows "API key Found" if valid

Screenshot: Adding API Key

Validation Process

When you save an API key:

  1. System makes a test call to the provider

  2. Validates the key is active and has permissions

  3. Shows success or error message

  4. Encrypts and stores valid keys securely
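The validation flow above can be sketched roughly as follows. This is an illustrative outline, not PromptOwl's actual implementation; the provider call is injected so the logic stands alone and can be exercised without a network:

```python
def validate_api_key(key, probe):
    """Return a UI-style status string for a provider API key.

    probe(key) should make a cheap authenticated request (e.g. listing
    available models) and raise on failure; it is injected for testability.
    """
    if not key or not key.strip():
        return "API key not Found"
    try:
        probe(key)                      # steps 1-2: test call, check permissions
    except Exception:
        return "API key not Found"      # step 3: show error status
    return "API key Found"              # step 4: key would be encrypted + stored
```

A real probe might hit the provider's model-listing endpoint with the key in an Authorization header.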

Status Indicators

Status                    Meaning
API key Found (green)     Key is valid and saved
API key not Found (red)   No key or invalid key
Validating...             Currently checking key

Getting API Keys

OpenAI:

  1. Sign in or create account

  2. Navigate to API Keys

  3. Click "Create new secret key"

  4. Copy the key immediately (only shown once)

Anthropic:

  1. Sign in or create account

  2. Go to Settings → API Keys

  3. Click "Create Key"

  4. Copy and save the key

Google Gemini:

  1. Sign in with Google account

  2. Click "Get API Key"

  3. Create or select a project

  4. Copy the generated key


Model Selection

Selecting a Model for a Prompt

  1. Open a prompt for editing

  2. Find the Model section

  3. Click the Provider dropdown

  4. Select your provider (only enabled providers show)

  5. Click the Model dropdown

  6. Select the specific model

Screenshot: Model Selection

Provider Dropdown

Shows only providers where:

  • Feature flag is enabled

  • You have added a valid API key

Disabled providers appear grayed out with a note to add an API key.

Model Dropdown

Shows all models for the selected provider:

  • Active models available for selection

  • Deprecated models shown with warning badge

  • Cannot select deprecated models

Model Information

Each model shows:

  • Model name and version

  • Deprecation status (if applicable)

  • Special notes (e.g., "Tools not supported")


Model Parameters

Fine-tune model behavior with these parameters.

Available Parameters

Parameter           Range    Description
Temperature         0-2      Creativity/randomness of responses
Max Tokens          Varies   Maximum length of response
Top P               0-1      Nucleus sampling threshold
Frequency Penalty   0-2      Reduce word repetition
Presence Penalty    0-2      Encourage topic diversity

Accessing Parameters

  1. In prompt editor, find Model section

  2. Click LLM Settings to expand

  3. Adjust sliders or input values

  4. Changes save automatically

Screenshot: LLM Settings
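Under the hood, these settings typically travel as fields in the provider request body. A minimal sketch of an OpenAI-style chat-completion payload (field names follow OpenAI's public API; other providers use slightly different names):

```python
def build_payload(model, prompt, temperature=1.0, max_tokens=4096,
                  top_p=1.0, frequency_penalty=0.0, presence_penalty=0.0):
    """Assemble an OpenAI-style chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
        "top_p": top_p,
        "frequency_penalty": frequency_penalty,
        "presence_penalty": presence_penalty,
    }
```

Adjusting a slider in LLM Settings effectively changes one of these fields on every run of the prompt.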

Temperature

Controls randomness in responses:

Value   Behavior
0       Deterministic, focused responses
0.7     Balanced creativity (default)
1.5-2   Highly creative, varied responses

Use Cases:

  • Low (0-0.3): Factual Q&A, code generation

  • Medium (0.5-0.8): General conversation

  • High (1.0+): Creative writing, brainstorming
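The ranges above can be captured as simple presets. The values below are illustrative starting points, not PromptOwl defaults:

```python
# Illustrative temperature presets for the use cases above; tune per prompt.
TEMPERATURE_PRESETS = {
    "factual": 0.2,         # Q&A, code generation
    "conversational": 0.7,  # general conversation
    "creative": 1.1,        # writing, brainstorming
}

def pick_temperature(use_case):
    return TEMPERATURE_PRESETS.get(use_case, 0.7)  # fall back to balanced
```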

Max Tokens

Limits response length:

Model Type   Typical Max
GPT-4        8,192 - 128,000
Claude       4,096 - 200,000
Gemini       8,192 - 32,768

Note: Higher limits allow longer, more expensive responses. Set an appropriate limit for your use case.

Top P (Nucleus Sampling)

Alternative to temperature:

Value   Behavior
0.1     Very focused, predictable
0.9     More varied word choices
1.0     Consider all possibilities

Tip: Use either temperature OR top_p, not both at extreme values.

Frequency Penalty (OpenAI)

Reduces repetition of words:

Value   Effect
0       No penalty (may repeat)
1       Moderate avoidance
2       Strong avoidance

Presence Penalty (OpenAI)

Encourages new topics:

Value   Effect
0       Stay on topic
1       Introduce new concepts
2       Actively explore new topics

Parameter Support by Provider

Parameter           OpenAI   Claude     Gemini   Groq   Grok
Temperature         Yes      Partial*   Yes      Yes    Yes
Max Tokens          Yes      Yes        Yes      Yes    Yes
Top P               Yes      Partial*   Yes      Yes    Yes
Frequency Penalty   Yes      No         No       No     No
Presence Penalty    Yes      No         No       No     No

*Claude Opus 4.1 and Sonnet 4.5 don't support temperature/top_p
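When one prompt is routed to different providers, unsupported parameters are usually dropped before the request goes out. A sketch based on the support table above (the provider keys and helper name are illustrative):

```python
# Optional parameters each provider accepts, per the table above.
SUPPORTED_PARAMS = {
    "openai": {"temperature", "max_tokens", "top_p",
               "frequency_penalty", "presence_penalty"},
    "claude": {"temperature", "max_tokens", "top_p"},
    "gemini": {"temperature", "max_tokens", "top_p"},
    "groq":   {"temperature", "max_tokens", "top_p"},
    "grok":   {"temperature", "max_tokens", "top_p"},
}

def filter_params(provider, params):
    """Drop parameters the target provider does not accept."""
    allowed = SUPPORTED_PARAMS.get(provider, set())
    return {k: v for k, v in params.items() if k in allowed}
```

Sending a penalty parameter to a provider that rejects it would otherwise fail the whole request.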


Per-Prompt vs Per-Block Settings

Prompt-Level Settings

The default model and parameters for the entire prompt:

  • Set in the main Model section

  • Applies to simple prompts

  • Acts as default for sequential prompts

Block-Level Override

In Sequential and Supervisor prompts, each block can override:

  1. Open a block for editing

  2. Find Use Page Settings toggle

  3. Disable to show block-specific settings

  4. Select different model/parameters

  5. Block uses its own settings

Screenshot: Block Model Override
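The override behavior amounts to a small settings merge. A hypothetical sketch, not PromptOwl's internals:

```python
def resolve_settings(page, block=None, use_page_settings=True):
    """Pick the effective model settings for one block.

    With "Use Page Settings" on (the default), the block inherits the
    prompt-level settings; switched off, block values win where provided.
    """
    if use_page_settings or not block:
        return dict(page)
    return {**page, **block}  # block values override page defaults
```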

When to Use Block Overrides

Scenario                 Recommendation
Simple analysis step     Use smaller, faster model
Creative writing block   Use higher temperature
Code generation block    Use low temperature, capable model
Summary block            Use cost-effective model

Example: Mixed Model Workflow
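As a sketch of what a mixed-model sequential prompt might look like (a hypothetical data shape with illustrative model names, not PromptOwl's storage format): a fast model triages, a stronger model does the heavy lifting, a cheap model wraps up.

```python
workflow = {
    "default_model": ("openai", "gpt-4o"),   # prompt-level default
    "blocks": [
        {"name": "analyze",   "model": ("groq", "llama-3.1-8b"),
         "temperature": 0.2},                # fast, cheap triage
        {"name": "implement", "model": ("anthropic", "claude-sonnet"),
         "temperature": 0.1},                # careful code generation
        {"name": "summarize", "model": ("openai", "gpt-4o-mini"),
         "temperature": 0.5},                # cost-effective summary
    ],
}
```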


Model Deprecation

AI providers regularly deprecate older models.

How Deprecation Works

  1. Provider announces model deprecation

  2. PromptOwl marks model as deprecated

  3. Deprecated models show warning badge

  4. Cannot select deprecated models for new prompts

  5. Existing prompts with deprecated models show alerts
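A guard like the following captures steps 2-4 of that flow. The deprecated-model list is illustrative:

```python
DEPRECATED_MODELS = {"gpt-3.5-turbo-0301", "claude-1"}  # illustrative list

def check_model(model):
    """Mirror the UI behaviour: warn on deprecated models, allow the rest."""
    if model in DEPRECATED_MODELS:
        return f"'{model}' is deprecated - select a replacement model"
    return "ok"
```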

Deprecation Indicators

Indicator                Location
Red "Deprecated" badge   Model dropdown
Warning icon             Prompt card
Alert message            Prompt editor

Handling Deprecated Models

If your prompt uses a deprecated model:

  1. Open the prompt for editing

  2. You'll see a deprecation warning

  3. Select a new model from the dropdown

  4. Save the prompt

Preventing Issues

  • Regularly review your prompts

  • Update models when new versions release

  • Test prompts after model changes


API Key Security

Your API keys are protected with multiple security measures.

Encryption

Measure          Description
AES Encryption   Keys encrypted before storage
Secret Key       Server-side encryption key
Hidden Display   Keys never shown after saving
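"Hidden Display" in practice usually means masking: after saving, a UI shows at most the key's tail. A generic sketch of that pattern:

```python
def mask_key(key, visible=4):
    """Show only the last few characters of a stored key."""
    if len(key) <= visible:
        return "*" * len(key)
    return "*" * (len(key) - visible) + key[-visible:]
```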

Storage

  • Keys stored in encrypted format in database

  • Only decrypted at runtime when needed

  • Each user's keys isolated

Access Control

  • Only you can see/modify your keys

  • Keys not shared with team members

  • Each user adds their own keys

Best Practices

  1. Never share your API keys

  2. Rotate keys periodically

  3. Use separate keys for different environments

  4. Monitor usage in provider dashboards

  5. Set spending limits with providers
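Practice 3 (separate keys per environment) is easiest with environment variables. A hedged sketch using a hypothetical PROVIDER_API_KEY_ENV naming scheme, not a PromptOwl convention:

```python
import os

def load_provider_key(provider, env="prod"):
    """Read an API key from an environment variable such as
    OPENAI_API_KEY_PROD, keeping dev and prod keys separate."""
    return os.environ.get(f"{provider.upper()}_API_KEY_{env.upper()}")
```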


Enterprise Controls

Administrators can control model availability.

Feature Flags

Enterprise settings can enable or disable model-related features:

Setting                   Effect
showModelSwitcher         Show/hide model selection
showModelSwitcherInChat   Allow model switching in chat
showModelInResponse       Display model name in responses

Provider Visibility

Administrators can control which providers are available:

  • Enable/disable specific providers

  • Set default models for organization

  • Configure recommended settings

Default Models

Set organization defaults:

  • Default model for new prompts

  • Default concierge model

  • Fallback models


Best Practices

Choosing Models

Use Case            Recommended Model
Complex reasoning   GPT-4, Claude Opus
Fast responses      GPT-4o-mini, Claude Haiku, Groq
Code generation     GPT-4o, Claude Sonnet
Creative writing    GPT-4o (high temp), Claude
Real-time info      Grok-2
Cost-sensitive      GPT-4o-mini, Claude Haiku

Parameter Tuning

For Factual/Technical:

  • Temperature: 0-0.3

  • Top P: 0.9

  • Max tokens: As needed

For Creative:

  • Temperature: 0.8-1.2

  • Top P: 1.0

  • Max tokens: Higher limit

For Conversational:

  • Temperature: 0.5-0.7

  • Balanced penalties

  • Moderate token limit

Cost Management

  1. Use appropriate models - Don't use GPT-4 for simple tasks

  2. Set token limits - Prevent unexpectedly long responses

  3. Monitor usage - Check provider dashboards regularly

  4. Test efficiently - Use smaller models during development
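With per-token billing, cost estimation is simple arithmetic. The per-million-token prices below are placeholders, not quotes; always check the provider's current pricing page:

```python
# Placeholder per-million-token prices (USD); real prices change frequently.
PRICE_PER_MTOK = {
    "gpt-4o":      {"input": 2.50, "output": 10.00},
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
}

def estimate_cost(model, input_tokens, output_tokens):
    """Estimate the dollar cost of one call from token counts."""
    p = PRICE_PER_MTOK[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
```

Comparing the two models above for the same workload makes the "use appropriate models" advice concrete.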

Multi-Provider Strategy

  1. Primary provider for main workloads

  2. Backup provider for redundancy

  3. Specialized models for specific tasks

  4. Cost-effective options for high-volume tasks


Troubleshooting

API Key Not Working

  1. Verify key is correct - Copy again from provider

  2. Check key permissions - Some keys have restrictions

  3. Verify billing - Ensure account has credits

  4. Check key format - Keys start with specific prefixes:

    • OpenAI: sk-...

    • Anthropic: sk-ant-...

    • Google: Various formats
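The prefix rules above enable a cheap pre-flight shape check before any network call. A sketch using just the prefixes listed; Google is skipped since its formats vary:

```python
# Note: Anthropic keys also start with "sk-", so detecting a provider
# from a key alone would need the longer "sk-ant-" prefix checked first.
KEY_PREFIXES = {"anthropic": "sk-ant-", "openai": "sk-"}

def looks_valid(provider, key):
    """Cheap shape check; providers without a known prefix pass."""
    prefix = KEY_PREFIXES.get(provider)
    return prefix is None or key.startswith(prefix)
```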

Model Not Available

  1. Check API key - Provider may not be enabled

  2. Check feature flags - Enterprise may restrict providers

  3. Check deprecation - Model may be deprecated

  4. Refresh page - Model list may need updating

Responses Too Short/Long

  1. Adjust max tokens - Increase for longer responses

  2. Check prompt - May be requesting brevity

  3. Model limits - Each model has maximum

Unexpected Responses

  1. Check temperature - Lower for more predictable

  2. Review prompt - May need clearer instructions

  3. Try different model - Some models better for certain tasks

  4. Check penalties - May be affecting output

High Costs

  1. Review token usage - Check provider dashboard

  2. Lower max tokens - Prevent over-generation

  3. Use efficient models - GPT-3.5 vs GPT-4

  4. Optimize prompts - Shorter prompts cost less

Provider-Specific Issues

OpenAI:

  • Rate limits: Wait and retry

  • Context length: Use model with larger context

Anthropic:

  • Rate limits: Implement backoff

  • Streaming issues: Check model compatibility
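"Implement backoff" typically means an exponential retry loop like the one below. RuntimeError stands in for the SDK's rate-limit exception, and the sleep function is injectable so the logic can be tested without waiting:

```python
import random
import time

def call_with_backoff(fn, retries=5, base=1.0, sleep=time.sleep):
    """Retry fn with exponential backoff plus jitter on rate-limit errors."""
    for attempt in range(retries):
        try:
            return fn()
        except RuntimeError:              # stand-in for a 429 error type
            if attempt == retries - 1:
                raise                     # out of retries; surface the error
            sleep(base * 2 ** attempt + random.random())
```

The jitter term spreads retries out so many clients don't hammer the provider in lockstep.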

Google:

  • Quota limits: Request increase

  • Regional restrictions: Check availability


Quick Reference

API Key Locations

Provider    Where to Get Key
OpenAI      platform.openai.com/api-keys
Anthropic   console.anthropic.com
Google      aistudio.google.com
Groq        console.groq.com
xAI         x.ai

Default Parameter Values

Parameter           Default
Temperature         1.0
Max Tokens          4096
Top P               1.0
Frequency Penalty   0
Presence Penalty    0
