API Keys and Model Configuration
Configure multi-LLM providers in PromptOwl: add API keys for OpenAI, Anthropic, Google, Groq, and customize model parameters.
This guide explains how to configure AI model providers, manage API keys, and customize model settings in PromptOwl.
PromptOwl connects to multiple AI providers, allowing you to use different models for different prompts. You bring your own API keys for each provider you want to use.
Key concepts:
Provider: the company offering AI models (OpenAI, Anthropic, etc.)
Model: a specific AI model version (GPT-4, Claude Sonnet, etc.)
API key: your authentication credential for the provider
Parameters: settings such as temperature and max tokens
Supported AI Providers
PromptOwl supports six major AI providers:
OpenAI: GPT-4, GPT-4o, GPT-3.5-turbo, O1, O3
Anthropic (Claude): Claude Opus, Claude Sonnet, Claude Haiku
Google (Gemini): free tier available, then pay-per-use
Groq: LLaMA, Mixtral (fast inference)
Screenshot: Supported Providers
Adding API Keys
Accessing API Key Settings
Click Settings in the sidebar (or your profile)
Navigate to API Keys section
View all available providers
Find the provider section (e.g., OpenAI)
Paste your API key in the input field
System validates the key automatically
Status shows "API key Found" if valid
Screenshot: Adding API Key
Validation Process
When you save an API key:
System makes a test call to the provider
Validates the key is active and has permissions
Shows success or error message
Encrypts and stores valid keys securely
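For reference, you can sanity-check a key outside PromptOwl with a similarly lightweight call. The sketch below assumes the official openai Python SDK (v1+) and is only illustrative; PromptOwl's internal validation request may differ.

```python
from openai import OpenAI

def openai_key_is_valid(api_key: str) -> bool:
    """Cheap test call: listing models succeeds only if the key is active."""
    try:
        client = OpenAI(api_key=api_key)
        client.models.list()  # lightweight request, consumes no tokens
        return True
    except Exception as exc:  # e.g. AuthenticationError, PermissionDeniedError
        print(f"Key rejected: {exc}")
        return False
```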
Status Indicators
Getting API Keys
OpenAI:
Sign in or create account
Click "Create new secret key"
Copy the key immediately (only shown once)
Anthropic:
Sign in or create account
Go to Settings → API Keys
Google Gemini:
Sign in with Google account
Create or select a project
Model Selection
Selecting a Model for a Prompt
Open a prompt for editing
Click the Provider dropdown
Select your provider (only enabled providers show)
Select the specific model
Screenshot: Model Selection
Provider Dropdown
Shows only providers for which you have added a valid API key; disabled providers appear grayed out with a note to add an API key.
Model Dropdown
Shows all models for the selected provider:
Active models available for selection
Deprecated models shown with warning badge
Cannot select deprecated models
Each model shows:
Deprecation status (if applicable)
Special notes (e.g., "Tools not supported")
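Conceptually, the two dropdowns filter a model catalog. The sketch below is a hypothetical illustration of that filtering; the class and field names are not PromptOwl's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ModelOption:
    provider: str
    name: str
    deprecated: bool = False
    note: str = ""  # e.g. "Tools not supported"

CATALOG = [
    ModelOption("openai", "gpt-4o"),
    ModelOption("openai", "gpt-3.5-turbo", deprecated=True),
    ModelOption("anthropic", "claude-haiku"),
]

def selectable_models(provider: str, enabled_providers: set[str]) -> list[ModelOption]:
    """Only non-deprecated models from providers that have a valid API key."""
    if provider not in enabled_providers:
        return []  # disabled providers contribute nothing to the dropdown
    return [m for m in CATALOG if m.provider == provider and not m.deprecated]
```

For example, selectable_models("openai", {"openai"}) returns only gpt-4o here, because gpt-3.5-turbo is flagged as deprecated.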
Model Parameters
Fine-tune model behavior with these parameters.
Available Parameters
Parameter | Range | Description
Temperature | 0-2 (0-1 on some providers) | Creativity/randomness of responses
Max Tokens | 1 to the model's maximum | Maximum length of response
Top P | 0-1 | Nucleus sampling threshold
Presence Penalty | -2 to 2 | Encourage topic diversity
Accessing Parameters
In prompt editor, find Model section
Click LLM Settings to expand
Adjust sliders or input values
Changes save automatically
Temperature
Controls randomness in responses:
Low values: deterministic, focused responses
Medium values: balanced creativity (default)
High values: highly creative, varied responses
Use Cases:
Low (0-0.3): Factual Q&A, code generation
Medium (0.5-0.8): General conversation
High (1.0+): Creative writing, brainstorming
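To see the effect directly, you can send the same request at two temperatures. A minimal sketch assuming the official openai Python SDK and the gpt-4o-mini model (substitute any model you have enabled):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

for temperature in (0.2, 1.2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=temperature,
        messages=[{"role": "user", "content": "Suggest a name for a note-taking app."}],
    )
    print(f"temperature={temperature}: {response.choices[0].message.content}")
```

The low-temperature run tends to return the same safe suggestion every time; the high-temperature run varies between calls.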
Max Tokens
Limits response length.
Note: Higher limits allow longer responses, which consume more tokens and cost more. Set a limit appropriate for your use case.
Top P (Nucleus Sampling)
An alternative to temperature for controlling randomness:
Low values (e.g., 0.1): very focused, predictable output
1.0: the model considers the full range of possibilities
Tip: Use either temperature OR top_p, not both at extreme values.
Frequency Penalty (OpenAI)
Reduces repetition of words and phrases; higher values penalize tokens the model has already used.
Presence Penalty (OpenAI)
Encourages the model to introduce new topics; higher values make it actively explore new topics.
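Taken together, the parameters map onto an OpenAI-style request roughly as shown below. This is a sketch, not PromptOwl's exact payload; Claude and Gemini expose similar controls under different names and ranges.

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize the benefits of unit testing."}],
    temperature=0.7,        # balanced creativity
    max_tokens=512,         # cap response length (and cost)
    top_p=1.0,              # leave at 1.0 when steering with temperature
    frequency_penalty=0.0,  # >0 discourages repeating the same words
    presence_penalty=0.0,   # >0 encourages introducing new topics
)
print(response.choices[0].message.content)
```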
Parameter Support by Provider
Parameter | OpenAI | Claude | Gemini | Groq | Grok
*Claude Opus 4.1 and Sonnet 4.5 don't support temperature/top_p
Per-Prompt vs Per-Block Settings
Prompt-Level Settings
The default model and parameters for the entire prompt:
Set in the main Model section
Applies to simple prompts
Acts as default for sequential prompts
Block-Level Override
In Sequential and Supervisor prompts, each block can override the prompt-level model and parameters:
Find the Use Page Settings toggle in the block
Disable it to show block-specific settings
Select a different model and/or parameters
The block now uses its own settings
Screenshot: Block Model Override
When to Use Block Overrides
For simple or repetitive blocks: use a smaller, faster model
For blocks that need careful reasoning: use a low-temperature, capable model
Example: Mixed Model Workflow
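As an illustration, a sequential prompt might mix models along these lines. The block names, model identifiers, and settings below are hypothetical; in PromptOwl you would configure them through each block's LLM Settings rather than in code.

```python
# Hypothetical block configuration for a sequential prompt:
# cheap, fast models for extraction and drafting, a capable
# low-temperature model for the analysis step.
mixed_workflow = [
    {"block": "extract_facts", "provider": "groq",   "model": "llama-3.1-8b-instant", "temperature": 0.1},
    {"block": "analyze",       "provider": "openai", "model": "gpt-4o",               "temperature": 0.2},
    {"block": "draft_summary", "provider": "openai", "model": "gpt-4o-mini",          "temperature": 0.7},
]
```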
Model Deprecation
AI providers regularly deprecate older models.
How Deprecation Works
Provider announces model deprecation
PromptOwl marks model as deprecated
Deprecated models show warning badge
Cannot select deprecated models for new prompts
Existing prompts with deprecated models show alerts
Deprecation Indicators
Handling Deprecated Models
If your prompt uses a deprecated model:
Open the prompt for editing
You'll see a deprecation warning
Select a new model from the dropdown
Preventing Issues
Regularly review your prompts
Update models when new versions release
Test prompts after model changes
API Key Security
Your API keys are protected with multiple security measures.
Keys are encrypted before storage using a server-side encryption key (see the sketch after this list)
Keys never shown after saving
Keys stored in encrypted format in database
Only decrypted at runtime when needed
Each user's keys isolated
Only you can see/modify your keys
Keys not shared with team members
Each user adds their own keys
Never share your API keys
Use separate keys for different environments
Monitor usage in provider dashboards
Set spending limits with providers
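The encryption-at-rest pattern described above can be sketched as follows, assuming a symmetric Fernet key held server-side. This is illustrative only, not a description of PromptOwl's actual implementation.

```python
from cryptography.fernet import Fernet

SERVER_SIDE_KEY = Fernet.generate_key()  # in practice, loaded from a secrets manager
fernet = Fernet(SERVER_SIDE_KEY)

def store_api_key(plaintext_key: str) -> bytes:
    """Encrypt before writing to the database; the plaintext is never persisted."""
    return fernet.encrypt(plaintext_key.encode())

def use_api_key(ciphertext: bytes) -> str:
    """Decrypt only at runtime, when a request to the provider is about to be made."""
    return fernet.decrypt(ciphertext).decode()
```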
Enterprise Controls
Administrators can control model availability.
Enterprise feature settings can also control model-related behavior:
Show/hide model selection
Allow model switching in chat
Display model name in responses
Provider Visibility
Administrators can control which providers are available:
Enable/disable specific providers
Set default models for organization
Configure recommended settings
Set organization defaults:
Default model for new prompts
Choosing Models
Use Case | Recommended Model
Simple, fast tasks | GPT-4o-mini, Claude Haiku, Groq
Creative writing | GPT-4o (high temp), Claude
High-volume, cost-sensitive work | GPT-4o-mini, Claude Haiku
Parameter Tuning
For factual/technical prompts: low temperature (0-0.3)
For creative prompts: high temperature (1.0+)
For conversational prompts: medium temperature (0.5-0.8)
Cost Management
Use appropriate models - Don't use GPT-4 for simple tasks
Set token limits - Prevent unexpectedly long responses
Monitor usage - Check provider dashboards regularly
Test efficiently - Use smaller models during development
Multi-Provider Strategy
Primary provider for main workloads
Backup provider for redundancy
Specialized models for specific tasks
Cost-effective options for high-volume tasks
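If you also call providers directly (outside PromptOwl), a simple primary/backup pattern looks like the sketch below. It assumes the official openai and anthropic Python SDKs and hypothetical model choices; error handling is deliberately minimal.

```python
from openai import OpenAI
from anthropic import Anthropic

def ask_with_fallback(prompt: str) -> str:
    """Try the primary provider first, then fall back to the backup."""
    try:
        r = OpenAI().chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return r.choices[0].message.content
    except Exception:
        r = Anthropic().messages.create(
            model="claude-3-5-haiku-latest",
            max_tokens=512,
            messages=[{"role": "user", "content": prompt}],
        )
        return r.content[0].text
```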
Troubleshooting
API Key Not Working
Verify key is correct - Copy again from provider
Check key permissions - Some keys have restrictions
Verify billing - Ensure account has credits
Check key format - Keys start with provider-specific prefixes (for example, OpenAI keys begin with sk- and Anthropic keys with sk-ant-)
Model Not Available
Check API key - Provider may not be enabled
Check feature flags - Enterprise may restrict providers
Check deprecation - Model may be deprecated
Refresh page - Model list may need updating
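You can also check which models your key can reach directly with the provider. A sketch assuming the official openai SDK:

```python
from openai import OpenAI

client = OpenAI()  # uses OPENAI_API_KEY

available = {m.id for m in client.models.list()}
print("gpt-4o available:", "gpt-4o" in available)
```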
Responses Too Short/Long
Adjust max tokens - Increase for longer responses
Check prompt - May be requesting brevity
Model limits - Each model has a maximum output length
Unexpected Responses
Check temperature - Lower for more predictable
Review prompt - May need clearer instructions
Try different model - Some models better for certain tasks
Check penalties - May be affecting output
High Costs
Review token usage - Check provider dashboard
Lower max tokens - Prevent over-generation
Use efficient models - e.g., GPT-3.5 instead of GPT-4 where quality allows
Optimize prompts - Shorter prompts cost less
Provider-Specific Issues
OpenAI:
Rate limits: Wait and retry
Context length: Use model with larger context
Anthropic:
Rate limits: Implement backoff
No streaming: Check model compatibility
Google:
Quota limits: Request increase
Regional restrictions: Check availability
Quick Reference
API Key Locations
OpenAI: platform.openai.com/api-keys
Default Parameter Values
Model Selection Checklist