Editing and Maintaining Prompts
This guide explains how to create, edit, version, and maintain prompts in PromptOwl for enterprise users.
Prompts are the core building blocks of your AI workflows in PromptOwl. This guide covers:
Understanding prompt types (Simple, Sequential, Supervisor)
Working with blocks and variations
Version control and publishing
Creating a New Prompt
Starting a New Prompt
Click the + New button on the Dashboard
Or navigate directly to Create Prompt from the sidebar
Screenshot: Create Prompt Button
Fill in the foundational details:
Name: A descriptive name for your prompt
Description: A brief explanation of what the prompt does
Tags: Keywords for organization and search
Screenshot: Prompt Header
Understanding Prompt Types
PromptOwl supports three types of prompts to handle different complexity levels:
Simple Prompt (Default)
Best for straightforward AI interactions with a single system context.
Use cases:
Structure:
A single system context
Optional RAG (document retrieval)
Screenshot: Simple Prompt
Sequential Prompt
Best for multi-step workflows where each step builds on the previous.
Use cases:
Multi-stage content creation (research → draft → polish)
Data processing pipelines
Complex analysis with multiple phases
Structure:
Multiple blocks executed in order
Each block can use a different AI model
Output from one block can feed into the next
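The chaining described above can be sketched in a few lines of Python. This is a conceptual model only, not PromptOwl's implementation: each block is represented as a function, and the output of one becomes the input of the next.

```python
# Conceptual sketch of a sequential prompt: each block is a step whose
# output feeds the next step. Block names and behaviors are illustrative.
def run_sequential(blocks, user_input):
    output = user_input
    for name, block in blocks:
        output = block(output)  # in PromptOwl, each block may use a different model
    return output

blocks = [
    ("research", lambda text: f"[research notes for: {text}]"),
    ("draft",    lambda notes: f"[draft based on {notes}]"),
    ("polish",   lambda draft: f"[polished {draft}]"),
]

result = run_sequential(blocks, "quarterly report")
```

The key property to notice is that order matters: reordering the list changes what each step receives.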
Screenshot: Sequential Tab
Supervisor Prompt
Best for complex tasks requiring multiple specialized AI agents.
Use cases:
Tasks requiring different expertise
Complex decision-making workflows
Structure:
One supervisor block orchestrates the workflow
Multiple agent blocks with specialized roles
Supervisor decides which agents to invoke
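The supervisor pattern can be sketched as a routing function that picks a specialized agent per request. In a real supervisor prompt the routing decision is made by an AI model; here a simple keyword check stands in, and the agent names are purely illustrative.

```python
# Sketch of the supervisor pattern: one orchestrator chooses among
# specialized agents. Agent names and routing logic are assumptions.
agents = {
    "math":    lambda q: f"[math agent answers: {q}]",
    "writing": lambda q: f"[writing agent answers: {q}]",
}

def supervisor(question):
    # A real supervisor asks an LLM which agent fits; a keyword check stands in here.
    key = "math" if any(t in question.lower() for t in ("sum", "calculate")) else "writing"
    return agents[key](question)
```

The point of the pattern is that each agent stays narrow and well-prompted, while the supervisor alone decides who runs.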
Screenshot: Supervisor Prompt
Working with Blocks
Blocks are the building units of Sequential and Supervisor prompts.
Adding a Block
Click Add Block at the bottom of the blocks list
Enter a block name (e.g., "Research", "Summarize", "Format")
Configure the block settings
Block Configuration Options
Each block has several configurable options:
Name: Descriptive name for this step
Prompt Source: "Inline" (write here) or "Use Existing" (reference another prompt)
Model: Select the model for this block
Tools: Enable tools available to this block
Documents: Connect documents for RAG
Variables: Map variables specific to this block
Transition Message: Optional message shown between blocks
Screenshot: Block Settings
Using Existing Prompts in Blocks
To reuse an existing prompt within a block:
Set Prompt Source to "Use Existing"
Browse and select the prompt to reference
Choose the version to use
Map any required variables
Screenshot: Select Existing Prompt
Working with Variations
Variations allow you to test different prompt versions within the same block.
Creating a Variation
Click the + button next to the variations tabs
Optionally name the variation
Screenshot: Add Variation
Switching Between Variations
Click on the variation tab to switch the active version. The active variation is used when:
Best Practices for Variations
Use variations for A/B testing different phrasings
Keep track of which variation performs best
Document why each variation was created
Managing Variables
Variables make your prompts dynamic and reusable.
Adding a Variable
Click Add Variable in the Variables section
Enter the variable name (e.g., company_name, tone)
Set a default value (optional)
Check "Show as question" if users should provide this value
Screenshot: Variable Manager
Using Variables in Prompts
Insert variables using curly braces: {variable_name}
Example:
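A minimal sketch of how single-brace substitution behaves, using Python's str.format as a stand-in (the variable names company_name and tone come from the guide; the substitution mechanics shown here are an analogy, not PromptOwl's internals):

```python
# {variable_name} placeholders are filled in at runtime.
template = "Write a {tone} announcement for {company_name}."
filled = template.format(tone="friendly", company_name="PromptOwl")
# filled is now "Write a friendly announcement for PromptOwl."
```

Note that the placeholder must match the variable name exactly, including case.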
Connecting Variables to Documents
To connect a variable to a document or folder:
Click Connect Data next to the variable
Select a document or folder
The document content will be injected at runtime
Variables can also receive their value from the output of a previous block (in Sequential and Supervisor prompts).
Configuring AI Models
Setting the Default Model
Click on the Model dropdown in the settings panel
Select your preferred AI provider:
OpenAI (GPT-4, GPT-4 Turbo, etc.)
Anthropic (Claude 3 Opus, Sonnet, Haiku)
Google (Gemini Pro, Gemini Ultra)
Choose the specific model version
Screenshot: Model Selection
Fine-tune model behavior with these settings:
Temperature: Controls randomness (lower = more focused)
Top P: Nucleus sampling threshold
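These two settings typically map onto request parameters in chat-completion style APIs. The payload below is a generic illustration (model name and message content are placeholders), showing where the values end up, not a specific provider's required schema:

```python
# Hypothetical request payload showing how sampling settings are passed along.
payload = {
    "model": "gpt-4-turbo",        # placeholder model name
    "temperature": 0.2,            # lower = more deterministic, focused output
    "top_p": 0.9,                  # nucleus sampling: sample from the top 90% probability mass
    "messages": [{"role": "user", "content": "Summarize this report."}],
}
```

A common rule of thumb is to tune either temperature or top_p, not both at once.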
Screenshot: Model Settings
Per-Block Model Configuration
In Sequential and Supervisor prompts, each block can use a different model:
Expand the block settings
Toggle "Override default model"
Select the model for this block
Adjust settings as needed
Version Control
PromptOwl automatically tracks all changes to your prompts.
Understanding Versions
Draft Version - Work in progress, not visible to users
Production Version - The active version users interact with
Version History - Complete record of all changes
Viewing Version History
Open a prompt in edit mode
Click the Versions panel on the right
Browse all previous versions with:
Screenshot: Version Panel
Saving a Draft
Click Save to create a new draft version without publishing. This allows you to:
Work on changes over multiple sessions
Test changes before going live
Collaborate with team members on updates
Publishing a Version
To make your changes live:
Click Publish
Add change notes describing what's different
The new version becomes the production version immediately.
Restoring a Previous Version
To revert to an earlier version:
Find the version you want to restore
Click Publish on that version
Note: This creates a new version based on the old one rather than deleting recent versions.
Testing Your Prompt
Test your prompt without affecting production:
Click Preview in the editor
Enter test values for variables
Best Practices for Testing
Test with various input scenarios
Try edge cases and unusual requests
Verify variable substitution works correctly
Check that document retrieval returns relevant content
Prompt Settings
General Settings
Remember conversation history
Show source references for RAG
Allow AI to use connected tools
Display Settings
Display prompt in user sidebar
Screenshot: Prompt Settings
Sharing and Permissions
Sharing with Individuals
Select permission level (View, Edit)
Sharing with Teams
Maintenance Best Practices
Regular Reviews
Review prompt performance monthly
Check annotation feedback for improvement areas
Update prompts when AI models improve
Documentation
Use descriptive names and descriptions
Add change notes when publishing
Tag prompts for easy organization
Version Management
Keep production versions stable
Use drafts for experimental changes
Document major changes in version notes
Troubleshooting
Prompt not responding as expected
Check variable values are correct
Verify the model settings
Review the system context for clarity
Test with simpler inputs first
Variables not being replaced
Ensure variable names match exactly (case-sensitive)
Check for typos in {variable_name} syntax
Verify variables have values assigned
Sequential blocks not passing data
Use {{block-key}} syntax to reference previous outputs
Verify block keys are correct
Check that previous blocks are generating output
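As a debugging aid, the reference-resolution step can be emulated outside PromptOwl. The {{block-key}} syntax below comes from this guide; the resolver function itself is an assumption, useful only for checking that your template and block keys line up:

```python
import re

# Resolve {{block-key}} references against a dict of earlier block outputs.
def resolve_block_refs(template, outputs):
    def repl(match):
        key = match.group(1)
        if key not in outputs:
            raise KeyError(f"unknown block key: {key}")
        return outputs[key]
    return re.sub(r"\{\{([\w-]+)\}\}", repl, template)

outputs = {"research": "three key findings"}
resolved = resolve_block_refs("Summarize: {{research}}", outputs)
```

A typo in a key raises immediately here, which mirrors the most common cause of blocks "not passing data": a key that doesn't match any prior block.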
Model errors
Verify your API keys are configured
Check if the model is deprecated (warning will show)
Try a different model to isolate the issue