Understanding Agents and RAG

Build AI agents in PromptOwl - simple prompts, sequential workflows, and multi-agent supervisors with RAG knowledge retrieval and citations.

This comprehensive guide explains how to build AI agents in PromptOwl, including the different agent types, how knowledge retrieval (RAG) works, version management, and citation systems.


Table of Contents

  • Agent Overview
  • Simple Agents
  • Sequential Agents
  • Supervisor Agents (Multi-Agent)
  • Version Management
  • RAG: Retrieval Augmented Generation
  • Citations
  • Best Practices
  • Troubleshooting

Agent Overview

In PromptOwl, an "agent" is an AI-powered prompt that can answer questions, perform tasks, and retrieve information from your documents. Agents come in three types:

| Type | Best For | Complexity |
| --- | --- | --- |
| Simple | Single-purpose tasks, Q&A | Low |
| Sequential | Multi-step workflows | Medium |
| Supervisor | Complex multi-agent orchestration | High |

Choosing the Right Agent Type

Start with a Simple agent for single-purpose Q&A, move to Sequential when a task breaks into distinct steps, and use Supervisor when queries need to be routed to different specialists. The sections below cover each type in detail.

Simple Agents

Simple agents are the foundation of PromptOwl. They consist of a single system context that defines the AI's behavior.

When to Use Simple Agents

  • FAQ bots and knowledge bases

  • Single-purpose assistants (customer support, onboarding)

  • Document Q&A with citations

  • Basic content generation

Creating a Simple Agent

  1. Click + New on the Dashboard

  2. Keep the default Simple type selected

  3. Enter your agent details:

    • Name: Descriptive name (e.g., "Product Support Bot")

    • Description: What this agent does

  4. Write your System Context

Screenshot: Simple Agent Creation

System Context Best Practices

The system context defines your agent's personality, capabilities, and constraints:
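
A focused system context states the agent's role, its scope, and what to do when it is unsure. For example, a support bot's context might look like this (illustrative; adapt the wording to your use case):

```
You are the Product Support Bot, a friendly assistant for our customers.

- Answer product questions clearly and concisely.
- If you do not know the answer, say so rather than guessing.
- Keep responses short, professional, and easy to follow.
```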

Adding Knowledge with RAG

To give your agent access to documents:

  1. Go to the Variables section

  2. Click Add Variable

  3. Name it (e.g., knowledge_base)

  4. Click Connect Data

  5. Select a folder from your Data Room

  6. Reference it in your system context: {knowledge_base}

Screenshot: Connecting Knowledge Base

Your system context then references the connected folder.
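
A minimal sketch, assuming the variable is named knowledge_base as above:

```
You are the Product Support Bot.

Answer customer questions using the documents below. If the answer is not
in the documents, say you are not sure and suggest contacting support.

{knowledge_base}
```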

Simple Agent Architecture
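
At a high level, a simple agent handles each message roughly like this:

```
User message
  → retrieve relevant chunks from the connected folder(s)
  → inject them into the system context (e.g., at {knowledge_base})
  → AI model generates the answer
  → response returned with citations
```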


Sequential Agents

Sequential agents execute multiple steps in order, where each step can have its own AI model, prompt, and data connections.

When to Use Sequential Agents

  • Multi-stage content creation (research → draft → edit)

  • Data processing pipelines

  • Analysis workflows (extract → analyze → summarize)

  • Quality assurance chains (generate → review → refine)

Creating a Sequential Agent

  1. Click + New on the Dashboard

  2. Change the type to Sequential

  3. You'll see the block-based interface

Screenshot: Sequential Type Selection

Understanding Blocks

Each block is a separate AI step. Blocks execute in order from top to bottom.

Block Configuration

| Setting | Description |
| --- | --- |
| Name | Descriptive step name (e.g., "Research", "Analyze") |
| Prompt Source | "Inline" (write here) or "Use Existing" (reference another prompt) |
| AI Model | Which model handles this step |
| Tools | Tools available to this block |
| Dataset | Documents for this block's RAG |
| Variables | Values passed to this block |

Screenshot: Block Configuration

Example: Content Creation Pipeline

Block 1: Research

  • Model: GPT-4 (good at analysis)

  • Prompt: "Research the following topic thoroughly: {topic}"

  • Dataset: Research documents folder

Block 2: Draft

  • Model: Claude 3 (good at writing)

  • Prompt: "Write a draft article based on: {{research}}"

  • Variables: research mapped to Block 1 output

Block 3: Polish

  • Model: GPT-4

  • Prompt: "Edit for clarity and grammar: {{draft}}"

  • Variables: draft mapped to Block 2 output

Passing Data Between Blocks

Use double curly braces {{block-key}} to reference previous block outputs:
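
For example, the Draft block in the pipeline above pulls in the Research block's output by its key:

```
Write a draft article based on: {{research}}
```

Here research is the auto-generated key of the "Research" block (see Block Keys below).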

Screenshot: Variable Mapping

Block Keys

Each block has a unique key used for referencing:

  • Auto-generated from block name (e.g., "Research" → research)

  • Used in {{block-key}} syntax

  • Can be customized in block settings

Using Existing Prompts in Blocks

Instead of writing inline, reference existing prompts:

  1. Set Prompt Source to "Use Existing"

  2. Click Select Prompt

  3. Choose from your prompt library

  4. Select the version to use

  5. Map any required variables

This enables:

  • Reusing tested prompts

  • Maintaining single source of truth

  • Version control within blocks

Screenshot: Select Existing Prompt

Sequential Agent Architecture
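
For the content creation pipeline above, the flow looks roughly like this:

```
{topic}
  → Block 1 "Research" (GPT-4 + research documents)  → {{research}}
  → Block 2 "Draft"    (Claude 3)                    → {{draft}}
  → Block 3 "Polish"   (GPT-4)                       → final article
```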

Human Messages Between Blocks

Add messages shown to users between steps:

  1. Expand block settings

  2. Find Human Message field

  3. Enter the message (e.g., "Analyzing your request...")

This provides feedback during long workflows.


Supervisor Agents (Multi-Agent)

Supervisor agents use a coordinator that orchestrates multiple specialized agents. The supervisor decides which agent(s) to invoke based on the task.

When to Use Supervisor Agents

  • Tasks requiring different expertise (legal + financial + technical)

  • Dynamic routing based on query type

  • Complex decision-making workflows

  • Parallel agent execution

Creating a Supervisor Agent

  1. Click + New on the Dashboard

  2. Change the type to Supervisor

  3. Configure the supervisor block and agent blocks

Screenshot: Supervisor Type Selection

The Supervisor Block

The supervisor block is marked with a special indicator. It:

  • Receives all user queries first

  • Decides which agent(s) to call

  • Coordinates responses from multiple agents

  • Synthesizes final answers

Supervisor Prompt Example
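
A minimal supervisor prompt might look something like this (illustrative; adapt the agent names and routing rules to your setup):

```
You are a supervisor coordinating a team of specialized agents.

For each user query:
1. Decide which agent (or agents) is best suited to answer.
2. Send the query to the selected agent(s).
3. Combine their responses into a single, clear answer.

If no agent is a good fit, answer directly and note that the question is
outside the team's specialties.
```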

Screenshot: Supervisor Block

Agent Blocks

Each non-supervisor block is a specialized agent:

| Setting | Description |
| --- | --- |
| Name | Agent specialty (e.g., "Legal Agent") |
| Prompt | Agent-specific instructions |
| Model | Can differ from other agents |
| Dataset | Agent-specific knowledge base |
| Tools | Agent-specific tools |

Example: Customer Support Supervisor

Supervisor Block:
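
An illustrative prompt for this supervisor might be:

```
You are the customer support coordinator. Route each query to the right
specialist:

- Billing Agent: payments, invoices, refunds, and charges
- Technical Agent: bugs, errors, and troubleshooting
- Account Agent: login, profile, and account management

If a query spans several areas, consult each relevant agent and merge
their answers into one response.
```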

Billing Agent Block:

  • Dataset: Billing policies folder

  • Prompt: "You are a billing specialist. Help with payment-related questions..."

Technical Agent Block:

  • Dataset: Product documentation folder

  • Prompt: "You are a technical support specialist. Troubleshoot issues..."

Account Agent Block:

  • Dataset: Account FAQ folder

  • Prompt: "You are an account specialist. Help with account management..."

Supervisor Agent Architecture
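
Conceptually, the flow for the customer support example is:

```
User query
  → Supervisor (decides routing)
      ├─ Billing Agent   (billing policies folder)
      ├─ Technical Agent (product documentation folder)
      └─ Account Agent   (account FAQ folder)
  → Supervisor synthesizes the selected agents' responses
  → final answer returned to the user
```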

Multi-Agent Invocation

The supervisor can call multiple agents for complex queries:
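
For example, a question like "Why was I charged twice after the app crashed during checkout?" touches both billing and technical support. The supervisor might handle it roughly as follows (illustrative):

```
Query: "Why was I charged twice after the app crashed during checkout?"
  → Supervisor: identifies a billing issue AND a technical issue
  → Billing Agent:   checks duplicate-charge handling in the billing policies
  → Technical Agent: checks known checkout issues in the product documentation
  → Supervisor: merges both answers into one response
```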


Version Management

Every change to your agent is tracked through the version system.

Understanding Versions

| Term | Definition |
| --- | --- |
| Draft | Work-in-progress, not visible to users |
| Production | Active version users interact with |
| Version History | Complete record of all changes |

Version Workflow
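
At a glance, the typical cycle is:

```
Edit → Save (new draft) → test in preview → Publish → production version
```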

Saving Drafts

Click Save to create a new draft version:

  • Preserves all current settings

  • Does not affect production

  • Allows testing before publishing

Screenshot: Save Button

Publishing a Version

Click Publish to make a version live:

  1. Click Publish in the editor

  2. Add change notes describing updates

  3. Confirm the publication

Screenshot: Publish Dialog

Viewing Version History

  1. Open the Versions panel (right sidebar)

  2. See all versions with:

    • Version number

    • Creation date

    • Creator name

    • Change notes (if added)

    • Production indicator

Screenshot: Version History

Comparing Versions

To understand what changed between versions:

  1. Click on a version to preview it

  2. Compare settings, prompts, and configurations

  3. Decide whether to restore or continue

Rolling Back

To revert to a previous version:

  1. Find the version in history

  2. Click Publish on that version

  3. Confirm the rollback

Note: This creates a new version based on the old one. No versions are deleted.

Version Best Practices

  • Add change notes for every publish

  • Test in preview before publishing

  • Keep production stable - only publish tested changes

  • Use drafts freely - they don't affect users


RAG: Retrieval Augmented Generation

RAG enables your agents to answer questions using your documents. Understanding RAG is key to building effective knowledge-based agents.

How RAG Works
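
At a high level, every retrieval follows the same loop:

```
User query
  → search the index of synced document chunks
  → retrieve the most relevant chunks (with their metadata)
  → inject the retrieved text into the prompt
  → AI model answers and cites the retrieved sources
```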

Two Ways to Connect Documents

PromptOwl offers two methods to connect documents, each with different behaviors:

Method 1: Prompt-Level RAG (System Context)

Connect documents to the entire agent via variables.

How to set up:

  1. Add a variable (e.g., knowledge_base)

  2. Connect it to a folder

  3. Reference in system context: {knowledge_base}

Behavior:

  • Documents retrieved automatically on every message

  • Content injected into system context before AI sees the query

  • Available to all blocks in sequential/supervisor workflows

  • Best for: Core knowledge that applies to all queries

Screenshot: Prompt-Level RAG Setup

Method 2: Block-Level RAG (Dataset)

Connect documents to specific blocks.

How to set up:

  1. Expand block settings

  2. Find Dataset field

  3. Select a folder or document

Behavior:

  • Documents retrieved only when that block executes

  • AI decides when to search based on the query

  • Each block can have different documents

  • Best for: Specialized knowledge per step/agent

Screenshot: Block-Level RAG Setup

Comparing RAG Methods

| Aspect | Prompt-Level | Block-Level |
| --- | --- | --- |
| Timing | Every query | On-demand |
| Scope | Entire agent | Single block |
| Control | Automatic | AI-decided |
| Use Case | Core knowledge | Specialized knowledge |
| Citations | Combined | Per-block |

When to Use Each Method

Use Prompt-Level RAG when:

  • Documents are always relevant

  • Building a simple Q&A bot

  • Need consistent knowledge access

Use Block-Level RAG when:

  • Different steps need different documents

  • Building specialized agents

  • Want AI to decide when to search

  • Optimizing for performance (not searching unnecessarily)

Combining Both Methods

For complex agents, combine both approaches:
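
For example, a customer support supervisor might share one prompt-level knowledge base while each agent block keeps its own dataset (folder names taken from the example above):

```
Prompt-level:  {knowledge_base}   → shared knowledge, injected on every query
Block-level:   Billing Agent      → Billing policies folder
               Technical Agent    → Product documentation folder
               Account Agent      → Account FAQ folder
```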

Document Processing for RAG

When you sync documents, they're processed for AI search:

  1. Chunking: Documents split into ~1000 character pieces

  2. Embedding: Each chunk converted to a vector representation

  3. Indexing: Vectors stored in searchable database

  4. Metadata: Title, author, date preserved for citations

Sync Status and RAG

Documents must be synced before RAG works:

| Status | RAG Available? | Action Needed |
| --- | --- | --- |
| Synced (Green) | Yes | None |
| Modified (Orange) | Partial | Re-sync folder |
| Unsynced (Red) | No | Sync folder |

Always check sync status when RAG isn't returning expected results.


Citations

Citations show users where answers come from, building trust and enabling verification.

How Citations Work

When RAG retrieves documents, citation data is captured alongside each passage.
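
Conceptually, each retrieved passage arrives with metadata like this (the values mirror the field reference below):

```
Title:         Return Policy Guide
Display Name:  Returns FAQ
Author:        Support Team
Publish Date:  Jan 15, 2024
Text:          "Returns within 30 days..."
Score:         0.89
URL:           https://...
```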

Citation Display Modes

PromptOwl offers two display modes:

Non-Aggregated (Default)

Shows individual citations:

Screenshot: Non-Aggregated Citations
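
In text form, a non-aggregated list might look like this (document names and passages are illustrative):

```
[1] Returns FAQ: "Returns within 30 days..." (score 0.89)
[2] Returns FAQ: "Refunds are issued to the original payment method..." (score 0.82)
[3] Shipping Guide: "Standard shipping takes 3-5 business days." (score 0.78)
```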

Aggregated

Groups citations by document:

Screenshot: Aggregated Citations
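
Aggregated, the same illustrative citations collapse to one entry per document:

```
Returns FAQ (2 passages)
Shipping Guide (1 passage)
```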

Configuring Citations

In your prompt settings, configure citation display:

| Setting | Description |
| --- | --- |
| Aggregate Citations | Group by document vs. show individually |
| Show Similarity Score | Display relevance percentage |
| Show Author | Display document author |
| Show Publish Date | Display document date |

Screenshot: Citation Settings

Citation Data Fields

Each citation includes:

| Field | Description | Example |
| --- | --- | --- |
| Title | Document name | "Return Policy Guide" |
| Display Name | Friendly name | "Returns FAQ" |
| Author | Creator | "Support Team" |
| Publish Date | Date created | "Jan 15, 2024" |
| Text | Retrieved passage | "Returns within 30 days..." |
| Score | Relevance (0-1) | 0.89 |
| URL | Link to source | "https://..." |

Improving Citation Quality

To get better citations:

  1. Set Display Names on documents

  2. Add Authors during upload

  3. Include Publish Dates for currency

  4. Write clear document titles

  5. Structure documents well (headers, sections)

Citations in Sequential/Supervisor Workflows

Citations accumulate across all blocks, and duplicates are automatically removed.

Viewing Full Citations

Click on any citation to open the full citation modal:

  • Complete text passage

  • All metadata fields

  • Link to original document (if available)

  • Similarity score (if enabled)

Screenshot: Citation Modal

Best Practices

Simple Agent Best Practices

  • Keep system context focused and clear

  • Connect only relevant document folders

  • Test with various query types

  • Enable citations for transparency

Sequential Agent Best Practices

  • Name blocks descriptively (action-oriented)

  • Use appropriate models per step (GPT-4 for analysis, Claude for writing)

  • Map variables explicitly between blocks

  • Keep chain length reasonable (3-5 blocks)

  • Add human messages for long workflows

Supervisor Agent Best Practices

  • Write clear agent descriptions for routing

  • Give agents distinct, non-overlapping roles

  • Test edge cases where routing is ambiguous

  • Consider fallback handling

  • Keep agent count manageable (2-5 agents)

RAG Best Practices

  • Organize documents into logical folders

  • Keep documents focused (don't combine unrelated content)

  • Sync regularly after updates

  • Test retrieval with expected queries

  • Use block-level RAG for specialized knowledge

Citation Best Practices

  • Always enable for document-based agents

  • Fill in document metadata during upload

  • Use aggregated mode for many citations

  • Review citation quality in Monitor

Version Management Best Practices

  • Save drafts frequently while editing

  • Add meaningful change notes

  • Test thoroughly before publishing

  • Keep production versions stable

  • Use rollback when issues arise


Troubleshooting

Agent not using documents

  1. Check folder sync status (must be green)

  2. Verify variable is connected properly

  3. Confirm variable is referenced in prompt

  4. Test with simple, direct questions

Sequential blocks not passing data

  1. Check {{block-key}} syntax

  2. Verify block key matches exactly

  3. Ensure previous block generates output

  4. Check for typos in variable names

Supervisor not routing correctly

  1. Review supervisor prompt clarity

  2. Ensure agent roles don't overlap

  3. Add explicit routing instructions

  4. Test with clear-cut queries first

Citations not appearing

  1. Verify RAG is configured

  2. Check citation settings are enabled

  3. Ensure documents are synced

  4. Look for similarity score threshold issues

Version publish not taking effect

  1. Refresh the page

  2. Check you published (not just saved)

  3. Verify you're on the correct prompt

  4. Clear browser cache if needed

