
Quick Start

Introduction

In this tutorial, we will create a prompt in the web UI, publish it to a deployment, and integrate it with our codebase using the Agenta SDK.

note

If you want to do this whole process programmatically, jump to this guide.

1. Create a Prompt

We will create a prompt from the web UI. To do this, go to the app overview and click Create a Prompt. You can choose between a text prompt and a chat prompt:

  • Text Prompt: Useful for single-turn LLM applications such as question answering, text generation, entity extraction, classification, etc.
  • Chat Prompt: Designed for multi-turn applications such as chatbots.

2. Publish a Variant

Within each LLM application, you can create multiple variants. Variants are like Git branches, allowing you to experiment with different configurations. Each variant is versioned, and each version has its own commit number and is immutable.

When you are satisfied with a variant's version (after evaluating it, for instance), you can publish it to a deployment. A deployment is tagged with an environment (production, development, or staging) and provides endpoints both for fetching the published configuration and for proxying calls to it.

To publish a variant, go to the overview, click on the three dots on the variant that you want to publish, and select Deploy (see screenshot):


You can now select which environments you want to publish the variant to:


caution

Changes to a variant are not automatically propagated to the deployment; you need to explicitly publish the variant again.

The reason is that you published a specific version (commit) of the variant to that deployment, not the variant itself!

3. Integrate with Your Code

To use the prompt in your code, fetch its configuration with the Agenta SDK. Here's how to do it in Python:

a. Initialize the SDK

First, import the agenta module and initialize the SDK using ag.init(). Make sure you have your API key set in your environment variables or configuration file.

import agenta as ag

# Initialize the SDK (API key can be set in environment variables or passed directly)
ag.init(api_key="your_api_key") # Replace with your actual API key or omit if set elsewhere
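
If you prefer not to hard-code the key, you can read it from the environment yourself before calling ag.init(). A minimal sketch, assuming you exported the key under a variable named AGENTA_API_KEY (the variable name here is illustrative):

import os
import agenta as ag

# Read the key from an environment variable exported beforehand,
# e.g. `export AGENTA_API_KEY=...`
ag.init(api_key=os.environ["AGENTA_API_KEY"])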

b. Fetch the Configuration

To fetch the configuration from the production environment, use the ConfigManager.get_from_registry method:

# Fetch configuration from the production environment
config = ag.ConfigManager.get_from_registry(
    app_slug="your-app-slug"
)
  • Note: If you do not provide a variant_ref or environment_ref, the SDK defaults to fetching the latest configuration deployed to the production environment.
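
As an illustration only, and assuming the environment_ref parameter mentioned in the note accepts an environment name such as "staging", fetching a non-production deployment might look like this:

# Sketch: environment_ref is assumed here to take an environment name
config_staging = ag.ConfigManager.get_from_registry(
    app_slug="your-app-slug",
    environment_ref="staging"  # hypothetical value; adjust to your setup
)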

c. Use the Configuration

The config object will be a dictionary containing your prompt configuration. You can use it directly in your application:

# Use the configuration
print(config)

Example Output:

{
    'temperature': 1.0,
    'model': 'gpt-3.5-turbo',
    'max_tokens': -1,
    'prompt_system': 'You are an expert in geography.',
    'prompt_user': 'What is the capital of {country}?',
    'top_p': 1.0,
    'frequency_penalty': 0.0,
    'presence_penalty': 0.0,
    'force_json': 0
}
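
As a sketch of how these values might be wired into an actual LLM call, the snippet below uses the openai client library (separate from the Agenta SDK) together with the key names from the example output above; the country value is a placeholder:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Fill the user prompt template from the fetched configuration
country = "France"
messages = [
    {"role": "system", "content": config["prompt_system"]},
    {"role": "user", "content": config["prompt_user"].format(country=country)},
]

response = client.chat.completions.create(
    model=config["model"],
    messages=messages,
    temperature=config["temperature"],
    top_p=config["top_p"],
)
print(response.choices[0].message.content)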

d. (Optional) Schema Validation with Pydantic

If you have a predefined schema for your configuration, you can use Pydantic to validate it:

from pydantic import BaseModel

# Define your configuration schema
class ConfigSchema(BaseModel):
    temperature: float
    model: str
    max_tokens: int
    prompt_system: str
    prompt_user: str
    top_p: float
    frequency_penalty: float
    presence_penalty: float
    force_json: int

# Fetch configuration with schema validation
config = ag.ConfigManager.get_from_registry(
    app_slug="your-app-slug",
    schema=ConfigSchema
)

# Use the configuration
print(config)
  • The config object will now be an instance of ConfigSchema, allowing you to access its fields directly with type checking.
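
For example, fields become typed attributes rather than dictionary keys:

# Attribute access with type checking instead of dictionary lookups
print(config.model)
print(config.prompt_user.format(country="Japan"))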

e. Asynchronous Fetching (Optional)

If your application is asynchronous, you can use the async version of the method:

# Asynchronous fetching of configuration
config = await ag.ConfigManager.aget_from_registry(
    app_slug="your-app-slug"
)
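
Since await only works inside a coroutine, a self-contained version of the same call could look like this:

import asyncio
import agenta as ag

async def main():
    # Async counterpart of get_from_registry, as shown above
    config = await ag.ConfigManager.aget_from_registry(
        app_slug="your-app-slug"
    )
    print(config)

asyncio.run(main())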

4. Revert to Previous Deployment (Optional)

note

This feature is only available in the cloud and enterprise versions.

If you need to revert to a previously published commit, click on the deployment in the overview, then click on History. You will see all previously published versions and can revert to one of them by clicking Revert.

Next Steps

Now that you've created and published your first prompt, you can learn how to do prompt engineering in the playground.