Task Actions
The Actions button on each task is your gateway to Empromptu's optimization capabilities. This is where you access all the tools needed to improve your AI application's performance.
What you'll learn ⏱️ 5 minutes
What happens when you click the Actions button
Overview of all available optimization tools
When to use each action type
How actions work together to improve performance
Best practices for task optimization
Accessing Task Actions
From your project dashboard, each task has an Actions button in the rightmost column. Click this button to access optimization tools for that specific task.
Example Task Row:
Review Summarizer | This task summarizes reviews we scraped off the web | 50% | 65% | Active | Actions ← Click here
When you click Actions, you'll see five optimization options available for that task.
Available Actions
1. Prompt Optimization
What it does: Creates and manages Prompt Families to improve response quality and accuracy.
Key capabilities:
Build Prompt Families (collections of specialized prompts)
Run automatic optimization to improve performance
Manually refine prompts for specific scenarios
View optimization history and results
When to use:
Your task has low accuracy scores
Responses are inconsistent across different inputs
You want to achieve 90%+ accuracy
Initial setup after creating a new task
What you'll find: Event Log, Prompt Family management, Manual Optimization wizard, Automatic Optimization
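The Prompt Family concept is easiest to see in miniature. A minimal sketch, assuming a summarization task like the example above (the names and routing logic here are illustrative only; Empromptu builds and manages Prompt Families for you in the UI):

```python
# Illustrative sketch only -- Empromptu manages Prompt Families in the UI.
# A Prompt Family pairs each input category with a specialized prompt.
PROMPT_FAMILY = {
    "short_review": "Summarize this brief review in one sentence: {text}",
    "long_review": "Extract key praise and complaints, then summarize: {text}",
    "mixed_sentiment": "Note the conflicting opinions, then summarize: {text}",
}

def select_prompt(input_type: str, text: str) -> str:
    """Route an input to its specialized prompt, with a generic fallback."""
    template = PROMPT_FAMILY.get(input_type, "Summarize this review: {text}")
    return template.format(text=text)
```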
2. Input Optimization
What it does: Manages test data and analyzes real user inputs to improve task performance.
Key capabilities:
Create manual test inputs for optimization
Monitor real end-user inputs and performance
Analyze input patterns and edge cases
Use input data to guide optimization
When to use:
Before running optimization (need test data)
After deployment (analyze real user behavior)
When discovering new edge cases
To understand what inputs cause problems
What you'll find: Manual Inputs creation, End User Inputs analytics, input performance tracking
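To make "representative" concrete, here is a hypothetical input set for the Review Summarizer example above, mixing typical cases with deliberate edge cases (the texts and notes are invented for illustration):

```python
# Hypothetical manual test inputs for the Review Summarizer task.
# Good coverage pairs typical inputs with deliberate edge cases.
manual_inputs = [
    {"text": "Great product, fast shipping!", "note": "typical short review"},
    {"text": "Loved it at first, but it broke after a week. Support was "
             "helpful though.", "note": "mixed sentiment"},
    {"text": "good " * 500, "note": "edge case: repetitive, very long input"},
    {"text": "", "note": "edge case: empty input"},
    {"text": "Das Produkt ist hervorragend!", "note": "edge case: non-English"},
]
```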
3. Model Optimization
What it does: Tests different AI models to find the best fit for your specific use case.
Key capabilities:
Compare performance across different models
Test models such as GPT-4o, Claude 3 Opus, and Claude 3 Sonnet
Adjust temperature and other parameters
Analyze cost vs performance trade-offs
When to use:
Task performance plateaus with current model
Cost optimization is needed
Specific model features are required
Initial setup to find optimal model
What you'll find: Model selection interface, side-by-side comparisons, parameter tuning options
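As a rough sketch of what such a comparison does behind the scenes (the scorer is a placeholder for a real evaluation harness, and the candidate list simply reuses the example models named above; Empromptu runs this comparison for you):

```python
# Sketch of a model comparison loop. score_task() is a placeholder for a
# real evaluation harness; Empromptu performs this comparison in the UI.
candidates = [
    {"model": "gpt-4o", "temperature": 0.2},
    {"model": "claude-3-opus", "temperature": 0.2},
    {"model": "claude-3-sonnet", "temperature": 0.2},
]

def score_task(model: str, temperature: float) -> float:
    """Placeholder: run the task's test inputs and return average accuracy."""
    return 0.0  # stand-in value, not a real benchmark result

results = {c["model"]: score_task(c["model"], c["temperature"]) for c in candidates}
best_model = max(results, key=results.get)
print(f"Best candidate: {best_model} (accuracy {results[best_model]:.2f})")
```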
4. Edge Case Detection
What it does: Identifies problematic inputs using visual analysis and helps resolve them.
Key capabilities:
Visual scatter plot of task performance
Identify clusters of low-performing inputs
Select problem areas for targeted optimization
Understand performance patterns
When to use:
After running initial optimization (need data points)
When overall accuracy is good but some inputs fail
To find patterns in problematic scenarios
For systematic problem identification
What you'll find: Performance scatter plot, score clustering visualization, optimization targeting tools
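Conceptually, the scatter plot places inputs by similarity and colors them by score; clustering the low scorers is what surfaces problem areas. A rough sketch with stand-in data, assuming numpy and scikit-learn are available (Empromptu presents this visually, not through code):

```python
# Conceptual sketch of edge-case clustering with stand-in data.
# Empromptu's scatter plot presents this analysis visually in the UI.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)
points = rng.random((100, 2))   # stand-in for 2-D embeddings of task inputs
scores = rng.random(100)        # stand-in for per-input accuracy scores

low_performers = points[scores < 0.5]            # isolate failing inputs
clusters = KMeans(n_clusters=3, n_init=10).fit(low_performers)
print(clusters.cluster_centers_)                 # centers of problem regions
```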
5. Evaluations
What it does: Defines success criteria that guide optimization and measure performance.
Key capabilities:
Create custom evaluation criteria
Use automatic evaluation generation
Manage active and inactive evaluations
Set specific quality standards
When to use:
Before any optimization (define success first)
When optimization isn't targeting the right goals
To add new quality requirements
When business requirements change
What you'll find: Evaluation creation tools, criteria management, success metrics definition
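"Specific quality standards" means criteria a script could check. A hypothetical pair of criteria for the Review Summarizer example (the schema is illustrative, not Empromptu's actual evaluation format):

```python
# Hypothetical evaluation criteria -- the schema is illustrative only,
# not Empromptu's actual evaluation format.
evaluations = [
    {"name": "max_length",
     "description": "Summary is 50 words or fewer",
     "check": lambda s: len(s.split()) <= 50},
    {"name": "states_sentiment",
     "description": "Summary says whether the review is positive or negative",
     "check": lambda s: any(w in s.lower() for w in ("positive", "negative", "mixed"))},
]

def evaluate(summary: str) -> dict:
    """Return pass/fail for each criterion on one task output."""
    return {e["name"]: e["check"](summary) for e in evaluations}
```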
How Actions Work Together
Recommended Workflow:
1. Start with Evaluations. Define what success looks like before optimizing:
Actions → Evaluations → Create success criteria
2. Add Input Data. Provide test data for optimization:
Actions → Input Optimization → Add manual inputs
3. Run Initial Optimization. Use automatic optimization to establish a baseline:
Actions → Prompt Optimization → Automatic Optimization
4. Find and Fix Problems. Use visual tools to identify issues:
Actions → Edge Case Detection → Target problem areas
5. Fine-tune Performance. Test different models and refine prompts manually:
Actions → Model Optimization → Compare models
Actions → Prompt Optimization → Manual refinement
Action Interdependencies
Actions That Build on Each Other:
Evaluations → All Other Actions
Evaluations define success criteria for all optimization
Must be set up before meaningful optimization can occur
Input Optimization → Edge Case Detection
Need input data to generate the scatter plot visualization
More inputs create better problem identification
Prompt Optimization → Model Optimization
Different models may work better with different prompt styles
Optimize prompts first, then test models
Edge Case Detection → Prompt Optimization
Identifies specific problems for targeted prompt improvement
Creates focused optimization goals
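Read as code, the recommended workflow and these dependencies form a simple pipeline. The client object and its methods below are hypothetical stand-ins for the Actions menu, not a documented Empromptu API:

```python
# Hypothetical workflow driver; every method here is an illustrative
# stand-in for a step in the Actions menu, not a real Empromptu API.
def optimize_task(client, task_id: str) -> None:
    client.evaluations.create(task_id, criteria=["summary <= 50 words"])    # step 1
    client.inputs.add_manual(task_id, ["Great product!", "It broke fast."]) # step 2
    client.prompts.auto_optimize(task_id)                                   # step 3
    for cluster in client.edge_cases.detect(task_id):                       # step 4
        client.prompts.manual_optimize(task_id, target=cluster)             # step 5
    client.models.compare(task_id)                                          # step 5
```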
Best Practices by Action Type
Prompt Optimization Best Practices:
Start with automatic optimization for baseline performance
Use manual optimization for specific problem areas
Build diverse Prompt Families for different input types
Monitor Event Log to understand what works
Input Optimization Best Practices:
Create representative manual inputs that match real use cases
Include edge cases and difficult scenarios
Monitor end-user inputs after deployment
Add new inputs when discovering problems
Model Optimization Best Practices:
Test models after prompt optimization is complete
Consider cost vs performance trade-offs
Adjust temperature based on use case (creative vs precise; see the sketch after this list)
Document which models work best for which scenarios
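For the temperature point above, a rule-of-thumb mapping (the values are assumed starting points to tune per task, not Empromptu defaults):

```python
# Rule-of-thumb temperatures by use case -- assumed starting points,
# not Empromptu defaults; tune per task.
TEMPERATURE_BY_USE_CASE = {
    "data_extraction": 0.0,   # deterministic, precise output
    "summarization": 0.3,     # mostly factual, slight variation allowed
    "creative_writing": 0.9,  # diverse, exploratory output
}
```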
Edge Case Detection Best Practices:
Wait until you have sufficient data points (15-20+ optimization runs)
Focus on clusters of low-performing inputs
Use targeted optimization for identified problem areas
Review regularly as new data comes in
Evaluations Best Practices:
Start with 3-5 core evaluations covering key requirements
Use specific, measurable criteria
Balance automatic and manual evaluation creation
Update evaluations as requirements evolve
Common Action Workflows
New Task Setup:
Evaluations → Define success criteria
Input Optimization → Add test inputs
Prompt Optimization → Run automatic optimization
Review results → Check if performance meets goals
Performance Improvement:
Edge Case Detection → Identify problem areas
Input Optimization → Add problematic inputs as test cases
Prompt Optimization → Manual optimization for specific issues
Model Optimization → Test if different model performs better
Production Monitoring:
Input Optimization → Monitor end-user inputs
Edge Case Detection → Find new problem patterns
Evaluations → Add criteria for new requirements
Prompt Optimization → Continuous improvement based on real usage
Troubleshooting Actions
Actions Button Not Working:
Check: Task status is "Active"
Solution: Inactive tasks can't be optimized
Limited Action Options:
Check: Task has been used/optimized before
Solution: Some actions require initial data or optimization
Poor Optimization Results:
Check: Evaluations are well-defined and inputs are representative
Solution: Improve evaluation criteria and add better test inputs
Can't Access Certain Features:
Check: Sufficient optimization runs have occurred
Solution: Edge Case Detection requires 15-20+ runs to unlock
Understanding Action Results
Success Indicators:
Accuracy scores improving (5.0 → 7.5 → 8.9)
More consistent performance across different inputs
Better real-world user feedback after deployment
Fewer edge cases identified in detection analysis
Warning Signs:
Scores plateauing despite optimization attempts
High variance in performance across similar inputs (see the check after this list)
New problems appearing with different input types
End-user performance declining after changes
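The variance warning sign is easy to quantify. A minimal check with placeholder scores (the 2.0 threshold is an assumption, not an Empromptu rule):

```python
# Quick high-variance check across similar inputs. The scores and the
# threshold are placeholders, not Empromptu conventions.
import statistics

scores = [8.9, 4.2, 9.1, 3.8]  # per-input accuracy on similar inputs
spread = statistics.pstdev(scores)
if spread > 2.0:  # assumed threshold for "high variance"
    print(f"High variance ({spread:.1f}): consider a Prompt Family split")
```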
Next Steps
Now that you understand Task Actions:
Learn Prompt Optimization: Master the core optimization technology
Understand Evaluations: Set up effective success criteria
Explore Edge Case Detection: Use visual tools to find problems
Check Input Optimization: Manage test data and real user analytics