Measuring Success
Learn how to track, analyze, and improve your AI avatar's performance with key metrics.
What gets measured gets managed. Define your success metrics before launch, then optimize relentlessly.
Key Metrics Framework
The CORE Framework
| Metric | What It Measures | Why It Matters |
|---|---|---|
| Conversion | Goal completion | Business impact |
| Operation | System health | Reliability |
| Retention | User loyalty | Long-term value |
| Engagement | User interaction | Immediate value |
Engagement Metrics
Primary Metrics
| Metric | Calculation | Target |
|---|---|---|
| Session Length | Avg time per conversation | 3-5 minutes |
| Messages/Session | Total messages / sessions | 5-10 messages |
| Bounce Rate | Single-message sessions / total | <20% |
| Return Rate | Users with 2+ sessions / total | >25% |
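As a quick sketch, these four metrics can be computed from exported session records. The `sessions` shape below (`durationSec`, `messageCount`, `userId`) is a hypothetical export format, not a fixed Avatarium schema:

```javascript
// Compute the primary engagement metrics from raw session records.
function engagementMetrics(sessions) {
  const total = sessions.length;
  const avgDurationSec =
    sessions.reduce((sum, s) => sum + s.durationSec, 0) / total;
  const avgMessages =
    sessions.reduce((sum, s) => sum + s.messageCount, 0) / total;
  // Bounce rate: share of single-message sessions
  const bounceRate =
    sessions.filter((s) => s.messageCount <= 1).length / total;

  // Return rate: share of users with 2+ sessions
  const sessionsPerUser = new Map();
  for (const s of sessions) {
    sessionsPerUser.set(s.userId, (sessionsPerUser.get(s.userId) || 0) + 1);
  }
  const users = sessionsPerUser.size;
  const returning = [...sessionsPerUser.values()].filter((n) => n >= 2).length;

  return { avgDurationSec, avgMessages, bounceRate, returnRate: returning / users };
}
```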
Tracking in Dashboard
Navigate to Analytics > Engagement to see:
- Daily/weekly/monthly trends
- Session length distribution
- Message count histogram
- User journey patterns
Engagement Benchmarks
| Rating | Session Length | Messages | Return Rate |
|---|---|---|---|
| Poor | <30 seconds | 1-2 | <10% |
| Average | 1-2 minutes | 3-5 | 15-20% |
| Good | 3-5 minutes | 6-10 | 25-35% |
| Excellent | 5+ minutes | 10+ | 35%+ |
Operational Metrics
System Health
| Metric | Target | Alert Threshold |
|---|---|---|
| Uptime | 99.9% | <99.5% |
| Response Time | <3 seconds | >5 seconds |
| Error Rate | <1% | >3% |
| Completion Rate | >95% | <85% |
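A minimal sketch of checking current readings against the alert thresholds in the table; the threshold values mirror the table, but the `readings` object shape is illustrative:

```javascript
// Alert thresholds from the system health table above.
const thresholds = {
  uptime:         { min: 0.995 }, // alert below 99.5%
  responseTimeMs: { max: 5000 },  // alert above 5 seconds
  errorRate:      { max: 0.03 },  // alert above 3%
  completionRate: { min: 0.85 },  // alert below 85%
};

// Return the names of all metrics currently past their alert threshold.
function breachedAlerts(readings) {
  return Object.entries(thresholds)
    .filter(([name, t]) =>
      ('min' in t && readings[name] < t.min) ||
      ('max' in t && readings[name] > t.max))
    .map(([name]) => name);
}
```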
Performance Monitoring
```javascript
// Track performance in your app
const metrics = {
  firstResponseTime: [],
  totalResponseTime: [],
  errorCount: 0,
  sessionCount: 0,
};

function trackPerformance(event) {
  if (event.type === 'session_start') {
    metrics.sessionCount++;
  }
  if (event.type === 'response_start') {
    metrics.firstResponseTime.push(event.latency);
  }
  if (event.type === 'response_complete') {
    metrics.totalResponseTime.push(event.latency);
  }
  if (event.type === 'error') {
    metrics.errorCount++;
  }
}

// Calculate averages
function average(values) {
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

function getMetrics() {
  return {
    avgFirstResponse: average(metrics.firstResponseTime),
    avgTotalResponse: average(metrics.totalResponseTime),
    errorRate: metrics.errorCount / metrics.sessionCount,
  };
}
```

Cost Efficiency
| Metric | Calculation | Target |
|---|---|---|
| Cost per Session | Total cost / sessions | Depends on tier |
| Cost per Resolution | Total cost / resolved issues | Minimize |
| Cost Efficiency | Issues resolved / cost | Maximize |
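The table's calculations are straightforward to automate; a minimal sketch, assuming you can pull total cost and issue counts from your billing and support data:

```javascript
// Compute the cost-efficiency metrics from the table above.
function costMetrics({ totalCost, sessions, resolvedIssues }) {
  return {
    costPerSession: totalCost / sessions,
    costPerResolution: totalCost / resolvedIssues,
    resolutionsPerDollar: resolvedIssues / totalCost,
  };
}
```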
Quality Metrics
User Satisfaction
| Method | Implementation | Target |
|---|---|---|
| Thumbs Up/Down | After each response | >80% positive |
| Star Rating | End of session | >4.0/5.0 |
| CSAT Survey | Post-conversation | >4.2/5.0 |
| NPS | Periodic survey | >50 |
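The NPS target in the table is on the standard -100 to +100 scale: the share of promoters (scores 9-10) minus the share of detractors (scores 0-6). A sketch of the calculation from raw survey scores:

```javascript
// Compute NPS from an array of 0-10 survey scores.
function nps(scores) {
  const promoters = scores.filter((s) => s >= 9).length;
  const detractors = scores.filter((s) => s <= 6).length;
  return Math.round(((promoters - detractors) / scores.length) * 100);
}
```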
Implementing Feedback
```tsx
<Avatar
  feedback={{
    enabled: true,
    type: 'thumbs', // 'thumbs' | 'stars' | 'csat'
    position: 'after-response',
  }}
  onFeedback={(data) => {
    analytics.track('avatar_feedback', {
      sessionId: data.sessionId,
      messageId: data.messageId,
      rating: data.rating,
      comment: data.comment,
    });
  }}
/>
```

Quality Benchmarks
| Metric | Poor | Average | Good | Excellent |
|---|---|---|---|---|
| Positive feedback | <60% | 70% | 80% | 90%+ |
| CSAT score | <3.5 | 3.8 | 4.2 | 4.5+ |
| Resolution rate | <50% | 65% | 80% | 90%+ |
| Escalation rate | >30% | 20% | 10% | <5% |
Business Impact Metrics
Conversion Metrics
| Use Case | Primary Metric | Secondary Metrics |
|---|---|---|
| Sales | Conversion rate | Lead quality, ACV |
| Support | Resolution rate | Ticket deflection |
| Education | Completion rate | Test scores |
| Engagement | Time on site | Page views |
ROI Calculation
For Customer Support
Deflection Value = Tickets Deflected × Cost per Ticket
ROI = ((Deflection Value - Avatarium Cost) / Avatarium Cost) × 100

Example:
- 5,000 support sessions/month
- 60% resolution rate = 3,000 tickets deflected
- Cost per human ticket = $15
- Deflection Value = 3,000 × $15 = $45,000

See avatarium.ai/pricing for current plan costs.
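The worked example can be expressed as a small helper; `avatariumCost` here stands in for whatever your actual plan costs per month (the $500 in the test is purely illustrative):

```javascript
// Support ROI: value of deflected tickets versus the avatar's cost.
function supportRoi({ sessions, resolutionRate, costPerTicket, avatariumCost }) {
  const deflected = sessions * resolutionRate;
  const deflectionValue = deflected * costPerTicket;
  const roiPercent = ((deflectionValue - avatariumCost) / avatariumCost) * 100;
  return { deflected, deflectionValue, roiPercent };
}
```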
For Sales
Lead Value = Leads Generated × Conversion Rate × Average Deal Size
ROI = ((Lead Value - Cost) / Cost) × 100

Funnel Metrics
Track users through your conversion funnel:
| Stage | Metric | Target |
|---|---|---|
| Awareness | Avatar impressions | Baseline |
| Engagement | Sessions started | >5% of impressions |
| Interest | 3+ message sessions | >50% of sessions |
| Intent | Action requested | >20% of engaged |
| Conversion | Goal completed | >10% of intent |
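Each funnel target is measured against the previous stage, not against total impressions. A sketch of computing stage-to-stage rates from ordered counts (the stage names and counts below are illustrative):

```javascript
// Compute stage-to-stage conversion rates for an ordered funnel.
function funnelRates(stages) {
  // stages: ordered array of { name, count }
  return stages.map((stage, i) => ({
    name: stage.name,
    rate: i === 0 ? 1 : stage.count / stages[i - 1].count,
  }));
}
```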
Analytics Dashboard
Setting Up Tracking
Enable Analytics
In your Avatarium dashboard, go to Settings > Analytics and enable tracking.
Configure Events
Choose which events to track:
- Session start/end
- Messages sent/received
- Errors and fallbacks
- User feedback
- Custom events
Set Up Exports
Export data to your analytics platform:
- Google Analytics
- Mixpanel
- Amplitude
- Custom webhook
Custom Event Tracking
```typescript
import { trackEvent } from '@avatarium/analytics';

// Track custom business events
function onProductRecommended(product: Product) {
  trackEvent('product_recommended', {
    productId: product.id,
    category: product.category,
    price: product.price,
  });
}

function onCheckoutStarted() {
  trackEvent('checkout_started_from_avatar', {
    source: 'avatar_recommendation',
  });
}
```

Reporting
Weekly Report Template
| Section | Metrics | Analysis |
|---|---|---|
| Overview | Sessions, messages, users | Week-over-week trends |
| Engagement | Avg session length, return rate | Engagement quality |
| Quality | Satisfaction scores, resolution rate | User happiness |
| Operations | Error rate, response time | System health |
| Business | Conversions, cost per conversion | ROI tracking |
Monthly Business Review
Prepare these analyses monthly:
- Trend Analysis: How are key metrics trending?
- Cohort Analysis: How do different user groups perform?
- Content Gaps: What questions aren't being answered?
- Optimization Opportunities: Where can we improve?
- Cost Analysis: Are we efficient?
Optimization Strategies
Based on Engagement Metrics
| Issue | Indicator | Fix |
|---|---|---|
| Low engagement | Short sessions, few messages | Improve greeting, add proactive prompts |
| High bounce | Single-message sessions | Better initial response, clearer value prop |
| Low return | Few repeat users | Add value, improve satisfaction |
Based on Quality Metrics
| Issue | Indicator | Fix |
|---|---|---|
| Low satisfaction | Poor feedback scores | Review negative conversations, improve responses |
| High escalation | Many human handoffs | Expand knowledge base, improve prompts |
| Low resolution | Issues not solved | Better instructions, more training data |
Based on Business Metrics
| Issue | Indicator | Fix |
|---|---|---|
| Low conversion | Few goal completions | Clearer CTAs, better guidance |
| High cost/conversion | Expensive results | Optimize model choice, reduce unnecessary sessions |
| Poor ROI | Cost > value | Review use case fit, optimize funnel |
A/B Testing
What to Test
| Element | Variations | Success Metric |
|---|---|---|
| Greeting | Formal vs casual | Engagement rate |
| System Prompt | Detailed vs concise | Resolution rate |
| AI Model | GPT-4o vs Groq | Quality + cost |
| TTS Voice | Different voices | Satisfaction |
| Avatar | Different models | Engagement |
Running Tests
```typescript
// A/B test framework
// Simple deterministic string hash for bucketing users
function hashString(input: string): number {
  let hash = 0;
  for (let i = 0; i < input.length; i++) {
    hash = (hash * 31 + input.charCodeAt(i)) | 0;
  }
  return Math.abs(hash);
}

function getVariant(userId: string, testId: string): string {
  const hash = hashString(`${userId}-${testId}`);
  return hash % 2 === 0 ? 'control' : 'variant';
}

// Usage
const variant = getVariant(userId, 'greeting-test');
const greeting = variant === 'control'
  ? "Hi! How can I help you today?"
  : "Welcome back! What would you like to know?";
```

Analyzing Results
| Metric | Control | Variant | Lift | Significance |
|---|---|---|---|---|
| Session length | 2.5 min | 3.1 min | +24% | p < 0.01 |
| Messages/session | 4.2 | 5.8 | +38% | p < 0.05 |
| Satisfaction | 4.1 | 4.3 | +5% | p = 0.08 |
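For rate metrics (conversion, bounce, positive feedback), significance can be checked with a two-proportion z-test; |z| > 1.96 corresponds roughly to p < 0.05 two-sided. This is a sketch for proportions only; for means like session length you would use a t-test instead:

```javascript
// Two-proportion z-test for an A/B result on a rate metric.
function abTest(control, variant) {
  // control/variant: { conversions, n }
  const p1 = control.conversions / control.n;
  const p2 = variant.conversions / variant.n;
  const pooled =
    (control.conversions + variant.conversions) / (control.n + variant.n);
  const se = Math.sqrt(
    pooled * (1 - pooled) * (1 / control.n + 1 / variant.n)
  );
  const z = (p2 - p1) / se;
  return { lift: (p2 - p1) / p1, z, significant: Math.abs(z) > 1.96 };
}
```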
Alert Configuration
Recommended Alerts
| Alert | Threshold | Priority |
|---|---|---|
| Error rate spike | >5% | P1 |
| Response time slow | >10s avg | P2 |
| Satisfaction drop | <3.5 | P2 |
| Budget at 80% | 80% of limit | P3 |
| Zero sessions | 0 for 1 hour | P1 |
Setting Up Alerts
```javascript
// In dashboard settings or via API
const alerts = [
  {
    metric: 'error_rate',
    condition: 'greater_than',
    threshold: 0.05,
    window: '15m',
    priority: 'p1',
    channels: ['email', 'slack'],
  },
  {
    metric: 'avg_response_time',
    condition: 'greater_than',
    threshold: 10000, // ms
    window: '5m',
    priority: 'p2',
    channels: ['email'],
  },
];
```

Summary
Metrics Priority
| Priority | Metrics | Review Frequency |
|---|---|---|
| Critical | Error rate, uptime | Real-time |
| High | Satisfaction, resolution | Daily |
| Medium | Engagement, conversion | Weekly |
| Low | Cost efficiency, trends | Monthly |
Success Checklist
- Analytics enabled and tracking
- Key metrics defined and baselined
- Dashboards configured
- Alerts set up
- Weekly review process in place
- Monthly business review scheduled
Next Steps
- Best Practices - Improve your metrics
- Analytics Guide - Deep dive into data
- Going to Production - Launch checklist