Are they showing up?
Participation. How many eligible users actually start?
Are they finishing?
Completion. Of those who start, what percentage cross the finish line?
Where are they bailing?
Drop-off. Which action makes users bounce?
The Metrics That Matter
Start Rate: What percentage of eligible users begin the challenge? Think of this as your hook. Low start rates mean your title, description, or visibility isn’t working. Like a fishing lure nobody’s biting.

Completion Rate: Of the people who start, what percentage finish? This is your actual challenge performance. Target: 60% or higher. Below 40%? Something’s wrong: it’s too hard, too confusing, or too long.

Drop-Off Points: Which specific action causes the most people to bounce? Most challenges have one action that’s the culprit. Fix that one action and everything else improves.

Time to Complete: How long does it actually take users? Compare this to your estimate. If you said 10 minutes but users take 25, you’ve overscoped it. They’ll feel it.

User Satisfaction: Post-completion ratings or feedback. Numbers are great, but hearing “I loved this” beats any metric.
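If you want to sanity-check these numbers outside the dashboard, here’s a minimal sketch of how each metric falls out of raw counts. The counts and field names (eligible_users, started, per-action totals) are hypothetical stand-ins for whatever your own export or spreadsheet provides, not a format the platform guarantees.

```python
# Minimal sketch: computing challenge metrics from hypothetical raw counts.
eligible_users = 400          # users who could see/start the challenge
started = 180                 # users who began the first action
completed_by_action = {       # users still active after finishing each action
    "Watch intro video": 165,
    "Post a comment": 95,     # big drop here
    "Upload a photo": 88,
    "Rate the challenge": 84,
}
finished = 84                 # users who completed every action

start_rate = started / eligible_users    # hook strength
completion_rate = finished / started     # challenge performance

# Drop-off per action: share of remaining users lost at that step.
drop_off = {}
remaining = started
for action, still_active in completed_by_action.items():
    drop_off[action] = (remaining - still_active) / remaining
    remaining = still_active

print(f"Start rate:      {start_rate:.0%}")       # 45%
print(f"Completion rate: {completion_rate:.0%}")  # 47% -> below the 60% target
worst = max(drop_off, key=drop_off.get)
print(f"Worst drop-off:  {worst} ({drop_off[worst]:.0%})")
```

However the dashboard presents them, each of these metrics is just one of these ratios, which makes a surprising number easy to gut-check by hand.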
Where to Find Your Data
Location: Control Room > Engagement > Challenges > [Select Your Challenge] > Analytics

You’ll land in the Overview tab. Four views are available:
- Overview
- Actions
- Users
- Results
Read the Patterns
Every challenge performance pattern tells a story. Learn to read yours.

The Thriving Challenge
What it looks like:
- 60%+ completion rate
- Steady participation over first week
- Drop-off below 50% at every single action
- Completion time matches your estimate
The Hook Problem
What it looks like:
- High start rate (good title/description)
- Low completion (people bail partway)
The Visibility Gap
What it looks like:
- Very low start rate (below 20%)
- You know the challenge is good
The Stumbling Block
What it looks like:
- Drop-off spikes at one specific action
- Everything before and after is fine
The Endurance Test
What it looks like:
- Completion time is 2x-3x your estimate
- Users are abandoning mid-way
The Engagement Fade
What it looks like:
- Completion rate drops below 30% after a week
- Started strong, lost momentum
Optimize: Your Action Plan
1. Check in at 24-48 hours
Launch, then check completion rate and drop-off points. If you see 50%+ drop-off at one action, investigate immediately. Don’t wait.

2. Identify the problem
Look at which action has the highest drop-off. Read the action content. Is it unclear? Too hard? The wrong action type for what you’re asking?

3. Fix it
Clearer instructions? Simpler task? Different action type? Pick one change, make it, and document what you changed.

4. Relaunch and compare
Run the updated challenge a week later. Compare completion rates (a quick comparison sketch follows this list). Did it improve? Keep the fix. Didn’t work? Try a different approach.

5. Iterate
Don’t be afraid to update live challenges. Small clarifications often double completion rates. Users don’t mind improvements.
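To make step 4 concrete, here’s a small before/after comparison sketch. It assumes you’ve noted starts and completions for the original run and the updated run; the numbers and the noise threshold are illustrative, not a statistical test.

```python
# Hypothetical before/after numbers for one challenge fix.
before = {"started": 180, "finished": 84}    # original version
after  = {"started": 150, "finished": 96}    # after simplifying one action

rate_before = before["finished"] / before["started"]   # 47%
rate_after  = after["finished"] / after["started"]     # 64%
delta = rate_after - rate_before

# With small groups, a few users can swing the rate; treat small deltas as noise.
MIN_MEANINGFUL_DELTA = 0.05   # illustrative threshold, not a significance test

if delta > MIN_MEANINGFUL_DELTA:
    verdict = "keep the fix"
elif delta < -MIN_MEANINGFUL_DELTA:
    verdict = "revert and try a different approach"
else:
    verdict = "too close to call; gather more participants"

print(f"Completion: {rate_before:.0%} -> {rate_after:.0%} ({delta:+.0%}); {verdict}")
```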
Decision Tree
When you see a problem, here’s what to check:

| Problem | Check This First | Likely Fix |
|---|---|---|
| Very low start (below 20%) | Title, description, visibility | Improve copy or increase promotion |
| Low completion (below 40%) | Drop-off points | Simplify or remove the problem action |
| Drop-off 70%+ at one action | That action’s content | Replace it with a simpler action type |
| Users taking 2-3x longer than expected | Number of actions and complexity | Trim 1-2 actions, remove optional steps |
| High completion but users report confusion | User feedback and comments | Rewrite instructions, add examples, add context |
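If you track several challenges at once, the same thresholds can be encoded as a quick triage helper. This is a hypothetical sketch mirroring the table above; the function and argument names are not part of the product.

```python
def triage(start_rate, completion_rate, worst_action_drop_off, time_ratio):
    """Map challenge metrics to the first thing to check, per the table above.

    Hypothetical helper; all arguments are fractions except time_ratio,
    which is actual completion time divided by your estimate.
    """
    if start_rate < 0.20:
        return "Check title, description, visibility; improve copy or promotion"
    if worst_action_drop_off >= 0.70:
        return "Check that action's content; replace it with a simpler action type"
    if completion_rate < 0.40:
        return "Check drop-off points; simplify or remove the problem action"
    if time_ratio >= 2.0:
        return "Check action count and complexity; trim 1-2 actions"
    return "No obvious red flag; review user feedback for confusion"

# Example: completion is low, but the real culprit is one 75% drop-off action.
print(triage(start_rate=0.45, completion_rate=0.35,
             worst_action_drop_off=0.75, time_ratio=1.2))
```

Checking the single worst action before the overall completion rate reflects the advice above: one bad action often explains a low completion rate on its own.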
State of a Challenge Over Time
Every challenge has a lifecycle. Manage it.

Pause
Challenge isn’t working now but might work later (seasonality, timing, team capacity). Stops new starts but existing participants can finish. Useful for seasonal campaigns.
Archive
Challenge ran its course and you’re done with it. Hidden from most views, data stays, you can duplicate it later for variations.
Refresh
Create a new version with the same goal but different actions, examples, or rewards. Keeps things fresh and tests what resonates.
Sunset
Challenge consistently underperforms and you’ve tried fixes. Let it go. Not everything needs to live forever.
The Nuance: What the Numbers Don’t Always Say
Don’t chase raw numbers. 10 people completing at 80% beats 100 people starting at 15% every single time. Quality over volume. Always.

Completion variance is normal. Mandatory challenges (employees have to do this) hit 60-85%. Voluntary ones 30-50%. If yours is voluntary and hitting 50%+, you’re winning.

One bad action doesn’t tank the whole thing. If drop-off is high at one action, that’s fixable. It doesn’t mean the challenge concept is wrong.

Time estimates are optimistic. You think it takes 5 minutes. Users take 12. Not because they’re slow; they’re thoughtful. Add 30% padding to your predictions.

Real insight comes from combining data with user feedback. The numbers show you what happened. User feedback tells you why.
Next Steps
- Challenge Creation Guide - Build from scratch
- Challenge Dos and Don’ts - Learn best practices before you launch
- Management - Pause, archive, and optimize active challenges

