Your challenges are live. Now the real work begins: watching how people engage with them and fixing what breaks. Analytics answer the three questions every challenge raises:

  • Are they showing up? Participation: how many eligible users actually start.
  • Are they finishing? Completion: of those who start, what percentage cross the finish line.
  • Where are they bailing? Drop-off: which action makes users bounce.

The Metrics That Matter

  • Start Rate: What percentage of eligible users begin the challenge? Think of this as your hook. A low start rate means your title, description, or visibility isn’t working. Like a fishing lure nobody’s biting.
  • Completion Rate: Of people who start, what percentage finish? This is your actual challenge performance. Target: 60% or higher. Below 40%? Something’s wrong: it’s too hard, too confusing, or too long.
  • Drop-Off Points: Which specific action causes the most people to bounce? Most challenges have one action that’s the culprit. Fix that one action and everything else improves.
  • Time to Complete: How long does it actually take users? Compare this to your estimate. If you said 10 minutes but users take 25, you’ve overscoped it, and they’ll feel it.
  • User Satisfaction: Post-completion ratings or feedback. Numbers are great, but hearing “I loved this” beats any metric.
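The platform surfaces these numbers for you, but it helps to know exactly how they’re derived. Here’s a minimal sketch that computes them from exported participation records; the Participation fields and the challenge_metrics function are invented for illustration, not the product’s actual export schema:

```python
from dataclasses import dataclass

@dataclass
class Participation:
    """One user's record for one challenge (illustrative fields, not the real export)."""
    eligible: bool
    started: bool
    completed: bool
    minutes_spent: float | None = None       # None if they never finished
    last_action_reached: str | None = None   # where they stopped, if they bailed

def challenge_metrics(records: list[Participation]) -> dict:
    eligible = [r for r in records if r.eligible]
    started = [r for r in eligible if r.started]
    completed = [r for r in started if r.completed]

    # Start rate: the hook -- share of eligible users who begin
    start_rate = len(started) / len(eligible) if eligible else 0.0
    # Completion rate: actual challenge performance (target: 60%+)
    completion_rate = len(completed) / len(started) if started else 0.0

    # Drop-off points: where the non-finishers stopped
    drop_offs: dict[str, int] = {}
    for r in started:
        if not r.completed and r.last_action_reached:
            drop_offs[r.last_action_reached] = drop_offs.get(r.last_action_reached, 0) + 1

    # Time to complete: compare the median against your published estimate
    times = sorted(r.minutes_spent for r in completed if r.minutes_spent is not None)
    median_minutes = times[len(times) // 2] if times else None

    return {
        "start_rate": start_rate,
        "completion_rate": completion_rate,
        "drop_offs": drop_offs,
        "median_minutes": median_minutes,
    }
```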

Where to Find Your Data

Location: Control Room > Engagement > Challenges > [Select Your Challenge] > Analytics

You’ll land in the Overview tab. Four views are available: Overview, Actions, Users, and Results.

Read the Patterns

Every challenge performance pattern tells a story. Learn to read yours.

The Thriving Challenge

What it looks like:
  • 60%+ completion rate
  • Steady participation over first week
  • Drop-off below 50% at any single action
  • Completion time matches your estimate
What it means: You nailed it. Users are engaged, the difficulty is right, and the experience flows. Keep doing this.

The Hook Problem

What it looks like:
  • High start rate (good title/description)
  • Low completion (people bail partway)
What it means: Users are biting, but something goes wrong mid-challenge. Check your drop-off points. Usually one action is too hard, unclear, or doesn’t fit the flow. Fix that action.

The Visibility Gap

What it looks like:
  • Very low start rate (below 20%)
  • You know the challenge is good
What it means: Not enough people are seeing it. Or your title/description doesn’t speak to them. Improve visibility, tweak your copy, or boost promotion.

The Stumbling Block

What it looks like:
  • Drop-off spikes at one specific action
  • Everything before and after is fine
What it means: One action is the weak link. It’s either too hard, confusing, or the wrong type for your audience. Replace it or rewrite the instructions.

The Endurance Test

What it looks like:
  • Completion time is 2x-3x your estimate
  • Users are abandoning mid-way
What it means: You’ve overscoped it. The challenge is too long or each action takes longer than you thought. Trim 1-2 actions and relaunch.

The Engagement Fade

What it looks like:
  • Completion rate drops below 30% after a week
  • Started strong, lost momentum
What it means: The novelty wore off. Try launching a sequence of related challenges instead of one-offs. Users crave progression.
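If you track many challenges at once, these reads can be encoded as simple rules. The rough sketch below applies the thresholds named in the patterns above; the function name, inputs, and the order of the checks are my own framing, not a built-in feature:

```python
def read_pattern(start_rate: float,
                 completion_rate: float,
                 worst_action_dropoff: float,
                 actual_minutes: float,
                 estimated_minutes: float) -> str:
    """Map headline metrics onto the patterns above. Rates are fractions between 0 and 1."""
    if start_rate < 0.20:
        return "Visibility Gap: improve visibility, copy, or promotion"
    if actual_minutes >= 2 * estimated_minutes:
        return "Endurance Test: trim 1-2 actions and relaunch"
    if worst_action_dropoff >= 0.50:
        return "Stumbling Block: replace or rewrite the weak action"
    if completion_rate < 0.40:
        return "Hook Problem: users start but bail; check your drop-off points"
    if completion_rate >= 0.60:
        return "Thriving Challenge: keep doing this"
    return "Middle ground: watch for Engagement Fade and consider a sequence of challenges"

# Example: strong start, but one action loses most of the people who reach it
print(read_pattern(0.45, 0.35, 0.65, 12, 10))  # Stumbling Block
```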

Optimize: Your Action Plan

1. Check in at 24-48 hours
Launch, then check completion rate and drop-off points. If you see 50%+ drop-off at one action, investigate immediately. Don’t wait.

2. Identify the problem
Look at which action has the highest drop-off. Read the action content. Is it unclear? Too hard? The wrong action type for what you’re asking?

3. Fix it
Clearer instructions? Simpler task? Different action type? Pick one change, make it, and document what you changed.

4. Relaunch and compare
Run the updated challenge a week later. Compare completion rates. Did it improve? Keep the fix. Didn’t work? Try a different approach.

5. Iterate
Don’t be afraid to update live challenges. Small clarifications often double completion rates. Users don’t mind improvements.
The best data point is a real user completing a real challenge. Get it live, measure it, learn from it, and iterate. That’s the flywheel.
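As a concrete illustration of steps 1 and 4, here is a small sketch that flags any action losing more than half of the users who reach it, and decides whether a relaunch’s completion rate justifies keeping a fix. The data shapes, function names, and the 5-point minimum lift are assumptions for the example:

```python
def flag_problem_actions(reached: dict[str, int], finished: dict[str, int],
                         threshold: float = 0.50) -> list[str]:
    """Return the actions whose drop-off exceeds the threshold.

    reached[action]  -- users who got to the action
    finished[action] -- users who completed it
    """
    flagged = []
    for action, n_reached in reached.items():
        if n_reached == 0:
            continue
        drop_off = 1 - finished.get(action, 0) / n_reached
        if drop_off > threshold:
            flagged.append(action)
    return flagged

def fix_worked(rate_before: float, rate_after: float, min_lift: float = 0.05) -> bool:
    """Keep the fix only if the completion rate improved by a meaningful margin."""
    return (rate_after - rate_before) >= min_lift

# 120 users reached "Upload a screenshot", only 40 finished it -> 67% drop-off, flagged
print(flag_problem_actions({"Upload a screenshot": 120}, {"Upload a screenshot": 40}))
print(fix_worked(0.38, 0.55))  # True -- keep the change
```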

Decision Tree

When you see a problem, here’s what to check:
Problem | Check This First | Likely Fix
Very low start (below 20%) | Title, description, visibility | Improve copy or increase promotion
Low completion (below 40%) | Drop-off points | Simplify or remove the problem action
Drop-off 70%+ at one action | That action’s content | Replace it with a simpler action type
Users taking 2-3x longer than expected | Number of actions and complexity | Trim 1-2 actions, remove optional steps
High completion but users report confusion | User feedback and comments | Rewrite instructions, add examples, add context
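If you triage several challenges a week, the same table can live in a small script. Below is a sketch that encodes each row as a (problem test, check first, likely fix) rule; the thresholds mirror the table, while the metric keys, names, and structure are illustrative:

```python
# Each rule: (problem test, what to check first, likely fix).
# `m` is a dict of headline metrics; the keys are invented for this example.
TRIAGE_RULES = [
    (lambda m: m["start_rate"] < 0.20,
     "Title, description, visibility", "Improve copy or increase promotion"),
    (lambda m: m["worst_action_dropoff"] >= 0.70,
     "That action's content", "Replace it with a simpler action type"),
    (lambda m: m["completion_rate"] < 0.40,
     "Drop-off points", "Simplify or remove the problem action"),
    (lambda m: m["time_ratio"] >= 2,  # actual completion time / your estimate
     "Number of actions and complexity", "Trim 1-2 actions, remove optional steps"),
    (lambda m: m["confusion_reports"] > 0,
     "User feedback and comments", "Rewrite instructions, add examples, add context"),
]

def triage(metrics: dict) -> list[tuple[str, str]]:
    """Return every (check first, likely fix) pair whose problem test fires."""
    return [(check, fix) for test, check, fix in TRIAGE_RULES if test(metrics)]

print(triage({"start_rate": 0.15, "worst_action_dropoff": 0.35,
              "completion_rate": 0.55, "time_ratio": 1.1, "confusion_reports": 0}))
# -> [('Title, description, visibility', 'Improve copy or increase promotion')]
```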

State of a Challenge Over Time

Every challenge has a lifecycle. Manage it.

Pause

Challenge isn’t working now but might work later (seasonality, timing, team capacity). Stops new starts but existing participants can finish. Useful for seasonal campaigns.

Archive

Challenge ran its course and you’re done with it. Hidden from most views, data stays, you can duplicate it later for variations.

Refresh

Create a new version with the same goal but different actions, examples, or rewards. Keeps things fresh and tests what resonates.

Sunset

Challenge consistently underperforms and you’ve tried fixes. Let it go. Not everything needs to live forever.

The Nuance: What the Numbers Don’t Always Say

  • Don’t chase raw numbers. 10 people completing at 80% beats 100 people starting at 15% every single time. Quality over volume. Always.
  • Completion variance is normal. Mandatory challenges (employees have to do this) hit 60-85%. Voluntary ones hit 30-50%. If yours is voluntary and hitting 50%+, you’re winning.
  • One bad action doesn’t tank the whole thing. If drop-off is high at one action, that’s fixable. It doesn’t mean the challenge concept is wrong.
  • Time estimates are optimistic. You think it takes 5 minutes. Users take 12. Not because they’re slow; they’re thoughtful. Add 30% padding to your predictions.
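Two of those rules are easy to bake into your planning. A tiny sketch, using the 30% padding and the benchmark ranges quoted above; the function names are made up:

```python
def padded_estimate(estimated_minutes: float) -> float:
    """Time estimates run optimistic; add roughly 30% padding before publishing one."""
    return round(estimated_minutes * 1.3, 1)

def completion_benchmark(rate: float, mandatory: bool) -> str:
    """Judge a completion rate against the typical range for its challenge type."""
    low, high = (0.60, 0.85) if mandatory else (0.30, 0.50)
    if rate >= high:
        return "above the typical range"
    if rate >= low:
        return "within the typical range"
    return "below the typical range"

print(padded_estimate(10))                          # 13.0 minutes
print(completion_benchmark(0.52, mandatory=False))  # above the typical range: you're winning
```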
Real insight comes from combining data with user feedback. The numbers show you what happened. User feedback tells you why.

Next Steps