-
Total Users
?
Total registered users in this cohort. Includes both active and inactive accounts. e.g. 15 total users signed up for beta
-
Active (7d)
?
Users who logged at least one entry (bathroom, wellness, or note) in the last 7 days. Your best signal for current engagement. e.g. 8 of 15 users active this week = 53% weekly engagement
-
Onboarded
?
Users who completed the onboarding flow. Low completion may signal friction in the signup experience. e.g. 12 of 15 completed = 80%. If dropping, check for onboarding UX issues
-
Active (30d)
?
Users active in the last 30 days. Compare with Active (7d) — a big gap means users try the app but don't stick around weekly. e.g. 10 active (30d) but only 4 active (7d) = 60% drop-off after first use
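The engagement figures in these tooltips are simple ratio arithmetic; a minimal sketch in Python, using only the illustrative counts from the examples above (not real data):

```python
# Illustrative counts from the tooltip examples above (not real data).
total_users = 15
active_7d = 8

# Weekly engagement: share of all registered users active in the last 7 days.
weekly_engagement = active_7d / total_users
print(f"{weekly_engagement:.0%}")  # 53%

# Drop-off: 10 users active in 30 days, but only 4 in the last 7 days.
active_30d, recent_7d = 10, 4
drop_off = 1 - recent_7d / active_30d
print(f"{drop_off:.0%}")  # 60%
```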
╱
Feature Adoption Over Time
?
Weekly unique users per feature over the last 12 weeks. Shows which features are gaining momentum vs. being abandoned. A feature with rising lines is worth investing in. If Bathroom rises from 3 to 8 users/week while Notes stays at 2, double down on bathroom UX
☰ Feature Adoption
i
Which features are being used, by how many people, and how deeply. Sort mentally by Adoption % to find underperforming features.
|
Feature
?
Each tracked feature in the app with a sub-detail row showing the breakdown (e.g. urination vs bowel, full vs quick check-ins). e.g. "Bathroom Logging — 120 urination, 45 bowel"
|
Total Entries
?
The total number of logged entries across all users for this feature. Higher counts indicate heavier overall usage. e.g. 342 total bathroom events logged by everyone combined
|
Unique Users
?
How many distinct users have used this feature at least once. Compare against Total Users to see breadth of adoption. e.g. 8 of 12 total users have logged at least one bathroom event
|
Adoption
?
Percentage of active users (30d) who have used this feature. Shows how widely a feature has been discovered and tried. e.g. 67% means 8 out of 12 active users have tried it
|
Per User
?
Average entries per user who has used this feature (Total Entries ÷ Unique Users). Measures depth of engagement — are adopters actually using it regularly? e.g. 5.3 means each user who adopted this feature logged ~5 entries on average
|
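The two derived columns, Adoption and Per User, follow directly from the raw counts; a small sketch, assuming (as the Adoption tooltip states) that the denominator is 30-day active users:

```python
def adoption_pct(unique_users: int, active_30d: int) -> float:
    """Share of 30-day-active users who have tried the feature."""
    return 100 * unique_users / active_30d

def per_user(total_entries: int, unique_users: int) -> float:
    """Average entries per adopting user (depth of engagement)."""
    return total_entries / unique_users

# Tooltip example: 8 of 12 active users have tried the feature.
print(round(adoption_pct(8, 12)))  # 67
```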
❤ Wellness Check-in Breakdown
i
Deep dive into wellness check-in variants. Compare Active (7d) to Users to spot variants losing steam.
No wellness check-in data available
|
Variant
?
The type of wellness check-in. "Full" check-ins capture all dimensions (discomfort, stress, energy, mood). "Quick" captures a single dimension for fast logging. e.g. "Morning" = full check-in with sleep data, "Quick: Stress" = single stress-only entry
|
Entries
?
Total number of check-ins of this variant across all users. Compare across variants to see which check-in styles are most popular. e.g. 42 Morning check-ins total — if this is much higher than Evening, users may prefer logging in the AM
|
Users
?
Number of distinct users who have used this variant at least once. Shows breadth — how many people discovered and tried this check-in style. e.g. 8 users tried Morning check-ins vs. 3 for Evening — Morning is more widely adopted
|
Active (7d)
?
Users who logged this variant in the last 7 days. A recency signal — is this variant gaining traction or was it a one-time experiment? e.g. 5 of 8 Morning users active recently = healthy retention. 1 of 6 Evening users = variant may be dying
|
Per User
?
Average entries per user for this variant (Entries ÷ Users). Measures how deeply engaged adopters are with this specific check-in style. e.g. 5.3 per user for Morning vs 2.1 for Quick: Energy — Morning users are more committed
|
╱
Daily Activity (Last 30 Days)
?
Total entries per day broken down by type. Look for consistent daily usage vs. sporadic bursts. Gaps indicate days with zero engagement across all users. Steady 5-10 entries/day = healthy habit. Spikes then silence = users trying then abandoning
★
Engagement Depth
?
Quality signals beyond just "are they logging." Provider tags mean users are preparing for appointments. Streaks mean daily habits are forming. Time-to-first-entry shows onboarding friction.
Episode completion rate
?
% of started episodes that were completed. Low rates may mean episodes are too long or users don't see value in finishing them. e.g. 40% completion = users start tracking bad days but give up halfway
-
Total notes
?
Total notes logged by users. Notes capture free-form observations about symptoms, contributors, and questions for providers. e.g. 25 notes = users actively documenting their experiences
-
Avg days to first entry
?
Average days between signup and first logged entry. Lower is better — high values suggest onboarding friction or unclear value proposition. e.g. 0.5 days = users start immediately. 3+ days = they signed up but hesitated to engage
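Time-to-first-entry is the mean gap between each user's signup and their first log; a sketch with hypothetical timestamps:

```python
from datetime import datetime

# Hypothetical (signup, first_entry) timestamp pairs for three users.
pairs = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 21, 0)),   # 12 hours
    (datetime(2024, 5, 2, 8, 0), datetime(2024, 5, 2, 8, 30)),   # 30 minutes
    (datetime(2024, 5, 3, 10, 0), datetime(2024, 5, 6, 10, 0)),  # 3 days
]

# Convert each gap to fractional days, then average.
gaps = [(first - signup).total_seconds() / 86400 for signup, first in pairs]
avg_days = sum(gaps) / len(gaps)
print(f"{avg_days:.2f}")  # 1.17
```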
-
↻
Retention
?
When users last tracked something. Healthy apps have most users in "Today" or "1-7 days." Heavy "30+ days" or "Never" segments signal churn risk. If 40% are in "Never tracked" — onboarding isn't converting signups to active users
When users last tracked something
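These retention segments amount to bucketing each user's days-since-last-entry; a minimal sketch, where the "8-30 days" middle bucket is an assumption inferred from the labels named above (Today, 1-7 days, 30+ days, Never):

```python
from typing import Optional

def retention_bucket(days_since_last: Optional[int]) -> str:
    """Map days since a user's last entry to a retention segment."""
    if days_since_last is None:
        return "Never"       # signed up but never tracked anything
    if days_since_last == 0:
        return "Today"
    if days_since_last <= 7:
        return "1-7 days"
    if days_since_last <= 30:
        return "8-30 days"   # assumed middle bucket
    return "30+ days"

print([retention_bucket(d) for d in (0, 3, 12, 45, None)])
# ['Today', '1-7 days', '8-30 days', '30+ days', 'Never']
```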
💬
App Feedback Activity
?
How actively users are providing feedback. High participation rates mean engaged users. Track submission trends and response times to maintain feedback loop health.
i
Measures the health of your feedback loop — are users engaged, and are you responding?
-
Total Feedback
?
All feedback submissions ever. Includes bug reports, feature requests, and general feedback across all time.
-
Submitters
?
Unique users who have submitted at least one piece of feedback. Compare to total users to gauge participation breadth. e.g. 5 of 12 users submitted feedback = 42% participation
-
Participation Rate
?
% of cohort users who have ever submitted feedback. Target 50%+ for healthy engagement. Below 30% means most users are silent. e.g. 42% — decent but could improve. Consider prompting inactive users
-
Last 7 Days
?
Feedback received in the past week. A recency signal — are testers still actively providing input, or has feedback dried up? e.g. 0 in last 7 days after initial burst = feedback fatigue, may need to re-engage
Feedback by Type
?
Distribution of feedback types. Mostly bug reports = stability issues. Mostly feature requests = users want to invest in the product. Balance is healthy.
Response Pipeline
?
Where feedback items sit in the workflow. Heavy "New" = falling behind on reviews. Growing "Resolved" = healthy feedback loop. Track response rate and time to maintain trust with testers.
Top Feedback Areas
?
Which parts of the app generate the most feedback. High-feedback areas need the most attention — either they're broken or they're the most-used surfaces.
Weekly Submissions
?
Feedback volume per week. An initial spike is normal. Watch for sustained trickle vs. complete drop-off. Bug reports trending down = stability improving.
👤
Per-User Activity
?
Individual user breakdown showing signup date, last activity, and per-feature usage counts. Identify power users, at-risk users (no recent activity), and users who never engaged.
i
Sorted by total entries. Green dot = active in last 7 days. Red = dormant (7+ days). Gray = never tracked.
|
User↕
|
Signed Up↕
|
Last Active↕
|
Bathroom↕
|
Wellness↕
|
Notes↕
|
Episodes↕
|
Appts↕
|
Reports↕
|
Total↓
|
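The status dots in this table follow the rule given in the note above (green = active in the last 7 days, red = dormant 7+ days, gray = never tracked); a sketch, assuming last activity is expressed as days since the user's last entry:

```python
from typing import Optional

def status_dot(days_since_last_entry: Optional[int]) -> str:
    """Classify a user for the per-user activity table's status dot."""
    if days_since_last_entry is None:
        return "gray"    # never tracked anything
    if days_since_last_entry < 7:
        return "green"   # active in the last 7 days
    return "red"         # dormant: 7+ days without an entry

print([status_dot(d) for d in (2, 10, None)])  # ['green', 'red', 'gray']
```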