Every indie developer eventually hits the same wall: you're getting feedback from App Store reviews, a feedback form, maybe email and social media too — and everything feels equally urgent. A 1-star review about a crash. A feature request from your most loyal user. A confused person who can't find the settings button. A complaint about your pricing.
You can't work on everything. You shouldn't work on everything. The question is: what do you work on first?
This is triage. And it's literally why we named our product AppTriage — because we believe the hardest part of feedback isn't collecting it, it's deciding what to do with it.
The 5-level priority system
I use a simple five-level system. It's not original — it's borrowed from emergency medicine and adapted for software. The point isn't sophistication. The point is that every piece of feedback gets classified the moment it comes in, and you always know what to work on next.
P1 — Crashes and data loss. Fix immediately. These are emergencies. If your app crashes on launch, loses user data, or has a security issue, everything else stops. P1 items should be fixed within 24 hours. If you can't fix them, ship a workaround or a mitigation.
P2 — Bugs that affect core workflows. Fix this week. The user can use your app, but something important is broken or degraded. Sync is slow. Export is wrong. Search returns bad results. These erode trust over time and generate a steady stream of negative reviews.
P3 — UX confusion reported 3+ times. Add to roadmap. If one person is confused, it might be them. If three people report the same confusion, it's your UI. These items go on the roadmap for the next design improvement cycle.
P4 — Feature requests. Collect, count, decide later. Every feature request gets tagged but not acted on immediately. The decision to build comes from volume: when enough people ask for the same thing, the roadmap writes itself.
P5 — Edge cases and nice-to-haves. Park it. The user who wants dark mode only for the settings screen. The person who needs Bluetooth export to a specific printer model. These are valid requests that serve a tiny audience. Acknowledge them, tag them, and move on.
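If you track feedback in a script or database, the five levels map cleanly onto a sortable enum. A minimal Python sketch (the names and comments are mine, not a real AppTriage API):

```python
from enum import IntEnum

class Priority(IntEnum):
    """Five-level triage priority; lower number = more urgent."""
    P1 = 1  # crashes, data loss, security: fix within 24 hours
    P2 = 2  # bugs in core workflows: fix this week
    P3 = 3  # UX confusion reported 3+ times: add to roadmap
    P4 = 4  # feature requests: collect, count, decide later
    P5 = 5  # edge cases and nice-to-haves: park it

# Sorting a backlog by priority gives you the work order for free
backlog = [Priority.P4, Priority.P1, Priority.P3]
work_order = sorted(backlog)
```

Because `IntEnum` values compare as integers, "what do I work on next?" is just the head of the sorted list.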
The three-question test
When feedback comes in, I run it through three questions. It takes about 10 seconds per item.
Question 1: Is this a bug? If yes, it's P1 or P2 depending on severity. Tag it, file it in your issue tracker, and schedule the fix.
Question 2: Is this a UX confusion? If yes, tag it as "UX" and count occurrences. Below 3 reports: note it. At 3+ reports: it's P3, goes on the roadmap.
Question 3: Is this a feature request? If yes, tag it with the feature name and move on. Don't let individual requests derail your current work. The decision point comes during your weekly review (more on that below).
If a feedback item doesn't fit any of these three categories — if it's a pricing complaint, a general rant, or a comparison to a competitor — it's still worth reading. But it doesn't need to enter your priority system. Respond if it's a public review, and let it inform your longer-term thinking.
The volume threshold rule
This is the most important principle in the whole framework: volume is the signal, not intensity.
One user screaming about a missing feature is noise. Ten users calmly requesting the same feature is a signal. I've seen developers re-architect their app because one loud user demanded it, while ignoring a quiet pattern of 15 users asking for the same simple improvement.
My thresholds, calibrated for indie apps with 1,000-10,000 MAU:
3 reports = it's real. Add a tag, start tracking it actively.
10 reports = it's urgent. Move it up to P2 or P3 regardless of original classification.
25+ reports = it's your next feature. Prioritize it above almost everything else.
These numbers scale with your user base, obviously. If you have 100,000 MAU, your thresholds should be higher. But the principle holds: count, don't weigh.
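If you automate the thresholds, the promotion rule is a few comparisons. A Python sketch; the linear MAU scaling here is my assumption, since the article only says thresholds should rise with your user base:

```python
def volume_signal(report_count: int, mau: int = 5_000) -> str:
    """Map a report count to an action, calibrated for 1k-10k MAU.

    Scaling thresholds linearly with MAU is an illustrative choice,
    not a stated rule; tune it to your own app.
    """
    scale = max(1, mau // 5_000)
    if report_count >= 25 * scale:
        return "next feature"             # prioritize above almost everything
    if report_count >= 10 * scale:
        return "urgent: promote to P2/P3"
    if report_count >= 3 * scale:
        return "real: tag and track"
    return "noise"
```

Note how the same 25 reports that mean "next feature" at 5,000 MAU read as noise at 100,000 MAU: count, don't weigh, but count relative to your audience.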
Tags, not folders
One thing I learned the hard way: feedback should be tagged, not sorted into folders.
A feedback item is rarely about one thing. "Your app crashed when I tried to export a PDF in dark mode" is simultaneously a crash bug (P1), a dark mode issue, and an export feature concern. If you put it in a "crashes" folder, you lose the connection to dark mode and export. Tags let one item live in multiple contexts.
The tags I use across all my apps: bug, ux-confusion, feature-request, pricing, onboarding, performance. That's it. Six tags cover about 95% of everything I receive. You don't need 40 categories. You need 6 that you consistently apply.
Every item also gets a status: new → in-progress → resolved → closed. I borrowed this from the review triage workflow I described in my earlier post, and it works just as well for direct feedback.
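Tags-plus-status is easy to model: a set of tags and one status field that walks the workflow. A minimal Python sketch (the field names and `advance` helper are illustrative):

```python
from dataclasses import dataclass, field

# The six category tags from the article; feature-name tags are free-form
CATEGORY_TAGS = {"bug", "ux-confusion", "feature-request",
                 "pricing", "onboarding", "performance"}
STATUS_FLOW = ("new", "in-progress", "resolved", "closed")

@dataclass
class FeedbackItem:
    text: str
    tags: set = field(default_factory=set)
    status: str = "new"

    def advance(self) -> str:
        """Move to the next status in the workflow (stops at closed)."""
        i = STATUS_FLOW.index(self.status)
        self.status = STATUS_FLOW[min(i + 1, len(STATUS_FLOW) - 1)]
        return self.status

item = FeedbackItem("Crashed exporting a PDF in dark mode")
item.tags.update({"bug", "dark-mode", "export"})  # one item, three contexts
```

A `set` of tags, rather than a single folder field, is exactly what lets that one crash report show up in the bug query, the dark-mode query, and the export query at once.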
The weekly review (15 minutes)
Every Monday morning, I spend 15 minutes reviewing my feedback inbox. Not reading individual items — I do that daily. The weekly review is about patterns.
I look at: which tags grew the most this week, whether any P4 items have crossed the 10-report threshold and should be promoted, whether any resolved bugs are still generating new reports (meaning the fix didn't work), and whether there are cross-channel patterns (same issue in reviews and form submissions).
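The pattern pass itself can be scripted against whatever store holds your feedback. A Python sketch of two of the checks above (the item shape, a dict with `tags` and a `received` date, is an assumption):

```python
from collections import Counter
from datetime import date, timedelta

def weekly_review(items: list[dict], today: date) -> dict:
    """Surface the week's patterns from raw feedback items.

    Returns which tags grew this week and which tags have crossed
    the 10-report threshold and should be promoted.
    """
    week_ago = today - timedelta(days=7)
    this_week = [i for i in items if i["received"] >= week_ago]
    tag_growth = Counter(t for i in this_week for t in i["tags"])
    totals = Counter(t for i in items for t in i["tags"])
    promote = sorted(t for t, n in totals.items() if n >= 10)
    return {"tag_growth": tag_growth, "promote": promote}
```

Fifteen minutes of this on Monday, even done by hand in a spreadsheet, is the synthesis step; the script just makes the counting honest.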
This weekly review is what turns raw feedback into roadmap decisions. Without it, you're just reading messages. With it, you're making informed product decisions.
Common mistakes
Building for the loudest user. The person who writes three paragraphs of rage is not necessarily your most important user. Volume across users beats intensity from one user.
Treating all feedback as actionable. Some feedback is venting. Some is from users who aren't your target audience. Not every piece of feedback demands a response beyond "thanks for the input."
Never closing items. If you fixed the bug, responded to the review, and the user is happy — close it. An inbox full of 200 "open" items is demoralizing and useless. Triage means some items get resolved and go away.
Skipping the weekly review. Daily reading without weekly synthesis is just consumption. The value is in the patterns, not the individual items.
A tool helps, but the system matters more
You can run this framework in a spreadsheet if you have to. You can run it in a Notion database. You can run it in AppTriage. The tool matters less than the discipline of classifying, counting, and reviewing on a cadence.
That said, the $0 stack of spreadsheets and free-tier databases breaks at about 50 feedback items per month. If you're past that, a dedicated tool saves you time that's better spent building.
Put this framework into practice with AppTriage. Our review management inbox lets you tag, prioritize, and triage feedback from App Store reviews and your feedback form — all in one place. Free for your first app.