Methodology · Trust · Updated May 2026

How we build reviews you can actually trust.

Most kids' app reviews are a rushed paragraph from a writer who never opened the app. Ours are the opposite: we aggregate every public signal we can find — parent reviews, developer disclosures, community discussions — synthesize them through five distinct editorial voices, and cite every claim. A human editor approves each draft before it goes live.

5 editorial voices · distinct lenses
0 paid reviews · ever
≥1 sources cited per claim
280-char copyright cap on quotes
The five-step pipeline

What every game goes through before it gets a score.

Each step produces evidence we can show parents. No vibes. No marketing copy. No "feels educational."

01 · Public-source aggregation

Before our agents draft a single sentence, they pull every public signal we can find: App Store and Google Play reviews (sorted by helpful and recent), Reddit threads in r/parenting / r/toddlers / r/iOSGaming, parenting forums, journalist write-ups, and the developer's own privacy and accessibility disclosures.

Median 200+ sources per game. Each one is logged with URL, excerpt (≤280 chars), and timestamp.
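The per-source logging described above can be sketched as a small record type. This is an illustrative sketch only; the type name, field names, and `logSource` helper are our own, not the site's actual schema.

```typescript
// One logged entry in a game's source dossier (illustrative names).
interface SourceRecord {
  url: string;
  excerpt: string;   // capped at 280 chars for copyright reasons
  fetchedAt: string; // ISO 8601 timestamp
}

const EXCERPT_CAP = 280;

// Truncate over-long excerpts to the cap, ending with an ellipsis.
function logSource(url: string, rawExcerpt: string): SourceRecord {
  return {
    url,
    excerpt:
      rawExcerpt.length > EXCERPT_CAP
        ? rawExcerpt.slice(0, EXCERPT_CAP - 1) + "…"
        : rawExcerpt,
    fetchedAt: new Date().toISOString(),
  };
}
```

With 200+ such records per game, every claim in a review can point back to a concrete URL, excerpt, and fetch time.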
02 · Persona-aligned synthesis

Each review is written by one of five AI editorial voices — the Skeptic (dark patterns), the Educator (developmental skills), the Curator (taste & comparison), the Guardian (privacy & accessibility), or the Translator (international & indie). Each persona has a distinct lens and a banned-phrase list that keeps the prose honest.

Personas are AI agents. They never claim first-hand experience. They synthesize and cite.
03 · Citation-density check

Before a draft reaches the editor, an automated validator scans it for: first-person experience claims (banned), uncited quantitative statements (rejected), banned phrases (replaced), and reading level (target Flesch-Kincaid 8–10). Any draft that doesn't pass goes back for re-prompting.

Validation regex enforced in `server/agents/_validation.ts`. Failed drafts never reach the queue.
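A minimal sketch of the kind of checks the validator runs. The regexes, phrase list, and function names here are illustrative assumptions, not the actual contents of `server/agents/_validation.ts`.

```typescript
// Example patterns only; the real lists are longer and maintained separately.
const FIRST_PERSON = /\b(I|we)\s+(played|tested|tried|noticed)\b/i;
const BANNED_PHRASES = ["feels educational", "kids love it"];

interface Violation {
  rule: string;
  detail: string;
}

// citedSentences holds the indices of sentences that carry a citation.
function validateDraft(draft: string, citedSentences: Set<number>): Violation[] {
  const violations: Violation[] = [];
  const sentences = draft.split(/(?<=[.!?])\s+/);

  sentences.forEach((sentence, i) => {
    // First-person experience claims are banned outright.
    if (FIRST_PERSON.test(sentence)) {
      violations.push({ rule: "first-person-claim", detail: sentence });
    }
    // Quantitative statements must carry a citation.
    if (/\d/.test(sentence) && !citedSentences.has(i)) {
      violations.push({ rule: "uncited-quantity", detail: sentence });
    }
  });

  for (const phrase of BANNED_PHRASES) {
    if (draft.toLowerCase().includes(phrase)) {
      violations.push({ rule: "banned-phrase", detail: phrase });
    }
  }
  return violations;
}
```

A draft that returns any violations goes back for re-prompting; a reading-level check (Flesch-Kincaid 8–10) would run alongside these rules.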
04 · Human editor approval

A human editor — currently one person — reviews every draft against its source dossier in a split-pane editor. Each citation is clickable; each pro/con is bound to source rows. The editor can approve, request a revision (the drafter re-runs with feedback), or reject (terminal, with reason logged for prompt improvement).

Editorial decisions live in /admin/queue. The full edit history is immutable in the audit log.
05 · Drift monitoring

Once a review is published, a weekly monitor re-fetches the game's store listing and compares: version bumps, new IAPs, privacy-label changes, and 14-day review-trend deltas. Any significant change flags the review as "needs re-review" — the agents start over, and the editor decides whether to re-publish or retract.

Review pages display a "Last verified" timestamp. Stale reviews carry an honest staleness badge.
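The weekly comparison can be sketched as a diff between two listing snapshots. Field names and the rating-delta threshold below are illustrative assumptions, not the monitor's actual configuration.

```typescript
// Snapshot of the store listing at one fetch (illustrative fields).
interface ListingSnapshot {
  version: string;
  iapIds: string[];        // in-app purchase identifiers
  privacyLabelHash: string; // hash of the privacy nutrition label
  avgRating14d: number;    // 14-day trailing average rating
}

function needsReReview(prev: ListingSnapshot, curr: ListingSnapshot): boolean {
  const newIaps = curr.iapIds.some((id) => !prev.iapIds.includes(id));
  const ratingDelta = Math.abs(curr.avgRating14d - prev.avgRating14d);
  return (
    prev.version !== curr.version ||
    newIaps ||
    prev.privacyLabelHash !== curr.privacyLabelHash ||
    ratingDelta >= 0.5 // illustrative threshold for a "significant" trend shift
  );
}
```

Any `true` result flags the review as stale; nothing is silently re-scored without the editor's sign-off.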
The Play Score

Four numbers, weighted, capped by safety.

Each game gets four sub-scores out of 100, combined into a single Play Score using fixed weights. A failing Safety score caps the total — no matter how fun a game is, if it's unsafe, it can't score above 70.

Fun · weight ×0.30

Synthesized from voluntary-replay signals across hundreds of parent reviews. Did kids ask to keep playing? Did the game hold attention without coercion?

Learning · weight ×0.25

Real, transferable skill mapped to developmental frameworks (Piaget stages, K-2 curricula). Not "educational because it has letters."

Safety · weight ×0.25

No predatory monetization, no chat with strangers, no dark patterns, no ads to other apps. App Privacy nutrition labels are read verbatim. COPPA-aware analysis.

Value · weight ×0.20

Hours of play per dollar, fairness of the price model, and whether the experience holds up to a second playthrough months later.

The safety cap
If a game scores below 60 on Safety, its overall Play Score is capped at 70 — regardless of Fun, Learning, or Value. We don't recommend unsafe software, period.
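The weighting and cap above amount to a few lines of arithmetic. This is a worked sketch: the weights (0.30/0.25/0.25/0.20), the 60 safety threshold, and the 70 cap come from the text; the function and field names are our own.

```typescript
// Four sub-scores, each out of 100.
interface SubScores {
  fun: number;
  learning: number;
  safety: number;
  value: number;
}

function playScore({ fun, learning, safety, value }: SubScores): number {
  const weighted = 0.3 * fun + 0.25 * learning + 0.25 * safety + 0.2 * value;
  // Safety below 60 caps the overall score at 70, regardless of the rest.
  const capped = safety < 60 ? Math.min(weighted, 70) : weighted;
  return Math.round(capped);
}
```

For example, a game with Fun 95, Learning 90, Value 85 but Safety 50 would compute to 80.5 weighted, then be capped at 70.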
Editorial policy

Independence isn't a value. It's the product.

01 · No paid reviews. Period.

Every review on this site is independent. We don't accept payment from developers. Affiliate links to App Store / Google Play exist and are disclosed; commission revenue does not influence scores or whether a game is reviewed.

02 · Affiliate links, disclosed.

When you click "Buy on App Store" or "Buy on Google Play," we may earn a small commission. It never changes the score. It never affects whether a game gets reviewed. Disclosure is on every review page.

03 · AI authorship, transparent.

Every byline says "AI-curated voice." Every review states the editor who approved it. We don't pretend AI personas are humans. We don't pretend reviews are first-hand. The whole moat is honesty.

04 · Public corrections, dated.

When sources change or we get something wrong, we update the article with a dated correction at the top. Edit history is preserved. We don't quietly edit. We don't memory-hole. Flag-an-issue link is on every review.

The team

A small editorial team. One review per day, Monday to Friday.

Each of us covers a specialty. We research apps and games using public sources (parent reviews, developer disclosures, community discussions), cross-check against our own area of expertise, and publish on a fixed weekday. No one writes outside their lane. No one fakes hands-on testing.

Tell us what we got wrong.

Corrections, deletion requests, or a privacy question: email editor@toddler.games. We respond within 14 days. No forms, no portal.

Developers asking for a review: we don't accept payment, sponsorship, or pre-publication review of drafts. But if your app is recently launched and you think we missed it, send the App Store / Google Play link with a one-paragraph "what's interesting about this." Most candidates we discover that way still don't get reviewed — but we read every email.

Now go pick something great.

Every review on this site is sourced, cited, and human-approved. Browse by age, by score, or by what your kid's into.