Lingokids runs a high-scale mobile app with two distinct audiences and success criteria: kids need a delightful learning experience, while parents need clarity and confidence to subscribe. That makes experimentation high leverage, but also high risk: a “small” paywall change can swing revenue, retention, and long-term engagement.
The monetization team wanted to move faster and test more ideas in parallel, without letting overlapping experiments invalidate results or a bad change quietly roll out to 100% of traffic. At the same time, they needed guardrails to ensure short-term monetization wins didn’t damage retention.
GrowthBook is used across Lingokids by engineering, product, and data: roughly 40 active users inside a 150+ person company rely on it for feature flags and experimentation, with additional read-only stakeholders following along. The platform supports Lingokids’ React Native mobile app and Ruby backend, with results analyzed alongside their data stack.
“We use GrowthBook extensively for both experimentation and feature flags. We often have five or six experiments in flight, and things have been quite smooth.”
— Paulo Alves Pereira, VP of Engineering
On the monetization side, experimentation is deeply embedded into how the team operates:
“For the monetization metrics we normally use conversion and revenue. From a guardrail point of view, we keep an eye on retention.”
— Filipa Batista, Product Manager, Billing, Pricing & Monetization
Over time, the monetization team dramatically increased experiment throughput. Today, they run around 15 experiments per month, roughly doubling the number of tests they can execute in parallel.
Two GrowthBook capabilities unlocked this scale: namespaces and JSON-based configuration parameters.
As experimentation increased, overlap became a real problem. Multiple experiments touching the same purchase flow could clash and contaminate results.
Namespaces solved this by letting the team bucket users into isolated traffic segments (for example, 20% slices), so multiple experiments can run simultaneously without interference.
“Namespaces really shifted the velocity difference. We can bucket experiments, avoid clashes, and run more in parallel.”
— Alejandro Martí, Engineering Manager, Monetization & Billing
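The bucketing idea above can be sketched in a few lines. This is an illustrative simplification, not the GrowthBook SDK’s actual implementation: each user is hashed into the [0, 1) range for a shared namespace, and each experiment claims a non-overlapping slice of that range, so no user can land in two experiments at once.

```typescript
// Simplified sketch of namespace bucketing. The hash, ranges, and names
// here are illustrative; the real GrowthBook SDK has its own assignment logic.

// FNV-1a 32-bit hash, mapped into [0, 1)
function hashToUnit(value: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < value.length; i++) {
    h ^= value.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return (h >>> 0) / 0x100000000;
}

// An experiment only sees users whose bucket falls inside its
// [start, end) slice of the shared namespace.
interface NamespaceSlice {
  namespace: string; // e.g. "monetization" (hypothetical name)
  start: number;     // e.g. 0.0
  end: number;       // e.g. 0.2 → a 20% slice
}

function inSlice(userId: string, slice: NamespaceSlice): boolean {
  const bucket = hashToUnit(`${slice.namespace}:${userId}`);
  return bucket >= slice.start && bucket < slice.end;
}

// Two paywall experiments in non-overlapping 20% slices of the same
// namespace can never receive the same user.
const expA: NamespaceSlice = { namespace: "monetization", start: 0.0, end: 0.2 };
const expB: NamespaceSlice = { namespace: "monetization", start: 0.2, end: 0.4 };

console.log(inSlice("user-123", expA) && inSlice("user-123", expB)); // always false
```

Because both experiments share the same namespace hash, a user gets exactly one bucket value, and disjoint slices guarantee disjoint audiences.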
For many experiments, Lingokids uses GrowthBook configuration parameters (via JSON) to control behavior dynamically.
Instead of shipping new code or waiting on mobile release cycles, the team can launch new variants immediately by adjusting configuration values:
“We don’t need any code changes, we don’t need an app release. We just configure the new tests and launch right away.”
— Filipa Batista, Product Manager, Billing, Pricing & Monetization
This approach significantly reduced time-to-data and removed mobile release bottlenecks from the experimentation process.
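One way this pattern works in practice: the app ships with safe defaults, and each variant is just a JSON payload delivered by the experimentation platform and merged over those defaults. The field names below (`title`, `highlightProduct`, and so on) are hypothetical, not Lingokids’ actual configuration schema.

```typescript
// Hedged sketch of a remotely configurable paywall. All field names and
// defaults here are invented for illustration.

interface PaywallConfig {
  title: string;
  productIds: string[];
  highlightProduct: string | null;
  showBenefitsList: boolean;
}

const DEFAULTS: PaywallConfig = {
  title: "Unlock all games",
  productIds: ["monthly", "annual"],
  highlightProduct: null,
  showBenefitsList: false,
};

// Merge whatever JSON the platform delivers over safe defaults, so a
// missing or partial payload can never break the paywall.
function resolvePaywallConfig(remote: Partial<PaywallConfig> | null): PaywallConfig {
  return { ...DEFAULTS, ...(remote ?? {}) };
}

// Launching a new variant is just a new JSON payload — no code change,
// no app release:
const variant = resolvePaywallConfig({
  highlightProduct: "annual",
  showBenefitsList: true,
});
console.log(variant.highlightProduct); // "annual"
```

The key design choice is that the app never depends on the payload being present or complete: every variant degrades gracefully to the default paywall.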
High velocity only works if risk is controlled. Lingokids uses a disciplined rollout process for every experiment, monitoring guardrail metrics as exposure increases:
“Whenever we see it impacting conversion or retention by a certain amount, we stop it right away.”
— Filipa Batista, Product Manager, Billing, Pricing & Monetization
This process has helped Lingokids avoid shipping costly losers at scale.
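The stop rule described above can be sketched as a simple threshold check. The threshold value and metric names here are hypothetical, standing in for whatever limits the team actually uses:

```typescript
// Illustrative guardrail check; the 5% threshold is an invented example,
// not Lingokids' actual stopping rule.

interface GuardrailReading {
  metric: "conversion" | "retention";
  relativeChange: number; // e.g. -0.08 = 8% drop vs. control
}

// Stop the experiment as soon as any guardrail metric drops by more
// than the allowed amount.
function shouldStopExperiment(
  readings: GuardrailReading[],
  maxDrop: number = 0.05
): boolean {
  return readings.some((r) => r.relativeChange < -maxDrop);
}

console.log(
  shouldStopExperiment([
    { metric: "conversion", relativeChange: 0.02 },
    { metric: "retention", relativeChange: -0.08 },
  ])
); // true → halt the rollout
```

Checking guardrails at each rollout stage is what lets the team run many experiments in parallel without a bad variant ever reaching 100% of traffic.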
The monetization team now runs around 15 experiments/month, roughly a 2× increase in parallel experimentation volume since adopting GrowthBook more heavily. Their typical “solid win” is ~5–6% uplift, with occasional standouts nearing 15%. Roughly one-third of experiments are winners.
One of Lingokids’ proudest examples is a paywall redesign in the gaming experience. The team iterated 4–5 times, evolving from a bare-bones layout (title, two products, button) to a clearer flow emphasizing one product, reduced cognitive load, and better communication of subscription benefits. The outcome: ~7% increase in conversion and revenue, without harming the play experience.
As experimentation scales, the team is candid about attribution complexity. With overlapping experiments and variability from user acquisition channels, it’s not realistic to sum per-test uplifts into a single “conversion went from X to Y” number. Instead, they focus on repeatable lift ranges, strong win rates, and post-rollout validation with their data team to confirm experiment effects show up in real-world business performance.
“It’s one thing to run one experiment. It’s another to connect it to all the previous iterations to understand if it’s worth continuing in that direction.”
— Alejandro Martí, Engineering Manager, Monetization & Billing
The team is actively exploring GrowthBook’s APIs, data access, and upcoming capabilities to help automate insight generation and guide future experimentation.