FluentPlay is a set of voice-driven games designed around how stuttering actually works in the brain. Your voice controls everything. No scripts. No breathing exercises. Just real practice, real data, and tools you can use at home.
When someone stutters, the problem isn't in their mouth — it's in the brain's timing system. The part of the brain that plans speech sends signals that are mistimed or unstable. FluentPlay's games are designed around this science: they track how speech is produced, syllable by syllable, and show where the timing breaks down.
Technical: the speech-motor circuit runs from phonological encoding in the inferior frontal gyrus (IFG) to motor gating in the putamen to execution in primary motor cortex (M1), and it breaks down at specific points along that path. FluentPlay's PAD framework monitors those breakdowns in real time, per syllable, during gameplay.
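The per-syllable idea can be sketched with a toy energy-envelope segmenter. This is illustrative only — the function name, threshold, and logic here are assumptions for the sketch, not FluentPlay's actual PAD implementation, which would use much richer acoustic cues:

```python
def syllable_spans(envelope, thresh=0.1):
    """Split a frame-level energy envelope into syllable-like spans.

    Toy illustration: a 'syllable' is any run of frames whose energy
    stays at or above `thresh`. Real detectors use far richer cues.
    """
    spans, start = [], None
    for i, e in enumerate(envelope):
        if e >= thresh and start is None:
            start = i                      # syllable onset
        elif e < thresh and start is not None:
            spans.append((start, i))       # syllable offset
            start = None
    if start is not None:                  # envelope ended mid-syllable
        spans.append((start, len(envelope)))
    return spans

# Two energy bursts separated by silence -> two syllable-like spans.
env = [0.0, 0.4, 0.6, 0.0, 0.0, 0.5, 0.3, 0.0]
print(syllable_spans(env))  # [(1, 3), (5, 7)]
```

Once speech is segmented this way, each span can be scored independently, which is what "per syllable, in real time" means in practice.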
Each game listens to your speech through the microphone and responds in real time. You don't type or tap — you talk. The games are designed to exercise specific parts of how your brain plans and produces speech. We're finalizing the suite now — join the waitlist to get access when they go live.
Easy onset and prolonged speech are among the most widely used fluency shaping techniques. Rainbow Syllables operationalizes these by requiring sustained, controlled voicing across an entire phrase. The volume zone feedback reinforces proprioceptive awareness — the speaker learns to feel and regulate their airflow and laryngeal tension in real time, which is the foundation of easy onset work.
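The "volume zone" idea reduces to checking per-frame loudness against a target band. A minimal sketch, assuming frames of raw samples and an RMS level metric (the band limits and labels are illustrative, not the game's actual tuning):

```python
import math

def rms(frame):
    """Root-mean-square level of one audio frame (a list of samples)."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def zone_feedback(frames, lo=0.05, hi=0.30):
    """Label each frame relative to a target loudness band.

    Illustrative stand-in for volume-zone feedback: 'low' means too
    quiet or breathy, 'high' means too forceful, 'in' means the easy,
    controlled voicing that easy-onset work aims for.
    """
    labels = []
    for frame in frames:
        level = rms(frame)
        labels.append("low" if level < lo else "high" if level > hi else "in")
    return labels

frames = [[0.01, -0.01], [0.2, -0.2], [0.9, -0.9]]
print(zone_feedback(frames))  # ['low', 'in', 'high']
```

Feeding this label stream back to the speaker, frame by frame, is what turns a static exercise into the real-time proprioceptive loop described above.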
Phrase-level production exercises the full basal ganglia–thalamocortical loop. Each syllable requires the putamen to gate the next motor plan in sequence. By practicing full phrases with real-time feedback, the speaker is training the supplementary motor area (SMA) to coordinate smoother handoffs from one articulatory gesture to the next, which is the exact point where stuttering disrupts the timing chain.
Continuous phonation and coarticulation drills are standard in fluency therapy. Sound Bridge isolates the exact moment where voicing breaks down — the transition between two sounds. SLPs use connected speech tasks to train this, but most tools don't give real-time feedback on whether the speaker actually sustained voicing through the boundary. Sound Bridge does.
Coarticulation is controlled by the premotor cortex, which programs the specific sequence of articulatory muscle movements. In stuttering, premotor programming is less reliable during transitions: the abstract phonological plan fails to convert smoothly into continuous motor commands. Sound Bridge directly trains this conversion by requiring unbroken voicing across phoneme boundaries, strengthening the feedforward control described in the DIVA (Directions Into Velocities of Articulators) model of speech production.
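Checking for "unbroken voicing across phoneme boundaries" amounts to gap detection on a frame-level voicing track. A sketch under stated assumptions — real voicing detectors use pitch and autocorrelation, and the threshold and gap length here are invented for illustration, with a simple energy track standing in:

```python
def voicing_breaks(levels, thresh=0.05, min_gap=2):
    """Find runs of unvoiced frames long enough to count as a break.

    `levels` is a frame-level energy (or voicing-probability) track.
    A 'break' is at least `min_gap` consecutive frames below `thresh`.
    Energy is a crude proxy; real systems use pitch/autocorrelation.
    """
    breaks, start = [], None
    for i, v in enumerate(levels + [1.0]):  # sentinel closes a trailing gap
        if v < thresh and start is None:
            start = i
        elif v >= thresh and start is not None:
            if i - start >= min_gap:
                breaks.append((start, i))
            start = None
    return breaks

# Voicing drops out for two frames mid-phrase: one detected break.
track = [0.2, 0.3, 0.0, 0.0, 0.25, 0.3]
print(voicing_breaks(track))  # [(2, 4)]
```

An empty result over the boundary region is the "you actually sustained voicing" signal; a non-empty one pinpoints exactly where the bridge broke.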
Avoidance reduction therapy (Sheehan) and voluntary stuttering (Van Riper) are foundational approaches in stuttering treatment. The principle: avoidance reinforces fear, and fear drives more avoidance. Summit applies structured exposure — the speaker confronts their most feared words through repetition until the emotional charge diminishes. This is the same habituation mechanism used in cognitive behavioral approaches to anxiety.
Word-specific fear activates the amygdala, which modulates the basal ganglia gating system. When a feared word triggers an anxiety response, the putamen's motor gating becomes more unreliable — the gate misfires, producing blocks and repetitions. Repeated voluntary production of the feared word reduces the amygdala's threat response over time, which stabilizes the downstream motor gating circuit. Summit quantifies this process: the progress bar is a visible record of exposure volume.
Motor learning principles (specificity of practice, distributed practice, and variable practice) are the foundation of speech-motor training. Phoneme Drill isolates individual sound production so the speaker can build accuracy and consistency at the smallest unit before combining sounds into connected speech. Its three sub-modes target complementary skills: Rhythm Pad trains proprioceptive control (hitting volume targets), Cadence trains temporal regulation (steady rhythm), and Bubble Hunt trains precision under cognitive load (accuracy with time pressure).
Isolated phoneme production targets the M1 orofacial region — the final cortical output for speech motor commands. By training individual sound production with immediate feedback, the speaker strengthens the mapping between motor plan and motor execution. The cerebellum provides the millisecond-level timing coordination required here. The three sub-modes exercise different aspects of cerebellar-cortical coordination: volume regulation, temporal pacing, and adaptive motor control under varying demands.
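Temporal pacing of the kind Cadence rewards can be scored, in a toy way, as the evenness of syllable spacing. One standard metric is the coefficient of variation of inter-onset intervals — this is an illustrative metric chosen for the sketch, not necessarily the product's actual scoring:

```python
import statistics

def cadence_score(onsets):
    """Steadiness of syllable timing: coefficient of variation of
    inter-onset intervals. 0.0 means perfectly even spacing; larger
    values mean more erratic pacing. Illustrative metric only.
    """
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]
    return statistics.pstdev(intervals) / statistics.mean(intervals)

even = [0.0, 0.5, 1.0, 1.5, 2.0]    # metronome-steady syllable onsets (s)
uneven = [0.0, 0.3, 1.1, 1.4, 2.0]  # rushed, then stretched
print(cadence_score(even))           # 0.0
print(round(cadence_score(uneven), 2))
```

A per-attempt number like this is what lets the game give immediate, graded feedback on rhythm rather than a binary pass/fail.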
We're finishing the game suite now. Drop your email and we'll let you know the moment you can play.
Will Carbone is the founder of FluentPlay Technologies. He's stuttered since childhood — the kind that teaches you to plan every sentence before you say it.
He came to speech technology from synthetic mRNA manufacturing — process development, tech transfer, production science. When he looked at what existed for people who stutter, he found breathing exercises dressed up as apps and clinical tools that hadn't evolved in decades. The neuroscience was clear that stuttering is a motor timing problem. The tools ignored it.
He left biotech, learned to build, and made the tools himself. Voice-driven games. Real-time speech scoring. A detection framework grounded in how the speech-motor circuit actually works. SLP and neuroimaging partnerships to keep it honest.
FluentPlay operates out of Somerville, Massachusetts. The games work because they were built by someone who needs them.
Have a question? Want to try the games with your SLP? Looking for tools for your child? Just want to say hi? Will reads every message personally.