OpenMic runs a multi-layer acoustic engine on every syllable you speak — real-time signal analysis, speech recognition, and weighted scoring that measures pre-articulatory instability before the block happens. In any browser.
A browser-based speech fluency console. OpenMic listens through your microphone and runs a layered analysis pipeline — acoustic feature extraction, speech recognition, and weighted scoring — producing a per-syllable fluency profile in real time and a record that builds session over session.
Blocks, prolongations, and repetitions detected from the audio signal — not self-report. Every syllable scored for acoustic stability across multiple analysis layers.
Practice any text. Words advance as you speak. The scoring engine runs in parallel on every word.
Session history, trend lines, and per-word scoring maps that update every session.
Your camera stays on while you speak so you can watch your own delivery in real time.
Language-agnostic acoustic engine with cloud-based speech recognition. Supports dozens of languages and regional variants.
OpenMic doesn't run a single algorithm on your speech. It runs a layered analysis pipeline — three independent systems that each see different things in the audio signal. Their outputs converge through weighted scoring into a single per-syllable fluency score. That score is what you see. The layers underneath are what make it trustworthy enough to track progress over time.
U.S. Provisional 64/016,001 · Filed March 24, 2026

The Disfluency Feature Stream (DFS) runs at ~60 frames per second directly on the audio signal. Every frame is classified — silent, building, or voiced — producing a continuous stream of acoustic features: intensity, onset count, voiced duration, block detection. This is the raw measurement layer. No transcription, no interpretation — just signal.
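The frame-classification step can be pictured as a small pure function. This is an illustrative sketch, not OpenMic's implementation: the intensity thresholds, frame length, and type names are invented for the example.

```typescript
// Illustrative sketch of per-frame classification in a DFS-style
// feature stream. All thresholds here are hypothetical.
type FrameState = "silent" | "building" | "voiced";

interface FrameFeatures {
  state: FrameState;
  intensity: number; // RMS of the frame, 0..1
}

// At ~60 fps, one frame is roughly 16.7 ms of audio samples.
function classifyFrame(samples: number[]): FrameFeatures {
  const rms = Math.sqrt(
    samples.reduce((acc, s) => acc + s * s, 0) / samples.length
  );
  // Hypothetical boundaries: below 0.01 is silence, 0.01–0.05 is a
  // building onset, above 0.05 is sustained voicing.
  const state: FrameState =
    rms < 0.01 ? "silent" : rms < 0.05 ? "building" : "voiced";
  return { state, intensity: rms };
}
```

Downstream features such as onset count or voiced duration would then be running aggregates over this stream of frame states.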
A cloud-based speech recognition layer operates independently from DFS. It returns what was said — phoneme identity, phoneme-level accuracy scores, word boundaries, and timing. Where DFS tells you how the speech sounded acoustically, SR tells you what was produced linguistically. Two independent verdicts on the same utterance.
DFS features and SR output converge through a weighted scoring layer with defined, calibrated weights. The result is a single per-syllable fluency score — the PAD score — that reflects acoustic stability, articulatory accuracy, and timing. This is what the speaker sees. The layers underneath are what make session-over-session comparison reliable.
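The convergence step can be sketched as a weighted sum over the three named factors. The input fields and the weights below are hypothetical; the actual PAD calibration is not public.

```typescript
// Illustrative weighted combination of layer outputs into a single
// per-syllable score. Weights are invented, not OpenMic's calibration.
interface SyllableEvidence {
  acousticStability: number;    // from DFS, normalized 0..1
  articulatoryAccuracy: number; // from SR phoneme scores, 0..1
  timing: number;               // onset/duration regularity, 0..1
}

const WEIGHTS = {
  acousticStability: 0.4,
  articulatoryAccuracy: 0.4,
  timing: 0.2,
};

function padScore(e: SyllableEvidence): number {
  const raw =
    WEIGHTS.acousticStability * e.acousticStability +
    WEIGHTS.articulatoryAccuracy * e.articulatoryAccuracy +
    WEIGHTS.timing * e.timing;
  return Math.round(raw * 100); // presented as 0..100
}
```

Fixed, defined weights are what make the score comparable across sessions: the same evidence always maps to the same number.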
DFS and SR see different things. DFS detects that something acoustically unstable happened — a repeated onset, an intensity spike, a voiced duration anomaly. SR detects that the phoneme produced didn't match the target, or that a word was repeated. When both layers flag the same syllable, the scoring confidence is high. When they disagree, the system surfaces the disagreement for clinical review.
This is what makes progress tracking reliable. The score isn't one algorithm's opinion — it's the convergence of two.
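The agree/disagree logic reads naturally as a small decision function. The flag names and the "review" label are invented for illustration; the source only states that agreement raises confidence and disagreement is surfaced for clinical review.

```typescript
// Sketch of the cross-layer agreement check described above.
interface LayerFlags {
  dfsFlagged: boolean; // acoustic instability on this syllable
  srFlagged: boolean;  // phoneme mismatch or word repetition
}

type Verdict =
  | { confidence: "high"; disfluent: boolean }
  | { confidence: "review" }; // layers disagree: flag for a clinician

function scoreConfidence(f: LayerFlags): Verdict {
  if (f.dfsFlagged === f.srFlagged) {
    // Both layers reached the same verdict independently.
    return { confidence: "high", disfluent: f.dfsFlagged };
  }
  // One layer saw something the other did not.
  return { confidence: "review" };
}
```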
Each session produces a six-axis profile — Pressure Reduction, Anxiety Reduction, Ground Strength, Fluency Rate, Session Consistency, and Phoneme Range. Stacked across sessions, the shape shows longitudinal trajectory. Drag to rotate, scroll to zoom.
Eight voice-driven practice modes, all running on the same layered engine. Each targets a different aspect of speech-motor control. Microphone and browser only — no downloads.
Choose a phrase. Press the mic. Speak naturally. Syllable blobs light up in real time — blue for soft, green for medium, orange for loud. Your fluency score shows after each round. Difficulty adjusts automatically.
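The blob coloring is a simple threshold mapping over loudness. The dB boundaries below are hypothetical, chosen only to make the example concrete.

```typescript
// Hypothetical volume-to-color mapping for the syllable blobs.
type BlobColor = "blue" | "green" | "orange";

function blobColor(levelDb: number): BlobColor {
  if (levelDb < -30) return "blue";  // soft
  if (levelDb < -15) return "green"; // medium
  return "orange";                   // loud
}
```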
Easy onset and prolonged speech are among the most widely used fluency shaping techniques. Rainbow Syllables operationalizes these by requiring sustained, controlled voicing across an entire phrase.
Phrase-level production exercises the full basal ganglia–thalamocortical loop. Each syllable requires the putamen to gate the next motor plan in sequence.
Two sounds appear on screen. Say both without breaking your voice between them. 28 sound pairs across four difficulty levels. Start easy, work up.
Continuous phonation and coarticulation drills are standard in fluency therapy. Sound Bridge isolates the exact moment where voicing breaks down — the transition between two sounds.
Coarticulation is controlled by the premotor cortex. Sound Bridge directly trains feedforward control described in the DIVA model of speech production.
Type your scariest word. Hit start. Say it. Say it again. Watch your score climb and the climber rise with each repetition. The word that felt impossible becomes the word you've said fifty times.
Avoidance reduction therapy (Sheehan) and voluntary stuttering (Van Riper) are foundational. Summit applies structured exposure — confront feared words through repetition until the emotional charge diminishes.
Word-specific fear activates the amygdala, which modulates the basal ganglia gating system. Repeated voluntary production reduces the amygdala's threat response, stabilizing downstream motor gating.
Pick a word from the bank or type your own. Tap Speak and say it naturally. Each sound slot lights up in sequence — blue means it passed, red means it detected repeated attempts or high effort. Aim for all green. Less effort = better score.
Effort monitoring and proprioceptive awareness are core to fluency shaping. Articulation Trainer makes effort visible — showing which articulators are over-engaged and teaching minimal-pressure production.
Stuttering blocks involve excessive co-contraction of antagonist muscles at the articulatory level. Per-phoneme effort visualization provides biofeedback on motor overflow patterns that traditional therapy relies on clinician observation to detect.
Pads appear with a target volume zone. Make a sound and land in the zone. Green = nailed it. Difficulty increases as you improve. Three modes: hold, alternate, and burst.
Motor learning principles — specificity of practice, distributed practice, variable practice. Rhythm Pad trains proprioceptive control through volume targeting.
Volume regulation engages the orofacial region of primary motor cortex (M1) and cerebellar-cortical circuits that map sound intensity.
Watch the beat indicator. When it hits the zone, make your sound. Start slow, speed up. The game scores how close you land to each beat.
Rhythmic cueing is used clinically to externalize the timing signal that the basal ganglia typically provides internally — reducing the system's dependence on a disrupted internal clock.
External rhythm engages the supplementary motor area via the cerebellum, providing an alternative timing pathway when the basal ganglia–SMA loop is dysregulated.
Watch bubbles move in wave patterns. When one enters the green zone, make a sound at the right volume. Miss the zone and it floats away. Speed changes as you level up.
Dual-task practice — producing controlled speech while tracking a moving target — trains attention resource management, building resilience against the cognitive load that exacerbates disfluency in real conversation.
Simultaneous visuomotor tracking and vocal output engages prefrontal executive control alongside the speech-motor circuit, training the system to perform under divided attention.
Stuttering changes in the time between sessions — not just during them. OpenMic gives your clients an independent practice tool and gives you the session data to see what changed.
Share OpenMic with clients. Review per-syllable session data, trend lines, and challenge-word Pressure maps between appointments.
Session-over-session PAD scores, disfluency event logs, and per-word history. Data export in CSV.
Caseload management, session-over-session progress tracking, per-phoneme analysis, and a six-axis clinical outcome profile — all from the engine's scoring output.
View clinical dashboard ↗

Integrating the OpenMic engine inside your platform or product? B2B licensing and white-label arrangements available. Contact for scope and pricing.
Protocol FP-PILOT-001 · IRB track active. Contact for research partnership inquiries.
Full four-level resolution breakdown — session, word, syllable, phoneme — with the complete OpenMic signal record. For clinical and technical partners.
View capabilities deck ↗

Research by Per Alm and others locates the neurological origin of stuttering in the basal ganglia–SMA timing loop — not at articulation, but in pre-articulatory motor planning. The motor plan fails before the mouth moves.
Most speech therapy tools measure at or after articulation. OpenMic's engine targets the pre-articulatory window acoustically, treating voice onset time and formant stability as proxies for what happened upstream in the planning circuit.
The engine targets instability at the Pre-SMA → SMA gate.
Phonetic complexity doesn't predict stuttering frequency. Word-specific neural history does. A simple word spoken hundreds of times with a stutter carries a conditioned motor planning burden that a complex novel word doesn't.
OpenMic tracks per-word scoring across sessions. The scoring map shows which words are stabilizing — and which aren't.
FluentPlay is in initial discussions with NIRx about developing a wearable fNIRS protocol for pre-SMA hemodynamic monitoring during speech planning tasks.
No active protocol is underway. If this work proceeds, PAD scores will be compared against fNIRS-measured pre-SMA activation, with the goal of validating PAD as a non-invasive proxy for a neural biomarker — which would enable clinical trial endpoints without neuroimaging infrastructure.
Every game shares the same audio engine. The microphone feeds a real-time analysis pipeline — the DFS runs at approximately 60 frames per second, classifying every frame into a disfluency feature stream. A parallel cloud-based speech recognition layer provides phoneme-level accuracy. Both streams converge into the scoring layer. No audio is recorded. Nothing is stored.
View pipeline walkthrough ↗

FluentPlay is HIPAA-compliant by architecture. No personally identifiable or protected health information is created, collected, or stored at any layer. Audio is processed in real time through cloud-based speech recognition with no retention. No accounts or identifiers are required to use OpenMic.
Protocol FP-PILOT-001 · IRB track active. Research partnership inquiries welcome — contact below.
Will Carbone is the founder of FluentPlay Technologies. He's stuttered since childhood. Before FluentPlay, he spent more than a decade inside clinical and commercial biotech — building, monitoring, and stress-testing the systems that take drug molecules from synthesis to verified purity. Extraction, purification, analytical measurement, quality control from bench to commercial scale. The discipline of making invisible things legible through rigorous data.
When he looked at what existed for people who stutter, he found tools that hadn't kept pace with the neuroscience. The research was clear: stuttering is a timing problem rooted in pre-articulatory motor planning. The tools treated it as something else.
He left biotech and built the measurement instrument the field was missing.
Engine licensing, white-label deployments, and enterprise agreements handled directly. No RFPs — just a conversation.