
🔍

 AI & Signal Processing Logic

FluentPlay converts neural activity into real-time speech support through EEG decoding and intelligent signal processing. Our proprietary pipeline detects intention-to-speak states and fluency disruptions, then delivers precision-timed cues that guide the user back to fluent control.


🧬

 Feature Extraction

The cleaned signal is segmented and mined for fluency-relevant neural features:

  • Power Spectral Density (PSD): mental effort & speech planning

  • Event-Related Potentials (ERP): intention-to-speak detection

  • Phase-Amplitude Coupling (PAC): timing of speech onset
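The first of these features, band-limited PSD, can be sketched in a few lines. This is a minimal illustration, not FluentPlay's production code: the 256 Hz sampling rate, the band boundaries, and the `band_powers` helper are all assumptions for the example, and ERP and PAC extraction are omitted.

```python
import numpy as np
from scipy.signal import welch

# Hypothetical band definitions within the pipeline's 0.5-40 Hz bandpass.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch, fs=256):
    """Mean PSD per band for one EEG epoch of shape (n_channels, n_samples)."""
    freqs, psd = welch(epoch, fs=fs, nperseg=min(fs, epoch.shape[-1]))
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        out[name] = psd[..., mask].mean(axis=-1)  # one value per channel
    return out

# Toy usage: 4 channels carrying a 10 Hz alpha rhythm plus a little noise.
fs = 256
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(0)
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal((4, t.size))
features = band_powers(epoch, fs)
```

For this toy input the alpha-band power dominates theta and beta, which is the kind of per-band contrast a downstream classifier consumes.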

🔄

Real-Time Preprocessing Pipeline

Each EEG stream is filtered, cleaned, and prepared within milliseconds to enable accurate interpretation:

  1. Noise Filtering

    • Bandpass: 0.5–40 Hz

    • Notch: 50/60 Hz powerline filter

  2. Artifact Rejection

    • Independent Component Analysis (ICA) combined with supervised machine learning to remove eye blinks, motion, and muscle artifacts

  3. Normalization

    • Z-score standardization across channels and users
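The three steps above can be sketched as a single function. This is a hedged illustration, assuming a 256 Hz Emotiv stream and standard SciPy filters; the ICA-based artifact rejection step is omitted, and the `preprocess` name and filter orders are choices made for the example, not FluentPlay's actual implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def preprocess(eeg, fs=256, powerline=50.0):
    """Bandpass 0.5-40 Hz, notch the powerline, z-score per channel.

    eeg: array of shape (n_channels, n_samples).
    """
    # 1. Noise filtering: 4th-order Butterworth bandpass, 0.5-40 Hz.
    b, a = butter(4, [0.5, 40.0], btype="bandpass", fs=fs)
    x = filtfilt(b, a, eeg, axis=-1)
    # Notch out the 50/60 Hz powerline component.
    bn, an = iirnotch(powerline, Q=30.0, fs=fs)
    x = filtfilt(bn, an, x, axis=-1)
    # 2. Artifact rejection (ICA + classifier) would run here; omitted.
    # 3. Normalization: z-score each channel.
    return (x - x.mean(axis=-1, keepdims=True)) / x.std(axis=-1, keepdims=True)

# Toy usage: two channels mixing a 10 Hz rhythm with 50 Hz powerline hum.
fs = 256
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 50 * t) \
      + 0.1 * rng.standard_normal((2, t.size))
clean = preprocess(eeg, fs)
```

Zero-phase `filtfilt` filtering is used here so the cleaning step adds no phase delay, which matters when downstream cues must align within ±150 ms.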

 
🎤

 Speech Synchronization Algorithm

When FluentPlay classifies the current state as fluent or blocked, it initiates a cue to assist speech timing:

  • Cue Types: Auditory, Haptic, or Visual

  • Timing: ±150 ms alignment with cortical readiness

  • Learning: Adaptive loop reduces cue reliance over time
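The timing and adaptation rules above can be illustrated with a toy scheduler. Everything here is hypothetical: the `schedule_cue` and `update_reliance` functions, the reliance decay rates, and the millisecond bookkeeping are assumptions made to show the ±150 ms window and the adaptive loop in code, not the shipped algorithm.

```python
# Hypothetical cue scheduler illustrating the ±150 ms alignment rule.
CUE_TYPES = ("auditory", "haptic", "visual")
TOLERANCE_MS = 150  # cue must land within ±150 ms of cortical readiness

def schedule_cue(readiness_ms, now_ms, modality="haptic"):
    """Return (fire, delay_ms): whether to fire a cue and how long to wait."""
    assert modality in CUE_TYPES
    delay = readiness_ms - now_ms
    if abs(delay) <= TOLERANCE_MS:
        return True, max(delay, 0)   # fire now, or after the remaining delay
    return False, 0                  # outside the alignment window: skip

def update_reliance(reliance, assisted):
    """Toy adaptive loop: unassisted fluent onsets lower cue reliance."""
    return min(1.0, reliance + 0.1) if assisted else max(0.0, reliance - 0.05)
```

For example, a readiness estimate 100 ms in the future yields a fired cue with a 100 ms delay, while one 300 ms out is skipped; each unassisted onset then nudges the reliance score downward.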

🤖

 AI Classification Engine

Our AI model classifies brain states and predicts fluency outcomes in real time.

  • Architecture: Hybrid CNN + LSTM

  • Output: Probability of fluent vs. blocked state

  • Latency: <300 ms from signal to decision
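The decision stage at the end of this engine can be sketched independently of the CNN + LSTM network itself. In this illustration the `softmax` and `decide` helpers, the 0.5 threshold, and the "stale" fallback are all assumptions; only the fluent-vs-blocked output and the <300 ms budget come from the description above.

```python
import numpy as np

THRESHOLD = 0.5          # decision boundary on P(fluent)
LATENCY_BUDGET_MS = 300  # end-to-end signal-to-decision budget

def softmax(logits):
    """Map the network's two output logits to [P(blocked), P(fluent)]."""
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

def decide(p_fluent, elapsed_ms):
    """Turn P(fluent) into a state label, enforcing the latency budget.

    Returns "fluent", "blocked", or "stale" when the decision arrived too
    late to drive a useful cue.
    """
    if elapsed_ms > LATENCY_BUDGET_MS:
        return "stale"
    return "fluent" if p_fluent >= THRESHOLD else "blocked"

# Toy usage: logits favoring the fluent class, decided well within budget.
p_fluent = float(softmax(np.array([0.2, 1.3]))[1])
label = decide(p_fluent, elapsed_ms=120)
```

Treating late decisions as "stale" rather than acting on them is one way to keep the cue loop from firing outside the ±150 ms alignment window.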

 🧠

 Neural Signal Acquisition

FluentPlay uses the Emotiv EEG headset to capture high-fidelity brainwave activity from speech-related regions.

  • Sampling Rate: 128–256 Hz

  • Signal Types: Delta, Theta, Alpha, Beta, Gamma

  • Key Electrode Sites:

    • Frontal: speech initiation

    • Temporal: auditory feedback

    • Central: motor coordination
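The five signal types listed above correspond to the standard EEG frequency bands. A small lookup, with the conventional boundaries (gamma capped here by the pipeline's 0.5–40 Hz bandpass), makes the mapping concrete; the `band_of` helper is illustrative, not part of the product.

```python
# Standard EEG band boundaries in Hz; gamma is capped by the 0.5-40 Hz bandpass.
EEG_BANDS = {
    "delta": (0.5, 4),
    "theta": (4, 8),
    "alpha": (8, 13),
    "beta":  (13, 30),
    "gamma": (30, 40),
}

def band_of(freq_hz):
    """Name the EEG band a frequency falls into, or None if out of range."""
    for name, (lo, hi) in EEG_BANDS.items():
        if lo <= freq_hz < hi:
            return name
    return None
```

At a 256 Hz sampling rate the Nyquist limit is 128 Hz, comfortably above the 40 Hz gamma ceiling that the preprocessing bandpass enforces.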
