
BBC Inside Science: Thought-to-speech machine, City Nature Challenge, Science of Storytelling
Apr 25, 2019
Gopala Anumanchipalli, a neuroscientist at UCSF, unveils groundbreaking research on decoding neural signals to create a speech prosthesis for people who cannot speak. Geoff Marsh shares insights from the City Nature Challenge, highlighting iNaturalist as a tool for recording urban biodiversity. Then journalist Will Storr explores the psychology of storytelling, linking narrative structure to human evolution and brain function, and showing how storytelling can communicate complex science effectively while acknowledging its pitfalls. This captivating conversation bridges technology, ecology, and narrative.
AI Snips
Brain Signals Can Drive Fluent Speech Synthesis
- Decoding motor cortex signals can recreate speech by modeling the vocal tract and its movements (a minimal sketch of this two-stage idea follows below).
- This approach targets fluent speech rates rather than slow text-to-speech pipelines.
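A minimal sketch of that two-stage idea, assuming off-the-shelf PyTorch components: stage 1 maps neural features to articulatory movements, stage 2 maps those movements to acoustic features for a vocoder. The module names, layer sizes, and GRU choice are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class NeuralToArticulation(nn.Module):
    """Stage 1: neural recordings (e.g. per-electrode high-gamma features)
    -> articulatory kinematics (lip, jaw, and tongue trajectories)."""
    def __init__(self, n_electrodes=256, n_articulators=33, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(n_electrodes, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulators)

    def forward(self, neural):               # neural: (batch, time, n_electrodes)
        h, _ = self.rnn(neural)
        return self.out(h)                   # (batch, time, n_articulators)

class ArticulationToAcoustics(nn.Module):
    """Stage 2: articulatory trajectories -> acoustic features
    (e.g. spectral coefficients plus pitch) that a vocoder turns into audio."""
    def __init__(self, n_articulators=33, n_acoustic=32, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(n_articulators, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_acoustic)

    def forward(self, articulation):
        h, _ = self.rnn(articulation)
        return self.out(h)

# Chain the stages: decode articulation from brain activity, then acoustics.
stage1, stage2 = NeuralToArticulation(), ArticulationToAcoustics()
neural = torch.randn(1, 500, 256)            # simulated recording: 1 trial, 500 frames
acoustic_features = stage2(stage1(neural))   # would be passed to a vocoder for audio
```

Splitting the problem this way mirrors the approach described in the episode: model the vocal tract as an intermediate representation rather than mapping brain activity straight to audio.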
Proof-Of-Principle Speech Clips From Brain Data
- The team demonstrates synthesized phrases produced from neural recordings of speech attempts.
- The output sounds distorted but clearly reproduces the intended sentences from brain data.
Train With Broad Phonetic Sentence Sets
- Train models on many sentences that cover a broad range of phonetic and articulatory contexts, so brain signals map cleanly to speech.
- Have subjects speak aloud during training so neural activity can be paired with known audio outputs (see the training-loop sketch below).
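A minimal training-loop sketch of that pairing idea, assuming simulated data and a deliberately simplified single-stage decoder; the dataset shapes, loss choice, and optimizer settings are assumptions for illustration, not the study's actual setup.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Pretend paired data recorded while a subject reads a phonetically broad
# sentence set aloud: per-frame neural features plus the acoustic features
# extracted from the simultaneously recorded audio. Shapes are illustrative.
neural = torch.randn(64, 200, 256)      # 64 sentences, 200 frames, 256 channels
acoustics = torch.randn(64, 200, 32)    # matching acoustic targets from the audio
loader = DataLoader(TensorDataset(neural, acoustics), batch_size=8, shuffle=True)

class Decoder(nn.Module):
    """Deliberately simple single-stage stand-in for the full decoder."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(256, 128, batch_first=True)
        self.out = nn.Linear(128, 32)
    def forward(self, x):
        h, _ = self.rnn(x)
        return self.out(h)

model = Decoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                  # regress acoustic features frame by frame

for epoch in range(3):
    for brain, audio_feats in loader:
        pred = model(brain)
        loss = loss_fn(pred, audio_feats)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The key point the snip makes is the supervision signal: because the subject speaks aloud during training, every window of neural activity comes with known audio to regress against.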




