In a groundbreaking study reported by The Guardian, scientists have successfully reconstructed Pink Floyd's iconic song "Another Brick in the Wall" by tapping into people's brainwaves. This marks the first time a recognizable song has been decoded from recordings of electrical brain activity.
The primary objective behind this research is to potentially restore the musicality of natural speech in individuals who face communication challenges due to severe neurological conditions, such as stroke or amyotrophic lateral sclerosis (ALS) – the same disease that Stephen Hawking battled.
While earlier efforts by the same lab managed to decipher speech, and even silently imagined words, from brain recordings, those reconstructions often sounded robotic, according to Prof. Robert Knight of the University of California, Berkeley, who led the study. Music, he points out, is inherently emotional and prosodic, encompassing rhythm, stress, accent, and intonation. This offers a broader spectrum than the phonemes of language alone, potentially adding another layer to implantable speech decoders.
Unlike previous studies that focused on the brain's speech motor cortex, this research analyzed recordings from the brain's auditory regions, which process all facets of sound. The experiment involved 29 patients who listened to a segment of the Pink Floyd song while undergoing epilepsy surgery. Their brain activity was captured using electrodes placed directly on the brain's surface. Artificial intelligence was then used to decode these recordings and reproduce the sounds and words.
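The core decoding step can be thought of as a regression problem: predict the spectrogram of the audio a patient heard from their simultaneous electrode activity. Below is a minimal sketch of that idea in Python, using synthetic stand-in data and scikit-learn's ridge regression; all array sizes and names are illustrative assumptions, not the study's actual model or data.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data (hypothetical sizes): n_times time windows of
# activity from n_electrodes channels, plus the spectrogram of the audio.
n_times, n_electrodes, n_freq_bins = 2000, 64, 32
true_weights = rng.normal(size=(n_electrodes, n_freq_bins))
neural = rng.normal(size=(n_times, n_electrodes))
spectrogram = neural @ true_weights + 0.1 * rng.normal(size=(n_times, n_freq_bins))

X_train, X_test, y_train, y_test = train_test_split(
    neural, spectrogram, test_size=0.2, random_state=0
)

# Regularized linear map from electrode activity to spectrogram bins;
# Ridge handles the multi-output target directly.
model = Ridge(alpha=1.0).fit(X_train, y_train)
reconstructed = model.predict(X_test)

# Score reconstruction quality: correlation per frequency bin on held-out data.
corrs = [np.corrcoef(reconstructed[:, f], y_test[:, f])[0, 1]
         for f in range(n_freq_bins)]
print(f"mean reconstruction correlation: {np.mean(corrs):.2f}")
```

In a real pipeline, the reconstructed spectrogram would then be inverted back into an audible waveform, which is where the "slightly muffled" quality described below comes from.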
The reconstructed song, while slightly muffled, retained the song's recognizable rhythms and melody. Knight believes the quality of reconstructions can be enhanced with a higher density of electrodes. He noted that the best reconstructions came from patients with electrodes spaced 3 mm apart, suggesting that even closer spacing could yield better sound quality. As brain recording techniques advance, it may soon be feasible to capture such recordings non-invasively, possibly using sensitive scalp-attached electrodes.
Dr. Alexander Huth of the University of Texas at Austin, who earlier this year announced a system to translate brain activity into a continuous text stream using non-invasive fMRI scan data, lauded the study. He emphasized the significance of music in our lives and the potential of future brain-machine interfaces to turn imagined music into reality.
This deeper understanding of music and language processing could also illuminate why individuals with Broca’s aphasia, who find it challenging to articulate the right words, can often sing words effortlessly.