
Turning Thoughts into Words: Brain Activity Translator Breaks New Ground

A conceptual image illustrating the translation of human thoughts into text, featuring a brain divided into two halves; one side is a golden, organic brain structure and the other is transformed into a circuit board glowing with interconnected lines of data.
Photo Source: DALL·E 2

The Brain Decoder is a pioneering system developed by researchers at the University of Texas at Austin. It uses artificial intelligence (AI) algorithms to translate patterns of brain activity into continuous text. While it is not exactly mind-reading, the technology holds significant promise for individuals who have lost the ability to speak due to conditions such as paralysis or neurological disorders.

The system works by analyzing patterns of brain activity captured through advanced imaging techniques such as functional magnetic resonance imaging (fMRI). These patterns are then processed and interpreted by AI algorithms, which generate corresponding text output.

The study detailing the Brain Decoder’s development and functionality was published in the journal Nature Neuroscience (Nat Neurosci 2023, 26, 858–866). It was led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin.

Initially, the researchers recorded fMRI signals from three individuals while they listened to 16 hours of spoken narratives. Based on this data, the team developed a decoder for each participant, associating fMRI signals with the meanings of specific words and phrases.

Subsequently, the same participants listened to new narratives, and the researchers assessed how well the decoders reproduced the text of the stories. The decoders accurately reproduced certain words and phrases, but more often they generated text that did not precisely match the original while still conveying the essence of the narrative. For example, when a participant heard the phrase “I don’t have my driver’s license yet,” the decoder interpreted their brain activity as “she has not even started to learn to drive yet.” On several different measures, the decoders’ predictions were far more accurate than would be expected by chance.
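The actual system pairs fMRI recordings with sophisticated language models, but the core training idea described above can be illustrated with a much-simplified sketch. Everything below is synthetic and hypothetical (the dimensions, the linear model, and the data are invented for illustration): a linear decoder is fit with ridge regression to map simulated brain responses back to semantic word embeddings, and a new response is decoded by picking the semantically closest known word.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (entirely synthetic, for illustration only): each "word"
# has a semantic embedding, and hearing it evokes a brain response
# that is a noisy linear function of that embedding.
n_words, emb_dim, n_voxels = 50, 8, 200
embeddings = rng.normal(size=(n_words, emb_dim))    # word meanings
true_map = rng.normal(size=(emb_dim, n_voxels))     # brain's "encoding"
responses = embeddings @ true_map + 0.1 * rng.normal(size=(n_words, n_voxels))

# "Training": learn a linear decoder from voxel responses back to
# embeddings using ridge regression (closed-form solution).
lam = 1.0
W = np.linalg.solve(responses.T @ responses + lam * np.eye(n_voxels),
                    responses.T @ embeddings)       # shape (n_voxels, emb_dim)

def decode(response):
    """Predict an embedding for a brain response, then return the
    index of the known word with the highest cosine similarity."""
    pred = response @ W
    sims = embeddings @ pred / (np.linalg.norm(embeddings, axis=1)
                                * np.linalg.norm(pred) + 1e-9)
    return int(np.argmax(sims))

# Sanity check: decoding the training responses should mostly
# recover the words that evoked them.
correct = sum(decode(responses[i]) == i for i in range(n_words))
print(f"{correct}/{n_words} words recovered")
```

This captures why the real decoder produces paraphrases rather than transcripts: it recovers meanings (points in a semantic space) rather than exact words, so the nearest match may differ in wording while preserving the gist.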

At present, the system depends on fMRI, which is not portable and is therefore limited to laboratory settings. However, the researchers envision adapting it to more portable methods of measuring brain activity, which would enable its use in real-world scenarios.

The development of this brain decoder marks a significant leap forward for brain–computer interfaces. Its non-invasive nature, coupled with impressive decoding capability, holds the promise of revolutionizing communication for individuals facing speech challenges. As the technology advances, the future looks bright for improving the quality of life of those who have lost the ability to speak.

Researchers Alex Huth (left), Jerry Tang (right) and Shailee Jain (center) prepare to collect brain activity data in the Biomedical Imaging Center at The University of Texas at Austin. The researchers trained their semantic decoder on dozens of hours of brain activity data from members of the lab, collected in an fMRI scanner.
Photo Source: Nolan Zunk/University of Texas at Austin

– Rajan Poudel
Ankuram Academy (2023)