Researchers at the University of Technology, Sydney (UTS) have unveiled a revolutionary device: a portable, non-intrusive system with the uncanny ability to uncover a person’s silent thoughts and convert them into readable text.
In a demo shared by the team, a person wears an electroencephalography (EEG) cap that records the electrical activity of the brain. The system then reads those brain waves and translates the wearer's thoughts into words.
Until now, translating the brain's activity into understandable words has required either invasive surgery for a brain implant, like Elon Musk's Neuralink, or expensive MRI scans.
“This research is a pioneering effort to directly convert raw EEG waves into language and marks a major advance in the field,” said Professor CT Lin, Director of the GrapheneX-UTS HAI Centre.
Portable, non-invasive mind-reading AI
EEG signals, which represent electrical activity in the brain, are divided into separate units with their own unique characteristics and patterns, the researchers explained in a press release.
This segmentation is achieved using DeWave, an artificial intelligence (AI) model developed by the researchers that learns to interpret EEG signals by training on large EEG datasets.
Essentially, DeWave acts as an EEG signal translator: It takes the complex patterns embedded in EEG waves and converts them into understandable forms like words and sentences.
This process is made possible by training AI models with large amounts of EEG data, allowing them to recognise specific patterns within EEG signals and associate them with meaningful linguistic expressions.
The end result is a way to represent and communicate the information contained in EEG signals in a more accessible, human-readable format.
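The core idea behind this kind of discrete encoding can be illustrated with a toy sketch. The paper's actual model is far more sophisticated; the code below only shows the general principle of snapping continuous EEG feature windows to the nearest entry in a learned codebook, producing a sequence of discrete tokens that a language model could then consume. The codebook entries and feature windows here are invented purely for illustration.

```python
# Illustrative sketch of discrete encoding (NOT the actual DeWave model):
# each continuous EEG feature window is mapped to the index of its nearest
# codebook vector, turning a raw signal into a discrete token sequence.

def nearest_code(window, codebook):
    """Return the index of the codebook vector closest to the feature window."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: sq_dist(window, codebook[i]))

def encode_eeg(windows, codebook):
    """Map a sequence of EEG feature windows to discrete token indices."""
    return [nearest_code(w, codebook) for w in windows]

# Hypothetical 2-D feature windows and a 3-entry codebook.
codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
windows = [(0.1, -0.1), (0.9, 1.2), (0.1, 0.8)]
print(encode_eeg(windows, codebook))  # -> [0, 1, 2]
```

In a real system, the codebook would be learned during training, and the resulting token sequence would be fed to a large language model to produce words and sentences.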
“This is the first study to incorporate discrete encoding techniques into the brain-to-text translation process, introducing an innovative approach to neural decoding. Its integration with large-scale language models also opens up new areas in neuroscience and AI,” added Professor Lin.
The system is currently only 40% accurate
For the experiment, the researchers recruited 29 participants, a sample size that suggests their results are "likely to be more robust and adaptable than previous decoding techniques that have only been tested on one or two individuals."
However, there are limitations: When dealing with nouns that represent people, places, or things, the model tends to produce synonyms rather than exact translations. For example, if the original word is "the author," the model might generate a synonym such as "the man" instead.
“We believe this is because semantically similar words produce similar brainwave patterns when the brain processes these words. Despite challenges, our model produces meaningful results by aligning keywords and forming similar sentence structures,” explained Yikun Duan, lead author of the study.
The system's translation accuracy currently stands at 40 percent, but the team hopes to raise it to 90 percent.
The study is available as a preprint on arXiv and has not yet been peer reviewed.
About the Author
Sejal Sharma is a Delhi-based journalist currently focused on reporting on technology and culture. She is particularly passionate about covering artificial intelligence and the semiconductor industry, and strives to help people understand the power and pitfalls of technology. Outside work, she enjoys playing badminton and spending time with her dog.