Researchers from the GrapheneX-UTS Human-centric Artificial Intelligence Centre have developed a portable, non-invasive system that can decode silent thoughts and turn them into text.
The technology could aid communication for people who are unable to speak due to illness or injury, including stroke or paralysis. It could also enable seamless communication between humans and machines, such as the operation of a bionic arm or robot.
This is great, but for the rest of us it also means a way to have a conversation with someone else without needing to look at a screen or speak out loud.
I wonder how this research compares to work on subvocalization. When ChatGPT was announced last year, one of my first thoughts was that it could drive an enormous leap in subvocalization technology, where sensors on your neck detect your inner voice and output it as text. Subvocalization seems much more precise than reading brain waves for this kind of use case.