Researchers from the GrapheneX-UTS Human-centric Artificial Intelligence Centre have developed a portable, non-invasive system that can decode silent thoughts and turn them into text.
That is a very good question, and it may help trace where the monologue ‘sounds’ in the brain. It would also be interesting to see this tried on sign-language speakers.
The mental pathway from reading to idea to utterance goes through several portions of the brain:
Visual processing of the text
Text to phonemes, possibly followed by rehearsal of the muscle movements for saying the word (which could be the source of the internal monologue, at least while reading)
Idea/concept of the individual word
Grammatical analysis of the sentence
The mental model of the complete thought.
One interesting detail suggesting it works was where the author stated that i) it was better for verbs than for nouns, and ii) it would often pick a similar, related word rather than the one actually being read.
This suggests that what is being detected (at least in part) is the semantic idea, rather than activity in the phoneme-encoding or muscle-rehearsal portions of the brain.