The High-Stakes Search for a Brain-Computer Interface That Speaks Your Mind


Here is the research setup: A woman speaks Dutch into a microphone, while 11 small needles made of platinum and iridium record her brain waves.

The 20-year-old volunteer has epilepsy, and her doctors placed those 2-millimeter-long pieces of metal, each studded with 18 electrodes, into the front and left sides of her brain in hopes of locating the origin point of her seizures. But that bit of neural micro-acupuncture was also a lucky break for a separate team of researchers, because the electrodes made contact with the parts of her brain responsible for forming and producing spoken words.

Then comes the cool part. After the woman speaks aloud (what’s called “overt speech”), and after a computer algorithmically matches the sounds to her brain activity, the researchers ask her to do it again. This time she barely whispers, miming the words with her mouth, tongue, and jaw. That’s “intended speech.” And then she does it all one more time, but without moving anything at all. The researchers have simply asked her to imagine saying the words.
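The article doesn’t spell out the study’s actual model, but the matching step can be sketched in miniature. Here is a minimal illustration, assuming ridge regression and made-up feature shapes, of pairing windows of neural activity with the audio features recorded at the same moment:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy stand-ins: the real data and model are not public here, so the
# shapes and the choice of ridge regression are assumptions.
n_windows = 6000
X = np.random.randn(n_windows, 11 * 18)  # neural features: 11 shafts x 18 contacts
y = np.random.randn(n_windows, 23)       # audio features per window, e.g. spectrogram bins

# Fit on overt speech: learn which pattern of brain activity goes with
# which sound, while ground-truth audio is still available.
decoder = Ridge(alpha=1.0).fit(X, y)

# The learned mapping can later be applied to neural data recorded during
# whispered or imagined speech, when there is no audio to compare against.
predicted_audio = decoder.predict(X[:10])
print(predicted_audio.shape)  # (10, 23)
```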

It’s a version of how people talk, but the opposite. In real life, we create silent ideas in one part of our brain, the other part makes these words, and the other controls the movement of the mouth, tongue, lips, and larynx, producing audible sounds. voice at the right frequency to speak. . Here, the computers allow the woman’s mind to jump in line. They register when he thinks — speaks — the technical term is “imagined speech” —and is able to play, in real time, an audible signal generated from interpolated signals from his brain. Sounds are not as intelligible as words. This work, published at the end of September, is still relatively preliminary. But the simple fact that it occurs at the millisecond-speed of thought and action shows a strange progression toward a more consistent use of the brain’s computer interfaces: giving a voice to people who can’t speak.
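Conceptually, the real-time part is a tight loop: read a short window of brain activity, decode it, play the result, repeat. A minimal sketch, with a toy linear decoder and hypothetical stand-ins for the actual recording and audio hardware:

```python
import numpy as np

N_CHANNELS = 11 * 18   # 11 electrode shafts x 18 contacts each
N_AUDIO = 160          # audio samples per 10-ms frame at 16 kHz (assumed rate)

# Toy linear decoder standing in for whatever model was fit during
# overt speech; the study's actual model isn't specified in this piece.
W = np.random.randn(N_CHANNELS, N_AUDIO) * 0.01

def read_neural_frame():
    """Stand-in for the recording system's streaming API (hypothetical)."""
    return np.random.randn(N_CHANNELS)

def play(samples):
    """Stand-in for the sound card (hypothetical)."""
    pass

# The core of real-time synthesis: grab a window of brain activity,
# decode it to audio, play it immediately, and move to the next window.
for _ in range(100):
    frame = read_neural_frame()
    audio = frame @ W   # decode neural features into one audio frame
    play(audio)
```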

That inability, whether from a neurological disorder or a brain injury, is called “anarthria.” It can be exhausting and scary, but people do have ways to cope with it. Instead of speaking directly, people with anarthria can use devices that translate the movement of other body parts into letters or words; even a blink will work. Recently, a brain-computer interface implanted into the cortex of a person with locked-in syndrome allowed him to translate imagined handwriting into an output of 90 characters per minute. That’s good, but not great: at roughly five characters per word, it works out to under 20 words per minute, while the average spoken conversation in English runs at a relatively blistering 150 words per minute.

The problem is that, compared with moving an arm (or a cursor), the formulation and production of speech is far more complex. It depends on feedback: a 50-millisecond loop between when we say something and when we hear ourselves saying it. That’s what allows people to do real-time quality control on their own speech. For that matter, it’s what allows people to learn to speak in the first place: hearing speech, producing sounds, hearing ourselves produce those sounds (via the ear and the auditory cortex, a whole other part of the brain), and comparing what we are actually doing with what we were trying to do.
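That 50-millisecond loop doubles as an engineering constraint: for a synthetic voice to feel like the speaker’s own, the decode-and-playback path presumably has to fit inside it. A toy check of that budget, with a placeholder in place of any real pipeline:

```python
import time

FEEDBACK_BUDGET_S = 0.050  # the ~50-ms speak-then-hear loop described above

def decode_and_play():
    """Stand-in for the decode + audio-output path (hypothetical)."""
    time.sleep(0.005)  # pretend the whole pipeline takes 5 ms

# Measure whether the toy pipeline delivers sound inside the natural
# auditory feedback window.
start = time.perf_counter()
decode_and_play()
latency = time.perf_counter() - start
verdict = "within" if latency < FEEDBACK_BUDGET_S else "over"
print(f"pipeline latency: {latency * 1000:.1f} ms ({verdict} the 50-ms budget)")
```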


