Many people around the world have lost the ability to speak, whether through an accident or an illness, and are condemned to having the words that form in their heads stay trapped there.
Most have no way to communicate beyond directing a voice synthesizer with small movements, as the physicist Stephen Hawking did. But technology may change that sooner than we think.
From electrical signals to speech
Until now, something extraordinary had already been achieved: people with profound paralysis could answer simple "yes" or "no" questions.
Now, in the last few months, three teams of researchers have taken important steps toward converting the raw data captured by electrode arrays implanted in the brain **into coherent words and sentences** that, in some cases, were intelligible to human listeners.
But let's not celebrate prematurely: none of the three teams managed to get computers to reproduce speech that the study subjects were merely thinking.
Instead, the study subjects performed various tasks (listening to audio recordings, reading aloud, etc.) while the researchers monitored activity in the auditory cortex of the brain, which is active both when we listen and when we speak.
The team led by Nima Mesgarani at Columbia University worked with a dataset collected from five people with epilepsy: a neural network analyzed the recordings made while the patients listened to people naming the digits from zero to nine.
The AI was then able to reconstruct the spoken digits with 75% accuracy. In the tests carried out by the UCSF team, a group of 166 people identified the reconstructed sentences correctly in 80% of cases.
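To make the general idea concrete, here is a minimal toy sketch of such a decoding pipeline. Everything in it is an assumption for illustration: the data is synthetic, the "neural" features are random vectors, and the decoder is a simple nearest-centroid classifier, whereas the actual studies used deep networks on real electrocorticography recordings.

```python
import numpy as np

# Toy sketch (NOT the teams' actual models): decode which digit (0-9) a
# subject "heard" from synthetic neural-activity feature vectors.
rng = np.random.default_rng(0)

N_DIGITS, N_CHANNELS, TRIALS = 10, 32, 40

# Hypothetical assumption: each digit evokes a characteristic activity
# pattern across channels, observed through noise.
patterns = rng.normal(size=(N_DIGITS, N_CHANNELS))

def simulate_trials(noise=2.0):
    """Generate noisy observations of each digit's pattern."""
    X, y = [], []
    for digit in range(N_DIGITS):
        for _ in range(TRIALS):
            X.append(patterns[digit] + noise * rng.normal(size=N_CHANNELS))
            y.append(digit)
    return np.array(X), np.array(y)

X_train, y_train = simulate_trials()
X_test, y_test = simulate_trials()

# Nearest-centroid decoder: average the training trials per digit, then
# assign each test trial to the closest average pattern.
centroids = np.array([X_train[y_train == d].mean(axis=0)
                      for d in range(N_DIGITS)])
dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)

accuracy = (pred == y_test).mean()
print(f"decoding accuracy: {accuracy:.0%}")  # well above the 10% chance level
```

Even this crude classifier beats chance easily on clean synthetic data; the hard part in the real studies is that genuine cortical recordings are far noisier and higher-dimensional, which is why deep networks were needed.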
"Listening to the brain" is not easy
However, improving these results runs into three problems:

- The way these signals map onto speech sounds differs from person to person, so the AI must be trained separately for each individual.
- The AIs work best when fed precise data, which is why the electrodes must be implanted by opening the skull. Researchers can only do this with patients who are already undergoing brain surgery for other conditions (brain tumors, epilepsy, etc.).
- How will these AIs perform when working directly with people who cannot speak? Brain signals that are not tied to an external sound are much harder for a computer to match to concrete speech (for example, it is difficult to identify where "inner speech" starts and ends).
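The segmentation problem in that last point can be illustrated with a toy example. This sketch is an assumption for illustration only: it detects the start and end of a "burst" in a synthetic signal by thresholding a smoothed energy envelope, which works because audible speech gives us an external reference to calibrate against; purely imagined speech provides no such anchor.

```python
import numpy as np

# Toy illustration (not from the studies): find where an "active" segment
# starts and ends in a synthetic signal via an energy threshold.
rng = np.random.default_rng(1)
fs = 100                        # samples per second (arbitrary choice)
t = np.arange(5 * fs) / fs      # 5 seconds of signal

signal = 0.1 * rng.normal(size=t.size)                 # background activity
signal[2 * fs:3 * fs] += np.sin(2 * np.pi * 7.0 * t[2 * fs:3 * fs])  # 1 s burst

# Smoothed energy envelope (0.2 s moving average of squared signal).
energy = np.convolve(signal ** 2, np.ones(20) / 20, mode="same")
active = energy > 0.2           # threshold chosen by hand

onset = np.argmax(active) / fs                          # first active sample
offset = (len(active) - np.argmax(active[::-1])) / fs   # last active sample
print(f"detected segment: {onset:.2f}s to {offset:.2f}s")
```

With an audible recording, the true onset is known and the threshold can be tuned; with inner speech there is no ground truth to tune against, which is part of why decoding imagined speech is so much harder.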
Decoding imagined speech will require a "big jump," in the words of Gerwin Schalk, a researcher at the National Center for Adaptive Neurotechnologies at the New York State Department of Health.
However, if they succeed, the hope is not only to reconstruct the sentences people are thinking, but also to capture and convey other aspects of speech, such as tone or volume.
Via | Science