
Scientists have trained AI to turn brain signals into speech




Researchers worked with epilepsy patients who were already undergoing brain surgery.

Paseika / Getty Images Science Photo Library

Neuroengineers have created a system that uses neural networks to read brain activity and translate it into speech.

The research, published Tuesday in the journal Scientific Reports, describes how the team at Columbia University's Zuckerman Mind Brain Behavior Institute developed a deep-learning algorithm, paired with the same type of technology that powers Apple's Siri and the Amazon Echo, to turn thought into accurate and intelligible reconstructed speech. The work was reported earlier this month, but the journal article goes into much greater depth.

Such a brain-computer interface could one day give patients who have lost the ability to speak the chance to use their thoughts to communicate through a synthesized robotic voice.

"We have shown, with the right technology, that those people's thoughts could be closed and understood by any listener," said Nima Mesgarani, the main researcher of the project, in the statement.

When we speak, our brains light up, sending electrical signals zipping around the old thinking box. If scientists can record those signals and understand how they relate to forming or hearing words, then we get one step closer to translating them. With enough understanding, and enough processing power, that could lead to a device that translates thinking directly into speech.

And that is essentially what the team has done, using deep-learning algorithms and neural networks to reconstruct speech features from recorded brain activity.

To do this, the research team asked five epilepsy patients who were already undergoing brain surgery to help. They attached electrodes to exposed surfaces of the patients' brains, and the patients then listened to 40 seconds of short spoken stories, repeated six times. Listening to the stories helped train the speech-reconstruction system.

Next, the patients listened to speakers counting from zero to nine while their brain signals were recorded and fed back into the system. The vocoder algorithm, called WORLD, then produced its own sounds, which were cleaned up by a neural network, eventually resulting in robotic-sounding speech that mimicked the counting. You can hear what it sounds like here. It's not perfect, but it's certainly understandable.
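To make that pipeline concrete, below is a minimal, hypothetical sketch in Python: a learned model maps recorded neural activity to vocoder parameters, and the WORLD vocoder (here via the pyworld bindings) synthesizes audio from them. The array shapes, the random placeholder data, and the choice of an MLP regressor are illustrative assumptions rather than the study's actual method.

```python
# Hypothetical illustration of the decode-then-synthesize idea described above.
# All data here is random placeholder data; shapes and model choices are assumed.
import numpy as np
import pyworld  # WORLD vocoder bindings (pip install pyworld)
from sklearn.neural_network import MLPRegressor

FS = 16000           # audio sampling rate (assumed)
N_ELECTRODES = 128   # number of recording electrodes (assumed)
N_FRAMES = 2000      # number of 5 ms analysis frames
FFT_BINS = 513       # spectral-envelope bins for a 1024-point FFT

# Placeholder training data: per-frame neural features and the matching WORLD
# parameters (f0, spectral envelope, aperiodicity) of the speech being heard.
rng = np.random.default_rng(0)
neural = rng.standard_normal((N_FRAMES, N_ELECTRODES))
f0_true = 100.0 + 100.0 * rng.random(N_FRAMES)
sp_true = np.abs(rng.standard_normal((N_FRAMES, FFT_BINS)))
ap_true = rng.random((N_FRAMES, FFT_BINS))

# Learn a mapping from neural activity to the stacked vocoder parameters.
targets = np.hstack([f0_true[:, None], sp_true, ap_true])
model = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=50)
model.fit(neural, targets)

# "Decoding": predict vocoder parameters from neural activity, then let the
# WORLD vocoder synthesize a waveform from them.
pred = model.predict(neural)
f0 = np.ascontiguousarray(np.clip(pred[:, 0], 0.0, None), dtype=np.float64)
sp = np.ascontiguousarray(np.clip(pred[:, 1:1 + FFT_BINS], 1e-8, None), dtype=np.float64)
ap = np.ascontiguousarray(np.clip(pred[:, 1 + FFT_BINS:], 0.0, 1.0), dtype=np.float64)
audio = pyworld.synthesize(f0, sp, ap, FS)  # robotic-sounding reconstructed speech
```

In the real system, the hard part is the mapping from brain recordings to a speech representation; the vocoder itself is a standard speech-synthesis component.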

"We found that people can understand about 75% of the time, which is significantly higher than any other endeavors," said Mesgarani.

The researchers found that reconstruction accuracy depended on the number of electrodes placed on the patients' brains and how long the system was trained. As expected, increasing the number of electrodes and lengthening the training allows the system to gather more data and results in a better reconstruction.
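As a rough illustration of that scaling relationship, the toy experiment below fits a simple decoder on synthetic data and scores reconstruction quality while varying the number of electrodes and the amount of training data. The dataset, the ridge-regression decoder, and the correlation score are stand-ins chosen for the sketch, not the study's methodology.

```python
# Toy experiment: how reconstruction quality changes with electrode count and
# training duration. Data is synthetic; model and metric are assumed stand-ins.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_frames, n_electrodes, n_freq = 3000, 128, 64

# Synthetic "speech spectrogram" driven by a linear mix of electrode signals
# plus noise, so there is a relationship for the decoder to recover.
neural = rng.standard_normal((n_frames, n_electrodes))
mixing = rng.standard_normal((n_electrodes, n_freq))
spectrogram = neural @ mixing + 0.5 * rng.standard_normal((n_frames, n_freq))

def reconstruction_score(n_elec, n_train):
    """Fit on a subset of electrodes/frames; return correlation on held-out frames."""
    elec_idx = rng.choice(n_electrodes, size=n_elec, replace=False)
    model = Ridge(alpha=1.0).fit(neural[:n_train, elec_idx], spectrogram[:n_train])
    pred = model.predict(neural[n_train:, elec_idx])
    return np.corrcoef(pred.ravel(), spectrogram[n_train:].ravel())[0, 1]

for n_elec in (16, 64, 128):
    for n_train in (500, 2000):
        r = reconstruction_score(n_elec, n_train)
        print(f"electrodes={n_elec:3d}  training frames={n_train:4d}  r={r:.2f}")
```

With more electrodes and more training frames the decoder sees more of the underlying signal, which is the qualitative trend the researchers describe.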

Looking ahead, the team wants to test what kind of signals are produced when a person merely imagines speaking, rather than listening to speech. They also hope to test more complex words and sentences. Refining the algorithms with more data could eventually lead to a brain implant that bypasses speech altogether, translating a person's thoughts directly into words.

That would be an important step forward for many people.

"It would give anyone who lost their ability to speak, at least through bad or disease, the renewable opportunity linked to the world around them," said Mesgarani.
