When AI is fooled: people can anticipate the machine's mistakes




A new study, carried out by researchers at Johns Hopkins University in the United States, has shown that people tend to make the same classifications as artificial intelligence (AI) systems when shown the images designed to fool them.

The results of this research, published in Nature Communications, suggest that the way modern computers misinterpret manipulated images is less different from human perception than previously thought.

Advances in AI continue to narrow the gap between the visual abilities of people and machines. Understanding where AI classification goes wrong can reveal how these systems fail to recognize objects.

Typically, AI research has been based on mimicking human brain activity and the way people analyze information. "Our project does the opposite: we ask whether people can think like computers," explains Chaz Firestone, from the Department of Psychological and Brain Sciences at Johns Hopkins University, in a statement.

Problems identifying images

AI systems have long outperformed people at solving mathematical problems or memorizing large amounts of information. For decades, however, people have held the advantage when it comes to identifying everyday objects. In some cases, changing just a few pixels is enough for a computer to confuse, for example, an apple with a car. These machines make errors that would be unthinkable for a human.
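The pixel-perturbation idea described above can be illustrated with a toy example. The following is a minimal sketch in plain Python, assuming a made-up two-class linear "classifier" and a four-pixel "image"; the class labels, weights, and step size are illustrative assumptions, not the models or images used in the study. Nudging each pixel slightly in the direction that favors the other class flips the classifier's decision, even though the change is small.

```python
# Toy grayscale "image": four pixel intensities in [0, 1].
x = [0.6, 0.5, 0.6, 0.5]

# Toy linear classifier: row 0 scores class "apple", row 1 scores "car".
W = [[1.0, 0.0, 1.0, 0.0],
     [0.0, 1.0, 0.0, 1.0]]

def score(weights, img):
    # Dot product of one class's weights with the image.
    return sum(w * p for w, p in zip(weights, img))

def predict(img):
    # Index of the highest-scoring class (0 = "apple", 1 = "car").
    scores = [score(w, img) for w in W]
    return scores.index(max(scores))

original = predict(x)  # classifies the clean image as class 0 ("apple")

# Direction that raises the other class's score relative to the original
# (the hand-computed gradient of the score difference for a linear model).
target = 1 - original
grad = [wt - wo for wt, wo in zip(W[target], W[original])]

def sign(v):
    return (v > 0) - (v < 0)

# Step each pixel by only 0.15 in the sign of that direction, clamped
# to valid intensities: a visually small change that flips the decision.
epsilon = 0.15
x_adv = [min(1.0, max(0.0, p + epsilon * sign(g)))
         for p, g in zip(x, grad)]

print(predict(x), predict(x_adv))  # prints "0 1": "apple" becomes "car"
```

This is the same mechanism, in miniature, behind the "fast gradient sign" style of attack on real image classifiers, where the gradient is computed by the network itself rather than by hand.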

In recent years, however, artificial neural networks have emerged that rival the human ability to identify objects, leading to technological advances such as facial recognition software and AI systems that help doctors spot abnormalities in X-rays.

But even with these technological advances, weaknesses remain. There are images created specifically to test AI, known as "adversarial images", that neural networks cannot classify correctly. These images are a serious problem, because attackers could exploit them and create security risks.
