CarderPlanet
A biased algorithm is not the most reliable helper.
A study conducted by psychologists from the University of Deusto in Spain showed how artificial intelligence affects our way of thinking.
Advances in artificial intelligence systems, such as the ability to hold a conversation with a person on equal footing, have made the technology seem highly convincing and trustworthy in our eyes. Many companies are actively integrating neural networks and machine learning into their workflows to make life easier for employees.
Despite all the advantages, the results that neural networks produce can be quite biased. It is important to understand that AI models are trained on human-made materials: if the source texts contain errors, the algorithm will reproduce them as well.
In the study, volunteers had to diagnose a patient with a fictional disease. Participants were divided into two groups: some used suggestions from an AI assistant, while others made their decisions on their own.
The fake algorithm (in reality, the subjects were interacting with a fully scripted program) deliberately made the same systematic mistakes. Later, when the AI assistant was switched off, the participants, following the same flawed logic, began to make similar mistakes on their own.
This effect was not observed in the control group.
Evidently, if models trained on open data and presented as reliable systematically feed us certain information, it sticks in our memory.
Not only do we unconsciously spread misinformation online, but we also risk becoming its victims by adopting the biases of an "authoritative" system. The result is a kind of vicious circle, which can only be broken by careful fact-checking and new safeguards on the part of developers.
The results of the study are published in the journal Scientific Reports.
