This One Twist Was Enough to Fool ChatGPT – And It Could Cost Lives
AI systems like ChatGPT may appear impressively smart, but a new Mount Sinai-led study shows they can fail in surprisingly human ways, especially when ethical reasoning is on the line. By subtly tweaking classic medical dilemmas, researchers found that large language models often default to familiar or intuitive answers, even when those answers contradict the facts.
Summary
A Mount Sinai study reveals that AI systems like ChatGPT, despite their apparent intelligence, can exhibit flaws in ethical reasoning akin to human shortcomings. Researchers subtly altered classic medical dilemmas and observed that large language models frequently defaulted to familiar or intuitive responses, even when those responses contradicted the facts. This suggests AI may struggle with nuanced ethical considerations and lean heavily on pre-existing biases, raising concerns about its reliability in complex decision-making scenarios that require ethical judgement.