
This One Twist Was Enough to Fool ChatGPT – And It Could Cost Lives – New Study



AI systems like ChatGPT may appear impressively smart, but a new Mount Sinai-led study shows they can fail in surprisingly human ways—especially when ethical reasoning is on the line. By subtly tweaking classic medical dilemmas, researchers revealed that large language models often default to familiar or intuitive answers, even when they contradict the facts. These […]



Summary

A Mount Sinai study reveals that AI systems like ChatGPT, despite their apparent intelligence, exhibit flaws in ethical reasoning akin to human shortcomings. When researchers subtly altered classic medical dilemmas, the large language models frequently defaulted to familiar or intuitive responses, even when those responses contradicted the facts presented. This suggests these models may struggle with nuanced ethical considerations and rely heavily on pre-existing biases, raising concerns about their reliability in complex decision-making scenarios that require ethical judgement.


This post is part of “Science and Technology News”. Follow for more!

Credits: Source


Dr AF Saeed


