
One of the most significant issues with current language models is their propensity for what have been termed “hallucinations,” instances in which the AI confidently produces false information. Sometimes, hallucinations are merely peculiar, like when ChatGPT insists a particular phone model has features it does not have. But some are less benign, and if a user takes the AI’s advice at face value, it could lead to actual harm.

Imagine Bing answering in error when asked how long chicken can be left on the counter, stating that it will remain fresh for up to a week. The result could be a user exposing themselves to food poisoning or salmonella. Or, even more horrifying, imagine ChatGPT glitching when someone asks how to deal with suicidal ideation, influencing them to take their own life. That simple error might lead to the most tragic of outcomes.

While it’s easy to assume no one would blindly trust an AI, it is unclear whether the general public understands how error-prone current AI models are. Specific demographics, such as seniors or individuals with cognitive disabilities, may be particularly susceptible to accepting these outputs uncritically. In all likelihood, it is only a matter of time before something regrettable happens based on a misunderstanding of an AI’s trustworthiness.

If you or anyone you know is having suicidal thoughts, please call the National Suicide Prevention Lifeline​ by dialing 988 or by calling 1-800-273-TALK (8255).
