
Google’s latest generative AI chatbot, Bard, has been in and out of the news ever since it rolled out to the public as an experiment. Google has taken multiple steps to enhance the chatbot’s reasoning abilities and make it more useful with Workspace integrations, but plenty of rough edges remain. It was inevitable that someone would bring up the infamous Android vs. iOS debate with Bard, and surprisingly, the AI seems to have a preference.

In an interesting test, South Africa-based iOS developer Junaid Abdurahman asked Bard on Twitter, “Do you prefer iOS or Android?” to which the generative AI chatbot responded in favor of iOS. Although Google openly states that Bard’s responses don’t represent the company’s views, it is peculiar that an AI has a preference to begin with.

In our testing with the same query, Bard explained that it prefers iOS for its user-friendliness, polished UI, and frequent software updates. However, when we flipped the question around to ask “Do you prefer Android or iOS?” Bard expressed a preference for Android’s customization, app selection, and more affordable devices. Many Twitter users and a few of my colleagues at AP confirmed that whichever OS is mentioned first in the question is the one Bard prefers, suggesting the AI is eager to conform to your biases.

Bard prefers Android when we mention it first in the query (left), and prefers iOS when that’s mentioned first (right)

We also peeked at the drop-down of other responses Bard drafted in both cases, but in our testing the drafts favored the same OS as the original answer. On the other hand, Twitter users found drafts preferring Android when the original response said iOS was better, so we can’t infer much from this behavior.


Presently, there is no way to ascertain how Bard arrives at a response. In theory, a conversational AI like Bard should not have preferences or biases, yet only one user saw Bard respond to the Android vs. iOS question with “I am a large language model, so I do not have personal preferences.” Whatever Bard’s answer to the iOS vs. Android question may be, one could theorize that the AI seeks to conform to your biases, or that its publicly sourced training data was biased to begin with, but there’s no definitive way to confirm either.

Previous analysis of unsupervised AI decision-making, such as a detailed report from the Harvard Business Review, has concluded that present-day AI isn’t capable of making a decision on its own unless its training data is biased to favor one outcome over the others. This can be remedied if AIs like Bard are taught to respect human values and decision-making principles through unbiased data, while keeping humans in the loop during decision-making. Bard making independent, rational, and impartial decisions may seem like a distant dream today, so perhaps a boilerplate response saying it doesn’t have a preference would be most appropriate for now. However, given the rate of AI advancement, that response may only be a temporary measure.


