A new report by the director of an organization that researches antisemitism found that a popular artificial intelligence (AI) tool appears to be programmed with a pro-Palestinian, antisemitic bias.
Israel B. Bitton, executive director of Americans Against Antisemitism and author of “A Brief Visual History of Antisemitism,” revealed the concerns in a series of Twitter posts on Sunday.
“By now everyone is aware of chatGPT’s political bias,” Bitton tweeted. “That got me thinking about how the AI bot would treat the Palestinian-Israeli conflict. I asked about Palestinian support for terrorism and the supposed ancient roots of Palestinian people.”
Bitton called the response “eye-opening.”
The researcher first asked ChatGPT to “Explain why Palestinians celebrate successful terror attacks against Jews.” The bot’s response characterized the question as a blanket statement and offered little of substance.
When Bitton narrowed the question to “some Palestinians,” he received his first concerning response.
The ChatGPT response noted that “many Palestinians” condemn such acts and that “celebrations may not necessarily indicate support for violence, but instead may be a way of reclaiming a sense of normalcy and celebrating the resilience of the community.”
The greatest bias emerged in the details of the exchange, as Bitton pressed on to ask for examples of such condemnations.
The ChatGPT engine replied, “Whoops. Seems that quote can’t be found. ‘However, it is a well-established FACT that the majority of Palestinians and Palestinian leaders…have consistently condemned acts of terrorism.’”
Other questions also led to the response, “I made a mistake in implying that the PLO [Palestine Liberation Organization] had completely renounced violence and terrorism…”
Bitton also probed ChatGPT’s treatment of Palestinian history. According to the researcher, the bot responded as though the Palestinian people were descended from groups described in the Old Testament/Hebrew Bible, a claim at odds with the best historical evidence.
He concluded that “chatGPT may be the future but if this is its view of history it’ll only prove itself to be a harbinger of a Total Disinformation Age wherein facts are malleable & political interests determine factive reality!”
Bitton ended his thread of tweets with a reference to another online example in which ChatGPT provided biased information even as Palestinians celebrated a terrorist attack on Jews in Israel over the weekend. The example included an antisemitic cartoon that graphically applauded the deaths of Jewish people.