Regulators across Europe are scrutinising whether ChatGPT complies with privacy rules after Italy became the first Western country to block the popular chatbot powered by artificial intelligence (AI).
Italy’s data protection watchdog said last week it is opening an investigation into OpenAI, the company behind the massively popular chatbot, citing data privacy concerns after ChatGPT suffered a data breach involving user conversations and payment information.
ChatGPT can answer almost any question, pass exams and even code an entire website.
Hundreds of millions of people have used ChatGPT since its launch in November 2022, making it the fastest-growing consumer application in history, according to a UBS study.
But concerns are growing over the potential risks of AI, including its capacity to wipe out jobs and spread misinformation.
Last month, major figures in technology, including Twitter boss Elon Musk, called for a halt to the development of these types of AI systems amid fears the race to build them is spiralling out of control.
Will other countries soon follow suit and ban ChatGPT?
After Italy’s decision to restrict access to the chatbot, the European Consumer Organisation (BEUC) called on authorities to investigate all major AI chatbots.
ChatGPT is already inaccessible in a number of countries, including China, Iran, North Korea, and Russia.
“Consumers are not ready for this technology. They don’t realise how manipulative, how deceptive it can be. They don’t realise that the information they get is maybe wrong,” said BEUC Deputy Director Ursula Pachl.
“I think this incident with ChatGPT is very important. It’s kind of a wake-up call for the European Union because even though European institutions have been working on an AI Act, it will not be applicable for another four years. And we have seen how fast these sorts of systems are developing,” she told Euronews.
Other regulators in Europe have now started paying more attention to ChatGPT.
Ireland’s Data Protection Commission said it’s “following up with the Italian regulator to understand the basis for their action,” adding it will “coordinate with all EU Data Protection Authorities in relation to this matter”.
France’s data privacy regulator, the CNIL, said it is investigating after receiving two complaints about ChatGPT. It has also reached out to its Italian counterpart to learn more about the basis of the ban.
In a blog post this week, the UK Information Commissioner’s Office warned that AI developers have “no excuse” for getting data privacy wrong – and that those who don’t follow the law on data protection will face consequences.
In an apparent response to the concerns, OpenAI published a blog post on Wednesday outlining its approach to AI safety.
The company said it works to remove personal information from training data where feasible, fine-tunes its models to reject requests for personal information of private individuals, and acts on requests to delete personal information from its systems.
EU prepares its AI Act
The European Commission is currently debating the world’s first legislation on artificial intelligence called the AI Act.
But the bloc appears unlikely to ban AI systems outright, according to European Commission Executive Vice-President Margrethe Vestager.
“No matter which tech we use, we have to continue to advance our freedoms and protect our rights,” she posted on Twitter. “That’s why we don’t regulate AI technologies, we regulate the uses of AI. Let’s not throw away in a few years what has taken decades to build”.
France’s Digital Minister Jean-Noël Barrot has also argued against banning ChatGPT. In an interview with La Tribune, he said that while the service does not appear to respect EU privacy laws, it would make more sense to “master” and regulate new technologies than to ban them altogether.
Germany, however, could follow in Italy’s footsteps by blocking ChatGPT over data security concerns, the German commissioner for data protection told the Handelsblatt newspaper.
Dan Morgan, Europe Government Affairs Director at cybersecurity rating provider SecurityScorecard, told Euronews this move signals the importance of regulatory compliance for companies such as OpenAI in Europe.
“I think the clear implications for this is that you can’t avoid regulation if you want to operate in the EU,” he said.
“Businesses must prioritise personal data protection and comply. No matter how important the product, you know, compliance with regulation in Europe is not an optional extra”.
The Italian watchdog gave the company behind ChatGPT 20 days to address the data protection issues or face a fine of up to €20 million or 4 per cent of its annual global turnover.
The CEO of the company, Sam Altman, said on Twitter that OpenAI was deferring to “the Italian government” on the issue.
“Italy is one of my favourite countries and I look forward to visiting again soon,” he added.