The sources of much of Clearview’s data, Google and Facebook, have determined that the company breached their terms of use and have demanded it stop scraping their sites for facial images.

The debate around Clearview remains interesting, however, because a clear public interest is served when the software helps apprehend serious criminal offenders, with demonstrable benefits to community safety. The technology has been used in the pursuit of child abusers, and multiple children have been rescued from harm as a result.

When there is such evidence that a technology is saving lives, the question is how to regulate it effectively worldwide so that it can be used for these purposes while upholding the human rights and privacy of individuals.

Human rights should always be a consideration with respect to any technology. It’s just that AI, and especially the pace of its development, is challenging regulators as they’ve never been challenged before.

Whether it concerns a hot-button issue such as facial recognition or the other AI systems we interact with, consent is complicated. How many of us read the privacy policies associated with the myriad devices and apps that collect our personal information? It is also becoming increasingly difficult, or at least extremely inconvenient, to opt out of the AI-powered services we use.

More and more of these services are being moved onto automated, algorithmic systems. Organisations known as data brokers, whose sole purpose is to buy and sell personal data, are part of a multibillion-dollar market that undoubtedly fuels much of AI.

Another consideration in all of this data harvesting is that the data must be stored somewhere. Yet dedicated data centres are increasingly recognised as an environmental concern: they collectively contributed close to 1 per cent of global carbon dioxide emissions in 2021, a figure that has likely risen since. Organisations also hold on to a lot of data they most likely don’t need, sometimes in the pursuit of data-led AI decision-making that may have only a marginal impact on their operations.

Historical data often is not destroyed when it should be, and data needed only for a one-time purpose is frequently retained inappropriately. This produces numerous “honeypots” of personal data around the world that are regularly breached, with the data leaked and used for nefarious purposes such as identity theft and other cybercrimes.

Furthermore, almost every digital service asks users to create yet another account so their activities can be tracked and their data harvested, likely to be fed into AI systems of various kinds for analysis and prediction. More accounts mean more personal data spread across the internet, and more opportunities for criminals to exploit it.

Living with AI by Campbell Wilson

Is it possible we have become the metaphorical slowly boiled frog with respect to the use of personal data by AI, perhaps only now beginning to realise that the temperature in our pot is uncomfortably high? And even if we have, do many of us really care how our data is being used?

After all, by using Facebook, billions of people freely share significant amounts of personal information, probably without much regard for how it is used. Whether we should care is a personal matter, but it’s hard to care about something if you don’t know it’s happening. And that’s where we need far more transparency about how data, and the AI powered by it, are being used.

This is an edited extract from Living with AI by Campbell Wilson, the 30th title in Monash University Publishing’s ‘In the National Interest’ series, out next week.
