Large tech companies rarely shy away from a new technology; more often, they try to claim it for their own purposes as quickly as possible. This makes the story of Clearview AI all the more remarkable. In May, it emerged that a small start-up from New York was offering facial recognition software that can automatically detect people in photographs and match each face to an identity. Drawing on a database of more than three billion images, including pictures taken from social media platforms, this artificial intelligence application has so far mostly been used by law enforcement agencies, but apparently also by shopping centres, sports league operators and casinos, in order to identify people. Amongst other purposes, the technology is said to have been used to automatically spot known shoplifters and fraudsters on surveillance camera footage.
A line that had so far been treated as a moral boundary has thus been overstepped. Facial recognition software is not new, but using it for mass surveillance on the basis of publicly accessible image material, and offering it to anyone as a business model, is a different kettle of fish. Google and Facebook continue to hold back some of their software because of its potential for abuse. Many up-and-coming companies such as Clearview AI, however, do not appear to share these concerns.
It is important to point out that facial recognition software is currently far from mature. Even leading algorithms from companies such as IBM and Microsoft have considerable error rates: they identified white male faces with an accuracy of 88–94 per cent, but accuracy deteriorated significantly for the faces of people of colour, dropping to as low as 65 per cent for women of colour. This level of inaccuracy, combined with a database of three billion pictures, is at best problematic and, in the worst case, downright dangerous. Consider one example: following the terror attacks in Sri Lanka in 2019, the image of a young woman identified as a suspect by facial recognition software was made public. In fact, the picture showed a woman who was not even in the country at the time of the attacks but was sitting her final exams at a university in the US.
In addition to the public outcry over Clearview AI, social media platforms such as Twitter, Facebook and LinkedIn, whose public user profiles had supplied much of the software’s image pool, responded with cease-and-desist letters. In principle, ‘scraping’ images – i.e. systematically downloading, collecting and processing them – constitutes a violation of their terms of use. Whether the platforms will follow this up with further legal action, however, remains to be seen.
Irrespective of the details of this particular incident, the case of Clearview illustrates how technological advances can overstep ethical, normative and social boundaries in ways that remain hidden from public scrutiny. Given the rapid progress in artificial intelligence, such cases will only become more common in the future. The Silicon Valley mantra has always been: “Let’s quickly push the technology to the limit of what is possible before someone else does. At least that way we get to dictate the rules.” But perhaps it is not always best to do something just because we can, without pausing to ask whether we should.