Facial-recognition firm’s KYC, anti-fraud launch faces risks after privacy crackdown over database


A New York artificial intelligence company fined by data privacy regulators in the EU, UK and Australia is trying to turn the corner with the launch of a know-your-customer (KYC), anti-fraud and security tool based on the firm’s facial recognition technology. However, Clearview AI’s bid for a fresh market faces legal risks posed by a continuing class action privacy complaint over its use of facial images.

The launch also raises the question of whether the new product’s algorithm was trained on improperly obtained data, although the company denies that it was. The questions highlight regulatory challenges many financial firms may face as they turn to technology solutions and vendors to assist them with compliance and security tasks.

In May, Clearview AI was banned from selling its faceprint database commercially throughout the United States after it settled a suit brought by the American Civil Liberties Union (ACLU). The suit argued that Clearview’s practices violated Illinois’ Biometric Information Privacy Act (BIPA). Clearview is additionally barred in Illinois from selling or granting free access to the Clearview App to state, county, local or other government agencies or contractors. It must delete images of Illinois residents held in the database.

Clearview Consent, a facial recognition algorithm, was launched less than two weeks after the Illinois ACLU settlement. It is sold on a standalone basis, separate from Clearview AI’s database of more than 20 billion facial images, which is marketed to government clients. Clearview Consent is aimed at uses including travel identity checks, in-person payments, online identity verification and fraud detection.

Lingering questions stem from how the algorithm underlying Clearview Consent was produced, according to the ACLU.

“We never got any information about how they actually trained the algorithm. It would be logical to assume that they trained it on this unique, humongous database of faceprints that they’ve amassed. But I can’t say that for sure. If it is the case, if that’s how they trained it, then that is abusive, and I would hope that national or state regulatory authorities in the United States or elsewhere would order them to delete their algorithm and start over with clean, non-abusively collected data,” said Nathan Freed Wessler, deputy project director of the ACLU Foundation’s speech, privacy and technology project in New York.

Clearview said its actions have been appropriate.

“Clearview AI’s algorithm is trained on publicly accessible images from the open internet. No private data has been used to train Clearview AI’s bias-free algorithm, and no personally identifiable information is used before or during the training process. After the algorithm has been created, no personally identifiable information, or photos are included with it,” Hoan Ton-That, Clearview’s chief executive, said in an emailed statement.

However, data privacy regulators in many jurisdictions consider the scraping of photos from the public internet without consent to be impermissible. They view photos posted online as personal data, subject to data privacy laws. Facebook and other social networks have asked Clearview AI to stop scraping data from their sites because the practice violates their terms of use.

Everalbum precedent

If Clearview Consent’s algorithm was trained on improperly collected facial images, there is legal precedent for the U.S. Federal Trade Commission (FTC) to order the algorithm to be wiped.

In 2021, the FTC settled with a company called Everalbum over allegations that it had misled app users by saying it would not apply facial recognition technology to user content unless they “affirmatively chose to activate the feature”. The company activated the feature automatically regardless. It also failed to delete photos and videos after users deactivated their accounts.

“The FTC came down on them for deceptive trade practices. Part of the relief was they ordered this company to wipe out its algorithm and start again, if it wanted to, with permissible data. It is certainly a remedy that has been used before and I would hope that regulators are looking closely at [such a remedy] for Clearview AI,” Wessler said.

UK fine and ban

The UK Information Commissioner’s Office (ICO), the Italian data protection authority and the Office of the Australian Information Commissioner (OAIC) are the most recent regulators to find that Clearview AI breached data privacy laws when it used personal photos scraped from the internet to populate its database and train its facial recognition algorithms. Canadian privacy regulators have ordered the company to comply with a previous directive to stop collecting images of residents and delete pictures it has gathered.

“Currently we are challenging the cases in the UK, Canada and Australia. We believe these international rulings are incorrect as a matter of law,” Ton-That said.

The UK ICO’s enforcement notice said Clearview must delete all the data it holds pertaining to UK residents, cease scraping any personal data about UK citizens from the public-facing internet and stop adding personal data about UK citizens to the Clearview database. It must also stop processing any images of UK residents, and in particular refrain from seeking to match such images against the Clearview database.

It must refrain “from offering any service provided by way of the Clearview Database to any customer in the UK.” Whether that extends to an algorithm trained on the illegally collected data is unclear.

“That would be a logical conclusion,” said Simon Randall, chief executive and co-founder of Pimloc, a company specialising in visual data privacy and security.

“This really highlights one of the challenges the regulators have. The fact that these policies are so local, state-by-state or country-by-country. It makes it very hard to enforce. What I think the ICO realised was, assuming you can identify which bits of training data were in the UK, the UK ICO can only really say, ‘you need to remove those images specifically from your dataset or from your model’. They stopped short of saying, ‘because you trained it, because you trained some of your model on our data, actually, you need to unwind it’,” Randall said.

The ICO declined to say whether the Clearview products trained on the database were banned too.

Choose third-party solutions with care

Beyond legal risks, firms should weigh the privacy, operational, reputational and compliance risks associated with facial recognition technology vendors.

“Companies should only be using face recognition technology if they have the express consent of the people who it’s being used on. That’s a legal requirement in Illinois and a couple other U.S. states under state law. It’s a requirement of data protection laws in lots of other countries, and it’s obviously a best practice,” Wessler said.

Facial recognition technology remains controversial, particularly because it tends to perform poorly when identifying non-white, non-male faces. Clearview claims to be bias-free and rates itself as highly accurate, citing benchmark test results from the U.S. National Institute of Standards and Technology (NIST).

“Clearview AI’s technology today far surpasses the human eye and has no racial bias. According to the Innocence Project, 70% of wrongful convictions result from eyewitness lineups. Accurate facial recognition technology like Clearview AI is able to help create a world of bias-free policing. As a person of mixed race this is highly important to me,” Ton-That said.

Recent NIST testing showed Clearview’s facial recognition algorithm has no detectable racial bias, Ton-That said.

Any results from NIST testing are produced under test conditions and are not real-world results, Wessler said.

Misleading and opportunistic marketing

“They have repeatedly misrepresented the accuracy testing of their system; there was the period when they claimed essentially to have replicated an accuracy test that the ACLU ran against the Amazon system and determined that they were 100% accurate based on that. It was misleadingly framed in a way that suggested the ACLU might have given them an imprimatur that, honestly, we didn’t,” Wessler said.

Most recently, Ton-That said Clearview had provided its technology at no cost to the Ukrainian military to identify Russian soldiers, dead or alive. Russia has a data privacy law similar to the EU General Data Protection Regulation (GDPR). The Russian data protection authority did not respond to an email seeking comment.

“There are two things that are creepy about it,” Randall said of the action. “One is doing it. The other is publicising it. The proportionality is very hard to justify. I’ve seen a couple of examples where [Clearview] are talking about catching child offenders. On the face of it, that’s very hard to argue, but actually if you are breaching the privacy rights of the population of the world in order to catch a criminal, the proportionality is wrong.”

Many businesses are becoming more discerning about third-party providers, and more alive to data privacy and security risks.

“The good news is lots of global businesses now want to be doing the right thing and want a lot more transparency on how they’re managing data. The big policy gaps aside, I think the change we’ve seen recently is just the seismic shift in people’s attitudes to who they do business with, who they share their data with and what they now expect,” Randall said.

Compliance risks

Enforcement action, as well as Clearview’s own attempts to comply with local data privacy laws, shows the difficulty of auditing its database. The company cannot prove, or guarantee, that it has fulfilled requests from data subjects or regulators to delete personal data.

For example, a data subject in a jurisdiction that permits opt-out requests must provide a photo for the company to check against the database. A Californian data subject would then receive a message sent on behalf of Clearview saying the company had processed the request successfully. The message does not show which images have been deleted.

“Any images of you that we were able to find, based on the image you shared with us to facilitate your request, have been removed from Clearview’s search results and permanently de-identified. The image/s you share with us to facilitate your request will be deleted,” said an automatically generated email sent on Clearview’s behalf by compliance software firm OneTrust.

The problem is that Clearview indiscriminately scrapes personal data from the internet, Wessler said. The company in February told investors it was aiming to have 100 billion facial images in its database within a year.

“They’re always scraping huge volumes of new photos from the internet in the strive to get to 100 billion faceprints by the end of the year. If the deletion requests are to have any durability, then they need to be able to screen all the newly downloaded photos to see if they’re getting new photos of somebody who has tried to opt out,” Wessler said.

Under the terms of the Illinois settlement, Clearview keeps the photos uploaded with opt-out requests, ringfenced from the national database that police and other government agencies can search. That allows the company to scan newly collected images periodically against the opt-out photos, to check it is not adding images of Illinois residents.
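
Neither the settlement nor Clearview describes the matching mechanics, but the screening Wessler and the settlement envisage amounts to comparing each newly scraped faceprint against the ringfenced set of opt-out faceprints before ingestion. A minimal sketch in Python, assuming faces have already been converted to fixed-length embedding vectors and using an illustrative similarity threshold (both are assumptions, not details reported here):

```python
import numpy as np

# Illustrative threshold; a real system would tune this empirically.
SIMILARITY_THRESHOLD = 0.85

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_opted_out(new_face: np.ndarray, optout_faces: list[np.ndarray]) -> bool:
    """True if a newly scraped face matches any ringfenced opt-out faceprint."""
    return any(cosine_similarity(new_face, kept) >= SIMILARITY_THRESHOLD
               for kept in optout_faces)

def screen_new_batch(new_faces: list[np.ndarray],
                     optout_faces: list[np.ndarray]) -> list[np.ndarray]:
    """Keep only newly scraped faces that do not match an opt-out request."""
    return [f for f in new_faces if not is_opted_out(f, optout_faces)]
```

As Wessler notes, such a screen only makes deletion durable if every newly scraped batch passes through it.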

The ACLU did not secure an audit mechanism in its settlement to test compliance, but can always go back to court to enforce the settlement if it discovers Clearview has violated its terms, Wessler said.

Clearview is abiding by the terms of the settlement, Ton-That said.

75% of capital raised earmarked for fines

Financial resilience is another consideration when assessing third-party vendors.

Clearview has racked up about $31.5 million in fines for breaches of data privacy laws. Italy’s data privacy authority fined Clearview 20 million euros in March. It could be liable for another £7.5 million if its appeal against the UK ICO’s fine fails. Remediation and legal costs will also eat into its capital. Public records indicate it has raised about $40 million in venture capital.

In June, Reuters reported Clearview had cut much of its sales staff and parted ways with two of three executives hired about a year ago, as it grapples with litigation and difficult economic conditions.

“Like many other iconic innovative start-ups, there is a major legal component to our operations early on. Also, almost every privacy law worldwide supports exemptions for government, law enforcement and national security, and we are contesting these international rulings as a matter of law,” Ton-That said.

Further legal risks

Clearview also faces a class action complaint based on Illinois’ BIPA, brought originally by a Macy’s department store customer from Chicago. On June 12, some of the biggest U.S. consumer companies, including Walmart, Kohl’s, Best Buy, Albertsons, Home Depot and AT&T, were added as co-defendants to the suit. Those companies are alleged to have violated Illinois residents’ privacy when they used Clearview AI’s technology.

In a possible precedent, Google agreed in June to pay $100 million to Illinois residents for allegedly violating BIPA through a facial recognition feature in Google Photos called the grouping tool.