Does state surveillance really keep us safe?

In a time of injustice, Clearview AI poses a fresh threat to those most vulnerable among us

A new and powerful facial recognition technology developed by tech company Clearview AI has been garnering attention across the continent this month, and with very good reason: this is technology that could put an end to privacy, and it is a threat to communities across Canada.

A statement made on February 27 by the Royal Canadian Mounted Police (RCMP) confirmed that the national police service has been tracking citizens with Clearview’s facial recognition for months, despite a spokesperson’s claim in January that it had not been using the technology. The Ontario Provincial Police and the Toronto Police Service similarly denied using face recognition in January, only to admit about a month later that it had been in use since at least October 2019. For months, in other words, Canadians have been monitored, without regulation or transparency, through a database containing billions of photos. From a friend’s Facebook post, a picture of a protest or a strike, or even just an image of you walking down the street, your face can be linked instantly to your social media profiles, home address, and phone number.

Several officials, including NDP MP Charlie Angus, Toronto mayor John Tory, and former Ontario Information and Privacy Commissioner Ann Cavoukian, have spoken out against the use of Clearview AI. Toronto police chief Mark Saunders ordered officers to halt its use, apparently not even knowing about the technology until February 5. The federal Office of the Privacy Commissioner has begun to investigate whether the RCMP’s actions constitute a violation of federal privacy laws. But still, Canadians remain woefully in the dark about what is going on, how long it has been going on, and how their privacy and safety are affected.

The RCMP claim that a few units have used the technology to enhance criminal investigations, while Ontario police departments have not made clear at all how, or how often, they use face recognition. Such a blatant lack of transparency around invasive police surveillance is nothing short of alarming, especially considering that it took these forces months to even admit to its use.

And even if our own police weren’t boldly lying to us about the extent of their surveillance, we would still be under more invasive and comprehensive government surveillance than ever before. Especially troubling is the fact that Clearview AI itself can see exactly who the police are searching for and when. Clearview is a small, secretive company with a reclusive CEO and very little publicly available information, and it has not responded to any requests for comment. If it doesn’t scare you that our own government is monitoring us at a level more intrusive than ever before, it should scare you that this tiny, unknown tech start-up is doing so too.

MP Angus characterized the situation as a “legislative vacuum”: the technology itself might have the potential to vindicate innocent people or identify offenders, but there is nothing in place to prevent its misuse and abuse, and police forces don’t seem to be in any hurry to establish such regulations. Without public input or thorough discussion of such a dangerous tool and its implications, how can institutions like the RCMP assume the right to decide how it will be used on the people of this country?

Another point of concern is the unreliable nature of such technology. If innocent people, whose entire identities have been made easily available to police without consent, knowledge, or good reason, are wrongly identified as suspects, they may face enormous challenges dealing with the fallout. Further, Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, has aptly warned that “the weaponization possibilities of this are endless.” We have a well-known, well-documented precedent in the form of the NSA’s LOVEINT controversy to tell us that, particularly given the current lack of legal regulation and transparency, it is entirely plausible for rogue officers to use Clearview’s program to monitor people they know for personal purposes. Clearview AI’s own secrecy means that we don’t know how the company itself is using this technology, either.

And, perhaps most importantly in today’s tense political climate—amidst union strikes, climate action, and protests surrounding Indigenous rights—the use of this technology in this country goes from unethical and problematic to genuinely dangerous. Activists, protesters, and demonstrators are potentially in huge danger from police with access to powerful identification technology. The possibility of identification from a photo of a rally, a peaceful blockade, or a march means that victims of injustice and allies standing with them in solidarity might face real consequences for exercising their rights to civil disobedience.

Minority advocacy groups and vulnerable communities, who already face disproportionate over-policing and over-surveillance, are at the most risk from the use of face recognition AI. Indigenous communities, in particular, are in the spotlight now. January saw an eviction notice issued by the Wet’suwet’en Nation in northwest British Columbia to natural gas company Coastal GasLink, in response to plans to build an illegal pipeline across unceded territory. Escalation in the form of rail blockades led to protests and solidarity movements across the country. RCMP officers, despite calls from the United Nations and provincial human rights commissioners to withdraw, have spent the past months arresting peaceful protesters, often violently, and separating families and communities by force.

The Wet’suwet’en actions and the RCMP’s violence over the last few months should be a wake-up call. Activists and advocates are in danger from the police, and Clearview AI’s technology only puts them in more danger. There is no better moment to care about the privacy and safety of the people who face the greatest potential harm from unregulated police use of face recognition technology; that moment is right now, and that place is right here. This isn’t just some abstract violation of liberty, but a real threat to the safety and even the lives of people across the country. Just because you don’t experience it, or no one you know does, doesn’t mean it isn’t happening. This is real, and this is now.

Our law enforcers are abusing their power. Our government is doing less than nothing to help the vulnerable people among us. And the officials who are supposed to be protecting us have been lying to our faces about how we are being surveilled and policed; clearly, they don’t consider themselves beholden to the public. Time and again, history shows that moments like these are when it falls to us to step up. So, step up. Be angry about the lack of regulation or public consultation around dangerous technology. Contact your local representatives, donate to advocacy causes you care about, or better yet, attend local protests for these causes, and even help to organize actions if you can.

Most of all, I urge you to spread the word about this, because our own police clearly won’t. Do your own research. Tell your friends what’s going on. Tell your families. Tell your followers. Make them angry too. When our “protectors” are putting the most vulnerable of our community in danger, it falls to us to stay aware and stay united.