Clearview AI's facial recognition is polarizing the tech industry

posted Saturday Feb 8, 2020 by Scott Ertz

There's a small, secretive startup that you might have heard of recently - Clearview AI. The company has built what may be the most polarizing facial recognition product ever released. Its intended users, law enforcement, are in love with it, but the tech industry wants to hinder its growth and future potential.

Law enforcement loves it

There's no doubt that facial recognition technology is causing problems for the organizations that deploy it. New York tried it in schools, and that immediately drew criticism. But the place where the technology, especially Clearview AI's, is being used the most is law enforcement. While publicly acknowledging the potential for abuse, law enforcement is using it with perceived success.

According to The New York Times, agencies have been using Clearview AI to identify the victims of child exploitation in both photos and videos. In Indiana, investigators ran photos of 21 victims through the system and came back with 14 identities. This allowed them to contact the victims and ask if they wanted to provide statements.
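
Clearview has not published the details of how its matching works, but facial recognition systems of this kind typically convert each face photo into a numeric embedding and compare a probe image's embedding against every entry in the database. The sketch below is a minimal, hypothetical illustration of that general approach using cosine similarity over made-up vectors; the names, numbers, and threshold are assumptions for illustration, not anything taken from Clearview's system.

```python
import numpy as np

# Hypothetical face embeddings. In a real system each vector would come from
# a neural network that maps a face photo to a fixed-length representation;
# these numbers are invented purely for illustration.
DATABASE = {
    "person_a": np.array([0.30, 0.60, 0.70, 0.25]),
    "person_b": np.array([0.90, 0.05, 0.20, 0.40]),
    "person_c": np.array([0.15, 0.78, 0.50, 0.12]),
}

def cosine_similarity(a, b):
    """Similarity between two embeddings; 1.0 means the same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_probe(probe, database, threshold=0.95):
    """Return the closest database entry for a probe embedding.

    The threshold keeps weak matches from being reported as identifications,
    which is why not every probe image produces a hit.
    """
    best_name, best_score = None, -1.0
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= threshold:
        return best_name, best_score
    return None, best_score

# Embedding for a probe image uploaded by an investigator (also invented).
probe = np.array([0.14, 0.79, 0.52, 0.11])
print(match_probe(probe, DATABASE))  # closest match here is "person_c"
```

In practice the embeddings come from a neural network trained on faces and the database holds billions of entries rather than three, but the lookup logic is conceptually the same.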

Does privacy matter?

According to the company's policies, it would appear that the answer is no. Privacy advocates argue that the way the company acquires and stores images creates new kinds of harm. The company stores the images uploaded by law enforcement, known as probe images, forever. Yes, the internet never forgets, but the accumulation of this kind of imagery by Clearview AI creates a new way for these images to make their way into the public's hands.

Tech companies hate it

Probe images alone are not enough to produce a viable facial recognition system. So how did the company get a usable database of images? Exactly where you think - social media. The company has scraped public photos from Facebook, Twitter, YouTube, and more to build a database of billions of faces.
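
The exact tooling behind that scraping is not public. Purely as an illustrative sketch of the general idea, collecting publicly reachable photos might look something like the snippet below; the URL list, output directory, and file naming are hypothetical stand-ins, not anything Clearview is known to use.

```python
import pathlib
import requests

# Hypothetical list of publicly reachable image URLs. A real crawler would
# discover these by walking profile pages, subject to each site's terms.
PUBLIC_IMAGE_URLS = [
    "https://example.com/photos/user1.jpg",
    "https://example.com/photos/user2.jpg",
]

def download_images(urls, out_dir="scraped_faces"):
    """Download each image and save it locally for later indexing."""
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    for i, url in enumerate(urls):
        response = requests.get(url, timeout=10)
        if response.ok:
            (out / f"face_{i:06d}.jpg").write_bytes(response.content)

if __name__ == "__main__":
    download_images(PUBLIC_IMAGE_URLS)
```

The downloaded files would then be run through a face-embedding model, along the lines of the matching sketch above, before they could be searched.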

These companies, which object to the use of these photos in the Clearview AI platform, have demanded that the company stop scraping their data and stop using the data it has already scraped. While the sites' privacy policies may prohibit the behavior, there is legal precedent working against them. LinkedIn lost a battle to prevent similar behavior by a company called hiQ, which had been scraping public data from profiles for years and was allowed to continue. The court concluded that data made public was just that - public.

As with the discussion around outlawing encryption, the question comes down to whether or not law enforcement can and should regularly violate our privacy to combat child exploitation. There is no way this is the end of the story - it is likely just the beginning.
