Clearview AI Has New Tools to Identify You in Photos
Clearview AI stirred controversy by scraping the web for photos and applying facial recognition to give police and others an unprecedented ability to peer into our lives. Now the company's CEO wants to use artificial intelligence to make Clearview's surveillance tool even more powerful.
It could also make the tool more dangerous, and more error-prone.
Clearview has collected billions of photos from websites including Facebook, Instagram, and Twitter, and it uses AI to identify a particular person in images. Police and government agents have used the company's face database to help identify suspects in photos by tying them to online profiles.
The company’s founder and CEO, Hoan Ton-That, told WIRED that Clearview has now collected more than 10 billion images from across the web, more than three times as many as previously reported.
Ton-That says the larger pool of photos means that users, most often law enforcement, are more likely to find a match when searching for someone. He also claims the larger data set makes the company’s tool more accurate.
Clearview combined web crawling, advances in machine learning that have improved facial recognition, and a disregard for personal privacy to create a surprisingly powerful tool.
Ton-That demonstrated the technology through a smartphone app by taking a photo of the reporter. The app produced dozens of images from numerous US and international websites, each showing the correct person in photos captured over more than a decade. The allure of such a tool is obvious, but so is the potential for it to be misused.
Clearview’s actions have sparked public outrage and a broader debate over expectations of privacy in an era of smartphones, social media, and AI. Critics say the company is eroding personal privacy. The ACLU sued Clearview in Illinois under a law that restricts the collection of biometric information; the company also faces class action lawsuits in New York and California. Facebook and Twitter have demanded that Clearview stop scraping their sites.
The pushback has not deterred Ton-That. He said he believes most people will accept or support the idea of using facial recognition to solve crimes. “People are worried about it, they’re shaken up, and that’s a good thing, because I think over time we can address more of their concerns,” he said.
Some of Clearview’s new technologies may spark further debate. Ton-That said the company is developing new ways to find a person, including “deblur” and “mask removal” tools. The first takes a blurred image and sharpens it using machine learning to envision what a clearer picture would look like; the second tries to envision the covered part of a person’s face, using machine learning models that fill in the missing details of an image with a best guess based on statistical patterns found in other images.
These capabilities could make Clearview’s technology more attractive, but also more problematic. It remains unclear how accurate the new methods are, but experts say they could increase the risk that a person is misidentified and could exacerbate biases already present in the system.
“I would expect accuracy to be quite bad, and even beyond accuracy, without careful control over the data set and training process I would expect a lot of unintended bias to creep in,” said Aleksander Madry, an MIT professor who specializes in machine learning. Without proper care, for example, the approach could make people with certain features more likely to be misidentified.
Even if the technology works as promised, Madry said, the idea of unmasking people is problematic. “Think of people who masked themselves to take part in a peaceful protest or were blurred to protect their privacy,” he said.
Ton-That said internal tests have found that the new tools improve the accuracy of Clearview’s results. “Any enhanced image should be noted as such, and extra care should be taken when evaluating results that may come from an enhanced image,” he said.