Mapped: The State of Facial Recognition Around the World


This post is by Iman Ghosh from Visual Capitalist


From public CCTV cameras to biometric identification systems in airports, facial recognition technology is now common in a growing number of places around the world.

In its most benign form, facial recognition technology is a convenient way to unlock your smartphone. At the state level though, facial recognition is a key component of mass surveillance, and it already touches half the global population on a regular basis.

Today’s visualizations from Surfshark classify 194 countries and regions based on the extent of facial recognition surveillance.

Facial Recognition Status (Total Countries)
In use: 98
Approved, but not implemented: 12
Considering technology: 13
No evidence of use: 68
Banned: 3
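
As a quick sanity check, the counts above sum to the 194 countries and regions in Surfshark's study. The short Python sketch below uses only the numbers from the table to reproduce the total and the share each category represents.

```python
# Country counts by facial recognition status, taken from the table above.
status_counts = {
    "In use": 98,
    "Approved, but not implemented": 12,
    "Considering technology": 13,
    "No evidence of use": 68,
    "Banned": 3,
}

total = sum(status_counts.values())  # 194 countries and regions

for status, count in status_counts.items():
    print(f"{status}: {count} ({100 * count / total:.1f}% of {total})")
```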

Surfshark’s full research methodology is published alongside the original study.

Let’s dive into the ways facial recognition technology is used across every region.

North America, Central America, and Caribbean

In the U.S., a 2016 study showed that half of American adults were already captured in some kind of facial recognition network. More recently, the Department of Homeland Security unveiled its “Biometric Exit” plan, which aims to use facial recognition technology on nearly all air travel passengers by 2023 to verify compliance with visa requirements.

Facial Recognition North America Map

Perhaps surprisingly, 59% of Americans are actually in favor of facial recognition technology, considering its use by law enforcement acceptable, according to a Pew Research survey. Yet some cities, such as San Francisco, have pushed to ban the technology, citing its potential for abuse by the government.

Facial recognition technology can potentially come in handy after a natural disaster. After Hurricane Dorian hit in late summer of 2019, the Bahamas launched a blockchain-based missing persons database “FindMeBahamas” to identify thousands of displaced people.

South America

The majority of facial recognition technology in South America is aimed at cracking down on crime. In Brazil, for example, it was used to capture Interpol’s second-most wanted criminal.

Facial Recognition South America Map

Home to over 209 million people, Brazil plans to create a biometric database of its citizens. However, some are nervous that this could also serve as a means to suppress dissent against the current political order.

Europe

Belgium and Luxembourg are two of only three governments in the world to officially oppose the use of facial recognition technology.

Facial Recognition Europe Map

Further, 80% of Europeans are not keen on sharing facial data with authorities. Despite such negative sentiment, it’s still in use across 26 European countries to date.

The EU has been a haven for unlawful biometric experimentation and surveillance.

—European Digital Rights (EDRi)

In Russia, authorities have relied on facial recognition technology to check for breaches of quarantine rules by potential COVID-19 carriers. In Moscow alone, there are reportedly over 100,000 facial recognition-enabled cameras in operation.

Middle East and Central Asia

Facial recognition technology is widespread in this region, notably for military purposes.

Facial Recognition Middle East and Central Asia Map

In Turkey, 30 domestically developed kamikaze drones will use AI and facial recognition for border security. Similarly, Israel keeps a close eye on Palestinian citizens across 27 West Bank checkpoints.

In other parts of the region, police in the UAE have purchased discreet smart glasses that can be used to scan crowds, where positive matches show up on an embedded lens display. Over in Kazakhstan, facial recognition technology could replace public transportation passes entirely.

East Asia and Oceania

In the COVID-19 battle, contact tracing through biometric identification became a common tool to slow the infection rates in countries such as China, South Korea, Taiwan, and Singapore. In some instances, this included the use of facial recognition technology to monitor temperatures as well as spot those without a mask.

Facial Recognition East Asia Oceania Map

That said, questions remain about whether the pandemic panopticon will stop there.

China is often cited as a notorious case of mass surveillance, and the country has the highest ratio of CCTV cameras to citizens in the world—one for every 12 people. By 2023, China will be the single biggest player in the global facial recognition market. And it’s not just implementing the technology at home; it’s exporting it, too.

Africa

While the African continent currently has the lowest concentration of facial recognition technology in use, this deficit may not last for long.

Facial Recognition World Map

Several African countries, such as Kenya and Uganda, have received telecommunications and surveillance financing and infrastructure from Chinese companies—Huawei in particular. While the company claims this has enabled regional crime rates to plummet, some activists are wary of the partnership.

Whether you approach facial recognition technology from a public and national security lens or from an individual liberty perspective, it’s clear that this kind of surveillance is here to stay.


EFF Testifies Today on Law Enforcement Use of Face Recognition Before Presidential Commission on Law Enforcement and the Administration of Justice


This post is by Jennifer Lynch from Deeplinks

The Presidential Commission on Law Enforcement and the Administration of Justice invited EFF to testify on law enforcement use of face recognition. The Commission, which was established via Executive Order and convened by Attorney General William Barr earlier this year, is tasked with addressing the serious issues confronting law enforcement and is made up of representatives from federal law enforcement as well as police chiefs and sheriffs from around the country.

We testified orally and provided the Commission with a copy of our whitepaper, Face Off: Law Enforcement Use of Face Recognition Technology. The following is our oral testimony:

President’s Commission on Law Enforcement and the Administration of Justice
Hearing on Law Enforcement’s Use of Facial Recognition Technology

Oral Testimony of
Jennifer Lynch
Surveillance Litigation Director
Electronic Frontier Foundation (EFF)

April 22, 2020

Thank you very much for the opportunity to discuss law enforcement’s use of facial recognition technologies with you today. I am the surveillance litigation director at the Electronic Frontier Foundation, a 30-year-old nonprofit dedicated to the protection of civil liberties and privacy in new technologies.

In the last few years, face recognition has advanced significantly. Now, law enforcement officers can use mobile devices to capture face recognition-ready photographs of people they stop on the street; surveillance cameras and body-worn cameras boast real-time face scanning and identification capabilities; and the FBI and many other state and federal agencies have access to millions, if not hundreds of millions, of face recognition images of law-abiding Americans.

However, the adoption of face recognition technologies has occurred without meaningful oversight, without proper accuracy testing, and without legal protections to prevent misuse. This has led to the development of unproven systems that will impinge on constitutional rights and disproportionately impact people of color.

Face recognition and similar technologies make it possible to identify and track people, both in real time and in the past, including at lawful political protests and other sensitive gatherings. Widespread use of face recognition by the government—especially to identify people secretly when they walk around in public—will fundamentally change the society in which we live. It will, for example, chill and deter people from exercising their First Amendment protected rights to speak, assemble, and associate with others. Countless studies have shown that when people think the government is watching them, they alter their behavior to try to avoid scrutiny, even when they are doing absolutely nothing wrong. And this burden falls disproportionately on communities of color, immigrants, religious minorities, and other marginalized groups.

The right to speak anonymously and to associate with others without the government watching is fundamental to a democracy. And it’s not just EFF saying that—the founding fathers used pseudonyms in the Federalist Papers to debate what kind of government we should form in this country, and the Supreme Court has consistently recognized that anonymous speech and association are necessary for the First Amendment right to free speech to be at all meaningful.

Face recognition’s chilling effect is exacerbated by inaccuracies in face recognition systems. For example, FBI’s own testing found its face recognition system failed to even detect a match from a gallery of images nearly 15% of the time. Similarly, the ACLU showed that Amazon’s face recognition product, which it aggressively markets to law enforcement, falsely matched 28 members of Congress to mugshot photos.
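
Error rates like these compound quickly at scale. The short sketch below is purely illustrative: the gallery size and per-comparison false-match rate are assumed numbers, not figures from the testimony or from any specific system, but the arithmetic shows why even a seemingly tiny error rate can return thousands of innocent candidates for every search.

```python
# Back-of-the-envelope sketch of why small per-comparison error rates still
# produce many false matches against a large gallery. Both numbers below are
# illustrative assumptions, not figures from the testimony or any real system.

gallery_size = 25_000_000      # assumed: photos enrolled in a search database
false_match_rate = 0.0001      # assumed: 0.01% chance a single comparison is a false match

expected_false_matches = gallery_size * false_match_rate
print(f"Expected false matches per probe photo: {expected_false_matches:,.0f}")
# => roughly 2,500 innocent candidates returned, on average, for every search
```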

The threats from face recognition will disproportionately impact people of color, both because face recognition misidentifies African Americans and ethnic minorities at higher rates than whites, and because mug shot databases include a disproportionate number of African Americans, Latinos, and immigrants.

This has real-world consequences; an inaccurate system will implicate people for crimes they didn’t commit. Using face recognition as the first step in an investigation can bias the investigation toward a particular suspect. Human backup identification, which has its own problems, frequently only confirms this bias. This means face recognition will shift the burden onto defendants to show they are not who the system says they are.

Despite these known challenges, federal and state agencies have for years failed to be transparent about their use of face recognition. For example, the public had no idea how many images were accessible to FBI’s FACE Services Unit until Government Accountability Office reports from 2016 and 2019 revealed the Bureau can access more than 641 million images—most of which were taken for non-criminal reasons like obtaining a driver’s license or a passport.

State agencies have been just as intransigent in providing information on their face recognition systems. EFF partnered with the Georgetown Center on Privacy and Technology to survey which states were currently using face recognition and with whom they were sharing their data, a project we call “Who Has Your Face.” Many states, including Connecticut, Louisiana, Kentucky, and Alabama, failed or refused to respond to our public records requests. And other states, like Idaho and Oklahoma, told us they did not use face recognition, but other sources, like the GAO reports and records from the American Association of Motor Vehicle Administrators (AAMVA), seem to contradict this.

Law enforcement officers have also hidden their partnerships with private companies from the public. Earlier this year, the public learned that a company called Clearview AI had been actively marketing its face recognition technology to law enforcement, and claimed that more than 1,000 agencies around the country had used its services. But up until the middle of January, most of the general public had never even heard of the company. Even the New Jersey Attorney General was surprised to learn—after reading the New York Times article that broke the story—that officers in his own state were using the technology, and that Clearview was using his image to sell its services to other agencies.

Unfortunately, the police have been just as tight-lipped with defendants and defense attorneys about their use of face recognition. For example, in Florida law enforcement officers have used face recognition to try to identify suspects for almost 20 years, conducting up to 8,000 searches per month. However, Florida defense attorneys are almost never told that face recognition was used in their clients’ cases. This infringes defendants’ constitutional due process right to challenge evidence brought against them.

Without transparency, accountability, and proper security protocols in place, face recognition systems will be subject to misuse. For example, the Baltimore Police used face recognition and social media to identify and arrest people in the protests following Freddie Gray’s death. And Clearview AI used its own face recognition technology to monitor a journalist and encouraged police officers to use it to identify family and friends.

Americans should not be forced to submit to criminal face recognition searches merely because they want to drive a car. And they shouldn’t have to fear that their every move will be tracked if the networks of surveillance cameras that already blanket many cities are linked to face recognition.

But without meaningful restrictions on face recognition, this is where we may be headed. Without protections, it could be relatively easy for governments to amass databases of images of all Americans—or work with a shady company like Clearview AI to do it for them—and then use those databases to identify and track people as they go about their daily lives. 

In response to these challenges, I encourage this commission to do two things: First, to conduct a thorough nationwide study of current and proposed law enforcement practices with regard to face recognition at the federal, state, and local level, and second, to develop model policies for agencies that will meaningfully restrict law enforcement access to and use of this technology. Once completed, both of these should be easily available to the general public.

Thank you once again for the invitation to testify. My written testimony, a white paper I wrote on law enforcement use of face recognition, provides additional information and recommendations. I am happy to respond to questions.

DOJ Moves Forward with Dangerous Plan to Collect DNA from Immigrant Detainees


This post is by Saira Hussain from Deeplinks

The Department of Justice’s (DOJ) recently-issued final rule requiring the collection of DNA from hundreds of thousands of individuals in immigration detention is a dangerous and unprecedented expansion of biometric screening based not on alleged conduct, but instead on immigration status. This type of forcible DNA collection erodes civil liberties and demonstrates the government’s willingness to weaponize biometrics in order to surveil vulnerable communities. 

DOJ finalized its October 2019 Notice of Proposed Rulemaking, making no amendments despite receiving over 40,000 public comments (including one by EFF), the overwhelming majority of which opposed the mandatory DNA collection proposal.

The final rule institutionalizes a practice that is a marked departure from prior DNA collection policies. It draws its authority from the DNA Fingerprint Act of 2005, which granted the Attorney General power to direct federal agencies to collect DNA from “individuals who are arrested, facing charges, or convicted or from non-United States persons who are detained under the authority of the United States.” DOJ regulations implementing the Act specifically exempted the Department of Homeland Security (DHS) from collecting DNA from certain classes of non-U.S. persons, including individuals for whom collection is “not feasible because of operational exigencies or resource limitations,” as identified by the DHS Secretary in consultation with the Attorney General. In 2010, then-DHS Secretary Janet Napolitano used that provision to exclude from DNA collection individuals in immigration custody not charged with a crime and individuals awaiting deportation proceedings.

In the final rule, DOJ removes the DHS Secretary’s authority to exclude certain classes of individuals from DNA collection because of resource limitations and only allows the Attorney General to make that determination. DOJ estimates that it will collect nearly 750,000 additional DNA profiles annually from immigrant detainees, which will then be added to the Combined DNA Index System (CODIS), the FBI’s national DNA database.

In January 2020, DHS began planning for this vast DNA collection program, releasing an implementation policy for Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) titled “CBP and ICE DNA Collection.” The policy sets out a five-phase implementation plan over three years. Phase I, which began on January 6, 2020, outlines pilot programs at a Border Patrol sector in Detroit, Michigan, and a port of entry in Eagle Pass, Texas. A subset of CBP officers at these locations collect DNA from immigrants with criminal convictions and from immigrants and U.S. persons (defined as U.S. citizens and legal permanent residents) who are referred for prosecution, including children as young as 14 years old. Subsequent implementation phases permit more CBP officers to collect DNA until Phase V, which allows for DNA collection from all individuals detained under U.S. authority, including people in immigration detention who have never been arrested, charged, or convicted of any criminal offense.

In response to DHS’s implementation policy, U.S. Representatives Rashida Tlaib, Veronica Escobar, and Joaquin Castro sent a letter to the DHS Acting Secretary expressing opposition to the pilot programs. Mandatory DNA collection from immigrants constitutes a privacy invasion, criminalizes immigrant communities, and overburdens federal crime labs, they told DHS. The letter also asked for additional information on the implementation plan, including the privacy protections in place and the administrative burden and backlog the plan will create.

As we highlighted in our comments, DOJ’s final rule marks an unprecedented shift from DNA collection based on a criminal arrest or conviction to DNA collection based on immigration status. After the Supreme Court’s decision in Maryland v. King (2013), which upheld a Maryland statute to collect DNA from individuals arrested for a violent felony offense, states have rapidly expanded DNA collection to encompass more and more offenses, even when DNA is not implicated in the nature of the offense. For example, in Virginia, the ACLU and other advocates fought against a bill that would have added obstruction of justice and shoplifting as offenses for which DNA could be collected. DOJ’s final rule further erodes civil liberties by requiring forcible DNA collection based on false assumptions linking crime to immigration status, despite ample evidence to the contrary.

This DNA collection has serious consequences. Studies have shown that increasing the number of profiles in DNA databases doesn’t solve more crimes. A 2010 RAND report instead stated that the ability of police to solve crimes using DNA is “more strongly related to the number of crime-scene samples than to the number of offender profiles in the database.” There’s no indication that adding nearly 750,000 profiles of immigrant detainees per year will do anything except add more noise to CODIS.

Moreover, inclusion in a DNA database increases the likelihood that an innocent person will be implicated in a crime. We previously wrote about a case where a man was charged with murder during a brutal home invasion because his DNA was found on the victim’s fingernails. In reality, he had been treated by EMTs earlier in the evening, who later responded to the crime scene and likely carried his DNA with them.

The final rule also allows CODIS to indefinitely retain DNA samples from people in immigration detention—even if they later permanently leave the country or adjust their status to become permanent residents or citizens. Indefinite retention creates the opportunity for future misuse, especially since DNA samples reveal ample information about us—from familial relationships to medical history—and may imply characteristics like race and ethnicity. Some have even suggested DNA can reveal intelligence and sexual orientation, although this has been disproved. We’ve seen DNA misuse in the context of genetic genealogy databases, where people voluntarily provide DNA to private companies for ancestry or health analysis, and law enforcement later accesses the database to solve crimes. In 2015, a New Orleans filmmaker was nearly implicated in a cold case murder after police accessed a private genealogy database without a warrant and identified an “exceptionally good match” between the crime scene sample and the filmmaker’s father’s DNA profile.

Lastly, the final rule exacerbates the existing racial disparities in our criminal justice system by subjecting communities of color to genetic surveillance. Black and Latino men are already overrepresented in DNA databases. Adding 750,000 profiles of immigrant detainees annually—who are almost entirely people of color, and the vast majority of whom are Latinx—will further skew the 18 million profiles already in CODIS.
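
To put that scale in perspective, here is a rough calculation using only the figures cited above (nearly 750,000 added profiles per year against roughly 18 million existing CODIS profiles). It ignores growth in the rest of the database, so treat it as an illustration rather than a projection.

```python
# Rough arithmetic from the figures cited above: ~750,000 new detainee profiles
# per year versus ~18 million profiles already in CODIS. Growth of the rest of
# the database is ignored, so this is an illustration, not a projection.

existing_profiles = 18_000_000
added_per_year = 750_000

for years in (1, 5, 10):
    added = added_per_year * years
    share = 100 * added / (existing_profiles + added)
    print(f"After {years:2d} year(s): {added:,} detainee profiles "
          f"~ {share:.1f}% of the database")
```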

The final rule is yet another example of the government weaponizing biometrics as a form of surveillance of vulnerable communities. This dangerous expansion of DNA collection brings us one step closer to genetic surveillance of the entire population.

Announcing Who Has Your Face


This post is by Jason Kelley from Deeplinks

The government and law enforcement should not be scanning your photos with face recognition technology. But right now, at least half of Americans are likely in government face recognition databases—often thanks to secretive agreements between state and federal government agencies—without any of us having opted in. Although the majority of Americans are in these databases, it’s nearly impossible to know whether or not your photo has been included. Today, EFF is launching a new project to help fight back: Who Has Your Face.

Who Has Your Face includes a short quiz that you can use to learn which U.S. government agencies may have access to your photo for facial recognition purposes, as well as a longer resource page describing in detail the photo sharing we discovered. This project is a collaboration with the Center on Privacy & Technology at Georgetown Law, and it aims to shine a light on the photo sharing that has allowed the Department of Homeland Security, Immigration and Customs Enforcement, dozens of Departments of Motor Vehicles, the Federal Bureau of Investigation, law enforcement, and many other agencies to use face surveillance on millions of people without their knowledge.

Data-sharing agreements made between agencies with little or no room for input from those they affect violate the privacy of thousands of people every day

This work builds on The Perpetual Lineup, a project of the Center on Privacy & Technology at Georgetown Law, and on EFF’s research on the growth of government databases like this. To bring this project to you, we reviewed thousands of pages of public records to determine as clearly as possible which government photos of U.S. citizens, residents, and travelers are shared with which agencies for facial recognition purposes.

A screenshot of the Who Has Your Face Quiz Results

After answering a few short questions, Who Has Your Face will list agencies that likely have access to your image
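
Conceptually, a quiz like this maps a handful of answers onto the agencies that likely hold a matching photo. The sketch below is a hypothetical illustration of that kind of mapping; the rules and names in it are placeholders, not the actual logic behind Who Has Your Face, which is grounded in EFF's public-records research.

```python
# Hypothetical illustration of mapping a few quiz answers to agencies that may
# hold a matching photo. These rules are placeholders for illustration only.

def likely_agencies(state_dmv_uses_face_recognition, has_passport, has_visa_or_green_card):
    agencies = []
    if state_dmv_uses_face_recognition:
        agencies.append("State DMV (and agencies it shares photos with)")
    if has_passport:
        agencies.append("State Department")
    if has_visa_or_green_card:
        agencies.append("Department of Homeland Security")
    return agencies

print(likely_agencies(state_dmv_uses_face_recognition=True,
                      has_passport=True,
                      has_visa_or_green_card=False))
# ['State DMV (and agencies it shares photos with)', 'State Department']
```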

Individuals Don’t Know They’re In Facial Recognition Databases and Can’t Opt Out

As U.S. government agencies have increased the type of information they collect on individuals, expanding from fingerprints to faceprints, and adding voice data, DNA, scars and tattoos, they’ve also hoovered up more and more information from individuals without their knowledge. Much of this is collected during fairly common practices like applying for a driver’s license. 

The number of people affected by face recognition is staggering: We count at least 27 states where the FBI can search or request data from driver’s license and ID databases. In June of last year, the Government Accountability Office reported only 21. The total number of DMVs with facial recognition is now at least 43, with only four of those limiting data sharing entirely. That puts two-thirds of the population of the U.S. at risk of misidentification, with no choice to opt out. That number is unconscionable. These data-sharing agreements—made between agencies and with little or no room for input from those they affect—violate the privacy of thousands of people every day. 

Data sharing is especially dangerous for vulnerable individuals and populations, and is especially egregious in some states: in Maryland, for example, undocumented individuals are allowed driver’s licenses and IDs, but data sharing agreements also allow ICE to use face recognition on those DMV databases. This turns the legal protection of a driver’s license into a way for ICE to target undocumented individuals for deportation. Florida—the third most populous state in the nation—has the longest-running facial recognition database in the country, and offers over 250 agencies access to DMV photos for facial recognition purposes. 

Lack of Transparency Thwarts Attempts to Learn Who’s At Risk

Despite hundreds of hours of research, it’s still not possible to know precisely which agencies are sharing which photos, and with whom. Each agency across the U.S., from state DMVs to the State Department, shares access to its photos differently, depending on agreements with local police, other states, and federal agencies. We were continuously thwarted in our research by non-responsive government agencies, conflicting information and agreements, and the generally covert nature of these policies. This is a huge problem: it should be easy to learn who has the personal data that you’ve been required to hand over in exchange for a driver’s license, or for re-entry into the country after visiting family in a foreign nation.

But agencies all responded differently to requests for transparency. When sent the same public records request, some DMVs gave the precise number of facial recognition requests they had received from outside agencies but not which agencies sent them—for example, Wisconsin’s DMV received 238 requests in 2016; Nevada received 788 requests between June 14, 2015, and March 8, 2018. Other DMVs responded with who had made requests and how many: a list of agency requests to Utah’s Department of Public Safety included Immigration and Customs Enforcement, the Department of Homeland Security, various state Fusion Centers, state Secret Service agencies, and the United States Office of National Drug Control Policy. Utah also responded with data about how successful the requests had been.

A screenshot of a spreadsheet of facial recognition requests to the Utah Statewide Information & Analysis Center, with columns for Date Submitted, Ticket ID, Agency, Other, Case Number, and Query Results

A spreadsheet of facial recognition requests to the Utah Statewide Information & Analysis Center

Still others did not respond, regarded the questions as overbroad, or claimed to have no responsive records. Alabama’s DMV, for example, essentially ignored the request until we sent an example of the Memorandum of Understanding we believed it had signed.

Reports also contradict one another. The American Association of Motor Vehicle Administrators (AAMVA), a tax-exempt non-profit that serves as an “information clearinghouse” for Departments of Motor Vehicles across the United States and allows members to interactively request and verify license and ID applicants’ images, reported just three months ago that Idaho’s Transportation Department and Oklahoma’s Department of Public Safety have facial recognition, yet both of those states responded to our requests by saying they did not.

Another area of confusion: three of the states confirmed to take part in AAMVA’s National Digital Exchange Program do not have facial recognition systems, so whether they comply with those agreements is unclear. The REAL ID Act, which requires state licenses to adhere to certain uniform standards if they are to be accepted for some federal purposes, also complicates matters. Many states interpret it as requiring them to provide electronic access for all other states to information contained in their motor vehicle databases, and to offer some access to federal agencies such as DHS or ICE. But in some states, sharing this data with the federal government is explicitly forbidden by law. In Utah, for example, the state DMV granted federal access to its database despite the state legislature rejecting the federal info-sharing required under REAL ID.

This level of confusion and obfuscation is, frankly, unacceptable. It should be simple for anyone to learn who has their private, biometric data, and we must work to make it easier.

It’s Time to Ban Government Use of Face Surveillance

Lack of transparency is, of course, only part of the problem. Face surveillance is a growing menace to our privacy even when the agencies with access to the technology are clear about it. Police body-worn cameras with face surveillance can record the words, deeds, and locations of much of the population at any given time. The Department of Homeland Security and Customs and Border Protection can use face surveillance to track individuals throughout their travels. Government use of face recognition data collected from private companies, like Clearview AI, poses additional threats. Government must not be allowed to implement this always-on panopticon.

Thankfully, more and more laws that ban government use of this technology are passing around the country. In addition to the several states that currently don’t allow or don’t have face recognition at DMVs (California, Idaho, Louisiana, Missouri, New Hampshire, Oklahoma, Virginia, and Wyoming), cities like San Francisco, Berkeley, and Oakland in California, and Somerville in Massachusetts have also passed bans on its use by city governments. California has even passed a moratorium on government use of face recognition with mobile cameras. As more cities pass these bans, we hope more states join in protecting their residents, and in being transparent about who has access to every technology that could endanger civil liberties. It’s time to ban government use of face surveillance. 

Learn more about who has your face by visiting Who Has Your Face. To help ban government use of face recognition in your city, visit our About Face campaign.

Clearview AI—Yet Another Example of Why We Need A Ban on Law Enforcement Use of Face Recognition Now


This post is by Jennifer Lynch from Deeplinks

This week, additional stories came out about Clearview AI, the company we wrote about earlier that’s marketing a powerful facial recognition tool to law enforcement. These stories discuss some of the police departments around the country that have been secretly using Clearview’s technology, and they show, yet again, why we need strict federal, state, and local laws that ban—or at least press pause—on law enforcement use of face recognition.

Clearview’s service allows law enforcement officers to upload a photo of an unidentified person to its database and see publicly-posted photos of that person along with links to where those photos were posted on the internet. This could allow the police to learn that person’s identity along with significant and highly personal information. Clearview claims to have amassed a dataset of over three billion face images by scraping millions of websites, including news sites and sites like Facebook, YouTube, and Venmo. Clearview’s technology doesn’t appear to be limited to static photos but can also scan for faces in videos on social media sites.
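
For readers unfamiliar with how such search-by-photo systems generally work, the sketch below shows the typical pattern: each collected photo is reduced to a fixed-length face embedding, and a probe photo is matched by nearest-neighbor search over those vectors. Everything here is a generic, hypothetical illustration (random placeholder vectors, made-up URLs, hypothetical function names); it does not describe Clearview's actual implementation.

```python
# Generic search-by-photo pattern: match a probe face embedding against a
# gallery of embeddings using cosine similarity. The vectors and URLs below
# are placeholders; no real embedding model is used here.
import numpy as np

rng = np.random.default_rng(0)
gallery = rng.normal(size=(10_000, 512))            # placeholder embeddings for scraped photos
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
urls = [f"https://example.com/photo/{i}" for i in range(len(gallery))]  # hypothetical sources

def search(probe_embedding, top_k=5):
    probe = probe_embedding / np.linalg.norm(probe_embedding)
    scores = gallery @ probe                        # cosine similarity against every gallery vector
    best = np.argsort(scores)[::-1][:top_k]         # indices of the top-k most similar photos
    return [(urls[i], float(scores[i])) for i in best]

probe = rng.normal(size=512)                        # stand-in for an embedded probe photo
for url, score in search(probe):
    print(f"{score:.3f}  {url}")
```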

Clearview has been actively marketing its face recognition technology to law enforcement, and it claims more than 1,000 agencies around the country have used its services. But up until last week, most of the general public had never even heard of the company. Even the New Jersey Attorney General was surprised to learn—after reading the New York Times article that broke the story—that officers in his own state were using the technology, and that Clearview was using his image to sell its services to other agencies.
