Mapped: The State of Facial Recognition Around the World


This post is by Iman Ghosh from Visual Capitalist


From public CCTV cameras to biometric identification systems in airports, facial recognition technology is now common in a growing number of places around the world.

In its most benign form, facial recognition technology is a convenient way to unlock your smartphone. At the state level though, facial recognition is a key component of mass surveillance, and it already touches half the global population on a regular basis.

Today’s visualizations from Surfshark classify 194 countries and regions based on the extent of facial recognition surveillance.

Facial recognition status (total countries):
In use: 98
Approved, but not implemented: 12
Considering technology: 13
No evidence of use: 68
Banned: 3


Let’s dive into the ways facial recognition technology is used across every region.

North America, Central America, and Caribbean

In the U.S., a 2016 study showed that half of American adults had already been captured in some kind of facial recognition network. More recently, the Department of Homeland Security unveiled its “Biometric Exit” plan, which aims to use facial recognition technology on nearly all air travel passengers by 2023 to verify compliance with visa requirements.

Facial Recognition North America Map

Perhaps surprisingly, 59% of Americans find the use of facial recognition technology by law enforcement acceptable, according to a Pew Research survey. Yet some cities, such as San Francisco, have pushed to ban the technology, citing its potential for abuse by the government.

Facial recognition technology can potentially come in handy after a natural disaster. After Hurricane Dorian hit in late summer of 2019, the Bahamas launched a blockchain-based missing persons database “FindMeBahamas” to identify thousands of displaced people.

South America

The majority of facial recognition technology in South America is aimed at cracking down on crime. In fact, it worked in Brazil to capture Interpol’s second-most wanted criminal.

Facial Recognition South America Map

Home to over 209 million people, Brazil plans to create a biometric database of its citizens soon. However, some are nervous that this could also serve as a means to suppress dissent against the current political order.

Europe

Belgium and Luxembourg are two of only three governments in the world to officially oppose the use of facial recognition technology.

Facial Recognition Europe Map

Further, 80% of Europeans are not keen on sharing facial data with authorities. Despite such negative sentiment, it’s still in use across 26 European countries to date.

The EU has been a haven for unlawful biometric experimentation and surveillance.

—European Digital Rights (EDRi)

In Russia, authorities have relied on facial recognition technology to check for breaches of quarantine rules by potential COVID-19 carriers. In Moscow alone, there are reportedly over 100,000 facial recognition-enabled cameras in operation.

Middle East and Central Asia

Facial recognition technology is widespread in this region, notably for military purposes.

Facial Recognition Middle East and Central Asia Map

In Turkey, 30 domestically developed kamikaze drones will use AI and facial recognition for border security. Similarly, Israel keeps a close eye on Palestinians passing through 27 West Bank checkpoints.

In other parts of the region, police in the UAE have purchased discreet smart glasses that can be used to scan crowds, where positive matches show up on an embedded lens display. Over in Kazakhstan, facial recognition technology could replace public transportation passes entirely.

East Asia and Oceania

In the COVID-19 battle, contact tracing through biometric identification became a common tool to slow the infection rates in countries such as China, South Korea, Taiwan, and Singapore. In some instances, this included the use of facial recognition technology to monitor temperatures as well as spot those without a mask.

Facial Recognition East Asia Oceania Map

That said, questions remain about whether the pandemic panopticon will stop there.

China is often cited as a notorious use case of mass surveillance, and the country has the highest ratio of CCTV cameras to citizens in the world—one for every 12 people. By 2023, China will be the single biggest player in the global facial recognition market. And it’s not just implementing the technology at home; it’s exporting it, too.

Africa

While the African continent currently has the lowest concentration of facial recognition technology in use, this deficit may not last for long.

Facial Recognition World Map

Several African countries, such as Kenya and Uganda, have received telecommunications and surveillance financing and infrastructure from Chinese companies—Huawei in particular. While the company claims this has enabled regional crime rates to plummet, some activists are wary of the partnership.

Whether you approach facial recognition technology from a public and national security lens or from an individual liberty perspective, it’s clear that this kind of surveillance is here to stay.



EFF to UN Expert on Racial Discrimination: Mass Border Surveillance Hurts Vulnerable Communities


This post is by Matthew Guariglia from Deeplinks

EFF submitted a letter to the United Nations’ Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance to testify to the negative impacts of mass surveillance on vulnerable communities at the U.S. border. The Special Rapporteur called for submissions on “Race, Borders, and Digital Technologies” that examine the harmful effects of electronic surveillance on vulnerable communities and free movement at the border. These submissions will inform the Special Rapporteur’s 2020 thematic report to the U.N. General Assembly about how digital technologies used for border enforcement and administration reproduce, reinforce, and compound racial discrimination.

Ms. E. Tendayi Achiume was appointed the 5th Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance in 2017. In the United Nations, Special Rapporteurs are independent experts appointed by the U.N. Human Rights Council who serve in a personal capacity and report on human rights from a thematic or country-specific perspective. Special Rapporteurs also report back annually to the U.N. General Assembly (which is made up of 193 Member States). With the support of the U.N. Office of the High Commissioner on Human Rights, Special Rapporteurs undertake country visits, intervene directly with States on alleged human rights violations, and conduct thematic studies like this report.

In our submission, we explained that EFF has spent the last several years expanding our expertise in mapping, litigation, research, and advocacy against the use of digital surveillance technologies at the U.S. borders. The Atlas of Surveillance: Southwestern Border Communities project, published in partnership with the Reynolds School of Journalism at the University of Nevada, Reno, found that dozens of law enforcement agencies along the U.S.-Mexico border use biometric surveillance, automated license plate readers, aerial surveillance, and other technologies that not only track migration across the border, but also constantly surveil the diverse communities that live in the border region.  

Litigation is one tool EFF has used to fight back against invasive surveillance at the border and the government secrecy that hides it. In our case, Alasaad v. Wolf, we worked with the national ACLU and the ACLU of Massachusetts to challenge the government’s warrantless, suspicionless searches of electronic devices at the U.S. border. We argued that warrantless border searches of electronic devices constitute grave privacy invasions because of the vast amount of personal information that can be revealed by a search of an individual’s electronic devices, such as smartphones and laptops. In November 2019, a Massachusetts federal district court held that the government must have reasonable suspicion that an electronic device contains digital contraband in order to conduct a border search. While this is not the warrant standard we had argued for, the court’s ruling is the most rights-protective decision in the country on searches of electronic devices at the border. Alasaad is currently on appeal in the U.S. Court of Appeals for the First Circuit. In addition, EFF has two ongoing Freedom of Information Act (FOIA) lawsuits regarding the border: one on GPS tracking at the border, and the other on Rapid DNA testing of migrant families at the border.

Our letter also highlights EFF’s successful advocacy with the California Attorney General’s Office to classify immigration enforcement as a form of misuse of the California Law Enforcement Telecommunications System (CLETS). As a result of this change in policy, U.S. Immigration and Customs Enforcement (ICE) was altogether barred from using CLETS.

We hope that our submission adds to the United Nations and the larger international community’s understanding of the vast surveillance systems being set up and deployed at the U.S. border, and the disproportionate impact of these technologies on vulnerable communities. 

Skyflow raises $7.5M to build its privacy API business


This post is by Alex Wilhelm from Fundings & Exits – TechCrunch

Skyflow, a Mountain View-based privacy API company, announced this morning that it has closed a $7.5 million round of capital it describes as a seed investment. Foundation Capital’s Ashu Garg led the round, with the company touting smaller checks from Jeff Immelt (former GE CEO) and Jonathan Bush (former AthenaHealth CEO).

For Skyflow, founded in 2019, the capital raise and its constituent announcement mark an exit from quasi-stealth mode.

TechCrunch knew a little about Skyflow before it announced its seed round because one of its co-founders, Anshu Sharma, is a former Salesforce executive and former venture partner at Storm Ventures, a venture capital firm that focuses on enterprise SaaS businesses. That he left the venture world to eventually found something new caught our eye.

Sharma co-founded the company with Prakash Khot, another former Salesforce denizen.

So what is Skyflow? In a sense, it’s the nexus of two trends: the growing importance of data security (privacy, in other words) and API-based companies. Skyflow’s product is an API that allows its customers — businesses, not individuals — to store sensitive user information, like Social Security numbers, securely.
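To make the “vault behind an API” idea concrete, here is a minimal sketch of the general tokenization pattern such services follow: the application keeps only an opaque token, while the raw value stays inside the vault and is released only to authorized callers. The class and method names below are hypothetical illustrations, not Skyflow’s actual API.

```python
import secrets

class DataVault:
    """Toy illustration of the privacy-vault pattern (hypothetical, not Skyflow's API)."""

    def __init__(self):
        # token -> raw value; in a real service this lives behind the vendor's API,
        # never in the customer's own database
        self._store = {}

    def tokenize(self, value: str) -> str:
        """Swap a sensitive value for an opaque token the app can store freely."""
        token = "tok_" + secrets.token_hex(8)
        self._store[token] = value
        return token

    def detokenize(self, token: str, caller_is_authorized: bool) -> str:
        """Return the raw value only to authorized callers (real vaults enforce
        fine-grained access policies; a single flag stands in for that here)."""
        if not caller_is_authorized:
            raise PermissionError("caller may not read raw values")
        return self._store[token]

vault = DataVault()
token = vault.tokenize("123-45-6789")                      # the app stores only the token
print(token)                                               # e.g. tok_9f2c4a1b0d3e5f67
print(vault.detokenize(token, caller_is_authorized=True))  # 123-45-6789
```

The point of the design is that a breach of the customer’s own systems exposes only tokens, which are useless without access to the vault itself.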

Chatting with Sharma in advance of the funding, the CEO told TechCrunch that many providers of cybersecurity solutions today sell products that raise a company’s walls a little higher against certain threats. Once those walls are breached, however, the data stored inside is exposed. Skyflow wants to make sure that its customers cannot lose your personal information.

Sharma likened Skyflow to other API companies that work to take complex services — Twilio’s telephony API, Stripe’s payments API, and so forth — and provide a simple endpoint for companies to hook into, giving them access to something hard with ease.

Comparing his company’s product to privacy-focused solutions like Apple Pay, the CEO said in a release that “Skyflow has taken a similar approach to all the sensitive data so companies can run their workflows, analytics and machine learning to serve the customer, but do so without exposing the data as a result of a potential theft or breach.”

It’s an interesting idea. If the technology works as promised, Skyflow could help a host of companies that either can’t afford, or simply can’t be bothered, to properly protect your data that they have collected.

If you are not still furious with Equifax, a company that decided that it was a fine idea to collect your personal information so it could grade you and then lost “hundreds of millions of customer records,” Skyflow might not excite you. But if the law is willing to let firms leak your data with little punishment, tooling to help companies be a bit less awful concerning data security is welcome.

Skyflow is not the only API-based company that has raised recently. Daily.co picked up funds recently for its video-chatting API, FalconX raised money for its crypto pricing and trading API, and CNBC reported today that another privacy-focused API company called Evervault has also taken on capital.

Skyflow’s model, however, may differ a little from how other API-built companies have priced themselves. Given that the data it will store for customers isn’t accessed as often as, say, a customer might ping Twilio’s API, Skyflow won’t charge usage rates for its product. After discussing the topic with Sharma, our impression is that Skyflow, once it formally launches its service commercially, will look something like a SaaS business.

The cloud isn’t coming, it’s here. And companies are awful at cybersecurity. Skyflow is betting its engineering-heavy team can make that better, while making money. Let’s see.

Victory! German Mass Surveillance Abroad is Ruled Unconstitutional


This post is by Matthew Guariglia from Deeplinks

In a landmark decision, the German Constitutional Court has ruled that mass surveillance of telecommunications conducted outside of Germany on foreign nationals is unconstitutional. Thanks to the chief legal counsel, the Gesellschaft für Freiheitsrechte (GFF), this is a major victory for global civil liberties, but especially for those who live and work in Europe. Many will now be protected after lackluster 2016 surveillance reforms continued to authorize surveillance of EU states and institutions for the purpose of “foreign policy and security” and permitted the BND to collaborate with the NSA.

In its press release about the decision, the court found that the privacy rights in the German constitution also protect foreigners in other countries, and that the German intelligence agency, the Bundesnachrichtendienst (BND), had no authority to conduct telecommunications surveillance on them:

 “The Court held that under Art. 1(3) GG German state authority is bound by the fundamental rights of the Basic Law not only within the German territory. At least Art. 10(1) and Art. 5(1) second sentence GG, which afford protection against telecommunications surveillance as rights against state interference, also protect foreigners in other countries. This applies irrespective of whether surveillance is conducted from within Germany or from abroad. As the legislator assumed that fundamental rights were not applicable in this matter, the legal requirements arising from these fundamental rights were not satisfied, neither formally nor substantively.”

 The court also decided that as currently structured, there was no way for the BND to restrict the type of data collected and who it was being collected from. Unrestricted mass surveillance posed a particular threat to the rights and safety of lawyers, journalists and their sources and clients:

“In particular, the surveillance is not restricted to sufficiently specific purposes and thereby structured in a way that allows for oversight and control; various safeguards are lacking as well, for example with respect to the protection of journalists or lawyers. Regarding the transfer of data, the shortcomings include the lack of a limitation to sufficiently weighty legal interests and of sufficient thresholds as requirements for data transfers. Accordingly, the provisions governing cooperation with foreign intelligence services do not contain sufficient restrictions or safeguards. The powers under review also lack an extensive independent oversight regime. Such a regime must be designed as continual legal oversight that allows for comprehensive oversight and control of the surveillance process.”

The ruling comes after a coalition of media and activist organizations, including the Gesellschaft für Freiheitsrechte, filed a constitutional complaint against the BND for its dragnet collection and storage of telecommunications data. One of the leading arguments against massive data collection by the foreign intelligence service is the fear that sensitive communications between sources and journalists may be swept up and made accessible by the government. Surveillance which, purposefully or inadvertently, sweeps up the messages of journalists jeopardizes the integrity and health of a free and functioning press, and could chill the willingness of sources or whistleblowers to expose corruption or wrongdoing in the country. In September 2019, based on similar concerns about the surveillance of journalists, South Africa’s High Court issued a watershed ruling that the country’s laws do not authorize bulk surveillance, in part because there were no special protections to ensure that the communications of lawyers and journalists were not also swept up and stored by the government.

In EFF’s own landmark case against the NSA’s dragnet surveillance program, Jewel v. NSA, the Reporters Committee for Freedom of the Press recently filed an Amicus brief making similar arguments about surveillance in the United States. “When the threat of surveillance reaches these sources,” the brief argues, “there is a real chilling effect on quality reporting and the flow of information to the public.” The NSA is also guilty of carrying out mass surveillance of foreigners abroad in much the same way that the BND was just told it can no longer do. 

Victories in Germany and South Africa may seem like a step in the right direction toward pressuring the United States judicial system to make similar decisions, but state secrecy remains a major hurdle. In the United States, our lawsuit against NSA mass surveillance is being held up by the government’s argument that it cannot submit into evidence any of the requisite documents necessary to adjudicate the case. In Germany, the BND Act and its sibling, the G10 Act, as well as their technological underpinnings, are openly discussed, making it easier to confront their legality.

The German government now has until the end of 2021 to amend the BND Act to make it compliant with the court’s ruling.

EFF offers our hearty congratulations to the lawyers, activists, journalists, and concerned citizens who worked very hard to bring this case before the court. We hope that this victory is just one of many we are, and will be, celebrating as we continue to fight together to dismantle global mass surveillance.

Security Expert Tadayoshi Kohno Joins EFF Advisory Board


This post is by Karen Gullo from Deeplinks

EFF is proud to announce a new addition to our crack advisory board: security expert and scholar Tadayoshi Kohno. A professor at the University of Washington’s Paul G. Allen School of Computer Science & Engineering, Kohno is a researcher whose work focuses on identifying and fixing security flaws in emerging technologies, the Internet, and the cloud.

Kohno examines and tests software and networks with the goal of developing solutions to security and privacy risks before those risks become a threat. His research focuses on helping protect the security, privacy, and safety of users of current and future generation technologies.

Kohno has revealed security flaws in electronic voting machines, implantable cardiac defibrillators and pacemakers, and automobiles. He recently studied flaws in augmented reality (AR) apps, and last year co-developed a tool for developers to build secure multi-user AR platforms. A 2019 report he co-authored about the genealogy site GEDmatch, which was used to find the Golden State Killer, showed that the site was vulnerable to multiple security risks that could allow bad actors to create fake genetic profiles and falsely appear as a relative to people in the GEDmatch database.

Kohno has spent the last 20 years working to raise awareness about computer security among students, industry leaders, and policy makers. He is the recipient of an Alfred P. Sloan Research Fellowship, a U.S. National Science Foundation CAREER Award, and a Technology Review TR-35 Young Innovator Award. He has presented his research to the U.S. House of Representatives, and had his research profiled in the NOVA ScienceNOW “Can Science Stop Crime?” documentary and the NOVA “CyberWar Threat” documentary. Kohno received his Ph.D. from the University of California at San Diego, where he earned the department’s Doctoral Dissertation Award.

We’re thrilled that Kohno has joined EFF’s advisory board.

 

Second Paraguay Who Defends Your Data? Report: ISPs Still Have a Long Way to Go on Public Commitments to Privacy and Transparency


This post is by Veridiana Alimonti from Deeplinks

Keeping track of ISPs’ commitments to their users, Paraguay’s leading digital rights organization TEDIC is today launching the second edition of ¿Quién Defiende Tus Datos? (Who Defends Your Data?), a report produced in collaboration with EFF. Transparent practices and firm privacy commitments are particularly crucial right now. During times of crisis and emergency, companies must, more than ever, show that users can trust them with sensitive information about their habits and communications. While Paraguayan ISPs have made progress with their privacy policies and have taken part in forums pledging the promotion of human rights, they still have a long way to go to give users what is needed to fully build this trust.

Paraguayan ISPs should make greater efforts in being transparent about their practices and procedures as well as having stronger public commitments to their users, such as taking steps to notify users about government data requests.

Overall, Tigo remains the best-ranked company in the report, followed by Claro and Personal. Copaco and Vox received the worst ratings. The second edition brings two new categories: assessing whether companies have publicly available guidelines for law enforcement requests, and whether their privacy policies and terms of service are provided following proper web accessibility standards. This year’s report focuses on telecommunication companies with more than fifteen thousand internet users across the country, which together represent the whole base of mobile broadband customers (except for Copaco, which only provides fixed services).

The full study is available in Spanish, and we outline the main findings below.

Main Findings

Each ISP was evaluated in the following seven categories: privacy policies, judicial order, user notification, policies for promoting political commitments, transparency, law enforcement guidelines, and accessibility standards.

Regarding privacy policies, this edition looked into companies’ publicly available documents and checked whether they provided clear and easily accessible information about personal data collection, processing, and sharing with third parties, as well as the retention time and security practices. While no company scored in the previous report, more than half of them showed improvements in this year’s edition. Tigo stands out with a full star, followed closely by Claro’s privacy policies. Claro did not earn the full star, as it failed to provide sufficient information on how personal data are collected and stored. Personal also received a partial score for publishing policies that properly detail how users’ data are shared with third parties.

When it comes to requiring a warrant before handing over users’ communications content to law enforcement authorities, Tigo is the only ISP to clearly and publicly commit to doing so. Claro stated that the company complies with applicable legislation, judicial proceedings, and government requests. TEDIC’s report highlights that, in response to the research team, Claro and other companies claimed they do request judicial authorization before handing over communications content. Yet these claims are still not reflected in the companies’ public, verifiable policies.

Regarding government access to traffic data, a 2010 Supreme Court ruling authorized prosecutors to request such data directly, despite the country’s telecommunications law asserting that the constitutional safeguard of inviolability of communications covers not only the content itself, but also anything that indicates the existence of a communication, which would include traffic data. The 2010 ruling has been applied to the online context, also running afoul of Inter-American Court of Human Rights case law recognizing that communications metadata should receive the same level of protection granted to content. TEDIC’s report recommends that companies publicly commit to requesting judicial authorization before handing metadata to authorities. Clarifying this discrepancy in favor of users’ privacy is still a challenge, and companies should play a greater role in taking it on and fighting for their users in courts or in Congress.

Tigo is the only ISP to receive partial stars in the transparency and law enforcement guidelines categories for documents published by its parent corporation Millicom. Regarding the transparency report, Millicom falls short of providing detailed information for Paraguay. The report aggregates data per region, disclosing statistical figures for interception and metadata that merge the requests received in Paraguay, Colombia, and Bolivia. Transparency reports are valuable tools for providing insight into how often governments request data and how companies respond to it, but this is not the case if the figures for each country are not disclosed.

However, Millicom does provide relevant insight when it states that Paraguay’s authorities mandate direct access to their mobile network, though it doesn’t specify the legal ground that compels companies to do so.

As for law enforcement guidelines, Millicom publishes global key steps that its subsidiaries must follow when complying with government requests, but the ISP doesn’t make available to the public its detailed global and locally tailored procedures.

Getting companies’ commitment to notify users about government data requests remains a hard challenge. Just like in the last edition of the report, no company received credit in this category. While international human rights standards reinforce how crucial user notification is to ensure due process and effective remedies, ISPs are usually reluctant to take steps towards putting a proper notification procedure in place.

Three out of five companies (Claro, Tigo, and Personal) scored in the web accessibility category, though there is still room for improvement.   

TEDIC’s work is part of a larger initiative across Latin America and Spain, kicked off in 2015 and inspired by EFF’s Who Has Your Back? project. Earlier this year, both Fundación Karisma in Colombia and ADC in Argentina published new reports. The second edition from Eticas Foundation in Spain comes next, with new installments in Panamá, Peru, and Brazil already in the pipeline.

 

Using Drones to Fight COVID-19 is the Slipperiest of All Slopes


This post is by Matthew Guariglia from Deeplinks

As governments search in vain for a technological silver bullet that will contain COVID-19 and allow people to safely leave their homes, officials are increasingly turning to drones. Some have floated using them to enforce social distancing, break up or monitor places where gatherings of people are occurring, identify infected people with supposed “fever detecting” thermal imaging, or even assist in contact tracing by way of face recognition and mass surveillance.

Any current buy-up of drones would constitute a classic example of how law enforcement and other government agencies often use crises in order to justify the expenditures and negate the public backlash that comes along with buying surveillance equipment. For years, the LAPD, the NYPD, and other police departments across the country have been fighting the backlash from concerned residents over their acquisitions of surveillance drones. These drones present a particular threat to free speech and political participation. Police departments often deploy them above protests, large public gatherings, and on other occasions where people might practice their First Amendment-protected rights to speech, association, and assembly.

The threats to civil liberties created by drones increase exponentially if those drones are, as some current plans propose, equipped with the ability to conduct face surveillance. If police now start to use drones to identify people who are violating quarantine and walking around in public after testing positive for COVID-19, police can easily use the same drones to identify participants in protests or strikes once the crisis is over. Likewise, we oppose the attachment of thermal imaging cameras to government drones, because the government has failed to show that such cameras are sufficiently accurate to remotely determine whether a person has a fever. Yet, police could use these cameras to identify the whereabouts of protesters in public places at nighttime. 

Some have suggested that drones may be a useful way of monitoring the density of gatherings in public places, like jogging areas, or a safer alternative to sending an actual person to determine crowd density. EFF has clear guidelines to evaluate such proposals: Would it work? Is it too invasive? Are there sufficient safeguards? So to start, we’d want to hear from public health experts that drones would be effective for this purpose. Further, we’d want guarantees that such drones are part of a temporary public health approach to social distancing, and not a permanent criminal justice approach to gatherings in public places. For example, there would need to be guidelines that allow only public health officials, rather than law enforcement, access to the drones. The drones should also never be equipped with face recognition or inaccurate “fever detecting” thermal cameras. They should not be used to photograph or otherwise identify individual people. And no government agency should use this moment to purchase new drones; the initial expense all but ensures that agencies will find excuses to use them for other purposes in the future. We don’t want more government drones flying over concerts and rallies in the not-so-distant future.

Police surveillance technology is disproportionately deployed against people of color, undocumented people, unhoused individuals, and other vulnerable populations. Having drones become part of the criminal justice apparatus, rather than being controlled by public health officials with no punitive focus, runs the risk of further over-policing of already over-policed neighborhoods and of increasing the racially biased issuance of fines, summonses, and in-person harassment. If drones must be deployed at all, they need firm guardrails to avoid disproportionately impacting specific communities.

As always, local police and public health officials should not acquire new surveillance technologies, or use old surveillance technologies in new ways, without first asking for permission from their city councils or other legislative authorities. Those bodies should hear from the public before deciding. If the civil liberties costs outweigh the public health benefits, new spy tech should be rejected. During the COVID-19 crisis, community control of government surveillance technologies, including drones, is more important than ever.  

Even in a time of crisis, we must not normalize policing by robot. Videos of Italian mayors using drones with speakers to shout at people defying shelter-in-place orders are supposed to be funny, but we find them alarming. People often turn toward first responders at the worst moments of their lives. We should not be getting people used to, or even amused by, outsourcing more and more of the necessary human side of policing to robots.

The Dangers of COVID-19 Surveillance Proposals to the Future of Protest


This post is by Matthew Guariglia from Deeplinks

Many of the new surveillance powers now sought by the government to address the COVID-19 crisis would harm our First Amendment rights for years to come. People will be chilled and deterred from speaking out, protesting in public places, and associating with like-minded advocates if they fear scrutiny from cameras, drones, face recognition, thermal imaging, and location trackers. It is all too easy for governments to redeploy the infrastructure of surveillance from pandemic containment to political spying. It won’t be easy to get the government to suspend its newly acquired tech and surveillance powers.

When this wave of the public health emergency is over and it becomes safe for most people to leave their homes, they may find a world with even more political debate than when they left it. A likely global recession, a new election season, and re-energized social movements will provide an overwhelming incentive for record numbers of people to speak out, to demonstrate in public places, and to demand concessions of their governments. The pent-up urge to take to the streets may bring mass protests like we have not seen in years. And what impact would new surveillance tools, adopted in the name of public health, have on this new era of marches, demonstrations, and strikes?

The collection and sharing of phone location data, sold and deployed in order to trace the spread of the virus, could be used by a reigning administration to crack down on dissent. The government and vendors have yet to make a convincing argument for how this measure would contribute to the public health effort. Indeed, they cannot, because GPS data and cell site location information are not sufficiently granular to show whether two people were close enough together to transmit the virus (six feet). But this data is sufficiently precise to show whether a person attended a protest in a park, picketed in front of a factory, or traveled at night to the block where a dissident lives.

Many other technologies that should never be deployed to prevent the spread of the virus would also harm free speech. Vendors are seeking to sell face recognition cameras to the government to alert authorities if someone in mandatory quarantine went grocery shopping. They could just as easily be used to identify picketers opposing government initiatives or journalists meeting with confidential sources. For example, the infamous face surveillance company Clearview AI is in talks with the government to create a system that would use face recognition in public places to identify unknown people who may have been infected by a known carrier. This proposal would create a massive surveillance infrastructure, linked to billions of social media images, that could allow the government to readily identify people in public spaces, including protesters, by scanning footage of them against images found online. Likewise, thermal imaging cameras in public places will not be an effective means of finding people with a fever, given the high error rate when calculating a person’s temperature at a distance. But police might be able to use such cameras to find protesters who have fled on foot from police engaged in excessive force against peaceful gatherings.

The U.S. government is not known for its inclination to give back surveillance powers seized during extraordinary moments. Once used in acute circumstances, a tool stays in the toolbox until it is taken away. The government did not relinquish the power to tear gas protesters after the National Guard was called in to break up the Bonus Marchers assembled in the capital during the Great Depression. Only after decades of clandestine use did the American people learn about the ways the FBI misused the threat of Communism to justify the wholesale harassment, surveillance, and sabotage of civil rights leaders and anti-war protesters. The revelation of these activities resulted in Sen. Frank Church’s investigations into U.S. surveillance in the mid-1970s, the type of forceful oversight of intelligence agencies we need more of today. And the massive surveillance apparatus created by the PATRIOT Act after 9/11 remains mostly intact and operational even after revelations of its overreach, law-breaking, and large-scale data collection on U.S. persons.

Even more proportionate technologies could be converted to less benign purposes than COVID-19 containment. Bluetooth-based proximity tracking apps are being used to trace the distance between two people’s phones in an attempt to follow potential transmission of the virus. Done with privacy as a priority, these apps may be able to conceal the identities of people who come into contact with each other. Done wrong, these apps could be used to crack down on political expression. If police know that Alice was at a protest planning meeting, and police learn from the proximity app that Alice was near Bob that day, then police could infer that Bob was also at the meeting. Some versions of these apps also collect identifiers or geolocations, which could further be used to identify and track participants in protest planning meetings.
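For readers unfamiliar with how the privacy-preserving variants work, the sketch below illustrates the general idea behind decentralized proximity tracing: each phone broadcasts short-lived random identifiers over Bluetooth, remembers what it has overheard, and checks for exposure locally. This is a simplified, hypothetical illustration loosely inspired by decentralized proposals such as DP-3T, not the design of any specific app.

```python
import secrets

def new_ephemeral_id() -> bytes:
    """A short-lived random identifier; it reveals nothing about its owner."""
    return secrets.token_bytes(16)

class Phone:
    def __init__(self):
        self.my_ids = []        # identifiers this phone has broadcast (kept locally)
        self.heard_ids = set()  # identifiers overheard from nearby phones (kept locally)

    def rotate_id(self) -> bytes:
        """Generate and remember a fresh identifier to broadcast."""
        eid = new_ephemeral_id()
        self.my_ids.append(eid)
        return eid

    def observe(self, eid: bytes):
        """Record an identifier heard from a nearby phone."""
        self.heard_ids.add(eid)

    def check_exposure(self, published_ids_of_infected: set) -> bool:
        # Matching happens on the device; no central server learns who met whom.
        return bool(self.heard_ids & published_ids_of_infected)

alice, bob = Phone(), Phone()
bob.observe(alice.rotate_id())        # Alice's phone was near Bob's phone

# If Alice later tests positive, only her own broadcast identifiers are published.
published = set(alice.my_ids)
print(bob.check_exposure(published))  # True: Bob learns he was exposed, but not by whom
```

The danger described above arises when implementations deviate from this pattern, for example by attaching stable identifiers or geolocations to the broadcasts, which can turn an exposure log into a social graph.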

Done without collecting identifying information and minimizing storage, measures like aggregate geolocation tracking might assist public health response and be difficult to weaponize against protestors. But done with deliberate intention to survey demonstrations, aggregate location data might be disaggregated, merged with other data, and used to identify individual people. For example, police could single out individual protestors in a public plaza, track them to their respective homes and workplaces once the demonstration is over, and thereby identify them.

Free speech and political participation are chilled when governments put protests, protestors, activists, and organizers under surveillance. Studies have found that when people are aware of surveillance, they’re less likely to engage in political speech or debate the important issues of the day. The First Amendment also protects the right of association for purposes of collective expression. This right is threatened if people are worried that they will be put under surveillance for joining or meeting with specific people or groups. Suddenly a person’s movements, correspondence, or personal relationships are scrutinized by strangers within the government. At a moment when our society is desperate to find innovative solutions to daunting political problems, we should loudly condemn any surveillance efforts which might chill our ability to freely discuss and associate about pressing issues.

EFF has clear guidelines for how we evaluate a piece of surveillance technology proposed as a tool of public health: Would it work? Is it too invasive? Are there sufficient safeguards? One of the biggest concerns is that new powers introduced at this moment will long outstay their necessity, experience mission creep, and be redeployed for other purposes. Now, more than ever, we must stay vigilant about any new surveillance powers, technologies, and public-private relationships.

EFF Testifies Today on Law Enforcement Use of Face Recognition Before Presidential Commission on Law Enforcement and the Administration of Justice


This post is by Jennifer Lynch from Deeplinks

The Presidential Commission on Law Enforcement and the Administration of Justice invited EFF to testify on law enforcement use of face recognition. The Commission, which was established via Executive Order and convened by Attorney General William Barr earlier this year, is tasked with addressing the serious issues confronting law enforcement and is made up of representatives from federal law enforcement as well as police chiefs and sheriffs from around the country.

We testified orally and provided the Commission with a copy of our whitepaper, Face Off: Law Enforcement Use of Face Recognition Technology. The following is our oral testimony:

President’s Commission on Law Enforcement and the Administration of Justice
Hearing on Law Enforcement’s Use of Facial Recognition Technology

Oral Testimony of
Jennifer Lynch
Surveillance Litigation Director
Electronic Frontier Foundation (EFF)

April 22, 2020

Thank you very much for the opportunity to discuss law enforcement’s use of facial recognition technologies with you today. I am the surveillance litigation director at the Electronic Frontier Foundation, a 30-year-old nonprofit dedicated to the protection of civil liberties and privacy in new technologies.

In the last few years, face recognition has advanced significantly. Now, law enforcement officers can use mobile devices to capture face recognition-ready photographs of people they stop on the street; surveillance cameras and body-worn cameras boast real-time face scanning and identification capabilities; and the FBI and many other state and federal agencies have access to millions, if not hundreds of millions, of face recognition images of law-abiding Americans.

However, the adoption of face recognition technologies has occurred without meaningful oversight, without proper accuracy testing, and without legal protections to prevent misuse. This has led to the development of unproven systems that will impinge on constitutional rights and disproportionately impact people of color.

Face recognition and similar technologies make it possible to identify and track people, both in real time and in the past, including at lawful political protests and other sensitive gatherings. Widespread use of face recognition by the government—especially to identify people secretly when they walk around in public—will fundamentally change the society in which we live. It will, for example, chill and deter people from exercising their First Amendment protected rights to speak, assemble, and associate with others. Countless studies have shown that when people think the government is watching them, they alter their behavior to try to avoid scrutiny, even when they are doing absolutely nothing wrong. And this burden falls disproportionately on communities of color, immigrants, religious minorities, and other marginalized groups.

The right to speak anonymously and to associate with others without the government watching is fundamental to a democracy. And it’s not just EFF saying that—the founding fathers used pseudonyms in the Federalist Papers to debate what kind of government we should form in this country, and the Supreme Court has consistently recognized that anonymous speech and association are necessary for the First Amendment right to free speech to be at all meaningful.

Face recognition’s chilling effect is exacerbated by inaccuracies in face recognition systems. For example, FBI’s own testing found its face recognition system failed to even detect a match from a gallery of images nearly 15% of the time. Similarly, the ACLU showed that Amazon’s face recognition product, which it aggressively markets to law enforcement, falsely matched 28 members of Congress to mugshot photos.

The threats from face recognition will disproportionately impact people of color, both because face recognition misidentifies African Americans and ethnic minorities at higher rates than whites, and because mug shot databases include a disproportionate number of African Americans, Latinos, and immigrants.

This has real-world consequences; an inaccurate system will implicate people for crimes they didn’t commit. Using face recognition as the first step in an investigation can bias the investigation toward a particular suspect. Human backup identification, which has its own problems, frequently only confirms this bias. This means face recognition will shift the burden onto defendants to show they are not who the system says they are.

Despite these known challenges, federal and state agencies have for years failed to be transparent about their use of face recognition. For example, the public had no idea how many images were accessible to FBI’s FACE Services Unit until Government Accountability Office reports from 2016 and 2019 revealed the Bureau can access more than 641 million images—most of which were taken for non-criminal reasons like obtaining a driver license or a passport.

State agencies have been just as intransigent in providing information on their face recognition systems. EFF partnered with the Georgetown Center on Privacy and Technology to do a survey of which states were currently using face recognition and with whom they were sharing their data – a project we call “Who Has Your Face.” Many states, including Connecticut, Louisiana, Kentucky, and Alabama failed to or refused to respond to our public records requests. And other states like Idaho and Oklahoma told us they did not use face recognition but other sources, like the GAO reports and records from the American Association of Motor Vehicle Administrators (AAMVA), seem to contradict this.

Law enforcement officers have also hidden their partnerships with private companies from the public. Earlier this year, the public learned that a company called Clearview AI had been actively marketing its face recognition technology to law enforcement, and claimed that more than 1,000 agencies around the country had used its services. But up until the middle of January, most of the general public had never even heard of the company. Even the New Jersey Attorney General was surprised to learn—after reading the New York Times article that broke the story—that officers in his own state were using the technology, and that Clearview was using his image to sell its services to other agencies.

Unfortunately, the police have been just as tight-lipped with defendants and defense attorneys about their use of face recognition. For example, in Florida law enforcement officers have used face recognition to try to identify suspects for almost 20 years, conducting up to 8,000 searches per month. However, Florida defense attorneys are almost never told that face recognition was used in their clients’ cases. This infringes defendants’ constitutional due process right to challenge evidence brought against them.

Without transparency, accountability, and proper security protocols in place, face recognition systems will be subject to misuse. For example, the Baltimore Police used face recognition and social media to identify and arrest people in the protests following Freddie Gray’s death. And Clearview AI used its own face recognition technology to monitor a journalist and encouraged police officers to use it to identify family and friends.

Americans should not be forced to submit to criminal face recognition searches merely because they want to drive a car. And they shouldn’t have to fear that their every move will be tracked if the networks of surveillance cameras that already blanket many cities are linked to face recognition.

But without meaningful restrictions on face recognition, this is where we may be headed. Without protections, it could be relatively easy for governments to amass databases of images of all Americans—or work with a shady company like Clearview AI to do it for them—and then use those databases to identify and track people as they go about their daily lives. 

In response to these challenges, I encourage this commission to do two things: First, to conduct a thorough nationwide study of current and proposed law enforcement practices with regard to face recognition at the federal, state, and local level, and second, to develop model policies for agencies that will meaningfully restrict law enforcement access to and use of this technology. Once completed, both of these should be easily available to the general public.

Thank you once again for the invitation to testify. My written testimony, a white paper I wrote on law enforcement use of face recognition, provides additional information and recommendations. I am happy to respond to questions.

Yes, Section 215 Expired. Now What?


This post is by India McKinney from Deeplinks

On March 15, 2020, Section 215 of the PATRIOT Act—a surveillance law with a rich history of government overreach and abuse—expired. Along with two other PATRIOT Act provisions, Section 215 lapsed after lawmakers failed to reach an agreement on a broader set of reforms to the Foreign Intelligence Surveillance Act (FISA).

In the week before the law expired, the House of Representatives passed the USA FREEDOM Reauthorization Act, without committee markup or floor amendments, which would have extended Section 215 for three more years, along with some modest reforms. 

In order for any bill to become law, the House and Senate must pass an identical bill, and the President must sign it. That didn’t happen with the USA FREEDOM Reauthorization Act. Instead, knowing the vote to proceed with the House’s bill in the Senate without debating amendments was going to fail, Senator McConnell brought a bill to the floor that would extend all the expiring provisions for another 77 days, without any reforms at all. Senator McConnell’s extension passed the Senate without debate.

But the House of Representatives left town without passing Senator McConnell’s bill, at least until May 12, 2020, and possibly longer. That means that Section 215 of the USA PATRIOT Act, along with the so-called lone wolf and the roving wiretap provisions have expired, at least for a few weeks.

So Now What?

EFF has argued that if Congress can’t agree on real reforms to these problematic laws, they should be allowed to expire. While we are pleased that Congress didn’t mechanically reauthorize Section 215, it is only one of a number of largely overlapping surveillance authorities. The loss of the current version of the law will still leave the government with a range of incredibly powerful tools. These include other provisions of FISA, as well as surveillance authorities used in criminal investigations, many of which can include gag orders to protect sensitive information.

In addition, the New York Times and others have noted that Section 215’s expiration clause contains an exception permitting the intelligence community to use the law for investigations that were ongoing at the time of expiration or to investigate “offenses or potential offenses” that occurred before the sunset. Broad reliance on this exception would subvert Congress’s intent to have Section 215 truly expire, and the Foreign Intelligence Surveillance Court should carefully—and publicly—circumscribe any attempt to rely on it.

Reform Is Still Needed

Although Section 215 and the two other provisions have expired, that doesn’t mean they’re gone forever. For example, in 2015, during the debate over the USA FREEDOM Act, these same provisions were also allowed to expire for a short period of time, and then Congress reauthorized them for another four years. While transparency is still lacking in how these programs operate, the intelligence community did not report a disruption in any of these “critical” programs at that time. If Congress chooses to reauthorize these programs in the next couple of months, it’s unlikely that this disruption will have a lasting impact.

The Senate plans to vote on a series of amendments to the House-passed USA FREEDOM Reauthorization Act in the near future. Any changes made to the bill would then have to be approved by the House and signed by the President. This means that Congress has the opportunity to discuss whether these authorities are actually needed, without the pressure of a ticking clock.

As a result, the House and the Senate should take this unique opportunity to learn more about these provisions and create additional oversight into the surveillance programs that rely on them. The expired provisions should remain expired until Congress enacts the additional, meaningful reforms we’ve been seeking.

You can read more about what EFF is calling for when it comes to reining in NSA spying, reforming FISA, and restoring Americans’ privacy here.

Telling Police Where People With COVID-19 Live Erodes Public Health


This post is by Matthew Guariglia from Deeplinks

In some areas of the United States, local governments are sharing the names and addresses of people who have tested positive for COVID-19 with police and other first responders. This is intended to keep police, EMTs, and firefighters safe should they find themselves headed to a call at the residence of someone who has tested positive for the virus.

However, this information fails to protect first responders from unidentified, asymptomatic, and pre-symptomatic cases. It may also discourage people from getting tested, contribute to stigmatization of infected people, reduce the quality of policing in vulnerable communities, and incentivize police to avoid calls for help because of fear of contracting the virus.

In response to the current health crisis, some governments are seeking to collect and deploy personal data in new ways that are untested or ineffective, including by means of face recognition, geolocation tracking, and fever detection cameras. Such new tactics and technologies must be closely evaluated to determine whether their use is justified, minimized, transparent, and unbiased. Sharing the home addresses of people who have contracted COVID-19 with first responders does not pass muster. 

What is being proposed? 

Some local officials in Alabama, Florida, Massachusetts, and North Carolina are already collecting the names and addresses of people who test positive for COVID-19 and turning that data over to local first responders. The proponents of this tactic argue that it will allow first responders to take necessary precautions when they respond to a call from a home where a resident has tested positive. 

However, this would likely do little to protect first responders, who are currently experimenting with ways to avoid contracting the virus. Many cases of COVID-19 are asymptomatic, present mild symptoms, or are undiagnosed because of the lack of testing in many parts of the United States. Giving first responders data on confirmed COVID-19 individuals may lull police, paramedics, or fire fighters into a false sense of security. First responders should respond to every call as if someone inside might be infected—making data sharing unnecessary. Indeed, many interactions between first responders and members of the public do not occur at a home, so first responders must be equipped with the tools and training needed to treat every contact as an infection risk. 

What are the concerns? 

There already are too many hurdles for people in the United States to get a COVID-19 test. Sharing the medical data and addresses of people who test positive could create one more: it may chill some people from getting tested. For example, vulnerable populations such as unhoused or undocumented individuals may not be willing to get tested if they know their information will end up in the hands of government agencies other than those managing public health. Indeed, the tactic here contradicts a basic norm of data privacy: when the government collects sensitive data about identifiable people for one purpose, the government generally should not use that data for another purpose. Also, when hundreds of thousands of first responders and dispatchers obtain access to this information, there is an inherent risk of misuse and breach.

Likewise, there is historical precedent that the accumulation of personal health data in the hands of police and other government officials creates stigma and bias against those who are infected and their communities. For instance, some public health experts have pointed out the parallels between keeping a list of those who test positive for COVID-19, and the stigma that followed a person who tested positive for HIV during the AIDS crisis of the 1980s and 1990s. Similarly, some people and doctors during the 1918 influenza pandemic avoided disclosing or diagnosing patients out of a fear of being quarantined, shamed, or stigmatized.

Moreover, the virus is disproportionately harming neighborhoods predominantly inhabited by people of color, which are already underserved by public safety and public health institutions. Disclosing the addresses of infected people to first responders may amplify this problem by discouraging prompt response to homes that put responders at greater risk. This reluctance might even spill over to entire neighborhoods that become associated, through shared COVID-19 testing data, with a specific race or ethnicity.

Conclusions 

Sharing data from COVID-19 tests with first responders may seem like an easy fix to address a serious problem, but it won’t be as helpful as suggested. First responders should continue to take every precaution when answering calls and initiating interactions with the public, and should not rely on personal health data from the misleadingly small number of positive tests in their community.

The sharing of this data may harm our public health goals. At a moment when people need the government to assist them in testing, containment, and treatment, the government in turn needs the cooperation of the people—information sharing of this type may erode that crucial relationship.

Cell phone tracking for post-COVID-19 must be radical to be efficient


This post is by Frederic Filloux from Monday Note - Medium

by Frederic Filloux

It is time to put aside some privacy principles — temporarily and with numerous guarantees — to alleviate lockdowns and try to restart the economy. But the widespread lack of confidence towards political leadership and Big Tech won’t help.

Last summer, at a workshop organized by Stanford’s CISAC, Michal Kosinski, a Graduate School of Business professor, reminded us of a few things about our digital footprint: in 2012, he said, the data output per person, globally, amounted to 500MB per day. Now, it is 62 GB. Then Kosinski mentioned that it takes only 10 “likes” for Facebook to know us better than our working colleagues, 100–150 to know us better than our friends and family, and no more than 250–300 “likes” to be better than our spouse at anticipating our behavior. That’s just for Facebook. If you compound the data from our purchase history on Amazon and Google searches, we are surrounded by a swarm of thousands of our own data points.

Everyone who has been in the digital sector long enough remembers the case of the Target store in Minneapolis that detected a teenage girl’s pregnancy before her father did. That was in 2012, at a time when each of us was generating roughly 120 times less data than we do today.

I’m recalling this to put into perspective the reluctance toward cell phone data tracking in critical times such as the deadly global pandemic we are facing today.

We are already tracked and traced for purposes much more mundane than saving lives or restarting the crippled economy. We de facto consent to give up our data in exchange for questionable free services. Above all, we gave these multiple consents blindly, with no idea whatsoever of where and for how long these data will be kept, or whether they will be sold to some obscure third parties. We only discovered the scope of our collective negligence by accident, when a spectacular scandal like Cambridge Analytica blew up.

Today, what’s at stake is way more dramatic and crucial for everyone’s future: how to restart the economy by allowing a large number of people back to work with a reasonable amount of risks.

I’m talking about a point positioned somewhere between those who demand an immediate and broad de-confinement of the population, and those who believe that harsher sanitary measures are still needed and that the economy will have to wait. (I’m not going to fuel this debate here, but I might suggest this article from the MIT Tech Review, which is far too moderate to garner a large audience.)

If the two camps diverge on the timing, both agree that lifting the lockdowns will have to be progressive and carefully planned to prevent any deadly resurgence of COVID-19 clusters.

Hence the question of using cellphone data to track the status and movement of people.

On Friday, Apple and Google came up with a solution based on “contact-tracing” technology. In short, it uses Bluetooth communications to assess the proximity of individuals and keep an encrypted trace of their contacts in case one of them turns out to be infected. Data might be uploaded to a cloud controlled by health officials in case exposed people need to be contacted. That’s it. (Those interested in the technical details should read this excellent piece from the Electronic Frontier Foundation or this explainer in The Verge).
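For readers who want a more concrete picture of the idea, here is a deliberately simplified sketch in Python. It is not the actual Apple/Google specification (which derives its rolling identifiers with purpose-built cryptography); the key sizes, interval counts, and function names here are illustrative assumptions only.

```python
# A toy model of Bluetooth contact tracing: rotating identifiers are broadcast,
# and matching happens on the receiving phone. Illustrative only.
import os
import hashlib

def new_daily_key() -> bytes:
    """Each phone generates a random secret key for the day."""
    return os.urandom(16)

def rolling_id(daily_key: bytes, interval: int) -> bytes:
    """Derive a short-lived identifier to broadcast over Bluetooth.
    Observers cannot link these IDs to each other without the daily key."""
    return hashlib.sha256(daily_key + interval.to_bytes(4, "big")).digest()[:16]

# Alice's phone broadcasts rolling IDs; Bob's phone stores every ID it hears nearby.
alice_key = new_daily_key()
heard_by_bob = {rolling_id(alice_key, i) for i in range(96)}  # e.g. one ID per 15 minutes

# If Alice later tests positive, her daily key (not her identity) is published.
# Bob's phone re-derives her rolling IDs locally and checks for a match.
published_keys = [alice_key]
exposed = any(
    rolling_id(key, i) in heard_by_bob
    for key in published_keys
    for i in range(96)
)
print("Possible exposure:", exposed)  # True in this toy scenario
```

The property this sketch tries to capture is that the matching happens on the phone itself; only keys associated with a positive test ever leave the device, and only if their owner chooses to share them.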

Apple and Google described the principle in this little cartoon:

For obvious reasons, both companies have limited their work to the technological aspects: fine-tuning communications and encryption. Their system ends at an API, and it is up to national or local administrations to do the rest. As it is, there is no GPS tracking, which is funny when you look at your own Google travel history bouncing from point to point.

That’s the basic stuff, but you can also create heatmaps of your life on the move by using Google’s Location History Visualizer.

But for the two companies, harnessing their data collection capabilities to create such a repository would have inevitably unleashed an outrage among users.

This is a key paradox of the situation: the capabilities are already here, and they are already used, with nothing more than our negligent consent. Tech companies take advantage of them to refine their businesses, but these pieces of information won’t be used for the good of the population, even in the context of the worst health crisis of the century.

Coming back to the case of Bob and Alice described in the cartoon: as it is, the app only states that Bob has tested positive for COVID-19 and that Alice should be notified. We know nothing of Bob’s recent whereabouts: the restaurant he went to, the subway line he took, or the grocery store he shopped at in the weeks prior to developing symptoms (or being tested).

Now let’s teleport the two characters into South Korea, the country that has been the best (so far) at “flattening the curve” of the pandemic, without a general lockdown. (What follows is based on a conversation I had last week with Pierre Joo, a Seoul tech investor friend of mine who gave more explanation in this piece in French).

Our friend Bob has developed symptoms that leave no doubt about his condition — despite having tested negative a month ago. (In France, about 40 percent of PCR tests generate false negatives, according to a doctor I talked to last week who infected his entire family after falsely testing negative.) But because Bob has been tested, he has a medical “referent,” and he has downloaded the Korea Centers for Disease Control app, where he must enter his vitals twice a day. (Incidentally, this app was hastily developed by Winitech, a Daegu startup specializing in national emergencies, in just one month.)

Bob enters his condition in the app, alerting the KCDC. In addition to Bob’s recent proximity to Alice, the medical investigator now in charge of his case has access to his cellphone data, his credit card history, etc. In about 10 minutes, his behavior over the last two weeks is reconstructed. The data are placed under the control of the “Smart City Data Hub,” created by the Ministry of Infrastructure and Transport and the Ministry of Sciences and Technology.

At first, this application was created with the primary intent of locating clusters and tracing back infected people.

Now we are in a totally different context. After weeks of lockdown in certain countries like France, it is time to relieve the pressure and to consider a selective and progressive lift of the constraints.

One of the options is a careful classification of the population based on their proven status (immunized or not) and, for the majority who haven’t already been sick or tested, an assessment of their risk factors for developing complications. In addition, geographical considerations must apply, as some regions have been far more affected than others. That is why such data collection should be linked to the electronic patient record, which contains essential information. In our example, it would be crucial to know that Alice had bouts of asthma and repeated bronchitis, or that Bob was a chain smoker.

Who could seriously oppose the development of such a system?

In Europe, the vast majority actually does oppose it. You can argue (as I do on Twitter, wasting my time) that scores of protections should and will be deployed — opt-in only, bi-partisan monitoring of the collection and the use of the data, destruction of the non-essential files after a certain time, all sorts of guarantees — but most of the EU population remains very reluctant if not opposed.

The chances of seeing such an efficient — while closely controlled — apparatus being developed collide with a critical factor, unfortunately not limited to France: mistrust of political leadership.

In most countries, polls consistently show a steep decline in trust in the institutions that could protect the public from abuse. In the United States alone, according to Gallup, the Supreme Court was trusted “a great deal/a lot” by 56 percent of respondents in 1988; in 2019 it was down to 38 percent (not even to mention the yawning political divide). Trust in the US Congress dropped from 41 percent in 1986 to 11 percent in 2019. And I’m not even talking about the current level of trust in the American executive branch.

Many European countries are following the same trend. In France, the far right and the far left, closer than ever, can’t wait for the end of the lockdown to shred what is left of Macron’s presidency. It is certainly not the best time for the executive branch anywhere to propose putting a temporary dent in privacy principles, even for public health’s sake.

frederic.filloux@mondaynote.com


Twitter Removes Privacy Option, and Shows Why We Need Strong Privacy Laws


This post is by Bennett Cyphers from Deeplinks

Twitter greeted its users with a confusing notification this week. “The control you have over what information Twitter shares with its business partners has changed,” it said. The changes will “help Twitter continue operating as a free service,” it assured. But at what cost?

What Changed?

Twitter has changed what happens when users opt out of the “Allow additional information sharing with business partners” setting in the “Personalization and Data” part of its site.

The privacy setting in question. For most users, the box is checked by default.

The changes affect two types of data sharing that Twitter does:

  1. Conversion tracking for ads on Twitter. When an advertiser runs an ad for a mobile app on Twitter, Twitter collects information about who views, interacts with, and clicks on the ad. If a user who saw the ad proceeds to download and open the app, Twitter will notify the advertiser that the user’s device completed a conversion.
  2. Twitter’s use of third-party analytics libraries. Like most of the web, Twitter shares device identifiers and cookies with Facebook and Google so that it can measure the effectiveness of its own ad campaigns on those platforms.

These changes affect users differently depending on whether they are subject to GDPR. Previously, anyone in the world could opt out of Twitter’s conversion tracking (type 1), and people in GDPR-compliant regions had to opt in. Now, people outside of Europe have lost that option. Instead, users in the U.S. and most of the rest of the world can only opt out of Twitter sharing data with Google and Facebook (type 2). It’s unclear whether the “share data with business partners” setting previously affected type 2 sharing, or whether Twitter sharing this kind of data with Google and Facebook is a new phenomenon.

For people protected by GDPR, type-1 data sharing remains opt-in, and type 2—Twitter sharing their data with Google and Facebook—never happens at all.

Why Did This Happen?

To understand what’s going on, we need to look at another piece of Twitter news from last year.

On August 5, 2019, Twitter announced that it had identified and fixed a couple of bugs. As it turned out, some of its privacy settings were… not setting things correctly. Specifically, the opt-outs for device-level targeting and conversion tracking—the same conversion tracking described above—did not actually opt users out. Twitter explained at the time:

Source: “An issue with your settings choices related to ads on Twitter,” at https://help.twitter.com/en/ads-settings

Twitter fixed both bugs, and its privacy settings began working the way they were supposed to.

The next event happened months later, when Twitter announced its quarterly earnings. Apparently, advertisers had really appreciated the data they weren’t supposed to be getting. Once Twitter shut off the hose of non-consensual device information, advertisers were unhappy. And Twitter announced a substantial hit to its revenue after fixing the bugs.

That leads us to today. Twitter apparently was happy to let users opt out as long as ad spending continued to grow. But last year, the privacy bugs and subsequent fixes seem to have shown Twitter exactly how much privacy options were costing it. Now, Twitter has removed the ability to opt out of conversion tracking altogether.

Laws Matter

Today, users in Europe maintain the same agency and control over their personal data that they’ve always had. They get to decide whether advertisers can use Twitter’s ad tools to tie actions on Twitter to device identifiers. Everyone else has lost that right.

The reason is simple: European users are protected by GDPR. Users in the United States and everywhere else, who don’t have the protection of a comprehensive privacy law, are only protected by companies’ self-interest. All too often, Twitter, Google, and Facebook will give users only as much control as they think they need to in order to stave off regulation and competitors, but no more. When push comes to shove, they’ll protect their bottom line.

This is why it shouldn’t be up to tech companies to give us privacy. We need strong data privacy laws that protect users’ rights to privacy, access, and control. And we need to change a system that tempts companies to sell out their users for a few points of growth.

Thermal Imaging Cameras are Still Dangerous Dragnet Surveillance Cameras


This post is by Matthew Guariglia from Deeplinks

As governments around the world continue to seek solutions to prevent the spread of COVID-19, companies are eager to sell their technology as a silver bullet for defeating the virus. The public has already seen privacy-invasive proposals for geolocation tracking and face recognition. Now, some vendors of surveillance equipment are advocating for the use of thermal cameras that would supposedly detect people who may be infected with the virus and are walking around with a fever. These cameras threaten to build a future where public squares and sidewalks are filled with constant video surveillance—and all for a technology that may not even be effective or accurate at detecting fevers or infection. 

Thermal cameras are still surveillance cameras. Spending money to acquire and install infrastructure like so-called “fever detection” cameras increases the likelihood that the hardware will long outlive its usefulness during this public health crisis. Surveillance cameras in public places can chill free expression, movement, and association; aid in the targeted harassment and over-policing of vulnerable populations; and open the door to face recognition at a time when cities and states are attempting to ban it.

During a pandemic, it may be prudent to monitor a person’s body temperature under specific circumstances. Hospitals are checking patient and staff temperatures at the door to make sure that no one with a fever unknowingly exposes the people inside the facility to the virus. In the San Francisco Bay Area, wearable rings are constantly monitoring the temperature of doctors and nurses treating COVID-19 patients to immediately alert them if they start to develop symptoms. This kind of tech can pose privacy risks depending on the privacy policy of the company that manufactures the rings, the hospital’s own privacy policy, the data the technology collects, and who has access to that data. But these more focused programs are a far cry from dragnet surveillance cameras constantly surveilling the public—especially if those cameras don’t function effectively.

Experts are now concluding that thermal imaging from a distance—including in camera systems that claim to detect fevers—may not be effective. The cameras typically only have an accuracy of +/- 2°C (approximately +/- 3.6°F) at best. This is cause for major concern. With such a wide range of variance, a camera might read a person’s temperature as a very high 102.2°F when they are actually running an average 98.6°F. What’s more, normal human temperatures themselves vary by as much as 2°F. Not only does this technology present privacy problems, but the problem of false positives cannot be ignored. False positives carry the very real risk of involuntary quarantines and/or harassment. 
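To make the cited figures concrete, here is a small worked example using only the numbers from the paragraph above, showing how a +/- 2°C tolerance translates into Fahrenheit and produces exactly the kind of misreading described:

```python
# Worked example of the accuracy figures cited above (illustrative only).
def c_delta_to_f_delta(delta_c: float) -> float:
    """Convert a temperature *difference* from Celsius to Fahrenheit."""
    return delta_c * 9 / 5

tolerance_f = c_delta_to_f_delta(2.0)   # +/- 2.0 C  ->  +/- 3.6 F
average_temp_f = 98.6                   # average human body temperature
worst_case_reading = average_temp_f + tolerance_f

print(f"Tolerance: +/- {tolerance_f:.1f} F")
print(f"A healthy {average_temp_f} F person could read as {worst_case_reading:.1f} F")
# Output:
#   Tolerance: +/- 3.6 F
#   A healthy 98.6 F person could read as 102.2 F
```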

Thermal imaging seems even less likely to solve the COVID-19 pandemic given that a large number of people spreading the virus are doing so unknowingly because they are asymptomatic or have mild symptoms—mild enough to avoid triggering a “fever detecting” camera, even if it were running with perfect accuracy. 

During this current moment, when governments are trying to hinder the spread of a contagion, technology companies are scrambling to prove that their goods are the solution we’ve been looking for. And while some of these companies may have tools that can help, a new network of surveillance cameras with dubious thermal measuring capabilities is not a tool we should deploy.

Google-Fitbit Merger Would Cement Google’s Data Empire


This post is by Andrés Arrieta from Deeplinks

Google buying another tech company isn’t new.  But Google’s proposed acquisition of Fitbit poses an extraordinary threat to competition and user privacy.  Users face having their Fitbit information added to Google’s already large and invasive data pool, and a world that makes it harder and harder for privacy-focused tech companies to exist and compete.

The U.S. Department of Justice (DOJ) is reviewing the deal, and could take steps to either block it or establish conditions for approval.  The DOJ should take the first route, and block the deal altogether.  U.S. antitrust laws bar any merger that would “substantially lessen competition,” and Google buying Fitbit would do just that.

The most critical issue is Google’s acquisition of Fitbit’s trove of health and biometric data.  Obtaining that data will help Google both improve its advertising business and significantly expand its data empire.  Given consumers’ repeated affirmations that they care about privacy, makers of mobile computing devices and software should be competing to offer privacy-protective options.  That’s not happening nearly enough, and it will be even harder once Google expands its data hoard to include personal health and fitness monitoring.

What Is Fitbit and What Data Does It Have?

Fitbit is one of the world’s largest wearable device companies, focusing on trackers and smartwatches that monitor health and fitness.  These devices can track users’ location, how and when they exercise, their heart rate, and their sleep patterns.  Through the Fitbit app, users are encouraged to add other information, such as their weight and eating habits. Tracking and analyzing your own health data can be very useful (for you) but it’s also potentially very valuable to advertisers and other businesses because it may reveal a user’s daily habits, movements, medical conditions, and associates.

That’s a lot of data—sensitive data—for a standalone company to have.  Now let’s see what can happen when it’s added to the Googleplex.

Google’s Data Tentacles

Today, Google has a massive presence in Internet users’ daily lives.  Google’s Chrome browser has a 69% market share on the desktop and 62% on mobile.  Google has 66% of the search market, and with Android, a 71% market share in mobile operating systems.  Its ecosystem includes Maps, home devices, News, Photos, Duo, smartwatches, auto technology, Google TV, Wallet, YouTube, Docs, Sheets, and Slides.  Simply put, it’s increasingly hard to use the Internet without interacting with one or more Google services.

And all of that use generates the data that Google uses to build user profiles.  The company sells or leverages those profiles for multiple purposes, especially advertising.  In fiscal year 2019, advertising represented 83% of the revenue for Google’s parent company, Alphabet.

That revenue is a powerful incentive to add new vectors for collecting data.  One way Google does this is through acquisitions of data-rich businesses like DoubleClick, Nest, and now Fitbit.

Another way is Google’s third-party advertising arm, through which it collects data about user activities on websites and apps outside of Google’s own.  More than a million apps and untold millions of websites integrate with Google’s ad network.  Through this network, Google collects data about browsing history, app usage, and users’ precise location. Google also collects offline purchase data and health data.

For users, this complex system of data acquisition makes it almost impossible to fully understand where our data resides, how it is used, and by whom.  Many users assume that the data generated by their interactions with a single Google service (say, a Web search query) stays inside that product’s silo within Google’s corporate structure.  But Google product designers are masters of “dark patterns,” which entice or coerce users into granting sham “consent” for their data to be shared across all of Google’s businesses.  For example, users are generally unaware that when they enable “location” in Google’s search tool, they’re also “granting permission” for Google to track their location at all times and in all places and use that data in Google’s other services and products.  The user experience in these products isn’t necessarily improved by this indiscriminate data-mining; rather, this deliberate complexity deprives users of the ability to make informed choices, because the ramifications of those choices have been deliberately obscured.

So what does all this mean for the Fitbit acquisition?

Google’s user-data storehouses are money-making machines, but they also scare off potential competitors and their investors, who generally accept that they can’t compete with Google in most of the markets where it operates, because they’ll never catch up to Google’s lead in acquiring and analyzing data.

A Fitbit acquisition will let Google eliminate one of the few competitors in the wearables market, and allow it to cement its lead against others.  Unparalleled access to intimate user data—physical movement, daily habits, and health status—can be wielded against would-be competitors: first, by applying insights from Fitbit data to improve Google’s products; and second, by starving competitors of data and insights that they could use to design and improve competing products.

Google’s acquisition of Fitbit will also deprive users of one simple, meaningful choice they could have made: to track their health and fitness without putting that data into Google’s ecosystem.  And where users have already made this choice—by buying and using Fitbit devices prior to the acquisition—an acquisition destroys those user choices, retrospectively opting them into Google data collection despite their revealed preference to use a Google competitor.

Allowing Google’s acquisition of Fitbit removes choice for consumers going forward as well.  Why would a privacy-conscious user bother buying products and services from a Google competitor if successful competitors are increasingly likely to sell themselves, and users’ data, to Google?

Why not simply set limits and conditions?

In similar situations, government regulators can and have imposed conditions on deals like this, to help protect consumers and competition generally.  But that won’t be enough here.  First, Google’s Fitbit acquisition is part of a disturbing, long-term pattern of acquiring and fencing off every avenue a competitor could use to gather data.  It’s past time to draw a line in the sand.  Second, Google’s earlier promises not to merge data from its acquisitions have proven hollow, and likely will be again given the irresistible business incentives to combine those data sets. 

Users shouldn’t have to choose between abandoning the systems they’ve paid for and learned to use, and having their data nonconsensually ingested by one massive, surveillance-based business.  The DOJ and other competition authorities can and should preserve users’ rights to participate in technological life without being watched by Google, and help ensure a healthy market where competitors have a chance.

Zoom Security and Privacy Problems, Fixes and Response and a Lesson in PR


This post is by rich from Tong Family

Well, Zoom has exploded in usage and has been hammered in the last week, culminating in bans. There have been countless stories about Zoombombing, where kids’ classes and conferences have been invaded by trolls. Not to mention privacy problems and security holes.

And most importantly, it’s not been clear whether Zoom is a truly evil company bent on selling user data and spying on video calls, or not. The company has been largely silent and on the defensive.

TL;DR

When using Zoom, you can be end-to-end encrypted and safe if you:

  1. Make sure everyone uses a Zoom client, even on your phones. Try to avoid using the dial-in teleconferencing feature, as phone calls by their nature are vulnerable to hacking.
  2. Redo all your meetings right now. From now on, every meeting requires a password, so delete all your existing meetings and recreate them with a password. It’s inconvenient for dial-in, but for the URL route, Zoom sticks an encrypted password into the URL and you should be safe.
  3. Make sure that you have the defaults correct on these calls. The new ones are good. Make sure that you have passwords turned on, that no one can start a meeting without you as host present (so people can’t steal your Zoom time), and that you don’t allow screen or other sharing by participants.

A Great Response

Personally, I’ve been in plenty of circumstances like this, and channeling my best Pam Edstrom (RIP), these are defining moments for companies. Zoom’s traffic has gone from 10M active users to 200M in less than 90 days. First of all, it’s remarkable they have managed to handle that increase, but a lack of response is horrible. With that usage comes exponentially more attention to security and other problems. You can duck and cover, or you can take the bull by the horns.

But the good news is that Eric Yuan, the CEO, himself responded in a super articulate and crisp personal blog entry. I’ve never met him, but I sure hope to some day. It’s a great service that was tuned for a different use case, and acknowledging the mistake and being concrete is what matters.

Words are incredibly important because they set the stage, so saying “At Zoom we feel incredibly privileged to be in a position to help you stay connected” matters, but so does acknowledging that “We also feel immense responsibility.” Moreover, the fact that he is personally going to host a weekly security webinar is pretty unprecedented.

This is one of the key things that we want all the products we use to have. Believing in companies is just as important as loving the products in modern marketing. And that’s a good thing.

What is even better is to be specific about what you have done and what you are doing. That’s hard to communicate in a tweet, but it’s really important, because influentials read this stuff and they are the bedrock of a community. Moreover, the words are super strong, and the timeline is measured in hours and days, not weeks and months.

  1. We have permanently removed the attendee tracker and LinkedIn Sales Navigator. I love that they were honest about clarifying and saying when the clarification was made.
  2. Specifically crediting people who found bugs, like Patrick Wardle, and saying what fixes were made.
  3. Changing the various defaults particularly for education users. (As they’ve been banned by various folks, this matters).

What I love best is that they have done the really important (and painful) leadership things like:

  1. They are in a feature freeze; the entire engineering team is focused on trust, safety, and privacy. The push to add more features is always there.
  2. Doing a transparency report so people know what is going on
  3. Having a real bug bounty program and white hat penetration tests.

Nerdy facts

Besides the obvious bugs, one of the deep technical questions has to do with end-to-end encryption. That is, can Zoom record and monitor (and pass on to others) the actual contents of Zoom calls? Their encryption blog entry was actually really good. It says basically:

  1. If you use dedicated Zoom clients on Windows, Mac, iOS, and Android, you are in good shape. That is, they end-to-end encrypt, so Zoom cannot actually see what you are doing.
  2. The problem has to do with teleconferences and other endpoints they don’t control. Obviously, they need to translate the call into a regular phone call. They do their best in that the “proxy” clients are walled off, but it is a loophole.
  3. The conclusion is that you should only use Zoom clients when you are doing Zoom calls, or just be aware that everything you say could end up on Reddit as a transcript 🙂

How EFF Evaluates Government Demands for New Surveillance Powers


This post is by Adam Schwartz from Deeplinks

The COVID-19 public health crisis has no precedent in living memory. But government demands for new high-tech surveillance powers are all too familiar. This includes well-meaning proposals to use various forms of data about disease transmission among people. Even in the midst of a crisis, the public must carefully evaluate such government demands, because surveillance invades privacy, deters free speech, and unfairly burdens vulnerable groups. It also metastasizes behind closed doors. And new surveillance powers tend to stick around. For example, nearly two decades after the 9/11 attacks, the NSA is still conducting dragnet Internet surveillance.

Thus, when governments demand new surveillance powers—especially now, in the midst of a crisis like the ongoing COVID-19 outbreak—EFF has three questions:

  • First, has the government shown its surveillance would be effective at solving the problem?
  • Second, if the government shows efficacy, we ask: Would the surveillance do too much harm to our freedoms?
  • Third, if the government shows efficacy, and the harm to our freedoms is not excessive, we ask: Are there sufficient guardrails around the surveillance?

Would It Work?

The threshold question is whether the government has shown that its surveillance plan would be effective at solving the problem at hand. This must include published details about what the government plans, why this would help, and what rules would apply. Absent efficacy, there is no reason to advance to the next questions. Surveillance technology is always a threat to our freedoms, so it is only justified where (among other things) it would actually do its job.

Sometimes, we simply can’t tell whether the plan would hit its target. For example, governments around the world are conducting location surveillance with phone records, or making plans to do so, in order to contain COVID-19. As we recently wrote, governments so far haven’t shown this surveillance works.

Would It Do Too Much Harm?

Even if the government shows that a surveillance power would be effective, EFF still opposes its use if it would too greatly burden our freedoms. High-tech surveillance can turn our lives into open books. It can chill and deter our participation in protests, advocacy groups, and online forums. Its burdens fall all too often on people of color, immigrants, and other vulnerable groups. Breaches of government data systems can expose intimate details about our lives to scrutiny by adversaries including identity thieves, foreign governments, and stalkers. In short, even if surveillance would be effective at solving a problem, it must also be necessary and proportionate to that problem, and not have an outsized impact on vulnerable groups.

Thus, for example, EFF opposes NSA dragnet Internet surveillance, even if it can theoretically provide leads to uncovering terrorists, such as the proverbial needle in the haystack. We believe this sort of mass, suspicionless surveillance is simply incompatible with universal human rights.  Similarly, we oppose face surveillance, even if this technology sometimes contributes to solving crime. The price to our freedoms is simply too great.

On the other hand, the CDC’s proposed program for contact tracing of international flights might be necessary and proportionate. It would require airlines to maintain the names and contact information of passengers and crews arriving from abroad. If a person on a flight turned out to be infected, the program would then require the airline to send the CDC the names and contact information of the other people on the flight. This program applies to a discrete set of information about a discrete set of people. It will only occasionally lead to disclosure of this information to the government. And it is tailored to a heightened transmission risk: people returning from a foreign country, who are densely packed for many hours in a sealed chamber. However, as we recently wrote, we don’t know whether this program has sufficient safeguards.

Are the Safeguards Sufficient?

Even if the government shows a form of high-tech surveillance is effective, and even if such surveillance would not intolerably burden our freedoms, EFF still seeks guardrails to limit whether and how the government may conduct this surveillance. These include, in the context of surveillance for public health purposes:

1.  Consent. For reasons of both personal autonomy and effective public health response, people should have the power to decide whether or not to participate in surveillance systems, such as an app built for virus-related location tracking. Such consent must be informed, voluntary, specific, and opt-in.

2. Minimization. Surveillance programs must collect, retain, use, and disclose the least possible amount of personal information needed to solve the problem at hand. For example, information collected for one purpose must not be used for another purpose, and must be deleted as soon as it is no longer useful to the original purpose. In the public health context, it may often be possible to engineer systems that do not share personal information with the government. When the government has access to public health information, it must not use it for other purposes, such as enforcement of criminal or immigration laws.

3. Information security. Surveillance programs must process personal information in a secure manner, and thereby minimize risk of abuse or breach. Robust security programs must include encryption, third-party audits, and penetration tests. And there must be transparency about security practices.

4. Privacy by design. Governments that undertake surveillance programs, and any corporate vendors that help build them, must employ privacy officers, who are knowledgeable about technology and privacy, and who ensure privacy safeguards are designed into the program.

5. Community control. Before a government agency uses a new form of surveillance, or uses a form of surveillance it has already acquired in a new way, it must first obtain permission from its legislative authority, including approval of the agency’s proposed privacy policy. The legislative authority must consider community input based on the agency’s privacy impact report and proposed privacy policy.

6. Transparency. The government must publish its policies and training materials, and regularly publish statistics and other information about its use of each surveillance program in the greatest detail possible. Also, it must regularly conduct and publish the results of audits by independent experts about the effectiveness and any misuse of each program. Further, it must fully respond to public records requests about its programs, taking into account the privacy interests of people whose personal information has been collected.

7. Anti-bias. Surveillance must not intentionally or disparately burden people on the basis of categories such as race, ethnicity, religion, nationality, immigration status, LGBTQ status, or disability.

8. Expression. Surveillance must not target, or document information about, people’s political or religious speech, association, or practices.

9. Enforcement. Members of the community must have the power to go to court to enforce these safeguards, and evidence collected in violation of these safeguards must be excluded from court proceedings.

10. Expiration. If the government acquires a new surveillance power to address a crisis, that power must expire when the crisis ends. Likewise, personal data that is collected during the crisis, and used to help mitigate the crisis, must be deleted or minimized when the crisis is over. And crises cannot be defined to last in perpetuity.

Outside the context of public health, surveillance systems need additional safeguards. For example, before using a surveillance tool to enforce criminal laws, the government must first obtain a warrant from a judge, based on probable cause that evidence of a crime or contraband would be found, and particularly describing who and what may be surveilled. Targets of such surveillance must be promptly notified, whether or not they are ever prosecuted. Additional limits are needed for more intrusive forms of surveillance: use must be limited to investigation of serious violent crimes, and only after exhaustion of less intrusive investigative methods.

Conclusion

Once the genie is out of the bottle, it is hard to put back. That’s why we ask these questions about government demands for new high-tech surveillance powers, especially in the midst of a crisis. Has the government shown it would be effective? Would it do too much harm to our freedoms? Are there sufficient guardrails?

Harden Your Zoom Settings to Protect Your Privacy and Avoid Trolls


This post is by Gennie Gebhart from Deeplinks

Whether you are on Zoom because your employer or school requires it or you just downloaded it to stay in touch with friends and family, people have rushed to the video chat platform in the wake of COVID-19 stay-at-home orders—and journalists, researchers, and regulators have noticed its many security and privacy problems. Zoom has responded with a surprisingly good plan for next steps, but talk is cheap. Zoom will have to follow through on its security and privacy promises if it wants to regain users’ trust.

In the meantime, take these steps to harden your Zoom privacy settings and protect your meetings from “Zoombombing” trolls. The settings below are all separate, which means you don’t need to change them all, and you don’t need to change them in any particular order. Consider which settings make sense for you and the groups you communicate with, and do your best to make sure meeting organizers and participants are on the same page about settings and shared expectations.

Privacy Settings

Make Sure Chat Auto-Saving Is Off

In your Zoom account settings under In Meeting (Basic), make sure Auto saving chats is toggled off to the left.

The autosave chats setting toggled off to the left

Make Sure “Attention Tracking” Is Off

In your Zoom account settings under In Meeting (Advanced), make sure Attention tracking is toggled off to the left.

The attention tracking setting toggled off to the left

Use a Virtual Background

The space you’re in during a call can expose a lot of information about where you live, your habits, and your hobbies. If you’re uncomfortable having your living space in the background of your calls, set a virtual background. From the zoom.us menu in the top right corner of your screen while using Zoom, navigate to Preferences and then Virtual backgrounds.

Best Practices for Avoiding Trolls

With Zoom now more widely used than ever, the mechanics of its public meeting IDs have allowed bad actors to invade people’s meetings with harassment, slurs, and disturbing images. When you host a meeting, consider taking the steps below to protect yourself and your participants from this “Zoombombing.”

Bad actors can find your meeting in one of two ways: they can cycle through random meeting IDs until they find an active one, or they can take advantage of meeting links and invites that have been posted in public places, like Facebook groups, Twitter, or personal websites. So, protecting yourself boils down to controlling who can enter your meeting, and keeping your meeting IDs private.
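To see why cycling through random IDs is practical for an attacker, consider a rough back-of-the-envelope estimate. The numbers below are assumptions chosen purely for illustration; they are not Zoom’s actual ID format or meeting counts.

```python
# Back-of-the-envelope estimate of random meeting-ID guessing (assumed figures).
id_space = 10 ** 9            # assume meeting IDs are 9-digit numbers
active_meetings = 1_000_000   # assume a million meetings are live at any moment
guesses_per_hour = 100_000    # assume an automated tool can try this many IDs

hit_probability = active_meetings / id_space
expected_hits = guesses_per_hour * hit_probability

print(f"Chance any single guess is live: {hit_probability:.2%}")      # 0.10%
print(f"Expected live meetings found per hour: {expected_hits:.0f}")  # 100
```

Even under much more conservative assumptions, an unlisted meeting ID alone is not a meaningful barrier, which is why the password and waiting-room settings below matter.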

Keep the Meeting ID Private

Whenever possible, do not post the link to your meeting or the meeting ID publicly. Send it directly to trusted people and groups instead. 

Set a Meeting Password, and Carefully Inspect the Meeting Link

In your Zoom account settings under Schedule Meeting, toggle Require a password when scheduling new meetings on to the right. You’ll find additional password options in this area of the settings as well.

Several password settings toggled on to the right

You can also set a password when scheduling a meeting from the Zoom desktop app by checking the “Require meeting password” checkbox.

BEWARE, however, that Zoom passwords can behave in unexpected ways. If you use the “Copy Invitation” functionality to copy the meeting link and send it to your participants, that link might include your meeting password. Look out for an unusually long URL with a question mark in it, which indicates it includes your meeting password.

If you plan to send the meeting link directly to trusted participants, having the password included in the link will be no problem—but if you want to post the meeting link in a Facebook group, on Twitter, or in another public space, then it means the password itself will also be public. If you need to publicize your event online, consider posting only the meeting ID, and then separately sending the password to vetted participants shortly before the meeting begins.
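If you want to check an invite link before posting it, a few lines of code can strip a suspected embedded password. The “pwd” query-parameter name below is an assumption based on the typical look of Zoom invite links (the long string after the question mark); inspect your own invitation to confirm what yours uses.

```python
# Strip a suspected embedded meeting password from an invite link (sketch).
# The "pwd" parameter name and the example URL are assumptions for illustration.
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def strip_embedded_password(invite_url: str) -> str:
    """Return the invite URL with any embedded password parameter removed."""
    parts = urlparse(invite_url)
    query = parse_qs(parts.query)
    query.pop("pwd", None)  # drop the suspected password field, if present
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

link = "https://zoom.us/j/123456789?pwd=abcDEF123"  # hypothetical example link
print(strip_embedded_password(link))                # https://zoom.us/j/123456789
```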

Lock Down Screen Sharing

In your Zoom account settings under In Meeting (Basic), set Screen sharing to Host Only. That means that, when you are hosting a meeting, only you, and no other meeting participant, will be able to share a screen.

The screensharing setting set to host only

Depending on the calls you plan to host, you can also turn screen sharing off entirely by toggling it off to the left.

Use Waiting Rooms to Approve Participants

In your Zoom account settings under In Meeting (Advanced), enable Waiting room by toggling it on to the right. A waiting room allows hosts to screen new participants before letting them join, which can help prevent disruptions or unexpected participants.

The waiting room setting toggled on to the right

Lock the Meeting

When you are actively in a meeting and all your expected participants have arrived, you can “lock” the meeting to prevent anyone else from joining. Click Participants at the bottom of the Zoom window, and select Lock Meeting.

The EARN IT Act Violates the Constitution


This post is by Sophia Cope from Deeplinks

Since senators introduced the EARN IT Act (S. 3398) in early March, EFF has called attention to the many ways in which the bill would be a disaster for Internet users’ free speech and security.

We’ve explained how the EARN IT Act could be used to drastically undermine encryption. Although the bill doesn’t use the word “encryption” in its text, it gives government officials like Attorney General William Barr the power to compel online service providers to break encryption or be exposed to potentially crushing legal liability.

The bill also violates the Constitution’s protections for free speech and privacy. As Congress considers the EARN IT Act—which would require online platforms to comply with to-be-determined “best practices” in order to preserve certain protections from criminal and civil liability for user-generated content under Section 230 (47 U.S.C. § 230)—it’s important to highlight the bill’s First and Fourth Amendment problems.

First Amendment

As we explained in a letter to Congress, the EARN IT Act violates the First Amendment in several ways.

1. The bill’s broad categories of “best practices” for online service providers amount to an impermissible regulation of editorial activity protected by the First Amendment.

The bill’s stated purpose is “to prevent, reduce, and respond to the online sexual exploitation of children.” However, it doesn’t directly target child sexual abuse material (CSAM, also referred to as child pornography) or child sex trafficking ads. (CSAM is universally condemned, and there is a broad framework of existing laws that seek to eradicate it, as we explain in the Fourth Amendment section below).

Instead, the bill would allow the government to go much further and regulate how online service providers operate their platforms and manage user-generated content—the very definition of editorial activity in the Internet age. Just as Congress cannot pass a law demanding news media cover specific stories or present the news a certain way, it similarly cannot direct how and whether online platforms host user-generated content.

2. The EARN IT Act’s selective removal of Section 230 immunity creates an unconstitutional condition.

Congress created Section 230 and, therefore, has wide authority to modify or repeal the law without violating the First Amendment (though as a policy matter, we don’t support that). However, the Supreme Court has said that the government may not condition the granting of a governmental privilege on individuals or entities doing things that amount to a violation of their First Amendment rights.

Thus, Congress may not selectively grant Section 230 immunity only to online platforms that comply with “best practices” that interfere with their First Amendment right to make editorial choices regarding their hosting of user-generated content.

3. The EARN IT Act fails strict scrutiny.

The bill seeks to hold online service providers responsible for a particular type of content and the choices they make regarding user-generated content, and so it must satisfy the strictest form of judicial scrutiny.

Although the content the EARN IT Act seeks to regulate is abhorrent and the government’s interest in stopping the creation and distribution of that content is compelling, the First Amendment still requires that the law be narrowly tailored to address those weighty concerns. Yet, given the bill’s broad scope, it will inevitably force online platforms to censor the constitutionally protected speech of their users.

Fourth Amendment

The EARN IT Act violates the Fourth Amendment by turning online platforms into government actors that search users’ accounts without a warrant based on probable cause.

The bill states, “Nothing in this Act or the amendments made by this Act shall be construed to require a provider of an interactive computer service to search, screen, or scan for instances of online child sexual exploitation.” Nevertheless, given the bill’s stated goal to, among other things, “prevent” online child sexual exploitation, it’s likely that the “best practices” will effectively coerce online platforms into proactively scanning users’ accounts for content such as CSAM or child sex trafficking ads.

Contrast this with what happens today: if an online service provider obtains actual knowledge of an apparent or imminent violation of anti-child pornography laws, it’s required to make a report to the National Center for Missing and Exploited Children’s (NCMEC) CyberTipline. NCMEC then forwards actionable reports to the appropriate law enforcement agencies.

Under this current statutory scheme, an influential decision by the U.S. Court of Appeals for the Tenth Circuit, written by then-Judge Neil Gorsuch, held that NCMEC is not simply an agent of the government; it is a government entity established by an act of Congress with unique powers and duties that are granted only to the government.

On the other hand, courts have largely rejected arguments that online service providers are agents of the government in this context. That’s because the government argues that companies voluntarily scan their own networks for private purposes, namely to ensure that their services stay safe for all users. Thus, courts typically rule that these scans are considered “private searches” that are not subject to the Fourth Amendment’s warrant requirement. Under this doctrine, NCMEC and law enforcement agencies also do not need a warrant to view users’ account content already searched by the companies.

However, the EARN IT Act’s “best practices” may effectively coerce online platforms into proactively scanning users’ accounts in order to keep the companies’ legal immunity under Section 230. Not only would this result in invasive scans that risk violating all users’ privacy and security, but companies would arguably become government agents subject to the Fourth Amendment. In analogous cases, courts have found private parties to be government agents when the “government knew of and acquiesced in the intrusive conduct” and “the party performing the search intended to assist law enforcement efforts or to further his own ends.”

Thus, to the extent that online service providers scan users’ accounts to comply with the EARN IT Act, and do so without a probable cause warrant, defendants would have a much stronger argument that these scans violate the Fourth Amendment. Given Congress’ goal of protecting children from online sexual exploitation, it should not risk the suppression of evidence by effectively coercing companies to scan their networks.

Next Steps

Presently, the EARN IT Act has been introduced in the Senate and assigned to the Senate Judiciary Committee, which held a hearing on March 11. The next step is for the committee to consider amendments during a markup proceeding (though given the current state of affairs it’s unclear when that will be). We urge you to contact your members of Congress and ask them to reject the bill.

Take Action

PROTECT OUR SPEECH AND SECURITY ONLINE

EFF to Supreme Court: Losing Your Phone Shouldn’t Mean You Lose Your Fourth Amendment Rights


This post is by Andrew Crocker from Deeplinks

You probably know the feeling: you reach for your phone only to realize it’s not where you thought it was. Total panic quickly sets in. If you’re like me (us), you don’t stop in the moment to think about why losing a phone is so scary. But the answer is clear: in addition to being an expensive gadget, your phone holds all your private stuff.  

Now imagine that the police find your phone. Should they be able to look through all that private stuff without a warrant? What if they believe you intentionally “abandoned” it? Last week, EFF filed an amicus brief in Small v. United States asking the Supreme Court to take on these questions.

In Small, police pursued a robbery suspect in a high-speed car chase near Baltimore, ending with a dramatic crash through the gates of the NSA’s campus in Fort Meade, Maryland. The suspect left his car, and officers searched the area. They quickly found some apparently discarded clothing, but many hours later they also found a cell phone on the ground, over a hundred feet from the clothing and the car. Despite the intervening time and the distance from the other items, the police believed that the phone also belonged to their suspect. So they looked through it and called one of the stored contacts, who eventually led them to the defendant, Mr. Small.

The Fourth Circuit Court of Appeals upheld this warrantless search of Small’s phone under the Fourth Amendment’s “abandonment doctrine.” This rule says that police don’t need a warrant to search and seize property that is abandoned, as determined by an objective assessment of facts known to the police at the time. Mr. Small filed a petition for certiorari, asking the Supreme Court to review the Fourth Circuit’s decision.

EFF’s brief in support of Small’s petition argues police shouldn’t be able to search a phone they find separated from its owner without a warrant. That’s because phones have an immense storage capacity, allowing people to carry around a comprehensive record of their lives stored on their phones. And if you’ve ever experienced that panicky feeling when you can’t find your phone, you know that, despite their intimate contents, phones are all too easy to lose. Even where someone truly chooses to abandon a phone, such as when they turn in an old phone to upgrade to a new one, they probably don’t intend to abandon any and all data that phone can store or access from the Internet—think of cloud storage, social media accounts, and the many other files accessible from your phone, but not actually located there. As a result, we argue phones are unlike any other object that individuals might carry with them and subsequently lose or even voluntarily abandon. Even when it’s arguable that the owner “abandoned” their cell phone, rather than simply misplacing it, police should be required to get a warrant to search it.

If this reasoning all sounds familiar, it’s because the Supreme Court relied on it in a landmark case involving the warrantless search of phones all the way back in 2014, in Riley v. California. Riley involved the warrantless searches of phones found on suspects during lawful arrests. Even though police can search items in a suspect’s pockets during an arrest to avoid destruction of evidence and identify any danger to the officers, the Court recognized in its opinion that phones are different: “Modern cell phones are not just another technological convenience. With all they contain and all they may reveal, they hold for many Americans ‘the privacies of life.’”  In a unanimous decision by Chief Justice Roberts, the Court wrote, “Our answer to the question of what police must do before searching a cell phone seized incident to an arrest is accordingly simple — get a warrant.”

Even though the warrant rule in Riley seemed clear and broadly applicable, the lower court in Small ruled it was limited to searches of phones found on suspects during an arrest. That’s not only a misreading of everything the Supreme Court said in Riley about why phones are different than other personal property, it’s also a bad rule that creates terrible incentives for law enforcement. It encourages warrantless searches of unattended phones, which are especially likely to lead to trawling through irrelevant and sensitive personal information.

Losing a phone is scary enough; we shouldn’t have to worry that it also means the government has free rein to look through it. We hope the Supreme Court agrees, and grants review in Small. A decision on the petition is expected by June.