Argument
An expert's point of view on a current event.

Google Wants Your Data in Exchange for a Coronavirus Test

Public health shouldn’t mean surrendering privacy to Silicon Valley.

By Faine Greenwood, an expert on unmanned aerial vehicles, technology in humanitarian aid, remote sensing, spatial data, and data policy and ethics.
U.S. President Donald Trump holds up tweets from Google as he speaks during a press briefing about the coronavirus at the White House on March 15. JIM WATSON/AFP via Getty Images

While U.S. President Donald Trump may have mangled the details in his press conference, Google (via its Alphabet sister company Verily) really has launched a coronavirus screening tool. The website, which was developed in collaboration with the state of California, was rolled out on March 16 and currently offers coronavirus testing services in four counties. At first glance, it’s simple. The site runs users through a series of screening questions via the company’s Project Baseline health data collection platform. If the system deems them eligible, they’re allowed to make an appointment for a much-coveted coronavirus test.

There’s just one catch. Users must have a Google account to use the screening tool. If you’re sitting at home wondering if your cough is seasonal allergies or COVID-19, you probably think this sounds like a good deal. And it is a deal, because Google didn’t launch its screening tool out of altruism. It’s doing so, at least in part, because it wants access to your health data, as part of the company’s intense push into the health care business.

Thanks to the coronavirus pandemic, the company has a new way to get your information.

With the blessing of both federal and state governments, Verily has set up a system where people must choose between sharing their health data with the company and, practically speaking, not getting a coronavirus test. That’s no choice at all, given the stakes of not complying. And there are plenty of ways that Verily—and its corporate parent—might put your health care data to use. In its frequently asked questions, Verily’s website notes that the information users provide may be shared with a long list of other parties, including health care professionals, clinical labs, the California Department of Public Health, and federal, state, and local health authorities. It may “also be shared with certain service providers engaged to perform services on behalf of Verily, including Google.”

While the Verily website clearly states that your data “will never be joined with your data stored in Google products without your explicit permission,” that wording implies Google may ask you to give that permission at some point in the future—and it almost certainly will, because Google is well aware of the immense value of joining huge data sets to surface new insights and correlations. There is also little, legally speaking, that stops Google from joining those data sets together anyway. (And while Google does allow you to delete your data, it’s unclear how that will apply here—or how many people actually know that’s even a possibility.)

Take it or leave it—and don’t you really want to know if that cough might kill you?

Google’s ability to, in essence, force users to consent to data collection may become a more common tactic for companies and governments as the pandemic rolls on and they scramble to use technology to more effectively (and, most likely, profitably) stop it. The hope of a return to some kind of normality after the lockdowns depends on the ability to trace and track cases and limit new outbreaks. In the coming months, people worldwide are going to be asked to trust that tech companies and governments have our best interests in mind when they collect our data and track our movements.

Unfortunately—and frighteningly—Google hasn’t earned that trust. Google’s recent health care push has been plagued with missteps and public criticism. In 2017, the U.K. Information Commissioner’s Office ruled that London’s Royal Free Hospital violated the nation’s Data Protection Act after it handed over 1.6 million patients’ personal data to the Google subsidiary DeepMind. While DeepMind’s co-founder promised in 2016 that the company would never link or associate patient data with Google’s other products (a claim that sounds very similar to the one Verily is making now), that promise went out the window in 2018, when Google fully absorbed DeepMind’s app and the data that went along with it. In November 2019, some users tossed their Fitbits in fear (and to little avail) after Google acquired both the company and its vast stores of health information. Soon afterward, a whistleblower sounded the alarm about Google’s Project Nightingale, a partnership between the company and the enormous health nonprofit Ascension: The agreement gave Google access to the health data of millions of Americans and concerned regulators so much that they launched a federal inquiry. In this context, Google’s pushy coronavirus testing tool looks like a very clever business play indeed.

Is being coerced into giving up your data—in exchange for a coronavirus test—really such a bad thing? It certainly can be. Health data breaches are dangerous, and merely collecting health data and holding onto it exposes the data subject (you, me, everyone) to risk. As the health law expert Charlotte Tschider pointed out in an interview, “People are discriminated against every day based on health conditions, and we’ve seen how quickly many people who have COVID-19 are identified to media. There’s a very real risk of someone being identified as patient zero or 10 or whatever and potentially getting some attention.”

This very scenario has played out in South Korea, where public health authorities have used mobile phone data to track the pre-diagnosis movements of people infected with the coronavirus. Authorities sent out mass texts describing these movements to the general public as a way of alerting them to potential infection risks. Unfortunately, the data in the texts was badly anonymized, and a number of people (and their movements) were quickly reidentified online—leading to embarrassing speculation about affairs, plastic surgery procedures, and even insurance fraud. U.S. coronavirus survivors have also reported suffering social stigma, public shaming, and hate mail on their return home, while in Ukraine, protesters attacked buses carrying evacuees from China to a quarantine center.

Data coercion can go a lot further than merely controlling your access to a coronavirus test. In a dramatic example, the Chinese government is partnering with the popular Alipay wallet app to roll out a mandatory coronavirus quarantine app, which uses a questionnaire to assign people a color-coded health status. (Green means you’re free to move about, while red means a mandatory two-week quarantine.) Across most major Chinese cities, citizens must show a green code at checkpoints if they wish to get through; location data and other information are also shared with the police and other authorities. Multiple competing apps, some created by local governments, are making the process exceptionally confusing as different levels of government attempt different approaches.

Google’s tactic of linking access to COVID-19 testing to giving up one’s data also isn’t new. It’s simply one of the few instances where this type of data coercion has happened to relatively privileged people. Refugees, people of color, the homeless, and other marginalized populations are, unfortunately, very familiar with being forced to give up privacy and personal information in exchange for the goods and services they need to stay alive, in what some refer to as an ever-growing “privacy divide”—a divide the pandemic may narrow in the worst way. Just as tech is leveraged to control the movements of marginalized people today, it may soon be leveraged to control the movements of the more privileged, with the coronavirus as the justification.

Verily’s coronavirus screening tool isn’t just coercive, though. It’s also an experiment: a hastily rolled-out and presumably largely untested effort to identify high-risk patients and get them tested. As tech companies and governments around the world scramble to form partnerships to craft innovative responses to the pandemic, we are all likely to become nonconsenting (or unsuspecting) subjects. And in most cases, we won’t be able to meaningfully opt out.

Crisis is often used as a justification for rolling out untested new technologies and ideas in a process of humanitarian experimentation, in which the usual rules and oversights are tossed out in favor of supposedly life-saving haste—and the potential dangers and unexpected consequences of those new technologies are largely ignored.

Usually, this type of aid-through-innovation happens to vulnerable people in less wealthy countries, as we saw during the 2014 Ebola outbreak, when aid, research, and tech organizations used the crisis to justify using West Africans’ mobile data in an (ineffective) attempt to track the spread of the disease. Millions of people’s privacy was violated, and as recent studies indicate, this type of call-detail record tracking may not even work that well for pandemics—but the people surveilled had little recourse to push back, and resources that should have gone directly to fighting the disease were put toward tech solutions instead.

The coronavirus pandemic may—if tech companies and surveillance-curious governments get their way—extend this type of tech-driven experimentation to just about everyone. “Lots of people don’t have nomenclature for being experimented on by your government during disaster,” said the privacy expert Sean Martin McDonald. “Now we are probably going to have to develop it.”

There are some ways we can push back. As governments and tech companies roll out new, so-called innovative means of tracking the coronavirus, the public should approach them with a critical eye—and defend the value of privacy and human rights protections during crisis situations. “Describing privacy as a trivial thing avoids the very real threats that it presents to people’s safety,” said Lindsey Barrett, a privacy law expert at Georgetown University. “If we’re trying to address one safety concern by making people vulnerable in another way—if we’re saying that privacy is stupid and anyone trying to criticize these measures is wearing a tin-foil hat—that’s disingenuous and unproductive.”

Of course, the overwhelming nature of the crisis makes it tempting to carve out exceptions, just as the public is being forced to accept the loss of its freedom of movement—at least temporarily. But security tools have a way of embedding themselves permanently. That’s why these new surveillance and data collection tools must come with clear sunset clauses (as suggested by Tilburg University’s Linnet Taylor) and other legal mechanisms that will force them to, eventually, end. In the absence of these protections, new surveillance methods launched during the pandemic may never go away: Tech surveillance will become a one-way ratchet, a dial of control that can only be turned up. Google should not quietly become a humdrum partner of public health authorities across the United States while the pandemic goes on—and if the government and corporations do end up tracking people’s movements using phone data, those efforts should have a clearly defined end date and clear guidelines for deleting all that information when it is no longer needed.

Inevitably, the world will be a very different, sadder place when the pandemic ends. It may also be much less free, a place where both tech companies and governments are capable of using data to strengthen their control over the average person, in ways that are difficult even to imagine today. If we want to stop pandemic profiteers from getting far more access to the most intimate details of our lives, then we need to stay vigilant—and we need to act now.

Faine Greenwood is an expert on unmanned aerial vehicles, technology in humanitarian aid, remote sensing, spatial data, and data policy and ethics. Greenwood’s current research centers on civilian drone technology and the new opportunities and the operational and ethical challenges that drones and the spatial data they collect present to people affected by disaster and conflict.
