Facial Recognition May Catch a Fever, But What About Privacy Rights?

Streets were silenced, businesses shuttered, and daily life reduced to hurried, masked visits to the grocery store. Such scenes seemed unthinkable, yet they became reality as governments worldwide scrambled to contain the outbreak of COVID-19 in 2020. While Western democratic leaders at first placed their faith in isolating known infections, many ultimately followed in the footsteps of less democratically inclined countries and resorted to nationwide lockdowns to avert an even deadlier crisis. The resulting images of silent metros, police checkpoints, and empty malls felt eerily like footage from a sci-fi thriller.

In the tumult that followed, societies began asking how to prevent similar lockdowns in the future. Some experts proposed harnessing disease surveillance or facial recognition technology (FRT) for widespread public health monitoring. Civil liberties advocates, however, were skeptical. Could monitoring body temperatures and tracking infected individuals through advanced cameras truly serve as a line of defense against resurgent outbreaks? Or would it pave the way to intrusive government surveillance?

As new discussions emerge around constitutional privacy protections, biometric data, and lessons from abroad, the story of facial recognition in public health raises one of the defining questions of our era: How far can societies go to protect collective well-being without sacrificing hard-won civil liberties?

A Wake-Up Call and a Dilemma

At the height of the pandemic, strict limitations on movement were often justified by emergency powers. Many worried that reacting too slowly would lead to tragic consequences, as it did during the deadly “second wave” of the 1918 Spanish Flu. Some thought that easing restrictions too quickly might trigger a surge of cases. Others warned that prolonged lockdowns could produce catastrophic fallout, from economic collapse to domestic abuse and mental health crises. The lockdowns thus sparked debate over whether milder alternatives existed.

Instead of widespread closures, governments found they could leverage technology to identify infected or high-risk individuals in real time, nudging them into isolation before new waves of infection took root. Controversial then and now, this approach holds immense power to prevent outbreaks but also tests the boundaries of privacy.

As COVID-19 raged, government agencies, grocery stores, and hospitality services relied on traditional thermometers for fever screening. Infrared cameras were also employed to estimate internal temperature by detecting energy emitted from the inner corner of the eye and processing the data through a machine-learning algorithm, a valuable tool in settings, such as crowded areas, where direct temperature checks are impractical. Many people questioned the state’s surveillance authority, with over 1,000 lawsuits filed by individuals, businesses, and religious organizations challenging COVID-19 community mitigation measures and public health orders. Some experts even expressed skepticism about the effectiveness of using FRT to fight the spread of the pandemic.
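To make the mechanics concrete, the sketch below shows, in simplified form, how such a system might convert an infrared reading from the inner eye corner into a fever flag. It is an illustration only: the linear model, its coefficients, the threshold, and the function names are assumptions standing in for the proprietary machine-learning algorithms that actual vendors train on labeled data.

```python
# Illustrative sketch only: a simplified stand-in for the pipeline
# described above. The coefficients, threshold, and function names
# are hypothetical, not any vendor's actual model.

def estimate_core_temp(canthus_temp_c: float, ambient_temp_c: float) -> float:
    """Estimate core body temperature from the inner-eye-corner (canthus)
    reading, correcting for ambient temperature with an assumed linear model."""
    # Hypothetical regression; real systems fit this on labeled data.
    return 0.9 * canthus_temp_c + 0.1 * ambient_temp_c + 3.0

def flag_for_screening(canthus_temp_c: float, ambient_temp_c: float,
                       fever_threshold_c: float = 38.0) -> bool:
    """Return True if the estimated core temperature suggests a fever."""
    return estimate_core_temp(canthus_temp_c, ambient_temp_c) >= fever_threshold_c

# A canthus reading of 35.6 C in a 22 C room maps to ~37.2 C: no flag.
print(flag_for_screening(35.6, 22.0))  # False
# A reading of 36.8 C maps to ~38.3 C: flagged for secondary screening.
print(flag_for_screening(36.8, 22.0))  # True
```

Even in this toy form, one design point stands out: the temperature estimate itself requires no stored face image, a distinction that becomes central to the privacy analysis below.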

While this article focuses on FRT in the US, the questions it raises about privacy, surveillance, and public health are increasingly relevant to countries around the world grappling with similar technological and ethical challenges.

No Clear Rules? The Courts Might Decide Instead

FRT remains largely unregulated at the federal level, leaving a patchwork of state and local laws. Illinois leads with its strict Biometric Information Privacy Act, which gives residents the right to sue over misuse of their data. California has debated restrictions, and cities such as San Francisco have banned government use of FRT outright, while proposed federal bills have stalled in Congress. In the absence of clear laws, some experts suggest the Federal Trade Commission could step in, using its authority over unfair or deceptive practices. With regulation fragmented across state and federal law, judicial precedent is ultimately what we are left to rely on.

The US Supreme Court has not ruled directly on FRT, but its decisions on GPS tracking, wiretapping, and cell phone searches offer clues about how the Fourth Amendment to the Constitution could apply. One key question is how far the government can go in accessing personal information from third parties—an issue partly addressed in Carpenter v. United States, which curtailed the so-called “third-party doctrine” for cell phone data but left many gray areas.

Critics argue that FRT goes far beyond ordinary surveillance because it can sweep up billions of images and reveal deep personal details about people’s locations, associations, and habits. Cases like Riley v. California, which stressed the vast amounts of private data stored on cell phones, highlight privacy risks posed by large-scale digital monitoring. While some legal scholars contend FRT might be unconstitutional, others argue existing case law could allow its use in public health—provided the government can prove it meets stringent Fourth Amendment requirements.

While the Supreme Court hasn’t resolved all digital privacy questions, it is clear that warrants must be based on probable cause, much like the legal threshold for quarantine orders. States must also scrutinize their own quarantine measures to ensure they are necessary while respecting individuals’ due process rights. The Model State Emergency Health Powers Act (MSEHPA) seeks to address gaps in state public health laws and provide clearer guidance for responding to bioterrorism. To meet due process requirements, state quarantine laws should pass a “means/ends test,” requiring the government to rigorously evaluate and defend the legality and effectiveness of each quarantine. Proponents of the MSEHPA have advocated for a mandatory hearing before a court, with legal representation for the individual, within three days of the quarantine’s implementation.

Similarly, if the government receives a reliable tip that an infected individual has violated a quarantine order, this could provide sufficient probable cause to believe the order was violated, justifying a warrant to use FRT and obtain location data to verify the claim. But as COVID-19 made painfully clear, time is critical during a pandemic. With thousands quarantined at once, courts would drown in paperwork, and requiring potentially contagious plaintiffs to attend hearings would be akin to inviting a virus to a networking event. In a pandemic, practicality sometimes beats perfection.

One way the government can bypass a warrant is through “exigent circumstances,” but as Alan Rozenshtein argues, courts generally interpret this exception narrowly and still require police to have probable cause that the underlying activity is occurring. As a result, it is unlikely to suffice for disease surveillance, which involves collecting ongoing data on a broad population, most of whom may not display clear symptoms. Rozenshtein suggests that the “special needs” doctrine is a better basis for justifying a disease surveillance program. Courts have occasionally allowed warrantless surveillance on less than probable cause when obtaining a warrant would be impractical, when the search served a purpose other than traditional law enforcement, and when it was deemed reasonable overall.

However, the Supreme Court’s evolving “reasonableness balancing” approach has made the special needs doctrine both powerful and unpredictable. Some courts weigh factors like the government’s immediate interest against the level of privacy intrusion, but outcomes vary widely. Vehicle checkpoints for drunk drivers are allowed, but checkpoints to catch drug smugglers are not; searches of students generally require individualized suspicion, yet students may face mandatory drug testing if they join sports or extracurriculars. The result is a patchwork of rulings and ongoing uncertainty.

One Size Doesn’t Fit All: Why FRT Must Be Evaluated Case by Case

When it comes to using facial recognition for public health, the “special needs” doctrine offers a potential legal framework, but it requires careful balancing between public safety and personal privacy. Historically, courts have granted broad leeway to public health authorities during crises, as shown by landmark rulings like Jacobson v. Massachusetts. During the COVID-19 pandemic, many courts deferred to the executive branch, upholding public health orders against legal challenges.

The second step of the “special needs” analysis is to assess the intrusion on the individual. FRT raises unique concerns: thermal scans reveal not just external features but also sensitive biometric information that can hint at pregnancy, menstrual cycles, or substance use. Critics warn this “over-the-skin” and “under-the-skin” surveillance blurs lines between genuine public health measures and intrusive data collection.

The last step is to evaluate whether (1) thermal detection is a reliable and effective tool for containing viral spread and, if so, (2) whether combining facial recognition with thermal detection is essential to achieving the government’s public health objectives. In other words, FRT may be used to spot infected individuals in crowded spaces only if it is proven both effective and necessary. Groups like the ACLU caution against “public health theater,” pushing for independent analysis to ensure these tools don’t erode civil liberties without clear benefits. Ultimately, any FRT rollout would need to be justified case by case, showing it is limited in scope, temporary, and capable of protecting people’s privacy, all while effectively serving its public health purpose.

If FRT can pass constitutional muster in the US, the next big question is how far it should go. What level of privacy intrusion is acceptable in the name of public health? To find answers, the US might look abroad, where governments have already tested FRT in real-world public health crises with very different results.

Europe and China: Two Worlds, Two Approaches to FRT

The European Union treats FRT as a serious privacy risk. Even during the pandemic, EU countries avoided using facial recognition for personal identification, limiting it to non-invasive purposes like monitoring mask compliance. Strict data protection laws under the General Data Protection Regulation (GDPR) require that biometric data be used only when absolutely necessary and only with strong safeguards.

Beyond establishing a legal basis for data processing under Article 6, controllers must also satisfy one of the justifications in Article 9, which specifically governs sensitive data. Article 9’s subsections provide the legal grounds under which FRT may be used.

Under Article 9(2)(e) of the GDPR, data can be processed if it’s “manifestly made public”—like a face captured in a public square. However, the European Data Protection Board has warned that just being in front of a camera doesn’t mean you’ve given up your right to privacy. Visibility is not the same as consent.

Other legal pathways exist. Article 9(2)(h) permits data use for “preventive medicine,” and Article 9(2)(j) or (g) allows it in the “public interest.” But there’s a catch: these pathways require strong backing from national or EU law and must pass a tough proportionality test proving the technology is effective and necessary and that no less invasive option exists. In many cases, authorities have decided that facial recognition simply doesn’t meet that standard. Even Article 9(2)(i), which focuses on public health, falls short without supporting legislation and alignment with other GDPR rules. This makes deploying FRT in the EU nearly impossible, especially when less intrusive tools are available.

European courts have made it clear: mass surveillance has no place in a democratic society. In landmark rulings, the EU’s top court struck down laws that allowed broad monitoring of electronic communications even for national security. The court also ruled that blanket retention of location and traffic data, even to fight serious crime, violates EU law.

Echoing this stance, the European Data Protection Board warned that facial recognition in public spaces poses a serious threat to fundamental rights, calling it a form of mass surveillance incompatible with democratic values.

In contrast, China embraced FRT as part of an aggressive public health response, integrating facial recognition with temperature scanning, QR health codes, and AI systems across transportation hubs and neighborhoods. China’s Personal Information Protection Law (PIPL) does impose limits on paper: under Articles 26, 28, and 30, FRT data must be collected only when absolutely necessary, for a clearly defined purpose, and with informed user consent, and entities must explain the risks and justify why the data is being gathered. In practice, however, vague definitions of “public security” allow broad application, raising questions about long-term surveillance and consent.

In 2021, a property company in Tianjin required residents to submit facial data for COVID-19 access control, offering no alternative. One resident sued, citing privacy concerns. While the lower court sided with the company, an appellate court reversed course, ruling that FRT use must include non-biometric options. With no evidence that the system actually tracked virus spread, the company was ordered to delete the plaintiff’s data and offer alternative access.

Where Do We Draw the Line? Crafting a Legal Blueprint for FRT in Public Health

As FRT edges closer to the frontlines of public health, the critical question becomes: how far is too far? To protect both civil liberties and collective safety, the US needs a clear legal framework, one that limits the reach of surveillance without undermining pandemic response.

Rather than greenlighting blanket data collection, such a framework would set strict conditions. FRT could only be used for a narrowly defined public health purpose, such as curbing the spread of a contagious disease, and only when less intrusive methods have failed. Its use would need to be temporary, transparent, and tied to demonstrable public benefit.

Transparency, consent, and data safeguards would be non-negotiable. People should be informed when FRT is in use, what it captures, how long the data will be stored, and when it will be deleted. Inspired by Illinois’ Biometric Information Privacy Act, the framework would require secure storage, mandatory deletion timelines, and legal accountability—meaning individuals could sue even without showing concrete harm.

Courts would play a central role. Borrowing from Fourth Amendment principles and doctrines like “special needs,” judges could weigh whether the government’s use of FRT is truly necessary and effective in reducing harm. The more invasive the technology, the stronger the evidence would need to be that it works.

The US could also learn from abroad. The EU demands a proportionality test for surveillance—ensuring that intrusions on personal privacy match the public benefit. Even China, despite broader surveillance norms, imposes rules on where and when FRT can be used and stresses transparency about its purpose.

A Five-Step Blueprint for Ethical FRT Use

Step 1: Clearly Define the Public Health Purpose and Necessity

Before deploying FRT, identify a specific public health emergency, such as a pandemic, and demonstrate that the technology is necessary. Less intrusive measures should already have been tried and have failed, or been shown insufficient to contain the threat, before resorting to FRT.

Step 2: Conduct a Proportionality and Reasonableness Assessment

Evaluate whether the privacy intrusion caused by FRT is proportionate to the public health risk. This includes considering Fourth Amendment principles and determining if the technology’s benefits, such as identifying infected individuals quickly, outweigh its impact on personal privacy and civil liberties.

Step 3: Implement Strict Data Minimization and Safeguards

Limit data collection only to what is essential for the defined public health purpose. Set strict boundaries on data retention, ensure secure storage and handling, and mandate timely deletion of information once the crisis subsides or FRT is no longer needed.
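As a rough illustration of what this retention discipline could look like in software, consider the sketch below. The record fields and the 14-day retention window are assumptions chosen for illustration, not requirements drawn from any statute.

```python
# A minimal sketch of Step 3's retention discipline; the fields and the
# retention window are illustrative assumptions, not legal requirements.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ScreeningRecord:
    subject_id: str        # pseudonymous ID, not a raw face image
    flagged: bool          # the fever flag alone: the minimum needed
    collected_at: datetime

RETENTION = timedelta(days=14)  # hypothetical incubation-period window

def purge_expired(records: list[ScreeningRecord],
                  now: datetime | None = None) -> list[ScreeningRecord]:
    """Drop every record older than the retention window, regardless of content."""
    now = now or datetime.now()
    return [r for r in records if now - r.collected_at <= RETENTION]
```

Note that minimization happens at two points: the record stores only a pseudonymous ID and a flag rather than a face image, and the purge runs on age alone, with no exception for data that might someday prove useful.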

Step 4: Ensure Transparency, Notice, and Public Oversight

Provide clear, accessible information to the public about when, where, and why FRT is used. Establish independent oversight mechanisms such as specialized review boards or data protection authorities to monitor compliance, field complaints, and maintain public trust.

Step 5: Periodic Review and Sunset Clauses

Include legal “sunset” provisions that automatically end FRT surveillance once the emergency passes or after a set period. Require periodic reviews and reassessments of necessity, efficacy, and proportionality, ensuring that any continued use of FRT remains justified under evolving circumstances.
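A sunset clause and review cycle can even be enforced in the deployment itself. The sketch below shows one way to do so; the sunset date and 90-day review interval are placeholders, not figures from any actual law.

```python
# A sketch of enforcing Step 5's sunset clause in software; the dates
# and review interval are hypothetical placeholders.
from datetime import date, timedelta

AUTHORIZED_UNTIL = date(2026, 6, 30)   # hypothetical statutory sunset
REVIEW_INTERVAL = timedelta(days=90)   # hypothetical periodic-review cycle

def surveillance_permitted(today: date, last_review: date) -> bool:
    """FRT may run only before the sunset date and with a current review on file."""
    if today > AUTHORIZED_UNTIL:
        return False  # the emergency authority has lapsed
    if today - last_review > REVIEW_INTERVAL:
        return False  # necessity and efficacy must be re-certified first
    return True
```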

What We Decide Now Matters Later

COVID-19 forced governments to make fast, high-stakes decisions. Facial recognition technology emerged as a potential asset but also a warning sign. Right now, US law offers more uncertainty than guidance, and we need to set these boundaries before the next crisis hits. If we are not careful, emergency tools could quietly become permanent fixtures of everyday life. The path forward is not about rejecting innovation; it is about ensuring that the tools we build to protect lives do not end up compromising the liberties we are trying to save.

Opinions expressed in JURIST Commentary are the sole responsibility of the author and do not necessarily reflect the views of JURIST's editors, staff, donors or the University of Pittsburgh.