On May 18, 2023, the Federal Trade Commission issued a Policy Statement on biometric information technologies, outlining the agency's approach to regulating the use of biometric information under Section 5 of the FTC Act.

The guidance addresses both unfair and deceptive acts, with examples of each. As an example of unfairness, the guidance cites the use of biometric technologies like facial or voice recognition to provide access to financial accounts despite the “potential for bias. . . [which] can lead or contribute to harmful or unlawful discrimination.” Examples of deceptive acts include making any false or misleading statements about the collection, use, or efficacy of biometric information, including omissions and “half-truths” like making “an affirmative statement about some purposes for which it will use biometric information but fail[ing] to disclose other material uses of the information.”

The statement defines “biometric information” as “data that depict or describe physical, biological, or behavioral traits, characteristics, or measurements of or relating to an identified or identifiable person’s body.” This definition covers any data directly or indirectly derived from someone’s body or a depiction of their body “to the extent that it would be reasonably possible to identify the person.” The policy announcement comes less than a month after the FTC issued a joint agency statement asserting that automated systems and innovative new technologies are fully covered by existing federal laws protecting civil rights, competition, consumer protection, and equal opportunity. These statements reflect and respond to a growing government-wide push for greater transparency and accountability in data collection and algorithmic decision making—particularly in response to the Biden administration’s Blueprint for an AI Bill of Rights.

Although the FTC’s Policy Statement only provides “a non-exhaustive list of examples of practices it will scrutinize,” the guidance indicates that “businesses should continually assess whether their use of biometric information or biometric information technologies causes or is likely to cause consumer injury in a manner that violates Section 5 of the FTC Act.” The FTC states in the guidance that potential violations of Section 5 will be assessed holistically, weighing practices such as:

  • Failing to assess foreseeable harms to consumers before collecting biometric information;
  • Failing to promptly address known or foreseeable risks;
  • Engaging in surreptitious and unexpected collection or use of biometric information;
  • Failing to evaluate the practices and capabilities of third parties;
  • Failing to provide appropriate training for employees and contractors; and
  • Failing to conduct ongoing monitoring of technologies that the business develops, offers for sale, or uses in connection with biometric information.

Any company collecting or using biometric information should be proactive and consider steps that may include regularly training employees and third parties, actively notifying and updating consumers on its data policies, and implementing regular audits of any biometric technology it develops or uses.

The FTC has announced the Policy Statement at a moment when the agency has shown a willingness to enforce the laws within its jurisdiction. Three recent settlements, two brought under Section 5 and one under the Children’s Online Privacy Protection Act (COPPA) Rule, illustrate the high costs of improper biometric data collection and usage.

In 2019, the FTC imposed a $5 billion penalty on Facebook, Inc., for violating “a 2012 FTC order by deceiving users about their ability to control the privacy of their personal information.” The complaint focused in particular on Facebook’s data-sharing with third-party apps and the company’s failure to act against apps that it knew were violating its privacy policies. The order also requires Facebook to overhaul its privacy decision-making processes and submit to heightened compliance monitoring for 20 years.

In 2021, a photo storage service called Everalbum, Inc., settled with the FTC over allegations that it deceived consumers about its data retention policies and its use of facial recognition technology. As part of the settlement, Everalbum was required to delete not only the photos and videos of deactivated users, but also any models and algorithms developed using user-uploaded images. The company ultimately shut down because it could not compete with Apple’s and Google’s photo storage services.

Most recently, a 2022 settlement order required Kurbo, Inc., a weight loss app marketed to children, to delete all illegally collected data on children under 13, destroy any algorithms developed from that data, and pay a $1.5 million penalty. The case was brought under the FTC’s COPPA Rule rather than Section 5, and it further underscores the seriousness with which the agency is pursuing privacy violations and deceptive practices related to biometric data.

The FTC is sending a clear message that it plans to use Section 5 and other rules to regulate biometric data privacy. To avoid liability, companies will have to carefully ensure that their data policies align with their conduct and keep consumers informed of any changes.

Special thanks to Meaghan Katz, a 2023 Summer Intern at Crowell & Moring, for her assistance in the writing of this piece.

Kristin Madigan

Kristin J. Madigan is a partner in Crowell & Moring’s San Francisco office and a member of the firm’s Litigation and Privacy & Cybersecurity groups. Kristin focuses her practice on representing clients in high-stakes complex litigation with a focus on technology, as well as privacy and consumer protection matters including product counseling, compliance, investigations, enforcement, and litigation that typically involves existing and emerging technologies. In addition, Kristin is well-versed in and counsels clients on California Consumer Privacy Act (CCPA) compliance. Kristin is a Certified Information Privacy Professional/United States (CIPP/US).

Jacob Canter

Jacob Canter is an attorney in the San Francisco office of Crowell & Moring. He is a member of the Litigation and Privacy & Cybersecurity groups. Jacob’s areas of emphasis include technology-related litigation, involving competition, cybersecurity and digital crimes, copyright, trademark, and patent, as well as general complex commercial matters.

Jacob graduated from the University of California, Berkeley School of Law in 2018, where he launched Berkeley’s election law outreach program and pro bono project. He joins the firm after a year of practice at an international law firm in Washington, D.C., and a year clerking in the Southern District of New York for the Hon. Lorna G. Schofield. During the clerkship, Jacob was exposed to and provided support in a variety of complex substantive and procedural legal topics, including trade secrets, insurance/reinsurance, contracts, class actions, privacy, intellectual property, and arbitrability.