On May 18, 2023, the Federal Trade Commission issued a Policy Statement on biometric information technologies, outlining the agency’s approach to regulating the use of biometric information under Section 5 of the FTC Act.

The guidance addresses both unfair and deceptive acts and provides examples of each. As an example of unfairness, the guidance cites the use of biometric technologies like facial or voice recognition to provide access to financial accounts despite the “potential for bias. . . [which] can lead or contribute to harmful or unlawful discrimination.” Examples of deceptive acts include making false or misleading statements about the collection, use, or efficacy of biometric information, including omissions and “half-truths,” such as making “an affirmative statement about some purposes for which it will use biometric information but fail[ing] to disclose other material uses of the information.”

The statement defines “biometric information” as “data that depict or describe physical, biological, or behavioral traits, characteristics, or measurements of or relating to an identified or identifiable person’s body.” This definition covers any data directly or indirectly derived from someone’s body or a depiction of their body “to the extent that it would be reasonably possible to identify the person.” The policy announcement came less than a month after the FTC issued a joint agency statement asserting that automated systems and innovative new technologies are fully covered by existing federal laws protecting civil rights, competition, consumer protection, and equal opportunity. These statements reflect and respond to a growing government-wide push for greater transparency and accountability in data collection and algorithmic decision making, particularly in response to the Biden administration’s Blueprint for an AI Bill of Rights.

Although the FTC’s Policy Statement provides only “a non-exhaustive list of examples of practices it will scrutinize,” the guidance indicates that “businesses should continually assess whether their use of biometric information or biometric information technologies causes or is likely to cause consumer injury in a manner that violates Section 5 of the FTC Act.” The FTC states in the guidance that potential violations of Section 5 will be assessed holistically, weighing practices such as:

  • Failing to assess foreseeable harms to consumers before collecting biometric information;
  • Failing to promptly address known or foreseeable risks;
  • Engaging in surreptitious and unexpected collection or use of biometric information;
  • Failing to evaluate the practices and capabilities of third parties;
  • Failing to provide appropriate training for employees and contractors; and
  • Failing to conduct ongoing monitoring of technologies that the business develops, offers for sale, or uses in connection with biometric information.

Any company collecting or using biometric information should be proactive and consider steps that may include regularly training employees and third parties, actively notifying and updating consumers on its data policies, and implementing regular audits of any biometric technology it develops or uses.

The FTC announced the Policy Statement at a moment when the agency has shown a willingness to enforce the laws within its jurisdiction. Three recent settlements, two brought under Section 5 and one under the Children’s Online Privacy Protection Act (COPPA) Rule, illustrate the high costs of improper biometric data collection and usage.

In 2019, the FTC imposed a $5 billion penalty on Facebook, Inc., for violating “a 2012 FTC order by deceiving users about their ability to control the privacy of their personal information.” The complaint focused in particular on Facebook’s data-sharing with third-party apps and the company’s failure to act against apps that it knew were violating its privacy policies. The order also required Facebook to overhaul its privacy decision-making processes and submit to heightened compliance monitoring for 20 years.

In 2021, a photo storage service called Everalbum, Inc., settled with the FTC for allegedly deceiving consumers about its data retention policies and its use of facial recognition technology. As part of the settlement, Everalbum was required to delete not only the photos and videos of deactivated users, but also any models and algorithms developed using user-uploaded images. The company ended up shutting down because it could not compete with Apple and Google’s photo storage services.  

Most recently, a 2022 settlement order forced Kurbo, Inc., operator of a weight loss app marketed to children, to delete all illegally collected data on children under 13, destroy any algorithms developed from that data, and pay a $1.5 million penalty. The case was brought under the FTC’s COPPA Rule rather than Section 5 and further underscores the seriousness with which the agency is pursuing privacy violations and deceptive practices related to biometric data.

The FTC is sending a clear message that it plans to use Section 5 and other rules to regulate biometric data privacy. To avoid liability, companies will have to carefully ensure that their data policies align with their conduct and give consumers clear notice of any changes.

Special thanks to Meaghan Katz, Crowell’s 2023 Summer Intern, for their assistance in writing this piece.

On March 2, 2023, the Biden-Harris Administration released the National Cybersecurity Strategy.[i] The highly anticipated Strategy signals that a more overt and aggressive approach to mitigating cyber risks may be necessary to drive real change, and it is expected to bring increased communication and partnership between private companies and government agencies.[ii] The Strategy sets a strategic objective of “enhancing public-private operational collaboration to disrupt adversaries,” including sharing insights between private organizations and government agencies and pushing private companies to come together and organize their efforts through nonprofit organizations.[iii]

The Strategy highlights the government’s commitment to investing in cybersecurity research and new technologies to protect the nation’s security and improve critical infrastructure defenses. It outlines five pillars of action, each of which implicates critical infrastructure entities, from strengthening their cybersecurity processes to receiving support from the federal government.[iv] It also makes evident the Administration’s desire to shift the burden of cybersecurity (and its associated costs and liability) from individuals, small businesses, and local government to the entities with the greatest expertise and resources, e.g., large owners and operators of critical infrastructure, vendors, and software developers.[v]

Companies evaluating their alignment with the Strategy may also consider their law enforcement and government agency relationships. These include: i) assessing how the Strategy affects interactions between victim companies and their counsel, on the one hand, and the Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA), on the other, when seeking assistance with cybersecurity challenges, and ii) preparing for the new expectation of agency involvement in the private sector when it comes to cybersecurity.

“Private companies and their legal counsel can take several steps now to ensure they create a positive relationship with agencies ahead of new regulation expected to follow the National Cybersecurity Strategy,” says Brian Hale, a former FBI Assistant Director of the Office of Public Affairs and current Managing Director in FTI Consulting’s Cybersecurity practice, who has helped companies with cybersecurity challenges from both a government and a private sector perspective. Some of these actions include:

  • Form Connections. Be familiar with the lead cybersecurity FBI agent(s) in the local FBI field office before an incident occurs and develop a relationship.
  • Attend Outreach Events. Agencies like the FBI and CISA often host outreach events to meet with companies and counsel in their area or participate as panelists and presenters at industry functions.[vi]
  • Keep Track of Announcements. Stay up to date with the latest messaging released from the FBI, CISA, and other agencies regarding cybersecurity best practices and regulations. This also includes remaining current on any potential threats and new requirements announced that can help prepare organizations for cybersecurity incidents.
  • Leverage Industry Groups, such as InfraGard. This nonprofit is a partnership between the FBI and the U.S. private sector, created to protect critical infrastructure with the common goal of “advancing national security.”[vii]

Through plans to increase the defense of critical infrastructure and to partner on sector-specific cybersecurity requirements, the National Cybersecurity Strategy emphasizes that relationships and communication between the public and private sectors remain paramount in achieving the common goal of minimizing cybersecurity risk. Plans to shift more responsibility for cybersecurity onto the organizations best positioned to handle this risk, like government agencies, will result in better protection from threat actors for individuals and small businesses, but will only be successful if proper streams of information and trust between the public and private sectors are established.

Furthermore, the Strategy encourages the forging of international partnerships to pursue shared goals. This includes building coalitions to counter threats to the digital ecosystem, strengthening international partner capacity, expanding U.S. ability to assist allies and partners, building coalitions to reinforce global norms of responsible state behavior, and securing global supply chains for information, communications, and operational technology products and services.

Whether an organization is in the public or private sector, its cybersecurity program will undoubtedly be impacted by the National Cybersecurity Strategy.

For a more detailed summary and analysis of the National Cybersecurity Strategy, Crowell examines the Strategy in a March 2023 client alert.[viii]


The views expressed herein are those of the author(s) and not necessarily the views of FTI Consulting, Inc., its management, its subsidiaries, its affiliates, or its other professionals. FTI Consulting, Inc., including its subsidiaries and affiliates, is a consulting firm and is not a certified public accounting firm or a law firm.

FTI Consulting is an independent global business advisory firm dedicated to helping organizations manage change, mitigate risk and resolve disputes: financial, legal, operational, political & regulatory, reputational and transactional. FTI Consulting professionals, located in all major business centers throughout the world, work closely with clients to anticipate, illuminate and overcome complex business challenges and opportunities. ©2023 FTI Consulting, Inc. All rights reserved. fticonsulting.com

Crowell & Moring LLP is an international law firm with offices in the United States, Europe, MENA, and Asia. Drawing on significant government, business, industry and legal experience, the firm helps clients capitalize on opportunities and provides creative solutions to complex litigation and arbitration, regulatory and policy, and corporate and transactional issues. The firm is consistently recognized for its commitment to pro bono service and its programs and initiatives to advance diversity, equity and inclusion.


[i] “National Cybersecurity Strategy,” The White House (March 2023), https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf.

[ii] Id.

[iii] Id.

[iv] Id.

[v] Id.

[vi] “Community Relations,” Federal Bureau of Investigation (March 2023), https://www.fbi.gov/how-we-can-help-you/outreach.

[vii] “Welcome to InfraGard,” InfraGard (March 2023), https://www.infragard.org/.

[viii] “Biden Administration Releases Comprehensive National Cybersecurity Strategy,” Crowell & Moring (March 6, 2023), https://www.crowell.com/en/insights/client-alerts/biden-administration-releases-comprehensive-national-cybersecurity-strategy.

Ever since the public launch of OpenAI’s ChatGPT, the world has been gasping at the astonishing accomplishments of this generative AI chatbot:  a simple “prompt” in the form of a question (“which are the most important decisions of the CJEU in copyright?”) will receive a credible response within seconds (“The Court of Justice of the European Union (CJEU) has issued several important decisions in the field of copyright law. While it is challenging to determine a definitive list of the most important decisions, here are some key rulings that have had significant impact” and it goes on to list some of the CJEU’s best-known decisions, such as Infopaq, UsedSoft, Svensson, Deckmyn, ACI Adam, GS Media and YouTube).

Impressive for sure and, although the information is not always reliable (ChatGPT has been reported to invent legal precedents, to the embarrassment of the lawyers who have submitted briefs on that basis…), companies recognise the appeal of AI-powered chatbots – they are here to stay.  To avoid letting these applications in as legal Trojan horses, in-house counsel would do well to identify the legal risks of this new technology:  racial, sexual, and other bias that may induce discriminatory acts, as well as misinformation, are well-documented and important hurdles to the widespread adoption of AI solutions in a corporate environment.  In this post, we will, however, address some of the concerns relating to copyright, trade secrets and the protection of personal data.

Copyright and the protection of trade secrets may complicate AI applications in two respects:  the use of “input” data and the “output” of the AI solution.

The algorithms of the AI solution are “trained” using datasets that may contain content protected under copyright or related rights (such as performances, recordings, or databases).  Similarly, such protected content may be present in the prompts that the user submits to the AI-powered solution.  Keeping in mind the broad interpretation that the CJEU has given to the reproduction right, the copies made of these datasets may be seen as “reproductions” and consequently require prior authorisation from the authors and holders of related rights – unless the use is covered by one of the (harmonised) legal exceptions.

Under the Information Society Directive N° 2001/29, the exceptions for temporary acts of reproduction or the research exception may have exempted some uses, but these provisions were considered insufficient to create the legal certainty required to stimulate the development of innovative technologies, such as AI. With the Copyright in the Digital Single Market Directive N° 2019/790 (“DSM Dir”), two new exceptions were introduced for “text and data mining” (“TDM”), i.e. “any automated analytical technique aimed at analysing text and data in digital form in order to generate information which includes but is not limited to patterns, trends and correlations” (art. 1(2) DSM Dir).  Text and data mining is permitted without the rightholders’ prior consent in two cases:

  1. TDM for scientific research (art. 3 DSM Dir):  a research organisation (art. 2(1) DSM Dir) or cultural heritage institution (art. 2(3) DSM Dir) may reproduce or extract the protected content in a TDM process, for the purpose of scientific research under this exception – provided that they have “lawful access”.
  2. TDM for other purposes (art. 4 DSM Dir):  other users may reproduce or extract protected lawfully accessible works (including software) and other content in a TDM process, for other purposes – provided that the rightholders have not “reserved” the use of the works or other subject matter “in an appropriate manner, such as machine-readable means in the case of content made publicly available online”. 

These exceptions have been transposed in the Belgian Code of Economic Law (art. 190, 20° and 191/1, 7° CEL).  Important challenges will remain: especially the modalities for the “opt-out” of the general TDM exception that the different rightholders may exercise are not standardised (yet) and, as the TDM exceptions will be implemented in 27 member states, there may be national variations.  In addition, authors and performers may enjoy moral rights in the member states, which are not harmonised under the DSM Directive.

In the meantime, technical responses to this web-wide crawling are being developed (such as Have I Been Trained?) to find out whether a particular file has been used.  Some AI providers are proposing mechanisms to give authors some control over the use of their works (e.g. Stability AI) – but it is uncertain whether they suffice to comply with article 4 DSM Dir.

While the protection of trade secrets is arguably less of an issue when an AI solution is trained using publicly accessible datasets, it becomes a concern where employees include their employer’s confidential information in the “prompts” they submit to an AI chatbot.

Under the Trade Secrets Directive N° 2016/943 (“TS Dir”), the acquisition, use or disclosure of a trade secret without the consent of the trade secret holder is unlawful (art. 4 TS Dir).  Logically, the provider of the AI-powered chatbot is likely to put all responsibility for the prompts on the user’s side (e.g. OpenAI requests users not to submit any sensitive information and to permit the use of all input content to provide and maintain the service in its terms of use).  It is then for the user to make sure that their prompts contain no trade secrets or confidential information of their employer or of third parties that their employer holds under a confidentiality agreement. 

While the mere transfer of sensitive information to a service provider is unlikely to affect the secret nature of the “trade secret”, it may go against the confidentiality policy or violate the conditions of a confidentiality agreement with a supplier, a client or a partner, as a copy of the confidential information will be out of the trade secret holder’s control (i.e. stored on the servers of the AI provider).

As to the AI-generated output, it may infringe copyright if the original traits of a protected work can be recognised in the output.  In most cases, however, AI creations imitate the style of a musician, a painter or a photographer.  Elements of style are, however, considered “ideas” and consequently not protected under copyright.  By contrast, where the AI output imitates the voice or other features of singers or actors, the latter may rely upon their personality rights and image rights to oppose the use of their appearance.

Lastly, the AI-generated output may itself be protected under various rights.  While traditional copyright typically requires the creative input of a human author and will not be available for AI-productions without human intervention (regardless of questions of evidence), such requirement is absent under the related rights – in particular the protection of phonograms or first fixations of films.  This means that no author can control the reuse of AI-generated output on the basis of copyright, but the producers of AI-produced audio- or audiovisual content may have the right to prohibit the reproduction or communication to the public or to conclude licences for their productions.

Another important legal concern is the protection of personal data.  As organisations increasingly turn to AI-powered chatbots to enhance operations and customer experiences, data protection issues have come to the forefront.  Notably, the Italian Data Protection Authority identified sufficient violations of the GDPR to temporarily ban the use of ChatGPT in Italy. However, after addressing the concerns raised by the Italian Authority, OpenAI reactivated access again at the end of April 2023.  In the same vein, the European Data Protection Board established a dedicated task force to address data protection concerns related to ChatGPT.  These actions underscore the importance of considering data protection when deploying AI chatbots.

AI chatbots process personal data during the training of AI models and in the real-time interactions with users.  One key concern is the need for organisations to establish a valid legal basis for processing personal data, which can include, for example, consent, legitimate interest, or contractual obligations.  Another requirement is transparency of the data processing:  organisations need to provide easily understandable information to data subjects, clearly explaining how their personal data is processed within AI-powered chatbot systems.  In addition to these core concerns, other issues may arise relating to the need for age verification systems to mitigate risks associated with inappropriate interactions with minors, as well as the implementation of robust security measures to protect personal data from data breaches and unauthorised access.

The data protection analysis will depend on the precise technical features that AI-chatbot organisations will actually deploy.  ChatGPT, for instance, offers various usage scenarios, including the use of the web version, API use by developers, and the recently introduced ChatGPT plugins.  Each scenario has different implications for data protection and different roles and responsibilities of the involved actors.

The first scenario covers the regular use of the web version of ChatGPT.  In this case, the chatbot is used as developed and offered by OpenAI.  For this web version, OpenAI and the users are the primary actors.  OpenAI acts as a controller both for training the models and for processing user requests.  However, organisations using ChatGPT in their workflow need to be cautious about potentially processing personal data, especially as user input is retained by default for training purposes.  Compliance with data protection regulations becomes crucial in this context.

The second scenario involves API users, i.e. developers.  An API user obtains an OpenAI API key and with it gains additional control over the AI model.  API users can refine the ChatGPT models based on their own needs and can deploy the refined models on a standalone basis or integrate ChatGPT into their own products.  In this case, developers act as controllers for the processing of personal data.  OpenAI provides a data processing addendum to API users, qualifying itself as a processor.  However, this qualification may raise questions due to the control exerted by OpenAI.

The third scenario concerns ChatGPT plugins, which enable access to third-party knowledge sources and databases.  The plugin functionality allows ChatGPT to consider third-party information from these sources in its generated responses.  In this case, according to OpenAI, both the third-party creator of the plugin and OpenAI act as separate and independent controllers of personal data.  Also in this case, this qualification may raise questions, and further examination by the task force set up by the European Data Protection Board is eagerly anticipated.

Some takeaways for organisations that want to assess the legal risks resulting from the use of AI-powered tools in a professional context:

It is important to raise awareness within AI-using organisations, i.e. among their company lawyers, employees, freelancers and other partners, and to assess whether a company policy would be useful.  A non-representative poll during the IBJ webinar of 5 May 2023 indicated that AI-powered chatbots are already commonly used in a professional context (50% of respondents confirmed such use) and that only a minority has a policy in place (3% stated that their organisation prohibits the use of AI tools, 19% that it permits use within certain limits, and 77% that it has no policy at all).

Where AI-using organisations establish a policy on the use of AI tools for professional purposes, they may consider the following points:

  • Developers of AI solutions may use all web-accessible content to train their algorithms.  Organisations that do not wish their content to be used for these purposes may look into technical, organisational and contractual means of reserving their rights against TDM processes (e.g. by finetuning their robots.txt instructions or other metadata).  Creators of high-value content in particular, such as broadcasters, music producers or newspaper publishers, may want to look into the appropriate expression of their opt-out under all available (copyright or related) rights.
  • Organisations ought to assess the risk that their employees, freelancers or other partners transmit copyright-protected content or confidential information (belonging to the organisation or to third parties) in a prompt to an AI-powered tool and, if useful, address such risks in clear guidelines.
  • Where important confidential information is at stake, it may be worth revising confidentiality clauses in contracts with third parties (partners, suppliers, customers) to whom such information is disclosed, to explicitly prohibit the use of third-party AI tools without explicit contractual guarantees.
  • Where the organisation intends to use AI-generated output in any way that requires some form of exclusivity, it ought to verify whether it can exercise any statutory exclusive rights (such as producers’ rights) and, where applicable, settle such rights with the AI provider.  Where no such statutory rights exist, it may want to organise contractual protection to control the use of the AI-generated output.
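One common technical expression of such a rights reservation, pending standardisation of the “machine-readable means” referred to in article 4 DSM Dir, is a robots.txt rule addressed to known AI crawlers.  A hedged sketch (GPTBot and CCBot are the documented crawler tokens of OpenAI and Common Crawl; whether such a rule suffices as an article 4 reservation remains legally untested):

```
# robots.txt – sketch of a machine-readable reservation against AI/TDM crawling
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

The draft W3C TDM Reservation Protocol (TDMRep) proposes further machine-readable signals, such as a tdm-reservation HTML meta tag or HTTP header, but no mechanism has yet been confirmed as sufficient under the Directive.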

Also from a data protection perspective, AI-using organisations should ensure that they have the necessary contractual arrangements in place, for example a data processing agreement or another data protection agreement with the AI chatbot provider.  This agreement should clearly outline the responsibilities of both parties and stipulate that the provider complies with all applicable data protection laws, including the GDPR.  If there are any international data transfers, the organisation should make sure that they rely on a valid transfer mechanism and that the necessary transfer protocols are in place.  It is also recommended that AI-using organisations conduct a data protection impact assessment and, if needed, a transfer impact assessment before allowing the use of AI chatbots in their organisation.  Internal rules on the use of personal data may need to be refined to establish guidelines for the proper use of AI-powered chatbots by employees, including rules against sharing personal data, particularly sensitive and special categories of personal data, through the chatbots.

AI developers, on the other hand, must be alert to rightholders’ expressions of their wish to reserve their rights against TDM and must proactively check whether any such instructions are given in code or elsewhere.  In their terms and conditions, they should clearly indicate how rightholders’ and users’ content (in prompts or otherwise) will be used, so that they have sufficient authorisation to operate their AI-driven solutions.  Ideally, they also indicate more explicitly for which purposes the users’ input is used (“performing the service”, “improving the service”), how long the content will be stored and whether the user (or their organisation) can request the erasure of the content.
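Such a proactive check can be illustrated with Python’s standard urllib.robotparser module; the robots.txt content below is hypothetical, and GPTBot is used merely as an example of a documented crawler token:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a TDM crawler might fetch from a publisher's site:
# the publisher refuses the GPTBot token but allows all other agents.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A crawler honouring the reservation should skip this site entirely.
print(parser.can_fetch("GPTBot", "https://example.com/articles/1"))       # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/articles/1")) # True
```

This only covers robots.txt; a thorough check would also look at page-level metadata and any contractual terms attached to the content.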

AI developers also need to consider data protection and are encouraged to conduct a data protection impact assessment for the development and provision of AI-powered tools.  Especially when training new models, AI developers could, whenever possible, apply anonymisation techniques to data before feeding it into the chatbot for training purposes.  In general, AI developers could adhere to the principle of data minimisation, using only the necessary categories and amount of personal data for model refinement or development.  Among many other requirements, transparency is also crucial, and data subjects should be informed about the use of their personal data in data protection notices.
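As a purely illustrative sketch of such pre-training data hygiene (regex masking is pseudonymisation at best, not true anonymisation; the patterns and placeholder tokens are assumptions):

```python
import re

# Masks obvious direct identifiers (e-mail addresses, long digit sequences)
# before text enters a training set. Real anonymisation needs far more.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
LONG_NUMBER_RE = re.compile(r"\b\d{6,}\b")

def scrub(text: str) -> str:
    """Replace matched identifiers with neutral placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = LONG_NUMBER_RE.sub("[NUMBER]", text)
    return text

print(scrub("Contact jan.peeters@example.be, customer id 12345678."))
# → Contact [EMAIL], customer id [NUMBER].
```

A scrub step like this would run as part of the data preparation pipeline, before any model refinement, and complements rather than replaces a proper data protection impact assessment.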

If you would like to learn more about the subject and stay informed about recent legal developments, you are invited to the Crowell & Moring Legal Knowledge Library – Crowell Hub 💡. This free portal has been designed specifically to support in-house counsel. Please click here to log in or register.

Overview

On March 27, 2023, President Biden signed the Executive Order on Prohibition on Use by the United States Government of Commercial Spyware that Poses Risks to National Security (EO), restricting federal agencies’ use of commercial spyware.  The Biden Administration cited targeted attacks utilizing commercial spyware on U.S. officials and human rights abuses abroad as motivations for these restrictions.

Usage Restrictions

The EO is not a blanket ban on commercial spyware.[1]  Instead, it bars federal government agencies from using commercial spyware tools if they pose significant counterintelligence or security risks to the U.S. government, or significant risks of improper use by a foreign government or foreign person, including to target Americans or enable human rights abuses.  Indirect use of such spyware (e.g. through a contractor or other third party) is also prohibited.  The EO establishes risk factors indicative of prohibited commercial spyware, including:

  • Past use of the spyware by a foreign entity against U.S. government personnel or devices;
  • Past use of the spyware by a foreign entity against U.S. persons;
  • The spyware was or is furnished by an entity that maintains, transfers, or uses data obtained from the commercial spyware without authorization from the licensed end-user or the U.S. government, or has disclosed or intends to disclose non-public information about the U.S. government or its activities without authorization from the U.S. government;
  • The spyware was or is furnished by an entity under the direct or effective control of a foreign government or foreign person engaged in intelligence activities directed against the United States;
  • A foreign actor uses the commercial spyware to limit freedoms of expression, peaceful assembly or association; or to enable other forms of human rights abuses or suppression of civil liberties; or
  • The spyware is furnished to governments that have engaged in gross violations of human rights, whether such violations were aided by the spyware or not.

The above restrictions do not apply to the use of commercial spyware for purposes of testing, research, analysis, cybersecurity, or the development of countermeasures for counterintelligence or security risks, or for purposes of a criminal investigation arising out of the criminal sale or use of the spyware.  Additionally, an agency may be able to obtain a waiver allowing it to temporarily bypass the EO’s prohibitions, but only in “extraordinary circumstances.”

Agency Reporting Requirements

The EO contains various agency reporting requirements.  Some are specific to the Director of National Intelligence (DNI) while some apply to all federal agencies:

  • Within 90 days of the EO, the DNI will issue a classified intelligence assessment on foreign commercial spyware and foreign use of commercial spyware.
  • Within 90 days of the DNI assessment, all federal agencies must review their use of commercial spyware and discontinue uses that violate the EO. 
  • If an agency elects to continue using commercial spyware, within one year of the EO it must report its continued use to the Assistant to the President for National Security Affairs (APNSA) and explain why its continued use does not violate the EO.

New Commercial Spyware Procurement Procedures

Agencies seeking to procure commercial spyware “for any purpose other than for a criminal investigation arising out of the criminal sale or use of the spyware” must:

  • Consider any relevant information provided by the DNI, and solicit such information from the DNI if necessary;
  • Consider the risk factors listed above;
  • Consider any controls the commercial spyware vendor has in place to detect and prevent potential security risks or misuse; and
  • Notify APNSA within 45 days of procurement and provide a description of its intended purpose and use(s) for the commercial spyware.  

Key Takeaways

While the EO signals that the federal government is approaching commercial spyware with caution, interested parties should note that the government has been careful not to rule out its usage altogether. The EO, for example, does not address the government’s use of non-commercial (i.e., government-produced) spyware, or mention state or local government use of commercial spyware at all. The EO also allows federal agencies to procure and employ commercial spyware so long as the agency determines that the spyware does not pose a significant risk to national security or of improper use. Vendors of commercial spyware should pay close attention to the risk factors identified in the EO and consider implementing internal controls to address them.

On March 22, 2023, the Department of Defense (DoD) issued a final rule requiring contracting officers to consider supplier risk assessments in DoD’s Supplier Performance Risk System (SPRS) when evaluating offers. SPRS is a DoD enterprise system that collects contractor quality and delivery performance data from a variety of systems to develop three risk assessments: item risk, price risk, and supplier risk. The final rule introduces a new solicitation provision, DFARS 252.204-7024, which instructs contracting officers to consider these assessments, if available, in the determination of contractor responsibility.

SPRS risk assessments are generated daily using specific criteria and calculations based on the price, item, quality, delivery, and contractor performance data collected in the system. Although compliance with cybersecurity clauses DFARS 252.204-7012, -7019, or -7020 is not currently used to generate supplier risk assessments, the potential cybersecurity implications are evident. Under DFARS -7019 and -7020, DoD requires contractors to demonstrate their compliance with cybersecurity standard NIST SP 800-171 by scoring their implementation of 110 controls and uploading their score to SPRS.
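
The self-assessment score referenced above follows the DoD Assessment Methodology: a contractor starts from a perfect score of 110 and subtracts a weighted deduction for each security requirement that is not fully implemented, which can drive the total negative. A simplified sketch of that arithmetic follows; the control identifiers and weights shown are illustrative placeholders, not the official scoring table.

```python
# Simplified sketch of the NIST SP 800-171 DoD Assessment Methodology:
# start at 110 and subtract a weight for each control that is not fully
# implemented. The weights below are illustrative only, not the official table.
CONTROL_WEIGHTS = {
    "3.1.1": 5,   # limit system access to authorized users
    "3.5.3": 5,   # use multifactor authentication
    "3.8.9": 1,   # protect the confidentiality of backups
}

def sprs_score(unimplemented: list[str]) -> int:
    """Return a self-assessment score out of a maximum of 110."""
    return 110 - sum(CONTROL_WEIGHTS[c] for c in unimplemented)

print(sprs_score([]))                  # 110 (fully implemented)
print(sprs_score(["3.5.3", "3.8.9"]))  # 104
```

Because each unimplemented control subtracts from the same fixed maximum, even a small number of misreported controls can materially change the score a contracting officer sees.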

Some believe that DoD could incorporate the NIST 800-171 Basic Self-Assessment score into the supplier risk assessment at any time. If SPRS scores are incorporated into supplier risk assessments, this solicitation provision will make the accuracy and veracity of contractors’ SPRS scores significantly more important. Inaccurate SPRS scores could open contractors to legal risk, including False Claims Act (FCA) liability. Under the Department of Justice’s Civil Cyber Fraud Initiative, FCA actions regarding inaccurate cybersecurity representations have increased. Because these assessments will now influence award decisions, accuracy will become key.

For more information, please contact the professional(s) listed below, or your regular Crowell & Moring contact.

On March 15, 2023, the Iowa House passed Senate File 262 (SF 262), a comprehensive state privacy bill. If enacted, SF 262 would make Iowa the sixth state with comprehensive privacy legislation, following California, Virginia, Colorado, Utah, and Connecticut, and it would go into effect on January 1, 2025.

Iowa’s bill most closely resembles the Utah Consumer Privacy Act (UCPA), with broad exemptions and more limited obligations for controllers. Notably, SF 262 provides exemptions from certain consumer rights, including certain opt-out rights, where “pseudonymous data” and “de-identified data” (as defined by the bill) are involved.

For the most part, Iowa’s bill treads familiar territory. Its scope extends to entities that conduct business in Iowa or produce products or services targeted to Iowa residents, and that meet the following requirements, in a calendar year: (1) control or process personal data of at least 100,000 consumers; or (2) control or process personal data of at least 25,000 consumers and derive over 50% of gross revenue from sale of personal data.
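The two applicability thresholds can be expressed as a simple gating check. The sketch below is a hypothetical illustration only; "consumers," "personal data," and "sale" all carry the bill's defined meanings, which this code does not attempt to capture.

```python
def sf262_applies(consumers_processed: int,
                  pct_revenue_from_data_sales: float) -> bool:
    """Rough applicability check under Iowa SF 262 for an entity that
    does business in Iowa or targets Iowa residents (illustrative only)."""
    # (1) control or process personal data of at least 100,000 consumers; or
    # (2) at least 25,000 consumers AND over 50% of gross revenue
    #     derived from the sale of personal data
    return (consumers_processed >= 100_000
            or (consumers_processed >= 25_000
                and pct_revenue_from_data_sales > 50.0))

print(sf262_applies(120_000, 0.0))   # True  (volume prong)
print(sf262_applies(30_000, 60.0))   # True  (sale-revenue prong)
print(sf262_applies(30_000, 10.0))   # False (neither prong met)
```

As with the other state laws, both prongs are measured over a calendar year, so a business near either threshold should re-run this analysis annually.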

Iowa’s bill does not create new obligations for businesses compared to what is already required under other states’ privacy laws. For example, the Iowa bill’s privacy notice requirements are not unique to SF 262 – companies with privacy policies drafted to comply with the CCPA (California Consumer Privacy Act) and VCDPA (Virginia Consumer Data Protection Act) are not likely to have to amend their policies in order to comply with Iowa’s requirements. In addition, like Utah and Virginia, Iowa’s bill includes a narrow definition of “sale” of personal data (the exchange of personal data for monetary consideration by the controller to a third party), as well as numerous exceptions. 

Iowa’s bill notably diverges from consumer protections found in most existing state privacy laws. For example, it requires only clear notice and an opt-out for sensitive data, while states like Colorado, Connecticut, and Virginia adopted opt-in requirements. The Iowa bill also lacks a consumer right to correct data, and it imposes no requirements for covered entities to conduct privacy impact assessments or adhere to data minimization principles. Furthermore, responses to consumer requests not only have a 90-day response period (compared to 45 days in other states) but are also subject to a potential 45-day extension.

This bill does not contain a private right of action; enforcement authority rests exclusively with the Iowa Attorney General. The AG may seek injunctive relief and civil penalties of up to $7,500 per violation, but must first provide a 90-day cure period before bringing any enforcement action, and this cure period does not sunset.

We will continue to monitor the developments and keep you informed of any further updates.

Eight months after the issuance of the draft Measures on the Standard Contract for the Export of Personal Information (“SCC Regulations”), on February 24, 2023, the Cyberspace Administration of China (“CAC”) released the final version of the SCC Regulations, along with the Standard Contractual Clauses (“SCCs”). The SCCs set a baseline for cross-border data transfer agreements. This can impact any business that relies on the sharing of information between China and third countries, like the United States.

The SCCs will come into effect on June 1, 2023, and companies have an additional six months (until November 30, 2023) to comply with the SCCs’ requirements for the transfer of data outside of China.

China’s Three Data Transfer Mechanisms are Now Settled

The PRC Personal Information Protection Law (“PIPL”) requires personal information processors (similar to the concept of data controllers under the General Data Protection Regulation) to implement one of the following three data transfer mechanisms, if personal information is transferred outside of China:

  1. Complete a Security Assessment by the CAC;
  2. Complete a Security Certification by a certification institution designated by the CAC; or
  3. Adopt the SCCs.

Prior to the release of the final SCCs, the CAC had already released the Measures on Security Assessment of Cross-Border Data Transfer and Specifications on Security Certification for Cross-Border Personal Information Processing Activities in the summer of 2022. These measures include detailed guidance on the security assessment and security certification process necessary for the transfer of data outside of China.

The issuance of the SCCs indicates that the final piece of the puzzle of China’s cross-border data transfer regime is now settled. Previously, many companies that were not required to go through the security assessment process took a “wait-and-see approach” pending the finalization of the SCCs. Now, with the final piece of China’s cross-border data transfer regime in place, a full assessment of the available data transfer mechanisms is required.

Application Scope of the SCCs

The SCCs may be a more user-friendly approach to qualifying a data transfer, as the SCCs do not require a review by the CAC or certification by a third-party institution. In addition, they provide for more definite contractual terms. However, the SCCs may be adopted only if all of the following conditions are met:

  1. The data exporter is not a critical information infrastructure operator (“CIIO”), which is broadly defined as an operator of critical network facilities or information systems in important industries (such as finance, energy, or transportation), where destruction, loss of function, or data leakage may seriously endanger China’s national security, peoples’ livelihood, or the public interest;
  2. The data exporter has not processed personal information of more than one million individuals (“Mass Processor”); AND
  3. Since January 1 of the previous year, the data exporter has not made aggregated transfers of:
  • personal information of more than 100,000 individuals; or
  • sensitive personal information of more than 10,000 individuals.

If any of the above conditions are not met, a CAC security assessment will be required instead, and the SCCs would not be an option. Notably, a CAC security assessment will also be triggered if any important data is transferred out of China, even if the SCCs are used to transfer data. Important data are broadly defined as any data that – if tampered with, destroyed, leaked, illegally accessed, or used – may endanger China’s national security, economic operation, social stability, or public health and safety.
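Taken together, the conditions above amount to a gating test: if any prong fails, or important data is involved, the transfer falls back to a CAC security assessment. A hedged sketch of that logic follows; the field names are our own shorthand, not terms from the SCC Regulations.

```python
from dataclasses import dataclass

@dataclass
class TransferProfile:
    is_ciio: bool                  # critical information infrastructure operator
    individuals_processed: int     # individuals whose personal information is processed
    pi_exported_since_jan1: int    # individuals' PI exported since Jan 1 of the prior year
    sensitive_pi_exported: int     # sensitive PI exported over the same period
    transfers_important_data: bool

def may_use_sccs(p: TransferProfile) -> bool:
    """True if the SCCs appear available; otherwise a CAC security
    assessment is required (simplified reading of the SCC Regulations)."""
    if p.transfers_important_data:
        return False  # important data always triggers the security assessment
    return (not p.is_ciio
            and p.individuals_processed <= 1_000_000     # not a "Mass Processor"
            and p.pi_exported_since_jan1 <= 100_000
            and p.sensitive_pi_exported <= 10_000)
```

For example, a non-CIIO exporter that has processed 500,000 individuals' data and exported well under both volume thresholds could rely on the SCCs, while the same exporter transferring any important data could not.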

Are Modifications to the SCCs Permissible?

According to the SCC Regulations, the parties are not allowed to make any modifications to the SCCs. The parties, however, may add terms, to the extent they do not conflict with the SCCs.

For companies who have already entered into a data processing agreement (“DPA”), questions abound regarding how the SCCs would interact and integrate with these existing agreements. Where corporations are considering combining the two through the use of exhibits, the SCCs may need to be the main body of an agreement, with any additional terms, including those in an existing DPA, placed into an exhibit.

Governing Law and Liability

Notably, the governing law of a DPA transferring data outside of China must be PRC law. However, the parties are granted some flexibility to submit their disputes under the SCCs to a PRC court or, if arbitration is preferred, to a PRC or international arbitration tribunal in a member state of the New York Convention.

Under the SCCs, the data exporter and the data importer assume joint and several liability to the data subjects.  As such, data subjects can enforce their rights against both such parties as a third-party beneficiary under the SCCs.

Are There Different Modules Available for Different Transfer Scenarios?

The European Union’s Standard Contractual Clauses cover four different modules: controller-to-controller, controller-to-processor, processor-to-processor, and processor-to-controller. China’s SCCs do not draw any distinction among such transfers. China’s SCCs, however, do set out different obligations where the overseas data recipient is an “entrusted processor.” An entrusted processor is a processor who does not determine the purpose or method of the processing, but instead only processes personal information based on a data transfer agreement with the personal information processor and/or the instructions from the personal information processor.

Liabilities for Violating the SCC Regulations

Companies violating the SCC Regulations may be subject to:

  1. civil claims by data subjects for any damages caused;
  2. administrative penalties, including a fine up to RMB 50 million (approximately USD 7.3 million) or 5% of the last year’s turnover (whichever is higher), suspension of relevant business and revocation of business license or other licenses/approvals; and/or
  3. criminal liability in the most serious cases.
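The administrative fine cap above follows a "whichever is higher" formula similar to the GDPR's: for companies with substantial turnover, the percentage prong dominates. A quick arithmetic illustration:

```python
def max_fine_rmb(prior_year_turnover_rmb: float) -> float:
    """Upper bound of the administrative fine under the PIPL regime:
    the greater of RMB 50 million or 5% of the prior year's turnover."""
    return max(50_000_000, 0.05 * prior_year_turnover_rmb)

print(max_fine_rmb(200_000_000))    # 50,000,000  (flat RMB 50M cap applies)
print(max_fine_rmb(2_000_000_000))  # 100,000,000 (5%-of-turnover prong applies)
```

The crossover sits at RMB 1 billion in turnover; above that, the exposure scales with revenue rather than staying at the flat cap.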

The SCC Regulations create a whistle-blowing mechanism for individuals or organizations to report any non-compliance or violations to the CAC. The CAC may also request a meeting with a company and may issue an order to a company to take corrective measures, if any significant risks or any data breach are identified. 

What Steps Should Companies Take to Comply with the SCC Regulations?

Complying with China’s SCCs requires more than just signing the SCCs provided by the CAC. We set forth below some of the key steps that companies should take to comply with the requirements under the SCC Regulations.

Data Inventory: The first step toward compliance is often to conduct a data inventory to understand the type and volume of data transferred outside of China, the entities and jurisdictions involved, the purpose(s) and method of the processing, and the IT systems involved. The SCC Regulations specifically prohibit companies from dividing data among their subsidiaries in order to avoid volume thresholds that trigger the applicability of the security assessment.

Adopt an Appropriate Data Transfer Mechanism: Based on the findings of the data inventory, companies would then determine whether the data transfer triggers the security assessment by the CAC. If the security assessment is not triggered, the next step would be to determine the most appropriate data transfer mechanism. Generally, for intra-company data transfers, companies may choose to use security certifications or SCCs to qualify their data transfers out of China if the security assessment is not triggered.  For data processing that is subject to the extraterritorial effect of the PIPL (i.e., direct collection of personal information from individuals in China by a foreign personal information processor), it appears that the only option is a security certification, given the SCCs are generally used for transfers between a Chinese personal information processor and a foreign recipient.  For other transfers below the security assessment threshold, the SCCs may be adopted.

Personal Information Protection Impact Assessment (“PIA”): Data exporters are required to undertake a PIA before transferring any personal information outside of China. The PIA report is a required document for the subsequent filing with the local CAC (as explained below), in conjunction with a filing of a data processing agreement. There is no standard format yet for a PIA in the context of SCCs.

Implement Appropriate Internal Policies and Processes: The SCCs impose a series of obligations on data exporters and recipients, such as notifying the data subjects and obtaining their consent (or separate consent), where necessary; taking technical and organizational measures to protect the security of the personal information involved (e.g., encryption, de-identification, or access controls); establishing a process for responding to data subjects’ requests or complaints; and formulating an incident response plan. Companies should take steps to ensure that their internal policies and processes accommodate the requirements of the SCCs, and keep detailed records demonstrating their compliance in case of any audits, inspections, or investigations.

Execute the SCCs: Because data exporters are required to file the SCCs (or related DPA) with the local CAC within ten working days (as of the effective date of the SCCs), it is advisable for companies to complete the above preparatory work before execution of the SCCs. Otherwise, the filing may be rejected by the local CAC (if a PIA is not conducted and filed with the DPA, for example), or additional corrective measures may be required to mitigate any risks involved in the transfer.

Filing with the Local CAC: Data exporters must file the executed SCCs along with the PIA report with the provincial CAC where they are located within ten working days. All documents filed with the local CAC must be written in Chinese or translated into Chinese.

Although the SCC Regulations provide a six-month grace period, given the amount of preparatory work involved in the implementation of the SCCs, companies should act as soon as practical to take the necessary steps to implement the appropriate transfer mechanisms. Doing so will help avoid any disruption to their data transfer activities outside of China.

On February 28, 2023, the European Data Protection Board (“EDPB”) adopted its Opinion 5/2023 (the “Opinion”) on the draft adequacy decision of the European Commission regarding the EU-U.S. Data Privacy Framework (“DPF”). The DPF aims to ensure that personal data transferred from the European Union to the U.S. receives an adequate level of protection. The framework is based on the principles of transparency, accountability, and oversight, and it includes safeguards to protect the data privacy rights of individuals.

In the Opinion, the EDPB noted substantial improvements in the proposed DPF compared to the former Privacy Shield, but also expressed concerns regarding the level of protection provided by the draft adequacy decision. Key takeaways from the EDPB’s Opinion are:

  • The EDPB welcomed the updates to the DPF Principles, but opined that the Principles to which the DPF organizations have to adhere remain essentially unchanged from the Privacy Shield, and the concerns previously raised by the Article 29 Working Party and the EDPB in relation to the Privacy Shield principles remain unaddressed. In particular, these concerns relate to the rights of data subjects, the absence of key definitions, the lack of clarity in relation to the application of the DPF Principles to processors, and the broad exemption for publicly available information.
  • The EDPB opined that the structure and complexity of the DPF makes it difficult for data subjects and relevant stakeholders to understand, and that some key definitions are missing from the text and terminology usage is not consistent.
  • Regarding the level of protection of individuals whose data is transferred, the EDPB noted that protection must not be undermined by onward transfers from the initial recipient of the transferred data. The EDPB invites the European Commission to clarify that the safeguards imposed by the initial recipient on the importer in the third country must be effective in light of third-country legislation, prior to an onward transfer in the context of the DPF.
  • Regarding government access to data transferred to the U.S., the EDPB acknowledged the significant improvements brought by Executive Order 14086, which introduced concepts of necessity and proportionality with regard to U.S. intelligence-gathering of data (signals intelligence).
  • The Opinion recognized the specific safeguards provided by relevant U.S. law in different fields concerning automated decision-making and profiling by means of AI technologies. However, the EDPB pointed out that the level of protection for individuals seems to vary according to which sector-specific rules, if any, apply to the situation at hand. The EDPB maintained that specific rules concerning automated decision-making are needed in order to provide sufficient safeguards especially when AI decisions could significantly affect an individual.
  • The EDPB recommended clarification on the scope of exemptions, including on the applicable safeguards under U.S. law, in order to better identify their impact on data subjects. The Opinion also underlined that the European Commission should monitor the application and adoption of any statute or government regulation that would affect adherence to the DPF Principles. In relation to the list of exceptions to the right of access, the EDPB noted that some still tended to tip the balance towards the interests of DPF organizations, while the EDPB is concerned that there appears to be no requirement to consider the rights and interests of the individual.
  • The EDPB further addressed bulk data collection and asked for clarity regarding temporary bulk collection and the further retention and dissemination of such data. EDPB opined that collection of large quantities of data without discriminants (e.g., without the use of specific identifiers) presents higher risks for the individuals than targeted collection and thus requires additional safeguards to be adduced. The Opinion noted that the DPF lacks a requirement for prior authorization from an independent authority in advance of bulk data collection.
  • The EDPB highlighted that close monitoring, oversight, and enforcement of the DPF will be needed. The DPF continues to rely on a system of self-certification, although it recognizes commitments made by relevant agencies to investigate alleged DPF violations and to monitor and enforce against entities making false or deceptive claims of participation.

Given the concerns expressed and the clarifications required, the EDPB suggests that these concerns should be addressed by the European Commission in future reviews. The EDPB further invites the European Commission to provide the requested clarifications in order to solidify the grounds for the draft adequacy decision and to ensure close monitoring of the concrete implementation of this new legal framework, in particular the safeguards it provides. The draft adequacy decision will continue to make its way through the review and approval process. Once it is adopted, companies wishing to participate in the DPF will need to certify their adherence to the framework with the U.S. Department of Commerce.

We will continue to monitor the developments in this matter and keep you informed of any further updates.

In the past few years, privacy activists, consumers and national and European data protection authorities have become increasingly aware of the impact of cookies and other tracking technologies. As a result, most administrators of websites and mobile apps know that they have to provide users with a clear and prominent cookie banner. They also know that they should explain what cookies are being used and obtain the user’s consent before storing any non-essential cookies on their device. 

What they don’t know is how, exactly, this information should be conveyed. In theory, the conditions are straightforward and set forth in Directive 2002/58/EC (“ePrivacy Directive”) and Regulation (EU) 2016/679 (“GDPR”). In practice, however, requirements for obtaining consent for the use of cookies depend on the jurisdiction.

To address concerns regarding cookie banners and consent management on websites, the European Data Protection Board set up the “Cookie Banner Taskforce.” On January 17, 2023, the Cookie Banner Taskforce adopted a report detailing their findings. This report offers further guidance on the minimum requirements for transparency and efficiency of cookie banners and consent management practices within the European Union (“EU”).

The following are key takeaways from the report if you are a website or app owner:

  1. Ensure that your cookie banner includes a “reject button” on the first layer;
  2. Avoid using pre-ticked checkboxes for cookie consent;
  3. Provide a clear and direct option for users to reject, without using deceptive link designs;
  4. Avoid using deceptive button colors or deceptive button contrast;
  5. If you haven’t received consent for storing or accessing information through cookies, abstain from any further processing;
  6. Classify cookies as “essential” or “strictly necessary” only when they are truly required for your website to function; and
  7. Make it easy for users to withdraw their consent, such as by providing an icon that is visible at all times or a link placed on a visible and standardized place.
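For teams auditing their own banners, the takeaways above can be turned into a simple configuration check. The sketch below is our own illustration; the field names are invented for this example and do not come from the Taskforce report.

```python
def banner_issues(cfg: dict) -> list[str]:
    """Flag cookie-banner configurations that conflict with the
    Taskforce's minimum requirements (simplified illustration)."""
    issues = []
    if not cfg.get("reject_button_on_first_layer"):
        issues.append("no reject option on the first layer")
    if cfg.get("pre_ticked_checkboxes"):
        issues.append("pre-ticked consent checkboxes")
    if cfg.get("deceptive_button_design"):
        issues.append("deceptive button colors/contrast or link design")
    if not cfg.get("withdraw_consent_always_accessible"):
        issues.append("no permanently accessible way to withdraw consent")
    return issues

# A banner with a first-layer reject button and a persistent withdrawal
# icon, and none of the deceptive patterns, raises no flags:
print(banner_issues({"reject_button_on_first_layer": True,
                     "withdraw_consent_always_accessible": True}))  # []
```

A real audit would of course also cover the substantive points the report raises, such as cookie classification and abstaining from processing absent consent, which do not reduce to a configuration flag.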

Although the minimum requirements in the report are not formally binding, they are expected to have a substantial impact on businesses and website owners operating within the EU, who will have to ensure that their cookie banners and consent management practices meet the minimum thresholds set out in the report.

Unfortunately, the report only outlines minimum requirements. Website owners must still verify whether additional national requirements (such as those specified by the French data protection authority) exist beyond the report’s minimum thresholds.

Additionally, please note that the ePrivacy Directive is currently being revised: its proposed successor, the ePrivacy Regulation, is expected to be adopted in the near future and to introduce stricter, more harmonized rules on online tracking and data collection, particularly regarding cookies and similar technologies. We will summarize the new rules upon their release.

Source: Report of the work undertaken by the Cookie Banner Taskforce, January 17, 2023, https://edpb.europa.eu/our-work-tools/our-documents/other/report-work-undertaken-cookie-banner-taskforce_en

On March 2, 2023, the Biden Administration released the 35-page National Cybersecurity Strategy (the “Strategy”) with a goal “to secure the full benefits of a safe and secure digital ecosystem for all Americans.”

Summary and Analysis

The Strategy highlights the government’s commitment to investing in cybersecurity research and new technologies to protect the nation’s security and improve critical infrastructure defenses. It outlines five pillars of action, each of which implicates critical infrastructure entities, from strengthening their cybersecurity processes to receiving support from the federal government. For example, the Strategy suggests that the private sector improve the security of Internet of Things (IoT) devices and expand IoT cybersecurity labels, invest in quantum-resistant systems, develop a stronger cyber workforce, evolve privacy-enhancing platforms, and adopt security practices aligned with the National Institute of Standards and Technology (NIST) framework.

The Strategy makes evident the Administration’s desire to shift the burden of cybersecurity (and its associated costs and liability) from individuals, small businesses, and local governments to the entities with the greatest expertise and resources, e.g., large owners and operators of critical infrastructure, vendors, and software developers. To that end, we should expect legislation establishing baseline cybersecurity measures and new liabilities for providers of software products and services. Further, the Administration emphasizes its support for legislative efforts toward data minimization and increased protection for sensitive data, which puts additional pressure on Congress to pass a federal privacy law.

The Strategy builds on sustained efforts by the Biden Administration to protect the nation’s critical infrastructure, including:

  • The 2022 Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) – expands the reporting obligations of covered entities;
  • The 2022 Creating Helpful Incentives to Produce Semiconductors (CHIPS) Act – reduces reliance on China-based suppliers of emerging technologies by providing a financial incentive for investment in U.S. semiconductor manufacturing and the creation of collaborative networks for research and innovation;
  • President Biden’s 2021 Executive Order – strengthens the nation’s cybersecurity defenses by mandating that all federal agencies use basic cybersecurity measures (such as multifactor authentication) and by requiring new security standards for software makers that contract with the federal government; and
  • President Biden’s 2021 national security memorandum – directs his administration to develop cybersecurity performance goals for U.S. critical infrastructure.

The Five Pillars

Replacing the 2018 Trump Administration strategy, which focused on voluntary public-private partnerships and information-sharing practices, the new framework mapped out by the Strategy pushes for a more aggressive and comprehensive regulatory approach. Combining government actions with new requirements for the private sector, which owns the majority of the country’s critical infrastructure, the Strategy aims to tackle some of our nation’s most difficult and complex issues in cybersecurity, software liability, and regulatory programs by centering on the following five pillars:

  1. Defend Critical Infrastructure;
  2. Disrupt & Dismantle Threat Actors;
  3. Shape Market Forces to Drive Security and Resilience;
  4. Invest in a Resilient Future; and
  5. Forge International Partnerships to Pursue Shared Goals.

I. Defend Critical Infrastructure

The Administration makes clear that this pillar “is vital to our national security, public safety, and economic prosperity.” This pillar focuses on private-public collaboration to equitably distribute risk and responsibility, and includes five strategic objectives:

  1. Establish Cybersecurity Requirements to Support National Security and Public Safety. Protecting critical services is essential to the American people’s confidence in the nation’s infrastructure and the economy, and the Strategy breaks out three categories of activity to accomplish this objective:
    • Establish Cybersecurity Regulation to Secure Critical Infrastructure. To the extent possible, the government plans to use existing authorities to create a set of “minimum expected cybersecurity practices” for the infrastructure sector that are performance-based and adaptable.  Where gaps in the law exist, the Administration plans to work with Congress to close them with the goal of ensuring that systems are designed to “fail safely and recover quickly.” The Administration plans to drive improvements in cybersecurity practices in the cloud computing industry and other essential services for these industry sectors.
    • Harmonize and Streamline New and Existing Regulations. A key goal of the Strategy is controlling the costs and other burdens of compliance for regulated entities to enable them to commit more resources to cybersecurity.  To that end, the Strategy calls for regulators to (1) seek to harmonize regulations, audits, and reporting requirements as they are developed—for example, by leveraging existing international standards where consistent with U.S. policy and law, and (2) work together to minimize instances where existing regulations are in conflict, duplicative, or overly burdensome.  
    • Enable Regulated Entities to Afford Security. The Strategy provides several strategies to accommodate critical infrastructure sectors with varying capacities to absorb such costs. In sectors with a greater ability to absorb costs, this includes calling for regulation that ensures a level playing field, preventing entities from gaining a competitive advantage by underspending their peers on cybersecurity. The Strategy also describes how low-margin sectors will likely need incentives to invest in cybersecurity, for example through rate-making processes, tax structures, or other mechanisms.
  2. Scale Public-Private Collaboration. The Strategy stresses the importance of creating a distributed network of cyber defense, developed by collaboration between defenders and enabled by the automated exchange of information. For example, the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (“CISA”) will employ Sector Risk Management Agencies (“SRMAs”) to coordinate with and support critical infrastructure owners to protect the assets they operate. The government plans to invest in developing SRMA capabilities to enable security and resilience improvements across critical infrastructure sectors and support maturation of third-party collaboration mechanisms. Additionally, information sharing and analysis organizations (“ISAOs”), sector-focused information sharing and analysis centers (“ISACs”), and similar organizations will be leveraged to facilitate cyber defense operations. The Strategy also acknowledges that machine-based solutions will be required to improve the sharing of information and coordination of defensive efforts. To accomplish this, CISA and SRMAs will explore technical and organizational mechanisms in partnership with the private sector to enhance and evolve data sharing, and the federal government will deepen its collaborative efforts with software, hardware, and managed service providers which have the capability to provide greater cybersecurity and resilience.
  3. Integrate Federal Cybersecurity Centers. Federal Cybersecurity Centers will serve as collaborative nodes that bring together capabilities across entities involved with homeland defense, law enforcement, intelligence, and diplomatic, economic, and military missions to drive intragovernmental coordination and support non-federal partners.
  4. Update Federal Incident Response Plans and Processes. The federal government will aim to present a unified, coordinated, whole-of-government response to cyber incidents when federal assistance is required; for example, CISA will update the National Cyber Incident Response Plan (“NCIRP”). The Strategy discusses how these efforts will harmonize new requirements, such as CIRCIA’s to-be-finalized requirement that covered entities report cybersecurity incidents to CISA within 72 hours in order to strengthen the collective defense, and current efforts by the Cyber Safety Review Board (“CSRB”), which is composed of private and public sector cybersecurity leaders and will review incidents and guide industry remediation.
  5. Modernize Federal Defenses. The Administration will focus on long-term efforts to defend federal systems in accordance with zero-trust principles. In addition, it commits to develop plans to collectively defend federal civilian agencies, modernize federal technology systems, and defend national security systems.

II. Disrupt & Dismantle Threat Actors

Pillar 2 describes the commitment to use “all instruments of national power to disrupt and dismantle threat actors whose actions threaten our interests,” focusing on heading off “sustained cyber-enabled campaigns that would threaten the national security or public safety of the United States.” One way to accomplish this is to make cyber-enabled campaigns unprofitable. There are five strategic objectives for disrupting and dismantling threat actors:

  1. Integrate Federal Disruption Activities. The Strategy outlines three commitments to integrate the federal government’s disruption efforts. First, the DOD will update its departmental cyber strategy so that it is aligned with “the National Security Strategy, National Defense Strategy, and [the] Strategy” to ensure that cyberspace operations are integrated into other strategic defense efforts. Second, the National Cyber Investigative Joint Task Force (“NCIJTF”) will “expand its capacity to coordinate takedown and disruption campaigns with greater speed, scale and frequency.” Third, the DOD and the intelligence community “commit to bringing to bear their full range of complementary authorities to disruption campaigns.”
  2. Enhance Public-Private Operational Collaboration to Disrupt Adversaries. To enhance the collaboration between the public and private sectors, the Strategy “encourage[s]” private companies to organize cyber-disruption efforts “through one or more nonprofit organizations that can serve as hubs for operational collaboration with the Federal Government, such as the National Cyber-Forensics and Training Alliance (NCFTA).”  The Strategy also commits the government to lowering barriers in the interests of supporting and leveraging collaboration.
  3. Increase the Speed and Scale of Intelligence Sharing and Victim Notification. One aspect of disrupting and dismantling threat actors is increasing the speed and scale of intelligence sharing, both to and from victims. The Strategy commits to “proactively warn cyber defenders and notify victims when the government has information that an organization is being actively targeted or may already be compromised.” As part of implementing this, the government will “review declassification policies and processes to determine the conditions under which” additional classified access can be extended and clearances expanded. The Strategy also calls on “SRMAs, in coordination with CISA, law enforcement agencies, and the [Cyber Threat Intelligence Integration Center (CTIIC)]” to “identify intelligence needs and priorities within their sector and develop processes to share warnings, technical indicators, threat context, and other relevant information with both government and non-government partners.”
  4. Prevent Abuse of U.S.-Based Infrastructure. The Strategy commits to working with cloud and infrastructure providers to address the full gamut of issues that they may face, from quickly identifying malicious use of such infrastructure and notifying the government of that use, to making it easier for victims to report such abuse, to preventing the malicious use in the first place. The Strategy also places an expectation on “[a]ll service providers” to “make reasonable attempts to secure the use of their infrastructure against abuse or other criminal behavior.”
  5. Counter Cybercrime, Defeat Ransomware. The Strategy calls out ransomware in particular as a threat and identifies four processes to combat it: “(1) leveraging international cooperation to disrupt the ransomware ecosystem and isolate those countries that provide safe havens for criminals; (2) investigating ransomware crimes and using law enforcement and other authorities to disrupt ransomware infrastructure and actors; (3) bolstering critical infrastructure resilience to withstand ransomware attacks; and (4) addressing the abuse of virtual currency to launder ransom payments.”  This effort includes contributions from the Counter-Ransomware Initiative (CRI) with 30 other countries and the Joint Ransomware Task Force. It also includes further consideration of international anti-money laundering and combating the financing of terrorism (AML/CFT) standards. To achieve these objectives, the Strategy focuses on mounting “disruption campaigns and other efforts that are so sustained, coordinated, and targeted that they render ransomware no longer profitable.”  Accordingly, the Strategy repeats the position that the U.S. government has held for years: “strongly discourag[ing] the payment of ransoms” and encouraging victims to report the incidents to law enforcement and other appropriate agencies.

III. Shape Market Forces to Drive Security and Resilience

Pillar 3 of the Strategy focuses on shaping market forces to reduce risk and strengthen the digital ecosystem so that the country remains resilient and secure. Market forces are important for driving broader adoption of cybersecurity best practices, but the Administration will also shape the long-term security and resilience of the digital ecosystem by increasing accountability, driving development of more secure connected devices, reshaping existing laws, using federal purchasing power to incentivize security, and stabilizing insurance markets against catastrophic risk, through the following six strategic objectives:

  1. Hold the Stewards of our Data Accountable. The Administration supports legislative efforts to protect consumers by imposing limitations on technologies that collect personal information. Failures to protect personal information pass the harm on to consumers, and the greatest harm often falls upon the most vulnerable populations. To protect consumers, legislation should provide strong protections for personal and sensitive data and set national requirements to secure data consistent with the standards and guidelines developed by NIST.
  2. Drive the Development of Secure IoT Devices. Many IoT devices today are vulnerable to cybersecurity threats and exploitation by bad actors. The Administration will continue to improve IoT cybersecurity through research and development and risk management efforts under the 2020 IoT Cybersecurity Improvement Act and security labeling programs under Executive Order 14028, “Improving the Nation’s Cybersecurity” (the “Cybersecurity Executive Order”). The goal is to expand IoT security labeling, allowing consumers to compare the protections offered by different IoT products and creating a market incentive for greater IoT device security.
  3. Shift Liability for Insecure Software Products and Services. The Administration will begin to shift liability onto entities that fail to take reasonable precautions to secure their software, while recognizing that even advanced software security programs cannot prevent all vulnerabilities. Legislation will be designed to prevent manufacturers and software publishers from fully disclaiming liability and to establish higher security standards, while also providing a safe harbor for companies that securely develop and maintain their software products and services. These safe harbor provisions will draw from current best practices, such as the NIST Secure Software Development Framework, but will also need to be flexible enough to evolve with technological advancements. The Administration also encourages coordinated vulnerability disclosure and further development of Software Bills of Materials (SBOMs), as well as processes for identifying and mitigating the risk of unsupported software used in critical infrastructure.
  4. Use Federal Grants and Other Incentives to Build in Security. The Administration is committed to investing in programs to improve infrastructure and the digital ecosystem that supports it, and to pairing those investments with appropriate cybersecurity requirements. The federal government will collaborate with State, Local, Tribal and Territorial (“SLTT”) entities, private sector stakeholders, and other partners to drive investment in secure and resilient products and to fund cybersecurity research, development, and demonstration programs.
  5. Leverage Federal Procurement to Improve Accountability. One successful method of improving cybersecurity has been to impose specific contracting requirements on federal government vendors. The Cybersecurity Executive Order expands cybersecurity requirements for federal contracts, ensuring that such standards are strengthened and standardized across federal agencies. The Department of Justice’s (“DOJ’s”) Civil Cyber-Fraud Initiative will hold accountable entities that knowingly put data at risk through deficient cybersecurity products or services, misrepresent cybersecurity practices or protocols, or violate obligations to monitor and report cyber incidents and breaches.
  6. Explore a Federal Cyber Insurance Backstop. The Administration will assess the need for, and the potential structure of, a federal response to a catastrophic cyber event, which will include analyzing current cyber insurance offerings. Input will be sought from Congress, state regulators, and industry stakeholders to determine whether a plan is necessary and how to structure a response that stabilizes markets and aids recovery, preparing for a catastrophic cyber event before one occurs.

IV. Invest in a Resilient Future 

The Strategy’s fourth pillar relies on the following five strategic objectives to accomplish the Administration’s commitment to investing in resilience in the face of near-certain cyberattacks:

  1. Cybersecurity Research & Development. The Strategy recognizes that cyber adversaries have been weaponizing American innovation and using it against our country to steal intellectual property, sow dissent, interfere with elections, and undermine our national defenses. Because of this, the Strategy recommends that investment and innovation must go hand-in-hand with cybersecurity efforts, and that it will be critical for our government to harness emerging technologies for cybersecurity purposes as those technological advancements are made. 
  2. Securing the Technical Foundation of the Internet. Acknowledging that the very foundation of the Internet has inherent vulnerabilities that need to be addressed (specifically mentioning the Domain Name System and Border Gateway Protocol), the Strategy prioritizes protection of the multistakeholder model of Internet governance and standards development. Principles such as transparency, openness, and consensus are at the core of our nation’s values and will drive the evolution of more secure technical standards and technologies. Because of the rapid pace at which technologies are advancing, the Strategy advocates for the Federal Research and Development enterprise to direct projects to advance cybersecurity and resilience in areas such as encryption, the protection of industrial control systems, and artificial intelligence.
  3. Preparing for a Post-Quantum Future. The Strategy recommends preparing for a post-quantum future to protect the encryption systems that undergird the methods by which we protect data, authenticate users, and certify the accuracy of information. This means transitioning the nation’s cryptographic systems to interoperable quantum-resistant systems and advancing cryptographic agility to address unknown threats arising from quantum computing. This is one area in which the Strategy specifically recommends that the private sector follow the government’s lead in preparing for a post-quantum future.
  4. Development of a Digital Identity Ecosystem. Data breaches, COVID-19 fraud, and identity theft have caused billions in losses for the federal government because we do not yet have a comprehensive, secure, and accessible digital identity system. The Strategy promotes investment in strong, verifiable, privacy-enhancing digital identity platforms that comport with the values of transparency and accountability. 
  5. Strengthen Our Cyber Workforce. Significant efforts will be made to fill the many vacant cybersecurity positions in workforces across the nation. Because cybersecurity professionals are needed across industries, the federal government will coordinate a comprehensive strategy for cyber education and training pathways for all who wish to pursue a career in cybersecurity, with a particular focus on the public sector’s need to develop and recruit cybersecurity talent to protect critical infrastructure. The Strategy is also committed to addressing the lack of diversity in the nation’s cybersecurity workforce as “both a moral necessity and strategic imperative.”

V. Forge International Partnerships to Pursue Shared Goals

Pillar 5 aims to “scale the emerging model of collaboration by national cybersecurity stakeholders to cooperate with the international community” through the following five strategic objectives:

  1. Build coalitions to counter threats to our digital ecosystem. The U.S. will leverage existing partnerships, intergovernmental forums, and trade agreements to advance shared goals in cyberspace.  This includes using a variety of mechanisms, including the Declaration for the Future of the Internet (DFI), the Quadrilateral Security Dialogue (the Quad), the Indo-Pacific Economic Framework for Prosperity (IPEF), the U.S.-EU Trade and Technology Council (TTC), and the Americas Partnership for Economic Prosperity (APEP), among others. Coordination and collaboration with allies and partners are important, particularly in sharing cyber threat information, exchanging model cybersecurity practices, comparing security-specific expertise, driving secure-by-design principles, and coordinating policy and incident response activities.
  2. Strengthen international partner capacity. As the U.S. builds a coalition to advance shared goals, it will also strengthen capacity of allies and partners that support shared interests in cyberspace. To achieve this goal, the U.S. will “marshal expertise across agencies, the public and private sectors, and among advanced regional partners to pursue coordinated and effective” cyber capacity. The Strategy emphasizes the importance of working with law enforcement and explains distinct actions in which the DOJ, the DOD, and the Department of State (“DOS”) will engage. Specifically, the DOJ will work with law enforcement for more robust cybercrime cooperation, the DOD will strengthen military-to-military relationships to bolster collective cybersecurity posture, and the DOS will coordinate with the whole-of-government to ensure that federal capacity, as well as U.S., allied, and partner interests are strategically aligned.
  3. Expand U.S. ability to assist allies and partners. The U.S.  will provide support to allies and partners to investigate, respond to, and recover from cyberattacks. The U.S. will also establish policies to determine when such support is in the national interest, develop mechanisms to identify and deploy this support, and, when needed, “rapidly seek to remove existing financial and procedural barriers to provide such operational support.”
  4. Build coalitions to reinforce global norms of responsible state behavior. The U.S. will reinforce political commitments that every member of the United Nations has made to endorse peacetime norms and refrain from cyber operations that may “intentionally damage critical infrastructure” by holding irresponsible states accountable through meaningful and collaborative consequences, such as “diplomatic isolation, economic cost, counter-cyber and law enforcement operations, or legal sanctions, among others.”
  5. Secure global supply chains for information, communications, and operational technology products and services. The Strategy recognizes that complex and globally interconnected supply chains are critical to the nation’s economy. Dependency on foreign products and services introduces a degree of risk, which must be mitigated through long-term, strategic collaboration between the public and private sectors in the U.S. and abroad. The federal government will work with allies and partners to “implement best practices in cross-border supply chain risk management and work to shift supply chains to flow through partner countries and trusted vendors,” making supply chains “more transparent, secure, resilient, and trustworthy.”