In an unconventional opening to the normally staid proceedings of the United States Senate, the voice of Frank Sinatra introduced the July 12, 2023 Senate Judiciary Subcommittee hearing on artificial intelligence (AI) and intellectual property. More accurately, an AI-generated version of Frank Sinatra’s voice sang about regulating AI to the tune of New York, New York, which Senator Chris Coons (D-DE), Chairman of the Senate Judiciary Subcommittee on Intellectual Property, used to illustrate both the possibilities and the risks of the use of AI in creative industries.


On June 18, 2023, the Biden-Harris administration announced the launch of a new “U.S. Cyber Trust Mark” program (hereinafter the “Program”). First proposed by Federal Communications Commission (“FCC”) Chairwoman Jessica Rosenworcel, the Program aims to increase transparency and competition across the smart devices sector and to assist consumers in making informed decisions about the security of the devices they purchase.


On June 30, 2023, the Superior Court of California, County of Sacramento, issued a ruling delaying agency enforcement of the final regulations under the California Privacy Rights Act (CPRA) until March 2024. California Chamber of Commerce v. California Privacy Protection Agency, Case No. 34-2023-80004106-CU-WM-GDS (Sacramento Superior Court, June 30, 2023).

The California Consumer Privacy Act of 2018 (CCPA) and the CPRA provisions added by the ballot initiative that California voters passed in 2020 remain in effect. However, enforcement of the final regulations implementing the CPRA, which the California Privacy Protection Agency (Agency) enacted on March 29, 2023 and which were set to go into effect on July 1, 2023, has been stayed by the California court until March 2024 (one year after the enactment of the final CPRA regulations). Assuming the ruling is not overturned on appeal, it gives businesses another nine months to become compliant with the final CPRA regulations. Businesses still need to remain compliant with the prior CCPA regulations in effect before the final CPRA regulations, including the CPRA provisions in the 2020 ballot initiative. The Agency has set a public meeting for July 14 to discuss enforcement and other topics.

Notably, on March 29, 2023, the Agency issued final regulations with respect to only 12 of the 15 areas required by Section 1798.185 of the CPRA. The Court ruled that enforcement of these regulations was delayed until March 29, 2024. Enforcement of any regulations in the remaining three areas (cybersecurity audits, risk assessments, and automated decision-making technology) will not begin until a year after the Agency finalizes those rules. The Court did not mandate any specific date by which the Agency must finalize these remaining regulations.

On May 18, 2023, the Federal Trade Commission issued a Policy Statement on biometric information and the agency’s approach to regulating the use of biometric information and biometric information technologies under Section 5 of the FTC Act.

The guidance addresses both unfair and deceptive acts, providing examples for both. The guidance explains that an example of unfairness is the use of biometric technologies like facial or voice recognition to provide access to financial accounts despite the “potential for bias. . . [which] can lead or contribute to harmful or unlawful discrimination.” Examples of deceptive acts in the guidance include making any false or misleading statements about the collection, use, or efficacy of biometric information, including omissions and “half-truths” like making “an affirmative statement about some purposes for which it will use biometric information but fail[ing] to disclose other material uses of the information.”

The statement defines “biometric information” as “data that depict or describe physical, biological, or behavioral traits, characteristics, or measurements of or relating to an identified or identifiable person’s body.” This definition covers any data directly or indirectly derived from someone’s body or a depiction of their body “to the extent that it would be reasonably possible to identify the person.” This policy announcement follows less than a month after the FTC issued a joint agency statement asserting that automated systems and innovative new technologies are fully covered by existing federal laws protecting civil rights, competition, consumer protection, and equal opportunity. These statements reflect and respond to a growing government-wide push for greater transparency and accountability in data collection and algorithmic decision making—particularly in response to the Biden administration’s Blueprint for an AI Bill of Rights.

Although the FTC’s Policy Statement only provides “a non-exhaustive list of examples of practices it will scrutinize,” the guidance indicates that “businesses should continually assess whether their use of biometric information or biometric information technologies causes or is likely to cause consumer injury in a manner that violates Section 5 of the FTC Act.” The FTC states in the guidance that potential violations of Section 5 will be assessed holistically, using factors such as:

  • Failing to assess foreseeable harms to consumers before collecting biometric information;
  • Failing to promptly address known or foreseeable risks;
  • Engaging in surreptitious and unexpected collection or use of biometric information;
  • Failing to evaluate the practices and capabilities of third parties;
  • Failing to provide appropriate training for employees and contractors; and
  • Failing to conduct ongoing monitoring of technologies that the business develops, offers for sale, or uses in connection with biometric information.

Any company collecting or using biometric information should be proactive and consider steps that may include regularly training employees and third parties, actively notifying and updating consumers on its data policies, and implementing regular audits of any biometric technology it develops or uses.

The FTC announced the Policy Statement at a moment when the agency has shown a willingness to enforce the laws within its jurisdiction. Three recent settlements, two brought under Section 5 and one under the Children’s Online Privacy Protection Act (COPPA) Rule, illustrate the high costs of improper biometric data collection and usage.

In 2019, the FTC imposed a $5 billion penalty on Facebook, Inc., for violating “a 2012 FTC order by deceiving users about their ability to control the privacy of their personal information.” The complaint focused in particular on Facebook’s data-sharing with third-party apps and the company’s failure to act against apps that it knew were violating its privacy policies. The order also requires Facebook to overhaul its privacy decision-making processes and submit to heightened compliance monitoring for 20 years.

In 2021, a photo storage service called Everalbum, Inc., settled with the FTC for allegedly deceiving consumers about its data retention policies and its use of facial recognition technology. As part of the settlement, Everalbum was required to delete not only the photos and videos of deactivated users, but also any models and algorithms developed using user-uploaded images. The company ended up shutting down because it could not compete with Apple and Google’s photo storage services.  

Most recently, a settlement order in 2022 forced Kurbo, Inc., a weight loss app marketed for children, to delete all illegally collected data on children under 13, destroy any algorithms developed from that data, and pay a $1.5 million penalty. The case was brought under the FTC’s COPPA rule rather than Section 5, and further underscores the seriousness with which the agency is pursuing privacy violations and deceptive practices related to biometric data.

The FTC is sending a clear message that it plans to use Section 5 and other rules to regulate biometric data privacy. To avoid liability, companies will have to carefully ensure that their data policies align with their conduct and keep consumers informed of any changes.

Special thanks to Meaghan Katz, a 2023 Summer Intern at Crowell, for their assistance in the writing of this piece.

On March 2, 2023, the Biden-Harris Administration released the National Cybersecurity Strategy.[i] The highly anticipated Strategy signals that a more overt and aggressive approach to mitigating cyber risks may be necessary to drive real change, and it anticipates increased communication and partnerships between private companies and government agencies.[ii] The new Strategy sets a strategic objective of “enhancing public-private operational collaboration to disrupt adversaries,” including sharing insights between private organizations and government agencies and pushing private companies to come together and organize their efforts through nonprofit organizations.[iii]

The Strategy highlights the government’s commitment to investing in cybersecurity research and new technologies to protect the nation’s security and improve critical infrastructure defenses. It outlines five pillars of action, each of which implicates critical infrastructure entities, from strengthening their cybersecurity processes to receiving support from the federal government.[iv] It also makes evident the Administration’s desire to shift the burden of cybersecurity (and its associated costs and liability) from individuals, small businesses, and local government to the entities with the greatest expertise and resources, such as large owners and operators of critical infrastructure, vendors, and software developers.[v]

Companies evaluating their alignment with the Strategy may also consider their law enforcement and government agency relationships. This includes: i) assessing how the Strategy impacts interactions between victim companies and their counsel on the one hand and the Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA) on the other when seeking assistance with cybersecurity challenges, and ii) preparing for the new expectation of agency involvement in the private sector when it comes to cybersecurity.

“Private companies and their legal counsel can take several steps now to ensure they create a positive relationship with agencies ahead of new regulation expected to follow the National Cybersecurity Strategy,” says Brian Hale, a former FBI Assistant Director of the Office of Public Affairs and current Managing Director in FTI Consulting’s Cybersecurity practice, who has experience helping companies with cybersecurity challenges from both a government and private sector perspective. Some of these actions include:

  • Form Connections. Be familiar with the lead cybersecurity FBI agent(s) in the local FBI field office before an incident occurs and develop a relationship.
  • Attend Outreach Events. Agencies like the FBI and CISA often host outreach events to meet with companies and counsel in their area or participate as panelists and presenters at industry functions.[vi]
  • Keep Track of Announcements. Stay up to date with the latest messaging released from the FBI, CISA, and other agencies regarding cybersecurity best practices and regulations. This also includes remaining current on any potential threats and new requirements announced that can help prepare organizations for cybersecurity incidents.
  • Leverage Industry Groups, such as InfraGard. This nonprofit is a partnership between the FBI and the U.S. private sector, created to protect critical infrastructure, with the common goal of “advancing national security.”[vii]

Through plans to increase defense of critical infrastructure and partner on sector-specific cybersecurity requirements, the National Cybersecurity Strategy emphasizes that relationships and communication between the public and private sectors remain paramount in achieving the common goal of minimizing cybersecurity risk. Plans to shift more responsibility for cybersecurity onto the organizations best positioned to handle this risk, like government agencies, will result in better protection from threat actors for individuals and small businesses, but will only be successful if proper streams of information and trust between the public and private sectors are established.

Furthermore, the Strategy encourages the forging of international partnerships to pursue shared goals. This includes building coalitions to counter threats to the digital ecosystem, strengthening international partner capacity, expanding U.S. ability to assist allies and partners, building coalitions to reinforce global norms of responsible state behavior, and securing global supply chains for information, communications, and operational technology products and services.

Whether an organization is in the public or private sector, its cybersecurity program will undoubtedly be impacted by the National Cybersecurity Strategy.

For a more detailed summary and analysis of the National Cybersecurity Strategy, Crowell examines the Strategy in a March 2023 client alert.[viii]


The views expressed herein are those of the author(s) and not necessarily the views of FTI Consulting, Inc., its management, its subsidiaries, its affiliates, or its other professionals. FTI Consulting, Inc., including its subsidiaries and affiliates, is a consulting firm and is not a certified public accounting firm or a law firm.

FTI Consulting is an independent global business advisory firm dedicated to helping organizations manage change, mitigate risk and resolve disputes: financial, legal, operational, political & regulatory, reputational and transactional. FTI Consulting professionals, located in all major business centers throughout the world, work closely with clients to anticipate, illuminate and overcome complex business challenges and opportunities. ©2023 FTI Consulting, Inc. All rights reserved. fticonsulting.com

Crowell & Moring LLP is an international law firm with offices in the United States, Europe, MENA, and Asia. Drawing on significant government, business, industry and legal experience, the firm helps clients capitalize on opportunities and provides creative solutions to complex litigation and arbitration, regulatory and policy, and corporate and transactional issues. The firm is consistently recognized for its commitment to pro bono service and its programs and initiatives to advance diversity, equity and inclusion.


[i] “National Cybersecurity Strategy,” The White House (March 2023), https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf.

[ii] Id.

[iii] Id.

[iv] Id.

[v] Id.

[vi] “Community Relations,” Federal Bureau of Investigation (March 2023), https://www.fbi.gov/how-we-can-help-you/outreach.

[vii] “Welcome to InfraGard,” InfraGard (March 2023), https://www.infragard.org/.

[viii] “Biden Administration Releases Comprehensive National Cybersecurity Strategy,” Crowell & Moring (March 6, 2023), https://www.crowell.com/en/insights/client-alerts/biden-administration-releases-comprehensive-national-cybersecurity-strategy.

Ever since the public launch of OpenAI’s ChatGPT, the world has been gasping at the astonishing accomplishments of this generative AI chatbot:  a simple “prompt” in the form of a question (“which are the most important decisions of the CJEU in copyright?”) will receive a credible response within seconds (“The Court of Justice of the European Union (CJEU) has issued several important decisions in the field of copyright law. While it is challenging to determine a definitive list of the most important decisions, here are some key rulings that have had significant impact”, and it goes on to list some of the CJEU’s most well-known decisions, such as Infopaq, UsedSoft, Svensson, Deckmyn, ACI Adam, GS Media and YouTube).

Impressive for sure and, although the information is not always reliable (ChatGPT has been reported to invent legal precedents, to the embarrassment of the lawyers who have submitted briefs on that basis…), companies recognise the appeal of AI-powered chatbots – they are here to stay.  To avoid reeling in these applications as legal Trojan horses, in-house counsel would do well to identify the legal risks of this new technology:  racial, sexual, and other biases that may induce discriminatory acts, as well as misinformation, are well-documented and important hurdles to the widespread adoption of AI solutions in a corporate environment.  In this post, we will, however, address some of the concerns relating to copyright, trade secrets and the protection of personal data.

Copyright and the protection of trade secrets may complicate AI applications in two different ways:  the use of “input” data and the “output” of the AI solution.

The algorithms of the AI solution are “trained” using datasets that may contain content protected under copyright or related rights (such as performances, recordings, or databases).  Similarly, such protected content may be present in the prompts that the user submits to the AI-powered solution.  Keeping in mind the broad interpretation that the CJEU has given to the reproduction right, the copies made of these datasets may be seen as “reproductions” and consequently require prior authorisation from the authors and holders of related rights – unless the use is covered by one of the (harmonised) legal exceptions.

Under the Information Society Directive N° 2001/29, the exceptions for temporary acts of reproduction or the research exception may have exempted some uses, but these provisions were considered insufficient to create the legal certainty required to stimulate the development of innovative technologies, such as AI. With the Copyright in the Digital Single Market Directive N° 2019/790 (“DSM Dir”), two new exceptions were introduced for “text and data mining” (“TDM”), i.e. “any automated analytical technique aimed at analysing text and data in digital form in order to generate information which includes but is not limited to patterns, trends and correlations” (art. 2(2) DSM Dir).  Text and data mining is permitted without the rightholders’ prior consent in two cases:

  1. TDM for scientific research (art. 3 DSM Dir):  a research organisation (art. 2(1) DSM Dir) or cultural heritage institution (art. 2(3) DSM Dir) may reproduce or extract the protected content in a TDM process, for the purpose of scientific research under this exception – provided that they have “lawful access”.
  2. TDM for other purposes (art. 4 DSM Dir):  other users may reproduce or extract protected lawfully accessible works (including software) and other content in a TDM process, for other purposes – provided that the rightholders have not “reserved” the use of the works or other subject matter “in an appropriate manner, such as machine-readable means in the case of content made publicly available online”. 

These exceptions have been transposed in the Belgian Code of Economic Law (art. 190, 20° and 191/1, 7° CEL).  Important challenges will remain: in particular, the modalities for the “opt-out” from the general TDM exception that the different rightholders may exercise are not (yet) standardised and, as the TDM exceptions are implemented in 27 member states, there may be national variations.  In addition, authors and performers may enjoy moral rights in the member states, which are not harmonised under the DSM Directive.

In the meantime, technical responses to this web-wide crawling are being developed (such as Have I Been Trained?) to find out whether a particular file has been used.  Some AI providers are proposing mechanisms to give authors some control over the use of their works (e.g. Stability AI) – but it is uncertain whether they suffice to comply with article 4 DSM Dir.

While the protection of trade secrets is arguably less of an issue when an AI solution is trained using publicly accessible datasets, this may be an issue where employees include their employer’s confidential information in the “prompts” they submit to an AI chatbot.  

Under the Trade Secrets Directive N° 2016/943 (“TS Dir”), the acquisition, use or disclosure of a trade secret without the consent of the trade secret holder is unlawful (art. 4 TS Dir).  Logically, the provider of the AI-powered chatbot is likely to put all responsibility for the prompts on the user’s side (e.g. in its terms of use, OpenAI requests users not to submit any sensitive information and requires them to permit the use of all input content to provide and maintain the service).  It is then for the user to make sure that their prompts contain no trade secrets or confidential information of their employer or of third parties that their employer holds under a confidentiality agreement.

While the mere transfer of sensitive information to a service provider is unlikely to affect the secret nature of the “trade secret”, it may go against the confidentiality policy or violate the conditions of a confidentiality agreement with a supplier, a client or a partner, as a copy of the confidential information will be out of the trade secret holder’s control (i.e. stored on the servers of the AI provider).

As to the AI-generated output, it may infringe copyright if the original traits of a protected work can be recognised in the output.  In most cases, however, the AI-creations imitate the style of a musician, a painter or a photographer.  Elements of style are, however, considered “ideas” and consequently not protected under copyright.  By contrast, where the AI-output imitates the voice or other features of singers or actors, the latter may rely upon their personality rights and their image rights to oppose the use of their appearance.

Lastly, the AI-generated output may itself be protected under various rights.  While traditional copyright typically requires the creative input of a human author and will not be available for AI-productions without human intervention (regardless of questions of evidence), such requirement is absent under the related rights – in particular the protection of phonograms or first fixations of films.  This means that no author can control the reuse of AI-generated output on the basis of copyright, but the producers of AI-produced audio- or audiovisual content may have the right to prohibit the reproduction or communication to the public or to conclude licences for their productions.

Another important legal concern is the protection of personal data.  As organisations increasingly turn to AI-powered chatbots to enhance operations and customer experiences, data protection issues have come to the forefront.  Notably, the Italian Data Protection Authority identified sufficient violations of the GDPR to temporarily ban the use of ChatGPT in Italy. However, after addressing the concerns raised by the Italian Authority, OpenAI reactivated access at the end of April 2023.  In the same vein, the European Data Protection Board established a dedicated task force to address data protection concerns related to ChatGPT.  These actions underscore the importance of considering data protection when deploying AI chatbots.

AI chatbots process personal data during the training of AI models and in the real-time interactions with users.  One key concern is the need for organisations to establish a valid legal basis for processing personal data, which can include, for example, consent, legitimate interest, or contractual obligations.  Another requirement is transparency of the data processing:  organisations need to provide easily understandable information to data subjects, clearly explaining how their personal data is processed within AI-powered chatbot systems.  In addition to these core concerns, other issues may arise relating to the need for age verification systems to mitigate risks associated with inappropriate interactions with minors, as well as the implementation of robust security measures to protect personal data from data breaches and unauthorised access.

The data protection analysis will depend on the precise technical features that AI-chatbot organisations will actually deploy.  ChatGPT, for instance, offers various usage scenarios, including the use of the web version, API use by developers, and the recently introduced ChatGPT plugins.  Each scenario has different implications for data protection and different roles and responsibilities of the involved actors.

The first scenario covers the regular use of the web version of ChatGPT.  In this case, the chatbot is used in the way it was developed and is offered by OpenAI.  For this web version, OpenAI and the users are the primary actors.  OpenAI acts as a controller for both training the models and processing user requests.  However, organisations using ChatGPT in their workflow need to be cautious about potentially processing personal data, all the more because it is retained by default for training purposes.  Compliance with data protection regulations becomes crucial in this context.

The second scenario involves API users, i.e. developers.  An API user obtains an OpenAI API key, which gives additional control over the AI model:  API users can refine the ChatGPT models for their own needs and can deploy them either as standalone models or integrated into their own products.  In this case, developers act as controllers for the processing of personal data.  OpenAI provides a data processing addendum to API users, qualifying itself as a processor.  However, this qualification may raise questions due to the control exerted by OpenAI.
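
For illustration, a minimal sketch of this API scenario, assuming the official openai Python client; the model name, prompt and key placeholder below are hypothetical examples rather than recommendations:

    # Illustrative sketch of the API usage scenario (not an official recipe).
    # Assumes the "openai" Python client; model name and prompt are hypothetical.
    from openai import OpenAI

    client = OpenAI(api_key="YOUR_API_KEY")  # the key identifies the developer/organisation

    response = client.chat.completions.create(
        model="gpt-4",  # hypothetical model choice
        messages=[
            # Any personal or confidential data placed in a prompt leaves the
            # organisation's control and is transmitted to the provider.
            {"role": "user", "content": "Summarise our complaints-handling process."},
        ],
    )
    print(response.choices[0].message.content)

In this set-up, the developer decides what data enters the prompts and how the output is used, which is consistent with the controller role described above.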

The third scenario concerns ChatGPT plugins, which enable access to third-party knowledge sources and databases.  The plugin functionality allows ChatGPT to consider third-party information from these sources in its generated responses.  In this case, according to OpenAI, both the third-party creator of the plugin and OpenAI act as separate and independent controllers of personal data.  Also in this case, this qualification may raise questions, and further examination by the task force set up by the European Data Protection Board is eagerly anticipated.

Some takeaways for organisations that wish to assess some of the legal risks resulting from the use of AI-powered tools in a professional context:

It is important to raise awareness within AI-using organisations, i.e. among their company lawyers, employees, freelancers and other partners, and to assess whether a company policy would be useful.  A non-representative poll during the IBJ webinar of 5 May 2023 indicated that AI-powered chatbots are already commonly being used in a professional context (50% of the respondents confirmed such use) and that a minority has a policy in place (3% stated that their organisation prohibits the use of AI-tools, 19% permits the use within certain limits, and 77% has no policy at all).

Where AI-using organisations establish a policy on the use of AI-tools for professional purposes, they may consider the following points.

  • Developers of AI-solutions may use all web-accessible content to train their algorithms.  Organisations that do not wish their content to be used for these purposes may look into technical, organisational and contractual means of reserving their rights against TDM processes (e.g. by finetuning the robots.txt instructions or other metadata – see the sketch after this list).  Creators of high-value content in particular, such as broadcasters, music producers or newspaper publishers, may want to look into the appropriate expression of their opt-out under all available (copyright or related) rights.
  • Organisations ought to assess the risk that their employees, freelancers or other partners transmit copyright-protected content or confidential information (belonging to the organisation or to third parties) in a prompt to the AI-powered tool and, if useful, address such risks in clear guidelines.
  • Where important confidential information is at stake, it may be worth revising confidentiality clauses in contracts with third parties (partners, suppliers, customers) to whom such information is disclosed, to explicitly prohibit the use of third-party AI tools without explicit contractual guarantees.
  • Where the organisation intends to use the AI-generated output in any way that requires some sort of exclusivity, it ought to verify whether it can exercise any statutory exclusive rights (such as producers’ rights) and, where applicable, settle such rights with the AI-provider.  Where no such statutory rights exist, it may want to organise contractual protection to control the use of the AI-generated output.
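
As an illustration of the technical side of such a TDM reservation, the following sketch (in Python, using only the standard library) checks whether a site’s robots.txt disallows particular crawlers; the crawler tokens and URLs are examples only, and robots.txt is just one possible machine-readable way to express an opt-out:

    # Illustrative only: check whether a site's robots.txt reserves content
    # against given crawlers. The tokens "GPTBot" and "CCBot" are examples;
    # each AI provider documents its own crawler token, and robots.txt is
    # only one possible machine-readable form of a TDM reservation.
    from urllib import robotparser

    parser = robotparser.RobotFileParser()
    parser.set_url("https://www.example.com/robots.txt")  # hypothetical site
    parser.read()

    for agent in ("GPTBot", "CCBot", "*"):
        allowed = parser.can_fetch(agent, "https://www.example.com/articles/")
        print(agent, "allowed" if allowed else "disallowed")

Whether such a technical signal suffices as an “appropriate” machine-readable reservation within the meaning of art. 4 DSM Dir remains, as noted above, an open question.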

Also from a data protection perspective, AI-using organisations should ensure that they have the necessary contractual arrangements in place, for example a data processing agreement or another data protection agreement with the AI chatbot provider.  This agreement should clearly outline the responsibilities of both parties and stipulate that the provider complies with all applicable data protection laws, including the GDPR.  If there are any international data transfers, the organisation should make sure that the transfer relies on a valid transfer mechanism and that the necessary transfer protocols are in place.  It is recommended that AI-using organisations conduct a data protection impact assessment and, if needed, a transfer impact assessment before allowing the use of AI chatbots in their organisation.  It may also be necessary to refine internal rules on the use of personal data in order to establish guidelines for the proper use of AI-powered chatbots by employees, including rules against the sharing of personal data, particularly sensitive and special categories of personal data, through the chatbots.

AI-developers, on the other hand, must be wary of rightholders’ expressions of their wish to reserve their rights against TDM and must proactively check whether any such instructions are given in code or elsewhere.  In their terms and conditions, they should clearly indicate how rightholders’ and users’ content (in prompts or otherwise) will be used, so that they have sufficient authorisation to operate their AI-driven solutions.  Ideally, they also indicate more explicitly for which purposes the users’ input is used (“performing the service”, “improving the service”), how long the content will be stored and whether the user (or their organisation) can request the erasure of the content.

AI-developers also need to consider data protection and are encouraged to conduct a data protection impact assessment for the development and provision of AI-powered tools.  Especially when training new models, whenever possible, AI-developers could use anonymisation techniques on data before feeding it into the chatbot for training purposes. In general, AI-developers could adhere to the principle of data minimisation, using only the necessary categories and amount of personal data for model refinement or development.  Next to many other requirements, transparency is also crucial, and data subjects should be informed about the use of their personal data in data protection notices.

If you would like to learn more about the subject and to stay informed about recent legal developments, you are invited to the Crowell & Moring Legal Knowledge Library – Crowell Hub 💡, a free portal designed specifically to support in-house counsel.

Overview

On March 27, 2023, President Biden signed the Executive Order on Prohibition on Use by the United States Government of Commercial Spyware that Poses Risks to National Security (EO), restricting federal agencies’ use of commercial spyware.  The Biden Administration cited targeted attacks utilizing commercial spyware on U.S. officials and human rights abuses abroad as motivations for these restrictions.

Usage Restrictions

The EO is not a blanket ban on commercial spyware.[1]  Instead, it bars federal government agencies from using commercial spyware tools if they pose significant counterintelligence or security risks to the U.S. government, or significant risks of improper use by a foreign government or foreign person, including to target Americans or enable human rights abuses.  Indirect use of such spyware (e.g. through a contractor or other third party) is also prohibited.  The EO establishes risk factors indicative of prohibited commercial spyware, including:

  • Past use of the spyware by a foreign entity against U.S. government personnel or devices;
  • Past use of the spyware by a foreign entity against U.S. persons;
  • The spyware was or is furnished by an entity that maintains, transfers, or uses data obtained from the commercial spyware without authorization from the licensed end-user or the U.S. government, or has disclosed or intends to disclose non-public information about the U.S. government or its activities without authorization from the U.S. government;
  • The spyware was or is furnished by an entity under the direct or effective control of a foreign government or foreign person engaged in intelligence activities directed against the United States;
  • A foreign actor uses the commercial spyware to limit freedoms of expression, peaceful assembly or association; or to enable other forms of human rights abuses or suppression of civil liberties; or
  • The spyware is furnished to governments that have engaged in gross violations of human rights, whether such violations were aided by the spyware or not.

The above restrictions do not apply to the use of commercial spyware for purposes of testing, research, analysis, cybersecurity, or the development of countermeasures for counterintelligence or security risks, or for purposes of a criminal investigation arising out of the criminal sale or use of the spyware.  Additionally, an agency may be able to obtain a waiver allowing it to temporarily bypass the EO’s prohibitions, but only in “extraordinary circumstances.”

Agency Reporting Requirements

The EO contains various agency reporting requirements.  Some are specific to the Director of National Intelligence (DNI) while some apply to all federal agencies:

  • Within 90 days of the EO, the DNI will issue a classified intelligence assessment on foreign commercial spyware and foreign use of commercial spyware.
  • Within 90 days of the DNI assessment, all federal agencies must review their use of commercial spyware and discontinue uses that violate the EO. 
  • If an agency elects to continue using commercial spyware, within one year of the EO it must report its continued use to the Assistant to the President for National Security Affairs (APNSA) and explain why its continued use does not violate the EO.

New Commercial Spyware Procurement Procedures

Agencies seeking to procure commercial spyware “for any purpose other than for a criminal investigation arising out of the criminal sale or use of the spyware” must:

  • Consider any relevant information provided by the DNI, and solicit such information from the DNI if necessary;
  • Consider the risk factors listed above;
  • Consider any controls the commercial spyware vendor has in place to detect and prevent potential security risks or misuse; and
  • Notify APNSA within 45 days of procurement and provide a description of its intended purpose and use(s) for the commercial spyware.  

Key Takeaways

While the EO signals that the federal government is approaching commercial spyware with caution, interested parties should note that the government has been careful not to rule out its usage altogether. The EO, for example, does not address the government’s use of non-commercial (i.e. government-produced) spyware, or mention state or local government use of commercial spyware at all. The EO also allows federal agencies to procure and employ commercial spyware so long as the agency determines that the spyware does not pose a significant risk to national security or a significant risk of improper use. Vendors of commercial spyware should pay close attention to the risk factors identified in the EO and consider implementing internal controls to address them.

On March 22, 2023, the Department of Defense (DoD) issued a final rule requiring contracting officers to consider supplier risk assessments in DoD’s Supplier Performance Risk System (SPRS) when evaluating offers. SPRS is a DoD enterprise system that collects contractor quality and delivery performance data from a variety of systems to develop three risk assessments: item risk, price risk, and supplier risk. The final rule introduces a new solicitation provision, DFARS 252.204-7024, which instructs contracting officers to consider these assessments, if available, in the determination of contractor responsibility.

SPRS risk assessments are generated daily using specific criteria and calculations based on the price, item, quality, delivery, and contractor performance data collected in the system.  Although compliance with cybersecurity clauses DFARS 252.204-7012, -7019, or -7020 is not currently used to generate supplier risk assessments, the potential cybersecurity implications are evident. Under DFARS -7019 and -7020, DoD requires contractors to demonstrate their compliance with cybersecurity standard NIST SP 800-171 by scoring their implementation of 110 controls and uploading their score to SPRS.

Some believe that DoD could incorporate the NIST 800-171 Basic Self-Assessment score into the supplier risk assessment at any time. If SPRS scores are incorporated into supplier risk assessments, this solicitation provision will make the accuracy and veracity of contractors’ SPRS scores significantly more important. Inaccurate SPRS scores could open contractors to legal risk, including False Claims Act (FCA) liability. Under the Department of Justice’s Civil Cyber Fraud Initiative, FCA actions regarding inaccurate cybersecurity representations have increased. Because these assessments will now influence award decisions, accuracy will become key.
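
For context, a rough sketch of how such a self-assessment score is commonly tallied under the DoD Assessment Methodology: the score starts at 110 and each unimplemented control subtracts a weighted deduction. The control identifiers and weights below are illustrative placeholders, not an authoritative mapping.

    # Illustrative tally of a NIST SP 800-171 Basic self-assessment score.
    # Assumes the commonly described approach of starting at 110 and
    # subtracting a weighted deduction per unimplemented control.
    # Control IDs and point values are placeholders, not official guidance.
    MAX_SCORE = 110

    unimplemented = {
        "3.1.1": 5,    # hypothetical: access control requirement not met
        "3.13.11": 5,  # hypothetical: cryptography requirement not met
        "3.3.3": 1,    # hypothetical: audit review requirement not met
    }

    score = MAX_SCORE - sum(unimplemented.values())
    print(f"Score to report in SPRS: {score}")  # 99 in this example

Because the reported number feeds into SPRS and may inform award decisions, even a simple tally like this should be backed by documented evidence for each control.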

For more information, please contact the professional(s) listed below, or your regular Crowell & Moring contact.

On March 15, 2023, the Iowa House passed Senate File 262 (SF 262), a comprehensive state privacy bill. If enacted, SF 262 would be the sixth comprehensive state-level privacy law, following California, Virginia, Colorado, Utah, and Connecticut, and it would go into effect on January 1, 2025.

Iowa’s bill is closest to the Utah Consumer Privacy Act (UCPA), with broad exemptions and more limited obligations for controllers. Notably, SF 262 exempts “pseudonymous data” and “de-identified data” (as defined by the bill) from certain consumer rights, including certain opt-out rights.

For the most part, Iowa’s bill treads familiar territory. Its scope extends to entities that conduct business in Iowa or produce products or services targeted to Iowa residents, and that meet the following requirements, in a calendar year: (1) control or process personal data of at least 100,000 consumers; or (2) control or process personal data of at least 25,000 consumers and derive over 50% of gross revenue from sale of personal data.

Iowa’s bill does not create new obligations for businesses compared to what is already required under other states’ privacy laws. For example, the Iowa bill’s privacy notice requirements are not unique to SF 262 – companies with privacy policies drafted to comply with the CCPA (California Consumer Privacy Act) and VCDPA (Virginia Consumer Data Protection Act) are not likely to have to amend their policies in order to comply with Iowa’s requirements. In addition, like Utah and Virginia, Iowa’s bill includes a narrow definition of “sale” of personal data (the exchange of personal data for monetary consideration by the controller to a third party), as well as numerous exceptions. 

Iowa’s bill notably diverges from consumer protections found in most existing state privacy laws. For example, it only requires clear notice and an opt-out for sensitive data, while other states like Colorado, Connecticut, and Virginia adopted opt-in requirements. The Iowa bill also lacks a consumer right to correct data. There are no requirements for covered entities to conduct privacy impact assessments or establish data minimization principles. Furthermore, responses to consumer requests not only have a 90-day response period (compared to 45 days in other states) but also are subject to a potential 45-day extension.

This bill does not contain a private right of action; enforcement rights rest exclusively with the Iowa Attorney General. The AG may seek injunctive relief and civil penalties of up to $7,500 per violation. However, the AG must first provide a 90-day cure period before bringing any enforcement action, and that cure period does not sunset.

We will continue to monitor the developments and keep you informed of any further updates.

Eight months after the issuance of the draft Measures on the Standard Contract for the Export of Personal Information (“SCC Regulations”), on February 24, 2023, the Cyberspace Administration of China (“CAC”) released the final version of the SCC Regulations, along with the Standard Contractual Clauses (“SCCs”). The SCCs set a baseline for cross-border data transfer agreements. This can impact any business that relies on the sharing of information between China and third countries, like the United States.

The SCCs will come into effect on June 1, 2023, and companies have an additional six months (until November 30, 2023) to comply with the SCCs’ requirements for the transfer of data outside of China.

China’s Three Data Transfer Mechanisms are Now Settled

The PRC Personal Information Protection Law (“PIPL”) requires personal information processors (similar to the concept of data controllers under the General Data Protection Regulation) to implement one of the following three data transfer mechanisms, if personal information is transferred outside of China:

  1. Complete a Security Assessment by the CAC;
  2. Complete a Security Certification by a certification institution designated by the CAC; or
  3. Adopt the SCCs.

Prior to the release of the final SCCs, the CAC had already released the Measures on Security Assessment of Cross-Border Data Transfer and Specifications on Security Certification for Cross-Border Personal Information Processing Activities in the summer of 2022. These measures include detailed guidance on the security assessment and security certification process necessary for the transfer of data outside of China.

The issuance of the SCCs indicates that the final piece of the puzzle of China’s cross-border data transfer regime is now settled. Previously, many companies that were not required to go through the security assessment process took a “wait-and-see approach” pending the finalization of the SCCs. Now, with the final piece of China’s cross-border data transfer regime in place, a full assessment of the available data transfer mechanisms is required.

Application Scope of the SCCs

The SCCs may be a more user-friendly approach to qualify a data transfer, as the SCCs do not require a review by the CAC or certification by a third-party institution. In addition, they provide for more definite contractual terms. However, the SCCs may be adopted only if all of the following conditions are met:

  1. The data exporter is not a critical information infrastructure operator (“CIIO”), which is broadly defined as an operator of critical network facilities or information systems in important industries (such as finance, energy, or transportation), where destruction, loss of function, or data leakage may seriously endanger China’s national security, peoples’ livelihood, or the public interest;
  2. The data exporter has not processed personal information of more than one million individuals (“Mass Processor”); AND
  3. Since January 1 of the previous year, the data exporter has not made aggregated transfers of:
  • personal information of more than 100,000 individuals; or
  • sensitive personal information of more than 10,000 individuals.

If any of the above conditions are not met, a CAC security assessment will be required instead, and the SCCs would not be an option. Notably, a CAC security assessment will also be triggered if any important data is transferred out of China, even if the SCCs are used to transfer data. Important data are broadly defined as any data that – if tampered with, destroyed, leaked, illegally accessed, or used – may endanger China’s national security, economic operation, social stability, or public health and safety.
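
A minimal sketch of how these eligibility conditions fit together, with the thresholds coded as summarised above; this illustrates the decision logic only, since questions such as CIIO status or “important data” turn on facts that a simple check cannot capture:

    # Illustrative only: rough eligibility check for the SCCs route, using
    # the thresholds summarised above. Real assessments depend on facts
    # (e.g. CIIO status, "important data") that code cannot determine.
    def sccs_available(is_ciio: bool,
                       individuals_processed: int,
                       pi_transferred: int,            # since Jan 1 of the previous year
                       sensitive_pi_transferred: int,  # since Jan 1 of the previous year
                       transfers_important_data: bool) -> bool:
        if transfers_important_data:
            return False  # important data always triggers a CAC security assessment
        if is_ciio:
            return False  # CIIOs must undergo the CAC security assessment
        if individuals_processed > 1_000_000:
            return False  # "Mass Processor"
        if pi_transferred > 100_000:
            return False
        if sensitive_pi_transferred > 10_000:
            return False
        return True

    print(sccs_available(False, 500_000, 80_000, 2_000, False))  # True in this example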

Are Modifications to the SCCs Permissible?

According to the SCC Regulations, the parties are not allowed to make any modifications to the SCCs. The parties, however, may add terms, to the extent they do not conflict with the SCCs.

For companies who have already entered into a data processing agreement (“DPA”), questions abound regarding how the SCCs would interact and integrate with these existing agreements. Where corporations are considering combining the two through the use of exhibits, the SCCs may need to be the main body of an agreement, with any additional terms, including those in an existing DPA, placed into an exhibit.

Governing Law and Liability

Notably, the governing law of a DPA transferring data outside of China must be PRC law. However, the parties are granted some flexibility to submit their disputes under the SCCs to a PRC court or, if arbitration is preferred, to a PRC or international arbitration tribunal in a member state of the New York Convention.

Under the SCCs, the data exporter and the data importer assume joint and several liability to the data subjects.  As such, data subjects can enforce their rights against both such parties as a third-party beneficiary under the SCCs.

Are There Different Modules Available for Different Transfer Scenarios?

The European Union’s Standard Contractual Clauses cover four different modules: controller-to-controller, controller-to-processor, processor-to-processor, and processor-to-controller. China’s SCCs do not draw any distinction among such transfers. China’s SCCs, however, do set out different obligations where the overseas data recipient is an “entrusted processor.” An entrusted processor is a processor who does not determine the purpose or method of the processing, but instead only processes personal information based on a data transfer agreement with the personal information processor and/or the instructions from the personal information processor.

Liabilities for Violating the SCC Regulations

Companies violating the SCC Regulations may be subject to:

  1. civil claims by data subjects for any damages caused;
  2. administrative penalties, including a fine up to RMB 50 million (approximately USD 7.3 million) or 5% of the last year’s turnover (whichever is higher), suspension of relevant business and revocation of business license or other licenses/approvals; and/or
  3. criminal liability in the worst cases.

The SCC Regulations create a whistle-blowing mechanism for individuals or organizations to report any non-compliance or violations to the CAC. The CAC may also request a meeting with a company and may issue an order to a company to take corrective measures, if any significant risks or any data breach are identified. 

What Steps Should Companies Take to Comply with the SCC Regulations?

Complying with China’s SCCs requires more than just signing the SCCs provided by the CAC. We set forth below some of the key steps that companies should take to comply with the requirements under the SCC Regulations.

Data Inventory: The first step toward compliance is often to conduct a data inventory to understand the type and volume of data transferred outside of China, the entities and jurisdictions involved, the purpose(s) and method of the processing, and the IT systems involved. The SCC Regulations specifically prohibit companies from dividing data among their subsidiaries in order to avoid volume thresholds that trigger the applicability of the security assessment.

Adopt an Appropriate Data Transfer Mechanism: Based on the findings of the data inventory, companies would then determine whether the data transfer triggers the security assessment by the CAC. If the security assessment is not triggered, the next step would be to determine the most appropriate data transfer mechanism. Generally, for intra-company data transfers, companies may choose to use security certifications or SCCs to qualify their data transfers out of China if the security assessment is not triggered.  For data processing that is subject to the extraterritorial effect of the PIPL (i.e., direct collection of personal information from individuals in China by a foreign personal information processor), it appears that the only option is a security certification, given the SCCs are generally used for transfers between a Chinese personal information processor and a foreign recipient.  For other transfers below the security assessment threshold, the SCCs may be adopted.

Personal Information Protection Impact Assessment (“PIA”): Data exporters are required to undertake a PIA before transferring any personal information outside of China. The PIA report is a required document for the subsequent filing with the local CAC (as explained below), in conjunction with a filing of a data processing agreement. There is no standard format yet for a PIA in the context of SCCs.

Implement Appropriate Internal Policies and Processes: The SCCs impose a series of obligations on data exporters and recipients, such as notifying the data subjects and obtaining their consent (or separate consent), where necessary; taking technical and organizational measures to protect the security of the personal information involved (e.g., encryption, de-identification, or access controls); establishing a process for responding to data subjects’ requests or complaints; and formulating an incident response plan. Companies should take steps to ensure that their internal policies and processes accommodate the requirements of the SCCs, and keep detailed records demonstrating their compliance in case of any audits, inspections, or investigations.

Execute the SCCs: Because data exporters are required to file the SCCs (or related DPA) with the local CAC within ten working days of the effective date of the SCCs, it is advisable for companies to complete the above preparatory work before execution of the SCCs. Otherwise, the filing may be rejected by the local CAC (if a PIA is not conducted and filed with the DPA, for example), or additional corrective measures may be required to mitigate any risks involved in the transfer.

Filing with the Local CAC: Data exporters must file the executed SCCs along with the PIA report with the provincial CAC where they are located within ten working days. All documents filed with the local CAC must be written in Chinese or translated into Chinese.

Although the SCC Regulations provide a six-month grace period, given the amount of preparatory work involved in the implementation of the SCCs, companies should act as soon as practicable to take the necessary steps to implement the appropriate transfer mechanisms. Doing so will help avoid any disruption to their data transfer activities outside of China.