Data Law Insights

Legal insights on navigating privacy, data protection, cybersecurity, information governance, and e-discovery

Can You Copyright Infringe Anonymously? Revisited.

Posted in Advertising & Product Risk Management, Cybersecurity / Data Security, Litigation
Joe Meadows, Laura Aradi

On November 28, 2017, the Sixth Circuit, in a 2:1 decision, ruled on the anonymous copyright infringement case we discussed back in April. The central issue in the case involved whether an adjudicated copyright infringer can remain anonymous. A decision in favor of the infringer could encourage anonymous unlawful speech. A decision in favor of the judgment plaintiff could encourage suits designed only to “out” the name of an anonymous critic.

In a case of first impression, the Sixth Circuit didn’t make a final decision. See Signature Management Team, LLC v. Doe, No. 16-2188, 2017 WL 5710571 (6th Cir. Nov. 28, 2017).

The Court remanded the case to the district court to balance the infringer’s anonymity interest against both the judgment plaintiff’s interest in unmasking the infringer and the public’s interest in open judicial proceedings, with a presumption in favor of disclosure. In short, the Court held that the infringer’s anonymity was not automatically lost upon his defeat in the litigation … at least under these circumstances.

Law Firm Data Security Seminar

Posted in Cybersecurity / Data Security, Data Breach, Ethics
Crowell & Moring

Please join us for a seminar on December 5 in Washington, D.C., or December 6 in New York City on “Law Firm Data Security.” Our very own partner Evan Wolff will be presenting alongside RSA’s Doug Howard and Niloofar Howe. Our panelists will cover critical issues such as:

  • How do you defend high-demand data?
  • Are you ready to respond to a cyber-attack?
  • What is your ethical obligation regarding data security knowledge and mitigating the risk of a data breach?
  • What are the “reasonable safeguards” for a given matter?
  • Are you leveraging state-of-the-art technology?
  • Can you assure your clients that their data is secure?

Click here to sign up and see the full agenda and panelists.

Report on the Autonomous Vehicle Safety Regulation World Congress 2017

Posted in Cybersecurity / Data Security, Privacy, Product Liability & Torts
Cheryl A. Falvey, Chahira Solh

The big takeaways from the Autonomous Vehicle Safety Regulation World Congress centered on the importance of a federal scheme for AV regulation and the reality of the states’ interest in traditional issues such as traffic enforcement, product liability, and insurance coverage.  In keeping with those messages, the World Congress kicked off with NHTSA Deputy Administrator and Acting Director Heidi King speaking about NHTSA’s goals and interests, followed almost immediately by wide participation from the states, including California, Michigan, and Pennsylvania, among others.

Deputy Administrator King emphasized NHTSA’s desire to foster an environment of collaboration among all stakeholders, including the states, and stressed that safety remains the top priority at NHTSA.  NHTSA has provided some guidance and looks forward to hearing from stakeholders about the best way to support and encourage growth in autonomous vehicles.  NHTSA wants to provide a flexible framework that keeps the door open for private-sector innovation.  It is necessary to build public trust and confidence in the safety of autonomous vehicles, and that can only be accomplished by all stakeholders working together.

NHTSA is working on the next version of its AV guidance, having already issued version 2.0, with version 3.0 expected in 2018.  The guidelines will remain voluntary, but NHTSA is ready to support entities as they try to implement the voluntary guidance.  Working with the states, DOT, OEMs, and other stakeholders, NHTSA hopes to remain flexible and allow for rapid changes.  Later in the conference, lawyers emphasized the importance of compliance with the guidance in minimizing liability, particularly in no-fault states such as Michigan.

Dr. Bernard Soriano, deputy director of the California Department of Motor Vehicles, similarly confirmed that California’s overarching interest in regulating AVs is the safe operation of vehicles on its roadways.  In summarizing California’s October 11, 2017 release of revised regulations, he emphasized that “change happens fast” and that the state is pleased to now be close to allowing completely driverless testing.  He recognized federal preemption over the design of the vehicle and its crashworthiness and emphasized the state’s interest in the operation of the vehicles and compliance with state traffic laws.

Join Us for a Webinar – Tuesday, October 10, 2017 12:00 – 1:00 PM ET

Posted in Uncategorized
Crowell & Moring

It’s been said that “A lie gets halfway around the world before the truth can even pull its boots on.” In today’s world of online commentary and social media, this is truer than ever.

In the cyber-world, you or your company may be accused of selling defective goods, providing poor service, misleading customers, defrauding the government, or committing unethical or criminal conduct. These accusations can appear in e-mails to your clients or government enforcement agencies, as posts on blogs or company websites, or in streamed videos on social media. What’s more, they can be made or circulated by competitors or persons cloaked behind the anonymity of the internet, making it difficult (but not impossible) to hold responsible persons accountable.
As a result, internet defamation cases are on the rise. A surprise reputational attack in the cyber-world requires quick thinking and a game plan.

Please click here to register for this webinar.

This 60-minute webinar will cover:

  • growing types of internet defamation (and sometimes intellectual property infringement) cases
  • “hot” litigation issues, including First Amendment anonymity, Communications Decency Act Section 230, and personal jurisdiction issues
  • related anti-SLAPP statute issues
  • steps to defend your online reputation


DOJ Asks Supreme Court to Resolve Split over Its Ability to Compel Foreign Records

Posted in Criminal Law, Cybersecurity / Data Security
Paul Rosen, Chris Garcia

U.S.-based technology companies and courts across the country have disagreed over whether the Stored Communications Act applies extraterritorially to allow U.S. law enforcement to enforce warrants reaching data stored overseas.  Some courts have treated data stored overseas as a “physical” object and, therefore, refused to extend the reach of the Act abroad.  Other courts have found that the Act authorized a warrant for overseas data because the technology company was subject to the court’s jurisdiction and the warrant sought information from the only place the company could access it.  Companies have called on Congress to clarify the issue, and the government has asked the Supreme Court to do the same.

Click here to read more.

D.C. Circuit: Alleged theft of healthcare subscriber information satisfies Article III harm standard under Spokeo

Posted in Data Breach, Insurance, Litigation
Jeffrey L. Poston, Peter B. Miller, Charles Austin

The U.S. Court of Appeals for the D.C. Circuit has now weighed in on whether plaintiffs can bring a putative class action arising from an alleged data breach absent allegations of actual misuse of the compromised data.  Emphasizing the “low bar to establish [] standing at the pleading stage,” the D.C. Circuit reversed a ruling that the alleged theft of personally identifying policyholder information alone, without any specific allegations of harm, did not satisfy Article III’s standing requirements.  In Attias v. CareFirst, Inc., a group of CareFirst customers alleged that a 2014 cyberattack compromised their personal information and thus increased their risk of identity theft from compromised Social Security numbers and financial information, as well as their risk of medical identity theft from compromised health insurance subscriber ID numbers.  The district court dismissed their claims, finding that the plaintiffs failed to allege “facts demonstrating a substantial risk that stolen data has been or will be misused in a harmful manner.”  Applying the “substantial risk” standard discussed in the Supreme Court’s Clapper v. Amnesty International and Susan B. Anthony List v. Driehaus decisions, the D.C. Circuit reversed.

The D.C. Circuit noted that identity theft is a sufficiently concrete and particularized injury for Article III purposes, so the only issue before the court was whether the allegations showed “that the plaintiffs now face a substantial risk of identity theft” as a result of the alleged breach.  Echoing the Seventh Circuit’s 2015 decision addressing the Neiman Marcus data breach, the D.C. Circuit inferred that the alleged attacker(s) had the intent and ability to misuse the data because the purpose of a data breach is, presumably, to make fraudulent charges or commit identity theft.  In light of this presumption, the D.C. Circuit reasoned that the alleged theft of either type of information—even before misuse—presented a substantial risk of future injury, which constituted the “actual or imminent” harm necessary for Article III standing.  As to the other standing requirements, the court found the alleged harm fairly traceable to CareFirst’s alleged failure to properly secure policyholder information, and that the policyholders’ risk-mitigation expenses satisfied Article III’s redressability requirement.

The D.C. Circuit’s conclusion furthers a circuit split on standing that has deepened since the Supreme Court’s 2016 Spokeo v. Robins decision.  In Spokeo, the Supreme Court noted that a bare procedural violation did not necessarily constitute “concrete” harm, and that the Ninth Circuit failed to address whether the alleged harm presented “a degree of risk sufficient to meet the concreteness requirement” of Article III.  Even though Spokeo is the Supreme Court’s most recent decision regarding Article III standing, the CareFirst decision relied upon Clapper as the basis for its reversal.  It should be noted that these two cases arose from different fact patterns and addressed wholly different statutes and allegations of harm.  Nonetheless, there remains disagreement over what meets Article III’s “concreteness” requirement for standing in the privacy class action realm.  The D.C. Circuit’s decision seems to align with the Third, Sixth, Seventh, and Eleventh circuits, each of which has permitted consumer data breach suits on the basis of possible future misuse.  The Second and Fourth circuits, however, have reached different conclusions in 2017.  This split may ultimately increase the potential costs of litigation if data breach plaintiffs begin concentrating class action filings in the more “friendly” jurisdictions and avoid courts that do not align with the D.C. and Seventh circuits.

New Jersey Restricts Retailers’ Collection and Use of Customer Information

Posted in Cybersecurity / Data Security, Data Breach, Information Management, Privacy
Paul Rosen, Stephanie Reiter

On July 21, 2017, Governor Chris Christie signed the Personal Information Privacy and Protection Act (S-1913) (the “Act”) into law, further enhancing the protections afforded to consumers who make retail credit card purchases in New Jersey.  As technology has evolved, many retailers rely on electronic barcode scanners to review and capture information on customers’ driver’s licenses and other forms of identification.  The Act addresses these new technologies by:

  • Restricting the type of personal information that retailers may collect and retain from consumers’ identification cards to name, address, date of birth, identification card number, and the state in which the card was issued;
  • Limiting the purposes for which retailers may use personal information obtained from consumer identification cards (e.g. age verification);
  • Reiterating retailers’ breach reporting obligations under New Jersey’s breach notification law;
  • Requiring retailers to securely store the limited information they are permitted to retain after electronically scanning the barcodes on consumers’ identification cards; and
  • Prohibiting retailers from disclosing or selling such information to third parties unless otherwise permitted to do so by the statute.

The Act carries civil penalties of $2,500 for first-time offenders and $5,000 for repeat offenders.  In addition, the law gives consumers a private right of action against retailers for violations of the statute.  While retailers that simply “card” customers (e.g., manually view identification cards) are not subject to the Act, it is important to note that their data handling practices may trigger liability under other applicable state laws (e.g., data destruction laws).

The Personal Information Privacy and Protection Act, which becomes effective on October 1, 2017, represents an important step in protecting consumer information in the context of retail transactions.  First, the Act’s purpose limitation and security provisions will minimize the likelihood and impact of a data breach by substantially reducing the number of sensitive data elements that retailers collect, store, and transmit to third parties, and by requiring extra layers of security to protect the limited information retailers may now retain.  Second, by prohibiting the unauthorized sale of consumer information for marketing, advertising, or promotional activities, the Act will give consumers more control over their personal information.  As technological advances continue to impinge on the privacy rights of consumers, it is likely that other states will enact similar legislation to ensure that the use of emerging technologies does not allow businesses to capture and use consumer information in a manner that is inconsistent with the purposes for which such information was originally collected and communicated to consumers at the point of sale.

FBI and FTC on Privacy Risks Stemming from “Smart” Toys

Posted in Advertising & Product Risk Management, Cybersecurity / Data Security, Privacy
Stephanie Reiter

Earlier this month, the Federal Bureau of Investigation (FBI) issued a public comment about privacy, cybersecurity, and safety risks associated with internet-connected toys.  The FBI’s comment builds on the Federal Trade Commission’s recently updated guidance under the Children’s Online Privacy Protection Act (COPPA), which explicitly states that connected toys are deemed “websites or online services” subject to COPPA.  In our sister blog, Retail & Consumer Products Law, our colleagues highlight the key issues associated with connected toys, the protections with which smart toy manufacturers must comply under COPPA, and the potential trajectory for government enforcement efforts in the context of connected toys.

Recent IoT Device Cases

Posted in Advertising & Product Risk Management, Cybersecurity / Data Security, Litigation
Clifford J. Zatz, Joe Meadows, Laura Aradi, Paul Mathis

“There are many ways to surveil each other now, unfortunately,” including “microwaves that turn into cameras, et cetera.  So we know that that is just a fact of modern life.”  Kellyanne Conway, March 12, 2017 Interview with New Jersey’s The Record.

Data from microwaves-turned-cameras has yet to appear in court, but data from other IoT devices has.  And while such data has been invaluable in cracking criminal cases and in pursuing civil claims or defenses, it has also raised constitutional and privacy issues.

Here we highlight some recent IoT device cases.

  • smart speaker:  In a murder case, the police seized the defendant’s smart speaker on the theory that it may offer evidence of what transpired the night of the murder at the defendant’s home.  A search warrant was then served on the speaker’s manufacturer for the audio recordings that had been uploaded to out-of-state servers.  The manufacturer moved to quash the warrant, contending that it had First Amendment rights to publish and speak through the speaker.  The motion was later mooted when the defendant gave the manufacturer permission to turn over any audio recordings.  See Arkansas v. Bates, No. CR-2016-370 (Cir. Ct. Benton County, Arkansas).
  • search engines:  In censorship and unfair competition cases, plaintiffs brought claims against internet companies arising out of their search results.  The companies moved to dismiss on the grounds that their search results were protected speech under the First Amendment.  Florida and New York federal courts agreed:  the companies’ production and ranking of search results was similar to that of a newspaper exercising protected editorial discretion over what to publish.  It made no difference that the search results arose out of automated computer programming.  See e-ventures Worldwide, LLC v. Google, Inc., No. 14-cv-646 (M.D. Fla. Feb. 8, 2017); Zhang v. Baidu.com, Inc., 10 F. Supp. 3d 433 (S.D.N.Y. 2014).
  • fitness wearable:  In another murder case, the victim’s husband told police that he was at home fighting off an intruder when his wife returned from the gym no later than 9 am.  According to the husband, the intruder then shot his wife, tied him up, and ran out of the house.  The police searched the wife’s fitness wearable.  Its data showed that the wife was still moving about the home a distance of 1,217 feet between 9:18 am and 10:05 am.  After additional discoveries of the husband’s extra-marital affair and attempt to cash in on the wife’s life insurance, the husband was charged with murder.
  • pacemaker:  In a home arson case, the homeowner told police that he did a number of things as soon as he discovered the fire:  he gathered his belongings, packed them in a suitcase and other bags, broke out the bedroom window with his cane, threw his belongings outside, and rushed out of the house.  The police searched the 59-year-old’s pacemaker.  Its data showed that the man’s heart rate barely changed during the fire.  And after a cardiologist testified that it was “highly improbable” that a man in his condition could do the things claimed, the man was charged with arson and insurance fraud.
  • biometric devices:  In privacy violation cases, plaintiff consumers have alleged that technology companies illegally obtained, used, or shared personal “biometric identifiers” (generally, fingerprints, voiceprints, and retinal/facial scans) without consent in violation of privacy laws.  Illinois state and federal courts have sustained some of these claims and approved settlements.  See Rivera v. Google Inc., No. 16-C-02714, 2017 U.S. Dist. LEXIS 27276 (N.D. Ill. Feb. 27, 2017); Sekura v. L.A. Tan Enterprises, No. 2015-CH-16694 (Cir. Ct. Cook County, Illinois).

These IoT device cases, whether in the civil or criminal context, raise significant First and Fourth Amendment and privacy issues.  Given the growth of new IoT devices and their expanding uses, identifying and understanding these constitutional and privacy issues will only gain importance in courtroom disputes.  And similar issues surrounding artificial intelligence and augmented reality devices are on the horizon.

But don’t hold out for an onslaught of microwave-turned-camera cases.

FTC Submits Public Comment to Working Group Tasked with Developing Guidance on IoT Security, Upgradability, and Patching

Posted in Cybersecurity / Data Security, Data Breach, Internet of Things
Jeffrey L. Poston, Stephanie Reiter

On June 19, 2017, the Federal Trade Commission (FTC) issued a public comment regarding the National Telecommunications & Information Administration’s (NTIA) draft guidance titled Communicating IoT Device Security Update Capability to Improve Transparency for Customers.  In commenting on the guidance, the FTC acknowledged the benefits of and challenges to IoT device security, and encouraged manufacturers to take reasonable measures to secure devices and inform consumers about their security features.

The FTC also recommended three specific modifications to the working group’s proposed “Elements of Updatability.” First, including additional “key elements” that manufacturers should disclose prior to sale:

  • Whether and how the device can receive upgrades;
  • The date on which security support begins;
  • The guaranteed minimum security support period; and
  • Whether a “smart” device will become highly vulnerable or lose functionality after support ends.

Second, offering “additional elements” to consumers before or after purchase:

  • Uniform method for notifying consumers of available updates;
  • Method to sign up for support notifications, separate from marketing communications; and
  • Real-time notifications when security support is about to end.

Third, removing an “additional element” that described the process by which the manufacturer provides updates, as the technical details likely will not benefit consumers.

While the FTC’s comments are not binding, the FTC’s suggestions reflect lessons learned from its prior enforcement actions, policy initiatives, and consumer and business education.  As a result, IoT device manufacturers should consider implementing the FTC’s proposed practices, regardless of whether NTIA incorporates the FTC’s recommendations into the finalized guidance document.