Data Law Insights

Legal insights on navigating privacy, data protection, cybersecurity, information governance, and e-discovery

D.C. Circuit: Alleged theft of healthcare subscriber information satisfies Article III harm standard under Spokeo

Posted in Data Breach, Insurance, Litigation
Jeffrey L. Poston, Peter B. Miller, Charles Austin

The U.S. Court of Appeals for the D.C. Circuit has now weighed in on whether plaintiffs can bring a putative class action arising from an alleged data breach without alleging actual misuse of the compromised data.  Emphasizing the “low bar to establish standing at the pleading stage,” the D.C. Circuit reversed a ruling that the alleged theft of personally identifying policyholder information, standing alone and without any specific allegations of harm, did not satisfy Article III’s standing requirements.  In Attias v. CareFirst, Inc., a group of CareFirst customers alleged that a 2014 cyberattack compromised their personal information, increasing their risk of identity theft from compromised Social Security numbers and financial information as well as their risk of medical identity theft from compromised health insurance subscriber ID numbers.  The district court dismissed their claims, finding that the plaintiffs failed to allege “facts demonstrating a substantial risk that stolen data has been or will be misused in a harmful manner.”  Applying the “substantial risk” standard discussed in the Supreme Court’s Clapper v. Amnesty International and Susan B. Anthony List v. Driehaus decisions, the D.C. Circuit reversed.

The D.C. Circuit noted that identity theft is a sufficiently concrete and particularized injury for Article III purposes, so the only issue before the court was whether the allegations showed “that the plaintiffs now face a substantial risk of identity theft” as a result of the alleged breach.  Echoing the Seventh Circuit’s 2015 decision addressing the Neiman Marcus data breach, the D.C. Circuit inferred that the alleged attacker(s) had the intent and ability to misuse the data because the purpose of a data breach is, presumably, to make fraudulent charges or commit identity theft.  In light of this presumption, the D.C. Circuit reasoned that the alleged theft of either type of information—even before misuse—presented a substantial risk of future injury, which constituted the “actual or imminent” harm necessary for Article III standing.  As to the other standing requirements, the court found the alleged harm fairly traceable to CareFirst’s alleged failure to properly secure policyholder information, and that the policyholders’ risk-mitigation expenses satisfied Article III’s redressability requirement.

The D.C. Circuit’s conclusion deepens a circuit split on standing that has grown since the Supreme Court’s 2016 Spokeo v. Robins decision.  In Spokeo, the Supreme Court noted that a bare procedural violation does not necessarily constitute “concrete” harm, and that the Ninth Circuit had failed to address whether the alleged harm presented “a degree of risk sufficient to meet the concreteness requirement” of Article III.  Even though Spokeo is the Supreme Court’s most recent decision regarding Article III standing, the CareFirst decision relied upon Clapper as the basis for its reversal.  It should be noted that these two cases arose from different fact patterns and addressed wholly different statutes and allegations of harm.  Nonetheless, there remains disagreement over what meets Article III’s “concreteness” requirement for standing in the privacy class action realm.  The D.C. Circuit’s decision seems to align with the Third, Sixth, Seventh, and Eleventh circuits, each of which has permitted consumer data breach suits on the basis of possible future misuse.  The Second and Fourth circuits, however, reached contrary conclusions in 2017.  This split may ultimately increase the potential costs of litigation if data breach plaintiffs begin concentrating class action filings in the more “friendly” jurisdictions and avoiding courts that do not align with the D.C. and Seventh circuits.

New Jersey Restricts Retailers’ Collection and Use of Customer Information

Posted in Cybersecurity / Data Security, Data Breach, Information Management, Privacy
Paul Rosen, Stephanie Reiter

On July 21, 2017, Governor Chris Christie signed the Personal Information Privacy and Protection Act (S-1913) (the “Act”) into law, further enhancing the protections afforded to consumers who make retail credit card purchases in New Jersey.  As technology has evolved, many retailers have come to rely on electronic barcode scanners to review and capture information from customers’ driver’s licenses and other forms of identification.  The Act addresses these new technologies by:

  • Restricting the type of personal information that retailers may collect and retain from consumers’ identification cards to name, address, date of birth, identification card number, and the state in which the card was issued;
  • Limiting the purposes for which retailers may use personal information obtained from consumer identification cards (e.g. age verification);
  • Reiterating retailers’ breach reporting obligations under New Jersey’s breach notification law;
  • Requiring retailers to securely store the limited information they are permitted to retain after electronically scanning the bar codes on consumers’ identification cards; and
  • Prohibiting retailers from disclosing or selling such information to third parties unless otherwise permitted to do so by the statute.

The Act carries civil penalties of $2,500 for first-time offenders and $5,000 for repeat offenders.  In addition, the law gives consumers a private right of action against retailers for violations of the statute.  While retailers that simply “card” customers (e.g., manually view identification cards) are not subject to the Act, it is important to note that their data handling practices may trigger liability under other applicable state laws (e.g., data destruction laws).

The Personal Information Privacy and Protection Act, which becomes effective on October 1, 2017, represents an important step in protecting consumer information in the context of retail transactions.  First, the Act’s purpose limitation and security provisions will minimize the likelihood and impact of a data breach by substantially reducing the number of sensitive data elements that retailers collect, store, and transmit to third parties, and by requiring extra layers of security to protect the limited information retailers may now retain.  Second, by prohibiting the unauthorized sale of consumer information for marketing, advertising, or promotional activities, the Act will give consumers more control over their personal information.  As technological advances continue to impinge on the privacy rights of consumers, it is likely that other states will enact similar legislation to ensure that emerging technologies do not allow businesses to capture and use consumer information in a manner inconsistent with the purposes for which such information was originally collected and communicated to consumers at the point of sale.

FBI and FTC on Privacy Risks Stemming from “Smart” Toys

Posted in Advertising & Product Risk Management, Cybersecurity / Data Security, Privacy
Stephanie Reiter

Earlier this month, the Federal Bureau of Investigation (FBI) issued a public service announcement about the privacy, cybersecurity, and safety risks associated with internet-connected toys.  The FBI’s announcement builds on the Federal Trade Commission’s recently updated guidance under the Children’s Online Privacy Protection Act (COPPA), which explicitly states that connected toys are deemed “websites or online services” subject to COPPA.  In our sister blog, Retail & Consumer Products Law, our colleagues highlight the key issues associated with connected toys, the protections with which smart toy manufacturers must comply under COPPA, and the potential trajectory for government enforcement efforts in the context of connected toys.

Recent IoT Device Cases

Posted in Advertising & Product Risk Management, Cybersecurity / Data Security, Litigation
Clifford J. Zatz, Joe Meadows, Laura Aradi, Paul Mathis

“There are many ways to surveil each other now, unfortunately,” including “microwaves that turn into cameras, et cetera.  So we know that that is just a fact of modern life.”  Kellyanne Conway, March 12, 2017 Interview with New Jersey’s The Record.

Data from microwaves-turned-cameras has yet to appear in court, but data from other IoT devices has.  And while that data has proved invaluable in cracking criminal cases and in pursuing civil claims or defenses, it has also raised constitutional and privacy issues.

Here we highlight some recent IoT device cases.

  • smart speaker:  In a murder case, the police seized the defendant’s smart speaker on the theory that it may offer evidence of what transpired the night of the murder at the defendant’s home.  A search warrant was then served on the speaker’s manufacturer for the audio recordings that had been uploaded to out-of-state servers.  The manufacturer moved to quash the warrant, contending that it had First Amendment rights to publish and speak through the speaker.  The motion was later mooted when the defendant gave the manufacturer permission to turn over any audio recordings.  See Arkansas v. Bates, No. CR-2016-370 (Cir. Ct. Benton County, Arkansas).
  • search engines:  In censorship and unfair competition cases, plaintiffs brought claims against internet companies arising out of their search results.  The companies moved to dismiss on the grounds that their search results were protected speech under the First Amendment.  Florida and New York federal courts agreed:  the companies’ production and ranking of search results was similar to that of a newspaper exercising protected editorial discretion over what to publish.  It made no difference that the search results arose out of automated computer programming.  See e-ventures Worldwide, LLC v. Google, Inc., No. 14-cv-646 (M.D. Fla. Feb. 8, 2017); Zhang v. Baidu.com Inc., 10 F. Supp. 3d 433 (S.D.N.Y. 2014).
  • fitness wearable:  In another murder case, the victim’s husband told police that he was at home fighting off an intruder when his wife returned from the gym no later than 9 am.  According to the husband, the intruder then shot his wife, tied him up, and ran out of the house.  The police searched the wife’s fitness wearable.  Its data showed that the wife was still moving about the home, covering a distance of 1,217 feet, between 9:18 am and 10:05 am.  After investigators also discovered the husband’s extramarital affair and his attempt to cash in on the wife’s life insurance, the husband was charged with murder.  See https://www.nytimes.com/2017/04/27/nyregion/in-connecticut-murder-case-a-fitbit-is-a-silent-witness.html.
  • pacemaker:  In a home arson case, the homeowner told police that he did a number of things as soon as he discovered the fire:  he gathered his belongings, packed them in a suitcase and other bags, broke out the bedroom window with his cane, threw his belongings outside, and rushed out of the house.  The police searched the 59-year-old’s pacemaker.  Its data showed that the man’s heart rate barely changed during the fire.  And after a cardiologist testified that it was “highly improbable” that a man in his condition could have done the things he claimed, the man was charged with arson and insurance fraud.  See http://www.abajournal.com/news/article/data_on_mans_pacemaker_led_to_his_arrest_on_arson_charges.
  • biometric devices:  In privacy violation cases, plaintiff consumers have alleged that technology companies illegally obtained, used, or shared personal “biometric identifiers” (generally, fingerprints, voiceprints, and retinal or facial scans) without consent in violation of privacy laws.  Illinois state and federal courts have sustained some of these claims and approved settlements.  See Rivera v. Google Inc., No. 16-C-02714, 2017 U.S. Dist. LEXIS 27276 (N.D. Ill. Feb. 27, 2017); Sekura v. L.A. Tan Enterprises, No. 2015-CH-16694 (Cir. Ct. Cook County, Illinois); http://www.chicagotribune.com/bluesky/originals/ct-biometric-illinois-privacy-whats-next-bsi-20170113-story.html.

These IoT device cases, whether in the civil or criminal context, present interesting First and Fourth Amendment issues and implicate important privacy rights.  Given the growth of new IoT devices and their expanding use, identifying and understanding these constitutional issues and privacy rights will only become more important in courtroom disputes.  And similar issues surrounding artificial intelligence and augmented reality devices are on the horizon.

But don’t hold out for an onslaught of microwave-turned-camera cases.

FTC Submits Public Comment to Working Group Tasked with Developing Guidance on IoT Security, Upgradability, and Patching

Posted in Cybersecurity / Data Security, Data Breach, Internet of Things
Jeffrey L. Poston, Stephanie Reiter

On June 19, 2017, the Federal Trade Commission (FTC) issued a public comment regarding the National Telecommunications & Information Administration’s (NTIA) draft guidance titled Communicating IoT Device Security Update Capability to Improve Transparency for Customers.  In commenting on the guidance, the FTC acknowledged both the benefits of and the challenges to IoT device security, and encouraged manufacturers to take reasonable measures to secure their devices and to inform consumers about those devices’ security features.

The FTC also recommended three specific modifications to the working group’s proposed “Elements of Updatability.” First, including additional “key elements” that manufacturers should disclose prior to sale:

  • Whether and how the device can receive upgrades;
  • The date on which security support begins;
  • Guaranteed minimum security support period; and
  • Whether a “smart” device will become highly vulnerable or lose functionality after support ends.

Second, offering “additional elements” to consumers before or after purchase:

  • Uniform method for notifying consumers of available updates;
  • Method to sign up for support notifications, separate from marketing communications; and
  • Real-time notifications when security support is about to end.

Third, removing an “additional element” that described the process by which the manufacturer provides updates, as the technical details likely will not benefit the customer.

While the FTC’s comments are not binding, its suggestions reflect lessons learned from prior enforcement actions, policy initiatives, and consumer and business education.  As a result, IoT device manufacturers should consider implementing the FTC’s proposed practices regardless of whether NTIA incorporates the recommendations into the finalized guidance document.

New Texas Law Explicitly Allows Driverless Cars

Posted in Cybersecurity / Data Security
Jeffrey L. Poston, Brandon C. Ge

On June 15, Texas Gov. Greg Abbott signed a bill that explicitly allows self-driving cars on the state’s roads and highways, regardless of whether a human is physically present. While there was no ban on driverless cars, Texas law did not explicitly permit them either. This created a grey area of the law that fueled apprehension among manufacturers about testing self-driving cars in Texas.

Senate Bill 2205 allows driverless vehicles to operate in the state as long as the vehicle is:

  • Capable of operating in compliance with state traffic and motor vehicle laws;
  • Equipped with a recording device;
  • Equipped with an automated driving system that complies with applicable federal law and federal motor vehicle safety standards;
  • Registered and titled in accordance with Texas law; and
  • Covered by motor vehicle liability coverage or self-insurance.

With the new law, Texas joins a growing list of states that officially permit driverless cars on public roads, setting the stage for the eventual rollout of autonomous vehicles to consumers. But while the technology has remarkable potential, it also raises significant privacy and security concerns. Autonomous vehicles are data-gathering machines and may log historic and real-time geolocation data, which will likely be highly coveted for its ability to reveal individuals’ lifestyles and purchasing habits. Cybersecurity is another major issue – for example, how will collected data be stored and transmitted? In addition, vulnerabilities may allow hackers to hijack and steal self-driving cars or interfere with their safe operation.

Judge Approves Neiman Marcus Data Breach Settlement

Posted in Cybersecurity / Data Security, Data Breach
Jeffrey L. Poston, Brandon C. Ge

Last week, an Illinois judge preliminarily approved a $1.6 million settlement between Neiman Marcus and a class of customers affected by a 2013 data breach. The settlement, which the parties agreed to in March, covers U.S. residents whose credit card or debit card was used between July 16, 2013 and January 10, 2014 at any Neiman Marcus store. Any such customers who file a claim will receive up to $100, with the four class representatives receiving $2,500 each. The settlement does not require Neiman Marcus to take any specific security-related measures.

The 2013 data breach, which was the result of malware installed in Neiman Marcus’s computer system, potentially exposed approximately 370,385 cards. Approximately 9,200 of these were later used fraudulently. The suit was filed in March 2014 and was initially dismissed for lack of standing in September 2014. The Seventh Circuit later revived the case, finding that costs for fraud prevention, such as credit monitoring, were sufficient to establish standing.

Nevada Enacts Internet Privacy Regulation

Posted in Internet of Things, Privacy
Jeffrey L. Poston, Leigh Colihan

On June 12, Nevada Gov. Brian Sandoval (R) signed into law a bill requiring the operator of an Internet website to disclose the types of information it collects on Nevada residents.  Under the law, any company or person who (1) owns or operates an Internet website or online service for commercial purposes, (2) collects information about individuals residing within Nevada, and (3) maintains minimum contacts with Nevada must make available a notice listing the personally identifiable information the operator collects on consumers.  The operator must also disclose whether it allows third-party access to that personal information and must notify consumers of any process for reviewing and requesting changes to their covered information.  An operator that falls out of compliance has 30 days to remedy the failure or face a civil penalty imposed by the state attorney general.

Other states have introduced similar legislation to enhance Internet privacy protections following President Trump’s repeal of the Federal Communications Commission’s broadband privacy rules.  For example, Illinois’s “Right to Know” bill has passed the Senate and is now pending in the House.  The Illinois bill would require websites to notify consumers about what data they collect and to whom they sell that data.  As more states propose and pass their own regulations, compliance could become challenging for companies if the requirements vary, mirroring the oft-cited “patchwork” of state data breach notification laws.

Data Breach Class Action Dismissed for Not Establishing Economic Injury

Posted in Data Breach, Litigation
Jeffrey L. Poston, Brandon C. Ge

Earlier this week, an Illinois federal court dismissed a class action against book retailer Barnes & Noble that alleged breach of contract, invasion of privacy, and violations of state consumer fraud and breach reporting laws. The case, dismissed for failure to establish economic harm, marks another data point in delineating which data breaches are actionable and highlights perhaps the most challenging issue for plaintiffs in data breach class actions.

The complaint stemmed from a data breach that Barnes & Noble suffered in 2012 where hackers tampered with PIN pad terminals in 63 Barnes & Noble stores across nine states, compromising customers’ credit card and debit card information. The Court previously ruled that the plaintiffs had to allege economic or out-of-pocket damages caused by the data breach in order to state a claim.

The U.S. District Court for the Northern District of Illinois ruled that the plaintiffs’ alleged injuries, including the diminished value of their personally identifiable information, time spent with bank and police employees, and emotional distress, were insufficient to state a claim. Similarly, although the plaintiffs alleged a temporary inability to use their bank accounts, they failed to demonstrate how this inconvenience caused any monetary injury. The cell phone minutes plaintiffs lost speaking with bank employees and their purchases of credit monitoring were also deemed insufficient to state a claim.

Supreme Court to Hear Major Cellphone Privacy Case

Posted in Admissibility, Litigation, Privacy
Jeffrey L. Poston, Brandon C. Ge

Yesterday, the Supreme Court announced that it will hear a case with significant ramifications for privacy in the digital age. The case involves a man convicted of armed robbery based in part on cellphone location data obtained without a probable cause warrant. The conviction was appealed to the Sixth Circuit Court of Appeals, which held that the Fourth Amendment does not require a warrant in such circumstances.

While the Supreme Court has recently restricted the search of cellphone contents and the use of GPS devices by law enforcement, it ruled in 1979 that a robbery suspect had no reasonable expectation of privacy in numbers dialed from his phone because the suspect had voluntarily turned that information over to the phone company. Relying on this “third-party doctrine,” federal appeals courts have generally agreed that the Fourth Amendment does not protect cellphone location data because customers routinely provide this data to cellphone companies.

Cellphone carriers can track individuals’ approximate locations based on which signal towers their cellphones can reach, and law enforcement officials frequently obtain such information to assist in investigations. This case, likely to be heard in the fall, gives the Supreme Court an opportunity to clarify privacy rights in such records in the digital age.