Third Circuit Affirms Dismissal of FACTA Suit on Standing Grounds

A three-judge panel of the Third Circuit recently affirmed a district court ruling that dismissed a suit for violation of the Fair and Accurate Credit Transactions Act of 2003 (FACTA) for lack of Article III standing.  The plaintiff, Ahmed Kamal, alleged that receipts he received from J. Crew showed the first six and last four digits of his credit card number in violation of FACTA.  The panel, applying the Supreme Court’s ruling in Spokeo, Inc. v. Robins, held that absent more, such an allegation of a “technical violation” is insufficient to establish the concrete harm required for Article III standing.

Congress enacted FACTA to combat identity theft.  The statute prohibits businesses from printing more than the last five digits of a credit or debit card number on a receipt provided to the cardholder at the point of sale.  FACTA also prohibits businesses from printing the card’s expiration date on the receipt.  For negligent violations, FACTA provides for actual damages and attorneys’ fees; for willful violations, it provides for statutory damages of $100 to $1,000, punitive damages, and attorneys’ fees.
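
By way of illustration, a point-of-sale system might satisfy FACTA’s truncation requirement along the following lines.  This is a minimal sketch, not drawn from the statute or any case; the function name and masking character are hypothetical.

```python
def mask_pan_for_receipt(pan: str) -> str:
    """Print only the last four digits (within FACTA's last-five limit)
    and mask the rest; the expiration date must be omitted entirely."""
    return "*" * (len(pan) - 4) + pan[-4:]

print(mask_pan_for_receipt("4111111111111111"))  # ************1111
```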

Kamal alleged that J. Crew willfully violated FACTA.  On three separate occasions, at three separate J. Crew stores, he received a receipt that showed the first six and the last four digits of his card number.  Kamal did not allege that anyone else saw these receipts, that his identity was stolen, or that his credit card number was compromised.  He filed a class action suit against J. Crew, which the district court ultimately dismissed for lack of Article III standing.

Article III standing is a component of the Constitution’s case or controversy requirement.  To maintain suit under this requirement, plaintiffs must show that 1) they suffered an injury in fact, 2) it is fairly traceable to the challenged conduct of the defendant, and 3) it is likely to be redressed by a favorable judicial decision.

On appeal, the issue before the panel was whether Kamal had sufficiently pled an injury in fact.  To do so, Kamal was required to “allege an invasion of a legally protected interest that is concrete and particularized and actual or imminent, not conjectural or hypothetical.”

Kamal argued that he had pled concrete injury for two reasons.  First, he argued that the violation of FACTA’s plain text was an intangible concrete harm in itself.  Second, he argued that the increased risk of identity theft from the violation was concrete harm.  After discussing Spokeo and a number of its own decisions, the panel rejected both arguments.

The panel discussed whether the alleged intangible harm had a close relationship to a harm that traditionally formed the basis of a common law action.  It discussed a number of privacy torts and concluded that Kamal’s alleged harm did not have a close relationship to them because they all required disclosure of some personal information to a third party.  Here, however, Kamal did not allege that any third party saw the offending receipts.

Next, the panel discussed whether Kamal had alleged an increased risk of the concrete harm of identity theft to satisfy Article III standing requirements.  The panel noted that the first six digits of a credit card number identify the bank and card type, information that is permitted to be printed elsewhere on the receipt under FACTA.  Therefore, J. Crew’s alleged violation did little to increase any risk of identity theft.

The panel also noted that for the alleged harm of identity theft to become realized, Kamal would have to lose or throw away the receipt, and then a would-be identity thief would have to find it and figure out the remaining digits along with additional information such as the expiration date, the Card Verification Value (CVV), or the billing zip code.  The panel agreed with the district court that this chain of events was too attenuated and speculative to entail the sufficient degree of risk necessary to meet the concreteness requirement.
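
The panel did not quantify that risk, but a back-of-the-envelope sketch helps illustrate the point.  On a sixteen-digit card with the first six and last four digits exposed, six middle digits remain unknown, and only combinations that satisfy the industry-standard Luhn checksum are even facially valid.  The following hypothetical illustration (the sample digits are arbitrary) counts the surviving candidates by brute force:

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum used by
    payment card numbers."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        digit = int(ch)
        if i % 2 == 1:  # double every second digit from the right
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

# First six and last four digits known; six middle digits unknown.
first_six, last_four = "411111", "1111"  # arbitrary sample values
candidates = sum(
    luhn_valid(f"{first_six}{middle:06d}{last_four}")
    for middle in range(10**6)
)
print(candidates)  # 100000 -- one in ten combinations passes the checksum
```

Even after the checksum cut, a would-be thief holding the receipt would face 100,000 candidate numbers, plus the missing expiration date and CVV, which underscores why the panel viewed the chain of events as attenuated.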

The decision puts the Third Circuit in line with the Second Circuit, which ruled in Katz v. Donna Karan Co. that printing the first six credit card digits on a receipt was a “bare procedural violation” that “did not raise a material risk of harm to identity theft.”

The Eleventh Circuit, however, is on the other side of the issue.  In Muransky v. Godiva Chocolatier, Inc., it ruled that printing the first six digits on a receipt created concrete injury because it was similar to a common law action for breach of confidence.  The Kamal panel expressed its disagreement with the Eleventh Circuit because a breach of confidence action required disclosure to a third party, which Kamal had not alleged.

By requiring disclosure to a third party to show a close relationship to a traditional tort action, however, the panel essentially closed the door on one option to show concrete harm.  Under the panel’s reasoning, even printing the full credit card number on the receipt would not have a close relationship to traditional privacy torts so long as the merchant gave the receipt to only the customer.

Even so, under that set of facts, the plaintiff would likely be able to show concrete harm through the increased risk of identity theft.  The panel admitted that its “analysis would be different” had Kamal “alleged that the receipt included all sixteen digits of his credit card number, making the potential for fraud significantly less conjectural.”  But that raises the question of where courts should draw the line.  What about a receipt that shows 12 or 13 digits?  Is the risk of identity theft that much more appreciable to satisfy the concrete harm requirement?  And will this standard shift as identity thieves employ more sophisticated means to get the information they need?

Stay tuned, as this issue of FACTA standing is sure to get murkier as lower courts continue to grapple with the Supreme Court’s Spokeo decision.

Posted in Litigation, Privacy, Regulations

Congress Holds Hearings on Privacy and Data Protection

With all of the hubbub swirling around Capitol Hill last week with the Michael Cohen hearings, you can’t be blamed if you missed the fact that two important congressional hearings on privacy and data protection took place as well, one in the House and one in the Senate.

First, on February 26, the House Energy and Commerce Committee’s Subcommittee on Consumer Protection and Commerce held a hearing titled, “Protecting Consumer Privacy in the Era of Big Data.”  It was the first hearing on the topic in the 116th Congress.  Committee members expressed bipartisan support for enacting comprehensive legislation that would set a national standard for data protection, but differed on what that standard might be.  Republican committee members expressed concern that overly strict standards could burden and disadvantage small businesses.  They focused on how the European Union’s General Data Protection Regulation (GDPR) has advantaged companies with the largest market shares at the expense of smaller businesses.  Democrats, meanwhile, expressed concern over the discriminatory effects of a data marketplace without strong enough standards.

In opening statements, Representative Frank Pallone (D-NJ), Chairman of the full committee, said that dense and lengthy privacy policies mean that we can no longer rely on a system of notice and consent and advocated for a shift toward a strong, comprehensive model of data protection.  Representative Greg Walden (R-OR), Ranking Member of the full committee, expressed a desire to work toward federal privacy legislation that focuses on 1) transparency and accountability, 2) protecting innovation and small businesses, and 3) setting a single national standard.  A number of witnesses testified before the subcommittee, including representatives from Color of Change, the largest online civil rights organization in the U.S., the American Enterprise Institute, and the Center for Democracy and Technology.

Then, on February 27, the Senate Commerce Committee held a hearing titled, “Policy Principles for a Federal Data Privacy Framework in the United States.”  Committee members from both parties expressed support for strong, comprehensive legislation to protect the privacy of consumer data.  They differed, however, on what preemptive effect any federal privacy law should have.  Republican committee members tended to support the idea of preemption to avoid the potential burden of complying with a patchwork of state laws with varying standards.  Democrats, on the other hand, expressed concern that passing a preemptive federal law could lead to a lower overall standard of data protection by nullifying stricter state laws.  The preemption issue is sure to remain a hot topic as at least some of the push to pass comprehensive federal privacy legislation is being driven by concerns over the California Consumer Privacy Act (CCPA), which is scheduled to become operative on January 1, 2020.

In opening statements, Chairman Roger Wicker (R-MS) advocated for a data privacy framework that is “uniquely American.”  This framework, he said, should preempt state law and interoperate with international laws to reduce the burdens of compliance.  He made it clear that “a national framework does not mean a weaker framework than what’s being developed in the states.”  Ranking Member Maria Cantwell (D-WA) described recent data breaches as part of a larger trend rather than one-off incidents.  She suggested that the GDPR and the CCPA could provide valuable insights to congressional efforts to create comprehensive federal data protection legislation.  She stated her position that “we cannot pass a weaker federal law at the expense of the states.”  Witnesses from several organizations testified before the committee, including representatives from the 21st Century Privacy Coalition, the Retail Industry Leaders Association, and the Interactive Advertising Bureau.

While potential comprehensive federal privacy legislation has gotten a lot of attention lately, any move from the current sectoral model of U.S. data protection to a comprehensive model will be a heavy lift and will require careful analysis and balancing of privacy rights and regulatory burden.  And all the while, technologies and techniques for exploiting security vulnerabilities will continue to evolve.  Therefore, statutory and regulatory regimes must provide ample protections while also remaining flexible enough to be applicable to evolving technologies.  As expressed by Senator Cantwell, it will be no easy task.

Gregory is a Research Professional with the firm and is not an attorney.

Posted in Data Security, Legislation, Privacy

FTC Announces Record Settlement for Children’s Privacy Violations

On February 27, the FTC announced that the operators of the video social networking application Musical.ly, now known as TikTok, agreed to pay $5.7 million to settle allegations that they violated the Children’s Online Privacy Protection Act (COPPA). According to the FTC, this is the largest civil penalty obtained in a children’s privacy case. The proposed consent order also requires TikTok to destroy all user data for users under the age of 13 and for users who are over 13 but were under 13 when TikTok collected their data, unless TikTok has verifiable parental consent to collect, use, and disclose such data.

The application at issue allows users to create videos, edit them and synchronize them to music clips, and then share them with other users.  To register, users had to provide their email address, phone number, first and last name, bio, and profile picture. User accounts were set to “public” by default, meaning other users could search for and view the user’s profile. And for a period of time, the application collected geolocation data and had a “my city” function that allowed users to view a list of other users within a 50-mile radius. According to the FTC, the defendants had received thousands of parental complaints and there were numerous public reports of adults attempting to contact children through the application.

The FTC’s complaint, which the Department of Justice filed on its behalf, alleged several longstanding COPPA violations. Specifically, the FTC alleged that the defendants failed to provide required privacy notices, failed to obtain parental consent before collecting children’s personal information, failed to delete children’s personal information upon parental request, and retained children’s personal information for longer than was reasonably necessary to fulfill the purpose for which the information was collected.

The FTC also alleged that the application was directed, at least in part, to children under the age of 13 and that the defendants knew that children were using the application. Many users stated their age in their profile bio or provided grade school information that showed they were under 13. Many of the music clips available from the application’s library are popular with children, such as clips from Disney movies or clips from musical artists who are popular with “tweens and younger children.” And while the application began requesting age information in July 2017 and prevented users under 13 from creating accounts, it had no restrictions in place before then, and it did not request age information from users who created accounts before that date.

In conjunction with the announced civil penalty, Commissioners Rohit Chopra and Rebecca Kelly Slaughter issued a notable joint statement advocating for greater individual accountability for COPPA violators. The statement expressed Chopra’s and Slaughter’s belief that the violations showed “the company’s willingness to pursue growth even at the expense of endangering children.” They went on to say, “When any company appears to have made a business decision to violate or disregard the law, the Commission should identify and investigate those individuals who made or ratified that decision and evaluate whether to charge them.”

Of course, Chopra and Slaughter are only two of the five Commissioners. It therefore remains to be seen whether the Commission as a whole will adopt a more aggressive approach to enforcement against individuals or will reserve such actions for only the most egregious cases. But given the record amount of the civil penalty, the Commission is certainly wary of companies that elevate profits over privacy.

Posted in FTC, Privacy, Regulations, Social Media

Is it Time to Rethink Notice and Choice as a Fair Information Privacy Practice?

Since the 1970s, fair information practices (FIPs) or fair information privacy practices (FIPPs) have formed the framework around which organizations structure their policies on data collection, use, disclosure, and retention.  The cornerstone of individual privacy rights under the FIPs is notice and choice, sometimes called notice and consent.  That is, an organization should inform individuals about how their personal information will be processed and shared and proceed only when an individual agrees to such use.  At first glance, these dual concepts may appear to adequately protect individual privacy.  As the digital landscape has evolved, however, it has become apparent that the notice and choice paradigm fails to adequately protect individual privacy in important ways.

First, the concepts of notice and choice assume that the choice is informed, but that is likely not the case.  Privacy notices are often buried in terms of service that are lengthy, confusing, and difficult to read.  They are often full of legalese and written from the perspective of protecting the organization from legal liability rather than from the perspective of genuinely and clearly informing users as to how their personal information might be shared.  The term “privacy notice” may give users the impression that it contains information on how the organization is going to protect personal information rather than how it is going to disclose that information, which further disincentivizes a close read.  All of this leads to the conclusion that a substantial number of individuals have no idea how companies are using or sharing their personal information.

That leads to a second, related problem with the notice and choice framework.  Notice and choice adequately protect individual privacy only if the choice is meaningful and consent is freely given.  Yet accepting an organization’s privacy notice or terms of service is usually presented as a take-it-or-leave-it threshold requirement to access a website, web service, or application.  When faced with the choice of access or no access, users will choose access, no matter how draconian an organization’s information sharing practices may be.  In other words, conditioning user access on providing personal information and agreeing to an organization’s privacy policy gives the user a choice only in the most literal sense.  But given human nature and the presence of information technology in our daily lives, it really presents the user with no choice at all.

Both of these issues, difficult-to-understand privacy notices and conditioning access on acceptance, have real effects. Users are constantly inundated with lengthy terms of use that they know they have no choice but to accept if they want to access the website or application at issue. They soon become desensitized and simply click “accept.” Indeed, a 2017 Deloitte consumer survey concluded that 91% of consumers simply accept legal terms and conditions without reading them, and that number jumps to 97% for consumers aged 18 to 34. These statistics show that while notice and choice may sound good in theory, it has real shortcomings in practice.

Recognizing that notice and choice may no longer be sufficient to protect individual data privacy rights, some privacy professionals have signaled a move away from the notice and choice paradigm. For example, in a September 2018 request for comments, the National Telecommunications and Information Administration (NTIA) noted, “To date, [mandates on notice and choice], have resulted primarily in long, legal, regulator-focused privacy policies and check boxes, which only help a very small number of users who choose to read these policies and make binary choices.” Fortunately, there are a number of things that a company can do to get out in front of this transition away from a strict notice and choice regime.

First, an organization can build consumer trust by posting an easy-to-understand, layered privacy notice.  A layered privacy notice starts with a short and simple statement of what personal information the organization collects and why it collects it.  This first-layer notice then contains a link to a fuller statement of the organization’s privacy policy.  This second layer can be a broader “highlights” document as well, with a further link to the full privacy policy or perhaps an FAQ page.  Short, top-layer notices also help users and protect the organization because they are more easily read on the smaller screens of mobile devices.  Moreover, being transparent and using plain language in its privacy notice will help the organization build goodwill with its customers.

Second, an organization can protect its customers’ privacy rights by minimizing the amount of data it collects on those customers.  Organizations should give serious thought before collecting more personal information than is necessary to provide the good or service in question.  Data is not only an asset, but also a potential liability.  While a data breach is never a pleasant experience, the harm to a company’s reputation will be amplified if the breach involves the disclosure of personal information that has no rational connection to the good or service the organization provides to its customers.

Third, an organization can give its customers multiple options as to how their personal information is used and shared.  For example, customers may be fine with having their email addresses added to a company’s internal marketing list, but may not want that same information sold to a third-party mailing list.  True consumer choice requires more than an all or nothing approach.

As the practical shortcomings of the notice and choice framework become more apparent, lawmakers and regulators likely will begin to mandate a more holistic approach that looks more fully at what an organization does to protect individual privacy rights, rather than focusing on whether the organization simply complied with notice and choice requirements.  By thinking about this shift now, organizations can better prepare themselves for this transition while building trust and confidence with their customers at the same time.

Posted in Privacy, Standards

Privacy Primer: The Children’s Online Privacy Protection Act (COPPA)

COPPA is a U.S. law enacted by Congress in 1998 to address concerns regarding the online collection and disclosure of children’s personal information. Children (defined by COPPA as individuals under the age of 13) may not appreciate the significance of sharing their personal information online. Therefore, the goal of COPPA is to put the power of children’s online personal information into the hands of their parents.

COPPA tasked the FTC with promulgating rules to define what an unfair or deceptive trade practice is under the law. The current Children’s Online Privacy Protection Rule applies to operators of commercial websites or online services (including mobile applications) that are directed to children and to operators who have actual knowledge that they are collecting or maintaining children’s personal information. Under the Rule, such an operator:

(a) Must provide notice on its website of what information it collects from children, how it uses that information, and how it might disclose such information;
(b) Must obtain verifiable parental consent prior to collecting, using, or disclosing a child’s personal information;
(c) Must provide a reasonable means for a parent to review the personal information the operator has collected from a child and to refuse to permit further use of that information;
(d) Must establish and maintain reasonable procedures to protect the confidentiality, security, and integrity of personal information collected from children; and
(e) Cannot condition a child’s participation in a game, prize offering, or other activity on the child disclosing more personal information than is reasonably necessary to participate in such activity.

As a general matter, verifiable parental consent includes any method reasonably calculated, in light of available technologies, to ensure that the person providing consent is the child’s parent. The Rule lays out a list of methods that the FTC has determined to meet that requirement, such as providing a consent form for the parent to sign and return via mail, fax, or email. Violations of the Rule carry civil penalties of up to $41,484 per violation.

A number of states have passed legislation to fill the gap left by COPPA regarding teenagers. For example, the Delaware Online Privacy and Protection Act extends COPPA-like provisions to all Delaware residents who are under 18. Therefore, website operators and online service providers should be aware of potentially applicable state laws even if they do not believe that COPPA applies.

Posted in Legislation, Privacy, Regulations

Illinois Supreme Court Sheds Light on the Importance of Strict Compliance with State’s Biometric Information Privacy Act

On January 25, 2019, in Rosenbach v. Six Flags Entm’t Corp., the Illinois Supreme Court held that an individual is an “aggrieved” party under the Illinois Biometric Information Privacy Act (“BIPA”) and may seek damages absent an allegation of harm beyond a violation of the rights conferred by the statute.

The BIPA

In 2008, Illinois passed the BIPA in order to regulate “the collection, use, safeguarding, handling, storage, retention, and destruction of biometric identifiers and information.”  The BIPA imposes several obligations on entities collecting, retaining, and disclosing biometric data, including the obligation to (1) inform the individual or the individual’s representative in writing that biometric data is being collected or stored, (2) inform the individual or the individual’s representative in writing of the purpose and length of term for which the biometric data is being collected, stored, and used, and (3) receive a written release executed by the subject of the biometric data.  As part of the BIPA’s enforcement mechanism, “aggrieved” parties are granted a private right of action.

The Rosenbach Decision

In Rosenbach, the plaintiff filed a class action complaint against Six Flags Entertainment Corporation (“Six Flags”) asserting violations of the BIPA.  The complaint alleged that in 2014, the plaintiff went online to purchase her 14-year-old son a Six Flags season pass.  The plaintiff paid for the pass online, but her son was required to complete the sign-up process in person.  During a school trip to Six Flags, the plaintiff’s son completed the sign-up process by scanning his thumb into Six Flags’ biometric data capture system and obtaining a pass card; the thumb scan and pass card, used together, permitted reentry.

Among other things, the complaint alleged that Six Flags violated the BIPA because (1) the plaintiff was never notified that her son’s fingerprint would be scanned when he completed his sign-up in person, (2) neither the plaintiff nor her son were informed in writing (or in any other way) of the purpose or length of term for which the fingerprint was collected, and (3) neither the plaintiff nor her son signed a written release.

Six Flags sought to dismiss the action by arguing that in order to bring a claim as an “aggrieved” party under the statute, the plaintiff was required to allege an actual injury or harm apart from the statutory violation.  The appellate court agreed with Six Flags and held that “a plaintiff who alleges only a technical violation of the statute without alleging some injury or adverse effect is not an aggrieved person[.]”

On appeal, the Illinois Supreme Court unanimously reversed the appellate court’s decision, finding that the term “aggrieved” does not require an allegation of harm beyond a violation of the rights conferred by the BIPA.  In reaching its conclusion, the court stated that although the term “aggrieved” is not defined in the BIPA, the understanding of aggrieved—that “‘[a] person is prejudiced or aggrieved, in the legal sense, when a legal right is invaded by the act complained of or his pecuniary interest is directly affected by the decree or judgment[]’”—was embedded in Illinois jurisprudence when the BIPA was adopted and that the court “must presume that the legislature was aware of that precedent . . . .”  Additionally, the court highlighted the fact that a requirement of actual harm has been specifically identified in some statutory schemes but not in others, which led the court to further conclude that if lawmakers intended the BIPA to require an allegation of actual harm, the statute would have explicitly said so.  To illustrate this point, the court likened the BIPA to the AIDS Confidentiality Act, which authorizes relief to “aggrieved” parties and does not require proof of actual damages.  In contrast, the court referenced the Illinois Consumer Fraud and Deceptive Business Practices Act, which permits a private right of action only when the plaintiff alleges “actual” damages.

The court further reasoned that a party need not allege a harm beyond a statutory violation because when an entity violates the BIPA, “the right of the individual to maintain [his or] her biometric privacy vanishes into thin air . . .” and constitutes an injury that is “real and significant.”

Takeaways

The BIPA is already a hotly litigated statute, and the Rosenbach decision will likely lead to a significant uptick in BIPA claims.  In light of the availability of the greater of actual damages or statutory damages ranging from $1,000 to $5,000 per violation, companies subject to the BIPA must now, more than ever, ensure strict compliance with the law.

Posted in Legislation, Litigation, Privacy, Regulations

Senators Introduce Data Care Act to Establish Duties for Online Service Providers

On December 12, 2018, Senator Schatz (D-HI), along with 15 other Senators, introduced the Data Care Act of 2018 “to establish duties for online service providers with respect to end user data that such providers collect and use.”

The bill would require online service providers (“OSPs”)—defined as entities (1) “engaged in interstate commerce over the [I]nternet or any other digital network” and (2) that collect individual identifying information (“IID”) about end users in the course of or incidental to the course of business—to exercise the duties of care, loyalty, and confidentiality with respect to that information. If the bill becomes law, it will apply 180 days after the date of enactment.

The bill’s definition of IID is limited to information that is collected over the Internet or any other digital network and is information that can be “linked” or is “linkable” to an end user or device that is “associated with or routinely used by an end user.” The bill does not define “linkable”; however, to the extent the GDPR’s definition of “identifiable” in the context of personal data can be a guide, “linkable” is likely to have a broad reach. Under the GDPR, information is identifiable when it can be combined with other pieces of information in order to determine the identity of an individual, but a hypothetical possibility of identification is not sufficient; it must be reasonably likely in light of considerations such as time, cost, and technology.

The duties under the bill are as follows:

The Duty of Care

  • OSPs must:
    • reasonably secure IID from unauthorized access
    • promptly notify end users of any breach of sensitive data; the FTC, subject to defined exceptions and considerations, has the power to promulgate rules for breach notification with respect to categories of IID other than sensitive data

The Duty of Loyalty

  • OSPs cannot use IID in a manner that:
    • benefits the OSP to the detriment of an end user
    • will result in reasonably foreseeable physical or financial harm to the end user
    • would be unexpected or highly offensive to a reasonable end user

The Duty of Confidentiality

  • OSPs cannot disclose, sell, or share IID with any other person:
    • except as consistent with the duties of care and loyalty
    • unless that person enters into an agreement with the OSP that imposes the same duties of care, loyalty, and confidentiality owed to the end user by the OSP
  • OSPs must ensure that any person to whom IID is disclosed, sold, or shared abides by the duties of care, loyalty, and confidentiality by, among other things, regularly auditing that person’s data security and information practices

The bill gives the FTC enforcement and rulemaking authority and the ability to impose penalties, which will be an amount not to exceed the penalties permitted by 15 U.S.C. § 45(m)(1)(A) ($10,000) multiplied by the greater of (1) the number of days of non-compliance or (2) the number of end users harmed. The bill also allows for enforcement by state attorneys general.
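
As a rough illustration of how that cap operates, consider the following sketch, based solely on the formula described above; the function name and figures are hypothetical, not statutory text.

```python
PER_UNIT_CAP = 10_000  # the 15 U.S.C. § 45(m)(1)(A) amount cited in the bill

def penalty_ceiling(days_noncompliant: int, end_users_harmed: int) -> int:
    """Maximum penalty: the per-unit cap multiplied by the greater of the
    days of non-compliance or the number of end users harmed."""
    return PER_UNIT_CAP * max(days_noncompliant, end_users_harmed)

# Hypothetical: 30 days of non-compliance affecting 5,000 end users
print(penalty_ceiling(30, 5_000))  # 50000000 -- a $50 million ceiling
```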

Notably, and as was clearly favored by the Commissioners during the Senate subcommittee hearing on FTC Oversight on November 27, 2018, the bill also gives the FTC jurisdiction over non-profits and common carriers subject to the Communications Act of 1934.

Posted in FTC, Legislation, Privacy, Standards

Amazon Echo Data at Center of Another Legal Battle

Amazon.com, Inc. is on the receiving end of another court order demanding that it release the data and recordings associated with one of its Echo smart devices. For the uninitiated, Echo smart devices support voice interaction, music playback, and other administrative tasks for their users. The device responds whenever a user says a “wake word,” such as “Alexa” or “Echo.” After performing the given task, the device records and stores the interaction, which can later be accessed by the user or by Amazon.

In its November 5th order, the New Hampshire Superior Court directed Amazon to release two days’ worth of data that state prosecutors believe may assist in proving a double murder. The case involves the tragic slaying of two female victims, Christine Sullivan and Jenna Pellegrini, on January 27, 2017 at Ms. Sullivan’s home in Farmington, NH. Timothy Verrill of Dover, NH stands accused of stabbing both women to death inside the home and of tampering with evidence. Upon searching the crime scene, police investigators found and seized Ms. Sullivan’s Echo smart device, which was sitting on the kitchen counter. The Echo device, if it was recording at the time of the killings, could be the state’s star witness.

Amazon, however, has said that it “will not release customer information without a valid and binding legal demand properly served on us.” This is not the first time that Amazon has resisted producing Echo data. Back in 2015, Amazon was served with a similar court order to produce Echo data and recordings potentially related to the drowning of an Arkansas man in a privately owned hot tub. In response to the order, Amazon filed a 91-page brief arguing that the prosecution should have to meet a higher burden of proof to get a warrant for the data and recordings. Amazon’s proposed standard would require the prosecution to prove there is no less intrusive way to obtain the information and to establish that there is a “sufficient nexus” between the device and the crime. Amazon argued that the First Amendment protected both user requests and Echo’s responses, claiming that permitting government access to the data “chills the exercise of First Amendment rights.” This is similar to the argument Apple made in its opposition to an order demanding that it unlock the iPhone of one of the shooters in the San Bernardino terrorist case.

The defendant in the Arkansas case ultimately consented to releasing his data, so the court did not have to answer the question of whether the data fell under the protection of the First Amendment. The question, however, will eventually require an answer.

Amazon has yet to file a response to the New Hampshire court order. Unlike the Arkansas case, where the device belonged to the defendant, here, the device belonged to Ms. Sullivan—one of the two victims. This distinguishes the two cases because Verrill does not have standing to object to production of the evidence, and Ms. Sullivan’s right to privacy likely ceased to exist when she died. It will be interesting to see what arguments, if any, Amazon raises to address these two issues.

One thing is certain: this will not be the last case to address the production of smart device data. As people’s lives continue to become more and more dependent on smart devices, state and federal courts must determine the protection afforded to smart device data, and what standard should be applied to access it.

Posted in Internet of Things, Litigation, Privacy

Senate Subcommittee Evaluates Expansion of the FTC’s Data and Privacy Authority

On November 27, 2018, the U.S. Senate Subcommittee on Consumer Protection, Product Safety, Insurance, and Data Security held a hearing titled “Oversight of the Federal Trade Commission,” which included testimony from Chairman Joseph Simons and Commissioners Rohit Chopra, Noah Phillips, Rebecca Slaughter, and Christine Wilson. The hearing examined a range of topics within the purview of the FTC, but of particular importance to privacy professionals was the discussion of whether the FTC should have expanded authority over privacy and data security.

The hearing followed two Subcommittee hearings on the creation of a federal privacy law, which Senator Moran (R-KS) noted was necessary in the wake of several large-scale data incidents and in light of the “implementation of the General Data Protection Regulation (“GDPR”) in Europe, and the recent passage of the California Consumer Privacy Act [(“CCPA”)].” Senator Blumenthal (D-CT), echoing the need for a federal privacy law, stated that “[the U.S.] need[s] to do it not only because Europe has done it, not only because California has done it, but [because] these rules are long overdue” and that Congress must “[p]rovide the FTC with the resources[,] expertise and structure to enforce the rules [and] establish meaningful penalties on first offenses to pose a credible deterrent . . . .”

In working toward a bipartisan federal privacy law, the Subcommittee was specifically interested in input from the Commissioners on what should be included in the law as well as any additional tools the FTC would need to enforce it. Although little testimony was given on specific aspects of what should be included in the law, there appeared to be a consensus among the Commissioners that the FTC needs (1) direct authority to assess civil penalties, (2) authority over non-profits and common carriers for which there is currently an exemption, and (3) rulemaking authority under the Administrative Procedure Act.

In predicting what may be to come from the FTC in light of growing privacy concerns, Chairman Simons indicated that the FTC may use its Section 6(b) power under the FTC Act—which empowers the FTC to require an entity to file “annual or special . . . reports or answers in writing to specific questions”—to investigate big tech companies such as Amazon, Apple, Facebook, and Google regarding what consumer information is being collected and how that information is used, shared, and sold.

Privacy professionals should continue to monitor the development of a federal U.S. privacy law and should keep an eye on potential FTC investigation efforts into big tech as the discussion continues to develop.

Posted in FTC, Legislation, Privacy, Regulations

California Passes Internet of Things Law

California continues to pave the way for privacy and cybersecurity legislation as Governor Brown recently signed the first Internet of Things (“IoT”) security law in the United States (SB-327).

While connected devices offer users convenience and efficiency, California lawmakers recognized that such devices also raise serious security and privacy issues. The stated purpose of SB-327 is “to ensure that [I]nternet-connected devices are equipped with reasonable security measures to protect them from unauthorized access, use, destruction, disclosure, or modification by hackers.” Lawmakers identified several concerns, including physical dangers posed by connected cars and medical devices (e.g., connected insulin pumps that can be hacked to deliver lethal doses), as well as concerns over hacks of connected devices to create “botnets,” which have already resulted in major Internet crashes and Denial of Service attacks (attacks intended to prevent authorized users from accessing networks or devices).

SB-327 has received criticism for its vague terminology, which critics argue fails to provide covered entities with clear direction, thereby preventing them from knowing whether they achieved compliance. Some have also said that SB-327’s requirements are not strict enough. Others applauded the law, saying that despite potential flaws, it was a necessary step in the right direction.

What does SB-327 Require?

Manufacturers must equip connected devices with “reasonable” security features. The bill lacks specificity but, at a minimum, the security features must be (1) appropriate to the nature and function of the device; (2) appropriate to the information it may collect, contain, or transmit; and (3) designed to protect information contained on the device from unauthorized access, destruction, use, modification, or disclosure.

Subject to (1)-(3) in the preceding paragraph, if a device provides a method of authentication outside a local area network (i.e., a remote method of verifying the user’s authority to access the device), it will be deemed to have a reasonable security feature if the manufacturer includes (1) preprogrammed passwords that are unique to each device, or (2) a feature requiring a user to generate a new means of authentication before the device can be accessed for the first time (e.g., password set-up or a verification code).
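
For illustration only, a manufacturer might implement those two options along the following lines. This is a hypothetical sketch; SB-327 prescribes no particular implementation, and the function names and minimum-length threshold are assumptions.

```python
import hashlib
import os
import secrets

def provision_unique_password() -> str:
    """Option (1): assign each device its own unpredictable password at
    manufacture, rather than a shared default such as 'admin'."""
    return secrets.token_urlsafe(16)

def first_boot_setup(user_password: str) -> bytes:
    """Option (2): require the user to set a new means of authentication
    before the device can be accessed for the first time; store only a
    salted hash, never the plaintext."""
    if len(user_password) < 8:  # assumed minimum; not specified in the bill
        raise ValueError("password too short")
    salt = os.urandom(16)
    return salt + hashlib.scrypt(user_password.encode(), salt=salt,
                                 n=2**14, r=8, p=1)
```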

Who does SB-327 Apply to?

Companies that manufacture, or contract to manufacture, connected devices that are sold in or offered for sale in California. Notably, the law does not apply to companies that “contract only to purchase [] connected device[s], or only to purchase and brand [] connected device[s].”

Who Enforces SB-327?

Unlike the recent California Consumer Privacy Act of 2018, SB-327 does not provide a private right of action, nor does it include specific monetary penalties. Rather, enforcement authority belongs exclusively to the Attorney General, a city attorney, a county counsel, or a district attorney.

When does SB-327 go into Effect?

The law is currently scheduled to go into effect on January 1, 2020.

Posted in Data Security, Internet of Things