Cyber Security

Digital Literacy, a Problem for Americans of All Ages and Experiences

Justice Shannon, MJLST Staffer

According to the American Library Association, “digital literacy” is “the ability to use information and communication technologies to find, evaluate, create, and communicate information, requiring both cognitive and technical skills.” The term dates to 1997, when Paul Gilster coined “digital literacy” to mean “the ability to understand and use information in multiple formats from a wide range of sources when it is presented via computers.” In this way, the definition of digital literacy has broadened from how a person absorbs digital information to how one develops, absorbs, and critiques digital information.

The Covid-19 pandemic taught Americans of all ages the value of digital literacy. Elderly populations were forced online without prior training due to the health risks posed by Covid-19, and digitally illiterate parents were unable to help their children with online classes.

Separate from Covid-19, the rise of cryptocurrency has created a need for digital literacy in spaces that are not federally regulated.

Elderly

The Covid-19 pandemic did not create the need for digital literacy training for the elderly, but it highlighted a national need to address digital literacy among America’s oldest population. Elderly family members quarantined during the pandemic were quickly separated from their families. Teaching family members how to use Zoom and Facebook Messenger became a substitute for some, but not all, forms of connectivity. However, teaching an elderly family member how to use Facebook Messenger to speak to loved ones does not enable them to communicate with peers or teach them other digital literacy skills.

To address digital literacy issues within the elderly population, states have approved Senior Citizen Technology grants. Pennsylvania’s Department of Aging has granted funds to adult education centers for technology for senior citizens. Programs like this have been developing throughout the nation. For example, Prince George’s Community College in Maryland uses state funds to teach technology skills to its older population.

It is difficult to tell whether these programs are working. States like Pennsylvania and Maryland had programs before the pandemic, yet these programs alone did not close the distance between America’s aging population and the rest of the nation during the pandemic. However, looking at the scale of the program in Prince George’s County, that likely was not the goal. Beyond that, there is a larger question: is the purpose of digital literacy for the elderly to ensure that they can connect with the world during a pandemic, or simply to ensure that they have the skills to communicate with the world at any time? With this in mind, programs that predate the pandemic, such as those in Pennsylvania and Maryland, likely had the right approach even if they weren’t of a large enough scale to ensure digital literacy for the entirety of our elderly population.

Parents

The pandemic highlighted a similar problem for many American families. While state, federal, and local governments stepped up to provide laptops and access to the internet, many families still struggled to get their children into online classes; this is an issue of what is known as “last mile infrastructure.” During the pandemic, the nation quickly provided families with access to the internet without ensuring they were ready to navigate it. This left families feeling ill-prepared to support their children’s educational growth from home. Providing families with access to broadband without digital literacy training disproportionately impacted families of color by limiting their children’s capacity for growth online compared to their peers. While this wasn’t an intended result, it is a result of hasty bureaucracy in response to a national emergency. Nationally, the 2022 Workforce Innovation and Opportunity Act aims to address digital literacy issues among adults by increasing funding for teaching workplace technology skills to working adults. However, this will not ensure that American parents can manage their children’s technological needs.

Crypto

Separate from issues created by Covid-19 is cryptocurrency. One of the largest selling points of cryptocurrency is that it is largely unregulated. Users see it as “digital gold, free from hyper-inflation.” While these claims can be valid, consumers frequently are not aware of the risks of cryptocurrency. Last year the Chair of the SEC called cryptocurrencies “the wild west of finance rife with fraud, scams, and abuse.” This year the Department of the Treasury announced it would release instructional materials explaining how cryptocurrencies work. While this will not directly regulate cryptocurrencies, providing Americans with more tools to understand them may help reduce cryptocurrency scams.

Conclusion

Digital literacy was a problem for years before the Covid-19 pandemic, and when new technologies become popular, there are new lessons for all age groups to learn. Covid-19 appropriately shined a light on the need to address digital literacy issues within our borders. However, if we only go so far as to get Americans networked and prepared for the next national emergency, we’ll find that there are disparities between those who excel online and those who are ill-equipped to use the internet to connect with family, educate their kids, and participate in e-commerce.


What Does the SolarWinds Hack Mean for the Future of Law Firm Cybersecurity?

Sam Sylvan, MJLST Staffer

Last December, the massive software company SolarWinds acknowledged that its popular IT-monitoring software, Orion, had been hacked earlier in the year. The software was sold to thousands of SolarWinds’ clients, including government agencies and Fortune 500 companies. A software update of Orion provided Russian-backed hackers with a backdoor into the internal systems of approximately 18,000 SolarWinds customers, a number likely to increase over time as more organizations discover that they too are victims of the hack. Even FireEye, the cybersecurity company that first identified the hack, learned that its own systems were compromised.

The hack has widespread implications for the future of cybersecurity in the legal field. Courts and government attorneys were not able to avoid the Orion hack. The cybercriminals were able to hack into the DOJ’s internal systems, leading the agency to report that the hackers might have breached 3,450 DOJ email inboxes. The Administrative Office of the U.S. Courts is working with DHS to audit vulnerabilities in the CM/ECF system, where highly sensitive non-public documents are filed under seal. As of late February, no law firms had announced that they too were victims of the hack, likely because law firms do not typically use Orion software for their IT management. Still, the Orion hack is a wake-up call to law firms across the country regarding their cybersecurity. There have been hacks before, including hacks of law firms, but nothing of this magnitude or potential level of sabotage. Now more than ever, law firms must contemplate and implement preventative measures and response plans.

Law firms of all sizes handle confidential and highly sensitive client documents and data. Oftentimes, firms have IT specialists but lack cybersecurity experts on the payroll: someone internal who can continually develop the firm’s cybersecurity defenses. The SolarWinds hack shows why this needs to change, particularly for law firms that handle an exorbitant amount of highly confidential and sensitive client documents and can afford to add these experts to their ranks. Relying exclusively on consultants or other third parties for cybersecurity only further jeopardizes the security of law firms’ document management systems and caches of electronically stored client documents. Indeed, it was reliance on a third-party vendor that enabled the SolarWinds hack in the first place.

In addition to adding a specialist to the payroll, there are a number of other specific measures that law firms can take to address and bolster their cybersecurity defenses. Because a breach is often not a matter of “if” but rather “when,” law firms should have an incident response plan ready to go. According to Jim Turner, chief operating officer of Hilltop Consultants, many law firms do not even have an incident response plan in place.

Further, because complacency and outdated IT software are of particular concern for law firms, “vendor vulnerability assessments” should become commonplace across all law firms. False senses of protection need to be discarded, and constant reassessment should become the norm. Moreover, firms should upgrade their software protection to include endpoint detection and response (EDR), which uses AI to detect hacking activity on systems. Finally, purchasing cyber insurance is a strong safety measure in the event a law firm has to respond to a breach, as it would provide additional resources needed to respond effectively to hacks.


Ways to Lose Our Virtual Platforms: From TikTok to Parler

Mengmeng Du, MJLST Staffer

Many Americans bid farewell to a somewhat rough 2020 only to find the beginning of 2021 rather shocking. After President Trump’s followers stormed the Capitol Building on January 6, 2021, major U.S. social media platforms, including Twitter, Facebook, Instagram, and Snapchat, moved fast to block the nation’s president. While everybody was still in shock, a second wave hit. Apple’s iOS App Store, Google’s Android Play Store, Amazon Web Services, and other service providers decided to remove Parler, an app used by Trump supporters in the riot and favored mostly by conservatives. Finding himself virtually homeless, President Trump relocated to TikTok, the Chinese-owned short-video sharing app he had relentlessly sought to ban ever since July 2020. Ironically but not unexpectedly, TikTok banned President Trump before he could ban TikTok.

Dating back to June 2020, the fight between TikTok and President Trump germinated when the app’s Chinese parent company, ByteDance, was accused of discreetly accessing the clipboard content on users’ iOS devices. Although the company argued that the accused technical feature was set up as an “anti-spam” measure and would be immediately stopped, the Trump administration signed Executive Order 13942 on August 6, 2020, citing national security concerns to ban the app in five stages. TikTok responded swiftly in court, and the District Court for the District of Columbia issued a preliminary injunction on September 27, 2020. At the same time, knowing that the root of the problem lay in its “Chinese nationality,” ByteDance desperately sought acquisition by U.S. corporations to make TikTok US-owned and dodge the ruthless banishment, even at the cost of billions of dollars and, worse, its future in the U.S. market. The sale soon drew qualified bidders including Microsoft, Oracle, and Walmart, but has not advanced far since September due to pressure from both Washington and Beijing.

Named alongside TikTok in the same Executive Order was another Chinese app, WeChat. If banning TikTok means that American teens will lose their favorite virtual platform for life-sharing amid the pandemic, blocking WeChat means much more. It heavily burdens one particular minority group: the hundreds of thousands of Chinese Americans and Chinese citizens in America who use WeChat. This group fears losing connection with family and becoming disengaged from the social networks they have built once the vital social platform disappears. For more insight, this blog post discusses the impact of the WeChat ban on Chinese students studying in the United States.

In response to the WeChat ban, several Chinese American lawyers led the creation of the U.S. WeChat Users Alliance. Supported by thousands of U.S. WeChat users, the Alliance is a non-profit organization independent of Tencent, the owner of WeChat, and was formed on August 8, 2020, to advocate for all who are affected by the ban. Subsequently, the Alliance brought suit in the United States District Court for the Northern District of California against the Trump administration and received its first victory on September 20, 2020, when Judge Laurel Beeler issued a preliminary injunction against Trump’s executive order.

Law is powerful. Article Two of the United States Constitution vests broad executive power in the president, including the discretion to determine how to enforce the law through executive orders. President Trump was therefore able to find a cause that seemed satisfying to him and ban TikTok and WeChat for their Chinese “nationality.” Likewise, the First Amendment of the Constitution and section 230 of the Communications Decency Act empower private Internet forum providers to screen and block offensive material. Thus TikTok, following its peers, finds legal justification for banning President Trump, and Apple can keep Parler out of reach of Trump supporters. But power can corrupt. It is true that TikTok and WeChat are owned by Chinese companies, but an app, a technology, does not take on a nationality from its ownership. What happened on January 6, 2021 in the Capitol Building was a shame, but it does not justify the removal of Parler. Admittedly, regulation and even censorship of private virtual platforms are necessary for national security and other public interest purposes. But the solution shouldn’t be simply making platforms unavailable.

As a Chinese student studying in the United States, I personally felt the impact of the WeChat ban. I feel fortunate that the judicial check the U.S. legal system puts on executive power saved WeChat this time, but I do fear for the future of internet forum regulation.



Inconceivable! How the Fourth Amendment Failed the Dread Pirate Roberts in United States v. Ulbricht

Emily Moss, MJLST Staffer

It is not an overstatement to claim that electronic devices, such as laptops and smartphones, have “altered the way we live.” As Chief Justice Roberts stated, “modern cell phones . . . are now such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy.” Riley v. California, 573 U.S. 373, 385 (2014). These devices create new digital records of our everyday lives. United States v. Ulbricht, 858 F.3d 71 (2d Cir. 2017) is one of many cases that grapple with when the government should gain access to these records.

In February 2015, a jury found Ross William Ulbricht (aka “Dread Pirate Roberts” or “DPR”) guilty on seven counts related to his creation and operation of Silk Road. United States v. Ulbricht, 858 F.3d 71, 82 (2d Cir. 2017). Silk Road was an online criminal marketplace where, using the anonymous currency Bitcoin, “users principally bought and sold drugs, false identification documents, and computer hacking software.” Id. Government trial evidence showed that, hoping to protect Silk Road anonymity, DPR commissioned the murders of five people. Id. at 88. However, there is no evidence that the murders actually transpired. Id.

On appeal, the Second Circuit upheld both the conviction and Ulbricht’s two life sentences. Ulbricht, 858 F.3d at 82. Ulbricht argued, inter alia, that “the warrant[] authorizing the government to search his laptop . . . violated the Fourth Amendment’s particularity requirement.” Id. at 95. The warrant authorized “opening or ‘cursorily reading the first few’ pages of files to ‘determine their precise contents,’ searching for deliberately hidden files, using ‘key word searches through all electronic storage areas,’ and reviewing file ‘directories’ to determine what was relevant.” Id. at 101–02. Ulbricht claimed that the warrant violated the Fourth Amendment’s particularity requirement because it “failed to specify the search terms and protocols” that the government was required to employ while searching Ulbricht’s laptop. Id. at 102.

The court acknowledged that particularity is especially important when the warrant authorizes the search of electronic data, as the search of a computer can expose “a vast trove of personal information” including “sensitive records.” Id. at 99. It noted that “a general search of electronic data is an especially potent threat to privacy because hard drives and e-mail accounts may be ‘akin to a residence in terms of the scope and quantity of private information [they] may contain’ . . . Because of the nature of digital storage, it is not always feasible to ‘extract and segregate responsive data from non-responsive data,’. . . creating a ‘serious risk that every warrant for electronic information will become, in effect, a general warrant.’” Id. (internal citations omitted).

Nonetheless, the court rejected Ulbricht’s claim that the laptop warrant failed to meet the Fourth Amendment’s particularity requirement. It reasoned that it would be impossible to identify how relevant files would be named before the laptop search began, which the government reasonably anticipated when requesting the laptop warrant. Id. at 102 (emphasizing examples where relevant files and folders had misleading names such as “aliaces” or “mbsobzvkhwx4hmjt”). Further, the court held that broad search protocols were appropriate given that the alleged crime involved sophisticated technology and masking identity. Id. Ultimately, the court emphasized that the “fundamental flaw” in Ulbricht’s argument was that it equated a broad warrant with a violation of the particularity requirement. Id. Using the analogy of searching an entire home where there is probable cause to believe that there is relevant evidence somewhere in the home, the court illustrated that a warrant can be broad and still satisfy the particularity requirement. Id. (citing U.S. Postal Serv. v. C.E.C. Servs., 869 F.2d 184, 187 (2d Cir. 1989)). The court therefore upheld the constitutionality of the warrant. The Supreme Court denied Ulbricht’s petition for a writ of certiorari.

Orin Kerr’s equilibrium-adjustment theory of the Fourth Amendment argues that as new tools create imbalanced power on either the side of privacy or the side of law enforcement, the Fourth Amendment must adjust to restore its original balance. The introduction of computers and the internet created an immense change in the tools that both criminals and law enforcement use. Without minimizing the significance of Ulbricht’s crimes, United States v. Ulbricht illustrates this dramatic change. While computers and the internet did create new avenues for crime, computer and internet searches, such as the ones employed by the government, do far more to disrupt the Fourth Amendment’s balance.

Contrary to the court’s argument in Ulbricht, searching a computer is entirely unlike searching a home. First, it is easy to remove items from your home, but the same is not true of computers. Even deleted files often linger on computers where the government can access them. Similarly, when law enforcement finds a file in someone’s home, it still does not know how that file was used, how often it has been viewed, or who has viewed it. But computers do store such information. These and many other differences demonstrate why particularity, in the context of computer searches, is even more important than the court in Ulbricht acknowledged. Given the immense amount of information available on an individual’s electronic devices, Ulbricht glosses over the implications for personal privacy posed by broad search warrants directed at computers. And with the rapidly changing nature of computer technology, the Fourth Amendment balance will likely continue to stray further from equilibrium at a speed with which the courts will struggle to keep up.

Thus, adjusting the Fourth Amendment power balance related to electronic data will continue to be an important and complicated issue. See, e.g., Proposal 2 (Mich. 2020) (amending the state’s constitution “to require a search warrant to access a person’s electronic data or electronic communications,” passing with unanimous Michigan Senate and House of Representatives approval, then with 88.8% of voters voting yes on the proposal); People v. Coke, 461 P.3d 508, 516 (Colo. 2020) (“‘Given modern cell phones’ immense storage capacities and ability to collect and store many distinct types of data in one place, this court has recognized that cell phones ‘hold for many Americans the privacies of life’ and are, therefore, entitled to special protections from searches.”) (internal citations omitted). The Supreme Court has ruled on a number of Fourth Amendment and electronic data cases. See, e.g., Carpenter v. United States, 138 S.Ct. 2206 (2018) (warrantless attainment of cell-site records violates the Fourth Amendment); Riley v. California, 134 S.Ct. 2473 (2014) (warrantless search and seizure of digital contents of a cell phone during an arrest violates the Fourth Amendment). However, new issues seem to appear faster than they can be resolved. See, e.g., Nathan Freed Wessler, Jennifer Stisa Granick, & Daniela del Rosario Wertheimer, Our Cars Are Now Roving Computers. Is the Fourth Amendment Ready?, ACLU (May 21, 2019, 3:00 PM), https://www.aclu.org/blog/privacy-technology/surveillance-technologies/our-cars-are-now-roving-computers-fourth-amendment. The Fourth Amendment therefore finds itself in eel infested waters. Is rescue inconceivable?

Special thanks to Professor Rozenshtein for introducing me to Ulbricht and inspiring this blog post in his course Cybersecurity Law and Policy!


The EARN IT Act has Earned Sex Workers’ Criticism: How a Bill Regulating Internet Speech will Harm an Under-resourced Community Often Overlooked by Policymakers

Ingrid Hofeldt, MJLST Staffer

In March of 2020, as the COVID-19 pandemic swept across the nation, Senator Lindsey Graham introduced the EARN IT Act (EIA), a bill that would allow Congress to coerce internet providers into decreasing the security of communications on their platforms or risk a potential deluge of legal battles. In addition to violating the freedom and security many U.S. citizens enjoy online, this bill will particularly harm sex workers, who already face instability, housing insecurity, and the threat of poverty as the COVID-19 pandemic has made their work nearly impossible. Many human rights groups, including the American Civil Liberties Union, Human Rights Watch, and the Stanford Center for Internet and Society strongly oppose this bill. 

With the aim of protecting children from sexual exploitation online, the EIA would amend Section 230 of the Communications Decency Act of 1996 (CDA). The CDA protects internet platforms from legal liability for the content shared by their users. Because of the CDA, the government cannot currently prosecute Facebook for its users’ decisions to upload child pornography onto their accounts. However, the EIA strips platforms of this protection. Additionally, the EIA establishes a National Commission on the Prevention of Online Child Sexual Exploitation. This commission will develop best practices for internet platforms to “prevent, reduce, and respond” to the online sexual exploitation of children. Though not legally binding, these guidelines could influence courts’ decision-making as they interpret the EIA.

While preventing the sexual exploitation of children is a worthy aim, this act will provide victimized children with little protection they don’t already have, while opening sex workers and child victims of sexual exploitation up to greater violence at the hands of sex traffickers. Officials at the National Center for Missing and Exploited Children (NCMEC) are overburdened by the existing reports of online child sexual exploitation. The center has reached “a breaking point where NCMEC’s manual review capabilities and law enforcement investigations are no longer doable.” Sex traffickers also don’t necessarily use the platforms that the EIA would target or use internet platforms at all. As one sex worker explained, “[i]t’s interesting to note that Jeffrey Epstein didn’t use a website to traffic young women and neither do the pimps I have met in my 17 years as a sex worker.”

The EIA will impact internet providers’ ability to offer end-to-end encryption, the software that allows internet users to message each other anonymously and securely. Sex workers rely on end-to-end encryption to connect, share information relating to health and safety, and build their businesses. An anonymous sex worker explains that websites with end-to-end encryption allow them to “safely schedule and screen their clients before meeting them in person,” while making them “less dependent on exploitative third parties like pimps.” If enacted, the EIA will likely harm sex workers immensely, because (1) sex workers will likely make less money without online platforms to secure clients; (2) sex workers will have to resort to less safe, offline means of finding clients; and (3) sex workers who continue using platforms that have become unencrypted will face the risk of prosecution if law enforcement or website monitors discover they are engaging in illegal activity. The EIA will affect internet providers’ ability to offer end-to-end encryption in two primary ways.
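The core property of end-to-end encryption, that the platform relaying a message only ever handles ciphertext it cannot read, can be illustrated with a toy Python sketch. This is a one-time pad between two parties who already share a key, not a real messaging protocol (apps like Signal use vetted key-exchange schemes), and all names here are illustrative:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte
    return bytes(a ^ b for a, b in zip(data, key))

# The two endpoints share a secret key; the relay platform never sees it.
message = b"screening notes for tomorrow's client"
key = secrets.token_bytes(len(message))  # one-time pad: random key, message-length

ciphertext = xor_bytes(message, key)   # all the platform ever stores or relays
recovered = xor_bytes(ciphertext, key) # only a key holder can perform this step

assert recovered == message  # the endpoints can read the message
```

In this picture, a “backdoor” means handing law enforcement a copy of the key, or inspecting the message before the encryption step, either of which defeats exactly the property the sketch demonstrates.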

First, strong evidence exists that the commission the EIA creates will establish anti-encryption guidelines. This commission will include 19 unelected officials, some of whom must have experience in law enforcement. Unsurprisingly, the commission has no mandated representation of sex workers or sex worker advocates. The EIA will not require it to conduct human rights impact assessments, write transparency reports, or establish metrics of success. Given that the commission is headed by Attorney General Barr, who has strongly opposed encryption in the past, it will likely recommend that internet platforms either (1) not employ end-to-end encryption, the practice that allows for private, secure internet communications, or (2) allow law enforcement agencies a “backdoor” around end-to-end encryption so they can monitor otherwise secure internet communications. The commission also has the power to create whatever recommended standards for internet platforms it desires, which could include a recommendation to ban end-to-end encryption. While these guidelines do not have the force of law, courts could treat them as persuasive when ruling on whether an internet provider has violated the EIA.

Additionally, the EIA has the potential to open internet providers up to crushing liability from state governments or private individuals based on whether these providers offer encrypted messaging. Regardless of how courts ultimately rule, lengthy and costly court battles between internet providers and state governments will likely ensue. Some internet providers will probably choose to stop offering encrypted messaging services or allow law enforcement agencies a “backdoor” into their messaging services so law enforcement agents can view private Facebook messages or videos. The “voluntary” policies offered by the commission could become essentially mandatory if providers wish to save money.

Senator Patrick Leahy responded to the concerns around encryption by adding an amendment to the EIA stipulating that “no action will be brought against the provider for utilizing [encryption];” however, Senator Leahy did not address the issue of a law enforcement “backdoor.” Additionally, state governments could still use the EIA to hold internet platforms accountable under their state laws for recklessly or negligently failing to moderate encrypted content and report it to NCMEC. Mike Lemon, senior director and federal government affairs counsel, reasons that “the new version of the [EIA] replaces one set of problems with another by opening the door to an unpredictable and inconsistent set of standards under state laws that pose many of the same risks to strong encryption.”

Sex workers are already vulnerable to food insecurity, housing insecurity, and the threat of poverty because of the COVID-19 pandemic and the recent passage of FOSTA/SESTA, a law that resulted in the elimination of websites such as Backpage that sex workers commonly used. As one sex worker explains, “my work is all contact work… a pandemic with a transmittal virus means… [my work has] moved completely online.” Based on survey results collected by Hacking/Hustling, 78.5% of sex workers secure the majority of their income through sex work. Following the passage of FOSTA/SESTA, 73.5% of sex workers reported that their financial situations had changed. In the words of these anonymous respondents: “I’m homeless and can’t pay the bills.” “My income decreased by 58% following FOSTA/SESTA.” “I used to make enough to feel comfortable. Now I’m barely scraping by.” “I feel totally erased.”

The EIA will narrow the number of websites that sex workers can safely use if law enforcement is allowed a backdoor around encryption. Additionally, if internet platforms are liable under state laws, these platforms will police their content more heavily, resulting in the removal or prosecution of sex workers. Many sex workers will likely leave platforms that don’t provide encryption, given safety and privacy concerns. While “sex workers were pioneers of the digital realm . . . [they] are now being kicked off the same online platforms . . . [they] built and inspired.”

Sex workers and sex worker advocacy organizations have come out in strong opposition to the EIA; however, given the lack of political sway sex workers hold due to societal biases, their outcry has fallen largely on deaf ears. In response to the EIA, several prominent sex workers organized a live, virtual art exhibit in protest. In the words left behind on its page: “[t]hey can try to keep on killing us, to put their hands over our mouths, but they can never keep us away. We’ll be back.”


Forget About Quantum Computers Cracking Your Encrypted Data, Many Believe End-to-End Encryption Will Lose Out as a Matter of Policy

Ian Sannes, MJLST Staffer

As reported in Nature, Google recently announced that it had finally achieved quantum supremacy: the point at which quantum computers, which operate on the spin of qubits rather than on conventional bits, can solve problems faster than conventional computers. However, according to John Preskill, who coined the term “quantum supremacy,” quantum computers are not a threat to encryption any time soon; such theorized uses remain many years out. Furthermore, the question remains whether quantum computers are a threat to encryption at all. IBM recently showcased one way to encrypt data that is immune to the theoretical cracking ability of future quantum computers. It seems that while one method of encryption is theoretically prone to attack by quantum computers, the industry will simply adopt methods that are not prone to such attacks when it needs to.
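To see why public-key schemes like RSA are the ones considered at risk, a toy example helps (tiny primes chosen purely for illustration; real keys use moduli of 2048 bits or more). The private key can be derived by anyone who can factor the public modulus, and efficient factoring is precisely what Shor's algorithm would let a large quantum computer do:

```python
# Toy RSA: security rests on the difficulty of factoring n (illustration only)
p, q = 61, 53
n = p * q                # public modulus (3233); attackers see n but not p, q
phi = (p - 1) * (q - 1)
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent; computable only if p and q are known

msg = 42
cipher = pow(msg, e, n)          # encrypt with the public key (e, n)
assert pow(cipher, d, n) == msg  # decrypt with the private key d

# Factoring n = 3233 back into 61 * 53 is trivial at this size, but infeasible
# classically at real key sizes; speeding that step up is Shor's contribution.
```

Symmetric and lattice-based schemes do not rest on factoring, which is why "quantum-proof" alternatives of the kind IBM showcased are possible at all.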

Does this mean that end-to-end encryption methods will always protect me?

Not necessarily. Stewart Baker opines that there are many threats to encryption, such as homeland security policy, foreign privacy laws, and content moderation, which he believes will win out over the right to have encrypted private data.

The highly publicized efforts of the FBI in 2016 to force Apple to unlock the encryption on an iPhone for national security reasons ended when the FBI dropped the case after hiring a third party that was able to crack the encryption. This may seem like a win for Silicon Valley’s historically pro-encryption stance, but foreign laws, such as the UK’s Investigatory Powers Act, are opening the door for government power to obtain users’ digital data.

In October of 2019, Attorney General Bill Barr requested that Facebook halt its plans to implement end-to-end encryption on its messaging services because it would hinder the investigation of serious crimes. Mark Zuckerberg, Facebook’s CEO, admitted it would be more difficult to identify and remove harmful content if such encryption were implemented, but Facebook has yet to deploy it.

Some believe legislators may simply force software developers to create back doors to users’ data. Kalev Leetaru believes content moderation policy concerns will allow governments to bypass encryption completely by forcing device manufacturers or software companies to install client-side content-monitoring software that is capable of flagging suspicious content and sending decrypted versions to law enforcement automatically.
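The client-side monitoring Leetaru describes can be sketched in a few lines. The sketch below is hypothetical (the blocklist, message contents, and function names are invented for illustration): content is fingerprinted and checked against a database of known-bad hashes *before* encryption, which is how flagging could work even when messages are end-to-end encrypted in transit.

```python
import hashlib

# Hypothetical sketch of client-side content scanning: a message is
# hashed and checked against a blocklist of known-bad fingerprints
# before it is ever encrypted. The blocklist entry here is invented
# purely for illustration.
BLOCKLIST = {hashlib.sha256(b"known-bad-content").hexdigest()}

def scan_before_encrypt(plaintext: bytes) -> bool:
    """Return True if the message would be flagged (and, in the schemes
    critics worry about, forwarded to law enforcement in decrypted form)
    before encryption takes place."""
    return hashlib.sha256(plaintext).hexdigest() in BLOCKLIST

print(scan_before_encrypt(b"hello"))              # benign message
print(scan_before_encrypt(b"known-bad-content"))  # flagged message
```

Because the check happens on the device, before encryption, this design bypasses the encryption debate entirely, which is exactly the policy concern raised above.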

The trend seems to be headed in the direction of some governmental bypass of conventional encryption. However, just as IBM’s quantum-proof encryption was created to solve a weakness in encryption, consumers will likely find another way to encrypt their data if they feel there is a need.


A Data Privacy Snapshot: Big Changes, Uncertain Future

Holm Belsheim, MJLST Staffer

When Minnesota Senator Amy Klobuchar announced her candidacy for the Presidency, she stressed the need for new and improved digital data regulation in the United States. It is perhaps telling that Klobuchar, no stranger to internet legislation, labeled data privacy and net neutrality as cornerstones of her campaign. While data bills have been frequently proposed in Washington, D.C., few members of Congress have been as consistently engaged in this area as Klobuchar. Beyond expressing her longtime commitment to the idea, the announcement may also be a savvy method to tap into recent sentiments. Over the past several years, citizens have experienced increasingly intrusive breaches of their information. The Target and Experian breaches, among others, exposed the information of hundreds of millions of people, and one recent report identified a shocking 773 million exposed records. See if you were among them. (Disclaimer: neither I nor MJLST are affiliated with these sites, nor can we guarantee accuracy.)

Data privacy has been big news in recent years. Internationally, Brazil, India, and China have recently put forth new legislation, but the big story was the European Union’s General Data Protection Regulation, or GDPR, which began enforcement last year. This massive regulatory scheme codifies the European presumption that an individual’s data is not available for business purposes without the individual’s explicit consent, and even then only in certain circumstances. While the scheme has been criticized as both vague and overly broad, one crystal clear element is the seriousness of its enforcement capabilities. Facebook and Google each received large fines soon after the GDPR’s official commencement, and other companies have partially withdrawn from the EU in the face of compliance requirements. No clear challenge has emerged, and it looks like the GDPR is here to stay.

Domestically, the United States has nothing like the GDPR. The existing patchwork of federal and state laws leaves much to be desired. Members of Congress propose new laws regularly, most of which then die in committee or are shelved. California has perhaps taken the boldest step in recent years, with its expansive California Consumer Privacy Act (CCPA) scheduled to begin enforcement in 2020. While different from the GDPR, the CCPA similarly proposes heightened standards for companies to comply with, more remedies and transparency for consumers, and specific enforcement regimes to ensure requirements are met.

The consumer-friendly CCPA has drawn enormous scrutiny and criticism. While evincing modest support, or perhaps just lip service, tech titans like Facebook and Google are none too pleased with the Act’s potential infringement upon their access to Americans’ data. Since 2018, affected companies have lobbied Washington, D.C. for expansive and modernized federal data privacy laws. One common, though less publicized, element in these proposals is an explicit federal preemption provision, which would nullify the CCPA and other state privacy policies. While nothing has yet emerged, this issue isn’t going anywhere soon.


Car Wreck: Data Breach at Uber Underscores Legal Dangers of Cybersecurity Failures

Matthew McCord, MJLST Staffer


This past week, Uber’s annus horribilis and the ever-increasing reminders of corporate cybersecurity’s persistent relevance reached singularity. Uber, once praised as a transformative savior of the economy by technology-minded businesses and government officials for its effective service-delivery model and capitalization on an exponentially expanding internet, has found itself impaled on the sword that spurred its meteoric rise. Uber recently disclosed that hackers were able to access the personal information of 57 million riders and drivers last year. It then paid the hackers $100,000 to destroy the compromised data, and failed to inform its users or sector regulators of the breach at the time. The hackers apparently compromised a trove of personally identifiable information, including names, telephone numbers, email addresses, and driver’s licenses of users and drivers, through a flaw in the company’s GitHub security.
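Breaches that begin with source-code repositories typically involve credentials committed alongside code. As a minimal, purely illustrative sketch (the pattern and sample string below are simplified assumptions, not Uber’s actual configuration), companies now routinely scan their repositories for strings that look like access keys before code is pushed:

```python
import re

# Minimal, illustrative secret scanner of the kind run over code
# repositories to catch committed credentials. The regex matches the
# documented format of an AWS access key ID (AKIA + 16 chars); the
# sample below uses AWS's published example key, not a real secret.
AWS_KEY_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")

def find_leaked_keys(text: str) -> list:
    """Return any substrings that look like AWS access key IDs."""
    return AWS_KEY_PATTERN.findall(text)

sample = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
print(find_leaked_keys(sample))
```

Tooling like this is cheap relative to the regulatory exposure described below, which is part of why skimping on information security governance is increasingly hard to defend.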

Uber, a Delaware corporation, is required to present notice of a data breach in the “most expedient time possible and without unreasonable delay” to affected customers per Delaware statutes. Most other states have adopted similar legislation which affects companies doing business in those states, which could allow those regulators and customers to bring actions against the company. By allegedly failing to provide timely notification, Uber opened itself to the parade of announced investigations from regulators into the breach: the United Kingdom’s Information Commissioner, for instance, has threatened fines following an inquiry, and U.S. state regulators are similarly considering investigations and regulatory action.

Though regulatory action is not a certainty, the possibility of legal action and the dangers of lost reputation are all too real. Anthem, a health insurer subject to far stricter federal regulation under HIPAA and its various amendments, lost $115 million to settlement of a class action suit over its infamous data breach. Short-term impacts on reputation rattle companies (especially those who respond less vigorously), with Target having seen its profits fall by almost 50% in Q4 2013 after its data breach. The cost of correcting poor data security on a technical level also weighs on companies.

This latest breach underscores key problems facing businesses in the continuing era of exponential digital innovation. The first, most practical problem is the seriousness with which companies approach information security governance. An increasing number of data sources and applications, and the increasing complexity of systems and vectors, multiply the potential avenues of attack. One decade ago, most companies used at least somewhat isolated, internal systems to handle a comparatively small amount of data and operations. Now, risk assessments must reflect the sheer quantity of both internal and external devices touching networks, the innumerable ways services interact with one another (and thus expose each service and its data to possible breaches), and the increasing competence of organized actors in breaching digital defenses. Information security and information governance are no longer niches, relegated to one silo of a company, but necessarily permeate nearly every business area of an enterprise. Skimping on investment in adequate infrastructure greatly widens the regulatory and civil liability of even the most traditional companies for data breaches, as Uber very likely will find.

Paying off data hostage-takers and thieves is a particularly concerning practice, especially from a large corporation. This simply creates a perverse incentive for malignant actors to continue trying to siphon off and extort data from businesses and individuals alike. These actors have grown from operations of small, disorganized groups and individuals to organized criminal groups and rogue states allegedly seeking to circumvent sanctions to fund their regimes. Acquiescing to the demands of these actors invites the conga line of serious breaches to continue and intensify into the future.

Invoking a new, federal legislative scheme is a much-discussed and little-acted-upon solution for the disparate and uncoordinated regulation of business data practices. Though 18 U.S.C. § 1030 provides for criminal penalties against the bad actors, there is little federal regulation or legislation on the subject of liability or minimum standards for breached PII-handling companies generally. The federal government has left the bulk of this work to each state, as it leaves much of business regulation. However, internet services are recognized as critical infrastructure by the Department of Homeland Security under Presidential Policy Directive 21. Data breaches and other cyber attacks result in data and intellectual property theft costing the global economy hundreds of billions of dollars annually, and widespread attacks could disrupt government and critical private sector operations, like the provision of utilities, food, and essential services, making cybersecurity a definite critical national risk requiring a coordinated response. Careful crafting of legislation authorizing federal coordination of cybersecurity best practices, and adequately punitive federal action for negligence of information governance systems, would incentivize the private and public sectors to take better care of sensitive information, reducing the substantial potential for serious attacks to compromise the nation’s infrastructure and the economic well-being of its citizens and industries.


6th Circuit Aligns With 7th Circuit on Data Breach Standing Issue

John Biglow, MJLST Managing Editor

To bring a suit in any judicial court in the United States, an individual, or group of individuals must satisfy Article III’s standing requirement. As recently clarified by the Supreme Court in Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016), to meet this requirement, a “plaintiff must have (1) suffered an injury in fact, (2) that is fairly traceable to the challenged conduct of the defendant, and (3) that is likely to be redressed by a favorable judicial decision.” Id. at 1547. When cases involving data breaches have entered the Federal Circuit courts, there has been some disagreement as to whether the risk of future harm from data breaches, and the costs spent to prevent this harm, qualify as “injuries in fact,” Article III’s first prong.

Last Spring, I wrote a note concerning Article III standing in data breach litigation in which I highlighted the Federal Circuit split on the issue and argued that the reasoning of the 7th Circuit court in Remijas v. Neiman Marcus Group, LLC, 794 F.3d 688 (7th Cir. 2015) was superior to its sister courts and made for better law. In Remijas, the plaintiffs were a class of individuals whose credit and debit card information had been stolen when Neiman Marcus Group, LLC experienced a data breach. A portion of the class had not yet experienced any fraudulent charges on their accounts and were asserting Article III standing based upon the risk of future harm and the time and money spent mitigating this risk. In holding that these Plaintiffs had satisfied Article III’s injury in fact requirement, the court made a critical inference that when a hacker steals a consumer’s private information, “[p]resumably, the purpose of the hack is, sooner or later, to make fraudulent charges or assume [the] consumers’ identit[y].” Id. at 693.

This inference is in stark contrast to the line of reasoning engaged in by the 3rd Circuit in Reilly v. Ceridian Corp., 664 F.3d 38 (3d Cir. 2011). The facts of Reilly were similar to Remijas, except that in Reilly, Ceridian Corp., the company that had experienced the data breach, stated only that its firewall had been breached and that its customers’ information may have been stolen. In my note, mentioned supra, I argued that this difference in facts was not enough to wholly distinguish the two cases and overcome a circuit split, in part due to the Reilly court’s characterization of the risk of future harm. The Reilly court found that the risk of misuse of information was highly attenuated, reasoning that whether the Plaintiffs experience an injury depended on a series of “if’s,” including “if the hacker read, copied, and understood the hacked information, and if the hacker attempts to use the information, and if he does so successfully.” Id. at 43 (emphasis in original).

Often in the law, we are faced with an imperfect or incomplete set of facts. Any time an individual’s intent is an issue in a case, this is a certainty. When faced with these situations, lawyers have long utilized inferences to differentiate between more likely and less likely scenarios for what the missing facts are. In the case of a data breach, it is almost always the case that both parties will have little to no knowledge of the intent, capabilities, or plans of the hacker. However, it seems to me that there is room for reasonable inferences to be made about these facts. When a hacker is sophisticated enough to breach a company’s defenses and access data, it makes sense to assume they are sophisticated enough to utilize that data. Further, because there is risk involved in executing a data breach, because it is illegal, it makes sense to assume that the hacker seeks to gain from this act. Thus, as between the Reilly and Remijas courts’ characterizations of the likelihood of misuse of data, it seemed to me that the better rule is to assume that the hacker is able to utilize the data and plans to do so in the future. Further, if there are facts tending to show that this inference is wrong, it is much more likely at the pleading stage that the Defendant Corporation would be in possession of this information than the Plaintiff(s).

Since Remijas, there have been two data breach cases that have made it to the Federal Circuit courts on the issue of Article III standing. In Lewert v. P.F. Chang’s China Bistro, Inc., 819 F.3d 963, 965 (7th Cir. 2016), the court unsurprisingly followed the precedent set forth in its recent case, Remijas, in finding that Article III standing was properly alleged. In Galaria v. Nationwide Mut. Ins. Co., a recent 6th Circuit case, the court had to make an Article III ruling without the constraint of an earlier ruling in its Circuit, leaving the court open to choose what rule and reasoning to apply. Galaria v. Nationwide Mut. Ins. Co., No. 15-3386, 2016 WL 4728027 (6th Cir. Sept. 12, 2016). In the case, the Plaintiffs alleged, among other claims, negligence and bailment; these claims were dismissed by the district court for lack of Article III standing. In alleging that they had suffered an injury in fact, the Plaintiffs alleged “a substantial risk of harm, coupled with reasonably incurred mitigation costs.” Id. at 3. In holding that this was sufficient to establish Article III standing at the pleading stage, the Galaria court found the inference made by the Remijas court to be persuasive, stating that “[w]here a data breach targets personal information, a reasonable inference can be drawn that the hackers will use the victims’ data for the fraudulent purposes alleged in Plaintiffs’ complaints.” Moving forward, it will be intriguing to watch how Federal Circuit courts that have not faced this issue, as the 6th Circuit had not before deciding Galaria, rule on it, and whether, if the 3rd Circuit keeps its current reasoning, the issue will eventually make its way to the Supreme Court of the United States.


The Federal Government Wants Your iPhone Passcode: What Does the Law Say?

Tim Joyce, MJLST Staffer

Three months ago, when MJLST Editor Steven Groschen laid out the arguments for and against a proposed New York State law that would require “manufacturers and operating system designers to create backdoors into encrypted cellphones,” the government hadn’t even filed its motion to compel against Apple. Now, just a few weeks after the government quietly stopped pressing the issue, it almost seems as if nothing at all has changed. But, while the dispute at bar may have been rendered moot, it’s obvious that the fight over the proper extent of data privacy rights continues to simmer just below the surface.

For those unfamiliar with the controversy, what follows are the high-level bullet points. Armed attackers opened fire on a group of government employees in San Bernardino, CA on the morning of December 2, 2015. The attackers fled the scene, but were killed in a shootout with police later that afternoon. Investigators opened a terrorism investigation, which eventually led to a locked iPhone 5c. When investigators failed to unlock the phone, they sought Apple’s help, first politely, and then more forcefully via California and Federal courts.

The request was for Apple to create an authenticated version of its iOS operating system which would enable the FBI to access the stored data on the phone. In essence, the government asked Apple to create a universal hack for any iPhone running that particular version of iOS. As might be predicted, Apple was less than inclined to help crack its own encryption software. CEO Tim Cook ran up the banner of digital privacy rights and re-ignited a heated debate over the proper scope of government’s ability to regulate encryption practices.

Legal chest-pounding ensued.

That was the situation until March 28, when the government quietly stopped pursuing this part of the investigation. In its own words, the government informed the court that it “…ha[d] now successfully accessed the data stored on [the gunman]’s iPhone and therefore no longer require[d] the assistance from Apple Inc…”. Apparently, some independent governmental contractor (read: legalized hacker) had done in just a few days what the government had been claiming from the start was impossible without Apple’s help. Mission accomplished – so, the end?

Hardly.

While this one incident, for this one iPhone (the iOS version is only applicable to iPhone 5c’s, not any other model like the iPhone 6), may be history, many more of the same or substantially similar disputes are still trickling through the courts nationwide. In fact, more than ten other federal iPhone cases have been filed since September 2015, and all this based on a 227-year-old act of last resort. States like New York are also getting into the mix, even absent fully ratified legislation. Furthermore, it’s obvious that legislatures are taking this issue seriously (see NYS’s proposed bill, recently returned to committee).

Although he is only ⅔ a lawyer at this point, it seems to this author that there are at least three ways a court could handle a demand like this, if the case were allowed to go to the merits.

  1. Never OK to demand a hack – In this situation, the courts could find that our collective societal interests in privacy would always preclude enforcement of an order like this. Seems unlikely, especially given the demonstrated willingness in this case of a court to make the order in the first place.
  2. Always OK to demand a hack – Similar to option 1, this option seems unlikely as well, especially given the First and Fourth Amendments. Here, the courts would have to find some rationale to justify hacking in every circumstance. Clearly, the United States has not yet transitioned to an Orwellian dystopia.
  3. Sometimes OK to demand a hack, but scrutiny – Here, in the middle, is where it seems likely we’ll find courts in the coming years. Obviously, convincing arguments exist on each side, and it seems possible to reconcile infringing personal privacy and upholding national security with burdening a tech company’s policy of privacy protection, given the right set of facts. The San Bernardino shooting is not that case, though. The alleged terrorist threat has not been characterized as sufficiently imminent, and the FBI even admitted that cracking the cell phone was not integral to the case, and it didn’t find anything anyway. It will take a (probably) much scarier scenario for this option to snap into focus as a workable compromise.

We’re left then with a nagging feeling that this isn’t the last public skirmish we’ll see between Apple and the “man.” As digital technology becomes ever more integrated into daily life, our legal landscape will have to evolve as well.
Interested in continuing the conversation? Leave a comment below. Just remember – if you do so on an iPhone 5c, draft at your own risk.