
6th Circuit Aligns With 7th Circuit on Data Breach Standing Issue

John Biglow, MJLST Managing Editor

To bring a suit in any judicial court in the United States, an individual or group of individuals must satisfy Article III’s standing requirement. As recently clarified by the Supreme Court in Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016), to meet this requirement, a “plaintiff must have (1) suffered an injury in fact, (2) that is fairly traceable to the challenged conduct of the defendant, and (3) that is likely to be redressed by a favorable judicial decision.” Id. at 1547. When data breach cases have reached the federal circuit courts, there has been some disagreement as to whether the risk of future harm from a breach, and the costs incurred to mitigate that harm, qualify as “injuries in fact” under Article III’s first prong.

Last spring, I wrote a note concerning Article III standing in data breach litigation in which I highlighted the circuit split on the issue and argued that the reasoning of the 7th Circuit in Remijas v. Neiman Marcus Group, LLC, 794 F.3d 688 (7th Cir. 2015) was superior to that of its sister circuits and made for better law. In Remijas, the plaintiffs were a class of individuals whose credit and debit card information had been stolen when Neiman Marcus Group, LLC experienced a data breach. A portion of the class had not yet experienced any fraudulent charges on their accounts and asserted Article III standing based upon the risk of future harm and the time and money spent mitigating this risk. In holding that these plaintiffs had satisfied Article III’s injury in fact requirement, the court made a critical inference that when a hacker steals a consumer’s private information, “[p]resumably, the purpose of the hack is, sooner or later, to make fraudulent charges or assume [the] consumers’ identit[y].” Id. at 693.

This inference stands in stark contrast to the line of reasoning adopted by the 3rd Circuit in Reilly v. Ceridian Corp., 664 F.3d 38 (3d Cir. 2011). The facts of Reilly were similar to those of Remijas, except that in Reilly, Ceridian Corp., the company that had experienced the data breach, stated only that its firewall had been breached and that its customers’ information may have been stolen. In my note, mentioned supra, I argued that this difference in facts was not enough to wholly distinguish the two cases and overcome a circuit split, in part due to the Reilly court’s characterization of the risk of future harm. The Reilly court found that the risk of misuse of information was highly attenuated, reasoning that whether the plaintiffs experienced an injury depended on a series of “if’s,” including “if the hacker read, copied, and understood the hacked information, and if the hacker attempts to use the information, and if he does so successfully.” Id. at 43 (emphasis in original).

Often in the law, we are faced with an imperfect or incomplete set of facts; any time an individual’s intent is at issue in a case, this is a certainty. When faced with these situations, lawyers have long utilized inferences to differentiate between more likely and less likely scenarios for what the missing facts might be. In a data breach case, both parties will almost always have little to no knowledge of the intent, capabilities, or plans of the hacker. However, it seems to me that there is room for reasonable inferences to be made about these facts. When a hacker is sophisticated enough to breach a company’s defenses and access data, it makes sense to assume the hacker is sophisticated enough to utilize that data. Further, because executing a data breach is illegal and therefore carries risk, it makes sense to assume that the hacker seeks to gain from the act. Thus, as between the Reilly and Remijas courts’ characterizations of the likelihood of misuse of data, the better rule seemed to me to be to assume that the hacker is able to utilize the data and plans to do so in the future. Moreover, if there are facts tending to show that this inference is wrong, it is much more likely at the pleading stage that the defendant corporation, rather than the plaintiff(s), would be in possession of that information.

Since Remijas, two data breach cases have reached the federal circuit courts on the issue of Article III standing. In Lewert v. P.F. Chang’s China Bistro, Inc., 819 F.3d 963, 965 (7th Cir. 2016), the court unsurprisingly followed its recent precedent in Remijas, finding that Article III standing was properly alleged. In Galaria v. Nationwide Mut. Ins. Co., a recent 6th Circuit case, the court had to make an Article III ruling without the constraint of an earlier ruling in its own circuit, leaving it free to choose what rule and reasoning to apply. Galaria v. Nationwide Mut. Ins. Co., No. 15-3386, 2016 WL 4728027 (6th Cir. Sept. 12, 2016). In the case, the plaintiffs alleged, among other claims, negligence and bailment; these claims were dismissed by the district court for lack of Article III standing. In alleging that they had suffered an injury in fact, the plaintiffs alleged “a substantial risk of harm, coupled with reasonably incurred mitigation costs.” Id. at *3. In holding that this was sufficient to establish Article III standing at the pleading stage, the Galaria court found the inference made by the Remijas court persuasive, stating that “[w]here a data breach targets personal information, a reasonable inference can be drawn that the hackers will use the victims’ data for the fraudulent purposes alleged in Plaintiffs’ complaints.” Moving forward, it will be intriguing to watch how circuits that have not yet faced this question, as the 6th Circuit had not before deciding Galaria, rule on it, and whether, if the 3rd Circuit keeps its current reasoning, the issue will eventually make its way to the Supreme Court of the United States.


Solar Climate Engineering and Intellectual Property

Jesse L. Reynolds 

Postdoctoral Researcher and Research Funding Coordinator, Sustainability and Climate
Department of European and International Public Law, Tilburg Law School

Climate change has been the focus of much legal and policy activity in the last year: the Paris Agreement, the Urgenda ruling in the Netherlands, aggressive climate targets in China’s latest five-year plan, the release of the final US Clean Power Plan, and the legal challenge to it. Not surprisingly, each of these concerns controlling greenhouse gas emissions, the approach that has long dominated efforts to reduce climate change risks.

Yet last week, an alternative approach received a major—but little noticed—boost. For the first time, a federal budget bill included an allocation specifically for so-called “solar climate engineering.” This set of radical proposed technologies would address climate change by reducing the amount of incoming solar radiation. These would globally cool the planet, counteracting global warming. For example, humans might be able to mimic the well-known cooling caused by large volcanoes by injecting a reflective aerosol into the upper atmosphere. Research thus far – which has been limited to modeling – indicates that solar climate engineering (SCE) would be effective at reducing climate change, rapidly felt, reversible in its direct climatic effects, and remarkably inexpensive. It would also pose risks that are both environmental – such as difficult-to-predict changes to rainfall patterns – and social – such as the potential for international disagreement regarding its implementation.

The potential role of private actors in SCE is unclear. On the one hand, decisions regarding whether and how to intentionally alter the planet’s climate should be made through legitimate state-based processes. On the other hand, the private sector has long been the site of great innovation, which SCE technology development requires. Such private innovation is both stimulated and governed through governmental intellectual property (IP) policies. Notably, SCE is not a typical emerging technology and might warrant novel IP policies. For example, some observers have argued that SCE should be a patent-free endeavor.

In order to clarify the potential role of IP in SCE (focusing on patents, trade secrets, and research data), Jorge Contreras of the University of Utah, Joshua Sarnoff of DePaul University, and I wrote an article that was recently accepted and scheduled for publication by the Minnesota Journal of Law, Science & Technology. The article explains the need for coordinated and open licensing and data sharing policies in the SCE technology space.

SCE research today is occurring primarily at universities and other traditional research institutions, largely through public funding. However, we predict that private actors are likely to play a growing role in developing products and services to serve large scale SCE research and implementation, most likely through public procurement arrangements. The prospect of such future innovation should not be stifled through restrictive IP policies. At the same time, we identify several potential challenges for SCE technology research, development, and deployment that are related to rights in IP and data for such technologies. Some of these challenges have been seen in regard to other emerging technologies, such as the risk that excessive early patenting would lead to a patent thicket with attendant anti-commons effects. Others are more particular to SCE, such as oft-expressed concerns that holders of valuable patents might unduly attempt to influence public policy regarding SCE implementation. Fortunately, a review of existing patents, policies, and practices reveals a current opportunity that may soon be lost. There are presently only a handful of SCE-specific patents; research is being undertaken transparently and at traditional institutions; and SCE researchers are generally sharing their data.

After reviewing various options and proposals, we make tentative suggestions to manage SCE IP and data. First, an open technical framework for SCE data sharing should be established. Second, SCE researchers and their institutions should develop and join an IP pledge community. They would pledge, among other things, to not assert SCE patents to block legitimate SCE research and development activities, to share their data, to publish in peer reviewed scientific journals, and to not retain valuable technical information as trade secrets. Third, an international panel—ideally with representatives from relevant national and regional patent offices—should monitor and assess SCE patenting activity and make policy recommendations. We believe that such policies could head off potential problems regarding SCE IP rights and data sharing, yet could feasibly be implemented within a relatively short time span.

Our article, “Solar Climate Engineering and Intellectual Property: Toward a Research Commons,” is available online as a preliminary version. We welcome comments, especially in the next couple months as we revise it for publication later this year.


A Comment on the Note “Best Practices for Establishing Georgia’s Alzheimer’s Disease Registry” from Volume 17, Issue 1

Jing Han, MJLST Staffer

Alzheimer’s disease (AD), also known simply as Alzheimer’s, accounts for 60% to 70% of cases of dementia. It is a chronic neurodegenerative disease that usually starts slowly and worsens over time. The cause of Alzheimer’s disease is poorly understood. No treatment can stop or reverse its progression, though some may temporarily improve symptoms. Affected people increasingly rely on others for assistance, often placing a burden on the caregiver; the pressures can include social, psychological, physical, and economic elements. The disease was first described by, and later named after, German psychiatrist and pathologist Alois Alzheimer in 1906. In 2015, there were approximately 48 million people worldwide with AD, and in developed countries AD is one of the most financially costly diseases. Before many states, including Georgia and South Carolina, passed legislation establishing registries, many private institutions across the country had already made tremendous efforts to establish their own Alzheimer’s disease registries. The country has experienced an exponential increase in the number of people diagnosed with Alzheimer’s disease, and more and more states have begun establishing their own registries.

As this Note explains, the Registry in Georgia has emphasized from the outset the importance of protecting the confidentiality of patient data from secondary uses. The Note explores the many legal and ethical issues raised by the Registry. An Alzheimer’s disease patient’s diagnosis history, medication history, and personal lifestyle are generally confidential information, known only to the physician and the patient. Reporting such information to the Registry, however, may lead to wider disclosure of what was previously private information and consequently may raise constitutional concerns. The vast majority of public health registries in the past have focused on the collection of infectious disease data; registries for non-infectious diseases, such as Alzheimer’s disease, diabetes, and cancer, have been created only recently. There is a delicate balance between the public interest and personal privacy. Registration is not mandatory because Alzheimer’s is not infectious. After all, people suffering from Alzheimer’s often face violations of their human rights, abuse and neglect, as well as widespread discrimination. When a patient is diagnosed with AD, the healthcare provider should encourage, rather than compel, the patient to join the registry. Keeping patients’ information confidential, enacting procedural rules governing use of the information, and providing some incentives are good approaches to encourage more patients to join the registry.

In light of the privacy concerns under federal and state law, the Note recommends slightly broader data sharing with the Georgia Registry, such as disclosure to a physician or other health care provider for the purpose of a medical evaluation or treatment of the individual, or to any individual or entity that provides the Registry with an order from a court of competent jurisdiction ordering the disclosure of confidential information. The Note also describes the procedural rules designed to administer the registry in Georgia. These rules address who the end-users of the registry are; what type of information should be collected; how and from whom the information should be collected; how the information should be shared or disclosed for policy planning and research purposes; and how legal representatives obtain authority from patients.

This Note offers a deeper understanding of Alzheimer’s disease registries in the country through one state’s experience. The registry process raises many legal and ethical issues. The Note compares the registry in Georgia with those of other states and points out the importance of protecting the confidentiality of patient data. Emphasizing the protection of personal privacy could encourage more people, and more states, to get involved in these registries.


The Federal Government Wants Your iPhone Passcode: What Does the Law Say?

Tim Joyce, MJLST Staffer

Three months ago, when MJLST Editor Steven Groschen laid out the arguments for and against a proposed New York State law that would require “manufacturers and operating system designers to create backdoors into encrypted cellphones,” the government hadn’t even filed its motion to compel against Apple. Now, just a few weeks after the government quietly stopped pressing the issue, it almost seems as if nothing at all has changed. But, while the dispute at bar may have been rendered moot, it’s obvious that the fight over the proper extent of data privacy rights continues to simmer just below the surface.

For those unfamiliar with the controversy, what follows are the high-level bullet points. Armed attackers opened fire on a group of government employees in San Bernardino, CA on the morning of December 2, 2015. The attackers fled the scene, but were killed in a shootout with police later that afternoon. Investigators opened a terrorism investigation, which eventually led to a locked iPhone 5c. When investigators failed to unlock the phone, they sought Apple’s help, first politely, and then more forcefully via California and Federal courts.

The request was for Apple to create an authenticated version of its iOS operating system which would enable the FBI to access the stored data on the phone. In essence, the government asked Apple to create a universal hack for any iPhone running that particular version of iOS. As might be predicted, Apple was less than inclined to help crack its own encryption software. CEO Tim Cook ran up the banner of digital privacy rights and re-ignited a heated debate over the proper scope of government’s ability to regulate encryption practices.
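To see what was technically at stake, consider a minimal sketch (entirely hypothetical; it does not reflect Apple’s actual passcode implementation): once software protections such as escalating retry delays and auto-erase after repeated failures are removed, a short numeric passcode yields to simple enumeration.

```python
# Illustrative sketch only: models why removing retry limits makes a
# short numeric passcode trivial to brute-force. Not Apple's actual code.

def brute_force(check_passcode):
    """Try every 4-digit passcode; return the match and the attempt count."""
    for attempt, guess in enumerate(("%04d" % n for n in range(10000)), start=1):
        if check_passcode(guess):
            return guess, attempt
    return None, 10000

# A stand-in for the device's passcode check (the secret is hypothetical).
secret = "7391"
found, attempts = brute_force(lambda g: g == secret)
print(found, attempts)  # → 7391 7392
```

With at most 10,000 guesses needed, the only real barriers are the software safeguards the government asked Apple to disable.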

Legal chest-pounding ensued.

That was the situation until March 28, when the government quietly stopped pursuing this part of the investigation. In its own words, the government informed the court that it “…ha[d] now successfully accessed the data stored on [the gunman]’s iPhone and therefore no longer require[d] the assistance from Apple Inc…”. Apparently, some independent governmental contractor (read: legalized hacker) had done in just a few days what the government had been claiming from the start was impossible without Apple’s help. Mission accomplished – so, the end?

Hardly.

While this one incident, for this one iPhone (the iOS version is only applicable to the iPhone 5c, not any other model like the iPhone 6), may be history, many more of the same or substantially similar disputes are still trickling through the courts nationwide. In fact, more than ten other federal iPhone cases have been filed since September 2015, all of them based on a 227-year-old act of last resort. States like New York are also getting into the mix, even absent fully ratified legislation. Furthermore, it’s obvious that legislatures are taking this issue seriously (see NYS’s proposed bill, recently returned to committee).

Although he is only ⅔ of a lawyer at this point, it seems to this author that there are at least three ways a court could handle a demand like this, if the case were allowed to go to the merits.

  1. Never OK to demand a hack – In this scenario, the courts could find that our collective societal interests in privacy would always preclude enforcement of an order like this. This seems unlikely, especially given a court’s demonstrated willingness in this very case to make the order in the first place.
  2. Always OK to demand a hack – Like option 1, this option seems unlikely, especially given the First and Fourth Amendments. Here, the courts would have to find some rationale to justify hacking in every circumstance. Clearly, the United States has not yet transitioned to an Orwellian dystopia.
  3. Sometimes OK to demand a hack, but with scrutiny – Here, in the middle, is where it seems likely we’ll find courts in the coming years. Obviously, convincing arguments exist on each side, and it seems possible to reconcile upholding national security with the burdens on personal privacy and on a tech company’s policy of privacy protection, given the right set of facts. The San Bernardino shooting is not that case, though. The alleged terrorist threat was never characterized as sufficiently imminent, and the FBI even admitted that cracking the cell phone was not integral to the case (and it didn’t find anything anyway). It will take a (probably) much scarier scenario for this option to snap into focus as a workable compromise.

We’re left then with a nagging feeling that this isn’t the last public skirmish we’ll see between Apple and the “man.” As digital technology becomes ever more integrated into daily life, our legal landscape will have to evolve as well.
Interested in continuing the conversation? Leave a comment below. Just remember – if you do so on an iPhone 5c, draft at your own risk.


Requiring Backdoors into Encrypted Cellphones

Steven Groschen, MJLST Managing Editor

The New York State Senate is considering a bill that requires manufacturers and operating system designers to create backdoors into encrypted cellphones. Under the current draft, failure to comply with the law would result in a $2,500 fine, per offending device. This bill highlights the larger national debate concerning privacy rights and encryption.

In November of 2015, the Manhattan District Attorney’s Office (MDAO) published a report advocating for a federal statute requiring backdoors into encrypted devices. One of MDAO’s primary reasons in support of the statute is the lack of alternatives available to law enforcement for accessing encrypted devices. The MDAO notes that traditional investigative techniques have largely been ineffective. Additionally, the MDAO argues that certain types of data residing on encrypted devices often cannot be found elsewhere, such as on a cloud service. Naturally, the inaccessibility of this data is a significant hindrance to law enforcement. The report offers an excellent summary of the law enforcement perspective; however, as with all debates, there is another perspective.

The American Civil Liberties Union (ACLU) has stated it opposes using warrants to force device manufacturers to unlock their customers’ encrypted devices. A recent ACLU blog post presented arguments against this practice. First, the ACLU argued that the government should not require “extraordinary assistance from a third party that does not actually possess the information.” The ACLU perceives these warrants as conscripting Apple (and other manufacturers) to conduct surveillance on behalf of the government. Second, the ACLU argued using search warrants bypasses a “vigorous public debate” regarding the appropriateness of the government having backdoors into cellphones. Presumably, the ACLU is less opposed to laws such as that proposed in the New York Senate, because that process involves an open public debate rather than warrants.

Irrespective of whether the New York Senate bill passes, the debate over government access to its citizens’ encrypted devices is sure to continue. Citizens will have to balance public safety considerations against individual privacy rights—a tradeoff as old as government itself.


Circumventing EPA Regulations Through Computer Programs

Ted Harrington, MJLST Staffer

In September of 2015, it was Volkswagen Group (VW). This December, it was the General Electric Company (GE) finalizing a settlement in the United States District Court in Albany. The use of computer programs or other technology to override, or “cheat,” some type of Environmental Protection Agency (EPA) regulation has become seemingly commonplace.

GE uses silicone as part of its manufacturing process, which results in volatile organic compounds and chlorinated hydrocarbons, both hazardous byproducts. The disposal of hazardous materials is closely regulated by the Resource Conservation and Recovery Act (RCRA). Under this act, the EPA has delegated permitting authority to the New York State Department of Environmental Conservation (DEC). This permitting authority allows the DEC to grant permits for the disposal of hazardous wastes in the form of an NYS Part 373 Permit.

The permit allowed GE to store hazardous waste, operate a landfill, and use two incinerators on-site at its Waterford, NY plant. The permit was originally issued in 1989, and was renewed in 1999. The two incinerators included an “automatic waste feed cutoff system” designed to keep the GE facility in compliance with RCRA and the NYS Part 373 Permit. If the incinerator reached a certain limit, the cutoff system would simply stop feeding more waste.

Between September 2006 and February 2007, the cutoff system was overridden by computer technology, or manually by GE employees, on nearly 2,000 occasions. This resulted in hazardous waste being disposed of in amounts grossly above the limits of the issued permits. In early December, GE quickly settled the claim by paying $2.25 million in civil penalties.
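The mechanism at issue can be pictured with a minimal sketch (hypothetical; the actual control-system logic is not public): an automatic cutoff stops the waste feed once a permit limit is reached, and an override flag defeats that control.

```python
# Hypothetical model of an automatic waste feed cutoff system.
# Names and numbers are illustrative, not GE's actual control logic.

def feed_waste(current_level, limit, override=False):
    """Return True if the incinerator accepts more waste."""
    if current_level >= limit and not override:
        return False  # automatic cutoff keeps the facility within its permit
    return True

print(feed_waste(90, 100))                  # → True (under the limit)
print(feed_waste(100, 100))                 # → False (cutoff engages)
print(feed_waste(100, 100, override=True))  # → True (override defeats the control)
```

The last line is the crux of the settlement: a single flag, set in software or flipped manually, turns a compliance safeguard into a formality.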

Beyond the extra pollution caused by GE, a broader problem is emerging—in an increasingly technological world, what can be done to prevent companies from skirting regulations using savvy computer programs? With more opportunities than ever to get around regulation using technology, is it even feasible to monitor these companies? It is virtually certain that similar instances will continue to surface, and agencies such as the EPA must be on the forefront of developing preventative technology to slow this trend.


Warrant Now Required For One Type of Federal Surveillance, and May Soon Follow for State Law Enforcement

Steven Graziano, MJLST Staffer

As technology has advanced over recent decades, law enforcement agencies have expanded their enforcement techniques. One example of these tools is the cell-site simulator, otherwise known as a sting ray. Put simply, a sting ray acts as a mock cell tower, detects the use of a specific phone number in a given range, and then uses triangulation to locate the phone. However, the recent, heightened awareness and criticism directed towards government and law enforcement surveillance has affected their potential use. Specifically, many federal law enforcement agencies have been barred from using sting rays without a warrant, and federal legislation is currently pending that would require state and local law enforcement agents to also obtain a warrant before using one.
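The locating step can be illustrated with a toy two-dimensional trilateration sketch (purely illustrative; real cell-site simulators rely on more sophisticated signal analysis, and all positions and distances below are invented): given distance estimates from three known points, the phone’s position falls out of two linear equations.

```python
# Illustrative 2-D trilateration: locate a phone from distance estimates
# to three known positions. All coordinates and distances are hypothetical.

def trilaterate(p1, d1, p2, d2, p3, d3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the three circle equations pairwise leaves two linear
    # equations in the unknown position (x, y); solve the 2x2 system.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A phone at (3, 4), measured from three simulated positions:
x, y = trilaterate((0, 0), 5.0, (10, 0), 65 ** 0.5, (0, 10), 45 ** 0.5)
print(round(x, 6), round(y, 6))  # → 3.0 4.0
```

In practice the distances would be inferred from signal strength or timing, which is noisier, but the geometric idea is the same.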

Federal law enforcement agencies, specifically Immigration, Secret Service, and Homeland Security agents, must obtain search warrants before using sting rays, as announced by the Department of Homeland Security. Homeland Security’s shift in policy comes after the Department of Justice made a similar statement. The DOJ has affirmed that although it had previously used cell-site simulators without a warrant, going forward it will require that law enforcement agencies obtain a search warrant supported by probable cause. DOJ agencies directed by this policy include the FBI and the Drug Enforcement Administration. This shift in federal policy was largely a response to pressure put upon Washington by civil liberties groups, as well as the shift in the American public’s attitude towards surveillance generally.

Although these policies only affect federal law enforcement agencies, steps have also been taken to extend the warrant requirement for sting rays to state and local governments. Federal lawmakers have introduced the Cell-Site Simulator Act of 2015, also known as the Stingray Privacy Act, to hold state and local law enforcement to the same Fourth Amendment standards as the federal government. The bill was proposed in the House of Representatives by Rep. Jason Chaffetz (R-Utah) and was referred to a congressional committee on November 2, 2015, which will consider it before sending it to the entire House or Senate. In addition to requiring a warrant, the act would require prosecutors and investigators to disclose to judges that the technology they intend to use in executing the warrant is specifically a sting ray. The proposed law was partially a response to a critique of the federal warrant requirement, namely that it did not compel state or local law enforcement to also obtain a search warrant.

The use of advanced surveillance programs by federal, state, and local law enforcement has been a controversial subject recently. Although law enforcement has a duty to fully enforce the law, and this includes using the entirety of its resources to detect possible crimes, it must still adhere to the constitutional protections laid out in the Fourth Amendment when doing so. Technology changes and advances rapidly, and sometimes it takes the law some time to adapt. However, the shift in policy at all levels of government shows that the law may be beginning to catch up to law enforcement’s use of technology.


Digital Millennium Copyright Act Exemptions Announced

Zach Berger, MJLST Staffer

The Digital Millennium Copyright Act (DMCA), first enacted in 1998, prevents owners of digital devices from making use of those devices in any way that the copyright holder does not explicitly permit. Codified in part in 17 U.S.C. § 1201, the DMCA makes it illegal to circumvent digital security measures that prevent unauthorized access to copyrighted works such as movies, video games, and computer programs. This law prevents users from breaking what are known as access controls, even if the purpose would fall under lawful fair use. According to the Electronic Frontier Foundation’s (a nonprofit digital rights organization) staff attorney Kit Walsh, “This ‘access control’ rule is supposed to protect against unlawful copying. But as we’ve seen in the recent Volkswagen scandal . . . it can be used instead to hide wrongdoing hidden in computer code.” Essentially, everything not explicitly permitted is forbidden.

However, these restrictions are not ironclad. Every three years, users may request exemptions to this law for lawful fair uses from the Library of Congress (LOC), but these exemptions are not easy to receive. To obtain an exemption, activists must not only propose new exemptions but also plead for ones already granted to be continued. The system is flawed, as users often need a way to circumvent controls on their devices to make full use of the products. However, the LOC has recently released its new list of exemptions, and this expanded list represents a small victory for digital rights activists.

The exemptions granted will go into effect in 2016, and cover 22 types of uses affecting movies, e-books, smart phones, tablets, video games and even cars. Some of the highlights of the exemptions are as follows:

  • Movies, where circumvention is used in order to make use of short portions of the motion picture:
    • For educational uses by university and grade school instructors and students
    • For e-books offering film analysis
    • For uses in noncommercial videos
  • Smart devices:
    • Users can “jailbreak” these devices to allow them to interoperate with or remove software applications, and phones can be unlocked from their carrier
    • Such devices include smart phones, televisions, and tablets or other mobile computing devices
      • In 2012, jailbreaking smartphones was allowed, but not tablets. This distinction has been removed.
  • Video games:
    • Fan-operated online servers are now allowed to support video games once the publishers shut down official servers.
      • However, this only applies to games that would be made nearly unplayable without the servers.
    • Museums, libraries, and archives can go a step further by jailbreaking games as needed to get them functioning properly again.
  • Computer programs that operate devices designed primarily for use by individual consumers, for purposes of diagnosis, repair, and modification
    • This includes voting machines, automobiles, and implanted medical devices.
  • Computer programs that control automobiles, for purposes of diagnosis, repair, and modification of the vehicle
These new exemptions are a small but significant victory for consumers under the DMCA. The ability to analyze your automotive software is especially relevant in the wake of the aforementioned Volkswagen emissions scandal. However, the exemptions are subject to some important caveats. For example, only video games that are almost completely unplayable can have user-made servers; where only an online multiplayer feature is lost, such servers are not allowed. A better long-term solution is clearly needed, as this burdensome process is flawed and has led to what the EFF has called “unintended consequences.” Regardless, as long as we still have this draconian law, exemptions will be welcomed. To read the final rule, register’s recommendation, and introduction (which provides a general overview) click here.


The Legal Persona of Electronic Entities – Are Electronic Entities Independent Entities?

Natalie Gao, MJLST Staffer

The advent of the electronic age brought digital changes and easier access to more information, but with it came certain electronic problems. One such problem is whether electronic entities, such as (1) usernames online, (2) software agents, (3) avatars, (4) robots, and (5) artificial intelligences, are independent entities under law. A username for a website like eBay or for a forum may, for all intents and purposes, be just a pseudonym for the person behind the computer. But at what point does the electronic entity become an independent entity, and at what point does it start to have the rights and responsibilities of a legally independent entity?

In 2007, Plaintiff Marc Bragg brought suit against Defendants Linden Research Inc. (Linden), owner of the massively multiplayer online role-playing game (MMORPG) Second Life, and its Chief Executive Officer. Second Life is a game with a telling title: it essentially allows its players to lead a second life. It has a market for goods, extensive communications functions, and even a red-light district, and real universities have been given digital campuses in the game, where they hold lectures. Players of Second Life purchase items and land in-game with real money.

Plaintiff Bragg’s digital land was frozen in-game by moderators due to “suspicious” activity, and Plaintiff brought suit claiming he had property rights in the digital land. Bragg v. Linden Research, Inc., like its successors, including Evans v. Linden Research, Inc. (2011), was settled out of court and therefore does not offer the legal precedent it could potentially have set given its unique fact pattern. Second Life is also a very unusual game because, pre-2007, Linden had promoted Second Life by announcing that it recognized virtual property rights and that whatever users owned in-game would belong to the user rather than to Linden. But can users really own digital land? Would it be the users themselves who own the digital land, or would the avatars they create on the website, the ones living this “second life,” be the true owners? And at what point can avatars, or any electronic entity, even have rights and responsibilities?

An independent entity is not the same as a legally independent entity, because the latter, beyond merely existing independently, has rights and responsibilities under law. MMORPGs may use avatars to allow users to play games, and an avatar may be one step more independent than a username, but is that avatar an independent entity that can, for example, legally conduct commercial transactions? Or is the avatar merely conducting a “transaction” in a leisure context? In Bragg v. Linden Research, Inc., the court touched on the issue of transactions but ruled only on civil procedure and contract law. And what about avatars in some current games that can play themselves? Is being “automatic” enough to make something an “independent entity”?

The concept of an independent electronic entity is discussed at length in Bridging the Accountability Gap: Rights for New Entities in the Information Society. Authors Koops, Hildebrandt, and Jaquet-Chiffelle compare the legal personhood of electronic artificial entities with that of animals, ships, trust funds, and organizations, arguing that granting legal personhood to all currently existing electronic entities raises problems such as the need for representation with agency, the lack of the “intent” required in certain crimes and areas of law, and the likely need to base some of their legal appeals on human and civil rights. The entities may be “actants” (in that they are capable of acting), but they are not always autonomous. A robot would need mens rea before responsibility could be assessed, and none of the five listed entities has consciousness (which animals do have), let alone self-consciousness. The authors argue that none of the artificial entities fits the prima facie definition of a legal person; instead, they evaluate the entities on a continuum from automatic (acting) to autonomic (acting on its own), along with each entity’s ability to contract and to bear legal responsibility. They offer three possible solutions: one “Short Term,” one “Middle Term,” and one “Long Term.” The Short Term method, which seems the most legally feasible under today’s law, proposes creating a corporation (a legally independent entity) to stand behind the electronic entity. This concept is reminiscent of theorist Gunther Teubner’s idea of using a hybrid entity, one that combines an electronic agent with a limited-liability company rather than an individual entity, to give something rights and responsibilities.

Inevitably, even though, under the actual claims brought before the court, Bragg v. Linden Research, Inc. reads more like an open-source licensing dispute than an issue of electronic independent entities, Koops, Hildebrandt, and Jaquet-Chiffelle still try to answer questions that may one day be very salient. Programs can be probabilistic algorithms, but no matter how unpredictable a program may seem, its unpredictability is fixed in the algorithm. An artificial intelligence (AI), a program that grows, learns, and creates unpredictability on its own, may be a thing of science fiction and The Avengers today, but it may one day be reality. And an AI need not be the AI of I, Robot; it does not have to have a personality. At what point will we have to treat electronic entities as legally autonomic and hold them responsible for what they have done? Will the future genius programmer who creates an AI to watch over the trusts in his or her care be held accountable when that AI starts illegally funneling money out of the AmeriCorp bank account it was created to watch, into the personal savings accounts of lesser, non-MJLST law journals at the University of Minnesota? Koops, Hildebrandt, and Jaquet-Chiffelle argue yes, though it largely depends on the AI itself and the area of law.
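The point that a probabilistic program’s “unpredictability” is fixed in its algorithm can be illustrated with a minimal sketch. The function below is purely hypothetical (not drawn from any cited source): it makes “random” choices, yet rerunning it with the same seed reproduces the exact same behavior, so nothing about it is autonomous in the authors’ sense.

```python
import random

def probabilistic_agent(seed):
    """A seemingly unpredictable program whose randomness is fully
    determined by its algorithm and seed: rerunning it with the same
    seed reproduces the same sequence of choices."""
    rng = random.Random(seed)
    return [rng.choice(["buy", "sell", "hold"]) for _ in range(5)]

# Two runs with the same seed make identical "random" decisions:
assert probabilistic_agent(42) == probabilistic_agent(42)
```

A different seed changes the output, but the mapping from seed to behavior is still fixed by the algorithm; the program acts automatically, not autonomically, in the continuum Koops, Hildebrandt, and Jaquet-Chiffelle describe.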


Data Breach and Business Judgment

Quang Trang, MJLST Staffer

Data breaches are a threat to major corporations. Corporations such as Target Co. and Wyndham Worldwide Co. have been victims of mass data breaches. The damage caused by such breaches has led shareholders to file derivative lawsuits seeking to hold boards of directors responsible.

In Palkon v. Holmes, 2014 WL 5341880 (D. N.J. 2014), Wyndham Worldwide Co. shareholder Dennis Palkon filed a lawsuit against the company’s board of directors. The judge granted the board’s motion to dismiss partially because of the Business Judgment Rule. The business judgment rule governs when boards refuse shareholder demands. The principle of the business judgment rule is that “courts presume that the board refused the demand on an informed basis, in good faith and in honest belief that the action taken was in the best interest of the company.” Id. The shareholder who brings the derivative suit bears the burden of rebutting this presumption by showing that the board acted in bad faith or did not base its decision on a reasonable investigation.

Cybersecurity is a developing area. People are still unsure how prevalent and how damaging the problem is, and it is difficult to determine what a board needs to do with such ambiguous information. At a time when there are no settled corporate cybersecurity standards, it is difficult for a shareholder to show bad faith or a lack of reasonable investigation. Until clear cybersecurity standards and procedures are widely adopted, derivative suits over data breaches will likely be dismissed, as in Palkon.