Internet

Why Is Equity-Based Crowdfunding Not Flourishing? — A Comparison Between the US and the UK

Tianxiang Zhou, MJLST Editor

While donation-based crowdfunding (giving money to enterprises or organizations one wants to support) is flourishing on online platforms in the US, equity-based crowdfunding (funding startup enterprises or organizations in return for equity) under the JOBS Act is still struggling, as its requirements are proving impractical for most entrepreneurs.

Donation-based crowdfunding dominates the major crowdfunding websites like Indiegogo and Kickstarter. In March 2017, Facebook announced that it will introduce a crowdfunding feature to help users back causes such as education, medical needs, pet medical care, crisis relief, personal emergencies, and funerals. However, this new crowdfunding feature from Facebook has nothing to do with equity-based crowdfunding; it is only for donation-based crowdfunding. Even on the platforms specializing in crowdfunding, equity-based projects are difficult to find. If you visit Kickstarter or Indiegogo, most of the crowdfunding projects that appear on the webpages are donation-based. As of April 2, 2017, there are only four active equity crowdfunding opportunities on the Indiegogo website available to investors, though the website stated that “more than 200 (equity-based) projects funded in the past.” (The writer could not find an equity-based crowdfunding opportunity on Kickstarter, or a section for searching equity-based crowdfunding opportunities.)

The reason equity-based crowdfunding is not flourishing is readily apparent. As one article points out, the statutory requirements for crowdfunding under the JOBS Act “effectively weigh it down to the point of making the crowdfunding exemption utterly useless.” The problems with obtaining funding for small businesses that the JOBS Act aims to resolve persist under crowdfunding: for example, the crowdfunding must be done through a registered broker-dealer, and the issuer has to file various disclosure statements, including financial statements and annual reports. For smaller businesses, the costs of preparing such reports can be heavily burdensome at an early stage.

Compared to crowdfunding requirements in the US, the UK rules are much easier for issuers to comply with. The Financial Conduct Authority (FCA) introduced a set of regulations for the peer-to-peer sector in 2014; before this, the P2P sector did not fall under any regulatory regime. Since 2014, the UK government has required platforms to be licensed or to have regulated activities managed by authorized parties. If an investor is deemed a “non-sophisticated” investor, constraints are placed on how much they are permitted to invest: they must not invest more than 10% of their net investable assets in investments sold via what are called investment-based crowdfunding platforms. Though the rules impose requirements on how offers are communicated, including the language and clarity used to describe them and the awareness of the risks associated with them, issuers face far fewer disclosure obligations, such as the filing of annual reports and financial statements.

As a result, the crowdfunding market in the UK is characterized “less by exchanges that resemble charity, gift giving, and retail, and more by those of financial market exchange” compared with the US. On the UK-based crowdfunding website Crowdcube, there were 14 open opportunities for investors as of April 2, 2017, and 494 projects had been funded. In comparison, the US-based crowdfunding giant Indiegogo’s statement that “more than 200 projects funded in the past” is not very impressive, considering the difference in size between the UK’s economy and the US’s.

While entrepreneurs in the US face many obstacles to raising funds through equity-based crowdfunding, UK crowdfunding websites are now providing more equity-based opportunities to investors, sometimes even more effectively than government-led programs. The Crowd Data Center published a report stating that seed crowdfunding in the UK delivered 40% more funding in 2016 than the UK government-funded Startup Loans scheme.

As for the concern that equity-based crowdfunding involves too much risk for “unsophisticated investors,” articles have pointed out that in countries like the UK and Australia, where lightly regulated equity crowdfunding platforms welcomed all investors, there are “hardly any instances of fraud.” While equity crowdfunding under the JOBS Act has yet to prove its efficiency, state laws are devising more options for issuers within the restrictions of SEC Rule 147 (see more in 1000 Days Late & $1 Million Short: The Rise and Rise of Intrastate Equity Crowdfunding). At the same time, the FCA has stated that it will revisit its rules on crowdfunding. It will be interesting to see how the crowdfunding rules evolve in the future.


Should You Worry That ISPs Can Sell Your Browsing Data?

Joshua Wold, Article Editor

Congress recently voted to overturn the FCC’s October 2016 rules Protecting the Privacy of Customers of Broadband and Other Telecommunications Services through the Congressional Review Act. As a result, those rules will likely never go into effect. Had the rules been implemented, they would have required Internet Service Providers (ISPs) to get customer permission before making certain uses of customer data.

Some commentators, looking at the scope of the rules relative to the internet ecosystem as a whole, and the fact that the rules hadn’t yet taken effect, thought that this probably wouldn’t have a huge impact on privacy. Orin Kerr suggested that the overruling of the privacy regulations was unlikely to change what ISPs would do with data, because other laws constrain them. Others, however, were less sanguine. The Verge quoted Jeff Chester of the Center for Digital Democracy as saying “For the foreseeable future, we’re going to be living in a commercial surveillance state.”

While the specific context of these privacy regulations is new (the FCC couldn’t regulate ISPs until 2015, when it defined them as telecommunications providers instead of information services), debates over privacy are not. In 2013, MJLST published Adam Thierer’s Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle. In it, the author argues that privacy threats (as well as many other threats from technological advancement) are generally exaggerated. Thierer then lays out a four-part analytic framework for weighing regulation, calling on regulators and politicians to identify clear harms, engage in cost-benefit analysis, consider more permissive regulation, and then evaluate and measure the outcomes of their choices.

Given Minnesota’s response to Congress’s action, the debate over privacy and regulation of ISPs is unlikely to end soon. Other states may consider similar restrictions, or future political changes could lead to a swing back toward regulation. Or, the current movement toward less privacy regulation could continue. In any event, Thierer’s piece, and particularly his framework, may be useful to those wishing to evaluate regulatory policy as ISP regulation progresses.

For a different perspective on ISP regulation, see Paul Gaus’s student note, upcoming in Volume 19, Issue 1. That article will focus on presenting several arguments in favor of regulating ISPs’ privacy practices, and will be a thoughtful contribution to the discussion about privacy in today’s internet.


Confusion Continues After Spokeo

Paul Gaus, MJLST Staffer

Many observers hoped the Supreme Court’s decision in Spokeo v. Robins would bring clarity to whether plaintiffs can establish Article III standing for claims based on future harm from data breaches. John Biglow explored the issue prior to the Supreme Court’s decision in his note It Stands to Reason: An Argument for Article III Standing Based on the Threat of Future Harm in Data Breach Litigation. Those optimistic that the Supreme Court would expand access for individuals seeking to litigate their privacy interests were disappointed.

Spokeo is a people search engine that generates publicly accessible online profiles of individuals (it had also been the subject of previous FTC data privacy enforcement actions). The plaintiff claimed Spokeo disseminated a false report about him, hampering his ability to find employment. Although the Ninth Circuit held that the plaintiff suffered “concrete” and “particularized” harm, the Supreme Court disagreed, holding that the Ninth Circuit’s analysis addressed only the particularization requirement. The Supreme Court remanded the matter to the Ninth Circuit, casting doubt on whether the plaintiff suffered concrete harm. Spokeo violated the Fair Credit Reporting Act, but the Supreme Court characterized the false report as a bare procedural harm, insufficient for Article III standing.

Already, the Circuits are split on how Spokeo affects consumer data protection lawsuits. In Braitberg v. Charter Communications, the Eighth Circuit held that a cable company’s failure to destroy the personally identifiable information of a former customer was a bare procedural harm akin to that in Spokeo. The Eighth Circuit reached this conclusion despite the defendant’s clear violation of the Cable Act. By contrast, in Church v. Accretive Health, the Eleventh Circuit held that a plaintiff did have standing when she failed to receive disclosures about her default debt from her creditor under the Fair Debt Collection Practices Act.

Many observers consider Spokeo an adverse result for consumers seeking to litigate their privacy interests. By punting on the issue, the Supreme Court allowed the Circuits’ divergent application of Article III standing in class action privacy suits to continue.


Digital Tracking: Same Concept, Different Era

Meibo Chen, MJLST Staffer

The term “paper trail” grows ever more anachronistic in today’s world.  While some people still prefer the traditional old-fashioned pen and paper, our modern world has endowed us with technologies like computers and smartphones.  Whether we like it or not, this digital explosion is slowly taking over the lives of average Americans (73% of US adults own a desktop or laptop computer, and 68% own a smartphone).

These new technologies have forced us to consider many novel legal issues arising from their integration into our daily lives.  Recent Supreme Court decisions such as Riley v. California in 2014 pointed out the immense data storage capacity of a modern cell phone and required a warrant for its search in the context of a criminal prosecution.  In the civil context, many consumers are concerned with internet tracking.  Indeed, MJLST published an article in 2012 addressing this issue.

We have grown accustomed to seeing “suggestions” that eerily match our respective interests.  In fact, internet tracking technology has become far more sophisticated than traditional cookies, and can now utilize “fingerprinting” techniques that look at battery status or window size to identify a user’s presence or interests.  This leads many to fear for their data privacy in digital settings.  However, isn’t this digital tracking just the modern adaptation of the “physical” tracking we have long been accustomed to?

When we physically go to a grocery store, don’t we subject ourselves to the prying eyes of those around us?  Why should it be any different in cyberspace?  While seemingly scarily accurate at times, “suggestions” or “recommended pages” based on one’s browsing history can actually benefit both the tracked and the tracker.  The tracked gets more personalized search results, while the tracker uses that information for better business results with the consumer.  Many browsers already sport an “incognito” function to disable tracking, striking a balance for when consumers want their privacy.  Of course, this tracking technology can be misused, but malicious use of a beneficial technology has always been with us.


Faux News vs. Freedom of Speech?

Tyler Hartney, MJLST Staffer

This election season produced a lot of jokes on social media. Some of the jokes were funny; others lacked an obvious punch line. Multiple outlets are now reporting that fake news may have influenced voters in the 2016 presidential election. Both Facebook and Google have made conscious efforts to reduce the appearance of these fake news stories on their sites, in an attempt to cut off the click bait, and thus the revenue streams, of these faux news outlets. With the expansion of technology and social media, these stories now circulate widely enough to spread misinformation on a massive scale. Is this like screaming “fire” in a crowded theatre? How biased would filtering this speech become? Facebook was blown to shreds by the media when it was found to have suppressed conservative news outlets, but as a private business it had every right to do so. Experts are now saying that the Russian government made efforts to help spread this fake news to help Donald Trump win the presidency.

First, the only entity that cannot place limits on speech is the state. If Facebook or Google chose to filter the news broadcast on each site, users would still have no claim against the entity; this would be considered a private business choice. These faux news outlets circulate stories that appear, at times, intentionally and willfully misleading. Is this similar to a man shouting “fire” in a crowded theatre? In essence, the man in that commonly used hypothetical knows that his statement is false and that it has a high probability of inciting panic, while the general public cannot assess the validity of his statement and has no time to check. The second part of that statement is key: the public would have no time to check the statement’s validity. If the government were to pass regulations cracking down on the circulation and creation of these hoax news stories, it would have to prove that the stories create a “clear and present danger” of substantive evils that Congress has a right to prevent. This standard was created in the Supreme Court’s decision in Schenck v. United States. The government will likely not be capable of banning these faux news stories because, while some may consider them dangerous, the audience has the capability of validating the content from these untrusted sources.

Even contemplating government action under these circumstances would require the state to walk a fine line with freedom of political expression. What is humorous, and what is dangerously misleading? For example, The Onion posted an article entitled “Biden Forges President’s Signature Executive Order 54723”; clearly this is a joke, yet it holds the potential to incite fury among those who might believe it and to create a misinformed public that might use it as material information when casting a ballot. This Onion article is not notably different from a post entitled “FBI AGENT SUSPECTED IN HILLARY EMAIL LEAKS FOUND DEAD IN APPARENT MURDER-SUICIDE” published by the Denver Guardian. With the same potential to mislead the public, there would be no readily identifiable differences between the two stories. This gray area would make it extremely difficult to methodically stop the production of fake news while ensuring the protection of comedic parody news. The only way to protect the public from the dangers of these stories, which are apparently being pushed onto the American voting public by the Russian government in an attempt to influence election outcomes, is to educate the public on how to verify online accounts.


The Best Process for the Best Evidence

Mary Riverso, MJLST Staffer

Social networking sites are now an integral part of American society. Almost everyone and everything has a profile, typically on multiple platforms. And people like to use them. Companies like having direct contact with their customers, media outlets like having access to viewer opinions, and people like to document their personal lives.

However, as the use of social networking continues to increase in scope, information placed in the public sphere is playing an increasingly central role in investigations and litigation. Many police departments conduct regular surveillance of public social media posts in their communities because these sites have become conduits for crimes and other wrongful behavior. As a result, litigants increasingly seek to offer records of statements made on social media sites as evidence. So how exactly can content from social media be used as evidence? Ira Robbins explores this issue in his article Writings on the Wall: The Need for an Authorship-Centric Approach to the Authentication of Social-Networking Evidence. The main hurdle is one of reliability. To be admitted as evidence, the source of information must be authenticated so that a fact-finder may rely on the source, and ultimately its content, as trustworthy and accurate. However, social media sites are particularly susceptible to forgery, hacking, and alteration. Without a confession, it is often difficult to determine who actually authored the posted content.

Courts grapple with this issue: some allow social media evidence only when the record establishes distinctive characteristics of the particular website under Federal Rule of Evidence 901(b)(4); other courts believe authentication is a relatively low bar, and that as long as a witness testifies to the process by which the record was obtained, it is ultimately for the jury to determine the credibility of the content. But is that fair? If evidence is supposed to assist the fact-finder in “ascertaining the truth and securing a just determination,” should it not be of utmost importance to determine the author of the content? Is not a main purpose of authentication to attribute the content to the proper author? Social media records may well be the best evidence against a defendant, but without an authorship-centric approach, the current path to their admissibility may not yet be the best process.


Are News Aggregators Getting Their Fair Share of Fair Use?

Mickey Stevens, MJLST Note & Comment Editor

Fair use is an affirmative defense to copyright infringement that permits the use of copyrighted materials without the author’s permission when doing so fulfills copyright’s goal of promoting the progress of science and useful arts. One factor that courts analyze in determining whether fair use applies is the purpose and character of the use, including whether the use is of a commercial nature or for nonprofit educational purposes and, in particular, whether the use is “transformative.” Recently, courts have had to determine whether automatic news aggregators can invoke the fair use defense against claims of copyright infringement. An automatic news aggregator scrapes the Internet and republishes pieces of the original sources without adding commentary to the original works.

In Spring 2014, MJLST published “Associated Press v. Meltwater: Are Courts Being Fair to News Aggregators?” by Dylan J. Quinn. That article discussed the Meltwater case, in which the United States District Court for the Southern District of New York held that Meltwater—an automatic news aggregator—could not invoke the defense of fair use because its use of copyrighted works was not “transformative.” Meltwater argued that it should be treated like search engines, whose actions do constitute fair use. The court rejected this argument, stating that Meltwater customers were using the news aggregator as a substitute for the original work, instead of clicking through to the original article like a search engine.

In his article, Quinn argued that the Meltwater court’s interpretation of “transformative” was too narrow, and that such an interpretation made an untenable distinction between search engines and automatic news aggregators who function similarly. Quinn asked, “[W]hat if a news aggregator can show that its commercial consumers only use the snippets for monitoring how frequently it is mentioned in the media and by whom? Is that not a different ‘use’?” Well, the recent case of Fox News Network, LLC v. TVEyes, Inc. presented a dispute similar to Quinn’s hypothetical that might indicate support for his argument.

In TVEyes, Fox News claimed that TVEyes, a media-monitoring service that aggregated news reports into a searchable database, had infringed copyrighted clips of Fox News programs. The TVEyes database allowed subscribers to track when, where, and how words of interest are used in the media—the type of monitoring that Quinn argued should constitute a “transformative” use. In a 2014 ruling, the court held that TVEyes’ search engine that displayed clips was transformative because it converted the original work into a research tool by enabling subscribers to research, criticize, and comment. 43 F. Supp. 3d 379 (S.D.N.Y. 2014). In a 2015 decision, the court analyzed a few specific features of the TVEyes service, including an archiving function and a date-time search function. 2015 WL 5025274 (S.D.N.Y. Aug. 25, 2015). The court held that the archiving feature constituted fair use because it allowed subscribers to detect patterns and trends and save clips for later research and commentary. However, the court held that the date-time search function (allowing users to search for video clips by date and time of airing) was not fair use. The court reasoned that users who have date and time information could easily obtain that clip from the copyright holder or licensing agents (e.g. by buying a DVD).

While the court did note that a database of video clips differs in kind from a collection of print news articles, the TVEyes decisions show that courts may now be willing to allow automatic news aggregators to invoke the fair use defense when they can show that their collections enable consumers to track patterns and trends for research, criticism, and commentary. Thus, the TVEyes decisions may lead courts to reconsider the distinction between search engines and automatic news aggregators established in Meltwater, which puts news aggregators at a disadvantage when it comes to fair use.


Digital Millennium Copyright Act Exemptions Announced

Zach Berger, MJLST Staffer

The Digital Millennium Copyright Act (DMCA), first enacted in 1998, prevents owners of digital devices from making use of those devices in any way that the copyright holder does not explicitly permit. Codified in part in 17 U.S.C. § 1201, the DMCA makes it illegal to circumvent digital security measures that prevent unauthorized access to copyrighted works such as movies, video games, and computer programs. This law prevents users from breaking what are known as access controls, even when the purpose would fall under lawful fair use. According to Kit Walsh, a staff attorney at the Electronic Frontier Foundation (a nonprofit digital rights organization), “This ‘access control’ rule is supposed to protect against unlawful copying. But as we’ve seen in the recent Volkswagen scandal . . . it can be used instead to hide wrongdoing hidden in computer code.” Essentially, everything not explicitly permitted is forbidden.

However, these restrictions are not ironclad. Every three years, users may request exemptions to this law for lawful fair uses from the Library of Congress (LOC), but these exemptions are not easy to obtain: activists must not only propose new exemptions but also plead for already-granted exemptions to be continued. The system is flawed, as users often need a way to circumvent their devices to make full use of the products. However, the LOC has recently released its new list of exemptions, and this expanded list represents a small victory for digital rights activists.

The exemptions granted will go into effect in 2016, and cover 22 types of uses affecting movies, e-books, smart phones, tablets, video games and even cars. Some of the highlights of the exemptions are as follows:

  • Movies, where circumvention is used in order to make use of short portions of the motion pictures:
    • For educational uses by university and grade school instructors and students
    • For e-books offering film analysis
    • For uses in noncommercial videos
  • Smart devices
    • Users can “jailbreak” these devices to allow them to interoperate with or remove software applications, and phones can be unlocked from their carriers
    • Such devices include smart phones, televisions, and tablets or other mobile computing devices
      • In 2012, jailbreaking smartphones was allowed, but not tablets. This distinction has been removed.
  • Video games
    • Fan-operated online servers are now allowed to support video games once the publishers shut down official servers.
      • However, this only applies to games that would be made nearly unplayable without the servers.
    • Museums, libraries, and archives can go a step further by jailbreaking games as needed to get them functioning properly again.
  • Computer programs that operate things primarily designed for use by individual consumers, for purposes of diagnosis, repair, and modification
    • This includes voting machines, automobiles, and implanted medical devices.
  • Computer programs that control automobiles, for purposes of diagnosis, repair, and modification of the vehicle

These new exemptions are a small but significant victory for consumers under the DMCA. The ability to analyze your automotive software is especially relevant in the wake of the aforementioned Volkswagen emissions scandal. However, the exemptions are subject to some important caveats. For example, only video games that are almost completely unplayable may have user-made servers; for games where only an online multiplayer feature is lost, such servers are not allowed. A better long-term solution is clearly needed, as this burdensome process is flawed and has led to what the EFF has called “unintended consequences.” Regardless, as long as we still have this draconian law, exemptions will be welcomed. To read the final rule, the Register’s recommendation, and the introduction (which provides a general overview), click here.


The Legal Persona of Electronic Entities – Are Electronic Entities Independent Entities?

Natalie Gao, MJLST Staffer

The advent of the electronic age brought digital changes and easier access to more information, but with it came certain electronic problems. One such problem is whether electronic entities like (1) usernames, (2) software agents, (3) avatars, (4) robots, and (5) artificial intelligences are independent entities under law. A username for a website like eBay or for a forum may, for all intents and purposes, be just a pseudonym for the person behind the computer. But at what point does the electronic entity become an independent entity, and at what point does it start to have the rights and responsibilities of a legally independent entity?

In 2007, Plaintiff Marc Bragg brought suit against Defendants Linden Research Inc. (Linden), owner of the massively multiplayer online role-playing game (MMORPG) Second Life, and its Chief Executive Officer. Second Life is a game with a telling title: it essentially allows its players to live a second life. It has a market for goods, extensive communications functions, and even a red-light district, and real universities have been given digital campuses in the game, where they have held lectures. Players of Second Life purchase items and land in-game with real money.

Plaintiff Bragg’s digital land was frozen in-game by moderators due to “suspicious” activity, and Plaintiff brought suit claiming he had property rights in the digital land. Bragg v. Linden Research, Inc., like its descendants, including Evans v. Linden Research, Inc. (2011), was settled out of court and therefore does not offer the legal precedent it potentially could have, given its unique fact pattern. Second Life is also a very unique game because, pre-2007, Linden had promoted Second Life by announcing that it recognized virtual property rights and that whatever users owned in-game would belong to the user instead of to Linden. But can users really own digital land? Would it be the users themselves owning the digital land, or would the avatars they make on the website, the ones living this “second life,” be the true owners? And at what point can avatars, or any electronic entity, even have rights and responsibilities?

An independent entity is not the same as a legally independent entity, because the latter, beyond merely existing independently, has rights and responsibilities pursuant to law. MMORPGs may use avatars to allow users to play games, and an avatar may be one step more independent than a username, but is that avatar an independent entity that can, for example, legally conduct commercial transactions? Or rather, is the avatar merely conducting a “transaction” in a leisure context? In Bragg v. Linden Research, Inc., the court touches on the issue of transactions, but it rules only on civil procedure and contract law. And what about avatars, existing now in some games, that can play themselves? Is “automatic” enough to make something an “independent entity”?

The concept of an independent electronic entity is discussed at length in Bridging the Accountability Gap: Rights for New Entities in the Information Society. Authors Koops, Hildebrandt, and Jaquet-Chiffelle compare the legal personhood of electronic artificial entities with that of animals, ships, trust funds, and organizations, arguing that giving legal personhood to basically all (or just “all”) currently existing electronic entities raises problems such as needing representation with agency, lacking the “intent” required for certain crimes and/or areas of law, and likely needing to base some of their legal appeals on human/civil rights. The entities may be “actants” (in that they are capable of acting), but they are not always autonomous. A robot would need mens rea for responsibility to be assessed, and none of the five listed entities has consciousness (which animals do have), let alone self-consciousness. The authors argue that none of the artificial entities fits the prima facie definition of a legal person; instead they evaluate the entities on a continuum from automatic (acting) to autonomic (acting on its own), as well as on each entity’s ability to contract and bear legal responsibility. They come up with three possible solutions: one “Short Term,” one “Middle Term,” and one “Long Term.” The Short Term solution, which seems the most legally feasible under today’s law, proposes creating a corporation (a legally independent entity) to create the electronic entity. This concept is reminiscent of theorist Gunther Teubner’s idea of using a hybrid entity, one that combines an electronic agent with a limited-liability company rather than an individual entity, to give something rights and responsibilities.

Inevitably, even though under the actual claims brought to the court Bragg v. Linden Research, Inc. seems more like an open-source licensing issue than an issue of an electronic independent entity, Koops, Hildebrandt, and Jaquet-Chiffelle still try to answer some questions that may be very salient one day. Programs can be probabilistic algorithms, but no matter how unpredictable a program may be, its unpredictability is fixed in the algorithm. An artificial intelligence (AI), a program that grows and learns and creates unpredictability on its own, may be a thing of science fiction and The Avengers today, but it may one day be reality. And an AI does not have to be the AI of I, Robot; it does not have to have a personality. At what point will we have to treat electronic entities as legally autonomic and hold them responsible for the things they have done? Will the future genius programmer, who creates an AI to watch over the trusts in his or her care, be held accountable when that AI starts illegally funneling money out of the AmeriCorp bank account it was created to watch over and into the personal savings accounts of lamer non-MJLST law journals at the University of Minnesota? Koops, Hildebrandt, and Jaquet-Chiffelle argue yes, but it largely depends on the AI itself and the area of law.


Data Breach and Business Judgment

Quang Trang, MJLST Staffer

Data breaches are a threat to major corporations. Corporations such as Target Co. and Wyndham Worldwide Co. have been victims of mass data breaches. The damage caused by such breaches has led shareholders to file derivative lawsuits to hold boards of directors responsible.

In Palkon v. Holmes, 2014 WL 5341880 (D.N.J. 2014), Wyndham Worldwide Co. shareholder Dennis Palkon filed a lawsuit against the company’s board of directors. The judge granted the board’s motion to dismiss partially because of the business judgment rule, which governs when boards refuse shareholder demands. The principle of the business judgment rule is that “courts presume that the board refused the demand on an informed basis, in good faith and in honest belief that the action taken was in the best interest of the company.” Id. The shareholder who brings the derivative suit has the burden of rebutting the presumption that the board acted in good faith or showing that the board did not base its decision on a reasonable investigation.

Cybersecurity is a developing area. People are still unsure how prevalent and how damaging the problem is, and it is difficult to determine what a board needs to do with such ambiguous information. At a time when there are no set corporate cybersecurity standards, it is difficult for a shareholder to show bad faith or a lack of reasonable investigation. Until clear standards and procedures for cybersecurity are widely adopted, derivative suits over data breaches will likely be dismissed, as in Palkon.