Privacy


Target Data Security Breach: It’s Lawsuit Time!

by Jenny Warfield, UMN Law Student, MJLST Staff

On December 19th, 2013, Target announced that it fell victim to the second-largest security attack in US retail history. While initial reports showed the hack compromised only the credit and debit card information (including PIN numbers and CVV codes) of 40 million customers, recent findings revealed that the names, phone numbers, mailing addresses, and email addresses of 70 million shoppers between November 27 and December 15 had also been stolen.

As history has proved time and again, massive data security breaches lead to lawsuits. When Heartland Payment Systems (a payment card processing service for small and mid-sized businesses) had its information on 130 million credit and debit card holders exposed in a 2009 cyber-attack, it faced lawsuits by banks and credit card companies for the costs of replacing cards, extending branch hours, and refunding consumers for fraudulent transactions. These lawsuits have so far cost the company $140 million in settlements (with litigation ongoing). Similarly, when TJX Company (parent of T.J. Maxx) had its accounts hacked in 2007, it cost the company $256 million in settlements.

Target currently faces at least 15 lawsuits in state and federal court seeking class action status, and several other lawsuits by individuals across the country. Common themes among the claims are that 1) Target failed to properly secure customer data (more specifically, that Target did not abide by the Payment Card Industry Security Standards Council’s Data Security Standards, or “PCI DSS”); 2) Target failed to promptly notify customers of the security breach in violation of state notification statutes, preventing customers from taking steps to protect against fraud; 3) Target violated the federal Stored Communications Act; and 4) Target breached its implied contracts with its customers.

A quick review of past data breach cases reveals that these plaintiffs face an uphill battle, especially in the class-action context. While financial institutions and credit card companies can point to pecuniary damages in the form of costs associated with card replacements and customer refunds for fraudulent transactions (as in the TJX and Heartland cases), the damages suffered by plaintiffs in these cases are usually speculative. Not only are customers almost always refunded for transactions they did not make, it is unclear how to value the loss of information like home addresses and phone numbers in the absence of evidence that such information has been used to the customer’s detriment. As a result, almost all of the class action suits brought against companies in cyber-attacks have failed.

However, the causes of the cyber-attack on Target are still unclear, and it may be too early to speculate on Target’s liability. Target is currently being investigated by the DOJ (and potentially the FTC) for its role in the data breach while also conducting its own investigation in partnership with the U.S. Secret Service. In any event, affected customers should take advantage of Target’s year-long free credit monitoring while waiting for more facts to unfold.


Can I Keep It Private? Privacy Laws in Various Contexts

by Ude Lu, UMN Law Student, MJLST Articles Editor

Target Corp., the second-largest retailer in the nation, announced to its customers on Dec. 20, 2013, that its payment card data had been breached. About 40 million customers who shopped at Target between Nov. 27 and Dec. 15, 2013 using credit or debit cards are affected. The stolen information includes the customer’s name, credit or debit card number, and the card’s expiration date. [Update: The breach may have affected over 100 million customers, and additional kinds of information may have been disclosed.]

This data breach stirred public discussion about data security and privacy protections. Federal Trade Commission (FTC) Commissioner Maureen Ohlhausen said on Jan. 6, during a Twitter chat, that this event highlights the need for consumer and business education on data security.

In the US, the FTC’s privacy protection enforcement runs on a “broken promise” framework: the agency enforces privacy protection according to what a business entity has promised its customers. Privacy laws have taken on increasing importance in the wake of the information age.

Readers of this blog are encouraged to explore the following four articles published in MJLST, discussing privacy laws in various contexts:

  1. Constitutionalizing E-mail Privacy by Informational Access, by Manish Kumar. This article highlights the legal analyses of email privacy under the Fourth Amendment.
  2. It’s the Autonomy, Stupid: Political Data-Mining and Voter Privacy in the Information Age, by Chris Evans. This article explores the unique threats to privacy protection posed by political data-mining.
  3. Privacy and Public Health in the Information Age: Electronic Health Records and the Minnesota Health Records Act, by Kari Bomash. This article examines the adequacy of the Minnesota Health Records Act (MHRA) that the state passed to meet then-Governor Pawlenty’s 2015 mandate requiring every health care provider in Minnesota to have electronic health records.
  4. An End to Privacy Theater: Exposing and Discouraging Corporate Disclosure of User Data to the Government, by Christopher Soghoian. This article explores how businesses vary in disclosing privacy information of their clients to governmental agencies.


Supreme Court Denies Request To Review FISC Order

by Erin Fleury, UMN Law Student, MJLST Staff

Last week, the Supreme Court denied a petition requesting a writ of mandamus to review a decision that ordered Verizon to turn over domestic phone records to the National Security Agency (“NSA”) (denial available here). The petition alleged that the Foreign Intelligence Surveillance Court (“FISC”) exceeded its authority because the production of these types of records was not “relevant to an authorized investigation . . . to obtain foreign intelligence information not concerning a United States person.” 50 U.S.C. § 1861(b)(2)(A).

The Justice Department filed a brief with the Court that challenged the standing of a third party to request a writ of mandamus from the Supreme Court for a FISC decision. The concern, however, is that telecommunication companies do not adequately fight to protect their users’ privacy concerns. This apprehension certainly seems justified considering the fact that no telecom provider has yet challenged the legality of an order to produce user data. Any motivation to fight these orders for data is further reduced by the fact that telecommunication companies can obtain statutory immunity to lawsuits by their customers based on turning over data to the NSA. 50 U.S.C. § 1885a. If third parties cannot ask a higher court to review a decision made by the FISC, then the users whose information is being given to the NSA may have their rights limited without any recourse short of legislative overhaul.

Unfortunately, as with most denials of review, the Supreme Court did not provide its reasoning for denying the request. The question remains, though: if end users cannot object to these orders (and may not even be aware that their data was turned over in the first place), and the telecommunication companies have no reason to, is the system adequately protecting the privacy interests of individual citizens? Or can the FISC operate with impunity as long as the telecom carriers do not object?


Censorship Remains Viable in China – But For How Long?

by Greg Singer, UMN Law Student, MJLST Managing Editor

In the west, perhaps no right is held in higher regard than the freedom of speech. It is almost universally agreed that a person has the inherent right to speak his or her mind without fear of censorship or reprisal by the state. Yet for the more than 1.3 billion people currently residing in what is one of the oldest civilizations on the planet, such a concept is either unknown or wholly unreflective of the reality they live in.

Despite the exploding number of internet users in China (from 200 million users in 2007 to over 530 million by the end of the first half of 2012, more than the entire population of North America), the Chinese government has remained implausibly effective at banishing almost all traces of dissenting thought from the wires. A recent New York Times article detailing the fabulous wealth of Chinese Premier Wen Jiabao and his family members (at least $2.7 billion) resulted in the almost immediate censorship of the newspaper’s English and Chinese web presence in China. Not stopping there, the censorship apparatus went on to scrub almost all links, reproductions, or blog posts based on the article, leaving little trace of its existence to the average Chinese citizen. Earlier this year, Bloomberg News suffered a similar fate, as it too published an unacceptable report regarding the unusual wealth of Xi Jinping, the Chinese Vice President and expected successor of current President Hu Jintao.

In “Forbidden City Enclosed by the Great Firewall: The Law and Power of Internet Filtering in China,” published in the Winter 2012 issue of the Minnesota Journal of Law, Science & Technology, Jyh-An Lee and Ching-Yi Liu explain that it is not mere tenacity that permits such effective censorship: the structure of the Chinese internet itself has been designed to allow a centralized authority to control and filter the flow of all communications over the network. Even as content creation on the web grows more decentralized, it appears that censorship will remain technically possible in China for the foreseeable future.

Still, technical capability is not synonymous with political permissibility. A powerful middle class is emerging in the country, with particular strength in the large urban areas, where ideas and sentiments are prone to spread quickly, even in the face of government censorship. At the same time, GDP growth is steadily declining from its tremendous peak in the mid-2000s. These two factors may combine to produce a population that has the time, education, and wherewithal to challenge a status quo that will perhaps look somewhat less like marvelous prosperity in the coming years. If China wishes to enter the developed world as a peer to the west (with an economy based on skilled and educated individuals, rather than mass labor), addressing its ongoing civil rights issues seems like an almost unavoidable prerequisite.


Political Data-Mining and Election 2012

by Chris Evans, UMN Law Student, MJLST Managing Editor

Thumbnail-Chris-Evans.jpgIn “It’s the Autonomy, Stupid: Political Data-Mining and Voter Privacy in the Information Age,” I wrote about the compilation and aggregation of voter data by political campaigns and how data-mining can upset the balance of power between voters and politicians. The Democratic and Republican data operations have evolved rapidly and quietly since my Note went to press, so I’d like to point out a couple of recent articles on data-mining in the 2012 campaign.

In August, the AP ran this exclusive: “Romney uses secretive data-mining.” Romney has hired an analytics firm, Buxton Co., to help his fundraising by identifying untapped wealthy donors. The AP reports:

“The effort by Romney appears to be the first example of a political campaign using such extensive data analysis. President Barack Obama’s re-election campaign has long been known as data-savvy, but Romney’s project appears to take a page from the Fortune 500 business world and dig deeper into available consumer data.”

I’m not sure it’s true Buxton is digging any deeper than the Democrats’ Catalist or Obama’s fundraising operation. Campaigns from both parties have been scouring consumer data for years. As for labeling Romney’s operation “secretive,” the Obama campaign wouldn’t even comment on its fundraising practices for the article, which strikes me as equally if not more secretive. Political data-mining has always been nonpartisanly covert; that’s part of the problem. When voters don’t know they’re being monitored by campaigns, they are at a disadvantage to candidates. (And when they do know they’re being monitored, they may alter their behavior.) This is why I argued in my Note for greater transparency of data-mining practices by candidates.

A more positive spin on political data-mining appeared last week, also by way of the AP: “Voter registration drives using data mining to target their efforts, avoid restrictive laws.” Better, cheaper technology and Republican efforts to restrict voting around the country are inducing interest groups to change how they register voters, swapping their clipboards for motherboards. This is the bright side of political data-mining: being able to identify non-voters, speak to them on the issues they care about, and bring them into the political process.

The amount of personal voter data available to campaigns this fall is remarkable, and the ways data-miners aggregate and sort that data is fascinating. Individuals ought to be let in on the process, though, so they know what candidates and groups are collecting what type of personal information, and so they can opt out of the data-mining.


Obama, Romney probably know what you read, where you shop, and what you buy. Is that a problem?

by Bryan Dooley, UMN Law Student, MJLST Staff

Most voters who use the internet frequently are probably aware of “tracking cookies,” used to monitor online activity and target ads and other materials specifically to individual users. Many may not be aware, however, of the increasing sophistication of such measures and the increasing extent of their use, in combination with other “data-mining” techniques, in the political arena. In “It’s the Autonomy, Stupid: Political Data-Mining and Voter Privacy in the Information Age,” published in the Spring 2012 volume of the Minnesota Journal of Law, Science & Technology, Chris Evans discusses the practice and its implications for personal privacy and voter autonomy.
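The underlying mechanics of a tracking cookie are mundane. As a rough sketch of a first-party tracker (the helper name assign_visitor_id and the cookie name are invented for illustration, using only the Python standard library), all a site needs in order to recognize a returning visitor is:

```python
import uuid
from http import cookies

def assign_visitor_id(cookie_header):
    """Link a request to a persistent visitor profile via a cookie.

    If the browser already presents our tracking cookie, reuse its ID so
    this visit can be tied to earlier ones; otherwise mint a fresh ID and
    return a Set-Cookie value asking the browser to keep it for a year.
    """
    jar = cookies.SimpleCookie(cookie_header or "")
    if "visitor_id" in jar:
        return jar["visitor_id"].value, None  # known visitor, nothing to set
    new_id = uuid.uuid4().hex
    return new_id, f"visitor_id={new_id}; Path=/; Max-Age=31536000"
```

Third-party trackers work the same way, except the cookie is set from an ad network’s domain embedded across many sites, which is what lets a single firm stitch one person’s browsing together into the kind of profile Evans describes.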

Both parties rely extensively on data-mining to identify potentially sympathetic voters and target them, often with messages tailored carefully to the political leanings suggested by detailed individual profiles. Technological developments and the widespread commercial collection of consumer data, of which politicians readily avail themselves, allow political operatives to develop (and retain for future campaigns, and share) personal voter profiles with a broad swath of information about online and market activity.

As Evans discusses, this allows campaigns to allocate their resources more efficiently, and likely increases voter turnout by actively engaging those receptive to a certain message. It also has the potential to chill online discourse and violate the anonymity of the voting booth, a central underpinning of modern American democracy. Evans ultimately argues that existing law fails to adequately address the privacy issues stemming from political data-mining. He suggests additional protections are necessary: First, campaigns should be required to disclose information contained in voter profiles upon request. Second, voters should be given an option to be excluded from such profiling altogether.


Censorship, Technology, and Bo Xilai

by Jeremy So, UMN Law Student, MJLST Managing Editor

As China’s Communist party prepares for its once-a-decade leadership transition, the news has instead been dominated by the fall from power of Bo Xilai, the former head of the Chongqing Communist Party and formerly one of the party’s potential leaders. While such a fall is itself unusual, the dialogue surrounding Bo’s fall is also remarkable: Chinese commentators have been able to express their views while facing only light censorship.

This freedom is remarkable because of the Chinese government’s potential control over the internet, outlined by Jyh-An Lee and Ching-Yi Liu in “Forbidden City Enclosed by the Great Firewall: The Law and Power of Internet Filtering in China,” recently published in the Minnesota Journal of Law, Science & Technology. Lee and Liu explain how, early in the internet’s development, the Chinese government decided to limit users’ ability to access non-approved resources. By implementing a centralized architecture, the government has been able to impose strict content-filtering controls. In conjunction with traditional censorship, the Chinese government has an unprecedented amount of control over what can be viewed online.

Lee and Liu argue that these technological barriers rise to the level of de facto law. Within this framework, the Chinese government’s history of censorship indicates that there are rules against criticizing the party, its leaders, or its actions.

Chinese internet reactions to the Bo Xilai case are notable because they have included criticism of all three. Posts expressing differing opinions, including those criticizing the government’s reaction and those supporting the disgraced leader, have not been taken down. Such posts have remained online even while commentary on China’s next leader, Xi Jinping, has been quickly taken down. Given the Chinese government’s potential control and past use of those controls, the spread of such dissent must be intentional.

Whether this is part of a broader movement towards more openness, a calculated response by the party, or a failure of Chinese censorship technology remains to be seen. Regardless, the changing nature of the internet and technology will force the Chinese government to adapt.


Digital Privacy: Who Is Tracking You Online?

by Eric Friske, UMN Law Student, MJLST Managing Editor

From one mouse click to the next, internet users knowingly and unknowingly leave a vast array of online data points that reveal something about those users’ identities and preferences. These digital footprints are collected and exploited by websites, advertisers, researchers, and other parties for a multitude of commercial and non-commercial purposes. Despite growing awareness by users that their online activities do not simply evaporate into the ether, many people are unaware of the extent to which their actions may be visible, collected, or used without their knowledge.

Scholars Omer Tene and Jules Polonetsky, in their article “To Track or ‘Do Not Track’: Advancing Transparency and Individual Control in Online Behavioral Advertising,” discuss the various online tracking technologies that industries have used to document and analyze these digital footprints, and argue that policymakers should address the underlying value question regarding the benefits of online data usage and its inherent privacy costs.

With each new technological advance that seeks to make us more connected with the world around us, our daily lives and our online presence have become increasingly intertwined. Ordinary users have become more aware that their online activities lack the anonymity they once thought existed. However, despite this awareness, many users may not know what personal information is available online, how it got there, or how to prevent its collection. Moreover, some tracking services are undertaking efforts to prevent users from evading them, even when those users intentionally attempt to keep their online activities private.

Corporations have begun to recognize the importance of providing consumers with the opportunity to choose what information they wish to share while on the internet. For example, last May, Microsoft announced that Internet Explorer 10 will have a “Do Not Track” flag on by default, stating that it believes “consumers should have more control over how information about their online behavior is tracked, shared and used.” Not unexpectedly, the Interactive Advertising Bureau, a global non-profit trade association for the online advertising industry, denounced Microsoft’s move as “a step backwards in consumer choice,” although some have argued that these pervasive tracking practices are actually robbing individuals of free choice. It should perhaps be noted that the popular internet browser Firefox already possesses a Do Not Track feature, though it is not enabled by default, and Google has stated that it will include Do Not Track support for Chrome by the end of the year.

Regardless, while academic and political discussions of how to address these concerns continue to simmer, internet users who desire privacy must learn to protect themselves in an online environment replete with corporations relentlessly trying to scavenge every morsel of information they leave behind, a task made no easier by how prevalent tracking has become.


Don’t Track Me! – Okay Maybe Just a Little

by Mike Borchardt, UMN Law Student, MJLST Managing Editor

Recent announcements from Microsoft have helped to underscore the current conflict between internet privacy advocates and businesses which rely on online tracking and advertising to generate revenues. Microsoft recently announced that “Do Not Track” settings will be enabled by default in the next version of their web browser, Internet Explorer 10 (IE 10).

As explained by Omer Tene and Jules Polonetsky in their article in the Minnesota Journal of Law, Science & Technology 13.1, “To Track or ‘Do not Track’: Advancing Transparency and Individual Control in Online Behavioral Advertising,” the amount and type of data web services and advertisers collect on users has developed as quickly as the internet itself. (For an excellent overview of various technologies used to track online behavior, and the variety of information they can obtain, see section II of their article). The success and ability of online services to supply their products free to users is heavily dependent on this data tracking and the advertising revenue it generates. Though many online services are dependent on this data collection in order to generate revenue, users and privacy advocates are suspicious about the amount of data being collected, how it is being used, and who has access.

And it is in response to this growing environment of unease over the amount and types of user data being collected that Microsoft has added these new Do Not Track features. (All other major browsers are set to include Do Not Track settings, with Google’s Chrome the last to announce them; those settings, however, will likely not be enabled by default.) This may not be the boon for user privacy that some have been hoping for.

Do Not Track is a voluntary standard developed by the web industry; it relies on browser headers to tell advertisers not to track users (for a more in-depth description of how this technology works, see pgs. 325-26 of Tene and Polonetsky’s article). This is where the problem arises: websites can ignore the headers and track users anyway. Part of the industry-developed standard is that users must opt in to Do Not Track; it cannot be enabled by default. In response to Microsoft’s default Do Not Track settings, Apache (the most common web server application) has been updated to ignore the Do Not Track setting from IE 10 users. With one side claiming that “Microsoft deliberately violate[d] the standard,” and the other claiming that the industry is ignoring privacy for profit, the conflict over user data collection seems poised to continue.
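The mechanism itself is almost trivially simple, which is part of why compliance is voluntary: a browser with Do Not Track enabled sends a “DNT: 1” header with every request, and the server alone decides whether to honor it. A minimal sketch in Python (the function name tracking_allowed is hypothetical, for illustration only):

```python
def tracking_allowed(headers):
    """Decide whether to track a visitor, honoring the voluntary DNT header.

    A browser with Do Not Track enabled sends the header "DNT: 1".
    Nothing in HTTP enforces compliance; the server simply chooses
    whether to consult this value at all.
    """
    # HTTP header names are case-insensitive, so normalize before lookup.
    normalized = {name.lower(): value.strip() for name, value in headers.items()}
    return normalized.get("dnt") != "1"
```

A site that respects the preference would skip setting its tracking cookie whenever this returns False; a site that does not honor it simply never performs a check like this at all, which is exactly what makes the standard toothless without either industry buy-in or legislation.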

A variety of alternatives to the industry-implemented Do Not Track settings have been proposed. As the conflict continues, one of the most commonly proposed solutions is legislation. Privacy advocates and web companies, however, have very different views about what Do Not Track legislation should cover. (For differing viewpoints, see “‘Do Not Track’ Internet spat risks legislative crackdown.”) Tene and Polonetsky argue that a value judgment must be made, that policymakers must evaluate whether the “information-for-value business model currently prevailing online” is socially acceptable or “a perverse monetization of users’ fundamental rights,” and create Do Not Track standards accordingly. Unfortunately, this choice between the generally free-to-use websites and web services users have come to expect on one hand, and personal privacy on the other, does not seem like much of a choice at all.

There are, however, alternatives to the standard Do Not Track proposals. One of the best is allowing the collection of user data to continue but legally limiting the ways in which that data can be used. Tene and Polonetsky recommend a variety of policies that, if enacted, could help assuage users’ privacy concerns while allowing web services to continue generating targeted advertising revenue. Some of their proposals include limiting the use of user data to advertising and fraud prevention, preventing the use of data collected from children, anonymizing data as much as possible, limiting the retention of user data, limiting transmission of data to third parties, and clearly explaining to users what data is being collected about them and how it is being used. Many of these options have been proposed before, but used in conjunction they could provide an acceptable alternative to the strict Do Not Track approach proposed by privacy advocates, while still allowing the free-to-use, advertising-based web to thrive.