2013

Making it Personal: The Key to Climate Change Action

by Brandon Palmen, UMN Law Student, MJLST Executive Editor

Climate change is the ultimate global governance challenge, right? It’s an intractable problem, demanding a masterfully coordinated international response and a delicate political solution, balancing entrenched economic interests against deeply discounted, diffuse future harms that are still highly uncertain. But what if that approach to the problem were turned on its head? We often hear that the earth will likely warm 3-5 degrees centigrade (+/- 2 degrees), on average, over the next hundred years, and we may wonder whether that is really more painful than higher utility bills and the fear of losing business and jobs to free-riding overseas competitors. What if, instead, Americans asking “what’s in it for me?” could just go online and look up their home towns, the lakes where they vacation, the mountains where they ski, and the fields where their crops are grown, and obtain predictions of how climate change is likely to impact the places they actually live and work?

A new climate change viewing tool from the U.S. Geological Survey is a first step toward changing that paradigm. The tool consolidates and averages temperature change predictions based on numerous climate change models and displays them on a map. The result is beautiful in its simplicity; like a weather map, it allows everyday information consumers to begin to understand how climate change will affect their lives on a daily basis, making what had been an abstract concept of “harm” more tangible and actionable. So far, the tool appears to use pre-calculated, regional values and static images (to support high-volume delivery over the internet, no doubt), and switching between models reveals fascinatingly wide predictive discrepancies. But it effectively communicates the central trend of climate change research, and suggests the possibility of developing a similar tool that could provide more granular data, either by incorporating the models and crunching numbers in real time, or by extrapolating missing values from neighboring data points. Google Earth also allows users to view climate change predictions geographically, but the accessibility of the USGS tool may give it greater impact with the general public.

There are still challenging bridges to be crossed — translating the effects that “N-degree” temperature changes will likely have on particular species, and “tagging,” “fencing,” or “painting” specific tracts of land with those species — but it is plausible that within a few years, we will be able to obtain tailored predictions of climate change’s impact on the environments that actually matter to us — the ones in which we live. Of course those predictions will be imprecise or even wholly incorrect, but if they’re based on the best-available climate models, coupled with discoverable information about local geographic features, they’ll be no worse than many other prognostications that shape everyday decisions, like stock market analysis and diet/nutrition advice. Maybe the problem with public climate change debate is that it’s too scientific, in the sense that scientists know the limitations of their knowledge and models, and are wary of “defrauding” the public by drawing inductive conclusions that aren’t directly confirmed by evidence. Or maybe there’s just no good way to integrate the best climate models with local environmental and economic knowledge … yet.

Well, so what? Isn’t tackling climate change still an intractable global political problem? Maybe not. The more people understand about the impacts climate change will have on them personally, the more likely they are to take action themselves to ameliorate it, even absent meaningful top-down climate change policy. And while global governance may be beyond the reach of most individuals, local and state programs are not so far removed from private participation. In her recent article, Localizing Climate Change Action, Myanna Dellinger examines several such “home-grown” programs and concludes that they may be an important component of climate change mitigation. Minnesotans are probably most worried about climate change’s impact on snow storms, lake health, and crop yields, while Arizonans might worry more about drought and fragile desert ecosystems, and Floridians might worry about hurricanes and beach tourism. If all of these local groups are motivated by the same fundamental problem, their actions may be self-coordinating in effect, even if they are not coordinated by design.


Worldwide Canned Precooked Meat Product: The Legal Challenges of Combating International Spam

by Nathan Peske, UMN Law Student, MJLST Staff

On May 1, 1978, Gary Thuerk sent the first unsolicited mass e-mail on ARPANET, the predecessor to today’s Internet. Thuerk, a marketing manager for Digital Equipment Corporation (DEC), sent information about DEC’s new line of microcomputers to all 400 users of the ARPANET. Since ARPANET was still run by the government and subject to rules prohibiting commercial use, Thuerk received a stern tongue-lashing from an ARPANET representative. Unfortunately, the rebuke failed to deter future senders of unsolicited e-mail, or spam, which has been a growing problem ever since.

From a single moderately annoying but legitimate advertisement sent by a lone individual in 1978, spam has exploded into a malicious, hydra-headed juggernaut. Trillions of spam e-mails are sent every year, accounting for up to 90% of all e-mail sent. Most spam e-mails are false ads for adult devices or health, IT, finance, or education products. These e-mails routinely harm recipients through money scams like the famous Nigerian scam, phishing attacks that steal the recipient’s credentials, or malware distributed either directly or through linked websites. It is estimated that spammers cost the global economy $20 billion a year in everything from lost productivity to the additional network equipment required to carry the massive increase in e-mail traffic due to spam.

While spam is clearly a major problem, legal steps to combat it are confronted by a number of identification and jurisdictional issues. Gone are the Gary Thuerk days when the sender’s address could simply be read off the spam e-mail. Spam today is typically distributed through large networks of malware-infected computers. These networks, or botnets, are controlled by botmasters who send out spam without the infected users’ knowledge, often on behalf of another party. Spam may be created in one jurisdiction, transmitted by a botmaster in another jurisdiction, distributed by bots in the botnet somewhere else, and received by recipients all over the world.

Anti-spam laws generally share several provisions. They usually include one or all of the following: opt-in policies prohibiting the sending of bulk e-mails to users who have not subscribed to them, opt-out policies requiring that a user be able to unsubscribe at any time, clear and accurate indication of the sender’s identity and the advertising nature of the message, and a prohibition on e-mail address harvesting. While effective against spammers who can be found within the enacting jurisdiction, these laws cannot touch other members of the spam chain outside its borders. There is also a lack of laws penalizing the legitimate companies, often more easily identified and prosecuted, that pay for spamming services. Only the spammers themselves are prosecuted.

Effectively reducing spam will require a more effective international framework to mirror the international nature of spam networks. Increased international cooperation will help identify and prosecute members throughout the spam chain. Changes in the law, such as penalizing those who use spamming services to advertise, will help reduce the demand for spam.

Efforts to reduce spam cannot include just legal action against spammers and their patrons. Much like the international drug trade, as long as spam remains a lucrative market, it will attract participants. Technical and educational efforts must be made to reduce the profit in spam. IT companies and industry groups are working to develop anti-spam techniques. These range from blocking IP addresses and domains at the network level to analyzing and filtering individual messages, along with a host of other approaches. Spam experts are also experimenting with techniques like spamming the spammers with false responses to reduce their profit margins. Efforts to educate users on proper e-mail security and simple behaviors like “if you don’t know the sender, don’t open the attachment” will also help bring down spammers’ profit margins by decreasing the number of responses they get.

Like many issues facing society today, e-mail spam requires a response at all levels of society. National governments must work individually and cooperatively to pass effective anti-spam laws and prosecute spammers. Industry groups must develop ways to detect and destroy spam and the botnets that distribute it. And individual users must be educated on techniques to defend themselves from the efforts of spammers. Only with a combined, multi-level effort can the battle against international e-mail spam be truly won.


Supreme Court Denies Request to Review FISC Order

by Erin Fleury, UMN Law Student, MJLST Staff

Last week, the Supreme Court denied a petition requesting a writ of mandamus to review a decision that ordered Verizon to turn over domestic phone records to the National Security Agency (“NSA”) (denial available here). The petition alleged that the Foreign Intelligence Surveillance Court (“FISC”) exceeded its authority because the production of these types of records was not “relevant to an authorized investigation . . . to obtain foreign intelligence information not concerning a United States person.” 50 U.S.C. § 1861(b)(2)(A).

The Justice Department filed a brief with the Court challenging the standing of a third party to request a writ of mandamus from the Supreme Court for a FISC decision. The concern, however, is that telecommunication companies do not adequately fight to protect their users’ privacy. This apprehension certainly seems justified considering that no telecom provider has yet challenged the legality of an order to produce user data. Any motivation to fight these orders is further reduced by the fact that telecommunication companies can obtain statutory immunity from lawsuits by their customers based on turning over data to the NSA. 50 U.S.C. § 1885a. If third parties cannot ask a higher court to review a decision made by the FISC, then the users whose information is being given to the NSA may have their rights limited without any recourse short of legislative overhaul.

Unfortunately, like most denials of review, the Supreme Court did not provide its reasoning for denying the request. The question remains, though: if end users cannot object to these orders (and may not even be aware that their data was turned over in the first place), and the telecommunication companies have no reason to, is the system adequately protecting the privacy interests of individual citizens? Or can the FISC operate with impunity as long as the telecom carriers do not object?


Problems with Forensic Expert Testimony in Arson Cases

by Becky Huting, UMN Law Student, MJLST Staff

In MJLST Volume 14, Issue 2, Rachel Diasco-Villa explored the evidentiary standard for arson investigation. Ms. Diasco-Villa, a lecturer at the School of Criminology and Criminal Justice at Griffith University, examined the history of arson-investigation knowledge and how the manner in which it is conveyed in court can mislead, possibly leading to faulty conclusions and wrongful convictions. The article discussed the case of Todd Willingham, who was convicted and sentenced to death for setting fire to his home and killing his three children. Willingham filed numerous unsuccessful appeals and petitions for clemency, and several years after his execution, a commission’s investigation concluded that there were several alternative explanations for the cause of the fire and that neither the investigation nor the evidentiary testimony complied with existing standards.

During the trial, the prosecutor’s fire expert, a Deputy Fire Marshal from the State Fire Marshal’s Office, testified as to why he believed the fire was intentionally set. Little science was used in his explanation:

Heat rises. In the winter time when you are going to the bathroom and you don’t have any carpet on the rug. . .the floor is colder than the ceiling. It always is. . . So when I found that floor is hotter than the ceiling, that’s backwards, upside down. . .The only reason that the floor is hotter is because there was an accelerant. That’s the difference. Man made it hotter or woman or whatever.

The expert went on to explain that fire investigations and fire dynamics are logical and common sense, such that jurors could evaluate the evidence with their own senses and experiences and arrive at the same conclusions. All samples taken from “suspicious” areas of the house tested negative for any traces of an accelerant. The expert explained the chemical results: “And so there won’t be any — anything left; it will burn up.”

Fire and arson investigation has traditionally been experiential knowledge, passed down from mentors to their apprentices without experimental or scientific testing to validate its claims. Fire investigators do not necessarily have scientific training, nor must they hold any educational degree beyond a high school diploma. The National Academy of Sciences released a report in 2009 stating that the forensic sciences needed standardized reporting of their findings and testimony, and fire and arson investigation was no exception. The International Association of Arson Investigators has pushed back on such guidance, filing an amicus brief arguing that arson investigation is experience-based rather than novel or scientific, so it should not be subjected to higher evidentiary standards. This argument failed to convince the court, which ruled that fire investigation expertise should be subject to scrutiny under the Daubert standards that call for exacting measures of reliability.

Ms. Diasco-Villa’s article also considers the risk of contextual bias and overreach should these experts’ testimony be admitted. In the Willingham case, the expert was given wide latitude as to his opinion on the defendant’s guilt or innocence. He was allowed to testify to his belief that the suspect’s intent “was to kill the little girls” and to identify the defendant by name as the individual who started the fire. Under Federal Rule of Evidence 702, expert witnesses are given a certain degree of latitude in stating their opinions, but the author is concerned with the risk of jurors giving extra weight to this arguably overreaching expert testimony.

She concludes by presenting statistics on the vast number of fires in the United States each year (1.6 million) and the significant number classified as intentionally set (43,000). There is a very real potential that thousands of arrests and convictions each year have relied on overreaching testimony or on evidence collected and interpreted using a discredited methodology. This state of affairs in arson investigation warrants continued improvements in forensic science techniques and evidentiary standards.


My Body, My Tattoo, My Copyright?

by Jenny Nomura, UMN Law Student, MJLST Managing Editor

A celebrity goes into a tattoo shop and gets an elaborate tattoo on her arm. The celebrity and her tattoo appear on TV and in magazines, and as a result, the tattoo becomes well-known. A director decides he wants to copy that tattoo for his new movie and has an actress appear in the film with a copy of the signature tattoo. Not long after, the film company receives notice of a copyright infringement lawsuit filed by the original tattoo artist. Similar situations are actually happening. The artist who created Mike Tyson’s face tattoo sued Warner Bros. for copying the tattoo in “The Hangover Part II,” and Warner Bros. settled with him. Another tattoo artist, Christopher Escobedo, designed a large tattoo for mixed martial arts fighter Carlos Condit. Both the tattoo and the fighter appeared in a video game, and now Escobedo wants thousands of dollars for copyright infringement. Most people who get a tattoo never think about potential copyright issues, but these recent events might change that.

These situations leave us with a lot of uncertainties and questions. First of all, is there a copyright in a tattoo? A tattoo seems to meet the basic requirements for copyright protection, though perhaps only a thin copyright (most tattoos don’t involve much originality). Assuming there is a copyright, who owns it: the wearer or the tattoo artist? Whom can the owner, whoever that is, sue for copyright infringement? Can he or she sue other tattoo artists for violating the right to prepare derivative works? Can he or she sue for violation of the reproduction right if another tattoo artist copies the original onto someone else? What about bringing a lawsuit against a film company for publicly displaying the tattoo? There are plenty of tattoos of copyrighted and trademarked material, so could tattoo artists and wearers themselves be sued for infringement?

What can be done to avoid copyright infringement lawsuits? Assuming that the tattoo artist owns the copyright, the prospective wearer could have the artist sign a release. It may cost more to get the tattoo, but there is no threat of a lawsuit. It has also been argued that the best outcome would be for a court to find an implied license. Sooner or later someone will refuse to settle, and we will have a tattoo copyright infringement lawsuit and, hopefully, some answers.


Uh-Oh Oreo? The Food and Drug Administration Takes Aim at Trans Fats

by Paul Overbee, UMN Law Student, MJLST Staff

In the near future, foods that are currently part of your everyday diet may undergo some fundamental changes. From cakes and cookies to french fries and bread, a recent action by the Food and Drug Administration puts these types of products in the spotlight. On November 8, 2013, the FDA filed a notice requesting comments and scientific data on partially hydrogenated oils. The notice states that partially hydrogenated oils, the primary dietary source of artificial trans fat, are no longer considered generally recognized as safe by the Food and Drug Administration.

Partially hydrogenated oils are created during a stage of food processing that makes vegetable oil more solid. The process contributes to a more pleasing texture, greater shelf life, and stronger flavor stability. Some trans fat also occurs naturally in certain animal-based foods, including some milks and meats; the FDA’s proposal is meant only to restrict the use of artificial partially hydrogenated oils. According to the FDA’s findings, consumption of partially hydrogenated oils raises levels of “bad” cholesterol, which has been linked to a higher risk of coronary heart disease.

Some companies have positioned their products so that they should not have to react to these changes. The FDA incentivized companies in 2006 by putting rules in place to promote trans fat awareness. The regulations allowed companies to label their products as trans fat free if they lowered the level of partially hydrogenated oils to near zero. Kraft Foods decided to change the recipe of its then 94-year-old product, the Oreo. It took Kraft Foods two and a half years to reformulate the Oreo, after which the trans fat free version was introduced to the market. The Washington Post invited two pastry chefs to taste-test the new trans fat free Oreo against the original product; their conclusion was that the two were virtually the same. That should reassure consumers who worry that their favorite snacks will be pulled off the shelves.

Returning to the FDA’s notice, there are a few items worth highlighting. At this stage, the FDA is still formulating its position on how to regulate partially hydrogenated oils, and actual implementation may take years. Once a rule takes effect, products seeking to continue using partially hydrogenated oils will still be able to seek approval from the FDA on a case-by-case basis. The FDA is seeking comment on the following issues: the correctness of its determination that partially hydrogenated oils are no longer generally recognized as safe, ways to approach a limited use of partially hydrogenated oils, and any other uses of partially hydrogenated oils that have previously been sanctioned.

People interested in participating in the FDA’s determination of its next steps on partially hydrogenated oils can submit comments at http://www.regulations.gov.


Required GMO Food Labels Without Scientific Validation Could Undermine Food Label Credibility

by George David Kidd, UMN Law Student, MJLST Managing Editor

GMO food-label laws on the voting docket in twenty-four states will determine whether food products that contain genetically modified ingredients must be labeled or banned from store shelves. Recent newspaper articles raise additional concerns that states’ voting outcomes may spur similar federal standards. State, and perhaps future federal, regulation might be jumping the gun, however, by attaching stigma to GMO products without any scientific basis. As discussed in J.C. Horvath’s How Can Better Food Labels Contribute to True Choice?, FDA labeling requirements are generally grounded in some scientific support. Yet no study has concluded that genetically modified ingredients are unsafe for human consumption. Required labeling based only on the belief that we have a right to know what we eat, without any scientific basis, could further undermine the credibility of food labeling as a whole.

The argument for labeling GMO food products is simple: we have a “right to know what we eat.” The upshot is that we should know, or be able to find out, exactly what we are putting into our bodies, and be able to make our own consumer decisions based upon the known consequences of a product’s manufacture and consumption. But not knowing whether our food is synthetic, or what its exact origins are, might not matter if the product is better for both us and the environment. Indeed, the FDA admits that “some ingredients found in nature can be manufactured artificially and produced more economically, with greater purity and more consistent quality, than their natural counterparts.” If some manufactured products are better than their natural counterparts, why are we banning or regulating GMO products before we know whether they are good or bad? If we knew they were bad in the first place, GMO products would likely already be banned.

Analysis is an important part of establishing the underlying credibility of labeling claims on food products. Without some regulation of label credibility, there would be an even greater proliferation of bogus health claims on food packaging. Generally, the U.S. Food and Drug Administration has held that health claims on food labels are allowed as long as they are supported by evidence, and that food labeling is required when it discloses information of “material consequence” to consumers in their choice to purchase a product. For example, the FDA has found that micro- and macro-nutritional content, ingredients, net weight, commonly known allergens, and whether an “imitation” or diluted product is used must be included on food labeling. The FDA has not, however, required labeling for dairy products produced from cows treated with synthetic growth hormone (rBST), because extensive studies have determined that rBST has no effect on humans. Just imagine the FDA approving food labeling claims without evaluating whether those claims were supported by evidence.

Premature adoption of new state or federal labeling policy would contradict and undermine the scientific standards currently underlying FDA labeling regulation. Deciding to require labeling or to ban GMOs, absent any scientific rigor as to whether GMO products are safe, only perpetuates the problem of “meaningless” food labels. Further, the costs of new labeling requirements and any resulting increases in food prices might ultimately be passed on to consumers without enough information to justify the increase. And now that GMOs are allegedly commonplace ingredients, shouldn’t legislation wait until the verdict is in on whether GMO products are good or bad for human health before taking further action?


The Importance of Appropriate Identification within Social Networking

by Shishira Kothur, UMN Law Student, MJLST Staff

Social networking has become a prominent form of communication and expression in society. Many people post and blog about their personal lives, believing that they are hidden behind separate account names. This supposed anonymity gives a false sense of security, as people post and upload incriminating and even embarrassing information about themselves and others. This information, while generally viewed only by an individual’s 200 closest friends, has also found its way into the courtroom.

This issue is explored further in Writings on the Wall: The Need for an Authorship-Centric Approach to the Authentication of Social-Networking Evidence, published in Volume 13, Issue 1 of the Minnesota Journal of Law, Science and Technology. Professor Ira P. Robbins emphasizes that because social media provides an easy outlet for wrongful behavior, it will inevitably find its way into litigation as evidence. His article focuses on the courts’ efforts to authenticate evidence drawn from Facebook, Twitter, and other social media. Very few people take care to set appropriate privacy settings. As a result, anyone can find important personal information and use it to hack accounts, submit postings under another person’s name, and incriminate others. Similarly, fake accounts have become a prominent tool for harassing and bullying individuals, sometimes with disastrous and even suicidal results. With untimely deaths and wrongful convictions at stake, proving the authorship of such postings becomes a critical step in collecting evidence.

Professor Robbins observes that, under current practice, a person can be connected to, and held legally responsible for, a posting without appropriate proof that the posting is in fact theirs. The article critiques the methods courts currently use to identify these individuals, arguing that they place too much emphasis on testimony about current access, potential outside access, and various other factors. It proposes instead assigning authorship of the specific item rather than attributing it to the account holder. He suggests focusing on the type of evidence at issue when applying Federal Rule of Evidence 901(b)(4), which raises appropriate questions about account ownership, account security, and the particular posting related to the suit. The analysis thoroughly explains how this new method would provide sufficient support linking the claims to the actual author. As social media continues to grow, so do the opportunities to hack, mislead, and ultimately cause harm. This influx of information needs to be filtered well in order for the courts to find the truth and serve justice to the right person.


All Signs Point Toward New Speed Limits on the Information Superhighway

by Matt Mason, UMN Law Student, MJLST Staff

The net neutrality debate, potentially the greatest hot-button issue surrounding the Internet, may be coming to a (temporary) close. After years of failed attempts to pass net neutrality legislation, the D.C. Circuit will soon rule on whether the FCC possesses the regulatory authority to impose a non-discrimination principle on large corporate ISPs such as Verizon. Verizon, the petitioner in the case, alleges that the FCC exceeded its regulatory authority by promulgating a non-discrimination net neutrality principle. In 2010, the FCC adopted a number of net neutrality provisions, including the non-discrimination principle, in order to prevent ISPs like Verizon from establishing “the equivalents of tollbooths, fast lanes, and dirt roads” on the Internet. Marvin Ammori, an Internet policy expert, believes that based on the court’s questions and statements at oral argument, the judges plan to rule in favor of Verizon. Such a ruling would effectively end net neutrality, and perhaps the Internet, as we know it.

The D.C. Circuit is not expected to rule until late this year or early next year. If it rules that the FCC does not have the regulatory power to enforce this non-discrimination principle, companies such as AT&T and Verizon will have the freedom to deliver some sites and services faster and more reliably than others, for any reason at all. As Ammori puts it, web companies (especially start-ups) would then survive based on the deals they are able to make with companies like Verizon, as opposed to the “merits of their technology and design.”

This would be terrible news for almost everyone who uses and enjoys the Internet. The Internet would no longer be neutral, which could significantly hamper online expression and creativity. Additional costs would be imposed on companies seeking to reach users, which would likely result in increased costs for users. Companies that lack the ability to pay the higher fees would end up with lower levels of service and reliability. The Internet would be held hostage and controlled by only a handful of large companies.

How the FCC will respond to a court ruling rejecting its non-discrimination principle is uncertain. Additionally, wireless carriers such as Sprint have begun to consider granting certain apps or service providers preferential treatment or access to customers. Wireless phone carriers have resisted the application of net neutrality rules to their networks, and they appear poised to continue to do so even though wireless network speeds are beginning to equal those of traditional broadband services.

In light of the possibility that the FCC lacks the regulatory authority to institute net neutrality principles, and given the number of failed attempts by Congress to pass net neutrality legislation, the question of what can be done to protect net neutrality has no easy answers. This uncertainty makes the D.C. Circuit’s decision even more critical. Perhaps the outcry from consumers, media, and web companies will be loud enough to create policy change following the likely elimination of the non-discrimination rule. Maybe Congress will respond by making the passage of net neutrality legislation a priority. Regardless of what happens, it appears as though we will soon see the installation of speed limits on the information superhighway.


The Affordable Care Act “Death Spiral”: Fact or Fiction?

by Bryan Morben, UMN Law Student, MJLST Managing Editor

A major criticism of the Patient Protection and Affordable Care Act of 2010 (“Affordable Care Act” or “ACA”) is that it will lead to a premium “death spiral.” Because the Affordable Care Act prohibits health insurance companies from discriminating against individuals with preexisting health conditions, some believe that people might just wait until they’re sick before signing up for coverage. If that happens, everyone else’s premiums will rise, causing healthy people to drop their coverage. With only sick individuals left paying premiums, the rates go up even more. And so on . . .

On the other hand, supporters of the ACA cite its other provisions to safeguard against this scenario, specifically, the subsidy/cost sharing and “individual mandate” sections. The former helps certain individuals reduce the amount of their premiums. The latter requires individuals who forego buying minimal health insurance to pay a tax penalty. The penalty generally “is capped at an amount equal to the national average premium for qualified health plans which have a bronze level of coverage available through the state Exchange.” Therefore, the idea is that enough young, healthy individuals will sign up if they would have to pay a similar amount anyway.

States that have previously guaranteed coverage for everyone with preexisting conditions have seen mixed results. New York now has some of the highest individual health insurance premiums in the country. Massachusetts, which also has an individual mandate, has claimed more success, but its law still leaves some residents wondering whether breaking it might make more sense.

There are notable differences between the ACA and the Massachusetts law as well. For example, the subsidies are larger in Massachusetts than they are with the ACA, so there’s less of an incentive for healthy people to sign up for the federal version. In addition, the ACA’s individual mandate seems to have less of a “bite” for those who elect to go without insurance. The penalty is enforced by the Treasury, and individuals who fail to pay the penalty will not be subject to any criminal penalties, liens, or levies.

Finally, the unveiling of the HealthCare.gov website, a health insurance exchange where individuals can learn about insurance plans, has been a catastrophe so far. There is also some concern that “only the sickest, most motivated individuals will persevere through enrollment process.” Since high enrollment of young, healthy participants is crucial to the success of the marketplace, the website’s problems, and any negative effect they have on enrollment, are just the latest contributors to a possible looming spiral.

In all, it remains to be seen whether the Affordable Care Act will succeed in bringing about a positive health care reform in the United States. For an excellent discussion on the ACA’s “right to health care” and additional challenges the law will face, see Erin C. Fuse Brown’s article Developing a Durable Right to Health Care in Volume 14, Issue 1 of the Minnesota Journal of Law, Science & Technology.