
Whitelist for Thee, but Not for Me: Facebook File Scandals and Section 230 Solutions

Warren Sexson, MJLST Staffer

When I was in 7th grade, I convinced my parents to let me get my first social media account. Back in the stone age, that phrase was synonymous with Facebook. I never thought much about how growing up in the digital age affected me, but looking back, it is easy to see the cultural red flags. It came as no surprise to me when, this fall, the Wall Street Journal broke what has been dubbed “The Facebook Files,” reporting that an internal company study found Instagram is toxic to teen girls. While tragic, this conclusion is something many Gen-Zers and late-Millennials have known for years. However, in the “Facebook Files” there is another, perhaps even more jarring, finding: Facebook exempts many celebrities and elite influencers from its rules of conduct. This revelation demands a discussion of the legal troubles the company may find itself in and the proposed solutions to the “whitelisting” problem.

The Wall Street Journal’s reporting describes an internal Facebook process called “whitelisting,” in which the company “exempted high-profile users from some or all of its rules, according to company documents . . . .” The exempted individuals span a wide range of industries and political viewpoints, from soccer megastar Neymar to Elizabeth Warren and Donald Trump (prior to January 6th). The practice put the tech giant in legal jeopardy after a whistleblower, later identified as Frances Haugen, filed a complaint with the Securities and Exchange Commission (SEC) alleging that Facebook has “violated U.S. securities laws by making material misrepresentations and omissions in statements to investors and prospective investors . . . .” See 17 C.F.R. § 240.14a-9 (enforcement provision on false or misleading statements to investors). Mark Zuckerberg himself has made statements about Facebook’s neutral application of its standards that are directly at odds with the Facebook Files. Regardless of any potential SEC investigation, the whitelist has opened up a conversation about the need for serious reform in big tech to ensure no company can keep lists of privileged users again. All of the potential solutions deal with 47 U.S.C. § 230, known colloquially as “section 230.”

Section 230 allows big tech companies to censor content while still being treated as platforms rather than publishers (which would incur liability for what appears on their websites). Specifically, § 230(c)(2)(A) provides that no provider or user of an “interactive computer service” shall be held liable for taking action in good faith to restrict “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable [content] . . . .” It is the last phrase, “otherwise objectionable,” that tech companies have used as justification for removing “hate speech” or “misinformation” from their platforms without incurring publisher-like liability. The desire to police such speech has led Facebook to develop stringent platform rules, which has in turn created the need for whitelisting. This brings us to the first proposal: eliminating the phrase “otherwise objectionable” from section 230 itself. The proposed “Stop the Censorship Act of 2020,” introduced by Republican Representative Paul Gosar of Arizona, does just that. Proponents argue that it would force tech companies to be neutral or lose liability protections. Thus, no big tech company would ever create standards stringent enough to require a “whitelist” or an exempted class, because the standard would hew close to First Amendment protections—problem solved! However, the current governing majority has serious concerns about forced neutrality, which they argue would ignore problems of misinformation and the mental health effects of social media in the aftermath of January 6th.

Echoing a recent proposal in the House Judiciary Committee, Elizabeth Warren takes a different approach: breaking up big tech. Warren proposes legislation that would bar big tech companies from competing with the small businesses that use their platforms and would reverse or block mergers such as Facebook’s purchase of Instagram. Her plan doesn’t necessarily stop companies from keeping whitelists, but it does limit the power held by Facebook and others, which could, in turn, make them think twice before unevenly applying their rules. Furthermore, Warren has called for regulators to use “every tool in the toolbox” with regard to Facebook.

Third, some have claimed that Google, Facebook, and Twitter have crossed the line under existing legal doctrines to become state actors. The argument goes that government cannot “induce” or “encourage” private persons to do what the government itself cannot. See Norwood v. Harrison, 413 U.S. 455, 465 (1973). Since some in Congress have warned big tech executives to restrict what they see as bad content, the government has essentially co-opted the hand of industry to block out constitutionally protected speech. See Railway Employes’ Department v. Hanson, 351 U.S. 225 (1956) (finding state action despite no actual government mandate for action). If the Supreme Court were to adopt this reasoning, Facebook might be forced to adopt a First Amendment-centric approach, since its current hate speech and misinformation rules would constitute state action; whitelists would no longer be needed because companies would be barred from policing fringe content. Finally, the perfect solution! The Court can act where Congress cannot agree. I am skeptical of this approach—needless to say, such a monumental decision would completely shift the nature of social media. While Justice Thomas has hinted at his openness to this argument, it is unclear whether the other justices will follow suit.

All in all, Congress and the Court have tools at their disposal to combat the disturbing actions taken by Facebook. Beyond the potential SEC violations, section 230 is a complicated but necessary issue Congress must confront in the coming months. “The Facebook Files” have exposed the need for systemic change in social media. What I once used to play Farmville has become a machine with rules for me, but not for thee.


You Wouldn’t 3D Print Tylenol, Would You?

By Mason Medeiros, MJLST Staffer

3D printing has the potential to change the medical field. As 3D printing systems improve and new uses are discovered, medical device manufacturers are using them to improve products and better provide for consumers. This is commonly seen in consumer use of 3D-printed prosthetic limbs and orthopedic implants. Many researchers are also using 3D printing technology to generate organs for transplant surgeries. By utilizing the technology, manufacturers can lower costs while making products tailored to the needs of the consumer. The same concept can be applied to the creation of drugs. With 3D printing, drug manufacturers and hospitals can produce medication tailored to the individual metabolic needs of the consumer, making the medicine safer and more effective. This potential, however, is limited by FDA regulations.

3D-printed drugs have the potential to make pill- and tablet-based drugs safer and more effective for consumers. Currently, when a person picks up their prescription, the drug comes in a set dose (for example, Tylenol tablets commonly come in doses of 325 or 500 mg per tablet). Because pills come only in these doses, the amount that can be taken is limited to multiples of those numbers. While this produces a safe and effective response in most people, what if your drug metabolism requires a different dose for maximum effectiveness?

Drug metabolism is the process by which drugs are chemically transformed into substances that are easier to excrete from the body. This process happens primarily in the liver and is influenced by factors such as genetics, age, concurrent medications, and certain health conditions. The rate of drug metabolism can have a major impact on the safety and efficacy of drugs. If a drug is metabolized too slowly, the risk of side effects increases; if it is metabolized too quickly, the drug will not be as effective. 3D printing can help minimize these problems by printing drugs in doses that match an individual’s metabolic needs, or in structures that affect the speed at which the tablet dissolves. These individualized tablets could be printed at the pharmacy and provided straight to the consumer. However, doing so will force pharmacies and drug companies to deal with additional regulatory hurdles.
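To make the dosing arithmetic concrete, here is a minimal, purely illustrative sketch (not from this post) of the standard steady-state maintenance-dose relation from pharmacokinetics: Dose = clearance × target concentration × dosing interval ÷ bioavailability. The drug, clearance values, and target concentration below are hypothetical, and real dose individualization is far more involved; the point is only that patients with different metabolic rates need different doses, which fixed 325 or 500 mg tablets cannot match exactly.

```python
# Illustrative only: hypothetical patients and values, not a clinical tool.

def maintenance_dose_mg(clearance_l_per_hr: float,
                        target_conc_mg_per_l: float,
                        dosing_interval_hr: float,
                        bioavailability: float = 1.0) -> float:
    """Dose per interval needed to hold an average steady-state plasma
    concentration: Dose = CL * C_target * tau / F."""
    return (clearance_l_per_hr * target_conc_mg_per_l * dosing_interval_hr
            / bioavailability)

# Two hypothetical patients taking the same drug every 8 hours, both aiming
# for an average plasma concentration of 5 mg/L. The faster metabolizer
# (higher clearance) needs twice the dose, and neither dose lines up with a
# standard 325 or 500 mg tablet.
slow = maintenance_dose_mg(4.0, 5.0, 8.0)   # 160 mg per dose
fast = maintenance_dose_mg(8.0, 5.0, 8.0)   # 320 mg per dose
print(f"slow metabolizer: {slow:.0f} mg per dose")
print(f"fast metabolizer: {fast:.0f} mg per dose")
```

A pharmacy printer could, in principle, produce a 160 mg tablet for one patient and a 320 mg tablet for the other, which is exactly the individualization the fixed-dose model cannot offer.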

Pharmacies that 3D print drugs will be forced to comply with Current Good Manufacturing Practices (CGMPs) as determined by the FDA. See 21 C.F.R. § 211 (2020). CGMPs are designed to ensure that drugs are manufactured safely to protect the health of consumers. Each pharmacy will need to ensure that its printers’ design conforms to the CGMPs, periodically test samples of the drugs for safety and efficacy, and conform to various other regulations. 21 C.F.R. §§ 211.65, 211.110 (2020). These additional safety precautions will place a larger strain on pharmacies and potentially harm the other services they provide.

Additionally, the original drug developers will be financially burdened. Each pharmacy that 3D prints the medication will become a new manufacturing location, and utilizing 3D printing technology will change the manufacturing process. These changes will require the original drug developer to update the New Drug Application (NDA) that declared the product safe and effective for use. Updating the NDA will be a costly process, further complicated by the vast number of new manufacturing locations. Because each pharmacy that decides to 3D print the medicine on-site will be a manufacturer, and because it is unlikely that all pharmacies will adopt 3D printing at the same time, drug developers will constantly need to update their NDAs to ensure compliance with FDA regulations. Although these regulatory hurdles seem daunting, the FDA can take steps to mitigate the work required of pharmacies and manufacturers.

The FDA should implement a regulatory exception for pharmacies that 3D print drugs. The exemption should allow pharmacies to avoid some manufacturing CGMPs and to proceed without registering as a manufacturer for each drug they print. One possibility is to categorize 3D-printed drugs as a type of compounded drug. This would allow pharmacies that 3D print drugs to act under section 503A of the Food, Drug & Cosmetic Act. Under that section, pharmacies would not need to comply with CGMPs or premarket approval requirements. The pharmacies would, however, need to comply with section 503A’s requirements, such as having the printing performed by a licensed pharmacist in a state-licensed pharmacy or by a licensed physician, limiting interstate distribution of the drugs to 5%, printing only from bulk drugs manufactured by FDA-licensed establishments, and printing drugs only “based on the receipt of a valid prescription for an individualized patient.” Although this solution limits the situations in which 3D-printed drugs can be made, it would allow pharmacies to avoid the additional time and cost otherwise required while helping ensure the safety of the drugs.

This solution would be beneficial for pharmacies wishing to 3D print drugs, but it comes with some drawbacks. One of the main drawbacks is that section 503A imposes no adverse-event reporting requirement, which will likely make it harder to hold pharmacies accountable for dangerous mistakes. Another issue is that pharmacies registered as outsourcing facilities under section 503B of the FD&C Act will not be able to avoid conforming to CGMPs unless they withdraw their registration. This issue, however, could be solved by an additional CGMP exemption for 3D-printed drugs. Even with these drawbacks, including 3D-printed drugs under the definition of compounded drugs offers a relatively simple way to ease the burden on pharmacies that wish to utilize this new technology.

3D printing drugs has the potential to change the medical drug industry. 3D-printed drugs can be specialized to the individual needs of the patient, making them safer and more effective for each person. For this to occur, however, the FDA needs to create an exemption for these pharmacies by including 3D-printed drugs under the definition of compounded drugs.



Nineteen Eighty Fortnite

Valerie Eliasen, MJLST Staffer

The Sixth and Seventh Amendments afford people the right to a trial by jury. Impartiality is an essential element of a jury in both criminal and civil cases. That impartiality is lost if a juror’s decision is “likely to be influenced by self-interest, prejudice, or information obtained extrajudicially.” There are many ways in which a juror’s impartiality may become questionable. Media attention, for example, has compromised jury impartiality in high-profile criminal cases.

In cases involving large companies, advertising is another way to appeal to jurors. It is easy to understand why: humans are emotional. Because both the perception of advertisements and jury decisions are influenced by emotions, it comes as no surprise that some parties have been “accused of launching image advertising campaigns just before jury selection began.” Others have been accused of advertising heavily in litigation “hot spots,” where many cases of a certain type, such as patent suits, are brought and heard.

A recent example of advertising launched by a party to a lawsuit comes from the emerging dispute between Apple Inc. and Epic Games Inc. Epic is responsible for Fortnite, an online “Battle Royale” game that some call the “biggest game in the world.” Epic sued Apple in August for violating the Sherman Antitrust Act of 1890 and several other laws, in reference to Apple’s practice of collecting 30 percent of every app and in-app purchase made on Apple products. When Epic began allowing Fortnite users to pay Epic directly on Apple products, Apple responded by removing Fortnite from the App Store. The App Store is the only platform where users can purchase and download applications, such as Fortnite, for their Apple products. In conjunction with the lawsuit, Epic released a video titled Nineteen Eighty Fortnite – #FreeFortnite. The video portrays Apple as the all-knowing, all-controlling “Big Brother” figure from George Orwell’s 1984. The ad was a play on Apple’s nearly identical 1984 commercial introducing the Macintosh computer. This was an interesting tactic given that the majority of Fortnite users were born after 1994.

Most companies accused of using advertisements to influence jurors have done so to improve the company’s image. With Epic, the advertisement blatantly points a finger at Apple, the defendant. Should an issue arise, a court will have an easy time finding that the purpose of the ad was to bolster support for Epic’s claims. But opponents will most likely not raise a claim of jury impartiality, because this advertisement was released so far in advance of jury selection and trial. Problems could arise, however, if Epic Games continues its public assault on Apple.

Epic’s ad also reminds us of large tech companies’ power to influence users. The explosion of social media and the development of machine learning over the past 10 years have yielded a powerful creature: personalization. Social media and web platforms constantly adjust content and advertisements based on the location and behavior of users. These tech giants have the means to control and tailor the content every user sees, and many of them, like Google and Facebook, have often been and currently are involved in major litigation.

The impartial jury essential to our legal system cannot exist when jurors’ decisions are influenced by outside sources. Advertisements exist for the purpose of influencing decisions. For this reason, courts should be wary of the advertising abilities and propensities of parties and must take action to prevent and control advertisements that specifically relate to, or may influence, a jury. A threat to the impartial jury is a threat we must take seriously.


Death of a Gravesite: Alternatives to the Traditional Burial Practice

Jennifer Novo, MJLST Staffer

Halloween is often a time for ghosts and the dead, and for some, the perfect time to make a trip to a local graveyard or cemetery. The obvious association with death, as the final resting place for many, makes graveyards an inherently spooky destination. Burial is just one of many methods used for the final disposition of human remains around the world, and it is a common practice in the United States. However, considering environmental and economic factors, it may be time to consider alternative forms of final disposition.

In the United States, there is a presumed common-law right to a decent burial. Beyond that, jurisdictions within the United States have their own regulations for the disposal of dead bodies and the reporting of deaths and final dispositions of the remains. For example, Minnesota Statutes chapter 149A outlines regulations with the purpose of “regulat[ing] the removal, preparation, transportation, arrangements for disposition, and final disposition of dead human bodies for purposes of public health and protection of the public.” The chapter outlines license requirements, safety standards, and guidelines for a number of disposition practices, not just burial.

Modern burial has a number of negatives, from the environmental to the economic. A modern burial involves interring a casket containing embalmed remains, a practice with serious environmental effects. Embalming is used to delay the decay of a body, and the chemicals used include formaldehyde, phenol, methanol, and glycerin, all of which are irritants and some of which are carcinogenic or toxic. Over time, once the body and the casket have decomposed, these chemicals seep into the soil and water table of the surrounding area and pose a health risk to the living. Burials also consume a large amount of resources to create caskets (hundreds of thousands of tons of various metals and concrete, as well as millions of board feet of wood). Graveyards also use up a great deal of space: as of late 2018, there were a little under 145,000 graveyards across the United States totaling approximately 1 million acres of land, which requires extensive maintenance, water, and fertilizer to keep green. In addition to the environmental effects, burials are expensive. Between 1986 and 2017, funeral expenses increased 227.1% and the cost of burial caskets rose 230%. The current cost of a traditional full-service burial in North America is between $7,000 and $10,000.

In 2017, a report by the National Funeral Directors Association (NFDA) found that for the first time, more Americans were cremated than buried. Researchers ascribed this change to shifting religious beliefs and generational differences. Economically, cremations are less expensive than traditional burials. Additionally, cremation removes the need for large swaths of land required by burials. However, like burials, cremation has negative environmental impacts. For example, studies have suggested that the high level of energy required to cremate a body damages the environment. Additionally, the cremation process releases various chemicals (such as carbon monoxide, sulfur dioxide, and mercury) and soot into the atmosphere, and the resulting sterile ashes lack nutrients that could contribute to ecological cycles.

As people are becoming more aware of the downsides to traditional burial and cremation, other methods of final disposition have been created and adopted that address some of these concerns.

One alternative comparable to a traditional burial is a natural or green burial, in which a body is buried in a shroud or a biodegradable container without going through the embalming process. One type of natural burial, the conservation burial, takes this a step further: some of the fees associated with the burial go toward protecting the land through a conservation easement.

One alternative comparable to traditional cremation is a flameless cremation process known as alkaline hydrolysis, in which the body is dissolved, leaving bone powder and a liquid that can then be “recycled” in a local wastewater treatment plant. Only a handful of states, including Minnesota, have formally adopted regulations for this final disposition process.

A number of states, including Minnesota, do not have many (or any) restrictions on remains post-cremation, so several alternatives focus on ways cremated remains can be used to offset the negative environmental effects of the traditional cremation process. Some examples include sending ashes in a concrete ball to the ocean floor to promote the growth of coral reefs, placing ashes in a pod that will eventually grow into a tree, and mixing ashes with fertilizer to feed a particular tree in lieu of having a gravestone.

Death is frightening and uncomfortable to think about, and contemplating the treatment of a loved one’s (or one’s own) remains is depressing. However, decisions on these matters have lasting effects for friends, family, and even the general population. There is a lot of legal leeway surrounding final disposition, so it never hurts to consider the options before it’s too late.


Should the FDA Strengthen Pet Food Regulation?

Jennifer Satterfield, MJLST Staffer

Recently, the Food and Drug Administration (FDA) began an investigation following numerous reports of dilated cardiomyopathy (DCM), a type of heart disease, in dogs. The FDA is exploring a potential connection between DCM and certain diets containing legumes (e.g., peas or lentils), legume seeds (pulses), or potatoes as main ingredients. These ingredients are commonly associated with “BEG” diets (boutique companies, exotic ingredients, or grain-free diets). The FDA has compiled a spreadsheet of all the DCM reports prior to April 30, 2019. The most frequently identified brands include: “Acana (67), Zignature (64), Taste of the Wild (53), 4Health (32), Earthborn Holistic (32), Blue Buffalo (31), Nature’s Domain (29), Fromm (24), Merrick (16), California Natural (15), Natural Balance (15), Orijen (12), Nature’s Variety (11), NutriSource (10), Nutro (10), and Rachael Ray Nutrish (10).”

The DCM scare has led pet owners to question the safety of pet food products and turn to online forums for help, including a popular Facebook group called Taurine-Deficient (Nutritional) Dilated Cardiomyopathy. This group’s purpose is to “share information concerning Nutritionally-Mediated DCM among veterinarians, breeders, members of the Ph.D. & DVM research community, nutritionists, food brand representatives, nutrient suppliers, and concerned dog owners.” Some of the most common concerns among dog owners in this group are “what should I be feeding my dog?” and “what food is safe for the long term?”

Unfortunately, the FDA only requires that pet food be “[s]afe to eat; [p]roduced under sanitary conditions; [f]ree of harmful substances; and [t]ruthfully labeled.” However, the Federal Food, Drug, and Cosmetic Act (FFDCA), the statute that gives the FDA the authority to regulate pet food, does not require any pre-market review. Hence, pet foods need not be formally approved or undergo testing before hitting the shelves. Consequently, the federal government may have inadvertently allowed pet foods that are unsafe for animals in the long term to reach the market. For example, french fries are “safe to eat.” But eating only french fries every day for a person’s entire life is not healthy and will probably lead to medical complications. Since dogs generally eat the same dog food over the course of their entire lives, a food may be “safe to eat” yet unhealthy as the dog’s sole source of nutrition.

Although the FDA has partnered with the Association of American Feed Control Officials (AAFCO), AAFCO does not have regulatory authority. For a dog or cat food to have a “complete and balanced” label, it must meet either one of the nutrient profiles established by AAFCO or pass a feeding trial using AAFCO standards. But AAFCO cannot enforce its standards, and, what is more, its recommendations may not even be good enough to ensure pet food safety in the long term. For example, both the nutrient profile and feeding trial methods leave uncertainty regarding nutrient bioavailability (the nutrients the animal’s body actually absorbs and uses).

Moreover, the AAFCO feeding trial protocol requires only eight animals to participate, and only six of the eight need to complete the entire twenty-six-week trial. This extremely small number of test subjects over a relatively short period is not enough to support a determination about the safety or nutritional longevity of a specific pet food. As a comparison, a controlled human feeding trial used a “relatively small” sample size of twenty-two people per group. Logically, pet food companies should be conducting feeding trials with substantially more test subjects over a much longer period.
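As a back-of-the-envelope illustration of why so few animals reveal so little (my own arithmetic, not from AAFCO or any study), simple binomial math shows how often a trial of this size would even observe an adverse effect of a given frequency:

```python
# Hypothetical illustration: probability that a feeding trial with n animals
# observes at least one case of an adverse effect occurring in a fraction
# `incidence` of the population. Pure binomial arithmetic; the incidence
# values are made-up examples, not data about any real pet food.

def prob_at_least_one_case(n_animals: int, incidence: float) -> float:
    """P(at least one affected animal) = 1 - (1 - incidence)^n."""
    return 1.0 - (1.0 - incidence) ** n_animals

for incidence in (0.10, 0.05, 0.01):
    p8 = prob_at_least_one_case(8, incidence)  # full AAFCO cohort
    p6 = prob_at_least_one_case(6, incidence)  # minimum completers
    print(f"incidence {incidence:.0%}: seen by 8 animals {p8:.0%}, "
          f"by 6 animals {p6:.0%}")

# incidence 10%: seen by 8 animals 57%, by 6 animals 47%
# incidence 5%:  seen by 8 animals 34%, by 6 animals 26%
# incidence 1%:  seen by 8 animals 8%,  by 6 animals 6%
```

On these illustrative numbers, even an effect harming one dog in ten would go entirely unobserved in roughly half of minimum-compliance trials, to say nothing of effects that emerge only over a lifetime of feeding.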

To prevent another scare like the surprising potential link between DCM and certain dog foods, and to ensure the safety of pet food, the FDA should require stringent pre-market testing using sound scientific methods. But since the FDA likely lacks statutory authority under the FFDCA to increase its regulatory oversight of the pet food industry, Congress should step in and grant it. It is also worth noting that, while pet food is also regulated by states, those regulations typically deal with labeling and nutrient profiles. Considering the federal government’s failure to ensure pet food safety, states may also be able to step up and require pre-market testing. For many people, pets are like family, and surely pet owners want the safest, healthiest options for their beloved family members.


Putting Patient Values in Value-Based Medicare

Peter J. Teravskis, MJLST Staffer

The vast majority of payments to medical providers are based on a fee-for-service reimbursement model. The fee-for-service model reimburses providers for every test, exam, intervention, and procedure they perform, potentially contributing to over-billing, increased health care costs, and waste. On the other hand, value-based care models tie provider reimbursement to efficiency of care and measures of patient wellness and satisfaction. For this reason, in recent years, there has been a nationwide effort to transition provider reimbursement away from fee-for-service towards value-based care.

In line with this effort, the Affordable Care Act contains many provisions designed to encourage the transition to value-based care. In 2015, then-Secretary of Health and Human Services Sylvia M. Burwell tasked the Centers for Medicare and Medicaid Services (CMS) with two goals for 2018: increasing (1) value-based purchasing practices, and (2) use of value-based reimbursement models (called alternative payment models or “APMs”) by Medicare providers. The hospital value-based purchasing program rewards acute-care providers for making purchasing decisions based on quality metrics rather than the volume of services provided while still operating in a fee-for-service framework. On the other hand, the APM adoption plan seeks to discard fee-for-service reimbursement entirely by encouraging the adoption of payment models that reimburse providers based on patient outcomes and efficiency of care rather than volume.

While CMS efforts have resulted in increased adoption of value-based care models, early data suggests that these models may not deliver the health and financial benefits initially promised. For example, recent studies indicate that multiple value-based Medicare reimbursement models in several clinical contexts fail to demonstrate meaningful improvements in hospital readmission rates, health outcomes, quality of care, and patient satisfaction. Nevertheless, some marginal cost savings have been reported. However, cost savings may be even more limited than the studies suggest. Government metrics may overestimate the actual adoption of value-based practices given the loose definition of “value-based care.” Further, the New York Times Upshot reports that CMS may overestimate the adoption of value-based purchasing by miscounting many volume-based purchases as value-based.

CMS’s attempt to implement value-based care through this top-down incentive structure is also ethically fraught. Current value-based care models have been criticized for making assumptions about what patients actually value rather than adopting a pluralistic understanding of patient values. This “monistic” value system decreases patients’ autonomy over their health care decisions. Indeed, ethicists contend that it is only ethical to impose value-based decisions on patients if there is “strong and sound evidence” that they deliver “equivalent or greater clinical benefit at lower cost.” This ethical principle greatly limits the number of value-based reforms that can be instituted at the national or hospital level. All other value-based decisions should be made in consultation with individual patients, taking into account their unique value systems. Furthermore, it is “ethically suspect” to withhold otherwise beneficial treatments based on cost savings alone. Unfortunately, value-based care models favor health care market efficiency and could penalize providers who tailor care to patient values rather than to the monistic value structure described above.

Given the ethical limits on value-based decisions made without patient input, the early empirical shortcomings of Medicare’s value-based care initiatives may be partially explained by the slow process of aligning care to patient values. Specifically, the value-based decisions most likely to improve care and costs require physicians to (1) understand individual patients’ values, and (2) tailor efficient and effective care to align with those values. Unlike the APM adoption initiative, which takes a holistic approach to incentivizing value-based care, the hospital value-based purchasing plan does not allow for significant patient-provider collaboration, especially in the acute care setting, where patient interactions are brief.

Recently, the Trump administration reoriented value-based purchasing agreements to focus on drug price reduction and signaled it will slow the pace of APM adoption (a move criticized for creating uncertainty among health care market stakeholders). This deceleration likely stems, in part, from concerns over mandating the adoption of certain APMs in rural communities. Regardless of the motive, decelerating APM adoption will likely prove beneficial. The process of aligning care with patient values is largely intangible in the short term, so providers and patients alike will benefit from the additional flexibility of a slower, voluntary transition away from fee-for-service reimbursement. Nevertheless, CMS must not lose sight of the goal of providing Medicare beneficiaries high-value care while still affording providers the time and financial latitude to ensure that long-term benefits are genuinely patient-value-centric.


Will the Vaping Industry Go Up in Smoke?

Stephen Wood, MJLST Staffer

It’s no secret that vaping has become increasingly popular. The number of users has increased from 7 million in 2011 to 41 million as of 2018, and the total market is now worth an estimated $19.3 billion. Less clear is the future of industry regulation in light of the recent respiratory illnesses linked to vaping. On September 24, 2019, the Centers for Disease Control and Prevention reported 805 illnesses and 12 deaths attributed to vaping. Pressure is building on the industry’s major players. In the last week, we have seen the cancellation of a merger between two of the largest tobacco companies, Altria and Philip Morris, and the departure of Juul’s CEO, Kevin Burns.

However, the respiratory illnesses associated with vaping haven’t been linked to a specific product, and the long-term effects of vaping remain unclear. Amid this uncertainty, some states have implemented blanket restrictions on the sale of vaping products, President Trump has proposed new regulations, and the CDC has issued warnings regarding their safety. All of this has blindsided an industry that was free from FDA regulation until recently.

Vaping devices, also known as electronic nicotine delivery systems (ENDS), became subject to the FDA’s regulatory scheme for all tobacco products on August 8, 2016. The Deeming Rule placed ENDS in the same category of products as cigarettes and other traditional tobacco products, which have been regulated under the Family Smoking Prevention and Tobacco Control Act since 2009. For this reason, the minimum age for purchasing ENDS is 18 years old, and the marketing, manufacturing, and distribution of ENDS is heavily regulated.

Juul, in particular, has come under fire for its marketing strategies. Among other claims, many lawsuits allege that the company specifically targeted minors through its use of social media and its distribution of enticing flavors. These practices have also been the focal point of the recent surge of state regulations, which “are filling what many see as a regulatory void caused by federal inaction.” For example, in Michigan, Governor Gretchen Whitmer implemented an emergency ban limiting the sale of vaping products to those that are tobacco flavored. New York did the same but exempted menthol from its ban. Massachusetts, notably, implemented a four-month emergency ban on all vaping products. President Trump’s proposed ban, on the other hand, would be limited to flavored products.

If President Trump’s proposal is adopted, the industry would see an estimated 80% loss in sales. It will be interesting to see what the regulatory landscape looks like once the smoke clears.


A Green New City Plan? How Local Governments Should Plan For Climate Refugees

Shantal Pai 

Politicians, especially Democratic presidential candidates, are competing to release the best “Green New Deal.” These proposals are national-scale climate plans meant to reduce carbon emissions and mitigate the impact of climate change. But as these plans are released, a difficult reality remains: we may be less than one year away from irreversible changes to the climate.

Regardless of which Green New Deal eventually becomes United States law (and one will—because climate change grows more undeniable each day), the U.S. and its cities need not only a climate mitigation plan but also a climate adaptation plan: a way to survive in the new reality.

At the point of no return (2°C of average warming worldwide), the most inhabited regions of the world will face extremely hot temperatures, dramatic weather events including storms, flooding, and drought, and sea-level rise. Though some regions have developed strategies to mitigate these damages—such as a proposed levee surrounding Manhattan—the best possible solution may be to move threatened communities to higher, cooler ground.

So, in addition to national-scale plans, local governments in communities that will be attractive in our post-industrial climate, places like Minneapolis, Cincinnati, Buffalo, and Denver, should prepare. They need to be ready for a large influx of refugees from the coast looking for a secure future.

If Hurricane Katrina serves as an example, the first people to move permanently inland will not be the predominantly white, wealthy residents of a city, but working-class residents and people of color. There are two reasons for this: (1) racially discriminatory housing practices mean people of color are most likely to face flooding and storm damage, and (2) these groups are least likely to get government aid after a flood.

A similar trend has followed Hurricane Dorian. Because the Trump Administration declined to grant temporary protected status to Bahamians fleeing uninhabitable conditions after the storm, many victims are arriving on visas that allow them to live in the U.S. but not to work. Many of these people will stay with family in the United States while the Bahamas rebuilds, increasing demand for U.S. services while they are unable to contribute to local government revenue because they cannot earn an income.

Such a large influx of low- and middle-income residents could wreak havoc on an unprepared region. People fleeing climate change need quick access to affordable housing, schools, and city resources, often at disproportionately high levels. Cities with affordable housing already struggle to generate the revenue necessary to provide these services: where property values are lower, the potential to raise revenue from property taxes is lower. A massive influx of people fleeing climate change would further strain already deeply stressed city budgets.

Furthermore, a large influx of people of color often leads to “white flight”—an en masse departure of white people to nearby, more affluent cities—which deepens regional segregation and inequity.

Combined, these forces create a downward spiral: the number of people of color in a community grows; white residents depart; property values fall because there aren’t enough people of color who can afford to move into the neighborhood; and the city’s ability to generate revenue shrinks even as it gains low-income residents who are more likely to rely on city services. This phenomenon discourages building affordable housing, makes it hard for struggling cities to generate revenue, and entrenches racial and economic segregation.

Strategic regional planning can combat these tendencies but needs to happen more aggressively than ever before as climate change amplifies existing inequality. First and foremost, the regions that will be most attractive to climate refugees need to encourage the development of affordable housing throughout the metropolitan area. Spreading the cost of supporting climate refugees across the region prevents any one city from being saddled with the expense of providing services and the inability to raise sufficient revenue.

Second, cities should desegregate school systems. In Louisville, Kentucky, a system to desegregate schools reduced white flight. The desegregation promoted stable housing prices and tax revenue, making it easier for cities to plan for the future.

Third, regions should build more public spaces than otherwise anticipated, in ways that avoid displacing existing poor and minority communities. Spaces like theaters, libraries, schools, and public transit will all face increased demand as new residents become acquainted with the region. These spaces increase property value, encourage wellbeing, and further reduce white flight, all of which help break the downward spiral of city revenue generation caused by white flight.

None of these solutions will prevent inequality, and refugees escaping climate change face extremely difficult challenges in relocating. But, by planning for climate refugees, local governments can help mitigate the effects of climate change on segregation.


The Atlantic Mackerel Plight: Roadblocks to Prevent Overfishing

Yvie Yao, MJLST Staffer

Atlantic mackerel, like sardines and herring, are small forage fish. Not only are they vital prey for seabirds and larger fish like bluefin tuna and cod, but they are also essential to the survival of ocean wildlife.

Although Atlantic mackerel are resilient to fishing pressure and bycatch risk, scientists announced this year that fishing activities along the coast have put too much pressure on the mackerel population. In other words, Atlantic mackerel are overfished. On February 28, 2018, the federal government, unsurprisingly, declared that the catch cap for mackerel had been reached and that the mackerel fishing season was officially closed for the rest of the year.

To prevent overfishing of a species, the Magnuson-Stevens Fishery Conservation and Management Act requires regional fishery management councils to create a rebuilding plan as soon as possible, not to exceed 10 years. Conservative practice favors setting a shorter rebuilding timeline with lower catch levels so that the species can recover as quickly as possible. Setting longer timelines with higher catch levels is risky: the species might become commercially unviable sooner than projected, and the council is less likely to reach its goal of rebuilding the under-stocked population. Moreover, a low stock of the species is likely to harm the health and sustainability of its predators in the ocean ecosystem.

The Magnuson-Stevens Act has been effective since it was first passed in 1976. Amendments in 1996 and 2006 furthered the interest of fishery conservation, requiring councils to place all overfished stocks on strict rebuilding timelines and to mandate hard limits on total catches. These science-based provisions have recovered 44 fish stocks around the country and, in 2015, generated $208 billion in sales for fishermen.

However, this effective ocean fishery conservation law is facing challenges. On July 11, 2018, the House passed H.R. 200, the Strengthening Fishing Communities and Increasing Flexibility in Fisheries Management Act. The bill, if it becomes law, would change the requirements for rebuilding overfished stocks and allow councils to consider changes in an ecosystem and the economic needs of fishing communities when establishing annual catch limits.

Recreational fishing and boating industry groups vehemently support this bill. They argue that the proposed changes would give alternatives to local councils to manage fish stocks, save taxpayers money, and modernize the management of recreational fishing.

Environmentalists and commercial fishermen oppose the bill. They argue that it would let councils rebuild overfished stocks as fast as “practicable” rather than as fast as possible, leading to looser regulation. The bill would also remove annual catch limits for short-lived species and ecosystem-component species, categories that include forage fish like Atlantic mackerel. This retreat from science-based policy would further delay the restocking of forage fish and might even drive some species to commercial extinction.

It is unknown whether H.R. 200 will pass the Senate. A companion bill, S. 1520, the Modernizing Recreational Fisheries Management Act of 2017, envisions the same goal as H.R. 200. Will we be able to eat Atlantic mackerel in the next ten years? The answer is uncertain. Regardless, a vote against such bills is a chance to “affirm that science, sustainability, and conservation guide the management of our ocean fisheries.”