
Airbnb Regulations Spark Controversy, but Have Limited Effect on Super Bowl Market

Sam Louwagie, MJLST Staffer

 

As Super Bowl LII descends upon Minneapolis, many Twin Cities residents are hoping to receive a windfall by renting out their homes to visiting Eagles and Patriots fans. City regulations placed on online short-term rental platforms such as Airbnb last fall, which prompted an outcry from those platforms, do not appear to be having much of an effect on the dramatic surge in supply.

The short-term rental market in Minneapolis has been a renter’s market in the opening days since the Super Bowl matchup was set. There are 5,000 listings in the Twin Cities on Airbnb this week, compared to 1,000 at this time last year, according to the Star Tribune. The flood of posted housing options has held prices down: the average listing has cost $240 per night—more than usual, but much less than the thousands of dollars some would-be hosts had hoped for. One homeowner told the Star Tribune that she had gotten no interest in her 4,000-square-foot, six-bedroom house just five blocks from U.S. Bank Stadium, and had “cut the price drastically.”

The surge in Airbnb listings comes despite ordinances that went into effect in December in both Minneapolis and St. Paul. The cities joined a growing list of major U.S. cities passing regulations aimed at ensuring guest safety and collecting a small cut of tax revenue from the rentals. Minneapolis’ ordinance requires short-term rental hosts to apply for a license with the city, which costs $46 annually; St. Paul’s license costs $40 per year. As of mid-December, according to MinnPost, only 18 applications had been submitted in Minneapolis and only 32 in St. Paul, which suggests that many of the thousands of listings during Super Bowl week are unlicensed. Both cities say they will notify noncompliant hosts before taking any enforcement action, but a violation will cost $500 in Minneapolis and $300 in St. Paul.

The online rental platforms themselves had strongly objected to the passage of the ordinances, which would require Airbnb to apply for a short-term rental platform license. This would bring a $10,000 annual fee in St. Paul and a $5,000 large platform fee in Minneapolis. According to MinnPost, as of mid-December, no platforms had submitted an application and it was “unclear whether they [would] comply.” Airbnb said in a statement that it believes the regulations violate the 1996 federal Communications Decency Act, and that “the ordinance violates the legal rights of Airbnb and its community.”

While the city ordinances created controversy in the legal world, they do not seem to be having a similar effect on the ground in Minneapolis, where Super Bowl guests still enjoy a dramatic surplus of rental options.


Initial Coin Offerings: Buyer Beware

Kevin Cunningham, MJLST Staffer

 

Initial Coin Offerings, also known as ICOs or token sales, have become a new way for startup companies to raise capital using cryptocurrency and blockchain technology. ICOs are conducted online: purchasers use virtual currencies, like bitcoin or ether, or a fiat currency, like the U.S. dollar, to pay for a new virtual coin or token created by the company looking to raise money. Promoters usually tell purchasers that the capital raised from the sales will be used to fund development of a digital platform, software, or other project, and that the newly created virtual coin may be used to access the platform, use the software, or otherwise participate in the project. Companies that conduct ICOs typically promote the offering through their own websites or through various online blockchain and virtual currency forums. Some initial sellers may lead buyers of the virtual coins to expect a return on their investment or a share of the returns provided by the project. After the coins or tokens are issued, they may be resold to others in a secondary market.
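The basic mechanics are simple enough to sketch in code. Below is a minimal, hypothetical model of an ICO ledger; the class, exchange rate, and names are illustrative assumptions, not drawn from any actual offering or blockchain platform.

```python
# Minimal, hypothetical sketch of an ICO ledger: contributors send payment
# (in a virtual or fiat currency) and are credited newly created tokens at
# a fixed exchange rate. All names and numbers are illustrative only.

class TokenSale:
    def __init__(self, token_name, tokens_per_unit):
        self.token_name = token_name
        self.tokens_per_unit = tokens_per_unit  # tokens issued per unit of payment
        self.balances = {}                      # contributor -> token balance
        self.raised = 0.0                       # total payment collected

    def contribute(self, contributor, amount):
        """Credit the contributor with tokens in exchange for payment."""
        tokens = amount * self.tokens_per_unit
        self.balances[contributor] = self.balances.get(contributor, 0) + tokens
        self.raised += amount
        return tokens

    def transfer(self, sender, recipient, tokens):
        """Secondary-market resale: move tokens between holders."""
        if self.balances.get(sender, 0) < tokens:
            raise ValueError("insufficient token balance")
        self.balances[sender] -= tokens
        self.balances[recipient] = self.balances.get(recipient, 0) + tokens

# Example: a contributor pays 2.0 units of currency at 100 tokens per unit,
# then resells half of the resulting 200 tokens to another buyer.
sale = TokenSale("DemoCoin", tokens_per_unit=100)
sale.contribute("alice", 2.0)
sale.transfer("alice", "bob", 100)
```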

 

Depending on the circumstances of each ICO, the virtual coins or tokens that are offered or sold may be securities. If they are, the offer and sale of the coins or tokens are subject to the federal securities laws. In July 2017, the Securities and Exchange Commission (SEC) issued a Report of Investigation under Section 21(a) of the Securities Exchange Act of 1934 stressing that any ICO that meets the definition of a security in the United States is required to comply with the federal securities laws, regardless of whether the securities are purchased with virtual currencies or distributed with blockchain technology.

 

Since the SEC issued its July Report regarding ICOs, the Commission has charged two companies with defrauding investors. In a pair of ICOs purportedly backed by investments in real estate and diamonds, the SEC alleged that the owner of the companies, Maksim Zaslavskiy, sold unregistered securities. In one instance, the SEC alleges that, despite representations to investors of Diamond Reserve Club, Zaslavskiy had neither purchased any diamonds nor engaged in any business operations.

 

Issues with Initial Coin Offerings continue: the Tezos Foundation was recently hit with its second class-action lawsuit over its ICO after a contributor alleged breaches of securities laws. The two cases were filed in California Superior Court in San Francisco and in the United States District Court in Florida. The Tezos ICO raised over $232 million just months ago, and plaintiffs in the suits say that they have not received the promised tokens. Infighting among the owners of the company has led to a significant setback for the venture, which aims to create a computerized network for transactions using blockchain technology. The lawsuits allege that contributors to the fundraiser were not told that it could take more than three years to purchase the ledger for the project’s source code, and that this time frame was a material fact never disclosed to investors.

 

It is likely that many issuers of virtual coins and tokens will have a hard time convincing the SEC and other regulators that their coins are merely utilities rather than securities. For many of the firms, including Diamond Reserve Club, the problem is that the tokens they are selling exist only on paper, and so they have no function other than to bring in money. Likewise, most investors currently buy tokens not for their utility, but because they are betting on an increase in the value of the virtual currency. This issue will not be resolved quickly, and the continuing claims against companies like Tezos are likely to bring heightened regulatory scrutiny.


The Electric Vehicle: A Microcosm for America’s Problem with Innovation

Zach Sibley, MJLST Staffer

 

Last year, former U.S. Patent and Trademark Office Director David Kappos criticized a series of changes in patent legislation and case law for weakening innovation protections and driving technology investments toward China. Since then, it has become apparent that America’s problem with innovation runs deeper than the strength of U.S. patent rights. State and federal policies toward new industries also appear to be trending against domestic innovation. One illustrative example is the electric vehicle (EV).

 

EVs offer technological upsides that their internal combustion engine vehicle (ICEV) counterparts cannot match. Most notably, as the U.S. grid moves toward “smart” infrastructure that leverages the Internet of Things, EVs can interact with the grid and help maximize the efficiency of its infrastructure in ways not possible with ICEVs. Additionally, with clean air and emission targets imminent—like those in the Clean Air Act or in more stringent state legislation—EVs offer the most immediate impact in reducing mobile source air pollutants, especially in a sector that recently became the highest carbon dioxide emitter. And finally, EVs present electric utilities facing a “death spiral” with an opportunity to recover profits by increasing electricity demand.

 

Recent state and federal policy changes, however, may hinder the efforts of EV innovators. Eighteen state legislatures have enacted EV fees—including Wisconsin’s recent adoption, and the since-overturned fee in Oklahoma—ranging from $50 to $300 per year. Proponents claim the fees create parity between traditional ICEV drivers and the new EV drivers who do not pay the fuel taxes that fund maintenance of transportation infrastructure. Recent findings, though, suggest EV drivers in some states with the fee were already paying more upfront in taxes than their ICEV road-mates. The fee also only creates parity when one focuses solely on the wear and tear all vehicles cause on shared road infrastructure. The calculus behind these fees neglects that EV and ICEV drivers also share the same air, yet no companion tax charges ICEVs for their share of the wear and tear on air quality.
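To see why the parity claim is contestable, consider a back-of-the-envelope comparison. Every figure below is a hypothetical assumption chosen only to illustrate the calculation, not actual data from any state.

```python
# Hypothetical back-of-the-envelope comparison of annual road-funding
# payments by an ICEV driver (via fuel tax) and an EV driver (via flat fee).
# All figures are illustrative assumptions, not actual state data.

miles_per_year = 12_000
mpg = 25                  # assumed ICEV fuel economy
state_fuel_tax = 0.30     # assumed state fuel tax, dollars per gallon

icev_road_tax = (miles_per_year / mpg) * state_fuel_tax
ev_flat_fee = 150         # assumed annual flat EV registration fee

print(f"ICEV driver pays ~${icev_road_tax:.0f}/year in fuel tax")  # ~$144
print(f"EV driver pays ${ev_flat_fee}/year in flat fees")
```

Under these assumed numbers, the flat fee already exceeds what a comparable ICEV driver pays at the pump, and it still prices in nothing for the emissions side of the ledger.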

 

At the federal level, changes in administrative policy are poised to exacerbate the problem further. The freshly proposed GOP tax bill includes a provision to repeal a $7,500 tax credit that has made lower-cost EVs a more affordable option for middle-class drivers. This change should be contrasted with foreign efforts, such as those in the European Union to increase CO2 reduction targets and offer credits for EV purchases. The contrast can be summed up in one commentator’s observation about The New York Times, which reported, within the span of a few days, on the U.S. EPA’s rollback of the Clean Power Plan and then on General Motors moving toward a full electric line in response to the Chinese government. The latter story harkens back to Kappos’ comments at the beginning of this post, where again a changing U.S. legal and regulatory landscape is driving innovation elsewhere.

 

It is a basic tenet of economics that incentives matter. Even in a state with a robust EV presence like California, critics question the wisdom of assessing fees and repealing incentives this early in a nascent industry offering a promising technological future. The U.S. used to be great because it was the world’s gold standard for innovation: the first light bulb, the first car, the first airplane, the first trip to the moon, and the first personal computers (to name a few). Our laws need to continue to reflect our innovative identity. Hopefully, with legislation like the STRONG Patents Act of 2017 and a series of state EV incentives on the horizon, we can return to our great innovative roots.


“Gaydar” Highlights the Need for Cognizant Facial Recognition Policy

Ellen Levish, MJLST Staffer

 

Recently, two Stanford researchers made a frightening claim: computers can use facial recognition algorithms to identify people as gay or straight.

 

An MJLST blog post tackled facial recognition issues back in 2012. There, Rebecca Boxhorn posited that we shouldn’t worry too much, because “it is easy to overstate the danger” of emerging technology. In the wake of the “gaydar,” we should re-evaluate that position.

 

First, a little background. Facial recognition, like fingerprint recognition, relies on matching a subject to given standards. An algorithm measures points on a test face, compares them to a standard face, and determines whether the test is a close fit to the standard. The algorithm matches thousands of points on test pictures to reference points on standards. These test points include those you’d expect: nose width, eyebrow shape, intraocular distance. But the software also quantifies many “aspects of the face we don’t have words for.” In the case of the Stanford “gaydar,” researchers modified existing facial recognition software and used dating profile pictures as their standards. They fed in test pictures, also from dating profiles, and waited.
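A heavily simplified sketch of that matching step may make the idea concrete. Assume each face has already been reduced to a short vector of measurements; the reference values and labels below are made up for illustration and bear no relation to the actual Stanford model.

```python
import math

# Heavily simplified sketch of facial-recognition matching: each face is
# reduced to a vector of measurements (nose width, intraocular distance,
# and features "we don't have words for"), and a test face is assigned to
# whichever reference ("standard") face it sits closest to.
# All numbers are made up for illustration.

def distance(a, b):
    """Euclidean distance between two measurement vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(test_face, standards):
    """Return the label of the nearest standard face."""
    return min(standards, key=lambda label: distance(test_face, standards[label]))

standards = {
    "standard_A": [0.42, 0.31, 0.77],   # averaged measurements for class A
    "standard_B": [0.39, 0.35, 0.70],   # averaged measurements for class B
}
test_face = [0.41, 0.32, 0.75]          # measurements from a test photo

print(classify(test_face, standards))   # -> "standard_A"
```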

 

Recognizing patterns in these measurements, the Stanford study’s software determined if a test face was more like a standard “gay” or “straight” face. The model was accurate up to 91 percent of the time. That is higher than just chance, and far beyond human ability.

 

The Economist first broke the story on this study. As expected, it gained traction. Hyperbolic headlines littered tech blogs and magazines. And of course, when the dust settled, the “gaydar” scare wasn’t that straightforward. The “gaydar” algorithm was simple, the study was a draft posted online, and the results, though astounding, left a lot of room for both statistical and socio-political criticism. The researchers stated that their primary purpose in pursuing this inquiry was to “raise the alarm” about the dangers of facial recognition technology.
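Part of the statistical criticism turns on base rates: a classifier that is right 91 percent of the time can still misfire badly when screening a population in which one group is rare. The figures below are assumptions chosen only to illustrate the arithmetic, not numbers from the study.

```python
# Hypothetical base-rate illustration: why "91 percent accurate" can still
# misfire when screening a population. Assume 7% of a population of 1,000
# belongs to the minority class and the classifier is right 91% of the time
# for both groups. All numbers are assumptions for illustration.

population = 1_000
minority_rate = 0.07
accuracy = 0.91

true_positives = population * minority_rate * accuracy                # ~64
false_positives = population * (1 - minority_rate) * (1 - accuracy)   # ~84

precision = true_positives / (true_positives + false_positives)
print(f"Share of flagged individuals actually in the class: {precision:.0%}")  # ~43%
```

Under these assumptions, most of the people the classifier flags are false positives, which is why headline accuracy figures alone say little about real-world screening.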

 

Facial recognition has become much more commonplace in recent years. Governments worldwide openly employ it for security purposes. Apple and Facebook both “recognize individuals in the videos you take” and the pictures you post online. Samsung allows smartphone users to unlock their device with a selfie. The Walt Disney Company, too, owns a huge database of facial recognition technology, which it uses (among other things) to determine how much you’ll laugh at movies. These current, commercial uses seem at worst benign and at best helpful. But the Stanford “gaydar” highlights the insidious, Orwellian nature of “function creep,” which policy makers need to keep an eye on.

 

Function creep “is the phenomenon by which a technology designed for a limited purpose may gain additional, unanticipated purposes or functions.” And it poses a major ethical problem for the use of facial recognition software. No doubt inspired developers will create new and enterprising means of analyzing people. No doubt most of these means will continue to be benign and commercial. But we must admit: classification based on appearance and/or affect is ripe for unintended consequences. The dystopian train of thought is easy to follow. It demands that we consider normative questions about facial recognition technology.

 

Who should be allowed to use facial recognition technologies? When are they allowed to use them? Under what conditions can users of facial recognition technology store, share, and sell information?

 

The goal should be to keep facial recognition technology from doing harm. America has a disturbing dearth of regulation designed to protect citizens from ne’er-do-wells who have access to this technology. We should change that.

 

These normative questions can guide our future policy on the subject. At the very least, they should help us start thinking about cogent guidelines for the future use of facial recognition technology. The “gaydar” might not be cause for immediate alarm, but its implications are certainly worth a second thought. I’d recommend thinking on this sooner, rather than later.


Health in the Fast Lane: FDA’s Effort to Streamline Digital Health Technology Approval

Alex Eschenroeder, MJLST Staffer

 

The U.S. Food and Drug Administration (FDA) is testing a fast-track approval program to see if it can accommodate the pace of innovation in the technology industry and encourage more ventures into the digital health technology space. Scott Gottlieb, M.D., Commissioner of the FDA, announced the fast-track pilot program—officially named the “Pre-Cert for Software Pilot Program” (Program)—on July 27, 2017. Last week, the FDA announced the names of the nine companies it selected out of more than 100 applicants to participate in the Program. Companies that made it onto the participant list include tech giants such as Apple and Samsung, as well as Verily Life Sciences—a subsidiary of Alphabet, Inc. The FDA also listed smaller startups, indicating that it intends to learn from entities at various stages of development.

The idea that attracted applicants from across the technology industry to the Program is roughly analogous to the TSA Pre-Check program. With TSA Pre-Check certification, travelers at airports get exclusive access to less intensive pre-boarding security procedures because they submitted to an official background check (among other requirements) well before their trip. Here, the FDA Program completes extensive vetting of participating technology companies well before they bring a specific digital health technology product to market. As Dr. Gottlieb explained in the July Program announcement, “Our new, voluntary pilot program will enable us to develop a tailored approach toward this technology by looking first at the . . . developer, rather than primarily at the product (as we currently do for traditional medical products).” If the FDA determines through its review that a company meets the necessary quality standards, it can pre-certify the company. A pre-certified company would then need to submit less information to the FDA “than is currently required before marketing a new digital health tool.” The FDA even proposed the possibility of a pre-certified company skipping pre-market review for certain products, as long as the company immediately started collecting post-market data for the FDA to confirm safety and effectiveness.

While “digital health technology” does not have a simple definition, a recently announced Apple initiative illustrates what the term can mean and how the FDA Program could encourage its innovation. Specifically, Apple recently announced plans to undertake a Heart Study in collaboration with Stanford Medicine. Through this study, researchers will use “data from Apple Watch to identify irregular heart rhythms, including those from potentially serious heart conditions like atrial fibrillation.” Positive research results could encourage Apple, which “wants the Watch to be able to detect common heart conditions such as atrial fibrillation,” to move further into FDA-regulated territory. Indeed, Apple has been working with the FDA, apart from the Program, to organize the Heart Study. This is a critical development, as Apple has to date intentionally limited Watch sensors to “fitness trackers and heart rate monitors” to avoid FDA regulation. If Apple receives pre-certification through the Program, it could issue updates to a sophisticated heart monitoring app, or issue an entirely different diagnostic app, with little or no FDA pre-market review. This dynamic would encourage Apple, and companies like it, to innovate in digital health technology and create increasingly sophisticated tools to protect consumer health.
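Apple has not published the algorithm the Watch or the Heart Study will use, but a toy sketch suggests the flavor of such a tool: flag a wearer when the variability of inter-beat intervals, derived from pulse data, exceeds a threshold. Everything below, from the threshold to the sample data, is a hypothetical illustration.

```python
# Toy sketch of one way a wearable might flag an irregular heart rhythm:
# compute the variability of inter-beat intervals and flag the wearer when
# it exceeds a threshold. This is a hypothetical illustration only -- Apple
# has not published the algorithm the Watch or the Heart Study actually uses.

def flag_irregular_rhythm(beat_times, threshold=0.15):
    """Flag if the coefficient of variation of inter-beat intervals is high."""
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    mean = sum(intervals) / len(intervals)
    variance = sum((i - mean) ** 2 for i in intervals) / len(intervals)
    cv = (variance ** 0.5) / mean        # coefficient of variation
    return cv > threshold

steady = [0.0, 0.8, 1.6, 2.4, 3.2, 4.0]    # evenly spaced beats, ~75 bpm
erratic = [0.0, 0.5, 1.6, 2.0, 3.3, 3.7]   # irregular spacing

print(flag_irregular_rhythm(steady))    # False
print(flag_irregular_rhythm(erratic))   # True
```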


The Future of AI in Self-Driving Vehicles

Kevin Boyle, MJLST Staffer

 

Last week, artificial intelligence (AI) made a big splash in the news after Russian President Vladimir Putin and tech billionaire Elon Musk both commented on the subject. Putin stated that whoever becomes the leader in AI will become “the ruler of the world.” Elon Musk followed up Putin’s comments by declaring that competition for AI superiority between nations will most likely be the cause of World War III. These bold predictions grabbed the headlines, but in the same week, Lyft announced a new partnership with a company that produces AI for self-driving cars, and the House passed the SELF DRIVE Act. The Lyft deal and the House bill are positive signs for investors in the autonomous vehicle industry; however, the legal landscape remains uncertain. As Putin and Musk have predicted, AI is certain to have a major impact on our future, but legal hurdles must be cleared before AI’s applications in self-driving vehicles can reach their potential.

 

One legal hurdle that currently exists is the patchwork of varying state and federal laws. For example, companies such as Google and Ford would like to introduce cars with no pedals or steering wheels that are operated entirely by AI. However, most states still require that a human driver be able to take “active physical control” of the car to manually override the autonomous system. That requires a steering wheel and brakes, which would make those cars illegal to operate. At the federal level, the FAA requires that commercial drones be flown by certified operators, not computers or AI. Requiring operators instead of AI to steer drones for deliveries severely limits the potential of this innovative technology. Furthermore, international treaties, including the Geneva Convention, need to be addressed before we see fully autonomous cars.

 

The bipartisan SELF DRIVE Act recently passed by the House attempts to address most of the issues created by the patchwork of local, state, and federal regulations so that AI in self-driving cars can reach its potential. The House bill proposed clear guidelines for car manufacturers, clarified the role of the NHTSA in regulating automated driving systems, and detailed cybersecurity requirements for automated vehicles. The Senate, however, is drafting its own version of the SELF DRIVE Act. This week, the Senate Commerce, Science, and Transportation Committee will convene a hearing on automated safety technology in self-driving vehicles and its potential impacts on the economy. The committee will hear testimony from car manufacturers, public interest groups, and labor unions. Some of these groups will inevitably lobby against the bill and self-driving technology for fear of a potentially devastating impact on jobs in some industries. But ideally, the Senate bill will stick to the fundamentals of the House bill, which focuses on prioritizing safety, strengthening cybersecurity, and promoting the continued innovation of AI in autonomous vehicles.

 

Several legal obstacles still stand in the way of implementing AI in automated vehicles. Congress’ SELF DRIVE Act has the potential to be a step in the right direction, and the Senate should maintain the basic elements of the bill passed in the House to help advance the use of innovative AI technology in self-driving cars. Unlike Musk, Mark Zuckerberg has taken a stance similar to the auto industry’s, and believes AI will bring about countless “improvements in the quality of our lives,” especially in its application to self-driving vehicles.



Say Goodbye to Net Neutrality: Why FCC Protection of the Open Internet Is Over

Kristin McGaver, MJLST Guest Blogger

[Editor’s Note: Ms. McGaver’s blog topic serves as a nice preview for two articles being published in this Spring’s Issue 18.2, one on the FCC generally by researchers Brent Skorup and Joe Kane, and one on the Open Internet Order more specifically by MJLST Staffer Paul Gaus.]

Net neutrality is a complex issue at the forefront of many current online regulation debates. In these debates, it is often unclear what the concept of “net neutrality” actually entails, which parties and actors it affects, and how many different approaches to its regulation exist. Nevertheless, Ajit Pai—newly appointed chairman of the United States Federal Communications Commission (“FCC”)—thinks “the issue is pretty simple.” Pai is openly opposed to net neutrality and has publicly signaled that he does not intend to use his new position to enforce the FCC’s current regulations on the issue. This is troubling to many net neutrality supporters. Open Internet advocates are rightfully concerned that Pai will undo the gains achieved under former President Obama, which culminated in the FCC’s 2015 “Protecting and Promoting the Open Internet” Regulation. With Pai at the FCC helm, net neutrality policy in the United States (“US”) is noticeably in flux. Thus, even though official policies protecting net neutrality exist on the books, the circumstances surrounding their enforcement and longevity leave much gray area to be explored, chiseled out, and set into stone.

Net neutrality is the idea that all Internet traffic should be treated equally. Yet since Tim Wu coined the term in 2003, scholars and commentators have been unable to agree on a standard definition, because that very definition sits at the base of a multi-layered, over-arching debate. In the US, the most recent FCC articulation of net neutrality is defined by three principles—“no blocking, no throttling and no paid prioriti[z]ation.” These principles mean that ISPs should not be allowed to charge companies or websites higher rates for speedier connections, or charge users higher amounts for specific services. The “bright-line” rules forbid ISPs from restricting access, tampering with Internet traffic, or favoring certain kinds of traffic via the use of “fast lanes.” Notably, one thing the 2015 Regulation did not completely forbid is “zero-rating,” or “the practice of allowing customers to consume content from certain platforms without it counting towards their data plan cap”—a practice many see as violating net neutrality. Even with this and other exceptions, the 2015 Regulation did not pass without resistance: Republican Senator Ted Cruz of Texas tweeted that it was “Obamacare for the Internet.”
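A toy sketch may help make “paid prioritization” concrete at the packet-scheduling level; the sources, fast-lane set, and ordering rule below are purely illustrative assumptions, not a model of any actual ISP.

```python
# Toy sketch of what "paid prioritization" means at the scheduling level:
# a neutral ISP serves packets in arrival order, while a non-neutral one
# lets traffic from paying sources jump the queue. Purely illustrative.

packets = [("site_A", 1), ("site_B", 2), ("site_A", 3)]  # (source, arrival order)
fast_lane = {"site_B"}   # hypothetical paying customer

def neutral(packets):
    """No blocking, no throttling, no paid prioritization: serve in arrival order."""
    return list(packets)

def paid_prioritization(packets):
    """Non-neutral: traffic from fast-lane sources is served before everyone else."""
    return sorted(packets, key=lambda p: p[0] not in fast_lane)

print(neutral(packets))              # arrival order preserved
print(paid_prioritization(packets))  # site_B's packet served first
```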

Additionally, net neutrality supporters and the FCC majority did not have long to bask in their success after the 2015 Regulation’s approval. The United States Telecom Association and Alamo Broadband quickly challenged it in court. Because the new regulation re-classified ISPs as common carriers, making them subject to the FCC’s authority, the challengers claimed that the FCC was overreaching, harming businesses, and impeding innovation in the field. Fortunately for the FCC, the United States Court of Appeals for the District of Columbia Circuit upheld the 2015 Regulation in a 2-to-1 decision.

Yet the waves are far from settled for the FCC and net neutrality supporters in the US. Following the D.C. Circuit’s 2016 decision, AT&T and other members of the cable and telecom industry signaled an intent to continue the challenge, potentially all the way to the Supreme Court. More importantly, the lead dissenter to the 2015 Regulation is now chairman of the FCC. In his first few months as chairman, Ajit Pai declined to comment on whether the FCC plans to enforce the 2015 Regulation. Pai’s “no comment” does not look promising for net neutrality, or for those hoping the US will maintain its intent to protect the open Internet as articulated in the 2015 Regulation.

Although the 2015 Regulation remains on the books, the likelihood that it is carefully enforced, or really enforced at all, is pretty low. This leaves a total lack of accountability for breaching ISPs. Achieving a policy that is not entirely spineless is admittedly complicated in the context of an Internet that is constantly evolving and a market increasingly dominated by just a few ISPs. But effective policies are not impossible, as evidenced by the success of the European Union and several of its member states in setting policies that protect and promote net neutrality. It is clear from these examples that effective net neutrality regulation requires setting, maintaining, and enforcing official articulations of policy. With a clear signal from the FCC chairman that he will back away from enforcing the set policy, however, it will be as if no regulation exists at all.


What’s in that? The Dilemma of Artificial Flavor, Natural Flavor & Artificial Color

Zach Berger, MJLST Executive Editor

By law, most food is required to display nutritional information, and if a product bears nutrient content or health messages, it must comply with specific requirements. However, as J.C. Horvath asked in volume 13 of MJLST, do these requirements really help consumers? For example, how often do you see “contains artificial flavor” or something similar listed on your groceries? Non-descriptive descriptor phrases such as “artificial flavor,” “natural flavor,” and “artificial color” are common on food labels, yet do not help the average consumer. These phrases can substitute for over 3,900 different food additives. The difference between artificial and natural flavors is much more technical than meaningful, as both contain chemicals; the distinction comes only from the source of the chemicals. In reality, there is little difference between the two, as both are made in a laboratory by a trained professional, a “flavorist,” who blends appropriate chemicals together in the right proportions.

The Food and Drug Administration (FDA) does regulate these additives, but once a substance is Generally Recognized as Safe (GRAS) it may be added to anything without further testing for unexpected chemical interactions with other ingredients. Examples of ingredients that fall under GRAS[1] range from beef tallow, lard, and gelatin to ambergris, a “waxy substance generated in the digestive system of and regurgitated by sperm whales,” and L-cysteine, “a dough conditioner often derived from duck feathers or human hair.” Basically, these non-descriptive descriptors don’t tell the consumer anything useful, so why are companies allowed to use these stand-ins?

The food industry is generally reluctant to release all of its ingredients, in order to prevent competitors from easily replicating products. However, “the information that would actually be useful to consumers tends to be categorical information”: things such as whether or not the product conflicts with dietary restrictions or contains artificial hormones or genetically engineered ingredients. The goal of food labeling is clarity for the consumer, and the non-descriptive descriptor phrases are anything but clear; for the average consumer, they may as well not even be on the packaging. To make labeling more informative, Horvath recommended “FDA-mandated universal allergen warnings and front-of-pack labels to better educate consumers.” Whatever the solution, it is time to end the use of non-descriptive descriptors.

[1] 21 C.F.R. 182.1–.99



Autonomous Weapon Systems: Legal Responsibility for the Terminator

Ethan Konschuh, MJLST Staffer

While technological progress has been the hallmark of the twenty-first century, the advance has been especially drastic in weapons technology. As combatants in armed conflicts rely more and more heavily on automated systems in pursuit of safety, efficiency, and effectiveness on the battlefield, the international law governing the use of force in armed conflicts is under threat of becoming outdated.

International law governing the application of force in conflicts is premised on notions of control. Humans have traditionally been the masters of their weapons: “A sword never kills anybody; it is a tool in a killer’s hand.” However, as automation in weapons increases, this relationship is becoming tenuous—so much so that some believe there is no longer enough control to levy responsibility on anyone for the consequences of these weapons’ use. These actors are calling for a preemptive ban on the technology to avoid the possibility that moral responsibility for war crimes will be offloaded. Others, however, believe that frameworks are available that can prevent this gap in responsibility and allow for the realization of the aforementioned benefits of using autonomous machines on the battlefield.

Three general categories of policies have been proposed for regulating these machines. The first, advanced by Human Rights Watch (HRW), the International Committee for Robot Arms Control (ICRAC), the International Committee of the Red Cross (ICRC), and other NGOs and humanitarian organizations, calls for a preemptive ban on all autonomous weapons technology, on the belief that human input should be a prerequisite for any targeting or attacking decision. The second regulatory regime has been espoused by, among others, the United Kingdom and Northern Ireland, which claim that there would be no military utility in employing autonomous weapon systems and agree never to use them, effectively accepting a ban. However, the way they define autonomous weapon systems belies their conviction. The definition put forth by these actors defines autonomous weapon systems in a way that effectively regulates nothing:

“The UK understands [an autonomous weapon system] to be one which is capable of understanding, interpreting and applying higher level intent and direction based on a precise understanding and appreciation of what a commander intends to do and why. From this understanding, as well as a sophisticated perception of its environment and the context in which it is operating, such a system would decide to take – or abort – appropriate actions to bring about a desired end state, without human oversight, although a human may still be present.”

This definition sets the threshold of autonomy so high that no technology that currently exists, or is likely ever to exist, would fall within its purview. The third policy framework was put forth by the United States Department of Defense. This policy addresses fully autonomous weapon systems (no human action connected to targeting or attacking decisions), semi-autonomous weapon systems (the weapon depends on humans to determine the type and category of targets to be engaged), and human-supervised autonomous weapon systems (the weapon can target and attack, but a human can intervene if necessary). It bans all fully autonomous weapon systems, but allows weapons that can target and attack as long as there is human supervision with the ability to intervene if necessary.

The debate over how to regulate this type of weapons technology continues to gain traction as advances approach the threshold of autonomy. I believe the U.S. policy is the best available option to prevent the responsibility gap while preserving the benefits of automated weapons technology, but others disagree. Whichever policy is ultimately chosen, hopefully an international agreement will be reached before it is too late and your favorite sci-fi movies become all too realistic.