2014

The Data Dilemma for Cell Phone Carriers: To Throttle or Not to Throttle? FTC Seeks to Answer by Suing AT&T Over Speed Limitations for Wireless Customers

Benjamin Borden, MJLST Staff Member

Connecting to the Internet from a mobile device is an invaluable freedom in the modern age. That essential BuzzFeed quiz, artsy Instagram picture, or new request on Friendster is available in an instant. But suddenly, and often without warning, nothing is loading, everything is buffering, and your once-treasured piece of hand-held computing brilliance is no better than a cordless phone. Is it broken? Did the satellites fall from the sky? Did I accidentally pick up my friend’s BlackBerry? All appropriate questions. The explanation behind these dreadfully slow speeds, however, is more often than not data throttling, courtesy of wireless service providers. This phenomenon arises from the use of unlimited data plans on the nation’s largest cell phone carriers. Carriers such as AT&T and Verizon phased out their unlimited data plans in 2010 and 2011, respectively, just a few years after requiring unlimited data plans with new smartphone purchases. Wireless companies argue that tiered data plans offer more flexibility and better value for consumers, while others suggest that the refusal to offer unlimited data plans is motivated by a desire to increase revenue by selling to data-hungry consumers.

Despite no longer offering unlimited data plans to new customers, AT&T has allowed customers who previously signed up for these plans to continue that service. Verizon also allows users to continue, but refuses to offer discounts on new phones if they keep unlimited plans. Grandfathering these users into unlimited data plans, however, meant that wireless companies had millions of customers able to stream movies, download music, and post to social media without restraint and, more importantly, without a surcharge. Naturally, this was deemed to be too much freedom. So, data throttling was born. Once a user of an unlimited data plan goes over a certain download threshold, 3-5 GB for AT&T in a billing month, that user’s speed is lowered by 80-90% (to 0.15 Mbps in my experience). This speed limit makes even the simplest of smartphone functions an exercise in patience.
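To put those numbers in perspective, here is a minimal sketch of the kind of usage-threshold rule described above. The 5 GB cap and the roughly 0.15 Mbps throttled speed come from the figures in this post; the 6 Mbps unthrottled speed and the 2 MB page size are assumptions added for illustration, and this is not AT&T’s actual implementation.

```python
# Illustrative sketch of a usage-threshold throttling rule (not AT&T's actual system).
# The 5 GB cap and ~0.15 Mbps throttled speed come from the figures above;
# the 6 Mbps "normal" speed and 2 MB page size are assumptions for the example.

THRESHOLD_GB = 5.0           # monthly usage before throttling kicks in
NORMAL_SPEED_MBPS = 6.0      # hypothetical unthrottled speed
THROTTLED_SPEED_MBPS = 0.15  # roughly the throttled speed reported above


def allowed_speed(usage_gb: float) -> float:
    """Return the speed (in Mbps) a subscriber gets at a given monthly usage."""
    return NORMAL_SPEED_MBPS if usage_gb < THRESHOLD_GB else THROTTLED_SPEED_MBPS


def seconds_to_load(page_size_mb: float, usage_gb: float) -> float:
    """Rough load time for a page of the given size at the current speed."""
    return (page_size_mb * 8) / allowed_speed(usage_gb)  # megabits / (megabits per second)


if __name__ == "__main__":
    print(round(seconds_to_load(2.0, usage_gb=3.0), 1), "seconds before the cap")  # ~2.7
    print(round(seconds_to_load(2.0, usage_gb=6.0), 1), "seconds after the cap")   # ~106.7
```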

I experienced this data throttling firsthand and found myself consistently questioning where my so-called unlimited data had escaped to. Things I took for granted, like using Google Maps to find the closest ice cream shop, were suddenly ordeals taking minutes rather than seconds. Searching Wikipedia to settle that argument with a friend about the plot of Home Alone 4? Minutes. Requesting an Uber? Minutes. Downloading the new Taylor Swift album? Forget about it.

The Federal Trade Commission (FTC) understands this pain and wants to recoup the losses of consumers who were allegedly duped by the promise of unlimited data, only to have their usage capped. As a result, the FTC is suing AT&T for misleading millions of consumers about unlimited data plans. After recently consulting with the Federal Communications Commission (FCC), Verizon decided to abandon its data throttling plans. AT&T and Verizon argue that data throttling is a necessary component of network management. The companies suggest that without throttling, carrier service might become interrupted because of heavy data usage by a small group of customers.
AT&T had the opportunity to settle with the FTC, but indicated that it had done nothing wrong and would fight the case in court. AT&T contends that its wireless service contracts clearly informed consumers of the data throttling policy and that those customers still signed up for the service. Furthermore, there are other cellular service options for consumers who are dissatisfied with AT&T’s terms. These arguments are unlikely to provide much solace to wireless customers shackled to dial-up-level speeds.
If there is a silver lining, though, it is this: with my phone acting as a paperweight, I asked those around me for restaurant recommendations rather than turning to Yelp, I got a better understanding of my neighborhood by finding my own way rather than following the blue dot on my screen, and I didn’t think about looking at my phone when having dinner with someone. I was proud. Part of me even wanted to thank AT&T. The only problem? I couldn’t tweet @ATT to send my thanks.


Open Patenting, Innovation, and the Release of the Tesla Patents

Blake Vettel, MJLST Staff Member

In Volume 14, Issue 2 of the Minnesota Journal of Law, Science & Technology, Mariateresa Maggiolino and Marie Lillá Montagnani proposed a framework for standardized terms and conditions for Open Patenting. This framework set forth a standard system for patent holders of all sizes to license their patents in a way that encourages open innovation and is easy to administer. Maggiolino and Montagnani argued for an open patenting scheme in which the patent owner would irrevocably spread its patented knowledge worldwide, based on non-exclusive and no-charge licensing. Furthermore, the licensing system would be centrally operated online and would allow the patentee to customize certain clauses in the licensing agreement while maintaining a few compulsory clauses, such as a non-assertion pledge, that would keep the license open.

On June 12, 2014, Elon Musk, CEO of Tesla Motors, shocked the business world by announcing via blog post that “Tesla will not initiate patent lawsuits against anyone who, in good faith, wants to use our technology.” Musk described opening Tesla’s patents to others as a way to encourage innovation and growth within the electric car market, and depicted Tesla’s true competition as gasoline cars rather than electric competitors. By allowing use of its patented technology, Tesla hopes to develop the electric car market and encourage innovation. Some commentators have been skeptical of the altruistic motive behind releasing the patents, arguing that it may in fact be a move intended to entice other electric car manufacturers to produce cars compatible with Tesla’s patented charging stations, in an effort to develop the network of stations around the country.

However, Musk did not unequivocally release these patents; instead, he conditioned their subsequent use on “good faith.” What constitutes a good faith use of Tesla’s technology is not clear, and Tesla could have instead opted for a standardized licensing system like the one proposed by Maggiolino and Montagnani. A clear, standardized licensing scheme with compulsory clauses designed to encourage the free movement of patented technology and spur innovation may have been more effective in promoting use of Tesla’s patents. An inventor who wants to use Tesla’s patents may be hesitant to rely on Musk’s promise not to initiate lawsuits, whereas he could be much more confident of his right to use the patented technology under a licensing agreement. The extent to which Tesla’s patents will be used and their effect on the car market and open innovation remain to be seen, as does the true value of Tesla’s open innovation.


Scientific Responsibility: Why Lawyers are Imperative in Scientifically Informed Neuro-ethics

Thomas Hale-Kupiec, MJLST Staff Member

In Volume 11, Issue 1 of the Minnesota Journal of Law, Science & Technology, Eagleman et al. conclude in Why Neuroscience Matters for Rational Drug Policy that “the neuroscientific community should continue to develop rehabilitative strategies so that the legal community can take advantage of those strategies for a rational, customized approach.” Though this assertion may be valid in the context of drug addiction, I believe it must be limited solely to rehabilitative drug-addiction studies; extending the conclusion further would be sociologically detrimental. I postulate that, beyond questions of whom we define as a “neuroscientist,” legal experts need to be at the forefront of this debate in order to better define and formulate ideas of “rehabilitation.”

In a related reflection entitled ‘Smart Drugs’: Do they work? Are they ethical? Will they be legal?, researcher Stephen Rose raises a number of ethical and neurological questions about mind-enhancing substances. The author posits an interesting question: what is “normal” for a brain? If someone undergoes pharmacological manipulation, what should the standard be for “abnormal”? For instance, Rose suggests that some substances could be used to enhance cognition in patients with Down syndrome. Is that a valid designation of “abnormal”? Inexorably linked to this issue is Autism Spectrum Disorder — where on the spectrum does a cognitive “abnormality” manifest? Further, how do we define potentially less visible disorders such as “anxiety”? With this spectrum of diseases and mental health conditions, this variety of measured “abnormalities,” and varying pharmacological treatment effectiveness, we need to be mindful that neuroscientific constructions are often blurry but always need to be conceptualized within the paradigm of ethics.

More than ever, the questions of “what is abnormal” and “what mandates treatment” need to be addressed in pharmaceutical policy. For instance, federally designated controlled substances like marijuana may be effective at treating anxiety and other medical conditions. Should the legal community allow Eagleman’s assertion to snowball? Imagine that an increasing number of states embrace evidence that the active ingredients in marijuana can treat certain medical conditions. Should the scientific community alone judge the validity of those findings? Legal professionals, bioethicists, and regulators need to be included in these questions. The point is not simply that data-driven outcomes need to be pursued; rather, it is that a level of ethics and sociological morals needs to be layered above these decisions.


FCC Issues Notice of Proposed Rulemaking to Ensure an Open Internet, Endangers Mid-size E-Commerce Retailers

Emily Harrison, MJLST Staff

The United States Court of Appeals for the D.C. Circuit twice struck down key provisions of the Federal Communications Commission’s (FCC) orders regarding how to ensure an open Internet. The Commission’s latest articulation is its May 15, 2014 notice of proposed rulemaking, In the Matter of Protecting the Open Internet. According to the notice, the proposed rules seek to provide “broadly available, fast and robust Internet as a platform for economic growth, innovation, competition, free expression, and broadband investment and deployment.” The notice of proposed rulemaking includes legal standards previously affirmed by the D.C. Circuit in Verizon v. FCC, 740 F.3d 623 (D.C. Cir. 2014). For example, the FCC relies on Verizon in establishing how it can use Section 706 of the Telecommunications Act of 1996 as its source of authority to promulgate Open Internet rules. Additionally, Verizon explained how the FCC can employ a valid “commercially reasonable” standard to monitor the behavior of Internet service providers.

Critics of the FCC’s proposal for network neutrality argue that the proposed standards are insufficient to ensure an open Internet. The proposal arguably allows broadband carriers to offer “paid prioritization” services. The sale of this prioritization not only leads to “fast” and “slow” traffic lanes, but also allows broadband carriers to charge content providers for priority in “allocating the network’s shared resources,” such as the relatively scarce bandwidth between the Internet and an individual broadband subscriber.

Presuming that there is some merit to the critics’ arguments, if Internet Service Providers (ISPs) could charge certain e-commerce websites different rates for a faster connection to customers, the prioritized websites could gain a competitive advantage in the marketplace. Disadvantaged online retailers could see a relative decrease in their respective revenue. For example, without adequate net neutrality standards, an ISP could prioritize certain websites, such as Amazon or Target, and allow them optimal broadband speeds. Smaller and mid-sized retail stores may only have the capital to access a slower connection. As a result, customers would consistently have a better retail experience on the websites of larger retailers because of the speed with which they can view products or complete transactions. Therefore, insufficient net neutrality policies could potentially have a negative effect on the bottom line of many e-commerce retailers.
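To make the “fast lane” concern concrete, here is a minimal, hypothetical sketch of priority-weighted sharing of a scarce last-mile link. The tier weights, placeholder site names, and 25 Mbps capacity are invented for illustration; they do not describe any actual ISP’s practice or the FCC’s proposed rules.

```python
# Hypothetical sketch of priority-weighted sharing of a scarce last-mile link.
# The tier weights, site names, and 25 Mbps capacity are invented for illustration.

LINK_CAPACITY_MBPS = 25.0

# Each content source gets a weight; the "paid priority" source gets a larger one.
FLOWS = {
    "large-retailer.example (paid priority)": 3.0,
    "midsize-retailer.example": 1.0,
    "small-retailer.example": 1.0,
}


def allocate(flows: dict[str, float], capacity: float) -> dict[str, float]:
    """Split capacity in proportion to each flow's weight (weighted fair sharing)."""
    total_weight = sum(flows.values())
    return {name: capacity * weight / total_weight for name, weight in flows.items()}


if __name__ == "__main__":
    for name, share in allocate(FLOWS, LINK_CAPACITY_MBPS).items():
        print(f"{name}: {share:.1f} Mbps")
    # The paid-priority source receives 15.0 Mbps of the 25 Mbps link,
    # while each unprioritized retailer is left with 5.0 Mbps.
```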

Comments can be submitted in response to the FCC’s notice of proposed rulemaking at: http://www.fcc.gov/comments


Self-driving Vehicles are Coming

Spencer Peck, RA State and Local Policy Program, MJLST Guest Blogger

Self-driving vehicles are coming, possibly within the decade. But what exactly do drivers, laws and lawmakers, and local economies need to do to prepare for autonomous vehicles? On Friday, October 31, technical, legal, and policy experts will gather at the Humphrey School of Public Affairs to discuss exactly this. More information about the all-day conference, Autonomous Vehicles: The Legal and Policy Road Ahead, is available by following the link.

Self-driving vehicles (SDVs) are the future of automotive transportation. Driverless cars are often discussed as a “disruptive technology” with the ability to transform transportation infrastructure, expand access, and deliver benefits to a variety of users. Some observers estimate limited availability of driverless cars by 2020, with wide availability to the public by 2040. Recent announcements by Google and major automakers indicate huge potential for development in this area. In fact, an Audi RS7 recently self-piloted around the famous Hockenheimring race track. The fully autonomous car reached 150 mph and even recorded a lap that was 5 seconds faster than a human competitor! The federal automotive regulator, the National Highway Traffic Safety Administration (NHTSA), issued a policy statement about the potential of self-driving cars and future regulatory activity in mid-2013. The year 2020 is the most often quoted time frame for the availability of the next level of self-driving vehicles, with wider adoption expected between 2040 and 2050. However, there are many obstacles to overcome to make this technology viable, widely available, and permissible. These include developing technology affordable enough for the consumer market, creating a framework to deal with legal and insurance challenges, adapting roadways to vehicle use if necessary, and addressing issues of driver trust and adoption of the new technology. There is even some question as to who will be considered the ‘driver’ in the self-driving realm.

Although self-driving cars are few and far between, the technology is becoming ever more present and legally accepted. For example, NHTSA requires all newly manufactured cars to have at least a low level of autonomous vehicle technology. Some scholars even suggest that self-driving vehicles are legal under existing legal frameworks. Five states have some form of legislation expressly allowing self-driving cars or the testing of such vehicles within state boundaries. In fact, two states–California and Nevada–have even issued comprehensive regulations for both private use and testing of self-driving vehicles. Several companies, most notably Google (which drove over 500,000 miles on its original prototype vehicles), are aggressively pursuing the technology and advocating for legal changes in favor of SDVs. Automotive manufacturers from Bosch to Mercedes to Tesla are all pursuing the technology, and frequently provide updates on their self-driving car plans and projects.

The substantial benefits derived from SDVs are hard to ignore. By far the greatest implication referenced by those in the field relates to safety and convenience. NHTSA’s 2008 Crash Causation Survey found that close to 90% of crashes are caused by driver mistakes. These mistakes, which include distractions, excessive speed, disobedience of traffic rules or norms, and misjudgment of road conditions, are factors within the control of the driver. Roadway capacity improvement often means improvements in throughput, the maximum number of cars per hour per lane on a roadway, but can extend to other capacity concerns. Other hypothesized improvements include fewer necessary lanes due to increased throughput, narrower lanes because of the accuracy and driving control of SDVs, and a reduction in infrastructure wear and tear through fewer crashes. While supplemental transportation programs and senior shuttles have provided needed services in recent decades, SDVs have the ability to expand the user base of cars to those who would normally be unable to physically drive. The elderly, disabled, and even children may be beneficiaries.


Is the US Ready for the Next Cyber Terror Attack?

Ian Blodger, MJLST Staff Member

The US’s military intervention against ISIL carries with it a high risk of cyber-terror attacks. The FBI reported that ISIL and other terrorist organizations may turn to cyber attacks against the US in response to the US’s military engagement of ISIL. While no specific targets have been confirmed, likely attacks could range from website defacement to denial-of-service attacks. Luckily, recent cyber terror attacks attempting to destabilize the US power grid failed, but next time we may not be so lucky. Susan Brenner’s recent article, Cyber-threats and the Limits of Bureaucratic Control, published in Volume 14, Issue 1 of the Minnesota Journal of Law, Science & Technology, describes the structural reasons for the US’s vulnerability to cyber attacks and offers one possible solution to the problem.

Brenner argues that the traditional methods of investigation do not work well when it comes to cyber attacks. This ineffectiveness results from the obscured origin and often hidden underlying purpose of an attack, both of which are crucial in determining whether a law enforcement or military response is necessary. The impairment leads to problems in assessing which agency should control the investigation and response. A nation’s security from external attackers depends, in part, on its ability to present an effective deterrent to would-be attackers. In the case of cyber attacks, however, the US’s confusion over which agency should respond often precludes an efficient response.

Brenner argues that these problems are not transitory, but will increase in direct proportion to our reliance on complex technology. The current steps taken by the US are unlikely to solve the issue, since they do not address the underlying problem and instead continue to approach cyber terrorists as conventional attackers. Concluding that top-down command structures are unable to respond effectively to the threat of cyber attacks, Brenner suggests a return to a more primitive mode of defense. Rather than trusting the government to ensure the safety of the populace, Brenner suggests citizens should work with the government to ensure their own safety. This decentralized approach, modeled on British town defenses after the fall of the Roman Empire, may avoid the pitfalls of the bureaucratic approach to cyber security.

There are some issues with this proposed model for cyber security, however. Small British towns during the early Middle Ages may have been able to ward off attackers through an active, citizen-based defense, but the anonymity of the Internet makes this approach challenging when applied to a digitized battlefield. Small British towns were able to easily identify threats because they knew who lived in the area. The Internet, as Brenner concedes, makes it difficult to determine to whom any given person pays allegiance. Presumably, Brenner theorizes that individuals would simply respond to attacks on their own information, or enlist the help of others to fend off attacks. However, the anonymity of the Internet would mean utter chaos in mounting a collective defense. For example, an ISIL cyber terrorist could likely organize a collective US citizen response against a passive target by claiming they were attacked. Likewise, groups mounting pre-emptive attacks against cyber terrorist organizations could be disrupted by other US groups that do not recognize the pre-emptive cyber strike as a defensive measure. This simply shows that the analogy between the defenses of a primitive British town and the Internet is incomplete.

Brenner may argue that her alternative simply calls for individuals, corporations, and groups to build up their own defenses and protect themselves from impending cyber threats. While this approach would avoid the problems inherent in a bureaucratic approach, it ignores the fact that these groups are currently unable to protect themselves. Shifting these groups’ understanding of their responsibility for self-defense may spur innovation and increase investment in cyber protection, but this will likely be insufficient to stop a determined cyber attack. Large corporations like Apple, JPMorgan, Target, and others often hemorrhage confidential information as a result of cyber attacks, even though they have large financial incentives to protect that information. This suggests that an individualized approach to cyber protection would also likely fail.

With the threat of ISIL increasing, it is time for the United States to take additional steps to reduce the threat of a cyber terror attack. At this initial stage, the inefficiencies of bureaucratic action will result in a delayed response to large-scale cyber terror attacks. While allowing private citizens to band together for their own protection may have some advantages over government inefficiency, this too likely would not solve all cyber security problems.


The Benefits of Juries

Steven Groschen, MJLST Staff Member

Nearly 180 years ago, Alexis de Tocqueville postulated that jury duty was beneficial to those who participated. In an often-quoted passage of Democracy in America, he stated that “I do not know whether the jury is useful to those who have lawsuits, but I am certain it is highly beneficial to those who judge them.” Since that time, many commentators, including the United States Supreme Court, have echoed this belief. Although this position possesses a strong intuitive appeal, it is necessary to ask whether there is any evidentiary basis to support it. Until recently, the scientific evidence on the effects of serving on a jury was scant. Fortunately for proponents of the jury system, the research of John Gastil is building a scientific basis for the positive effects of jury duty.

One of Gastil’s most extensive studies focused on finding a correlation between serving on a jury and subsequent voting patterns. For purposes of the study, Gastil and his colleagues compiled a large sample of jurors from eight counties across the United States. Next, the research team gathered voting records for the jurors in the sample, examining each juror’s voting patterns for the five years before and after jury service. Finally, regression analyses were performed on the data, and some interesting effects were discovered. Individuals who were infrequent voters prior to serving as jurors on a criminal trial were 4-7% more likely to vote after serving. Interestingly, this effect held for the group of previously infrequent voters regardless of the verdict reached in the criminal trials they served on. Further, for hung juries the effect was even stronger.
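As a rough illustration of the kind of before-and-after turnout comparison behind that 4-7% figure, here is a minimal sketch with invented numbers. It is not Gastil’s data, and his actual analysis used regression models with controls rather than a raw difference in rates; the group sizes and turnout rates below are hypothetical.

```python
# Minimal, hypothetical sketch of a before/after turnout comparison.
# These are invented numbers, not Gastil's data; his analysis used
# regression models with controls rather than a raw difference in rates.

def turnout_rate(voted_flags: list[bool]) -> float:
    """Fraction of a group that voted after the comparison point."""
    return sum(voted_flags) / len(voted_flags)


# Previously infrequent voters who deliberated on a criminal jury (hypothetical).
served = [True] * 46 + [False] * 54       # 46 of 100 voted afterward
# Previously infrequent voters in a comparison group (hypothetical).
comparison = [True] * 41 + [False] * 59   # 41 of 100 voted afterward

if __name__ == "__main__":
    diff = turnout_rate(served) - turnout_rate(comparison)
    print(f"Turnout difference: {diff:+.1%}")  # +5.0%, in the ballpark of the 4-7% effect
```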

Despite these findings, the jury is still out on whether the scientific evidence is substantial enough to support the historically asserted benefits of jury duty. More evidence is certainly needed; however, important policy questions regarding jury duty are already implicated. As researchers begin correlating jury participation with more aspects of civic life, there remains a possibility that negative effects of serving on a jury may be discovered. Would such findings serve as a rationale for modifying the jury selection process in order to exclude those who might be negatively affected? More importantly, do further findings of positive effects suggest that more protections are needed during the voir dire process to ensure certain classes are not excluded from serving on a jury and thus from receiving those benefits?


America’s First Flu Season Under the ACA

Allison Kvien, MJLST Staff Member

Have you seen the “flu shots today” signs outside your local grocery stores yet? Looked at any maps tracking where in the United States flu outbreaks are occurring? Gotten a flu shot? This year’s flu season is quickly approaching, and with it may come many implications for the future of health care in this country. This year marks the first year with the Patient Protection and Affordable Care Act (ACA) in full effect, so thousands of people in the country will get their first taste of the ACA’s health care benefits in the upcoming months. The L.A. Times reported that nearly 10 million previously uninsured people now have coverage under the ACA. Though there might still be debate between opponents and proponents of the ACA, the ACA has already survived a Supreme Court challenge and is well on its way to becoming a durable feature of the American healthcare system. Will the upcoming flu season prove to be any more of a challenge?

In a recent article entitled “Developing a Durable Right to Health Care,” in Volume 14, Issue 1 of the Minnesota Journal of Law, Science & Technology, Erin Brown examined the durability of the ACA going forward. Brown explained that “[a]mong its many provisions, the ACA’s most significant is one that creates a right to health care in this country for the uninsured.” Another provision of the ACA is an “essential benefits package,” in which Congress included “preventative and wellness services,” presumably including flu shots. For those who will be relying on the health care provided under the ACA in the upcoming flu season, it may also be important to understand where the ACA’s vulnerabilities lie. Brown posited that the vulnerabilities are concentrated mostly in the early years of the statute, and that the federal right to health care may strengthen as the benefits take hold. How will the end of the ACA’s first year go? This is a very important question for many Americans, and Brown’s article examines several other questions that might be on the minds of millions in the upcoming months.


Infinite? In the Political Realm, The Internet May Not be Big Enough for Everyone

Will Orlady, MJLST Staff Member

The Internet is infinite. At least, that’s what I thought. But Ashley Parker, a New York Times reporter, doesn’t agree. When it comes to political ad space, our worldwide information hub may not be the panacea politicians hoped for this election season.

Parker based her argument on two premises. First, not all Internet content providers are equal, at least when it comes to attracting Internet traffic. Second, politicians–especially those in “big” elections–wish to reach more people, motivating their campaigns to run ads on major content hubs such as YouTube.

But sites like YouTube can handle heavy network traffic. And, for the most part, political constituents do not increase site traffic for the purpose of viewing (or hearing) political ads. So what serves to limit a site’s ad space if not its own physical technology that facilitates the site’s user experience? Parker contends that the issue is not new: it’s merely a function of supply and demand.

Ad space on so-called premium video streaming sites like YouTube is broken down into two categories: ads that can be skipped (“skip-able” ads) and ads that must be played in their entirety before the viewer reaches the desired content (“reserved by” ads). The former are sold at auction without a fixed cap on quantity, but the price of each ad impression increases with demand. The latter are innately more expensive, but can be strategically purchased for reserved time slots, much like television ad space.

Skip-able ads are available for purchase without regard to number, but they are limited by price and by desirability. Because they are sold at auction, Parker contends that their value can increase ten-fold in times of high demand (during a political campaign, for example). Skip-able ads are, however, most seriously limited by their lack of desirability. Assuming, as I believe it is fair to do here, that most Internet users actually skip the skip-able ads, advertising purchasers would be incentivized to purchase a site’s “reserved by” advertising space.
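As a toy illustration of that auction dynamic, the sketch below shows how the clearing price for a single ad impression climbs as more bidders enter. The second-price rule and the dollar figures are invented for illustration; they are not a description of YouTube’s actual ad system.

```python
# Toy sketch of auction pricing for a single ad impression: as more campaigns bid,
# the clearing price climbs. The second-price rule and dollar figures are purely
# illustrative; this is not a description of YouTube's actual ad system.

def clearing_price(bids: list[float]) -> float:
    """Winner pays the second-highest bid (one common auction rule)."""
    ordered = sorted(bids, reverse=True)
    return ordered[1] if len(ordered) > 1 else ordered[0]


if __name__ == "__main__":
    off_season = [0.02, 0.03]                          # dollars per impression
    campaign_season = [0.02, 0.03, 0.10, 0.25, 0.30]   # rival campaigns pile in
    print(f"Off-season price:      ${clearing_price(off_season):.2f}")       # $0.02
    print(f"Campaign-season price: ${clearing_price(campaign_season):.2f}")  # $0.25
    # More than a ten-fold jump, consistent with the ten-fold figure above.
```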

“Reserved by” ads are sold as their name indicates, by reservation. And if the price of certain Internet ad space is determined by time or geography, it is no longer fungible. Thus, because not all Internet ad space is the same in price, quality, and desirability, certain arenas of Internet advertising are finite.

Parker’s argument ends with the conclusion that political candidates will now compete for ad space on the Internet. This issue, however, is not necessarily problematic or novel. Elections have always been adversarial. And I am not convinced that limited Internet ad space adds to campaign vitriol. An argument could be made to the contrary: that limited ad space will confine candidates to spending resources on meaningful messages about election issues rather than smear tactics. Campaign tactics notwithstanding, I do not believe that the Internet’s limited ad space presents an issue distinct from campaign advertising in other media. Rather, Parker’s argument merely forces purchasers and consumers of such ad space to consider the fact that the Internet, as an advertising and political communication medium, may be more similar to existing media than some initially believed.


A Review of Replay Technology in Major League Baseball

Comi Sharif, Managing Editor

This week marks the end of the 2014 Major League Baseball regular season, and with it, the completion of the first regular season under the league’s expanded rules regarding the use of instant replay technology. Though MLB initially resisted utilizing instant replay, holding out longer than other American professional sports leagues, an agreement between team owners, the players association, and the umpires association produced a significant expansion of the use of replay technology beginning this season.

The expanded rules permit managers to “challenge” at least one call made by an umpire during a game. The types of calls allowed to be challenged are limited to objective plays such as whether a runner was safe or out at a base, or whether a fielder caught or “trapped” a batted ball. Subjective umpire calls, including calls regarding balls and strikes and “check” swings, are not reviewable. The complete set of MLB’s instant replay rules is available here.

As alluded to above, the process of going from the idea of instant replay in baseball to actual implementation was long and complex. First, rule changes must be collectively bargained by MLB and the players association (MLBPA). Thus, the expansion of instant replay had to be proposed during the recent collective bargaining agreement (CBA) discussions in 2011. What both sides agreed to was language in the CBA stating that, subject to approval by the umpires association, MLB could expand the use of instant replay. Second, after agreeing to the general idea of more instant replay, MLB developed specific rules and policies for instant replay, which had to be approved by the owners of the 30 MLB franchises. Once the owners approved the specific rules, which they did unanimously, the rules could finally be put into action. One issue to watch is how each of the different parties involved in the approval process reacts to the changes instant replay brings to the league. The current CBA expires in December 2016, at which time wholesale changes to the current instant replay system could be realized.

The replay technology used by MLB differs somewhat from that used by other professional sports leagues such as the National Basketball Association and the National Football League. In the NBA and NFL, referees or officials often view video replays of a contested call themselves, using technology located at the playing venue. MLB, however, created a “Replay Operation Center” (ROC), located at MLB headquarters in New York City, where a team of umpires reviews video replays and communicates a final ruling through headsets to the umpires on the field. Additionally, MLB permits each team to have a “video specialist” located in the clubhouse to watch for challengeable plays; the specialist can call the manager by phone to communicate whether or not a play should be challenged.

In one sense, the MLB system may be advantageous because it allows the ROC to have the best available technology, whereas the NBA and NFL have to adapt the sophistication of their replay systems to what can be deployed at every stadium and used by the referees or officials at the venue in real time. While NFL and NBA referees and officials typically look at one relatively small monitor when reviewing a play, the ROC houses 37 high-definition televisions, each of which can be subdivided into 12 smaller screens. Though this may not seem like a big deal to the casual observer, a number of calls are so close that the quality of the image available on replay can directly impact the call. One might conclude, then, that because MLB has more advanced technology at its disposal, its replay system is, in fact, more accurate. The MLB system does have its downsides, however. Outsourcing the review process can lead to lengthy delays and puts decisions in the hands of an umpire thousands of miles away from the action, which many find unappealing.

The site Retrosheet has a comprehensive collection of data on MLB’s replay system, including an entry for every play reviewed, its result, and the length of time taken for the review to be completed.

Overall, there are mixed reviews concerning the success of the expanded replay rules used in MLB this season. It is unclear exactly how MLB will adjust its system in the future, but if the current trend continues and increasingly effective technology becomes available, the impact of that technology on the sport of baseball is only likely to grow.