Autonomous Vehicles

Would Autonomous Vehicles (AVs) Interfere with Our Fourth Amendment Rights?

Thao Nguyen, MJLST Staffer

Traffic accidents are a major issue in the U.S. and around the world. Although car safety features are continuously enhanced and improved, traffic crashes remain a leading cause of non-natural death for U.S. citizens. Most of the time, the primary cause is human error rather than mechanical failure. Autonomous vehicles (“AVs”), automobiles that promise to operate themselves without a human driver, are therefore an exciting up-and-coming technology, studied and developed in both academia and industry.[1]

To drive themselves, AVs must be able to perform two key tasks: sensing the surrounding environment and “driving”—essentially replacing the eyes and hands of the human driver.[2] The standard AV design today includes a sensing system that collects information from the outside world to support the “driving” function. The sensing system is composed of a variety of sensors,[3] most commonly a Light Detection and Ranging (“LiDAR”) device and cameras.[4] A LiDAR emits laser pulses and uses time-of-flight ranging, a principle analogous to sonar but using light rather than sound, to estimate the depth of the surroundings: an emitted laser pulse travels forward, hits an object, and bounces back to a receiver; the round-trip time is measured, and the distance is computed. From this distance and depth information, a 3D point cloud map of the surrounding environment is generated. In addition to precise 3D coordinates, most LiDAR systems also record “intensity,” a measure of the return strength of the laser pulse that depends, in part, on the reflectivity of the surface struck. LiDAR intensity data thus reveal helpful information about the surface characteristics of the surroundings. The two sensors complement each other: the camera conveys rich appearance data with more detail about objects, whereas the LiDAR captures 3D measurements.[5] Fusing the information acquired by each allows the sensing system to gain a reliable perception of the environment.[6]
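The distance computation described above, measuring the round-trip time of a laser pulse and halving it, can be sketched in a few lines. This is an illustrative toy, not any vendor's LiDAR API; the function names and the sample 200-nanosecond return are invented for the example.

```python
import math

C = 299_792_458.0  # speed of light in meters per second

def distance_from_round_trip(t_seconds):
    """Range to the reflecting surface: the pulse travels out and back,
    so the one-way distance is half the round-trip time times c."""
    return C * t_seconds / 2.0

def to_point(r, azimuth_rad, elevation_rad):
    """Convert a range measurement plus the beam's pointing direction
    into a 3D coordinate for the point cloud."""
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# A pulse that returns after 200 nanoseconds struck a surface ~30 m away.
print(round(distance_from_round_trip(200e-9), 1))  # 30.0
```

Sweeping the beam across many azimuth and elevation angles and converting each return this way is what yields the 3D point cloud the post describes.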

LiDAR sensing technology is usually combined with artificial intelligence, as its goal is to imitate and eventually replace human perception in driving. Today, most artificial intelligence systems use “machine learning,” a method that gives computers the ability to learn without being explicitly programmed. With machine learning, computers train themselves to do new tasks much as humans do: by exploring data, identifying patterns, and improving upon past experience. Applied machine learning is data-driven: the greater the breadth and depth of the data supplied to the computer, the greater the variety and complexity of the tasks the computer can program itself to do. Since “driving” is a combination of multiple high-complexity tasks, such as object detection, path planning, localization, and lane detection, an AV that drives itself requires voluminous data in order to operate properly and effectively.
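As a toy illustration of the data-driven learning described above, here is a minimal sketch of a 1-nearest-neighbor classifier, one of the simplest machine-learning methods. It is not any production AV system, and all the example data and labels are invented; the point is only that the program is never given an explicit rule and instead generalizes from labeled examples.

```python
import math

def predict(training_data, x):
    """Label a new point with the label of its closest training example.
    training_data: list of ((feature, ...), label) pairs."""
    nearest = min(training_data, key=lambda ex: math.dist(ex[0], x))
    return nearest[1]

# Invented labeled examples: (speed m/s, distance-to-object m) -> action.
examples = [
    ((20.0, 5.0), "brake"),
    ((22.0, 8.0), "brake"),
    ((15.0, 60.0), "cruise"),
    ((18.0, 80.0), "cruise"),
]

print(predict(examples, (21.0, 6.0)))   # brake
print(predict(examples, (16.0, 70.0)))  # cruise
```

With only four examples the classifier is crude; supplying more, and more varied, examples makes its predictions finer-grained, which is the sense in which more data enables more complex tasks.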

“Big data” is already considered a valuable commodity in the modern world. In the case of AVs, however, the data collected will be of public streets and road users, and its large-scale collection is further empowered by various technologies that detect and identify, track and trace, and mine and profile data. When profiles of a person’s traffic movements and behaviors exist in a database somewhere, there is a great temptation to use the information for purposes other than those for which it was originally collected, as has been the case with much other “big data” today. Law enforcement officers with access to AV data could track and monitor people’s whereabouts, pinpointing individuals whose trajectories touch on suspicious locations at a high frequency. The trajectories can be matched to individuals through car models and license plates. The police could then identify crime suspects by viewing the trajectories of everyone in the same town, rather than taking the trouble to identify and physically track each suspect. Can this use of data by law enforcement be sufficiently justified?
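To make concrete the kind of analysis the paragraph above describes, here is a hypothetical sketch of flagging vehicles whose logged trajectories frequently touch designated locations. Every identifier, place name, and threshold here is invented for illustration; no real system or dataset is implied.

```python
# Invented set of locations an analyst has marked as "suspicious".
SUSPICIOUS = {"warehouse_12", "dock_3"}

def flag_frequent_visitors(logs, threshold):
    """logs: dict mapping vehicle_id -> list of visited place names.
    Returns the ids of vehicles whose visits to suspicious places
    meet or exceed the threshold."""
    flagged = []
    for vehicle, places in logs.items():
        hits = sum(1 for p in places if p in SUSPICIOUS)
        if hits >= threshold:
            flagged.append(vehicle)
    return flagged

logs = {
    "MN-123": ["home", "warehouse_12", "dock_3", "warehouse_12"],
    "MN-456": ["home", "office", "gym"],
}
print(flag_frequent_visitors(logs, 2))  # ['MN-123']
```

A few lines of code over a town-wide trajectory database would replace the physical tailing of each suspect, which is precisely why the scale of AV data collection raises the Fourth Amendment questions discussed below.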

As we know, the use of “helpful” police tools may be restricted by the Fourth Amendment, and for good reason. Although surveillance helps police officers detect criminals,[7] excessive surveillance has social costs: restricted privacy and a sense of being “watched” by the government inhibit citizens’ productivity, creativity, and spontaneity, and cause other psychological effects.[8] Case law gives us guidance in interpreting and applying the Fourth Amendment standards of “trespass” and “unreasonable searches and seizures” by the police. Three principal cases, Olmstead v. United States, 277 U.S. 438 (1928), Goldman v. United States, 316 U.S. 129 (1942), and the modern case United States v. Jones, 565 U.S. 400 (2012), limit Fourth Amendment protection to guarding against physical intrusion into private homes and property. Such protection would not be helpful in the case of LiDAR, which operates on public streets as a remote sensing technology. Nonetheless, despite Jones, the broader “reasonable expectation of privacy” test established by Katz v. United States, 389 U.S. 347 (1967), is more widely accepted. Instead of tracing the physical boundaries of “persons, houses, papers, and effects,” the Katz test asks whether there is an expectation of privacy that society recognizes as “reasonable.” The Fourth Amendment “protects people, not places,” wrote the Katz Court.[9]

United States v. Knotts, 460 U.S. 276 (1983), was a public street surveillance case that invoked the Katz test. In Knotts, the police installed a beeper on the defendant’s vehicle to track it. The Court found that such tracking on public streets was not prohibited by the Fourth Amendment: “A person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”[10] The Knotts Court thus applied the Katz test, asking whether there was an expectation of privacy that society recognizes as “reasonable.”[11] The Court’s answer was in the negative: unlike a person in his dwelling place, a person traveling on public streets “voluntarily conveyed to anyone who wanted to look the fact that he was traveling over particular roads in a particular direction.”[12]

United States v. Maynard, 615 F.3d 544 (D.C. Cir. 2010), another public street surveillance case, this one from the twenty-first century, reconsidered the Knotts holding regarding the “reasonable expectation of privacy” on public streets. The Maynard defendant argued that the district court erred in admitting evidence acquired through the police’s warrantless use of a Global Positioning System (GPS) device to track his movements continuously for a month.[13] The Government invoked United States v. Knotts and its holding that “[a] person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”[14] The D.C. Circuit, however, distinguished Knotts, pointing out that the Government in Knotts used a beeper that tracked a single journey, whereas the GPS monitoring in Maynard was sustained 24 hours a day for one month.[15] The month-long use of the GPS device did more than simply track the defendant’s “movements from one place to another”; it revealed the “totality and pattern” of his movements.[16] The court was willing to distinguish between “one path” and “the totality of one’s movements”: because the totality of one’s movements is much less exposed to public view and much more revealing of one’s personal life, it is constitutional for the police to track an individual along “one path,” but not to capture that same individual’s “totality of movement.”

Thus, with time the courts appear to be recognizing that when it comes to modern surveillance technology, the sheer quantity and revealing nature of data collected on the movements of public street users ought to raise concerns. A straightforward application of these holdings to AV sensing data would be that data concerning a person’s “one path” can be obtained and used, but not the totality of a person’s movements. It is unclear where to draw the line between “one path” and “the totality of movement.” The surveillance in Knotts was intermittent over the course of three days,[17] whereas the defendant in Maynard was tracked for over one month. The limit presumably falls somewhere in between.

Furthermore, this straightforward application is complicated by the fact that the sensors utilized by AVs pick up more than mere locational information. As discussed above, an AV sensing system, being composed of multiple sensors, captures both camera images and information about the speed, texture, and depth of surrounding objects. In other words, AVs do not merely track a vehicle’s location like a beeper or GPS device; they “see” the vehicle through their cameras, LiDAR, and radar devices, gaining a wealth of information. Even if only data about “one path” of a person’s movement is extracted, that “one path” data as processed by an AV sensing system is much more in-depth than what a beeper or cell-site location information (CSLI) can communicate. In addition, current developers are proposing AV networks that share data among many vehicles, so that data on “one path” could be combined with other data on the same vehicle’s movements, or multiple views of the same “one path” from different perspectives could be combined. The extensiveness of these data goes far beyond the precedents in Knotts and Maynard. It is thus foreseeable that the warrantless subpoenaing of AV sensing data would fall firmly within what the courts would consider an unreasonable search.

[1] Tri Nguyen, Fusing LIDAR sensor and RGB camera for object detection in autonomous vehicle with fuzzy logic approach, 2021 International Conference on Information Networking (ICOIN) 788, 788 (2021).

[2] Id. (“An autonomous vehicle or self-driving car is a vehicle having the ability to sense the surrounding environment and capable of operation on its own without any human interference. The key to the perception system holding responsibility to collect the information in the outside world and determine the safety of the vehicle is a variety of sensors mounting on it.”).

[3] Id. (“The key to the perception system holding responsibility to collect the information in the outside world and determine the safety of the vehicle is a variety of sensors mounted on it.”).

[4] Heng Wang and Xiaodong Zhang, Real-time vehicle detection and tracking using 3D LiDAR, Asian Journal of Control 1, 1 (“Light Detection and Ranging (LiDAR) and cameras [6,8] are two kinds of commonly used sensors for obstacle detection.”).

[5] Id. (“Light Detection and Ranging (LiDAR) and cameras [6,8] are two kinds of commonly used sensors for obstacle detection.”) (“Conversely, LiDARs are able to produce 3D measurements and are not affected by the illumination of the environment [9,10].”).

[6] Nguyen, supra note 1, at 788 (“Due to the complementary of two sensors, it is necessary to gain a more reliable environment perception by fusing the information acquired from these two sensors.”).

[7] Raymond P. Siljander & Darin D. Fredrickson, Fundamentals of Physical Surveillance: A Guide for Uniformed and Plainclothes Personnel, Second Edition (2002) (abstract).

[8] Tamara Dinev et al., Internet Privacy Concerns and Beliefs About Government Surveillance – An Empirical Investigation, 17 Journal of Strategic Information Systems 214, 221 (2008) (“Surveillance has social costs (Rosen, 2000) and inhibiting effects on spontaneity, creativity, productivity, and other psychological effects.”).

[9] Katz v. United States, 389 U.S. 347, 351 (1967).

[10] United States v. Knotts, 460 U.S. 276, 281 (1983) (“A person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”).

[11] Id. at 282.

[12] Id.

[13] United States v. Maynard, 615 F.3d 544, 549 (D.C. Cir. 2010).

[14] Id. at 557.

[15] Id. at 556.

[16] Id. at 558 (“[O]ne’s movements 24 hours a day for 28 days as he moved among scores of places, thereby discovering the totality and pattern of his movements.”).

[17] Knotts, 460 U.S. at 276.


AI: Legal Issues Arising from the Development of Autonomous Vehicle Technology

Sooji Lee, MJLST Staffer

Have you ever heard of the “Google DeepMind Challenge Match”? In 2016, AlphaGo, the artificial intelligence (hereinafter “AI”) created by Google’s DeepMind, played a five-game Go match against Lee Sedol, an 18-time world champion. Go is among the most complicated games mankind has devised, with vastly more possible moves than chess. People who understood the complexity of Go did not believe it was possible for an AI to calculate all these variables and defeat the world champion, who relied on his instincts and experience. AlphaGo, however, defeated Mr. Lee four games to one, leaving the whole world amazed.

Another use of AI is in autonomous vehicles (hereinafter “AVs”), which promise to achieve mankind’s long-time dream: driving a car without driving. Today, almost every automobile manufacturer with enough capital to reinvest in new technology, including GM, Toyota, and Tesla, is aggressively investing in AV technology. As a natural consequence of this increasing interest, vehicle manufacturers have performed several driving tests on AVs, and many legal issues have arisen from these trials. During my summer in Korea, I had a chance to research legal issues for an intellectual property infringement lawsuit regarding AV technology between two automobile manufacturers.

For a normal vehicle, a natural person is responsible if there is an accident. But who should be liable when an AV malfunctions: the owner of the vehicle, the manufacturer of the vehicle, or the entity that developed the vehicle’s software? This is one of the hardest questions arising from the commercialization of AVs. I personally think that liability could be imposed on any of these entities, depending on the scenario. If the accident happened because the vehicle’s AI system malfunctioned, the software provider should be liable. If the accident occurred because the vehicle itself malfunctioned, the manufacturer should be held liable. But if the accident occurred because the owner of the vehicle poorly maintained his or her car, the owner should be held liable. In sum, there is no one-size-fits-all answer to who should be held liable; courts should consider the causal sequence of the accident when determining liability.

The legislative body must also take data privacy into consideration when enacting statutes governing AVs. There are an enormous number of cars on the road, and drivers must interact with other drivers to reach their destinations safely. AVs therefore need to share their locations and current situations to interact well with other AVs, which suggests that a single entity should collect each AV’s information and process it to prevent accidents and manage traffic effectively. Nowadays, almost every driver uses navigation, meaning people already provide their location to a service provider such as Google Maps. Some may argue that such providers already serve as collectors of vehicle information, but there are many competing navigation services. Since all AVs must interact with each other, centralizing the data with one service provider may be wise. Yet while centralization has advantages, the danger of a data breach would be heightened if a single provider were selected. This is an important and pressing concern for legislatures considering legislation that would centralize AV data with one service provider.

Therefore, enacting an effective, smart, and predictive statute is important to prevent potential problems. Notwithstanding this complexity, many U.S. states take a positive stance toward the commercialization of AVs, since the industry could become profitable. According to the National Conference of State Legislatures, 33 states have introduced legislation and 10 states have issued executive orders related to AV technology. For example, Florida’s 2016 legislation expanded the permitted operation of autonomous vehicles on public roads, and Arizona’s governor issued an executive order encouraging the development of relevant technologies. With steps like these, the development of a legal framework is possible someday.


The Future of AI in Self-Driving Vehicles

Kevin Boyle, MJLST Staffer

Last week, artificial intelligence (AI) made a big splash in the news after Russian President Vladimir Putin and billionaire tech giant Elon Musk both commented on the subject. Putin stated that whoever becomes the leader in AI will become “the ruler of the world.” Elon Musk followed up Putin’s comments by declaring that competition for AI superiority between nations will most likely be the cause of World War III. These bold predictions grabbed the headlines; but in the same week, Lyft announced a new partnership with a company that produces AI for self-driving cars, and the House passed the SELF DRIVE Act. The Lyft deal and the House bill are positive signs for investors in the autonomous vehicle industry; however, the legal landscape remains uncertain. As Putin and Musk have predicted, AI is certain to have a major impact on our future, but legal hurdles must be cleared before AI’s applications in self-driving vehicles can reach their potential.

One of the legal hurdles that currently exists is the conflict between state and federal authorities. For example, companies such as Google and Ford would like to introduce cars with no pedals or steering wheels that are operated entirely by AI. However, most states still require that a human driver be able to take “active physical control” of the car to manually override the autonomous vehicle. That requires a steering wheel and brakes, which would make such cars illegal to operate. At the federal level, the FAA requires that commercial drones be flown by certified operators, not computers or AI; requiring human operators rather than AI to steer delivery drones severely limits the potential of this innovative technology. Furthermore, international treaties, including the Geneva Convention on Road Traffic, need to be addressed before we see fully autonomous cars.

The bipartisan SELF DRIVE Act recently passed by the House attempts to address most of the issues created by the patchwork of local, state, and federal regulations so that AI in self-driving cars can reach its potential. The House bill proposed clear guidelines for car manufacturers, clarified the role of the NHTSA in regulating automated driving systems, and detailed cybersecurity requirements for automated vehicles. The Senate, however, is drafting its own version of the SELF DRIVE Act. This week, the Senate Commerce, Science, and Transportation Committee will convene a hearing on automated safety technology in self-driving vehicles and its potential impacts on the economy. The committee will hear testimony from car manufacturers, public interest groups, and labor unions. Some of these groups will inevitably lobby against the bill and self-driving technology for fear of a potentially devastating impact on jobs in some industries. But ideally, the Senate bill will stick to the fundamentals of the House bill, which focuses on prioritizing safety, strengthening cybersecurity, and promoting the continued innovation of AI in autonomous vehicles.

Several legal obstacles still stand in the way of implementing AI in automated vehicles. Congress’s SELF DRIVE Act has the potential to be a step in the right direction, and the Senate should maintain the basic elements of the bill passed in the House to help advance the use of innovative AI technology in self-driving cars. Unlike Musk, Mark Zuckerberg has taken a stance similar to those in the auto industry, believing AI will bring about countless “improvements in the quality of our lives,” especially through its application in self-driving vehicles.


The Excitement and Danger of Autonomous Vehicles

Tyler Hartney, MJLST Staffer

“Roads? Where we’re going we don’t need roads.”

OK, sorry, Doc Brown, but vehicular technology is not quite where Back to the Future thought it would be in 2017. Still, a substantial number of companies are investing in autonomous vehicles. Ford invested $1 billion to acquire an artificial intelligence startup founded by engineers previously employed by Google and Uber, with the intent to develop self-driving vehicles. Tesla already has an autopilot feature in all of its vehicles, and it includes a warning on its website that use of the self-driving functionality may be limited depending on regulations that vary by jurisdiction.

To grasp what many experts in this field are saying, one must be familiar with the six levels of driving autonomy, conventionally numbered 0 through 5:

  0. No autonomy – the human driver controls everything
  1. Driver assistance – most functions are still controlled by the human driver, aided by at least one assistance system such as cruise control or lane monitoring
  2. Partial automation – the vehicle combines automated functions, such as steering and acceleration, while the driver remains engaged
  3. Conditional automation – critical safety features, such as accident awareness, shift from the human to the vehicle, though the driver must be ready to take over
  4. High automation – the vehicle is designed to perform all critical safety functions and monitor road and traffic conditions on its own
  5. Full, human-like autonomy – the vehicle is fully capable of autonomy even in extreme environments such as off-road driving

The societal benefits could be vast. With level 4 autonomy on household vehicles, parents and siblings need not worry about driving the kids to soccer practice because the car is fully capable of doing it for them. Additionally, the ridesharing economy, which has grown incredibly fast over the past few years, would likely see a drastic shift away from human drivers. Companies have already begun building autonomous vehicles for clean-energy ride sharing, such as Ollie, an electric, 3D-printed bus that can be summoned by those in need of a ride, much like an Uber.

While this self-driving technology is exciting, are we really there yet? Last June, a fatal accident occurred involving a Tesla using its autopilot function. The driver had previously touted his car for saving him from an accident and was very confident in its abilities. A witness later reported that a portable DVD player found in the car was playing Harry Potter at the time of the accident. If this witness is correct, the driver violated the Tesla disclaimer, which states that the autopilot feature “is an assist feature that requires you to keep your hands on the steering wheel at all times.” Some experts argue that manufacturers must be more upfront about the limitations of their autopilot features and how cautiously drivers should use this advanced technology. There is no question that the driver of the Tesla (if you can call him that?) was reckless in his use of the technology. But what about the liability of the vehicle manufacturer?

The deceased’s family hired a product defect litigation law firm to conduct an investigation in conjunction with the federal government’s investigation. The goal of these investigations was to determine whether Tesla was at fault for the autopilot feature’s failure to stop the vehicle. Recently, news broke that the government investigation concluded no recall of Tesla vehicles was required, nor did the government levy any fines on the automaker. The government reported that the autopilot feature was not defective at the time of the crash because it was built to protect the driver from rear-end collisions (the man’s car struck a tractor trailer crossing its path, not a rear-end scenario) and because Tesla gave notice to consumers that the driver must remain fully attentive to the operation of the vehicle.

Legally, it appears that plaintiffs won’t have much luck in suits against Tesla in cases like this. The company requires purchasers to sign a contract stating that the autopilot function is not to be considered self-driving and that they are aware they must remain attentive and keep their hands on the wheel at all times. Tesla operates on an interesting structure in which purchasers buy directly from the manufacturer, which may give it more ability to form these kinds of contracts with its consumers. Other automobile manufacturers may have a more difficult time maneuvering around liability for accidents that occur while a vehicle is driving itself. Car companies will have to provide repeated reminders to consumers that, until the technology is tested and confidence in it is significantly higher, autopilot features are effectively in beta testing and driver attention and intervention are still required.


Navigating the Future of Self-Driving Car Insurance Coverage

Nathan Vanderlaan, MJLST Staffer

Autonomous vehicle technology is not new to the automotive industry. For the most part, however, these technologies have been incorporated as back-up measures for when human error leads to poor driving. For instance, car manufacturers have offered packages incorporating features such as blind-spot monitoring, forward-collision warning with automatic braking, and lane-departure warning and prevention. The recent push by companies like Google, Uber, Tesla, Ford, and Volvo, however, is making fully autonomous vehicles a near-future reality.

Autonomous vehicles will arguably be the next great technology, responsible for saving countless lives. According to alertdriving.com, over 90 percent of accidents are the result of human error. By taking human error out of the driving equation, The Atlantic estimates that the full implementation of automated cars could save up to 300,000 lives a decade in the United States alone. In a show of federal support, U.S. Transportation Secretary Anthony Foxx released an update in January 2016 to the National Highway Traffic Safety Administration’s (NHTSA) stance on autonomous vehicles, promulgating a set of 15 standards to be followed by car manufacturers in developing such technologies. Further, in March 2016, the NHTSA promised $3.9 billion in funding over 10 years to “support the development and adoption of safe vehicle automation.” As the world pushes toward fully autonomous vehicles, the insurance industry will have to respond to the changing nature of vehicular transportation.

One of the companies leading the innovative charge is Tesla. New Tesla models may now come equipped with an “autopilot” feature, which incorporates multiple external sensors that relay real-time data to a computer that navigates the vehicle in most highway situations. It allows the car to slow down when it encounters obstacles and to change lanes when necessary. Elon Musk, Tesla’s CEO, estimates that the autopilot feature can reduce Tesla driver accidents by as much as 50 percent. Still, the system is not without issues. This past June, a user of the autopilot system was killed when his car collided with a tractor trailer that the car’s sensors failed to detect. Tesla quickly distributed a software update that Musk claims would have been able to detect the trailer. The accident has prompted discussion of how insurance claims and coverage will adapt to accidents in which the owner of a vehicle is no longer their cause.

Auto insurance is a state-regulated industry. Currently, there are two significant insurance models: the no-fault model and the tort model. While state systems differ in many details, each follows the same over-arching structure. No-fault insurance models require the insurer to pay parties injured in an accident regardless of fault; under the tort system, the insurer of the party responsible for the accident foots the bill. Under both systems, however, the majority of premium costs derive from personal liability coverage. A significant portion of the insurance coverage structure is thus premised on the notion that drivers cause accidents. When the driver is taken out of the equation, the basic concept behind automotive insurance changes.

What seems to be the most logical response to the implementation of fully autonomous vehicles is to hold the manufacturer liable. Whenever a car engaged in a self-driving feature crashes, it can be presumed that the crash was caused by a manufacturing defect, and the injured party would then bring a products-liability action to recover for damages suffered in the accident. Yet this system ignores some important realities. One is that manufacturers will likely pass the new cost on to consumers in the purchase price of the car. These costs could put such cars outside the average consumer’s price range and hinder the wide-spread implementation of a safer alternative to human-driven cars. Even if manufacturers don’t rely on consumers to cover the bill, the new system would likely require new regulation to protect car manufacturers from going under due to overwhelming judgments in the courts.

Perhaps a more effective method of insurance coverage has been proposed by RAND, a research organization that specializes in evaluating new technologies and suggesting how best to utilize them. RAND has suggested that a universal no-fault system be implemented for autonomous vehicle owners. Under such a system, autonomous car owners would still pay premiums, but those premiums would drop significantly as accident rates decrease. For this system to work, regulation would likely have to come from the federal level to ensure the policy is followed uniformly across the United States. One insurer that has begun a system mirroring this philosophy is Adrian Flux in Britain, which offers a plan for drivers of semi-autonomous vehicles that is lower in price than traditional insurance plans. Adrian Flux has also announced that it will update its policies as both the liability debate and driverless technology evolve.

No matter the route chosen by regulators or insurance companies, the issue of autonomous car insurance likely won’t come to a head until 2020, when Volvo plans to place commercial, fully autonomous vehicles on the market. Even then, it could be decades before a majority of vehicles on the road have such capabilities. This time will give regulators, insurers, and manufacturers alike adequate time to develop a system that will best propel our nation toward a safer, autonomous automotive society.


U.S. Letter to Google: A Potential Boost for Self-Driving Cars

Neal Rasmussen, MJLST Managing Editor

As Minnesota Journal of Law, Science & Technology Volume 16, Issue 2 authors Spencer Peck, Leili Fatehi, Frank Douma, & Adeel Lari note in their article, “The SDVs are Coming! An Examination of Minnesota Laws in Preparation for Self-Driving Vehicles,” current laws already permit certain aspects of self-driving cars, but these laws will need to be modified to allow self-driving cars to reach their full potential. While the process will be slow, this modification is starting to happen as evidenced by a recent letter sent to Google, Inc. from the National Highway Traffic Safety Administration (NHTSA).

In this letter, Paul Hemmersbaugh, writing as chief counsel for the NHTSA, accepts the fact that the computers driving Google’s self-driving vehicles can be considered the same as a human driver such that the “NHTSA will interpret ‘driver’ in the context of Google’s described motor vehicle design as referring to the [self-driving system], and not to any of the vehicle occupants.” Mr. Hemmersbaugh further explains that the NHTSA “agree[s] with Google [that] its [self-driving vehicle] will not have a ‘driver’ in the traditional sense” and that the NHTSA must work to develop better rules moving forward.

This letter was in response to Google’s proposal for a self-driving vehicle without basic controls, such as a steering wheel and pedals, and ultimately without a human driver. The proposal stems from Google’s belief that features allowing humans to take control could be “detrimental to safety” because human drivers, prone to distraction and overreaction, are often ill-equipped to take over in emergency situations.

According to Karl Brauer, a senior analyst for Kelley Blue Book, if the “NHTSA is prepared to name artificial intelligence as a viable alternative to human-controlled vehicles, it could substantially streamline the process of putting autonomous vehicles on the road.” While this letter is definitely a step in the right direction, manufacturers still have a long way to go at both the state and federal levels.

In December, the California Department of Motor Vehicles (DMV) issued proposed regulations that would require a human driver to always be behind the wheel, able to take over the controls at any time. The DMV expressed concern that manufacturers haven’t obtained enough experience with driverless vehicles on public roads and that more must be done before such technology can be made readily available.

So while the letter from the NHTSA offers hope to those within the industry, there are many more barriers to be crossed before self-driving cars can become a full reality.


Long-Term Success of Autonomous Vehicles Depends on its First-Generation Market Share

Vinita Banthia, MJLST Articles Editor

Society eagerly awaits the arrival of a functional autonomous car. However, despite the current hype, whether these cars will ultimately be successful remains an open question. While autonomous cars promise to deliver improved safety standards, lower environmental impact, and greater efficiency, their market success will depend on how practical the first generation of autonomous vehicles is, and how quickly they are adopted by a significantly large portion of the population. Because their usability and practicality depend inherently on how many people are using them, it will be important for companies to time their first release for when the vehicles are sufficiently developed and can infiltrate the market quickly. Dorothy J. Glancy provides a detailed account of the legal questions surrounding autonomous cars in Autonomous and Automated and Connected Cars Oh My! First Generation Autonomous Cars in the Legal Ecosystem. This blog post responds to Glancy’s article and suggests additional safety and regulatory concerns that the article does not explicitly discuss. Finally, this post proposes certain characteristics that must hold true of the first generation of autonomous vehicles if they are to catch on.

Glancy thoroughly covers the expected benefits of autonomous cars. Autonomous cars will allow people who are otherwise unable to drive, such as the visually impaired and the elderly, to get around conveniently. All riders will be able to save time by doing other activities, such as reading or browsing the internet, during their commute. In the long run, autonomous vehicles will allow roads and parking lots to be smaller and more compact because of the cars’ more precise maneuvering abilities. Once enough autonomous vehicles are on the road, they will be able to travel faster than traditional cars and to better detect and react to dangers in their surroundings, which should lead to fewer crashes.

On the other hand, several features may discourage the use of autonomous vehicles. First, because of the mapping systems they rely on, the cars will likely be restricted to one geographic region. Second, they might be programmed to save the greatest number of people in a crash, even if that means sacrificing the occupant; many prospective buyers may balk at a car programmed to kill them in the event of an inevitable crash. In addition, initial autonomous cars may not be as fast as imagined, depending on whether they can detect faster-moving lanes, frequently change lanes, and adapt to changing speed limits. Until significant numbers of autonomous cars are on the road, they may not be able to drive on longer, crowded roads such as highways, because vehicles will need to interact with one another to avoid crashes. Some argue that other transportation providers will suffer as taxis, Ubers, buses, and trains become less relevant. However, this change will be gradual, because people will long continue to rely on these services as cheap alternatives to car ownership.

When these cars become available, manufacturers hoping to push autonomous cars into the market rapidly should make them as attractive as possible to potential buyers, rather than optimizing them for society as a whole. For example, instead of programming the car to injure its own occupants, it should be programmed to protect them. This will encourage sales of autonomous cars and, by putting more of them on the road, reduce the number of car crashes in the long run.

Glancy also states that the first generation of autonomous vehicles will be governed by the same state laws that apply to conventional vehicles, without additional rules of their own. However, this is unlikely to be true: specific state, and possibly even federal, laws will most likely govern autonomous vehicles before they may be driven on public roads and sold to private individuals. Because autonomous cars will co-exist with traditional vehicles, many of these laws will address the interaction between autonomous and conventional cars, such as overtaking, changing lanes, and respecting lane restrictions.

In the end, the success of autonomous cars depends largely on how practical the first fleet is, how many people buy into the idea and how quickly, and the cars’ cost. If they are successful, their legal and non-legal benefits and consequences will become fully apparent only after a few decades of operation.


Recent Developments in Automated Vehicles Suggest Broad Effects on Urban Life

J. Adam Sorenson, MJLST Staffer

In “Climbing Mount Next: The Effects of Autonomous Vehicles on Society” from Volume 16, Issue 2 of the Minnesota Journal of Law, Science & Technology, David Levinson discusses the then-current state of automated vehicles and the effects they will have on society in the near and distant future. Levinson evaluates the effects of driverless cars in numerous ways, including their impact on road capacity and on vehicles-as-a-service (VaaS). Both of these changes are illuminated by a recent announcement from Tesla Motors, a large player in the autonomous vehicle arena.

This week Tesla announced Summon, a feature that allows a user to summon their Tesla using their phone. For now, the technology can only call your car to the end of your driveway and put it away for the night, but Tesla envisions a future in which it can summon your vehicle from anywhere in the city, or even across the country. That future technology, or something very similar to it, would play a pivotal role in providing urban areas with VaaS. VaaS would essentially be a taxi service without drivers, allowing for “cloud commuting,” which would require fewer vehicles overall for a given area. Ford has also announced FordPass, which is designed for human-driven cars but allows a group of individuals to lease and share a vehicle. This technology could easily be transferred to the world of autonomous vehicles and expanded to cover entire cities and multiple cars.

Beyond VaaS, these new developments bring us closer to the capacity benefits Levinson mentions in his article: traffic congestion and bottlenecks could be alleviated by accurate and safe autonomous vehicles. Driverless vehicles would allow for narrower lanes, higher speed limits, and less space between cars on the highway, though Levinson concedes that these cars still need to “go somewhere, so auto-mobility still requires some capacity on city streets as well as freeways, but ubiquitous adoption of autonomous vehicles would save space on parking, and lane width everywhere.” Tesla seeks to address part of this problem by allowing a vehicle to be summoned from farther away, relieving some parking congestion.

Audi, however, is tackling the problem in a slightly different fashion, partnering with the Boston suburb of Somerville to develop a network that includes self-parking cars. “UCLA urban planning professor Donald Shoup found 30 percent of the traffic in a downtown area is simply people looking for parking,” and eliminating this traffic would allow for much higher capacity in these areas. Similarly, self-parking cars will not have people getting in and out of them, allowing for much more compact parking areas and much higher parking capacity. Audi and Tesla are just two of the companies working to stay at the forefront of automated vehicle technology, but whoever the developments come from, the effects and changes David Levinson identified are coming, and they’re here to stay.
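Shoup’s 30 percent figure lends itself to a simple back-of-the-envelope calculation. The traffic volume below is a hypothetical illustration, not a figure from Shoup’s study or from Audi: it merely shows how many vehicles self-parking technology could remove from downtown streets at peak times.

```python
def vehicles_removed(total_downtown_vehicles, cruising_share=0.30):
    """Estimate how many circulating vehicles disappear if parking-search
    traffic (Shoup's estimated 30% share) is eliminated by self-parking cars."""
    return int(round(total_downtown_vehicles * cruising_share))

# Hypothetical volume: 10,000 vehicles circulating downtown at peak
# implies roughly 3,000 of them are only hunting for parking.
print(vehicles_removed(10_000))  # -> 3000
```

The point of the sketch is that the capacity gain scales linearly with the cruising share, so even a partial shift to self-parking cars would free noticeable street capacity.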


General Motors’ $500 Million Investment in Lyft: a Reminder to State Legislatures to Act Quickly to Resolve Legal Issues Surrounding Self-Driving Cars

Emily Harrison, MJLST Editor-in-Chief

On January 4, 2016, General Motors (G.M.) invested $500 million in Lyft, a privately held ridesharing service. G.M. also pledged to collaborate with Lyft in order to create a readily accessible network of self-driving cars. According to the New York Times, G.M.’s investment represents the “single largest direct investment by an auto manufacturer into a ride-hailing company in the United States . . . .” So why exactly did General Motors, one of the world’s largest automakers, contribute such a significant amount of capital to a business that could eventually reduce the number of cars on the road?

The short answer is that G.M. views its investment in Lyft as a way to situate itself in a competitive position in the changing transportation industry. As John Zimmer, president of Lyft, said in an interview, the future of cars will not be based on individual ownership: “We strongly believe that autonomous vehicle go-to-market strategy is through a network, not through individual car ownership.” In addition, this partnership will allow G.M. to augment its current profits. The president of G.M., Daniel Ammann, explained that G.M.’s ‘core profit’ predominately comes from cars that are sold outside of the types of urban environments in which Lyft conducts its main operations. Therefore, G.M. can capitalize on its investments by aligning itself at the forefront of this burgeoning automated vehicle industry.

A transition to a network of self-driving cars raises a variety of legal implications, particularly with respect to assigning liability. As Minnesota Journal of Law, Science & Technology Volume 16, Issue 2 author Sarah Aue Palodichuk notes in her article, “Driving into the Digital Age: How SDVs Will Change the Law and its Enforcement,”: “[a]utomated vehicles will eliminate traffic offenses, create traffic offenses, and change the implications of everything from who is driving to how violations are defined.” Underlying all of these changes is the question: who or what is responsible for the operation of self-driving cars? In some states, for example, there must be a human operator who is capable of manual control of the vehicle. As additional states begin to adopt legislation with respect to self-driving cars, it is foreseeable that there will be great debate as to who or what is responsible for purposes of liability. Yet, in the meantime, G.M.’s significant investment in Lyft signals to consumers and state legislators that these issues will need to be resolved quickly, as the automotive industry is moving full-speed ahead.


Liability in Driverless Car Accidents

Daniel Mensching, MJLST Staffer

Driverless cars made national headlines last week when a police officer in California pulled over a car for driving too slowly only to find that there was no driver to be ticketed. While this car was pulled over only for being too slow and no laws were actually broken, this incident is an example of the legal problems that will arise as driverless cars become a reality.

Driverless cars are currently being developed by several large automobile manufacturers, and Google is producing a model it plans to make available to the public in 2020. Advocates of driverless cars emphasize not only the convenience of not needing to drive, but also the fact that such cars are much safer than human drivers: robots will not experience road rage, will not get drunk, and will not text. That said, accidents will inevitably occur, and the legal system will need to determine liability and provide recourse to those who are injured.

Some commentators have noted that the problem of determining liability has the potential to kill automated vehicles despite the fact that these vehicles are safer than human drivers.

Some states have already passed statutes in anticipation of the rise of driverless cars, but these laws only make driverless cars legal for research purposes, and there are still many questions to be answered. The most likely legal policy that will emerge will be that manufacturers of driverless cars will be the sole target in lawsuits arising from accidents involving driverless cars. In fact, Volvo has already released a statement where the Swedish automobile manufacturer claimed that it would take full responsibility for any accident involving a driverless Volvo.

The legal system will most likely provide recourse to those injured in accidents by finding manufacturers liable in product liability cases. Plaintiffs can use several legal theories to win these cases, including design defect, manufacturing defect, or failure to warn. The legal system should avoid creating a strict liability standard for driverless car accidents, as this would have the effect of chilling research and development of this technology, which will have the overall effect of saving scores of lives and making society more efficient.