Artificial intelligence: Approach with caution

Yana Karapetyan. 05/29/2021


This image depicts a futuristic autonomous vehicle. (iStock)


Tesla, Waymo and Zoox have all become household names in the driverless-car market. These companies claim that inattention and human error are responsible for the majority of automobile accidents. By removing the possibility of driver error, they hope that artificial intelligence technologies will make roads safer and save lives.

However, these intentions alone are not enough to introduce self-driving cars into the automotive market. First, companies must perform extensive testing across a variety of road conditions. Multiple cities across the United States have become testing sites for autonomous vehicles, including the fifth most populous city in the nation, Phoenix.

While there is general curiosity and enthusiasm surrounding driverless car technologies, not everyone shares the excitement. Many autonomous cars and their testers have become targets of attacks. Some residents in testing cities have slashed tires, threatened self-driving car passengers with violence and even tried to run the cars off the road.

Arizona in particular has felt these tensions. In March 2018, an Uber autonomous vehicle struck and killed a pedestrian in Tempe. In response to this incident, Uber's driverless cars were banned from testing throughout the state. Since then, many residents have grown even more cautious toward self-driving vehicles.

While Uber no longer tests its autonomous vehicles in Phoenix, Waymo has rushed in to fill the gap. Waymo vehicle supervisors have faced similar violent threats: in one case, a local man waved a .22-caliber revolver at a Waymo vehicle and the emergency backup driver at the wheel. When questioned by police, the man said that he "despises" driverless cars, citing the 2018 incident in which a pedestrian was killed by a self-driving Uber car.

Companies with a vested interest in automotive artificial intelligence often cite the desire to make roads safer as the driving force behind the technology. Tesla, for instance, asserts that features such as a forward-facing radar with enhanced processing give its cars the ability to see through difficult road conditions such as heavy rain, fog and dust.

Nonetheless, the self-driving car project is an ambitious one. The inability of self-driving cars to process information the way humans do concerns many people who could soon be sharing much more of the road with them. Additionally, the reluctance to welcome autonomous vehicles is not based solely on fear and past negative experiences. A survey conducted by Ipsos, which polled 21,000 adults across 28 countries about the acceptance of autonomous vehicles, found that nearly one in four Americans would never use a self-driving vehicle.

Ipsos noted that part of this reluctance is linked to the identity of classic car culture in the United States; six in ten respondents identified themselves as "car people." Ipsos also found hints of a coming car-culture clash, coupled with disagreement over whether the government should regulate this niche market.

A full introduction of autonomous vehicles onto American roadways will trigger a litany of legal questions. Dean Kamen, an American engineer best known for inventing the Segway, once said, "Every once in a while, a new technology, an old problem and a big idea turn into an innovation." Applying Kamen's framing to this conundrum: the old problem is human error behind the wheel causing casualties, and the big idea is using artificial intelligence to reduce those casualties. That innovation, however, raises the question of how this emerging technology, with little legal precedent to draw on, will be regulated.


