Traffic fatalities involving “self-driving” EVs have been receiving considerable media attention of late. Is this a general issue with EVs, or something unique to one specific manufacturer?
What exactly is “self-driving”, and how does it differ from “autonomous vehicles” (AVs) or “driver assistance”? Many high-tech firms (e.g., Waymo, Uber, etc.) would like to dispense with the driver altogether; only such driverless vehicles are “autonomous” in my book. What we have today are many different stages on the path to full autonomy. Most car companies claim only that the new features (lane holding, automated following, passing assistance, etc.) are for driver assistance, and are not intended to replace the driver.
Tesla is making much larger claims: In April 2019 Musk declared that “By the middle of next year, we’ll have over a million Tesla cars on the road with full self-driving hardware.” Full Self-Driving (FSD) would be so reliable, he said, that the driver “could go to sleep.”
It hasn’t worked out quite that way. Most famous are the collisions in which Teslas have slammed into parked emergency vehicles, despite the emergency vehicles’ flashing lights. The National Highway Traffic Safety Administration (NHTSA) is investigating at least 12 of those. Less famously, hundreds of owners have filed complaints about FSD-equipped Teslas stopping for no apparent reason, as if the self-driving car had seen a ghost that it was keen to avoid running over. NHTSA has an investigation going on those complaints as well. As of the beginning of the year, at least 19 deaths had been reported for Teslas using FSD.
Musk claims that Teslas using FSD already have a better safety record than the average driver; specifically, that they are involved in only 20% of the accidents they would have been involved in without FSD. The actual statistics are sketchy, in part because Tesla doesn’t share the data, but mostly because any number of assumptions are needed to make a fair comparison. For example, are Tesla drivers turning on their FSD feature only when they are driving in particularly suitable conditions? Are FSD Teslas a representative sample of the driving conditions and drivers that are out there? The New York Times did a nice job dissecting this statistical challenge (https://nytimes.com/2023/01/17/magazine/tesla-autopilot-self-driving-elon-musk.html) but failed to reach a clear conclusion. FSD is probably safer than letting your drunk teenager drive home, but it is still nerve-wracking for passengers who cannot grab the wheel while the car pauses to ponder conditions intuitive to human drivers, like snow occupying a portion of the roadway or road construction requiring caution and a wide berth when passing.
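To see why a raw comparison can mislead, consider a toy calculation. All numbers below are invented purely for illustration; none of this is Tesla data. If FSD is engaged mostly on easy highway miles, the aggregate crash rate can look far better even when FSD is no safer on any given road type:

```python
# Toy illustration of why raw crash-rate comparisons can mislead.
# All numbers are invented for illustration; none of this is Tesla data.

# (miles driven, crashes) by road type
fsd = {"highway": (9_000_000, 9), "city": (1_000_000, 4)}
manual = {"highway": (2_000_000, 2), "city": (8_000_000, 32)}

def rate_per_million(data):
    miles = sum(m for m, _ in data.values())
    crashes = sum(c for _, c in data.values())
    return crashes / (miles / 1_000_000)

# Aggregate rates: FSD looks ~2.6x safer overall...
print(rate_per_million(fsd))     # 1.3 crashes per million miles
print(rate_per_million(manual))  # 3.4 crashes per million miles

# ...yet within each road type the rates are identical (1.0 on highways,
# 4.0 in cities). The aggregate gap comes entirely from FSD being engaged
# mostly on easy highway miles: a selection effect, not added safety.
```

This is the classic aggregation trap: unless the miles are matched by conditions, the headline ratio tells you more about where the feature is used than about how safe it is.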
I’ll wait for the NHTSA to finish its investigations, but the agency has already ordered Tesla to disable some FSD features. For example, Tesla has agreed to remove the feature that allows drivers to set how far ABOVE the posted speed limit the car will go. My impression is that FSD is yet another example of Musk promising more than Tesla can deliver. Tesla is not the only institution that has over-promised the rate of progress towards fully autonomous driving, but its dominant market position makes it especially prone to “moving fast and breaking things”. Regulators have their hands full studying the various manufacturers’ claims and keeping up with the rapid development of driving assistance technologies.
We cannot shape the future to our needs if we cannot see where the technology is headed. What are the benefits of driverless autonomous vehicles (AVs), assuming that they develop as the futurists have suggested? Surely AVs can reduce the carnage on the roadways. Not all accidents are preventable, but many are. Anything that a human can perceive, a machine can be made to perceive (if cost is no object). Any human reaction time can be expedited by a machine; one of the easiest fixes is simply to press the brakes harder and quicker than a human would in the presence of an unforeseen road obstacle. While that might produce some rear-end collisions, it will assuredly reduce fatalities overall. We know distracted drivers are a major contributor to accidents, and machines can be made immune to distractions.
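The reaction-time advantage is easy to quantify with textbook kinematics. A back-of-the-envelope sketch, using assumed (not measured) reaction times and braking deceleration:

```python
# Back-of-the-envelope stopping distances: at highway speed, reaction time
# accounts for a large share of the total. All parameter values are assumed.

def stopping_distance_m(speed_mps, reaction_s, decel_mps2):
    """Distance covered during the reaction time, plus braking distance v^2/(2a)."""
    return speed_mps * reaction_s + speed_mps**2 / (2 * decel_mps2)

v = 30.0      # 30 m/s, about 108 km/h (~67 mph)
decel = 8.0   # hard braking on dry pavement, m/s^2 (assumed)

human = stopping_distance_m(v, reaction_s=1.5, decel_mps2=decel)    # typical alert driver
machine = stopping_distance_m(v, reaction_s=0.2, decel_mps2=decel)  # sensor+compute latency (assumed)

print(human, machine)  # roughly 101 vs 62 meters
```

Under these assumptions, shaving the reaction time cuts the total stopping distance by nearly 40 meters; the braking physics is identical in both cases.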
A less obvious advantage is that AVs can be packed more tightly together on freeways, eliminating some of the need for more freeways, especially in urban areas where vacant land for adding routes or lanes just isn’t there. Driverless AVs can be summoned as needed to extend personal mobility to children, the elderly, and those that are temporarily or permanently mobility impaired. Taxis can already do that, but at much greater expense because taxis need drivers and drivers need to be paid. Urban parking problems can be reduced, because autonomous vehicles will be in motion a much greater percentage of the day than privately owned vehicles are now, and many fewer of them will be needed for the same reason; furthermore, they can be parked in places for which there is less demand for space than in residential neighborhoods or urban centers.
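The packing argument can be sketched numerically as well. A simple headway model, with assumed following times and car length (purely illustrative, not traffic-engineering data):

```python
# Illustrative lane-capacity calculation: vehicles per hour at a steady speed
# depends mostly on following headway. All parameter values are assumptions.

def lane_capacity_veh_per_hr(speed_mps, headway_s, car_length_m):
    """Throughput = speed / spacing, where spacing = headway gap + car length."""
    spacing_m = speed_mps * headway_s + car_length_m
    return 3600 * speed_mps / spacing_m

v = 30.0  # ~108 km/h
human = lane_capacity_veh_per_hr(v, headway_s=2.0, car_length_m=5.0)  # the "two-second rule"
av = lane_capacity_veh_per_hr(v, headway_s=0.5, car_length_m=5.0)     # tight platooning (assumed)

print(round(human), round(av))  # roughly 1662 vs 5400 vehicles per hour
```

In this sketch, cutting the headway from two seconds to half a second more than triples the throughput of a single lane, which is the sense in which tighter packing substitutes for new pavement.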
Pollution from making and operating autonomous vehicles will be reduced both because fewer will be needed (and they will be EVs), and because many private car owners now oversize the vehicles they purchase, in order to accommodate the most demanding use they will encounter during ownership. For example, many people now buy and commute with their enormous personal pickups, on a rationale such as that the big engine is needed to haul the boat to the lake on weekends. If you request the services of a big autonomous pickup on weekends but ride a small efficient AV when commuting, we will all breathe easier.
A more subtle advantage of fleet-owned driverless AVs is that fleet owners really care about maintenance downtime. Car manufacturers today are motivated to increase sales by building products whose durability extends little beyond the warranty period. The old Yellow Cabs routinely ran for twice or more the mileage of conventional cars, because it was worth it to the cab company to pay more for a durable product than a maintenance-prone one. The fleet-owned AVs of the future will presumably enjoy the same focus on life-cycle rather than initial-purchase costs.
It isn’t all good of course. Some have estimated that fully autonomous driving will greatly increase computing energy needs, as millions of AVs executing very complex decision-making in real time could drive up the world’s need for power generation. And people may commute further when the commute is less arduous.
Which brings up the ugly matter of cost. While machines can be made almost infinitely smart and quick, there is no guarantee that the cost for such capability will be reasonable. Musk has already prohibited Teslas from relying on lidar detectors, which he deems excessively costly, and he has recently also dropped radar and ultrasonic sensors to save money. A recent Washington Post piece was titled, “The Elon Musk request that left Tesla engineers aghast…” (https://www.washingtonpost.com/2023/03/19/elon-musk-tesla-driving/?itid=mc_magnet-futuretransportation_1). Cost-savings versus safety will always be a fraught tradeoff; advanced technology only makes the calculation more complex.
Beyond the vexing safety-cost tradeoff, there are many issues that arise when attempting to change the way a society works; humans can be pushed only so far. For example, automobiles are beloved by many for the sense of freedom they inspire. Once one has paid for a vehicle, tires, insurance, etc., the marginal cost of going for a spin is so low that many people do not even consider it. They can always jump in the car and drive ‘til their blues roll away. Would that happen with AVs? Probably not, because AV rides would most likely be priced on a per mile and per hour basis. Can you wash away your sorrows when the meter is ticking? Privately owned vehicles give the owner a sense of agency that even a cheap taxi will rarely inspire.
I also anticipate a political battle over the practice of speeding while using FSD or similar software. Legislators, in my experience, view speed limits as for the little people; most judge themselves far too important to be held back by their car. How is this to be resolved without an endless arms race: raise the speed limits and repeat?
I worry also about how small towns like Durango will cope with the complexities of solving breakdowns in hyper-complicated computerized machinery. If my recent experiences with my Volt, my furnace, and my washing machine are any indication, complexity easily overwhelms local repair capabilities.
Is it possible for crooks to hack an AV to your detriment? Suppose they toss two obstacles onto the roadway, one in front of and one behind your AV, and demand protection money to remove the obstacles that your AV isn’t programmed to run over. Or suppose they hack into your AV’s computer and offer to drive it off a cliff unless you pay up immediately?
Undoubtedly AVs will be put into operation (as they are now in Phoenix) long before they are capable of handling Colorado mountain driving conditions. An example was brought home to me a few years ago while driving at night over Blue Hill in a blizzard. With no other cars around, the landscape in 360 degrees was pure white, with maybe 25 feet of visibility in the blowing snow. Only my memory of the familiar route, and the subtle shift of gravity as I slid first off the left side and then off the right side of the road’s crown, enabled me to zigzag down the midline of the snow-packed and vacant road.
Of course, those conditions occur only rarely. Engineering problems that are rarely encountered can get expensive to solve, maybe excessively so. Tesla is relying on self-learning by artificial intelligence (AI) to address the complexities of driving. AI for self-driving learns from being fed a gargantuan diet of highway videos, but if a particular situation occurs only infrequently, it may not be solved cost effectively. It may not even be solvable within the price point of AV sales. I have a hunch that just such a problem is responsible for Teslas running into parked emergency vehicles. Humans focus a great deal of attention on emergency vehicle flashing lights, but artificial intelligence has no reason to (most flashing lights just denote left turns). Tesla has tried to solve that problem after the fact by putting “do not go” markers in the software to set emergency vehicles off limits, but rare problems will continue to emerge in perpetuity if patched after-the-fact one-by-one.
Another conundrum with AI is that it is a black box; the engineers never really understand how AI makes decisions. I can just hear the prosecuting attorney asking the jury to convict a driver or company for behavior that relies on “artificial” decision-making that the company does not fully understand. I don’t know that AI will ever constitute an ironclad defense for life and death problems that will go to a jury. AVs need not rely on AI, but that is the way Tesla has chosen to go about it.
An inherent obstacle to the acquisition of full autonomy is the “valley of death” in which driver assistance has gotten so close to full autonomy that driver vigilance is impossible to maintain. People will find ways to look attentive to the driver-facing camera while their attention wanders.
And what of driving the Colorado Mountains in a blinding blizzard? Will future drivers still know how to deal with those? Will the AV systems that can master blizzards, mud-splattered sensors, and unmarked roads be affordable? Only the future will tell. The future is EVs and the future is AVs, but perhaps we shouldn’t muddle the distinction.
[Special thanks to Chris Calwell for excellent feedback on an earlier draft of this piece.]