The Human Problem With Self-Driving Cars

A good, old-fashioned human-driven Toyota in a showroom. Photo by Toru Hanai/Reuters

This is shaping up as a battle between humans and computers: Who should drive our cars? It’s a more complicated question than it might seem.
 
For decades the answer in the United States has been “people over the age of 16 (or so, depending on the state) who have passed a driving test.” We humans aren’t doing such a great job of it, however. More than 30,000 Americans die on the country’s roads every year.
Recently, companies like Google, Uber, and Tesla have presented us with an alternative answer: Artificially intelligent computers should drive our cars. They’d do a far safer job of it, and we’d be free to spend our commutes doing other things.

There are, however, at least two big problems with computers driving our cars.
One is that, while they may drive flawlessly on well-marked and well-mapped roads in fair weather, they lag behind humans in their ability to interpret and respond to novel situations. They might balk at a stalled car in the road ahead, or a police officer directing traffic at a malfunctioning stoplight. Or they might be taught to handle those encounters deftly, only to be flummoxed by something as mundane as a change in lane striping on a familiar thoroughfare. Or—who knows?—they might get hacked en masse, causing deadlier pileups than we humans would ever blunder into on our own.

Which leads us to the second problem with computers driving our cars: We just don’t fully trust them yet, and we aren’t likely to anytime soon. Several states have passed laws that allow for the testing of self-driving cars on public roadways. In most cases, they require that a licensed human remain behind the wheel, ready to take over at a moment’s notice should anything go awry.

Engineers call this concept “human in the loop.”
It might sound like a reasonable compromise, at least until self-driving cars have fully earned our trust. But there’s a potentially fatal flaw in the “human as safety net” approach: What if human drivers aren’t a good safety net? We’re bad enough at avoiding crashes when we’re fully engaged behind the wheel, and far worse when we’re distracted by phone calls and text messages. Just imagine a driver called upon to take split-second emergency action after spending the whole trip up to that point kicking back while the car did the work. It’s a problem the airline industry is already facing as concerns mount that automated cockpits may be eroding pilots’ flying skills.
Google is all too aware of this problem. That’s why it recently shifted its approach to self-driving cars. It started out by developing self-driving Toyota Priuses and Lexus SUVs—highway-legal production cars that could switch between autonomous and human-driving modes. Over the past two years, it has moved away from that program to focus on building a new type of autonomous vehicle that has no room for a human driver at all. Its new self-driving cars come with no steering wheel, no accelerator, no brakes—in short, no way for a human to mess things up. (Well, except for the ones with whom it has to share the roads.) Google is cutting the human out of the loop.
Car companies are understandably a little wary of an approach that could put an end to driving as we know it and undermine the very institution of vehicle ownership. Their response, for the most part, has been to develop incremental “driver-assistance” features like adaptive cruise control while resisting the push toward fully autonomous vehicles.