Tesla has long positioned itself at the vanguard of advanced technology, and in recent years the company has poured its efforts into building a self-driving car. An announcement on Tesla's own site describes the basic hardware of the futuristic vehicle:

We are excited to announce that, as of today, all Tesla vehicles produced in our factory – including Model 3 – will have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver. Eight surround cameras provide 360-degree visibility around the car at up to 250 meters of range. Twelve updated ultrasonic sensors complement this vision, allowing for detection of both hard and soft objects at nearly twice the distance of the prior system. A forward-facing radar with enhanced processing provides additional data about the world on a redundant wavelength, capable of seeing through heavy rain, fog, dust and even the car ahead.

As exciting as that sounds, researchers at NASA, which has studied cockpit automation for decades, have voiced concerns. After Joshua Brown was killed while his Tesla was driving on Autopilot, there was much debate over whether the fatality was the fault of the driver or of the car. As John Pavlus reports in an article published in Scientific American:

NASA has been down this road before, too. In studies of highly automated cockpits, NASA researchers documented a peculiar psychological pattern: The more foolproof the automation’s performance becomes, the harder it is for an on-the-loop supervisor to monitor it. “What we heard from pilots is that they had trouble following along [with the automation],” Casner says. “If you’re sitting there watching the system and it’s doing great, it’s very tiring.” In fact, it’s extremely difficult for humans to accurately monitor a repetitive process for long periods of time. This so-called “vigilance decrement” was first identified and measured in 1948 by psychologist Norman Mackworth, who asked British radar operators to spend two hours watching for errors in the sweep of a rigged analog clock. Mackworth found that the radar operators’ accuracy plummeted after 30 minutes; more recent versions of the experiment have documented similar vigilance decrements after just 15 minutes.

So even if the inner workings of Tesla’s new technology perform exactly as designed, the system can still be dangerous in practice. The better the automation performs, the more drivers come to rely on it. The more they rely on it, the less they bother to use their own senses. And once they surrender their attention, an accident becomes far more likely.

With that in mind, is a self-driving car really a good idea? There is still plenty of room for error, and instead of making roads safer, the technology could end up increasing the number of accidents. For now, it might be best for Tesla to keep even flawless automation off the road, because the greatest danger lies not in the advancement itself but in the flaws of human nature.