Does AI Make Self-Driving Cars Less Safe?

Most industries are undergoing significant change as technology improves, and the automotive sector is no exception. Various automotive companies have been working to release self-driving vehicles, which use machine learning to substitute for the human driver.

These efforts are starting to bear fruit. Major stakeholders in the automotive industry like Google and Uber have launched their first self-driving vehicles. However, the vehicles still keep a safety driver at the wheel to take over in case the system isn't running properly.

If the technology takes off, car owners will likely need to take an online driving course to learn how to operate a fully self-driving vehicle.

Experts in the field have been working to use artificial intelligence to make these vehicles fully self-driving. The subject has become controversial: some experts argue that artificial intelligence may make self-driving cars less safe, while others see it as the key to the autonomous vehicle industry's success.

Autonomous Intelligence

Autonomous intelligence systems have proved able to master tasks that scientists once thought impossible. For example, learning machines can pick up a language, play games, and even recognize images.

Due to these advancements in artificial intelligence, scientists are moving to apply the technology to self-driving cars. An AI-powered self-driving car is expected to detect objects, recognize them, and then learn how to respond.

A self-driving car using artificial intelligence is required to adhere to all traffic rules: it should stop at a zebra crossing, slow down on a busy street, shift lanes, and make the other decisions a human driver can make, as sketched below.
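The sketch below is a minimal illustration of that perceive-and-decide loop, not any vendor's actual architecture. Every name in it (the detection labels, the decide function, the speed threshold) is a hypothetical stand-in; a real autonomous stack involves far more machinery.

```python
# A deliberately oversimplified perceive -> decide loop. All labels and
# thresholds here are hypothetical, not from any real autonomous stack.
def decide(detections: set[str], speed_kmh: float) -> str:
    """Map what the perception system reports to a single driving action."""
    if "pedestrian_at_crossing" in detections:
        return "stop"
    if "busy_street" in detections and speed_kmh > 30:
        return "slow_down"
    if "obstacle_in_lane" in detections:
        return "shift_lane"
    return "proceed"

print(decide({"pedestrian_at_crossing"}, 40))  # stop
print(decide({"busy_street"}, 50))             # slow_down
print(decide(set(), 60))                       # proceed
```

The hard part, as the critics below argue, is not this mapping but its input: the detections come from a learned perception model, not from a sensor that is guaranteed to be right.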

However, many scientists are not convinced that a fully autonomous car can consistently make the right decision on its own, because road scenarios change constantly. Those shifting scenarios may make the vehicle inefficient and unsafe.

Philip Koopman, a computer science expert, is one of those who feel that machine learning will not make autonomous vehicles safe. According to Koopman, AI machines learn by way of computer code.

Running that programmed code, a machine learning system behaves in a predictable way as long as it stays within the conditions it was built for. The road is different: the computer must cope with images that keep changing, and it is hard for engineers to encode all of that information. For instance, it may be difficult to determine when to stop and when to proceed. The machine memorizes features such as people crossing and stops whenever it sees similar objects, but people and scenery keep changing from one location to the next.

The machine learns to act on patterns rather than judgment. For instance, if the training images happened to show only pedestrians wearing red clothing, the system might stop only when it sees people dressed in red, and pedestrians in other colors risk being run over. Likewise, a fully self-driving vehicle may sit paralyzed at a stop, unable to decide. The toy sketch below illustrates this failure mode.
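Here is a toy sketch of that failure mode, assuming entirely synthetic data: two made-up features per sample, and a biased training set in which every pedestrian happens to wear red.

```python
# Toy illustration of a model latching onto a spurious cue. All data is
# synthetic; in this biased training set, every "pedestrian" wears red.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two hypothetical features: [red_clothing_intensity, person_shape_score]
pedestrians = np.column_stack([rng.uniform(0.8, 1.0, 200),   # always in red
                               rng.uniform(0.7, 1.0, 200)])  # clear person shape
background  = np.column_stack([rng.uniform(0.0, 0.3, 200),
                               rng.uniform(0.0, 0.3, 200)])
X = np.vstack([pedestrians, background])
y = np.array([1] * 200 + [0] * 200)   # 1 = pedestrian, 0 = background

model = LogisticRegression().fit(X, y)

# A real pedestrian in blue clothing: low red intensity, clear person shape.
blue_pedestrian = np.array([[0.05, 0.9]])
print(model.predict(blue_pedestrian))  # can come back [0]: pedestrian missed
```

Because the two cues were perfectly correlated in training, the model has no way to know that the person's shape, not the clothing color, is what actually matters.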

Moral dilemmas also arise on the road, and resolving them requires ethical reasoning. Human drivers handle such situations with human judgment; a machine learning system can only respond with whatever its training has encoded, which may be unreliable.

AI-based autonomous vehicles are also likely to be useful only within a particular mapped area. Machine learning derives decisions from the standard features of the zone it was trained in, so the vehicle may carry that learned information into a different zone with different traffic rules. Avoiding this would require manufacturers to expose the vehicle to every traffic scenario, which is practically impossible.

Slight changes such as fog or mist might affect what the algorithms identify. For instance, researchers have found that a small shift in a camera's pixels can change how a learning machine perceives an image. Such errors have prompted engineers to give artificial intelligence a second thought. The sketch below shows, in miniature, how a tiny per-pixel change can flip a model's output.
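This sketch reproduces the effect in miniature, with a linear score standing in for the perception network and random numbers standing in for learned weights and camera pixels; it is a loose analogue of an adversarial perturbation, not an attack on any real system.

```python
# Minimal sketch of the "small pixel shift" failure mode. A linear score
# stands in for an image classifier; weights and pixels are random stand-ins.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=1024)        # hypothetical learned weights, one per pixel
image = rng.uniform(0, 1, 1024)  # hypothetical flattened camera frame

def predict(pixels):
    return "stop sign" if w @ pixels > 0 else "no stop sign"

# Nudge every pixel by the same tiny amount, just enough to push the
# score across the model's decision boundary.
score = w @ image
epsilon = 1.01 * abs(score) / np.abs(w).sum()
perturbed = image - np.sign(score) * epsilon * np.sign(w)

print(predict(image), "->", predict(perturbed))  # the label flips
print(f"max per-pixel change: {epsilon:.4f}")    # tiny on a 0-1 pixel scale
```

Real perception networks are nonlinear, but the underlying weakness is the same: many tiny, coordinated pixel changes can add up to a completely different answer.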

Given these concerns, many scientists feel that self-driving cars built on machine learning systems are not safe. The systems are prone to mechanical reasoning errors, and for that reason the cars remain less safe on the road.

About Jeremy Sutter

Jeremy is a tech and business writer from Simi Valley, CA. He's worked for Adobe, Google, and himself. He lives for success stories, and hopes to be one someday.
