How Much Should You Trust AI?

Anh Nguyen, assistant professor, Department of Computer Science and Software Engineering

How much should you trust the artificial intelligence that’s going to be taking over for you sometime soon? And how soon will that be?

Those are a couple of the questions addressed in “Why Deep-Learning AIs Are So Easy to Fool,” in the October issue of Nature magazine.

“Some people predict 10 years. We’re actually very near, within 10 years,” says Auburn University’s Anh Nguyen, speaking of the time when your car might take over the driving from you.

Nguyen is one of the key sources in the Nature article, which explains how easily the deep neural networks (DNNs) that operate much of today’s technology, and will soon operate autonomous vehicles (AVs), can be fooled.

Maybe they get fooled by circumstance, such as a stop sign that gets bumped to a slight tilt, causing an AV to speed on without stopping. Or maybe it’s not so accidental.


“We cannot explain how these machines are making these decisions in a way that such a small change can fool them. Maybe their decisions are not so trustworthy,” says Nguyen, explaining the findings of two research papers he coauthored, both cited in the Nature article.

The first of these, “Deep Neural Networks are Easily Fooled,” published in 2015, when Nguyen was with the University of Wyoming, found that slight changes in an image, “almost imperceptible to a human,” can cause a DNN to mistake a lion for a library and declare it with a high degree of certainty. “It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects,” the study said.
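For readers who want to see the idea in code, below is a minimal sketch of one well-known way to build such a perturbation, the fast gradient sign method (FGSM). To be clear, this is an illustration, not the method from Nguyen’s paper, and the pretrained ResNet-18 and the random tensor standing in for a real photo are assumptions made only to keep the example self-contained.

    # A minimal FGSM sketch: nudge each pixel slightly in the direction
    # that raises the classifier's loss. Illustrative only; not the exact
    # technique from the 2015 paper.
    import torch
    import torchvision.models as models

    # Assumed stand-in model: a pretrained ResNet-18 from torchvision.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    def fgsm_attack(image, label, epsilon=0.01):
        """Shift every pixel by at most +/- epsilon along the loss gradient."""
        image = image.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(image), label)
        loss.backward()
        # A tiny per-pixel change, yet aligned with the gradient, so it
        # can flip the model's prediction.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()

    x = torch.rand(1, 3, 224, 224)       # random tensor standing in for a photo
    y = model(x).argmax(dim=1)           # the model's current top-1 prediction
    x_adv = fgsm_attack(x, y)
    print("prediction changed:", (model(x_adv).argmax(dim=1) != y).item())

The point of the sketch is the scale mismatch the paper describes: each pixel moves by an amount far below what a human would notice, but because the nudges follow the model’s own loss gradient, the network’s answer, and its confidence in it, can swing completely.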

The second paper, “Strike (with) a Pose,” published in 2019 and based on research at Auburn, shows that DNNs “are even more brittle than we thought,” says Nguyen. The thinking machine can be fooled by “taking an object and randomly rotating it slightly, such that there is only a 3 percent change.”
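The rotation result is even simpler to sketch: take an image, turn it a few degrees, and ask whether the classifier’s answer survives. Again a hedged illustration: the 2019 work posed 3-D objects in a renderer, while the snippet below uses a plain 2-D rotation and the same assumed pretrained ResNet-18 as a stand-in.

    # A rotation-brittleness check: does a small rotation change the
    # classifier's top-1 prediction? Simplified 2-D analogue of the
    # 3-D pose experiments described above.
    import torch
    import torchvision.models as models
    import torchvision.transforms.functional as TF

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    def prediction_is_stable(image, degrees=5.0):
        """Return True if rotating the image slightly leaves the top-1 label unchanged."""
        with torch.no_grad():
            before = model(image).argmax(dim=1)
            after = model(TF.rotate(image, degrees)).argmax(dim=1)
        return bool((before == after).item())

    x = torch.rand(1, 3, 224, 224)  # stand-in for a real photo
    print("stable under small rotation:", prediction_is_stable(x))

Run over many real images, a test like this is what reveals the brittleness Nguyen describes: a transformation that changes almost nothing a human would care about can still push the network to a different answer.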

So, what are the implications for self-driving cars, and how soon?

“The technology is already somewhat mature, if you want to deploy them on the street,” says Nguyen. “Once in a while a vehicle makes a big mistake and it might kill someone. But under normal conditions they are ok, even safer than humans driving.”

For AVs, Nguyen says his concern is more one of security than accidents. “From a security point of view, someone can mess with your input, cause cars to ignore stop signs or the speed limit. How do you avoid failures when hackers are coming in and changing camera input to something your car has never seen before?”

How much you should trust an AI’s DNN, says Nguyen, is also a matter of what task you are turning over to it.

“Moving boxes, it’s highly unlikely someone will mess up. A car is a different story. If you assume nobody is doing malicious stuff and there are no major changes to the city you’re driving in, it may not be a problem. But, then, should we trust the machine in other applications, such as the medical domain?”
