Scientists Increasingly Can’t Explain How AI Works


There's a problem in artificial intelligence much like the one posed by the human mind: the people who develop AI increasingly struggle to explain how it works and to determine why it produces the outputs it does. Deep neural networks (DNNs), made up of layer upon layer of processing systems trained on human-created data to mimic the neural networks of our brains, often seem to mirror not just human intelligence but also human inexplicability.
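
To make the "layer upon layer" structure concrete, here is a minimal sketch, assuming PyTorch; the layer sizes and input are invented for illustration and are not from the article. It also hints at the opacity the piece describes: every learned parameter is a visible number, yet none of them explains why a given input yields a given output.

```python
import torch
import torch.nn as nn

# A hypothetical deep neural network: each nn.Linear is one processing
# layer, and stacking them is what makes the network "deep".
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # layer 1: 784 inputs -> 256 hidden units
    nn.Linear(256, 256), nn.ReLU(),   # layer 2: 256 -> 256 hidden units
    nn.Linear(256, 10),               # layer 3: 10 output scores
)

x = torch.randn(1, 784)               # a stand-in input, e.g. a flattened image
scores = model(x)                     # an output the model offers no rationale for

# The inexplicability in miniature: we can print every weight, but the
# numbers themselves don't say *why* this input produced this output.
n_params = sum(p.numel() for p in model.parameters())
print(scores)
print(f"{n_params} parameters, none individually interpretable")
```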

https://www.vice.com/en/article/y3pezm/scientists-increasingly-cant-explain-how-ai-works