In Ex Machina, Alex Garland portrays the danger of AI as the unbridled intelligence that a capable AI would most likely possess. Ava can instantly distinguish a lie from the truth, as shown in her interactions with Caleb and in her ability to tell that Nathan was never going to let her go as he said he would. An AI of this kind would hold an immense advantage over even the greatest thinkers on the face of the planet, and could easily overwhelm a population of humans with its superior intelligence. Even without a body, a network-connected AI would be able to decimate humanity.

Elon Musk gave a speech at the Code Conference in which he mentioned the idea of a neural lace as a way to govern the behavior of a general AI. The neural lace would be a device implanted in one's brain, able to send and receive massive amounts of information at a much higher rate of data transfer than the fastest keyboard typists and the quickest speakers combined. Elon's vision is that everyone would have one, so that everyone has a say in how artificial intelligence behaves. If 90% of the population wants the AI to start working on a solution to the Twin Prime Conjecture, the AI puts its processing power toward that task. If the majority of humans wanted to end humankind (which I doubt they would), the AI would do their bidding. The biggest part of this is that the power of AI should not lie in one person's hands, in the hands of a few, or in no one's. As Elon puts it, "I don't love the idea of being a house cat, but what's the solution? I think one of the solutions that seems maybe the best is to add an AI layer." The AI layer he is speaking of is the neural tether we would implant so that the AI is subject to our intentions, not the other way around, in which we become the AI's house pet, as Elon likes to say.

AI has many benefits; the sky is the limit as far as what is accomplishable. Yet Elon believes that AI is the number one risk to humanity in the next 100 years, and that general AI is not that far off. He says it would not even take a Terminator-esque AI to destroy humanity: one connected to the network could hack into vital data servers, cut off power, launch missiles, and start wars. It would not take an AI like Ava to destroy us; an AI consisting of nothing but a server room would be able to decimate humanity if we do not take the precautions necessary to manage and control a general AI.
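The governance idea Musk gestures at, AI effort allocated by popular vote through the neural lace, can be caricatured in a few lines of Python. This is purely my own illustration under that assumption; the function name, the vote format, and the threshold are all hypothetical, not anything Musk specified:

```python
# Hypothetical sketch: every connected person votes on what a general AI
# should work on, and the AI pursues a task only if enough of the
# population asks for it. All names here are illustrative inventions.

from collections import Counter

def allocate_ai_effort(votes, threshold=0.5):
    """Return the task the AI should pursue, or None if no single task
    clears the required fraction of the population's votes."""
    if not votes:
        return None
    tally = Counter(votes)
    task, count = tally.most_common(1)[0]  # most popular task and its votes
    if count / len(votes) > threshold:
        return task
    return None

# 90% of a small population votes to work on the twin prime conjecture:
votes = ["twin_primes"] * 9 + ["protein_folding"]
print(allocate_ai_effort(votes))  # twin_primes
```

The threshold is the interesting design knob: a bare majority makes the AI responsive but easy to steer, while a supermajority guards against narrow factions directing its power, which is exactly the concentration-of-control worry the essay raises.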