Elon Musk is worried about Larry Page’s robotics developments

Ever since the creepy AI HAL 9000 turned on its human companions in 1968's 2001: A Space Odyssey, the idea of an AI one day harming the very humans who created it has been one that movies, games and other forms of fiction have embraced wholeheartedly. Robots and the computers that control them make some of the best antagonists, because we don't need to feel bad about blowing them up. That may not be so easy in real life, however, and it's something a lot of futurists are genuinely worried about.

One standout name who has voiced real concern about the creation of an AI that could self-replicate and improve itself, thereby evolving faster than any organism ever has, is Elon Musk, the head of Tesla and SpaceX. While he has been outspoken about his fears of AI development before, in an upcoming authorized biography he is said to be rather afraid of the developments his close friend Larry Page is overseeing at Google, where he believes the creation of an AI could one day take place.

In a video featuring the book's author, Ashlee Vance claims that Elon Musk has bought up shares in many companies developing AI technologies, purely so he can keep an eye on what they are doing. In the past year in particular, however, Vance says Musk has grown increasingly concerned about some of those developments.

This is a sharply contrasting view to the one taken by Larry Page, who has been pushing Google's self-driving cars forward, along with other robotics and AI developments. In his mind, creating an artificial intelligence would be the crowning achievement of mankind and could usher in a whole new age of developments and products. He sees it as possible to build in safeguards that would prevent an AI from ever going rogue and deciding it needed to destroy humanity.

While Larry Page sees the benefits of developing AI, however, Elon Musk is of the complete opposite opinion and believes that the potential for harm to humanity – in his eyes, the possibility of ending it entirely – far outweighs any benefits such a technology may bring.

Which camp do you fall into? Is AI something dangerous that needs to be prevented from coming into existence, or is it something that could benefit mankind to no end?

Image source: Wikipedia

Jon Martindale is an English author and journalist who has written for a number of high-profile technology news outlets, covering everything from the latest hardware and software releases to hacking scandals and online activism.