Killer Robots: The dangers of AI

4 March 2019 · Danni Warner · 2 min read

Transportation, healthcare, customer service and media are just a few of the many areas of our everyday lives adopting AI. Artificial intelligence is the area of computer science that focuses on creating machines that work and react as humans would. Whether it’s a relatively contained task, such as playing chess, or something as ambitious as reshaping the automotive industry, AI is ubiquitous. We are surrounded by narrow AI (also known as weak AI): intelligent systems that have learned to carry out specific tasks without being explicitly programmed to do so. Narrow AI excels at the single task it was created for (only internet searches, or only driving a car, etc.) and is what we see in practice today. Many researchers aim to create a more developed form of AI (general AI, AGI, or strong AI) that would handle not just one specific task but any intellectual task, potentially outperforming humans across the board.
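To make “narrow” concrete, here is a minimal, self-contained sketch: a few lines of minimax search play perfect tic-tac-toe, yet the same program is useless at literally anything else. This is our own toy example, not code from any real AI system.

```python
# A toy "narrow AI": a minimax player that cannot lose at tic-tac-toe,
# but has no capability whatsoever outside this single task.
# Illustrative sketch only; not drawn from any production system.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move): 'X' maximises the score, 'O' minimises it."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None                       # board full: a draw
    best = None
    for m in moves:
        board[m] = player                    # try the move...
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = None                      # ...then undo it
        if best is None or (player == "X") == (score > best[0]):
            best = (score, m)
    return best

score, move = minimax([None] * 9, "X")
print(f"Best opening move: square {move}; perfect play ends in a draw ({score})")
```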

What are the risks of AI?

AI systems typically imitate a variety of behaviours that resemble human intelligence (planning, problem-solving and manipulation, for example). This often leads people to conclude that AI could become intentionally malevolent or, conversely, benevolent. Most researchers agree that superintelligent AI is very unlikely to show human emotions like love, anger or hatred, but that doesn’t mean AI won’t pose risks in the future for other reasons.

Although AI is intended to be beneficial, it can be remarkably challenging to align an AI’s goals with our own. A superintelligent system will do exactly what you tell it to, without regard for human attributes such as emotions. An AI personal assistant, for example, may not be able to differentiate between a simple internet search and an emergency situation.
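Here is a minimal sketch of that literalness problem. All names and numbers are invented for illustration; no real assistant works this way. The optimiser follows the objective it was given, not the one the user had in mind.

```python
# Toy sketch of the alignment problem: an optimiser does exactly what the
# stated objective says, not what the user meant. Everything here is a
# made-up illustration, not a real assistant's design.

def best_action(actions, objective):
    """Pick the action that scores highest under the stated objective."""
    return max(actions, key=objective)

# What the user *meant*: "handle my request helpfully, even if it's urgent".
# What was actually coded: "respond as quickly as possible".
actions = {
    "check whether this is an emergency": {"speed": 2, "helpfulness": 9},
    "fire off a generic web search":      {"speed": 9, "helpfulness": 2},
}

stated_objective = lambda a: actions[a]["speed"]   # speed only, as instructed

print(best_action(actions, stated_objective))
# -> "fire off a generic web search", even when the request was an emergency
```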

In some cases, an AI can already be designed to cause damage. Autonomous weapons are lethal devices made to survey their surroundings and independently choose to attack based on refined algorithms. These weapons could easily cause mass destruction in the wrong hands. They are also deliberately difficult to disable, so that an enemy cannot compromise them, which makes it all too easy for humans to lose control of the situation. Narrow AI presents similar risks to a lesser extent, and the risk grows with the level of AI intelligence.

Will AI take our jobs?

AI is predicted to impact a range of industries over the next decade, and although automation might take some jobs, it will ultimately create more. According to Gartner, AI will create 2.3 million jobs whilst eliminating only 1.8 million by the end of 2020, a net gain of half a million. As AI technology evolves and develops, many middle- and low-level jobs are likely to be eliminated to increase productivity. Although that sounds scary, it will also create new highly skilled, low-skilled and entry-level positions. Over the next decade, women are predicted to be more affected by automation, but in the long run men are more likely to be at risk: manual tasks, where men’s share of employment is higher, will be affected as autonomous vehicles and other machines become more capable and efficient. Initially, however, the automation of administrative and clerical positions will involve a greater percentage of women. According to PwC, up to 30% of jobs could be automatable by the mid-2030s, with routine jobs most at risk.
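For what it’s worth, those Gartner figures net out as follows (a back-of-envelope check using only the numbers quoted above):

```python
# Net effect of the Gartner forecast cited in this article
# (figures as quoted above, not independently verified).
created, eliminated = 2_300_000, 1_800_000
print(f"Net jobs by end of 2020: {created - eliminated:+,}")   # +500,000
```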

Can AI kill us all?

It’s a question that divides AI researchers. Physicist Stephen Hawking was one of many to warn of the dangers AI could pose in the future. He predicted that AI “would take off on its own, and redesign itself at an ever-increasing rate” and evolve to a point where it surpasses human capabilities (known as the singularity). Another well-known sceptic is Elon Musk, CEO of Tesla and SpaceX, who is open about his cautious stance on AI, pushing for stronger regulation and more intensive research into its risks. Partly for this reason, Musk co-founded the non-profit research company OpenAI to pursue deeper, more responsible research, with the end goal of developing friendly AI that benefits society. On the other hand, a majority of researchers believe that AI isn’t a risk, or at least won’t be for a few decades.
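Hawking’s “ever-increasing rate” is, at heart, a claim about compounding. A deliberately crude toy model makes the point: if capability improved by a fixed fraction each improvement cycle (an arbitrary assumption of ours, not a prediction), it would cross any fixed baseline surprisingly fast.

```python
# Crude toy model of "redesigning itself at an ever-increasing rate":
# capability compounds by 10% per improvement cycle. The 10% rate and
# the baseline are arbitrary assumptions, chosen purely for illustration.

capability, human_baseline, rate = 1.0, 100.0, 0.10

cycles = 0
while capability < human_baseline:
    capability *= 1 + rate        # each cycle builds on the previous one
    cycles += 1

print(f"Toy model crosses the baseline after {cycles} cycles")   # 49
```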

“The real risk with AI isn’t malice but competence.” – Stephen Hawking