The ethics of Artificial Intelligence

By Naoimh Reilly

Can we instill a conscience in something that is not conscious? Academics, journalists and researchers are voicing concerns over the rise of artificial intelligence (AI) because no-one is policing this technology. It’s amazing technological progress – but nobody knows who is creating what.

How do we instill ethics into a machine? There are so many concerns over biased, deceptive, agenda-setting technology because, regardless of idealism, money is always king. The only way to even try to ensure ethical practice is through laws made by government. The last thing we want is to fuel automated warfare or mass surveillance.

There are so many issues related to AI and ethics – or lack thereof. What if something is ethical to one person but not to another? Ethics are in the eye of the beholder; in other words, whoever is creating the AI. It is an enormously complicated task to build an AI that has an ethical component.

What if individuals are no longer valuable and it’s decided to accelerate the progress of humanity by upgrading a chosen few? Is that ethical?

Could driverless cars be hacked to see something that's not there, or not to see something that is there?

What happens with military AI? Should it be held to different ethical standards from everyday AI? Should it be allowed to kill some to make the masses safer?

Or will these weapons be more accurate by eliminating human error?

Are we creating something that is so much more powerful than ourselves that it doesn't really matter what ethics we program into it? Will it just adapt and evolve by itself until we have no control whatsoever? What are we not considering? What are the unintended consequences?

So many questions...

When machines become more economically viable than humans, will we stop attaching as much value as we currently do to human life? The system will continue to value humans as a whole, but not necessarily individuals.

Putting humans first – the political idea of liberalism – came about because it made sense to give value to every single human being. Likewise, on the battlefield, every person counted.

Industry depended on individuals in the past. What happens when we don't need people to do these jobs anymore because AI does them better? When machines become more valuable than people, liberalism will struggle to survive.

As people lose their economic importance, will a moral argument be enough to protect human rights? Will powerful governments and corporations, whose main priority is to make money, continue to value every human being - even when that person costs more than they contribute?

It is possible that a two-tier existence could emerge - many would say that's already here. Only the very wealthy will have a good quality of life as the gap between rich and poor grows larger. How will the poor survive if robots can do all the jobs? If driverless cars mean there will be fewer accidents, people will live longer and need more health services. How will we distribute wealth that is created by a minority? What will a post-work society look like?

AI might not have a conscience, but that does not mean it cannot function intelligently. These are completely separate things.

Companies such as Google, Facebook and Microsoft have created partnership organisations on AI, but many believe they don't, or won't, do much. Will we instill ethics into certain machines but not others?

We have long considered ourselves to be the most intelligent species, but that assumption is about to be tested. In 1942, the science fiction writer Isaac Asimov set out ‘The Three Laws of Robotics’ in his short story ‘Runaround’, which quotes them from a fictional ‘Handbook of Robotics’. These three laws have often been cited by technology companies when considering what should be programmed into a robot. The laws are as follows:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Many creators of technology believe these laws are paramount and should be enshrined in every AI we create. However, what if they are not enough? What if a robot has to decide who lives and who dies? What if it can only save one person in a car crash? Who will it choose?
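To see both the hierarchy of the laws and their silence on a dilemma like that crash, they can be sketched as a priority-ordered check. The sketch below is purely illustrative and assumes nothing about any real robotics system; the Action fields and the permitted function are names invented for the example.

```python
# A minimal sketch of Asimov's three laws as a priority-ordered check.
# Every name here (Action, permitted, the boolean fields) is invented for
# illustration; it is not drawn from any real system.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool       # would carrying this out injure a human, or let one come to harm?
    ordered_by_human: bool  # was it directly ordered by a human?
    endangers_self: bool    # does it put the robot's own existence at risk?

def permitted(action: Action) -> bool:
    """Apply the three laws in strict priority order."""
    # First Law: harming a human is vetoed outright, even if a human ordered it.
    if action.harms_human:
        return False
    # Second Law: an order that has survived the First Law must be obeyed.
    if action.ordered_by_human:
        return True
    # Third Law: with no order in play, the robot should not risk its own existence.
    return not action.endangers_self

# The hierarchy settles easy cases, but it falls silent on the crash above:
# if saving either person means letting the other come to harm, both options
# fail the First Law and the laws give no way to choose.
save_passenger = Action("swerve to protect the passenger",
                        harms_human=True, ordered_by_human=False, endangers_self=True)
save_pedestrian = Action("brake to protect the pedestrian",
                         harms_human=True, ordered_by_human=False, endangers_self=False)
print(permitted(save_passenger), permitted(save_pedestrian))  # False False
```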

The fact that modern technology has a science fiction book to fall back on does not instill confidence in its ability to consider the unintended consequences of giving robots a moral compass.

We can make a machine intelligent and give it a task to do. But what if the most efficient way to do that task is to harm, restrict, or kill a human? It would not be doing this out of malice, but out of a lack of understanding. Because intelligence comes from learning, it stands to reason that robots will make mistakes as they train to detect patterns of behaviour. Will this training phase be especially dangerous for humans? How do we continue to control them long after they become more intelligent and efficient than ourselves?
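In the same illustrative spirit, a toy planner shows how this can happen. Every name in it (Plan, choose, the efficiency scores) is made up; the point is only that when efficiency is the whole objective, a harmful option can win simply because nothing in the score mentions people.

```python
# A toy planner, purely illustrative: it picks whichever plan scores highest
# on efficiency alone, unless an explicit veto on harming people is added.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    efficiency: float   # how quickly and cheaply the task gets done
    harms_human: bool   # a side effect the objective itself knows nothing about

def choose(plans, forbid_harm: bool) -> Plan:
    """Pick the most efficient plan, optionally vetoing any that harms a person."""
    candidates = [p for p in plans if not (forbid_harm and p.harms_human)]
    return max(candidates, key=lambda p: p.efficiency)

plans = [
    Plan("go around the obstacle", efficiency=0.7, harms_human=False),
    Plan("drive straight through", efficiency=0.9, harms_human=True),
]

# With efficiency as the only objective, the harmful plan wins - not out of
# malice, simply because nothing in the score mentions people.
print(choose(plans, forbid_harm=False).name)  # drive straight through
print(choose(plans, forbid_harm=True).name)   # go around the obstacle
```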

If it can interact with us, but is not a conscious being, how will we be able to control it?