
Writing a moral code: the ethics and morals involved in developing AI and confining its behaviour


How an AI should be developed to respect human rights and laws

As we usher in a new age of technology, with Artificial Intelligence deployed across different fields to ease the tasks of ordinary people and organizations, from answering simple queries to running entire trading systems, the need for moral and ethical boundaries around creating and deploying AI becomes apparent. It comes as no surprise that AI is slowly becoming an everyday part of our lives, and that some people have begun to fear scenarios where Artificial Intelligence goes rogue and starts harming people, like in some Hollywood action sci-fi flick. With no framework in place that defines how an AI should work, as opposed to what an AI can do, it falls to us as future engineers to at least discuss and debate the ethical and moral issues surrounding this technology.

One idea that often rears its head in conversations about robot ethics is Isaac Asimov and his Three Laws of Robotics, which state that:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
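The three laws above form a strict priority hierarchy, which can be sketched as a simple ordered rule check. This is only an illustrative toy: the `Action` fields and the yes/no model of harm and obedience are invented for this sketch, and real AI safety constraints are nothing this simple.

```python
# A toy model of Asimov's Three Laws as a priority-ordered rule check.
# The Action fields below are hypothetical, invented purely for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # would this action injure a human?
    inaction_harm: bool = False     # would *not* acting let a human come to harm?
    ordered_by_human: bool = False  # was this action commanded by a human?
    endangers_robot: bool = False   # does this action risk the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never harm a human. Checked before everything else,
    # so an order to harm a human is refused.
    if action.harms_human:
        return False
    # Second Law: obey human orders (the First Law was already checked above).
    if action.ordered_by_human:
        return True
    # Third Law: avoid self-endangerment, unless inaction would let
    # a human come to harm (First Law outranks self-preservation).
    if action.endangers_robot and not action.inaction_harm:
        return False
    return True
```

The ordering of the `if` statements is the whole point: each law is only consulted after every higher-priority law has been satisfied, which is exactly the "except where such orders would conflict" structure of the original text.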

While they are primarily used in fiction, the laws are still a neat starting point and raise a curious question about free will in Artificial Intelligence. By confining AI to this set of rules, we not only deny Artificial Intelligence any right to free will but also place it in the position of a slave. There is also the objection that if an AI really is as intelligent as an ordinary human being, couldn't it simply break the rules, just as we humans do?

This leads to another debate: is an AI's life as important as a human life? Surely, if it has intelligence, a perspective, and the ability to reason, it should be protected like a human, right? But what assurance do we have that the AI in front of us actually holds any form of sentience or consciousness the way humans do? It is, after all, nothing more than a body of code.

Realistically speaking, we have no way of knowing whether an AI is conscious. Whether it deserves rights comparable to human rights depends entirely on the kind of AI that gets developed. Under the philosophy of sentientism, an AI would become eligible for rights only if it showed signs of sentience, something no artificial intelligence technology has demonstrated so far. Naturally, this debate won't be settled until the final product is in front of us.

The second part of morality in AI technology concerns confining its behavior: how an AI should be developed so that it respects human rights and laws, and is not used to achieve goals that run against human ethics and morals.

While we don't yet have the highest level of artificial intelligence, we do have complex algorithms that carry out sophisticated tasks through self-learning and analysis. Take the example of our beloved social media site, Facebook. It gathers its users' personal information, their needs, likes, and dislikes, from their messages, posts, comments, and even phone calls, and then sells this information to corporations for targeted advertising. In a capitalist world, it is easy to see how such algorithms are used to manipulate users into buying products through subtle psychological suggestion. There is absolutely nothing ethical about this either: these organizations are setting up AIs to maximize their revenues, and there is currently no law that stops them from doing so.

As AI technology develops, this mental manipulation will only get worse unless we do something about it. The simplest and most straightforward solution seems to be passing laws that require companies to be transparent about their algorithms.

In the end, questions about AI's morality are still not a major concern for most people working on it. We are still far from the time when the question of AI's sentience will demand an answer. Until that day comes, we can only discuss whether AIs deserve any rights at all. Given the facts and figures, everyone is free to form their own opinion about the ethics of AI's treatment, its development, and the confinement of its behavior.

By: Mohammad Ebad Sheikh
