AI refers to a set of technologies, enabled by advanced computing power, that perform activities heretofore thought to be unique to human beings, such as playing (and winning) chess or Go, writing news reports, or responding to natural speech. Not only does AI push the boundaries between activities proper to humans and those proper to machines, but it also often significantly augments what the best-endowed humans can do. For instance, even the most intelligent people are no match for the information storage and recall capacity the Google search engine puts at anyone’s fingertips. Because of these advantages, there is a growing tendency to replace human agents with AI in many fields, both mechanical, as in driverless cars, and cognitive, as in interpreting radiology scans.
Most ethical issues concerning AI involve areas such as privacy, equity, or accountability. People are increasingly distressed at how Facebook and Google store their personal data and sell them to advertisers and political interest groups. Complaints are on the rise about how sexual, racial, and economic biases are embedded in employee-hiring and court-sentencing software. And lawyers and lawmakers still haven’t figured out exactly how to attribute responsibility when self-driving vehicles, such as those tested by Uber, are involved in accidents. However, the question of who the ethical subject is in AI is often overlooked, taken for granted, or deliberately ignored.
In the Western philosophical traditions, certain distinctive features have always been required of moral agents. Does AI, in its various forms, meet the test? Let us run down the list.
First, moral agents need to be alive, that is, capable of self-motion due to an internal principle. We see this even in the simplest unicellular organisms, which seek nourishment, grow, and reproduce by themselves, free from external intervention. They are not tethered to a power source that can be plugged in or unplugged, nor do they run on batteries. They themselves “produce” energy from within, though relying, of course, on “ingredients” from the external environment: think of photosynthesis, for example.
Second, it is not enough simply to be alive to be a moral agent. One also needs to be a life form of a particular kind, one displaying rationality and will. That is why we generally do not consider plants and non-human animals to be moral agents (whether we should regard individuals of other animal species as moral subjects is a different matter altogether). We do not get angry at or scold a plant if it fails to flower and bear fruit, or if its fruit is not to our liking. We do not sue our neighbor’s dog if it bites us, but rather our neighbor, who has perhaps been careless with the pet. We give plants and animals of other species a pass because we know that whatever they do, they do not, indeed cannot, mean it the way we humans do. Non-human subjects cannot envision a future goal or purpose, nor choose one among several possible objectives or aims. Neither can they consciously direct their behavior towards the achievement of that target. They have no sense of self-worth.
Let us now return to AI.
We realize from the above that, despite appearances, AI is not alive. It is not alive because it is a human-made machine, albeit a very sophisticated one. So far, only nature has been able to produce life forms; through genetic engineering, all we humans have done is mix and match from previously existing samples. I’d even wager that humans will never be able to create new life forms, for the same reason that they cannot create that internal principle of motion the Ancients called “soul”. Because AI doesn’t have a “soul”, neither does it have a body. When we disassemble the pieces, we verify that there is nothing there. More importantly, unlike with any living thing, we could skilfully put the pieces back together and AI would continue to function as before.
Not being alive and having no body, AI does not have consciousness either. It is incapable of willful, rational action, nor does it have a sense of self-worth. AI cannot know, and does not know, what it is doing. It simply carries out the instructions contained in its controlling algorithm. What happens is that these algorithms or instructions can sometimes become so complicated that not even their programmers are able to foresee, predict, or control the outcomes. Not having a will, AI cannot choose; of itself it has no inclination or tendency towards anything; it can have no preferences. Why not? Because there is no self to protect, satisfy, or perfect.
As a consequence of the above, AI is not, and cannot be, a moral agent. There is nobody, no “I”, behind AI.
As AI moves from research labs to society at large by the hand of business, clarity about who the moral agent is should help us deal with the myriad ethical problems concerning algorithm design, benchmarking, training, deployment, and the decision-making capacities of these systems. But whatever the promises or dangers of AI use, the responsibilities belong to humans alone. AI has no shame, honor, or intrinsic worth. In this sense, there is no such thing as AI ethics.