The “Master-Slave Dialectic” in Hegel’s Phenomenology of Spirit is perhaps the most commented-upon passage in his work. It has been used to explain a wide range of processes: how the human species evolved from lower life-forms (“hominization”), the psychological development of children, the transformation of societies through industrialization, and the history of nations as they progress into sovereign states. It could also serve as a lens through which to examine the trajectories of human-AI engagement.
The “Master-Slave Dialectic” is a conceptual construct, an idealized story of how two unequal individuals meet, experiencing deep conflict, even a life-threatening struggle, in their joint quest for higher-level self-consciousness. As they pass through the different stages of their relationship, they come to realize how inescapably they depend on each other. Superior self-awareness can only come through recognition of and from the other; self-reflection can only be achieved through the mediation of the other as mirror. Although to affirm the self one must deny the other, at the same time the other turns out to be necessary even to forge the notion of self. Despite the inequality, there is a mutual, reciprocal need between them.
AI represents the latest episode in the grand scheme of technological advancement. Like all tools, whether as computer algorithms alone or as algorithms embedded in robots, AI was invented by humans to make work easier, to save time, and to make life more pleasant by freeing it from drudgery. AI accomplishes this not only by assisting in routine tasks, but also by augmenting and enhancing human agency, through precise medical imaging diagnostics, for instance. In that respect, we humans are masters, and AI is our close-to-ideal slave. (Indeed, “robot” derives from the Czech robota, the forced labor a serf owed his lord.) Yet there is a constant danger of dependence, exacerbated by the fact that AI can now perform tasks, such as driving, previously imagined to be exclusive to humans. And here is where the dialectic comes in. By learning to perform activities even better than humans can, might AI one day lord it over us? Could AI in the future be the master and we the slave?
I believe not. For reasons I will not be able to explain in this essay, strong AI, that is, artificial general intelligence, not to mention superintelligence, will always be consigned to the realm of science fiction. It will never constitute a real threat.
There are limits to Hegel’s master-slave allegory when applied to human-AI relationships. First, despite the original inequality, the master did not create the slave; rather, the two found each other almost by chance, as previously existing individuals. Humans, on the other hand, create AI from scratch, although they are unable to endow it with life. AI depends entirely on humans for its existence, and its reason for being lies in its instrumental value for goals set and determined by humans. Between humans and AI there is no intersubjectivity. No matter how expert and efficient AI becomes, surpassing humans in particular tasks, it can never establish its own purposes; it has no preferences or desires, and it experiences no satisfaction or fulfillment. The ends of AI will always be extrinsic; it is a slave by “nature”.
In the Hegelian version, the master inexorably becomes dependent on the slave, even as the slave, in turn, becomes dependent not only on the master but also on nature as the source and store of raw materials for his work. By carrying out productive activities for the master, however, the slave develops intelligence, skill, and creativity, the grounds for dignity, recognition, and heightened self-knowledge. The master, by contrast, regresses to a life dedicated to consumption and enjoyment, becoming no different from members of irrational life-forms. The lack of work unwittingly pushes the master into a slavish existence.
Troubling as this twist of fate between master and slave may seem, it is far from inevitable. Not being alive, AI will never develop intelligent consciousness, no matter how hard or how well it works. Moreover, the actual danger lies not with AI but with humans themselves. There are instances in which dependence on AI is not a bad thing at all: think of robot bomb-defusers. Yet admittedly, there are occasions in which over-dependence on AI causes distinctively human powers to atrophy. Why bother to remember telephone numbers, birthdays, addresses, and the like when there are voice-controlled assistants like Siri, Alexa, or Cortana? What is the point of learning maths, memorizing poetry, or even learning a language given Google’s ever-expanding array of apps?
Basically, it boils down to a problem of moderation or temperance. Humans ought to make every effort to retain, not relinquish, dominion. They must be on guard against allowing AI to do whatever they would rather not do themselves, be it homework or driving, without considering why. Lacking nothing but a soul, AI will never refuse to do their bidding. Humans may then be spared immediate pain and suffering, but at the steep price of losing higher-order agency and fulfillment.
AI will only be as good as its human master and commander.