(Second of two parts)
(The first part provides definitions of AI and surveys its business applications. It also establishes links between AI and business ethics.)
Virtue Ethics and AI
Virtue ethics focuses on agents, not actions. It considers how agents achieve the moral good proper to their nature, flourishing, by cultivating the distinctive human excellences or virtues through free and purposeful actions. Human engagement with AI is just one among myriad opportunities to develop the virtues.
How can one develop the virtues while engaging with AI? Some attempt this by centering on the intellectual virtues. Since AI deals mainly with data, information, and statistical correlations for decision making, this emphasis is highly understandable. For Grodzinsky, Big Data is essentially a quantification of human behavior to which AI is applied to perform predictive analyses through correlations. AI has shown exceptional ability in interpolation, predicting what happens next if a trend continues, but not in extrapolation, where no such trend has been identified. Correlations discovered by AI can point to possible causations, although only humans can take that leap, following AI leads and making use of complementary scientific methods.
Ironically, AI was invented in part to remedy human weaknesses, both intellectual and moral, in deliberation, decision making, and action. For instance, in selecting candidates for employment, we would like to be free from biases regarding sex, age, race, and so forth, choosing the best on the basis of predetermined criteria. But AI systems need and depend on training sets based on (historical) data and algorithms, provided and generated by biased human beings. As a result, AI proves ineffective at eliminating biases, serving instead to extend and perhaps even amplify them.
One approach is to ignore ethics altogether and simply use all data, accept their messiness, and just focus on finding correlations. But this position is itself unethical, for humans are inescapably ethical beings who reflect values and moral worth in their decisions. Another is to acknowledge the ethical import of human-AI engagement.
Grodzinsky chooses the latter and strives to discover the intellectual virtues Big Data scientists need to perform well. The detection of statistical correlations cannot be the sole epistemic end of Big Data research. There should likewise be room for other types of knowledge and for greater understanding of phenomena through causal relations. This implies recognizing that data are never free from hypotheses or background theories, including human biases. Data are responses to questions containing complex beliefs or intuitions that are incomplete or require verification. Researchers drawing attention to certain data already show external interests and ulterior motivations. Moreover, data always need a theoretical framework to make sense, be interpreted, or be understood; they are never isolated or self-contained. Further, communicating data meaningfully requires a narrative thread. And in order to draw useful inferences, we need not only domain-specific knowledge, but also a worldview and a commitment to values. Values may not serve as evidence in research, but they complement it.
What intellectual virtues or habits of thought do data scientists need? Grodzinsky offers a list: creativity, curiosity, critical thinking, collaboration, communication, humility, prudence, and intellectual courage. From this list, she highlights three. First, open-mindedness in taking generated patterns and predictions and putting them into context, even when they seem counterintuitive. Second, rigor in validating evidence, producing reliability and trust. And third, honesty in documenting and communicating findings, to ensure transparency, distribute responsibility adequately, and safeguard openness. Only then will data scientists be able to evaluate reliably both the inputs to models and the models themselves (algorithms, variables, data sets).
Apart from intellectual virtues, there are moral ones in play as well. Concern with practical wisdom has arisen over the challenges that automation (through machine learning and robotics) poses to human work (Vallor). Not only can human work be facilitated, augmented, or enhanced; it can also be substituted or replaced by AI systems. This results in a loss of work (even fear that there may not be enough work AI cannot do) and wages, as humans fail to compete with AI productivity. In an extreme version, we enter the “end of work” dystopia. So just how much work, of which kind, and at what pace do we cede to AI?
But the need for virtues will not disappear even in a world where AI is omnipresent. Studies have underscored the emotional roots of experience, something to which AI is impervious. Hence AI’s lack of feelings may be more of a handicap than an advantage. Moreover, AI was meant to overcome human weaknesses not by remedying, but by sidestepping them. For example, machine translation saves humans the trouble of learning a language while allowing them to communicate. Yet virtues, unlike AI, actually remedy human weaknesses at their root, as patience and perseverance improve language learning, for instance.
Practical wisdom is more than technical expertise. It is choosing the right end, from which one decides on the means. It includes an all-encompassing evaluation of the acting self with regard to the end and means chosen. Focusing on AI, Vallor unpacks the distinctiveness of practical wisdom. First, practical wisdom allows for decision making on complex goals over the span of a lifetime. Second, practical wisdom permits identification of an ultimate goal or final end, providing reasons not only to oneself but also to others, intersubjectively. Third, practical wisdom involves taking ownership of decisions and self-regulation toward the end goal, in reference to which the good is measured.
Thus practical wisdom cannot be reduced to the productive expertise of AI. AI systems are mere extensions of ourselves, depending on us for existence and maintenance, although we may not fully understand their decision making processes. AI systems cannot take on responsibility. However, practical wisdom is not the only moral virtue humans exercise in engagement with AI. Justice, courage, and moderation are also relevant.
AI scientists and users need justice to avoid historic inequalities in employee selection and criminal sentencing software, for instance. Justice requires special attention to the vulnerable, such as the poor, children, senior citizens, the disabled, and the marginalized, giving them access to AI to participate in the benefits. For example, the visual or hearing impaired could use AI to navigate cities or obtain information through specially designed interfaces.
Courage is equally necessary for optimal AI use. Instead of putting brakes on innovation, AI research should be encouraged and promoted. It would be great if self-driving road vehicles, combinations of AI and robotics, were to reach the safety standards of air travel, to cite an example. Similarly, the extensive use of AI in medical diagnostics could boost early detection of illnesses and improve prognosis. And AI embedded in robots could likewise take over hazardous jobs in mining or bomb-defusing, for instance.
Moderation, too, is required for AI to fulfil its potential. Marketers of addictive AI products, such as games, should take proper measures when touting them to vulnerable populations. And governments must also take care in data gathering, so as not to intrude into the privacy of citizens and become surveillance states. That is why we need guidelines even for contact tracing to fight the spread of diseases, for example. Otherwise, both marketers and governments would fail to respect the dignity of customers and citizens.
AI at the Service of Human Flourishing
AI systems are instruments or tools invented for the ultimate purpose of contributing to flourishing, the good life for human beings in society. Rules or behavioral norms are necessary to ensure proper human-AI engagement, particularly in developing or designing, deploying, and using such systems. AI can augment, extend, and enhance human agency in perception, reasoning or decision making, and actuation. Here, once again, the role of the virtues is crucial. The intellectual and moral virtues ensure human-AI engagement is not only efficient, but also perfective of human beings and properly ordered toward their flourishing.