Trustworthy AI? What’s lost in translation


When telephones were first installed in homes, people reportedly worried about whether they were properly dressed to receive calls. Since then, we've come a long way in understanding this technology, judging from the ubiquity of phones in hotel bathrooms and the endless chatter one overhears in public toilet stalls.

Similar adjustments are taking place with regard to AI. To this end, the EU Commission set up an expert group to draft the "Ethics Guidelines for Trustworthy AI", published in April 2019. By "trustworthy AI" the document means a system that, throughout its entire "life cycle", proves "lawful", "ethical", and "robust" (although the focus is only on the last two elements). In this context, "robustness" has to do with safety, security, resilience, accuracy, and reliability. If I had a car instead of AI, I suppose I'd call it "robust" if it brought me safely, efficiently, and in relative comfort from one place to another; in other words, if it did everything you'd expect a car to do, if it were "dependable". But would I call it "ethical" or "trustworthy"? I can only hope that my Uber or Lyft driver is ethical and trustworthy; but their car? How and why should AI be different from a car?

The Guidelines go on to explain "ethical" AI in terms of safeguarding fundamental rights: respecting human autonomy, preventing harm, procuring fairness, and guaranteeing explicable functioning. None of this would make sense if it did not refer to the human agents who design, deploy, and use AI, for by itself AI can do nothing; from the standpoint of mobility, it would be no different from a car parked in a garage. It cannot respect, prevent, procure, or guarantee anything of its own accord. Only human beings would have reason to pay attention to vulnerable groups, such as children, the sick, or the poor, and recognize their intrinsic worth beyond the semblance of weakness. Only they could value democracy, participation in political issues, or the upholding of the rule of law against arbitrariness. AI could not care less about these, or anything else for that matter.

Further, the Guidelines establish seven requirements for AI to be deemed "trustworthy". Again, we have to infer that the designers, deployers, and users of AI ought to support human agency and decision-making (for instance, by enabling individuals to store and keep track of their personal data) and defer always to human oversight and correction (by keeping humans "in" or "on the loop" and "in command", like the combat pilots in northern Virginia who fly drones over Afghanistan). AI systems should be robust, programmed to prevent harm and minimize risks to human integrity and wellbeing, for example from malicious hacking attacks. AI must also be devised to protect data privacy and integrity, ensuring that sensitive information is neither accessed nor used by the wrong people or for the wrong motives; that is what sets a "smart city" apart from a "surveillance city" or a panopticon. The Guidelines likewise require that AI design be transparent in its data management and decision-making: AI service providers should communicate in simple, understandable terms how data are gathered, kept, and used, so that the people involved can give their informed consent, trace errors, and introduce rectifications into the system. Moreover, AI must be used in a manner that is inclusive or non-discriminatory (not penalizing people of certain ethnicities in employee selection) and fair (eschewing price collusion among sellers). AI ought to be employed so as to advance societal and environmental wellbeing, through responsible energy consumption and resource usage at all stages of its life cycle. And lastly, AI should enhance accountability, enabling audits, minimizing and reporting harms, flagging trade-offs, and permitting redress when problems arise, as in the case of collateral damage from drone strikes.

Yet once more, all these demands are made of the people who design, fabricate, and operate AI. For AI is an object, not a subject, much less an ethical agent. We should not allow ourselves to be misled by the anthropomorphic metaphors and allegories we use in speaking of AI. There is no homunculus, no ghost inside the machine.

In the same way that a telephone is a tool for speaking to someone out of hearing distance, or a car a means of land transport, AI is an instrument we have invented for the same ultimate purpose: achieving a better human life. As with all forms of technology, its goodness lies in its usefulness for the ends we humans freely ordain. Thus "AI ethics" is nothing but a new kind of applied ethics at the service of individual and social flourishing. It would be utterly foolish to expect an AI ethics code to be written, or worse, to emerge spontaneously, that would supplant human reasoning about what ought to be done and what avoided.

That’s why perhaps the most valuable part of the Guidelines is the final “examination of conscience” where those who design, deploy, and use AI, individually or in organizations, can ask themselves about their motives and behavior regarding the above-mentioned principles: “Are human users made aware that they are interacting with a non-human agent?” “Does the AI system enhance or augment human capabilities?” “Did you put in place mechanisms to ensure human control or oversight?” and so forth.

Because in the end, it's the humans operating AI who need to be trustworthy, more than the system itself. Unfortunately, that's often lost in translation.

Who is the “I” behind AI? 

AI refers to a set of technologies based on advanced computing power that perform activities heretofore thought to be unique to human beings, such as playing (and winning at) chess or Go, writing news reports, or responding to natural speech, to name a few. Not only does AI push the boundaries between activities proper to humans and those proper to machines; it also often significantly augments what the best-endowed humans can do. For instance, even the most intelligent people would be no match for the information storage and recall capacity the Google search engine puts at anyone's fingertips. Because of these advantages, there is a growing tendency to replace human agents with AI in many fields, both mechanical, as in driverless cars, and cognitive, as in interpreting radiology scans.

Most ethical issues concerning AI involve areas such as privacy, equity, or accountability. People are increasingly distressed at how Facebook and Google store their personal data and sell them to advertisers and political interest groups. Complaints are on the rise about how sexual, racial, and economic biases are embedded in employee-hiring and court-sentencing software. And lawyers and lawmakers still haven't figured out exactly how to attribute responsibility when self-driving vehicles, such as those tested by Uber, are involved in accidents. However, the question of who the ethical subject is in AI is often overlooked, taken for granted, or deliberately ignored.

In the Western philosophical tradition, certain distinctive features have always been required of moral agents. Does AI, in its various forms, meet the test? Let us run down the list.

First, moral agents need to be alive, that is, capable of self-motion due to an internal principle. We see this even in the simplest unicellular organisms, which seek nourishment, grow, and reproduce by themselves, free from external intervention. They are not tethered to a power source that can be plugged in or unplugged, nor do they run on batteries. They "produce" energy from within, although counting, of course, on "ingredients" from the external environment: think of photosynthesis, for example.

Second, it is not enough simply to be alive to become a moral agent. One also needs to be a life-form of a particular kind, displaying rationality and willfulness. That's why we generally do not consider plants and non-human animals moral agents (whether we should regard individuals of other animal species as moral subjects is a different matter altogether). We do not get angry with a plant or scold it if it fails to flower and bear fruit, or if its fruit is not to our liking. We do not sue our neighbor's dog if it bites us, but our neighbor, who has perhaps been careless with the pet. We give plants and animals of other species a pass because we know that whatever they do, they do not, indeed cannot, mean it the way we humans do. Non-human subjects cannot envision a future goal or purpose, nor choose among several possible objectives or aims. Neither can they consciously direct their behavior towards the achievement of that target. They have no sense of self-worth.

Let us now return to AI. 

We realize from the above that, despite appearances, AI is not alive. It is not alive because it is a human-made machine, albeit a very sophisticated one. So far, only nature has been able to produce life forms; through genetic engineering, all we humans have done is mix and match previously existing samples. I'd even wager humans will never be able to create new life forms, for the same reason that they cannot create that internal principle of motion the Ancients called "soul". Because AI doesn't have a "soul", neither does it have a body. When we disassemble the pieces, we verify there's nothing there. And more importantly, unlike with any living thing, we could skillfully put the pieces back together and AI would continue to function as before.

Not being alive and having no body, AI does not have consciousness either. It is incapable of willful, rational action and has no sense of self-worth. AI cannot know and does not know what it is doing. It simply carries out the instructions contained in its controlling algorithm. What happens is that these algorithms or instructions can sometimes get so complicated that not even their programmers are able to foresee, predict, or control the results. Not having a will, AI cannot choose; of itself it has no inclination or tendency towards anything; it can have no preferences. Why not? Because there is no self to protect, satisfy, or perfect.

As a consequence of the above, AI is not, nor can it be, a moral agent. There is nobody, no "I", behind AI.

As AI moves from research labs into society at large by the hand of business, clarity about who the moral agent is should help us deal with the myriad ethical problems concerning algorithm design, benchmarking, training, deployment, and the decision-making capacities of these systems. But whatever the promises or dangers of AI use, the responsibilities belong to humans alone. AI has no shame, honor, or intrinsic worth. In this sense, there is no such thing as AI ethics.

 

Why I Teach


June is the month of graduations. It is also a time to take stock of the school year that has just passed. A recurring question I ask myself is "Why do I teach (or at least try my best to)?" Below is my response.

Aristotle begins his Metaphysics by saying that all human beings by nature desire to learn. We are all innately curious; we seek to know, always and everything. It’s our default mode. This is probably the main reason why I’ve dedicated the greater part of my life to teaching. In teaching I’ve found the most effective way to pursue my real passion, which is learning.

But teaching isn’t learning! True enough. However, in some sense, teaching is the perfect excuse for learning. One can only teach what one has previously learned. And in teaching, one learns double. Firstly, he finds out whether he has really learned whatever it is, or was simply under a false impression. Secondly, he also learns to share that precise bit of knowledge with others, something which requires the mastery of a very important craft.

Despite more than 25 years in the teaching profession, frankly, I still have serious doubts about whether it is really possible to teach anyone anything. Socrates' irony (he only knows that he knows nothing, and knowing nothing, he cannot claim to teach anyone) makes more and more sense to me. Little by little, I've disabused myself of the common notion of teaching as transmitting knowledge, as if knowledge were a thing or an object. Google and Wikipedia can provide all the factoids; for those, students don't need my help. Perhaps the only way to keep myself from becoming redundant as a teacher is by wedding my teaching to mentoring. Like all human beings, students by nature already desire to learn. Sometimes, I just have to get out of their way. At other times, however, I may have to take a more active role, rekindling the desire for learning when the journey becomes too long, too arduous, or even too dangerous for them to travel alone. My role is to provide guidance and to accompany them as far as necessary in this wondrous adventure. But ultimately learning, like walking, is something they can only do by themselves. It would be wrong even to pretend that it could be otherwise.

Another metaphor for teaching, besides that of a travel companion, is the offering of a gift. Normally, when presenting a gift, we give the best of what we have. Yet despite our best intentions, there is unfortunately no guarantee that the person receiving the gift will actually like it. Just the same, I think this is a risk worth taking: presenting my students with the best I can muster, in the hope that, if not immediately, then at least some day, they will come to appreciate my gift. After all, we do not always value things at once, on first impression. This is especially true, I realize, of the content of business ethics courses. In any case, it would be a pity if, having renounced wealth by choosing the teaching profession, I were now to seek applause and fame.

In other words, teaching for me is nothing else but cultivating in others the love for learning. That way, my students will always continue to learn, even if I’m no longer around to nudge, cajole or grade them. 

“Well and good”, one might say. “But let’s get real. What’s in it for you, in this teaching philosophy of ‘tending the fire of learning’ in others?”

Knowledge, viewed as a gift, is indeed a very strange creature. Most of the time, we imagine gifts as objects of competition, things whose possession necessarily entails exclusion: what one wins, all the rest lose. For me, however, this is not the case with knowledge. By sharing what I've learned with students, I do not lose it. On the contrary, what I know only grows and increases. Knowledge displays the characteristics of what we call a "common good". And as a result of sharing knowledge through teaching, everyone (students, myself, and the society of which we form part) ends up much better off, enjoying greatly enriched and improved lives.

 

No “dirty” money for “clean” art?


Isn't it rich? First the National Portrait Gallery and the Tate in London, then the Guggenheim and the Met in New York, all recently decided to turn down money from the Sackler family. Why? Because of the Sacklers' role in the US opioid epidemic. Their company, Purdue Pharma, by misrepresenting and aggressively marketing the drug OxyContin, is said to have contributed to an opioid-addiction crisis that has so far claimed around 200,000 lives. The pressure is now on a long list of art galleries and museums, not only in the UK and the US but also in France, Germany, and Israel, to rebuff scores of millions in Sackler donations. Ditto for the universities of Oxford, Princeton, Columbia, and Harvard, which, through their museums and research institutions, have long been beneficiaries of the Sacklers' largesse.

Already in 2007, Purdue Pharma and three top executives paid more than $630 million in fines to the US government for "misbranding" OxyContin, fraudulently promoting it, and playing down its risk of abuse. Since then, a handful of states have pursued their own cases, giving rise to settlements in the tens of millions (West Virginia, $10 million; Kentucky, $24 million), if not the hundreds of millions (Oklahoma, $270 million). During the past year, the advocacy group PAIN (Prescription Addiction Intervention Now), founded by photographer and opioid-abuse victim Nan Goldin, has turned up the heat on the management of Sackler-funded museums, regularly staging disruptive protests or "die-ins" on their premises. Thus far, PAIN has chalked up significant triumphs, with art institutions walking away from "gifts not in the public interest" and the Sackler family foundation itself withdrawing donation offers. It is difficult indeed to cast a company many perceive as nothing less than murderous as a "benefactor" or "philanthropist".

The causal relation between OxyContin and the opioid-abuse epidemic is clear. Not so the relation between the Sackler money, on the one hand, and the aims of the art and educational institutions, on the other.

Museum authorities state that they do not wish to be complicit in the Sackler family's attempts at "white-washing" or even "moral money-laundering". They refuse to abandon the moral high ground by accepting the Sacklers' tainted money. Is there sufficient basis for such thinking?

It's difficult to condemn, at face value, the mere act of making donations to cultural and educational institutions. On the contrary, many governments tend to encourage and reward such civic behavior with tax breaks. We generally want more of it, not less. Hence the need to look into the donors' intentions. Not being privy to the legal deeds or agreements, we do not know for sure what they were. However, it seems reasonable to think that the Sacklers' motivations were more reputational and social than economic or financial. Otherwise, why make donations in the first place? Surely there would have been more lucrative ways to invest the money. Perhaps members of the Sackler family were just genuinely interested in the arts and wanted the wider public to benefit from aesthetic experiences through their patronage. So to complete our analysis, we have to turn to the circumstances surrounding the donations, particularly the opioid-abuse epidemic and the lawsuits.

Whichever way you examine it, Richard Sackler's call to create a "blizzard of prescriptions to bury the competition" sounds horrific. But by giving money to museums, his family cannot be accused of concealing the sources of their estimated $13 billion fortune. Neither could one maintain that their wealth, accumulated over decades and across a wide array of business activities, was entirely illegitimate. We depend on the courts to determine that. And if they do, most likely it will not be for all of it, but only for specific earnings or profits. That is why judges take care in establishing fines and damages.

So despite the appalling optics, it doesn't seem right to accuse the Sacklers of trying to buy legitimacy or cover up misdeeds through charitable donations. They have long been engaged in giving to the arts, since even before the first charges were brought, and no attempt was made to mislead government authorities or the public about the true source of their wealth.

How, then, can we explain the smugness of the museums’ reactions?

Certainly, as art critic Philip Kennicott has suggested, they could have adopted a pragmatic stance and continued accepting the donations. They need the support and would put the money to good use; they had no say in how it was made and could not be held accountable for it. To argue that museums condone opioid-pushing simply by displaying the Sackler name in their halls seems ridiculous. Further, there is the issue of consistency. Are museums now supposed to investigate the origins of all the gifts, in cash or in kind, that they receive? Can they guarantee that none of their donors was ever involved in illegal dealings or made a fortune through human or environmental exploitation? Could they vouch for the legitimate conveyance of all the artworks they possess? Would all these inquiries even be feasible? If not, then why pick on the Sacklers?

The problem is that many, like the museum directors, seem to demand of donors a state of innocence long gone. They fail to realize that we are all flawed human beings, the Sacklers included. But this shouldn't prevent us from at least attempting to do good works. Otherwise, we couldn't even perform the reparation or restitution that justice demands. Sainthood should not be a prerequisite for supporting cultural and philanthropic organizations. Even imperfect people can and should do good deeds.

Theodore Roosevelt is supposed to have said, criticizing John D. Rockefeller's newly launched foundation: "No amount of charities in spending such fortunes can compensate in any way for the misconduct in acquiring them". But even if Roosevelt were right in his allegations, how else, aside from serving court-mandated sentences, could Rockefeller redeem himself if not through charitable giving?

Debunking Performance-Related Pay for MBA CEOs


Data from the Corporate Governance Research Initiative at Stanford Graduate School of Business track the steep, relentless climb of average annual CEO compensation at the top 100 US companies. In the 1980s it was $1.8M, largely in keeping with the trend of the previous 45 years. In the 1990s, however, it shot up to $4.1M, and at the turn of the millennium to $9.2M, scaling new heights at $13.7M in 2014. The sky is indeed the limit for chief-executive pay. Perhaps no one can claim greater responsibility for this phenomenon than Harvard Business School finance professor Michael Jensen. In line with Milton Friedman's financial theory of the firm (its purpose is "to maximize shareholder value") and his own agency theory (managers are "agents" of stockholders, who are "principals"), Jensen (with co-author Murphy) advocated tying CEO compensation to stock-price performance through stock options and other equity-based pay. Thus CEO incentives would align with those of stockholders, and agency problems such as shirking would be avoided. Moreover, corporate-management careers would attract "the best and the brightest", who no doubt prefer greater monetary rewards to lesser ones and would rather be paid for their performance than independently of it. Further, performance-related pay could stem the flight of top talent to investment banking and consulting, encouraging them to seek jobs in the "real economy" instead.

In a recent article for Institutional Investor, Dan Rasmussen and Haonan Li tested Jensen's almost intuitive claims relating MBA CEO pay to stock-price performance and found no supporting empirical evidence. Stock options did not discourage MBAs from elite schools from working in investment banking and consulting. Even more worrying, elite MBA diplomas were not correlated with market-beating stock-price performance for CEOs, nor was CEO performance, measured by stock price, consistent over a six-year period. In other words, the impact of CEOs on their companies' stock prices was indistinguishable from chance. If Rasmussen and Li are right, aren't CEOs being grossly overpaid on the basis of a mistaken attribution?

Let's have a look at their methods. Rasmussen and Li created a database of around 8,500 CEOs, with their tenure and education, together with information on company stock returns. First, they tested the predictive power of CEO characteristics on stock-price performance. They found that CEOs with MBAs did not outperform those without them; it didn't matter from which school they graduated. Neither did CEOs who came from banks and consultancies do better than the market average. "MBA programs simply do not produce CEOs who are better at running companies, if performance is measured by stock price return," they reported. CEOs with MBAs from top schools are, of course, well represented on corporate boards. But this only indicates that executive-search firms have a clear preference for recruiting them, despite the lack of evidence of superior performance. It may be just another case of confirmation bias: people see a clustering of MBA CEOs in successful companies although there is no causal link.
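
To make the shape of that first test concrete, here is a minimal sketch in Python of a cross-sectional regression of CEO-tenure excess returns on credential indicators. It is only an illustration under assumed data and column names (the `ceo_df` table and its columns are hypothetical), not Rasmussen and Li's actual code or dataset.

```python
# A minimal, hypothetical sketch of the kind of cross-sectional test
# described above. The DataFrame and its column names are illustrative
# assumptions, not Rasmussen and Li's actual code or data.
import pandas as pd
import statsmodels.api as sm

def test_ceo_characteristics(ceos: pd.DataFrame):
    """Regress tenure-average excess stock return on CEO credentials.

    Assumed columns:
      excess_return -- tenure-average stock return minus the market return
      has_mba       -- 1 if the CEO holds an MBA, else 0
      elite_school  -- 1 if the MBA is from a top-ranked program, else 0
      bank_consult  -- 1 if the CEO came from banking or consulting, else 0
    """
    X = sm.add_constant(ceos[["has_mba", "elite_school", "bank_consult"]])
    return sm.OLS(ceos["excess_return"], X).fit()

# If the article's finding holds, none of the estimated coefficients
# should differ significantly from zero:
# print(test_ceo_characteristics(ceo_df).summary())
```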

Next, Rasmussen and Li examined whether CEO performance persisted over a six-year period, either in the same company or across successive companies. Again, surprisingly, they discovered that it did not: outcomes were no different from what we could expect to occur by mere chance. Companies, then, seem to be rewarding (or punishing) CEOs for exogenous factors rather than for their own performance or trajectory, as measured by stock-price behavior during their tenure. This may be an example of the "hot-hand" fallacy, in which we project a pattern from past results onto the future without justification.
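
One way to make "no different from chance" concrete is a permutation test: shuffle the second-window returns to destroy any real persistence, and check whether the observed rank correlation between the two halves of the six-year window stands out from that luck-only baseline. Again, this is a hypothetical sketch under assumed inputs, not the authors' actual methodology.

```python
# Hypothetical sketch of the persistence check: is the rank correlation
# between each CEO's first-window and second-window excess returns any
# higher than shuffled (luck-only) data would produce? Inputs are assumed.
import numpy as np
from scipy.stats import spearmanr

def persistence_vs_chance(first_half, second_half,
                          n_shuffles=10_000, seed=0):
    """first_half, second_half: arrays of per-CEO excess returns for
    years 1-3 and years 4-6 of tenure. Returns (observed rho, p-value)."""
    rng = np.random.default_rng(seed)
    observed, _ = spearmanr(first_half, second_half)
    # Shuffle the second window to simulate "performance is pure luck".
    null = np.array([
        spearmanr(first_half, rng.permutation(second_half))[0]
        for _ in range(n_shuffles)
    ])
    p_value = float(np.mean(null >= observed))
    return observed, p_value

# A large p-value (observed persistence no better than the shuffled
# baseline) would match the claim that outcomes look like mere chance.
```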

Absence of evidence is not evidence of absence. And we cannot really prove or demonstrate that MBA-holding CEOs do not increase stock prices (all things being equal), because that is a null hypothesis. But if elite credentials really did drive superior performance, the lack of correlation between CEO credentials and superior stock returns would be puzzling indeed.

What the research does seem to imply, however, is that we cannot attribute a company's success, in terms of stock price, exclusively or even mainly to the CEO. Management has always been a team effort, and a just compensation policy has to take this principle into account. Also, it may not be wise to gauge company performance by a single metric such as stock price, because performance is far more complex than any one number; doing so only creates an enormous temptation for CEOs and other members of the management team to work for the wrong motives. And lastly, perhaps we should reconsider the usefulness of MBA programs to society at large, apart from boosting school and alumni incomes through networking. Originally, business schools were meant to help transform business into a profession, like law, medicine, or religion; now they are regarded as pricey leadership boot camps. Do they really provide the right education for the kind of corporate and civic leaders we need?