Ain’t no AI in Heaven – II


(Second of two parts)

(The first part provides definitions of AI and surveys its business applications. It also establishes links between AI and business ethics.)

Virtue Ethics and AI

Virtue ethics focuses on agents, not actions. It considers how agents achieve the moral good proper to their nature, flourishing, by cultivating the distinctive human excellences or virtues through free and purposeful actions. Human engagement with AI is just one among the myriad opportunities to develop the virtues.

How can we develop the virtues while engaging with AI? Some attempt this by centering on the intellectual virtues. Since AI deals mainly with data, information, and statistical correlations for decision making, this emphasis is highly understandable. For Grodzinsky, Big Data is essentially a quantification of human behavior to which AI is applied to perform predictive analyses through correlations. AI has shown exceptional ability in interpolation, predicting what happens next if a trend continues; not so in extrapolation, where no such trend has been identified. Correlations discovered by AI can point to possible causations, although only humans can take that leap, following AI leads and making use of complementary scientific methods.
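To make the contrast concrete, here is a minimal sketch in Python (numpy is assumed, and the data and saturation point are invented for illustration): a simple linear model predicts well within the range of the observed trend, but badly once the trend breaks outside it.

```python
# A minimal sketch of the interpolation/extrapolation contrast (numpy is
# assumed; the data and the saturation point are invented for illustration).
import numpy as np

def true_process(x):
    """The 'world': a trend that holds only up to x = 5, then flattens out."""
    return np.minimum(x, 5.0)

x_train = np.array([0, 1, 2, 3, 4, 5], dtype=float)
y_train = true_process(x_train)            # observed data follow the trend

slope, intercept = np.polyfit(x_train, y_train, 1)  # fit the observed trend

def predict(x_new: float) -> float:
    return slope * x_new + intercept

print(predict(2.5), true_process(2.5))     # interpolation: 2.5 vs true 2.5
print(predict(20.0), true_process(20.0))   # extrapolation: 20.0 vs true 5.0
```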

Ironically, AI was invented in part to remedy human weaknesses, both intellectual and moral, in deliberation, decision making, and action. For instance, in selecting candidates for employment, we would like to be free from biases regarding sex, age, race, and so forth, choosing the best on the basis of predetermined criteria. But AI systems depend on training sets of (historical) data and on algorithms, both provided and generated by biased human beings. As a result, AI becomes quite useless in eliminating biases, serving instead to extend and perhaps even amplify them.
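A toy sketch, assuming scikit-learn and wholly invented hiring records, shows how mechanically this inheritance of bias can happen: a model trained on past decisions that favored one group simply learns to favor that group.

```python
# A toy illustration of bias inheritance (scikit-learn is assumed; the
# records are invented). Past recruiters hired only group 0, so the model
# learns group membership, not qualification.
from sklearn.linear_model import LogisticRegression

# Historical records: [test_score, group] -> hired (1) or rejected (0).
X = [[60, 0], [80, 0], [90, 0], [60, 1], [80, 1], [90, 1]]
y = [1, 1, 1, 0, 0, 0]   # identical scores across groups; only the group differs

model = LogisticRegression().fit(X, y)

# Two equally qualified candidates who differ only in group membership:
print(model.predict([[85, 0], [85, 1]]))   # [1 0]: the historical bias persists
```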

One approach is to ignore ethics altogether: use all the data, accept their messiness, and simply focus on finding correlations. But this position is itself unethical, for humans are inescapably ethical beings who reflect values and moral worth in their decisions. Another is to acknowledge the ethical import of human-AI engagement.

Grodzinsky chooses the latter and strives to discover the intellectual virtues Big Data scientists need to perform well. The detection of statistical correlations cannot be the sole epistemic end of Big Data research. There should likewise be room for other types of knowledge and for greater understanding of phenomena through causal relations. This implies recognizing that data are never free from hypotheses or background theories, including human biases. Data are responses to questions containing complex beliefs or intuitions that are incomplete or require verification. Researchers drawing attention to certain data already show external interests and ulterior motivations. Moreover, data always need a theoretical framework to make sense, be interpreted, or understood; they are never isolated or self-contained. Further, communicating data meaningfully requires a narrative thread. And in order to draw useful inferences, we need not only domain-specific knowledge, but also a worldview and a commitment to values. Values may not serve as evidence in research, but they complement it.

What intellectual virtues or habits of thought do data scientists need? Grodzinsky offers a list: creativity, curiosity, critical thinking, collaboration, communication, humility, prudence, and intellectual courage. From this list, she highlights three. First, open-mindedness in taking generated patterns and predictions and putting them into context, even when they seem counterintuitive. Second, rigor in validating evidence, producing reliability and trust. And third, honesty in documenting and communicating findings, to ensure transparency, distribute responsibility adequately, and safeguard openness. Only then will data scientists be able to evaluate reliably both the inputs to models and the models themselves (algorithms, variables, data sets).

Apart from the intellectual virtues, there are moral ones in play as well. Concern with practical wisdom has arisen over the challenges that automation (through machine learning and robotics) poses to human work (Vallor). Not only can human work be facilitated, augmented, or enhanced; it can also be substituted or replaced by AI systems. This results in a loss of work and wages (and even the fear that there may not be enough work left that AI cannot do), as humans fail to compete with AI productivity. In an extreme version, we enter the “end of work” dystopia. So just how much work, of which kind, and at what pace do we cede to AI?

But the need for virtues will not disappear even in a world where AI is omnipresent. Studies have underscored the emotional roots of experience, something to which AI is impervious. Hence AI’s lack of feelings may be more of a handicap than an advantage. Moreover, AI was meant to overcome human weaknesses not by remedying, but by sidestepping them. For example, machine translation saves humans the trouble of learning a language while allowing them to communicate. Yet virtues, unlike AI, actually remedy human weaknesses at their root, as patience and perseverance improve language learning, for instance.

Practical wisdom is more than technical expertise. It is choosing the right end, from which one decides on the means. It includes an all-encompassing evaluation of the acting self with regard to the end and means chosen. Focusing on AI, Vallor unpacks the distinctiveness of practical wisdom. First, practical wisdom allows for decision making on complex goals over the span of a lifetime. Second, practical wisdom permits the identification of an ultimate goal or final end, providing reasons not only to oneself but also to others, intersubjectively. Third, practical wisdom involves taking ownership of decisions and self-regulating toward the end goal, in reference to which the good is measured.

Thus practical wisdom cannot be reduced to the productive expertise of AI. AI systems are mere extensions of ourselves, depending on us for existence and maintenance, although we may not fully understand their decision making processes. AI systems cannot take on responsibility. However, practical wisdom is not the only moral virtue humans exercise in engagement with AI. Justice, courage, and moderation are also relevant. 

AI scientists and users need justice to avoid perpetuating historic inequalities through employee-selection and criminal-sentencing software, for instance. Justice requires special attention to the vulnerable, such as the poor, children, senior citizens, the disabled, and the marginalized, giving them access to AI so they can share in its benefits. For example, the visually or hearing impaired could use AI to navigate cities or obtain information through specially designed interfaces.

Courage is equally necessary for optimal AI use. Instead of putting the brakes on innovation, we should encourage and promote AI research. It would be great if self-driving road vehicles, combinations of AI and robotics, were to reach the safety standards of air travel, to cite an example. Similarly, the extensive use of AI in medical diagnostics could boost early detection of illnesses and improve prognoses. And AI embedded in robots could likewise take over hazardous jobs in mining or bomb defusing, for instance.

Moderation, too, is required for AI to fulfil its potential. Marketers of addictive AI products, such as games, should take proper care in touting them to vulnerable populations. And governments must also take care in data gathering, so as not to intrude into the privacy of citizens and become surveillance states. That is why we need guidelines even for contact tracing to fight the spread of diseases, for example. Otherwise, both marketers and governments would fail to respect the dignity of customers and citizens.

AI at the service of human flourishing

AI systems are instruments or tools invented for the ultimate purpose of contributing to flourishing, the good life for human beings in society. Rules or behavioral norms are necessary to ensure proper human-AI engagement, particularly in developing or designing, deploying, and using such systems. AI can augment, extend, and enhance human agency in perception, reasoning or decision making, and actuation. Here, once again, the role of the virtues is crucial. The intellectual and moral virtues ensure human-AI engagement is not only efficient, but also perfective of human beings and properly ordered toward their flourishing.            

 

Ain’t no AI in Heaven – I


(First of two parts)

Besides the Covid-19 coronavirus, the threat of a pandemic, and its economic consequences, perhaps nothing grabs global attention more nowadays than AI. In the popular press at the moment, AI stands in turns for the greatest threat and the greatest opportunity the world faces. Everyone seems to be aware that AI use is fraught with ethical issues. Yet only a few take the trouble to figure out how these problems could be systematically addressed from a sober, level-headed perspective, in line with long-established ethical principles. For indeed, although many AI affordances are new, technology itself is not, and humankind has always co-existed and co-evolved with technology, beginning with speech and writing. So rather than inventing a newfangled AI ethics, it may make more sense to examine the resources already in store and see how they can help us better engage with AI.

Definitions of AI and its Business Applications

There is no standard definition of AI, and the majority of definitions are inconsistent, referring to particular intelligent systems applied to specific domains. AI comprises distinct, although related, technologies such as machine learning, natural language processing, chatbots, robotics, and so forth. Perhaps the simplest definition comes from Google CEO Sundar Pichai, who speaks of AI as “computer programming that learns and adapts”. “Learning” and “adapting” are the types of activities humans engage in, leading us to consider AI a machine that mimics human intelligence: “strong AI” seeks to think, feel, and pursue purposes, becoming a “mind” and not just a model of one, while “weak AI” is meant to be a tool at the service of human designs.

Pressed by the need to legislate, the UK government has come up with its own definition: “Technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation”. However, for at least two of these functions, human intelligence is not strictly required, since even dogs are quite capable of visual perception and speech recognition.

Perhaps the best definition of AI is a combination of those offered by the Expert Groups on Collaborations Between People and AI Systems and of the European Commission: “any computational process or product that appears to demonstrate intelligence through non-biological/natural processes”, “analyzing [its] environment –with some degree of autonomy– to achieve specific goals”. Composed of digital data, algorithms, and computer hardware, AI is expected not only to perform rational functions, but also to change its environment in a preset direction.

Being non-biological or non-natural, the “artificial” in AI is clear. “Intelligence” denotes “rationality”, the abstract quality of doing things (or making them happen) for an end or purpose, as opposed to chance. This entails an explanation, a propositional response to the question “why?”. AI displays intelligence in three ways. First, through the use of sensors (cameras, microphones, or keyboards) in machine perception, AI is able to draw and collect data from the environment (physical features such as light, temperature, pressure, distance) relative to its goal (for example, an image of the floor, for a cleaning robot like Roomba). Second, through machine reasoning and “decision making”, AI interprets relevant data (determining whether the floor is clean or not) and decides on a course of action (if clean, remain still; if not, proceed). AI is never fully autonomous, and its range of options is limited. Closely allied to AI is “machine learning” (ML), the mathematical modeling approach that uncovers statistical correlations and patterns within data sets, producing novel outputs. Third, AI shows intelligence in actuation, carrying out responses or environmental modifications virtually (with chatbots) or physically (with cleaning robots). Learning rational AI is able to adapt algorithms or decision-making rules depending on the success of previous interventions.
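The three moments can be pictured in a few lines of code. What follows is a minimal sketch of the perception-decision-actuation cycle for a hypothetical cleaning robot; every name, threshold, and sensor reading is invented for illustration.

```python
# A minimal sketch of the perception / decision / actuation cycle described
# above, for a hypothetical cleaning robot (all names and thresholds invented).
import random

def sense_floor() -> float:
    """Machine perception: read a dirt level from a (simulated) sensor."""
    return random.random()   # stand-in for a camera or dust-sensor reading

def decide(dirt_level: float, threshold: float = 0.3) -> str:
    """Machine 'reasoning': interpret the data relative to the goal."""
    return "clean" if dirt_level > threshold else "remain still"

def actuate(action: str) -> None:
    """Actuation: carry out the chosen response in the environment."""
    print(f"robot action: {action}")

for _ in range(3):           # three sense-decide-act cycles
    actuate(decide(sense_floor()))
```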

AI systems may be purely software-based or embedded in hardware devices. Examples of the former are voice assistants, image analysis software, search engines, and speech and face recognition systems; and of the latter, advanced robots, autonomous vehicles, drones, and applications in the Internet of Things.  

Numerous AI processes or products are already widely used: computer vision, which identifies objects; natural language processing, which interprets texts; and reinforcement learning, which controls robots or game agents through feedback loops. Thus AI can identify faces, follow voice commands or read handwriting, and play (even win) board games such as chess or Go.

Business applications of AI are vast. AI can be used in decision support, taking care of repetitive tasks in finance, marketing, or project management. It can also help in predictive maintenance, anticipating machine failures based on equipment data histories. Another area is customer support and relationship management, answering queries and analyzing opinions. Likewise, AI is useful in process acceleration, boosting efficiency in e-mail management or database information retrieval. The widespread use of AI in the Data Economy heralds the “Fourth Industrial Revolution”, after the adoption of steam engines, electricity, and electronics.

Links between AI and Business Ethics

Because AI is a machine, the connection with ethics isn’t obvious. AI designates processes or products imitating human intelligence, while ethics is concerned with what’s right and wrong. Although AI is human-made, it isn’t human, and it only imitates, not actually performs, intelligent human activity. Hence, to examine AI ethics per se would be akin to evaluating the color, cut, clarity, and carat of a fake diamond, misattributing features of human action to something which is not human and incapable of human action. Like all tools or instruments, AI can only be appraised technically, by whether it produces the desired output (effectiveness) and whether it does so optimally (efficiency or economy), but not ethically. It is not enough to be a “functional equivalent” of human action to be subject to ethical judgment (think of a person holding the door and a doorstop); agency itself has to be human, that is, proceeding freely and purposefully from an individual of the human species. AI fails to exhibit the ethically salient causality associated with human beings. Not being alive, a fortiori, AI cannot be rational, for ethical reasoning depends on a kind of life for scaffolding. So no matter how good, effective, or efficient, there ain’t no AI in heaven.

There can only be ethical judgment, moral praise or blame, for the way humans engage with AI. Humans make use of AI to augment or enhance their own activities; AI extends, but cannot supplant, human agency. For instance, we could raise the volume of our voice with a bullhorn, but it is still we who speak (truths or lies), not the bullhorn. With AI, we can program a machine to emit sounds similar to speech; yet this can only happen thanks to our inputs, even when the resulting outputs are unforeseen or novel. AI cannot produce original speech because it depends on previous data and on algorithms which identify statistical correlations and patterns. Although AI can be “taught” to scramble letters, it cannot form new words, because it is a “nobody”, and a “nobody” cannot create words, expressing or understanding meaning. That is why ethical judgment always bears upon human agents, never on AI.

Humans develop, deploy, and use AI, oftentimes with a business intent. While acknowledging its socially transformative and revolutionary potential, we shouldn’t forget that “AI is not an end in itself, but rather a promising means to increase human flourishing, thereby enhancing individual and societal well-being and the common good”. Engagement with AI in business is ethical if it contributes to the common good of flourishing. Proper AI use could prove helpful in reaching this goal.

Businesses generally decide to use AI on utilitarian grounds, after an analysis of costs and benefits. Over time, smart robots cost less than the salaries of the workers they replace. However, most ethical approaches to AI use are rule-governed, concerned with safeguarding fundamental human rights. For instance, the European Commission’s “Ethics Guidelines for Trustworthy AI” mandate that AI use respect human autonomy, prevent harm, uphold fairness, and remain explicable. Further, AI design and deployment should observe seven key requirements: support human agency and defer to oversight; be technically robust, preventing or minimizing harms; protect data privacy; be transparent in data management and decision making; allow for diversity and inclusion, eschewing unfair discrimination; preserve societal and environmental wellbeing; and exhibit accountability.
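The crudest version of that utilitarian calculus fits in a few lines. The sketch below uses entirely invented figures, but it captures the break-even reasoning a firm might run.

```python
# A back-of-the-envelope sketch of the cost-benefit calculus mentioned above
# (all figures are invented for illustration).
robot_upfront_cost = 120_000.0    # hypothetical purchase and integration
robot_yearly_upkeep = 10_000.0    # hypothetical maintenance
worker_yearly_salary = 45_000.0   # hypothetical fully loaded wage

def breakeven_years() -> float:
    """Years until the robot becomes cheaper than the worker it replaces."""
    return robot_upfront_cost / (worker_yearly_salary - robot_yearly_upkeep)

print(f"break-even after ~{breakeven_years():.1f} years")   # ~3.4 years
```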

There is nothing objectionable in these rules and principles. Harm-avoidance may seem obvious, but it is no less necessary. Yet problems may arise in their application. For instance, demands for privacy and security can come into conflict with transparency and explicability. Privacy and security require that sensitive information (preferences, sex, age, religious or political views) be accessible only to authorized agents and not be used unlawfully. But at the same time, AI transparency and explicability demand precisely that data gathering, labelling, and processing be documented, so as to allow traceability of errors and biases. Ideally, users ought to provide as much and as accurate information as possible for optimal results, but that could compromise privacy. For example, a correlation may be established between vegans (known from meal preferences) and a lower propensity to miss flights due to tardiness. Is this an acceptable bias? Would it be fair to charge omnivores more? How about this particular omnivore who, historically, has always been punctual? Further, explainability depends on the technological competence of listeners. Hence it becomes likely that the most vulnerable (children, seniors, and the disabled) suffer even greater discrimination and exclusion from the employment of AI. How then to proceed?
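To see how the dilemma arises mechanically, consider a toy pricing rule built on such a correlation (all numbers invented): the group statistic gets applied to every individual, punctual or not.

```python
# A toy version of the vegan/omnivore pricing dilemma (all numbers invented).
no_show_rate = {"vegan": 0.02, "omnivore": 0.05}   # hypothetical group rates
base_fare = 200.0

def naive_fare(meal_preference: str) -> float:
    """Price each group by its historical no-show risk."""
    return base_fare * (1 + no_show_rate[meal_preference])

print(naive_fare("vegan"))      # 204.0
print(naive_fare("omnivore"))   # 210.0

# The group statistic says nothing about this particular omnivore, who has
# never missed a flight: the individual gets priced by the group average.
```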

On closer look, the rule-based approach offers little practical guidance to navigate the conflicts and tradeoffs in human-AI engagement.

That’s why we may have to explore other, less popular options such as virtue ethics. The objective isn’t so much to replace the rule-based method as to extend it, focusing more on what AI engagement does to people themselves than on business outcomes. We shall begin the second part with an explanation of the fundamental aspects in which virtue ethics and the rule-based approach differ.

Listen carefully


“You’re Not Listening: What You’re Missing and Why It Matters” by Kate Murphy is a book I read about the other day, and I can’t wait till I get my hands on it.

It’s been a struggle to manage an almost digital-technology-free class (no laptops, tablets, or phones, although PowerPoint presentations are fine, if only to avoid the chalk dust), and one of the reasons I give my students for taking notes by hand is that we can type faster than we think. Writing, then, gives us more time to process information and actively engage with ideas, making learning more effective. What Murphy tells us now is that we can also think faster than people talk, so it makes sense again to slow down and listen, if we want to understand what we’re told.

Careful, empathic listening has long been undervalued or taken for granted. People tend to think that as long as you’re not hard of hearing, you’re good. However, that only refers to the enabling, physiological conditions. Good listening requires, besides, paying attention: becoming aware of what people say (the text), and how and why they say it (the context). But for this, first, you have (to learn) to hold your peace; that is, to keep quiet and listen.

Listening demands sustained attention or concentration, which makes it very difficult nowadays. Impatient and pressed for time, we are prone to interrupting; easily bored and unable to focus, we quickly glance at our phones; convinced of our own brilliance and self-worth, we eagerly speak our piece and dish out our two cents. We even avoid calls, finding them intrusive, and prefer text messages instead, which we can answer at our convenience. Neither do readily available earbuds, headphones, and VR goggles help our state of permanent distraction. These gadgets not only prevent us from listening, but they also stop meaningful dialogue altogether. They isolate us from our surroundings and envelop us in a comfortable but unreal world, populated exclusively by elements of our own choosing. That is a shame, and worse, its consequences can be tragic. For as Murphy reminds us, “Wars have been fought, fortunes lost, and friendships wrecked for lack of listening. It is only by listening that we engage, understand, empathize, cooperate and develop as human beings.”

Although I buy into listening being fundamental to all sorts of relationships, personal, professional, and political, I disagree that “listening can be more valuable than speaking.” For mutual understanding, which is what conversation is all about, listening and speaking can only be as valuable as each other. So there is no problem with investing time, money, and effort in honing rhetorical or public-speaking skills, as long as commensurate resources are dedicated to careful listening as well.

Murphy shines a light on the instrumental benefits of good listening. For one, our interlocutors would be more inclined to return the favor if they noticed they had our full attention when they spoke. Also, it would certainly be easier to strike a sympathetic chord in others and get them to do as we propose, if we learned more about them by respectfully listening first.

However, good listening could also be a reward unto itself, for it makes us less anxious, more aware, and more in the moment. Cancelling out ambient noise, we become better attuned to ourselves and our surroundings, and we focus on the task at hand. All of this helps create a greater sense of calm and serenity, which we need to live life to the full.

Are you with me?   

Why WeWork Doesn’t


Why not “didn’t”? Because for that to be true, lessons would have to have been learned. Unfortunately, they have not been.

The short answer is some egregious character flaws in the two main protagonists: Adam Neumann, the WeWork co-founder and former CEO, and his main enabler and investor, Masayoshi Son, the SoftBank boss. In particular, rapacious greed in the former and willful blindness in the latter, ironic for the head of a multi-billion-dollar tech investment company called the “Vision Fund”.

Now for the details.

In January 2019, WeWork was valued at a whopping $47 billion and, according to SEC filings, was looking forward to opportunities worth $1.6 trillion spread among a market of 300 million clients or “members”. But a third-quarter IPO was botched, and by November its market value had crashed to only $7 billion, despite an infusion of fresh funds from SoftBank and a restructuring of the executive team, with Neumann being shown the door. Some 2,400 workers, around 20% of the staff, were laid off, with 1,000 more jobs dangling by a thread as the company frantically tried to concentrate on its core businesses.

The idea behind WeWork was hatched when Neumann was a student at Baruch College, New York, in the early 2000s. Having grown up in a kibbutz near Gaza, he wanted to bring that same communal spirit to work spaces, carefully crafting them to millennial tastes without renouncing his unabashedly capitalist ambitions. WeWork was to negotiate long-term leases on prime office space in the world’s major cities, redesigning and refurbishing it before renting it out to individuals and companies, souped up with all sorts of catering, concierge, entertainment, and wellness services, among others. Neumann met Son at a state event in India organized by Prime Minister Modi in 2016. A few months later, he secured a $4.5 billion initial investment from SoftBank, eventually increased to more than $10 billion. Son, for his part, was convinced that he had found the next Alibaba, the Chinese internet startup in which he invested $20 million in 2000. Alibaba now has a market cap of $500 billion, with SoftBank owning 24% of its shares.

Not only did WeWork expand at breakneck speed to more than a hundred cities globally, becoming the biggest private tenant in Manhattan; it also branched out into new businesses with the same communal, capitalistic, millennial vibe: WeLive for apartments, WeGrow for schools (charging $42,000 in yearly tuition), WeMRKT for retail, and RiseByWe for luxury gyms, for instance. For Neumann, WeWork wasn’t just about real estate, but about a state of consciousness; he was out to transform not only the way we work or live, but the very world in which we live.

Until then, its corporate narrative was that of the iconic stock-market unicorn. But as we all know, unicorns are just the stuff of fairy tales.

Neumann’s wife, Rebekah, is supposed to have told him during their first date: “You, my friend, are full of crap. Every single word that comes out of your mouth is fake.” She was clearly on to something. But love is blind and, unconscionably, she went on to marry him anyway.

Neumann’s governance style was rife with conflicts of interest. He would invest in buildings from which WeWork would take long-term leases, basically engaging in self-dealing. He used WeWork almost like a personal ATM, taking out loans to finance luxuries including a $60 million private Gulfstream jet and $80 million for five exclusive homes in New York and the Bay Area, among others. He once charged WeWork $6 million for the “We” trademark to be used by its new parent, the WeCompany. He also owned special-class shares, with voting rights 20 times those of normal ones, to ensure corporate control. He even named his wife and two other close friends to a special committee that would choose his successor in case of death. His penchant for drugs and alcohol on company premises, time, and expense was likewise widely known. On top of that was his hubristic, mercurial, tech-nerdy fratboy demeanor.

The saddest part is that the saga continues. After the failed IPO, SoftBank injected another $1 billion into WeWork to stabilize the company, raising its exposure in investments and loans to around $14 billion. SoftBank’s Son may have removed Neumann from his top perch, but at the price of extending him a $500 million loan and guaranteeing another $185 million over the next four years for “consultancy services”. Meanwhile, the newly laid-off scramble for their severance pay.

Masayoshi Son has indeed found the new Alibaba, with Adam Neumann making up for what even 40 thieves would struggle to accomplish. Together, they give capitalism a deservedly bad name.   

Trustworthy AI? What’s lost in translation


When telephones were first installed in homes, people worried whether they were properly dressed to receive calls, apparently. Since then, we’ve come a long way in understanding this technology, judging from the ubiquity of phones in hotel bathrooms and the endless chatter one overhears in public toilet stalls.

Similar adjustments are taking place with regard to AI. To this end, the EU Commission set up an expert group to draft the “Ethics Guidelines for Trustworthy AI”, published in April 2019. By “trustworthy AI” the document understands a system which, in its entire “life cycle”, proves “lawful”, “ethical”, and “robust” (although the focus is only on the second and third elements). In this context, “robustness” has to do with safety, security, resilience, accuracy, and reliability. If I had a car instead of AI, I suppose I’d call it “robust” if it brought me safely, efficiently, and in relative comfort from one place to another. In other words, if it did everything you’d expect a car to do, if it were “dependable”. But would I call it “ethical” or “trustworthy”? I can only hope that my Uber or Lyft driver is ethical and trustworthy; but their car? How and why should AI be different from a car?

The Guidelines go on to explain “ethical” AI in terms of safeguarding fundamental rights by respecting human autonomy, preventing harm, procuring fairness, and guaranteeing explicable functioning. None of this would make sense if it did not refer to the human agents who design, deploy, and use AI, for by itself, AI can do nothing. It would be no different from a car parked in a garage from the standpoint of mobility. It cannot respect, prevent, procure, or guarantee anything of its own accord. Only human beings would have reason to pay attention to vulnerable groups, such as children, the sick, or the poor, and recognize their intrinsic worth beyond the semblance of weakness. Only they could value democracy or participation in political issues, upholding the rule of law against arbitrariness. AI could not care less about these or anything else, for that matter.

Further, the Guidelines establish seven requirements for AI to be deemed “trustworthy”. Again we have to infer that designers, deployers, and users of AI ought to support human agency and decision making (for instance, by enabling individuals to store and keep track of their personal data) and defer always to human oversight and correction (by maintaining humans “in” or “on the loop” and “in command”, such as the combat pilots in northern Virginia who fly drones in Afghanistan). AI systems should be robust, programmed to prevent harms and minimize risks to human integrity and wellbeing, as in malicious hacking attacks, for example. Also, AI must be devised to protect data privacy and integrity, ensuring that sensitive information is neither accessed nor used by the wrong people or for the wrong motives. That’s what sets a “smart city” apart from a “surveillance city” or a panopticon. The Guidelines likewise require that AI design be transparent in its data management and decision making. This means AI service providers should communicate in simple, understandable terms how data are gathered, kept, and used, such that the people involved can give their informed consent, trace errors, and introduce rectifications in the system. Moreover, AI must be used in a manner that is inclusive or non-discriminatory (not penalizing people of certain ethnicities in employee selection) and fair (by eschewing price collusion among sellers). AI ought to be employed so as to advance societal and environmental wellbeing, through responsible energy consumption and resource usage at all stages of its life cycle. And lastly, AI should enhance accountability, enabling audits, minimizing harms and reporting them, flagging trade-offs, and permitting redress when problems arise, as in the case of collateral damage in drone-strikes, for instance.

Yet once more, all these demands are made on the people who design, fabricate, and operate AI. For AI is an object, not a subject, much less an ethical agent. We should not allow ourselves to be misled by the anthropomorphic metaphors and allegories we use in speaking of AI. There is no homunculus or ghost inside the machine.

In the same way that a telephone is a tool to speak to someone out of hearing distance, or a car a means of land transport, AI is an instrument we have invented for the same ultimate purpose of achieving a better human life. Like all forms of technology, its goodness lies in its usefulness for the ends we humans freely ordain. Thus “AI ethics” is nothing else but a new kind of applied ethics at the service of individual and social flourishing. It would be utterly foolish to expect an AI ethics code to be written, or worse, to spontaneously emerge, that would supplant human reasoning about what ought to be done and what avoided.

That’s why perhaps the most valuable part of the Guidelines is the final “examination of conscience” where those who design, deploy, and use AI, individually or in organizations, can ask themselves about their motives and behavior regarding the above-mentioned principles: “Are human users made aware that they are interacting with a non-human agent?” “Does the AI system enhance or augment human capabilities?” “Did you put in place mechanisms to ensure human control or oversight?” and so forth.

Because in the end, it’s the humans operating AI that need to be trustworthy, more than the system itself. Unfortunately, that’s often lost in translation.