Why WeWork Doesn’t Work


Why not “didn’t”? Because for that to be true, lessons would have had to be learned. Unfortunately, they have not.

The short answer lies in some egregious character flaws in the two main protagonists: Adam Neumann, the WeWork co-founder and former CEO, and his chief enabler and investor, Masayoshi Son, the SoftBank boss. In particular, rapacious greed in the former and willful blindness in the latter, an irony for the head of a multi-billion-dollar tech investment vehicle called the “Vision Fund”.

Now for the details.

In January 2019, WeWork was valued at a whopping $47 billion and, according to SEC filings, was eyeing opportunities worth $1.6 trillion in a market of 300 million potential clients or “members”. But a third-quarter IPO was botched, and by November its valuation had crashed to barely $7 billion, despite an infusion of fresh funds from SoftBank and a restructuring of the executive team in which Neumann was shown the door. Some 2,400 workers, around 20% of the staff, were laid off, with 1,000 more jobs dangling by a thread as the company frantically tried to refocus on its core business.

The idea behind WeWork was hatched when Neumann was a student at Baruch College, New York, in the early 2000s. Having grown up in a kibbutz near Gaza, he wanted to bring that same communal spirit to work spaces, carefully crafting them to millennial tastes without renouncing his unabashedly capitalist ambitions. WeWork would negotiate long-term leases on prime office space in the world’s major cities, redesigning and refurbishing the premises before renting them out to individuals and companies, souped up with all sorts of catering, concierge, entertainment, and wellness services, among others. Neumann met Son at a state event in India organized by Prime Minister Modi in 2016. A few months later, he secured a $4.5 billion initial investment from SoftBank, eventually increased to more than $10 billion. Son, for his part, was convinced that he had found the next Alibaba, the Chinese internet startup in which he had invested $20 million in 2000. Alibaba now has a market cap of $500 billion, with SoftBank owning 24% of its shares.

Not only did WeWork expand at breakneck speed to more than a hundred cities globally, becoming the biggest private tenant in Manhattan; it also branched out into new businesses with the same communal, capitalistic, millennial vibe: WeLive for apartments, WeGrow for schools (charging $42,000 in yearly tuition), WeMRKT for retail, and RiseByWe for luxury gyms, for instance. For Neumann, WeWork wasn’t just about real estate but about a state of consciousness; he was out to transform not only the way we work or live, but the very world in which we live.

Until then, its corporate narrative was that of the iconic stock-market unicorn. But as we all know, unicorns are just the stuff of fairy tales.

Neumann’s wife, Rebekah, is supposed to have told him during their first date: “You, my friend, are full of crap. Every single word that comes out of your mouth is fake.” She was clearly on to something. But love is blind and, unconscionably, she went on to marry him anyway.

Neumann’s governance style was rife with conflicts of interest. He would invest in buildings from which WeWork would then take long-term leases, basically engaging in self-dealing. He used WeWork almost like a personal ATM, taking out loans to finance luxuries including a $60 million private Gulfstream jet and $80 million for five exclusive homes in New York and the Bay Area, among others. He once charged WeWork $6 million for the “We” trademark to be used by its new parent, The We Company. He also owned special-class shares with voting rights 20 times those of normal ones, ensuring corporate control. He even named his wife and two close friends to a special committee that would choose his successor in case of his death. His penchant for drugs and alcohol on company premises, on company time, and at company expense was likewise widely known. On top of that was his hubristic, mercurial, tech-nerdy frat-boy demeanor.

The saddest part is that the saga continues. After the failed IPO, SoftBank injected another $1 billion into WeWork to stabilize the company, raising its exposure in investments and loans to around $14 billion. SoftBank’s Son may have removed Neumann from his top perch, but at the price of extending a $500 million loan and guaranteeing another $185 million over the next four years for “consultancy services”. Meanwhile, the newly laid-off scramble for their severance pay.

Masayoshi Son has indeed found the new Alibaba, with Adam Neumann pulling off what even forty thieves would struggle to accomplish. Together, they give capitalism a deservedly bad name.

Trustworthy AI? What’s lost in translation


When telephones were first installed in homes, people apparently worried whether they were properly dressed to receive calls. Since then, we have come a long way in our understanding of this technology, judging from the ubiquity of phones in hotel bathrooms and the endless chatter one overhears in public toilet stalls.

Similar adjustments are taking place with regard to AI. To this end, the EU Commission set up an expert group to draft the “Ethics Guidelines for Trustworthy AI”, published in April 2019. By “trustworthy AI” the document means a system that, throughout its entire “life cycle”, proves “lawful”, “ethical”, and “robust” (although the focus falls only on the second and third elements). In this context, “robustness” has to do with safety, security, resilience, accuracy, and reliability. If I had a car instead of AI, I suppose I’d call it “robust” if it brought me safely, efficiently, and in relative comfort from one place to another; in other words, if it did everything you’d expect a car to do, if it were “dependable”. But would I call it “ethical” or “trustworthy”? I can only hope that my Uber or Lyft driver is ethical and trustworthy; but their car? How and why should AI be any different from a car?

The Guidelines go on to explain “ethical” AI in terms of safeguarding fundamental rights by respecting human autonomy, preventing harm, procuring fairness, and guaranteeing explicable functioning. None of this would make sense if it did not refer to the human agents who design, deploy, and use AI, for by itself, AI can do nothing. From the standpoint of mobility, it would be no different from a car parked in a garage. It cannot respect, prevent, procure, or guarantee anything of its own accord. Only human beings would have reason to pay attention to vulnerable groups, such as children, the sick, or the poor, and recognize their intrinsic worth beyond the semblance of weakness. Only they could value democracy and participation in political issues, or uphold the rule of law against arbitrariness. AI could not care less about these or anything else, for that matter.

Further, the Guidelines establish seven requirements for AI to be deemed “trustworthy”. Again, we have to infer that designers, deployers, and users of AI ought to support human agency and decision making (for instance, by enabling individuals to store and keep track of their personal data) and defer always to human oversight and correction (by keeping humans “in” or “on the loop” and “in command”, like the combat pilots in northern Virginia who fly drones over Afghanistan). AI systems should be robust, programmed to prevent harm and minimize risks to human integrity and wellbeing, as in malicious hacking attacks, for example. Also, AI must be devised to protect data privacy and integrity, ensuring that sensitive information is neither accessed nor used by the wrong people or for the wrong motives. That is what sets a “smart city” apart from a “surveillance city” or a panopticon. The Guidelines likewise require that AI design be transparent in its data management and decision making. This means AI service providers should communicate in simple, understandable terms how data are gathered, kept, and used, such that the people involved can give their informed consent, trace errors, and introduce rectifications into the system. Moreover, AI must be used in a manner that is inclusive or non-discriminatory (not penalizing people of certain ethnicities in employee selection) and fair (eschewing price collusion among sellers). AI ought to be employed so as to advance societal and environmental wellbeing, through responsible energy consumption and resource usage at all stages of its life cycle. And lastly, AI should enhance accountability, enabling audits, minimizing and reporting harms, flagging trade-offs, and permitting redress when problems arise, as in the case of collateral damage from drone strikes, for instance.

Yet once more, all these demands fall on the people who design, fabricate, and operate AI. For AI is an object, not a subject, much less an ethical agent. We should not allow ourselves to be misled by the anthropomorphic metaphors and allegories we use in speaking of AI. There is no homunculus, no ghost inside the machine.

In the same way that a telephone is a tool for speaking to someone out of hearing distance, and a car a means of land transport, AI is an instrument we have invented for the same ultimate purpose: achieving a better human life. Like all forms of technology, its goodness lies in its usefulness for the ends we humans freely ordain. Thus “AI ethics” is nothing else but a new kind of applied ethics at the service of individual and social flourishing. It would be utterly foolish to expect an AI ethics code to be written, or worse, to emerge spontaneously, that would supplant human reasoning about what ought to be done and what avoided.

That’s why perhaps the most valuable part of the Guidelines is the final “examination of conscience” where those who design, deploy, and use AI, individually or in organizations, can ask themselves about their motives and behavior regarding the above-mentioned principles: “Are human users made aware that they are interacting with a non-human agent?” “Does the AI system enhance or augment human capabilities?” “Did you put in place mechanisms to ensure human control or oversight?” and so forth.

Because in the end, it’s the humans operating AI that need to be trustworthy, more than the system itself. Unfortunately, that’s often lost in translation.             

Who is the “I” behind AI? 

AI refers to a set of technologies, based on advanced computing power, able to perform activities heretofore thought unique to human beings, such as playing (and winning) chess or Go, writing news reports, or responding to natural speech, to name a few. Not only does AI push the boundaries between activities proper to humans and those proper to machines; it also often significantly augments what the best-endowed humans can do. For instance, even the most intelligent people would be no match for the information storage and recall capacity the Google search engine puts at anyone’s fingertips. Because of these advantages, there is a growing tendency to replace human agents with AI in many fields, both mechanical, as in driverless cars, and cognitive, as in interpreting radiology scans.

Most ethical issues concerning AI involve areas such as privacy, equity, or accountability. People are increasingly distressed at how Facebook and Google store their personal data and sell them to advertisers and political interest groups. Complaints are on the rise about how sexual, racial, and economic biases are embedded in employee hiring and court sentencing software. And lawyers and lawmakers still haven’t figured out exactly how to attribute responsibilities when self-driving vehicles, such as those tested by Uber, figure in accidents. However, the question of who the ethical subject is in AI is often overlooked, taken for granted, or deliberately ignored.

In the Western philosophical traditions, certain distinctive features have always been required of moral agents. Does AI, in its various forms, meet the test? Let us run down the list.

First, moral agents need to be alive, that is, capable of self-motion due to an internal principle. We see this even in the simplest unicellular organisms, which seek nourishment, grow, and reproduce by themselves, free from external intervention. They are not tethered to a power source into which they could be plugged or from which they could be unplugged, nor do they run on batteries. They themselves “produce” energy from within, counting, of course, on “ingredients” from the external environment: think of photosynthesis, for example.

Second, it is not enough simply to be alive to be a moral agent. One also needs to be a life-form of a particular kind, displaying rationality and will. That is why we generally do not consider plants and non-human animals moral agents (whether we should regard individuals of other animal species as moral subjects is a different matter altogether). We do not get angry at or scold a plant if it fails to flower and bear fruit, or if its fruit is not to our liking. We do not sue our neighbor’s dog if it bites us, but our neighbor, who has perhaps been careless with the pet. We give plants and animals of other species a pass because we know that whatever they do, they do not, indeed they cannot, mean it the way we humans do. Non-human subjects cannot envision a future goal or purpose, nor choose one among several possible objectives or aims. Neither can they consciously direct their behavior towards the achievement of that target. They have no sense of self-worth.

Let us now return to AI. 

We realize from the above that, despite appearances, AI is not alive. It is not alive because it is a human-made machine, albeit a very sophisticated one. So far, only nature has been able to produce life forms; through genetic engineering, all we humans have done is mix and match from previously existing samples. I’d even wager that humans will never be able to create new life forms, for the same reason that they cannot create that internal principle of motion the Ancients called a “soul”. And because AI has no “soul”, neither does it have a body. When we disassemble the pieces, we verify that there is nothing there. More importantly, unlike any living thing, we could skillfully put the pieces back together and AI would continue to function as before.

Not being alive and having no body, AI does not have consciousness either. It is incapable of willful, rational action, and it has no sense of self-worth. AI cannot know, and does not know, what it is doing. It simply carries out instructions contained in the controlling algorithm. What happens is that these algorithms or instructions can sometimes get so complicated that not even their programmers are able to foresee, predict, or control the results. Not having a will, AI cannot choose; of itself it has no inclination or tendency towards anything; it can have no preferences. Why not? Because there is no self to protect, satisfy, or perfect.

As a consequence of the above, AI is not, nor can it be, a moral agent. There is nobody, no “I”, behind AI.

As AI moves from research labs to society at large by the hand of business, clarity about who the moral agent is should help us deal with the myriad ethical problems concerning the algorithm design, benchmarking, training, deployment, and decision-making capacities of these systems. But whatever the promises or dangers of AI use, the responsibilities belong to humans alone. AI has no shame, honor, or intrinsic worth. In this sense, there is no such thing as AI ethics.


Why I Teach


June is the month of graduations. It is also a time to take stock of the school year that has just passed. A recurring question I ask myself is “Why do I teach?” (or at least try my best to). Below is my response.

Aristotle begins his Metaphysics by saying that all human beings by nature desire to learn. We are all innately curious; we seek to know, always and everything. It’s our default mode. This is probably the main reason why I’ve dedicated the greater part of my life to teaching. In teaching I’ve found the most effective way to pursue my real passion, which is learning.

But teaching isn’t learning! True enough. Yet in some sense, teaching is the perfect excuse for learning. One can only teach what one has previously learned. And in teaching, one learns double. First, one finds out whether one has really learned whatever it is, or was simply under a false impression. Second, one also learns to share that precise bit of knowledge with others, something that requires the mastery of a very important craft.

Despite more than 25 years in the teaching profession, frankly, I still have serious doubts about whether it is really possible to teach anyone anything. Socrates’ irony (he knows only that he knows nothing, and knowing nothing, he cannot claim to teach anyone) makes more and more sense to me. Little by little, I’ve disabused myself of the common notion of teaching as transmitting knowledge, as if it were a thing or an object. Google and Wikipedia can provide all the factoids; for that, students don’t need my help. Perhaps the only way to keep myself from becoming redundant as a teacher is by wedding my teaching to mentoring. Like all human beings, students by nature already desire to learn. Sometimes I just have to get out of their way. Other times, however, I may have to take a more active role, rekindling the desire for learning when the journey becomes too long, too arduous, or even too dangerous for them to travel alone. My role is to provide guidance and to accompany them, to the extent necessary, in this wondrous adventure. But ultimately, learning, like walking, is something they can only do by themselves. It would be wrong even to pretend that it could be otherwise.

Another metaphor for teaching, besides that of the travel companion, is the offering of a gift. Normally, when presenting a gift, we give the best of what we have. Yet despite our best intentions, there is unfortunately no guarantee that the person receiving the gift will actually like it. Just the same, I think this is a risk worth taking: presenting my students with the best that I can muster, in the hope that, if not immediately, then at least some day, they will come to appreciate my gift. After all, we do not always value things at once, based on our first impression. This is especially true, I realize, when it comes to the content of business ethics courses. In any case, it would be a pity if, having renounced wealth by choosing the teaching profession, I were now to seek applause and fame.

In other words, teaching for me is nothing else but cultivating in others the love for learning. That way, my students will always continue to learn, even if I’m no longer around to nudge, cajole or grade them. 

“Well and good”, one might say. “But let’s get real. What’s in it for you, in this teaching philosophy of ‘tending the fire of learning’ in others?”

Knowledge, viewed as a gift, is indeed a very strange creature. Most of the time, we imagine gifts as objects of competition, things whose possession necessarily entails exclusion: what one wins, all the rest lose. For me, however, this is not the case with knowledge. By sharing what I’ve learned with students, I do not lose it. On the contrary, what I know only grows and increases. Knowledge displays the characteristics of what we call a “common good”. And as a result of sharing knowledge through teaching, everyone (students, myself, and the society of which we form part) ends up much better off, enjoying greatly enriched and improved lives.