Rest

Rest is not only what’s left once you’ve done everything; it’s also a right. But to what?

Man was made to work, and even more so, to rest. Rest is not merely what’s left after one has done one’s job and fulfilled one’s various duties. It’s not just a stop from work, a momentary pause, as it were, to refresh and recharge for future endeavors. Neither is it simply spending time smugly enjoying the hard-earned fruits of one’s labor.

Rest is also a right. But to what? 

Firstly, to leisure, what the ancient Greeks called “scholē”, from which the modern English word “school” comes. One went to “school”, then, not for credentialing or job training, but to learn how to engage properly in leisure, the mark of a cultivated man. Obviously, this opportunity wasn’t open to everyone, only to a privileged few (citizens or land-owning males) who had free time because they didn’t have to carry out servile tasks in farming, artisanry, and commerce. They could count on other people to do this for them. Such a “division of labor” gave this class of school-going elites the chance to take part in apparently endless discussions about their natural surroundings and, above all, the city, giving rise to philosophy and politics. Note that neither rest nor leisure required travelling very far to some exotic place. Their idea of a holiday or vacation was very much a “staycation”, in any case always within the confines of the city.

A unique achievement of the Judeo-Christian tradition is to have extended the right to rest to all human beings, not only to the elites, as had been the case in perhaps all ancient civilizations. This was done in obedience to a divine precept admonishing everyone, represented by the chosen people, to imitate God, who rested on the seventh day after finishing the work of creation. So rest, now, was much more than engaging in philosophical discussions, as the ancient Greeks did. It meant to do as God does and contemplate, setting aside concerns that may be pressing but are, in the end, ephemeral. To rest is to try to live, in the measure possible, in God’s eternal present.

How are we to do that? 

We rest by contemplating nature, which delights the senses and inspires the mind: “God looked upon all that He had made, and indeed, it was very good” (Gen 1:31). We also rest by entering into friendly dialogue with fellow human beings, starting with members of our own families, discovering with wonder the amazing richness of similarities and differences among us. Yet more than anything else, we rest by contemplating God’s mystery, at once transcendent and intimate, in prayer. None more beautiful, none more powerful, none greater; the truest of friends, whose words, in loving conversation, at once satisfy and make one hunger for more.

 

College, Post-Covid-19

I hope the global pandemic ends soon, because we’ve found a cure or discovered a vaccine. But until then, we’ll just have to make do, despite life being nowhere near normal. (Even those who claim otherwise know it.) What began as a health crisis has evolved into a full-blown systemic threat, affecting politics and international relations, the economy, and education, among other areas. I’ll focus on the impact of the Covid-19 pandemic on college education and what we can reasonably expect in the short to medium term. Most references will be to universities in English-speaking countries, although they are relevant as well for institutions in Europe and other industrialized regions. In any case, seats of higher learning elsewhere could benefit from keeping an eye on these developments.

It’s often said that the Chinese word for crisis, wēijī, is composed of two characters, one meaning “danger”, and the other, “opportunity”. We also know the English “crisis” derives from the Greek verb “krínein”, meaning “to decide”, indicating an occasion when the need for crucial choice or thoughtful judgement comes to the fore. Let’s apply these cues to our analysis of college pre- and post-Covid.

Even before Covid-19 set in, college education in both North America and Europe was already in crisis, mainly for demographic reasons. In five years, the pool of applicants will have shrunk by a fourth, and by some estimates about 20% of US colleges, especially the smaller ones with fewer than a thousand students, may be forced to shut down. Part of the solution is to try to fill the halls with foreign students from developing countries, in particular China, who besides can be charged full tuition. My own institution, a medium-sized private university in northern Spain, has about a quarter of its students coming from abroad, mostly from Latin America.

A second factor concerns costs and financing, although this is perhaps more acute in the US. In the past forty years, tuition fees have risen by 260%, double the inflation rate, such that a four-year degree could easily cost $200,000 at a private college and $100,000 at a public one. University education, worth $5.8 billion in 2018, is Australia’s fourth-largest export, after commodities such as coal, iron, and natural gas, and caters mostly to Asians. In Europe, the majority of universities are publicly funded, with no or very low fees that can be paid off with cheap loans. But the problem then becomes finding a job.

This was precisely the situation MOOCs (“Massive Open Online Courses”) sought to address in the early 2010s. Through the use of digital technologies, the marginal cost of each additional user would practically disappear as college-level instruction was broadcast to millions. Such initiatives were not free from difficulties, however, beginning with student motivation, retention, and degree completion, as well as economic sustainability, all of which significantly improved once MOOCs started collecting fees, however minimal.
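To see the economics at a glance, here’s a back-of-the-envelope sketch with purely hypothetical figures (not actual MOOC costs): once a course is produced, the average cost per student collapses as enrollment grows, while the marginal cost of one more student stays near zero.

```python
# Hypothetical MOOC cost model: a large fixed production cost, near-zero marginal cost.
FIXED_COST = 500_000     # assumed one-time cost to produce the course (USD)
MARGINAL_COST = 0.50     # assumed cost per extra enrollee (bandwidth, grading infra)

def average_cost(n_students: int) -> float:
    """Average cost per student once n_students have enrolled."""
    return (FIXED_COST + MARGINAL_COST * n_students) / n_students

for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{n:>9,} students -> ${average_cost(n):,.2f} per student")
# 1,000 students pay $500.50 each to break even; 1,000,000 pay $1.00 each.
```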

Then came the Wuhan virus.

Covid-19 certainly did not cause all the troubles afflicting college education, but it exacerbated them. First, by preventing the classroom gatherings where most traditional instruction took place. The loss of personal contact was worsened by lockdowns and severe restrictions on the freedom of movement of people scattered across different time zones around the globe. Many national borders are still closed, and some warn they will remain that way at least until Christmas. For sure, not all international students will be able to return to school in September. Second is the economic fallout, with all non-essential business put on hold. Not only government revenues but private incomes as well have taken a big hit, such that students and their families have begun to question the value of a college education. We know the price, but is it worth it? No one has had to grapple with this existential question as much as the Class of 2020, as they look for jobs under the worst labor market conditions since the Great Depression.

So how will college be transformed in the wake of Covid-19?

Pundits speak of at least three different models.

First is the “Cyborg University”, which is like MOOCs on steroids, offering everything online. The only difference now is the buy-in from Big Tech, poised to partner with the best brands in education to cash in on tremendous growth opportunities. Previous not-for-profit joint ventures such as Harvard/MIT-edX and Stanford-Udacity/Coursera could now morph into Udacity/Google-Amazon and Coursera/IBM. They’d pay star professors handsomely for broadcast lectures, while an army of TA-equivalents would be given a pittance for the nitty-gritty of student engagement. Once more this illustrates the “Matthew effect”: to those who have, more shall be given, while from those who have little, even that will be taken away. Presumably there’d be limited subject offerings, most of them skills-based and immediately job-friendly.

Second is the “Parallel University” model, with a premium offline and a standard online option. The University of Michigan and the Georgia Institute of Technology, for instance, have gone down this route with some full degree programs. This formula introduces a sort of caste system into studies, even within the same institution.

Third is the temporary “Hybrid” model combining online and offline teaching, without renouncing the residential college experience to the extent health conditions permit. On the one hand, international students may be stranded in their home countries, unable to travel; on the other, locals may be caught in a lockdown or forced to self-isolate because they’re sick or have been in close quarters with someone who is. In any case, college facilities cannot simply expand to accommodate everyone while observing mandatory social distancing measures. The stop-gap alternative to classroom teaching then becomes virtual, online instruction, in both synchronous and asynchronous modes. But who would pay tens of thousands of dollars for what amounts, essentially, to a series of Zoom sessions? The hefty price tag would be extremely difficult to justify. So every effort must be made to compensate for the loss of personal engagement through staggered attendance, modified calendars, campus testing and tracing, social bubbles, and technology.

The dangers and opportunities among post-Covid college formulas are clear. Now how do we choose?

To decide which of the three models fits best, individuals should consider what they really pursue with a college education and why. For some, it might be mere credentialing, having a certificate that they’re legally up to the job or function they wish to perform. For others, it might be gaining some instruction, perhaps not much different from the information available on Wikipedia or the practical knowledge imparted on YouTube. But there will be others who truly seek the full college experience: a period of intense learning and socialization with professors and classmates at a special developmental stage, not only to form a dense web of contacts to move forward professionally, but more importantly, to become the best version of themselves, intellectually and morally, and to serve society.

WFH in times of Covid-19

At some point between March and April, two-thirds of humanity was under lockdown due to the Covid-19 pandemic, and a significant portion was working, or struggling to work from home (WFH). 

What does this entail? What are its unique opportunities and challenges? Which coping strategies work best? And what does this portend for the future, or at least until specific treatments and a vaccine are discovered?

Working from home is a luxury

Working from home is an option, but only for a privileged few. For the majority in developing countries (say, India or the Philippines), many of whom were in the informal sector in the first place, it’s not at all feasible. The lockdown has been for them the fastest ticket to dependency and destitution.

Sheltering in place has made social inequalities among those living in Western, educated, industrialized, rich, and democratic (WEIRD) nations even starker. For those in “essential jobs”, meaning employed in supermarkets, healthcare, farms, meat-packing, transport, sanitation, and so forth, WFH is simply impossible. Ironically, “essential” here is a euphemism for work grossly undervalued. Despite their importance, these jobs are poorly paid, mostly carried out by women and immigrants in precarious conditions, and, depending on the country (for instance, the US), without health insurance or unemployment benefits. Calling them “heroes” may make us feel better by expressing indebtedness and gratitude, but it does little to improve their plight. Healthcare workers in Spain, for example, would very much trade the round of applause at 8 pm for more effective “personal protective equipment” (PPE), which the government has routinely failed to provide. So now there’s pot-banging all over the country. Many frontliners claim they’re not “heroes”, but ordinary folk trying to make a living.

The government mandate to WFH translates for a good number into forced leave with diminished earnings, if not entirely without pay, and for scores of millions, into unemployment and loss of income. Already in these past two months we’ve witnessed the fastest rate of job destruction and bankruptcy in recent memory. Perhaps marginally better is the situation of the self-employed, freelancers, independent contractors, and participants in the “gig economy”, who are more used to flexible working conditions. Although it’s not the same when you’re in travel and hospitality, where demand is close to zero, as when you’re remotely processing a steady flow of medical, legal, or financial records.

So what does WFH actually mean for those who keep their jobs and are lucky to have decent internet connections?

In some respects, WFH is not completely new. “Telecommuting” was supposedly coined by people at NASA in the early 1970s, referring to employees who went to “satellite offices” instead of headquarters for various reasons. Even in pre-Covid days, this work arrangement was already familiar to new mothers. In the US, eight percent of employees worked from home at least once a week, and two percent did so all the time; these were the better-educated members of the workforce, a third of whom had college degrees. Yet despite technological advances, working from home still isn’t feasible for 60 percent of jobs, hairdressing being just one obvious example.

Once one has adjusted to the technicalities of Zoom, the biggest impact of working from home is the blurring of boundaries in space and time. The public (workplace, professional contacts, office hours) bleeds into the private (home, family, rest). The challenges of being constantly on call are compounded for those with young children abruptly thrust, with nary a warning, into distance learning. And although fathers think they do their fair share of teaching and child-minding, 80 percent of mothers believe otherwise, absorbing this extra strain in their own careers. Then there’s managing cabin fever, one’s own and others’, which can take a toll on relationships, especially between couples.

The winners…

WFH is a game that has produced clear winners. Companies like Amazon and Microsoft have chalked up noteworthy gains in revenue during the pandemic, quickly reflected in skyrocketing stock prices. Amazon is even looking into hiring 100,000 more workers for its fulfillment centers, all the better to deliver “hot” items such as toilet paper, hand sanitizers, bleach and disinfectants (non-ingestible!), masks, and groceries, especially baking products. Certain players in finance (Capital One, PayPal) and media (Omnicom) are registering higher levels of productivity, bolstered perhaps by savings in rent and travel. Uber presents an uneven picture, with UberEats, the food-delivery business, on an uptick, while the ride-sharing service is in steep decline. With people staying in place, air quality, especially in cities, has largely improved, which of course benefits everyone.

Apart from avoiding contagion, home-workers are also able to cut down, maybe entirely, on commuting costs; in principle, they can also clock fewer work hours for the same pay. And entrepreneurs with enough capital can start a business at lower cost thanks to the depressed economy, or take advantage of cheaper, government-backed loans.

… and the losers 

We’ve already mentioned the biggest casualties: those who lost jobs or saw work hours drastically reduced, unable to WFH. It’s been the brick-and-mortar retail apocalypse (Neiman Marcus, J. Crew, Macy’s) and the worst nightmare for the global airline industry. As part of response and recovery plans, all sectors will have to reimagine themselves and their market relationships, from education (online teaching) to automotive (robot-made electric cars), from fashion (loungewear) to restaurants (gourmet take-away).

But even those who remain gainfully employed suffer severe setbacks. Work is not just about earning a living; it’s also about having structure, order, and rhythm in our lives, cultivating a variety of social connections, and finding an opportunity for deep engagement with a larger community. For unattached millennials, for instance, the workplace could be the warp and woof of their identity. Social isolation can be devastating, and for most individuals, the family alone is unable to supply these needs. WFH causes added stress by forcing people to juggle multiple roles while persistently anxious over their own future and their family’s (the Class of 2020 might want other jobs besides contact-tracing). There’s a tendency to overcompensate, for fear of job loss or to demonstrate one’s worth. For those already predisposed, this is more than enough to trigger burnout, depression, and other mental conditions.

With the excuse of monitoring productivity, employers can even make use (or abuse) of snooping software and surveillance technology on their home-workers. Unsurprisingly, employees could respond to this intrusiveness by cheating, if only to prove their bosses right. Thus organizational trust goes into free fall just when it’s most needed. A valuable lesson in these circumstances is that loyalty cannot be imposed, nor trustworthiness bought.

Making lemonade

When life gives you lemons, make lemonade, we are often told. This applies to WFH under Covid-19 as well.

Although trillion-dollar companies such as Amazon, Google, Facebook, and Twitter don’t seem to be in a hurry to get back to working from work, pushing return dates to the fall or even to 2021, that’s not the case for the majority. This only shows the chasm separating the Silicon Valley winners from the rest, who have become whiners by necessity.

Until targeted medicines and vaccines are discovered, we’re stuck with social distancing, which means we won’t go back to packing workers into crowded office spaces, much less to hot-desking. That’s why WeWork is going down the drain. Imagine the elevator wait times in office towers under social distancing. Given the virus’s contagion cycles, some Israeli professors have proposed a staggered 10-4 work calendar, in which employees stay home for 10 days, then come to the office for 4, which they argue could still drive the transmission rate down. That would be a boon to vital, in-person work and services. Others suggest simply cutting down to a 30-hour workweek and keeping it that way. Productivity need not suffer, they argue, claiming, in what sounds like sophistry but isn’t, that “more could work if everyone worked less.” In any case, because everyone grows tired, working more does not always mean working better.
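For illustration, here’s a minimal sketch of how such a staggered 10-4 calendar could mesh two alternating teams; the team labels and the one-week offset are my own assumptions, not part of the professors’ proposal.

```python
# Sketch of a staggered "10-4" calendar: each team works 4 days on site,
# then spends 10 days at home, on a repeating 14-day cycle. Offsetting a
# second team by a week keeps the office partially staffed in most weeks.

CYCLE = 14      # days in one full cycle
ON_SITE = 4     # consecutive days at the office

def at_office(day: int, offset: int) -> bool:
    """True if a team with the given cycle offset is on site on this day."""
    return (day - offset) % CYCLE < ON_SITE

for day in range(28):   # four weeks
    teams = [name for name, off in (("A", 0), ("B", 7)) if at_office(day, off)]
    print(f"day {day:2d}: on site = {teams or ['nobody']}")
```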

Societies can rediscover the value of health as a public good and invest in healthcare benefits and care subsidies for dependents. Some economists even advocate putting a price on social distancing efforts and including it in the GDP.  

The possibility of exploring “Uberization” also seems promising. It consists of applying information technology to labor markets, such that we pay people for performing tasks rather than for holding jobs or titles. This tests the limits of transaction cost theory, according to which firms exist because it can be more efficient to assign certain tasks in-house, that is, hierarchically or through authority, rather than contracting for them in open markets. It may be rational for agents (buyers and sellers) to give up some freedom in exchange for better-guaranteed outcomes. Generally, this has worked for ride-sharing, although with substantial hidden costs, especially for drivers. It would be a lot harder for something like quality education, unless one’s happy getting by on Google and Wikipedia.

Successfully working from home depends on how well we relate to others across physical and psychological distance, taking into account their embodied condition and the various roles they inhabit. Despite the fancy bells and whistles, virtual communication is no substitute for in-person dialogue: it misses the nonverbal cues, context, empathy, and expressive force essential to genuine human understanding. It also helps to realize that individuals aren’t just workers but family and community members as well. Balancing the demands of these overlapping and intersecting roles through pre-set time and place allotments is too mechanistic. Instead, companies and workers ought to engage each other, deliberating on the values, priorities, and objectives they share. Once an agreement is reached, they’ll be free to pursue these common goods responsibly, bearing in mind that work, wherever and whenever it’s done, is just a means to these ends.


Ain’t no AI in Heaven – II

(Second of two parts)

(The first part provides definitions of AI and surveys its business applications. It also establishes links between AI and business ethics.)

Virtue Ethics and AI

Virtue ethics focuses on agents, not actions. It considers how agents achieve the moral good proper to their nature, flourishing, by cultivating the distinctive human excellences or virtues through free and purposeful actions. Human engagement with AI is just one of myriad opportunities to develop the virtues.

How are we to develop the virtues while engaging with AI? Some attempt this by centering on the intellectual virtues. Since AI deals mainly with data, information, and statistical correlations for decision making, this emphasis is highly understandable. For Grodzinsky, Big Data is essentially a quantification of human behavior to which AI is applied to perform predictive analyses through correlations. AI has shown exceptional ability in interpolation, predicting what happens next if a trend continues; not so in extrapolation, where no such trend has been identified. Correlations discovered by AI can point to possible causations, although only humans can take that leap, following up on AI’s leads with complementary scientific methods.
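A toy illustration of that asymmetry, using synthetic data rather than anything from Grodzinsky: a flexible model fits well within the range of its training data (interpolation), but its predictions degrade quickly beyond it (extrapolation).

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 5, 40)                     # observed range
y_train = np.sin(x_train) + rng.normal(0, 0.1, 40)  # noisy ground truth

# Fit a high-degree polynomial -- a stand-in for a flexible learned model.
model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

for x in (2.5, 4.0, 7.0, 10.0):                     # the last two are extrapolation
    print(f"x={x:5.1f}  true={np.sin(x):+.2f}  predicted={model(x):+.2f}")
# Inside [0, 5] the errors are small; beyond it, the polynomial diverges wildly.
```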

Ironically, AI was invented in part to remedy human weaknesses, both intellectual and moral, in deliberation, decision making, and action. For instance, in selecting candidates for employment, we would like to be free from biases regarding sex, age, race, and so forth, choosing the best on the basis of predetermined criteria. But AI systems need and depend on training sets built from (historical) data and on algorithms, both provided and generated by biased human beings. As a result, AI becomes quite useless at eliminating biases, serving instead to extend and perhaps even amplify them.
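A minimal sketch of the mechanism, on invented records rather than any real hiring system: a model that imitates past decisions learns the bias in them as if it were signal.

```python
# Hypothetical historical hiring records: (group, hired?). Past recruiters
# hired qualified candidates from group "A" far more often than equally
# qualified candidates from group "B".
history = [("A", True)] * 90 + [("A", False)] * 10 \
        + [("B", True)] * 40 + [("B", False)] * 60

# A naive "model": score each group by its historical hire rate.
hire_rate = {}
for group in ("A", "B"):
    outcomes = [hired for g, hired in history if g == group]
    hire_rate[group] = sum(outcomes) / len(outcomes)

print(hire_rate)  # {'A': 0.9, 'B': 0.4} -- the old bias, faithfully learned
```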

One approach is to ignore ethics altogether and simply use all the data, accept their messiness, and focus on finding correlations. But this position is itself unethical, for humans are inescapably ethical beings whose decisions reflect values and moral worth. Another is to acknowledge the ethical import of human-AI engagement.

Grodzinsky chooses the latter and strives to identify the intellectual virtues Big Data scientists need to perform well. The detection of statistical correlations cannot be the sole epistemic end of Big Data research. There should likewise be room for other types of knowledge and for a greater understanding of phenomena through causal relations. This implies recognizing that data are never free from hypotheses or background theories, including human biases. Data are responses to questions containing complex beliefs or intuitions that are incomplete or require verification. Researchers drawing attention to certain data already betray external interests and ulterior motivations. Moreover, data always need a theoretical framework to make sense, to be interpreted or understood; they are never isolated or self-contained. Further, communicating data meaningfully requires a narrative thread. And in order to draw useful inferences, we need not only domain-specific knowledge, but also a worldview and a commitment to values. Values may not serve as evidence in research, but they complement it.

What intellectual virtues or habits of thought do data scientists need? Grodzinsky offers a list: creativity, curiosity, critical thinking, collaboration, communication, humility, prudence, and intellectual courage. From this list, she highlights three. First, open-mindedness in taking generated patterns and predictions and putting them into context, however counterintuitive they may seem. Second, rigor in validating evidence, producing reliability and trust. And third, honesty in documenting and communicating findings, to ensure transparency, distribute responsibility adequately, and safeguard openness. Only then will data scientists be able to evaluate reliably both the inputs to models and the models themselves (algorithms, variables, data sets).

Apart from the intellectual virtues, there are moral ones in play as well. Concern with practical wisdom has arisen over the challenges that automation (through machine learning and robotics) poses to human work (Vallor). Not only can human work be facilitated, augmented, or enhanced; it can also be substituted or replaced by AI systems. This results in a loss of work (even a fear that there may not be enough work AI cannot do) and of wages, as humans fail to compete with AI productivity. In an extreme version, we enter the “end of work” dystopia. So just how much work, of which kind, and at what pace do we cede to AI?

But the need for virtues will not disappear even in a world where AI is omnipresent. Studies have underscored the emotional roots of experience, something to which AI is impervious; hence AI’s lack of feelings may be more of a handicap than an advantage. Moreover, AI was meant to overcome human weaknesses not by remedying but by sidestepping them. For example, machine translation saves humans the trouble of learning a language while allowing them to communicate. Virtues, unlike AI, actually remedy human weaknesses at their root, as patience and perseverance improve language learning, for instance.

Practical wisdom is more than technical expertise. It’s choosing the right end, from which one decides on the means. It includes an all-encompassing evaluation of the acting self with regard to the end and means chosen. Focusing on AI, Vallor unpacks the distinctiveness of practical wisdom. First, practical wisdom allows for decision making on complex goals, over the span of a lifetime. Second, practical wisdom permits the identification of an ultimate goal or final end, providing reasons not only to oneself but also to others, intersubjectively. Third, practical wisdom involves taking ownership of decisions and self-regulating toward the end goal, in reference to which the good is measured.

Thus practical wisdom cannot be reduced to the productive expertise of AI. AI systems are mere extensions of ourselves, depending on us for existence and maintenance, although we may not fully understand their decision making processes. AI systems cannot take on responsibility. However, practical wisdom is not the only moral virtue humans exercise in engagement with AI. Justice, courage, and moderation are also relevant. 

AI scientists and users need justice to avoid perpetuating historic inequalities through employee selection or criminal sentencing software, for instance. Justice requires special attention to the vulnerable, such as the poor, children, senior citizens, the disabled, and the marginalized, giving them access to AI so they can share in its benefits. For example, the visually or hearing impaired could use AI to navigate cities or obtain information through specially designed interfaces.

Courage is equally necessary for optimal AI use. Instead of putting the brakes on innovation, we should encourage and promote AI research. It would be great if self-driving road vehicles, combinations of AI and robotics, were to reach the safety standards of air travel, to cite one example. Similarly, the extensive use of AI in medical diagnostics could boost early detection of illnesses and improve prognoses. And AI embedded in robots could likewise take over hazardous jobs in mining or bomb-defusing, for instance.

Moderation, too, is required for AI to fulfil its potential. Marketers of addiction-forming AI products, such as games, should take proper measures when touting them to vulnerable populations. And governments must take care that data gathering does not intrude into the privacy of citizens, turning them into surveillance states. That’s why we need guidelines even for contact-tracing to fight the spread of disease, for example. Otherwise, both marketers and governments would fail to respect the dignity of customers and citizens.

AI at the service of human flourishing

AI systems are instruments or tools invented for the ultimate purpose of contributing to flourishing, the good life for human beings in society. Rules or behavioral norms are necessary to ensure proper human-AI engagement, particularly in developing or designing, deploying, and using such systems. AI can augment, extend, and enhance human agency in perception, reasoning or decision making, and actuation. Here, once again, the role of the virtues is crucial. The intellectual and moral virtues ensure human-AI engagement is not only efficient, but also perfective of human beings and properly ordered toward their flourishing.            

 

Ain’t no AI in Heaven – I

(First of two parts)

Besides the Covid-19 coronavirus, the threat of a pandemic, and its economic consequences, perhaps nothing grabs global attention more nowadays than AI. In the popular press, AI stands by turns for the greatest threat and the greatest opportunity the world faces. Everyone seems aware that AI use is fraught with ethical issues. Yet only a few take the trouble to figure out how these problems could be systematically addressed from a sober, level-headed perspective, in line with long-established ethical principles. For although many AI affordances are new, technology itself is not, and humankind has always co-existed and co-evolved with technology, beginning with speech and writing. So rather than inventing a newfangled AI ethics, it may make more sense to examine the resources already in store and see how they can help us engage better with AI.

Definitions of AI and its Business Applications

There is no standard definition of AI, and most definitions are inconsistent, referring to particular intelligent systems applied to specific domains. AI comprises distinct though related technologies such as machine learning, natural language processing, chatbots, robotics, and so forth. Perhaps the simplest definition comes from Google CEO Sundar Pichai, who speaks of AI as “computer programming that learns and adapts”. “Learning” and “adapting” are the kinds of activities humans engage in, leading us to consider AI as a machine that mimics human intelligence: “strong AI” seeks to think, feel, and have purposes, becoming a “mind” and not just a model of one, while “weak AI” is meant to be a tool at the service of human designs.

Pressed by the need to legislate, the UK government has come up with its own definition: “Technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation”. However, for at least two of these functions, human intelligence is not strictly required, since even dogs are quite capable of visual perception and speech recognition.

Perhaps the best definition of AI is a combination of those offered by the Expert Groups of the Collaborations Between People and AI Systems and the European Commission: “any computational process or product that appears to demonstrate intelligence through non-biological/natural processes”, “analyzing [its] environment –with some degree of autonomy– to achieve specific goals”. Composed of digital data, algorithms, and computer hardware, AI is not only expected to perform rational functions, but also to change its environment towards a preset direction. 

Being non-biological or non-natural, the “artificial” in AI is clear. “Intelligence” denotes “rationality”, the abstract quality of doing things (or making them happen) for an end or purpose, as opposed to chance. This entails an explanation, a propositional response to the question “why?”. AI displays intelligence in three ways. First, through the use of sensors (cameras, microphones, or keyboards) in machine perception, AI is able to draw and collect data from the environment (physical features such as light, temperature, pressure, distance) relative to its goal (for example, an image of the floor, for a cleaning robot like Roomba). Second, through machine reasoning and “decision making”, AI interprets relevant data (determining whether the floor is clean or not) and decides on a course of action (if clean, remain still; if not, proceed). AI is never fully autonomous, and its range of options is limited. Closely allied to AI is “machine learning” (ML), the mathematical modeling approach that uncovers statistical correlations and patterns within data sets, producing novel outputs. Third, AI shows intelligence in actuation, carrying out responses or environmental modifications virtually (with chatbots) or physically (with cleaning robots). A learning, rational AI is able to adapt its algorithms or decision-making rules depending on the success of previous interventions.
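The perception, decision, actuation cycle just described can be made concrete with a toy loop in the spirit of the cleaning-robot example; the sensor values and the threshold below are invented for illustration.

```python
import random

def sense() -> float:
    """Machine perception: read a (simulated) dirt sensor; 0.0 means clean."""
    return random.random()

def decide(dirt_level: float) -> str:
    """Machine 'reasoning': map the reading onto a fixed range of options."""
    return "vacuum" if dirt_level > 0.3 else "stay_still"

def act(action: str) -> None:
    """Actuation: modify the environment (here, just report the action)."""
    print(f"robot action: {action}")

for _ in range(5):  # one sense-decide-act cycle per step
    act(decide(sense()))
```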

AI systems may be purely software-based or embedded in hardware devices. Examples of the former are voice assistants, image analysis software, search engines, and speech and face recognition systems; and of the latter, advanced robots, autonomous vehicles, drones, and applications in the Internet of Things.  

Numerous AI processes or products are already widely used: computer vision, which identifies objects; natural language processing, which interprets texts; and reinforcement learning, which controls robots or game agents through feedback loops. Thus AI can identify faces, follow voice commands or read handwriting, and play (even win) board games such as chess or Go.

Business applications of AI are vast. AI can be used in decision support, taking care of repetitive tasks in finance, marketing, or project management. It can also help in predictive maintenance of machine failures, based on equipment data histories. Another area is customer support and relationship management, answering queries and analyzing opinions. Likewise, AI is useful in process acceleration, boosting efficiency in e-mail management or database information retrieval. The widespread use of AI in the Data Economy heralds the “Fourth Industrial Revolution”, after the adoption of steam engines, electricity, and electronics.

Links between AI and Business Ethics

Because AI is a machine, the connection with ethics isn’t obvious. AI designates processes or products imitating human intelligence, while ethics is concerned with what’s right and wrong. Although AI is human-made, it isn’t human, and it only imitates, rather than actually performs, intelligent human activity. Hence, to examine the ethics of AI per se would be akin to evaluating the color, cut, clarity, and carat of a fake diamond, misattributing features of human action to something which is not human and is incapable of human action. Like all tools or instruments, AI can only be appraised technically, on whether it produces the desired output (effectiveness) and whether it does so optimally (efficiency or economy), but not ethically. It is not enough to be a “functional equivalent” of human action to be subject to ethical judgment (think of a person holding the door and a doorstop); agency itself has to be human, that is, proceeding freely and purposefully from an individual of the human species. AI fails to exhibit the ethically salient causality associated with human beings. Not being alive, a fortiori, AI cannot be rational, for ethical reasoning depends on a kind of life for scaffolding. So no matter how good, effective, or efficient, there ain’t no AI in heaven.

There can only be ethical judgment, moral praise or blame, for the way humans engage with AI. Humans make use of AI to augment or enhance their own activities; AI extends, but cannot supplant human agency. For instance, we could raise the volume of our voice with a bullhorn, but it is still we who speak (truths or lies), not the bullhorn. With AI, we can program a machine to emit sounds similar to speech; yet this can only happen thanks to our inputs, even when resulting outputs are unforeseen or novel. AI cannot produce original speech because it depends on previous data and algorithms which identify statistical correlations and patterns. Although AI can be “taught” to scramble letters, it cannot form new words, because it is a “nobody”, and a “nobody” cannot create words, expressing or understanding meaning. That is why ethical judgment always bears upon human agents, never on AI.

Humans develop, deploy and use AI, oftentimes, with a business intent. While acknowledging its socially transformative and revolutionary potential, we shouldn’t forget that “AI is not an end in itself, but rather a promising means to increase human flourishing, thereby enhancing individual and societal well-being and the common good”. Engagement with AI in business is ethical if it contributes to the common good of flourishing. Proper AI use could prove helpful to reach this goal.  

Businesses generally decide to use AI on utilitarian grounds, after an analysis of costs and benefits. Over time, smart robots cost less than the salaries of the workers they replace. However, most ethical approaches to AI use are rule-governed, concerned with safeguarding fundamental human rights. For instance, the European Commission’s “Ethics Guidelines for Trustworthy AI” mandate that AI use respect human autonomy, prevent harm, uphold fairness, and remain explicable. Further, AI design and deployment should observe seven key requirements: support human agency and defer to oversight; be technically robust, preventing or minimizing harms; protect data privacy; be transparent in data management and decision making; allow for diversity and inclusion, eschewing unfair discrimination; preserve societal and environmental wellbeing; and exhibit accountability.

Nothing is objectionable in these rules and principles. Harm-avoidance may seem obvious, but it is no less necessary. Yet problems arise in their application. For instance, demands for privacy and security can conflict with transparency and explicability. Privacy and security require that sensitive information (preferences, sex, age, religious or political views) be accessible only to authorized agents and not be used unlawfully. But at the same time, AI transparency and explicability demand precisely that data gathering, labelling, and processing be documented, so as to allow traceability of errors and biases. Ideally, users ought to provide as much and as accurate information as possible for optimal results, but that could compromise privacy. For example, a correlation may be established between vegans (known from meal preferences) and a lower propensity to miss flights due to tardiness. Is this an acceptable bias? Would it be fair to charge omnivores more? How about this particular omnivore who, historically, has always been punctual? Further, explicability depends on the technological competence of listeners. Hence it becomes likely that the most vulnerable (children, seniors, and the disabled) suffer even greater discrimination and exclusion from the employment of AI. How then to proceed?
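To make the vegan example concrete, here is a toy computation on invented passenger records; the figures are assumptions, and the point is that the statistics alone cannot settle the fairness question.

```python
# Hypothetical passenger records: (meal_preference, missed_flight?).
records = [("vegan", False)] * 95 + [("vegan", True)] * 5 \
        + [("omnivore", False)] * 80 + [("omnivore", True)] * 20

def no_show_rate(meal: str) -> float:
    group = [missed for m, missed in records if m == meal]
    return sum(group) / len(group)

print(f"vegan no-show rate:    {no_show_rate('vegan'):.0%}")     # 5%
print(f"omnivore no-show rate: {no_show_rate('omnivore'):.0%}")  # 20%
# The correlation is real in this data; whether it is fair to price on it --
# especially for an omnivore who has never missed a flight -- is precisely
# what the numbers cannot tell us.
```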

On closer look, the rule-based approach offers little practical guidance to navigate the conflicts and tradeoffs in human-AI engagement.

That’s why we may have to explore other, less popular options such as virtue ethics. The objective isn’t so much to replace the rules-based method as to extend it, focusing more on what AI engagement does to people themselves than on business outcomes. We shall begin the second part with an explanation of the fundamental respects in which virtue ethics and the rules-based approach differ.

Listen carefully

“You’re Not Listening: What You’re Missing and Why It Matters” by Kate Murphy is a book I read about the other day, and I can’t wait till I get my hands on it.

It’s been a struggle to manage an almost digital-technology-free class (no laptops, tablets, or phones, although PowerPoint presentations are fine, if only to avoid the chalk dust). One of the reasons I give my students for taking notes by hand is that we can type faster than we think; writing by hand slows us down, giving us more time to process information and actively engage with ideas, making learning more effective. What Murphy tells us now is that we can also think faster than people talk, so it makes sense once again to slow down and listen, if we want to understand what we’re told.

Careful, empathic listening has long been undervalued or taken for granted. People tend to think that as long as you’re not hard of hearing, you’re good. However, that refers only to the enabling, physiological conditions. Good listening requires, besides, paying attention: becoming aware of the what (text), and of the how and why (context), of what people say. But for this, first, you have (to learn) to hold your peace; that is, to keep quiet and listen.

Listening demands sustained attention or concentration, which makes it very difficult nowadays. Impatient and pressed for time, we are prone to interrupting; easily bored and unable to focus, we quickly glance at our phones; convinced of our own brilliance and self-worth, we eagerly speak our piece and dish out our two cents. We even avoid calls, finding them intrusive, and prefer text messages instead, which we can answer at our convenience. Neither do readily available earbuds, headphones, and VR goggles help our state of permanent distraction. These gadgets not only prevent us from listening; they also stop meaningful dialogue altogether. They isolate us from our surroundings and envelop us in a comfortable but unreal world, populated exclusively by elements of our own choosing. That is a shame, and worse, its consequences can be tragic. For as Murphy reminds us, “Wars have been fought, fortunes lost, and friendships wrecked for lack of listening. It is only by listening that we engage, understand, empathize, cooperate and develop as human beings.”

Although I buy into listening being fundamental to all sorts of relationships, personal, professional, and political, I disagree that “listening can be more valuable than speaking.” For mutual understanding, which is what conversation is all about, listening and speaking can only be as valuable as each other. So there’s no problem with investing time, money, and effort in honing rhetorical or public speaking skills, as long as commensurate resources are dedicated to careful listening as well.

Murphy shines a light on the instrumental benefits of good listening. For one, our interlocutors would be more inclined to return the favor if they noticed they had our full attention when they spoke. Also, it would certainly be easier to strike a sympathetic chord in others and get them to do as we propose if we learned more about them by respectfully listening first.

However, good listening can also be a reward unto itself, for it makes us less anxious, more aware, and more in the moment. Cancelling out the ambient noise, we become better attuned to ourselves and our surroundings, and we focus on the task at hand. All of which helps create the greater sense of calm and serenity we need to live life to the full.

Are you with me?   

Why WeWork Doesn’t

Why not “didn’t”? Because for that to be true, lessons would have to have been learned. Unfortunately, they have not.

The short answer is a pair of egregious character flaws in the two main protagonists: Adam Neumann, the WeWork co-founder and former CEO, and his main enabler and investor, Masayoshi Son, the SoftBank boss. In particular, rapacious greed in the former and willful blindness in the latter, ironic for the head of a multi-billion-dollar tech investment company called the “Vision Fund”.

Now for the details.

In January 2019, WeWork was valued at a whopping $47 billion and, according to SEC filings, was looking forward to opportunities worth $1.6 trillion in a market of 300 million clients or “members”. But its third-quarter IPO was botched, and by November its market value had crashed to a mere $7 billion, despite an infusion of fresh funds from SoftBank and a restructuring of the executive team in which Neumann was shown the door. Some 2,400 workers, around 20% of the staff, were laid off, with 1,000 more jobs dangling by a thread as the company frantically tried to concentrate on its core businesses.

The idea behind WeWork was hatched when Neumann was a student at Baruch College, New York, in the early 2000s. Having grown up in a kibbutz near Gaza, he wanted to bring that same communal spirit to work spaces, carefully crafting them to millennial tastes without renouncing his unabashedly capitalist ambitions. WeWork would negotiate long-term leases on prime office space in the world’s major cities, redesigning and refurbishing it before renting it out to individuals and companies, souped up with all sorts of catering, concierge, entertainment, and wellness services, among others. Neumann met Son at a state event in India organized by Prime Minister Modi in 2016. A few months later, he secured a $4.5 billion initial investment from SoftBank, eventually increased to more than $10 billion. Son, for his part, was convinced he had found the next Alibaba, the Chinese internet startup in which he invested $20 million in 2000. Alibaba now has a market cap of $500 billion, with SoftBank owning 24% of its shares.

Not only did WeWork expand at breakneck speed to more than a hundred cities globally, becoming the biggest private tenant in Manhattan; it also branched out into new businesses with the same communal, capitalistic, millennial vibe: WeLive for apartments, WeGrow for schools (charging $42,000 in yearly tuition), WeMRKT for retail, and RiseByWe for luxury gyms, for instance. For Neumann, WeWork wasn’t just about real estate but about a state of consciousness; he was out to transform not only the way we work and live, but the very world in which we live.

Until then, its corporate narrative was that of the iconic stock-market unicorn. But as we all know, unicorns are just the stuff of fairy tales.

Neumann’s wife, Rebekah was supposed to have told him during their first date: “You, my friend, are full of crap. Every single word that comes out of your mouth is fake.” She was clearly on to something. But love is blind and, unconscionably, she went on to marry him anyway.

Neumann’s governance style was rife with conflicts of interest. He would invest in buildings on which WeWork would then take long-term leases, basically engaging in self-dealing. He’d use WeWork almost like a personal ATM, taking out loans to finance luxuries including a private Gulfstream jet and $80 million for five exclusive homes in New York and the Bay Area, among others. He once charged WeWork $6 million for the “We” trademark to be used by its new parent, the We Company. He also owned special-class shares, with voting rights 20 times those of normal ones, to ensure corporate control. He even named his wife and two other close friends to a special committee that would choose his successor in case of death. His penchant for drugs and alcohol on company premises, time, and expense was likewise widely known. On top of that was his hubristic, mercurial, tech-nerdy fratboy demeanor.

The saddest part is that the saga continues. After the failed IPO, SoftBank injected another $1 billion into WeWork to stabilize the company, raising its exposure in investments and loans to around $14 billion. SoftBank’s Son may have removed Neumann from his top perch, but at the price of extending him a $500 million loan and guaranteeing another $185 million over the next four years for “consultancy services”. Meanwhile, the newly laid-off scramble for their severance pay.

Masayoshi Son has indeed found the new Alibaba, with Adam Neumann making up for what even 40 thieves would struggle to accomplish. Together, they give capitalism a deservedly bad name.