The Pitfalls Of Outsourcing Self-Awareness To Artificial Intelligence


Civilization’s first tools were hammerstones. Over time, our Stone Age forebears learned to fashion those simple stone cores into other implements that could be used to cut, chop, bang, and dig. As Earth’s earliest humans grew more resourceful, their primitive technologies progressed. They developed better implements to get food, build shelters, cook, carry and store things, and protect themselves. They discovered new skills and used ingenuity to make the things they needed to live and thrive in adverse circumstances. They—we—invented to survive.

That was a million years ago.

In the millennia since, our tools have evolved from mechanisms of necessity into ones that assist us and enhance our lives. We’ve developed machines to build, lift, and haul far beyond our physical abilities; compute faster and more accurately than we can; carry us safely at tremendous speeds; leave Earth’s orbit to explore the universe; cure illness, eradicate disease, and extend life spans; communicate in real time with people thousands of miles away; and access the wealth of human knowledge on command. These and many other extraordinary innovations and dazzling technological achievements, in ways both positive and negative, shape the world we live in now.

But many of today’s technological innovations have a different purpose: to do things just so we don’t have to. Vehicles that travel without our controlling them. Robotic manufacturing. Genetically engineered foods instead of farmed ones. Passive threat monitoring. Algorithms pre-selecting information customized for mass consumption. Autonomous agents providing services in human affairs without human supervision or involvement.

A New Class of Tools

And now, drawing on advances in machine learning and artificial intelligence, a new class of technological and commercial applications is gaining broader acceptance, including, for instance, affective computing, sentiment analysis, behavior-change analytics, synthetic decision-making, and, most recently, intelligent nudging. These aren’t intended to entertain, simplify, expedite, problem-solve, or do work for us in any conventional sense. Their core purpose is to tell us about ourselves—to assess, surveil, predict, and manage aspects of human thought, emotion, and behavior. We’re outsourcing self-awareness, self-knowledge, and self-agency. It’s a paradigm shift. The dominant principles governing human tool-making and innovation are being fundamentally upended.

Consider Humu, a start-up co-founded in 2017 by three Google veterans, including Laszlo Bock, who had led Google’s human resources function, what Google calls People Operations. As described in a New York Times feature article last week, Humu “uses A.I. to ‘nudge’ workers toward happiness.” Bringing data analytics to human-resources functions isn’t new. Humu’s self-declared differentiator is leveraging artificial intelligence, natural language processing, and proprietary algorithms to run its special-purpose “Nudge Engine,” which “deploys thousands of customized nudges—small, personal steps—throughout the organization to empower every employee, manager, team, and leader as a change agent.” All of this draws on Nudge Theory, a concept catapulted into the mainstream by Richard Thaler, the University of Chicago professor of behavioral science and economics who won the 2017 Nobel Prize in Economics for his work on the subject.

People are “complex, messy things,” Humu co-founder and CEO Laszlo Bock wrote in a company blog post last October. “If work is going to be better tomorrow,” Bock continued, “we have to change the way we do things today. And getting us to change? That’s one of the hardest organizational challenges out there. So to make work better, we have to make change easier.” Building a stronger, happier, and more productive workplace is the aspiration of many organizations, and Humu promises solutions in a slew of vexing people-oriented areas, including retention, engagement, talent acquisition, performance, culture, absenteeism, and insider misbehavior. The premise is so appealing that, in its first year, Humu raised $40 million and is already deployed at 15 enterprise customers, including one with 65,000 employees.

But looking past the hype, Humu is a case study in the problems typical of ventures pushing products and services that center on machine learning, artificial intelligence, and other iterations of enhanced computational cognition to analyze and forecast human thought and behavior.

By design, AI and ML mimic aspects of human cognition and decision-making, using algorithms and training data to “learn” and develop answers and solutions independently rather than operating according to fixed programming. Simply put, AI is modeled on neural (brain) structures and on human cognition and decision-making.
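To make that distinction concrete, here is a minimal sketch (in Python with scikit-learn; the task, data, and labels are invented purely for illustration) contrasting a hand-written rule, where the decision logic is fixed in advance, with a model that induces its decision boundary from training examples:

```python
# Illustrative toy example: conventional programming vs. machine learning.
from sklearn.linear_model import LogisticRegression

def rule_based(hours_slept: float) -> str:
    # Conventional programming: the decision logic is written by hand.
    return "alert" if hours_slept >= 7 else "tired"

# Machine learning: the same kind of decision, but the boundary is
# estimated from labeled examples rather than hard-coded.
X = [[4], [5], [6], [7], [8], [9]]   # hours slept (hypothetical data)
y = [0, 0, 0, 1, 1, 1]               # 0 = "tired", 1 = "alert"
model = LogisticRegression().fit(X, y)

print(rule_based(8))         # "alert": follows the hand-written rule
print(model.predict([[8]]))  # [1]: follows a boundary learned from the data
```

The learned model’s behavior is a product of its training data: change the examples and the “rule” changes with them, which is part of what makes such systems both powerful and hard to audit.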

But there’s a sub-group of businesses, Humu among them, that I call Human-Oriented AI (HOAI for short). These don’t just apply AI as an advanced computational tool to analyze data or run the interactive technologies we use and, for the most part, enjoy and benefit from every day (think Siri, Alexa, Google, Amazon, and the like). They monitor and assess people for the express purpose of delivering insight, feedback, reporting, and forecasts about ourselves that we ostensibly couldn’t get on our own.

While the enhanced analytics human-oriented systems offer have real positives (in sports training and performance, for instance, and in pilot, first-responder, and medical training, among many other areas), there are two major problems. One, endemic to most current-day iterations of AI, is a bias that over-values cognition—treating human mentation and decision-making as dominantly mechanistic and behavioristic—and subordinates or even dismisses whole arrays of non-cognitive constituents of mental processing and psychological existence. The paradox is that even as we talk about emotions—happiness, for example—we do so in terms that over-simplify the complex psychology of emotions and misconstrue how affects, not just irrationality, influence thinking and behaving. Until the more sophisticated facets of the psychodynamic drivers of human thought and behavior are encoded into the general architecture undergirding machine learning and artificial intelligence, these applications will remain flawed.

The second is the use of analytics drawing on biometrics, behavior-based observation, self-reporting, and laboratory research data to gain insight into profoundly nuanced, inscrutable aspects of our internal worlds: how subjective, embodied experience and all the feelings and memories of a life lived become, and influence, who we are and how we think and behave. It reflects a cardinal misunderstanding of the exquisitely complex non- and para-cognitive dimensions of how we learn, think, and act.

The intersections of technology and people can create or amplify many unintended risks, including concerns about employee privacy, the deprivation or curtailment of free will (nudging, by definition, doesn’t dictate but suggests choices), and, importantly, the potential for abuse and misuse by malicious actors. Consider, for instance, that Humu’s recommendations, its nudge emails, are generated by machine-learning algorithms. Wayne Crosby, Humu’s head of technology, asserts that there is nothing covert or opaque about how its nudge communications are distributed. And Humu public-relations representative Meghan Casserly says: “all Humu emails are signed and use DKIM, which allows an organization to take responsibility for transmitting a message in a way that can be verified by mailbox providers. Each email is sent from an address that is designed to be easily identifiable, and solely for the purpose of Humu communicating with our users.” Strictly speaking, then, it would be incorrect to classify them as spoofing.
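For readers unfamiliar with DKIM, the sketch below (plain Python, standard library only; the message, addresses, and domain are hypothetical) shows what that verifiable signing looks like at the header level. Note that this only reads the signature’s tags; genuine verification also checks the signature cryptographically against a public key published in the signer’s DNS, which libraries such as dkimpy automate.

```python
# A header-level look at DKIM. The raw message and domain below are
# hypothetical, and the bh=/b= values are elided placeholders.
from email import message_from_bytes

raw = b"""From: nudges@example.com
To: employee@example.com
Subject: Your weekly nudge
DKIM-Signature: v=1; a=rsa-sha256; d=example.com; s=selector1;
 h=from:to:subject; bh=...; b=...

Try opening your next one-on-one with a question instead of an update.
"""

msg = message_from_bytes(raw)
sig = msg.get("DKIM-Signature", "")
tags = dict(
    part.strip().split("=", 1)
    for part in sig.replace("\n", " ").split(";")
    if "=" in part
)
# The d= tag names the domain taking responsibility for the message;
# a mismatch with the visible From: domain is a classic phishing tell.
print("From domain:   ", msg["From"].split("@")[-1])
print("Signing domain:", tags.get("d", "<unsigned>"))
```

This is what “signed and easily identifiable” amounts to in practice: a mailbox provider can confirm that the named domain really did take responsibility for the message.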

I nevertheless question the long-term efficacy of such emails. Though based on and informed by data derived from a client organization, the mechanism fundamentally outsources key managerial or leadership decision-making—the content of the nudges—to a computational agent. How will leaders who request this form of algorithm-driven support develop much-needed leadership skills if the very work in which they are deficient is delegated to a computer? I also wonder whether exposure to these sorts of communications could, over time, erode employees’ ability to accurately discern phishing emails from legitimate ones. Could members of an organization who are most receptive to Humu’s nudges be more susceptible targets for manipulation or social-engineering ploys? Humu’s Crosby and people science lead Jessica Wisdom assert that Humu follows established behavioral-science and security practices, and that all Humu systems “adhere to the strictest of privacy and security protocols.”

Irrespective of these assertions, these and other related questions and concerns are not summarily resolved by strong security protocols alone. They should remain top of mind for every organization as part of its enterprise human-risk and cybersecurity awareness training and policy-making.

Even beyond the many ways in which nudging alarmingly resembles, and can unwittingly promote, social engineering, the core business model cannot escape technology’s ultimate handicap as a tool for telling us about people. All claims to the contrary notwithstanding, computational agents cannot intuit human intention, understand meaning, infer subtext, comprehend conflict, anguish, fury, shame, love, and desire as motivators, detect percolating internal machinations, or predict either imminent malicious action or most other forms of anomalous behavior. People aren’t weather systems. High-octane pattern analysis only suggests so much. The rest are unknown unknowns. Until they’re not.

Recommendations for Leaders

What are the alternatives for enterprise leaders who want to better understand and address people and culture issues without outsourcing them wholesale to technology? That, of course, is the million-dollar question. The short answer: reference the vast literature on leadership development and consult a knowledgeable, experienced mentor or advisor. Ask questions, listen, learn. There are no instant bromides or silver-bullet solutions.

But here’s one immediately actionable recommendation. Any business leader considering a technological solution to a people-related issue has already identified at least three critical data points: (1) awareness of an issue, (2) its potential source, and (3) uncertainty about how to proceed. What’s next? Further investigation. Good solutions come from good diagnoses, and determining the correct response is predicated on understanding the causes of the problem. Technology might be an effective response. Or a placebo. Or a mistake. We’re tasking technology with understanding and resolving the complexities intrinsic to our humanness. Why? Because we’re flummoxed by them. We can do better.


This article was originally published on the Forbes Leadership Strategy Channel on 6 January 2019.

