The Unintended Consequences of Outsourcing Human Issues to Technological Surrogates


Since the first intentionally lit fire, crudely fashioned shovel, lever, or wheel, along a continuum to cuneiform, mathematics, and the multitude of technologies we enjoy and benefit from today, humanity has been inventing tools and innovating solutions to life’s myriad challenges. Artificial intelligence (AI) and machine learning (ML) are next-generation advancements that can extend that heritage. But we are crossing a threshold. Prosthetic intelligence and autonomous agents, empowered by commercial and social agreement (and possibly one day even legal standing) to act for and on behalf of us and themselves, represent a different order of involvement in human affairs than enlisting mechanical tools and enhanced processing power to help us accomplish tasks beyond our unassisted capabilities.

As a specialist advisor on human factors in corporate governance, ethics, compliance, culture, and cyber- and white-collar malfeasance risks, I confront these issues every day. The business leaders and boards I work with strive to identify and mitigate risk, find efficient and effective ways to scale, and keep their enterprise competitive, agile, and secure. For many companies that can mean contemplating heavy investments in cutting-edge technology like AI. In those C-suite deliberations, confusion often precedes clarity, especially where the enterprise-specific benefits and applications may be indeterminate. Invariably, “how much will it cost?”, “what does it do?”, and “do we really need it?” are asked and, hopefully, satisfactorily answered. But I’ve found from experience that some thornier over-the-horizon questions about softer implications, e.g., “what happens if it actually works and really does what it claims?”, can go unanswered.

Much of this crystallizes in a WIRED article I recently read. It focuses on several artificial intelligence companies founded to “reinvent hiring” by “building tools and platforms that recruit using artificial intelligence.” Having identified “taking human bias largely out of the recruitment process” as the solution to the problem, these companies aim to replace human recruiters with bots “programmed to ask objective, performance-based questions and avoid the subconscious judgments that a human might make.”

To its credit, the WIRED article neither endorses nor lambasts, but equitably discusses the concepts and business strategies with minimal bias. From my perspective, it’s a case study in risk forecasting: I see organizations enthralled with the promise and potential of AI but either unwittingly ignoring or willfully blind to other lurking issues.

Such as? First, clusters of predicate assumptions that flatten multi-dimensional aspects of mental life and social organization into a single plane. Bias is an embedded, natural component of mental architecture, whose hidden influences (positive and negative) become visible in decision-making. While biases have unquestionably deleterious effects when deliberately weaponized or, more commonly, enacted unthinkingly, their dynamic nature and psycho-social functions also serve multiple useful purposes: they belong to a complicated array of psychological mechanisms necessary for forming and maintaining relationships, assessing and surviving the micro- and macro-moments of day-to-day life, and navigating organized social structures.

Second, and not least, a flawed central premise: the misdiagnosis of problems that ostensibly involve diversity or cultural homogeneity but in fact derive from, and are constituted of, other elements. These companies are “solving” for an incorrect problem-set, one which presumes as a baseline that bias is a unitary thing, that it is intrinsically destructive and unhelpful, and that it is a universal rather than an individual or situationally variable dynamic. The conclusion that removing it outright will be an unequivocal enhancement is wrong.

Not all of this is specific to bias, recruiting, or other aspects of organizational staffing and operational governance. There are more critical problems. They center on the disjunction between the technological promise and moon-shot ambitions for AI and ML, and the voracious push for accelerated commercialization, neither of which yet aligns with current realities. These systems are still squarely in a rudimentary developmental phase, yet they are being tasked to punch far above their weight class. However prodigious emulated intelligence computing may be compared to yesterday’s programming, tech investors, entrepreneurs, and society at large are looking to it to achieve more than it currently can or, in many respects, should.

But the limitations are not merely computational or technological. What does the growth of businesses like these say about us as people? Put bluntly, we suffer from a pathologic disinclination to accurately assess complex human problems. Psychodynamic issues are usually treated as fundamentally the same as physical problems. They’re not. Consequently, we tend to develop solutions to many human problems that fail to address the actual problem.

What’s to be done? While the answers aren’t simple, the question can be simply answered: seize the opportunity we’re giving ourselves to better understand the human dimension. The advent of AI, ML, and the Internet of Things (IoT) has catalyzed multi-sector focus on many non-technological questions in psychology, sociology, and philosophy, including ethics, decision-making, pro-social and anti-social dynamics, and other qualities, characteristics, and behaviors intrinsic to being human. Yet the strong tendency, it seems, is to supplant understanding ourselves with algorithms that might do it for us.

It’s a paradox of human nature that we selectively see ourselves as centrally powerful and responsible for our many influences, yet also deny and abdicate agency and accountability for many of our own actions. Outsourcing distinctly human issues to technological surrogates is a decision we will come to lament. It avoids examining and properly diagnosing the root issues themselves, burying understanding and leaving the original problem unsolved. And it inadvertently creates new secondary and tertiary problems, not least the as-yet-undiscovered unintended consequences.



