Reflections at year-end on the human mind

This end-of-year holiday has offered much-needed downtime to play (soccer, ski), spend extra precious time with my wife and our boys, sleep more, relax, and reset. The week away from client work and from strategizing upcoming business, speaking, and writing projects was also a prime opportunity to listen to music (I rediscovered Astor Piazzolla this week) and catch up a little on the backlog of unlistened-to podcasts and unwatched TED Talks. To think and reflect. And, of course, to read (first book off the stack: “Life 3.0: Being Human in the Age of Artificial Intelligence” by Max Tegmark).

Among all the content I’ve recently traversed, one short piece—“Scientists say your ‘mind’ isn’t confined to your brain, or even your body”—catalyzed my writing this post. I offer it as both a summary statement and a prelude to further comments, observations, and insights coming in the year ahead on the topics and issues I frequently write and speak about, and to which I devote my time and expertise every day.

One positive consequence of the emergence and early commercial application of machine intelligence is technologists’, data scientists’, business leaders’, and even the general public’s growing appetite for more information about the so-called “soft” sciences—psychology, sociology, ethics, linguistics, anthropology, philosophy. Likewise, advances in medical technology and neuroscience research, spurred in part by the push to develop artificial intelligence and other forms of synthetic cognition modeled on neural structures and other features of brain function, are explosively expanding our understanding of the brain. With that comes critical new knowledge about this remarkable, complex organ, including innovations that meld people and technology to help patients who have suffered catastrophic limb, sensory-organ, and brain injuries. It has also brought fresh perspectives on the organic factors involved in how both our brains and our minds function and malfunction. All excellent.

But it’s vital for interested stakeholders to realize that the body of knowledge and data regarding the mind—encompassing mental architecture, emotions, subjective experience, maturation and development, psychological processes, and more—is vast, deeply researched, rigorously articulated and, while continually expanding, absolutely not new. This is especially pertinent for those working in AI/ML and affiliated areas of automated cognition, technology, and cybersecurity who, whether they intend to or not, are indirectly but unavoidably engaging with the dense realms of human decision-making, consciousness, perception, awareness, embodied experience, developmental learning, imagination, creativity, phenomenological and existential subjectivity, linguistic and symbolic communication, early attachment and family dynamics, culture, sexuality, gender, and other aspects of humanness. None of these is purely cognitive, behaviorally quantifiable, definitively codifiable, or capable of being comprehensively rendered algorithmically.

Brains are amazing. Not having one would be distinctly problematic. I’m constantly thankful for mine, and hope everyone feels similarly about their own. But we are more than just our brains. We aren’t merely thinking organisms or cognitive beings. Mentation may be seated in the brain, carried out by neurons and other cortical and biochemical processes, but what we experience and speak of as of-the-Mind—your you-ness, the self of one’s self and other selves, the unique characteristic features of identity, personality, and individual personhood—is not understandable solely as the work of brain function. Who we are as psychological persons—how we each feel, think, desire, love, socialize, behave, misbehave, our memories (real and imagined) and life experiences (actual, confabulated, elaborated, accessible, forgotten, repressed)—is at the center of everything we do, constructive and destructive, as individuals and as members of families, enterprises, communities, and societies.

Also: an entire generation is coming of age brain-washed by an emoji-based propaganda campaign of brain-shaped clouds, brains overlaid with circuit boards or spinning electrons, heads containing computers instead of brains, robots that look like a person thinking, and other pictograms signifying that artificial intelligence, machine cognition, and the human brain are synonymous. No! AI ≠ brain. This association is not only creating an indelible cartoon of AI that will be difficult to alter but, more perniciously, consolidating a misrepresentation that is being algorithmically embedded into, and deforming, early-generation AI iterations. It reflects a cardinal and damaging misunderstanding, as I’m discussing here, of the exquisitely complex non-cortical and para-cognitive dimensions of how we learn, think, and act. Businesses, courts, and agencies that increasingly rely on machine learning in front-line operations must be alert to racial, gender, class, socio-economic, and other biases incorporated into training data and algorithms. Similar but even more grossly impactful are misconceptions about the underpinnings of human cognition, and models that distort, oversimplify, and de- or undervalue other critical aspects and constituents of mental processing and psychological existence; these are adversely impacting outcomes at scale in ways that will ultimately dwarf all other currently known bias and general-intelligence problems.

With this in mind, we need more effective discussions among experts across multiple disciplines. Every enterprise working in cybersecurity, ethical AI, advanced ML, and socially responsible automated cognition should keep a seat at the table for a specialist in the human mind, someone who can help develop enhancements and mitigate and resolve problems using lenses and tools beyond the scope of data scientists and conventional academics and cognitive researchers.

Twenty years ago, Dan Siegel, MD, a professor of psychiatry at UCLA School of Medicine, proposed succinctly defining the mind as “the emergent self-organizing process, both embodied and relational, that regulates energy and information flow within and among us.”

On this view, for instance, what behavioral economists and social scientists broadly refer to as ‘irrationality’ can be productively understood, in part, as ‘mental disorganization.’ And certain psychopathologies and psycho-social dysfunctions can be seen to represent a failure (of organic and/or psychologically maladaptive origin) to resolve the conflicts and effects of subjective disorganization.

Considering the mind in terms of a “self-organizing process” seems useful, then, at least as a provisional Esperanto phrase with a uniform, general meaning understandable to diverse professionals who otherwise talk past each other in widely disparate dialects. But it is nonetheless unavoidably insufficient. The mind is far more than a system in intermittently fluctuating states of organization and disorganization. In 1987, the novelist and poet Margaret Atwood (probably most celebrated today as the author of “The Handmaid’s Tale”) wrote: “I would like to be the air / that inhabits you for a moment only / I would like to be that unnoticed and that necessary.” I think this comes a little closer to how we can think of the mind—of matters psychological but not necessarily cognitive; coming under the heading of ‘intelligence’ but not just concerning logic, rationality, behavioral functionality, and decision-making; relating to and governing our inner world and how we live, love, and relate to others and ourselves. Of being human. That is: so unnoticeable as to be unnoticed. But, actually, so very, very necessary.

So, with all of this in mind and as we enter this new year, I end with an invitation, a call to action, and a hope:

(1) I welcome multi-disciplinary discussion and collaboration—with business leaders. Entrepreneurs. Technology and cybersecurity professionals. Data scientists. I invite you to connect with me. Let’s meet for coffee or lunch, by Skype, WebEx, or Zoom, or in your office or mine. Talk with me. Ask me about the mind, mental functioning, and psychodynamics. About sophisticated, multi-faceted, but also actionable and solutions-oriented ways of understanding and addressing the human element in your business, your software development cycle, your cybersecurity, awareness, and malicious-actor threat-defense system, your ethics, culture, risk, and compliance programs, your boardroom, your AI/ML/automation initiative. Give me the opportunity to ask you questions and listen to your responses. Help me understand what you know and understand, and what you’re uncertain about or question—about how and why people behave as they do, what you expect should happen in various scenarios in which people figure centrally, and how you’ve accounted for and mitigated the human risks.

(2) To all who consider people primarily (or even exclusively) in behaviorist, functionalist, mechanistic, cognitive terms, I strongly urge you to reassess your thinking. Revise any negative bias against the many other psychodynamic drivers and constituents of human thought, behavior, and decision-making. There is no aspect of any enterprise or social institution that isn’t influenced or affected by everything human. We are astonishingly adept at serially reinventing old solutions for new problems. Decisions, policies, and procedures grounded in fallacy, misnomer, simplification, incompleteness, minimization, or outright denial about how we actually think and make decisions only amplify risk and vulnerability and increase the likelihood of unintended consequences.

(3) Some future rogue AI or singularity event may yet imperil our existence, an avoidable but tragically conceivable potentiality if we fail to consider more thoughtfully what we’re doing and why, and continue to outsource the resolution of human issues to technological surrogates. But presently, our severest problem isn’t the threat of either amoral or hyper-perfect synthetic cognition; it’s us: indifference, cruelty, pettiness, willful blindness, prejudice, denialism, primitivism, abuses of authority, denigration of dignity, mass manipulation, fetishized ethnic balkanization, rationalized genocide, perpetration of trauma on children. My hope, really another call to action, is that we become better able to value ourselves and our humanity. Maybe then we’ll succeed in developing new technologies we will be truly proud of, not fear.

