I recently responded to a text from my cell phone carrier promising to lower my bill. “Just click this link to activate,” it urged. The link brought me to the carrier’s home page, where I could find nothing related to the offer. Poking around and trying various search words and phrases left me plenty frustrated but no closer to the discount. With my allotted time dwindling, I resorted to calling. An advanced version of the automated phone menus of the past, the bot that greeted me presented itself as a full-blown customer service agent. It was not.
The customer service bot and I tussled for several minutes as I stated and restated my reason for calling; eventually I gave up and requested a human. Fortunately, my request—“human being, please”—was understood and I was transferred. Unfortunately, after the fruitless back-and-forth with the bot and with my window of time shrinking, I ended the exchange a less satisfied customer than when it began.
To be clear, I am not anti-AI or anti-automation. AI is powerful technology with significant potential for improving efficiency and solving complex problems. Indeed, I felt safer during a recent Waymo One ride than I have with most taxi, Uber, or Lyft drivers. But I wonder whether customer service—or, in the case of nonprofits, donor relations—is the best use of this technology. Can bots connect with customers as effectively as humans, or is connection being sacrificed in the name of efficiency?
Humanizing AI
As the New York Times and others recently reported, incorporating manners—please and thank you, for example—into AI has cost tens of millions of dollars. Money “well spent” in the view of OpenAI CEO Sam Altman, who, like others, has expressed the view that impolite or unkind interactions with AI build bad habits and thus decrease kindness in human interactions.
“While it is true that an AI has no feelings, my concern is that any sort of nastiness that starts to fill our interactions will not end well,” explains Scott Z. Burns, screenwriter and host of “What Could Go Wrong?” a podcast focused on working with AI. “Kindness should be everyone’s default setting—man or machine.”
Unquestionably, the decline in civility and kindness is a problem. Perhaps AI can promote kindness in humans—or, at the very least, avoid contributing to further decline. But are there risks to humanizing these tools? Are we unintentionally contributing to disconnection and loneliness by passing off bots as substitute humans rather than the tools they are? Not long ago, social media was heralded as a boon to human connection and the sharing of information. While it certainly facilitates those things, we now recognize there are significant risks and potential harm as well.
According to Axios, Meta is developing AI chatbots to serve as “friends” in an effort to address loneliness. “The average American has, I think, it’s fewer than three friends,” Mark Zuckerberg said on a recent podcast. “And the average person has demand for meaningfully more.”
Interesting logic. Assuming Zuckerberg is referring to “Facebook” friends, most Facebook users have more than three friends—far more, in most cases—yet loneliness is at an all-time high and has risen along with the use of social media. It seems virtual friendships may be contributing to—if not causing—loneliness. And more of them will be better?
Can AI fool our brains?
Beyond concerns about civility, loneliness, and privacy, substituting AI for human connection feels out of sync with what we know about ourselves as complex beings. A recent neurological study commissioned by the Mauritshuis Museum in The Hague found that viewing real works of art stimulates the brain ten times more strongly than viewing a reproduction on a poster or screen. If our brains so capably recognize and respond to original art, it stands to reason that reproductions of human beings will fall short as well.
In his recent essay, “EI (or EQ) in the age of AI,” Dan Goleman, the man who (literally) wrote the book on emotional intelligence—“Emotional Intelligence: Why It Can Matter More Than IQ”—notes that AI can outperform humans in many tasks and functions, including reading comprehension; recognizing speech, handwriting, and images; common sense reasoning; and generating code and math.
“But the good news for us is we are quite likely to remain better than AI at human skills like emotional intelligence—and that advantage is likely to stay,” he assures. “What AI is not so good at is ‘heart skills’ or ‘durable human skills’—basically human abilities in the emotional intelligence domain. Think of the warmth of a caring relationship; the bulls-eye messaging of a doctor explaining to a particular patient why to take a medicine; a skilled salesperson telling a customer why a certain product fits their needs. Or jobs like nursing or teaching school children, that require trust and caring.”
Love for humanity
For better or worse, philanthropy—like most industries—is experimenting with and incorporating AI. Given current trends and the challenges facing the sector, increasing efficiency, automating functions, and analyzing and using data in new ways are crucial to nonprofit survival and success. New and improved tools are necessary and welcome. It seems incredibly important, however, that we recognize the value of human connection—the foundation on which philanthropy was built—and avoid further damage in an effort to increase efficiency.