Automating Empathy (October 20, 2025)
I saw an advertisement today: “Empathy can’t be automated”
Made me think: What is the evidence pro and con?
The obvious question is “Well, can you?”
The debate is joined. This turns out to be a trick question. My intuition is that one cannot automate empathy, but perhaps one can simulate it, and then the simulation turns out to be something quite like the “automating empathy” of the title.
Defining our terms
However, before further debate, we need to define our terms. “Receiving empathy” is defined in rough-and-ready terms as one person (the speaker expressing/sharing something) “feeling heard” by the other person (the listener). “Feeling heard,” in turn, means the speaker believes he or she has been “understood,” “gotten for who the sharer is as a possibility.” The celebrity psychotherapist Carl Rogers described empathy as getting inside the frame of reference of the other person and experiencing (not just intellectualizing) the other’s point of view: “…to see his [her] private world through his eyes” (Rogers 1961: 34).
If one wants to get a tad more technical about defining empathy, then consider Heinz Kohut’s view that empathy is “vicarious introspection,” i.e., one knows what the other individual is experiencing, feeling, etc., because one has a vicarious experience of the other’s experience (Kohut 1959). I listen to you and get “the movie of your life.” Less technically, Kohut famously quotes one of his psychoanalytic patients (Kohut was a medical doctor) as saying that being given a good [empathic] listening was like sinking into a warm bath (Kohut 1971; note I will update this post as soon as I can find the exact page). Presumably that meant it was relaxing, de-stressing, emotionally calming.
“Simulation” is producing a functionally similar result using a different means or method. For example, in the history of science, Lord Kelvin, one of the innovators of thermodynamics, simulated the action of the ocean tides using a mechanism of ropes and pulleys [Kenneth Craik 1943: 51 (“Kelvin’s tide-predictor”)]. Thus, one does not have a social relationship with a bot; one has a “para-social relationship.” According to Sam Altman, some 1% of ChatGPT users had a “deep attachment” to the app – a kind of “rapport” – which sounds like an aspect of “empathy” to me. Maybe not a therapist, but how about a “life coach”? (https://www.nytimes.com/2025/08/19/business/chatgpt-gpt-5-backlash-openai.html)
One misunderstanding needs to be cleared up. Entry level empathy is often presented as the would-be empathic listener reflecting back what the potential recipient of empathy expresses in words (and sometimes also in behavior). Though the value of being able to reflect the words the other person expresses is great, this is a caricature of empathy. For example, the client comes in and says “I am angry at the boss” and the listener responds “You feel angry.” Pause for cynical laugh.
Now one should never underestimate the value of actually comprehending the words spoken by the speaker. Reflecting or mirroring back what is said, more-or-less literally, is a useful exercise in short-circuiting the internal chatter that prevents a listener from really hearing the words that the other person puts into the interpersonal space of conversation.
The exercise consists in engaging and overcoming the challenge: “I can’t hear you because my opinions of what you are saying are louder than what you are saying; and my opinions drown out your words!” So the empathic commitment is to quiet and quiesce the listener’s internal chatter and be with the other person in a space of nonjudgment, acceptance, and tolerance. In a certain sense, the empathy automaton (“bot”) has an advantage because, while the bot may have a software bug or generate an inappropriate response, it does not have “internal chatter.”
Entry level empathy automation: repeating, mirroring, reflecting
So far, there is nothing here that cannot be automated
On background, this entry level of empathy was automated – reflect back what was said, at least in a rough-and-ready way – in 1966 by Joseph Weizenbaum’s MIT prototype of a natural language processor, a very primitive “chat-bot,” ELIZA. This was an attempt at natural language processing at a high level and was not restricted to science, therapy, life coaching, business, education, or any other particular area of conversation. The approach of ELIZA was to reflect, mirror back, and repeat the statements made to it (the computer app) by the human participant in the conversation (who typed the exchange on a keyboard).
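For the technically curious, here is a minimal sketch (in Python) of the ELIZA idea – keyword pattern matching plus pronoun swapping – written for this post rather than taken from Weizenbaum’s original; the pronoun table and canned templates below are illustrative assumptions, not the actual DOCTOR script:

```python
# Minimal ELIZA-style reflection: swap pronouns and wrap the speaker's own words
# in a canned, Rogerian-sounding frame. Illustrative only, not Weizenbaum's code.
import random
import re

# Simplified pronoun-swap table (the real ELIZA scripts were much richer).
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "mine": "yours",
    "am": "are", "you": "I", "your": "my", "yours": "mine",
}

# Canned response templates keyed on trigger patterns, in the spirit of DOCTOR.
TEMPLATES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "You feel {0}."]),
    (r"i am (.*)", ["How long have you been {0}?", "You say you are {0}."]),
    (r"(.*) my (boss|mother|father) (.*)", ["Tell me more about your {1}."]),
    (r"(.*)", ["Please say more about that.", "How do you feel about that?"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the reply mirrors the speaker."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    """Return a mirrored, non-directive reply to the user's statement."""
    cleaned = statement.lower().strip(".!? ")
    for pattern, replies in TEMPLATES:
        match = re.match(pattern, cleaned)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(replies).format(*groups)
    return "Please go on."

if __name__ == "__main__":
    print(respond("I am angry at the boss"))       # e.g. "You say you are angry at the boss."
    print(respond("I feel nobody listens to me"))  # e.g. "Why do you feel nobody listens to you?"
```

Even this toy version produces the “You feel angry” style of reflection caricatured above – which is precisely the point: the entry level is trivially automatable.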
Here is the surprising, unpredicted result: ELIZA, which merely mirrors back what the person says to it, was experienced by users as comforting and even “therapeutic,” granted in a hard-to-define sense. As far as I know, no attempts at a sustained therapeutic or empathic relationship were ever undertaken, so the data is anecdotal, yet compelling.
Could it be that we persons are designed to attribute “mindedness” – that the other person (or, in this case, participant software) has a human-like mind – based on certain behavioral cues, such as responding to the speaker with words and meanings used by the speaker, or that support the speaker, or that even disagree with the speaker in a way that takes the input and provides meaningful feedback? This yields a caricature of the therapist of the school of Carl Rogers (whose many innovative contributions are not to be dismissed), in which the listener mainly reflects back the words of the client and/or asks non-directive questions such as “How do you feel about that?” or “Will you please say more about that?”
In contrast, we find there is more to empathy than mere mirroring and reflection of words. For example, Heinz Kohut (1971, 1977), an empathy innovator, gives the example of meeting a new prospective client who begins the conversation with a long list of the client’s own failings, shortcomings, and weaknesses, dumping on himself and building a case that he is really a jerk (or words to that effect). The guy is really going full throttle in dumping on himself. The person then pauses and asks, “What do you think of that?” Based on his empathic listening and the feeling that Kohut got in being with the person, he replies, “I think you are feeling very lonely.” The man bursts into tears – finally someone has heard him! Now this is just one vignette from a long and complicated process, in which there were many moments of empathic convergence and divergence. The point is that this exchange was not a predictable result, nor likely a result to have been produced by mere mirroring; nor is this vignette dismissible by saying that Kohut was merely a master practitioner (which he was), who could not tell what he was doing but just did it. Kohut wrote several books to document his practice, so he tells us a lot about what he was doing and how to do it.
In a prescient reflection, which anticipates the current debate by nearly half a century, Kohut wrote:
“…[M]an [people] can no more survive psychologically in a psychological milieu that does not respond empathically to him, than he can survive physically in an atmosphere that contains no oxygen. Lack of emotional responsiveness, silence, the pretense of being an inhuman computer-like machine which gathers data and emits interpretations, do no more supply the psychological milieu for the most undistorted delineation of the normal and abnormal features of a person’s psychological makeup than does an oxygen-free environment…” (Kohut 1977: 253)
Empathy is oxygen for the soul. What Kohut could not appreciate – and what computer science could not even imagine in 1977 – is that large language models would be able to simulate affective responsiveness, chattiness, agreeableness, even humor, in a way that would put to shame the relatively unemotional unresponsiveness of the classic approach to psychoanalysis, in which the analyst is an emotionally neutral (and hence “cold”) screen onto which the client projects his issues. It should be noted that this classic unresponsiveness is a caricature and stereotype to which few real world psychoanalysts rigidly adhere, rather like the cynical anecdote of the analyst who removes the tissues from his consulting room to prevent indulgent weeping instead of talking.
Granted, one should not assume that a therapeutic bot would (or would not) work as well as a human therapist or empathy consultant; but, for the sake of argument, let us suppose that it does work as well (and work as badly?), either in certain circumstances or in new, improved future releases, which are to be anticipated with high probability.
Fast forward to today’s large language models (LLMs)
Therefore, fast forward these sixty years from Joseph Weizenbaum’s prototype (MIT, 1966) to today’s large language models – heirs of the systems that beat human beings at playing Jeopardy!, an elaborate word game requiring natural language – and what then becomes possible? One of the challenges of talk therapy (or empathy consulting, etc.) is that it is powerful and demonstrably effective, but not scalable. It does not scale up to meet the market demands of thousands of people who are struggling with mental illness or those who, while not satisfying criteria for mental illness, would still benefit from a conversation for possibility.
A human therapist (or empathy consultant) has eight hours in a standard workday, and if the therapist meets with emotionally upset people during all those hours, then the therapist is at risk of upset, too, confronting compassion fatigue, burnout, or empathic distress in somewhere between two weeks and two years. Hence the popularity of fifteen-minute medication management sessions on the part of psychiatrists (MDs), the current dominant practice design, an approach presenting challenges of its own. Medications are powerful and can address disordered mood, anxiety, and pathological thinking, yet often the underlying (individual, social, community, nonbiological) issues remain unaddressed, due to finances and schedule, and so remain unengaged and unresolved.
Hence the current market of long wait times, high costs, high frustration, and challenges in finding a good fit between therapist and client, all resulting in suffering humanity – above all suffering humanity. (Note also that, while this article often talks about “therapy,” many of the same things can be said about “life coaching,” “consulting,” “counseling,” and so on; these latter will not be mechanically repeated, but they are implicated.)
As regards the market, the problematic scalability of one-on-one talk therapy and the lengthy time needed for professional training and the acquisition of a critical mass of experience result in a market shortage of competent therapists and related empathy consultants. For the prospective patient (i.e., customer) struggling with emotional, spiritual, or behavioral health issues, the question often comes down to: “How desperate are you?” The cynical (and not funny) response is “Pretty near complete panic!” If one is desperate enough – out of work, relationships in breakdown, attracted to unhealthy solutions such as alcohol – then a “good enough bot” just might be something worth trying. Note that all the usual disclaimers apply here – it would be interesting to consider a double-blind test between real human therapists and therapy bots. Unfortunately, the one truly insurmountable advantage of a human therapist – the ability to be in-person in the same physical space – cannot be double-blinded (at least at the current state of the art) in such a test. Having raised the possibility, I have deep reservations about the personal risks of such an approach. That there is a market for such services is different from having such an automated approach imposed on consumers by insurance corporations to expand would-be monopoly profits.
Another possible advantage of automation (albeit with a cynical edge): people who are socially awkward might prefer to get started with a virtual therapy bot. One clever startup has called its prototype platform a “Woebot”. Get it? Not a “robot,” but a “Woebot.” (Note – the Woebot uses a database of best practices, not a large language model.)
Now these socially struggling individuals might be a tad naïve, as such an approach would prevent them from engaging with the very issue that is troubling them – interacting with people. On the other hand, one might argue it would be like “exposure therapy”: for example, the person who has a snake phobia is presented with a photo or a rubber snake as the first step of therapy. Likewise, in the case of an empathic relationship, one would have to “graduate” to an authentic, non-simulated human presence – the real snake!
What is one trying to automate?
If one is going to automate empathy using a therapy-bot, then presumably one should be able to say what it is that one is trying to automate – that is, simulate. There are four aspects of empathy that require simulation – for the client: (1) the experience (“belief”) that one has been heard – that the meaning of the message has been received and, so to speak, not gotten lost in translation; (2) the communication of affect and emotion – that the listener knows what the speaker is feeling and experiencing because the listener feels it too; (3) a grasp of who the other person is as a possibility in the context of the standards to which the individual conforms in community; (4) putting oneself in the place of the other person’s perspective (“point of view” (POV)).
Most of these things – taking the other’s point of view, vicarious experience of the other’s experience, engaging with possibilities of relatedness, commitment to clear communication – come naturally to most people, but they require practice. Humans seem to be designed spontaneously to assume multiple perspectives – one assumes other people have minds (beliefs, feelings, wants, impulses) like oneself – but then we get caught up in “surviving the day” on “automatic pilot,” forget that the individual is part of the community, and throw away those assumptions in favor of the egocentric one: it’s all about me! It takes commitment to empathy and practice to overcome such limitations. A person experiences a rush of emotions, but then forgets it could be coming from the other person and succumbs to emotional contagion. Who the other person is as a possibility is not much appreciated in empathy circles, but it is an essential part of the process of getting from stuckness to flourishing and requires empathy at every step.
There is nothing described here, once again, that cannot in theory be automated. Indeed, human beings struggle with all these aspects of being empathic, and the requirements of automation are non-trivial, but they can be improved by trial and error. The over-simplifications required to automate a process end up feeding back and giving the empathy practitioner insight into the empathic process as implemented in the human psychobiological complex, the complete human being.
There is nothing wrong – but there is something missing
Ultimately what is missing from automating empathy is the human body – the chatbot is unable to BE in the space with you in a way that a human being can be with you. The empathy is in the interface, and for the human being the interface is often the face. The human face is an emotional “hot spot.” The roughly thirty muscles in the human face, some of which are beyond voluntary control, can combine in some 7000 different ways to express an astonishingly wide range of emotions, starting with anger, fear, high spirits (happiness), and sadness, and extending to truly subtle nuances of envy, jealousy, righteous indignation, contempt, curiosity, and so on. The result is facial recognition software, of which “emotion recognition” is the next step, as implemented by such companies as Affectiva (see Agosta 2015 in references). The software recognizes the emotions and affects of the speaker based on a calculus of facial expressions (and the underlying muscle movements) as documented by Paul Ekman (2003). Now combine such an interface with an underlying automation of empathy by a not-yet-developed system, and the state of the art advances.
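To make the idea concrete, here is a small, purely illustrative sketch (again in Python, written for this post) of how an emotion recognition layer might sit on top of facial detection: detected Action Units (AUs) from Ekman’s Facial Action Coding System are scored against commonly cited prototype combinations for the basic emotions. The AU prototypes below are simplified approximations from the published literature; nothing here is Affectiva’s actual pipeline, which uses trained classifiers over video rather than a lookup table:

```python
# Illustrative sketch only: score detected facial Action Units (AUs, after Ekman's
# Facial Action Coding System) against simplified prototype combinations for the
# basic emotions. Real emotion-recognition systems are trained classifiers, not tables.

# Approximate AU prototypes (simplified, drawn from the EMFACS-style literature).
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},            # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},         # inner brow raiser + brow lowerer + lip corner depressor
    "anger":     {4, 5, 7, 23},      # brow lowerer + upper lid raiser + lid tightener + lip tightener
    "fear":      {1, 2, 4, 5, 20, 26},
    "surprise":  {1, 2, 5, 26},
    "disgust":   {9, 15},            # nose wrinkler + lip corner depressor
}

def score_emotions(detected_aus: set[int]) -> dict[str, float]:
    """Return, for each emotion, the fraction of its prototype AUs present in the face."""
    return {
        emotion: len(prototype & detected_aus) / len(prototype)
        for emotion, prototype in EMOTION_PROTOTYPES.items()
    }

def best_guess(detected_aus: set[int]) -> str:
    """Pick the emotion whose prototype is most completely matched."""
    scores = score_emotions(detected_aus)
    return max(scores, key=scores.get)

if __name__ == "__main__":
    # Suppose an upstream face tracker reports AUs 6 and 12 (a Duchenne smile).
    print(best_guess({6, 12}))          # -> "happiness"
    print(score_emotions({1, 4, 15}))   # sadness prototype fully matched
```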
The skeptic may say: but what if the therapist is a psychoanalyst and you are using the couch, so that the listener is listening out of sight, hidden behind the client, who is lying comfortably looking at the Jackson Pollock drip painting on the wall in front of him (or her) or at the ceiling? Well, even then, one hears the therapist clear her (or his) throat or one hears the analyst’s stomach gurgle. So the value added of in-person therapy is gurgling stomachs, farts, and hiccups? Of course, this is the reduction to absurdity of the process (and a joke). Taking a necessary step back, the suspicion is that one has missed the point. The point being? The bodily presence of the other person (including but not limited to the face) opens up, triggers, activates possibilities of relatedness, possibilities of fantasies of love and hate, possibilities of emotional contagion, possibilities of further physical contact including sex, aggression, gymnastics, breaking bread, inhabiting the same space (this list is incomplete) that no virtual connection can, as a matter of principle and possibility, fulfill at all. This (I assert) is a key differentiator.
The emotional bond between the client and therapist, counselor, or consultant becomes the path to recovery. But why cannot that bond be with a bot? Well, without taking back anything said so far in this article, the bond can be a “bot bond,” if that would work well enough for you. Still, arguably, there is nothing wrong, but there is still something missing. Consider the major motion picture Her (2013, Spike Jonze, with a young Joaquin Phoenix), in which the lonely, socially awkward but very nice guy has a relationship with an online bot of a “girlfriend” – and then gets invited out on a double date. It is like the date – the other “person” (and the quotes are required here!) – is on speaker phone. So, if you are okay with that, then the sky, or at least cyber space, is the limit. There is another shoe to drop. It then turns out that the relationship is not exclusive, as the software is managing thousands of simultaneous threads of conversations and relationships. One essential aspect of empathy is that the one person is fully present with the other person. Even if the empathy consultant has other clients and other relationships, the listener’s commitment at the time and place of the encounter is to be fully present with the other person. For at least this session, I am yours and yours alone. Now that is a differentiator, and even in our multitasking, attention deficit world, I assert such serial exclusiveness (different from but analogous to serial monogamy) is on the critical path to getting value from the empathy, whether authentic or simulated.
Advantage: Rapport
This matter of “exclusivity” suggests that the rapport between the speaker and listener, between the receptivity for empathy and its delivery, is undivided, unshared outside of the empathic pair, complete, whole. The parent has several children, but when she or he is interacting with one of them, that one gets the parent’s undivided attention. The parent is fully present with the child without any distractions. That such a thing is hard to do in the real world shows what a tough job parenting is.
If this analysis of exclusivity is accurate, then that would be a further differentiator between real world human empathy and automated empathy. I may be mistaken, but notwithstanding some people who can manage (“juggle”) multiple simultaneous intimate relationships, the issue of exclusivity of empathy is one reason why most such relationships either fail outright or stabilize as multiple serial “hook-ups” (sexual encounters) without the intimacy aspects of empathy.
On background, this business of “rapport” invites further attention. “Rapport” is different from empathy, and it would be hard to say which is the higher-level category here, but the overlap is significant. “The rapport” first got noticed in the early days when the practice of hypnosis was innovated as an intervention for hysterical symptoms and other hard-to-define syndromes that would today be grouped under “personality disorders” as opposed to major mental illness. The name Anton Mesmer (1734–1815) – as in “mesmerism” – is associated with the initial development of the magnetic “baquet” and “animal magnetism,” the attraction and attachment between people, including but not exclusively sexual attachment. Mesmer had to leave town (Vienna) in a hurry when he was accused of ethical improprieties in his magnetic practice.
The rapport of the hypnotic state is different from “being in love,” yet has overlapping aspects – being held in thrall to the other in a “cooperative,” agreeable, even submissive way. (It should be noted that Mesmer started up his practice again in Paris, a fascinating misadventure recounted in Henri Ellenberger’s The Discovery of the Unconscious (1971).) Today hypnosis is regarded as a valid, if limited, intervention in medicine and dentistry, especially for pain reduction, giving up smoking, and overcoming similar unhealthy habits. One could still take a course in hypnosis from Erika (not Erich!) Fromm in the 1970s at the University of Chicago (Brown and Fromm 1986). (Full disclosure: I audited her (Fromm’s) dream interpretation course (and did all the assignments!), but not the hypnosis one.)
This business of love puts the human body on the critical path once again. Of course, no professional – whether MD, psychologist, therapist, counselor, empathy consultant, and so on – would ethically, and in most cases even legally, perpetrate the boundary violation of a sexual encounter. Indeed one can shake hands with any client – but hugs are already a boundary issue, if not a violation. The power differential between the two roles – provider and client – is such that the client is “one down” in terms of power and cannot give consent.
However, firmly differentiating between thought and action, between fantasy and behavior, what if the mere possibility of a sexual encounter were required to call forth, enable, activate the underlying emotions that are the input to creating the interpersonal attachment (the rapport) that occurs with empathy? (This is a question.) Then any approach which lacked a human body would not get off the ground. Advantage: human empathy.
Once again, skepticism is appropriate. Is one saying the possibility of a boundary violation is an advantage? Of course not. One is saying that the risk of a boundary violation is a part of having a human body, and that such a risk is on the critical path to calling forth the communication of emotions (many of which may be imaginary) needed for a full-blown, adult empathic encounter. Note also that this is consistent with many easy examples of entry level empathy where empathy is not really challenged. If someone raises their voice and uses devaluing language, one’s empathy is not greatly tested in concluding that the person is angry. Virtual insensitivity will suffice.
In the context of actual emotional distress, the matter is further complicated. Regarding bodily, physical presence, the kind of empty depression, meaninglessness, and lack of aliveness and vitality characteristic of pathological narcissism responds most powerfully and directly to the “personal touch” of another human being who is present in the same physical space. Kohut suggests that the child’s bodily display is responded to by the gleam in the parent’s eye, which says wordlessly, “I am proud of you, my boy [or girl]!” Child and parent are not having an online session here, and, I must insist, any useful and appropriate tele-sessions are predicated on and presuppose a robust relatedness based on being, living, and playing together on the ground in shared physical space. One occasionally encounters traumatic events that impact the client’s sense of a cohesive self: if the parent recoils from the child’s body (or cannot tolerate lending the parent’s own body to the child for the child’s narcissistic enjoyment), the risk of the self’s fragmentation arises (Kohut 1971: 117). So where’s the empathy? The need for the parent’s echoing, approving, and confirming is on the critical path to the recovery of the self. The empathy lives in the conversation for possibility with the other person in the same space of acceptance and tolerance in which we both participate in being together.
Advantage: Embodiment
Another area where humans still have an advantage (though one might argue it is also a disadvantage) is in having a body. Embodiment. You know, that complex organism that enables us to shake hands, requires regular meals, and so on. If further evidence were needed, this time explicitly from the realm of science fiction, the bots actually have a body, indistinguishable from that of standard human beings, in Philip K. Dick’s celebrated Do Androids Dream of Electric Sheep? (1968), which is the basis for the major motion picture Blade Runner. Regardless of whether the “droids” have empathy or not, they definitely have a body in this sci-fi scenario – and that makes all the difference. That raises the stakes on the Voight-Kampff empathy test considerably (the latter rather like a lie detector, actually measuring physiological arousal, not truth or empathy).
And while the production of realistic mechanical-biological robots is an ongoing grand challenge, we have left the narrow realm of computing and entered biochemistry: binding bone and tissue to metals and plastics and translating biochemical signals into electrical ones. We are now inside such science fiction films as Blade Runner or Ex Machina. For purposes of this article, we are declaring as “out of scope” the question of why we will soon be able to produce autonomous weaponized warriors that shoot guns, but not autonomous empathy applications. (Hint: the former are entropy engines, designed to produce chaos and disorder, whereas empathy requires harmony and order; it is easier to create disorder than to build; and automating empathy is working against a strong entropy gradient, as are all humanizing activities.) Along with the movie Ex Machina, this deserves a separate blog post.
The genie is out of the bottle
Leaving all-important early childhood development aside, bringing large language models to empathic relatedness is a game changer. The question is not whether the generative AI can be empathic, but the extent to which the designers want it to function in that way and the extent to which prospective clients decide to engage (both open questions at this date (Q4 2025)).
“The day ChatGPT went cold” is the headline in this case. The reader encounters the protest from some OpenAI customers about the new release, GPT-5. This event was reportedly greeted by a significant number of customers with the complaint that “OpenAI broke it!”
The New York Times article (https://www.nytimes.com/2025/08/19/business/chatgpt-gpt-5-backlash-openai.html) tells of a musician who found comfort (not exactly “empathy,” but perhaps close enough) in talking with ChatGPT about childhood trauma, and, as designed, the bot would keep the conversation going, enabling the individual to work through his issues (or, at least, such is the report, which, however, I find credible). Then the new release (GPT-5) was issued and it went “cold.” The response of the software lacked the previous set of features often associated with empathy, such as rapport, warmth, responsiveness, validation, disagreeing in an agreeable way, humor, and so on. Instead the response was emotionally cold: “Here is the issue – here is the recommendation ___. Conversation over.” In particular, customers with challenges to their mobility, their ability to type (who were using a voice interface), or their cognition, as well as standard customers who had established a relationship with the software and the interface, complained that the “rapport” was missing.
Human beings often know that they are being deceived, but they selectively embrace the deception. That is the basis of theatre and cinema and even many less formal interpersonal “performances” in social media. In the media, the entire performance is imaginary, even if it represents historical events from the past, but the viewer and listener welcome it, not just for entertainment (though that, too) but because it is enlivening, activating, educational, or inspiring. Same idea with your therapy-bot. The client enters the therapy theatre. You know it is fake the way the Battle of Borodino in Tolstoy’s War and Peace is a fictional representation of a real battle. Yet for those able to deal with the compartmentalization, perhaps the result is good enough. This assumes that the therapeutic action of the bot is “on target,” “effective,” “engaging,” which, it should be noted, is a big assumption, especially given that even in the real world it is hard to produce a good therapeutic result.
This matter of faking empathy opens up a humorous moment (though also a serious one – see below). Here the definition of “fake” is “fake” the way a veggie burger is a substitute for an actual hamburger. That may actually be an advantage for some people, though, obviously, a burger and empathy are processed by the human being in profoundly different ways: the veggie burger influences the lower gastrointestinal tract, and the empathy (whether automated or not) influences one’s psyche (the Greek word for “soul”). Not a vegetarian myself, I definitely eat a lot less meat than ten years ago, and, with apologies to the cattle industry (but not to the cattle), I applaud the trend. The interesting thing is that by branding the products “veggie burgers” or “turkey burgers,” the strong inference and implication is that the hamburger still sets the standard regarding the experience and taste that the consumer is trying to capture. Likewise with empathy.
In most cases, the automation of empathy relies on the person’s desire and need for empathy. Empathy is like oxygen for the soul – without it, people suffocate emotionally. Unfortunately, the world is not generous with its empathy, and most people do not get enough of it. Therefore, people are systematically willing, perhaps as a design limitation of the human psyche, to support a blind spot about the source of their empathy. Some will choose the Stephen Stills song: “If you can’t be with the one you love, love the one you’re with!” (1970), which, in this case, will be the bot mandated by the insurance company or the human resources department of the corporation. Deciding not to think about what is in fact the case, namely, that this amalgamation of silicon hardware and software has no human body, is not morally responsible, and lacks authentic empathy, the person nonetheless attributes empathy to it because it just feels right; and, unless the bot goes haywire and insults the person, that is often good enough to call forth the experience of having been “gotten,” of “having been heard,” even if there is no one listening.
For all of its power and limitations, psychoanalysis is right about at least one thing: transference is pervasive on the part of human beings. Nor is it restricted merely to other human beings. The chatbot becomes a new transitional object (to use D. W. Winnicott’s term (1953)). To quote Elvis, “Let me be – your teddy bear.” This is “transference,” an imaginary state in which the client imaginatively projects, attributes, and/or assigns a belief, feeling, or role to the therapist, which the therapist really does not have. However, the ins-and-outs of transference are not for the faint of heart. What if the therapist really does behave in a harsh manner, thereby inviting the projection on the part of the client of unresolved issues around a hard, bullying father figure? The treatment consists precisely in creating an empathic space of acceptance, to “take a beat,” “take a step back,” and talk about it. “Of what does this remind you, dear client?” Is this starting to look and sound familiar?
The suggestion is that such features, including transference, can be simulated and iteratively improved in software. However, the risk is that in “simulating” some of these features – and the comparison is crude enough – it is rather like putting on blackface and pretending to be African-American. Don’t laugh or be righteously indignant. Things get “minstrel-ly” – and not in a positive way. There are significant social psychology experiments in which people have “gone undercover,” pretending to be black, in order the better to empathize with the struggles of black people. The result was fake empathy. (See A. Gaines (2017) Black for a Day: White Fantasies of Race and Empathy and L. Agosta (2025) “Empathy and its discontents.”) In the case of automated empathy software, no one is pretending to deliver human empathy (though, concerningly, sometimes it seems that they are!), and the program may usefully deliver the disclaimer that the empathy is simulated, multi-threaded, not exclusive, and not the direct product of the biologically based experience of a human organism. To paraphrase a disclosure from the bot in Her: “I am currently talking to 3231 people and am in love with 231 of them.”
Ethical limitations of “fake it till you make it”
While one can map these empathic functions one-to-one between human beings in relationship (including therapeutic ones), there is one aspect of the relationship that encompasses all the others and does not apply to the bot. That is the ethical aspect of the relationship. When a person goes to a professional for consultation – indeed whether about the individual’s mental health or the integrity of the individual’s financial portfolio or business enterprise – the relationship is a fiduciary one. (Key term: “fiduciary” = “trust”.) That is, one relies on the commitment to the integrity of the interaction including any transactional aspects. One is not going to get that kind of integrity or, just as importantly, the remedies in case of an integrity outage from a bot. Rather one looks to the designer, the human being, who remains the place where the responsibility lands – if one can figure out who that is “behind the curtain” of the faceless unempathic bureaucracy responsible for the product.
A significant part of the ethical challenge here is that automated neural networks – whether the human brain implemented in the organic “wetware” of the human biocomputer or, alternately, a computer network implemented in the software of silicon chips – seem to have emergent properties that cannot be rigorously predicted in advance. (On this point see Samuel Bowman (2024): Eight things to know about large language models: https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-11556011/400182/Eight-Things-to-Know-about-Large-Language-Models.) Thus, human behavior, which is often predictable, is also often unpredictable. So human communities have instituted ethical standards, of which law enforcement and organized religions are examples. Our standards for chatbots and similar platforms are still emerging.
Thus, the prognosis is mixed. Is automating empathy a silver bullet – or even a good enough lead bullet – to expand empathy for the individual and community, and to do so at scale, for example, for the modern equivalent of Henry David Thoreau’s “mass of men [persons]” leading “lives of quiet desperation”? Or is it our cyber age equivalent of a blow-up sex doll for the socially awkward person playing small and resistant to getting out of the person’s comfort zone? At the risk of ending on a cynical note, given the sorry state of human relations as demonstrated in the news of the day, maybe, just maybe, any form of expanded empathy, whether fake or authentic, if properly managed to mitigate harm, is a contribution.
In any case, the key differentiators between automated empathy and humanly (biologically) based empathy are the human body (or lack thereof), the exclusivity of the empathic rapport, and the ethical implications, including the locus of responsibility when things go right (and wrong). We humans will predictably fake it till we make it; and automating empathy does not produce empathy – it produces fake empathy.
References
(in alphabetical order by first name)
Alisha Gaines. (2017). Black for a Day: White Fantasies of Race and Empathy. Chapel Hill: University of North Carolina Press.
Carl Rogers. (1961). On Becoming a Person, intro. Peter Kramer. Boston: Houghton Mifflin, 1995.
Daniel P. Brown & Erika Fromm. Hypnotherapy and hypnoanalysis. Hillsdale, N.J.: L. Erlbaum Associates, 1986.
Donald W. Winnicott. (1953 [1951]). Transitional objects and transitional phenomena. A study of the first not-me possession. International Journal of Psycho-Analysis, 34, 89-97.
Dylan Freedman. (2025/08/19). The day ChatGPT went cold. The New York Times: https://www.nytimes.com/2025/08/19/business/chatgpt-gpt-5-backlash-openai.html
Henri Ellenberger. (1971). The Discovery of the Unconscious. New York: Basic Books.
Heinz Kohut. (1959). Introspection, empathy, and psychoanalysis. Journal of the American Psychoanalytic Association, 7: 459–483.
Heinz Kohut. (1971). The Analysis of the Self. New York: International Universities Press.
Heinz Kohut. (1977). The Restoration of the Self. New York: International Universities Press.
Joseph Weizenbaum. (1966). ELIZA – A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9 (1): 36–45. (See also ELIZA below under Wikipedia.)
Kenneth Craik. (1943). The Nature of Explanation. Cambridge: Cambridge University Press, 1967.
Lisa Bonos. (2025/10/23). Meet the people who dare to say no to artificial intelligence: https://www.washingtonpost.com/technology/2025/10/23/opt-out-ai-workers-school/
Lou Agosta. (2025). Chapter Three: Empathy and its discontents. In Radical Empathy in the Context of Literature. New York: Palgrave Macmillan: 55–82. (https://doi.org/10.1007/978-3-031-75064-9_3)
Lou Agosta. (2019). Review of The Empathy Effect by Helen Riess: https://empathylessons.com/2019/01/27/review-the-empathy-effect-by-helen-riess/
Lou Agosta. (2015). A rumor of empathy at Affectiva: Reading faces and facial coding schemes using computer systems: https://empathylessons.com/2015/02/10/a-a-rumor-of-empathy-at-affectiva-reading-faces-and-facial-coding-schemes-using-computer-systems/
Paul Ekman. (2003). Emotions Revealed. New York: Owl Books (Henry Holt).
Philip K. Dick. (1968). Do Androids Dream of Electric Sheep? New York: Ballantine Books.
Samuel Bowman. (2024). Eight things to know about large language models: https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-11556011/400182/Eight-Things-to-Know-about-Large-Language-Models
Shabna Ummer-Hashim. (Oct 27, 2025). AI chatbot lawsuits and teen mental health: https://www.americanbar.org/groups/health_law/news/2025/ai-chatbot-lawsuits-teen-mental-health/
Spike Jonze. (2013). Her. Major motion picture.
Stephen Stills. (1970). Love the One You’re With. Lyrics: https://www.google.com/search?client=safari&rls=en&q=words%3A+love+the+one+you%27re+with&ie=UTF-8&oe=UTF-8 [checked on 2025/10/31]
Wikipedia: “ELIZA: An early natural language processing computer program”: https://en.wikipedia.org/wiki/ELIZA
Zara Abrahams. (2025/03/12): “Using generic AI chatbots for mental health support: A dangerous trend”: https://www.apaservices.org/practice/business/technology/artificial-intelligence-chatbots-therapists
Update (Nov 5, 2025). Just noted: “They fell in love with AI chatbots,” by Coralie Kraft: https://www.nytimes.com/interactive/2025/11/05/magazine/ai-chatbot-marriage-love-romance-sex.html [The comments note that people being self-expressed is generally a good thing, including self-expressed to and with chatbots; and the individuals may usefully continue to try to find an actual human being with whom to talk and relate. Less charitably, other commentators have said things like “I hope the person gets the help they need.”]
Update (Nov 16, 2025). Note the headline on this article from the New York Times: “Chatbots are empathetic and accessible, but they can sometimes be wrong. What happens when you ask them for medical advice?”
IMAGE CREDIT: “Empathy can’t be automated” (c) Adler University – ad poster at the corner of Michigan Av and Washington St, Chicago, IL, reproduced with kind permission
(c) Lou Agosta, PhD and the Chicago Empathy Project
Categories: automating empathy, empathic interpretation, empathic receptivity, empathic responsiveness, faking empathy, simulating empathy