From Chatbots to Companions: AI’s Human Side
AI&Future
From alleviating loneliness among the elderly to providing personalized emotional support, rapidly developing artificial intelligence (AI) is gradually reaching into the human inner world, finding applications in emotional companionship and mental health. Yet this has also sparked considerable controversy: Does AI merely generate caring responses from data analysis, or can it truly understand the complexities of human emotion? Will it change the way humans build real relationships, ultimately deepening loneliness? And when human emotions can be manipulated by AI, could that endanger people's well-being, even their lives?
As digitalization and intelligent technology advance, the expression of human emotion is increasingly moving into virtual spaces. While AI offers new possibilities for emotional support, it also carries potential risks. How to use intelligent technology responsibly in the realm of the human spirit has become a pressing question that demands urgent consideration.

The Impact of Artificial Intelligence on Emotions and Values
In this rapidly changing era, technology is becoming increasingly integrated into our lives. The rapid development of artificial intelligence, in particular, is influencing our emotions and values in unprecedented ways.
With the widespread adoption of AI, we are increasingly realizing that machines are more than just tools for executing commands; they are beginning to understand and simulate human emotions through data analysis. This change complicates the relationship between humans and machines and even raises a series of ethical issues. For example, by analyzing social media comments and sentiment, AI can generate personalized recommendations or feedback, appearing to communicate and resonate with humans. But can this kind of "communication" truly replace genuine, deep connections between people?
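The idea of generating feedback from detected sentiment can be illustrated with a deliberately simplified sketch. Real systems use large language models rather than keyword lists; everything below (the word sets, function names, and canned replies) is hypothetical and purely illustrative of the mechanism, not any actual product.

```python
# Toy sketch (illustrative only): keyword-based sentiment scoring
# driving a templated "empathetic" reply. All names and word lists
# here are hypothetical; real systems use statistical models.

POSITIVE = {"happy", "great", "excited", "love"}
NEGATIVE = {"sad", "lonely", "tired", "anxious"}

def sentiment_score(comment: str) -> int:
    """Count positive minus negative keywords in a comment."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def generate_feedback(comment: str) -> str:
    """Pick a canned response based on the detected sentiment."""
    score = sentiment_score(comment)
    if score > 0:
        return "That sounds wonderful! Tell me more."
    if score < 0:
        return "I'm sorry you're feeling this way. I'm here to listen."
    return "Thanks for sharing. How does that make you feel?"

print(generate_feedback("I feel so lonely and tired lately"))
```

Even this crude version shows why such output can feel like "communication": the reply is conditioned on the user's emotional signal, yet nothing in the program experiences anything.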
Furthermore, AI's impact on values cannot be ignored. As algorithm-driven information consumption becomes mainstream, people may unwittingly conform to the viewpoints those algorithms promote. This not only affects individual judgment but may also reshape society's perspectives and stances on certain issues at a larger scale. Our moral standards, emotional bonds, and ways of understanding the world are all quietly shifting amid this efficient, precise data processing.
Thus, even as we enjoy the conveniences technology brings, we must ask whether these changes truly align with our human nature. Finding that balance, ensuring that AI contributes to human development rather than replacing or eroding our core values, is a crucial issue facing society today.
Virtual Reality: A New Frontier Reshaping Social Structure
In recent years, the development of virtual reality (VR) technology has transformed the way we live. With its widespread adoption, the way we interact with one another is being redefined. VR allows people to socialize in fully immersive environments, which not only changes how we communicate but also reshapes the very fabric of our communities. For example, people in different locations can hold face-to-face discussions in virtual spaces, and this "distance-bridging" experience can significantly strengthen interpersonal bonds.
However, as we rely on VR to meet our social needs, it also raises profound ethical and human challenges. For example, when people construct idealized images in virtual environments, do they become more alienated from real-life interactions? How should we balance the convenience of this technology with the potential loss of humanity? Therefore, as we explore VR as a new social tool, we must carefully consider its potential impact on our emotions, values, and social structures to better prepare for the future of human-machine symbiosis.

AI Emotional Counseling: Genuine Care or Programmatic Routine?
Millions of users already turn to ChatGPT and specialized mental health chatbots. These "AI friends" offer convenient, affordable emotional counseling, and some doctors are even using AI to help phrase diagnoses and treatment plans more empathetically. Some experts have given positive assessments, describing such AI as "empathetic" in research and arguing that, being immune to emotional fluctuation and burnout, it may express care more consistently and tirelessly than humans. But there are also many dissenting voices. Critics question whether AI can truly empathize with human emotions and situations, and worry that people will come to rely on machines that merely "pretend to care" for emotional support. Some even fear that the rise of such "pseudo-empathetic" AI could alter our very definition of empathy and, in turn, change how people treat one another.
As a defining human trait, empathy is inherently tied to social interaction. We understand it instinctively, yet a precise definition remains elusive. A recently published paper reviewing 52 studies from 1980 to 2019 found no consensus on the concept of "empathy." However, Jakob Heikkansson, a psychologist at Stockholm University in Sweden and one of the paper's authors, noted that some agreement does exist across disciplines: first, empathy requires recognizing another person's emotions; second, it requires sharing those emotions to some degree while still keeping them separate from one's own.
AI appears to have made significant progress in recognizing human emotions in recent years. Most AI chatbots are built on large language models, which generate responses by predicting likely word sequences. Models like ChatGPT can often pick up the emotion in a user's input and respond accordingly.
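The "predicting word sequences" idea can be shown at toy scale. The sketch below is an assumption-laden illustration, not how ChatGPT works: real LLMs use neural networks over subword tokens, whereas this uses simple bigram counts over a tiny made-up corpus.

```python
# Toy sketch (illustrative only): next-word prediction via bigram counts.
# Real large language models learn far richer statistics, but the core
# task is the same: given context, predict the most likely continuation.

from collections import Counter, defaultdict

corpus = "i feel sad today . i feel happy today . i feel sad again".split()

# Count which word follows each word in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently observed after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("feel"))  # -> "sad" ("sad" follows "feel" most often)
```

A model like this "responds appropriately" to emotional words only because of statistical regularities in its training text, which is exactly the distinction the critics quoted below are drawing.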
"It's not surprising that AI can do this," said Jodi Halpern, a bioethicist at the University of California, Berkeley. "But it doesn't mean it possesses true empathy."
Human empathy develops in real interactions and requires ongoing feedback to adjust responses. It also requires a certain intuition to understand the other person's situation. For example, if a patient cries upon learning she's pregnant, knowing she's been trying for years to conceive can help us discern whether the tears are of joy or sadness. However, current AI is unable to understand these subtle nuances of human emotion. More importantly, Halpern points out, AI is unable to develop feelings for the people it interacts with. “AI simply provides a product—one that pretends to speak with the same empathy as a human, but in reality, it has no empathy.”