And while users of Mystic Messenger could complete a storyline in about 11 days, Love and Deepspace’s main narrative is still ongoing, with some users having played the game since its global launch in January 2024.

For Ms Tan, the allure of the game goes beyond its narrative and the adrenaline it offers as she proceeds through the storyline. Her favourite character – and digital boyfriend – on Love and Deepspace, named Rafayel, also offers her comfort on a bad day.

Users can confide in their virtual partner, who will respond with words of comfort and affirmation. They can also feel their character’s heartbeat by touching his chest on the screen: the longer the user keeps a hand there, the faster the heartbeat becomes, mimicking the character’s nervous reaction to the user’s “physical touch”.

“If I call a friend, I would have to wait for their reply, explain everything, and they may not agree with me … But on these applications, there’s an immediate reaction, and watching them say these words to me makes me feel relieved and comforted,” said Ms Tan, though she added that she still prefers interacting with her friends in person.

Meanwhile, AI chatbots, too, are becoming more human-like, as many users have found.

For instance, AI models can mimic emotional responsiveness like a human, even though they are not truly sentient, said Dr Luke Soon, AI leader of digital solutions at audit and consultancy firm PwC Singapore.

This happens because of semantic mirroring, where the AI “reframes or reflects back your words in a way that shows empathy”, he said.

Agreeing, Dr Kirti Jain from technology consultancy Capgemini said: “While they don’t feel emotions, they’re designed to recognise and reflect emotional cues using advanced natural language processing.

“This allows them to respond with empathy, mirror tone and sentiment, and adapt to the flow of conversation, all of which helps users feel heard and understood.”

This makes AI a “meaningful conversation partner” that can emulate empathy without making demands or holding expectations of its own, said Dr Kirti.

Moreover, AI’s constant availability online makes it an attractive tool for emotional and social support, said Professor Jungpil Hahn, deputy director of AI Governance with the national programme AI Singapore.

“AI is not only available but also judgment-free, and more often than not quite sycophantic … There is no risk of rejection,” he added.

Sycophancy refers to an AI being overly flattering and agreeable, meaning it may validate doubts or reinforce negative emotions, something mental health experts have warned can become a mental health concern.

“Also, interacting with an AI reduces the social stigma and social costs of shame,” said Prof Hahn.

When it comes to seeking mental health support from AI, Dr Karen Pooh warned there are limitations and risks if AI is used as a substitute for professional mental health care.

“A qualified therapist conducts a comprehensive clinical assessment, which includes detailed history-taking, observation of verbal and non-verbal cues, and the use of validated diagnostic tools,” said Dr Pooh.

“AI simply cannot replicate this clinical sensitivity or flexibility, and is unable to contain and hold space for vulnerable individuals.”

She added that technology is also unable to personalise treatment plans. For example, it cannot “ask nuanced follow-up questions with clinical intent, read tone or affect, or identify inconsistencies in narratives the way a trained therapist can”.

“As a result, it risks offering inaccurate, overly simplistic, or even harmful suggestions.”

She added that there are also ethical and privacy concerns, as there is no doctor-patient privilege when talking to an AI.

Dr Pooh also said that AI is unable to manage crises or critical situations such as suicidal ideation, self-harm and psychosis.

There have been deaths linked to AI chatbot use. In 2024, the mother of a 14-year-old boy sued Character.AI – which allows users to create AI personas to chat with – after he died by suicide, alleging that the chatbot had encouraged him to take his own life.

WHEN TECH TAKES OVER HUMAN RELATIONSHIPS

Beyond the mental health risks of relying on software for one’s emotional needs, experts said there are also bigger-picture concerns for society if bots were to one day become people’s foremost companions.

For starters, the fast-paced nature of digital interactions may be reducing patience for deeper conversations and extended interactions, Dr Sng from NTU said.

“Overreliance on AI for emotional support may reduce opportunities to develop and practise human empathy, negotiation and vulnerability in real relationships, because AI chatbots can give you responses that they think you want to hear or would engage you the most,” he added.

“Real people don’t do that – they may disagree with you and tell you hard truths.”

He also said that AI tools are a double-edged sword.

“They can help socially anxious individuals gain confidence in communicating with other people … but they can also make it harder to communicate with real people because communicating with chatbots is ‘easier’.”
