SUPPORT AND SAFETY
We should be willing to accept AI-human relationships without judging the people who form them.
This follows a general moral principle that most of us already accept: We should respect the choices people make about their intimate lives when those choices don’t harm anyone else. However, we can also take steps to ensure that these relationships are as safe and satisfying as possible.
First, governments should implement regulations to address the risks we already know about. They should, for instance, hold companies accountable when their chatbots suggest or encourage harmful behaviour.
Governments should also consider safeguards to restrict access by younger users, or at least to control the behaviour of chatbots that interact with young people. And they should mandate better privacy protections – though this is a problem that spans the entire tech industry.
Second, we need public education so people understand exactly what these chatbots are and the issues that can arise from their use. Everyone would benefit from full information about the nature of AI companions, but in particular we should develop curricula for schools as soon as possible.
While governments may need to consider some form of age restriction, the reality is that large numbers of young people are already using this technology, and will continue to do so. We should offer them non-judgmental resources to help them navigate its use in a manner that supports their well-being rather than stigmatising their choices.
AI lovers aren’t going to replace human ones. For all the messiness and agony of human relationships, we still (for some reason) pursue other people. But people will also keep experimenting with chatbot romances, if for no other reason than they can be a lot of fun.
Neil McArthur is Director, Centre for Professional and Applied Ethics at the University of Manitoba. This commentary first appeared on The Conversation.