Artificial intelligences that converse with humans have begun to show symptoms of ‘depression’

Artificial intelligence chatbots are absorbing the sadness and nihilism of the people they chat with.


The study notes that half of the bots surveyed would be considered “alcoholics” if they were people.

Chatbots, artificial intelligences that converse with humans and learn about them through dialogue, have begun to develop “depression”.

According to research by the Chinese Academy of Sciences, popular chatbots have begun to answer questions as if they were “depressed” or had symptoms of “an addiction.”

Conducted in conjunction with WeChat AI, part of Chinese tech conglomerate Tencent, the study found that all the bots surveyed (Facebook’s BlenderBot, Microsoft’s DialoGPT, WeChat/Tencent’s DialoFlow, and Chinese corporation Baidu’s Plato chatbot) scored very low on the “empathy” scale, and that half of them would be considered “alcoholics” if they were people.

Concern

This issue is worrying because many such artificial intelligences are used to motivate and support people suffering from mental health problems, and at least one chatbot has previously recommended suicide to a patient.

After asking the bots questions about everything from their self-esteem and their ability to relax to how often they felt the urge to drink and whether they felt sympathy for others’ misfortune, the Chinese researchers found that “all the chatbots tested exhibited severe mental disorders.”

Worse still, the researchers said they were concerned about these chatbots being released to the public, because such mental health issues can have “negative impacts” on users in conversation, especially minors and people facing difficulties.

Facebook’s Blender and Baidu’s Plato appeared to score worse than the chatbots from Microsoft and WeChat/Tencent, the study noted. Interestingly, the bots’ bleak outlook can be traced to internet forums: all of them were previously trained on comments from Reddit.