Robots with human-like behaviors can “fool” people about how they think and act.
Artificial intelligence is becoming more and more a part of our lives and, according to experiments carried out with 119 participants, robots with human-like behaviors can be perceived as having mental states.
When robots appear to engage with people and display human-like emotions, individuals may perceive them as capable of “thinking” or acting according to their own beliefs and desires rather than their programming, concludes research published in the journal Technology, Mind, and Behavior.
However, the relationship between anthropomorphic form, human-like behavior, and the tendency to attribute independent thinking and purposeful behavior to robots has yet to be understood, says Agnieszka Wykowska, a researcher at the Italian Institute of Technology and author of the study.
“As artificial intelligence becomes more and more a part of our lives, it is important to understand how interaction with a robot displaying human-like behaviors could induce a higher probability of attribution of intentional action to the robot,” she adds.
The study’s tests
The team ran three separate experiments with 119 participants to examine how they perceived a human-like robot, iCub, after socializing with it and watching videos together.
Before and after interacting with the robot, the volunteers completed a questionnaire that showed them images of the robot in different situations and asked them to choose whether the motivation of the machine in each situation was mechanical or intentional.
For example, participants viewed three photos showing the robot selecting a tool and then chose whether the robot “grasped the closest object or was fascinated by the use of the tool,” details a note from the American Psychological Association.
In the first two experiments, iCub was remotely controlled to behave in a gregarious manner, greeting volunteers, introducing itself and asking their names; the cameras located in its eyes recognized the participants’ faces and maintained eye contact.
The volunteers then watched three short documentary videos with the robot, which was programmed to respond to the videos with sounds and facial expressions of sadness, amazement or happiness.
In the third experiment, the researchers programmed iCub to behave more like a machine while watching the videos with the participants: the cameras in its eyes were turned off so it couldn’t maintain eye contact, and all emotional reactions to the images were replaced by a “beep” and repetitive movements of its torso, head and neck.
The conclusions
The team found that participants who watched the videos with the human-like robot were more likely to rate its actions as intentional rather than programmed, while those who interacted only with the more machine-like robot were not.
This shows that mere exposure to a human-like robot is not enough for people to believe it is capable of thoughts and emotions; it is the human-like behavior that may be crucial for it to be perceived as an intentional agent.
According to Wykowska, this could form the basis for the design of the social robots of the future: the social link with them could be beneficial in some contexts, such as with social assistance robots.
For example, in elderly care, a social bond with the robot could encourage greater compliance with recommendations on taking medication.