In the infamous Milgram experiment from the 1960s, test subjects willingly followed an experimenter’s orders to administer increasingly severe electric shocks to a screaming victim in another room -- despite their concern for the person’s welfare, and their own stress over doing such a horrible thing. (Both the “experimenter” and the “victim” were actors, of course, secretly working for Dr. Milgram.)
Dr. Mar Gonzalez-Franco at Microsoft Research and Dr. Mel Slater at the University of Barcelona recently recreated the Milgram experiment in virtual reality -- but this time, instead of being ordered to shock a human victim in another room, the test subjects were instructed to shock an avatar displayed right in front of them. And even knowing that the victim was just a simulation, the test subjects were still distressed:
Unlike in Milgram’s test, our participants were not deceived regarding the actual harm to the subject when his answers to the memory test were wrong—since he wasn’t a real person. Using this setup, we could ethically test how humans respond to authority.
The participants in the experiment nevertheless had difficulty with shocking the avatar: Although 85 percent of them completed the task, they exhibited measurable signs of stress. However, when we looked more closely at the actual interactions that people had with the virtual victim, we found that participants were trying to cheat, even if unconsciously, by giving cues to the avatar, signaling the right answers with a louder voice tone.
These findings were summarized in Scientific American last year, but I’ve since been talking with Dr. Gonzalez-Franco about their extraordinary implications. How is it even possible for people to feel empathy for a digital avatar?
“There is a strong presence illusion that enables the empathy mechanisms,” Dr. Gonzalez-Franco tells me. “Presence has been described as a combination of two factors: the plausibility illusion that the things that are happening are realistic. And the place illusion, the illusion that you are in a new location, in this case the examination room. In this experiment the plausibility is also greatly enhanced by the fact that the avatar is animated using a real actor performance. There is lip sync when it talks and it looks at you during the experiment, establishing eye contact. All in all creates a strong presence on the participants.”
Can she see this technology being used to identify actual sociopaths incapable of feeling empathy -- almost like a reverse Voight-Kampff test from Blade Runner?
“I don’t think so. I mean sociopaths are generally very good at hiding they are sociopaths from the rest.”
From my perspective, it seems that her test subjects' immersion in a VR rig is key to creating this empathy. After all, in the Grand Theft Auto franchise and countless other non-VR games, players happily torment and kill innocent virtual characters with little evident trauma.
That’s mostly possible, she tells me, “because presence is much reduced in that type of event, as compared to VR. The level of presence is very important and it is one of the reasons why there is a detachment from drone operators [remotely firing missiles] compared to soldiers on the ground.”
Whatever the study's real-world implications, the possibilities for creating powerful game experiences that evoke vivid moral dilemmas and genuine empathy are clear. Or as Dr. Mar Gonzalez-Franco puts it, "Having those games inside VR will increase the ethical impact of those games."
Read more about this study in Scientific American. In recent months, her colleague published a related study with equally fascinating implications -- more on that next week.
Video courtesy Dr. Gonzalez-Franco.