You might have seen the GIF above floating round the ‘net recently. It comes from a recent TED talk given by Boston Dynamics, and shows the company’s latest bipedal Atlas robots failing — delightfully — to perform some light office admin. Now Boston Dynamics’ bots are no strangers to going viral, but usually they’re framed as harbingers of the robo-apocalypse. This time round the reaction has been much more sympathetic. Even a little affectionate.
“I legitimately feel sorry for a robot. The future is going to be weird,” said one commenter on Reddit, where the GIF was submitted to the r/aww sub-reddit alongside adorable kittens and puppies. “That was the cutest thing I’ve ever seen a robot do. Poor guy tried so hard,” said another in the same thread.
But why exactly are we more accepting of Boston Dynamics’ bots when they’re falling over? Well, according to a group of scientists from the Center for Human-Computer Interaction in Salzburg, Austria, it might be for the same reason we like people who make errors: because it makes them more relatable and more approachable. In other words — it makes them seem human.
The scientists in Salzburg, led by research fellow Nicole Mirnig, tested this hypothesis recently by setting up a task in which 45 human volunteers had to build LEGO creations with help from a small, humanoid robot. In some of the tests, Mirnig and her colleagues programmed the bot to make simple mistakes like repeating words or failing to grasp objects. Each time round they asked the volunteers to rate the robot on a number of criteria including likability, anthropomorphism, and perceived intelligence. The results showed that when the robot made mistakes, people liked it more.
Now, it’s tricky to say definitively why mistakes make robots more likable, but Mirnig’s theory is that it has something to do with the ‘Pratfall Effect’ — a phenomenon in social psychology whereby we like individuals more when they mess up. Crucially, though, this effect is very context-dependent. We don’t like people who make mistakes all the time; but we do like it when otherwise reliable individuals make small errors. And, according to Mirnig, robots fit into this latter category pretty well.
“Research has shown that people form their opinions and expectations about robots to a substantial proportion on what they learn from the media,” Mirnig told Digital Trends. “Those media entail movies in which robots are often portrayed as perfectly functioning entities, good or evil. Upon interacting with a social robot themselves, people adjust their opinions and expectations based on their interaction experience. I assume that interacting with a robot that makes mistakes, makes us feel closer and less inferior to technology.”
If these findings hold true, then companies in the future might pre-program robots to make slight errors when they interact with us. These would be carefully calculated — enough to register, but not so serious that they impinge on the interaction. In a way, nothing could be more human.