I’d like to start off today’s story with a quote from one of Kate Darling’s papers on Anthropomorphic Framing in Human-Robot Interaction:

People have a tendency to project life-like qualities onto robots. As we increasingly create spaces where robotic technology interacts with humans, this inclination raises ethical questions around use and policy. A human-robot-interaction experiment conducted in our lab indicates that framing robots through anthropomorphic language (like a personified name or story) can impact how people perceive and treat a robot. … I discuss concerns about anthropomorphizing robotic technology in certain contexts, but argue that there are also cases where encouraging anthropomorphism is desirable.

Two quick definitions according to Wikipedia, just so there’s no confusion: A robot is a machine capable of carrying out a complex series of actions automatically. Anthropomorphization is the attribution of human-like traits or form to non-human beings (e.g. “seeing” a face in the front of a car). If you’ve already started falling asleep after those first few lines, don’t worry: what follows will be rather unscientific!

So, I came across this topic while watching Lex Fridman’s talk with Kate Darling on YouTube a couple of days ago, and it got me thinking. For example, Lex Fridman mentioned that he makes a conscious effort to say “please” whenever he’s asking something of a robot. Do you do that too? I certainly don’t. Then again, I don’t really talk to robots that often, so there’s that.

The actual question here is: Why should you bother being nice to a robot? As far as we’re concerned, they don’t have any feelings. However, one of the many good points brought up in the talk was that being nice to robots might train our “empathy muscle”, while treating them badly might do the very opposite: we might then go on and transfer that behavior to human interaction. Also, treating things badly has few upsides in general (did shouting at your computer ever help?). So maybe the question should be: Why shouldn’t you bother being nice to a robot? In the end, it costs you nothing more than the fraction of a second it takes to add a “please” to the end of your sentence, and it might help keep you off that robot’s blacklist if it should happen to actually have feelings. If that really is the case, it might explain why that robot lawnmower of yours has been trying to kill you ever since you kicked it for going in the wrong direction.

Anyway, Kate Darling’s paper also mentions that some robots are specifically designed to be anthropomorphized, but that people will anthropomorphize robots even when they have a non-anthropomorphic design. I’ve certainly caught myself doing that! However, that might just be because I’ve been primed to do it by seeing likeable “robots” from my early childhood on, two examples being Microsoft’s Clippy and C-3PO from Star Wars.

C-3PO from Star Wars!

Here’s another interesting quote that emphasizes this:

… a CEO and employees of a company that develops medicine delivery robots observed hospital staff being friendlier towards robots that had been given human names. Even tolerance for malfunction was higher with anthropomorphic framing (“Oh, Betsy made a mistake!” vs. “This stupid machine doesn’t work!”).

I definitely tend to resort to the not-so-nice form of response, so that’s something I’ve got to work on, too…

Potential for Manipulation

Robots are increasingly being put to use in various workplaces, e.g. manufacturing, transportation systems, or the military. Personal households might follow soon (or already have), and in general we will be exposed to interaction with robots more and more often.

I think what’s interesting here is the potential for exploitation: Assuming that people form some sort of connection with personal household robots (or any other kind), would they be more susceptible to being manipulated by them? If it should at some point go as far as people marrying their robots (there are people who marry their dogs, so the idea is not that far-fetched), this will certainly be something to think about. Many manipulative mechanisms are already at work in various, much simpler forms of technology that we all know about.

However, maybe the underlying question is: Is manipulation okay as long as it leads to a positive outcome? Is the solution to give companies an incentive to manipulate us toward outcomes that are positive for them, but also for us and for society as a whole? What does a positive outcome even look like? These questions go far beyond the anthropomorphization of robots, of course, which is why I’ll stop here. What do you think our future interaction with robots will look like? Are you nice to robots – and if not, why not?

That’s all for this week – thanks for reading!