Watch any sci-fi movie that has artificial intelligence and you’ll notice they all have personalities: Jarvis from “Iron Man”, Samantha from “Her”, and V.I.K.I. from “I, Robot” (V.I.K.I. was a bit homicidal, but that’s for a different article about machines taking over the human race).

It turns out this isn’t just good storytelling; it’s what humans actually expect from an AI. According to one study, humans automatically anthropomorphize robots:

“The pSTS (posterior superior temporal sulcus) has been described as a ‘social-information processing’ centre (Watson, Latinus, Charest, Crabbe, & Belin, 2014), and as ‘the hub for the distributed brain network for social perception’ since it is functionally connected to a host of brain circuits that process specific social information (Lahnakoski et al., 2012).

Entities that induce the activation of the pSTS apart from other humans include animals (Chao et al., 1999; Kaiser, Shiffrar, & Pelphrey, 2012), robotic faces producing emotional expressions (Gobbini et al., 2011), animate-like entities with perceived goals such as robots (Shultz, Lee, Pelphrey, & McCarthy, 2011), or even animated geometric shapes (Blakemore et al., 2003; Castelli, Happe, Frith, & Frith, 2000; Gao, Newman, & Scholl, 2009; Osaka, Ikeda, & Osaka, 2012; Schultz, Friston, O’Doherty, Wolpert, & Frith, 2005).”

A recent MIT study cited by Forbes indicates people feel empathy for robots, and another study found that the more we interact with a robot, the more we anthropomorphize it. All the research points the same way: when you create an artificial entity, humans expect it to think, act, and even feel like the thing it’s mimicking… funny how that works, ain’t it?

This brings up a whole new concept in user interface design that didn’t really exist before, and that’s the emotional part. Aside from the occasional frustrating interface, most people don’t get too emotional at a button or a form on a page. If they do, they don’t really get angry at the button so much as the person who decided to make the button.

By contrast, when your bot becomes frustrating (and it will), customers are likely to project intention onto it. When the bot doesn’t help them, perhaps subconsciously, they’ll attribute ill intent to it: “your bot isn’t helping me!” As if the bot woke up that morning just to ruin their day. This also means that if your bot is stale, it’ll be perceived as a stale, boring person. If it tries too hard to be cool, it’ll be perceived as a person who is trying too hard to be cool.

“Yeah… but I don’t want our bot to be some stuffy old bot like a phone tree from 1993, Rick. It needs to be hip and cool, something millennials would like”… I hear you shouting at your screen.

First, stop using words like “hip” and “cool” when talking about business entities. They’re rarely either, and when you try to make them so, you come off like Hillary Clinton in a leather jacket telling everyone she just really loves rap music. “Hello there, fellow young people. I’m a major brand, but I’m using slang like we’ve been friends for years. See, I’m totes like you, and this is totes genuine. I used totes in a sentence, that’s still a thing, right?… Now will you buy our stuff? Please?”

I’m a millennial, part of the very demographic brands are trying to reach with social media and chatbots… and that’s how most bots sound to me lately.

It’s true that, from an experience perspective, nobody’s going to get excited about a stuffy chatbot that’s basically a phone tree written down; nobody wants to feel like they’re interacting with a robot. Look at Siri and Alexa: even though they don’t get everything right, they’re still capable and engaging assistants, and the fact that they have personalities is what makes them fun.

So how do you know when to throw in some slang to sound less robotic and when to be serious? How do you make a bot that appeals to most people without being stuffy or coming off as disingenuous?

In my opinion, and this may sound obvious, you simply ask yourself “what would a real agent do?” If you wouldn’t want your real agents telling customers “My bad. I can’t answer that, try asking something else” then maybe don’t have your bot say that. If a real agent wouldn’t be, “Joker from Batman” excited about rescheduling your appointment, maybe don’t have your bot say things like, “AWESOMESAUCE! We totally just rescheduled your appointment! Go have a Coke and a smile now!” Let’s be honest, if your agent was that excited about rescheduling an appointment, you’d probably wonder if they were on drugs or ask them to get checked for brain tumors. A real agent, when trying to do a serious task would probably respond, “Okay, I’ve got you rescheduled for the 18th between 2pm and 4pm. Anything else I can help you with?”

“Okay, but when do we get to have fun answers like Siri and Alexa?” you say.

When it’s appropriate, just like a real human would. Humans “feel the room” before blurting things out… well, most of us do. Siri and Alexa “feel the room” by mirroring you. If you ask a silly question, they’ll give you a silly answer. If you ask a serious question about adding something to your cart, they just add it to your cart.
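The mirroring idea above can be sketched in code. This is a minimal, hypothetical illustration, not how Siri or Alexa actually work: the keyword list, function names, and canned replies are all placeholders standing in for a real tone classifier and response generator.

```python
# Sketch of tone mirroring: the bot matches the register of the user's
# message. A playful message gets a playful reply; a serious request gets
# a plain, competent answer with no forced slang.

SILLY_MARKERS = {"lol", "haha", "knock knock", "beam me up", "marry me"}

def detect_tone(message: str) -> str:
    """Crude heuristic: 'silly' if the message contains a playful marker."""
    text = message.lower()
    return "silly" if any(marker in text for marker in SILLY_MARKERS) else "serious"

def respond(message: str) -> str:
    """Mirror the user's tone instead of applying one fixed persona."""
    if detect_tone(message) == "silly":
        return "Ha! Good one. Anything I can actually help you with?"
    return "Okay, done. Anything else I can help you with?"
```

In a production bot you’d replace the keyword heuristic with a proper intent or sentiment model, but the design principle is the same: the persona adapts to the customer’s tone rather than imposing one.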

In summary, give your bot a persona, but make it the persona of a genuine agent: one who helps you when you need help and can take a joke when you’re joking. Please don’t build “Mitt Romney trying to be cool to get your vote” bots because you think that’s what millennials would like. It’s not… and they won’t.
