The mirror we don’t recognize ourselves in: Talking AI with Dr. Beth Singler

Should we be polite to AI assistants? Are we changing ourselves just so that AI can understand us? Dr. Beth Singler of the University of Cambridge on how we see AI.

Art by Paul Sizer


One of many experts appearing in Tomorrow Unlocked’s new audio series Fast Forward is Dr. Beth Singler, anthropologist and Junior Research Fellow in artificial intelligence at the University of Cambridge.

Dr. Singler (@BVLSingler) examines the social, ethical and philosophical implications of artificial intelligence and robotics. She has spoken at Edinburgh Science Festival, London Science Museum and New Scientist Live, and been interviewed by New Scientist, Forbes and the BBC.

I interviewed Dr. Singler about AI and the future of work.

Dr. Beth Singler

Ken: In your work, you engage people in conversations about the implications of artificial intelligence (AI) and robotics. What do people think AI is?

Beth: For the public, it isn’t one thing. People point to examples of AI being implemented, but the term means different things to different people. They draw on preconceptions from science fiction and media accounts of dangerous AI and scary robots. It’s a malleable term – people say ‘the algorithm’ and mean AI.

Many think of AI in the workplace replacing human physical work, but we see AI taking on more knowledge labor and even emotional labor.

Ken: What kind of emotional tasks can AI do?

Beth: We increasingly see interfaces with AI that give simulated emotional responses. AI assistants do tasks for you but pleasantly and civilly. Call center work is already highly structured and scripted – an AI assistant or chatbot can take over that pleasantry system. How workplaces implement AI will influence how we connect with other humans.

Ken: Are we creating a human-machine social world we’ll have to learn to interact with?

Beth: Yes. We’re seeing these human-machine interactions playing out in different places – in the home, workplace, and care settings. We’re having to understand that relationship and teach our children to negotiate it. There are discussions on whether children should be polite when using AI assistants. We’re coming up with a new social format for interactions with AI.

Ken: I thought, of course you should be polite to machines – if only because one day they’ll look at everything we’ve said and done and judge us accordingly. I want to be on the right side of them.

Beth: We also see arguments that you should be civil to AI assistants because this is how we should behave to other entities, whether human or non-human – that it reflects our natures. If we aren’t civil to machines, it says more about us than about their needs. There are many different answers to questions of politeness to AI assistants.

Ken: People find conversations with Cleverbot amusing when it asks things like, “Don’t you wish you had a body?” or “What is God to you?” They don’t consider that Cleverbot only asks those questions because humans first asked it the same things. We’re looking into a strange, distorting mirror and not recognizing our reflection.

Beth: Absolutely. There’s a reason the Black Mirror TV series is called Black Mirror – it’s a reflective surface for understanding ourselves. AI and machine responses come from data sets, and those involve biases.

It’s a moment to reflect, for instance, on questions of personhood before we even get to anything like artificial general intelligence (AGI) or superintelligence. Should we be civil? If we say rude or sexist things to a female AI assistant, does that matter? These questions come out again and again.

I’m an anthropologist, meaning I study what humans do and think. These big questions are integral to our concept of what AI is. In my work engaging the public, their sometimes hopeful, sometimes fearful responses have shown me that this will be a conversation we’ll have for some time yet.

Talking about AI and the future of work gets down to big questions like, what is the human being for? If we define ourselves in terms of what we do and what we produce, we’ll fear replacement.

Ken: I was at an airport buying a train ticket one afternoon. It was quiet, and the woman behind the counter said, “You should have been here yesterday – the automatic ticket machines had recalibrated and were giving wrong tickets. People adjust. Machines don’t.” I wondered if this ability to adjust is part of our relationship with machines.

Beth: It’s interesting how much we adjust to machines. With the airport systems that use facial recognition software, I often have to take off my glasses, change my hair or bob down. We adjust ourselves to be accepted by the system.

You see this in how automation is changing the workplace. Some job interviews now involve facial recognition software, so we try to smile more on video. We’re increasingly making changes to fit machine-based systems.

Ken: It suggests an element of trust. Where does trust fit in our relationship with machines?

Beth: Trust is key. We want to believe software that observes our responses in job interviews is fair and neutral, but there are examples where that trust has been let down.

In the UK in 2020, an algorithm used to assign student exam grades damaged public trust – it penalized students at less high-achieving schools. In my work, I also see examples of people trusting too much – they have an image of a superintelligence that doesn’t exist yet.

Take the phrase “blessed by the algorithm”: people feel their YouTube content is promoted because the algorithm decided they should be lucky. They use the language of religious belief.

Society can only trust technology it understands. Digital literacy – understanding what AI is and isn’t – is key to that.

Ken: We tend to understand things better as fiction. It’s a way to get a grip on the world. But I get the feeling fiction’s not a grip anymore, but a stranglehold. Is that fair?

Beth: I enjoy science fiction accounts of AI in their many interpretations, fears and hopes.

One hazard is a narrow, negative story told too often. I’m a fan of the Terminator film franchise, but I see how dystopian imagery of robot uprisings shapes people’s views of AI. And AI making crucial decisions about our future – whether we get a job or a mortgage, or how we’re treated in hospital – may also be overshadowed by Terminator-like stories.

Ken: And it stops us noticing when AI does good things, like in medicine and traffic control. The robots are already among us, but they don’t usually walk on two legs. They’re more likely to be sorting out your airplane ticket.

Beth: There’s a move toward making robots cuter and replicating child and animal forms to reduce those threatening associations from science fiction. Think of Arnold Schwarzenegger’s Terminator versus the therapeutic robot PARO, modeled on a baby harp seal.

Ken: Is there an element of trying to make work more fun? Perhaps work becomes more like play if you have an AI assistant who helps with the emotional labor?

Beth: Yes. There’s a history of trying to gamify the workplace – developing ‘third space’ options that involve games or places where you can nap. Perhaps how we apply AI is a part of how we make the workplace more enjoyable. If our software chatted back to us, was entertaining and responded to us, it might seem less laborious.

Ken: Going back to emotional labor, programs could soften the edges of work relationships, whether online or in an office – I can imagine something like an ‘emotional Roomba’ (robot vacuum cleaner) allowing for moments of interaction.

Beth: We see examples of AI mediating between humans in conversation, like machine learning algorithms suggesting how to respond to emails or warning that your tone is too harsh – softening the edges of our interactions at work is a developing space.

Ken: After some emails I’ve had, I see the value in something like that.

Beth: I also saw an application for divorced or divorcing couples that helps keep conversations amicable for the benefit of any children. A machine learning algorithm warns you about things like, perhaps you’re being a bit sarcastic.
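(As an editorial aside: the tone-warning idea Beth describes can be roughed out with off-the-shelf sentiment analysis. Below is a minimal sketch using NLTK’s VADER analyzer as a stand-in for the machine learning models such products might use; the warn_if_harsh helper and the -0.4 threshold are illustrative assumptions, not any real product’s implementation.)

```python
# A minimal sketch of an email "tone warning", assuming NLTK's VADER
# sentiment analyzer as a stand-in for a production ML model.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

def warn_if_harsh(draft: str, threshold: float = -0.4) -> str | None:
    """Return a warning if the draft's overall tone is strongly negative.

    The threshold is an arbitrary illustrative choice.
    """
    scores = SentimentIntensityAnalyzer().polarity_scores(draft)
    # 'compound' ranges from -1 (most negative) to +1 (most positive)
    if scores["compound"] < threshold:
        return "Your tone may come across as harsh – consider softening it."
    return None

# Example: a harsh draft triggers the warning, a softened one doesn't
print(warn_if_harsh("This report is useless and you clearly didn't try."))
print(warn_if_harsh("Thanks for the report – a few sections need another pass."))
```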

Ken: I’m scared of an algorithm that understands sarcasm. That will be the end of humanity.

Beth: There’s a wonderful Tom Gauld cartoon about scientists trying to create a sarcastic bot. And the bot says to the scientist, “It’s going great. This guy is a real genius.”

Ken: What thought about AI and the future of work would you most like people to take away?

Beth: I’d like people to consider how much we should change our behavior in relation to AI in the workplace. People don’t normally interact in purely rational ways. If we curtail that normal human messiness, we’re not anthropomorphizing AI but robo-morphizing humans. If we make ourselves smile more to do well in an interview with facial recognition software, we limit ourselves. Although we might see AI as a human simulation, do we become a human simulation in response to AI?

Listen to Tomorrow Unlocked’s Fast Forward audio series for more expert views


About authors

Ken Hollings is a writer and broadcaster who explores the relationship between culture and technology, specifically how they shape and influence each other. His books include Welcome to Mars, The Bright Labyrinth and The Space Oracle.