{"id":41250,"date":"2021-08-24T08:41:56","date_gmt":"2021-08-24T12:41:56","guid":{"rendered":"https:\/\/www.kaspersky.com\/blog\/?post_type=emagazine&#038;p=41250"},"modified":"2022-07-27T07:51:34","modified_gmt":"2022-07-27T11:51:34","slug":"ai-ethics-beth-singler","status":"publish","type":"emagazine","link":"https:\/\/www.kaspersky.com\/blog\/secure-futures-magazine\/ai-ethics-beth-singler\/41250\/","title":{"rendered":"The mirror we don&#8217;t recognize ourselves in: Talking AI with Dr. Beth Singler"},"content":{"rendered":"<p>One of many experts appearing in Tomorrow Unlocked\u2019s new audio series <a href=\"http:\/\/www.tomorrowunlocked.com\/fastforward\" target=\"_blank\" rel=\"noopener nofollow\"><em>Fast Forward<\/em><\/a> is <a href=\"https:\/\/bvlsingler.com\/\" target=\"_blank\" rel=\"noopener nofollow\">Dr. Beth Singler<\/a>, anthropologist and Junior Research Fellow in artificial intelligence at University of Cambridge.<\/p>\n<p>Dr. Singler (<a href=\"https:\/\/twitter.com\/BVLSingler\" target=\"_blank\" rel=\"noopener nofollow\">@BVLSingler<\/a>) examines the social, ethical and philosophical implications of artificial intelligence and robotics. She has spoken at Edinburgh Science Festival, London Science Museum and New Scientist Live, and been interviewed by New Scientist, Forbes and the BBC.<\/p>\n<p>I interviewed Dr. Singler about AI and the future of work.<\/p>\n<div id=\"attachment_41252\" style=\"width: 1034px\" class=\"wp-caption alignnone\"><img decoding=\"async\" aria-describedby=\"caption-attachment-41252\" class=\"wp-image-41252 size-large\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2021\/08\/17052814\/beth_singler-1024x683.jpg\" alt=\"\" width=\"1024\" height=\"683\"><p id=\"caption-attachment-41252\" class=\"wp-caption-text\">Dr. Beth Singler<\/p><\/div>\n<p><strong>Ken:<\/strong> In your work, you engage people in conversations about the implications of artificial intelligence (AI) and robotics. 
What do people think AI is?<\/p>\n<p><strong>Beth: <\/strong>For the public, it isn\u2019t one thing. People point to examples of AI being implemented, but it has different definitions for people. They draw presumptions from science fiction and media accounts of dangerous AI and scary robots. It\u2019s a malleable term \u2013 people say \u2018the algorithm\u2019 and mean AI.<\/p>\n<p>Many think of AI in the workplace replacing human physical work, but we see AI taking on more knowledge labor and even emotional labor.<\/p>\n\t\t\t<div class=\"c-promo-product\">\n\t\t\t\t\t\t<article class=\"c-card c-card--link c-card--medium@sm c-card--aside-hor@lg\">\n\t\t\t\t<div class=\"c-card__body  \">\n\t\t\t\t\t<header class=\"c-card__header\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t<p class=\"c-card__headline\">AI and Machine Learning in Cybersecurity<\/p>\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<h3 class=\"c-card__title \"><span>How they'll shape the future<\/span><\/h3>\n\t\t\t\t\t\t\t\t\t\t\t<\/header>\n\t\t\t\t\t\t\t\t\t\t\t<div class=\"c-card__desc \">\n\t\t\t\t\t\t\t<p>AI and machine learning are helping us fight cybercriminals more effectively than ever before.<\/p>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t<div class=\"c-card__aside\">\n\t\t\t\t\t<a href=\"https:\/\/www.kaspersky.com\/resource-center\/definitions\/ai-cybersecurity\" class=\"c-button c-card__link\" target=\"_blank\" rel=\"noopener nofollow\">How we use AI<\/a>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<\/article>\n\t\t<\/div>\n\t\n<p><strong>Ken: <\/strong>What kind of emotional tasks can AI do?<\/p>\n<p><strong>Beth: <\/strong>We increasingly see interfaces with AI that give simulated emotional responses. AI assistants do tasks for you but pleasantly and civilly. Call center work is already highly structured and scripted \u2013 an AI assistant or chatbot can take over that pleasantry system. 
How workplaces implement AI will influence how we connect with other humans.<\/p>\n<p><strong>Ken: <\/strong>Are we creating a human-machine social world we\u2019ll have to learn to interact with?<\/p>\n<p><strong>Beth: <\/strong>Yes. We\u2019re seeing these human-machine interactions playing out in different places \u2013 in the home, workplace, and care settings. We\u2019re having to understand that relationship and teach our children to negotiate it. There are discussions on whether children should be polite when using AI assistants. We\u2019re coming up with a new social format for interactions with AI.<\/p>\n<p><strong>Ken: <\/strong>I thought, <em>of course<\/em> you should be polite to machines \u2013 if only because one day they\u2019ll look at everything we\u2019ve said and done and judge us accordingly. I want to be on the right side of them.<\/p>\n<p><strong>Beth: <\/strong>We also see arguments that you should be civil to AI assistants because this is how we should behave to other entities, whether human or non-human \u2013 that it reflects our natures. If we aren\u2019t civil to machines, it says more about us than about their needs. There are many different answers to questions of politeness to AI assistants.<\/p>\n<p><strong>Ken: <\/strong>People find conversations with <a href=\"https:\/\/en.wikipedia.org\/wiki\/Cleverbot\" target=\"_blank\" rel=\"noopener nofollow\">Cleverbot<\/a> amusing when it asks things like, \u201cDon\u2019t you wish you had a body?\u201d or \u201cWhat is God to you?\u201d They don\u2019t consider that Cleverbot thinks it\u2019s appropriate to ask because a human asked <em>it<\/em> those questions. We\u2019re looking into a strange, distorting mirror and not recognizing our reflection.<\/p>\n<p><strong>Beth: <\/strong>Absolutely. 
There\u2019s a reason the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Black_Mirror\" target=\"_blank\" rel=\"noopener nofollow\">Black Mirror<\/a> TV series is called Black Mirror \u2013 it\u2019s a reflective surface for understanding ourselves. AI and machine responses come from data sets, and those involve biases.<\/p>\n<p>It\u2019s a moment to reflect, for instance, on questions of personhood before we even get to anything like <a href=\"https:\/\/en.wikipedia.org\/wiki\/Artificial_general_intelligence\" target=\"_blank\" rel=\"noopener nofollow\">artificial general intelligence (AGI)<\/a> or <a href=\"https:\/\/en.wikipedia.org\/wiki\/Superintelligence\" target=\"_blank\" rel=\"noopener nofollow\">superintelligence<\/a>. Should we be civil? If we say rude or sexist things to a female AI assistant, does that matter? These questions come out again and again.<\/p>\n<p>I\u2019m an anthropologist, meaning I study what humans do and think. These big questions are integral to our concept of what AI is. In my work engaging the public, their sometimes hopeful, sometimes fearful responses have shown me that this will be a conversation we\u2019ll have for some time yet.<\/p>\n<blockquote><p>Talking about AI and the future of work gets down to big questions like, what is the human being for? If we define ourselves in terms of what we do and what we produce, we\u2019ll fear replacement.<\/p>\n<\/blockquote>\n<p><strong>Ken: <\/strong>I was at an airport buying a train ticket one afternoon. It was quiet, and the woman behind the counter said, \u201cYou should have been here yesterday \u2013 the automatic ticket machines had recalibrated and were giving wrong tickets. People adjust. Machines don\u2019t.\u201d I wondered if this ability to adjust is part of our relationship with machines.<\/p>\n<p><strong>Beth: <\/strong>It\u2019s interesting how much we adjust to machines. 
With the airport systems that use facial recognition software, I often have to take off my glasses, change my hair or bob down. We adjust ourselves to be accepted by the system.<\/p>\n<p>You see this in how automation is changing the workplace. Some job interviews now involve facial recognition software, so we try to smile more in a video interview. We\u2019re increasingly making changes to fit the machine-based system.<\/p>\n<p><strong>Ken:<\/strong> It suggests an element of trust. Where does trust fit in our relationship with machines?<\/p>\n<p><strong>Beth: <\/strong>Trust is key. We want to believe software that observes our responses in job interviews is fair and neutral, but we have examples where trust is let down.<\/p>\n<p>In the UK in 2020, an <a href=\"https:\/\/www.bbc.com\/news\/explainers-53807730\" target=\"_blank\" rel=\"noopener nofollow\">algorithm that helped grade student exam papers<\/a> damaged public trust \u2013 it penalized students studying at less high-achieving schools. In my work, I see examples of people trusting too much \u2013 they have an image of a superintelligence that doesn\u2019t exist yet.<\/p>\n<blockquote><p>There\u2019s a term, \u201cblessed by the algorithm\u201d \u2013 people feel their YouTube content is promoted because the algorithm decided they should be lucky. They use the language of religious belief.<\/p>\n<\/blockquote>\n<p>Society can only trust technology it understands. Digital literacy \u2013 understanding what AI is and isn\u2019t \u2013 is key to that.<\/p>\n<p><strong>Ken:<\/strong> We tend to understand things better as fiction. It\u2019s a way to get a grip on the world. But I get the feeling fiction\u2019s not a grip anymore, but a stranglehold. Is that fair?<\/p>\n<p><strong>Beth: <\/strong>I enjoy science fiction accounts of AI in their many interpretations, fears and hopes.<\/p>\n<p>One of the hazards is a strict, negative story used too often. 
I\u2019m a fan of <a href=\"https:\/\/en.wikipedia.org\/wiki\/Terminator_(franchise)\" target=\"_blank\" rel=\"noopener nofollow\">the Terminator film franchise<\/a>, but I see how dystopian imagery of robot uprisings shapes people\u2019s views of AI. And AI making crucial decisions about our future \u2013 whether we get a job or a mortgage, or how we\u2019re treated in hospital \u2013 may also be overshadowed by Terminator-like stories.<\/p>\n<p><strong>Ken:<\/strong> And it stops us noticing when AI does good things, like in medicine and traffic control. The robots are already among us, but they don\u2019t usually walk on two legs. They\u2019re more likely to be sorting out your airplane ticket.<\/p>\n<p><strong>Beth: <\/strong>There\u2019s a move toward making robots cuter and replicating child and animal forms to reduce those threatening associations from science fiction. Think of Arnold Schwarzenegger\u2019s Terminator versus the <a href=\"https:\/\/www.youtube.com\/watch?v=oJq5PQZHU-I\" target=\"_blank\" rel=\"noopener nofollow\">therapeutic robot PARO, modeled on a baby harp seal<\/a>.<\/p>\n<div style=\"width: 640px;\" class=\"wp-video\"><video class=\"wp-video-shortcode\" id=\"video-41250-1\" width=\"640\" height=\"360\" preload=\"metadata\" controls=\"controls\"><source type=\"video\/mp4\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2021\/08\/17053255\/AI_ethics_animated-header.m4v?_=1\"><\/source><a href=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2021\/08\/17053255\/AI_ethics_animated-header.m4v\">https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2021\/08\/17053255\/AI_ethics_animated-header.m4v<\/a><\/video><\/div>\n<p><strong>Ken: <\/strong>Is there an element of trying to make work more fun? Perhaps work becomes more like play if you have an AI assistant who helps with the emotional labor?<\/p>\n<p><strong>Beth: <\/strong>Yes. 
There\u2019s a history of trying to gamify the workplace \u2013 developing \u2018third space\u2019 options that involve games or places where you can nap. Perhaps how we apply AI is a part of how we make the workplace more enjoyable. If our software chatted back to us, was entertaining and responded to us, it might seem less laborious.<\/p>\n<p><strong>Ken: <\/strong>Going back to emotional labor, programs could soften the edges of work relationships, whether online or in an office \u2013 I can imagine something like an \u2018emotional Roomba\u2019 (robot vacuum cleaner) allowing for moments of interaction.<\/p>\n<p><strong>Beth: <\/strong>We see examples of AI mediating between humans in conversation, like machine learning algorithms suggesting how to respond to emails or warning your tone is too harsh \u2013 softening the edges of our interactions at work is a developing space.<\/p>\n<p><strong>Ken: <\/strong>After some emails I\u2019ve had, I see the value in something like that.<\/p>\n<p><strong>Beth: <\/strong>I also saw an application for divorced or divorcing couples that helps keep conversations amicable for the benefit of any children. A machine learning algorithm warns you things like, perhaps you\u2019re being a bit sarcastic.<\/p>\n<p><strong>Ken: <\/strong>I\u2019m scared of an algorithm that understands sarcasm. That will be the end of humanity.<\/p>\n<p><strong>Beth: <\/strong>There\u2019s a wonderful <a href=\"https:\/\/www.tomgauld.com\/\" target=\"_blank\" rel=\"noopener nofollow\">Tom Gauld<\/a> cartoon about <a href=\"https:\/\/www.newscientist.com\/article\/0-tom-gaulds-attempts-to-create-a-sarcastic-ai-are-really-genius\/\" target=\"_blank\" rel=\"noopener nofollow\">scientists trying to create a sarcastic bot<\/a>. And the bot says to the scientist, \u201cIt\u2019s going great. 
This guy is a real genius.\u201d<\/p>\n<p><strong>Ken: <\/strong>What thought about AI and the future of work would you most like people to take away?<\/p>\n<p><strong>Beth: <\/strong>I\u2019d like people to consider how much we should change our behavior in relation to AI in the workplace. People don\u2019t normally interact in purely rational ways. If we curtail that normal human messiness, we\u2019re not anthropomorphizing AI but robo-morphizing humans. If we make ourselves smile more to do well in an interview with facial recognition software, we limit ourselves. Although we might see AI as a human simulation, do <em>we<\/em> become a human simulation in response to AI?<\/p>\n<p><a href=\"https:\/\/www.tomorrowunlocked.com\/fastforward\" target=\"_blank\" rel=\"noopener nofollow\">Listen to Tomorrow Unlocked\u2019s Fast Forward audio series for more expert views<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Should we be polite to AI assistants? Are we changing ourselves just so that AI can understand us? Dr. 
Beth Singler of University of Cambridge on how we see AI.<\/p>\n","protected":false},"author":2566,"featured_media":41251,"template":"","coauthors":[3758],"class_list":{"0":"post-41250","1":"emagazine","2":"type-emagazine","3":"status-publish","4":"has-post-thumbnail","6":"emagazine-category-artificial-intelligence","7":"emagazine-category-emerging-tech","8":"emagazine-category-fast-forward","9":"emagazine-tag-ai","10":"emagazine-tag-ethics","11":"emagazine-tag-future-of-work","12":"emagazine-tag-robots"},"hreflang":[{"hreflang":"x-default","url":"https:\/\/www.kaspersky.com\/blog\/secure-futures-magazine\/ai-ethics-beth-singler\/41250\/"}],"acf":[],"_links":{"self":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/emagazine\/41250","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/emagazine"}],"about":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/types\/emagazine"}],"author":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/users\/2566"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/media\/41251"}],"wp:attachment":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/media?parent=41250"}],"wp:term":[{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/coauthors?post=41250"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}