{"id":50030,"date":"2023-12-13T07:08:13","date_gmt":"2023-12-13T12:08:13","guid":{"rendered":"https:\/\/www.kaspersky.com\/blog\/?post_type=emagazine&#038;p=50030"},"modified":"2023-12-15T04:44:29","modified_gmt":"2023-12-15T09:44:29","slug":"insight-story-ai-ethics","status":"publish","type":"emagazine","link":"https:\/\/www.kaspersky.com\/blog\/secure-futures-magazine\/insight-story-ai-ethics\/50030\/","title":{"rendered":"As AI&#8217;s influence rapidly expands, here&#8217;s how business ethics must keep up"},"content":{"rendered":"<p>We live in a world where algorithms make decisions and data fuels innovation, which makes ethical considerations more critical than ever for business. Companies must balance using new technology for competitive advantage with preserving integrity and protecting customers.<\/p>\n<p>In our podcast Insight Story, experts Tomoko Yokoi (Switzerland), senior business executive and researcher at the <a href=\"https:\/\/www.imd.org\/centers\/dbt\/imd-digital-business-transformation-center\/\" target=\"_blank\" rel=\"noopener nofollow\">Global Centre for Digital Business Transformation<\/a>, IMD Business School, and Andy Crouch (UK), consultant and co-founder of ethical-AI natural language processing company <a href=\"https:\/\/www.akumen.co.uk\/\" target=\"_blank\" rel=\"noopener nofollow\">Akumen<\/a>, outline how AI biases can impact business and what steps businesses can take to ensure fairness. Kaspersky Global Research and Analysis Team\u2019s Dr. 
Amin Hasbini expands on the privacy and responsible data use implications.<\/p>\n<p><iframe style=\"border: none;min-width: min(100%, 430px);height: 300px\" height=\"300\" scrolling=\"no\" src=\"https:\/\/www.podbean.com\/player-v2\/?i=pzx4d-151c103-pb&amp;from=pb6admin&amp;pbad=0&amp;square=1&amp;share=1&amp;download=1&amp;rtl=0&amp;fonts=Arial&amp;skin=1b1b1b&amp;font-color=auto&amp;logo_link=episode_page&amp;btn-skin=2baf9e&amp;size=300\" width=\"100%\"><\/iframe><\/p>\n<h2>Not all AI is created ethically equal<\/h2>\n<div id=\"attachment_50034\" style=\"width: 410px\" class=\"wp-caption aligncenter\"><img decoding=\"async\" aria-describedby=\"caption-attachment-50034\" class=\"wp-image-50034 size-full\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2023\/12\/11052346\/Andy-Crouch.jpg\" alt=\"\" width=\"400\" height=\"402\"><p id=\"caption-attachment-50034\" class=\"wp-caption-text\">Andy Crouch, business development consultant, Akumen<\/p><\/div>\n<p>Andy\u2019s company Akumen found a problem that needed solving. \u201cScores out of five are useful, but we wanted insight from written responses like product reviews, and there was no way to do it. The team created an AI solution to identify meaning like topics, emotions and sentiment. Sentiment measures opinion \u2013 positive, negative or neutral \u2013 but emotions drive behavior. It works on text feedback anywhere, which might be about consumer goods, healthcare or anything else.\u201d<\/p>\n<p>Their approach uses AI differently from generative AI tools like <a href=\"https:\/\/chat.openai.com\/auth\/login\" target=\"_blank\" rel=\"noopener nofollow\">ChatGPT<\/a>. \u201cOur AI is rule-based, human-created and human-curated. It\u2019s completely transparent and there are no black-box algorithms as with large language models. We can dive in and make rules more nuanced if we recognize bias. 
With large language models, that would be complex and expensive.\u201d<\/p>\n<p>Andy expands on generative AI\u2019s limits for truly understanding people. \u201cWe asked ChatGPT how many emotions humans experience \u2013 it said 138,000. That doesn\u2019t help us understand what drives behavior. Our platform has 22 emotions \u2013 enough to see what drives behavior. Through our partner, <a href=\"https:\/\/www.civi.com\/artificial-intelligence\/\" target=\"_blank\" rel=\"noopener nofollow\">Civicom<\/a>, we\u2019re helping the UK\u2019s National Health Service (NHS) to understand what patients and staff experience.\u201d<\/p>\n<p>And that understanding can improve lives:<\/p>\n<blockquote><p>Using AI to understand people\u2019s emotions and what they\u2019re talking about, you can quickly extract reliable insights. And if anyone questions things, you can show why the system\u2019s highlighted this and, if needed, modify.<\/p>\n<cite><p>Andy Crouch, consultant and co-founder, Akumen<\/p><\/cite><\/blockquote>\n<p>Large language models use big data pools, but there are also more contained, enterprise-level tools like <a href=\"https:\/\/openai.com\/blog\/introducing-chatgpt-enterprise\" target=\"_blank\" rel=\"noopener nofollow\">ChatGPT Enterprise<\/a> that businesses can furnish with their own data and control how they use it.<\/p>\n<p>Tomoko sees enterprise-level tools as useful but notes they can\u2019t do what big data can do. \u201cOrganizations are developing new functions around AI, like data annotators, who clean data before it goes into models. But is it foolproof? 
The beauty of using data from everywhere is it gives you insights you otherwise wouldn\u2019t get.\u201d<\/p>\n<h2>Choosing ethical suppliers<\/h2>\n<p>Luckily for companies using AI ethically, more businesses are adopting digital responsibility policies and choosing ethics-first suppliers.<\/p>\n<p>Tomoko gives an example. \u201c<a href=\"https:\/\/www.telekom.com\/en\" target=\"_blank\" rel=\"noopener nofollow\">Deutsche Telekom<\/a> has been a pioneer in AI ethics. They\u2019ve trained all employees to ensure AI ethics are distributed throughout the organization. At the same time, they have about 300 suppliers and ensure AI ethics requirements are in all their contracts. So it goes beyond the boundaries of the company.\u201d<\/p>\n<p>But many businesses don\u2019t know where to start. Tomoko says, \u201c<a href=\"https:\/\/www.imd.org\/ibyimd\/technology\/how-organizations-navigate-ai-ethics\/\" target=\"_blank\" rel=\"noopener nofollow\">Over 250 companies have committed to AI ethics<\/a>, but codified mechanisms only help if they change behavior. How can we live these principles and ideals? External experts can help, and there\u2019s a case for individuals taking responsibility, which will have a collective impact.\u201d<\/p>\n<p>She suggests that how companies frame AI ethics matters. \u201cYou can see AI ethics as value or as compliance. If it\u2019s compliance, it will be cost- or risk-driven. But AI ethics could also be a competitive advantage.\u201d<\/p>\n<p>Andy compares AI ethics to health and safety. \u201cIf you have a health and safety director, it\u2019s only one person\u2019s responsibility. Change won\u2019t happen unless everyone understands health and safety\u2019s importance, and especially that it drives productivity and revenue.\u201d<\/p>\n<p>The competitive advantage is real. 
McKinsey research found <a href=\"https:\/\/www.mckinsey.com\/capabilities\/quantumblack\/our-insights\/why-digital-trust-truly-matters\" target=\"_blank\" rel=\"noopener nofollow\">72 percent of customers considered a business\u2019s AI policy before making an AI-related purchase<\/a>.<\/p>\n<p>Tomoko highlights the importance of backing up policies with action.<\/p>\n<blockquote><p>Companies making public commitments must change as an organization, embedding new practices. Have a grand goal of committing to AI ethics and digital responsibility, but divide it into tangible, more easily executed sub-goals.<\/p>\n<cite><p>Tomoko Yokoi, senior business executive and researcher, Global Centre for Digital Business Transformation, IMD Business School<\/p><\/cite><\/blockquote>\n<h2>Which AI issues should companies care about?<\/h2>\n<p>Tomoko outlines three places to look. \u201cFirst, consider the software development lifecycle. If you\u2019re planning to develop an AI product, think of how it\u2019s designed. Look for bias in the data.<\/p>\n<p>\u201cSecond, once it\u2019s being developed, although many companies say they\u2019re implementing AI ethics, people developing AI-driven products don\u2019t know how to apply those principles. So, look at how people use ethical principles in day-to-day software development.<\/p>\n<p>\u201cThird, we test products in controlled environments. Once it launches, ask who is monitoring it and how we ensure it doesn\u2019t gather bias and that people use it correctly.\u201d<\/p>\n<p>Tomoko is part of IMD Business School and knows that what future executives learn about AI ethics will shape the ethical behavior of the companies they go on to lead. She says, \u201cFirst, we say everyone has a responsibility to these issues that goes beyond the company. You need to be aware of this responsibility, but also be able to make others in your team aware.\u201d<\/p>\n<p>Second, \u201cWhat type of organizations do we want to build? 
We coach people to be able to handle multiple goals \u2013 not only profit but also social, environmental and ethical goals. We want them to walk away thinking of the future.\u201d<\/p>\n<p>Andy drills down into the data AI is using. \u201cUnderstand how the AI model is built. Is the data you\u2019re analyzing through that AI model ethically sourced, and are you using it ethically? The lack of transparency over large language models leaves them ripe for ethical risk and bias.\u201d<\/p>\n<p>AI training data bias can have life-threatening impacts. <a href=\"https:\/\/restofworld.org\/2023\/ai-translation-errors-afghan-refugees-asylum\/\" target=\"_blank\" rel=\"noopener nofollow\">Poor AI translations have been found to jeopardize asylum claims<\/a>. Andy sees <a href=\"https:\/\/research.ibm.com\/blog\/retrieval-augmented-generation-RAG\" target=\"_blank\" rel=\"noopener nofollow\">retrieval-augmented generation (RAG)<\/a>, which draws on more carefully vetted datasets, as part of the solution.<\/p>\n<p><span class=\"embed-youtube\" style=\"text-align:center; display: block;\"><iframe class=\"youtube-player\" type=\"text\/html\" width=\"640\" height=\"390\" src=\"https:\/\/www.youtube.com\/embed\/T-D1OfcDW1M?version=3&amp;rel=1&amp;fs=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent\" frameborder=\"0\" allowfullscreen=\"true\"><\/iframe><\/span><\/p>\n<h2>Can we have secure and well-regulated AI?<\/h2>\n<p>Dr. Amin Hasbini, Head of Research Centre, Middle East, Turkey and Africa for Kaspersky Global Research and Analysis Team, thinks AI ethical standards are needed. \u201cAI won\u2019t self-define its ethics. Ethical standards must be programmed in.\u201d<\/p>\n<p>Since there is almost no way the public can evaluate, critique or improve AI ethics, regulation must play a part, according to Amin. \u201cWe need security and safety by design, and continuous verification of it. 
That would require transparency, especially from big tech vendors, and letting the public influence how these technologies develop.\u201d<\/p>\n<p>He likens the challenge to that of regulating social media. \u201cWe\u2019re asking people to adopt technologies that can do much damage without giving them ways to ensure that doesn\u2019t happen. The same has happened before with social media, with it being used for data leaks and fake news. European Union regulation is moving fast around AI, but AI could be much more dangerous than social media \u2013 we need rules now.\u201d<\/p>\n<p>For improved ethical data use, Amin recommends asset management controls. \u201cIf well deployed, asset management controls allow data to be classified, including which is available to AI, which can be shared publicly and which needs to stay inside the organization.\u201d<\/p>\n<p>Andy says regulation is hard in this fast-moving space because no one knows what\u2019s coming next. \u201cI question anyone saying they know what will happen in the next six months or beyond. But there\u2019s a lot of fear and lobbying going on \u2013 so go slow. If your AI-driven capability can\u2019t deliver because it\u2019s non-compliant, ethically or otherwise, it will be damaging.\u201d<\/p>\n<p>However, he believes regulation is necessary. 
\u201cIt will be interesting to see how they regulate something that\u2019s not easily defined and morphs quickly, but we must protect those who need protecting.\u201d<\/p>\n<p>Kaspersky has recently proposed <a href=\"https:\/\/usa.kaspersky.com\/blog\/ethical-ai-usage-in-cybersecurity\/29008\/\" target=\"_blank\" rel=\"noopener\">six principles for ethical use of AI<\/a> in the cybersecurity industry, with transparency at the core.<\/p>\n<h2>Getting started with AI ethics<\/h2>\n<p>Our experts have straightforward advice for those business executives yet to approach AI ethics.<\/p>\n<div id=\"attachment_50035\" style=\"width: 409px\" class=\"wp-caption aligncenter\"><img decoding=\"async\" aria-describedby=\"caption-attachment-50035\" class=\"size-full wp-image-50035\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2023\/12\/11052506\/Tomoko-Yokoi.jpg\" alt=\"\" width=\"399\" height=\"399\"><p id=\"caption-attachment-50035\" class=\"wp-caption-text\">Tomoko Yokoi, senior business executive and researcher at Global Centre for Digital Business Transformation, IMD Business School<\/p><\/div>\n<p>Tomoko says, \u201cAs a mindset, remember the analog and digital worlds are the same. Your analog-world values should extend into the digital world.\u201d<\/p>\n<p>Andy highlights the need for both widespread knowledge and deep expertise. \u201cGet your whole team conversant with AI, but have a well-informed friend who lives and breathes this stuff to call when there are challenges.\u201d<\/p>\n<p>With headlines about AI taking our jobs and AI pioneers like <a href=\"https:\/\/mitsloan.mit.edu\/ideas-made-to-matter\/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai\" target=\"_blank\" rel=\"noopener nofollow\">Geoffrey Hinton sounding the alarm on unregulated AI perils<\/a>, it\u2019s easy to write off AI ethics as a problem too hard to fix. But these complex issues must be a priority.<\/p>\n<p>There are green shoots of change. 
In December 2023, the <a href=\"https:\/\/markets.businessinsider.com\/news\/stocks\/ibm-and-meta-launch-ai-alliance-in-collaboration-with-over-50-founding-members-1032875666\" target=\"_blank\" rel=\"noopener nofollow\">AI Alliance launched<\/a> to focus on developing AI responsibly, including safety and security tools. Its more than 50 founding members include Meta, IBM, CERN and Cornell. The message may be, \u2018Let\u2019s not move too fast and not break things.\u2019 With OpenAI, creator of ChatGPT, not invited to the party, could the tortoise of collective corporations beat the nimble hare of innovation?<\/p>\n<p>AI gives business the potential for great gains, but comes with great risks to reputation, security and privacy. With strong ethical AI policies translated into action and widespread knowledge among employees, businesses can have more confidence to take advantage of AI\u2019s many benefits.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>When AI can make or break your business, how do you harness its power while staying within ethical 
boundaries?<\/p>\n","protected":false},"author":2521,"featured_media":50031,"template":"","coauthors":[3452],"class_list":{"0":"post-50030","1":"emagazine","2":"type-emagazine","3":"status-publish","4":"has-post-thumbnail","6":"emagazine-category-artificial-intelligence","7":"emagazine-category-leadership","8":"emagazine-tag-audio","9":"emagazine-tag-insight-story","10":"emagazine-tag-podcast"},"hreflang":[{"hreflang":"x-default","url":"https:\/\/www.kaspersky.com\/blog\/secure-futures-magazine\/insight-story-ai-ethics\/50030\/"},{"hreflang":"en-us","url":"https:\/\/usa.kaspersky.com\/blog\/secure-futures-magazine\/insight-story-ai-ethics\/29525\/"}],"acf":[],"_links":{"self":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/emagazine\/50030","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/emagazine"}],"about":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/types\/emagazine"}],"author":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/users\/2521"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/media\/50031"}],"wp:attachment":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/media?parent=50030"}],"wp:term":[{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/coauthors?post=50030"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}