{"id":18318,"date":"2017-08-31T15:34:04","date_gmt":"2017-08-31T19:34:04","guid":{"rendered":"https:\/\/www.kaspersky.com\/blog\/?p=18318"},"modified":"2020-04-08T13:37:22","modified_gmt":"2020-04-08T17:37:22","slug":"ai-fails","status":"publish","type":"post","link":"https:\/\/www.kaspersky.com\/blog\/ai-fails\/18318\/","title":{"rendered":"Why machine learning is not enough"},"content":{"rendered":"<p>Connected technologies are invading our lives more and more fully with each passing day. We may not even notice how natural it\u2019s become to ask Siri or Alexa or Google to interpret more of our human experience, and expect our cars to respond to the rules of the road fast enough to keep our hides intact. Some of us are still bothered by technologies such as public cameras feeding images to facial recognition software, but plenty aren\u2019t.<\/p>\n<p>At this point, it\u2019s easy to laugh at a lot of AI failures because on balance they\u2019re mostly funny (just forget about the potential for fatal outcomes). Well, we think as the machines march on, and as malware continues to evolve, that will shift. While it\u2019s still fun, we took a look at some other AI failures.<\/p>\n<h3>Dollhouse debacle<\/h3>\n<p>A classic example: A news program aired in California early this year set off something of a chain reaction. It was an AI mishap actually based on another AI mishap. Basically, reporting about Amazon Echo mistakenly ordering a dollhouse caused a bunch of Amazon Echos (which were, as usual, attentively listening to everything and not distinguishing the voice of their owner from other voices) to mistakenly order a bunch of dollhouses. 
Maybe don\u2019t play this clip at home.<\/p>\n<ul>\n<li><a href=\"https:\/\/usa.kaspersky.com\/blog\/voice-recognition-threats\/10855\/\" target=\"_blank\" rel=\"noopener noreferrer\">Read more about Amazon Echo and dollhouses<\/a>.<\/li>\n<\/ul>\n<h3>Fast-food flop<\/h3>\n<p>Burger King attempted to exploit the same bug and used their ads to engage viewers\u2019 voice-activated assistants. In a way, they succeeded. The real problem was a failure to anticipate human behavior: By activating a search for the iconic Whopper on collaborative site Wikipedia using Google Home, the fast-food giant all but assured users would mess with the Whopper entry. <a href=\"http:\/\/www.npr.org\/sections\/thetwo-way\/2017\/04\/13\/523740193\/ok-google-burger-king-hijacked-your-speakers-and-failed-pretty-quickly\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Which they did<\/a>.<br>\n<span class=\"embed-youtube\" style=\"text-align:center; display: block;\"><iframe class=\"youtube-player\" type=\"text\/html\" width=\"640\" height=\"390\" src=\"https:\/\/www.youtube.com\/embed\/n5lj63-nc5g?version=3&amp;rel=1&amp;fs=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent\" frameborder=\"0\" allowfullscreen=\"true\"><\/iframe><\/span><\/p>\n<h3>Cortana\u2019s confusion<\/h3>\n<p>We can\u2019t call out Microsoft\u2019s voice assistant alone \u2014 Apple\u2019s Siri has its own subreddit of missteps, and Google\u2019s assistant has racked up plenty of humorous mistakes \u2014 but it\u2019s always funny when these new features fail in front of a crowd. 
This one looks like Cortana doesn\u2019t understand a non-American accent \u2014 or maybe the fast, natural speech threw it for a loop.<\/p>\n<p><span class=\"embed-youtube\" style=\"text-align:center; display: block;\"><iframe class=\"youtube-player\" type=\"text\/html\" width=\"640\" height=\"390\" src=\"https:\/\/www.youtube.com\/embed\/DDqrfCmIPxI?version=3&amp;rel=1&amp;fs=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent\" frameborder=\"0\" allowfullscreen=\"true\"><\/iframe><\/span><\/p>\n<h3>Fooling facial recognition<\/h3>\n<p>Your friends might not be fooled by a weird or wacky pair of spectacles, but a team of researchers at Carnegie Mellon University proved that changing that small a part of your look is enough to make you a completely different person in a machine\u2019s eyes. The best part: Researchers managed not only to dodge facial recognition but also to impersonate specific people by printing certain patterns over glasses frames.<\/p>\n<div id=\"attachment_23101\" style=\"width: 1034px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2017\/08\/17090624\/ai-fails-glass.jpg\"><img decoding=\"async\" aria-describedby=\"caption-attachment-23101\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2017\/08\/17090624\/ai-fails-glass-1024x614.jpg\" alt=\"\" width=\"1024\" height=\"614\" class=\"size-large wp-image-23101\"><\/a><p id=\"caption-attachment-23101\" class=\"wp-caption-text\">Researchers managed to impersonate each other and celebrities<\/p><\/div>\n<ul>\n<li>Here, <a href=\"https:\/\/www.theguardian.com\/technology\/2016\/nov\/03\/how-funky-tortoiseshell-glasses-can-beat-facial-recognition\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">the Guardian<\/a> explains it in more detail.<\/li>\n<li>And <a href=\"https:\/\/www.cs.cmu.edu\/~sbhagava\/papers\/face-rec-ccs16.pdf\" target=\"_blank\" rel=\"nofollow noopener 
noreferrer\">here\u2019s the original paper<\/a>.<\/li>\n<\/ul>\n<h3>Street-sign setbacks<\/h3>\n<p>What about street-sign recognition by self-driving cars? Is it any better than facial recognition? Not much. Another group of researchers proved that sign recognition is fallible as well. Small changes any human would gloss over caused a machine-learning system to misclassify the \u201cSTOP\u201d sign as \u201cSpeed Limit 45.\u201d And it\u2019s not just a random mistake; it happened in 100% of the testing conditions.<\/p>\n<div id=\"attachment_23102\" style=\"width: 1034px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2017\/08\/17090632\/ai-fails-signs.jpg\"><img decoding=\"async\" aria-describedby=\"caption-attachment-23102\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2017\/08\/17090632\/ai-fails-signs-1024x373.jpg\" alt=\"\" width=\"1024\" height=\"373\" class=\"size-large wp-image-23102\"><\/a><p id=\"caption-attachment-23102\" class=\"wp-caption-text\">Machine recognized three of these defaced images as a \u201cSpeed Limit 45\u201d and the last one as a \u201cStop\u201d<\/p><\/div>\n<ul>\n<li>Here\u2019s <a href=\"https:\/\/www.bleepingcomputer.com\/news\/security\/you-can-trick-self-driving-cars-by-defacing-street-signs\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">more about street-sign recognition fails<\/a>.<\/li>\n<li> And <a href=\"https:\/\/blog.openai.com\/adversarial-example-research\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">here\u2019s the paper<\/a>.<\/li>\n<\/ul>\n<h3>Invisible panda<\/h3>\n<p>How massively does one have to alter the input to fool machine learning? You\u2019d be surprised how subtle this change can actually be. 
To the human eye, there\u2019s no difference at all between the two pictures below, whereas a machine was quite confident that they were completely different objects \u2014 a panda and a gibbon, respectively (curiously, the splash of noise that was added to the original picture was itself recognized as a nematode by the machine).<\/p>\n<p><a href=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2017\/08\/01093113\/invidible-panda.png\"><img decoding=\"async\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2017\/08\/01093113\/invidible-panda.png\" alt=\"Invisible panda machine learning fail\" width=\"1600\" height=\"520\" class=\"alignnone size-full wp-image-18340\"><\/a><\/p>\n<ul>\n<li>Here\u2019s <a href=\"https:\/\/blog.openai.com\/adversarial-example-research\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">a somewhat more detailed post<\/a>.<\/li>\n<li>And, of course, <a href=\"https:\/\/arxiv.org\/pdf\/1412.6572.pdf\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">the paper<\/a>.<\/li>\n<\/ul>\n<h3>Terrible Tay<\/h3>\n<p>Microsoft\u2019s chatbot experiment, an AI called Tay.ai, was supposed to emulate a teenage girl and learn from its social media interactions. Turns out, we humans are monsters, and so Tay became, among other things, a Nazi. AI can grow, but its quality and characteristics do rest on its human input.<\/p>\n<ul>\n<li><a href=\"http:\/\/www.techrepublic.com\/article\/why-microsofts-tay-ai-bot-went-wrong\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Read more<\/a> about Tay and her misadventures. <\/li>\n<\/ul>\n<p>The deadliest failure so far, and perhaps the most famous, comes courtesy of Tesla \u2014 but we can\u2019t fault the AI, which despite its name, Autopilot, wasn\u2019t supposed to take over driving completely. 
Investigation found the person in the driver\u2019s seat really was failing to act as a driver, ignoring warnings about his hands not being on the wheel, setting cruise control above the speed limit, and taking no evasive action during the <a href=\"https:\/\/www.extremetech.com\/extreme\/251299-dead-tesla-driver-wasnt-watching-dvd-wasnt-paying-attention-either\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">7 seconds or more<\/a> after the truck that ultimately killed him came into view.<\/p>\n<p>It might have been possible for Autopilot to avoid the accident \u2014 factors such as the placement and color contrast of the truck have been floated \u2014 but at this point, all we really know is that it did not exceed its job parameters, which we don\u2019t yet expect software to do.<\/p>\n<p>Ultimately, even using machine learning, in which software becomes smarter with experience, artificial intelligence can\u2019t come close to human intelligence. Machines are fast, consistent, and tireless, however, which pairs up nicely with human intuition and smarts.<\/p>\n<p>That\u2019s why our approach, which we call \u201cHuMachine\u201d, takes advantage of the best of both worlds, using the very fast and meticulous artificial intelligence of advanced programming and augmenting it with top-notch human cybersecurity professionals who can turn educated eyes and human brains to fighting malware and keeping consumer, enterprise, and infrastructure systems working safely.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial? Very. Intelligence? 
You be the judge.<\/p>\n","protected":false},"author":2045,"featured_media":18321,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[2684,1789],"tags":[960,2628,2486,1876,2642],"class_list":{"0":"post-18318","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-special-projects","8":"category-technology","9":"tag-artificial-intelligence","10":"tag-fails","11":"tag-humachine","12":"tag-machine-learning","13":"tag-next-gen"},"hreflang":[{"hreflang":"x-default","url":"https:\/\/www.kaspersky.com\/blog\/ai-fails\/18318\/"},{"hreflang":"en-in","url":"https:\/\/www.kaspersky.co.in\/blog\/ai-fails\/11163\/"},{"hreflang":"en-us","url":"https:\/\/usa.kaspersky.com\/blog\/ai-fails\/12535\/"},{"hreflang":"es-mx","url":"https:\/\/latam.kaspersky.com\/blog\/ai-fails\/11221\/"},{"hreflang":"es","url":"https:\/\/www.kaspersky.es\/blog\/ai-fails\/14276\/"},{"hreflang":"it","url":"https:\/\/www.kaspersky.it\/blog\/ai-fails\/14182\/"},{"hreflang":"ru","url":"https:\/\/www.kaspersky.ru\/blog\/ai-fails\/18678\/"},{"hreflang":"pl","url":"https:\/\/plblog.kaspersky.com\/ai-fails\/7313\/"},{"hreflang":"de","url":"https:\/\/www.kaspersky.de\/blog\/ai-fails\/14540\/"},{"hreflang":"zh","url":"https:\/\/www.kaspersky.com.cn\/blog\/ai-fails\/8378\/"},{"hreflang":"ja","url":"https:\/\/blog.kaspersky.co.jp\/ai-fails\/17733\/"},{"hreflang":"en-au","url":"https:\/\/www.kaspersky.com.au\/blog\/ai-fails\/17782\/"},{"hreflang":"en-za","url":"https:\/\/www.kaspersky.co.za\/blog\/ai-fails\/17762\/"}],"acf":[],"banners":"","maintag":{"url":"https:\/\/www.kaspersky.com\/blog\/tag\/humachine\/","name":"HuMachine"},"_links":{"self":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/18318","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/
\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/users\/2045"}],"replies":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/comments?post=18318"}],"version-history":[{"count":13,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/18318\/revisions"}],"predecessor-version":[{"id":34749,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/18318\/revisions\/34749"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/media\/18321"}],"wp:attachment":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/media?parent=18318"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/categories?post=18318"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/tags?post=18318"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}