{"id":52448,"date":"2024-10-17T15:01:42","date_gmt":"2024-10-17T19:01:42","guid":{"rendered":"https:\/\/www.kaspersky.com\/blog\/?p=52448"},"modified":"2024-10-17T15:01:42","modified_gmt":"2024-10-17T19:01:42","slug":"ai-role-in-cybersecurity-automation","status":"publish","type":"post","link":"https:\/\/www.kaspersky.com\/blog\/ai-role-in-cybersecurity-automation\/52448\/","title":{"rendered":"AI in cybersecurity automation"},"content":{"rendered":"<p>Although automation and machine learning (ML) have been used in information security for almost two decades, experimentation in this field continues non-stop. Security professionals need to combat increasingly sophisticated cyberthreats and a growing number of attacks without significant increases in budget or personnel. On the positive side, AI greatly reduces the workload on security analysts, while also accelerating many phases of incident handling \u2014 from detection to response. However, a number of seemingly obvious areas of ML application are underperforming.<\/p>\n<h2>AI-based detection of cyberthreats<\/h2>\n<p>To massively oversimplify, there are two basic \u2014 and long-tested \u2014 ways to apply ML:<\/p>\n<ul>\n<li><strong>Attack detection.<\/strong> By training AI on examples of phishing emails, malicious files, and dangerous app behavior, we can achieve an acceptable level of detection of <strong>similar<\/strong> attacks. The main pitfall is that this area is highly dynamic \u2014 with attackers constantly devising new methods of disguise. Therefore, the model needs frequent retraining to maintain its effectiveness. This requires a labeled dataset \u2014 that is, a large collection of recent, verified examples of malicious behavior. An algorithm trained in this way won\u2019t be effective against fundamentally new, never-before-seen attacks. 
What\u2019s more, there are certain difficulties in detecting attacks that rely entirely on legitimate IT tools (<a href=\"https:\/\/www.kaspersky.com\/blog\/lotl-attacks-detection-hardening-guidance\/50826\/\" target=\"_blank\" rel=\"noopener nofollow\">LotL<\/a>). Despite these limitations, most infosec vendors use this method, which is quite effective for email analysis, phishing detection, and identifying certain classes of malware. That said, it promises neither full automation nor 100% reliability.<\/li>\n<li><strong>Anomaly detection.<\/strong> By training AI on \u201cnormal\u201d server and workstation activity, we can identify deviations from this norm \u2014 such as when an accountant suddenly starts performing administrative actions with the mail server. The pitfalls here are that this method requires (a) collecting and storing vast amounts of telemetry, and (b) regular retraining of the AI to keep up with changes in the IT infrastructure. Even then, there\u2019ll be many false positives (FPs) and no guarantee of attack detection. Anomaly detection must be tailored to the specific organization, so using such a tool requires people highly skilled in cybersecurity, data analysis, and ML. And these invaluable employees have to provide 24\/7 system support.<\/li>\n<\/ul>\n<p>The philosophical conclusion we can draw thus far is that AI excels at routine tasks where the subject area and object characteristics change slowly and infrequently: writing coherent texts, recognizing dog breeds, and so on. But where a human mind actively resists the training data, statically configured AI gradually loses its effectiveness. Analysts end up fine-tuning the AI instead of writing cyberthreat detection rules \u2014 the work domain changes, but, contrary to a common misconception, no saving in human labor is achieved. 
Furthermore, the desire to improve AI threat detection and boost the number of true positives (TPs) inevitably leads to a rise in the number of FPs, which directly increases the human workload. Conversely, trying to cut FPs to near zero results in fewer TPs as well \u2014 thereby increasing the risk of missing a cyberattack.<\/p>\n<p>As a result, AI has a place in the detection toolkit, but not as a silver bullet capable of solving all detection problems in cybersecurity or of working completely autonomously.<\/p>\n<h2>AI as a SOC analyst\u2019s partner<\/h2>\n<p>AI can\u2019t be entirely entrusted with searching for cyberthreats, but it can reduce the human workload by independently analyzing simple SIEM alerts and assisting analysts in other cases:<\/p>\n<ul>\n<li><strong>Filtering false positives.<\/strong> Having been trained on SIEM alerts and analysts\u2019 verdicts, AI can filter FPs quite reliably: our <a href=\"https:\/\/www.kaspersky.com\/enterprise-security\/managed-detection-and-response?icid=gl_kdailyplacehold_acq_ona_smm__onl_b2b_kasperskydaily_wpplaceholder____\" target=\"_blank\" rel=\"noopener nofollow\">Kaspersky MDR<\/a> solution achieves a SOC workload reduction of around 25%. See our forthcoming post for details of this \u201cauto-analytics\u201d implementation.<\/li>\n<li><strong>Alert prioritization.<\/strong> The same ML engine doesn\u2019t just filter out FPs; it also assesses the likelihood that a detected event indicates serious malicious activity. Such critical alerts are then passed to experts for prioritized analysis. 
Alternatively, \u201cthreat probability\u201d can be represented as a visual indicator \u2014 helping the analyst prioritize the most important alerts.<\/li>\n<li><strong>Anomaly detection.<\/strong> AI can quickly alert about anomalies in the protected infrastructure by tracking phenomena like a surge in the number of alerts, a sharp increase or decrease in the flow of telemetry from certain sensors, or changes in its structure.<\/li>\n<li><strong>Suspicious behavior detection.<\/strong> Although searching for <em>arbitrary<\/em> anomalies in a network entails significant difficulties, certain scenarios lend themselves well to automation, and in these cases, ML outperforms static rules. Examples include detecting unauthorized account usage from unusual subnets; detecting abnormal access to file servers and scanning them; and searching for <a href=\"https:\/\/attack.mitre.org\/techniques\/T1550\/003\/\" target=\"_blank\" rel=\"nofollow noopener\">pass-the-ticket attacks<\/a>.<\/li>\n<\/ul>\n<h2>Large language models in cybersecurity<\/h2>\n<p>As the top trending topic in AI, large language models (LLMs) have also been extensively tested by infosec firms. 
Leaving aside cybercriminal pursuits such as generating phishing emails and malware using GPT, <a href=\"https:\/\/github.com\/tmylla\/Awesome-LLM4Cybersecurity#threat-intelligence\" target=\"_blank\" rel=\"nofollow noopener\">we note these interesting (and plentiful) experiments<\/a> in leveraging LLMs for routine tasks:<\/p>\n<ul>\n<li>Generating detailed cyberthreat descriptions<\/li>\n<li>Drafting incident investigation reports<\/li>\n<li>Fuzzy search in data archives and logs via chats<\/li>\n<li>Generating tests, test cases, and code for fuzzing<\/li>\n<li>Initial analysis of decompiled source code in reverse engineering<\/li>\n<li>De-obfuscation and explanation of long command lines (our <a href=\"https:\/\/www.kaspersky.com\/enterprise-security\/managed-detection-and-response?icid=gl_kdailyplacehold_acq_ona_smm__onl_b2b_kasperskydaily_wpplaceholder____\" target=\"_blank\" rel=\"noopener nofollow\">MDR service<\/a> already employs this technology)<\/li>\n<li>Generating hints and tips for writing detection rules and scripts<\/li>\n<\/ul>\n<p>Most of the linked-to papers and articles describe niche implementations or scientific experiments, so they don\u2019t provide a measurable assessment of performance. Moreover, <a href=\"https:\/\/sloanreview.mit.edu\/article\/will-large-language-models-really-change-how-work-is-done\/\" target=\"_blank\" rel=\"nofollow noopener\">available research<\/a> on the performance of skilled employees aided by LLMs <a href=\"https:\/\/www.omfif.org\/2024\/02\/could-artificial-intelligence-really-boost-labour-productivity\/\" target=\"_blank\" rel=\"nofollow noopener\">shows mixed results<\/a>. 
Therefore, such solutions should be implemented gradually and in stages, with a preliminary assessment of the savings potential, and a detailed evaluation of the time investment and the quality of the results.<\/p>\n<input type=\"hidden\" class=\"category_for_banner\" value=\"mdr\">\n","protected":false},"excerpt":{"rendered":"<p>AI has dozens of applications in cybersecurity. Which ones are the most effective? <\/p>\n","protected":false},"author":2507,"featured_media":52449,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1999,3051],"tags":[1140,4611,960,3795,4138,3058],"class_list":{"0":"post-52448","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business","8":"category-enterprise","9":"tag-ai","10":"tag-ai-technology-research","11":"tag-artificial-intelligence","12":"tag-mdr","13":"tag-ml","14":"tag-soc"},"hreflang":[{"hreflang":"x-default","url":"https:\/\/www.kaspersky.com\/blog\/ai-role-in-cybersecurity-automation\/52448\/"},{"hreflang":"en-in","url":"https:\/\/www.kaspersky.co.in\/blog\/ai-role-in-cybersecurity-automation\/28161\/"},{"hreflang":"en-ae","url":"https:\/\/me-en.kaspersky.com\/blog\/ai-role-in-cybersecurity-automation\/23424\/"},{"hreflang":"en-us","url":"https:\/\/usa.kaspersky.com\/blog\/ai-role-in-cybersecurity-automation\/30612\/"},{"hreflang":"en-gb","url":"https:\/\/www.kaspersky.co.uk\/blog\/ai-role-in-cybersecurity-automation\/28314\/"},{"hreflang":"ru","url":"https:\/\/www.kaspersky.ru\/blog\/ai-role-in-cybersecurity-automation\/38412\/"},{"hreflang":"ru-kz","url":"https:\/\/blog.kaspersky.kz\/ai-role-in-cybersecurity-automation\/28405\/"},{"hreflang":"en-au","url":"https:\/\/www.kaspersky.com.au\/blog\/ai-role-in-cybersecurity-automation\/34269\/"},{"hreflang":"en-za","url":"https:\/\/www.kaspersky.co.za\/blog\/ai-role-in-cybersecurity-automation\/33893\/"}],"acf":[],"banners":"
","maintag":{"url":"https:\/\/www.kaspersky.com\/blog\/tag\/artificial-intelligence\/","name":"artificial intelligence"},"_links":{"self":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/52448","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/users\/2507"}],"replies":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/comments?post=52448"}],"version-history":[{"count":4,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/52448\/revisions"}],"predecessor-version":[{"id":52460,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/52448\/revisions\/52460"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/media\/52449"}],"wp:attachment":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/media?parent=52448"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/categories?post=52448"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/tags?post=52448"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}