How do cybercriminals attack workstations? They generally exploit vulnerabilities in frequently used programs or potentially dangerous features in legitimate software. There are other ways, of course, but those are the most common ploys. So, it might seem logical to restrict the use of such software. But how can you do that without harming business processes? Mindlessly blocking software can cause severe damage to business; you have to take into account differences in employees’ roles. Our approach was to reduce the attack surface through adaptive anomaly control with the use of machine-learning techniques.
For many years, MS Office has had the dubious distinction of topping the list by number of exploited vulnerabilities. But that does not mean the software is bad. Vulnerabilities are everywhere. It is just that cybercriminals focus more on Office than on its counterparts, because it is the most widely used. Even if your company is prepared to spend money on retraining employees to use an alternative, as soon as another productivity suite gains popularity, it will knock Office off the top of the exploited software leaderboard.
Some products have features that are clearly dangerous. For example, macros in Office itself can be used to execute malicious code. But a blanket ban would be impractical; financial analysts and accountants need those tools in their day-to-day operations.
The task is to somehow keep strict watch over such programs and intervene only when anomalous activity is detected. But there is one problem.
How do you define anomalous?
The essence of cybercriminal activity is to appear legitimate in the eyes of security systems. How can a cybersecurity system determine whether a message sent to an employee contains an important document with a macro or a Trojan? Did that person send a .js file for work purposes, or does the file conceal a virus?
It would be possible, at least in theory, to manually analyze the work of each employee, ascertain what tools they do and do not need, and, on the basis of that information, construct a threat model and surgically block certain program features.
But numerous complications arise here. First, the larger the company, the more difficult it is to build an accurate model for each employee. Second, even in a small business, manual configuration requires a great deal of time and effort on the part of administrators. And third, the process will likely have to be repeated whenever the corporate infrastructure or tools are changed.
To preserve the sanity of administrators and IT security officers, the only option is to automate the process of configuring restrictions.
We automated the process in the following way: First, systems built on machine-learning principles combed our threat databases and generated standard patterns of potentially malicious activity. These patterns are then blocked with pinpoint precision on each specific workstation.
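To make the idea concrete, here is a minimal sketch of pinpoint pattern blocking. The pattern contents and attribute names (`process`, `action`, `child`) are hypothetical illustrations, not Kaspersky's actual detection logic or format.

```python
# Illustrative only: match workstation activity events against
# pre-generated patterns of potentially malicious behavior.
# All pattern data below is invented for the example.

MALICIOUS_PATTERNS = [
    # e.g., Word spawning a shell is a classic macro-abuse pattern
    {"process": "winword.exe", "action": "spawn", "child": "powershell.exe"},
    # e.g., a script host opening a network connection
    {"process": "wscript.exe", "action": "network_connect"},
]

def matches(event: dict, pattern: dict) -> bool:
    """An event matches a pattern if it carries all the pattern's attributes."""
    return all(event.get(key) == value for key, value in pattern.items())

def should_block(event: dict) -> bool:
    """Block the event if it matches any known malicious pattern."""
    return any(matches(event, p) for p in MALICIOUS_PATTERNS)
```

Note that only events matching a full pattern are blocked; ordinary use of the same programs passes through untouched, which is what distinguishes pinpoint blocking from a blanket ban.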
Second, we created an automatic adaptation (aka Smart) mode to analyze user activity and determine which rules can be applied and which would interfere with normal operation. It works as follows: The system first collects statistics on the triggering of control rules for a specific period of time in learning mode, and then creates a model of the user or group’s normal operation (legitimate scenario). After that, learning mode is disabled, and only those control rules that block anomalous actions are activated.
If the user's work pattern changes, the system can be switched back to learning mode to adapt to the new scenario. In addition, fine-tuning is available in case exclusions need to be added.
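The learning/enforcement cycle described above can be sketched as a small state machine. This is a hedged illustration under stated assumptions: the class, method names, and rule identifiers are invented for the example and do not reflect the product's internal API.

```python
# Illustrative sketch of an adaptive ("Smart") control mode:
# rules that trigger during normal work are treated as legitimate;
# only the remaining rules are enforced afterward.

class AdaptiveControl:
    def __init__(self, all_rules):
        self.all_rules = set(all_rules)
        self.learning = True
        self.triggered_in_learning = set()  # rules hit by normal activity
        self.exclusions = set()             # manual fine-tuning

    def observe(self, rule):
        """In learning mode, record which rules normal activity trips."""
        if self.learning:
            self.triggered_in_learning.add(rule)

    def finish_learning(self):
        """End learning; only rules outside the baseline will be enforced."""
        self.learning = False

    def relearn(self):
        """Re-enter learning mode after the user's work pattern changes."""
        self.learning = True
        self.triggered_in_learning.clear()

    def blocks(self, rule):
        """Enforce a rule only if it falls outside the legitimate scenario."""
        if self.learning or rule in self.exclusions:
            return False
        return rule in self.all_rules and rule not in self.triggered_in_learning
```

For instance, if the `office_macro` rule fires during an accountant's learning period, it is folded into their legitimate scenario and never enforced against them, while `js_exec` (which they never triggered) remains active and blocks anomalous script launches.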
It’s no panacea, but it considerably reduces the surface for possible attacks.
The Adaptive Anomaly Control (AAC) module forms part of the updated Kaspersky Endpoint Security for Business Advanced solution, which we recently unveiled to the general public. Click on the banner below to download a trial version of the security product in which the technology is implemented.