I haven’t seen the sixth Mission: Impossible movie, and I don’t think I will. I sat through the fifth — in a suitably zombified state, returning home on a long-haul flight after a tough week’s business — but only because one scene in it was shot in our shiny new modern London office. And that was one Mission: Impossible installment too many, really. Nope — not for me. Slap, bang, smash, crash, pow, wow. Oof. Nah, I prefer something a little more challenging, thought-provoking and just plain interesting. After all, I have precious little time as it is!
I really am giving Tom Cruise and Co. a major dissing here, aren’t I? But hold on. I have to give them their due for at least one scene done really rather well (i.e., thought-provoking and plain interesting!). It’s the one where the good guys need to get a bad guy to rat on his bad-guy colleagues, or something like that. So they set up a fake environment in a “hospital” with “CNN” on the “TV” broadcasting a news report about atomic Armageddon. Suitably satisfied his apocalyptic manifesto has been broadcast to the world, the baddie gives up his pals (or was it a login code?) in the deal arranged with his interrogators. Oops. Here’s the clip.
Why do I like this scene so much? Because, actually, it demonstrates really well one of the methods of detecting … previously unseen cyberthreats! There are in fact many such methods — they vary depending on area of application, effectiveness, resource use, and other parameters (I write about them regularly here). But one always seems to stand out: emulation (about which I’ve also written plenty here before).
As in the MI movie, an emulator launches the object being investigated in an isolated, artificial environment, which encourages it to reveal its maliciousness.
But there’s one serious downside to such an approach: the very fact that the environment is artificial. The emulator does its best to make that artificial environment resemble a real operating system, but increasingly smart malware still manages to tell it apart from the real thing. The emulator then sees how the malware recognized it, regroups and improves its emulation, and so the cycle repeats, never-ending, regularly opening a window of vulnerability on a protected computer. The fundamental problem is that no emulator has yet been the spitting image of a real OS.
On the other hand, there’s another option for tackling the behavioral analysis of suspicious objects: analysis on a real operating system, one running on a virtual machine. Well, why not? If the emulator never quite fully cuts it, let a real, albeit virtual, machine have a go! It would be the ideal “interrogation”: conducted in a real environment, not an artificial one, but with no real negative consequences.
On hearing about this concept, some may rush to ask why no one thought of it before. After all, virtualization has been in the tech mainstream since 1992. Well, as it turns out, it’s not so simple.
First, analyzing suspicious objects in a virtual machine is a resource-intensive process, suited only to heavyweight enterprise-grade security solutions, where scanning needs to be super-intensive so that absolutely zero maliciousness gets through the defenses. Alas, for home computers, let alone smartphones, this technology isn’t suitable — yet.
Second, such things actually do exist. In fact, we already use this technology internally, here at the Kompany, for our investigations. But in terms of market-ready products, not many are available yet. Competitors have released similar products, but their effectiveness leaves a lot to be desired. As a rule, such products are limited to collecting logs and performing basic analysis.
Third, launching a file on a virtual machine is just the beginning of a very long and tricky process. After all, the aim of the exercise is to have the maliciousness of an object reveal itself, and for that you need a smart hypervisor, behavior logging and analysis, constant fine-tuning of the templates of dangerous actions, protection from anti-emulation tricks, execution optimization, and much more.
Here I can say without false modesty that we truly are way ahead — of the whole planet!
Recently we were granted a U.S. patent (US10339301) covering the creation of a suitable environment for a virtual machine for conducting deep, rapid analysis of suspicious objects. Here’s how it works:
- Virtual machines are created (for different types of objects) with settings that ensure both their optimal execution and a maximally high detection rate.
- The hypervisor of a virtual machine works in tandem with system logging of an object’s behavior and system analysis thereof, helped by updatable databases of templates of suspicious behavior, heuristics, the logic of reactions to actions, and more.
- Should suspicious actions be detected, the analysis system enters on-the-fly changes to the process of execution of the object on a virtual machine to encourage the object to show its malicious intentions. For example, the system can create files, amend the registry, speed up time, and so on.
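To make the three-part scheme above concrete, here is a deliberately simplified sketch of the analysis loop it describes: the hypervisor streams behavior events, the analyzer scores them against an updatable template database, and some templates trigger on-the-fly reactions. Everything here — `SUSPICIOUS_TEMPLATES`, the event names, the scoring threshold — is a hypothetical illustration, not the actual engine.

```python
# Hypothetical sketch of the analysis loop: the hypervisor reports
# behavior events; the analyzer matches them against an updatable
# database of suspicious-behavior templates and can react on the fly.
# All names, weights and thresholds below are illustrative.

SUSPICIOUS_TEMPLATES = {
    # observed event          -> (suspicion weight, reaction or None)
    "long_sleep":              (1, "accelerate_time"),
    "open_missing_file":       (2, "fake_file_exists"),
    "write_autorun_registry":  (5, None),
}

def analyze(event_stream, react):
    """Score behavior events; trigger reactions to provoke the object."""
    score = 0
    for event in event_stream:
        weight, reaction = SUSPICIOUS_TEMPLATES.get(event, (0, None))
        score += weight
        if reaction:
            react(reaction)   # e.g. tell the VM to fake a file's existence
    return "malicious" if score >= 5 else "clean"
```

A run over a stream like `["long_sleep", "open_missing_file", "write_autorun_registry"]` would both accumulate a malicious verdict and fire the two reactions along the way.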
That third point is the most distinctive and delicious feature of our technology. Let me give you some examples to show how it works.
The system detects that a launched file has “fallen asleep” and no longer manifests any activity. That’s because the object can be programmed to quietly do nothing for several minutes, or even hours, before beginning its malicious activity. When it starts its do-nothing thing, we speed up time on the fly inside the virtual machine so that one, three, five, or a gazillion minutes pass per second. The functionality of the file being analyzed doesn’t change, while its wait time is cut by a factor of hundreds or thousands. And if, after its “snooze,” the malware decides to check the system clock (has it been ticking?), it will be fooled into thinking it has, and will continue with its malicious mission, exposing itself in the process.
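The time-acceleration trick can be modeled in a few lines. This is a toy illustration with made-up names (the real mechanism works at the hypervisor level, not in Python): the sandbox intercepts the object’s sleep call, advances a virtual clock instantly instead of really waiting, and the object’s later clock check is none the wiser.

```python
# Toy model of "speeding up time" inside the VM (all names invented):
# instead of letting the object really wait, the monitor advances a
# virtual clock instantly, so a later check of the system time looks
# as if hours have genuinely passed.

class VirtualClock:
    def __init__(self, start=0.0):
        self.now = start              # seconds of "virtual" time

    def sleep(self, seconds):
        # Intercepted sleep: no real waiting, just advance the clock.
        self.now += seconds

    def time(self):
        return self.now

def sleepy_malware(clock):
    """Toy object: naps for an hour, then checks the clock really ticked."""
    start = clock.time()
    clock.sleep(3600)                 # would stall a naive sandbox for an hour
    if clock.time() - start >= 3600:  # "has time really passed?"
        return "payload executed"     # fooled: it exposes itself
    return "still waiting"

clock = VirtualClock()
result = sleepy_malware(clock)        # returns instantly, not in an hour
```

The object’s own logic is untouched; only its perception of time is manipulated, which is exactly why the clock check after the “snooze” comes back satisfied.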
Or say the object uses a vulnerability in a specific library, or tries to change the contents of a file or the registry. First, with the help of the regular fopen() function, it tries to open the library (or file, or registry key), and if it fails to do so (the library doesn’t exist, or it has no access rights to the file), it simply gives up. In such a scenario we change, on the fly, the return value of fopen() from “file absent” to “file exists” (or, if necessary, create the file itself and fill it with appropriate content), then simply observe what the object does.
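Here is a sketch of that return-value substitution. Again, this is only an illustration: the real interception happens at the hypervisor/API level, while this Python stand-in fakes a tiny in-memory file system. The path and the `MZ` header content are hypothetical examples.

```python
import io

# Illustrative only (the real mechanism hooks file APIs inside the VM):
# when the object tries to open a file that doesn't exist, the sandbox
# fabricates it with plausible content instead of returning an error,
# then watches what the object does next.

FAKE_FS = {}   # hypothetical in-memory stand-in for the VM's disk

def sandbox_fopen(path, default_content=b"MZ\x90\x00"):
    if path not in FAKE_FS:
        # A real fopen() would fail here; we fabricate the file instead.
        FAKE_FS[path] = default_content
    return io.BytesIO(FAKE_FS[path])

def probing_malware():
    """Toy object: gives up unless a specific library 'exists'."""
    f = sandbox_fopen("C:/Windows/System32/target.dll")  # made-up path
    header = f.read(2)
    return "exploit attempted" if header == b"MZ" else "gave up"
```

Instead of the quiet “gave up” branch a naive sandbox would see, the object proceeds to its exploit attempt and reveals its intent.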
Such an approach also works really well with the logic trees of an object’s behavior. For example: if both file A and file B exist, then file C is modified and the job’s done. But it’s not known what the program being investigated will do if only one of file A or file B exists. So we launch a parallel iteration, tell the suspect program that file A exists but B doesn’t, and analyze how the logic tree unfolds from there.
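The branch exploration can be sketched as enumerating every combination of environment conditions and running the object once per combination. The decision logic below is a made-up toy standing in for the program under investigation; the point is only the exhaustive walk over the A/B tree.

```python
from itertools import product

# Sketch of exploring a behavior logic tree (illustrative only): run
# the object once per combination of "file A exists" / "file B exists"
# and record which branch it takes in each fabricated environment.

def object_logic(a_exists, b_exists):
    """Toy stand-in for the decision logic of the suspect program."""
    if a_exists and b_exists:
        return "modify file C"
    if a_exists:
        return "download file B"   # a branch unknown until we provoke it
    return "do nothing"

def explore_branches():
    results = {}
    for a, b in product([True, False], repeat=2):
        results[(a, b)] = object_logic(a, b)   # one iteration per combination
    return results
```

Four iterations cover the whole tree, so no branch of the object’s behavior stays hidden just because the “natural” environment never triggered it.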
What’s important to note is that the rules governing reactions to the file’s execution are kept in external, easily updatable databases. To add new logic you don’t need to redevelop the whole engine; you just correctly describe the multitude of possible scenarios of malicious behavior and push a one-click update.
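One way to picture that separation of engine and rules: the reaction rules live as plain data (JSON here, purely as an assumed format), and the engine just interprets whatever the latest database says. Shipping a new scenario means shipping new data, not new code. The rule names and structure are invented for illustration.

```python
import json

# Hypothetical illustration of externalized reaction rules: the engine
# only interprets rules, so adding a new malicious-behavior scenario
# means updating the database, not rebuilding the engine.

RULE_DB_JSON = """
[
  {"on": "sleep_called",     "react": "accelerate_time"},
  {"on": "open_missing_dll", "react": "fake_file_exists"}
]
"""

def load_rules(raw):
    """Parse the external database into an event -> reaction map."""
    return {rule["on"]: rule["react"] for rule in json.loads(raw)}

def react_to(event, rules):
    """Look up the configured reaction; unknown events are just logged."""
    return rules.get(event, "just_log")

rules = load_rules(RULE_DB_JSON)
```

A one-click update then amounts to replacing `RULE_DB_JSON` with a fresh database and reloading: the interpreting code never changes.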