Can we beat software vulnerabilities?

Can we beat software vulnerabilities? Not completely, but there are ways to mitigate the problem.

Here is another rhetorical question in cybersecurity: is it possible to beat something that has been around for as long as software itself?

In computer security, a “vulnerability” is a weakness that allows an attacker to reduce a system’s information assurance. It is the intersection of three elements: a system susceptibility or flaw, attacker access to the flaw, and attacker capability to exploit the flaw. In software, a “bug” is a fault that causes the program to produce an incorrect or unexpected result, or to behave in ways unintended by its developers and users. In other words, vulnerable software may usually work fine, but when it is approached in a “different manner” (i.e. with malicious intent and the appropriate tools), things may happen. And they actually do.


If not for bugs, spreading viruses, Trojans, unsanctioned backdoors and growing botnets would be much more difficult. So it is fair to say that bugs are the foundation of information security problems, or at least one of them: besides vulnerabilities in software, there is always the “human factor” and the possibility of using social engineering to infiltrate even the most secure system.

In an “ideal world” of bugless software, the information security industry would look very different and most likely would be much slimmer than it is today. It is a classic dilemma: had there been no wars, there would be no need for armies; had there been no crime, no police would be needed; had there been no diseases, there would be no doctors. But there are wars, crimes and diseases, and so there are men-at-arms, police officers and doctors. And all of them also make mistakes and sometimes commit crimes.

Software vulnerabilities are best compared to diseases, or, more accurately, to predispositions to diseases (which makes security experts the doctors). In a living organism such predispositions are determined genetically in some cases, and caused by birth trauma and/or an unhealthy environment in others.

What causes software flaws? As a rule, vulnerabilities are the result of development mistakes, insufficient quality assurance and/or an outright wrong approach to coding, when the software is written without security in mind from day one. Later there may be stacks of patches, making the original package swell to twice its original size, and still more and more bugs are discovered, simply because the software is “genetically” vulnerable.

There are religious sects that object to medical care, proclaiming diseases to be punishment from above. It would be interesting to look at their “counterparts” in the cyberworld; however unlikely their existence may seem, they would make a nice cyberpunk plot twist.

So, can we bust all the bugs completely?

Yes, sure, just as soon as the ancient proverb “errare humanum est” no longer applies to Mankind. How soon is that going to happen?

Actually, there is always a temptation to put the blame for bugs on developers, i.e. the people who write the code. From time to time you may hear “demands” to hold developers accountable for mistakes they failed to fix before the software went on sale. But software vulnerabilities are largely an organizational problem that has little to do with individual coders’ qualifications. Besides, some bugs may stay “below the radar” for years, as shown by #heartbleed, the “Stuxnet flaw” and many others. Neither developers nor end-users knew anything about them until these vulnerabilities were discovered, either by security experts or by criminals. So “who’s to blame” is a tempting but ultimately irrelevant question. Let’s just say there is too little we can do to beat software bugs completely; it is plainly impossible.

However, a “security-in-mind” paradigm, good quality assurance and responsible handling of newly discovered flaws can mitigate the problem dramatically, decreasing a system’s susceptibility to malware and other attempts to use it maliciously. By “responsible handling” we simply mean an adequate response to bug reports and the quick release of patches. Today most software developers have bug-tracking and bug-reporting instruments in place; without them, things would be much, much worse. This is just like prophylaxis against colds, flu and other diseases, which helps us stay healthy even in bad weather, when we have to use crowded city transport where it is especially easy to catch a cold or some other airborne malady.

And businesses and end-users have to deploy security solutions, because, yes, software vulnerabilities are being exploited by attackers and malware, and unfortunately they are here to stay for a long time.

What are the requirements for an effective corporate security solution that deals with vulnerabilities? First, it should be able to detect vulnerable programs and suggest updates, or even perform the update automatically. Of course, this has to happen automatically across all endpoints. It is especially important at the enterprise level, where IT departments have to deal with hundreds (if not thousands) of endpoints with a wide range of software installed.
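At its core, that detection step is a comparison of an installed-software inventory against a list of known-vulnerable versions. The sketch below illustrates the idea in Python; the package names, version numbers and advisory IDs are entirely hypothetical (real products consume feeds such as CVE/NVD and scan endpoints for the inventory).

```python
# Minimal sketch of vulnerable-version detection: compare an inventory of
# installed packages against a hypothetical advisory list. All data here is
# illustrative, not real advisories.

# Inventory: package name -> installed version, as an endpoint scan might report.
installed = {
    "openssl": (1, 0, 1),
    "readerapp": (9, 5, 0),
}

# Hypothetical advisories: versions strictly below "fixed" are vulnerable.
advisories = {
    "openssl": {"fixed": (1, 0, 2), "id": "DEMO-2014-0001"},
    "readerapp": {"fixed": (9, 4, 0), "id": "DEMO-2014-0002"},
}

def find_vulnerable(installed, advisories):
    """Return (package, advisory id) pairs for packages that need an update."""
    findings = []
    for pkg, version in installed.items():
        adv = advisories.get(pkg)
        if adv and version < adv["fixed"]:  # tuples compare lexicographically
            findings.append((pkg, adv["id"]))
    return findings

print(find_vulnerable(installed, advisories))
# Only openssl is flagged: 1.0.1 is older than the fixed 1.0.2 build,
# while readerapp 9.5.0 already includes the fix.
```

A real solution would then feed such findings into automatic patch deployment across all endpoints, rather than just printing them.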

Second, the security solution has to detect and block malicious attacks utilizing vulnerabilities, including “zero-days” – security holes that have not been patched yet. In Kaspersky Lab this is achieved using a number of solution, including an intelligent Automatic Exploit Prevention system looking for unusual and potentially harmful activity from regular applications and blocking it.
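One common behavioral heuristic of this kind, shown here as a toy Python sketch, is flagging a process-creation event in which an application that should never launch other programs (say, a document viewer) suddenly spawns a command shell, a frequent symptom of a successful exploit. The process names and policy lists are invented for illustration; a real system hooks process creation inside the operating system and is not tied to this exact logic.

```python
# Toy behavioral heuristic: block process creation when a "quiet" application
# (one not expected to spawn children) launches a command interpreter.
# Events are simulated; names and policies below are purely illustrative.

NO_CHILD_POLICY = {"pdfviewer.exe", "wordproc.exe"}   # apps that shouldn't spawn children
SUSPICIOUS_CHILDREN = {"cmd.exe", "powershell.exe", "sh"}

def should_block(parent, child):
    """Return True if this process-creation event looks like exploit activity."""
    return parent in NO_CHILD_POLICY and child in SUSPICIOUS_CHILDREN

events = [
    ("pdfviewer.exe", "cmd.exe"),   # exploit-like: a viewer launching a shell
    ("explorer.exe", "cmd.exe"),    # normal user activity
]
for parent, child in events:
    verdict = "BLOCK" if should_block(parent, child) else "allow"
    print(f"{parent} -> {child}: {verdict}")
```

The point of such heuristics is that they need no signature for the specific exploit: even an unpatched zero-day still has to make the compromised application do something it normally never does.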