No blame: how psychological safety helps improve cybersecurity

Companies need to build a culture of security, but this is impossible when employees are afraid to discuss incidents or suggest improvements.

How to implement a blameless approach to cybersecurity

Even companies with a mature cybersecurity posture and significant investments in data protection aren’t immune to cyber-incidents. Attackers can exploit zero-day vulnerabilities or compromise a supply chain. Employees can fall victim to sophisticated scams designed to breach the company’s defenses. The cybersecurity team itself can make a mistake while configuring security tools, or during an incident response procedure. However, each of these incidents represents an opportunity to improve processes and systems, making your defenses even more effective. This isn’t just a rallying call; it’s a practical approach that has proven successful in other fields, such as aviation safety.

Almost everyone in the aviation industry, from aircraft design engineers to flight attendants, is required to share information to prevent incidents. This isn’t limited to crashes or system failures; the industry also reports potential problems. These reports are constantly analyzed, and safety measures are adjusted based on the findings. According to Allianz Commercial’s statistics, this continuous implementation of new measures and technologies has led to a significant reduction in fatal incidents: from 40 per million flights in 1959 to 0.1 in 2015.

The aviation industry also recognized long ago that this model simply won’t work if people are afraid to report procedure violations, quality issues, and other causes of incidents. That’s why aviation standards include requirements for non-punitive reporting and a just culture, meaning that reporting problems and violations shouldn’t lead to punishment. DevOps engineers apply a similar principle, which they call a blameless culture, when analyzing major incidents. This approach is also essential in cybersecurity.

Does every mistake have a name?

The opposite of a blameless culture is the idea that “every mistake has a name”, meaning a specific person is to blame. Under this approach, every mistake can lead to disciplinary action, including termination. This principle is harmful and doesn’t lead to better security. Here’s why:

  • Employees fear accountability and tend to distort facts during incident investigations — or even destroy evidence.
  • Distorted or partially destroyed evidence complicates the response and worsens the overall outcome because security teams can’t quickly and properly assess the scope of a given incident.
  • Zeroing in on one person to blame during an incident review prevents the team from focusing on how to change the system to prevent similar incidents from happening again.
  • Employees are afraid to report violations of IT and security policies, causing the company to miss opportunities to fix security flaws before they lead to a critical incident.
  • Employees have no motivation to discuss cybersecurity issues, coach one another, or correct their coworkers’ mistakes.

To truly enable every employee to contribute to your company’s security, you need a different approach.

The core principles of a just culture

Call it “non-punitive reporting” or a “blameless culture” — the core principles are the same:

  • Everyone makes mistakes. We learn from our mistakes; we don’t punish them. However, it’s crucial to distinguish between an honest mistake and a malicious violation.
  • When analyzing security incidents, the overall context, the employee’s intent, and any systemic issues that may have contributed to the situation must all be considered. For example, if a high turnover of seasonal retail employees prevents them from being granted individual accounts, they might resort to sharing a single login for a point-of-sale terminal. Is the store administrator at fault? Probably not.
  • Beyond just reviewing technical data and logs, you must have in-depth conversations with everyone involved in an incident. For this, you need to create a productive and safe environment where people feel comfortable sharing their perspectives.
  • The goal of an incident review should be to improve behavior, technology, and processes going forward. For serious incidents, the review should be split into two stages: an immediate response to mitigate the damage, and a postmortem analysis to improve your systems and procedures.
  • Most importantly, be open and transparent. Employees need to know how reports of issues and incidents are handled, and how decisions are made. They should know exactly who to turn to if they see or even suspect a security problem. They need to know that both their supervisors and security specialists will support them.
  • Confidentiality and protection. Reporting a security issue should not create problems for the person who reported it or for the person who may have caused it — as long as both acted in good faith.

How to implement these principles in your security culture

Secure leadership buy-in. A security culture doesn’t require massive direct investment, but it does need consistent support from the HR, information security, and internal communications teams. Employees also need to see that top management actively endorses this approach.

Document your approach. The blameless culture philosophy should be captured in your company’s official documents — from detailed security policies to a simple, short guide that every employee will actually read and understand. This document should clearly state the company’s position on the difference between a mistake and a malicious violation. It should formally state that employees won’t be held personally responsible for honest errors, and that the collective priority is to improve the company’s security and prevent recurrences.

Create channels for reporting issues. Offer several ways for employees to report problems: a dedicated section on the intranet, a specific email address, or the option to simply tell their immediate supervisor. Ideally, you should also have an anonymous hotline for reporting concerns without fear.

Train employees. Training helps employees recognize insecure processes and behaviors. Use real-world examples of problems they should report, and walk them through different incident scenarios. You can use our online Kaspersky Automated Security Awareness Platform to organize these cybersecurity-awareness training sessions. Motivate employees not only to report incidents, but also to suggest improvements and think about how to prevent security problems in their day-to-day work.

Educate your leadership. Every manager needs to understand how to respond to reports from their team. They need to know how and where to forward a report, and how to avoid creating blame-focused islands in a sea of just culture. Teach leaders to respond in a way that makes their coworkers feel supported and protected. Their reactions to incidents and error reports need to be constructive. Leaders should also encourage discussions of security issues in team meetings to normalize the topic.

Develop a fair review procedure for incidents and security-issue reports. You’ll need to assemble a diverse group of employees from various teams to form a “no-blame review board”. It will be responsible for promptly processing reports, making decisions, and creating action plans for each case.

Reward proactivity. Publicly praise and reward employees who report spearphishing attempts or newly discovered flaws in policies or configurations, or who simply complete awareness training better and faster than others on their team. Mention these proactive employees in regular IT and security communications such as newsletters.

Integrate findings into your security management processes. The conclusions and suggestions from the review board should be prioritized and incorporated into the company’s cyber-resilience plan. Some findings may simply influence risk assessments, while others could directly lead to changes in company policies, the implementation of new technical security controls, or the reconfiguration of existing ones.

Use mistakes as learning opportunities. Your security awareness program will be more effective if it uses real-life examples from your own organization. You don’t need to name specific individuals, but you can mention teams and systems, and describe attack scenarios.

Measure performance. To ensure this process is working and delivering results, you need to use information security metrics as well as HR and communications KPIs. Track the mean time to resolution (MTTR) for identified issues, the percentage of issues discovered through employee reports, employee satisfaction levels, the number and nature of security issues identified, and the number of employees engaged in suggesting improvements.
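As a minimal sketch of how two of these metrics might be computed, the snippet below derives MTTR and the employee-reported share from a list of incident records. The record layout, field names, and sample data are all illustrative assumptions, not a real reporting-tool API:

```python
from datetime import datetime, timedelta

# Hypothetical incident records: (reported_at, resolved_at, source).
# "source" marks whether the issue was surfaced by an employee report
# or by automated tooling (sample data for illustration only).
incidents = [
    (datetime(2024, 3, 1, 9), datetime(2024, 3, 1, 17), "employee_report"),
    (datetime(2024, 3, 5, 10), datetime(2024, 3, 7, 10), "siem_alert"),
    (datetime(2024, 3, 9, 8), datetime(2024, 3, 9, 20), "employee_report"),
]

# MTTR: average time from report to resolution across all incidents.
durations = [resolved - reported for reported, resolved, _ in incidents]
mttr = sum(durations, timedelta()) / len(durations)

# Share of issues discovered through employee reports rather than tooling.
employee_share = sum(
    1 for *_, source in incidents if source == "employee_report"
) / len(incidents)

print(f"MTTR: {mttr}")                          # 22:40:00 for the sample data
print(f"Employee-reported: {employee_share:.0%}")  # 67% for the sample data
```

In practice you’d pull these records from your ticketing or incident-management system; the point is simply that both metrics fall out of data you already collect when every report is logged and resolved through a single process.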

Important exceptions

A blameless culture doesn’t mean that no one is ever held accountable. Aviation safety documents on non-punitive reporting, for example, include crucial exceptions. Protection doesn’t apply when someone knowingly and maliciously deviates from the regulations. This exception prevents an insider who has leaked data to competitors from enjoying complete impunity after confessing.

The second exception is when national or industry regulations require individual employees to be held personally accountable for incidents and violations. Even with this kind of regulation, it’s vital to maintain balance. The focus should remain on improving processes and preventing future incidents, not on finding who’s to blame. You can still build a culture of trust if investigations are objective and accountability is only applied where it’s truly necessary and justified.