Complexity and Security Vulnerabilities

Most security incidents do not start with advanced attacks or unknown vulnerabilities. They start with a mistake.

Someone writes code with a bug.
Someone sets a configuration incorrectly.
Someone handles data in an unsafe way.

In practice, the most common cause of a cybersecurity weakness is that the people who develop, manage, and operate an IT system make an error. Errors come in many forms: mistakes in code, mistakes in configuration, mistakes in how data is handled. The common factor is that the more complex a system is, the greater the chance that someone will get something wrong.
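The relationship between complexity and error can be made concrete with a toy model. Assume, purely for illustration, that each component of a system gives an independent chance of a human mistake; the probability of at least one mistake then grows quickly with the number of components. The numbers and the independence assumption are hypothetical, not measurements:

```python
def p_at_least_one_error(p: float, n: int) -> float:
    """Probability of at least one error among n components,
    assuming (simplistically) each has an independent error
    probability p."""
    return 1 - (1 - p) ** n

# Illustrative only: a 1% chance of a mistake per component.
for n in (10, 50, 200):
    print(f"{n} components -> {p_at_least_one_error(0.01, n):.0%} chance of an error")
```

Even with a small per-component error rate, a few hundred components make a mistake somewhere close to certain. The model is crude, but it captures why adding components, settings, and integration points is itself a risk.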

Cybersecurity is often treated as something applied from outside the system, after it is built. Security measures are added around the IT system to protect it from assumed weaknesses and threats. Organizations introduce security systems and routines outside the core solution: firewalls, access control layers, monitoring tools, approval processes, and so on. All of this is meant to make the system safer, but it also adds more components, more settings, and more things that can go wrong.

Security requirements are also often defined externally. They come from regulations, standards, corporate policies, or generic best practices. These requirements are not always adapted to the actual context and conditions of the system. The result is that teams build systems to satisfy external demands that may not fit how the system is really used. This can increase complexity without necessarily improving real security.

The idea of built-in security is sound. In theory, we should consider security throughout the development of an IT system, not just bolt it on at the end. But this can easily become a new driver of complexity. When built-in security is implemented as many extra frameworks, tools, and rules, the security requirements and measures themselves become what increases complexity. More security controls can mean more configuration, more policies, more integration points. That, again, increases the chance that someone makes a mistake.

This creates a kind of loop: new threats or incidents lead to new security measures, which make systems more complex, which makes it easier for people to make mistakes, which leads to new vulnerabilities and new measures. If we ignore the role of complexity, we risk ending up with systems that have more and more security features on paper, but are harder and harder to understand and operate safely in practice.

To actually improve security, we have to see complexity as a risk in itself. Security measures and requirements should be evaluated not only on how they protect against threats, but also on how much complexity they introduce and how likely they are to cause new errors. Otherwise, we risk building IT systems where the very security controls that were supposed to protect us become part of the problem.
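One way to make this evaluation explicit is to score each proposed control on both its estimated protective benefit and the complexity it introduces, and compare the two. The sketch below is hypothetical: the control names, scores, and the weighting scheme are made up for illustration, not a real assessment method:

```python
from dataclasses import dataclass

@dataclass
class Control:
    # All names and scores here are illustrative placeholders.
    name: str
    threat_reduction: float   # estimated protective benefit, 0..1
    added_complexity: float   # estimated complexity introduced, 0..1

def net_value(c: Control, complexity_weight: float = 1.0) -> float:
    """Toy evaluation: benefit minus weighted complexity cost.
    A control whose complexity cost outweighs its benefit scores
    below zero -- a candidate for simplification or rejection."""
    return c.threat_reduction - complexity_weight * c.added_complexity

controls = [
    Control("network segmentation", threat_reduction=0.6, added_complexity=0.3),
    Control("extra approval workflow", threat_reduction=0.2, added_complexity=0.5),
]

for c in sorted(controls, key=net_value, reverse=True):
    print(f"{c.name}: net value {net_value(c):+.1f}")
```

The point is not the arithmetic but the discipline: a control that protects against a threat while making the system substantially harder to understand and operate may be a net loss.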
