Businesses are as much at risk from human error as from threat actors. Typos, misconfigurations, and other mistakes can lead to disasters on the same scale as any modern cyberthreat, and even the best technology defenses can only get you so far in managing that risk.
Zero Trust principles are widely regarded as a more effective approach to securing an organization than defense in depth alone (though the two aren’t mutually exclusive). The approach entails defining exactly which user or application has access to which resource, validating identity at each access, and continually verifying that the resulting behavior is acceptable. Nearly every organization has a phased plan for deploying the elements that achieve this, depending on where it is on its adoption path. The technology side of the equation, however, is discrete and largely solvable. The challenge lies with the interface between keyboard and monitor: the human.
User awareness training, great documentation, and effective processes all help, but their success still depends on humans. There is no spellcheck for policy syntax, and command line interface (CLI) entries have no validating algorithms behind them either. So how do you reduce the risk posed by the weakest link in any Zero Trust architecture, which is us? Remove as many opportunities for error as possible.
Much as with attack surface management, a defined series of processes and steps can minimize exposure, reduce risk, and limit the introduction of human error:
- How many commits are made to infrastructure elements?
- What validation controls are in place to avoid introducing a syntax error?
- What is the precedence of a new control or policy, so that it handles traffic or threats along the correct logic path?
- What discovery engine continuously surveys active connections in the environment to build as complete a map as possible of the applications and connected elements?
These questions must be addressed in parallel with the technical proposal for Zero Trust.
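The precedence question in particular lends itself to automated checking: a rule that sits below a broader rule matching the same traffic can never fire, which is a classic human-error outcome of a policy commit. As a minimal sketch (not any vendor's actual validation engine; the `Rule` fields and matching logic here are simplified assumptions), a pre-commit check for shadowed rules might look like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    source: str       # CIDR or "any" (simplified: no subnet overlap math)
    destination: str  # CIDR or "any"
    port: str         # port number or "any"
    action: str       # "allow" or "deny"

def covers(a: str, b: str) -> bool:
    """True if field value a matches everything b matches ("any" covers all)."""
    return a == "any" or a == b

def shadowed(rules: list[Rule]) -> list[Rule]:
    """Return rules that can never match because an earlier rule in the
    ordered list already covers the same traffic (a precedence error)."""
    result = []
    for i, later in enumerate(rules):
        for earlier in rules[:i]:
            if (covers(earlier.source, later.source)
                    and covers(earlier.destination, later.destination)
                    and covers(earlier.port, later.port)):
                result.append(later)
                break
    return result
```

A real engine would also compare CIDR ranges for partial overlap and weigh the actions involved, but even this simple ordering check catches the common case of a specific allow rule added below a blanket deny.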
Fortunately, there have been many advances in management and supporting operational tools over the last several years to assist with this. For instance, Juniper Networks’ Security Director Cloud enables Ops teams to add new firewall policies to the network, which are run through an extensive series of validation algorithms before they are committed. This step ensures that a rule’s syntax doesn’t undermine the network’s security through human error. In addition, every policy carries a hit count, along with deeper insights (e.g., when it was last used, by whom or what, and how often) to facilitate the clean-up and proper deprecation of rules. Asset report pivots and traffic profiles all support operations teams as they work through hard questions and, ultimately, reduce the potential for human error.
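The hit-count and last-used insights described above translate directly into a stale-rule review process. As a rough sketch only (the `hit_count` and `last_hit` record fields are hypothetical, not Security Director Cloud’s actual data model), flagging candidates for deprecation might look like this:

```python
from datetime import datetime, timedelta

def stale_rules(rules, now, max_idle_days=90):
    """Return rules that are candidates for review and deprecation:
    never hit, or not matched within the last max_idle_days."""
    cutoff = now - timedelta(days=max_idle_days)
    return [r for r in rules
            if r["hit_count"] == 0 or r["last_hit"] < cutoff]

# Example: one active rule, one never-hit rule, one long-idle rule.
inventory = [
    {"name": "allow-web",  "hit_count": 1200, "last_hit": datetime(2024, 5, 30)},
    {"name": "legacy-ftp", "hit_count": 0,    "last_hit": datetime(2023, 1, 1)},
    {"name": "old-vpn",    "hit_count": 40,   "last_hit": datetime(2023, 12, 1)},
]
candidates = stale_rules(inventory, now=datetime(2024, 6, 1))
```

The point of automating this step is the same as with syntax validation: a human still decides whether to retire a rule, but the machine narrows the list so the decision isn’t made by scrolling through thousands of entries.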
After all, no one ever means to make mistakes; that’s why they are mistakes. However, keeping human error management in context as you go about your security control selection might be the key to unlocking Zero Trust in your organization and preventing an infrastructure outage or a glaring hole in your defenses.