Cybersecurity for MSPs: The Human Factor

Humans are both the weakest and the strongest link in any cybersecurity program. That duality comes down to independent thought. While technical and administrative safeguards are relatively binary (they either exist and function or they don't), people are fallible. Even the most well-intentioned person can do the wrong thing for the right reasons, or may not even realize they're doing something wrong. 

It's a losing proposition to try to troubleshoot individual human cybersecurity failings. There are practically infinite causes and just as many points of resolution. The best you can do is manage your organization's human factor appropriately and give people the tools they need to behave correctly. There are both positive and negative incentives for doing that, and a combination of the two represents best practice for employee management. 

In this article, we’ll quickly cover the strengths and weaknesses of the human cybersecurity element. Building on that, we’ll cover what you can do to cultivate that element. Finally, we’ll address negative reinforcement, discuss what security frameworks say about it, and offer some considerations for using it effectively.

Human Cybersecurity – Strengths and Weaknesses

People are nowhere near as accurate or consistent as information systems. They also lack the robustness of immutable policies. While those can be seen as limitations, they’re also what drives human greatness. The flexibility of the human mind lets it adapt quickly to any situation in a way that information systems and policies can’t. 

That flexibility can also turn to malleability if leveraged by a threat actor. Information systems can be locked down against all but the most sophisticated attacks using policies and input validation. Humans can’t be, which is why social engineering attacks are so effective. One type of social engineering attack, phishing, has been so successful that it has been the top attack vector for the past several years.

If trained appropriately and given the appropriate resources, however, people can serve as an accurate and quick early warning system. If a person knows what a social engineering or phishing attack looks like, they can report attacks that information systems may miss. 

People also know the expected behavior of systems. Automated information security defense infrastructure can be said to know expected system behavior too. However, automated infrastructure can’t as easily recognize that a computer is “performing badly” or “acting weirdly” the way a human user can. 

Put differently, automated information security infrastructure is only as good as its models. If a model isn’t accurate and tuned to its environment, unexpected behavior either: 1) won’t generate alerts, missing threats, or 2) will generate too many alerts, producing false positives that cause analysts to miss threats. 
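That threshold tradeoff can be shown with a toy sketch. The anomaly scores and the 0-to-1 scale below are invented for illustration, not drawn from any real detection tool:

```python
# Toy example of alert-threshold tuning: a high threshold misses real
# threats, while a low one floods analysts with false positives.
# (anomaly score, actually malicious?) pairs are illustrative assumptions.
events = [
    (0.95, True),
    (0.60, True),
    (0.55, False),
    (0.20, False),
]

def alert_outcomes(threshold: float) -> tuple[int, int]:
    """Return (missed_threats, false_positives) at a given alert threshold."""
    missed = sum(1 for score, bad in events if bad and score < threshold)
    false_pos = sum(1 for score, bad in events if not bad and score >= threshold)
    return missed, false_pos

quiet = alert_outcomes(0.9)   # strict threshold: misses the 0.60 threat
noisy = alert_outcomes(0.3)   # loose threshold: alerts on benign activity
```

Neither setting is "correct" in the abstract; the right threshold depends on the environment, which is exactly why an untuned model fails in one direction or the other.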

People can recognize inappropriate behavior and determine whether or not it’s reportable. People innately understand patterns, including patterns of behavior, and consequently know when those patterns are disrupted. While humans will both under- and over-report, they can be a solid supplement to automated systems. They may mean the difference between detecting a threat and not detecting a threat. 

How to Cultivate the Human Element

So, you have this legion of unreliable and fickle components to a cybersecurity program. What do you do with them?

You play to their strengths. Where people are flexible, adaptable, behavioral, and pattern-focused, you can use those qualities to build a robust cybersecurity early warning system. Building a cybersecurity program that accounts for that can be relatively straightforward. There are some common programmatic elements you can put into place to enable and promote human cybersecurity.

Training – Tangibility

At a minimum, you should provide your organization’s staff with training. Unfortunately, in addition to some of the weaknesses above, you also need to consider boredom. If your training is dry and uninteresting, it won’t be effective. 

Training can be engaging and, therefore, effective, even without a substantial budget. Think about making training tangible. Practically every industry has had one or more data breaches or significant cybersecurity events at this point. 

Talk about those events in training. Where you can, highlight what went wrong and the impacts of the incident. For example, if you’re in healthcare, the average downtime resulting from a cyber incident is more than a month. That’s a month of not being able to treat patients effectively and do the work that makes healthcare so critical. 

In finance, downtime can prevent people from accessing their money or maximizing the value of that money. In education, students can’t learn. In the non-profit space, services can’t be delivered. 

Those kinds of events mean two things: 1) disruptions in mission-critical service delivery and 2) loss of profits, reputation, productivity, and other mission-critical operations. In short: people don’t get what they need, and that puts the business in jeopardy. 

Training – Patterns

Humans spot patterns. We’re better than automated systems at doing so. We’re so good at it that we’ll make patterns up out of nothing. 

Leveraging that pattern recognition is critical to cultivating human cybersecurity. One way to do that is by teaching the elements of spotting a phishing email. Some of the top tips are:

  • Look for an unexpected email address in the email header,

  • Evaluate the contents for misspellings, 

  • Question the urgency of the request, 

  • Think about whether or not the email is an expected subject coming from an expected source, and

  • If there are links or attachments, validate that you should be receiving that content, and only open it if you are certain of its validity. 
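The tips above could even be sketched as a rough scoring heuristic. The keyword lists and trusted-domain set below are illustrative assumptions, not a vetted detection ruleset, and a real filter would be far more sophisticated:

```python
import re

# Hypothetical heuristic mirroring the phishing-spotting tips above.
# Domains and keywords are invented examples, not real rules.
TRUSTED_DOMAINS = {"example.com", "partner.example.org"}  # assumed allow-list
URGENCY_WORDS = {"urgent", "immediately", "act now", "verify your account"}

def phishing_score(sender: str, subject: str, body: str) -> int:
    """Count how many heuristic red flags an email trips (0 = none)."""
    flags = 0
    # 1) Unexpected sender: domain not on the allow-list.
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        flags += 1
    # 2) Urgency cues in the subject or body.
    text = f"{subject} {body}".lower()
    if any(word in text for word in URGENCY_WORDS):
        flags += 1
    # 3) Links present: a cue to validate before clicking.
    if re.search(r"https?://", body):
        flags += 1
    return flags

suspicious = phishing_score(
    "it-support@examp1e-login.net",
    "URGENT: verify your account",
    "Click http://examp1e-login.net/reset within 24 hours.",
)
benign = phishing_score("alice@example.com", "Lunch", "See you at noon.")
```

The point isn’t the code itself but the checklist it encodes: each flag is something a trained human can evaluate in seconds.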

Adding more information and objectives can also be good, but you want to balance information saturation with utility. After a point, there will be diminishing returns by providing additional information. 

The same exercise can be done for social engineering attacks broadly as well as potential signs of a cyberattack. People can—and will—take an alert and sophisticated approach toward potential attack modalities if they’re given the tools and support to do so. 

Training – Exercises
Exercise is critical to building muscle memory for handling incidents. The idiom “practice makes perfect” exists for a reason. As with pattern recognition, rote recall of processes and actions builds comfort and facility with those processes. 

There are many forms exercises can take:

  • System recovery drills

  • Security incident tabletops

  • Major incident management testing

  • Physical security drills

  • Downtime or building closure drills

These are just a few examples of exercises that make implementing an emergency process straightforward and effective. 

Exercises should cover both the processes and the operational effectiveness of those processes. Not only should there be a discussion of what should be done, but there should also be a “live-action” implementation of those events. Obviously, that’s going to be more difficult for some exercises than others, but the more actual experience that can be injected into the exercise, the better. 

Tools and Empowerment

To tune staff responses to cyberattacks and other information security incidents, they need more than training. They need tools and empowerment. 

Tools can be the easy part of this domain. Think about how staff engage with threats and how they can report them quickly. For example, if staff receive phishing emails, they should have a quick and easy button to click to report the phishing. So too with social engineering attacks: call center staff should have a way to escalate issues they believe to be a potential attack or data harvesting exercise. 
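In its simplest form, that one-click report is just a single intake call. A minimal sketch, assuming an in-memory queue (a real deployment would forward reports to the security team's ticketing or SOAR tooling):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PhishReport:
    """One staff-submitted phishing report; names here are illustrative."""
    reporter: str
    message_id: str
    received: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Stand-in for the security team's intake queue.
security_queue: list[PhishReport] = []

def report_phishing(reporter: str, message_id: str) -> PhishReport:
    """The single call a mail-client button could make: queue and acknowledge."""
    report = PhishReport(reporter, message_id)
    security_queue.append(report)
    return report

report_phishing("staff@example.com", "<msg-123@mail.example.com>")
```

The design point is the low friction: one click, no forms, no judgment call about severity required from the reporter.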

With respect to other threats, they should have contact information to get information to security analysts to kick off incident response as quickly as possible. The quicker and easier staff can contact the security office, the better. 

What’s more difficult is staff empowerment. Staff need to be able to say: “This is a threat.” More than that, they need to be able to do so without fear of reproach. In the call center example, successful identification of a social engineering attack means stopping the theft of credentials or other information. A false positive identification means ending a session with a user or customer before they receive the help they need. 

There’s obviously a tipping point between the two, and striking a balance depends on organizational operations and needs. A thoughtful approach to information security and risk management means finding that tipping point and successfully coaching management and staff on how to balance it. 

Empowerment also means delegation. Letting the people closest to a threat identify and evaluate it means a greater likelihood of mitigating the threat in a timely manner. Conversely, the higher the escalation and the more removed the decision-maker is from the threat, the lower the likelihood of timely mitigation. 

Again, there’s a balance to be struck, and that will depend on organizational needs. Delegation means that more people get to call an incident. That increases the probability of false positives. Some organizations may determine that to be an unacceptable proposition. Others may think not delegating is an unnecessary risk. 

One thing is certain: if staff aren’t supported when they report risks, then risks will go unreported. Some potential risks will certainly still be reported, but not all of them. In that way, an organization can impede its own risk management and mitigation function. That’s a risk unto itself, and an arguably unnecessary one. 

If staff are actively penalized for reporting risk, however, then no risks will be reported internally. That’s not to say risks won’t be reported, just that the reporting won’t be internal. That’s why whistleblower protections exist. So if your organization is considering penalizing risk reporting, or already does, it really needs to evaluate whether externalizing its risk reporting is desirable.

Reinforcement and Punishment
Where punitive measures for reporting risk are undesirable, punitive measures for engaging in risky behavior can be helpful. People respond much better to reinforcement than to punishment. A useful distinction: reinforcement promotes a behavior, while punishment dissuades one. 

Reinforcement, even negative reinforcement, can be used very effectively in the corporate environment to stave off attacks. Phishing exercises are a great example. Notifying staff who fail a phishing exercise and requiring them to complete training or speak with a manager or security staff member goes a long way. Similar measures can be undertaken for other activities. 

Where reinforcement doesn’t work, you may need to resort to punishment. Punishment, especially in the corporate space, carries numerous detriments: degraded morale, malicious compliance, weaponized ignorance, unresponsiveness, retaliation, data exfiltration, and loss of mission-critical personnel. You need to weigh the risks of punishment against its benefits. 

That being said, continuing with the phishing example, if an employee has been trained about phishing and repeatedly fails phishing exercises (or worse, falls for an actual phishing attack), then punitive measures may be necessary. What those are and how they’re structured depend on organizational structure and operations. 

Inequitable punitive measures, such as treating executives differently, will ultimately undermine the success and impact of a security program. Creating the perception that some staff are privileged and above the rules will eviscerate respect for the rules. 

Final Thoughts
There are many ways to cultivate the human element of any organization into a security watchdog. By relying on the strengths people bring to the table and cultivating those, you can create very effective operational security awareness.

There are some drawbacks to completely relying on people for security safeguards. However, people provide a great supplement to automated infrastructure and an early warning system where that infrastructure fails to identify threats.

You’ll want to think about how you cultivate good behavior and dissuade unwanted behavior. Training, empowerment, and reinforcement consistently prove to work best. Punitive measures applied consistently can help but can also have significant tradeoffs. Whatever method you pursue depends on organizational needs and priorities.