Every breath you take

“Insider threat” — it’s a term that gets thrown around a lot in cybersecurity circles. Practitioners want to know who is responsible for attacks and how attacks are perpetrated so defenses can be appropriately implemented and provisioned. The problem with the term “insider threat,” though, is that different individuals, organizations, and media outlets all interpret it differently.

If taken at face value, an “insider threat” is one caused by an insider—an employee who works for or with the organization whose data has been breached or stolen. Access to the affected data may be authorized, or the person may have gained unauthorized access in some manner. But here is where the definition starts to get fuzzy: does “insider threat” imply an intentional act? Are accidental or inadvertent losses and breaches classified in the same manner as targeted attacks by external actors who clearly are not authorized?

Every move you make

Depending on the source you read, insider threats account for anywhere from 60% of all attacks (IBM 2016 Cyber Security Intelligence Index) to 43% (Intel’s Grand Theft Data) to approximately 30% (2015 Verizon DBIR). It’s important to note, though, that even across just these three resources, definitions vary. Both the Intel and IBM reports further break the category down into “intentional/malicious” and “accidental.” The DBIR, however, categorizes “insider” by role type (e.g., executive/management, end user, developer, system admin) and counts stolen credentials as part of the categorization, whereas the other sources do not.

Suffice it to say, any stolen credential used for nefarious purposes is a problem. The issue is bigger than that, however. The vast majority of breaches can be tied back, in some way, to credentials or users. Ransomware? Yup: an insider has to click on or download a link or document for one of those attacks to succeed. Unauthorized network access? Possibly a phished or easily guessed password. A third-party breach? Likely a combination of all of the above.

Additionally, says Scott Lyons, EVP of Business Development at WarCollar Industries, companies continue to be plagued by weak password policies and lax enforcement, both major contributing factors to insider threat. While it may be the end user’s password that is breached, stolen, or manipulated, the end user isn’t responsible for its administration or management; that onus falls on IT’s or security’s shoulders. They’re the ones who must ensure policies are created, rolled out, and enforced. Any way you look at it, says Lyons, asset loss can occur in a variety of ways, all facilitated by the human element.

Every bond you break

Defining “insider threat” is a tricky process because the industry has no standardized agreement about who within an organization is most likely to be negligent and/or has the ability to instigate or participate in a cyber-based crime. That said, Lyons offers that efforts are underway to tie employee behaviors to information security key indicators of compromise (IoCs). Managers, he says, can pay close attention to employees’ patterns of behavior, such as:

  - the typical times an employee arrives at and leaves work;
  - how frequently the person performs duties in the office vs. taking work home at the end of the day;
  - whether the employee tries to access information or systems for which they are not authorized;
  - whether the person is suddenly accessing or downloading large amounts of data, especially if their job requirements haven’t changed; and
  - whether the person regularly logs on to the network during “off” hours.

Observing these patterns of behavior—and noting any variances—can be a key indicator that something is awry.
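As a toy illustration of how such behavioral patterns might be turned into reviewable flags, consider comparing one day of a user’s activity against a per-user baseline. Everything below—the baseline values, field names, and thresholds—is hypothetical and not drawn from any of the cited reports:

```python
# Toy behavioral-flag check against a per-user baseline.
# All names, numbers, and thresholds here are hypothetical.
BASELINE = {
    "alice": {"work_hours": (8, 18), "avg_daily_mb": 120.0},
}

def flag_event(user, login_hour, downloaded_mb, baseline=BASELINE):
    """Return behavioral flags for one day of a user's activity.

    A flag is not proof of malice, only a deviation worth reviewing.
    """
    profile = baseline.get(user)
    if profile is None:
        return ["no baseline for user"]
    flags = []
    start, end = profile["work_hours"]
    if not (start <= login_hour < end):
        flags.append("off-hours access")
    if downloaded_mb > 5 * profile["avg_daily_mb"]:
        flags.append("unusual data volume")
    return flags

print(flag_event("alice", 2, 900.0))  # a 2 a.m. login and a large download both raise flags
```

The point of the sketch is the shape of the check, not the thresholds: each flag maps to one of the behavioral variances above, and the output is a prompt for human review rather than an accusation.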

Every step you take

The question is, asks Lyons, how do leaders see all of the IoCs across their workforce? How does a manager identify whether an insider threat truly exists? The answer, he says, is “for information security thought leaders to establish an enterprise-based life cycle which includes the following steps:

  1. Create a baseline
  2. Generate awareness
  3. Intervene
  4. Mitigate
  5. Monitor”

Source: http://csrc.nist.gov/organizations/fissea/2012-conference/presentations/fissea-conference-2012_mahoutchian-and-gelles.pdf

I’ll be watching you

Unfortunately, it’s hard to pinpoint precisely which type or category of insider is most negligent and most likely to give away the data farm. Yes, there are reports claiming executives are the riskiest job category because they travel frequently and sit high enough in the organization to define policy rather than abide by it when it’s inconvenient. Others claim security teams must keep a hawk’s eye on system admins because they hold the keys to the information kingdom. Still others say to watch for employees who appear disgruntled or are experiencing certain personal problems. These broad generalizations, though, can lead to wild goose chases, unfairly discriminating against employees based on their role or personality rather than on concrete actions or behaviors.


A better method of determining the biggest insider threats in your organization is to understand your environment. This means knowing what data you have, where it’s located, and who is accessing it and how frequently, and putting proper protections around it. On top of that, understand employees’ behaviors and patterns. Start with a baseline, then watch for and investigate deviations from the norm. Don’t automatically assume that a change in behavior indicates malicious activity; monitor, investigate, and then intervene when appropriate. Not every insider threat can be eliminated—as humans, we all sometimes make mistakes—but intentional acts can be reduced through a commitment to a well-defined, standardized, and communicated process.
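The “baseline, then deviations” idea can be sketched numerically. A minimal illustration, assuming a per-user history of daily download volumes; the sample numbers and the 3-sigma cutoff are hypothetical, not a recommended policy:

```python
import statistics

def deviation_score(history, today):
    """Standard score of today's activity against the user's own history."""
    mean = statistics.mean(history)
    spread = statistics.pstdev(history) or 1.0  # guard against zero spread
    return (today - mean) / spread

# Hypothetical daily download volumes (MB) for one employee.
history = [100, 110, 95, 105, 90]
score = deviation_score(history, 800)
if score > 3:  # illustrative cutoff: flag for review, don't assume malice
    print(f"investigate: {score:.1f} sigma above baseline")
```

Because each user is scored against their own history, the check targets concrete changes in behavior rather than a role or personality profile, which is exactly the distinction the preceding paragraphs argue for.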