
“Deter, Detect, Mitigate”—that’s the motto David M. Cattler, director of the Defense Counterintelligence and Security Agency, has declared in his public service announcement for September, which marks National Insider Risk Awareness Month.

As its name suggests, National Insider Risk Awareness Month is designed, through conferences and other activities, to educate U.S. policymakers and security professionals about how to identify and counter Insider Threats.

So, what’s an Insider Threat?

That’s a term of art used to describe people who abuse their organizational privileges to sabotage computer networks, leak information (think Private Chelsea Manning), steal data on behalf of foreign powers, commit workplace violence (as in the case of Fort Hood mass shooter Nidal Hasan), or unwittingly create vulnerabilities through inadvertent or negligent actions.

Even when lives are not at stake, these Insider Threats pose a serious problem for organizations, be they military, governmental, or industrial. According to the Ponemon Institute, there were 7,343 reported Insider Threat incidents last year, with each incident costing the affected organization an average of $16.2 million. 

This, of course, is not some new type of threat. During the Cold War, the U.S. government saw its fair share of spies. But following the high-profile incidents involving Hasan and Manning, President Barack Obama signed Executive Order 13587, which directed “structural reforms to ensure responsible sharing and safeguarding of classified information on computer networks that shall be consistent with appropriate protections for privacy and civil liberties.”

This led to the establishment, or strengthening, of Counter-Insider Threat programs—especially within U.S. government agencies and departments that deal with classified information. Influenced by cybersecurity tradecraft, most of these efforts tend to focus on monitoring and analyzing technical indicators—for instance, failed logins, access to restricted data, or suspicious email communications with foreign recipients.

The problem with purely technological approaches, however, is that by the time such alerts fire, it is often too late to prevent or limit the damage.

So why do we persist with these unsatisfying solutions? It’s a bit like the old joke about the drunk who drops his car keys in the dark but searches for them under a streetlamp. When a passing policeman asks why, he replies, “Because the light is better there.”

Likewise, here, the bias toward addressing only technical factors is very strong. In the early 2000s, before I came to Cogility Software, I attended a Counter-Insider Threat conference in Dallas, Texas, where I was likely the only behavioral scientist among the roughly 50 people in attendance.

Even though we reviewed cases that clearly had behavioral/psychosocial contributing factors, the solutions discussed reflected a cybersecurity mindset that focused almost exclusively on identifying technical tripwires.

Meanwhile, there was growing evidence (based on studies and reports by researchers at institutions such as PERSEREC and the CERT Division of Carnegie Mellon’s Software Engineering Institute) that by focusing only on technical factors, an organization has little hope of identifying problematic individuals ahead of time (getting “left of harm”).

Taking inspiration from these studies, my own approach (now being implemented in Cogility’s Counter-Insider Threat product) treats the Insider Threat as a human problem. This means looking not just for technical indicators but at the Whole Person. So, we also look for behavioral indicators, such as hostility toward coworkers, substance abuse, or signs of narcissism.

Tracking various psychosocial concerns like these, along with the customary technical factors—i.e., adopting a sociotechnical Whole Person approach—gives us a longer lead time (weeks or even months) for uncovering an Insider Threat.

Now, one important challenge for advocates of a Whole Person approach is addressing issues about privacy. Indeed, I remember encountering a great deal of resistance when trying to publish papers on Counter-Insider Threat research between 2006 and 2010; peer reviewers complained that the approach would be too invasive.

In response, I wrote a paper1 arguing that the Whole Person approach could be used for positive intervention—to find an “offramp” for individuals on the road to becoming an Insider Threat. The Whole Person approach does not in any way resemble the punitive “pre-crime” strategy depicted in the movie Minority Report. On the contrary, it can be a guide for empathy and assistance.2

Today, the Whole Person strategy is identified as a critical ingredient of effective insider risk programs, representing “best practices.” In the Defense and Intelligence Communities, especially, there is already an expectation that individuals will have to trade off some privacy to better safeguard America’s most prized secrets—and to protect themselves from potential physical harm.

This is why the U.S. government is working with Cogility to further advance a Whole Person approach to insider threat management.

To learn more about our approach, please also check out this white paper.
https://cogility.com/resources/whole-person-c-int-approach-to-get-left-of-harm/


1 Greitzer, FL, DA Frincke, and MM Zabriskie. (2011). “Social/Ethical Issues in Predictive Insider Threat Monitoring.” In: MJ Dark (Ed.), Information Assurance and Security Ethics in Complex Systems: Interdisciplinary Perspectives. Hershey, Pennsylvania: IGI Global. Chapter 7, pp. 132–161.

2 Greitzer, FL. (2019). “Insider Threat: It’s the HUMAN, Stupid!” Proceedings of the Northwest Cybersecurity Symposium, April 8–10, 2019. Article No. 4, pp. 1–8. ACM ISBN 978-1-4503-6614-4/19/04.
