Mitigating Insider Threats: Latest Trends, Best Practices and AI Automation

Insider threat incidents range from data security breaches, which have cost firms like Capital One an estimated $100 to $150 million, to violent threats from disgruntled employees, such as Coast Guard Lieutenant Christopher Hasson, who was arrested after a joint Coast Guard and FBI investigation found he was stockpiling weapons and planning a major attack.

Every Organization is Vulnerable

While these high-profile incidents grab international headlines, the reality is that every organization is vulnerable to insider threats. On average, insider threat incidents cost almost $9 million, take more than two months to contain, and span issues ranging from careless workers and disgruntled employees to workplace violence and malicious insiders.

Consider that between January and June 2019, the healthcare industry had already disclosed 285 incidents of patient privacy breaches, with hospital insiders responsible for 20 percent of them. Similarly, the Verizon 2019 Data Breach Investigations Report found that 34 percent of all breaches were caused by insiders.

Companies are Building Insider Threat Programs, But Want to Invest More

Some 90 percent of organizations feel vulnerable to insider attacks, and 86 percent have or are building an insider threat program. Still, nearly 75 percent of C-level executives do not feel they have invested enough to mitigate the risks associated with an insider threat.

As part of National Insider Threat Awareness Month this September, the National Counterintelligence and Security Center (NCSC) is reminding companies of the need for strong insider threat protection programs and of the warning signs to look for in existing employees.

Look for These Concerning Behaviors

William Evanina, who heads the NCSC, notes that individuals engaged in or contemplating insider threats display “concerning behaviors” before acting.

The CERT National Insider Threat Center, in the latest edition of its Common Sense Guide to Mitigating Insider Threats, identifies these behaviors as including:

  • repeated policy violations;
  • disruptive behavior;
  • financial difficulty or unexplained extreme change in finances; and
  • job performance problems.

Early Detection Technologies

The Center suggests deploying solutions for monitoring employee actions, correlating information from multiple data sources, providing tools for employees to report concerning or disruptive behavior, and monitoring social media.
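
To make the correlation step concrete, the sketch below shows one way weak signals from multiple sources could be rolled up into a per-employee score that triggers human review. It is a minimal illustration only: the indicator names, weights, and threshold are assumptions invented for this example, not published CERT guidance or any vendor's model.

```python
# Minimal sketch of correlating insider-risk indicators from multiple
# sources. Weights and threshold are illustrative assumptions, not
# published CERT guidance.
INDICATOR_WEIGHTS = {
    "repeated_policy_violation": 3,   # e.g., from HR case files
    "unexplained_finance_change": 3,  # e.g., from an employee-reporting tool
    "disruptive_behavior": 2,         # e.g., from manager reports
    "job_performance_problem": 1,     # e.g., from performance reviews
}
REVIEW_THRESHOLD = 5  # hypothetical cutoff for escalating to an analyst


def correlate(events):
    """Aggregate (employee_id, indicator) events into per-employee scores."""
    scores = {}
    for employee_id, indicator in events:
        scores[employee_id] = scores.get(employee_id, 0) + INDICATOR_WEIGHTS.get(indicator, 0)
    return scores


def flag_for_review(scores):
    """Return employees whose combined score merits human attention."""
    return [emp for emp, score in scores.items() if score >= REVIEW_THRESHOLD]


events = [
    ("emp-104", "repeated_policy_violation"),
    ("emp-104", "unexplained_finance_change"),
    ("emp-221", "job_performance_problem"),
]
print(flag_for_review(correlate(events)))  # ['emp-104']
```

In a real program the score would only prioritize which cases an analyst looks at first; the final judgment always belongs to a human.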

Surveys like the one conducted by Crowd Research Partners show that organizations are increasingly using behavior monitoring and similar methods to help with early detection of insider threats.

A report from Accenture found that while advanced identification, security intelligence and threat-sharing technologies are widely adopted, automation, AI and machine learning are now being used by only about 40 percent of companies.

Cost Savings from AI Automation

According to the same report, once investment costs are considered, AI automation could offer the highest net savings, about $2 million, and could begin to address the shortage of skilled security staff.

AI can help detect the risk indicators displayed by those who intend to defraud organizations, without the inherent human bias. It can also help manage the enormous volume of data that must be collected, aggregated, correlated, analyzed and fused across disparate sources.
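
As a rough illustration of the "collect, aggregate, correlate" portion of that pipeline, the sketch below normalizes records from two hypothetical internal sources, a badge log and a DLP alert feed with made-up field names, into a single time-ordered timeline per employee, which is the form an analyst or downstream model would consume.

```python
from datetime import datetime, timezone

# Each source stores timestamps and subject identifiers differently, so
# fusion begins with normalizing everything to one schema. The field
# names here are hypothetical.
def from_badge_log(rec):
    return {
        "subject": rec["employee_id"],
        "time": datetime.fromtimestamp(rec["epoch"], tz=timezone.utc),
        "source": "badge",
        "detail": rec["door"],
    }


def from_dlp_alert(rec):
    return {
        "subject": rec["user"],
        "time": datetime.fromisoformat(rec["iso_time"]),  # offset-aware ISO 8601
        "source": "dlp",
        "detail": rec["rule"],
    }


def fuse(badge_records, dlp_records):
    """Merge normalized events into one time-ordered timeline per subject."""
    events = [from_badge_log(r) for r in badge_records]
    events += [from_dlp_alert(r) for r in dlp_records]
    timelines = {}
    for ev in sorted(events, key=lambda e: e["time"]):
        timelines.setdefault(ev["subject"], []).append(ev)
    return timelines


timelines = fuse(
    badge_records=[{"employee_id": "emp-104", "epoch": 1567296000, "door": "server-room"}],
    dlp_records=[{"user": "emp-104", "iso_time": "2019-09-01T02:15:00+00:00", "rule": "bulk-download"}],
)
print(timelines["emp-104"][0]["source"])  # 'badge' (the earlier of the two events)
```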

Following the Common Sense Guide to Mitigating Insider Threats

Companies looking to follow the CERT National Insider Threat Center’s guidelines should consider how the Radiance platform can help with monitoring social media, correlating disparate information, and providing a tool for employees to report concerning behaviors.

Radiance OS-INT monitors all publicly available information across the entire deep web, not just social media. It can also ingest massive amounts of unstructured content from disparate internal data sources for further correlation and verification.

Radiance’s HUM-INT platform, known as S4, is a mobile application that allows users to confidentially report concerns in real time. It can be configured as a workplace tool, with a centralized management portal that gives clients access to real-time threats at geo-fenced facility locations.
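
The geo-fencing itself is only described at a high level, but the core check is simple enough to sketch. Below is one plausible way to decide whether a reported threat falls inside a facility's fence, using a haversine great-circle distance; the coordinates, radius, and function names are invented for illustration and are not the S4 implementation.

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))


def in_geofence(report_lat, report_lon, facility, radius_m=500):
    """True if a report falls within radius_m of a facility's center."""
    return haversine_m(report_lat, report_lon, *facility) <= radius_m


# Hypothetical facility center and report coordinates.
HQ = (38.8977, -77.0365)
print(in_geofence(38.8990, -77.0340, HQ))  # True: roughly 260 m from center
```

A production system would also handle fences with irregular shapes, but a point-in-radius test like this captures the essential idea.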

Try Radiance for Free Today.

Download our S4 app.

Predicting and Preventing Suicide Through AI

It’s not always easy for young people to articulate their problems. A student who regularly attends class and receives good grades could also be fighting an addiction. A teen constantly smiling for Instagram photos could actually be depressed. For friends and family of the person struggling, recognizing the warning signs of distress might not come easily.

Artificial intelligence can act as a voice for people dealing with internal struggles, and it can notify loved ones, or even officials, when a person needs help. The following two stories are examples of potential tragedies that artificial intelligence could help avert:

Using Artificial Intelligence to Fight Cyber Bullying

Hailey was in her dorm room staring at her phone. A stranger had posted another fake story about her. Hailey knew if she reported it, the imposter would just create a new account or use a website that allows anonymous posts.

Hailey is one of the more than 20 percent of college students who are cyberbullied. She struggled with bullying and depression throughout her first two years of college before her friends and family were able to help her, but she could have gotten help far sooner. Cutting-edge predictive analytics paired with human analysis could have flagged the issue as soon as the menacing messages appeared.

Catch Suicidal Tendencies Early with Artificial Intelligence

Ana had been a star student in high school. She held a part-time job, ran track and was in a serious relationship. During her freshman year of college, she became increasingly depressed. One night she texted heart emojis to all her friends, wrote a goodbye letter to her parents, and attempted suicide. Ana’s friends found her and called 911 in time.

While she was lucky, suicide has risen to become the second-leading cause of death in Ana’s age group. Ana, and so many others like her, could have benefited from help and treatment as soon as predictive analytics powered by artificial intelligence flagged her online searches and habits as possible signs of suicidal tendencies.

Meet Radiance.

As mental health problems become more common and troubling behavior migrates online, where it is harder to identify using traditional methods, many schools are struggling to adapt. Facing these new challenges requires innovative solutions.

What if a sophisticated system could immediately alert student services to the problems their students face, as should have happened for Hailey and Ana? The idea of counselors and health care professionals being guided to students’ darkest struggles is not some distant future. It’s possible today thanks to Lumina, a predictive analytics firm that uses artificial intelligence and open-source data to combat some of society’s most pressing issues. Powered by cutting-edge artificial intelligence and human analysis, Lumina’s newest solution can identify harmful behavior online and alert people who can help.
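
Lumina does not publish its models, but the overall flag-then-review pattern can be sketched. The toy example below surfaces posts containing risk phrases for a counselor to triage; the phrase list is a placeholder, and a production system would rely on trained classifiers, clinical validation, and far richer signals, with every alert still going to a human.

```python
# Toy illustration of a flag-for-human-review pipeline. The phrase list
# is a placeholder, not Lumina's actual method; real systems use trained
# models and clinical review, and alerts are triaged by people.
RISK_PHRASES = ("goodbye letter", "can't go on", "want to disappear")


def triage(posts):
    """Return posts a counselor or analyst should look at first."""
    flagged = []
    for post in posts:
        text = post["text"].lower()
        hits = [phrase for phrase in RISK_PHRASES if phrase in text]
        if hits:
            flagged.append({"post": post, "matched": hits})
    return flagged  # routed to a human reviewer, never acted on automatically


posts = [{"author": "anon", "text": "Wrote my goodbye letter tonight."}]
print(triage(posts)[0]["matched"])  # ['goodbye letter']
```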

By working with schools, Lumina can help counselors, student services, and even security officers adapt to new digital landscapes related to bullying, mental health, drug misuse, and other challenges. With new threats emerging every day, taking full advantage of artificial intelligence will allow schools to meet these challenges head-on.