Predicting and Preventing Health Care Fraud

When the Centers for Medicare & Medicaid Services (CMS) announced its vision to modernize Medicare program integrity, Administrator Seema Verma highlighted the agency’s interest in seeking new innovative strategies involving machine learning and artificial intelligence.

Executive Order Directs HHS to use AI to Detect Fraud and Abuse

The announcement came earlier this month and followed an Executive Order by President Trump which urged the Secretary of Health and Human Services (HHS) to direct “public and private resources toward detecting and preventing fraud, waste, and abuse, including through the use of the latest technologies such as artificial intelligence.”

Medicare Fraud Estimated between $21 and $71 Billion Annually

Medicare fraud, waste, and abuse costs CMS and taxpayers billions of dollars.

In 2018, improper payments represented five percent of Medicare’s $616.8 billion in net costs. And it is estimated that Medicare loses between $21 and $71 billion per year to fraud, waste, and abuse.

Part of those costs is driven by inefficiencies in identifying and flagging these issues before, during and after they occur.

For example, today, clinicians manually review medical records associated with Medicare claims; as a result, CMS reviews less than one percent of those records.

Artificial intelligence and machine learning could be more cost-effective and less burdensome, and could augment the existing predictive systems designed to flag fraud.

HHS Among Largest Data Producers in the World

In order to understand the potential for AI, CMS also recently issued a Request for Information asking, among other things, if AI tools are being used in the private sector to detect fraud and how AI can enhance program integrity efforts.

HHS, which houses CMS, is among the largest data producers in the world, generating petabytes of healthcare and financial data each year, making it a perfect fit for AI and machine learning models.

In fact, researchers at Florida Atlantic University programmed computers to predict, classify and flag potentially fraudulent Medicare Part B claims from 2012 to 2015, using algorithms to detect patterns of fraud in publicly available CMS data.  The researchers noted they had only “scratched the surface” and planned further trials.
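The FAU team’s exact pipeline isn’t described here, but a minimal sketch of a claims-level fraud classifier of this general kind, built with scikit-learn on synthetic stand-in data (every feature name and value below is invented for illustration), might look like:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical claim-level features (all invented for this sketch):
# services billed per day, average payment per service, and the
# share of high-cost procedure codes on the claim.
X = rng.normal(size=(n, 3))
# Synthetic labels: claims with unusually high volume AND payment are
# marked "fraudulent", mimicking the rare-positive class imbalance.
y = ((X[:, 0] > 1.0) & (X[:, 1] > 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(
    n_estimators=100, class_weight="balanced", random_state=0
)
clf.fit(X_train, y_train)

# Rank held-out claims by predicted fraud probability; the top of the
# list would go to human reviewers rather than be auto-denied.
scores = clf.predict_proba(X_test)[:, 1]
flagged = scores.argsort()[::-1][:10]  # indices of the 10 riskiest claims
```

The key design choice is that the model produces a ranked review queue rather than final determinations, which is how a classifier could reduce the burden of manual record review without replacing clinicians.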

Just “Scratching the Surface”

But the promise of AI isn’t just in the CMS data. It’s also in the behaviors of those looking to commit fraud.

According to Jeremy Clopton, director at accounting consultancy Upstream Academy and an Association of Certified Fraud Examiners faculty member, the risk of fraud is often described as having three key factors: a perceived pressure or financial need, a perceived opportunity, and a rationalization of the behavior.

To prevent fraud, AI must analyze behavioral data that might indicate the pressure someone is facing and how they could rationalize fraud to deal with those pressures. For example, he notes that someone facing financial pressures might regularly search for articles related to debt relief and could also mention those concerns in emails. AI has made finding these behaviors more efficient.

AI, Fraud Detection and the Private Sector

The private sector is already embracing AI for a variety of fraud prevention needs.  Aetna has 350 machine learning models focused on preventing criminals from fabricating health insurance claims.

And, Mastercard Healthcare Solutions recently announced it would also use AI to identify suspicious activity and help its clients detect fraud.

Beyond just healthcare, the use of AI and ML as part of an organization’s anti-fraud programs is expected to almost triple in the next two years, according to the Association of Certified Fraud Examiners.  

And, 55 percent of organizations expect to increase their budgets for anti-fraud technology over the next two years.

Based on the efforts at HHS and CMS, it looks like the Federal Government will be part of the AI-fueled anti-fraud movement.

Learn more about AI-powered Radiance and its risk and fraud sensing capabilities.

Try Radiance for free today.

Mitigating Insider Threats: Latest Trends, Best Practices and AI Automation

Insider threat incidents range from data security breaches, which have cost firms like Capital One as much as $100 to $150 million, to violent threats from disgruntled employees, such as the case of Coast Guard Lieutenant Christopher Hasson, who was arrested after a joint Coast Guard and FBI investigation found he was stockpiling weapons and seeking to launch a major attack.

Every Organization is Vulnerable

While these high-profile incidents grab international headlines, the reality is that every organization is vulnerable to insider threats. On average, insider threats cost almost $9 million and take more than two months to contain, and they include issues related to careless workers, disgruntled employees, workplace violence and malicious insiders.

Consider that between January and June 2019, the healthcare industry had already disclosed 285 incidents of patient privacy breaches, with hospital insiders responsible for 20 percent of the incidents.  Similarly, the Verizon 2019 Data Breach Investigations Report found that 34 percent of all breaches were caused by insiders.

Companies are Building Insider Threat Programs, But Want to Invest More

Some 90 percent of organizations feel vulnerable to insider attacks, and 86 percent have or are building an insider threat program.  Still, nearly 75 percent of C-level executives do not feel they have invested enough to mitigate the risks associated with an insider threat.

As part of National Insider Threat Awareness Month this September, the National Counterintelligence and Security Center (NCSC) is reminding companies of the need for strong insider threat protection programs and the signs to look for with existing employees. 

Look for These Concerning Behaviors

William Evanina, who heads the NCSC, shares that individuals engaged in or contemplating insider threats display “concerning behaviors” before engaging in these events.

The CERT National Insider Threat Center, in the latest edition of its Common Sense Guide to Mitigating Insider Threats, identifies these behaviors as including:

  • repeated policy violations;
  • disruptive behavior;
  • financial difficulty or unexplained extreme change in finances; and
  • job performance problems.

Early Detection Technologies

The Center suggests deploying solutions for monitoring employee actions, correlating information from multiple data sources, having tools for employees to report concerning or disruptive behavior, and monitoring social media.

Surveys like the one conducted by Crowd Research Partners show that organizations are increasingly using behavior monitoring and similar methods to help with early detection of insider threats.

And, a report from Accenture found that while advanced identification, security intelligence and threat sharing technologies are widely adopted, automation, AI and machine learning are now being used by about 40 percent of companies.   

Cost Savings from AI Automation

According to the same report, once investment costs are considered, AI automation could offer the highest net savings of about $2 million and begin to address the shortage in skilled security staff.

AI can help detect the risk indicators displayed by those who want to defraud organizations, but without the inherent human bias.  Additionally, AI can help manage the incredible volume of data that must be collected, aggregated, correlated, analyzed and fused across disparate sources.
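As an illustration of the correlation-and-fusion idea (not a description of any particular vendor’s product), an unsupervised anomaly detector can take per-employee features aggregated from multiple log sources and surface outliers for human review. Every feature and value below is invented for the sketch:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Hypothetical per-employee features aggregated from disparate logs
# (all invented): after-hours logins per month, files copied to
# removable media, and recorded policy violations.
normal = rng.normal(loc=[2.0, 1.0, 0.0], scale=1.0, size=(200, 3))
outliers = rng.normal(loc=[15.0, 20.0, 5.0], scale=1.0, size=(5, 3))
activity = np.vstack([normal, outliers])

# Unsupervised detector: no labeled "insider" examples are required,
# which matters because confirmed incidents are rare.
detector = IsolationForest(contamination=0.03, random_state=0)
labels = detector.fit_predict(activity)  # -1 = anomalous, 1 = normal

# Indices flagged for follow-up by a human insider threat analyst.
anomalous = np.where(labels == -1)[0]
```

Because the detector only flags statistical outliers, every flagged case still needs a human analyst to judge intent, which is where the bias-free triage claim above applies.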

Following the Common Sense Guide to Mitigating Insider Threats

Companies looking to follow the CERT National Insider Threat Center’s guidelines should consider how the Radiance platform can help with monitoring social media, correlating disparate information, and providing a tool for employees to report concerning behaviors.

Radiance OS-INT monitors all publicly available information across the entire deep web, not only social media.  And, it can ingest massive amounts of unstructured content from disparate internal data sources for further correlation and verification.

Radiance’s HUM-INT platform, known as S4, is a mobile application that allows users to confidentially report concerns in real time.  It can be configured as a workplace tool, with a centralized management portal to allow clients to access real-time threats to geo-fenced facility locations.

Try Radiance for Free Today.

Download our S4 app.

Predicting and Preventing Suicide Through AI

It’s not always easy for young people to articulate their problems. A student who regularly attends class and receives good grades could also be fighting an addiction. A teen constantly smiling for Instagram photos could actually be depressed. For friends and family of the person struggling, recognizing the warning signs of distress might not come easily.


Artificial intelligence can act as a voice for people dealing with various internal issues. It can also notify loved ones or even officials when a person needs help. The following two stories serve as examples of potential tragedies that could be avoided thanks to artificial intelligence:


Using Artificial Intelligence to Fight Cyber Bullying

Hailey was in her dorm room staring at her phone. A stranger had posted another fake story about her. Hailey knew if she reported it, the imposter would just create a new account or use a website that allows anonymous posts.

Hailey is one of the more than 20% of college students who are cyberbullied. She struggled with bullying and depression throughout her first two years of college before her friends and family were able to help her. With artificial intelligence, she could have gotten help much sooner: as soon as the menacing messages appeared, cutting-edge predictive analytics paired with human analysis could have flagged and combated the issue.
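As an illustration only (not a description of any actual product), a toy text classifier shows how predictive analytics might score messages and route the worrying ones to a human analyst; the training examples and threshold below are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative training set (invented); a real system would train
# on thousands of labeled examples with far richer features.
messages = [
    "nobody likes you just disappear",
    "you are so pathetic everyone laughs at you",
    "great job on the presentation today",
    "want to grab lunch later",
]
labels = [1, 1, 0, 0]  # 1 = abusive, 0 = benign

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(messages)
clf = LogisticRegression().fit(X, labels)

# Score a new post; anything above the threshold is routed to a human
# analyst rather than acted on automatically.
new_post = vectorizer.transform(["everyone laughs at you, just disappear"])
score = clf.predict_proba(new_post)[0, 1]
needs_review = score > 0.5
```

Pairing the score with human review, rather than auto-blocking, reflects the "predictive analytics paired with human analysis" approach described above.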


Catch Suicidal Tendencies Early with Artificial Intelligence

Ana had been a star student in high school. She held a part-time job, ran track and was in a serious relationship. During her freshman year of college, she became increasingly depressed. One night she texted heart emojis to all her friends, wrote a goodbye letter to her parents, and attempted suicide. Ana’s friends found her and called 911 in time.

While she was lucky, suicide has risen to become the second-leading cause of death among Ana’s age group. Ana, and so many others like her, could have benefited from help and treatment as soon as predictive analytics powered by artificial intelligence flagged her online searches and habits as possible suicidal tendencies.


Meet Radiance.

As mental health problems become more common, and troubling behavior migrates online where it is harder to identify using traditional methods, many schools are struggling to adapt. To face these new challenges, innovative solutions are needed.

What if a sophisticated system could immediately alert student services to the problems their students face, as it should have for Hailey and Ana? The idea of counselors and health care professionals being guided to students’ darkest struggles is not some distant future. It’s possible today thanks to Lumina, a predictive analytics firm which uses artificial intelligence and open-source data to combat some of society’s most pressing issues. Powered by cutting-edge artificial intelligence and human analysis, Lumina’s newest solution can identify harmful behavior online and alert people who can help.

By working with schools, Lumina can help counselors, student services, and even security officers adapt to new digital landscapes related to bullying, mental health, drug misuse, and other challenges. With new threats emerging every day, taking full advantage of artificial intelligence will allow schools to meet these challenges head-on.