Predicting and Preventing Health Care Fraud

When the Centers for Medicare & Medicaid Services (CMS) announced its vision to modernize Medicare program integrity, Administrator Seema Verma highlighted the agency’s interest in seeking new innovative strategies involving machine learning and artificial intelligence.

Executive Order Directs HHS to use AI to Detect Fraud and Abuse

The announcement came earlier this month and followed an Executive Order by President Trump which urged the Secretary of Health and Human Services (HHS) to direct “public and private resources toward detecting and preventing fraud, waste, and abuse, including through the use of the latest technologies such as artificial intelligence.”

Medicare Fraud Estimated between $21 and $71 Billion Annually

Medicare fraud, waste, and abuse costs CMS and taxpayers billions of dollars.

In 2018, improper payments represented five percent of Medicare’s $616.8 billion in net costs. And Medicare is estimated to lose between $21 billion and $71 billion per year to fraud, waste, and abuse.

Part of those costs are driven by inefficiencies in trying to identify and flag these issues before, during and after they occur.

For example, clinicians today manually review medical records associated with Medicare claims; as a result, CMS reviews less than one percent of those records.

Artificial intelligence and machine learning could be more cost-effective and less burdensome, and can strengthen the existing predictive systems designed to flag fraud.

HHS Among Largest Data Producers in the World

In order to understand the potential for AI, CMS also recently issued a Request for Information asking, among other things, if AI tools are being used in the private sector to detect fraud and how AI can enhance program integrity efforts.

HHS, which houses CMS, is among the largest data producers in the world, with its healthcare and financial data exceeding petabytes per year, making it the perfect fit for AI and machine learning models.

In fact, researchers at Florida Atlantic University programmed computers to predict, classify and flag potentially fraudulent Medicare Part B claims from 2012-2015, using algorithms to detect patterns of fraud in publicly available CMS data.  The researchers noted they had only “scratched the surface” and planned further trials.
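A minimal sketch of that kind of claims screening, assuming nothing about the researchers’ actual models: flag providers whose billed amount for a procedure deviates sharply from the peer norm. All records, field names and the z-score threshold below are illustrative.

```python
from statistics import mean, stdev

# Hypothetical Part B-style billing records: (provider_id, procedure_code, amount_billed).
claims = [
    ("P001", "99213", 75.0), ("P002", "99213", 80.0),
    ("P003", "99213", 78.0), ("P004", "99213", 82.0),
    ("P005", "99213", 77.0), ("P006", "99213", 81.0),
    ("P007", "99213", 79.0), ("P008", "99213", 310.0),
]

def flag_outliers(claims, threshold=2.0):
    """Flag claims whose billed amount deviates sharply from the peer
    norm for the same procedure code (a simple z-score heuristic)."""
    by_code = {}
    for provider, code, amount in claims:
        by_code.setdefault(code, []).append((provider, amount))
    flagged = []
    for code, rows in by_code.items():
        amounts = [a for _, a in rows]
        if len(amounts) < 3:
            continue  # too little data to establish a baseline
        mu, sigma = mean(amounts), stdev(amounts)
        for provider, amount in rows:
            if sigma and abs(amount - mu) / sigma > threshold:
                flagged.append((provider, code, amount))
    return flagged

print(flag_outliers(claims))  # P008's $310 charge stands out from its peers
```

Real systems would weigh many more signals than billing amounts, but the core idea, comparing a provider against its peers, is the same.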

Just “Scratching the Surface”

But the promise of AI isn’t just in the CMS data. It’s also in the behaviors of those looking to commit fraud.

According to Jeremy Clopton, director at accounting consultancy Upstream Academy and an Association of Certified Fraud Examiners faculty member, the risk of fraud is often described as having three key factors: a perceived pressure or financial need, a perceived opportunity, and a rationalization of the behavior.

To prevent fraud, AI must analyze behavioral data that might indicate the pressure someone is facing and how they could rationalize fraud to deal with it. For example, Clopton notes that someone facing financial pressure might regularly search for articles related to debt relief and could also mention those concerns in emails. AI has made finding these behaviors more efficient.
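The kind of behavioral signal Clopton describes can be illustrated with a toy example: scoring a piece of text against a small lexicon of financial-pressure terms. The lexicon and scoring rule here are hypothetical, not any examiner’s or vendor’s actual method.

```python
import re

# Illustrative (hypothetical) lexicon of financial-pressure terms an analyst
# might look for in email or search activity.
PRESSURE_TERMS = {"debt", "bankruptcy", "overdue", "loan", "collections"}

def pressure_score(text):
    """Return the fraction of lexicon terms that appear in the text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return len(words & PRESSURE_TERMS) / len(PRESSURE_TERMS)

print(pressure_score("Need a loan fast, my account is overdue and deep in debt"))
```

A production system would use far richer language models, but even this sketch shows how unstructured text can be turned into a quantitative risk signal.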

AI, Fraud Detection and the Private Sector

The private sector is already embracing AI for a variety of fraud prevention needs.  Aetna has 350 machine learning models focused on preventing criminals from fabricating health insurance claims.

And, Mastercard Healthcare Solutions recently announced it would also use AI to identify suspicious activity and help its clients detect fraud.

Beyond just healthcare, the use of AI and ML as part of an organization’s anti-fraud programs is expected to almost triple in the next two years, according to the Association of Certified Fraud Examiners.  

And, 55 percent of organizations expect to increase their budgets for anti-fraud technology over the next two years.

Based on the efforts at HHS and CMS, it looks like the Federal Government will be part of the AI-fueled anti-fraud movement.

Learn more about AI-powered Radiance and its risk and fraud sensing capabilities.

Try Radiance for free today.

Ending Human Trafficking Through Education, Awareness and AI

Earlier this month, Florida became the first state to require schools to teach K-12 students about child trafficking prevention. The state ranks third in the nation for reported human trafficking cases, with 767 cases reported in 2018, nearly 20 percent of which involved minors.

A $150 billion industry

While Florida’s program will be the first targeted at youth education, awareness campaigns have become a critical component of the fight against this $150 billion industry, which impacts as many as 40.3 million people annually.

The Department of Homeland Security’s Blue Campaign is one example.  This national public awareness campaign is focused on increasing detection of human trafficking and identifying victims. 

Increasing detection of victims

The campaign works to educate the public, law enforcement and industry partners on recognizing the indicators of human trafficking and responding appropriately to possible cases.

According to DHS, among the potential indicators that a person might be a victim of human trafficking are:

  • Disconnection from family and friends
  • Dramatic and sudden changes in behavior
  • Disorientation and signs of abuse
  • Timid, fearful or submissive behavior
  • Signs of being denied food, water, sleep or medical care
  • Deference to someone in authority or the appearance of being coached on what to say

Finding the perpetrators

While potential indicators for victims are well documented, identifying the perpetrators is more difficult.

Law enforcement points out that traffickers represent every social, ethnic, and racial group and are not only men; women run many established rings.

Cases have even revealed that traffickers are not necessarily always strangers to or casual acquaintances of the victims. Traffickers can be family members, intimate partners, and long-time friends of the victims.

With all these variables in finding the perpetrators, law enforcement is increasingly looking for tools to help combat this lucrative and subversive crime.

“A rare window into criminal behavior”

One tool is the Internet, which provides traffickers with the unprecedented ability to exploit a greater number of victims and advertise services across geographic areas.  It is also a way to recruit victims, especially unsuspecting and vulnerable youth. 

As research conducted in 2011 at the University of Southern California found, online trafficking transactions “leave behind traces of user activity, providing a rare window into criminal behavior, techniques, and patterns.

“Every online communication between traffickers, ‘johns,’ and their victims reveals potentially actionable information for anti-trafficking investigators.”

The study noted the potential for integrating human experts and computer-assisted technologies like AI to detect trafficking online.

AI and human trafficking

Similar research conducted at Carnegie Mellon University looked at how low-level traffickers and organized transnational criminal networks used web sites like Craigslist and Backpage to advertise their victims. The researchers developed AI-based tools to find patterns in the hundreds of millions of online ads and help recover victims and find the bad actors.
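One simple pattern of the kind that research mined for is ads that share a contact number, a proxy for ads posted by the same operator across sites and regions. A minimal sketch, with made-up ad records and a deliberately naive phone regex:

```python
import re
from collections import defaultdict

# Hypothetical scraped ad records: (ad_id, free-text body).
ads = [
    ("a1", "Call 555-0101 for appointments"),
    ("a2", "New in town! 555-0101 anytime"),
    ("a3", "Available now, text 555-0199"),
]

def group_by_phone(ads):
    """Cluster ads that share a contact number, linking posts that
    likely come from the same operator."""
    clusters = defaultdict(list)
    for ad_id, body in ads:
        for phone in re.findall(r"\d{3}-\d{4}", body):
            clusters[phone].append(ad_id)
    return dict(clusters)

print(group_by_phone(ads))  # a1 and a2 share a number, so they cluster together
```

At the scale of hundreds of millions of ads, the same linking idea is combined with image matching, text similarity and other signals.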

Fast forward to today.

In February, the United Nations held a two-day conference focused on using AI to end modern slavery.

The conference brought together researchers, policy makers, social scientists, members of the tech community, and survivors.

One of those researchers – from Lehigh University – is working on a human trafficking project to help law enforcement overcome the challenges of turning vast amounts of data, primarily from police incident reports, into actionable intelligence to assist with their investigations.

Providing better alerts and real risks

Former Federal government officials share the optimism about the power of AI to aid law enforcement in weeding out the criminals and finding the victims.

Alma Angotti, a former enforcement official at the U.S. Securities and Exchange Commission, points to the power of AI to surface key indicators of trafficking from hundreds of thousands of sources, providing better alerts and more reliable signals of real risk.

“For example, law enforcement can look at young women of a certain age entering the country from certain high-risk jurisdictions. Marry that up with social media and young people missing from home, or people associated with a false employment agency or who think they are getting a nanny job, and you start to develop a complete picture. And the information can be brought up all at once, rather than an analyst having to go through the Dark Web.”

To report suspected human trafficking to Federal law enforcement, call 1-866-347-2423.

To get help from the National Trafficking Hotline call 1-888-373-7888 or text HELP or INFO to BeFree (233733).

Learn more about AI-powered Radiance and its risk sensing capabilities for issues like human trafficking.

Adopting AI in the Insurance Industry

While commonly maligned as one of the laggard industries when it comes to technology, insurance companies are increasingly investing in AI and machine learning across all aspects of their business.  

Carriers Investing $5+ million on AI Annually

A study by Genpact found that 87 percent of carriers are investing more than $5 million in AI every year. This is more than both banking (86 percent) and consumer goods and retail companies (63 percent).

Two practical applications for this technology – claims and underwriting fraud – provide opportunities to help solve some of the industry’s biggest challenges.

Solving the Fraud Issue with Tech and AI

Insurance fraud is estimated to cost more than $30 billion every year in the U.S. alone.

And, according to the Coalition Against Insurance Fraud, 2018 marked the third consecutive survey in six years in which insurers reported increasing amounts of fraud. Nearly three-quarters of insurers reported that fraud had increased either significantly or slightly, an 11-point increase since 2014.

On this front, the Coalition’s survey reinforced Genpact’s findings: nearly two-thirds of insurers planned to acquire new technology in the next year for enhanced detection of claims fraud, and another one-third would add technology to address underwriting fraud.

Deloitte further notes that among the areas that will see the greatest impact are fraud detection and risk analysis. Beyond these important use cases, AI and machine learning assist with customer due diligence, augmenting existing processes with analysis of external data sources.

Transformation to ‘Predict and Prevent’

The power of AI, according to experts, is its ability to analyze mass amounts of data from a wide range of sources including previous claims, customer information, and social media, to help combat fraud.

A senior data scientist at AXIS Capital recently noted that 80 percent of internal data is unstructured in the form of PDF and emails, and that AI’s text mining and natural language processing could help reveal core hidden information.  

Additionally, she pointed to AI’s ability to scrape information from the Internet, gathering information in real time to understand evolving risks.
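A toy illustration of the text-mining step she describes: pulling structured fields out of unstructured claim notes with regular expressions. The claim-ID format and note text below are invented, and production NLP would go far beyond pattern matching.

```python
import re

# Unstructured claim-note text; the CLM- ID format is invented for illustration.
note = "Claimant refs CLM-20481; prior payout $4,250.00 on CLM-19963."

def extract_fields(text):
    """Pull claim IDs and dollar amounts out of free text with regexes."""
    claim_ids = re.findall(r"CLM-\d+", text)
    amounts = [float(a.replace(",", ""))
               for a in re.findall(r"\$([\d,]+\.\d{2})", text)]
    return {"claim_ids": claim_ids, "amounts": amounts}

print(extract_fields(note))
```

Once fields like these are extracted, they can be cross-referenced against prior claims to spot repeated payouts or inconsistent histories.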

With these capabilities, AI can transform the industry, and underwriting in particular, into a ‘predict and prevent’ mode.

‘Seismic Impact’ on the Industry

That opinion is shared by others, including E&Y and McKinsey, with the latter reporting that AI “will have a seismic impact on all aspects of the insurance industry.”

McKinsey recommends that as insurers onboard these technologies, they take a multi-pronged strategy that begins with getting smart on AI-related technologies and trends and includes the development and implementation of a long-term technology plan.

Comprehensive Data Strategy

The firm also underscores that AI technology performs best with a high volume of data from multiple sources, and that carriers must develop a comprehensive data strategy.  Internal data will need to be organized in ways that enable and support the agile development of new analytics insights and capabilities. With external data, carriers must focus on securing access to data that enriches and complements their internal data sets. 

The Radiance Solution

Radiance is a powerful tool to help identify potential fraud and other concerns.  It ingests and processes large amounts of unstructured data, providing actionable, prioritized results that can be further refined by entering publicly-available identifiable information.  

Radiance’s deep web listening goes beyond traditional social media monitoring tools, using internally-developed and proprietary algorithms to capture online content relevant to insurance fraud. 

Radiance can apply the same machine learning capabilities against existing legacy databases, integrating those disparate sources and analyzing that data against risk and other industry specific needs.

Try Radiance for free today.

The Increasing Threat to the Global Energy Supply

This month’s attack on Saudi Arabia’s Abqaiq oil processing facility, the world’s largest, which accounts for five percent of global oil supplies, resulted in one of the biggest oil price increases ever recorded.

More importantly, it demonstrated that the world’s energy infrastructure is vulnerable, can be severely disrupted and is an increasingly likely target for bad actors.

Recent Attacks Reinforce the Threat

Other recent examples – of both cyber and physical attacks – reinforce the threat.

In 2008, an alleged cyber attack blew up an oil pipeline in Turkey, shutting it down for three weeks. In 2015, a Distributed Denial of Service (DDoS) attack brought down a section of the Ukrainian power grid for just six hours, but substations on the grid had to be operated manually for months. Another attack in Ukraine occurred just a year later, reportedly carried out by Russian actors. And the Abqaiq facility itself had been the target of a thwarted Al Qaeda suicide bombing in 2006.

Threats to Physical Security

A 2018 report by the United Nations Office of Counter-Terrorism noted that the most intuitive physical threats to critical infrastructure, including the energy sector, involve the use of explosives or incendiary devices, rockets, MANPADS, grenades and arson.

That same report noted that the energy sector has witnessed sustained terrorist activity through attacks perpetrated by Al Qaeda and its affiliates on oil companies’ facilities and personnel in Algeria, Iraq, Kuwait, Pakistan, Saudi Arabia and Yemen.

Increasing Intensity of DDoS Attacks

In addition to physical threats, it is estimated that by 2020 at least five countries will see foreign hackers take all or part of their national energy grid offline through Permanent Denial of Service (PDoS) attacks. And DDoS attacks like those in Ukraine are becoming increasingly severe. Studies show that the total number of DDoS attacks decreased by 18 percent year-over-year in Q2 2017, yet the average number of attacks per target increased by 19 percent.

U.S. is the “Holy Grail”

Disruption of the U.S. power grid is considered the “holy grail,” and experts predict that the energy industry could be an early battleground, not only the power sector, but the nation’s pipelines and the entirety of the supply chain. 

In fact, last year the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI) publicly accused Russia of cyberattacks on small utility companies in the United States. In a joint Technical Alert (TA), the agencies said Russian hackers conducted spear-phishing attacks and staged malware in control rooms with the goal of gathering data to inflict serious harm on critical U.S. infrastructure.

900 “Vulnerabilities” Found in the U.S. Energy Systems

This specific incident aside, DHS’s Industrial Control Systems Cyber Emergency Response Team found nearly 900 cybersecurity vulnerabilities in U.S. energy control systems between 2011 and 2015, more than in any other industry. It’s not surprising that the international oil sector alone increased investment in cyber defenses by $1.9 billion in 2018.

Investment in Physical Security Will Reach $920 billion

With any disruption to the global or national energy supply having serious implications for virtually all industries, especially critical ones like healthcare, transportation, security, and financial services, one report projects that the global critical infrastructure protection market will be worth $118 billion by 2028.

Physical security is expected to account for the highest proportion of spending, and cumulatively will account for $920 billion in investment.

Artificial Intelligence: A Security “Pathway” for the Future

Experts suggest that these investments should include next generation technologies for both physical and cyber security purposes. As one expert put it: “Automation, including via artificial intelligence, is an emerging and future cyber security pathway.”

In addition to the role that automation, artificial intelligence and machine learning can play in identifying and predicting a physical or cyber attack, research shows they can also help manage the rising costs associated with one. A study found that only 38 percent of companies are investing in this technology, even though, after initial investments, it could represent net savings of $2.09 million.

Learn more about AI-driven Radiance and how it can help identify and predict physical and cyber threats to the energy infrastructure.

Mitigating Insider Threats: Latest Trends, Best Practices and AI Automation

Insider threat incidents range from data security breaches, which have cost firms like Capital One as much as $100 to $150 million, to violent threats from disgruntled employees, as in the case of Coast Guard Lieutenant Christopher Hasson, who was arrested after a joint Coast Guard and FBI investigation found he was stockpiling weapons and seeking to launch a major attack.

Every Organization is Vulnerable

While these high-profile incidents grab international headlines, the reality is that every organization is vulnerable to insider threats. On average, insider threats cost almost $9 million, take more than two months to contain and include issues related to careless workers, disgruntled employees, workplace violence and malicious insiders.

Consider that between January and June 2019, the healthcare industry had already disclosed 285 incidents of patient privacy breaches, with hospital insiders responsible for 20 percent of the incidents. Similarly, the Verizon 2019 Data Breach Investigations Report found that 34 percent of all breaches were caused by insiders.

Companies are Building Insider Threat Programs, But Want to Invest More

Some 90 percent of organizations feel vulnerable to insider attacks, and 86 percent have or are building an insider threat program. Still, nearly 75 percent of C-level executives do not feel they have invested enough to mitigate the risks associated with an insider threat.

As part of National Insider Threat Awareness Month this September, the National Counterintelligence and Security Center (NCSC) is reminding companies of the need for strong insider threat protection programs and the signs to look for with existing employees. 

Look for These Concerning Behaviors

William Evanina, who heads the NCSC, shares that individuals engaged in or contemplating insider threats display “concerning behaviors” before acting.

The CERT National Insider Threat Center, in the latest edition of its Common Sense Guide to Mitigating Insider Threats, identifies these behaviors as including:

  • repeated policy violations;
  • disruptive behavior;
  • financial difficulty or unexplained extreme change in finances; and
  • job performance problems.

Early Detection Technologies

The Center suggests deploying solutions for monitoring employee actions, correlating information from multiple data sources, having tools for employees to report concerning or disruptive behavior, and monitoring social media.

Surveys like the one conducted by Crowd Research Partners show that organizations are increasingly using behavior monitoring and similar methods to help with early detection of insider threats.

And, a report from Accenture found that while advanced identification, security intelligence and threat-sharing technologies are widely adopted, automation, AI and machine learning are now used by only about 40 percent of companies.

Costs Savings from AI Automation

According to the same report, once investment costs are considered, AI automation could offer the highest net savings of about $2 million and begin to address the shortage in skilled security staff.

AI can help detect the risk indicators displayed by those who want to defraud organizations, but without the inherent human bias. Additionally, AI can help manage the incredible volume of data that must be collected, aggregated, correlated, analyzed and fused across disparate sources.
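A minimal sketch of that correlation step, assuming nothing about any specific product: fuse events from separate systems into a per-employee indicator score. The feeds, event types and weights below are all hypothetical.

```python
from collections import defaultdict

# Hypothetical event feeds from separate systems, keyed by employee ID.
hr_events = [("e7", "policy_violation"), ("e7", "policy_violation")]
it_events = [("e7", "bulk_download"), ("e2", "bulk_download")]

# Illustrative weights for indicator types like those in the CERT guide.
WEIGHTS = {"policy_violation": 2, "bulk_download": 3}

def correlate(*feeds):
    """Fuse events across disparate sources into a per-employee score."""
    scores = defaultdict(int)
    for feed in feeds:
        for employee, kind in feed:
            scores[employee] += WEIGHTS.get(kind, 1)
    return dict(scores)

print(correlate(hr_events, it_events))  # e7 accumulates indicators across both feeds
```

The value of correlation is visible even at this scale: no single feed makes e7 stand out, but the fused score does.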

Following the Common Sense Guide to Mitigating Insider Threats

Companies looking to follow the CERT National Insider Threat Center’s guidelines should consider how the Radiance platform can help with monitoring social media, correlating disparate information, and providing a tool for employees to report concerning behaviors.

Radiance OS-INT monitors all publicly available information across the entire deep web, not only social media.  And, it can ingest massive amounts of unstructured content from disparate internal data sources for further correlation and verification.

Radiance’s HUM-INT platform, known as S4, is a mobile application that allows users to confidentially report concerns in real time. It can be configured as a workplace tool, with a centralized management portal that gives clients access to real-time threat information for geo-fenced facility locations.

Try Radiance for Free Today.

Download our S4 app.

Assessing Safety Protocols in Public Venues

As the summer draws to a close and students return to campus, schools across the country are incorporating active shooter response training into their procedures and protocols.  The drills are just one component of overall safety preparedness efforts, being undertaken at the state, federal and local levels.

STRONG Ohio Plan Includes Social Media Scans

While response trainings on school campuses have become an increasingly common practice, the focus is even more pronounced in light of the recent mass shooting attacks in Dayton and El Paso.

In response to the shootings in Ohio, Governor Mike DeWine unveiled his STRONG Ohio plan, designed to reduce gun violence. The state created a School Safety Center, which will review school emergency management plans; offer risk, threat and safety assessments; consolidate school safety resources; promote the use of a tip line to anonymously report suspected threats; and scan social media and websites to identify people suggesting acts of violence.

Increased Arrests for Threatening Comments

Increased precautions aren’t just being taken at schools, and for good reason.  Following those tragic events, the FBI ordered a new threat assessment to thwart future mass attacks in the country.

Since that time, more than 25 people have been arrested for making threats to commit mass shootings – and that number does not include the three mall shooting scares in California over the weekend.

Public Venues Enhancing Security and Reviewing Response Plans

Sports venues like the Ravens’ M&T Bank Stadium and Camden Yards, home to the Orioles, announced enhanced security measures in August, and retailers across the country are reviewing their safety procedures, which, as Target noted in a public statement, include team member training, partnerships with law enforcement and the use of technology.

Use of technology is not unique to private corporations. Even before the recent shootings, the FBI issued a request for proposals for a social media early-alerting tool to mitigate multifaceted threats.

Tips for Personal Safety

The Department of Homeland Security offers tips for all of us to follow when we’re in public locations.

  • Be Prepared: Take notice of surroundings and identify potential emergency exits. Be aware of unusual behaviors and report suspicious activities to security or law enforcement.
  • Take Action: If an attack occurs, run to the nearest exit and conceal yourself while moving away from the dangerous activity. If you can’t exit to a secure area, protect yourself by seeking cover.
  • Assist and React: Call 9-1-1, remain alert and stay aware of the situation. Help with first aid when it is safe, and follow instructions once law enforcement arrives.

Part of your preparation can include downloading Lumina’s free See Something Say Something app. It’s a crowd-sourced mobile application that allows users to confidentially report concerns in real time.

You can learn more about S4 and download it here. It’s one part of our comprehensive, AI-driven risk management platform, Radiance.