When the Centers for Medicare & Medicaid Services (CMS) announced its vision to modernize Medicare program integrity, Administrator Seema Verma highlighted the agency’s interest in innovative strategies involving machine learning and artificial intelligence.
Executive Order Directs HHS to Use AI to Detect Fraud and Abuse
The announcement came earlier this month and followed an Executive Order by President Trump that urged the Secretary of Health and Human Services (HHS) to direct “public and private resources toward detecting and preventing fraud, waste, and abuse, including through the use of the latest technologies such as artificial intelligence.”
Medicare Fraud Estimated between $21 and $71 Billion Annually
Medicare fraud, waste, and abuse costs CMS and taxpayers billions of dollars.
Artificial intelligence and machine learning could make oversight more cost-effective and less burdensome, and can strengthen the existing predictive systems designed to flag fraud.
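To make the idea of a predictive system that flags fraud concrete, here is a minimal sketch using a simple statistical outlier score. The provider names, billing figures, and threshold are all illustrative assumptions, not real CMS data or any agency's actual method; production systems would use far richer features and models.

```python
from statistics import mean, stdev

# Hypothetical per-provider monthly billing totals (dollars).
# Names and figures are illustrative, not real claims data.
claims = {
    "provider_a": 12_400,
    "provider_b": 11_900,
    "provider_c": 13_100,
    "provider_d": 12_700,
    "provider_e": 61_500,  # unusually high total
}

def flag_outliers(totals, threshold=1.5):
    """Flag providers whose billing total sits more than
    `threshold` standard deviations above the mean."""
    values = list(totals.values())
    mu, sigma = mean(values), stdev(values)
    return [p for p, v in totals.items() if (v - mu) / sigma > threshold]

print(flag_outliers(claims))  # → ['provider_e']
```

A z-score over raw totals is only a toy: with small samples a single large outlier inflates the standard deviation, which is why the threshold here is set below the conventional 2.0. Real fraud-detection pipelines layer many such signals and route flagged cases to human investigators.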
HHS Among Largest Data Producers in the World
In order to understand the potential for AI, CMS also recently issued a Request for Information asking, among other things, if AI tools are being used in the private sector to detect fraud and how AI can enhance program integrity efforts.
But the promise of AI isn’t just in the CMS data. It’s also in the behaviors of those looking to commit fraud.
According to Jeremy Clopton, director at accounting consultancy Upstream Academy and an Association of Certified Fraud Examiners faculty member, the risk of fraud is often described as having three key factors: a perceived pressure or financial need, a perceived opportunity, and a rationalization of the behavior.
While high-profile incidents grab international headlines, the reality is that every organization is vulnerable to insider threats. On average, insider threat incidents cost almost $9 million and take more than two months to contain; they include issues ranging from careless workers and disgruntled employees to workplace violence and malicious insiders.
As part of National Insider Threat Awareness Month this September, the National Counterintelligence and Security Center (NCSC) is reminding companies of the need for strong insider threat protection programs and the signs to look for with existing employees.
Look for These Concerning Behaviors
William Evanina, who heads the NCSC, shares that individuals engaged in or contemplating insider threats display “concerning behaviors” before acting. Warning signs include financial difficulty or an unexplained, extreme change in finances, and job performance problems.
Early Detection Technologies
The Center suggests deploying solutions for monitoring employee actions, correlating information from multiple data sources, providing tools for employees to report concerning or disruptive behavior, and monitoring social media.
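The correlation step above can be sketched in a few lines. This is an illustrative toy, not any vendor's actual API: the event shapes, IDs, and two-alert review rule are assumptions chosen to show how alerts from separate monitoring sources might be joined by employee.

```python
from collections import defaultdict

# Hypothetical (employee_id, alert) events from three separate sources.
badge_events = [("e101", "after-hours access"), ("e102", "after-hours access")]
dlp_events = [("e101", "bulk file download")]
hr_reports = [("e103", "disruptive behavior report")]

def correlate(*sources):
    """Group alerts by employee across all sources and surface
    employees with two or more alerts for human review."""
    by_employee = defaultdict(list)
    for source in sources:
        for employee_id, alert in source:
            by_employee[employee_id].append(alert)
    return {emp: alerts for emp, alerts in by_employee.items() if len(alerts) >= 2}

print(correlate(badge_events, dlp_events, hr_reports))
# → {'e101': ['after-hours access', 'bulk file download']}
```

The design point is simply that no single source is alarming on its own; value comes from joining them, which is why the guidance stresses correlating disparate data before escalating to a human.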
A report from Accenture found that while advanced identification, security intelligence, and threat-sharing technologies are widely adopted, automation, AI, and machine learning are used by only about 40 percent of companies.
Cost Savings from AI Automation
According to the same report, once investment costs are considered, AI automation could offer the highest net savings, about $2 million, and begin to address the shortage of skilled security staff.
Following the Common Sense Guide to Mitigating Insider Threats
Companies looking to follow the CERT National Insider Threat Center’s guidelines should consider how the Radiance platform can help with monitoring social media, correlating disparate information, and providing a tool for employees to report concerning behaviors.
The OS-INT platform monitors all publicly available information across the entire deep web, not only social media. It can also ingest massive amounts of unstructured content from disparate internal data sources for further correlation and verification.
The HUM-INT platform, known as S4, is a mobile application that allows users to confidentially report concerns in real time. It can be configured as a workplace tool, with a centralized management portal that allows clients to access real-time threats to geo-fenced facilities.
It’s not always easy for young people to articulate their problems. A student who regularly attends class and receives good grades could also be fighting an addiction. A teen constantly smiling for Instagram photos could actually be depressed. For friends and family of the person struggling, recognizing the warning signs of distress might not come easily.
Artificial intelligence can act as a voice for people dealing with various internal issues. It can also notify loved ones or even officials when a person needs help. The following two stories serve as examples of potential tragedies that could have been avoided with artificial intelligence:
Using Artificial Intelligence to Fight Cyber Bullying
Hailey was in her dorm room staring at her phone. A stranger had posted another fake story about her. Hailey knew if she reported it, the imposter would just create a new account or use a website that allows anonymous posts.
Hailey is one of the more than 20% of college students who experience cyberbullying. She struggled with bullying and depression throughout her first two years of college before her friends and family were able to help her. With artificial intelligence, she could have gotten help much sooner: as soon as the menacing messages appeared, predictive analytics paired with human analysis could have flagged the harassment and combated it far earlier.
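The "predictive analytics paired with human analysis" workflow can be illustrated with a deliberately simple sketch. This is not Lumina's actual system; the lexicon, scoring rule, and threshold are invented for illustration. Real systems use trained language models, but the triage pattern is the same: machines score at scale, humans review what is flagged.

```python
# Illustrative lexicon of harassment cues (assumed, not a real product's).
HARASSMENT_TERMS = {"fake", "loser", "nobody likes", "worthless"}

def score_message(text):
    """Count how many lexicon terms appear in the message."""
    lowered = text.lower()
    return sum(1 for term in HARASSMENT_TERMS if term in lowered)

def triage(messages, threshold=1):
    """Split messages into (needs_human_review, ignore) by score."""
    review, ignore = [], []
    for msg in messages:
        (review if score_message(msg) >= threshold else ignore).append(msg)
    return review, ignore

review, ignore = triage([
    "Nobody likes you, you're such a fake",
    "See you at practice tomorrow!",
])
print(review)  # → ["Nobody likes you, you're such a fake"]
```

Keyword matching alone produces false positives and misses sarcasm or coded language, which is exactly why the human-analysis half of the pairing matters.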
Catch Suicidal Tendencies Early with Artificial Intelligence
Ana had been a star student in high school. She held a part-time job, ran track and was in a serious relationship. During her freshman year of college, she became increasingly depressed. One night she texted heart emojis to all her friends, wrote a goodbye letter to her parents, and attempted suicide. Ana’s friends found her and called 911 in time.
While she was lucky, suicide has risen to become the second-leading cause of death in Ana’s age group. Ana, and so many others like her, could have received help and treatment as soon as predictive analytics powered by artificial intelligence flagged her online searches and habits as possible signs of suicidal ideation.
As mental health problems become more common, and troubling behavior migrates online where it is harder to identify using traditional methods, many schools are struggling to adapt. To face these new challenges, innovative solutions are needed.
What if a sophisticated system could immediately alert student services to the problems their students face, as should have happened for Hailey and Ana? The idea of counselors and health care professionals being guided to students’ darkest struggles is not some distant future. It’s possible today thanks to Lumina, a predictive analytics firm that uses artificial intelligence and open-source data to combat some of society’s most pressing issues. Powered by cutting-edge artificial intelligence and human analysis, Lumina’s newest solution can identify harmful behavior online and alert people who can help.
By working with schools, Lumina can help counselors, student services, and even security officers adapt to new digital landscapes related to bullying, mental health, drug misuse, and other challenges. With new threats emerging every day, taking full advantage of artificial intelligence will allow schools to meet these challenges head-on.