Advancing Public Safety and Protecting Privacy

Lumina Testifies Before the Florida House of Representatives

“How do we leverage the power of technology without sacrificing constitutional liberties? How do we ensure we are doing everything we can to keep our communities safe without turning our society into the Minority Report?”

These were the opening questions posed by Florida State Representative James Grant at a recent hearing of the Florida House of Representatives focused on using technology to advance public safety and privacy.

Lumina joined a panel of expert witnesses to answer these and other questions from members of the Criminal Justice Subcommittee.  

In addition to Lumina’s Doug Licker and Jessica Dareneau, other panelists included Dr. Russell Baker, CEO & Founder of Psynetix, and Wayne A. Logan, a professor of law at Florida State University.

No Standardized Methodology

Beginning on the issue of using technology to keep communities safe, Psynetix’s Baker noted that the signs of violence or potential terrorism are often missed because there is no standardized methodology to collect, report and disseminate crucial information indicative of these potential acts – and that even if the data is available, it becomes siloed.

Lumina expanded on those complications, noting that 93 percent of those carrying out a mass violent attack make threatening communications prior to the event – including on social media –  and that 75 percent of terrorists used the internet to plan an attack.

The Internet is Useful to Everyone…Including Bad Actors

“The internet, it turns out, is useful to everyone…and that includes bad actors,” Licker testified.

“The UN, the FBI and the Office of the Director of National Intelligence (ODNI) support the use of new technologies to help mine the publicly available information on the internet to help prevent, predict and deter attacks in the future,” he continued.

The problem comes in the massive amount of data available on the web – some 2.5 quintillion bytes are added to the internet daily – and in the constrained resources law enforcement agencies have to analyze that data and respond.

Real-time Detection of Digital Evidence

In the slide presentation, Lumina shared a quote from the RAND Corporation which noted: “Most law-enforcement agencies in the United States, particularly at the state and local level, don’t have a whole lot of capability and technical people to manage and respond to digital evidence more generally, much less real-time detection.”

That’s where technologies like Lumina’s Radiance platform can be valuable for law enforcement.

“The power of our Radiance platform is two-fold – its ability to ingest massive amounts of unstructured, open source data and its real-time ability to analyze that information to predict and prevent organizational risks and threats,” Dareneau said. “It does this through purpose-built, best-in-class algorithms that can overcome the challenges of massive unstructured data ingestion and prioritization.”
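To make the idea concrete, here is a minimal sketch, in Python, of the general pattern Dareneau describes: ingest unstructured, open-source text and score each item so analysts can prioritize a manageable review queue. The term weights, threshold, and names below are illustrative assumptions, not Lumina’s actual algorithms.

```python
# A minimal sketch (not Lumina's actual implementation) of the pattern
# described above: ingest unstructured open-source text and score each
# item for risk so analysts can prioritize a manageable review queue.
from dataclasses import dataclass

# Hypothetical threat lexicon; a production system would use trained
# models rather than fixed keyword weights.
RISK_TERMS = {"attack": 3.0, "weapon": 2.5, "target": 1.5, "bomb": 3.0}

@dataclass
class ScoredPost:
    source: str
    text: str
    score: float

def score_post(source: str, text: str) -> ScoredPost:
    """Assign a crude risk score based on weighted term frequency."""
    tokens = text.lower().split()
    return ScoredPost(source, text, sum(RISK_TERMS.get(t, 0.0) for t in tokens))

def prioritize(stream, threshold: float = 3.0):
    """Return only posts that exceed the review threshold, highest first."""
    flagged = [p for p in (score_post(s, t) for s, t in stream)
               if p.score >= threshold]
    return sorted(flagged, key=lambda p: p.score, reverse=True)

posts = [("forum", "planning an attack on the target"), ("blog", "great day")]
for p in prioritize(posts):
    print(f"{p.source}: {p.score:.1f} -> {p.text}")
```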

The question of our publicly available digital footprints, and of law enforcement’s ability to use that information in an investigation, was widely discussed at the hearing.

Is Privacy Dead?

“Digital dossiers exist today on us all, which law enforcement can and will readily put to use in its work such as by means of computers, patrol cars and even hand-held devices,” Logan testified. “And why should law enforcement not be able to harness the crime control tools enabled by technological advances, such as machine learning targeting massive data sources?”

“In my view, my personal view, they should be able to do so but in a regulated manner,” he continued.

But what should those regulations look like, and how should lawmakers balance privacy and safety?

Logan noted that the European Union, California and Illinois are all taking steps towards data protection measures, and could be models for Florida to follow.  

Transparency is Key

Dareneau said many of the policies being implemented relate to transparency.

“Transparency is so important, and that is what so many of these other jurisdictions are enacting in their legislation – requirements that you disclose what you are collecting and then how you are using it,” she testified. “So we try to stay on top of that, and make sure our privacy policy and terms include exactly what we are collecting, how we are using it and who we could provide it to.”

As the hearing ended, Chairman Grant reiterated the work before his subcommittee to understand and delineate between private data and public information.  “This body is committed to acting,” he said.

Committed to Acting

When the legislative session begins on January 14, 2020, it’s clear that this topic will be a key focus for the subcommittee and the broader legislature.

As Logan noted, “Technology is really potentially a game changer here. The question is whether it will be permitted, what limitations are going to be put on it and what accountability measures will be put in place. It’s just a different era.  We need to air the potential concerns here, and we need to transparently deliberate them and decide the issues.”

You can watch the hearing and review the materials here.

Predicting and Preventing Health Care Fraud

When the Centers for Medicare & Medicaid Services (CMS) announced its vision to modernize Medicare program integrity, Administrator Seema Verma highlighted the agency’s interest in seeking new innovative strategies involving machine learning and artificial intelligence.

Executive Order Directs HHS to use AI to Detect Fraud and Abuse

The announcement came earlier this month and followed an Executive Order by President Trump which urged the Secretary of Health and Human Services (HHS) to direct “public and private resources toward detecting and preventing fraud, waste, and abuse, including through the use of the latest technologies such as artificial intelligence.”

Medicare Fraud Estimated between $21 and $71 Billion Annually

Medicare fraud, waste, and abuse costs CMS and taxpayers billions of dollars.

In 2018, improper payments represented five percent of Medicare’s $616.8 billion in net costs, or roughly $31 billion. And it is estimated that Medicare loses between $21 and $71 billion per year to fraud, waste and abuse.

Part of those losses is driven by inefficiencies in identifying and flagging these issues before, during and after they occur.

Today, for example, clinicians manually review the medical records associated with Medicare claims; as a result, CMS reviews less than one percent of those records.

Artificial intelligence and machine learning could make this review more cost effective and less burdensome, and could augment the existing predictive systems designed to flag fraud.

HHS Among Largest Data Producers in the World

In order to understand the potential for AI, CMS also recently issued a Request for Information asking, among other things, if AI tools are being used in the private sector to detect fraud and how AI can enhance program integrity efforts.

HHS, which houses CMS, is among the largest data producers in the world, with its healthcare and financial data exceeding petabytes per year, making it a natural fit for AI and machine learning models.

In fact, researchers at Florida Atlantic University programmed computers to predict, classify and flag potentially fraudulent Medicare Part B claims from 2012-2015, using algorithms to detect patterns of fraud in publicly available CMS data.  The researchers noted they had only “scratched the surface” and planned further trials.
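The general approach is straightforward to illustrate. Below is a toy sketch, in Python, of training a classifier on per-provider claims features and flagging likely fraud; the synthetic data, feature names and model choice are assumptions for illustration, not the FAU researchers’ actual code or the real CMS Part B dataset.

```python
# Toy sketch of claims-fraud classification in the spirit of the FAU
# study: synthetic per-provider features, an imbalanced fraud label,
# and a standard classifier. Not the researchers' actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 5000

# Hypothetical per-provider features: claim volume, average payment,
# and share of high-cost procedure codes.
X = rng.normal(size=(n, 3))
# Fraud is rare, so simulate a heavily imbalanced label.
y = (X[:, 0] + 2 * X[:, 2] + rng.normal(scale=0.5, size=n) > 3.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" compensates for the scarcity of fraud labels.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```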

Just “Scratching the Surface”

But the promise of AI isn’t just in the CMS data. It’s also in the behaviors of those looking to commit fraud.

According to Jeremy Clopton, director at accounting consultancy Upstream Academy and an Association of Certified Fraud Examiners faculty member, the risk of fraud is often described as having three key factors: a perceived pressure or financial need, a perceived opportunity, and a rationalization of the behavior.

To prevent fraud, AI must analyze behavioral data that might indicate the pressure someone is facing and how they could rationalize fraud as a way to deal with those pressures. For example, he notes that someone facing financial pressures might regularly search for articles related to debt relief and could also mention those concerns in emails. AI has made finding these behaviors more efficient.
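As a simplified illustration of that idea, the sketch below scans text an organization already lawfully collects for fraud-triangle “pressure” and “rationalization” language. The phrase lists are hypothetical stand-ins for what would, in practice, be a trained model.

```python
# Simplified illustration of surfacing fraud-triangle behavioral
# signals in text. Phrase lists are hypothetical, not a real lexicon.
import re
from collections import Counter

PRESSURE = ["debt relief", "payday loan", "second mortgage"]
RATIONALIZATION = ["they owe me", "everyone does it", "just borrowing"]

def behavioral_flags(text: str) -> Counter:
    """Count pressure and rationalization phrases in a message."""
    text = text.lower()
    hits = Counter()
    for label, phrases in (("pressure", PRESSURE),
                           ("rationalization", RATIONALIZATION)):
        hits[label] = sum(len(re.findall(re.escape(p), text)) for p in phrases)
    return hits

msg = "Searched debt relief options again; anyway, everyone does it."
print(behavioral_flags(msg))  # Counter({'pressure': 1, 'rationalization': 1})
```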

AI, Fraud Detection and the Private Sector

The private sector is already embracing AI for a variety of fraud prevention needs.  Aetna has 350 machine learning models focused on preventing criminals from fabricating health insurance claims.

And, Mastercard Healthcare Solutions recently announced it would also use AI to identify suspicious activity and help its clients detect fraud.

Beyond just healthcare, the use of AI and ML as part of an organization’s anti-fraud programs is expected to almost triple in the next two years, according to the Association of Certified Fraud Examiners.  

And, 55 percent of organizations expect to increase their budgets for anti-fraud technology over the next two years.

Based on the efforts at HHS and CMS, it looks like the Federal Government will be part of the AI-fueled anti-fraud movement.

Learn more about AI-powered Radiance and its risk and fraud sensing capabilities.

Try Radiance for free today.

Lumina Announces Robert Spring will Join Board of Managers as Vice Chair

Tampa, FL, October 22, 2019 — Lumina, a predictive analytics company whose AI-driven Radiance platform helps keep people and places safe and secure through active and early detection of potential risk-related behaviors, announced today that Robert E. Spring will join as Vice Chairman of the Board of Managers effective immediately. 

“Rob has been a critical contributor as part of our advisory board, and we welcome him to his new role,” said Lumina CEO and Co-Founder, Allan Martin. “Rob’s experience in the areas of national security and defense, as well as corporate risk management will add significantly to our long-term growth strategy.”

Spring is a managing director at Gracie Square Capital, LLC, an investment and consulting firm, and has a long history of involvement with the National Defense University, including its Center for the Study of Weapons of Mass Destruction.  He is a member of the Advisory Board of RAND Corporation’s Center for Global Risk and Security and the Board of the Jamestown Foundation.  He has worked with the Defense Science Board on issues involving the defense industrial base.  Additionally, Spring has been involved in efforts related to veteran suicide prevention and post-traumatic stress disorder.

“Lumina’s Radiance platform brings unparalleled sophistication in helping organizations identify significant threats,” said Spring.  “I look forward to serving as Vice Chairman, introducing this powerful technology to corporate and governmental institutions, and helping Lumina fulfill its critical protection mission.”

Today’s announcement follows recent news of two capital raises by Lumina totaling nearly $6.5 million, expansion of staff in its Tampa offices and continued investment in sales and marketing efforts to key industry verticals including education, government, finance and transportation.

“Rob has a keen understanding of national security and risk issues,” said Chairman of the Board Andrew Krusen. “His insights will continue to inform our go-to-market strategy, and we look forward to working with him in this new capacity.”

In addition to Krusen, the Lumina Board of Managers includes former Florida Attorney General and Secretary of State Jim Smith; Jeb Bush, Jr., Managing Partner at Jeb Bush & Associates; and Kathleen Shanahan, co-CEO of Turtle & Hughes and former Chief of Staff to Vice President-elect Dick Cheney and Florida Governor Jeb Bush. Co-Founders Allan Martin and Morten Middelfart also serve on the board. Governor Jeb Bush, former Homeland Security Secretary Michael Chertoff and Charles Allen, former Assistant Director of Central Intelligence for Collection, serve on the Lumina Advisory Board.

About Lumina 

Lumina is a predictive analytics company founded on the idea that technology is a force for good.  The company’s optimized artificial intelligence capabilities help keep people and places safe and secure through active and early detection of high-risk behavior.  Lumina’s Radiance platform uses proprietary, deep web listening algorithms to uncover risk, provide timely, actionable information, and help prevent catastrophic loss.  Lumina is committed to protecting what matters most, and its Radiance platform is designed to help solve the world’s most challenging problems. 

For more information, contact Jill Kermes at 202-957-0715 or jill.kermes@luminaanalytics.com.

Ending Human Trafficking Through Education, Awareness and AI

Earlier this month, Florida became the first state to require schools to teach K-12 students about child trafficking prevention. The state ranks third in the nation for reported human trafficking cases, with 767 cases reported in 2018, nearly 20 percent of which involved minors.

A $150 billion industry

While Florida’s program will be the first targeted at youth education, awareness campaigns have become a critical component of the fight against this $150 billion industry, which impacts as many as 40.3 million people annually.

The Department of Homeland Security’s Blue Campaign is one example.  This national public awareness campaign is focused on increasing detection of human trafficking and identifying victims. 

Increasing detection of victims

The campaign works to educate the public, law enforcement and industry partners to recognize the indicators of human trafficking and to respond appropriately to possible cases.

According to DHS, among the potential indicators that a person might be a victim of human trafficking are:

  • Disconnection from family and friends
  • Dramatic and sudden changes in behavior
  • Disorientation and signs of abuse
  • Timid, fearful or submissive behavior
  • Signs of being denied food, water, sleep or medical care
  • Deference to someone in authority or the appearance of being coached on what to say

Finding the perpetrators

While potential indicators for victims are well documented, identifying the perpetrators is more difficult.

Law enforcement points to the fact that traffickers represent every social, ethnic, and racial group and are not only men—women run many established rings.

Cases have even revealed that traffickers are not necessarily always strangers to or casual acquaintances of the victims. Traffickers can be family members, intimate partners, and long-time friends of the victims.

With all these variables in finding the perpetrators, law enforcement is increasingly looking for tools to help combat this lucrative and subversive crime.

“A rare window into criminal behavior”

One double-edged tool is the Internet, which provides traffickers with the unprecedented ability to exploit a greater number of victims and advertise services across geographic areas. It is also a way to recruit victims, especially unsuspecting and vulnerable youth. But all of that online activity leaves a trail.

As research conducted in 2011 at the University of Southern California found, online trafficking transactions “leave behind traces of user activity, providing a rare window into criminal behavior, techniques, and patterns.

“Every online communication between traffickers, ‘johns,’ and their victims reveals potentially actionable information for anti-trafficking investigators.”

The study noted the potential for integrating human experts and computer-assisted technologies like AI to detect trafficking online.

AI and human trafficking

Similar research conducted at Carnegie Mellon University looked at how low-level traffickers and organized transnational criminal networks used web sites like Craigslist and Backpage to advertise their victims. The researchers developed AI-based tools to find patterns in the hundreds of millions of online ads and help recover victims and find the bad actors.
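One core technique in that body of work is linking ads that share identifiers. As a hedged sketch of the idea (not the CMU researchers’ actual tooling), the Python below groups ads by the phone numbers they contain, since a single trafficker often posts many ads across sites; real systems use far richer signals such as images, writing style and locations.

```python
# Minimal sketch of linking trafficking ads by shared contact numbers.
# Illustrative only; not the CMU researchers' actual system.
import re
from collections import defaultdict

PHONE = re.compile(r"\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}")

def link_ads_by_phone(ads):
    """Group ad IDs by every phone number found in the ad text."""
    groups = defaultdict(set)
    for ad_id, text in ads:
        for number in PHONE.findall(text):
            digits = re.sub(r"\D", "", number)  # normalize formatting
            groups[digits].add(ad_id)
    # Numbers shared across many ads are candidate links between postings.
    return {num: ids for num, ids in groups.items() if len(ids) > 1}

ads = [(1, "Call 555-123-4567 tonight"),
       (2, "new in town (555) 123 4567"),
       (3, "555.999.0000")]
print(link_ads_by_phone(ads))  # {'5551234567': {1, 2}}
```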

Fast forward to today.

In February, the United Nations held a two-day conference focused on using AI to end modern slavery.

The conference brought together researchers, policy makers, social scientists, members of the tech community, and survivors.

One of those researchers – from Lehigh University – is working on a human trafficking project to help law enforcement overcome the challenges of turning vast amounts of data, primarily from police incident reports, into actionable intelligence to assist with their investigations.

Providing better alerts and real risks

Former Federal government officials share the optimism about the power of AI to aid law enforcement in weeding out the criminals and finding the victims.

Alma Angotti, a former regulatory official at the U.S. Securities and Exchange Commission, points to the power of AI to highlight key indicators of trafficking from hundreds of thousands of sources, providing better alerts that are more likely to reflect real risks.

“For example, law enforcement can look at young women of a certain age entering the country from certain high-risk jurisdictions. Marry that up with social media and young people missing from home, or people associated with a false employment agency or who think they are getting a nanny job, and you start to develop a complete picture. And the information can be brought up all at once, rather than an analyst having to go through the Dark Web.”

To report suspected human trafficking to Federal law enforcement, call 1-866-347-2423.

To get help from the National Trafficking Hotline call 1-888-373-7888 or text HELP or INFO to BeFree (233733).

Learn more about AI-powered Radiance and its risk sensing capabilities for issues like human trafficking.

Solving the National Crisis of Veteran Suicide

More than 6,000 veterans committed suicide in 2017 – an average of 17 suicides a day.

Veteran Suicide Rate is 1.5 times the rate of non-veterans

That number was recently reported in the 2019 National Veteran Suicide Prevention Annual Report, along with these equally sobering statistics: 

  • The veteran suicide rate is 1.5 times the rate for non-veterans.
  • Veterans ages 18-34 had the highest suicide rate (44.5 per 100,000). Overall, the suicide rate for this age group has increased by 76 percent since 2005.
  • In addition to the veteran suicides, there were 919 suicides among never federally activated National Guard and Reserve members, an average of 2.5 per day.

The report reinforces the magnitude of the crisis facing former members of our military. And, it comes just months after a renewed call for a comprehensive approach to address this national tragedy.

PREVENTS: A Comprehensive National Approach

In March 2019, the Trump Administration announced its FY 2020 budget proposal of $9.4 billion for veteran mental health services, including $222 million for suicide-prevention outreach, a $15.6 million increase over 2019.

That same month, President Trump also issued an Executive Order on a National Roadmap to Empower Veterans and End Suicide (PREVENTS). The PREVENTS Initiative calls for the development of a comprehensive public health strategy across all levels of government, and the private and non-profit sectors.

The goal is to understand the underlying factors of suicide, cultivate active engagement with veterans, and increase the timely identification of risk and intervention for those in need. 

Increasing Timely Identification and Intervention

A national research strategy is among the key components of PREVENTS. The Office of Science and Technology Policy (OSTP) is tasked with leading efforts to improve the coordination, monitoring, benchmarking, and execution of suicide-related data and research.

In its Request for Information, the OSTP announced that its milestones and metrics would focus on improving the ability to identify individual veterans and groups of veterans at greater risk of suicide, and on drawing upon technology to capture and use health data from non-clinical settings to help target prevention and intervention strategies.

AI as an Early Detection System

Some experts believe that machine learning can be part of the solution when it comes to early intervention and risk prediction, suggesting that AI can be an early detection system by identifying and monitoring behaviors indicative of suicidal ideation.

One study they point to, conducted by researchers at the New York University School of Medicine and funded by a grant from the U.S. Army Medical Research Acquisition Activity, used speech-based algorithms to help detect posttraumatic stress disorder (PTSD) in warzone-exposed veterans.

Speech-based Algorithms Help Identify PTSD

The study analyzed audio recordings of clinical interviews, extracting 40,526 speech features that were fed into an algorithm and ultimately narrowed down to 18 specific markers indicative of potential PTSD. The algorithm correctly classified cases 89.1 percent of the time, based on slower, more monotonous speech, less change in tonality, and less variation in activation.
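The general shape of that pipeline (start with tens of thousands of candidate speech features, select a small informative subset, then classify) can be sketched in a few lines of Python. The data, feature counts and model below are synthetic stand-ins, not the NYU team’s code.

```python
# Schematic sketch of a feature-selection-then-classify pipeline like
# the one described above. Synthetic data; not the NYU team's code.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_speakers, n_features = 300, 2000  # stand-in for the 40,526 raw features

X = rng.normal(size=(n_speakers, n_features))
# Make 18 features genuinely informative, mirroring the 18 markers.
informative = rng.choice(n_features, size=18, replace=False)
y = (X[:, informative].sum(axis=1)
     + rng.normal(scale=2.0, size=n_speakers) > 0).astype(int)

model = make_pipeline(
    SelectKBest(f_classif, k=18),       # keep the 18 strongest markers
    LogisticRegression(max_iter=1000),  # simple classifier on top
)
print(cross_val_score(model, X, y, cv=5).mean())
```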

Machine learning and AI are also being used to better analyze the Department of Veterans Affairs’ (VA) electronic health records to identify key factors related to suicide risks.

Deep Learning Neural Networks Predict Risk Based on Physicians’ Notes

A collaboration between the VA, the Department of Energy (DOE) and researchers at Lawrence Berkeley National Lab focused on building deep learning neural networks that could distinguish between patients at high risk and those who are not, based on physicians’ notes and discharge notes.

Among the challenges were the noisy data sets, which mixed structured data such as lab work and procedures with unstructured data like handwritten notes. But, as one researcher on the project pointed out, the value is in that unstructured data:

“We believe that, for suicide prevention, the unstructured data will give us another side of the story that is extremely important for predicting risk — things like what the person is feeling, social isolation, homelessness, lack of sleep, pain, and incarceration. This kind of data is more complicated and heterogeneous, and we plan to apply what we have learned …to help VA doctors better decide who is at high risk and who they need to reach out to.”
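Combining those two kinds of data in a single model is a well-established pattern. Here is a minimal sketch of it on tiny synthetic records; it is illustrative only, not the VA, DOE or Berkeley Lab system, which used deep learning at a much larger scale.

```python
# Minimal sketch of combining structured EHR fields with free-text
# notes in one risk model. Synthetic data; illustrative only.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny synthetic records: a structured lab score plus a clinician note.
df = pd.DataFrame({
    "lab_score": [0.2, 1.8, 0.5, 2.1],
    "note": ["patient reports good sleep and support",
             "notes isolation, homelessness, and severe pain",
             "routine follow-up, stable",
             "expresses hopelessness and lack of sleep"],
    "high_risk": [0, 1, 0, 1],
})

features = ColumnTransformer([
    ("labs", StandardScaler(), ["lab_score"]),  # structured data
    ("text", TfidfVectorizer(), "note"),        # unstructured notes
])
model = make_pipeline(features, LogisticRegression())
model.fit(df[["lab_score", "note"]], df["high_risk"])
print(model.predict(df[["lab_score", "note"]]))
```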

The Path Ahead

The PREVENTS Task Force’s mandate is to submit its proposed roadmap by next March. The possibilities presented by AI and machine learning suggest that this technology should be a key area of focus, with continued investment and research.

While early identification of suicidal behaviors and risk is just one piece of helping end this national tragedy, it is a critical component of the overall strategy – and AI can play an important role.

To contact the Veteran Crisis Line, callers can dial 1-800-273-8255 and select option 1 for a VA staffer. Veterans, troops, or their family members can also text 838255 or visit VeteransCrisisLine.net for assistance.

The National Suicide Prevention Lifeline is 1-800-273-8255.