Lumina Testifies Before the Florida House of Representatives
“How do we leverage the power of technology without sacrificing constitutional liberties? How do we ensure we are doing everything we can to keep our communities safe without turning our society into the Minority Report?”
In addition to Lumina’s Doug Licker and Jessica Dareneau, other panelists included Dr. Russell Baker, CEO & Founder of Psynetix, and Wayne A. Logan, a professor of law at Florida State University.
No Standardized Methodology
Beginning on the issue of using technology to keep communities safe, Psynetix’s Baker noted that the signs of violence or potential terrorism are often missed because there is no standardized methodology to collect, report and disseminate crucial information indicative of these potential acts – and that even if the data is available, it becomes siloed.
The problem comes in the massive amounts of data available on the web – some 2.5 quintillion bytes are added to the internet daily – and in the constrained resources law enforcement agencies have to analyze that data and respond.
Real-time Detection of Digital Evidence
In the slide presentation, Lumina shared a quote from the RAND
Corporation which noted: “Most law-enforcement agencies in the United
States, particularly at the state and local level, don’t have a whole lot of
capability and technical people to manage and respond to digital evidence more
generally, much less real-time detection.”
That’s where technologies like Lumina’s Radiance platform
can be valuable for law enforcement.
“The power of our Radiance platform is two-fold – its
ability to ingest massive amounts of unstructured, open source data and its
real-time ability to analyze that information to predict and prevent
organizational risks and threats,” Dareneau said. “It does this through
purpose-built, best-in-class algorithms that can overcome the challenges of
massive unstructured data ingestion and prioritization.”
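As an illustration only of the general idea behind ingesting and prioritizing unstructured open source text, here is a minimal Python sketch that ranks snippets by a weighted risk-term score. The term list, weights, and ranking logic are invented for this example and are not Radiance’s actual algorithms.

```python
# Illustrative sketch: scoring and prioritizing unstructured text snippets
# by the presence of weighted risk terms. This is NOT Radiance's actual
# method -- the term list and weights are invented for the example.
RISK_TERMS = {"threat": 3.0, "attack": 3.0, "weapon": 2.5, "target": 1.5}

def risk_score(text: str) -> float:
    """Sum the weights of any risk terms found in a snippet."""
    words = text.lower().split()
    return sum(RISK_TERMS.get(w, 0.0) for w in words)

def prioritize(snippets: list[str], top_k: int = 3) -> list[str]:
    """Return the top_k snippets ranked by descending risk score."""
    return sorted(snippets, key=risk_score, reverse=True)[:top_k]

feeds = [
    "school fundraiser this weekend",
    "he said he would attack the target with a weapon",
    "lost dog near the park",
    "vague threat posted overnight",
]
print(prioritize(feeds, top_k=2))
```

Even this toy version shows the core design choice: the hard part is not the ranking itself but defining signals that surface genuine risks out of an enormous, noisy stream.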
The question of each of our publicly available digital footprints, and law enforcement’s ability to use that information in an investigation was widely discussed at the hearing.
Is Privacy Dead?
“Digital dossiers exist today on us all, which law
enforcement can and will readily put to use in its work such as by means of
computers, patrol cars and even hand-held devices,” Logan testified. “And why
should law enforcement not be able to harness the crime control tools enabled
by technological advances, such as machine learning targeting massive data?”
“In my view, my personal view, they should be able to do so
but in a regulated manner,” he continued.
But, what should those regulations look like, and how best
to approach the balance between privacy and safety?
Logan noted that the European Union, California and Illinois are all taking steps towards data protection measures, and could be models for Florida to follow.
Transparency is Key
Dareneau said many of the policies being implemented relate to transparency.
“Transparency is so important, and that is what so many of
these other jurisdictions are enacting in their legislation – requirements that
you disclose what you are collecting and then how you are using it,” she said. “Our own privacy policy and terms include exactly what we are collecting, how we are using it and who we could provide it to.”
As the hearing ended, Chairman Grant reiterated the work before his subcommittee to understand and delineate between private data and public information. “This body is committed to acting,” he said.
Committed to Acting
When the legislative session begins January 14, 2020, it’s clear that this topic will be a key focus for this subcommittee and the broader legislature.
As Logan noted, “Technology is really potentially a game changer here. The question is whether it will be permitted, what limitations are going to be put on it and what accountability measures will be put in place. It’s just a different era. We need to air the potential concerns here, and we need to transparently deliberate them and decide the issues.”
You can watch the hearing and review the materials here.
When the Centers for Medicare & Medicaid Services (CMS) announced its vision to modernize Medicare program integrity, Administrator Seema Verma highlighted the agency’s interest in seeking new innovative strategies involving machine learning and artificial intelligence.
Executive Order Directs HHS to use AI to Detect Fraud and Abuse
The announcement came earlier this month and followed an Executive Order by President Trump which urged the Secretary of Health and Human Services (HHS) to direct “public and private resources toward detecting and preventing fraud, waste, and abuse, including through the use of the latest technologies such as artificial intelligence.”
Medicare Fraud Estimated between $21 and $71 Billion Annually
Medicare fraud, waste, and abuse costs CMS and taxpayers billions of dollars.
Artificial intelligence and machine learning could be more cost-effective and less burdensome, and could augment existing predictive systems designed to flag fraud.
HHS Among Largest Data Producers in the World
In order to understand the potential for AI, CMS also recently issued a Request for Information asking, among other things, if AI tools are being used in the private sector to detect fraud and how AI can enhance program integrity efforts.
But the promise of AI isn’t just in the CMS data. It’s also in the behaviors of those looking to commit fraud.
According to Jeremy Clopton, director at accounting
consultancy Upstream Academy and an Association of Certified Fraud Examiners
faculty member, the risk of fraud is often described as having three key factors: a perceived pressure or financial need, a perceived opportunity, and a rationalization of the fraud.
As one study of online trafficking put it: “Every online communication between
traffickers, ‘johns,’ and their victims reveals potentially actionable
information for anti-trafficking investigators.”
The study noted the potential for integrating human experts and computer-assisted technologies like AI to detect trafficking online.
AI and Human Trafficking
Similar research conducted at Carnegie Mellon University looked at how low-level traffickers and organized transnational criminal networks used web sites like Craigslist and Backpage to advertise their victims. The researchers developed AI-based tools to find patterns in the hundreds of millions of online ads and help recover victims and find the bad actors.
The conference brought together researchers, policy makers, social scientists, members of the tech community, and survivors.
One of those researchers – from Lehigh University – is working on a human trafficking project to help law enforcement overcome the challenges of turning vast amounts of data, primarily from police incident reports, into actionable intelligence to assist with their investigations.
Providing better alerts and real risks
Former Federal government officials share the optimism about the power of AI to aid law enforcement in weeding out the criminals and finding the victims.
“For example, law enforcement can look at young women of a certain age entering the
country from certain high-risk jurisdictions. Marry that up with social media
and young people missing from home, or people associated with a false
employment agency or who think they are getting a nanny job, and you start to
develop a complete picture. And the information can be brought up all at once,
rather than an analyst having to go through the Dark Web.”
To report suspected human trafficking to Federal law enforcement, call 1-866-347-2423.
To get help from the National Trafficking Hotline call 1-888-373-7888 or text HELP or INFO to BeFree (233733).
Learn more about AI-powered Radiance and its risk sensing capabilities for issues like human trafficking.
The goal is to understand the underlying factors of suicide, cultivate active engagement with veterans, and increase the timely identification of risk and intervention for those in need.
Increasing Timely Identification and Intervention
A national research strategy is among the key components of PREVENTS. The Office of Science and Technology Policy (OSTP) is tasked with leading efforts to improve the coordination, monitoring, benchmarking, and execution of suicide-related data and research.
In its Request for Information, the OSTP announced its milestones and metrics would be focused on improving the ability to identify individual veterans and groups of veterans at greater risk of suicide and draw upon technology to capture and use health data from non-clinical settings to help target prevention and intervention strategies.
AI as an Early Detection System
Experts believe that machine learning can be part of the solution when it comes to early intervention and risk prediction, suggesting that AI can be an early detection system by identifying and monitoring behaviors indicative of suicidal ideation.
One study they point to, conducted by researchers at the New York University School of Medicine, and funded by a grant from the U.S. Army Medical Research and Acquisition Activity, used speech-based algorithms to help detect posttraumatic stress disorders (PTSD) from warzone-exposed veterans.
Speech-based Algorithms Help Identify PTSD
The study analyzed audio recordings of clinical interviews, extracting 40,526 speech features that were fed into an algorithm and ultimately narrowed down to eighteen specific markers indicative of the potential for PTSD.
The algorithm correctly classified cases 89.1% of the time based on slower,
more monotonous speech characteristics, less change in tonality, and less
variation in activation.
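The whittling-down approach the study describes – starting from tens of thousands of candidate features and keeping only the few most discriminative markers – can be sketched in miniature. The following pure-Python toy uses synthetic data with three signal features hidden among fifty, selects markers by class separation, and classifies with a simple mean-threshold rule; it is not the study’s actual model.

```python
# Toy sketch of feature whittling: start with many candidate features,
# keep the few that best separate the two groups, then classify with a
# simple mean-threshold rule. Synthetic data; NOT the study's method.
import random

random.seed(0)
N_FEATURES = 50

def make_sample(label: int) -> list[float]:
    # Only features 0-2 carry signal (class means 0 vs 2); rest are noise.
    return [random.gauss(label * 2.0 if i < 3 else 0.0, 1.0)
            for i in range(N_FEATURES)]

samples = [(make_sample(y), y) for y in [0, 1] * 100]

def separation(idx: int) -> float:
    """Absolute difference between the two class means for one feature."""
    a = [s[idx] for s, y in samples if y == 0]
    b = [s[idx] for s, y in samples if y == 1]
    return abs(sum(a) / len(a) - sum(b) / len(b))

# Keep the handful of features with the largest class separation.
markers = sorted(range(N_FEATURES), key=separation, reverse=True)[:3]

def classify(sample: list[float]) -> int:
    """Predict class 1 if the selected markers average above the midpoint."""
    score = sum(sample[i] for i in markers) / len(markers)
    return 1 if score > 1.0 else 0

correct = sum(classify(s) == y for s, y in samples)
print(f"markers: {sorted(markers)}, accuracy: {correct / len(samples):.2f}")
```

The selection step reliably recovers the three planted signal features, which mirrors the intuition of the real study: most candidate features carry no information, and a small set of genuine markers does the predictive work.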
Deep Learning Neural Networks Predict Risk Based on Physicians’ Notes
A collaboration between the VA, the Department of Energy (DOE) and researchers at Lawrence Berkeley National Lab focused on building deep learning neural networks that could distinguish between patients at high risk and those who are not, based on physicians’ notes and discharge notes.
Among the challenges were the noisy data sets, which included structured data, such as lab work and procedures, and unstructured data, like handwritten notes. But, as one researcher on the project pointed out, the value
is in that unstructured data:
“We believe that, for suicide prevention, the unstructured data will give us another side of the story that is extremely important for predicting risk — things like what the person is feeling, social isolation, homelessness, lack of sleep, pain, and incarceration. This kind of data is more complicated and heterogeneous, and we plan to apply what we have learned …to help VA doctors better decide who is at high risk and who they need to reach out to.”
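A drastically simplified way to see how unstructured note text can become features that sit alongside structured data is to map free text onto binary risk flags. The phrase lists below are invented for illustration, keyed to the signals the researcher mentions, and are nothing like the deep learning models the VA and Berkeley Lab team built.

```python
# Hedged sketch: converting unstructured clinical note text into simple
# binary risk flags. Phrase lists are invented for illustration only and
# bear no resemblance to the actual VA/Berkeley Lab neural networks.
RISK_FLAGS = {
    "social_isolation": ["isolated", "alone", "no support"],
    "homelessness": ["homeless", "no fixed address"],
    "sleep": ["insomnia", "lack of sleep", "not sleeping"],
    "pain": ["chronic pain", "severe pain"],
}

def extract_flags(note: str) -> dict[str, int]:
    """Flag each risk category if any of its phrases appears in the note."""
    text = note.lower()
    return {flag: int(any(phrase in text for phrase in phrases))
            for flag, phrases in RISK_FLAGS.items()}

note = "Patient reports insomnia and feeling isolated since discharge."
print(extract_flags(note))
```

In practice this kind of keyword matching misses context, negation, and paraphrase, which is exactly why the researchers turned to deep learning for the heterogeneous unstructured data.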
The Path Ahead
The PREVENTS Task Force’s mandate is to submit a proposed
roadmap forward by next March. The possibilities presented by AI and
machine learning suggest that this technology should be a key area of focus,
with continued investment and research.
While early identification of suicidal behaviors and risk is just one piece of helping end this national tragedy, it is a critical component of the overall strategy – and AI can play an important role.
To contact the Veteran Crisis Line, callers can dial
1-800-273-8255 and select option 1 for a VA staffer. Veterans, troops, or their
family members can also text 838255 or visit VeteransCrisisLine.net for assistance.
The National Suicide Prevention Lifeline is 1-800-273-8255.
(S4 is a mobile app that allows people to report concerning behaviors in real time. It’s short for See Something, Say Something).
The first alert: Tuesday Afternoon
This alert was from a high school student.
The student expressed concern that a best friend was at risk of suicide.
It turns out that the two students had recently lost another
close friend to suicide. Since that
time, the friend at risk had been distant and negative, and showed other
warning signs, which you can read more about from the American
Foundation for Suicide Prevention.
The student who sent the S4 alert wanted to make sure the
best friend got help before it was too late.
Not surprisingly, the student wished to remain anonymous. But the student shared the school information and the name of the friend. Our S4 app also validated the location from which the alert was sent.
80% of those considering suicide give some sign of their intentions
This report was a serious concern. Statistics show that 80% of those
considering suicide give some sign of their intentions, and often those
signs are communicated to the people closest to them.
We acted immediately, calling the school and sharing the
information with the administration, who confirmed the recent suicide and
thanked us for the report.
A person in a time of crisis would get the help they needed.
The second alert: Wednesday late morning
Just 20 hours later, we received another S4 alert.
This one was different.
The person reporting the concern had innocently moved a postal package for a neighbor. Then, the person noticed that the package had a marking indicating that it was from a company that sells bulletproof armor.
What to do with this information? Buying armor isn’t illegal. But, why was this neighbor concerned? Was there something else to the report?
More than 75% of the attackers in mass violence events exhibit concerning behaviors
Through a search of open source data on the Internet, Radiance OS-INT found publicly available social media images of the subject holding an IED, raising concerns that perhaps there was more to investigate.
We sent the S4 report, and the findings from OS-INT to the authorities, so they could determine appropriate next steps.
The third report: Wednesday afternoon
While we were working the bulletproof armor S4, another alert came in.
Again, it was a report concerning potential mass violence.
But this time, it was at a school.
The report indicated that a student had discussed bringing a gun to school the next day. The report included the student’s name, and the school that he attended.
93% of the attackers in mass violence events made threatening or concerning communications
A U.S. Secret Service report on mass attacks in 2018 tells us that 93% of attackers made threatening or concerning communications prior to the attack.
That is why – like the bulletproof armor S4 report – we ran the student’s name through Radiance OS-INT.
Radiance quickly sent us a link to a publicly available
YouTube channel where a person with the same name as the student shows himself
executing a shooting rampage in a video game. We thought this was important
additional information to share with the local authorities.
Within minutes, we called the police, and learned they had received another tip surrounding the same student and were following up on the reports.
The power of S4
When we launched our S4 app, we knew the value it would bring to our clients as they work to keep their school and corporate campuses safe.
But we also understood the potential power it had for the broader community.
We knew we had to make the app available to others. And it had to be free of charge.
Since making our app available, we have had thousands of
downloads and hundreds of reports.
The truth is, we never like it when those reports come in. The thought that someone might want to do harm to themselves, or to others, keeps us up at night.
Be the light in your community
But, we know See Something, Say Something works. And we’re committed to using our technology
to help make a difference.
We encourage you to do the same.
Download the app today at the App
Store or from Google
Play, and help be the light in your community.
And, if you are in a suicide crisis, or know someone who is, the National Suicide Prevention Lifeline is 1-800-273-8255.
A 2018 report by the United Nations Office of Counter-Terrorism outlined the most intuitive physical threats to critical infrastructure, including the energy sector: the use of explosives or incendiary devices, rockets, MANPADs, grenades and other such tools.
That same report noted that the energy sector has witnessed sustained terrorist activity through attacks perpetrated by Al Qaeda and its affiliates on oil companies’ facilities and personnel in Algeria, Iraq, Kuwait, Pakistan, Saudi Arabia and Yemen.
Increasing Intensity of DDoS Attacks
In addition to physical threats, it is estimated that by 2020, at least five countries will see foreign hackers take all or part of their national energy grid offline through Permanent Denial of Service (PDoS) attacks. And DDoS attacks like those in Ukraine are becoming increasingly severe. Studies show that the total number of DDoS attacks decreased by 18 percent year-over-year in Q2 2017, yet over the same period there was a 19 percent increase in the average number of attacks per target.
U.S. is the “Holy Grail”
Control of the U.S. power grid is considered the “holy grail,” and experts predict that the
energy industry could be an early battleground, not only the power sector, but
the nation’s pipelines and the entirety of the supply chain.
In fact, last year the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI) publicly accused the Russians of cyberattacks on small utility companies in the United States. In a joint Technical Alert (TA), the agencies said Russian hackers conducted spear phishing attacks and staged malware in the control rooms with the goal of gathering data to create detrimental harm to critical U.S. infrastructure.
900 “Vulnerabilities” Found in the
U.S. Energy Systems
This specific incident aside, DHS’s Industrial Control System Computer Emergency Response Team found nearly 900 cyber security vulnerabilities in U.S. energy control systems between 2011 and 2015, more than any other industry. It’s not surprising that the international oil sector alone increased investments in cyber defenses by $1.9 billion in 2018.
Investment in Physical Security Will Reach $920 billion
With any disruption to the global or national energy supply having serious implications for virtually all industries, especially critical ones like healthcare, transportation, security, and financial services, one report projects that the global critical infrastructure protection market will be worth $118 billion by 2028.
Physical security is expected to account for the highest proportion of spending, and cumulatively will account for $920 billion in investment.
Artificial Intelligence: A Security “Pathway” for the Future
Experts suggest that these investments should include next generation technologies for both physical and cyber security purposes. As one expert put it: “Automation, including via artificial intelligence, is an emerging and future cyber security pathway.”
In addition to the role that automation, artificial intelligence and machine learning can play in identifying and predicting a physical or cyber attack, research shows that they can also help manage the rising costs associated with such attacks. One study found that only 38 percent of companies are investing in this technology – even though, after initial investments, it could represent net savings of $2.09 million.
Learn more about AI-driven Radiance and how it can help identify and predict physical and cyber threats to the energy infrastructure.