A 2018 report by the United Nations Office of Counter-Terrorism outlined the most direct physical threats to critical infrastructure, including the energy sector: the use of explosives or incendiary devices, rockets, MANPADS, grenades and other tools.
That same report noted that the energy sector has witnessed sustained terrorist activity through attacks perpetrated by Al Qaeda and its affiliates on oil companies’ facilities and personnel in Algeria, Iraq, Kuwait, Pakistan, Saudi Arabia and Yemen.
Increasing Intensity of DDoS Attacks
In addition to physical threats, it is estimated that by 2020, at least five countries will see foreign hackers take all or part of their national energy grid offline through Permanent Denial of Service (PDoS) attacks. Meanwhile, DDoS attacks like those in Ukraine are becoming increasingly severe. Studies show that the total number of DDoS attacks decreased by 18 percent year-over-year in Q2 2017, yet the average number of attacks per target increased by 19 percent.
U.S. is the “Holy Grail”
Disruption of the U.S. power grid is considered the "holy grail" of cyberattacks, and experts predict that the energy industry could be an early battleground, encompassing not only the power sector but also the nation's pipelines and the entire supply chain.
In fact, last year the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI) publicly accused the Russians of cyberattacks on small utility companies in the United States. In a joint Technical Alert (TA), the agencies said Russian hackers conducted spear phishing attacks and staged malware in control rooms with the goal of gathering data that could be used to cause serious harm to critical U.S. infrastructure.
900 “Vulnerabilities” Found in U.S. Energy Systems
This specific incident aside, DHS's Industrial Control System Computer Emergency Response Team found nearly 900 cyber security vulnerabilities in U.S. energy control systems between 2011 and 2015, more than in any other industry. It's not surprising that the international oil sector alone is expected to increase investments in cyber defenses by $1.9 billion this year.
Investment in Physical Security Will Reach $920 billion
With any disruption to the global or national energy supply having serious implications for virtually all industries, especially critical ones like healthcare, transportation, security, and financial services, one report projects that the global critical infrastructure protection market will be worth $118 billion by 2028.
Physical security is expected to account for the largest share of that spending, reaching a cumulative $920 billion in investment.
Artificial Intelligence: A Security “Pathway” for the Future
Experts suggest that these investments should include next generation technologies for both physical and cyber security purposes. As one expert put it: “Automation, including via artificial intelligence, is an emerging and future cyber security pathway.”
In addition to the role that automation, artificial intelligence and machine learning can play in identifying and predicting a physical or cyber attack, research shows these technologies can also help manage the rising costs associated with such attacks. One study found that only 38 percent of companies are investing in this technology, even though, after initial investments, it could represent net savings of $2.09 million.
Learn more about AI-driven Radiance and how it can help identify and predict physical and cyber threats to the energy infrastructure.
In July, Florida resident Tayyab Tahir Ismail was sentenced to 20 years in prison for distributing information pertaining to explosives.
According to a press release issued by the FBI, Tahir posted bomb making instructions on the Internet and on a social media platform. His goal was for that information to be used to create a weapon of mass destruction in support of violent jihad.
Social Media, IoT, Attack Planning and Radicalization
Use of the Internet and social media to propagate radical views, share information related to a terror attack or plan for an attack is well documented.
Research on terrorist activity in Syria and Iraq in 2014 noted the use of a variety of social media platforms, with Twitter the most popular channel. In a three-month period, 59 Twitter accounts of Western fighters in Syria alone produced a total of 154,119 tweets, with the average account posting 2,612 times.
In a December 2018 report on National Security, the U.S. Government Accountability Office (GAO) noted that “terrorists could…increase their use of online communications to reach new recruits and disseminate propaganda.”
Technology as a Double-Edged Sword
GAO’s findings echoed those of a report just one year earlier from the Office of the Director of National Intelligence (ODNI), which noted that technology “will be a double-edged sword. On the one hand, it will facilitate terrorist communications, recruitment, logistics, and lethality. On the other, it will provide authorities with more sophisticated techniques to identify and characterize threats….”
The RAND Corporation furthers this analysis of technology’s role in prevention activities, finding that early phase terrorism prevention activities should include monitoring online content advocating violence, and messaging to encourage communities to identify radicalized individuals for intervention.
United Nations: Internet Can Aid in Counter-Terrorism
Against this backdrop, the United Nations recently found that the significant amount of knowledge about terrorist organizations' activities available on the Internet can aid counter-terrorism efforts, and that new technologies are helping to proactively prevent, detect and deter terrorist attacks.
AI and machine learning continue to take center stage in identifying online threats and preventing catastrophic events, whether the source is Islamist or right-wing extremism.
AI Can Help Assess Threats and Enhance Situational Awareness
In fact, when it comes to enhancing situational awareness (SA), and better detecting and discerning real attacks from false alarms, the Center for Strategic and International Studies (CSIS) noted that “AI applications for all-source data fusion, front-line analysis, and predictive analytics promise the potential to unlock new insights and effectively enhance strategic SA.”
That’s exactly where technologies like Lumina’s Radiance platform come into play. Radiance’s Open Source Intelligence (OS-INT) includes more than 6,500 terms related to potential national security risks and threats. The platform conducts nearly 135,000 searches across all publicly available data on the web, correlating names with these terms and cross-referencing over 1 million queries into Lumina’s proprietary databases of risk. A search of this magnitude, done manually, would take more than a year to complete.
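To make the idea of term-based open-source screening concrete, here is a minimal sketch of matching text against a watchlist of risk terms. The term list, scoring and function names are illustrative assumptions for this example, not Lumina's actual implementation or vocabulary.

```python
# Minimal sketch of open-source term matching: scan each piece of
# public text for terms from a (hypothetical) risk watchlist.
import re

WATCHLIST = {"explosive device", "mass attack", "bomb making"}  # illustrative terms

def match_terms(text, terms=WATCHLIST):
    """Return the watchlist terms that appear as whole phrases in the text."""
    lowered = text.lower()
    return {t for t in terms if re.search(r"\b" + re.escape(t) + r"\b", lowered)}

posts = [
    "User shared bomb making instructions on a forum.",
    "Local bakery wins award for best croissant.",
]
# Keep only posts that hit at least one watchlist term.
hits = [(p, match_terms(p)) for p in posts if match_terms(p)]
```

A production system would of course work at far larger scale and with far richer matching, but the core correlation step resembles this filter.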
As the summer draws to a close and students return to campus, schools across the country are incorporating active shooter response training into their procedures and protocols. The drills are just one component of overall safety preparedness efforts being undertaken at the state, federal and local levels.
STRONG Ohio Plan Includes Social Media Scans
While response trainings on school campuses have become an increasingly common practice, the focus is even more pronounced in light of the recent mass shooting attacks in Dayton and El Paso.
In response to the shootings in Ohio, Governor Mike DeWine unveiled his STRONG Ohio plan, designed to reduce gun violence. The state created a School Safety Center, which will review school emergency management plans; offer threat and safety assessments; consolidate school safety resources on saferschools.ohio.gov; promote the use of a tip line to anonymously report suspected threats; and scan social media and websites to identify people suggesting acts of violence.
Increased Arrests for Threatening Comments
Increased precautions aren’t just being taken at schools, and for good reason. Following those tragic events, the FBI ordered a new threat assessment to thwart future mass attacks in the country.
Be Prepared: Take notice of surroundings and identify potential emergency exits. Be aware of unusual behaviors and report suspicious activities to security or law enforcement.
Take Action: If an attack occurs, run to the nearest exit and conceal yourself while moving away from the dangerous activity. If you can’t exit to a secure area, protect yourself by seeking cover.
Assist and React: Call 9-1-1, remain alert and stay aware of the situation. Help with first aid when it is safe, and follow instructions once law enforcement arrives.
Part of your preparation can include downloading Lumina’s free See Something Say Something app, a crowd-sourced mobile application that allows users to confidentially report concerns in real time.
You can learn more about S4 and download it here. It’s one part of our comprehensive, AI-driven risk management platform, Radiance.
The rationale behind these efforts was straightforward. Recent attacks around the globe demonstrate the role social media and the Internet can play in helping people become radicalized, research and plan mass violence, and, as was the case in Christchurch, incite extremism by distributing images from an attack.
While the Internet has become a platform for extremists, it also provides opportunities to prevent and counter acts of terrorism. A United Nations report on The Use of the Internet for Terrorist Purposes, found that a significant amount of knowledge about the activities of terrorist organizations can be found on the Internet, aiding in counter-terrorism efforts. Importantly, the report went on to say that increasingly sophisticated technologies are helping proactively prevent, detect and deter terrorist activity involving use of the Internet.
The reasons behind mass shootings around the globe are multi-faceted, but not unsolvable.
And, while we agree with the critics that existing social media listening technologies are not adequate, we know that our AI-driven Radiance platform is.
Radiance’s key differentiator is that it brings together the power of Open Source Intelligence (OS-INT), Internet Intelligence (NET-INT) and our See Something Say Something app (HUM-INT) for edge-to-edge risk detection. Radiance scours the web, prioritizing current behaviors to predict future action.
We can find the needle in the haystack (quickly)
Our OS-INT component finds that needle in the haystack because it continuously ingests all open source data and filters out the “noise” with our proprietary behavioral affinity models (BAMs). These filters measure the data against terms and phrases associated with violent extremism, lone wolf attacks and other threats to global security.
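The filtering step described above can be sketched as a simple scoring pass: weight each document by the risk phrases it contains and discard everything below a threshold. The phrase weights and the threshold here are invented for illustration and are not the actual behavioral affinity models.

```python
# Illustrative "behavioral affinity" style filter: score documents
# against weighted risk phrases and keep only the high scorers.
RISK_WEIGHTS = {"lone wolf": 3.0, "attack": 2.0, "violence": 1.5}  # hypothetical weights

def affinity_score(text, weights=RISK_WEIGHTS):
    """Sum the weights of all risk phrases present in the text."""
    lowered = text.lower()
    return sum(w for phrase, w in weights.items() if phrase in lowered)

def filter_noise(docs, threshold=2.0):
    """Drop documents whose affinity score falls below the threshold."""
    return [d for d in docs if affinity_score(d) >= threshold]
```

The real models are proprietary and far more sophisticated, but the noise-versus-signal separation works on the same principle.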
It’s not what’s been posted. It’s what’s been read
What a person is reading on the Internet is exponentially more valuable in predicting future behavior than what they may post or react to online. NET-INT hunts the web, identifying, cataloguing and continuously monitoring IP addresses researching a full spectrum of risk-related content.
A 360-degree view
Other risk reporting apps operate in a vacuum. Information is sent to the authorities without context or insight. By integrating our See Something Say Something app with our OS-INT and NET-INT components, Radiance provides much clearer insights and more actionable intelligence to respond to the reported threat.
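The integration described above amounts to data fusion: a human-submitted tip is enriched with related open-source findings before anyone acts on it. The sketch below shows that idea with invented field names and a naive string match; it is not Lumina's actual pipeline.

```python
# Hypothetical sketch of enriching a crowd-sourced tip (HUM-INT) with
# open-source context (OS-INT hits) so it arrives with insight attached.
def enrich_tip(tip, osint_hits):
    """Attach related open-source findings to a reported concern."""
    related = [h for h in osint_hits if tip["subject"].lower() in h.lower()]
    return {**tip, "context": related, "priority": "high" if related else "normal"}

tip = {"subject": "J. Doe", "report": "Threatening posts about the school"}
osint_hits = ["Forum post by j. doe referencing weapons"]
enriched = enrich_tip(tip, osint_hits)
```

A tip that corroborates existing open-source signals is escalated; an isolated one is passed along at normal priority, which is the contextual difference the paragraph above describes.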
Although the term deepfake – a blend of the words “deep learning” and “fake” – was first coined in 2017, concerns about doctored videos and audio reached a fever pitch after a manipulated video of House Speaker Nancy Pelosi went viral in May 2019.
Nancy Pelosi and the Deepfake
The video, which was slowed to about 75 percent of its original speed, was intended to make the Speaker appear to be slurring her words. It was posted on Facebook, Twitter and YouTube; YouTube removed the video as a matter of company policy, while Facebook did not.
Concerns about the implications of these deepfake videos for the 2020 elections have led to an investigation by the House Intelligence Committee this summer. And, in a January 2019 Statement for the Record before the Senate Select Committee on Intelligence, Director of National Intelligence Dan Coats noted that online and election interference could include “deep fakes or similar machine-learning technologies to create convincing—but false—image, audio, and video files….”
Corporate America Targeted
While the political implications are serious, so too are the implications of deepfakes for corporations.
Criminals are using corporate videos, earnings calls and media appearances to build models of executive voices. According to a report from the BBC, deepfake audio has been used to steal millions of dollars. In three separate cases, financial controllers were tricked into transferring money based on bogus audio of their CEOs requesting the transfer.
The reputational consequences are equally disconcerting.
A deepfake video of a company CEO, released on digital and social media immediately before an earnings call, could have serious implications for the stock price.
Or activists looking to discredit a corporation could mount an online misinformation campaign, releasing a deepfake video that implicates the organization’s practices or casts its leaders in a bad light.
Mark Zuckerberg and the Deepfake
Consider that Mark Zuckerberg himself was the victim of a deepfake video. Posted on Instagram, the doctored video showed the Facebook CEO calling himself “one man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.” Instagram stood by corporate policy and did not take the video down.
Are Corporations Really “Largely Defenseless”?
As companies look for ways to protect their reputations and bottom lines against the risk of a deepfake, some experts and pundits insist that few tools are available, leaving businesses “largely defenseless.”
At Lumina, we disagree.
OS-INT deep-web listening technology is the solution. Radiance uses continuous deep-web extraction to ingest all open source data and prioritize it against configurable behavioral affinity models (BAMs).
Corporate Reputation Behavioral Model and Continuous Monitoring
Our corporate reputation BAM is specifically designed to filter the volumes of publicly available information against terms related to reputational, brand and business risks. The results are cleaned and prioritized, yielding relevant insights into any disinformation being spread about a corporation, its leadership or its employees.
The platform becomes even more powerful after the first
deep-web search is completed. Our
continuous monitoring capabilities allow for daily searches, producing only relevant,
new web content.
The system would quickly flag content associated with a deepfake, helping corporations get ahead of the issue before it goes viral.
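The "only relevant, new web content" behavior of continuous monitoring can be sketched as simple deduplication: hash each result and surface only items not seen in earlier runs. The class and storage here are invented for illustration; a real system would persist seen hashes across daily searches.

```python
# Sketch of continuous monitoring that reports only new content:
# remember a hash of everything already surfaced and skip repeats.
import hashlib

class ContinuousMonitor:
    def __init__(self):
        self.seen = set()  # in-memory for the example; persistent in practice

    def new_items(self, results):
        """Return only results not surfaced by a previous search."""
        fresh = []
        for item in results:
            digest = hashlib.sha256(item.encode("utf-8")).hexdigest()
            if digest not in self.seen:
                self.seen.add(digest)
                fresh.append(item)
        return fresh

monitor = ContinuousMonitor()
day1 = monitor.new_items(["page A", "page B"])
day2 = monitor.new_items(["page B", "page C"])  # only "page C" is new
```

Each daily run therefore emits a small delta rather than re-reporting the whole web, which is what lets an analyst act quickly on freshly flagged material.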