The Increasing Threat to the Global Energy Supply

This month’s attack on Saudi Arabia’s Abqaiq oil processing facility, the world’s largest, which accounts for five percent of global oil supplies, resulted in one of the biggest oil price increases ever recorded.

More importantly, it demonstrated that the world’s energy infrastructure is vulnerable, can be severely disrupted and is an increasingly likely target for bad actors.

Recent Attacks Reinforce the Threat

Other recent examples – of both cyber and physical attacks – reinforce the threat.

In 2008, an alleged cyber attack blew up an oil pipeline in Turkey, shutting it down for three weeks. In 2015, a Distributed Denial of Service (DDoS) attack brought down a section of the Ukrainian power grid for just six hours, but substations on the grid had to be operated manually for months. A similar attack in Ukraine occurred just a year later, reportedly carried out by Russian actors. And the Abqaiq facility itself was the target of a thwarted Al Qaeda suicide bombing in 2006.

Threats to Physical Security

A 2018 report by the United Nations Office of Counter-Terrorism noted that the most intuitive physical threats to critical infrastructure, including the energy sector, involve the use of explosives or incendiary devices, rockets, MANPADS, grenades and arson.

That same report noted that the energy sector has witnessed sustained terrorist activity through attacks perpetrated by Al Qaeda and its affiliates on oil companies’ facilities and personnel in Algeria, Iraq, Kuwait, Pakistan, Saudi Arabia and Yemen.

Increasing Intensity of DDoS Attacks

In addition to physical threats, it is estimated that by 2020, at least five countries will see foreign hackers take all or part of their national energy grids offline through Permanent Denial of Service (PDoS) attacks. And DDoS attacks like those in Ukraine are becoming increasingly severe. Studies show that while the total number of DDoS attacks decreased by 18 percent year-over-year in Q2 2017, the average number of attacks per target increased by 19 percent: attacks are becoming fewer but more concentrated.

U.S. is the “Holy Grail”

Disruption of the U.S. power grid is considered the “holy grail” for attackers, and experts predict that the energy industry could be an early battleground: not only the power sector, but also the nation’s pipelines and the entire supply chain.

In fact, last year the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI) publicly accused Russian actors of cyber attacks on small utility companies in the United States. In a joint Technical Alert (TA), the agencies said Russian hackers conducted spear-phishing attacks and staged malware in control rooms with the goal of gathering data that could be used to harm critical U.S. infrastructure.

Nearly 900 Vulnerabilities Found in U.S. Energy Systems

This specific incident aside, DHS’s Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) found nearly 900 cyber security vulnerabilities in U.S. energy control systems between 2011 and 2015, more than in any other industry. It’s not surprising, then, that the international oil sector alone is expected to increase investment in cyber defenses by $1.9 billion this year.

Investment in Physical Security Will Reach $920 Billion

Any disruption to the global or national energy supply would have serious implications for virtually every industry, especially critical ones like healthcare, transportation, security and financial services. One report projects that the global critical infrastructure protection market will be worth $118 billion by 2028.

Physical security is expected to account for the highest proportion of that spending, cumulatively totaling $920 billion in investment.

Artificial Intelligence: A Security “Pathway” for the Future

Experts suggest that these investments should include next-generation technologies for both physical and cyber security purposes. As one expert put it: “Automation, including via artificial intelligence, is an emerging and future cyber security pathway.”

In addition to the role that automation, artificial intelligence and machine learning can play in identifying and predicting a physical or cyber attack, research shows that these technologies can also help manage the rising costs associated with such attacks. One study found that only 38 percent of companies are investing in this technology, even though, after initial investments, it could represent net savings of $2.09 million.

Learn more about AI-driven Radiance and how it can help identify and predict physical and cyber threats to the energy infrastructure.

The Role of AI in National Security

In July, Florida resident Tayyab Tahir Ismail was sentenced to 20 years in prison for distributing information pertaining to explosives online.

According to a press release issued by the FBI, Tahir posted bomb-making instructions on the Internet and on a social media platform. His goal was for that information to be used to create a weapon of mass destruction in support of violent jihad.

Social Media, IoT, Attack Planning and Radicalization

Use of the Internet and social media to propagate radical views, share information related to a terror attack or plan an attack is well documented.

Technology as a Double-Edged Sword

Findings from the Government Accountability Office (GAO) echoed those of a report issued just one year earlier by the Office of the Director of National Intelligence (ODNI), which noted that technology “will be a double-edged sword. On the one hand, it will facilitate terrorist communications, recruitment, logistics, and lethality. On the other, it will provide authorities with more sophisticated techniques to identify and characterize threats….”

The RAND Corporation extends this analysis of technology’s role in prevention, finding that early-phase terrorism prevention should include monitoring online content that advocates violence and messaging that encourages communities to identify radicalized individuals for intervention.

United Nations: Internet Can Aid in Counter-Terrorism

Against this backdrop, the United Nations recently found that the significant amount of knowledge about terrorist organizations’ activities on the Internet can aid counter-terrorism efforts, and that new technologies are helping to proactively prevent, detect and deter terrorist attacks.

AI and machine learning continue to take center stage in identifying online threats and preventing catastrophic events, whether from Islamist or right-wing extremists.

AI Can Help Assess Threats and Enhance Situational Awareness

In fact, when it comes to enhancing situational awareness (SA), and better distinguishing real attacks from false alarms, the Center for Strategic and International Studies (CSIS) noted that “AI applications for all-source data fusion, front-line analysis, and predictive analytics promise the potential to unlock new insights and effectively enhance strategic SA.”

The organization went on to say that the vast amounts of open-source data available through media, social media and the Internet of Things provide new indicators relevant to SA. Importantly, AI data mining can process large amounts of this information quickly, increasing the precision, detail and quality of the information collected.

The Radiance Solution

That’s exactly where technologies like Lumina’s Radiance platform come into play. Radiance’s Open Source Intelligence (OS-INT) component includes more than 6,500 terms related to potential national security risks and threats. The platform conducts nearly 135,000 searches across publicly available data on the web, correlating names with these terms and cross-referencing more than 1 million queries against Lumina’s proprietary risk databases. A search of this magnitude, done manually, would take more than a year to complete.
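To make the correlation idea above concrete, here is a minimal, hypothetical sketch of matching names in open-source text against a risk-term list. The term list, names and scoring are invented for illustration and are not Lumina’s actual terms or implementation.

```python
# Hypothetical sketch: correlate names found in scraped text with
# risk-related terms, in the spirit of the OS-INT search described above.
# The term list below is illustrative only.

RISK_TERMS = {"pipe bomb", "mass casualty", "attack plan"}

def correlate(document: str, names: list[str]) -> dict[str, int]:
    """Count how many risk terms co-occur with each named person."""
    text = document.lower()
    hits = {}
    for name in names:
        if name.lower() in text:
            score = sum(1 for term in RISK_TERMS if term in text)
            if score:
                hits[name] = score
    return hits

flagged = correlate(
    "John Doe posted an attack plan mentioning a pipe bomb.",
    ["John Doe", "Jane Roe"],
)
print(flagged)  # {'John Doe': 2}
```

A production system would, of course, tokenize text properly and weight terms; this sketch only shows the shape of the name-to-term correlation step.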

Learn more about our edge-to-edge risk detection through Radiance OS-INT, Internet Intelligence (NET-INT) and our Human Intelligence (HUM-INT) mobile app, S4.

Try Radiance for free today.

Assessing Safety Protocols in Public Venues

As the summer draws to a close and students return to campus, schools across the country are incorporating active shooter response training into their procedures and protocols. The drills are just one component of broader safety preparedness efforts being undertaken at the federal, state and local levels.

STRONG Ohio Plan Includes Social Media Scans

While response training on school campuses has become an increasingly common practice, the focus has grown even more pronounced in light of the recent mass shootings in Dayton and El Paso.

In response to the shootings, Ohio Governor Mike DeWine unveiled his STRONG Ohio plan, designed to reduce gun violence. The state also created a School Safety Center, which will review school emergency management plans; offer risk, threat and safety assessments; consolidate school safety resources on saferschools.ohio.gov; promote the use of a tip line to anonymously report suspected threats; and scan social media and websites to identify people suggesting acts of violence.

Increased Arrests for Threatening Comments

Increased precautions aren’t limited to schools, and for good reason. Following those tragic events, the FBI ordered a new threat assessment aimed at thwarting future mass attacks in the country.

Since then, more than 25 people have been arrested for threatening to commit mass shootings, a number that does not include the three mall shooting scares in California over the weekend.

Public Venues Enhancing Security and Reviewing Response Plans

Sports venues like the Ravens’ M&T Bank Stadium and the Orioles’ Camden Yards announced enhanced security measures in August, and retailers across the country are reviewing their safety procedures, which, as Target noted in a public statement, include team member training, partnerships with law enforcement and the use of technology.

Use of technology is not unique to private corporations. Even before the recent shootings, the FBI issued a request for proposals for a social media early-alerting tool to help mitigate multifaceted threats.

Tips for Personal Safety

The Department of Homeland Security offers tips for all of us to follow when we’re in public locations.

  • Be Prepared: Take notice of surroundings and identify potential emergency exits. Be aware of unusual behaviors and report suspicious activities to security or law enforcement.
  • Take Action: If an attack occurs, run to the nearest exit and conceal yourself while moving away from the dangerous activity. If you can’t exit to a secure area, protect yourself by seeking cover.
  • Assist and React: Call 9-1-1, remain alert and stay aware of the situation. Help with first aid when it is safe, and follow instructions once law enforcement arrives.

Part of your preparation can include downloading Lumina’s free See Something Say Something (S4) app. It’s a crowd-sourced mobile application that allows users to confidentially report concerns in real time.

You can learn more about S4 and download it here. It’s one part of our comprehensive, AI-driven risk management platform, Radiance.

Why AI and Tech Can Help Predict the Next Mass Shooting

After the tragic mass shootings in Texas and Ohio, President Trump called on social media companies and local, state and federal agencies to “develop tools that detect mass shooters before they strike.”

The appeal mirrored those of the French president and the New Zealand prime minister after the attacks in Christchurch, New Zealand, and Negombo, Sri Lanka. Both committed to ending the use of social media to promote terrorism.

Radicalization and the Internet

The rationale behind these efforts was straightforward. Recent attacks around the globe demonstrate the role social media and the Internet can play in helping people become radicalized, research and plan mass violence, and, as was the case in Christchurch, incite extremism by distributing images from an attack.

Research confirms the concerns.  Between 2005 and 2016, social media played a role in the radicalization of nearly 70 percent of Islamist extremists and more than 40 percent of far-right extremists, according to a research brief by the National Consortium for the Study of Terrorism and Responses to Terrorism.  The study also found that more than 25 percent of Islamic extremists used social media to plan a domestic terror attack or travel to a foreign conflict zone.

Counter-Terrorism and the Internet

While the Internet has become a platform for extremists, it also provides opportunities to prevent and counter acts of terrorism. A United Nations report, “The Use of the Internet for Terrorist Purposes,” found that a significant amount of knowledge about the activities of terrorist organizations can be found on the Internet, aiding counter-terrorism efforts. Importantly, the report went on to say that increasingly sophisticated technologies are helping proactively prevent, detect and deter terrorist activity involving use of the Internet.

Enter the Critics

Despite these facts, critics point to what they say is technology’s inability to effectively monitor terrorist content online. Some cite the limited resources and expertise in law enforcement to manage and respond to digital evidence in real time. Others lament the scale of data added to the Internet daily and the associated challenge of detecting specific threats (the so-called needle in the haystack) in time to stop a planned attack.

The arguments aren’t new. 

While tech companies highlight the power of artificial intelligence and machine learning to help detect threats, at a hearing on global terrorism this summer, one witness testified before the House Intelligence and Counterterrorism Subcommittee that with AI “there is much more artificial than intelligent.”

The Case for AI

The factors behind mass shootings around the globe are multifaceted, but the problem is not unsolvable.

And while we agree with the critics that existing social media listening technologies are not adequate, we know that our AI-driven Radiance platform is.

Radiance’s key differentiator is that it brings together the power of Open Source Intelligence (OS-INT), Internet Intelligence (NET-INT) and our See Something Say Something app (HUM-INT) for edge-to-edge risk detection. Radiance scours the web, prioritizing current behaviors to predict future action.

We can find the needle in the haystack (quickly)

Our OS-INT component finds that needle in the haystack by continuously ingesting all open source data and filtering out the “noise” with our proprietary behavioral affinity models (BAMs). These filters measure the data against terms and phrases associated with violent extremism, lone wolf attacks and other threats to global security.
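The filtering step described above can be sketched as a simple weighted-term scorer that keeps only items crossing a threshold. This is a hypothetical illustration; the terms, weights and threshold below are invented and do not reflect Lumina’s proprietary models.

```python
# Hypothetical sketch of "noise" filtering: score each ingested item
# against a weighted term list and keep only high-affinity items.
# Terms, weights and the threshold are illustrative assumptions.

AFFINITY_TERMS = {"lone wolf": 3.0, "manifesto": 2.0, "attack": 1.0}

def affinity_score(text: str) -> float:
    """Sum the weights of every affinity term present in the text."""
    t = text.lower()
    return sum(w for term, w in AFFINITY_TERMS.items() if term in t)

def filter_noise(items: list[str], threshold: float = 2.0) -> list[str]:
    """Drop items whose affinity score falls below the threshold."""
    return [i for i in items if affinity_score(i) >= threshold]

stream = [
    "Local bakery wins award",
    "User shares manifesto praising a lone wolf attack",
]
print(filter_noise(stream))
```

In practice a behavioral model would use far richer features than keyword weights, but the keep-or-drop structure is the same.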

It’s not what’s been posted. It’s what’s been read

What a person is reading on the Internet is exponentially more valuable in predicting future behavior than what they post or react to online. NET-INT hunts the web, identifying, cataloguing and continuously monitoring IP addresses that are researching a full spectrum of risk-related content.
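As a rough illustration of the cataloguing idea above, the sketch below tracks which risk-related topics each IP address has requested and flags addresses that research several distinct topics. The data model, topic labels and flagging threshold are assumptions for illustration, not Lumina’s design.

```python
# Hypothetical sketch: catalogue risk-related topics requested per IP
# address and flag addresses researching many distinct topics.
# The threshold of 3 topics is an illustrative assumption.

from collections import defaultdict

class RiskCatalog:
    def __init__(self, flag_after: int = 3):
        self.seen = defaultdict(set)   # ip -> set of risk topics requested
        self.flag_after = flag_after

    def record(self, ip: str, topic: str) -> None:
        """Note that this IP requested content on the given topic."""
        self.seen[ip].add(topic)

    def flagged(self) -> list[str]:
        """IPs that have researched at least `flag_after` distinct topics."""
        return [ip for ip, topics in self.seen.items()
                if len(topics) >= self.flag_after]

catalog = RiskCatalog()
for topic in ["explosives", "target layouts", "firearms"]:
    catalog.record("203.0.113.7", topic)
catalog.record("198.51.100.2", "explosives")
print(catalog.flagged())  # ['203.0.113.7']
```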

A 360-degree view

Other risk-reporting apps operate in a vacuum: information is sent to the authorities without context or insight. By integrating our See Something Say Something app with our OS-INT and NET-INT components, Radiance provides much clearer insights and more actionable intelligence for responding to a reported threat.


Give Radiance a Free Trial Today.

AI is integral to creating deepfakes. It’s also critical to protecting against them.

Although the term deepfake – a blend of the words “deep learning” and “fake” – was first coined in 2017, concerns about doctored videos and audio reached a fever pitch after a manipulated video of House Speaker Nancy Pelosi went viral in May 2019.

Nancy Pelosi and the Deepfake

The video, which was slowed to about 75 percent of its original speed, was intended to make the Speaker appear to be slurring her words. It was posted on Facebook, Twitter and YouTube. YouTube removed the video as a matter of company policy; Facebook did not.
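The arithmetic behind the manipulation is worth spelling out: playing a clip at 75 percent speed stretches its duration by a factor of 1/0.75, which is what makes speech sound slurred. A back-of-the-envelope helper (hypothetical, for illustration only):

```python
# Illustration of the slowdown described above: reducing playback rate
# to 75% stretches a clip's duration by 1/0.75 (about 1.33x).

def slowed_duration(original_seconds: float, rate: float = 0.75) -> float:
    """Duration of a clip after its playback rate is reduced."""
    return original_seconds / rate

print(slowed_duration(60))  # 80.0  (a 60-second clip becomes 80 seconds)
```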

Although the video ultimately disappeared from Facebook, the damage was already done: within days it had racked up more than 2.5 million views on Facebook alone.

The 2020 Election – Cause for Concern

Concerns about the implications of these deepfake videos for the 2020 elections have led to an investigation by the House Intelligence Committee this summer. And in a January 2019 Statement for the Record before the Senate Select Committee on Intelligence, Director of National Intelligence Dan Coats noted that online and election interference could include “deep fakes or similar machine-learning technologies to create convincing—but false—image, audio, and video files….”

Corporate America Targeted

While the political implications are serious, so too are the implications of deepfakes for corporations.

Criminals are using corporate videos, earnings calls and media appearances to build models of executives’ voices. According to a report from the BBC, deepfake audio has been used to steal millions of dollars: in three separate cases, financial controllers were tricked into transferring money by bogus audio of their CEOs requesting the transfers.

The reputational consequences are equally disconcerting.

A deepfake video of a company CEO, released on digital and social media immediately before an earnings call, could have serious implications for the stock price.

Or activists looking to discredit a corporation could launch an online misinformation campaign, releasing a deepfake video that implicates the organization’s practices or casts its leaders in a bad light.

Mark Zuckerberg and the Deepfake

Consider that Mark Zuckerberg himself was the victim of a deepfake video. Posted on Instagram, the doctored video showed the Facebook CEO calling himself “one man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.” Instagram stood by corporate policy and did not take the video down.

Are Corporations Really “Largely Defenseless”?

As companies look for ways to protect their reputation and bottom line against the risk of a deepfake, some experts and pundits insist that there are few tools available, leaving businesses “largely defenseless.”

At Lumina, we disagree.

Our Radiance OS-INT deep-web listening technology is the solution. Radiance uses continuous deep-web extraction to ingest all open source data and prioritize it against configurable behavioral affinity models (BAMs).

Corporate Reputation Behavioral Model and Continuous Monitoring

Our corporate reputation BAM is specifically designed to filter the volumes of publicly available information against terms related to reputational, brand and business risks. The results are cleaned and prioritized, yielding relevant insights into any disinformation being spread about a corporation, its leadership or its employees.

The platform becomes even more powerful after the first deep-web search is completed. Our continuous monitoring capabilities allow for daily searches that surface only relevant, new web content.
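The “only new content” behavior described above amounts to deduplicating each day’s results against everything already seen. A minimal sketch, assuming content hashing as the deduplication mechanism (an illustrative choice, not a description of Radiance’s internals):

```python
# Hypothetical sketch: surface only new web content by deduplicating
# each day's crawl results against previously seen pages, keyed by a
# SHA-256 hash of the page content. Storage/crawl details are omitted.

import hashlib

class ContinuousMonitor:
    def __init__(self):
        self.seen_hashes = set()

    def new_items(self, pages: list[str]) -> list[str]:
        """Return only pages not seen in any prior run."""
        fresh = []
        for page in pages:
            digest = hashlib.sha256(page.encode()).hexdigest()
            if digest not in self.seen_hashes:
                self.seen_hashes.add(digest)
                fresh.append(page)
        return fresh

monitor = ContinuousMonitor()
day1 = monitor.new_items(["article A", "article B"])
day2 = monitor.new_items(["article B", "article C"])
print(day1, day2)  # ['article A', 'article B'] ['article C']
```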

The system would quickly flag content associated with a deepfake, helping corporations get ahead of the issue before it goes viral.

Learn more about Radiance here.