Although the term deepfake – a blend of the words “deep learning” and “fake” – was first coined in 2017, concerns about doctored videos and audio reached a fever pitch after a manipulated video of House Speaker Nancy Pelosi went viral in May 2019.

Nancy Pelosi and the Deepfake

The video, which was slowed to about 75 percent of its original speed, was intended to make the Speaker appear to be slurring her words. It was posted on Facebook, Twitter and YouTube. YouTube removed the video as a matter of company policy; Facebook did not.

Although the video ultimately “disappeared” from Facebook, the damage was already done: within days it had more than 2.5 million views on Facebook alone.

The 2020 Election – Cause for Concern

Concerns about the implications of these deepfake videos for the 2020 elections have led to an investigation by the House Intelligence Committee this summer. And in a January 2019 Statement for the Record before the Senate Select Committee on Intelligence, Director of National Intelligence Dan Coats noted that online and election interference could include “deep fakes or similar machine-learning technologies to create convincing—but false—image, audio, and video files….”

Corporate America Targeted

While the political implications are serious, so too are the implications of deepfakes for corporations.

Criminals are using corporate videos, earnings calls and media appearances to build models of executive voices. According to a report from the BBC, deepfake audio has been used to steal millions of dollars. In three separate cases, financial controllers were tricked into transferring money based on bogus audio of their CEOs requesting the transfer.

The reputational consequences are equally disconcerting.

A deepfake video of a company CEO, released on digital and social media immediately before an earnings call, could have serious implications for the stock price.

Or activists looking to discredit a corporation could launch an online misinformation campaign, releasing a deepfake video that impugns the organization’s practices or casts its leaders in a bad light.

Mark Zuckerberg and the Deepfake

Consider that Mark Zuckerberg himself was the victim of a deepfake video. Posted on Instagram, the doctored video showed the Facebook CEO calling himself “one man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.” Instagram stood by corporate policy and did not take the video down.

Are Corporations Really “Largely Defenseless”?

As companies look for ways to protect their reputation and bottom line against the risk of a deepfake, some experts and pundits insist that there are few tools available, leaving businesses “largely defenseless.”

At Lumina, we disagree.

Our Radiance OS-INT deep-web listening technology is the solution. Radiance uses continuous deep-web extraction to ingest open-source data and prioritize it against configurable behavioral affinity models (BAMs).

Corporate Reputation Behavioral Model and Continuous Monitoring

Our corporate reputation BAM is specifically designed to filter the volumes of publicly available information against terms related to reputational, brand and business risks. The results are cleaned and prioritized, yielding relevant insights into any disinformation being spread about a corporation, its leadership or its employees.
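As an illustration only, the kind of term-based filtering and prioritization described above can be sketched in a few lines of Python. The term list, weights, and function names here are hypothetical examples, not Lumina’s actual models or API:

```python
# Hypothetical sketch of filtering documents against weighted risk terms
# and ranking the results. RISK_TERMS and both functions are illustrative.

RISK_TERMS = {
    "fraud": 3,
    "deepfake": 3,
    "lawsuit": 2,
    "scandal": 2,
    "resignation": 1,
}

def score_document(text):
    """Score a document by weighted occurrences of risk terms."""
    words = text.lower().split()
    return sum(RISK_TERMS.get(word, 0) for word in words)

def prioritize(documents):
    """Rank documents by risk score, highest first, dropping zero-score items."""
    scored = [(score_document(doc), doc) for doc in documents]
    return [doc for score, doc in sorted(scored, reverse=True) if score > 0]

docs = [
    "quarterly earnings beat expectations",
    "viral deepfake video alleges fraud by the ceo",
    "new lawsuit filed against supplier",
]
print(prioritize(docs))
# → ['viral deepfake video alleges fraud by the ceo',
#    'new lawsuit filed against supplier']
```

A production system would of course use far richer models than keyword weights, but the shape of the pipeline — filter the noise, then rank what remains — is the same.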

The platform becomes even more powerful after the first deep-web search is completed. Our continuous monitoring capabilities allow for daily searches, surfacing only relevant, new web content.
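The “only new content” behavior of daily monitoring can be sketched as simple deduplication across runs. This is an assumption-laden toy, not Lumina’s implementation — it just shows why repeat searches stay quiet until something new appears:

```python
# Hypothetical sketch of surfacing only content not seen in a prior run.
import hashlib

seen = set()  # in practice this would be persistent storage, not memory

def new_items(results):
    """Return only the results whose content hash has not been seen before."""
    fresh = []
    for text in results:
        digest = hashlib.sha256(text.encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            fresh.append(text)
    return fresh

day1 = ["article A", "article B"]
day2 = ["article B", "article C"]
print(new_items(day1))  # → ['article A', 'article B']  (first run: all new)
print(new_items(day2))  # → ['article C']  (only the genuinely new item)
```

Hashing content rather than storing it whole keeps the seen-set small while still catching exact repeats.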

The system quickly flags content associated with a deepfake, helping corporations get ahead of the issue before it goes viral.

Learn more about Radiance here.