Authoritarian Regimes Could Exploit Cries of ‘Deepfake’
February 14, 2021

by admin


A viral video shows a young woman leading an exercise class on a roundabout in the Burmese capital, Naypyidaw. Behind her, a military convoy approaches a checkpoint on its way to conduct arrests at the Parliament building. Has she inadvertently filmed a coup? She dances on.

The video later became a viral meme, but for the first few days online amateur sleuths debated whether it had been green-screened or otherwise manipulated, often deploying the jargon of verification and image forensics.

For many online viewers, the video captures the absurdity of 2021. Yet claims of audiovisual manipulation are increasingly being used to make people wonder whether what is real is fake.

At Witness, in addition to our ongoing work to help people film the reality of human rights violations, we’ve led a global effort to better prepare for increasingly sophisticated audiovisual manipulation, including so-called deepfakes. These technologies provide tools to make someone appear to say or do something they never did, to create an event or person who never existed, or to more seamlessly edit within a video.

The reality, however, falls short of the hype. The political and electoral threat of actual deepfakes lends itself to headlines, but the picture is more nuanced. The real reasons for concern became clear through expert meetings that Witness led in Brazil, South Africa, and Malaysia, as well as in the US and Europe, with people who had lived through attacks on their reputation and their evidence, and with professionals, such as journalists and fact-checkers, charged with fighting lies. They highlighted current harms from manipulated nonconsensual sexual images targeting ordinary women, journalists, and politicians. This is a real, existing, widespread problem, and recent reporting has confirmed its growing scale.

Their testimony also pinpointed how claims of deepfakery and video manipulation were increasingly being used for what law professors Danielle Citron and Bobby Chesney call the “liar’s dividend”: the ability of the powerful to claim plausible deniability on incriminating footage. Statements like “It’s a deepfake” or “It’s been manipulated” have often been used to disparage a leaked video of a compromising situation or to attack one of the few sources of civilian power in authoritarian regimes: the credibility of smartphone footage of state violence. This builds on histories of state-sponsored deception. In Myanmar, the army and authorities have repeatedly both shared fake images themselves and challenged the veracity and integrity of real evidence of human rights violations.

In our discussions, journalists and human rights defenders, including those from Myanmar, described fearing the weight of having to relentlessly prove what is real and what is fake. They worried that their work would shift from debunking rumors to having to prove that authentic footage is genuine. Skeptical audiences and public factions second-guess the evidence to reinforce and protect their worldviews, and to justify their actions and partisan reasoning. In the US, for example, conspiracists and right-wing supporters dismissed former president Donald Trump’s awkward concession speech after the attack on the Capitol by claiming “it’s a deepfake.”

There are no easy solutions. We must support stronger audiovisual forensic and verification skills among the community and professional leaders globally who can help their audiences and community members. We can promote widespread access to platform tools that make it easier to see and challenge both the perennial “shallowfakes,” videos that are simply miscaptioned or crudely edited, and more sophisticated deepfakes. Responsible “authenticity infrastructure” that makes it easier to track whether and how an image has been manipulated, and by whom, for those who want to “show their work,” can help if it is developed from the start with a consciousness of how it could also be abused.

We must also candidly acknowledge that promoting verification tools and skills can perpetuate the conspiratorial “disbelief by default” approach to media that is itself at the heart of the problem with so many videos that do show reality. Any approach to providing better skills and infrastructure must recognize that conspiratorial reasoning is only a short step from constructive doubt. Media-literacy approaches and media forensic tools that send people down the rabbit hole, rather than promoting common-sense judgment, can become part of the problem. We don’t all need to be instant open-source investigators. First we should apply simple frameworks like the SIFT methodology: Stop, Investigate the source, Find trusted coverage, and Trace the original context.


