Governing Deepfakes Detection to Ensure It Supports Global Needs
WITNESS is joining the new Steering Committee on AI and Media Integrity launched by the Partnership on AI (PAI). The Steering Committee has been formed to support the governance and coordination of projects strengthening technical capabilities in deepfakes detection. This builds on WITNESS’ current leadership in this space as Co-Chair of PAI’s Expert Group on AI and Social/Societal Influence, which has focused this past year on AI and media manipulation, including via the Media Preparedness convening facilitated by WITNESS, the BBC and PAI. Initial members of the Steering Committee include First Draft, WITNESS, XPRIZE, CBC/Radio-Canada, the BBC, The New York Times, Facebook, and Microsoft, among other PAI Partner organizations to be announced later.
The first project of the Steering Committee will be governance of the Deepfakes Detection Challenge, recently announced by Facebook, Microsoft, PAI and a range of leading academic researchers to support increased research into the detection of ‘deepfakes’.
Reflecting WITNESS’ work over the past few years on building an interconnected solutions discussion around deepfakes and synthetic media, one that particularly centers global communities already facing related harms, we intend to highlight these core issues as critical criteria for judging the value of detection solutions:
- Accessibility and potential adoptability, particularly outside the US/Europe: How accessible detection methods will be to people globally, and how likely any particular method is to be adoptable at scale by a diversity of people, are critical questions raised in our dialogues with journalists, media and civil society worldwide. A recent national-level convening in Brazil reinforced this need, along with the others outlined below.
- Explainability of detection approaches: These approaches will enter an existing public sphere characterized by challenges to trust in media, as well as distrust of algorithmic decision-making that cannot be explained. The more black-box an approach is, the less convincing it will be to publics and the less useful to journalists who must explain their findings to skeptical audiences.
- Relevance to real-world scenarios likely to be experienced by global publics, particularly outside the Global North, as well as by journalists and fact-checkers (such as manipulated images and videos that are partial fakes, compressed, ‘laundered’ across social media networks, and must be evaluated and explained in real time). These concerns were highlighted in depth in the workshop WITNESS held connecting leading deepfakes researchers with fact-checkers.
Learn more about the Steering Committee in the Partnership on AI’s launch post.
Background on WITNESS deepfakes preparedness work: For the past year, WITNESS has been working with our partners, journalists and technologists to understand how best to prepare for potential threats from deepfakes and other synthetic media. We have particularly focused on ensuring that any approaches are grounded in the existing realities of harms caused by misinformation and disinformation, particularly outside the Global North, and in the responses that communities want. We have also emphasized learning from the existing experience of journalist and activist communities dealing with verification, trust and truth, as well as building better collaboration among the stakeholders responding to this issue. These stakeholders include key social media, video-sharing and search platforms, as well as the independent, academic and commercial technologists developing research and products in this area. We hosted the first cross-disciplinary expert convening in this area, and most recently led the first convening on these issues in Brazil. A comprehensive list of our recommendations and our reporting is available here.