Deepfakes Archives - WITNESS
https://www.witness.org/tag/deepfakes/

DEEPFAKES AND SATIRE REPORT RELEASED
https://www.witness.org/deepfakes-and-satire-report-released/
Tue, 14 Dec 2021

WITNESS is pleased to announce the launch of a new report, JUST JOKING! Deepfakes, Satire and the Politics of Synthetic Media, produced in partnership with MIT Open Documentary Lab’s Co-Creation Studio. The report is part of WITNESS’ global “Prepare, Don’t Panic” initiative on deepfakes and manipulated media. It is also the latest in a series of efforts to drive dialogue and action on critical questions around deepfakes, parody, and malicious disinformation.

Around the world, deepfakes are becoming a powerful tool for artists, satirists, and activists. But what happens when vulnerable people are not “in on the joke,” or when malignant intentions are disguised as humor? JUST JOKING! focuses on the fast-growing intersection between deepfakes and satire. Key questions explored in the report include: who decides what’s funny, what’s fair, and who is accountable?

“WITNESS knows the power of video to challenge rights abuses and the powerful. But our research over the last three years has revealed the lines where deepfakes are misused, and where claims of humor disguise gaslighting and malice,” said WITNESS’ Sam Gregory, a co-producer of the report. “We’re in a critical moment to ensure we have a robust conversation about protecting the creative, political power of deepfake satire and other emerging forms of critique. At the same time, we must demand consistent approaches from platforms, app-makers and others who are implicated in the creation and distribution of deceptive and malicious digital forgeries that masquerade as ‘just joking!’”

“A wide range of voices need to be part of answering these deep questions,” said Katerina Cizek, the Co-Creation Studio producer of the report. “These questions arise in the cracks and overlaps between satire and disinformation. The voices who need to be at the table include not only technologists, lawyers, politicians and the platforms where these videos are made and shared, but also human rights activists, artists and journalists. Most importantly, perhaps, we must hear from people around the world who are both finding new uses for these technologies and who have a profound understanding of the impact and harm that vulnerable communities face when malicious actors and institutions are not held accountable.”

JUST JOKING! analyzes more than 70 recent, wide-ranging cases of deepfakes. Some are examples of potent satire, art, or activism, from mocking authoritarian leaders to synthetically resurrecting victims of injustice to demand action. But others demonstrate how bad actors use comedy as both a sword and a shield, to glorify the powerful and attack marginalized communities, while seeking to escape culpability. Increasingly, satire is used as a defensive excuse — “just joking!” — after a video has circulated and caused harm.

JUST JOKING! is part of a continuing collaboration between the Co-Creation Studio at MIT Open Documentary Lab and WITNESS on the Deepfakery project. WITNESS’ “Prepare, Don’t Panic” initiative pursues a globally inclusive, human rights-led approach to deepfakes, authenticity, and media manipulation. The report is written by deepfake experts Henry Ajder and Joshua Glick, based on lead research by Ajder.

For more information or to request interviews with the report’s producers, email media [@] witness [dot] org.

WITNESS “Deepfakes – Prepare Yourself Now” Report Launched
https://www.witness.org/witness-deepfakes-prepare-yourself-now-report-launched/
Thu, 17 Oct 2019

WITNESS is delighted to announce that our report, “Deepfakes – Prepare Yourself Now,” is live. The report warns that AI-altered media can further threaten already vulnerable communities and people, as well as public trust in video, and identifies key prioritized threats and solutions as seen by a cross-section of Brazilian stakeholders.

Brazil is among the countries that have suffered most from the use of misinformation, disinformation and so-called “fake news.” On July 25th, 2019, WITNESS held a convening on “Deepfakes: Prepare Yourself Now” in São Paulo, Brazil; it followed an earlier meeting with grassroots activists and human rights defenders. The workshop participants included favela-based activists, journalists, fact-checkers, technologists, civic activists, satirists and others, who focused on prioritizing perceived threats and solutions.

“This is likely to be a global problem, and it’s critical that the decisions about what is needed and the solutions we want, both technical and otherwise, are not determined solely in the US and Europe, or made without the voices of the people who will be most harmed,” emphasized Sam Gregory, WITNESS Program Director.

The report is available in English and Portuguese.

For more on WITNESS’ work in this area:

For more on WITNESS’ programmatic work in Brazil:

Governing Deepfakes Detection to Ensure It Supports Global Needs
https://www.witness.org/governing-deepfakes-detection-to-ensure-supports-global-needs/
Thu, 19 Sep 2019

WITNESS will be joining the new Steering Committee on AI and Media Integrity launched by the Partnership on AI. The Steering Committee has been formed to support the governance and coordination of projects strengthening technical capabilities in deepfakes detection. This builds on WITNESS’ current leadership in this space as Co-Chair of the Partnership on AI’s Expert Group on AI and Social/Societal Influence, which has focused this past year on AI and media manipulation, including via the Media Preparedness convening facilitated by WITNESS, the BBC and PAI. Initial members of the Steering Committee include First Draft, WITNESS, XPRIZE, CBC/Radio-Canada, the BBC, The New York Times, Facebook, and Microsoft, among other PAI Partner organizations to be announced later.

The first project of the Steering Committee will be governance of the Deepfakes Detection Challenge, recently announced by Facebook, Microsoft, PAI and a range of leading academic researchers to support increased research into the detection of ‘deepfakes’.

Reflecting WITNESS’ work over the past few years on building an interconnected solutions discussion around deepfakes and synthetic media, one that particularly centers global communities already facing related harms, we intend to highlight the following core issues as critical criteria for judging the value of detection solutions:

  • Accessibility and potential adoptability, particularly outside the US/Europe: How accessible detection methods are to people globally, and how likely any particular method is to be adoptable at scale by a diversity of users, are critical questions raised in our dialogues with journalists, media and civil society worldwide. A recent national-level convening in Brazil reinforced this need and the others outlined below.
  • Explainability of detection approaches: These approaches will enter an existing public sphere characterized by challenges to trust in media, as well as distrust of algorithmic decision-making that is not explainable. The more black-box an approach is, the less convincing it will be to publics, and the less useful to journalists who must explain their findings to skeptical audiences (a minimal sketch of one explainability technique follows this list).
  • Relevance to real-world scenarios likely to be experienced by global publics, particularly outside the Global North, as well as by journalists and fact-checkers (such as manipulated images and videos that are partial fakes, compressed, ‘laundered’ across social media networks, and that must be evaluated and explained in real time). These concerns were highlighted in depth in the workshop WITNESS held connecting leading deepfakes researchers and leading fact-checkers.
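
To make the explainability concern concrete, here is a minimal sketch of one model-agnostic technique, occlusion saliency: hide one region of a frame at a time and measure how much the detector’s score changes, yielding a heatmap of the regions an otherwise black-box detector relied on, evidence a journalist can actually show a skeptical audience. The detector below is a deliberately trivial stand-in so the sketch runs end to end; it is not a real WITNESS, PAI or platform tool, and a deployed system would substitute a trained deepfake classifier.

```python
import numpy as np

def detector_score(frame: np.ndarray) -> float:
    """Stand-in for a trained deepfake classifier returning a
    fake-probability in [0, 1]. A trivial placeholder (mean pixel
    intensity) keeps this sketch self-contained and runnable."""
    return float(frame.mean())

def occlusion_saliency(frame: np.ndarray, patch: int = 16) -> np.ndarray:
    """Slide a neutral-gray patch over the frame and record how much
    the detector's score shifts when each region is hidden. Regions
    whose occlusion moves the score most are the ones the (black-box)
    detector relied on -- renderable as an explanatory heatmap."""
    base = detector_score(frame)
    h, w = frame.shape[:2]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = frame.copy()
            occluded[i:i + patch, j:j + patch] = 0.5  # hide this region
            heat[i // patch, j // patch] = abs(base - detector_score(occluded))
    return heat

if __name__ == "__main__":
    frame = np.random.rand(64, 64)  # placeholder for a grayscale video frame
    heat = occlusion_saliency(frame)
    print("most influential region:", np.unravel_index(heat.argmax(), heat.shape))
```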

Learn more about the background on the Steering Committee from the Partnership on AI’s launch post.


Background on WITNESS’ deepfakes preparedness work: For the past year WITNESS has been working with our partners, journalists and technologists to understand what is needed to prepare for potential threats from deepfakes and other synthetic media. We have particularly focused on ensuring that any approaches are grounded in the existing realities of harms caused by misinformation and disinformation, particularly outside the Global North, and in the responses that communities want. We have also emphasized learning from existing experience among journalist and activist communities dealing with verification, trust and truth, as well as building better collaborations between stakeholders to respond. The stakeholders on this issue include key social media, video-sharing and search platforms, as well as the independent, academic and commercial technologists developing research and products in this area. We hosted the first cross-disciplinary expert convening in this area, and most recently led the first convening to discuss these issues in Brazil. A comprehensive list of our recommendations and our reporting is available here.

WITNESS joins the Partnership on AI
https://www.witness.org/witness-joins-the-partnership-on-ai/
Fri, 16 Nov 2018

Note from our Program Director Sam Gregory:

At a time when the role of artificial intelligence (AI) in the processes of creating media, managing what we see, and moderating content is becoming increasingly important, WITNESS is glad to be joining the Partnership on AI. We look forward to engaging with others in the Partnership on AI (PAI) to address critical challenges around key focus areas of our Tech Advocacy work, including misinformation and disinformation, content moderation, privacy, facial recognition, and deepfakes/synthetic media. There’s a critical opportunity now to ensure that AI is used in a rights-protecting and rights-enhancing way, that marginalized voices are part of the process of development and implementation, and that ethical considerations about when AI is used are front and center. WITNESS will be co-chairing the new Working Group on Social and Societal Influence, which is beginning with a focus on AI and media.

From the PAI website:

We are excited to announce that we have added 10 new organizations to the growing Partnership on AI community. The latest cohort of new members represents a diverse range of sectors, including media and telecommunications businesses, as well as civil rights organizations, academia, and research institutes.

These new members will bring valuable new perspectives. For example, the addition of media organizations will be crucial at a time when AI-enabled techniques in synthetic news and imagery may pose challenges to what people see and believe, but also may help to authenticate and verify information.

PAI is also committed to ensuring geographic diversity in exploring AI’s hard questions, and as a result, the latest group of new members includes organizations from Australia, Canada, Italy, South Korea, the United Kingdom, and the United States, allowing us to bring together important viewpoints from around the world.

The following organizations join the Partnership on AI in November 2018:

Autonomy, Agency and Assurance Innovation (3A) Institute
American Psychological Association
British Broadcasting Corporation (BBC)
DataKind
The New York Times
OPTIC Network
PolicyLink
Samsung Electronics
The Vision and Image Processing Lab at University of Waterloo
WITNESS

Partnership on AI Executive Director Terah Lyons said: “We are proud to welcome a diverse new group of organizations and perspectives to the Partnership on AI, and I look forward to seeing the impact of their contributions. Technology is a series of decisions made by humans, and by involving more viewpoints and perspectives in the AI debate we will be able to improve the quality of those decisions.”

Matthew Postgate, Chief Product and Technology Officer at the BBC, said: “I am delighted that the BBC has joined the Partnership on AI. The use of machine learning and data enabled services offer incredible opportunities for the BBC and our audience, but also present serious challenges for society. We will only realise the benefits and solve the challenges by coming together with other media and technology organisations in the interests of citizens. Partnership on AI and its member base provide the platform to do just that, and I am committed to ensuring the BBC plays an active part.”

Nick Rockwell, Chief Technology Officer at The New York Times, said: “Our mission at The New York Times is to help people better understand the world, so it is imperative that we understand and participate in the ways technology is changing our lives. At The Times, we already use artificial intelligence in many ways to deepen our readers’ engagement with our journalism, always in accordance with ethical guidelines and our commitment to our readers’ privacy, so we are both deeply excited and deeply concerned about the power of artificial intelligence to impact society. We are excited to join the Partnership on AI to continue to deepen our understanding, and to help shape the future of this technology for good.”

Jake Porway, Founder and Executive Director at DataKind, said: “We couldn’t be more aligned to the Partnership on AI’s cause as our mission at DataKind is virtually synonymous — to create a world in which data science and AI are used ethically and capably in the service of humanity. There’s huge potential to reach this goal together, and we’re particularly excited to play the role of connector between the many technology companies in the group committed to making positive social change and the needs on the ground that they could support.”

The new cohort of members will participate in the Partnership’s existing Working Groups and will join new projects and work beginning with the Partnership’s upcoming All Partners Meeting.

The Partnership on AI exists to study and formulate best practices on AI, to advance the public’s understanding of AI, and to provide a platform for open collaboration between all those involved in, and affected by, the development and deployment of AI technologies.

To succeed in this mission, we need deep involvement from diverse voices and viewpoints that represent a wide range of audiences, geographies, and interests.

We welcome questions from organizations interested in learning more about membership. To contact us, please see the forms available here.

In Conversation With National Endowment for Democracy: How Will Deepfakes Transform Disinformation?
https://www.witness.org/in-conversation-with-national-endowment-for-democracy-how-will-deepfakes-transform-disinformation/
Mon, 01 Oct 2018

People have started to panic about the increasing possibility of manipulating images, video, and audio, often popularly described as “deepfakes”. For the past decade, Hollywood studios have had the capacity to morph faces, from Brad Pitt in “The Curious Case of Benjamin Button” to Princess Leia in “Rogue One: A Star Wars Story”, and companies and consumers have had tools such as Photoshop to digitally alter images and video in subtler ways.

Disinformation, the intentional use of false or misleading information for political purposes, is increasingly recognized as a threat to democracy worldwide. Many observers argue that this challenge has been exacerbated by social media and a declining environment for independent news outlets. Now, new advances in technology—including but not limited to “deepfakes” and other forms of synthetic media—threaten to supercharge the disinformation crisis.

WITNESS Program Director Sam Gregory, along with four other leading deepfakes experts, sat down with the National Endowment for Democracy to talk about these threats and the role they play in the disinformation landscape.

“The most serious ramification of deepfakes and other forms of synthetic media is that they further damage people’s trust in our shared information sphere and contribute to the move of our default response from trust to mistrust,” Sam told NED.

To read the entire interview, click here.

For more on our work on deepfakes, click here.

 

Cast Your Vote for WITNESS at SXSW 2019!
https://www.witness.org/cast-your-vote-for-witness-at-sxsw-2019/
Mon, 13 Aug 2018

It’s that time of year when YOU decide what topics you want to hear about at South by Southwest (SXSW) in 2019. The 10-day convening, which brings the world’s top filmmakers, musicians, technologists, and creatives to Austin, Texas, has become an annual event for WITNESS. Whether it’s sharing our latest programmatic work with organizational peers, interacting with cutting-edge technologies, or participating in conversations about media and human rights, we look forward to both learning and sharing at SXSW every March.

This year we’re proposing the panel, “Deepfakes: What Should We Fear, What Can We Do,” led by our Program Director Sam Gregory to discuss questions surrounding deepfakes and synthetic media, how they can be used maliciously, and how we can detect and stop them.

More about the panel:

Deepfakes! As more sophisticated, more personalized, more convincing audio and video manipulation emerges, how do we get beyond the apocalyptic discussion of the “end of trust in images and audio” and instead focus on what we can do about malicious deepfakes and other AI-manipulated synthetic media? Based on WITNESS’ collaborations with technologists, journalists and human rights activists, we’ll explore the state-of-the-art usage of deepfakes and other ‘synthetic media’, the solutions available to fight these malicious uses, and where this goes next. Linked to broader trends in challenges to public trust, disinformation, and the evolving information ecosystem globally, how should we plan together to fight the dark side of a faked video and audio future?

Sound interesting? Want to hear from WITNESS in Austin? Then please cast your vote for Deepfakes: What Should We Fear, What Can We Do here!

WITNESS featured in Harvard Business Review
https://www.witness.org/witness-featured-in-harvard-business-review/
Thu, 09 Aug 2018

We are pleased to announce that our Program Director Sam Gregory was recently interviewed by Scott Berinato for the Harvard Business Review about WITNESS and the future of news and synthetic media.

In the article, titled “Business in the Age of Computational Propaganda and Deep Fakes,” Sam spoke about the reality of deep fakes, how WITNESS is working with companies to inform them of deep fakes’ potential to influence people (not only in the political sphere), and how to deal with synthetic media.

“Deliberately polluting the environment to erode trust is a common authoritarian tactic. So we have to guard against it… In human rights, that could mean identifying an audience that you want to target hate speech at… In business, I think about phishing scams in which someone fakes a voice you trust. These are some of the threat models we have been laying out as we try to address the dangers of mis- and disinformation broadly,” Sam said during the interview.

You can access the full interview here.

WITNESS LEADS CONVENING ON PROACTIVE SOLUTIONS TO MAL-USES OF DEEPFAKES AND OTHER AI-GENERATED SYNTHETIC MEDIA
https://www.witness.org/witness-leads-convening-on-proactive-solutions-to-mal-uses-of-deepfakes-and-other-ai-generated-synthetic-media/
Mon, 02 Jul 2018

Read the detailed summary of discussions and recommendations on next steps here.

On June 11, 2018, WITNESS, in collaboration with First Draft, a project of the Shorenstein Center on Media, Politics and Public Policy at Harvard Kennedy School, brought together 30 leading independent and company-based technologists, machine learning specialists, academic researchers in synthetic media, human rights researchers, and journalists. Under the Chatham House Rule, the discussion focused on pragmatic and proactive ways to mitigate the threats that widespread use and commercialization of new tools for AI-generated synthetic media, such as deepfakes and facial reenactment, potentially pose to public trust, reliable journalism and trustworthy human rights documentation.

WITNESS has for twenty-five years enabled human rights defenders, and now increasingly anyone, anywhere, to use video and technology to protect and defend human rights. Our experience has shown the value of images to drive more diverse personal storytelling and civic journalism, to power movements around pervasive human rights violations like police violence, and to serve as critical evidence in war crimes trials. We have also seen the ease with which videos and audio, often crudely edited or even simply recycled and re-contextualized, can perpetuate and renew cycles of violence.

WITNESS’ Tech + Advocacy work has frequently included engaging with key social media and video-sharing platforms to develop innovative policy and product responses to the challenges facing high-risk users and high-public-interest content. As the potential threat of more sophisticated, more personalized audio and video manipulation emerges, we see a critical need to bring together key actors before we are in the eye of the storm, to ensure we prepare in a more coordinated way and to challenge technopocalyptic narratives that in and of themselves damage public trust in video and audio.

The convening goals included:

  • Broaden journalists’, technologists’ and human rights researchers’ understanding of these new technologies, where needed;
  • While recognizing positive potential usages, begin building a common understanding of the threats created by, and potential responses to, mal-uses of AI-generated imagery, video and audio for public discourse, reliable news and human rights documentation, and map the landscape of innovation in this area;
  • Build a shared understanding of existing approaches in human rights, journalism and technology to deal with mal-uses of faked, simulated and recycled images, audio and video, and their relationship to other forms of mis/dis/mal-information;
  • Based on case studies (real and hypothetical), facilitate discussion of potential pragmatic tactical, normative and technical responses to risk models of fabricated audio and video by companies, independent activists, journalists, academic researchers, open-source technologists and commercial platforms;
  • Identify priorities for continued discussion between stakeholders.

Recommendations emerging from the convening included:

  1. Baseline research and a focused sprint on the optimal ways to track authenticity, integrity, provenance and digital edits of images, audio and video from capture to sharing to ongoing use (a minimal sketch of this capture-to-sharing idea appears as the first example after this list). Research should focus on a rights-protecting approach that a) maximizes how many people can access these tools, b) minimizes barriers to entry and potential suppression of free speech without compromising the right to privacy and freedom from surveillance, c) minimizes risk to vulnerable creators and custody-holders, and balances these with d) the potential feasibility of integrating these approaches in the broader context of platforms, social media and search engines. This research needs to reflect platform, independent commercial and open-source activist efforts, consider the use of blockchain and similar technologies, review precedents (e.g. spam and current anti-disinformation efforts) and identify the pros and cons of different approaches, as well as unanticipated risks. WITNESS will lead on supporting this research and sprint.
  2. Detailed threat modelling around synthetic media mal-uses for particular key stakeholders (journalists, human rights defenders, others). Create models based on actors, motivations and attack vectors, resulting in identification of tailored approaches relevant to specific stakeholders or issues/values at stake.
  3. Public and private dialogue on how platforms, social media sites and search engines design a shared approach and better coordinate around mal-uses of synthetic media. Much like the public discussions around data use and content moderation, there is a role for third parties in civil society to serve as a public voice on the pros and cons of various approaches, as well as to facilitate public discussion and serve as a neutral space for consensus-building. WITNESS will support this type of outcomes-oriented discussion.
  4. Platforms, search and social media companies should prioritize development of key tools already identified as critical in the OSINT human rights and journalism community, particularly reverse video search (see the second example after this list). This is because many of the problems of synthetic media relate to existing challenges around verification and trust in visual media.
  5. More shared learning on how to detect synthetic media that brings together existing practices from manual and automatic forensic analysis with human rights, Open Source Intelligence (OSINT) and journalistic practitioners, potentially via a workshop where they test and learn each other’s methods and work out what to adopt and how to make techniques accessible. WITNESS and First Draft will engage on this.
  6. Prepare for the emergence of synthetic media in real-world situations by working with journalists and human rights defenders to build playbooks for upcoming risk scenarios, so that no one can claim ‘we didn’t see this coming’ and to facilitate more understanding of the technologies at stake. WITNESS and First Draft will collaborate on this.
  7. Include additional stakeholders who were under-represented in the 6/11 convening and are critical voices, either in an additional meeting or in upcoming activities:
    • “Global South” voices, as well as marginalized communities in the US and Europe
    • Policy and legal voices at national and international levels
    • Artists and provocateurs
  8. Develop additional understanding of relevant research questions and lead research to inform other strategies. First Draft will lead on additional research.
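
To illustrate the capture-to-sharing tracking in recommendation 1 at its very simplest, the first sketch below fingerprints a media file at capture and chains each subsequent edit or share event to the previous record, so that any later rewriting of the history breaks verification. This is a hypothetical, minimal illustration of the general idea, not WITNESS’ proposal or any platform’s actual scheme; a real system would add cryptographic signatures, key management, metadata standards and the privacy protections the recommendation calls for.

```python
import hashlib
import json

def file_hash(path: str) -> str:
    """SHA-256 of the media file's bytes: its capture-time fingerprint."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def append_event(chain: list, media_hash: str, action: str) -> None:
    """Append a provenance event whose hash covers the previous event,
    forming a tamper-evident chain from capture through each edit/share."""
    record = {
        "media_hash": media_hash,
        "action": action,
        "prev": chain[-1]["event_hash"] if chain else "",
    }
    record["event_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

def verify(chain: list) -> bool:
    """Recompute every link; any edited or reordered record fails."""
    prev = ""
    for rec in chain:
        body = {k: rec[k] for k in ("media_hash", "action", "prev")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["event_hash"] != expected:
            return False
        prev = rec["event_hash"]
    return True

if __name__ == "__main__":
    chain = []
    media = "3c9a..."  # in practice: file_hash("clip.mp4")
    append_event(chain, media, "captured")
    append_event(chain, media, "edited: trimmed")
    append_event(chain, media, "shared to platform")
    print("chain valid:", verify(chain))
    chain[1]["action"] = "edited: face swapped"  # tamper with the history
    print("after tampering:", verify(chain))
```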
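
The reverse video search named in recommendation 4 can be sketched with one standard building block: a perceptual hash computed per keyframe, so that re-compressed or lightly edited copies of a clip still match. The difference-hash (dHash) implementation below is an assumption about how such a tool might work at its core, not a description of any platform’s system; production search would also handle temporal alignment, crops and overlays.

```python
import numpy as np

def dhash(frame: np.ndarray, size: int = 8) -> int:
    """Difference hash of a grayscale frame: block-average down to
    (size, size + 1), then record whether each cell is brighter than
    its right-hand neighbour. Small edits and re-compression change
    few bits, so near-duplicates stay close in Hamming distance."""
    h, w = frame.shape
    rows = np.linspace(0, h, size + 1, dtype=int)
    cols = np.linspace(0, w, size + 2, dtype=int)
    small = np.array([
        [frame[rows[i]:rows[i + 1], cols[j]:cols[j + 1]].mean()
         for j in range(size + 1)]
        for i in range(size)
    ])
    bits = (small[:, 1:] > small[:, :-1]).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def search(query_hashes, index, max_dist: int = 10):
    """Return IDs of indexed videos with a keyframe hash within
    max_dist bits of any query hash: near-duplicates that survived
    'laundering' across social media networks."""
    return {
        video_id
        for q in query_hashes
        for video_id, stored in index
        if hamming(q, stored) <= max_dist
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.random((360, 640))  # stand-in for a grayscale keyframe
    recompressed = np.clip(original + rng.normal(0, 0.02, original.shape), 0, 1)
    index = [("video_A", dhash(original)), ("video_B", dhash(rng.random((360, 640))))]
    print(search([dhash(recompressed)], index))  # expect {'video_A'}
```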

 

For blog posts providing further details on next steps, see:
