Artificial Intelligence Archives - WITNESS
https://www.witness.org/tag/artificial-intelligence/

WITNESS “Deepfakes – Prepare Yourself Now” Report Launched
https://www.witness.org/witness-deepfakes-prepare-yourself-now-report-launched/
Thu, 17 Oct 2019 16:07:02 +0000

WITNESS is delighted to announce that our report, “Deepfakes – Prepare Yourself Now,” is live. The report warns how AI-altered media can further threaten already vulnerable communities and people, as well as public trust in video, and identifies the key threats and solutions prioritized by a cross-section of Brazilian stakeholders.

Brazil is one of the countries most affected by misinformation, disinformation and so-called “fake news.” On July 25, 2019, WITNESS held a convening, “Deepfakes: Prepare Yourself Now,” in São Paulo, Brazil; it followed an earlier meeting with grassroots activists and human rights defenders. Workshop participants included favela-based activists, journalists, fact-checkers, technologists, civic activists, satirists and others, who focused on prioritizing perceived threats and solutions.

“This is likely to be a global problem and it’s critical that the decisions about what is needed and the solutions we want, both technical and otherwise, are not just determined in the US and Europe or excluding the voices of people who will most be harmed,” emphasized Sam Gregory, WITNESS Program Director.

The report is available in English and Portuguese.

For more on WITNESS’ work in this area:

For more on WITNESS’ programmatic work in Brazil:

WITNESS Announces 2019-20 Mozilla Fellow
https://www.witness.org/witness-announces-2019-20-mozilla-fellow/
Fri, 27 Sep 2019 18:21:14 +0000

WITNESS is excited to welcome Leil Zahra as our 2019-2020 Mozilla Fellow! Leil is a trans-feminist queer filmmaker and researcher.

Mozilla Fellowships provide funding, amplification, and networking opportunities to individuals devoted to privacy, inclusion, and openness online. This cohort of Mozilla Fellows will debunk myths about AI and research new models of data governance.

Mozilla Fellows hail from more than 10 countries and consist of technologists, activists, lawyers, designers, and scientists. Many of these fellows are working on the topic of trustworthy AI — artificial intelligence that helps, rather than harms, humanity.

Leil will be working with WITNESS’ Tech + Advocacy team, researching content takedown in the North Africa and West Asia region, the double standards in so-called “moderation,” and the existing agreements between some governments and social media platforms. In collaboration with WITNESS, Leil will work to bring more voices from the region to the table, joining efforts to challenge European and North American centrism in the debate around tech. Before joining Mozilla, Leil worked with various projects and organisations focused on digital security and data, researching the impacts of surveillance and data collection on human rights and political participation, and examining creative responses to those threats. They approach tech from a critical, de-colonial and trans-feminist perspective.

WITNESS is very excited about Leil’s approach and how it will enrich our existing content moderation and platform accountability work. Leil’s focus fits perfectly with our commitment to challenge global power dynamics in this field, and we’re excited to see their project unfold.

Governing Deepfakes Detection to Ensure It Supports Global Needs
https://www.witness.org/governing-deepfakes-detection-to-ensure-supports-global-needs/
Thu, 19 Sep 2019 16:03:51 +0000

WITNESS will be joining the new Steering Committee on AI and Media Integrity launched by the Partnership on AI. The Steering Committee has been formed to support the governance and coordination of projects strengthening technical capabilities in deepfakes detection. This builds on WITNESS’ current leadership in this space as Co-Chair of the Partnership on AI’s Expert Group on AI and Social/Societal Influence, which has focused this past year on AI and media manipulation, including via the Media Preparedness convening facilitated by WITNESS, the BBC and PAI. Initial members of the Steering Committee include First Draft, WITNESS, XPRIZE, CBC/Radio-Canada, the BBC, The New York Times, Facebook and Microsoft, with other PAI Partner organizations to be announced later.

The first project of the Steering Committee will be governance of the Deepfakes Detection Challenge recently announced by Facebook, Microsoft, PAI and a range of leading academic researchers to support increased research into the detection of ‘deepfakes.’

WITNESS has spent the past few years building an interconnected solutions discussion around deepfakes and synthetic media, one that particularly centers global communities already facing related harms. Reflecting that work, we intend to highlight the following core issues as critical in judging the value of detection solutions:

  • Accessibility and potential adoptability, particularly outside the US and Europe. How accessible detection methods are to people globally, and how likely any particular method is to be adoptable at scale for a diversity of people, are critical questions raised in our dialogues with journalists, media and civil society worldwide. A recent national-level convening in Brazil reinforced this need and the others outlined below.
  • Explainability of detection approaches. Detection approaches will enter a public sphere already characterized by challenges to trust in media, as well as by distrust of algorithmic decision-making that cannot be explained. The more black-box an approach is, the less convincing it will be to publics, and the less useful to journalists who must explain their findings to skeptical audiences.
  • Relevance to real-world scenarios likely to be experienced by global publics, particularly outside the Global North, as well as by journalists and fact-checkers, such as manipulated images and videos that are partial fakes, compressed, ‘laundered’ across social media networks, and must be evaluated and explained in real time. These concerns were highlighted in depth at the workshop WITNESS held connecting leading deepfakes researchers and leading fact-checkers.

Learn more about the background on the Steering Committee from the Partnership on AI’s launch post.


Background on WITNESS’ deepfakes preparedness work: For the past year, WITNESS has been working with our partners, journalists and technologists to understand what is needed to prepare for potential threats from deepfakes and other synthetic media. We have particularly focused on ensuring that any approaches are grounded in the existing realities of harms caused by misinformation and disinformation, particularly outside the Global North, and in the responses that communities want. We have also emphasized learning from the existing experience of journalist and activist communities dealing with verification, trust and truth, as well as building better collaboration between stakeholders. Those stakeholders include key social media, video-sharing and search platforms, as well as the independent, academic and commercial technologists developing research and products in this area. We hosted the first cross-disciplinary expert convening in this area, and most recently led the first convening to discuss these issues in Brazil. A comprehensive list of our recommendations and our reporting is available here.

WITNESS Co-Hosts Convening With Partnership on AI and BBC on Protecting Public Discourse From AI-Generated Mis/Disinformation
https://www.witness.org/witness-convening-protecting-public-discourse-ai-generated-mis-disinformation/
Thu, 20 Jun 2019 20:07:29 +0000

At the end of May, WITNESS co-hosted a workshop in London with the Partnership on AI and the BBC to address the question:

As AI becomes more sophisticated and its techniques more accessible, how can organizations across technology, media, civil society, and the academic research community work together to coordinate strategies around the emergent threat of AI-generated mis/disinformation?

The London workshop aimed to:

  • Connect news and media organizations, key technology companies, researchers, and others,
  • Facilitate better understanding of the threats organizations currently face and will face in the future,
  • Promote development of potential solutions to those threats, and how these relate to existing technical and journalistic approaches as well as the global contexts of mis/disinformation,
  • Enable identification of tactics for better communication/coordination between participants, and
  • Allow participants to work together on the long-term, positive development of AI in the context of mis/disinformation.

The full post, co-authored by WITNESS Program Director Sam Gregory, is available here.

At WITNESS, we work on ways to prepare for the challenges of emerging threats such as “deepfakes” and synthetic media. Recently, we published a report on how we can work together to detect artificial intelligence-manipulated media. Click here to learn more about WITNESS’ special initiative focused on the impact of these emerging threats.

WITNESS joins the Partnership on AI
https://www.witness.org/witness-joins-the-partnership-on-ai/
Fri, 16 Nov 2018 23:07:44 +0000

Note from our Program Director Sam Gregory:

At a time when the role of artificial intelligence (AI) in creating media, managing what we see, and moderating content is becoming increasingly important, WITNESS is glad to be joining the Partnership on AI. We look forward to engaging with others in the Partnership on AI (PAI) to address critical challenges around key focus areas of our Tech Advocacy work, including misinformation and disinformation, content moderation, privacy, facial recognition, and deepfakes/synthetic media. There is a critical opportunity now to ensure that AI is used in a rights-protecting and rights-enhancing way, that marginalized voices are part of the process of development and implementation, and that ethical considerations about when AI is used are front and center. WITNESS will be co-chairing the new Working Group on Social and Societal Influence, which is beginning with a focus on AI and media.

From the PAI website:

We are excited to announce that we have added 10 new organizations to the growing Partnership on AI community. The latest cohort of new members represents a diverse range of sectors, including media and telecommunications businesses, as well as civil rights organizations, academia, and research institutes.

These new members will bring valuable new perspectives. For example, the addition of media organizations will be crucial at a time when AI-enabled techniques in synthetic news and imagery may pose challenges to what people see and believe, but also may help to authenticate and verify information.

PAI is also committed to ensuring geographic diversity in exploring AI’s hard questions, and as a result, the latest group of new members includes organizations from Australia, Canada, Italy, South Korea, United Kingdom, and the United States, allowing us to bring together important viewpoints from around the world.

The following organizations join the Partnership on AI in November 2018:

Autonomy, Agency and Assurance Innovation (3A) Institute
American Psychological Association
British Broadcasting Corporation (BBC)
DataKind
The New York Times
OPTIC Network
PolicyLink
Samsung Electronics
The Vision and Image Processing Lab at University of Waterloo
WITNESS

Partnership on AI Executive Director Terah Lyons said: “We are proud to welcome a diverse new group of organizations and perspectives to the Partnership on AI, and I look forward to seeing the impact of their contributions. Technology is a series of decisions made by humans, and by involving more viewpoints and perspectives in the AI debate we will be able to improve the quality of those decisions.”

Matthew Postgate, Chief Product and Technology Officer at the BBC, said: “I am delighted that the BBC has joined the Partnership on AI. The use of machine learning and data enabled services offer incredible opportunities for the BBC and our audience, but also present serious challenges for society. We will only realise the benefits and solve the challenges by coming together with other media and technology organisations in the interests of citizens. Partnership on AI and its member base provide the platform to do just that, and I am committed to ensuring the BBC plays an active part.”

Nick Rockwell, Chief Technology Officer at The New York Times, said: “Our mission at The New York Times is to help people better understand the world, so it is imperative that we understand and participate in the ways technology is changing our lives. At The Times, we already use artificial intelligence in many ways to deepen our readers’ engagement with our journalism, always in accordance with ethical guidelines and our commitment to our readers’ privacy, so we are both deeply excited and deeply concerned about the power of artificial intelligence to impact society. We are excited to join the Partnership on AI to continue to deepen our understanding, and to help shape the future of this technology for good.”

Jake Porway, Founder and Executive Director at DataKind, said: “We couldn’t be more aligned to the Partnership on AI’s cause as our mission at DataKind is virtually synonymous — to create a world in which data science and AI are used ethically and capably in the service of humanity. There’s huge potential to reach this goal together, and we’re particularly excited to play the role of connector between the many technology companies in the group committed to making positive social change and the needs on the ground that they could support.”

The new cohort of members will participate in the Partnership’s existing Working Groups and will join new projects and work beginning with the Partnership’s upcoming All Partners Meeting.

The Partnership on AI exists to study and formulate best practices on AI, to advance the public’s understanding of AI, and to provide a platform for open collaboration between all those involved in, and affected by, the development and deployment of AI technologies.

To succeed in this mission, we need deep involvement from diverse voices and viewpoints that represent a wide range of audiences, geographies, and interests.

We welcome questions from organizations interested in learning more about membership. To contact us, please see the forms available here.

In Conversation With National Endowment for Democracy: How Will Deepfakes Transform Disinformation?
https://www.witness.org/in-conversation-with-national-endowment-for-democracy-how-will-deepfakes-transform-disinformation/
Mon, 01 Oct 2018 15:41:32 +0000

People have started to panic about the increasing possibility of manipulating images, video, and audio, often popularly described as “deepfakes.” For the past decade, Hollywood studios have had the capacity to morph faces, from Brad Pitt in “The Curious Case of Benjamin Button” to Princess Leia in “Rogue One: A Star Wars Story,” and companies and consumers have had tools such as Photoshop to digitally alter images and video in subtler ways.

Disinformation, the intentional use of false or misleading information for political purposes, is increasingly recognized as a threat to democracy worldwide. Many observers argue that this challenge has been exacerbated by social media and a declining environment for independent news outlets. Now, new advances in technology—including but not limited to “deepfakes” and other forms of synthetic media—threaten to supercharge the disinformation crisis.

WITNESS Program Director Sam Gregory, along with four other leading deepfakes experts, sat down with the National Endowment for Democracy to talk about these threats and the role they play in the disinformation landscape.

“The most serious ramification of deepfakes and other forms of synthetic media is that they further damage people’s trust in our shared information sphere and contribute to the move of our default response from trust to mistrust,” Sam told NED.

To read the entire interview, click here.

For more on our work on deepfakes, click here.

 

WITNESS Leads Convening on Proactive Solutions to Mal-Uses of Deepfakes and Other AI-Generated Synthetic Media
https://www.witness.org/witness-leads-convening-on-proactive-solutions-to-mal-uses-of-deepfakes-and-other-ai-generated-synthetic-media/
Mon, 02 Jul 2018 21:46:03 +0000

Read the detailed summary of discussions and recommendations on next steps here.

On June 11, 2018, WITNESS, in collaboration with First Draft, a project of the Shorenstein Center on Media, Politics and Public Policy at Harvard Kennedy School, brought together 30 leading independent and company-based technologists, machine learning specialists, academic researchers in synthetic media, human rights researchers, and journalists. Under the Chatham House Rule, the discussion focused on pragmatic and proactive ways to mitigate the threats that the widespread use and commercialization of new tools for AI-generated synthetic media, such as deepfakes and facial reenactment, potentially pose to public trust, reliable journalism and trustworthy human rights documentation.

WITNESS has for twenty-five years enabled human rights defenders, and now increasingly anyone, anywhere, to use video and technology to protect and defend human rights. Our experience has shown the value of images in driving more diverse personal storytelling and civic journalism, in driving movements around pervasive human rights violations like police violence, and in serving as critical evidence in war crimes trials. We have also seen the ease with which videos and audio, often crudely edited or even simply recycled and re-contextualized, can perpetuate and renew cycles of violence.

WITNESS’ Tech + Advocacy work has frequently included engaging with key social media and video-sharing platforms to develop innovative policy and product responses to challenges facing high-risk users and high-public-interest content. As the potential threat of more sophisticated, more personalized audio and video manipulation emerges, we see a critical need to bring together key actors before we are in the eye of the storm, to ensure we prepare in a more coordinated way, and to challenge technopocalyptic narratives that in and of themselves damage public trust in video and audio.

The convening goals included:

  • Broaden journalists’, technologists’ and human rights researchers’ understanding of these new technologies, where needed;
  • While recognizing positive potential uses, begin building a common understanding of the threats created by, and potential responses to, mal-uses of AI-generated imagery, video and audio for public discourse, reliable news and human rights documentation, and map the landscape of innovation in this area;
  • Build a shared understanding of existing approaches in human rights, journalism and technology for dealing with mal-uses of faked, simulated and recycled images, audio and video, and their relationship to other forms of mis/dis/mal-information;
  • Based on real and hypothetical case studies, facilitate discussion of potential pragmatic tactical, normative and technical responses to risk models of fabricated audio and video by companies, independent activists, journalists, academic researchers, open-source technologists and commercial platforms; and
  • Identify priorities for continued discussion between stakeholders.

Recommendations emerging from the convening included:

  1. Baseline research and a focused sprint on the optimal ways to track the authenticity, integrity, provenance and digital edits of images, audio and video from capture to sharing to ongoing use. Research should focus on a rights-protecting approach that a) maximizes how many people can access these tools; b) minimizes barriers to entry and the potential suppression of free speech, without compromising the right to privacy and freedom from surveillance; c) minimizes risk to vulnerable creators and custody-holders; and d) balances these with the feasibility of integrating these approaches into the broader context of platforms, social media and search engines. This research needs to reflect platform, independent commercial and open-source activist efforts; consider the use of blockchain and similar technologies; review precedents (e.g. spam and current anti-disinformation efforts); and identify the pros and cons of different approaches, as well as unanticipated risks. WITNESS will lead on supporting this research and sprint.
  2. Detailed threat modelling around synthetic media mal-uses for particular key stakeholders (journalists, human rights defenders, others). Create models based on actors, motivations and attack vectors, resulting in identification of tailored approaches relevant to specific stakeholders or issues/values at stake.
  3. Public and private dialogue on how platforms, social media sites and search engines design a shared approach and better coordinate around mal-uses of synthetic media. Much like the public discussions around data use and content moderation, there is a role for third parties in civil society to serve as a public voice on the pros and cons of various approaches, as well as to facilitate public discussion and serve as a neutral space for consensus-building. WITNESS will support this type of outcomes-oriented discussion.
  4. Platforms, search and social media companies should prioritize development of key tools already identified in the OSINT human rights and journalism community as critical: particularly reverse video search. This is because many of the problems of synthetic media relate to existing challenges around verification and trust in visual media.
  5. More shared learning on how to detect synthetic media, bringing together existing practices from manual and automatic forensic analysis with human rights, Open Source Intelligence (OSINT) and journalistic practitioners, potentially via a workshop where they test and learn each other’s methods and work out what to adopt and how to make techniques accessible. WITNESS and First Draft will engage on this.
  6. Prepare for the emergence of synthetic media in real-world situations by working with journalists and human rights defenders to build playbooks for upcoming risk scenarios, so that no one can claim ‘we didn’t see this coming’ and to facilitate more understanding of the technologies at stake. WITNESS and First Draft will collaborate on this.
  7. Include additional stakeholders who were under-represented in the June 11 convening and are critical voices, either in an additional meeting or in upcoming activities:
    • “Global South” voices, as well as marginalized communities in the US and Europe
    • Policy and legal voices at the national and international levels
    • Artists and provocateurs
  8. Develop additional understanding of relevant research questions, and lead research to inform other strategies. First Draft will lead on additional research.
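
To make recommendation 1 more concrete: one minimal way to track a media file’s integrity from capture to sharing is to record a cryptographic hash at capture time and re-check it on receipt. The sketch below is purely illustrative, not a WITNESS tool or proposal; the function names (`seal_at_capture`, `verify_on_receipt`) and the use of a symmetric device key are hypothetical simplifications. A real provenance system would use public-key signatures, signed metadata, and survive re-encoding, which a raw hash does not.

```python
# Illustrative sketch only: hash a media file at capture and "seal" the hash
# with a keyed HMAC, then verify both on receipt. Uses only the standard library.
import hashlib
import hmac

CAPTURE_KEY = b"per-device-secret"  # stand-in for a real per-device signing key

def seal_at_capture(media_bytes: bytes) -> dict:
    """Record a content hash and a keyed seal at the moment of capture."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    seal = hmac.new(CAPTURE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "seal": seal}

def verify_on_receipt(media_bytes: bytes, record: dict) -> bool:
    """Re-hash the received file and check it against the capture record."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    expected = hmac.new(CAPTURE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(expected, record["seal"])

original = b"\x00\x01 raw video frames \x02"
record = seal_at_capture(original)
assert verify_on_receipt(original, record)             # untouched file passes
assert not verify_on_receipt(original + b"!", record)  # any byte-level edit is detected
```

Note the trade-off flagged in the recommendation itself: a scheme like this raises barriers for ordinary creators and can expose vulnerable custody-holders via the key, which is exactly why the research calls for balancing accessibility, privacy and feasibility.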

 

For blog posts providing further details on next steps, see:

 
