Technology Archives - WITNESS
https://www.witness.org/tag/technology/
Human Rights Video

WITNESS “Deepfakes – Prepare Yourself Now” Report Launched
https://www.witness.org/witness-deepfakes-prepare-yourself-now-report-launched/
Thu, 17 Oct 2019 16:07:02 +0000

WITNESS is delighted to announce that our report, “Deepfakes – Prepare Yourself Now,” is live. The report warns how AI-altered media can further threaten already vulnerable communities and people, as well as public trust in video, and identifies key prioritized threats and solutions as seen by a cross-section of Brazilian stakeholders.

Brazil is one of the countries that have suffered most from the use of misinformation, disinformation and so-called “fake news.” On July 25th, 2019, WITNESS held a convening on “Deepfakes: Prepare Yourself Now” in São Paulo, Brazil; it followed an earlier meeting with grassroots activists and human rights defenders. The workshop participants included favela-based activists, journalists, fact-checkers, technologists, civic activists, satirists and others, who focused on prioritizing perceived threats and solutions.

“This is likely to be a global problem and it’s critical that the decisions about what is needed and the solutions we want, both technical and otherwise, are not just determined in the US and Europe or excluding the voices of people who will most be harmed,” emphasized Sam Gregory, WITNESS Program Director.

The report is available in English and Portuguese.

For more on WITNESS’ work in this area:

For more on WITNESS’ programmatic work in Brazil:

WITNESS and partners push back against EU regulation that threatens online free expression
https://www.witness.org/witness-and-partners-push-back-against-eu-regulation-that-threatens-online-free-expression/
Mon, 28 Jan 2019 17:54:11 +0000

WITNESS has partnered with peers around the world to issue a letter to Rapporteur Daniel Dalton and the rest of the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs to voice opposition to the proposed Regulation on Dissemination of Terrorist Content Online.

The proposal is a serious threat to online free expression in Europe and freedom of expression globally. It is likely to inspire dangerous copycat laws and to encourage increased use of opaque machine-learning algorithms to remove content, including content created by human rights defenders, alternative media, and journalists.

Leaving it to algorithms to detect “extremist content” will create innumerable false positives and damage human rights content that is critical to ensuring accountability for perpetrators: content for which activists and journalists often risk their lives.

The ideas and concerns expressed in this letter are based on the real-world experiences of WITNESS, our partners, and the 25 other signatories. We are honored to bring together the diverse voices on this letter in defense of online freedom of expression and, specifically, the protection of human rights content.

Read more about this consortium, our opposition to the Regulation on Dissemination of Terrorist Content Online, and the full letter to the European Parliament on our blog.

Photo: © European Union 2017 – European Parliament (Attribution-NonCommercial-NoDerivatives Creative Commons license)

WITNESS joins the Partnership on AI
https://www.witness.org/witness-joins-the-partnership-on-ai/
Fri, 16 Nov 2018 23:07:44 +0000

Note from our Program Director Sam Gregory:

At a time when the role of artificial intelligence (AI) in creating media, managing what we see, and moderating content is becoming increasingly important, WITNESS is glad to be joining the Partnership on AI. We look forward to engaging with others in the Partnership on AI (PAI) to address critical challenges around key focus areas of our Tech Advocacy work, including misinformation and disinformation, content moderation, privacy, facial recognition, and deepfakes/synthetic media. There is a critical opportunity now to ensure that AI is used in a rights-protecting and rights-enhancing way, that marginalized voices are part of the process of development and implementation, and that ethical considerations about when AI is used are front and center. WITNESS will be co-chairing the new Working Group on Social and Societal Influence, which is beginning with a focus on AI and media.

From the PAI website:

We are excited to announce that we have added 10 new organizations to the growing Partnership on AI community. The latest cohort of new members represents a diverse range of sectors, including media and telecommunications businesses, as well as civil rights organizations, academia, and research institutes.

These new members will bring valuable new perspectives. For example, the addition of media organizations will be crucial at a time when AI-enabled techniques in synthetic news and imagery may pose challenges to what people see and believe, but also may help to authenticate and verify information.

PAI is also committed to ensuring geographic diversity in exploring AI’s hard questions, and as a result, the latest group of new members includes organizations from Australia, Canada, Italy, South Korea, the United Kingdom, and the United States, allowing us to bring together important viewpoints from around the world.

The following organizations join the Partnership on AI in November 2018:

Autonomy, Agency and Assurance Innovation (3A) Institute
American Psychological Association
British Broadcasting Corporation (BBC)
DataKind
The New York Times
OPTIC Network
PolicyLink
Samsung Electronics
The Vision and Image Processing Lab at University of Waterloo
WITNESS

Partnership on AI Executive Director Terah Lyons said: “We are proud to welcome a diverse new group of organizations and perspectives to the Partnership on AI, and I look forward to seeing the impact of their contributions. Technology is a series of decisions made by humans, and by involving more viewpoints and perspectives in the AI debate we will be able to improve the quality of those decisions.”

Matthew Postgate, Chief Product and Technology Officer at the BBC, said: “I am delighted that the BBC has joined the Partnership on AI. The use of machine learning and data enabled services offer incredible opportunities for the BBC and our audience, but also present serious challenges for society. We will only realise the benefits and solve the challenges by coming together with other media and technology organisations in the interests of citizens. Partnership on AI and its member base provide the platform to do just that, and I am committed to ensuring the BBC plays an active part.”

Nick Rockwell, Chief Technology Officer at The New York Times, said: “Our mission at The New York Times is to help people better understand the world, so it is imperative that we understand and participate in the ways technology is changing our lives. At The Times, we already use artificial intelligence in many ways to deepen our readers’ engagement with our journalism, always in accordance with ethical guidelines and our commitment to our readers’ privacy, so we are both deeply excited and deeply concerned about the power of artificial intelligence to impact society. We are excited to join the Partnership on AI to continue to deepen our understanding, and to help shape the future of this technology for good.”

Jake Porway, Founder and Executive Director at DataKind, said: “We couldn’t be more aligned to the Partnership on AI’s cause as our mission at DataKind is virtually synonymous — to create a world in which data science and AI are used ethically and capably in the service of humanity. There’s huge potential to reach this goal together, and we’re particularly excited to play the role of connector between the many technology companies in the group committed to making positive social change and the needs on the ground that they could support.”

The new cohort of members will participate in the Partnership’s existing Working Groups and will join new projects and work beginning with the Partnership’s upcoming All Partners Meeting.

The Partnership on AI exists to study and formulate best practices on AI, to advance the public’s understanding of AI, and to provide a platform for open collaboration between all those involved in, and affected by, the development and deployment of AI technologies.

To succeed in this mission, we need deep involvement from diverse voices and viewpoints that represent a wide range of audiences, geographies, and interests.

We welcome questions from organizations interested in learning more about membership. To contact us, please see the forms available here.

WITNESS ANNOUNCES FIRST MOZILLA FELLOW
https://www.witness.org/witness-announces-first-mozilla-fellow/
Mon, 27 Aug 2018 20:35:11 +0000

WITNESS is extremely excited to announce open-source investigator Gabriela Ivens as our first ever Mozilla Fellow!

Mozilla Fellowships provide resources, tools, community, and amplification to those building a more humane digital world. During their tenure, Fellows use their skill sets — in technology, in advocacy, in law — to design products, run campaigns, influence policy and ultimately lay the groundwork for a more open and inclusive internet.

Mozilla Fellows hail from a range of disciplines and geographies: They are policymakers in Kenya, journalists in Brazil, engineers in Germany, privacy activists in the United States, and data scientists in the Netherlands. Fellows work on individual projects, but also collaborate on cross-disciplinary solutions to the internet’s biggest challenges. Fellows are awarded competitive compensation and benefits.

Gabriela will be working with our Tech+Advocacy team on issues around the safe, ethical, and effective use of video in documenting human rights violations.

During the Fellowship, Gabriela will focus on a number of areas, including emerging technologies for human rights documentation and the effects of policy and engineering decisions by technology companies, such as takedowns of content that is, or could be, societally important.

Her work will provide a greater level of understanding of the impact tech companies have on civil society and human rights defenders. Before becoming a Fellow, Gabriela worked at Syrian Archive, a group working on preserving visual documentation of the Syrian conflict, and has been working on open source investigations since 2015. She holds a master’s degree in Human Rights from University College London.

WITNESS joins EIUC to train Young Professionals about Human Rights and Film
https://www.witness.org/witness-joins-euic-to-train-young-professionals-about-human-rights-and-film/
Mon, 20 Aug 2018 17:02:20 +0000

WITNESS is extremely excited to announce that we will be partnering with the Global Campus of Human Rights at the European Inter‑University Centre (EIUC) for the 13th Cinema Human Rights and Advocacy Summer School in Venice from August 27th to September 5th. The intensive 10-day training is aimed at young professionals wishing to broaden their understanding of the connections between human rights, films, digital media and video advocacy.

The program aims to foster participatory and critical thinking on urgent human rights issues, to encourage debate with experts and filmmakers from all over the world during the 75th Venice International Film Festival, and to teach participants how to use films as a tool for social and cultural change.

Our Senior Attorney and Program Manager Kelly Matheson will conduct a few sessions on how to use video for change, advocacy and evidence for human rights.

You can find the entire schedule and more details about this exciting training here.

Cast Your Vote for WITNESS at SXSW 2019!
https://www.witness.org/cast-your-vote-for-witness-at-sxsw-2019/
Mon, 13 Aug 2018 16:15:51 +0000

It’s that time of year when YOU decide which topics you want to hear about at South by Southwest (SXSW) in 2019. The 10-day convening, which brings the world’s top filmmakers, musicians, technologists, and creatives to Austin, Texas, has become an annual event for WITNESS. Whether it’s sharing our latest programmatic work with organizational peers, interacting with cutting-edge technologies, or participating in conversations about media and human rights, we look forward to both learning and sharing at SXSW every March.

This year we’re proposing the panel, “Deepfakes: What Should We Fear, What Can We Do,” led by our Program Director Sam Gregory to discuss questions surrounding deepfakes and synthetic media, how they can be used maliciously, and how we can detect and stop them.

More about the panel:

Deepfakes! As more sophisticated, more personalized, more convincing audio and video manipulation emerges, how do we get beyond the apocalyptic discussion of the “end of trust in images and audio” and instead focus on what we can do about malicious deepfakes and other AI-manipulated synthetic media? Based on WITNESS’ collaborations with technologists, journalists and human rights activists, we’ll explore the state of the art in deepfakes and other ‘synthetic media’, the solutions available to fight their malicious uses, and where this goes next. Given broader challenges to public trust, disinformation, and the evolving information ecosystem globally, how should we plan together to fight the dark side of a faked video and audio future?

Sound interesting? Want to hear from WITNESS in Austin? Then please cast your vote for “Deepfakes: What Should We Fear, What Can We Do” here!

WITNESS featured in Harvard Business Review
https://www.witness.org/witness-featured-in-harvard-business-review/
Thu, 09 Aug 2018 19:44:40 +0000

We are pleased to announce that our Program Director Sam Gregory was recently interviewed by Scott Berinato for the Harvard Business Review about WITNESS and the future of news and synthetic media.

In the article, titled “Business in the Age of Computational Propaganda and Deep Fakes,” Sam spoke about the reality of deepfakes and about how WITNESS is working with companies to inform them of the potential of deepfakes to influence people, not only in the political sphere, and to help them deal with synthetic media.

“Deliberately polluting the environment to erode trust is a common authoritarian tactic. So we have to guard against it… In human rights, that could mean identifying an audience that you want to target hate speech at… In business, I think about phishing scams in which someone fakes a voice you trust. These are some of the threat models we have been laying out as we try to address the dangers of mis- and disinformation broadly,” Sam said during the interview.

You can access the full interview here.

WITNESS talks to The Outline about racially insensitive app
https://www.witness.org/witness-talks-to-the-outline-about-racially-insensitive-app/
Thu, 09 Aug 2018 18:20:19 +0000

Since the start of 2018, there have been a variety of incidents in which the police have been called on people of color for alleged “suspicious activity.” For example, just last month the police were called in Northampton, MA on a black Smith College graduate student eating her lunch who, according to a staff member, “seemed to be out of place.”

Now, it will become even easier for people, particularly racists, to call the police over what they deem “suspicious activity.” Thanks to an app called App Task Force, people can now “help police agencies reduce criminal activity and improve community policing in their jurisdiction.” According to the app’s “about” section, citizens who have no police training can “download the app and use it to report suspicious and potentially criminal activity … in real time.”

While apps like App Task Force vow to let citizens expose criminal behavior, we at WITNESS think this will only lead to more racial profiling and unfair arrests. In an extremely toxic political climate in which white officers mistreat people of color, apps like these aren’t helping create more transparency and accountability for the police; in fact, they’re doing the opposite.

Read what our U.S. Program Manager Jackie Zammuto had to say to The Outline about how the intersection of policing and technology can be problematic here.

WITNESS LEADS CONVENING ON PROACTIVE SOLUTIONS TO MAL-USES OF DEEPFAKES AND OTHER AI-GENERATED SYNTHETIC MEDIA
https://www.witness.org/witness-leads-convening-on-proactive-solutions-to-mal-uses-of-deepfakes-and-other-ai-generated-synthetic-media/
Mon, 02 Jul 2018 21:46:03 +0000

Read the detailed summary of discussions and recommendations on next steps here.

On June 11, 2018, WITNESS, in collaboration with First Draft, a project of the Shorenstein Center on Media, Politics and Public Policy at Harvard Kennedy School, brought together 30 leading independent and company-based technologists, machine learning specialists, academic researchers in synthetic media, human rights researchers, and journalists. Under the Chatham House Rule, the discussion focused on pragmatic and proactive ways to mitigate the threats that widespread use and commercialization of new tools for AI-generated synthetic media, such as deepfakes and facial reenactment, potentially pose to public trust, reliable journalism and trustworthy human rights documentation.

For twenty-five years WITNESS has enabled human rights defenders, and now increasingly anyone, anywhere, to use video and technology to protect and defend human rights. Our experience has shown the value of images in driving more diverse personal storytelling and civic journalism, in driving movements around pervasive human rights violations like police violence, and as critical evidence in war crimes trials. We have also seen the ease with which videos and audio, often crudely edited or even simply recycled and re-contextualized, can perpetuate and renew cycles of violence.

WITNESS’ Tech + Advocacy work has frequently included engaging with key social media and video-sharing platforms to develop innovative policy and product responses to challenges facing high-risk users and high-public-interest content. As the potential threat of more sophisticated, more personalized audio and video manipulation emerges, we see a critical need to bring key actors together before we are in the eye of the storm, to ensure we prepare in a more coordinated way, and to challenge technopocalyptic narratives that in and of themselves damage public trust in video and audio.

The convening goals included:

  • Broaden journalists’, technologists’ and human rights researchers’ understanding of these new technologies, where needed;
  • While recognizing positive potential usages, begin building a common understanding of the threats created by, and potential responses to, mal-uses of AI-generated imagery, video and audio to public discourse and reliable news and human rights documentation, and map the landscape of innovation in this area;
  • Build shared understanding of existing approaches in human rights, journalism and technology to deal with mal-uses of faked, simulated and recycled images, audio and video, and their relationship to other forms of mis-, dis- and mal-information;
  • Based on case studies (real and hypothetical), facilitate discussion of potential pragmatic tactical, normative and technical responses to risk models of fabricated audio and video by companies, independent activists, journalists, academic researchers, open-source technologists and commercial platforms;
  • Identify priorities for continued discussion between stakeholders.

Recommendations emerging from the convening included:

  1. Baseline research and a focused sprint on the optimal ways to track authenticity, integrity, provenance and digital edits of images, audio and video from capture to sharing to ongoing use. Research should focus on a rights-protecting approach that a) maximizes how many people can access these tools, b) minimizes barriers to entry and potential suppression of free speech without compromising the right to privacy and freedom from surveillance, c) minimizes risk to vulnerable creators and custody-holders, and balances these with d) the potential feasibility of integrating these approaches in the broader context of platforms, social media and search engines. This research needs to reflect platform, independent commercial and open-source activist efforts, consider use of blockchain and similar technologies, review precedents (e.g. spam and current anti-disinformation efforts) and identify the pros and cons of different approaches as well as unanticipated risks. WITNESS will lead on supporting this research and sprint.
  2. Detailed threat modelling around synthetic media mal-uses for particular key stakeholders (journalists, human rights defenders, others). Create models based on actors, motivations and attack vectors, resulting in identification of tailored approaches relevant to specific stakeholders or issues/values at stake.
  3. Public and private dialogue on how platforms, social media sites and search engines design a shared approach and better coordinate around mal-uses of synthetic media. Much like the public discussions around data use and content moderation, there is a role for third parties in civil society to serve as a public voice on pros/cons of various approaches, as well as to facilitate public discussion and serve as a neutral space for consensus-building. WITNESS will support this type of outcomes-oriented discussion.
  4. Platforms, search and social media companies should prioritize development of key tools already identified in the OSINT human rights and journalism community as critical: particularly reverse video search. This is because many of the problems of synthetic media relate to existing challenges around verification and trust in visual media.
  5. More shared learning on how to detect synthetic media, bringing together existing practices from manual and automatic forensic analysis with human rights, Open Source Intelligence (OSINT) and journalistic practitioners, potentially via a workshop where they test and learn each other’s methods and work out what to adopt and how to make techniques accessible. WITNESS and First Draft will engage on this.
  6. Prepare for the emergence of synthetic media in real-world situations by working with journalists and human rights defenders to build playbooks for upcoming risk scenarios, so that no one can claim ‘we didn’t see this coming’ and to facilitate more understanding of the technologies at stake. WITNESS and First Draft will collaborate on this.
  7. Include additional stakeholders who were under-represented in the June 11 convening and are critical voices, either in an additional meeting or in upcoming activities:
    • “Global South” voices, as well as marginalized communities in the US and Europe
    • Policy and legal voices at the national and international levels
    • Artists and provocateurs
  8. Develop additional understanding of relevant research questions and lead research to inform other strategies. First Draft will lead on additional research.
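
Recommendation 4 singles out reverse video search as a priority tool. As a rough illustration of why near-duplicate matching is tractable, such systems are commonly built on perceptual hashes: compact fingerprints that stay stable under re-encoding and light edits. The sketch below is a toy, hypothetical difference hash (“dHash”) over a tiny grayscale frame; it is not WITNESS’s or any platform’s implementation, and production systems hash resized frames sampled across a whole video and match them against large indexes.

```python
def dhash(pixels):
    """Difference hash of one grayscale frame (a 2D list of brightness
    values): record, for each pixel, whether it is brighter than its
    right-hand neighbor. Re-encoded or lightly edited copies of the same
    frame tend to keep the same gradient pattern, so their hashes differ
    in only a few bits."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

frame     = [[10, 20, 30], [40, 20, 10]]  # toy 2x3 "frame"
near_dup  = [[11, 21, 29], [41, 19, 9]]   # same frame after slight noise
unrelated = [[90, 10, 80], [5, 70, 5]]

h = dhash(frame)
print(hamming(h, dhash(near_dup)))   # 0 bits differ: flagged as a match
print(hamming(h, dhash(unrelated)))  # 2 bits differ: not a match
```

A real reverse video search would hash many frames per video and look up candidates by small Hamming distance, but the matching principle is the same.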

For blog posts providing further details on next steps, see:

WITNESS FEATURED IN WIRED UK
https://www.witness.org/witness-featured-in-wired-uk/
Thu, 28 Jun 2018 17:29:07 +0000

We are extremely pleased to announce that our Tech+Advocacy Program Manager Dia Kayyali was recently interviewed by Kate O’Flaherty at WIRED UK about YouTube’s takedown of critical war crime evidence in Syria.

In the article, titled “YouTube keeps deleting evidence of Syrian chemical weapon attacks,” Dia, along with our partners at the Syrian Archive, spoke about the importance of these videos and the complexity of the Google algorithm.

“Algorithmic transparency, in general, is a huge issue, but even more for content takedowns. They don’t have any outside auditing, and that’s a basic necessity of transparency. We don’t know the details of the algorithm: we have pushed and pushed, but we still don’t know,” Dia said.

To read the entire article, click here.
