Tech and Advocacy Archives - WITNESS
https://www.witness.org/category/tech-advocacy/

Press release: Major Human Rights and Internet Watchdog Organizations Sign On to Demands for #AuditFBIndia As Pressure Builds
https://www.witness.org/press-release-major-human-rights-and-internet-watchdog-organizations-sign-on-to-demands-for-auditfbindia-as-pressure-builds/
Wed, 09 Sep 2020 10:30:47 +0000

9 September 2020

Contact:

Dia Kayyali, dia@witness.org

Heidi Beirich, heidi@globalextremism.org

Global and national groups call on Mark Zuckerberg to hold Facebook India accountable

In early August, the Wall Street Journal published an exposé of Facebook India, presenting evidence that Ankhi Das, Facebook's head of Public Policy for India, South and Central Asia, exhibited political bias by suspending the community guidelines when it came to genocidal hate speech. The article has been followed by myriad press reports in the Wall Street Journal, Reuters, TIME Magazine, and more, detailing bias and failure to address dangerous content at the Facebook India office. This week, a wide range of civil society organizations from around the world, including WITNESS, Free Press, the Global Project Against Hate and Extremism, MediaJustice, the Southern Poverty Law Center, and the Islamic Women's Council of New Zealand, signed an open letter calling on Mark Zuckerberg to work with civil society to address dangerous content on Facebook, ensure that a thorough audit of Facebook India takes place, and place Ankhi Das on administrative leave while the audit is being conducted.

The timeline of Facebook's complicity in genocide goes back to 2013, when hate speech on the platform fuelled the Muzaffarnagar riots. It has continued unabated. In 2020, content like #Coronajihad has spread as quickly as COVID itself and has led to real-world violence against many. Facebook itself admitted that its platform was used to incite genocide against Rohingya Muslims in Myanmar. An Indian parliamentary panel on information technology questioned Facebook on September 2 and will do so again, following demands made by ex-civil servants and Facebook India employees. But Facebook shouldn't wait to be forced to take action. It should publicly state what steps it is taking to address its tragic failures in India.

Here’s what civil society is saying:

Anjum Raman, Project Lead for Inclusive Aotearoa Collective (New Zealand) and member of the Global Internet Forum to Counter Terrorism Independent Advisory Committee, said:

The activities and actions of Facebook staff in India show the danger of political parties and tech companies colluding to undermine the democratic process.  It shows the result of not having independent, transparent regulatory systems in place to oversee the activities of companies which have significant impact on the wellbeing and safety of millions of people.  We have already seen evidence of that harm across the world, whether in Myanmar, India, or New Zealand.  The international community needs to come together to ensure urgent action in regulating the behaviour and activities of these companies.

Dia Kayyali, Program Manager, tech + advocacy at WITNESS, said:

Facebook has been warned about the offline violence enabled by its platform in places like India. From the United Nations Special Adviser on the Prevention of Genocide to civil society organizations, the alarm has been raised. So why is dangerous Islamophobic content that is directly linked to real-world harm being allowed to persist? Even before the Wall Street Journal article, it seemed to many of us in the global community that it's about profit in an important market, but Facebook's business model can't be reliant on ignoring warning signs of genocide. Facebook must take real action now, not just apologize or even get rid of one or two executives. This is your chance, Facebook. Conduct the most thorough investigation you've done yet. Make the results as public as possible. And work with civil society to stem the flow of the bloody, harmful content on your platform.

Heidi Beirich, Ph.D., EVP at the Global Project Against Hate and Extremism, said:

It’s high time Mark Zuckerberg and Facebook take anti-Muslim hatred seriously and change how its policies are applied in Asia and across the world. The scandal in the Indian office, where anti-Muslim and other forms of hatred were allowed to stay online due to religious and political bias, is appalling and the leadership in that office complicit. Hatred fomented on Facebook has led to violence and the most terrifying crime, genocide, against Muslims and other marginalized populations across the region, most notably in Myanmar. Anti-Muslim content is metastasizing across the platform as Facebook’s own civil rights audit proved. Facebook must put an end to this now.

Sana Din from the Indian American Muslim Council said:

Facebook allowed incendiary Islamophobic content even after they were informed that it was leading to genocidal violence. From Muzaffarnagar to Delhi, Indian Muslims and millions of other caste-oppressed minorities cannot wait for change. Facebook needs to act now. They cannot evade their direct role in supporting genocidal hate speech at Facebook India, and the only remedy to this harm is an audit of caste and religious bias.

Steven Renderos, executive director of MediaJustice said: 

As a global company with over 3 billion followers, Facebook has the unprecedented power of affecting users both on and off their platform. Counter to Mark Zuckerberg’s public aspirations of creating an inclusive platform, Facebook has become the tool of choice around the world to escalate violence around race, caste, and religion. As recent history in Myanmar has taught us, the consequences of not preventing hate speech from going viral on their platform translate into actual violence and genocide for some. This is not merely about a company struggling to address hateful activities at scale, this is a result of people Facebook has entrusted to represent its interest around the world. Nowhere is this more true than with Ankhi Das and Facebook India.

You can read the letter here.

WITNESS joins the Partnership on AI
https://www.witness.org/witness-joins-the-partnership-on-ai/
Fri, 16 Nov 2018 23:07:44 +0000

Note from our Program Director Sam Gregory:

At a time when the role of artificial intelligence (AI) in the processes of creating media, managing what we see, and moderating content is becoming increasingly important, WITNESS is glad to be joining the Partnership on AI. We look forward to engaging with others in the Partnership on AI (PAI) to address critical challenges around key focus areas of our Tech Advocacy work, including misinformation and disinformation, content moderation, privacy, facial recognition, and deepfakes/synthetic media. There's a critical opportunity now to ensure that AI is used in a rights-protecting and rights-enhancing way, that marginalized voices are part of the process of development and implementation, and that ethical considerations about when AI is used are front and center. WITNESS will be co-chairing the new Working Group on Social and Societal Influence, which is beginning with a focus on AI and media.

From the PAI website:

We are excited to announce that we have added 10 new organizations to the growing Partnership on AI community. The latest cohort of new members represents a diverse range of sectors, including media and telecommunications businesses, as well as civil rights organizations, academia, and research institutes.

These new members will bring valuable new perspectives. For example, the addition of media organizations will be crucial at a time when AI-enabled techniques in synthetic news and imagery may pose challenges to what people see and believe, but also may help to authenticate and verify information.

PAI is also committed to ensuring geographic diversity in exploring AI’s hard questions, and as a result, the latest group of new members includes organizations from Australia, Canada, Italy, South Korea, United Kingdom, and the United States, allowing us to bring together important viewpoints from around the world.

The following organizations join the Partnership on AI in November 2018:

Autonomy, Agency and Assurance Innovation (3A) Institute
American Psychological Association
British Broadcasting Corporation (BBC)
DataKind
The New York Times
OPTIC Network
PolicyLink
Samsung Electronics
The Vision and Image Processing Lab at University of Waterloo
WITNESS

Partnership on AI Executive Director Terah Lyons said: "We are proud to welcome a diverse new group of organizations and perspectives to the Partnership on AI, and I look forward to seeing the impact of their contributions. Technology is a series of decisions made by humans, and by involving more viewpoints and perspectives in the AI debate we will be able to improve the quality of those decisions."

Matthew Postgate, Chief Product and Technology Officer at the BBC, said: “I am delighted that the BBC has joined the Partnership on AI. The use of machine learning and data enabled services offer incredible opportunities for the BBC and our audience, but also present serious challenges for society. We will only realise the benefits and solve the challenges by coming together with other media and technology organisations in the interests of citizens. Partnership on AI and its member base provide the platform to do just that, and I am committed to ensuring the BBC plays an active part.”

Nick Rockwell, Chief Technology Officer at the New York Times, said: "Our mission at The New York Times is to help people better understand the world, so it is imperative that we understand and participate in the ways technology is changing our lives. At The Times, we already use artificial intelligence in many ways to deepen our readers' engagement with our journalism, always in accordance with ethical guidelines and our commitment to our readers' privacy, so we are both deeply excited and deeply concerned about the power of artificial intelligence to impact society. We are excited to join the Partnership on AI to continue to deepen our understanding, and to help shape the future of this technology for good."

Jake Porway, Founder and Executive Director at DataKind, said: “We couldn’t be more aligned to the Partnership on AI’s cause as our mission at DataKind is virtually synonymous — to create a world in which data science and AI are used ethically and capably in the service of humanity. There’s huge potential to reach this goal together, and we’re particularly excited to play the role of connector between the many technology companies in the group committed to making positive social change and the needs on the ground that they could support.”

The new cohort of members will participate in the Partnership’s existing Working Groups and will join new projects and work beginning with the Partnership’s upcoming All Partners Meeting.

The Partnership on AI exists to study and formulate best practices on AI, to advance the public’s understanding of AI, and to provide a platform for open collaboration between all those involved in, and affected by, the development and deployment of AI technologies.

To succeed in this mission, we need deep involvement from diverse voices and viewpoints that represent a wide range of audiences, geographies, and interests.

We welcome questions from organizations interested in learning more about membership. To contact us, please see the forms available here.

WITNESS Featured in NBC News
https://www.witness.org/witness-featured-in-nbc-news/
Mon, 05 Nov 2018 22:06:05 +0000

Fake news shadowed the Brazilian election last week as disinformation quickly spread across the popular Facebook-owned messaging service WhatsApp to help far-right front-runner Jair Bolsonaro achieve victory.

Bolsonaro had been getting an illegal helping hand from a group of businessmen who were bankrolling a campaign to bombard WhatsApp users with fake news about the opposition candidate.

Researchers have found that fake news stories and rumors spread quickly via person-to-person and group messages on the app, using its features in culturally specific ways or taking advantage of third-party workarounds to add extra layers of utility — and creating new avenues of potential abuse in the process.

Our Senior Program Manager Priscila Neri recently sat down with NBC News to talk about how WhatsApp became linked to mob violence and fake news. Read the article here.

In Conversation With National Endowment for Democracy: How Will Deepfakes Transform Disinformation?
https://www.witness.org/in-conversation-with-national-endowment-for-democracy-how-will-deepfakes-transform-disinformation/
Mon, 01 Oct 2018 15:41:32 +0000

People have started to panic about the increasing possibility of manipulating images, video, and audio, often popularly described as "deepfakes". In the past decade Hollywood studios have had the capacity to morph faces—from Brad Pitt in "The Curious Case of Benjamin Button" to Princess Leia in "Rogue One: A Star Wars Story"—and companies and consumers have had tools such as Photoshop to digitally alter images and video in subtler ways.

Disinformation, the intentional use of false or misleading information for political purposes, is increasingly recognized as a threat to democracy worldwide. Many observers argue that this challenge has been exacerbated by social media and a declining environment for independent news outlets. Now, new advances in technology—including but not limited to “deepfakes” and other forms of synthetic media—threaten to supercharge the disinformation crisis.

WITNESS Program Director Sam Gregory, along with four other leading experts on deepfakes, sat down with the National Endowment for Democracy to talk about these threats and the role they play in the disinformation landscape.

“The most serious ramification of deepfakes and other forms of synthetic media is that they further damage people’s trust in our shared information sphere and contribute to the move of our default response from trust to mistrust,” Sam told NED.

To read the entire interview, click here.

For more on our work on deepfakes, click here.
In conversation with VICE: Why is it so Hard to Care about Human Rights?
https://www.witness.org/in-conversation-with-vice-why-is-it-so-hard-to-care-about-human-rights/
Thu, 13 Sep 2018 18:10:55 +0000

Ask any humanitarian volunteer you've walked past on a sidewalk — it's an incredibly difficult job to get people to commit themselves to a cause or relief effort in another part of the world.

Our Program Director Sam Gregory was recently interviewed by VICE News about why it is so hard to care about humanitarian causes.

Click here to see what Sam and other human rights leaders had to say about what it takes to care for justice in the world.

WITNESS joins Human Rights Organizations from around the World to Oppose Egypt Media Laws
https://www.witness.org/witness-joins-human-rights-organizations-from-around-the-world-to-oppose-egypt-media-laws/
Thu, 06 Sep 2018 19:18:34 +0000

On 18 August 2018, President Sisi ratified the Anti-Cyber and Information Technology Crimes Law (Cybercrime Law). The Egyptian parliament had already approved the law on 5 July, granting the government new powers to restrict digital rights and interfere with activists' freedoms online. Only last month, the parliament also passed another dangerous law, the Media Regulation Law, which would treat anyone with a social media account of more than 5,000 followers as a member of the media, placing them under government regulation and supervision.

We at WITNESS think that this is a violation of basic human and digital rights and strongly oppose it. We, along with 25 other human rights and digital rights organizations around the world, signed a letter today to oppose this ridiculous law, because filming for human rights—and posting it online—must not be criminalized.

More here.

WITNESS AND 13 ORGANIZATIONS JOIN HANDS TO TELL GOOGLE TO CANCEL CHINA CENSORSHIP PLAN
https://www.witness.org/witness-and-18-organizations-join-hands-to-tell-google-to-cancel-china-censorship-plan/
Tue, 28 Aug 2018 15:25:26 +0000

Today, WITNESS and 13 other leading human rights organizations join hands to give Google a message.
This is what we have to say:

Open letter to Google on reported plans to launch a censored search engine in China

To: Sundar Pichai, Chief Executive Officer, Google Inc
cc: Ben Gomes, Vice President of Search; Kent Walker, Senior Vice President of Global Affairs

Dear Mr Pichai,

Like many of Google’s own employees, we are extremely concerned by reports that Google is developing a new censored search engine app for the Chinese market. The project, codenamed “Dragonfly”, would represent an alarming capitulation by Google on human rights. The Chinese government extensively violates the rights to freedom of expression and privacy; by accommodating the Chinese authorities’ repression of dissent, Google would be actively participating in those violations for millions of internet users in China.

We support the brave efforts of Google employees who have alerted the public to the existence of Dragonfly, and voiced their concerns about the project and Google’s transparency and oversight processes.

In contrast, company leadership has failed to respond publicly to concerns over Project Dragonfly, stating that it does not comment on “speculation about future plans”. Executives have also refused to answer basic questions about how the company will safeguard the rights of users in China as it seeks to expand its business in the country.

Since Google publicly exited the search market in China in 2010, citing restrictions to freedom of expression online, the Chinese government has strengthened its controls over the internet and intensified its crackdown on freedom of expression. We are therefore calling on Google to:

  • Reaffirm the company’s 2010 commitment not to provide censored search engine services in China;
  • Disclose its position on censorship in China and what steps, if any, Google is taking to safeguard against human rights violations linked to Project Dragonfly and its other Chinese mobile app offerings;
  • Guarantee protections for whistle-blowers and other employees speaking out where they see the company is failing its commitments to human rights.

Our concerns about Dragonfly are set out in detail below.

Freedom of expression and privacy in China and Google’s human rights commitments

It is difficult to see how Google would currently be able to relaunch a search engine service in China in a way that would be compatible with the company’s human rights responsibilities under international standards, or its own commitments. Were it to do so, in other words, there is a high risk that the company would be directly contributing to, or complicit in, human rights violations.

The Chinese government runs one of the world’s most repressive internet censorship and surveillance regimes. Human rights defenders and journalists are routinely arrested and imprisoned solely for expressing their views online. Under the Cybersecurity Law,[1] internet companies operating in China are obliged to censor users’ content in a way that runs counter to international obligations to safeguard the rights of access to information, freedom of expression and privacy. Thousands of websites and social media services in the country remain blocked, and many phrases deemed to be politically sensitive are censored.[2] Chinese law also requires companies to store Chinese users’ data within the country and facilitate surveillance by abusive security agencies.

According to confidential Google documents obtained by The Intercept, the new search app being developed under Project Dragonfly would comply with China’s draconian rules by automatically identifying and filtering websites blocked in China, and “blacklisting sensitive queries”. Offering services through mobile phone apps, including Google’s existing Chinese apps, raises additional concerns because apps enable access to extraordinarily sensitive data. Given the Cybersecurity Law’s data localization and other requirements, it is likely that the company would be enlisted in surveillance abuses and their users’ data would be much more vulnerable to government access.

Google has a responsibility to respect human rights that exists independently of a state's ability or willingness to fulfil its own human rights obligations.[3] The company's own Code of Conduct promises to advance users' rights to privacy and freedom of expression globally. In Google's AI Principles, published in June, the company pledged not to build "technologies whose purpose contravenes widely accepted principles of international law and human rights". The company also commits, through the Global Network Initiative, to conduct human rights due diligence when entering markets or developing new services. Project Dragonfly raises significant, unanswered questions about whether Google is meeting these commitments.

Transparency and human rights due diligence

Google’s refusal to respond substantively to concerns over its reported plans for a Chinese search service falls short of the company’s commitment to accountability and transparency.[4]

In 2010, the human rights community welcomed Google’s announcement that it had “decided we are no longer willing to continue censoring our results on Google.cn”, citing cyber-attacks against the Gmail accounts of Chinese human rights activists and attempts by the Chinese government to “further limit free speech on the web”.

If Google’s position has indeed changed, then this must be stated publicly, together with a clear explanation of how Google considers it can square such a decision with its responsibilities under international human rights standards and its own corporate values. Without these clarifications, it is difficult not to conclude that Google is now willing to compromise its principles to gain access to the Chinese market.

There also appears to be a broader lack of transparency around due diligence processes at Google. In order to “know and show” that they respect human rights, companies are required under international standards to take steps to identify, prevent and mitigate against adverse impacts linked to their products – and communicate these efforts to key stakeholders and the public.[5] The letter from Google employees published on 16 August 2018 demonstrates that some employees do not feel Google’s processes for implementing its AI Principles and ethical commitments are sufficiently meaningful and transparent.[6]

Protection of whistle-blowers

Google has stated that it cannot respond to questions about Project Dragonfly because reports about the project are based on “leaks”.[7] However, the fact that the information has been publicly disclosed by employees does not lessen its relevance and rights impact.

In relation both to Project Dragonfly and to Google’s involvement in the US government’s drone programme, Project Maven, whistle-blowers have been crucial in bringing ethical concerns over Google’s operations to public attention. The protection of whistle-blowers who disclose information that is clearly in the public interest is grounded in the rights to freedom of expression and access to information.[8] The OECD Guidelines for Multinational Enterprises recommend that companies put in place “safeguards to protect bona fide whistle-blowing activities”.[9]

We are calling on Google to publicly commit to protect whistle-blowers in the company and to take immediate steps to address the concerns employees have raised about Project Dragonfly.

As it stands, Google risks becoming complicit in the Chinese government’s repression of freedom of speech and other human rights in China. Google should heed the concerns raised by human rights groups and its own employees and refrain from offering censored search services in China.

Signed, the following organizations:
Access Now
Amnesty International
Article 19
Center for Democracy and Technology
Committee to Protect Journalists
Electronic Frontier Foundation
Human Rights in China
Human Rights Watch
Independent Chinese PEN Centre
International Service for Human Rights (ISHR)
PEN International
Reporters Without Borders (RSF)
WITNESS

Signed in individual capacity (affiliations for identification purposes only):

Ronald Deibert
Professor of Political Science and Director of the Citizen Lab
University of Toronto

Rebecca MacKinnon
Director, Ranking Digital Rights

Xiao Qiang
Research Scientist
Founder and Director of the Counter-Power Lab
School of Information, University of California at Berkeley

Lokman Tsui
Assistant Professor at the School of Journalism and Communication
The Chinese University of Hong Kong

[1] See Cybersecurity Law of the People’s Republic of China (2016), unofficial translation, https://www.chinalawtranslate.com/bilingual-2016-cybersecurity-law/?lang=en and Human Rights Watch, “China: Abusive Cybersecurity Law Set to be Passed,” November 6, 2016, https://www.hrw.org/news/2016/11/06/china-abusive-cybersecurity-law-set-be-passed.

[2] See GreatFire.org, Online Censorship In China, https://en.greatfire.org/analyzer.

[3] UN Guiding Principles on Business and Human Rights,  https://www.ohchr.org/Documents/Publications/GuidingPrinciplesBusinessHR_EN.pdf.

[4] For example, the Global Network Initiative Principles on Freedom of Expression and Privacy, https://globalnetworkinitiative.org/gni-principles/.

[5] UN Guiding Principles on Business and Human Rights.

[6] Kate Conger and Daisuke Wakabayashi, “Google Employees Protest Secret Work on Censored Search Engine for China,” New York Times, August 16, 2018, https://www.nytimes.com/2018/08/16/technology/google-employees-protest-search-censored-china.html.

[7] Amnesty International meeting with Google, August 2018.

[8] UN Special Rapporteur on freedom of expression, Report to the General Assembly on the Protection of Sources and Whistleblowers, September 2015, https://www.ohchr.org/en/issues/freedomopinion/pages/protectionofsources.aspx.

[9] OECD Guidelines for multinational enterprises, para 13, http://www.oecd.org/corporate/mne/

WITNESS ANNOUNCES FIRST MOZILLA FELLOW
https://www.witness.org/witness-announces-first-mozilla-fellow/
Mon, 27 Aug 2018 20:35:11 +0000

WITNESS is extremely excited to announce open-source investigator Gabriela Ivens as our first-ever Mozilla Fellow!

Mozilla Fellowships provide resources, tools, community, and amplification to those building a more humane digital world. During their tenure, Fellows use their skill sets — in technology, in advocacy, in law — to design products, run campaigns, influence policy and ultimately lay the groundwork for a more open and inclusive internet.

Mozilla Fellows hail from a range of disciplines and geographies: They are policymakers in Kenya, journalists in Brazil, engineers in Germany, privacy activists in the United States, and data scientists in the Netherlands. Fellows work on individual projects, but also collaborate on cross-disciplinary solutions to the internet’s biggest challenges. Fellows are awarded competitive compensation and benefits.

Gabriela will join our Tech + Advocacy team, working on issues around the safe, ethical, and effective use of video in documenting human rights violations.

During the Fellowship, Gabriela will focus on a number of areas, including emerging technologies for human rights documentation and the effects of policy and engineering decisions by technology companies, such as takedowns of content that is, or could be, societally important.

Her work will provide a greater level of understanding of the impact tech companies have on civil society and human rights defenders. Before becoming a Fellow, Gabriela worked at Syrian Archive, a group working on preserving visual documentation of the Syrian conflict, and has been working on open-source investigations since 2015. She holds a master's degree in Human Rights from University College London.

WITNESS joins EIUC to train Young Professionals about Human Rights and Film
https://www.witness.org/witness-joins-euic-to-train-young-professionals-about-human-rights-and-film/
Mon, 20 Aug 2018 17:02:20 +0000

WITNESS is extremely excited to announce that we will be partnering with the Global Campus of Human Rights at the European Inter-University Centre for the 13th Cinema Human Rights and Advocacy Summer School in Venice from August 27th to September 5th. The intensive 10-day training is aimed at young professionals wishing to broaden their understanding of the connections between human rights, films, digital media, and video advocacy.

The program aims to foster participatory and critical thinking on urgent human rights issues, with participants debating experts and filmmakers from all over the world during the 75th Venice International Film Festival and learning how to use films as a tool for social and cultural change.

Our Senior Attorney and Program Manager Kelly Matheson will conduct a few sessions on how to use video for change, advocacy and evidence for human rights.

You can find the entire schedule and more details about this exciting training here.

Cast Your Vote for WITNESS at SXSW 2019!
https://www.witness.org/cast-your-vote-for-witness-at-sxsw-2019/
Mon, 13 Aug 2018 16:15:51 +0000

It's that time of year when YOU decide what topics you want to hear about at South by Southwest (SXSW) in 2019. The 10-day convening, which brings the world's top filmmakers, musicians, technologists, and creatives to Austin, Texas, has become an annual event for WITNESS. Whether it's sharing our latest programmatic work with organizational peers, interacting with cutting-edge technologies, or participating in conversations about media and human rights, we look forward to both learning and sharing at SXSW every March.

This year we’re proposing the panel, “Deepfakes: What Should We Fear, What Can We Do,” led by our Program Director Sam Gregory to discuss questions surrounding deepfakes and synthetic media, how they can be used maliciously, and how we can detect and stop them.

More about the panel:

Deepfakes! As more sophisticated, more personalized, more convincing audio and video manipulation emerges, how do we get beyond the apocalyptic discussion of the "end of trust in images and audio" and instead focus on what we can do about malicious deepfakes and other AI-manipulated synthetic media? Based on WITNESS' collaborations with technologists, journalists, and human rights activists, we'll explore the state-of-the-art usage of deepfakes and other 'synthetic media', the solutions available to fight these malicious uses, and where this goes next. Linked to broader trends in challenges to public trust, disinformation, and the evolving information ecosystem globally, how should we plan together to fight the dark side of a faked video and audio future?

Sound interesting? Want to hear from WITNESS in Austin? Then please cast your vote for "Deepfakes: What Should We Fear, What Can We Do" here!
