Tech Advocacy Archives - WITNESS
https://www.witness.org/tag/tech-advocacy/

Press release: Major Human Rights and Internet Watchdog Organizations Sign On to Demands for #AuditFBIndia As Pressure Builds
https://www.witness.org/press-release-major-human-rights-and-internet-watchdog-organizations-sign-on-to-demands-for-auditfbindia-as-pressure-builds/
Wed, 09 Sep 2020 10:30:47 +0000

9 September 2020

Contact:

Dia Kayyali, dia@witness.org

Heidi Beirich, heidi@globalextremism.org

Global and national groups call on Mark Zuckerberg to hold Facebook India accountable

In early August, the Wall Street Journal published an exposé of Facebook India, with evidence that Ankhi Das, Facebook’s head of Public Policy for India, South and Central Asia, exhibited political bias by suspending the community guidelines when it came to genocidal hate speech. The article has been followed by myriad press reports in the Wall Street Journal, Reuters, TIME Magazine, and elsewhere detailing bias and a failure to address dangerous content at the Facebook India office. This week, a wide range of civil society organizations from around the world, including WITNESS, Free Press, the Global Project Against Hate and Extremism, MediaJustice, the Southern Poverty Law Center, and the Islamic Women’s Council of New Zealand, signed an open letter calling on Mark Zuckerberg to work with civil society to address dangerous content on Facebook, ensure that a thorough audit of Facebook India takes place, and place Ankhi Das on administrative leave while the audit is conducted.

The timeline of Facebook’s complicity in genocide goes back to 2013, when hate speech on the platform fuelled the Muzaffarnagar riots. It has continued unabated. In 2020, content like #Coronajihad has spread as quickly as COVID-19 itself and has led to real-world violence against many. Facebook itself has admitted that the platform was used to incite genocide against Rohingya Muslims in Myanmar. An Indian parliamentary panel on information technology questioned Facebook on September 2 and will do so again, following demands made by ex-civil servants and Facebook India employees. But Facebook shouldn’t wait to be forced to take action. It should publicly state what steps it is taking to address its tragic failures in India.

Here’s what civil society is saying:

Anjum Rahman, Project Lead for Inclusive Aotearoa Collective (New Zealand) and member of the Global Internet Forum to Counter Terrorism Independent Advisory Committee, said:

The activities and actions of Facebook staff in India show the danger of political parties and tech companies colluding to undermine the democratic process. They show the result of not having independent, transparent regulatory systems in place to oversee the activities of companies which have a significant impact on the wellbeing and safety of millions of people. We have already seen evidence of that harm across the world, whether in Myanmar, India, or New Zealand. The international community needs to come together to ensure urgent action in regulating the behaviour and activities of these companies.

Dia Kayyali, Program Manager, tech + advocacy at WITNESS, said:

Facebook has been warned about the offline violence enabled by its platform in places like India. From the United Nations Special Adviser on the Prevention of Genocide to civil society organizations, the alarm has been raised. So why is dangerous Islamophobic content that is directly linked to real-world harm being allowed to persist? Even before the Wall Street Journal article, it seemed to many of us in the global community that it’s about profit in an important market, but Facebook’s business model can’t be reliant on ignoring warning signs of genocide. Facebook must take real action now, not just apologize or even get rid of one or two executives. This is your chance, Facebook. Conduct the most thorough investigation you’ve done yet. Make the results as public as possible. And work with civil society to stem the flow of the bloody, harmful content on your platform.

Heidi Beirich, Ph.D., EVP  at the Global Project Against Hate and Extremism, said:

It’s high time Mark Zuckerberg and Facebook took anti-Muslim hatred seriously and changed how the company’s policies are applied in Asia and across the world. The scandal in the Indian office, where anti-Muslim and other forms of hatred were allowed to stay online due to religious and political bias, is appalling, and the leadership in that office is complicit. Hatred fomented on Facebook has led to violence and the most terrifying crime, genocide, against Muslims and other marginalized populations across the region, most notably in Myanmar. Anti-Muslim content is metastasizing across the platform, as Facebook’s own civil rights audit proved. Facebook must put an end to this now.

Sana Din from Indian American Muslim Council said:

Facebook allowed incendiary Islamophobic content even after they were informed that it was leading to genocidal violence. From Muzaffarnagar to Delhi, Indian Muslims and millions of other caste-oppressed minorities cannot wait for change. Facebook needs to act now. They cannot evade their direct role in supporting genocidal hate speech at Facebook India, and the only remedy to this harm is an audit of caste and religious bias.

Steven Renderos, executive director of MediaJustice said: 

As a global company with over 3 billion followers, Facebook has the unprecedented power of affecting users both on and off their platform. Counter to Mark Zuckerberg’s public aspirations of creating an inclusive platform, Facebook has become the tool of choice around the world to escalate violence around race, caste, and religion. As recent history in Myanmar has taught us, the consequences of not preventing hate speech from going viral on their platform translate into actual violence and genocide for some. This is not merely about a company struggling to address hateful activities at scale, this is a result of people Facebook has entrusted to represent its interest around the world. Nowhere is this more true than with Ankhi Das and Facebook India.

You can read the letter here.

Civil Liberties Committee of the European Parliament Ready To Vote
https://www.witness.org/civil-liberties-committee-of-the-european-parliament-ready-to-vote/
Fri, 15 Mar 2019 12:33:21 +0000

Update: You can still call! The vote has been moved to April 1st.

Next week, the Civil Liberties [LIBE] Committee of the European Parliament is set to vote on the proposed “Regulation on preventing the dissemination of terrorist content online,” which WITNESS and other civil society organizations have been strongly opposing for months. Despite pushback on the text of this proposal in multiple civil society letters, from two other Committees of the Parliament, from the United Nations, and from the European Union Agency for Fundamental Rights and the European Data Protection Supervisor, the LIBE Committee’s report on the regulation is weak, to say the least. Unlike the reports from the Internal Market [IMCO] Committee and the Culture and Education [CULT] Committee, the LIBE report leaves the overly broad definitions of terrorist content untouched and keeps one-hour removals in place. Bottom line: it doesn’t address the fundamental problems with this proposal.

Fortunately, our friends at La Quadrature du Net have created a website that makes it easy for you to reach out to the Members of the European Parliament who will be voting on this report. As the website notes:

On 21 March 2019 the European Parliament’s “Civil Liberties” committee (LIBE, 60 European Members of Parliament (MEP)) will cast the first vote on this text. As the European Election will happen right afterwards, it is likely to be our last opportunity to obtain the rejection of this text.

Their website has clear talking points and tips for calling MEPs. Check it out and call today! This isn’t the final vote, but it is important, and we’ll keep you updated.

 

WITNESS to attend content moderation conference at EU Parliament
https://www.witness.org/witness-to-attend-content-moderation-conference-eu-parliament-conference/
Wed, 30 Jan 2019 17:53:06 +0000

Like many governing bodies, the European Parliament is looking closely at how tech companies and platforms develop the rules and policies that govern content moderation: what content might violate their terms of service, what content should be removed, which users should be blocked, and so on.

Content moderation is a major focus area of our Tech Advocacy program. These policies often adversely affect human rights content, as our Program Manager for Tech Advocacy, Dia Kayyali, has written.

On Tuesday, February 5, 2019, Dia will speak at the “Content Moderation & Removal at Scale” conference being held at the European Parliament in Brussels. The conference will explore how Internet companies develop and implement internal rules and policies in the area of content moderation. What challenges do they currently face in moderating or removing illegal and controversial content, including hate speech, terrorist content, disinformation, and copyright-infringing material? And how could or should future European regulations affect these practices?

Dia will be sharing a response to the panel “Illegal content: terrorist content and hate speech.” Dia recently spearheaded an effort by WITNESS to bring together 26 human rights defenders, journalists, archivists, digital rights organizations, and alternative media to tell members of the European Parliament that a proposed regulation to erase extremist content online will erase human rights too. Read the open letter here.

The conference is open to the public. If you are in Brussels, you can register here. It will be recorded and live-streamed, and you can follow the conversation on social media with #COMOatScale.

Photo credit: © European Union 2014 – European Parliament (Attribution-NonCommercial-NoDerivs Creative Commons license). 

WITNESS and partners push back against EU regulation that threatens online free expression
https://www.witness.org/witness-and-partners-push-back-against-eu-regulation-that-threatens-online-free-expression/
Mon, 28 Jan 2019 17:54:11 +0000

WITNESS has partnered with peers around the world to issue a letter to Rapporteur Daniel Dalton and the rest of the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs to voice opposition to the proposed Regulation on Dissemination of Terrorist Content Online.

The proposal is a serious threat to online free expression in Europe and freedom of expression globally. It is likely to inspire dangerous copycat laws and encourages increased use of opaque machine-learning algorithms to remove content—including content created by human rights defenders, alternative media, and journalists.

Leaving it up to algorithms to detect “extremist content” will create innumerable false positives and damage human rights content that is critical to ensuring accountability for perpetrators; content for which activists and journalists often risk their lives.

The ideas and concerns expressed in this letter are based on the real-world experiences of WITNESS, our partners, and the 25 other signatories. We are honored to bring together the diverse voices on this letter in defense of online freedom of expression and, specifically, the protection of human rights content.

Read more about this consortium, our opposition to the Regulation on Dissemination of Terrorist Content Online, and the full letter to the European Parliament on our blog.

Photo: © European Union 2017 – European Parliament (Attribution-NonCommercial-NoDerivatives CreativeCommons licenses)

Program Manager Dia Kayyali’s advice for Mark Zuckerberg featured in The Guardian
https://www.witness.org/program-manager-dia-kayyalis-advice-for-mark-zuckerberg-featured-in-the-guardian/
Mon, 14 Jan 2019 21:27:57 +0000

Every January since 2009, Mark Zuckerberg, CEO of Facebook, has publicly shared his goals for the year. Over the past decade, as Facebook has grown in influence and notoriety, his “personal challenges” have mirrored the weight and responsibility of the tech giant. A far cry from earlier declarations like promising to dress more like an adult, Zuckerberg’s resolutions have become far more consequential, not just for himself and his company but for all of us. In 2018, in the wake of security issues, misinformation, election scandals, and more, Zuckerberg pledged “to focus on fixing these important issues.” Many consider 2018 the first year he failed to accomplish his personal challenge.

However, ahead of this year’s formal declaration of his commitments, The Guardian asked technology experts, policymakers, and activists two questions:

  • What do you predict Mark Zuckerberg’s 2019 personal challenge will be?
  • What do you think Mark Zuckerberg’s 2019 personal challenge should be?

WITNESS’ Tech Advocacy Program Manager, Dia Kayyali, was one of the experts asked to predict and advise. Here’s what they had to say:

Will be: Some other, similarly broad, challenge that relates to making Facebook a force for good in the world.

Should be: Take personal responsibility for turning Facebook around as a company. That means publicly committing to creating an ethical and principled company that respects civil society, and ensuring that at every level Facebook makes decisions based on human rights instead of market forces. It means personally committing to a Facebook that doesn’t accidentally make decisions that aid violent regimes, white supremacists and other bad actors. Above all, it means simply being honest about Facebook’s largely detrimental role in global society. That would be the biggest challenge of all.

Shortly after The Guardian ran this piece, Zuckerberg shared his 2019 personal challenge. Following another terrible year for Facebook, Zuckerberg pledged “to host a series of public discussions about the future of technology in society — the opportunities, the challenges, the hopes, and the anxieties.”

Unfortunately, Dia’s prediction was pretty spot on.

Dia leads WITNESS’ Tech Advocacy program which engages technology companies and supports digital policies that help human rights advocates safely, effectively, and ethically use technology for good. The program includes direct, sustained advocacy to those in leadership positions at companies to ensure that anyone, anywhere can use the power of technology to protect and defend human rights.

WITNESS joins international call to Google to end Project Dragonfly
https://www.witness.org/witness-joins-international-call-to-google-to-end-project-dragonfly/
Tue, 11 Dec 2018 15:37:15 +0000

Today, WITNESS joins over 60 international human rights organizations and 10 leading figures in the digital and human rights fields in calling on Google to respect human rights in China. WITNESS added our voice to a letter led by Human Rights Watch and Amnesty International, which calls on Google to “[drop] Project Dragonfly and any plans to launch a censored search app in China, and to re-affirm the company’s 2010 commitment that it won’t provide censored search services in the country.”

WITNESS stands by the United Nations Guiding Principles on Business and Human Rights. As we said recently in a submission to the United Nations, “companies must make a commitment to adhere to international human rights standards, including freedom of expression, even when it affects their financial bottom line or requires them to affirmatively defend attacks on rights by States.” That holds even when it means they cannot enter a new market. This is especially true for international technology platforms like Google, which have an enormous impact on freedom of expression, privacy, and other human rights. Entering a market cannot be an excuse for participating in the violation of fundamental rights.

Today’s letter follows an August 28 letter from 14 organizations, including WITNESS, which called on Google to “[disclose] what steps, if any, Google is taking to safeguard against human rights violations linked to Project Dragonfly and its other Chinese mobile app offerings” and to “guarantee protections for whistle-blowers and other employees speaking out where they see the company is failing its commitments to human rights.” The letter outlined concrete concerns with the project and how it would aid surveillance and censorship. It also built on two open letters from Google employees calling on the company to drop Project Dragonfly.

Google’s October 26th response was lackluster. The company notes that it hasn’t committed to building a censored search engine, but it also doesn’t explain how the project could possibly comply with Google’s previous public statements about upholding human rights and freedom of expression.

Read the letter in full below:

OPEN LETTER: RESPONSE TO GOOGLE on PROJECT DRAGONFLY, China and Human Rights

To: Sundar Pichai, Chief Executive Officer, Google Inc

cc: Ben Gomes, Vice President of Search; Kent Walker, Senior Vice President of Global Affairs; Scott Beaumont, Vice President, Greater China & Korea

11 December 2018

Dear Mr Pichai,

We are writing to ask you to ensure that Google drops Project Dragonfly and any plans to launch a censored search app in China, and to re-affirm the company’s 2010 commitment that it won’t provide censored search services in the country.

We are disappointed that Google in its letter of 26 October[1] failed to address the serious concerns of human rights groups over Project Dragonfly. Instead of addressing the substantive issues set out in the August letter,[2] Google’s response – along with further details that have since emerged about Project Dragonfly – only heightens our fear that the company may knowingly compromise its commitments to human rights and freedom of expression, in exchange for access to the Chinese search market.

We stand with current and former Google employees speaking out over recent ethical scandals at the company, including Project Dragonfly. We wholeheartedly support the message from hundreds of Google employees asking Google to drop Dragonfly in their open letter of 27 November, and commend their bravery in speaking out publicly. We echo their statement that their “opposition to Dragonfly is not about China: we object to technologies that aid the powerful in oppressing the vulnerable, wherever they may be.” [3]

New details leaked to the media strongly suggest that if Google launches such a product it would facilitate repressive state censorship, surveillance, and other violations affecting nearly a billion people in China. Media reports state that Google has built a prototype that censors “blacklisted” search terms including “human rights”, “student protest” and “Nobel Prize”, including in journalistic content, and links users’ search queries to personal phone numbers.[4] The app would also force users to sign in to use the service, track and store location information and search histories, and provide “unilateral access” to such data to an unnamed Chinese joint venture company, in line with China’s data localization law – allowing the government virtually unfettered access to this information.[5]

Facilitating Chinese authorities’ access to personal data, as described in media reports, would be particularly reckless. If such features were launched, there is a real risk that Google would directly assist the Chinese government in arresting or imprisoning people simply for expressing their views online, making the company complicit in human rights violations. This risk was identified by Google’s own security and privacy review team, according to former and current Google employees. Despite attempts to minimize internal scrutiny, a team tasked with assessing Dragonfly concluded that Google “would be expected to function in China as part of the ruling Communist Party’s authoritarian system of policing and surveillance,” according to a media report.[6]

Actively aiding China’s censorship and surveillance regime is likely to set a terrible precedent for human rights and press freedoms worldwide. A recent Freedom House report warned that the Chinese government is actively promoting its model of pervasive digital censorship and surveillance around the world.[7] Many governments look to China’s example, and a major industry leader’s acquiescence to such demands will likely cause many other regimes to follow China’s lead, provoking a race to the bottom in standards. It would also undermine efforts by Google and other companies to resist government surveillance requests in order to protect users’ privacy and security,[8] emboldening state intelligence and security agencies to demand greater access to user data.

Google’s letter makes several specific points that are directly contradicted by other sources. The letter states that it is “not close” to launching a search product in China, and that before doing so the company would consult with key stakeholders. However, as reported by the media, comments made in July by Ben Gomes, Google’s Head of Search, suggested the product could be “six to nine months [to launch]” and stressed the importance of having a product ready to be “brought off the shelf and quickly deployed” so that “we don’t miss that window if it ever comes.”[9]

The letter also states that Google worked on Dragonfly simply to “explore” the possibility of re-entering the Chinese search market, and that it does not know whether it “would or could” launch such a product. Yet media reports based on an internal Google memo suggest that the project was in a “pretty advanced state” and that the company had invested extensive resources to its development.[10]

Google’s decision to design and build Dragonfly in the first place is troubling. Google’s own AI Principles commit the company not to “design or deploy” (emphasis added) technologies whose purpose contravenes human rights. Given the company’s history in China and the assessment of its own security team, Google is well aware of the human rights implications of providing such an application. Moreover, Google’s letter fails to answer many questions about what steps, if any, the company is taking to safeguard human rights, including with respect to its current Chinese mobile app offerings, consistent with its commitments.

We urge Google to heed concerns from its own employees and from organizations and individuals across the political spectrum by abandoning Project Dragonfly and reaffirming its commitment not to provide censored search services in China. We also note that the letter makes no reference to whistle-blowers, and thus we urgently repeat our call to the company that it must publicly commit to protect the rights of whistle-blowers and other workers voicing rights concerns.

We welcome that Google has confirmed the company “takes seriously” its responsibility to respect human rights. However, the company has so far failed to explain how it reconciles that responsibility with the company’s decision to design a product purpose-built to undermine the rights to freedom of expression and privacy.

Signed, the following organizations:

Access Now

ActiveWatch – Media Monitoring Agency (MMA)

Adil Soz – International Foundation for Protection of Freedom of Speech

Americans for Democracy & Human Rights in Bahrain (ADHRB)

Amnesty International

Article 19

Articulo 12 – Son Tus Datos

Association for Progressive Communications

Asociacion para una Ciudadania Participativa

Bolo Bhi

Briar Project

Bytes for All (B4A)

Cartoonists Rights Network, International (CRNI)

Center for Democracy & Technology

Center for Media Freedom and Responsibility (CMFR)

Center for Independent Journalism (CIJ)

Child Rights International Network (CRIN)

Committee to Protect Journalists (CPJ)

Electronic Frontier Foundation (EFF)

Foro de Periodismo Argentino (FOPEA)

Freedom of the Press Foundation

Freedom Forum

Fundación Datos Protegidos (Chile)

Fundacion Internet Bolivia

Globe International Center (GIC)

Hong Kong Journalists Association

Human Rights in China (HRIC)

Human Rights First

Human Rights Watch

Independent Chinese PEN Center (ICPC)

Independent Journalism Center (IJC)

Index on Censorship

Initiative for Freedom of Expression – Turkey

Interfaith Center on Corporate Responsibility (ICCR)

International Campaign for Tibet

International Service for Human Rights (ISHR)

International Tibet Network Secretariat

Internet Sans Frontières

Latin American Observatory of Regulation, Media and Convergence – OBSERVACOM

Media Rights Agenda (MRA)

Mediacentar Sarajevo

NetBlocks

Network of Chinese Human Rights Defenders (CHRD)

New America’s Open Technology Institute

Norwegian PEN

OpenMedia

Pacific Island News Association

Palestinian Center for Development and Media Freedoms (MADA)

PEN International

PEN America

Privacy International

Reporters Without Borders (RSF)

Software Freedom Law Center, India (SFLC.in)

South East Europe Media Organisation (SEEMO)

Southeast Asian Press Alliance (SEAPA)

Students for a Free Tibet

Syrian Center for Media and Freedom of Expression (SCM)

Tibet Action Institute

Việt Tân

WITNESS

World Uyghur Congress

Signed in individual capacity (affiliations for identification purposes only):

Chinmayi Arun

Assistant Professor, National Law University Delhi

Arturo J. Carrillo

Clinical Professor of Law, The George Washington University Law School

Richard Danbury

Associate Professor, Journalism, De Montfort University Leicester

Ronald Deibert

Professor of Political Science and Director of the Citizen Lab, University of Toronto

Molly K. Land

Professor of Law and Human Rights, University of Connecticut School of Law

Rebecca MacKinnon

Director, Ranking Digital Rights

Deirdre K. Mulligan

Associate Professor, School of Information and Faculty Director, Berkeley Center for Law and Technology, University of California, Berkeley

Paloma Muñoz Quick

Director, Investor Alliance for Human Rights (IAHR)

Edward Snowden

President, Freedom of the Press Foundation

Lokman Tsui

Assistant Professor, School of Journalism and Communication, The Chinese University of Hong Kong

——

[1] Letter from Kent Walker, Senior Vice President for Global Affairs at Google, responding to concerns of multiple human rights organizations and individuals, 26 October 2018, https://www.amnesty.org/en/documents/ASA17/9552/2018/en/

[2] Letter to Sundar Pichai from multiple human rights organizations and individuals, 28 August 2018, https://www.documentcloud.org/documents/4792329-Google-Dragonfly-Open-Letter.html

[3] Google employees, ‘We are Google employees. Google must drop Dragonfly’, 27 September 2018, https://medium.com/@googlersagainstdragonfly/we-are-google-employees-google-must-drop-dragonfly-4c8a30c5e5eb

[4] Ryan Gallagher, ‘Google China Prototype Links Searches to Phone Numbers’, The Intercept, 14 September 2018, https://theintercept.com/2018/09/14/google-china-prototype-links-searches-to-phone-numbers/; Jack Poulson, Letter to Senate Commerce Committee, 24 September 2018, https://int.nyt.com/data/documenthelper/328-jack-poulson-dragonfly/87933ffa89dfa78d9007/optimized/full.pdf

[5] Ryan Gallagher and Lee Fang, ‘Google Suppresses Memo Revealing Plans To Closely Track Search Users In China’, The Intercept, 21 September 2018, https://theintercept.com/2018/09/21/google-suppresses-memo-revealing-plans-to-closely-track-search-users-in-china/

[6] Ryan Gallagher, ‘Google Shut Out Privacy and Security Teams from Secret China Project’, The Intercept, 29 November 2018, https://theintercept.com/2018/11/29/google-china-censored-search/

[7] Freedom House, ‘Freedom on the Net 2018: The Rise of Digital Authoritarianism’, October 2018, https://freedomhouse.org/report/freedom-net/freedom-net-2018/rise-digital-authoritarianism

[8] Reform Government Surveillance Coalition

[9] Ryan Gallagher, ‘Leaked Transcript Of Private Meeting Contradicts Google’s Official Story On China’, The Intercept, 9 October 2018, https://theintercept.com/2018/10/09/google-china-censored-search-engine/

[10] Ryan Gallagher and Lee Fang, ‘Google Suppresses Memo Revealing Plans to Closely Track Search Users in China’, The Intercept, 21 September 2018, https://theintercept.com/2018/09/21/google-suppresses-memo-revealing-plans-to-closely-track-search-users-in-china/

 

WITNESS joins the Partnership on AI
https://www.witness.org/witness-joins-the-partnership-on-ai/
Fri, 16 Nov 2018 23:07:44 +0000

Note from our Program Director Sam Gregory:

At a time when the role of artificial intelligence (AI) in creating media, managing what we see, and moderating content is becoming increasingly important, WITNESS is glad to be joining the Partnership on AI. We look forward to engaging with others in the Partnership on AI (PAI) to address critical challenges around key focus areas of our Tech Advocacy work, including misinformation and disinformation, content moderation, privacy, facial recognition, and deepfakes/synthetic media. There is a critical opportunity now to ensure that AI is used in a rights-protecting and rights-enhancing way, that marginalized voices are part of the process of development and implementation, and that ethical considerations about when AI is used are front and center. WITNESS will be co-chairing the new Working Group on Social and Societal Influence, which is beginning with a focus on AI and media.

From the PAI website:

We are excited to announce that we have added 10 new organizations to the growing Partnership on AI community. The latest cohort of new members represents a diverse range of sectors, including media and telecommunications businesses, as well as civil rights organizations, academia, and research institutes.

These new members will bring valuable new perspectives. For example, the addition of media organizations will be crucial at a time when AI-enabled techniques in synthetic news and imagery may pose challenges to what people see and believe, but also may help to authenticate and verify information.

PAI is also committed to ensuring geographic diversity in exploring AI’s hard questions, and as a result, the latest group of new members includes organizations from Australia, Canada, Italy, South Korea, United Kingdom, and the United States, allowing us to bring together important viewpoints from around the world.

The following organizations join the Partnership on AI in November 2018:

Autonomy, Agency and Assurance Innovation (3A) Institute
American Psychological Association
British Broadcasting Corporation (BBC)
DataKind
The New York Times
OPTIC Network
PolicyLink
Samsung Electronics
The Vision and Image Processing Lab at University of Waterloo
WITNESS

Partnership on AI Executive Director Terah Lyons said: “We are proud to welcome a diverse new group of organizations and perspectives to the Partnership on AI, and I look forward to seeing the impact of their contributions. Technology is a series of decisions made by humans, and by involving more viewpoints and perspectives in the AI debate we will be able to improve the quality of those decisions.”

Matthew Postgate, Chief Product and Technology Officer at the BBC, said: "I am delighted that the BBC has joined the Partnership on AI. The use of machine learning and data-enabled services offers incredible opportunities for the BBC and our audience, but also presents serious challenges for society. We will only realise the benefits and solve the challenges by coming together with other media and technology organisations in the interests of citizens. Partnership on AI and its member base provide the platform to do just that, and I am committed to ensuring the BBC plays an active part."

Nick Rockwell, Chief Technology Officer at The New York Times, said: "Our mission at The New York Times is to help people better understand the world, so it is imperative that we understand and participate in the ways technology is changing our lives. At The Times, we already use artificial intelligence in many ways to deepen our readers' engagement with our journalism, always in accordance with ethical guidelines and our commitment to our readers' privacy, so we are both deeply excited and deeply concerned about the power of artificial intelligence to impact society. We are excited to join the Partnership on AI to continue to deepen our understanding, and to help shape the future of this technology for good."

Jake Porway, Founder and Executive Director at DataKind, said: “We couldn’t be more aligned to the Partnership on AI’s cause as our mission at DataKind is virtually synonymous — to create a world in which data science and AI are used ethically and capably in the service of humanity. There’s huge potential to reach this goal together, and we’re particularly excited to play the role of connector between the many technology companies in the group committed to making positive social change and the needs on the ground that they could support.”

The new cohort of members will participate in the Partnership’s existing Working Groups and will join new projects and work beginning with the Partnership’s upcoming All Partners Meeting.

The Partnership on AI exists to study and formulate best practices on AI, to advance the public’s understanding of AI, and to provide a platform for open collaboration between all those involved in, and affected by, the development and deployment of AI technologies.

To succeed in this mission, we need deep involvement from diverse voices and viewpoints that represent a wide range of audiences, geographies, and interests.

We welcome questions from organizations interested in learning more about membership. To contact us, please see the forms available here.

WITNESS Featured in NBC News https://www.witness.org/witness-featured-in-nbc-news/ Mon, 05 Nov 2018 22:06:05 +0000

Fake news shadowed the Brazilian election last week as disinformation quickly spread across the popular Facebook-owned messaging service WhatsApp to help far-right front-runner Jair Bolsonaro achieve victory.

Bolsonaro had been getting an illegal helping hand from a group of businessmen who were bankrolling a campaign to bombard WhatsApp users with fake news about the opposition candidate.

Researchers have found that fake news stories and rumors spread quickly via person-to-person and group messages on the app, using its features in culturally specific ways or taking advantage of third-party workarounds to add extra layers of utility — and creating new avenues of potential abuse in the process.

Our Senior Program Manager Priscila Neri recently sat down with NBC News to talk about how WhatsApp became linked to mob violence and fake news. Read the article here.

In Conversation With National Endowment for Democracy: How Will Deepfakes Transform Disinformation? https://www.witness.org/in-conversation-with-national-endowment-for-democracy-how-will-deepfakes-transform-disinformation/ Mon, 01 Oct 2018 15:41:32 +0000

People have started to panic about the increasing possibility of manipulating images, video, and audio, often popularly described as "deepfakes". For the past decade, Hollywood studios have had the capacity to morph faces—from Brad Pitt in "The Curious Case of Benjamin Button" to Princess Leia in "Rogue One: A Star Wars Story"—and companies and consumers have had tools such as Photoshop to digitally alter images and video in subtler ways.

Disinformation, the intentional use of false or misleading information for political purposes, is increasingly recognized as a threat to democracy worldwide. Many observers argue that this challenge has been exacerbated by social media and a declining environment for independent news outlets. Now, new advances in technology—including but not limited to “deepfakes” and other forms of synthetic media—threaten to supercharge the disinformation crisis.

WITNESS Program Director Sam Gregory, along with four other leading experts on deepfakes, sat down with the National Endowment for Democracy to talk about these threats and the role they play in the disinformation landscape.

“The most serious ramification of deepfakes and other forms of synthetic media is that they further damage people’s trust in our shared information sphere and contribute to the move of our default response from trust to mistrust,” Sam told NED.

To read the entire interview, click here.

For more on our work on deepfakes, click here.


In conversation with VICE: Why is it so Hard to Care about Human Rights? https://www.witness.org/in-conversation-with-vice-why-is-it-so-hard-to-care-about-human-rights/ Thu, 13 Sep 2018 18:10:55 +0000

Ask any humanitarian volunteer you've walked past on a sidewalk: it is incredibly difficult to get people to commit themselves to a cause or relief effort in another part of the world.

Our Program Director Sam Gregory was recently interviewed by VICE News about why it is so hard to care about humanitarian causes.

Click here to see what Sam and other human rights leaders had to say about what it takes to care about justice in the world.
