Developing norms and standards on disinformation

Normative frameworks for the information space have developed over many years through collaboration among civil society groups, private sector companies, governments, and other stakeholders. However, norms and standards specific to disinformation and social media issues are still at an embryonic stage: existing initiatives are being revised to address new online threats, for instance in content moderation, corporate governance, the digital agenda, and cybersecurity, while new initiatives dedicated specifically to disinformation and related social media issues are just forming.

This section will examine how the different codes and principles in this space are evolving, how they can link with existing international best practices, and how programs can be designed to connect with these nascent frameworks. Some codes operate at the organizational level, for instance defining how parties and private or public sector entities should behave to discourage the use and promotion of disinformation, computational propaganda, and other harmful forms of content, while encouraging openness, freedom of expression, transparency, and other positive principles related to the integrity of the information space. Others operate at the individual level, such as codes of practice for media monitors, fact-checkers, and researchers. Both organizational and individual efforts will be considered in this section.

One way of understanding these normative frameworks for the information space is as a form of negotiation: technology companies negotiate with other groups (such as governments, advertisers, media, and communications professionals) on shared norms and standards, while non-governmental organizations, media, and civil society provide oversight and, to a certain extent, have powers to enforce these rules. Different stakeholders enter into different forms of agreement with the information technology and communications sectors depending on the issue at hand, the principles involved, the means of oversight and safeguards, and ultimately the consequences of any abrogation of or divergence from the terms. These standards also focus on the different vectors of information disorder: content, sources, and users. Examples include content moderation standards such as the Santa Clara Principles; fact-checking principles from the Poynter Institute's International Fact-Checking Network, which focus on both sources and content; and standards such as the EU Code of Practice on Disinformation, which attempts to address all three vectors: content, by encouraging better moderation; sources, by encouraging efforts to identify them; and users, through media and information literacy standards.

Other actors, such as parties, policymakers, and the public sector, can work to ensure that norms related to online operations are enforced, with varying degrees of success. Ultimately, these normative frameworks depend on the parties' agreement to abide by them, but other forms of oversight and enforcement are available to society. The development of norms and standards should also integrate inclusive, gender-sensitive approaches, reflecting how work to advance gender equality and social inclusion and work to counter disinformation can and should be mutually reinforcing. Many of the frameworks address corporate stakeholders and the technology sector in particular, such as the Santa Clara Principles on Content Moderation, Ranking Digital Rights, the Global Network Initiative, and the European Union's Codes of Practice on Disinformation and Hate Speech, while others engage a broader range of groups, including civil society actors, government, media, and communications sectors. Other frameworks engage with political parties themselves to create codes of online conduct for candidates and campaigns, either through informal agreements or more explicit codes of conduct. Finally, normative frameworks can be used to ensure that actors working in fields related to disinformation, such as journalists and fact-checkers, promote information integrity.

This section will cover these categories of normative interventions, which address content, actors such as platforms, and the targets of disinformation, hate speech, computational propaganda, and other harmful forms of content:

Related multistakeholder norms for cybersecurity, internet freedom, and governance issues

These standards represent norms for the online space that affect disinformation and related content issues but were not specifically or solely designed to address them. They include the Global Network Initiative, the Manila Principles on Intermediary Liability, and the Santa Clara Principles.

Developing codes on disinformation, hate speech, and computational propaganda issues for the private sector

Codes and other normative standards designed specifically to address disinformation and related information integrity issues. These include the EU Codes of Practice on Disinformation and Hate Speech, Ranking Digital Rights, the Global Internet Forum to Counter Terrorism (GIFCT), and the Paris Call for Trust and Security in Cyberspace.

Party commitments to the nonuse of disinformation and computational propaganda and the promotion of information integrity principles

Standards for parties and individual candidates committing them to information integrity principles, including German Party Commitments, Argentina's Ethical Digital Commitment, Brazil's #NãoValeTudo campaign, Nigeria's Abuja Accord, and the Transatlantic Commission on Election Integrity's Pledge for Election Integrity.

Codes of conduct for researchers, fact-checkers, journalists, media monitors, and others

Broader codes of conduct for those working in the information space, such as the Poynter Institute's International Fact-Checking Network Code of Principles, the Pro-Truth Pledge, the Trust Project, the Journalism Trust Initiative, and the Certified Content Coalition.

These frameworks all have elements that affect the information space, particularly around freedom of expression, privacy, and the inherent tension between creating open spaces for online conversation and ensuring inclusion and penalties for hateful or other problematic content. They are also evolving and being adapted to the new challenges of an increasingly online, networked society confronted by disinformation, hate speech, and other harmful content.



Author

Daniel Arnaudo

Publish date

04/05/2021