How does AI impact press freedom?
The 3rd of May marks World Press Freedom Day, an event usually observed around the world with the publication of various reports and indices on how free journalists actually are to do their work. Most studies of press freedom tend to underline the constraints that media personnel face in carrying out their duties.
The subject of media freedom has been examined through a variety of lenses, and most observers agree on several factors that shape what is commonly accepted as ‘freedom’: the geopolitical situation, the cultural context and personal practices. There is, however, another factor which has edged its way into the subject: Artificial Intelligence.
Julia Haas, of the Office of the OSCE Representative on Freedom of the Media, published a paper on ‘Freedom of the media and artificial intelligence’ in 2020. It looks at how “the use of artificial intelligence (AI) affects freedom of expression and media freedom… [and] focuses on the main concerns when AI is not deployed in a human rights-friendly manner.”
The study makes several points that are particularly relevant right now, as some countries make headlines for censoring journalists over information the authorities deem unfit for publication. It says: “AI can be used as a tool to censor the media and unlawfully surveil citizens and independent journalists. Moreover, in today’s online environment, a few dominant internet intermediaries act as gatekeepers in the curation, distribution and monetization of information, including news content. These intermediaries increasingly deploy AI to govern private speech and public discourse.”
“Some states deploy AI to unlawfully surveil citizens and control public communication in ways inconsistent with international human rights law. Enabling unparalleled possibilities for surveillance, AI can facilitate censorship and means to suppress dissent and independent journalism, both online and offline. Consequently, some states use AI to coerce the press and, ultimately, to tighten digital authoritarianism.
Moreover, private actors, in particular providers of search engines and social media platforms, apply AI to filter content in order to identify and remove or deprioritize “undesired” content, known as content moderation, and to rank and disseminate tailored information, referred to as content curation. Both applications regulate speech with the intention to facilitate online communication, provide user-friendly services, and, crucially, increase commercial profit.”
AI in advertising
The OSCE explains how the AI elements in much of today’s advertising play a role we do not yet fully appreciate. “The use of AI to distribute content based on the predicted preferences of individuals is based on extensive data-driven profiling. To maximize revenue, intermediaries may prioritize content that increases user engagement over providing access to diverse information of public interest or to independent quality journalism. This may undermine users’ ability to access pluralistic information and bias their thoughts and beliefs.”
“AI-powered filtering and ranking of content is enabled by the surveillance of user behaviour at scale. To evaluate and predict the “relevance” of content, AI requires extensive, fine-grained data. These data also facilitate advertising, which is the basis of many internet intermediaries’ business model. Commodifying personal data for targeted advertising—which equals profit—incentivizes extensive data collection and processing, a phenomenon described as “surveillance capitalism”. Offering services “for free,” intermediaries profit from profiling and commercializing the public sphere.”
“To police speech, AI is often applied to identify and remove content considered illegal or undesirable, both by states and intermediaries. The vast amount of available content exceeds the ability for human scrutiny. While AI-based filtering of user-generated content may thus be appealing, AI tools are prone to mistakes. In addition to deploying AI themselves, states mandate private actors to monitor and remove content based on vague definitions within strict timeframes. Such outsourcing of human rights protection to revenue-driven private actors may incentivize over-blocking of legitimate speech and raises additional concerns about the rule of law and discrimination.”
The OSCE recommends that states and the private sector work together to keep human rights considerations at the heart of all AI design and deployment. Transparency and accountability should be part of every stage of the processes in which media use AI tools.
“People have repeatedly turned to technology to resolve societal challenges. Yet, matters that have long been controversial cannot be resolved solely by outsourcing decision-making processes to AI. Beyond that, technologies can serve as tools for tracking, censorship and repression of the media at an unprecedented scale. While many of the above-mentioned concerns are not unique to AI, its use exacerbates existing threats to free speech and media freedom. To address them effectively, it is crucial to consider the sociotechnical context in which AI is deployed, by whom it is used, and for which purposes. While there can be no one-size-fits-all solution, AI’s impact cannot be assessed or addressed in any meaningful way without transparency and accountability,” the OSCE’s study finds.
“Regulatory measures and AI-related policies should be evidence-based and must not have an adverse impact on media freedom. States should refrain from indiscriminately delegating human rights protection to AI. Furthermore, all endeavours need to be integrated in strong data-protection rules. Consenting to intrusive surveillance practices should not be a pre-condition to participation in online public discourse.”
“Transparency is required to know which AI tools are deployed and how automated decisions are made. It is also needed to challenge problematic processes. Those benefiting from AI should be responsible for any adverse consequence of its use. To achieve accountability, strict standards on governance are crucial. Rules should ensure that corporate accountability is tied to companies’ profits and that decision makers can be held responsible. States should consider establishing a tiered AI oversight structure, and explore self- and co-regulation models, along with dispute resolution mechanisms, social media councils or e-courts to rapidly determine violations.”
“To ensure independent scrutiny, national human rights institutions should be empowered to also supervise AI. An important tool is robust human rights impact assessments, which should be conducted periodically throughout the entire AI life cycle and provide publicly available analyses. Moreover, AI tools should be audited regularly and independently, and include careful analysis of whether AI is misused to interfere with the press. Good practices from other fields, including legacy media, can provide lessons to address transparency and accountability.”
“Finally, given the intertwined and transnational nature of AI challenges, it is crucial to join efforts and aim for global solutions. There are various important initiatives, such as those by the Organization for Security and Co-operation in Europe, UNESCO, the Council of Europe or the European Union.
“AI is neither a magical bullet for society’s challenges nor should it take the blame for all challenges to free speech or media freedom. AI ought not to facilitate digital authoritarianism or high-tech repression of the media. If AI is to enable, rather than undermine, freedom of expression, access to pluralistic information and media freedom, it is imperative for all stakeholders to ensure a human rights-based framework for transparent and accountable AI. As AI increasingly affects every aspect of our communication and media consumption, it is long overdue to embed safeguards in its development and application so that media freedom can thrive.”