All across Europe, nearly everyone seems to agree that European copyright regulation needs to be updated and brought into the 21st century. However, some of the proposed measures appear to be flashbacks to darker times in our past, or to the methods of authoritarian dictatorships. This is especially true of the upload filters in Article 13 of the European Commission's proposal. Obliging platforms to screen every bit of uploaded content for possible copyright violations requires a censorship system that would endanger the free and open internet, and with it our democracy itself.
The human rights NGO epicenter.works illustrated what this would mean in an activist intervention at PrivacyWeek 2017: the internet party takes place without any of its users, since upload filters deny us free and open access to internet services.
On 21 November 2017, the European Parliament's Committee on Legal Affairs (JURI) will consider the European copyright reform. epicenter.works calls upon all members of the committee to take a strong stance against upload filters and not to endanger our democracy by introducing a censorship system.
The current UNESCO report “World Trends in Freedom of Expression and Media Development” shows a dramatic development: the number of internet shutdowns by governments has tripled over the course of just two years. To date, 61 internet shutdowns have been recorded in 2017.
Restrictions on the freedom of information and communication are characteristic of dictatorships. Now the EU wants to introduce a legal measure which serves the exact same purpose,
warns Thomas Lohninger of epicenter.works.
Content filtering as a black box
Picture: Hanna Prykhodzka (More pictures from the intervention can be found here.)
In Article 13 of the Copyright Directive, the former Commissioner for Digital Economy and Society, Günther Oettinger, proposed that all platforms which hold “large amounts” of user-generated content be required to introduce a content recognition system to detect possible copyright infringements. These systems would screen all images, video, and audio files before making them available to the public. For users, this system would be a black box that decides which content will be published in Europe and which will not. Such a system cannot distinguish between an actual infringement of rights and the perfectly legal use of a work, such as uploading a work the right holder has licensed only for certain uses, or one that falls within the limitations of and exceptions to copyright. For example, the use of works for parody and satire, or quotations for criticism or review, would be blocked by such a system.
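The core of such a filter can be sketched as matching each upload against a database of fingerprints registered by rights holders. The sketch below is a hypothetical, deliberately simplified illustration (real systems such as ContentID use perceptual fingerprints that survive re-encoding, not exact hashes); it shows why the mechanism is context-blind:

```python
import hashlib

# Hypothetical blocklist: fingerprints of works registered by rights holders.
# An exact SHA-256 hash stands in for a perceptual fingerprint here,
# purely to keep the sketch short.
registered_fingerprints = {
    hashlib.sha256(b"copyrighted song data").hexdigest(),
}

def filter_upload(data: bytes) -> bool:
    """Return True if the upload may be published, False if it is blocked."""
    fingerprint = hashlib.sha256(data).hexdigest()
    return fingerprint not in registered_fingerprints

# The filter has no notion of context: a quotation, parody, or licensed
# use containing the registered work is blocked exactly like piracy.
print(filter_upload(b"original user content"))  # True: published
print(filter_upload(b"copyrighted song data"))  # False: blocked
```

The decisive point is visible in the code itself: the only input to the decision is the content's fingerprint, so the legal questions that matter (parody, quotation, licensing) can never reach the filter.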
Google will see everything uploaded on other platforms
The only currently available implementation of such a content recognition system that is even somewhat functional is “ContentID”, the system Google developed for YouTube. Smaller platforms like Wikipedia or Github, as well as e-learning systems used by universities (such as Moodle), would have to use ContentID and forward the entirety of the content submitted by their users to Google.
The largest censorship system in history
According to the proposal, this content recognition and filtering would have to be carried out proactively – meaning that content must be filtered and censored as it is uploaded, before it is available to the public. Such a system would constantly surveil each and every upload. Even the proposed redress mechanism could not possibly repair all the damage done by censorship on this scale. The risk of abuse, if such systems were installed on all platforms, would be colossal: the difference between a liberal democracy and an authoritarian dictatorship would shrink to the flip of a switch in the censorship system's configuration, since the same system could just as easily censor any other kind of content.
Additional costs for European companies, more income for Google
The introduction of such systems would impose substantial costs on platforms. European companies would either have to use Google's ContentID or invest heavily in developing their own content recognition systems. This would cause significant commercial damage not only to European startups and SMEs, but also to well-known platforms for public-domain and user-generated content such as Wikipedia and Github. Even if Google decided to let others use ContentID free of charge, Google would still gain detailed knowledge of the content on its competitors' platforms.
Only once has Wikipedia been required to introduce upload filters – by the government of the People's Republic of China. We refused, and Wikipedia is still blocked in China for this reason. It is appalling that the EU is even considering introducing the same kind of compulsory surveillance and censorship system for all platforms in Europe through the Copyright Directive,
states Claudia Garád, Executive Director of Wikimedia in Austria.
She also foresees substantial difficulties in implementing such a system:
Should Wikipedia have to implement upload filters, the project could not go on. We would have to laboriously review and approve every single contribution and image – an unnecessary obstacle which would greatly complicate the work of our community, or even make it outright impossible.
Take action against upload filters now!
Resistance against upload filters and other questionable aspects of the proposed copyright reform has been growing within civil society for quite some time. Among those other measures is the introduction of an ancillary copyright for press publishers, which has already proven completely unworkable in Germany and Spain. The campaign websites Save the Meme and Change Copyright offer easy ways to get in touch with MEPs and convince them to stop these dangerous developments and stand up for a copyright reform that actually meets the challenges and needs of the internet in the 21st century.
The UNESCO report is available here: World Trends in Freedom of Expression and Media Development: 2017/2018 Global Report, Paris