Internet platforms like Facebook serve as a modern-day public square for billions of people, providing a crucial space to organize and speak their minds. For far too long, users whose content is removed from Facebook have had no way to know what rules they were supposed to comply with, and there has been no real process to fix Facebook’s many censorship mistakes. The ACLU’s own Facebook post about censorship of a public statue was itself inappropriately removed as nudity. People documenting their own experiences as victims or witnesses of discrimination, hate, or police violence have often had their content censored.
Facebook’s publication of its enforcement guidelines and its announcement of a new appeals process for some content removals are steps in the right direction, but more needs to be done. Users need a meaningful, robust right to appeal the removal of any post, and to do so before it is taken down. Facebook also needs to ensure that users have the ability to explain why their content should not be removed. For transparency and accountability, Facebook should include statistics about content takedowns, broken down by category, in its regular transparency report. Finally, if Mark Zuckerberg’s thirty-one mentions of artificial intelligence in his recent Congressional testimony are any indication of how Facebook might use machine technologies for content censorship, then the stakes for user rights are higher than ever. A.I. will not solve these problems; it will likely exacerbate them.
— Nicole Ozer, Technology and Civil Liberties Director, ACLU Foundations of California