Moderating Social Media Discourse for a Healthy Democracy

Abstract

The prevalence of hate speech and misinformation on the internet, heightened by the COVID-19 pandemic, directly harms the minority groups targeted by vitriol, as well as society at large (Müller & Schwarz, 2020). Moreover, the intersection of the two only exacerbates their harmful effects, fueling intolerance and polarization (Kim & Kesari, 2021). Current platform moderation techniques, as well as Section 230 of the Communications Decency Act, have proven insufficient to address this problem, leaving a lack of transparency from internet service providers, of clear boundaries on user-platform relations, and of adequate tools to handle a rapidly expanding internet. To address this problem space, we advocate the following solutions:

1. Algorithmic governance & transparency: Internet service providers should be more transparent with users about content moderation policies and algorithms, and should clarify users’ basic rights on the platform.

2. Flagging recommendations: We advocate a more effective, efficient, and comprehensive flagging system built on a combined strategy of content- and user-based approaches.

3. Multiplatform collaboration: Fighting harmful online content requires a collaborative effort among policymakers, civil society groups, researchers, and different platforms.

4. Long-term considerations: Building a regular, prolonged tracking system is essential to make anti-misinformation efforts more efficient and effective, especially in complex scenarios.
