Social media platforms will be forced to confront a broken information ecosystem —#2021Predictions
In the coming year, social networks will likely need to moderate their platforms more aggressively and take more widespread action against accounts that seed, spread, and amplify damaging content, or that harass other users.
While social media companies have taken some measures against disinformation and other problematic content, their efforts have been too little, too late. The lack of adequate moderation has badly broken our information ecosystem.
The recommendation algorithms that power social networks are very effective at exposing people to content they might not otherwise have found. While these recommendations are often benign, they have lured some people into believing conspiracy theories and joining extremist groups. Recommendation algorithms should have been paired with moderation policies designed to prevent these outcomes and to address disinformation, fake accounts, coordinated amplification, and targeted harassment. While some moderation does occur on these platforms, it has never been scaled to match the number of users and regions the platforms support. As such, society as a whole has been beta testing these systems for over a decade, and suffering because of it.
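To make the coupling concrete, here is a minimal sketch of the idea that recommendation and moderation should work together: candidate items pass a moderation check before they are ever ranked for a user. All names and signals here are hypothetical illustrations, not any platform's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    text: str
    flagged_as_disinformation: bool  # assumed label from an upstream moderation check
    engagement_score: float          # assumed relevance/engagement signal

def recommend(candidates: list[Item], limit: int = 10) -> list[Item]:
    """Rank candidates by engagement, but only after moderation screening."""
    allowed = [item for item in candidates if not item.flagged_as_disinformation]
    return sorted(allowed, key=lambda item: item.engagement_score, reverse=True)[:limit]

if __name__ == "__main__":
    feed = recommend([
        Item("a", "local news report", False, 0.7),
        Item("b", "viral conspiracy post", True, 0.95),  # high engagement, but screened out
    ])
    print([item.item_id for item in feed])  # -> ['a']
```

The point of the sketch is the ordering: the moderation policy acts as a gate on what the ranking step is allowed to amplify, rather than as cleanup after the fact.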
Social network companies may also face trouble on the regulation front. Facebook recently stated that EU data transfer regulations could make operating in the region difficult enough that it might need to pull its services from it. In the US, Section 230 of the Communications Decency Act, the provision in federal law that immunizes platforms from liability for what users post on them, may be revisited in 2021. If Section 230 were modified or revoked, or if the EU continues to add new regulations, social networks may find themselves needing to change how they operate in order to avoid breaking the law.
Many automated moderation mechanisms have been proposed for social media content. However, most of them suffer from relatively high false positive and false negative rates. Content can be published in a variety of formats – text, audio, video, and images – and each format requires a different analysis approach, none of which can be fully automated to grasp the nuances that humans comprehend. As such, although some moderation work may be offloaded to algorithms and machine learning systems, humans will likely still need to be in the loop for a great deal of the decision-making.
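One common way to keep humans in the loop, sketched below under assumed thresholds and a hypothetical classifier, is to act automatically only when the model is very confident and route everything ambiguous to human reviewers, which is where the false-positive/false-negative trade-off bites hardest.

```python
REMOVE_THRESHOLD = 0.95   # assumed: auto-remove only when a violation is very likely
ALLOW_THRESHOLD = 0.05    # assumed: auto-allow only when a violation is very unlikely

def route_post(violation_probability: float) -> str:
    """Decide what to do with a post given a model's estimated violation probability."""
    if violation_probability >= REMOVE_THRESHOLD:
        return "auto_remove"
    if violation_probability <= ALLOW_THRESHOLD:
        return "auto_allow"
    return "human_review"  # everything in the uncertain middle stays with human moderators

if __name__ == "__main__":
    for score in (0.99, 0.02, 0.60):
        print(score, "->", route_post(score))
```

Tightening the thresholds shrinks the automated error rate but widens the band of content that human moderators must handle, which is exactly the scaling problem described above.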