Sam Ellefson

Assessing current platforms' attempts to curb misinformation

My first instinct when I was tasked with researching and evaluating two social platforms’ efforts and structures for combating misinformation online was to turn to Twitter and Reddit. I use Twitter every day, scrolling through my feed and searching trending topics to see what various users are saying about them.


Twitter has long been at the fore of many people's minds when they think of hubs for misinformation. It's a vast, global social network that, on the surface, seems fairly unregulated in what is deemed permissible content to post.


I found this page on Twitter’s help center detailing how they grapple with and attempt to mitigate misinformation on their platform. The gist of Twitter’s approach to tackling misinformation can be distilled into two sentences it spotlights at the top of this page: “To help enable free expression and conversations, we only intervene if content breaks our rules, which you can learn about below. Otherwise, we lean on providing you with additional context.”


Below this, there’s a video that details Twitter’s approach to contextualizing misleading or false claims with additional information on the erroneous tweets themselves, rather than deleting the tweets altogether. Twitter says it only deletes tweets if they pose “immediate and severe harm.” The way Twitter gauges the immediacy and severity of said harm is something that is left out of this quirky video.


Twitter's endeavors to inform users of potential misinformation by contextualizing sketchy tweets include a range of approaches. The first one it lists is "labeling content," which also reduces the tweet's visibility. The efficacy of this approach is questionable.


A study from 2022 on the spread and scope of labeled content found that “overall difference in interactions was substantial, with labeled tweets being liked approximately 36% more, retweeted 70% more, quote tweeted 88% more, and the median number of replies labeled tweets generated was 84% higher than that of unlabeled tweets.”


The tech company, on the other hand, says they saw “notable decreases in engagement with Tweets labeled with the new (misleading information labels): -13% in replies, -10% in Retweets and -15% in likes.” Twitter’s efforts to curb misinformation by labeling it and supposedly limiting a tweet’s visibility seem to be at odds with its need to grow and retain an engaged user base.

Twitter will also prompt users when they engage with a misleading tweet and create "Twitter moments" in which it gives a fact-checking rundown of a widespread piece of misinformation. Twitter moments aren't limited to misinformation; they can be about anything trending on Twitter during a given day.


Some of these moments are automated by an algorithm that picks up on the frequency of tweets pertaining to a certain topic, while others that deserve more care are managed by a "curation team."


The company gives a disclaimer that the "curation team isn't responsible for driving revenue, user growth, or managing Twitter's partner relationships" and that Twitter moments "aren't influenced by advertisers, partners, or Twitter's business interests." Twitter's curation team has partnered with major news organizations like the AP and Reuters in the past to further its fact-checking capabilities.

Twitter also says it publishes "pre-bunks" during times when misinformation is rife and pervasive, which "proactively feature informative messages or updates to counter misleading narratives that emerge." Twitter has made these pre-bunks for issues ranging from elections to COVID. Twitter is planning to use its pre-bunking capabilities as the 2022 midterms inch closer, and the tech giant also says it's revamping its Civic Integrity Policy.


Reddit is obviously a vastly different social platform than Twitter. With its fragmented subreddits and user anonymity, it would seem that misinformation is more widespread on Reddit, and that content moderators have a difficult time reining in all the misleading posts.


Each individual community, or subreddit, on the site has its own list of rules that content moderators enforce; these can vary widely and have a substantial impact on the information shared within those spaces. Reddit's site-wide content policy doesn't have any specific points about misinformation, though it does warn Redditors against impersonating others.

In the past, Reddit has banned certain subreddit communities for violating their content policies — but not explicitly for posting and harboring misinformation. A 2017 study looked at Reddit’s propensity to harbor and spread misinformation compared to Twitter and 4chan, the anonymous message board platform.


The study looked at the frequency with which articles from a range of media organizations were shared on the three distinct platforms, specifically within six selected subreddits. While it did not look specifically at how misinformation spreads on Reddit, the study measured the spread and scope of alternative news links on the platform.

Overall, Reddit's approach to grappling with and mitigating misinformation on its platform relies largely on individual communities and the guidelines they set for users. I don't think this is the most effective approach, as Reddit-wide enforcement of its content policies is limited to hate speech and related conduct.


A 2021 article in Gizmodo reported on various subreddits going private in protest of the platform’s lax response to rampant vaccine misinformation percolating on its servers. Communities on Reddit that were heavily spreading misinformation were ultimately banned following the digital protests, but it doesn’t seem this was enough to spur a robust or comprehensive approach to limiting misinformation on Reddit.


Reddit's hands-off approach pales in comparison to Twitter's developed system for labeling and limiting the scope of misleading or false statements. It's intriguing to think about users leading the charge on introducing standards for handling misinformation, as in the case of the Reddit protests, when the tech giants don't take action.
