
What Does Meta's Fact-Check Removal Mean for Your Insta Feed? Inside Community Notes and More

Opinion

Thu, January 23

On January 7th, 2025, Meta published an article titled “More Speech and Fewer Mistakes” in its Newsroom. Written by Joel Kaplan, Meta's Chief Global Affairs Officer, the article details how Meta will soon replace its third-party fact-checking program with a Community Notes model similar to X's. Additionally, Meta will remove restrictions on topics such as immigration, gender identity, and gender while also increasing political content for users who want to see more of it in their feeds.

It will still be a few weeks before these changes are enacted in the U.S., but the decision has already received backlash, with many calling it an appeal to Trump by Mark Zuckerberg, Meta's founder and CEO. Others, however, see it as a victory, and the decision has received praise from Trump, Elon Musk, Fox News, and more.

But let's be honest, the important thing for our readers is how this could impact their Instagram experience (and maybe Facebook for anyone keeping up with relatives). So before we get into the potential pros and cons, let's talk about a new feature you'll soon see!

Image Credit: energepic.com from Pexels


How does Community Notes work? And will Meta's Community Notes be any different from X's?

Currently, there’s no statement from Zuckerberg or Meta that indicates any innovative twists on X’s Community Notes model. So it’s safe to assume that Zuckerberg is going to follow X’s model to a T.

As the name suggests, Community Notes is a collaborative fact-checking process that allows users to add context to other people's posts. Essentially, if a post seems false or misleading, contributors (which CBS News defines as “users who call out content deemed false or misleading by attaching notes providing more context”) can attach a note to it with helpful background information.

Other contributors then rate that note. If enough contributors agree that the note is helpful, it appears under the original post and becomes visible to non-contributors; if they rate it unhelpful, it stays hidden from the rest of X.

Additionally, Community Notes doesn't operate on a majority-rules system. According to X, “Only notes rated helpful by people from diverse perspectives appear on posts." However, the official Community Notes guide describes diverse perspectives as "…agreement between contributors who have sometimes disagreed in their past ratings. This helps prevent one-sided ratings.” So, unless that's simply a vague way of explaining it, you don't need diverse perspectives so much as opposing perspectives for your notes to appear.
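To make that "opposing perspectives" idea concrete, here's a toy sketch in Python. This is not X's actual algorithm (X uses a more sophisticated statistical model, and the rater names and thresholds here are made up for illustration); it just shows the basic rule the guide describes: a note only goes public when raters who have disagreed in the past both find it helpful.

```python
# Toy illustration (NOT X's real scoring model): a note becomes visible only
# when at least two "helpful" raters have a history of disagreeing.

def past_agreement(history_a, history_b):
    """Fraction of commonly rated notes on which two raters agreed."""
    shared = set(history_a) & set(history_b)
    if not shared:
        return 0.0
    agreed = sum(1 for note in shared if history_a[note] == history_b[note])
    return agreed / len(shared)

def note_goes_public(helpful_raters, histories, max_agreement=0.5):
    """Require at least one pair of 'helpful' raters with low past agreement."""
    for i, a in enumerate(helpful_raters):
        for b in helpful_raters[i + 1:]:
            if past_agreement(histories[a], histories[b]) <= max_agreement:
                return True  # raters who usually disagree both said "helpful"
    return False

# Hypothetical rating histories for three raters:
histories = {
    "ana":  {"n1": "helpful", "n2": "helpful", "n3": "not"},
    "ben":  {"n1": "helpful", "n2": "helpful", "n3": "not"},      # agrees with ana
    "cara": {"n1": "not",     "n2": "not",     "n3": "helpful"},  # disagrees with both
}

print(note_goes_public(["ana", "ben"], histories))   # like-minded raters only
print(note_goes_public(["ana", "cara"], histories))  # raters who usually disagree
```

With only like-minded raters (ana and ben), the note stays hidden; add a rater who usually disagrees with them (cara) and it shows. That's the "prevents one-sided ratings" behavior in miniature.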

Image Credit: Mohamed Hassan from Pixabay

So how are contributors decided? Well, as long as someone has no recent rule violations, a verified phone number, and joined the platform at least six months prior (this part is specific to X, so we'll see if Meta keeps this rule or alters it) they can rate notes and contribute to conversations. Oh, and don't forget an alias!

According to X, one will be required so you can keep your identity private. Lastly, just note that Community Notes aren't moderated (unless there are any rule violations). So if someone speaks out of turn to you…Meta will most likely not respond to it.
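The eligibility rules above boil down to a simple checklist, which can be sketched as a few lines of Python. This is an illustration of X's stated criteria, not anything Meta has published; the function name and the 182-day cutoff for "six months" are my assumptions.

```python
# Sketch of X's stated contributor criteria (Meta may keep or alter these):
# account at least six months old, verified phone number, no recent violations.
from datetime import date, timedelta

def can_contribute(joined, phone_verified, recent_violations, today=None):
    """Return True if the account meets all three eligibility checks."""
    today = today or date.today()
    six_months = timedelta(days=182)  # rough "six months"
    return (today - joined >= six_months
            and phone_verified
            and recent_violations == 0)

# A year-old verified account with a clean record qualifies...
print(can_contribute(date(2024, 1, 1), True, 0, today=date(2025, 1, 23)))
# ...but an account created last month does not.
print(can_contribute(date(2024, 12, 1), True, 0, today=date(2025, 1, 23)))
```

All three conditions have to hold at once; failing any single one (a new account, an unverified phone, or a recent violation) is enough to disqualify a would-be contributor.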


Okay, got it. But why is this happening?

According to an article by Variety, Zuckerberg defended his decision to remove fact-checking from Meta on an episode of the “Joe Rogan Experience.” In it, he told Rogan about the two major events that pushed Meta to institute fact-checking: Donald Trump's 2016 election win and Brexit (the U.K.'s departure from the European Union). “I think that those were basically two events where for the first time, we just faced this massive, massive institutional pressure to basically start censoring content on ideological backgrounds,” Zuckerberg said.

Zuckerberg said he tried to act in good faith, but with events such as COVID-19 and reportedly being told to censor information regarding the Covid vaccine, he felt that Meta's fact-checking was becoming something out of, “like, y'know, '1984' one of these books where it really is a slippery slope." Zuckerberg concluded that the programs were destroying trust, especially in the United States, and that there needed to be change. However, change didn't mean improving the system, but simply eliminating it.

Image Credit: Daniel Chrisman from Pixabay

If you didn't listen to or know about the podcast, you've most likely heard different reasons as to why Meta is changing its policies. The most popular theory I've seen is that Zuckerberg is changing Meta's policies to earn Trump's favor.

Whether this would be for political gain or to make up for opposing Trump in the past is unclear, but the theory does hold some weight. In addition to changing policies, Meta announced that its content moderators will be moved from California to Texas to reduce bias. Zuckerberg also added Dana White, CEO of the Ultimate Fighting Championship and a long-time friend and supporter of Trump, to Meta's board of directors.

Others have mentioned that Meta may have changed policies to reflect Zuckerberg's vision for free speech. In the aforementioned Meta article, there's a quote from Zuckerberg's 2019 Georgetown University speech where he said, "Some people believe giving more people a voice is driving division rather than bringing us together.

More people across the spectrum believe that achieving the political outcomes they think matter is more important than every person having a voice. I think that’s dangerous.” Meta's change in policy has been praised by organizations like the Cato Institute and FIRE (both linked), which each published articles deeming the changes victories for free speech (FIRE even said that Meta's decision aligns closely with the organization's recommendations).

To be honest, I think both theories could be true. Yes, Zuckerberg has had an interest in the expansion of free speech for some time now, but the fact that he waited until Trump's victory, with Inauguration Day nearing, to make changes does seem a bit politically motivated.

Especially since Meta was praising its diversity and fact-checking initiatives prior to the election. But hey, maybe we could all be wrong and Zuckerberg really just intended to hang out with Elon Musk more often? I mean, proudly adopting a fellow billionaire's business practices has to be one of the highest forms of flattery in the business world.

Image Credit: cottonbro studio from Pexels

On Hate Speech: Will Meta allow it or are there measures in place against it?

Remember when I said that Meta would be removing restrictions on topics such as immigration, gender identity, and gender? Unfortunately, this is what Meta means according to AP News:

“We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird.’” In other words, it is now permitted to call gay people mentally ill on Facebook, Threads and Instagram. Other slurs and what Meta calls “harmful stereotypes historically linked to intimidation” — such as Blackface and Holocaust denial — are still prohibited.

Image Credit: Lisa Fotios from Pexels

Umm…yikes. Clear encouragement of bullying and harassment aside, allowing non-experts to make allegations of mental illness, especially when based on ONE aspect of someone's identity, is a serious issue in its own right.

Misdiagnosis of conditions such as autism, ADHD, OCD, and more is already rampant on the internet because of non-experts. Thanks to weird internet trends and over-reliance on those online quizzes that “diagnose” you with a specific developmental disorder (insert friendly reminder to see a professional for these matters!), many mental health professionals have to scramble to advise the chronically online not to attempt to do their job!

Data from sources such as The Pew Research Center, American Psychological Association, the CDC, and more show that we're undeniably in the middle of a mental health crisis. We truly do need everyone in the mental health field to help find a solution, but that won't happen if experts need to redirect their attention to stop misinformation from spreading thanks to baseless allegations.

Which, again, is not only unhelpful but also worsens stigmas regarding mental health, immigrants, and the LGBTQ+ community. (I'll probably do an article on this in the future, to be honest, because it's something very important to consider with the departure of fact-checkers.)

And since we're on the topic of negative impacts on mental health, Arturo Béjar, a former engineering director at Meta who specialized in curbing online harassment, has voiced concerns about Meta's new content policies. As reported by AP News, Béjar said that Meta will no longer proactively enforce rules against things like self-harm, bullying, and harassment. Instead, the platform will rely on user reports before it takes action.

However, even if a report were to go through, Béjar says, “Meta knows that by the time a report is submitted and reviewed the content will have done most of its harm. I shudder to think what these changes will mean for our youth, Meta is abdicating their responsibility to safety, and we won't know the impact of these changes because Meta refuses to be transparent about the harms teenagers experience, and they go to extraordinary lengths to dilute or stop legislation that could help."

Image Credit: Total Shape from Pixabay

How this might play out on Instagram (and Facebook) + Final Thoughts

In the best-case scenario, I'd imagine your Instagram feed will pretty much remain the same (though it's likely the app will pressure you into trying the Community Notes feature). Yes, some people may feel the need to be extra toxic now that there are even fewer restrictions, but y'know…Instagram comments were already astoundingly negative before the introduction of Community Notes. The adjustment wouldn't be terribly brutal (at least I hope).

In the very worst-case scenario, Instagram and Facebook are going to (d)evolve into online high school debate teams…just with no rules, no enforcement or encouragement of etiquette, and no evidence. Without moderation or an incentive to acknowledge proven facts or research, it's highly likely some users will angrily reply to anything that mildly inconveniences them or just call someone stupid for trying to address something.

And yes, I am aware that this is exactly what the internet seems like now, but if we're really at a point where we have to debate whether or not we should be civil to one another…we've basically lost all hope of the internet getting better.

Image Credit: Yan Krukau from Pexels

But with all this said, I do think the concept of Community Notes could work if it were an add-on to fact-checking instead of a replacement for it. That way, people can feel heard, and any misinformation or post errors can be corrected before additional false claims arise.

Or better yet, why not combine the two to make an educational feature for the social network? (And before you boo me, just know that you could get a lot of your information from a book or newspaper instead of your phone. For better or worse, social media is becoming a tool to learn, so why not expand on that?)

Similarly to the current Community Notes model, users can ask questions about topics and contributors can provide context. However, what if our contributors were professionals in the topics users are interested in?

Expertise would be verified upon joining by providing proof of experience (a degree, resume, portfolio, or other examples, depending on the field). Whatever users are interested in, there would always be an expert who can give them a few artistic pointers, clarify some confusing coding concepts, help them identify shady schemes, or answer anything else they'd want to know!

As for the fear of bias, that can be resolved by incorporating experts across a variety of backgrounds. Similarly to the controversial DEI system, it'd be a requirement to have an equal (or at least fair) amount of experts across political, economic, cultural, and minority backgrounds. Ideally, this would allow for everyone to have their voices heard while also spotlighting voices that haven't gotten to speak as loudly within certain fields.

Image Credit: RDNE Stock Project from Pexels

And as long as no one throws gasoline and a match on the concept of “think before you speak”, I think this could work great! In the spirit of hearing others out, why not politely contribute ideas for a better Community Notes and/or social media system in the comments? (Who knows? Maybe your ideas will be the topic of another article!)

Kamaria Williams

Writer since Jun, 2024 · 4 published articles

Kamaria Williams (she/her) is an Oakland-born creative writer, journalist, and editor. Aside from the ever-growing list of projects she’s working on, she’s an Editor-In-Chief for Mollusk Literary Magazine, an alumna of The School of the New York Times, and a college freshman. When she’s not editing or working on stories, you can find her lying in bed with her headphones on, lost in whatever she’s listening to.
