Meta, the parent company of Instagram and Threads, is facing growing backlash over moderation practices that many users describe as excessively harsh and arbitrary. Social media users, from high-profile consultants to everyday account holders, are reporting account restrictions, deletions, and post removals over what they say are minor or entirely innocent infractions. Frustration has reached a boiling point, particularly on Meta’s platform Threads, where the hashtag “Threads Moderation Failures” is now trending.
A Growing List of Complaints
The situation has garnered attention from social media consultant Matt Navarra and others, including Jorge Caballero, who accuse Meta of making major moderation mistakes. Caballero specifically mentioned how the company’s automated fact-checking system wrongly flagged and throttled posts containing factual information, including critical updates about natural disasters like hurricanes. The common thread among users is that Meta’s system appears to be too quick to remove or downrank posts, often without appropriate justification.
The seemingly out-of-control moderation has left many users feeling like they’re walking on eggshells while using the platforms. One of the more bizarre cases has been dubbed “crackergate,” with posts mentioning innocent phrases like “cracker jacks” being flagged and removed. This moderation behavior has sparked concerns about how Meta’s algorithms are interpreting language, with some users pointing out that the system’s sensitivity seems to be dialed too high.
Meta’s Boss Responds
Meta’s head of Instagram and Threads, Adam Mosseri, has been responding to some of these complaints, admitting that something may be wrong and that he is “looking into it.” Mosseri’s direct engagement on Threads has somewhat appeased a few users, but the overarching sense of dissatisfaction with Meta’s moderation system persists.
Social media moderation has long been a contentious issue. However, the recent flood of complaints—combined with widespread confusion about what content is deemed unacceptable—suggests that Meta’s reliance on automated systems may be backfiring.
Joking Bans and Automated Mistakes
Some of the bans and restrictions seem not only exaggerated but deeply out of touch with normal user behavior. In one case, a woman was briefly locked out of her Instagram account after making a light-hearted joke about “wanting to die” in the oppressive summer heatwave. Instead of interpreting the comment as a joke, Meta’s system flagged the post, leading to an account restriction.
For Navarra, it was a post about Meta itself that caused his Threads account to be downranked. He shared a story about former NFL star Tom Brady falling for a Meta AI hoax, only to receive a notification that his account would be downranked, a penalty that limits the visibility of his posts across the platform.
My Experience: A Wrongful Ban for Being Underage
The moderation problems extend beyond just posts. On Tuesday, I found myself at the center of the issue when Instagram disabled my account, accusing me of being under 13 years old, the minimum age required to have an Instagram account. Despite appealing the decision and providing my state-issued ID, Meta stood by its claim that I was too young to use the platform.
This experience has left me devastated, as every connection and post I’ve made since college has now disappeared. Like many others, I have been using Instagram for years, well before Meta acquired it, and suddenly losing access to my account feels like a significant personal loss. Meta provided a brief window to download my data, but the link they sent me failed to load, leaving me in limbo and with little recourse to restore my account.
Age-Related Restrictions and the Broader Moderation Issue
Meta’s focus on regulating underage accounts isn’t new. In 2021, the company began requiring all users to input their birthdates, a move designed to improve the protection of younger users and comply with regulations. However, this approach seems to have created a wave of wrongful bans for users like me, whose ages could easily have been verified against government-issued ID.
It’s unclear whether these age-related bans are tied to other controversial moderation decisions, but Meta’s current methods appear to be undermining its platforms’ usefulness. Influential gaming deal poster Wario64, for example, has reported repeated issues with Threads flagging his posts as spam. The situation became so frustrating that he announced he would cease posting gaming deals on Threads for the foreseeable future, a move that may affect the platform’s broader engagement, especially during events like Prime Day or live gaming coverage.
A Broken System in Need of Repair?
Meta’s use of automated moderation tools was intended to streamline the process of keeping its platforms safe and free from harmful content. But as the number of complaints grows, it’s becoming increasingly apparent that the system is far from perfect. Many users now fear posting anything that could be remotely interpreted as problematic, creating a chilling effect across both Instagram and Threads.
Adam Mosseri’s acknowledgment of the problems is a step in the right direction, but the pressure is on Meta to make substantial changes. With the platforms’ credibility at risk and users feeling more alienated than ever, only time will tell if the social media giant can fix its moderation failures before more people decide to walk away from Instagram and Threads for good.
Conclusion
Meta’s moderation practices on Instagram and Threads have reached a critical point, with users growing increasingly frustrated over wrongful account bans, post removals, and fact-checking mistakes. While Adam Mosseri has indicated that the company is looking into the issues, many users remain disillusioned, feeling that the platform is becoming less useful due to overzealous moderation. Meta will need to address these concerns quickly if it hopes to maintain trust and user engagement on its platforms.