Does Social Media Build Community or Cults?

Social media companies push users into the alt-right pipeline to secure engagement and user loyalty, profiting while disregarding the violent aftermath.

Reading Time: 5 minutes

By Iris Lin

While scrolling on my TikTok “For You Page,” a familiar face popped up on my screen. A close friend of mine, whom I hadn’t spoken to in years, was posting extremist views on his public platform. My heart dropped after seeing someone with whom I had shared late-night Fourth of July fireworks and carpooled to school fall into the alt-right pipeline. “Taking the red pill,” “radicalization,” and “the alt-right pipeline”: these terms all refer to young adults, usually men, who are drawn into espousing and acting on extremist beliefs. But why do so many young people fall into this pipeline? After seeing my old friend’s post, along with the rising popularity of social media figures such as Andrew Tate, I decided to test whether online radicalization is encouraged by TikTok.

I created a TikTok account with no link to my email, my phone number, or the device my previous account was on, and scrolled through my “For You Page” to see how long it would take for an alt-right-leaning video to appear. After viewing only four clips, I landed on a video of Tate, an American-British man who had recently appeared on numerous podcasts spreading misogynistic views. Tate is often seen as a “gateway drug” into the world of right-wing radicalization; his content centers on young men living an “alpha male” lifestyle through hateful speech and obsession with the material world. I liked the video and kept scrolling, shaping my algorithm with a mixture of video game clips, Twitch streamers, and podcast videos for about a day.

By the next day, almost every video on my “For You Page” attacked women, a specific race, a body type, or a religion. The comment sections were almost always centered on how a woman dressed, whether she would be an “easy lay,” or whether a teenage-looking girl was innocent enough. Most of the vile videos came in the form of duets, in which an older man duetted a young girl; the commentary fixated on the amount of makeup the girl was wearing and her clothes, but rarely on the point of her content or what she actually said. It wasn’t until about half a week later that more violent videos surfaced, surprisingly centered on self-harm, self-hate, and dissatisfaction with one’s body, life, or relationship status. I had expected to see violence toward others, but instead it seemed that radicalization may stem from self-loathing.

After performing my experiment, I researched past incidents in which young men finally acted on the radicalizing content they had been fed and resorted to violence. Specifically, I chose to dive into an attack that affected me personally: the terrorist attack in Christchurch, New Zealand, which killed 51 Muslim worshippers at two mosques. Right before the shooter entered the mosques, he posted on 8chan, “Well lads, it’s time to stop s***posting and time to make a real life effort post.” Luke Munn, a researcher at Western Sydney University, explained that the post made clear the terrorist’s violence was the result of extremist views formulated online and nurtured especially by the 8chan community, where killing Muslims was the endgame. As we immerse ourselves more and more in the digital world, we find that other apps, no longer centered on discussion boards or forums, are also pushing the agenda of alt-right extremists. But is this result just herd mentality, or are the social media apps we use every day programmed to propagate alt-right motives?

To determine why social media apps are becoming a breeding ground for alt-right radicalization, we need to pinpoint their main goal. Return on investment (ROI) is usually the chief indicator of a company’s success, and large social media companies are no different. Companies track ROI through key performance indicators, and for social media, those indicators are predicated on engagement: likes, comments, shares, and average watch time. To turn a profit, social media apps push content that guarantees engagement by creating a kind of close-knit community that users feel obligated to return to and participate in. Radicalization, both left-wing and right-wing, is an easy way to secure that engagement, so the apps keep serving videos that provoke strong reactions until users land in a niche circle where extremist communities form.
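The engagement-first ranking described above can be sketched in a few lines of code. To be clear, this is an illustrative assumption, not TikTok’s actual algorithm: the weights, field names, and sample videos are all hypothetical, chosen only to show how ranking purely by engagement signals rewards provocative content.

```python
# Hypothetical sketch of engagement-weighted feed ranking.
# All weights and example data are invented for illustration;
# no real platform's algorithm is shown here.

def engagement_score(video):
    """Combine engagement signals into a single ranking score."""
    return (1.0 * video["likes"]
            + 2.0 * video["comments"]          # comments signal strong reactions
            + 3.0 * video["shares"]            # shares spread content furthest
            + 0.5 * video["avg_watch_seconds"])

def rank_feed(videos):
    """Order candidate videos so the most 'engaging' appear first."""
    return sorted(videos, key=engagement_score, reverse=True)

candidates = [
    {"title": "calm tutorial", "likes": 120, "comments": 4,
     "shares": 2, "avg_watch_seconds": 20},
    {"title": "outrage bait", "likes": 90, "comments": 60,
     "shares": 30, "avg_watch_seconds": 45},
]
feed = rank_feed(candidates)
```

Because the score rewards strong reactions regardless of their nature, the “outrage bait” video outranks the tutorial here even though it has fewer likes, which is exactly the dynamic the paragraph above describes: content that provokes reactions gets pushed, whatever its message.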

A natural question is: “Why is alt-right radicalization so much more common than alt-left?” Contrary to popular belief, it’s not. Alt-left radicalization is just as common, but the alt-left tends to be quieter within its community, and statistically, those who fall into the alt-left are usually more educated and harder to manipulate into a toxic and violent mindset. The alt-left operates like a class group chat, where information is shared and regurgitated from other sources, whereas the alt-right operates like a cult. Until recently, the alt-left rarely appealed to younger audiences, since its content demands more academic reading. The alt-right, by contrast, relies on clickbait, buzzwords, derogatory jokes, and absurd views that catch a viewer’s attention, making it far easier to hook a younger audience. Tanya Basu of MIT Technology Review expanded on this idea by referencing a study that found more than 26 percent of users who comment on videos treading a fine line between a joke and an attack on an entire demographic will later comment on videos deeper in the alt-right rabbit hole. Often, these videos perpetuate beliefs affiliated with neo-Nazis, such as the justification of “white supremacy on the basis of eugenics and ‘race science.’”

As the CEOs of these social media apps continue making empty promises to address radicalization, and government regulation stalls amid public pushback, thousands are falling prey to right-wing ideology and hate group numbers are rising. Officials are forced to walk the thin line between free speech and the safety of others when deciding whether to censor hate speech. The surest way to keep young people out of the alt-right is education: digital literacy ingrained at a young age, credible counterviews from creators who do more than yell liberal buzzwords back at alt-right videos, and adjustments to search engine results that surface or bury certain content.

Many proclaim that the government could never stop the rise of the alt-right, and that even if it did, such intervention would be unprecedented and overbearing. However, this isn’t the first time American legislators have had to fight unwanted digital media. In the early 2000s, with the rise of personal computers, pornographic spam became a digital epidemic: opening your computer could trigger a half dozen pop-ups of porn, no matter who the user was. After a surge of complaints and concern over child safety, Congress passed the CAN-SPAM Act of 2003, the first federal law to set national rules against unsolicited pornographic and marketing emails, and the epidemic subsided soon after. The problem with today’s alt-right dilemma is that lawmakers don’t want to restrict social media. Social media “tech gods” pay lobbyists absurd amounts of money to ensure that restrictions hindering profit won’t be passed; in one year alone, Google spent $21.7 million on lobbying. No lawmaker would willingly jeopardize their position, so there is no solution in sight as long as these companies continue to indirectly manipulate regulations. As long as those in control of the media keep ROI as their main concern, alt-right radicalization will remain a problem.