How Short-Form Video Recommendation Algorithms Can Turn You Into Your Alt-Right Uncle


Imagine this: it's Christmas dinner and you're settling down with your family. The scent of roasted turkey fills the room, the Christmas lights cast a warm glow, and everyone is geared up for an evening of unity and warmth. However, just as predictably as the festive season rolls around, Uncle Joe, your family's self-proclaimed political pundit, steers the chat toward his latest alt-right theory, igniting fiery debates over immigration and conspiracy tales. While these debates might traditionally be confined to annual festive gatherings, a similar pattern seems to be echoing on an expansive, worldwide canvas, and it's an everyday affair.

In our modern digital era, platforms featuring short-form videos, like YouTube's Shorts, TikTok, and Instagram Reels, mirror the role of Uncle Joe, commanding significant sway over society's ideological "banquet." Increasing evidence suggests that the algorithms behind these platforms, especially YouTube's, are steadily pushing a menu rich in alt-right content, interspersed with hints of misogyny, anti-feminism, and transphobia. The alarming distinction here is that this isn't a once-a-year event, but an incessant stream available around the clock. As the digital "Uncle Joe" molds our online diet, it's crucial we reflect on what we're digesting.


Micro-Messaging, Macro Impact: How Short-Form Videos Fuel Extremist Narratives

It's unsurprising to see platforms like TikTok, YouTube Shorts, and Instagram Reels becoming hotbeds for alt-right influencers. Their strength stems from the capacity to condense intricate ideologies into catchy, out-of-context phrases, which are then enhanced with strategic filters, music, and other creative tools. Given these platforms' emphasis on brief and easily consumable content, they inadvertently offer a perfect environment for the spread of extreme views. These views are presented with a polished, user-centric interface, masking and swiftly spreading their underlying, often harmful, messages.

Within this context, the "manosphere" has adeptly harnessed these platforms for propagation. The manosphere is a network of websites, blogs, and forums championing masculinity while often opposing feminism. As a digital realm, it encompasses various communities, from men's rights activists and involuntary celibates (Incels) to groups like Men Going Their Own Way (MGTOW), pick-up artists (PUA), and fathers' rights advocates. Utilizing short-form video platforms, they've managed to extend their reach, resonating with audiences who may otherwise never have encountered their ideologies.


For many active on social media, the name Andrew Tate likely rings a bell. This British-American influencer, known for his contentious views and vast online following, is currently embroiled in a legal battle. Tate faces charges of rape, human trafficking, and the formation of an organised crime group in Romania aimed at exploiting women. After enduring over seven months of house arrest, he's been released, though he still faces movement restrictions. Alongside him, his brother, Tristan, and two associates also stand accused, vehemently denying all allegations.

Previously ejected from the UK's Big Brother in 2016 due to a controversial video, the self-proclaimed "misogynist" now boasts 6.9 million Twitter followers, often using his platform to voice divisive opinions on women. Even though he's been banned from platforms like YouTube, Facebook, and Instagram, his legacy persists, largely due to manosphere users who continue to disseminate his videos online.

Born in Chicago in 1986 and relocating to the UK after his parents' divorce, Tate has built his global online notoriety in part through displays of his opulent lifestyle. Regarding his income streams, Tate has cited significant earnings from the adult entertainment industry, specifically through a webcam business.


A study by The Guardian used a fabricated TikTok account, registered as an 18-year-old user, to test the platform's recommendation system. Initially, the account received a variety of content suggestions. However, after it interacted with male-centric videos, the algorithm increasingly favoured content from Andrew Tate. Without any proactive searching, the account became inundated with Tate's videos, some of which criticised mental health support and ridiculed those wearing masks. Tate's videos have amassed 11.6 billion views on TikTok, and experts have voiced concerns over the platform's assertive recommendation algorithm. In response, TikTok reaffirmed its commitment against misogynistic content and promised to bolster its guidelines and protections.
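The Guardian has not published its audit code, but the general shape of such a sock-puppet experiment can be sketched: log which creators a fresh account is shown, interact with a chosen category of content, then log again and compare. The snippet below is a minimal illustration of that comparison step only; the feeds and creator names are invented for the example, and no real TikTok data or API is involved.

```python
from collections import Counter

def creator_share(feed: list[str], creator: str) -> float:
    """Fraction of logged recommendations attributed to a single creator."""
    return Counter(feed)[creator] / len(feed) if feed else 0.0

# Hypothetical logs from a sock-puppet audit: each entry is the creator behind
# one recommended video, captured before and after the test account interacted
# only with "male-centric" content.
feed_before = ["cooking_tips", "football_clips", "tate_reposts", "gaming", "travel"] * 20
feed_after = ["tate_reposts", "manosphere_clips", "tate_reposts", "gym_motivation", "tate_reposts"] * 20

for label, feed in [("before interactions", feed_before), ("after interactions", feed_after)]:
    share = creator_share(feed, "tate_reposts")
    print(f"{label}: {share:.0%} of recommendations came from the tracked creator")
```

The point of such an audit is not the absolute numbers, which depend entirely on the invented data above, but the before/after shift a passive account experiences after a handful of interactions.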


Manosphere’s Rise on YouTube Shorts: The Algorithmic Undercurrents

Andrew Tate represents merely the visible crest of a deeply unsettling undercurrent.

A report published by the Institute for Strategic Dialogue delves into YouTube's algorithmic tendencies, spotlighting its potential role in steering young Australian men towards extremist content characterised by misogyny and anti-feminism. Using experimental profiles, the investigation gauged the trajectory of content suggestions made by YouTube and its short-form feature, YouTube Shorts, to this demographic.

Over the course of this qualitative research, which assessed algorithm-driven suggestions for 10 trial profiles, a clear pattern emerged: the accounts began receiving video suggestions hostile to women and feminist ideologies. Engaging with these videos by watching and liking them only intensified the trend, leading to more explicit recommendations linked to the 'manosphere' and 'incel' communities.

Interestingly, while YouTube's primary interface tended to keep its suggestions consistent with the profiles' initial interests, Shorts exhibited starkly different behaviour. The feature reacted far more vigorously to user interactions, pushing increasingly radical content within a short span. Notably, on Shorts the content suggestions, often echoing sentiments of right-wing and self-proclaimed 'alt-right' proponents, were strikingly similar across all profiles. The age of the user, whether minor or adult, didn't seem to affect the algorithm's content choices.
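The ISD report describes this convergence qualitatively. One simple way to quantify "strikingly similar across all profiles" would be pairwise overlap between the sets of channels recommended to each test profile, for instance via Jaccard similarity. The sketch below is a hypothetical illustration under that assumption; the profile names and channel sets are invented, not the report's data.

```python
from itertools import combinations

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two recommendation sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical sets of channels surfaced on Shorts to differently seeded test profiles.
profiles = {
    "teen_gaming": {"alpha_mindset", "redpill_clips", "gym_bros", "fortnite_daily"},
    "adult_fitness": {"alpha_mindset", "redpill_clips", "gym_bros", "meal_prep"},
    "teen_music": {"alpha_mindset", "redpill_clips", "gym_bros", "lofi_beats"},
}

# High pairwise scores across profiles with unrelated seed interests would
# indicate the feed is converging on the same content regardless of the user.
for (name_a, set_a), (name_b, set_b) in combinations(profiles.items(), 2):
    print(f"{name_a} vs {name_b}: Jaccard = {jaccard(set_a, set_b):.2f}")
```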


YouTube's Resilient Influence

Navigating these intricate challenges is like assembling a puzzle whose pieces keep shifting. As we ponder the nuanced impact of platforms like TikTok and Instagram, it's evident that their recommendation algorithms tend to paint our feeds with a more personalized palette. YouTube, however, is a different canvas. A recent study by Mozilla notes that YouTube's recommendation algorithm drives an estimated 70% of the content people watch on the platform. This algorithm shapes the information consumed by billions of users, and while YouTube provides controls to customize recommendations, these tools are not very effective.


The study analyzed data from more than 20,000 participants across seven months and evaluated YouTube's recommendation-tuning features: "Dislike", "Not interested", "Remove from history", and "Don't recommend this channel".

Participants were given a browser extension that added a "Stop recommending" button to YouTube videos, triggering one of the algorithm-tuning responses. Even when participants used these controls, the study found they had minimal impact on the recommendations received. On average, rejecting a video led to about 115 subsequent recommendations that closely resembled the rejected content.

Previous research has shown that YouTube's recommendation system tends to reinforce users' existing beliefs and push them towards more extreme content. The platform has also faced criticism for promoting inappropriate or harmful content, despite its guidelines. The study found that even after users gave negative feedback, videos that seemingly violated YouTube's policies continued to be recommended.

The most effective tools for users to influence recommendations were found to be "Remove from history" and "Don't recommend this channel," reducing unwanted recommendations by 29% and 43%, respectively. However, even these tools couldn't completely eliminate content from muted channels.
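Mozilla's summary does not spell out the formula behind these figures, but a number like a 43% reduction is most naturally read as a relative drop in the rate of "bad" (similar-to-rejected) recommendations compared with users who gave no feedback. A minimal sketch of that arithmetic, with invented rates chosen only to reproduce the reported percentages:

```python
def relative_reduction(control_rate: float, treatment_rate: float) -> float:
    """Relative drop in the rate of unwanted recommendations versus a no-feedback control."""
    return 1 - treatment_rate / control_rate

# Hypothetical rates: unwanted recommendations per 100 logged recommendations.
control = 20.0                 # users who gave no feedback
after_remove_history = 14.2    # would correspond to roughly a 29% reduction
after_block_channel = 11.4     # would correspond to roughly a 43% reduction

print(f"Remove from history: {relative_reduction(control, after_remove_history):.0%} reduction")
print(f"Don't recommend this channel: {relative_reduction(control, after_block_channel):.0%} reduction")
```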

The study's authors suggested that YouTube could improve user satisfaction by allowing users to actively train the algorithm using keywords and content types they want to exclude.
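That proposal, letting users name keywords and content types they never want to see, is easy to picture as a client-side filter. The sketch below is purely hypothetical and is not a feature YouTube offers: it drops any candidate recommendation whose title or tags match a user-maintained blocklist.

```python
def filter_recommendations(candidates: list[dict], blocked_keywords: set[str]) -> list[dict]:
    """Drop any candidate whose title or tags contain a user-blocked keyword."""
    def allowed(video: dict) -> bool:
        text = (video["title"] + " " + " ".join(video.get("tags", []))).lower()
        return not any(keyword in text for keyword in blocked_keywords)
    return [video for video in candidates if allowed(video)]

# Hypothetical candidate list and user-specified exclusions.
candidates = [
    {"title": "Why feminism ruined the West", "tags": ["redpill"]},
    {"title": "10-minute pasta recipe", "tags": ["cooking"]},
    {"title": "Alpha male morning routine", "tags": ["sigma", "grindset"]},
]
blocked = {"redpill", "alpha male", "sigma"}

for video in filter_recommendations(candidates, blocked):
    print(video["title"])  # only the pasta recipe survives the filter
```

The real engineering problem is of course harder than string matching, since extremist content rarely labels itself, but the sketch shows how little interface would be needed to hand users this kind of explicit control.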


Conclusion

In conclusion, the spread of extremist views, facilitated by algorithmic curation on platforms like YouTube's Shorts, TikTok, and Instagram Reels, is a defining issue of our modern digital era. The prevalence of this issue, much like the familiar, yet tense narrative that unfolds at a family dinner table, is indicative of the broader discord being fanned in the public sphere.

First and foremost, it is imperative that we, as consumers of digital content, cultivate awareness of and skepticism towards the information we consume online. Just as Uncle Joe commands attention at our festive gatherings, algorithms decide what content is pushed our way, and we must evaluate its credibility and reflect on its influence over our thoughts and perspectives. We have a responsibility to become discerning consumers, actively engaging with content that encourages unity and understanding rather than division and intolerance.

From a platform regulation standpoint, there is a clear need for more effective tools that empower users to have genuine control over the content they are exposed to. YouTube, as one of the world’s most influential platforms, carries a significant burden of responsibility. The recent study by Mozilla illuminates not only the persistence of YouTube’s algorithm in recommending potentially harmful content but also the inadequacy of the platform’s current user controls. Solutions may lie in more transparent and customizable algorithmic controls. One proposal, as suggested by the researchers, is allowing users to actively train the algorithm, specifying keywords and content types they want to avoid. This could be paired with more stringent, transparent, and enforceable guidelines for content, and a public commitment to regularly audit and improve the recommendation system in response to user feedback and independent analysis.

Additionally, governments and regulatory bodies around the world could consider taking a more active role in supervising and regulating content recommendation algorithms. This could be through legislation that mandates transparency, regular auditing of recommendation algorithms by third parties, or the establishment of an independent body that can assess and enforce a standard of ethical practice for these algorithms.

Finally, there is a need for a broader cultural reflection. The popularity of extremist content, irrespective of the algorithmic factors, points to a deeper unrest within society. Addressing the source of the appeal of such content – which could be rooted in various socio-economic issues or a broader political climate that tolerates or even encourages polarization – is an essential part of the solution.

In the end, taming the digital ‘Uncle Joe’ will require a multifaceted approach. It involves not only algorithmic adjustments and regulatory frameworks but also a deeper societal commitment to promoting dialogue, understanding, and empathy in a world that seems increasingly willing to entertain divisive and extremist views. In an era where our ideological 'banquets' are often shaped by unseen algorithms, reasserting our agency over our digital diets is more critical than ever.