Twitter says its algorithm amplifies right wing political content | Latest News India - Hindustan Times


New Delhi
Oct 23, 2021 04:28 AM IST

This comes a week after a whistle-blower singled out Facebook’s engagement-based algorithm as a crucial source of hate speech and fake news.

Twitter’s algorithm disproportionately amplifies right-wing political content and news sources, the company has found in an analysis covering millions of its users across seven countries, and its researchers are unsure why this is happening. India was not among the seven countries studied.


The analysis, carried out between April 1 and August 15, 2020, covered 5% of the company’s active users and could be one of the largest such studies by any major social media company of how its products influence what people read and interact with online, and thus their societal impact.


It comes a week after a whistle-blower singled out Facebook’s engagement-based algorithm as a crucial source of hate speech and fake news.

Twitter’s new study, released by the company on Thursday, found that its algorithm amplifies right-wing political narratives.

“Our results reveal a remarkably consistent trend: In 6 out of 7 countries studied, the mainstream political right enjoys higher algorithmic amplification than the mainstream political left. Consistent with this overall trend, our second set of findings studying the US media landscape revealed that algorithmic amplification favours right-leaning news sources,” said the study, involving academics from the University of Cambridge, UC Berkeley and University College London.

In a series of tweets detailing the findings, Rumman Chowdhury, Twitter’s director of software engineering, added that “...establishing *why* these observed patterns occur is a significantly more difficult question”.

“Twitter is a sociotechnical system -- our algos are responsive to what’s happening. What’s next is a root cause analysis — is this unintended model bias? Or is this a function of what & how people tweet, and things that are happening in the world? Or both?” the Twitter staffer added in another tweet.

Ferenc Huszár, senior lecturer in Machine Learning at the University of Cambridge and one of the authors of the study, said there could be a number of reasons. “The patterns we observe can be a result of several contributing factors, and there are probably too many explanations to enumerate,” he said in an email to HT.

“Differences may arise, for example, from people with different political interests simply using Twitter differently: some communities might use the retweet, like or reply functions more, or attach a slightly different relevance to each of these actions,” he added.

Experts have recently called on companies such as Twitter and Facebook to share access with academic researchers to get to the bottom of harms their technologies may be causing. These concerns have stemmed from a series of controversies that have involved almost all major social media companies – most prominently, Facebook.

Huszár identified the potential in this area as one of the most significant takeaways from Twitter’s new study. “Companies such as Twitter can use the same tools they typically use to improve products and drive profits (experimentation and data science) to study the societal impacts their products have. We are only scratching the surface here, I am hoping to see more of this kind of work in the future,” he said.

A second researcher agreed. “Academic research needs to engage with the analysis of the ways in which information is aggregated, organised and accessed - Twitter research is a positive and much needed step in that direction. It is commendable that Twitter is investing time and resources in understanding its content feeds and is making data set available for other researchers to verify, which is more than what you can say about other platforms. First step to fixing a problem is admitting you have a problem,” said Jyoti Panday, researcher, Internet Governance Project, Georgia Tech.

Panday added it is significant that Twitter appears to admit that it does not understand its algorithms and the “complex communication infrastructure may be beyond the comprehension of the owners of the infrastructure, with the effects of algorithms unclear even to those writing them”.

‘TOP TWEETS’ IN FOCUS

The algorithm studied here is the code that determines how people see posts when they view their Twitter feed in the default “top tweets” setting, as opposed to the “latest” mode that orders tweets chronologically. The study used a control group that had never been offered the “top tweets” feature since its roll-out in 2016, and compared what those users saw in their chronological feeds with what was shown to users who had the algorithmic feature.

To fetch the “top tweets”, the study explains in its appendix, Twitter uses a machine learning (ML) model to predict posts a person is likely to engage with (read, comment on or retweet). These predictions are based on factors such as “inferred topic of the tweet” and past behaviour of a user such as “engagement history between the user and the author of the tweet”.
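The ranking logic the appendix describes can be sketched in a few lines. This is only an illustration: the feature names, weights and linear scoring below are hypothetical stand-ins for Twitter’s trained ML model, which is not public.

```python
# Toy sketch of engagement-based ranking, as the study's appendix describes
# it. Features and weights are illustrative, not Twitter's actual model.

def engagement_score(tweet):
    """Predict how likely a user is to read, reply to or retweet a tweet.
    A real system would use a trained ML model; this is a toy linear score
    over hypothetical features like topic match and past engagement with
    the author."""
    weights = {"topic_match": 0.5, "author_history": 0.3, "recency": 0.2}
    return sum(weights[f] * tweet[f] for f in weights)

def top_tweets(candidates):
    # The "top tweets" setting orders by predicted engagement, not by time.
    return sorted(candidates, key=engagement_score, reverse=True)

feed = [
    {"id": 1, "topic_match": 0.9, "author_history": 0.1, "recency": 0.8},
    {"id": 2, "topic_match": 0.2, "author_history": 0.9, "recency": 0.9},
]
print([t["id"] for t in top_tweets(feed)])  # [1, 2]
```

The point of the study is that a feed ordered this way can systematically favour some sources over others, even if no rule in the code mentions politics.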

The researchers picked elected officials from seven countries – Canada, France, Germany, Japan, Spain, the United States and the United Kingdom. Except in Germany, tweets by political leaders from the right “receive more algorithmic amplification than the political left when studied as a group”, said Chowdhury and ML researcher Luca Belli in a blog post.

The lopsided amplification was strongest in Canada (Liberals 43% vs Conservatives 167%) and the UK (Labour 112% vs Conservatives 176%).
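Read loosely, an amplification figure of 0% means a group’s tweets reach roughly the same audience in the algorithmic feed as in the chronological control, and 167% means they reach 167% more users. The arithmetic below is only an illustrative reading of the published percentages, not the study’s exact formula.

```python
# Illustrative reading of the study's amplification percentages:
# 0% = same reach as the chronological control; 167% = 2.67x the reach.

def amplified_reach(baseline_reach, amplification_pct):
    """Reach under the algorithmic timeline, given the chronological
    baseline and the reported amplification percentage."""
    return baseline_reach * (1 + amplification_pct / 100)

# Canada, per the study: Liberals 43%, Conservatives 167%.
print(amplified_reach(1000, 43))   # 1430.0
print(amplified_reach(1000, 167))  # 2670.0
```

On this reading, for every 1,000 users a Canadian politician’s tweet would reach chronologically, Conservative tweets reached about 2,670 users algorithmically against about 1,430 for Liberal tweets.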

A second part of the study, which analysed how posts by US media outlets were amplified, found that right-leaning news outlets, such as the New York Post and Breitbart, were more amplified than neutral sources such as Reuters and left-leaning outlets such as the LA Times.

Twitter’s ML Ethics, Transparency and Accountability (META) team will now carry out a root-cause analysis and determine “what, if any, changes are required to reduce adverse impacts by our Home timeline algorithm,” the two staffers said in the blog post.

In the past, a similar analysis of its ML technologies led Twitter to abandon its photo-cropping feature after it found that the algorithm that decided how to crop images to a uniform square “had potential for harm”. This came after a controversy when users discovered the auto-cropping code would focus only on the face of a Caucasian US politician while leaving out that of an African-American.

As a result, the company changed its system to display photos in full, without automatic cropping.


ABOUT THE AUTHOR
    Binayak reports on information security, privacy and scientific research in health and environment with explanatory pieces. He also edits the news sections of the newspaper.
