
Antisocial media: Why Facebook’s missteps threaten internet and society

Hindustan Times, New Delhi
Sep 03, 2020 05:20 AM IST

Experts say the company has wielded its policies opaquely, strengthening the case for a regulatory role that could change the nature of the internet.

“It was my mistake, and I’m sorry. I started Facebook, I run it, and I’m responsible for what happens here.”

Facebook CEO Mark Zuckerberg (AP file photo)

Facebook CEO Mark Zuckerberg offered this apology to members of the US Congress in March 2018, when he was called to explain how a third party, Cambridge Analytica, used its service to carry out psychometric analysis of Americans in a way that possibly influenced the 2016 US presidential election.


This was not the first time Zuckerberg or his company had said sorry – each time only to be followed by a new abuse of the service that Facebook failed to stop. These failures have, in the CEO’s own words from the 2018 hearing, involved “fake news, foreign interference in elections, and hate speech, as well as [threats from] developers and data privacy [breaches]”.

In India, some of these harms have manifested as abusive troll armies, individuals inciting riots and political support groups spreading coordinated misinformation to swing votes.

In its latest controversy, the company has been accused of not acting against Bharatiya Janata Party (BJP) politicians who violated its content policies, a decision purportedly influenced by a senior executive seen as being close to the party. Facebook and the BJP have denied the allegations, and the ruling party has instead accused the company of censoring its supporters. The company did not respond to requests for details on how it moderates content or what its stand is on such policies.

“Across the board, we’ve watched as the company (Facebook) does whatever is in its profit interest, as it selectively chooses when to censor or otherwise subvert content and, conversely, when to do nothing about it even if it goes viral, so long as there is adequate commercial justification to take no action. This unwavering adherence to whatever policy serves the business model subjugates the democratic interest and must be corrected,” said Dipayan Ghosh, a former White House official who is currently the co-director of the Digital Platforms & Democracy Project at the Harvard Kennedy School.

The controversies around Facebook reflect a complicated regulatory riddle: what are the rules of online speech? Who should write them? Who should enforce them? These questions, according to top cyberlaw and tech policy experts, will determine the future of the internet and the politics, culture and commerce it has come to shape since the turn of the millennium.

This is because at present, Facebook (which also owns WhatsApp and Instagram), Twitter, YouTube and such sites are not liable for illegal or hateful content – only the user posting it is. In India, under Section 79 of the Information Technology Act, they need to act when notified by authorities in order to keep this immunity.

“As businesses subject to government regulation, and in receipt of the de facto or de jure privilege of inflicting harm on private citizens without legal responsibility, the platform companies must constantly negotiate with political power. The resulting contradiction lies at the heart of the mess that presently existing social media are making of politics and society everywhere,” said Mishi Choudhary, legal director of Software Freedom Law Centre.

OPAQUE RULES

To understand this contradiction fully, it is important to look at the rights and obligations involved. In almost all democratic nations, online speech enjoys the same privileges as offline speech. Social media websites, which primarily operate under American law while accounting for some local legal obligations, follow a system of self-regulation.

This is in part because it is not feasible to determine jurisdiction and apply filters on global communications, but is mostly to protect freedom of expression and civil liberties from state surveillance and censorship.

This results in the policies and guidelines they spell out for users. “They specify certain rules for their platforms and take down content either through algorithms – which Facebook claims are responsible for 95% of its takedowns – or when they are notified of complaints from users,” said Apar Gupta, executive director at the Internet Freedom Foundation (IFF), which has called on the Indian government to push for an international human rights audit of how Facebook combats hate speech.

In addition to automated programs that look for keywords or track how often a post is reported, Facebook also deploys thousands of contractors who manually review flagged content for moderation.

It is this prerogative that Facebook, and similarly popular content websites such as YouTube, have been accused of wielding unfairly, excessively or inadequately. “If you don’t follow your own policies consistently, it will lead to situations like these where Facebook has been accused of political bias,” added Gupta.

NOT ENOUGH

The problem becomes acute when it involves illegal content posted by people in positions of prominence.

In June, Facebook refused to hide or take down a post by US President Donald Trump that purportedly incited shooting as protests over the death of George Floyd escalated; Twitter hid the same message posted on its platform. Zuckerberg said his company decided to let it remain. “...I believe people should be able to see this for themselves, because ultimately accountability for those in power can only happen when their speech is scrutinized out in the open,” he wrote.

“I think it illustrates that content moderation involves 1 million impossible decisions, probably daily. And the more rules you have, the more difficult it is to enforce them consistently. Facebook is inevitably going to make a ton of mistakes, both in censoring too much and in not censoring enough,” said David Greene, civil liberties director at the Electronic Frontier Foundation (EFF), one of the world’s oldest digital rights advocacy groups.

Greene added that the decision to let the Trump post stay makes Facebook seem naive about how influential its platform is in the public debate, and how hard it is to undo harmful statements that spread widely through it.

IN EXCESS

While comments by prominent people are immediately scrutinised, Facebook’s algorithms and contracted moderators play a complicated role in censorship. In 2018, the OFFLINE-ONLINE project by EFF and Visualizing Impact detailed several instances where Facebook censored marginalised communities.

These included takedowns of posts and images discussing racial injustice in the US, a ban on a Native American tribe leader because his name was flagged as fake, the deletion of the Facebook pages of seven Palestinian journalists (which Facebook later admitted was a mistake) and account restrictions on multiple Rohingya activists.

Experts see these as examples of the differential approach the companies may take to censorship, depending on the privilege of the voice involved. “They have risen above the law because they are fictionally supposed never to do what government constantly forces them to do. (But) they comply, and intensively moderate all content that passes through their hands. They do this in order to maintain their indispensable privilege of civil impunity,” said SFLC’s Choudhary.

The company’s free speech defence is further overshadowed by the role its algorithms have played in amplifying content that threatens civil liberties.

UK-based counter-extremism organisation Institute for Strategic Dialogue said in mid-August that its investigation found Facebook’s algorithm “actively promotes” Holocaust denial content. Typing “holocaust” into the Facebook search function, the report said, brought up suggestions for denial pages, which in turn recommended links to publishers that sell such literature.

FUNDAMENTAL RETHINK?

The problem of how to tackle online speech harks back to similar debates around offline speech. “It brings us back to the debate over what is problematic speech. Legal speech can vary across countries. Speech also involves a lot of context, often local, which could determine its acceptability,” said Jyoti Panday, researcher at the Internet Governance Project.

The answer to this riddle, however, is unlikely to rest with governments. “Something needs to change but more regulation will only help censorship by proxy for states,” said SFLC’s Choudhary. “Hard problems are hard and there are no silver bullets or one solution. The moment calls for a serious rethink: not just new rules but the public’s participation in making, interpreting and enforcing them, and improved social policy based on a better understanding of the internet,” she added.

The position was echoed by Greene of EFF. “History teaches us that when government takes over such practices, government uses it to censor its opponents. We do urge platforms to conduct their curation within a human rights framing, namely, the Santa Clara Principles,” he said.

The Santa Clara Principles are a set of guidelines that ask social media companies to disclose how many posts are flagged and what action is taken, to give users adequate notice before a takedown, and to provide clear avenues of appeal. “Companies need to be more transparent with these policies; knowing exactly how moderation takes place is crucial to find ways to work on it, improve it,” added Panday.

Harvard Kennedy School’s Ghosh said Section 230 of the American Communications Decency Act, “which is the norm of online speech regulation”, could be updated to address some of the issues. “Particularly, [the] carve-outs from the sweeping liability shields it affords internet platforms,” he said.

It was the Declaration of the Independence of Cyberspace by one of EFF’s founders, American political activist John Perry Barlow, that in 1996 proclaimed the internet a new domain of pure freedom where the laws of governments “have no meaning” and do not apply.

Contemporary legal experts now believe that approach may have been utopian, and that the internet has evolved in ways that make the role of law inevitable, especially since harms from the virtual world extend to the offline space.

“The ‘laws’ that increasingly have no meaning in online environments include not only the mandates of market regulators but also the guarantees that supposedly protect the fundamental rights of internet users, including the expressive and associational freedoms whose supremacy Barlow asserted,” writes Julie E Cohen, professor of law and technology at the Georgetown University Law Centre, in a paper titled Internet Utopianism and the Practical Inevitability of Law.

A NATURAL MONOPOLY

“More generally, in the networked information era, protections for fundamental human rights—both on- and offline—have begun to fail comprehensively,” the paper added.

In the particular context of Facebook, which has weathered the controversies with little impact on its earnings or user base, there are now calls to break it up.

“Facebook has become a monopoly — a natural monopoly at that, whereby it benefits from powerful network effects that organically raise barriers to entry. This set of circumstances invites us to consider regulating it like a utility and, potentially, breaking it up in the coming years,” Ghosh said.

A similar call was issued in May 2019 by Chris Hughes, a co-founder of Facebook who has since left the company. “Mark’s influence is staggering, far beyond that of anyone else in the private sector or in government... Mark alone can decide how to configure Facebook’s algorithms... He sets the rules for how to distinguish violent and incendiary speech, and he can choose to shut down a competitor by acquiring, blocking or copying it,” Hughes wrote in an opinion piece for the New York Times.

“We are a nation with a tradition of reining in monopolies, no matter how well intentioned the leaders of these companies may be. Mark’s power is unprecedented and un-American. It is time to break up Facebook.”


ABOUT THE AUTHOR

    Binayak reports on information security, privacy and scientific research in health and environment with explanatory pieces. He also edits the news sections of the newspaper.
