US senator inquires about social media companies’ readiness for Lok Sabha elections in India
Given the history of social media platforms, including Meta-owned WhatsApp, amplifying misleading and false content in India, Senator Bennet’s inquiry seeks clarity on the measures these companies have implemented to address this issue.
The letter coincides with the imminent announcement of elections in India by the Election Commission of India (ECI). Senator Bennet’s correspondence, directed to the leaders of Alphabet, Meta, TikTok, and X, requests information about the steps these companies have taken to prepare for elections across different nations, particularly India.
Highlighting the persistent threats posed by social media platforms to electoral processes, Senator Bennet underscores the evolution of these risks.
While users have previously utilised deepfakes and digitally altered content in electoral contexts, the emergence of artificial intelligence (AI) models further compounds these challenges. Bennet notes that the proliferation of sophisticated AI tools has lowered previous barriers to entry, enabling almost anyone to create highly realistic images, videos, and audio, thus posing significant threats to democratic processes and political stability.
With more than 70 countries holding elections and over two billion people casting ballots, 2024 has been called the year of democracy.
Australia, Belgium, Croatia, the European Union, Finland, Ghana, Iceland, India, Lithuania, Namibia, Mexico, Moldova, Mongolia, Panama, Romania, Senegal, South Africa, the United Kingdom, and the United States are expected to hold major electoral contests this year.
In his letter to Elon Musk of X, Mark Zuckerberg of Meta, Shou Zi Chew of TikTok and Sundar Pichai of Alphabet, Bennet requested information on the platforms’ election-related policies, their content moderation teams, including the languages covered and the number of moderators on full-time or part-time contracts, and the tools adopted to identify AI-generated content.
“Democracy’s promise that people rule themselves is fragile,” Bennet continued. “Disinformation and misinformation poison democratic discourse by muddying the distinction between fact and fiction. Your platforms should strengthen democracy, not undermine it,” he wrote.
Referring to India, the world’s largest democracy, the Senator wrote: “The country’s dominant social media platforms, including Meta-owned WhatsApp, have a long track record of amplifying misleading and false content. Political actors that fan ethnic resentment for their own benefit have found easy access to disinformation networks on your platforms.”
Bennet then asked for details of the policies and personnel the companies have put in place for India’s elections. “What, if any, new policies have you put in place to prepare for the 2024 Indian election? How many content moderators do you currently employ in Assamese, Bengali, Gujarati, Hindi, Kannada, Kashmiri, Konkani, Malayalam, Manipuri, Marathi, Nepali, Oriya, Punjabi, Sanskrit, Sindhi, Tamil, Telugu, Urdu, Bodo, Santhali, Maithili, and Dogri?” he asked.
“Of these, please provide a breakdown between full-time employees and contractors,” Bennet said.
The Senator told the social media CEOs that beyond their failures to effectively moderate misleading AI-generated content, their platforms also remain unable to stop more traditional forms of false content.
China-linked actors used malicious information campaigns to undermine Taiwan’s January elections, the letter noted. Facebook allowed the spread of disinformation campaigns that accused Taiwan and the United States of collaborating to create bioweapons, while TikTok permitted coordinated Chinese-language content critical of President-elect William Lai’s Democratic Progressive Party to proliferate across its platform.
According to the Senator, he has heard from the heads of the US Intelligence Community that the Russian, Chinese, and Iranian governments may attempt to interfere in US elections.
“As these and other actors threaten people’s right to exercise popular sovereignty, your platforms continue to allow users to distribute fabricated content, discredit electoral integrity, and deepen social distrust,” he wrote.