In a statement, TikTok said it prohibits election misinformation and paid political ads on its platform
TikTok algorithms are very good at finding videos to keep people glued to their phone screens for hours.
What they’re not so good at is spotting ads that contain blatant misinformation about the US election, a new report finds. This is despite TikTok banning all political ads on its platform in 2019.
The report raises new concerns about the hugely popular video-sharing app’s ability to catch election lies at a time when more and more young people are using it not only for entertainment but also for information.
The nonprofit Global Witness and the Cybersecurity for Democracy team at New York University released the report on Friday.
Global Witness and NYU have tested whether some of the most popular social platforms – Facebook, YouTube and TikTok – can detect and remove fake political ads aimed at US voters ahead of next month’s midterm elections. The watchdog has conducted similar tests in Myanmar, Ethiopia, Kenya and Brazil with ads containing hate speech and misinformation, but this is the first time it has done so in the United States.
The ads in the US contained misinformation about the voting process, such as when and how people can vote, as well as how election results are counted.
They were also designed to sow distrust in the democratic process by spreading unsubstantiated claims of “rigged” votes or decisions before election day. All were submitted to social media platforms for approval, but none were actually published.
TikTok, which is owned by the Chinese company ByteDance, fared the worst, approving 90% of the ads the group submitted. Facebook fared better, catching seven of the 20 fake ads, which were submitted in both English and Spanish.
Jon Lloyd, senior adviser at Global Witness, said TikTok’s results in particular were a “huge surprise,” given the platform’s outright ban on political advertising.
TikTok said in a statement that it prohibits and removes election misinformation and paid political ads from its platform.
“We value feedback from NGOs, academics and other professionals to help us continually strengthen our processes and policies,” the company said.
Facebook’s systems detected and removed some of the ads Global Witness submitted for approval.
“These reports were based on a very small sample of ads and are not representative of the number of political ads we review every day around the world,” Facebook said.
“Our ad review process has multiple layers of analysis and detection, both before and after an ad is published.” The company added that it invests “significant resources” to protect elections.
YouTube detected and removed all of the problematic ads and even suspended the Global Witness test account that had been set up to submit them.
However, the Alphabet-owned video platform did not catch any of the false or misleading election ads the group submitted for approval in Brazil.
“So it shows that there is a real global disparity in their ability to enforce their own policies,” Lloyd said.
Google said it has “developed extensive measures to address misinformation” on its platforms, including false claims about elections and voting.
“In 2021, we blocked or removed more than 3.4 billion ads for violating our policies, including 38 million for violating our misrepresentation policies,” the company said in a prepared statement.
“We know how important it is to protect our users from this type of abuse – especially around major elections like those in the United States and Brazil – and we continue to invest in and improve our enforcement systems to better detect and remove this content.”
Lloyd said the consequences of failing to control disinformation would be widespread.
“The consequences of inaction could be catastrophic for our democracies and our planet and our society in general,” Lloyd said. “Increasing polarization and all that. I don’t know what it’s going to take for them to take it seriously.”