Wired Magazine reports:
Whereas Twitter, Facebook, and YouTube have developed lengthy policies covering hundreds of circumstances, TikTok provided a short list of bullet points addressing only the most extreme content.
Amid growing scrutiny from regulators over censorship and its Chinese ownership, the company had pledged that more transparency would be forthcoming. Now, it’s here. On Wednesday, TikTok released a comprehensive set of new Community Guidelines, effective immediately, that more closely resemble those of its peers.
The rules are far more extensive than the previous bullet points, and are organized into 10 distinct categories covering everything from terrorist propaganda to hate speech to sexual content.
Business Insider reports:
TikTok released a new set of updated community guidelines on Wednesday, and among them is a rule explicitly banning content that “denies well-documented and violent events have taken place.”
The rule falls under the “hateful ideology” section of the new guidelines and would apply to Holocaust denial, a TikTok spokesman confirmed to Business Insider. TikTok also confirmed that the rule would apply to Sandy Hook conspiracies.
By including this line in its community guidelines, TikTok appears to be setting itself apart from older Silicon Valley behemoths like Facebook and YouTube.
The Washington Post reports:
With more than 1.5 billion downloads, TikTok is racing to keep up with its fast-growing, younger-leaning fans, who post short, sometimes silly, videos of themselves and their friends. Major social media platforms including Facebook, Google-owned YouTube and Twitter long have maintained lengthy policies explaining what they allow and block, spurred on by a similar need to protect their brands — and satisfy regulators’ concerns.
But TikTok faces unique challenges because of its Chinese ties, which have fueled suspicions that it manages content in a way that satisfies Beijing’s censors. Some in Congress have questioned the privacy and security of the app itself, a concern shared by the U.S. military: the Army and Navy have banned it from government-issued devices, and the Pentagon has directed its approximately 23,000 employees to uninstall it from their phones.
The company published a lengthy list of who it considers to be dangerous individuals and organizations that can’t use its app, including groups affiliated with hate, extortion, organ trafficking, cybercrime and extremism. And in doing so, it defined what it considers to be a terrorist organization.
It also greatly expanded its policies around minor safety, an issue that TikTok has had to grapple with in the U.S., especially in terms of children’s data privacy. The new policies say explicitly that users must meet minimum age requirements to use TikTok.
The new rules don’t ban misinformation outright. But they do explicitly prohibit misinformation created to cause harm to users or the larger public, including misinformation about elections or other civic processes.