Digital Platform Policy Highlights - Digest 24
Q4 2023 and Q1 2024: This post outlines how digital platforms changed their policies to safeguard against user misconduct.
This post is part four of a series documenting policy changes and feature improvements introduced by platforms in Q4 2023 and Q1 2024.
TL;DR → Policy changes to curb bad behavior on the platforms:
Discord introduces a new warning system and teen safety features
Google introduces a new AI content moderation policy for Android apps
Telegram complies with the Google Play Store's anti-terrorism content guidelines
Discord introduces new warning system and teen safety features
Discord, a popular chat app, has made its moderation system more transparent with a new warning system that tells users when they have violated the platform’s rules and how to avoid future violations. The company has also introduced a new feature called Teen Safety Assist, which automatically filters out potentially sensitive images and alerts teens when they receive messages or friend requests from unknown people. The move appears to balance two concerns: easing the worries of hesitant parents who restrict their children's use of Discord, and avoiding permanent bans handed out for unintentional violations. (link)
Google's New AI Content Moderation Policy for Android Apps
Google has updated its policy for Android apps using generative AI. Starting next year, these apps must include a feature to report offensive AI-generated content to stay on Google's Play Store. The policy targets AI chatbots, image apps, and apps creating AI-generated voice or video content, but excludes those using AI for summarizing or productivity purposes. With generative AI making it easier to create deepfakes and deceptive content, platforms like Google will need to rapidly adapt to fend off a deluge of harmful activity. (link)
Telegram Blocks Hamas on Android
Telegram, complying with the Google Play Store's anti-terrorism content guidelines, has blocked access to Hamas’s channels on its Android app. Known for its commitment to privacy and minimal censorship, Telegram often resists restricting content. This action, however, shows its willingness to adhere to external regulatory demands. What makes this case interesting is that the external force is not a country but a digital platform: the Google Play Store. (link)
YouTube restricts AI music imitations
YouTube is tightening its content guidelines to regulate AI-generated deepfakes, especially those mimicking musicians' voices. The new rules, which include mandatory labeling of AI-generated content, underscore YouTube's crucial relationship with the music industry: the platform relies heavily on music licenses for a vast array of content, making the industry integral to its operations. By allowing uploaders to self-declare AI-generated content, YouTube gives artists and publishers a chance to request that such content be taken down. Not all artists may view AI-generated voice mimicry as a cause for concern; indeed, some might even find it flattering. (link)
YouTube tries buffering for ad blockers
YouTube has implemented a policy that introduces a delay in video loading for users of ad blockers, and it is in effect across all browsers. The move is part of a broader strategy to discourage the use of ad blockers and ensure ad visibility on the platform. While YouTube and YouTubers depend on ads for revenue, ads on the platform have grown increasingly long, leading more users to resort to ad blockers. This raises a question: would viewers have turned to ad blockers if the ads were more tolerable? (Think of the relationship between readers and advertisers in a fashion magazine.) (link)
Twitch rescinds policy on sexual content
Twitch recently updated its content policy to allow certain forms of sexual content, including artistic nudity. However, after hearing community concerns and reflecting on the impact, Twitch has withdrawn the part of the policy permitting artistic nudity. The decision comes amid worries about the potential misuse of AI-generated "deepfakes" and the platform's ability to distinguish digital art from real photography. While the initial intention was to support artists without fear of punishment, some streamers appear to have exploited the policy, prompting Twitch to roll back the changes. (link)
Research help from Jennifer Xie, Marshall Singer, Angelina Wang, Anna Li, Anantesh Mohapatra and John Mai (Thanks a ton, folks!)
If you know someone who enjoys reading this kind of thing, please share it with them.