Digital Platform Policy Highlights - Digest 42
External regulations and shifting user expectations prompted major platforms to adjust their policies in Q4 2024, with a focus on transparency, content moderation, and user control.
This post is part five of a series highlighting how platforms are responding to external regulation in Q4 2024 through policy updates and product adjustments aimed at strengthening compliance and user trust.
TL;DR: Here are the policy changes addressing external regulation:
Google's Antitrust Reckoning: A Win for Behavioral Economics and Regulatory Enforcement
Meta Faces Legal Backlash Over Harmful Teen-Focused Features
TikTok Restricts Beauty Filters for Minors Amid Mental Health Concerns
Steam Ends Arbitration Option, Forcing Users Into Court
OpenAI Courts Controversy with Government-Focused ChatGPT Version
Google and Antitrust: Behavioral Economics and the Definition of a “Market”
United States District Court Judge Amit Mehta ruled that Google violated antitrust law through its exclusive default agreements. The core behavioral economics argument is that default positions create powerful lock-in effects: consumers stick with Google out of habit rather than deliberate choice. Google’s exclusive $20B deal with Apple to be the default search engine in Safari, along with similar deals with Android manufacturers, featured prominently in the ruling. The timing is interesting, as AI assistants like Claude and Gemini are increasingly “replacing” traditional search. Redefining what constitutes a “market” in the AI era could be Google’s best defense. (link)
Meta Faces Legal Backlash in Massachusetts Over Harmful Teen-Focused Features
The Massachusetts Superior Court’s ruling that Meta cannot avoid facing the state’s social media lawsuit highlights how platform engagement mechanics are increasingly scrutinized through regulatory risk frameworks. Features like plastic surgery filters, once framed as enhancing “user choice” on the platform, are being reframed as potentially exploitative design patterns when deployed to vulnerable demographics such as teens, since they can lower self-esteem and harm personal well-being. After more than a decade of researchers highlighting that social media algorithms inadvertently prioritize engagement over users’ well-being, we are finally seeing an emerging regulatory consensus that platform design choices targeting vulnerable groups require separate governance frameworks. (link)
TikTok Restricts Beauty Filters for Minors Amid Mental Health Concerns
TikTok's beauty filter restrictions for minors represent a strategic preemptive governance move that anticipates regulatory pressure while potentially differentiating its youth safety approach. TikTok is clearly recalibrating the platform economics equation, weighting long-term reputational capital over short-term engagement metrics. As noted above, the timing is apt: it comes amid increased regulatory scrutiny of youth mental health impacts across social platforms. I hope this is the beginning of a platform safety arms race, where companies compete on protective features rather than solely on engagement mechanics. The big question is whether regulators' attention will rapidly shift from social media harms to more immediate AI governance concerns, given how GenAI will affect classroom learning. (link)
Steam Ends Arbitration Option, Forcing Users Into Court
An interesting inversion of the standard platform playbook on dispute resolution economics. While most platforms embrace arbitration to avoid costly litigation, Valve (the owner of the Steam platform) appears to be calculating that raising the barrier to legal action through court requirements will better protect it against the recent trend of mass arbitration campaigns. Valve has seemingly introduced a strategic friction point designed to discourage small claims by increasing both financial and procedural hurdles. The real question is whether this approach might backfire by inadvertently enabling class action suits, which pose greater financial risk to platforms than individual arbitration costs. (link)
OpenAI Courts Controversy with Government-Focused ChatGPT Version
OpenAI's government-tailored ChatGPT variant is a smart regulatory positioning strategy that aims to preemptively shape AI governance by embedding itself within government infrastructure. This feels like a two-pronged strategy: (1) establishing OpenAI as essential infrastructure before regulatory frameworks fully crystallize, and (2) signaling its agility in addressing concerns about handling sensitive organizational data, given that people use ChatGPT despite workplace restrictions. The strategy is also a double-edged sword, if you ask me. While adoption can act as a shield against future regulatory constraints, it could also trigger heightened scrutiny over public-private data boundaries. (link)
Research help from John Mai, Simran Joshi, Nicole Wu, Aarav Gupta, and Anantesh Mohapatra (Thanks a ton, folks!)
Thank you for reading Platform Policy Research. If you know someone who likes this stuff, please share it with them :)

