Digital Platform Policy Changes in Response to the Involvement of External Regulators
The 3rd post in the Digital Platform Policy Changes Insights series. Digital platforms frequently adjust their policies to comply with external regulations, yet this does not always benefit users.
TL;DR: This post documents how external regulators attempt to fix problems and how platforms react to the resulting changes. The 1st post in the series focused on policies to woo new users (link); the 2nd focused on retaining existing users (link).
Table of contents:
Unraveling the Wild West Era of the Internet
A stylized illustration to argue why self-policing may not be enough
Tech Titans and the Thorn of Antitrust: Past and Present Challenges
Proactive Regulatory Measures Addressing Future Challenges
Navigating the Regulatory Landscape: Big Tech's Dance with Compliance and Consequences
Platform Users' Response to Policy Changes
Conclusion
Page 1: Unraveling the Wild West Era of the Internet
This decade has brought a seismic shift in how the world perceives and regulates the actions of digital platforms. Tech enthusiasts, policymakers, journalists, and academics alike find themselves confronted with a compelling analogy from a historic moment in the world of sports, particularly in the National Hockey League (NHL): the rise of the "Avery Rule."
During a game in 2008, Sean Avery boldly stationed himself in front of the legendary Devils goalie, Martin Brodeur. This wasn't a menacing body check but a provocative act: waving his arms and stick in Brodeur's face, effectively blocking the revered goalie's view. The distraction proved too much for the goalie, allowing Avery to score a goal. The NHL had to jump in and establish rules against such actions, which are now colloquially termed the "Avery Rule."
This pivotal moment in hockey history provides a fascinating backdrop to the contemporary challenges facing the digital world. In recent years, concerns have grown about the conduct of tech giants, social media platforms, and digital services. Like Avery, they persistently push the boundaries of acceptable behavior, seemingly finding new legal gray areas that give them an edge without technically breaking the law. While not always violating existing regulations, their actions often spark public outrage and calls for reform.
In its early days, the internet was often referred to as the "Wild Wild West," with few regulations governing the actions of tech companies. As technology evolved rapidly, new laws struggled to keep pace, leaving regulators unaware of the potential consequences. Concepts like private property rights, antitrust and monopoly behavior, and content moderation became significant advantages for major platform companies before meticulous regulations could be established. Notably, these digital platforms thrived under the protection of Section 230 of the Communications Decency Act of 1996, which ensured freedom of expression and limited platforms' liability when users posted defamatory, offensive, or illegal content.
Undoubtedly, Section 230 has been a key enabler of the digital platform revolution, the benefits of which we are still reaping today. Legal immunity for hosted content meant that platforms could experiment with ways to enable and encourage engagement between anonymous users without really worrying about the topics on which they engaged. The first "Avery situation" emerged when some entrepreneurs figured out that topics traditionally deemed toxic generated more engagement than benign ones.
Fast forward to 2024, and most regulators are now asking: Does Section 230 absolve digital platforms of the responsibility to remove damaging content that could harm ordinary users, especially when their algorithms amplify content that generates engagement? Do we need a digital platform equivalent of the "Avery Rule"?
Page 2: A stylized illustration to argue why self-policing may not be enough
To be fair to the digital platform firms, most of them behaved responsibly for the better part of the first decade of this millennium. Silicon Valley pundits vouched that digital platforms mostly did the "right" things and pushed boundaries that were arguably inconsequential. For instance, while Amazon transformed the e-commerce industry, it avoided collecting state sales taxes on books sold to customers outside Washington because of "physical presence" (nexus) rules. PayPal, by calling itself a digital payment system rather than a bank, did not require a banking license to operate, making it easier for consumers to open an account. Many big tech firms provided their primary services for free (think Google Search, Facebook, etc.) and legitimized the concept of "paying with your personal data," with which most early adopters were very comfortable.
It wasn't until the mid-2010s that the repercussions of such actions (or inaction) began to register with the public. In 2015, news surfaced that an academic from the University of Cambridge had exploited Facebook's lax data access rules to collect information about users and their friend networks. This data, when used by political advertisers, was argued to have had a major impact on voter turnout in real elections, thrusting the concept of psychological targeting into the spotlight. For a considerable time, legal actions struggled to hold Mark Zuckerberg culpable for the data leak debacle. The landscape changed, however, with the enactment of the GDPR in 2018. Under the pressure of the GDPR and other data privacy regulations, Facebook had no choice but to comply and settle numerous cases, including a hefty $5 billion fine from the FTC, a $100 million penalty from the SEC, and a £663,000 fine from the U.K.'s Information Commissioner's Office (link).
Around the time the Facebook saga was unfolding, news surfaced about a massive data breach at Equifax, potentially compromising the digital identities of a majority of Americans. The incident traced back to lax server maintenance practices, which let hackers exploit a vulnerability and steal data from Equifax servers. Strikingly, Equifax seemed unaffected for a considerable period. Some even speculated that Equifax profited from the incident by promoting IdWatchDog, its premium identity protection service. It was only after the FTC and CFPB went after Equifax for violating multiple acts, such as the Gramm-Leach-Bliley Act, that Equifax paid $700 million in monetary relief and offered free identity protection services. Furthermore, only after legislation passed in 2018 did the credit bureaus enable freezing of credit reports at no cost, refrain from profiting from data during a freeze, and grant consumers free access to their credit reports.
In some experts' minds, the takeaway from the Facebook and Equifax sagas was clear-cut: the economic power that big tech firms amassed under a laissez-faire approach acted as a disincentive to self-policing, and that had started to show real consequences. "Real" action was needed: legal or legislative.
Page 3: Tech Titans and the Thorn of Antitrust: Past and Present Challenges
Reading this in 2024, you would agree that one thing common across most tech platform firms is the specter of antitrust action, as exemplified by Microsoft's antitrust tussles since the 1990s. These high-profile legal battles, triggered by the bundling of Internet Explorer with Windows (and, in a later European case, Windows Media Player), were the spark that ignited discussions and raised awareness of the influence wielded by tech giants.
In 1998, Microsoft was accused of abusing its monopoly over operating systems to drive third-party browsers such as Netscape Navigator out of business. In its defense, Microsoft argued that its competitor, Netscape, should shoulder the blame for failing to keep up with software updates. It contended that offering free software for installation was a boon for consumers, who flocked to Internet Explorer for its quality and convenience. Supporters of Microsoft pointed to the ease of installation on Toshiba laptops and other PC clones as evidence of the software's widespread popularity, showcasing Microsoft's innovative superiority in the tech sector.
The Department of Justice (DOJ), however, saw Microsoft's actions as monopolistic and argued that they violated the Sherman Antitrust Act by restricting PC manufacturers from installing third-party browsers. A district court consequently ordered that Microsoft split into two separate entities, one for the operating system and one for software development. Microsoft resisted, the breakup order was reversed on appeal, and the company eventually settled by agreeing to share the technical specifications for communicating with the OS so that other companies could produce products interoperable with Windows. Back then, it raised questions: Was the government exerting too much control? Was Microsoft simply bearing the brunt of being the best in the market?
Twenty years later, many more big tech firms have come under scrutiny from the DOJ and the Federal Trade Commission (FTC), facing a barrage of lawsuits from attorneys general across the United States. Google, for instance, found itself entangled in over 100 separate lawsuits in 2020 and 2021, covering its search engine monopoly, the Google Play Store, its ad business, and concerns over data privacy (link). Facebook, too, was hit by a slew of lawsuits: in conjunction with 46 states, the FTC accused the company of executing hostile "takeovers" of Instagram and WhatsApp (link). Similar legal challenges have arisen for Apple, Amazon, and Microsoft, primarily concerning monopolistic practices in their markets, data privacy, and anti-competitive mergers and acquisitions involving smaller companies.
The most common argument in favor of big tech firms is that network effects naturally make digital products and services "monopolistic," but that those network effects benefit all users. The primary counter-argument from federal executive departments such as the DOJ and FTC is that many of the platforms' actions not only impose negative externalities but also harm their own users and stakeholders because of the platforms' "criminal" focus on profitability.
Despite the legal onslaught, observers argue that these tech giants often sidestep serious consequences. Can the government take on these tech giants effectively when a legal approach so often ends in a settlement without much "real" outcome, such as a "break-up" of big tech (a concept that seems ambiguous and uncertain at best)?
Page 4: Proactive Regulatory Measures Addressing Future Challenges
As noted above, despite legal efforts, these tech giants appear to remain largely unharmed. Consequently, governments worldwide are working to tighten the reins on them by proactively crafting legislation that safeguards the interests of the public and of competitors.
On the global stage, the European Union is determined to regulate how much tech companies can amass user data. The General Data Protection Regulation (GDPR) employs various strategies to safeguard consumers' personal data. It allows the government to levy fines on companies for data breaches, mandates explicit consent for data collection, and encourages companies to provide end-to-end encryption for their services. Moreover, this legislation bestows upon E.U. citizens and residents an "Online Bill of Rights," which includes the right to know how their data is used and the right to request the deletion of their data.
The epitome of such legislation came in 2022, when the EU enacted the Digital Services Act (DSA). Experts opine that this act will impact all large tech companies: Amazon, Apple, Google, Meta, Microsoft, Snapchat, TikTok, you name it. A key provision of the DSA empowers E.U. officials to levy fines of up to 6% of the global annual revenue of these massive platforms (link).
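To make the scale of that 6% cap concrete, here is a minimal arithmetic sketch. The revenue figure below is purely illustrative, not any actual company's:

```python
def dsa_max_fine(global_annual_revenue: float, cap_rate: float = 0.06) -> float:
    """Upper bound on a DSA fine: a fixed share of global annual revenue."""
    return global_annual_revenue * cap_rate

# Illustrative only: a platform with $100B in global annual revenue
# faces a ceiling of $6B under the DSA's 6% cap.
print(dsa_max_fine(100e9))  # 6000000000.0
```

For firms of this size, the cap turns a fine from a rounding error into a board-level risk, which is exactly the deterrent the regulation is after.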
Governments in Asia, such as China, India, and South Korea, are not far behind. China enacted the Cybersecurity Law (CSL) in 2017, encompassing personal information protection, network operator obligations, critical information infrastructure, and security reviews. Additionally, the Personal Information Protection Law (PIPL), in effect since November 1, 2021, strengthens personal information protection and delineates detailed obligations for personal information processors. These regulations are rigorously enforced. On July 21, 2022, the Chinese ride-hailing company Didi Chuxing (Didi) incurred a fine of RMB 8.026 billion (approximately USD 1.2 billion) for violating the CSL, the Data Security Law (DSL), and the PIPL simultaneously (link). This penalty is the largest ever imposed for breaching data protection regulations, surpassing Amazon's USD 877 million fine under the EU's General Data Protection Regulation.
India's Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules of 2011, which clarify the Information Technology Act of 2000 (as amended in 2008), notably Section 72A, stipulate penalties for intentionally or knowingly disclosing personal information acquired under a lawful contract without consent, or for breaching such a contract. More recently, India enacted the Digital Personal Data Protection Act of 2023 (DPDP Act) on August 11 to prevent illegal data processing, imposing fines of up to 250 crore rupees ($30 million) for noncompliance (link). Similar to the EU's GDPR, South Korea's Personal Information Protection Act (PIPA) regulates the collection, use, and processing of personal data. Significant amendments, implemented in early 2023 and effective September 15, 2023, enhanced the rights of data subjects, including the right to data portability and the right to opt out of automated decision-making; they also established new criteria for transferring personal data overseas and replaced many criminal penalties with administrative ones.
Even the U.S., which often takes a market-based approach to regulation, has seen some action lately. California leads the charge among state governments in curbing the actions of tech companies. Its most forward-thinking policy is the California Consumer Privacy Act (CCPA). Under the CCPA, California residents gain the right to know what personal information companies collect about them, the power to delete unauthorized personal data, and the option to opt out of the sale of their personal information. This legislation empowers consumers to control how their information is used and shared, affecting tech firms that engaged in covert data-sharing practices.
Even after all these legislative efforts, it is unclear whether big tech firms are affected "enough" to make real changes. In the U.S. specifically, observers argue that big tech firms leverage lobbying efforts and political contributions to stall policy decisions and the implementation of stricter laws.
Page 5: Navigating the Regulatory Landscape - Big Tech's Dance with Compliance and Consequences
"The DSA (Digital Services Act) is an important milestone in the fight against illegal content online. We are mindful of our heightened responsibilities in the E.U. as a major technology company and continue to work with the European Commission on meeting the requirements of the DSA." — Microsoft
As laws tighten their grip, we can watch how platforms respond to the new rules. There's an optimistic angle and a darker one to this story. On the bright side, the Big Tech antitrust movement is gaining ground. In 2022, Google began letting certain Android app developers handle payments directly, bypassing the Play Store's fees. And thanks to the Digital Markets Act, Google and Apple now allow E.U. users to sideload apps, breaking free from the confines of the Play Store and App Store.
In the realm of regulations like the DSA and the CCPA, Microsoft has committed to bolstering the safety of its Bing search engine. Apple is championing user privacy and security, while Google and Pinterest have signed on to the E.U.'s efforts. TikTok introduced a tool for reporting illegal content and pledged to give E.U. users clear explanations when their content is removed. Still, it's a fair question: are these tech giants genuinely complying, or is it all just talk without a solid plan? While they may appear to comply on the surface, there's a lack of comprehensive oversight to ensure a genuinely secure data environment.
The GDPR continues to loom large over companies, posing data-rights and data-minimization challenges. Despite the regulation being in effect since 2018, companies seem to struggle to fully adhere to its provisions. In 2019, Google was hit with a hefty $57 million fine for GDPR violations, primarily due to insufficient transparency in its data processing (link). Meta's situation is even more striking: in 2023 it was slapped with a jaw-dropping $1.3 billion fine for transferring data from the E.U. to the U.S. for business purposes. Surprisingly, these penalties haven't deterred Meta much, considering its whopping annual revenue of nearly $116 billion (link). On top of that, Meta has been racking up daily fines of 1 million Norwegian crowns (roughly USD 94,145) since August 14, 2023, for privacy infringements stemming from harvesting user data for targeted advertising, as reported by Reuters (link). Despite the weighty penalties, Meta is seeking a temporary injunction against the order, which mandates daily fines for three months.
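A back-of-the-envelope calculation, using the figures quoted above, shows why such daily fines barely register against Meta's revenue. The 90-day horizon is my own approximation of "three months":

```python
usd_per_day = 94_145      # approximate USD value of the 1M-crown daily fine
days = 90                 # rough approximation of three months of accrued fines
annual_revenue = 116e9    # Meta's reported annual revenue in USD, per the text

total_fine = usd_per_day * days
share_of_revenue = total_fine / annual_revenue

print(f"Total fine: ${total_fine:,}")                        # ≈ $8.5 million
print(f"Share of annual revenue: {share_of_revenue:.5%}")    # well under 0.01%
```

Under these assumptions, the full three months of fines amount to less than one ten-thousandth of annual revenue, which helps explain why observers doubt the deterrent effect.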
Page 6: Platform Users' Response to Policy Changes
While the General Data Protection Regulation (GDPR) aims to boost consumer data privacy, it has sparked debate. Government regulation may seem to curb businesses from playing fast and loose with private data, but users have mixed feelings about the GDPR's effects. A 2021 Pew Research Center study found that in countries where the GDPR is active, users make 21.6% more searches and browse 16.3% more pages to get the information they want (link). In my own research with my collaborators from UC Irvine (Donghwa Bae and Prof. Tingting Nian), we find similar results when analyzing the CCPA (link): privacy gains come at the cost of convenience, because companies can't personalize services to your tastes. Separately, the Pew Research Center found that 56% of U.S. adults advocate for increased government regulation of tech companies, while 42% support the tech firms, believing they can expand freely as long as they follow the rules. Opinions are divided, but the overarching objective remains clear: ensuring that actions align with the public interest, even if this entails some tolerance. The pivotal challenge, therefore, lies in striking a balance that works for both the industry and consumers.
Consider the Digital Millennium Copyright Act (DMCA), which obliges platforms to take down infringing material, such as pirated content and counterfeit goods, when rights holders complain. Platforms have layered their own restrictions on top: YouTube and Google, for example, restrict content unsuitable for those under 18. Most people support such measures, but critics argue that takedown regimes can shade into infringement of free speech, calling out government demands for censorship; Elon Musk has stated that Twitter is compelled to comply with such demands, even as the platform itself has loosened moderation to the point where you can post pretty much anything. There's a delicate balance here: governments must regulate platforms enough to curb harm while preserving convenience and free speech, but they can't let platforms run wild.
Page 7: Conclusion
In the internet and data age, it's becoming clear that there are no hard and fast rules or clear boundaries that can satisfy both the general public and tech platforms. Striking a balance between meeting the needs of the public, ensuring fair competition, and protecting innovation while preventing illegal activities presents a complex challenge.
One potential remedy lies in the concept of a "credible threat of regulation," a mechanism designed to nudge platforms toward responsible behavior. Under this approach, platforms self-regulate or collaborate with similar entities to establish independent third-party regulatory bodies. In a parallel to the environmental Sustainable Development Goals (SDGs), companies must demonstrate to the public, fellow businesses, and government authorities that they are committed to offering superior products and services while preserving the environment and minimizing pollution.
To gain public trust and showcase their commitment to fair competition and user privacy, companies could voluntarily subject themselves to external regulatory bodies empowered to impose substantial penalties, such as fines of 10 or 20 percent of annual revenue, far beyond the current 6 percent limit. Much like the transformative impact of the "Avery Rule" in the NHL, a new era of regulation is now upon us, decisively shaping the future of digital platforms and our digital lives.
This is part of the Digital Platform Policy Changes Insights series. This writeup focuses on platforms' responses to the involvement of external regulators. If you know someone who enjoys these kinds of posts, please share it with them :)
Research help from John Mai and Amanda Mooney