Democracy under attack: how it relates to digital platforms’ business model

Fake news – information propagated with the intention to mislead – is a weapon against democracy. Although it is not a phenomenon unique to the 21st century, the constant use of digital platforms, the massive processing of personal data, and a crisis of trust in reliable institutions have increased the likelihood that individuals will share misinformation. Platforms that used to be places where individuals published their own ideas, photos, and memes now distribute paid advertisements and biased content tailored to users' interests, based on complex algorithmic systems.

In this scenario, discussing how to deal with the spread of fake news is of paramount importance. Regulate it or not? What are the roles and responsibilities of each type of digital platform (e.g. social media, search engines, video-on-demand services, e-commerce)? How should each of them be regulated? Should they moderate content? Should they be held accountable? Should they have transparency duties? How can the power of those companies be restrained? What is the judiciary's role? These questions have been asked for years – whenever the subject is content moderation and digital platforms' liability – without any definitive answer. Any level of content moderation has a direct impact, positive or negative, on individuals' freedom of expression and, consequently, on democracy. The same applies to platforms' inaction: at certain times it may bring drastic consequences for democracy; at others it may be the recommendable attitude. The challenge is to balance both alternatives. This article addresses how the European Union and the United States of America are dealing with this scenario.

European Union

The General Data Protection Regulation (“GDPR”) was approved in 2016 in the European Union, replacing Directive 95/46/EC. The GDPR became applicable in 2018 and granted many rights to data subjects, including the right to be informed about how their personal data are being processed.

Since the business model of digital platforms relies on the processing of personal data to customize users’ experience and the distribution of information, this update of the European data protection framework was an important step towards enhancing privacy and, in the best-case scenario, protecting individuals from political propaganda.

In addition to the GDPR, the Digital Services Act (“DSA”), a new law that aims to regulate online platforms, including social media, was approved in the European Union and entered into force in November 2022.

In brief, the DSA sets forth that, under certain conditions, providers of intermediary services “cannot be held liable in relation to illegal content provided by the recipients of the service”. It also enhances the transparency of online platforms and establishes obligations aimed at a safe online environment.

According to the DSA, there are four categories of systemic risks that “should be assessed in-depth by the providers of very large online platforms and of very large online search engines”. One of them “concerns the actual or foreseeable negative effects on democratic processes, civic discourse and electoral processes, as well as public security” (Recitals 1, 80 and 82).

According to Article 34 of the DSA, “providers of very large online platforms and of very large online search engines shall diligently identify, analyse and assess any systemic risks in the Union stemming from the design or functioning of their service and its related systems, including algorithmic systems, or from the use made of their services”. When a systemic risk is identified, these companies must mitigate it by adopting one of the measures listed in Article 35 (e.g. adapting their terms and conditions and their enforcement; adapting their algorithmic systems).

Recently, the European Commission proposed the Artificial Intelligence Act (“AI Act”), “laying down harmonised rules on AI”. The proposal sets forth that “aside from the many beneficial uses of AI, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights (…)” (Recital 15). If approved, the AI Act should enhance transparency measures and protect citizens’ rights and democratic values.

United States of America

In the United States of America, there is no comprehensive federal data protection law. Data protection is regulated in some states (e.g. California) and by certain federal laws (e.g. the CLOUD Act and the Electronic Communications Privacy Act) that apply to specific contexts or industry sectors.

There is no DSA-like law regulating online platforms’ transparency or accountability duties for the purpose of fighting misinformation or protecting democracy. In addition, Section 230 of the Communications Decency Act of 1996 (“Section 230”) grants online platforms immunity from two perspectives: platforms are not liable for third-party content, nor for the removal of content in certain circumstances.

In 2020, the U.S. Department of Justice (“D.O.J.”) published a report about Section 230.[1] In summary, it suggests that when Section 230 was approved, it “was meant to nurture emerging internet businesses while also incentivizing them to regulate harmful online content.” However, the internet changed, and online platforms have become the most valuable companies in the world. Instead of continuing to be simple forums for posting, platforms nowadays “use sophisticated algorithms to promote content and connect users”. Moreover, according to the report, courts have been giving a broad interpretation to the Section, “diverging from its original purpose”. Thus, considering that these developments have brought both benefits and downsides to society, the report concluded that “the time has therefore come to realign the scope of Section 230 with the realities of the modern internet so that it continues to foster innovation and free speech but also provides stronger incentives for online platforms to address illicit material on their services.”

Along the same lines as the report above, in October 2022, a Blueprint for an AI Bill of Rights[2] was published by the White House Office of Science and Technology Policy “to support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems”.

Despite the important report published by the D.O.J. and the Blueprint for an AI Bill of Rights, in 2023 the United States Supreme Court ruled on cases in which families of terrorism victims sued Google, Twitter and Facebook to hold them liable for terrorist attacks, alleging that those companies had helped foster the attacks. Although the Supreme Court sidestepped the analysis of Section 230 and ruled that the plaintiffs had not proven their allegations against Google, Twitter and Facebook, the decision represents a relevant victory for the platforms.

Final remarks

Big tech platforms are no longer fragile, and democracies are in crisis all over the world. The impact of the platforms’ business model on public opinion, and therefore on democracies, is clear and raises the question: to regulate or not?

Only experience will show whether GDPR- and DSA-like laws will be sufficient to guarantee the level of data protection, accountability, and transparency necessary to protect democratic institutions without excessively burdening tech companies. There are reasons to believe this framework is socially desirable. However, practice may prove the very opposite: that the United States of America’s model is better suited to this purpose. The next years will be of utmost importance for analyzing which framework better protects society against antidemocratic regimes.

With regard to AI regulation, both jurisdictions appear to be aware of the risks that AI represents to democracy and of the importance of regulating the matter.

Finally, as can be learned from Simone Lahorgue, while data protection laws protect the individual in an individualized way, competition laws protect the individual in a collective way.[3] Thus, although this article did not address how competition authorities are dealing with big platforms, or whether it is necessary to update competition laws in light of the digital world, these questions also deserve further analysis, since democracies may be affected by the market power of digital platforms.


[1] Available at: <https://www.justice.gov/file/1286331/download>. Last accessed 24 May 2023.

[2] Available at: <https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf>. Last accessed 24 May 2023.

[3] Available at: “Economia digital, proteção de dados e concorrência”, Opinião, Valor Econômico (globo.com). Last accessed 24 May 2023.