Note on platform regulation



Marco Civil never prevented content moderation by platforms

There is a worrying reversal in the debate about combating misinformation and hate speech. Suddenly, the Marco Civil da Internet (Brazil's Internet Civil Rights Framework) has become responsible for the platforms' ineffectiveness in the face of attacks on science and on fact-based, quality information. The Marco Civil never existed in the United States or in the United Kingdom, and that did not stop us from witnessing a wave of disinformation that resulted in Brexit, the election of Donald Trump and the invasion of the Capitol.

The Marco Civil never prevented content moderation by platforms. Those who failed to contain the falsification of reality, the "click farms", and the proliferation of fascist groups and their hate speech were the owners of the platforms themselves. This is not simply because much of Silicon Valley sympathizes with the idea that unrestricted freedom of economic exploitation is incompatible with democracy, as do Peter Thiel, founder of PayPal, and Larry Ellison, co-founder of Oracle, both enthusiasts of the extreme right and of the so-called alt-right movement.

The platforms have an extremely successful revenue model, one that has given the Big Techs that control them market values in excess of US$1 trillion. What is the main dynamic of this business model? First, interfaces and services are offered for free in order to massively collect data from the people who use them. Second, these data are processed by algorithmic systems to build behavioral profiles and micro-segment the user population. Third, the platforms group these profiles so that they can be targeted with advertising by whoever has money: companies, marketing departments, political groups and others.

Thus, platforms monetize every second a person spends browsing their structures, which are designed to attract and modulate attention. Hence the logic of viralization, engagement and the sale of likes and paid boosts. The platforms' efforts are not aimed at providing quality information or protecting democracy. Their objective is the spectacularization that gets people to watch and share content. The impoverishment of debate that we see in world politics therefore owes a great deal to this viral logic, which depends on turning everything into something sensational.

When the Marco Civil is attacked, the claim is generally that the law prevented platforms from blocking lies and misinformation, and that the law should therefore require platforms to contain disinformation. So now we are going to give Big Tech the legal power to say what is and is not disinformation. As in the Cambridge Analytica scandal, the solution proposed for Facebook concentrated even more power in the company's management and did not reduce the disinformation process at all, as demonstrated by Frances Haugen, a former product manager at the social network.

In the second half of March 2023, anyone who visited the Democracy Now! channel on YouTube would come across a warning: "The YouTube community has identified the following content as inappropriate or offensive to some audiences." The video considered inappropriate was a report on Julian Assange, the leader of WikiLeaks who exposed United States war crimes. The same YouTube blocked the viewing of twelve episodes of the Tecnopolítica podcast. In neither case was there misinformation or hate speech, but the platform's managers saw fit to reduce views and block content. Interestingly, this is not done to channels of the extreme right, not even to the channel of former state deputy "Mamãe Falei". For YouTube, those channels do not violate its rules.

The necessary regulation of platforms should not increase their arbitrary power over content. We need a law that reduces that power and places the platforms under democratic control. Regulation must require disclosure of the data they collect, the cross-referencing they perform and the objectives of the algorithmic systems they use. The terms of use and privacy policies they publish are not enough to give democracies and societies basic information about how these operations act on social behavior.

Social networking platforms are not websites or blogs. They present themselves as public spaces, tied to no particular cultural, partisan, religious or commercial option. They do this to attract every audience and reach it with advertising and marketing. In this condition, the platforms must be subject to democratic oversight.

Since the day-to-day management of the platforms is carried out by algorithmic machine-learning systems, it is essential to evaluate the impact of the data processing they perform. At the very least, the purposes of the models they build must be clearly exposed, without ambiguity or euphemism, to those who are being modulated by them. The platforms' terms of use and privacy policies are too generic to reveal whether they are engaging in excessive, discriminatory or inappropriate data collection and processing.

Just as the Europeans are creating an Artificial Intelligence Council composed of AI experts and representatives of civil society, government and the market, the regulation of platforms, given their complexity, should advance toward a democratic, multi-stakeholder structure for enforcing rules on these social-modulation companies.

*Sergio Amadeu da Silveira is a professor at the Federal University of ABC. He is the author of, among other books, Free Software: The Fight for the Freedom of Knowledge (Conrad).

Originally published on the website Outras Palavras.

The A Terra é Redonda website exists thanks to our readers and supporters.
