The Gonzalez v. Google case

One of society’s main challenges today is to decide whether and how content should be moderated by digital platforms.
Last Wednesday (23/02), the Supreme Court of the United States began the second hearing of the Gonzalez v. Google case, the outcome of which may indicate whether or not social networks are responsible for content posted by their users that violates their policies and terms of use.
It is important to note that this case is not to be confused with Google Spain v. Mario Costeja González[1], in which Mario Costeja González filed a complaint with the Spanish Data Protection Agency against La Vanguardia Ediciones SL and against Google Spain and its parent company, Google Inc. He asked La Vanguardia to delete or alter the pages showing his personal data, so that the data would no longer appear, or to use certain tools made available by online search engines to protect it, and he also requested that Google Spain and Google Inc. be ordered to de-index the content showing his personal data.
In the case in question, on the other hand, the relatives of Nohemi Gonzalez – a young American woman killed in a terrorist attack claimed by the Islamic State (ISIS) on November 13, 2015, in Paris – filed a lawsuit against Google, Twitter and Facebook. They argued, in particular, that the algorithms of Google’s digital platform “YouTube” recommended the terrorist group’s content to certain users, and that the platform acted as a “recruitment platform for the terrorist group” by allowing the broadcasting of content aimed at recruiting members, planning terrorist attacks, issuing terrorist threats, instilling fear and intimidating civilian populations.
It is worth remembering that the attacks in Paris resulted in dozens of deaths, with three shootings recorded in different parts of the city – including an attack on the Bataclan concert hall[2].
In suing Google and the social networks, Gonzalez’s relatives claimed that the digital platforms, by allowing users to post radicalizing content, were legally responsible for the harm done to the family – since Google uses algorithms that suggest content to users based on their viewing history and therefore helped ISIS spread its message.
In deciding this lawsuit, the Justices of the Supreme Court of the United States may consider the application of the so-called Section 230 of the Communications Decency Act.
This provision protects digital platforms (Facebook, Google, Twitter, among others) from lawsuits over content posted by their users, or from decisions related to the removal of content.
Therefore, the US Supreme Court will be able to decide whether these companies can be held responsible for the content their users post, as well as for complicity with extremist propaganda and/or discriminatory advertising.
The lawyers for Gonzalez’s family claim that all the digital platforms (Google/YouTube, Facebook, Twitter) are responsible for aiding and abetting international terrorism: by not taking significant or aggressive measures to prevent terrorists from using their services, and by recommending and reinforcing the group’s content to users through their algorithms, the platforms became directly involved, even if they played no active role in the specific act of international terrorism. Such conduct, according to the family’s lawyers, would not be protected by Section 230.
Google’s defense argues that the Gonzalez family’s claims about the algorithms’ recommendations are vague and merely speculative.
The lawsuit is not expected to be decided until June 2023.
The issue of whether or not to moderate content on digital platforms and social networks is currently challenging governments and civil society, as the central focus is on how to combat hate speech and disinformation.
UNESCO (the United Nations Educational, Scientific and Cultural Organization) even held the international conference “For a Trustworthy Internet” from February 21 to 23, 2023.
The event took place in Paris and brought together representatives of governments and civil society to discuss how the UN agency can contribute to creating guidelines to regulate digital platforms, combating disinformation and hate speech while protecting freedom of expression and human rights.
In a nutshell, UNESCO’s aim with this conference was to encourage governments to promote and protect freedom of expression and human rights on the Internet, and for regulatory systems to guarantee independence and adequate supervision.
In this way, it is up to regulators to establish targets and processes, specifying the human rights-sensitive issues that digital platforms need to observe.
Recently, Brazilian Supreme Court Justice Luís Roberto Barroso, participating in this conference, stated that digital platforms should have a duty to act even before a court order in cases of illegal posts, especially content that violates the legislation protecting the democratic rule of law, which prohibits calls for the abolition of the rule of law, encouragement of violence to depose the government and incitement of animosity between the Armed Forces and the branches of government.[3]
“In the case of clear criminal behavior, such as child pornography, terrorism and incitement to crime, platforms should have a duty of care to use all possible means to identify and remove this type of content, regardless of (judicial) provocation,”[4] Barroso stressed.
In the case of the United States, however, the existence of the First Amendment makes this issue even more complex, since that provision prohibits the American Congress from enacting laws that restrict freedom of expression or of the press.
For critics of Section 230, this provision allows digital platforms to avoid being held responsible for harm to the community, even when such harm could be avoided if the platforms moderated content (for example, by removing a publication that supports a terrorist act).
Proponents, on the other hand, claim that if the Supreme Court relaxes this understanding, digital platforms, fearing liability and more lawsuits, could remove even more content, resulting in a greater threat to freedom of expression.
Finally, it is worth noting that the US Supreme Court’s ruling has the power to impact the responsibility of digital platforms – and whether or not they need to moderate content – in all other countries, considering the importance of the issue and the relevance of a decision by the country’s highest court.
It is therefore necessary to follow the developments and the reasoning behind the Justices’ votes in order to understand the outcome of a judgment that has the power to change an entire previously established liability regime.

AND HOW IS THE DEBATE ON THE SUBJECT GOING IN BRAZIL?

After the anti-democratic acts that took place on January 8, 2023, the federal government rushed to build a response to deal with this thorny issue, especially the posts on social media regarding threats against the democratic rule of law.
To this end, the Ministry of Justice worked on producing what it called the “Democracy Package”[5], in which it suggested to the Presidency of the Republic that a Provisional Measure (MP) be issued ordering digital platforms to remove anti-democratic content, or content that violates democratic values, even before obtaining a court order.
In addition, the Provisional Measure could criminalize conduct on the internet that constitutes an attack on the democratic rule of law, as well as hold digital platforms responsible for failing to take down terrorist and anti-democratic publications.
The Federal Attorney General’s Office has also proposed creating a body called the National Office for the Defense of Democracy, yet another initiative by the current government to combat the production and dissemination of fake news.
The Chamber of Deputies is also working on Bill 2.630/2020 (the “Fake News Bill”), which includes a series of measures to mitigate the spread of disinformation, as well as regulatory rules for platforms.
There is even the possibility of the Federal Government suggesting amendments to Bill 2.630/2020 to build a new proposal for regulating digital platforms and combating hate and anti-democratic speech.
This initiative will be coordinated by the Casa Civil (the presidential Chief of Staff’s Office) and will include the Ministries of Justice, of Science, Technology and Innovation, and of Culture, as well as the Secretariat of Communication and the Attorney General’s Office.
On the other hand, there are moves in the National Congress to create a Parliamentary Front in Defense of Social Networks, a request for which has already been submitted to the Chamber of Deputies.
According to the authors of this Parliamentary Front, the aim is to “support and defend responsible and free social media, reconciling freedom of expression with other constitutional rights”[6].

WHAT DOES THE CIVIL FRAMEWORK FOR THE INTERNET (MARCO CIVIL DA INTERNET) SAY?

The issue of freedom of expression, as we have already seen, is extremely delicate, since it is one of the central fundamental rights in a democratic state governed by the rule of law.
This guarantee is provided for in Article 5, items IV and IX, of the Federal Constitution, as well as in the Universal Declaration of Human Rights and other international treaties ratified by Brazil.
Federal Law No. 12.965/14 (Marco Civil da Internet), which establishes principles, guarantees, rights and duties for the use of the Internet in Brazil, also protects digital platforms in relation to content generated by third parties.
According to Article 19 of the Marco Civil da Internet, in order to protect freedom of expression, an internet application provider can only be held liable for damages arising from content generated by third parties if, after a specific court order, it fails to make the content identified as infringing unavailable within the scope and technical limits of its service and within the specified timeframe – subject to legal provisions to the contrary.
Therefore, this provision of Brazil’s specific internet law also protects digital platforms from being held broadly responsible for content generated by their users. A platform may even decline to remove content when doing so falls outside the technical limits of its service.
As such, the proposals on the agenda in Brazil – whether via Provisional Measure, bill or other instrument – as well as those being developed by government bodies, need to take this particular feature of the Marco Civil da Internet into account.
However, the unfolding of the Gonzalez v. Google case in the US courts could give new directions and contours to the discussion on whether or not digital platforms should moderate content.

[1] Source: https://victorhugotmenezes.jusbrasil.com.br/artigos/441755309/1-o-caso-google-spain-vs-mario-costeja-gonzalez
[2] Source: https://g1.globo.com/mundo/noticia/2015/11/tiroteios-e-explosoes-sao-registrados-em-paris-diz-imprensa.html
[3] Source: https://www1.folha.uol.com.br/poder/2023/02/barroso-defende-mudar-marcocivil-para-enquadrar-big-techs-por-conteudo-ilegal.shtml
[4] Idem
[5] Source: https://www.politize.com.br/pacote-da-democracia/
[6] Source: https://teletime.com.br/17/02/2023/deputado-propoe-criacao-da-frente-parlamentar-em-defesa-das-redes-sociais/

 
