Search results: 4 articles
Filtered on: Journal East European Yearbook on Human Rights
    This study explores the spread of disinformation relating to the Covid-19 pandemic on the internet, dubbed by some as the pandemic’s accompanying “infodemic”, and the societal reactions to this development across different countries and platforms. The study’s focus is on the role of states and platforms in combatting online disinformation.
    By synthesizing answers to questions submitted by more than 40 researchers from 20 countries within the GDHR Network, this exploratory study provides a first overview of how states and platforms have dealt with Covid-19-related disinformation. It can also provide impetus for further rigorous studies of disinformation governance standards and their impact across different socio-cultural environments.
    Regarding the platforms’ willingness and efficacy in removing (presumed) disinformation, a majority of submissions identify a shift towards more intervention in pandemic times. Most submitters report that this shift is widely welcomed in their respective countries and that it is more often seen as happening too slowly than as posing a danger of unjustified restrictions on freedom of expression. The picture is less clear when it comes to enforcing non-speech-related infection prevention measures.
    While the dominant platforms have been able to defend, or even solidify, their position during the pandemic, communicative practices on those platforms are changing. For officials, this includes an increasing reliance on platforms, especially social networks, for communicating infection prevention rules and recommendations. For civil society, the pandemic has brought an increasing readiness – and perceived need – to intervene against disinformation, especially through fact-checking initiatives.
    National and local contexts vary greatly in whether platform-driven disinformation is perceived as a societal problem. In countries where official sources are distrusted and/or seen as disseminating disinformation, criticism of private information governance by platforms remains muted. In countries where official sources are trusted, disinformation on platforms is viewed more negatively.
    While Facebook, Twitter, and Instagram play important roles in the pandemic communication environment, some replies point towards an increasing importance of messaging apps for the circulation of Covid-19-related disinformation. These apps, like Telegram or WhatsApp, tend to stay under the radar of researchers, both because the visibility of content is limited and scraping is difficult, and because they are not covered by Network Enforcement Act-type laws, which usually exclude one-to-one communication platforms (even if they offer one-to-many channels).
    Vis-à-vis widespread calls for a (re)territorialization of their content governance standards and processes amid the pandemic, platform companies have maintained, by and large, global standards. Standardized, featured sections for national (health) authorities to distribute official information via platforms are exceptions thereto.


Matthias C. Kettemann
Prof. dr. Matthias C. Kettemann, LL.M. (Harvard) is head of the research programme “Regulatory Structures and the Emergence of Rules in Online Spaces” at the Leibniz Institute for Media Research | Hans-Bredow-Institut.

Martin Fertmann
Martin Fertmann is a PhD student at the Leibniz-Institut für Medienforschung | Hans-Bredow-Institut’s research programme “Regulatory Structures and the Emergence of Rules in Online Spaces”.

    Terms-of-service-based actions against political and state actors, as both key subjects and objects of political opinion formation, have become a focal point of the ongoing debates over who should set and enforce the rules for speech on online platforms.
    With minor differences depending on national contexts, state regulation of platforms that creates obligations to disseminate such actors’ information is considered a danger to the free and unhindered discursive process through which public opinion is formed.
    Reactions to the suspension of Trump, not the first but the most widely discussed action of platform companies against a politician (and incumbent president), provide a glimpse of the state of platform governance debates across participating countries.
    Across the countries surveyed, politicians tend to view the content moderation practices of large platform companies very critically.
    The majority of politicians in European countries seem to be critical of the deplatforming of Trump, emphasizing fundamental rights and calling for such decisions to be made by states, not private companies.
    These political standpoints stand in an unresolved conflict with the constitutional realities of participating countries, where incumbents usually cannot invoke fundamental rights when acting in their official capacities and where laws with “must carry” requirements for official information do not exist for social media and would likely only be constitutional for narrowly defined, special circumstances such as disaster prevention.
    Facebook’s referral of the Trump decision to its Oversight Board sparked a larger debate about institutional structures for improving content governance. The majority of participating countries have experience with self- or co-regulatory press, media or broadcasting councils to which comparisons can be drawn, foreshadowing a possible (co-regulatory) future of governing online speech.
    Media commentators in participating countries interpreted the deplatforming of Trump as a signal that far-right parties and politicians around the world may face increasing scrutiny, while conservative politicians and governments in multiple participating countries instrumentalized the actions against Trump as supposed proof of platforms’ bias against conservative opinions.
    Even without specific legal requirements on content moderation, submissions from several countries refer to a general, often constitutional, privileging of the speech of politicians and office holders. This could support or even compel platforms’ decisions to leave political actors’ content up even if it violates their terms of service.


Martin Fertmann
Martin Fertmann is a PhD student at the Leibniz-Institut für Medienforschung | Hans-Bredow-Institut’s research programme “Regulatory Structures and the Emergence of Rules in Online Spaces”.

Matthias C. Kettemann
Prof. dr. Matthias C. Kettemann, LL.M. (Harvard) is head of the research programme “Regulatory Structures and the Emergence of Rules in Online Spaces” at the Leibniz Institute for Media Research | Hans-Bredow-Institut.
Article

Artificial Intelligence and Customer Relationship Management

The Case of Chatbots and Their Legality Framework

Journal: East European Yearbook on Human Rights, Issue 1, 2021
Keywords: artificial intelligence, chatbots, CRM, data protection, privacy
Authors: Konstantinos Kouroupis, Dimitrios Vagianos and Aikaterini Totka
Abstract

    In the new digital era shaped by the European digital strategy, the explosion of e-commerce and related technologies has generated tremendous volumes of customer data that can be exploited in a variety of ways. Customer relationship management (CRM) systems can now exploit these data sets to map consumers’ behaviour more effectively. As social media and artificial intelligence have widened their penetration, firms’ interest has shifted to chatbots as a means of serving their customers’ needs. Nowadays, CRM systems and bots are developed in parallel. With the help of these virtual personal assistants, CRM establishes a virtual relationship with consumers. However, the extended collection and use of personal data in this context may give rise to ethical and legal issues. In this article, the term CRM is defined, followed by an analysis of the way chatbots support CRM systems. In the second part, the legal context of chatbot use is examined in order to investigate whether personal data protection issues arise and whether certain rights or ethical rules are violated. The draft AI Regulation, in combination with the provisions of the GDPR and the e-Privacy Directive, offers a significant background for our study. The article concludes by demonstrating the use of chatbots as an inherent part of the new digital era and lays special emphasis on the term ‘transparency’, which seems to underpin the lawfulness of their use and to guarantee our privacy.


Konstantinos Kouroupis
Konstantinos Kouroupis: Assistant Professor of European and Data Rights Law, Department of Law, Frederick University, Cyprus.

Dimitrios Vagianos
Dimitrios Vagianos: Electrical & Computer Engineer, Laboratory Teaching staff, Department of International and European Studies, University of Macedonia, Greece.

Aikaterini Totka
Aikaterini Totka: Graduate Student, Department of International and European Studies, University of Macedonia, Greece.
Article

Beizaras and Levickas v. Lithuania

Recognizing Individual Harm Caused by Cyber Hate?

Journal: East European Yearbook on Human Rights, Issue 1, 2020
Keywords: hate speech, verbal hate crime, cyber hate, effective investigation, homophobia
Authors: Viktor Kundrák
Abstract

    The issue of online hatred or cyber hate is at the heart of heated debates over possible limitations of online discussions, namely in the context of social media. On the one hand, there is freedom of expression and the value of the internet in and of itself; on the other, the need to protect the rights of victims, to address intolerance and racism, and to uphold the overarching values of the equality of all in dignity and rights. Criminalizing some (forms of) expression seems problematic but, many would agree, is under certain circumstances a necessary or even unavoidable solution. However, while the Court long ago declared bias-motivated violence and direct threats unacceptable, triggering, under Articles 2, 3 and 8 in combination with Article 14 of the ECHR, the positive obligation of states to effectively investigate hate crimes, the case of Beizaras and Levickas v. Lithuania presented the first opportunity for the Court to extend such an obligation to the phenomenon of online verbal hate crime. This article will first address the concepts of hate speech and hate crime, including their intersection, and, through the lens of pre-existing case law, identify the key messages for both national courts and practitioners. On the margins, the author will also discuss the harm caused by verbal hate crime and the need to understand and recognize its gravity.


Viktor Kundrák
Viktor Kundrák has worked for the OSCE Office for Democratic Institutions and Human Rights (ODIHR) as a Hate Crime Officer since 2018. He has been responsible for ODIHR’s hate crime reporting, trained police, prosecutors and judges, and provided legislative and policy support at the national level. He is also a PhD candidate at Charles University in Prague. The views in this article are his own and do not necessarily represent those of ODIHR. Some of the opinions are based on an article published in Czech earlier this year (see V. Kundrák & M. Hanych, ‘Beizaras and Levickas v. Lithuania (Verbal Hate Crime on Social Network and Discriminatory Investigation)’, The Overview of the Judgments of the European Court of Human Rights, Vol. 3, 2020).