DOI: 10.5553/HYIEL/266627012023011001012

Hungarian Yearbook of International Law and European Law

Developments in European Law

Recontextualizing the Role of Social Media in the Formation of Filter Bubbles

Keywords: social media, filter bubbles, echo chambers, social polarization, Digital Services Act
Author: János Tamás Papp
Suggested citation
János Tamás Papp, 'Recontextualizing the Role of Social Media in the Formation of Filter Bubbles', (2023) Hungarian Yearbook of International Law and European Law 136-150

    One relatively popular area of scientific research on the social impact of social media is the phenomenon of filter bubbles and echo chambers. This notwithstanding, the true meaning of these concepts has not been precisely determined to date, and the social effects of the phenomena continue to be surrounded by heated debates. In this article, I briefly shed light on the contradictions underlying the theoretical and practical substantiation of filter bubbles and echo chambers. While the emergence of the phenomenon of filter bubbles seems logical from a theoretical point of view, its true presence cannot be discerned in reality. One reason for this is the user’s autonomous filtering activity; another is the widely available, diverse media environment. The social polarization that is increasingly experienced today emerged in the mutual interaction of traditional and online media: in the attention-based media environment promoted by the Internet, traditional media became more and more opinion-based, increasing the possibility of personalization, a process that was further exacerbated by social media. As a result, the press moved towards more polarized content production. Besides describing this process, in this paper I analyze the EU’s Digital Services Act, focusing on the solutions it offers to the problem of recommendation systems and filter bubbles. Finally, I explore the question of how quality news media content could help burst personal filter bubbles.


    • 1. Different Forms of Online Personalization

      It is perhaps not an overstatement to say that online platforms – and social media in particular – have fundamentally changed the flow of information within the public sphere. Mass media, which had played a decisive role since the second half of the twentieth century, has largely been replaced by individualized, personalized content consumption. The opportunity for mass communication previously open to a few has become available to almost anyone, and with this, a hitherto unimaginable volume of speech is in daily circulation. More than 500 hours of video is uploaded to YouTube every minute.1x Hours of video uploaded to YouTube every minute, see at www.statista.com/statistics/259477/hours-of-video-uploaded-to-youtube-every-minute/. This means that so much content is generated per minute on a social media site that even if the user wanted to view it without some kind of content-sorting system, the user experience would be chaotic. Social media sites, therefore, optimize their content with sorting algorithms2x Social media algorithms are a set of computational procedures used by social media platforms to curate and present content to users based on various factors such as their past behavior, connections, and preferences. These algorithms help in personalizing user experience, showing relevant content, and facilitating engagement on the platform. See Yakun Li et al., Social media recommendation algorithm based on user’s interest and behavior, at www.ncbi.nlm.nih.gov/pmc/articles/PMC6619984/. to improve the user experience. However, this sorting is done in line with the platform’s interests, the main purpose of which is

      “to allow the user to spend as much time as possible on their platform (to watch ads), and to this end the algorithm filters and presents content that matches your previous searches and preferences.”3x Ágnes Veszelszki, ‘deepFAKEnews: Az információmanipuláció új módszerei’, in László Balázs (ed.), Digitális kommunikáció és tudatosság, Hungarovox, Budapest, 2021, p. 96.

      The content offered by platforms is therefore personalized for everyone and is tailored to the friends or interests of the respective user and their online activity on and off that platform.4x Birgit Stark et al., Are Algorithms a Threat to Democracy? The Rise of Intermediaries: A Challenge for Public Discourse, Algorithm Watch, 2020, p. 12, at https://algorithmwatch.org/en/wp-content/uploads/2020/05/Governing-Platforms-communications-study-Stark-May-2020-AlgorithmWatch.pdf. Social networking sites can, moreover, use two different forms of personalization. First, as mentioned above, an opaque algorithm determines what content should appear in the user’s feed. Secondly, although people interact with different categories of acquaintances (friends, family, colleagues, etc.) on social media sites, most of them typically keep in touch with people who have similar interests. If someone’s personal and other interests on a given platform are fairly homogeneous, it means that the content shared by friends or the pages they follow is also in line with said person’s preferences.5x Frederik J. Zuiderveen Borgesius et al., ‘Should we worry about filter bubbles?’ Internet Policy Review, Vol. 5, Issue 1, 2016, p. 7. Thus, we may distinguish between two main types of personalized content presentation: (i) personalization set individually by the user, and (ii) personalization developed by algorithms and other recommendation systems. These may be referred to as voluntary or pre-selected personalization,6x Richard Fletcher, The truth behind filter bubbles: Bursting some myths, at https://reutersinstitute.politics.ox.ac.uk/news/truth-behind-filter-bubbles-bursting-some-myths. or otherwise ‘explicit’ or ‘implicit’ personalization.7x Borgesius et al. 2016, p. 3.
      The latter, namely, personalization carried out by algorithms, has long been the dominant focus of scientific inquiry into online platforms. By now it is perhaps clear to everyone that social media platforms not only operate as passive transmitters of information, but also actively “navigate users to the content they define worthy”.8x Tarleton Gillespie, ‘The Politics of Platforms’, New Media & Society, Vol. 12, Issue 3, 2010, p. 348. This is because they not only carry out moderation activities when it comes to content, i.e. filtering illegal content, but also act as a kind of ‘curator’ sorting through the vast amount of user-generated content to create a user’s individual timeline.9x Tarleton Gillespie, ‘The Relevance of Algorithms’, in Media Technologies: Essays on Communication, Materiality, and Society, 2017, p. 167. These two activities are distinguished by Sofia Grafanaki as curatorial activities “for the purpose of hosting legality” and those “for navigation”.10x Sofia Grafanaki, ‘Platforms, the First Amendment and Online Speech Regulating the Filters’, Pace Law Review, Vol. 39, Issue 1, 2018, p. 119. The former activity is geared towards deciding whether the content can remain on the platform at all, while the latter determines which content to draw users’ attention to, which content to prioritize and which content to neglect.11x Id. p. 126. Consequently, the latter activity ensures that the timelines of users, without exception, are created on the basis of some unique compilation; in other words, they are displayed in a customized way. And ‘customized’ in this sense, according to Paul Bernal, is synonymous with ‘manipulated’, since the timeline of users is compiled along financial-economic, marketing or political goals, and as such, it is not a neutral, influence-free content service.12x Paul Bernal, The Internet, Warts and All, Cambridge University Press, Cambridge, 2018, p. 92.
      It is human decisions that underlie the algorithms. The operation of algorithms is based on predefined rules, which decide in any given case how to classify the information to be displayed. For an algorithm to be able to decide on the classification, it must evaluate the information according to predefined criteria. However, this evaluation is based on human decisions, which are necessarily rooted in some kind of value judgment. In other words, on one side of the process carried out by the algorithm, there are value judgments regarding the usefulness or value of certain information, and then, based on these principles, the decision regarding individual content is made on the other side of the algorithm. These principles can, of course, be value-neutral or purely technical in nature (e.g. how many people liked the given post or how many comments it received, how often we interact with the person who shared it, etc.). However, most algorithms make decisions based on much more sophisticated and complex parameters. There is no other way for algorithms to operate: on the one hand, purely technical algorithms are easily recognized and exploited by content producers; on the other hand, it can represent a market advantage if the algorithm can ‘guess the user’s taste’ as accurately as possible. Therefore, algorithms must also take into account parameters such as the topic of the given content, its political orientation, the source it stems from and its divisiveness, to name just a few aspects. Consequently, these parameters necessarily hinge on some preliminary subjective decision. Should, for example, a platform attempt to classify an online news portal based on its political orientation, it must make a subjective (human) decision. On 11 January 2021, Adam Mosseri, head of Instagram, wrote on Twitter that “We’re not neutral. No platform is neutral. We all have values, and those values influence the decisions we make.”13x Cited by Seth Oranburg, ‘Social Media and Democracy after the Capitol Riot – The Cautionary Tale of the Giant Goldfish’, Mercer Law Review, Vol. 73, Issue 2, 2022, p. 594. Some scholars have even given the phenomenon a name, labelling it computational propaganda, which they believe is a new form of political manipulation that takes place over the Internet and seeks to influence public opinion through the combined operation of social media platforms, bots and algorithms.14x Samuel C. Woolley & Philip N. Howard (eds.), Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media, Oxford Studies in Digital Politics, New York, 2018, pp. 2-5.
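      To make this abstract point tangible, the following sketch illustrates how such value judgments can end up as hand-picked weights in a ranking function. It is a minimal, hypothetical illustration written for this paper: the parameters, weights and names are my own assumptions and do not describe any real platform’s algorithm.

      # Illustrative sketch only: a hypothetical feed-ranking score in which
      # 'technical' signals (likes, comments, familiarity) sit next to value-laden,
      # editorially chosen ones (topic relevance, divisiveness, source alignment).
      # All weights below are arbitrary assumptions, not any platform's real values.
      from dataclasses import dataclass

      @dataclass
      class Post:
          likes: int
          comments: int
          author_familiarity: float   # 0..1: how often the user interacts with the author
          topic_relevance: float      # 0..1: estimated match with the user's interests
          divisiveness: float         # 0..1: how polarizing the content is judged to be
          source_alignment: float     # 0..1: how closely the source matches the user's presumed leaning

      WEIGHTS = {                     # human-chosen numbers: this is where the value judgment lives
          "likes": 0.2,
          "comments": 0.3,
          "author_familiarity": 1.5,
          "topic_relevance": 2.0,
          "divisiveness": 0.8,        # engagement-driven systems may even reward divisive content
          "source_alignment": 1.0,    # a positive weight favors ideologically congenial sources
      }

      def score(post: Post) -> float:
          """Combine the signals into a single ranking score; higher means shown earlier."""
          return (
              WEIGHTS["likes"] * post.likes
              + WEIGHTS["comments"] * post.comments
              + WEIGHTS["author_familiarity"] * post.author_familiarity
              + WEIGHTS["topic_relevance"] * post.topic_relevance
              + WEIGHTS["divisiveness"] * post.divisiveness
              + WEIGHTS["source_alignment"] * post.source_alignment
          )

      def rank_feed(posts: list[Post]) -> list[Post]:
          """Order the candidate posts of a user's timeline by descending score."""
          return sorted(posts, key=score, reverse=True)

      Even in this toy version, changing a single weight (for instance, setting source_alignment to zero or making divisiveness negative) changes what every user sees first; this is precisely the kind of non-neutral, human-made choice described above.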

    • 2. The Concept of Filter Bubbles and Echo Chambers

      In connection with this personalization, we often hear the terms filter bubble and echo chamber, which are mainly referred to in connection with the (harmful) influence of platforms on users. The echo chamber metaphor was first coined in 2001 by Cass Sunstein, when he described how group dynamics in opinion-forming processes produce personalized information environments that consistently echo the individual’s own views.15x See Cass R. Sunstein, Republic.com, Princeton University Press, Princeton, 2001. According to Sunstein, in a democratic society, people need to encounter information that differs from their own opinions in order to be able to fully form their own point of view. Otherwise, people may enter the “spiral of affirmation of attitudes” and drift towards more extreme points of view.16x Cass R. Sunstein, ‘The Law of Group Polarization’, The Journal of Political Philosophy, Vol. 10, Issue 2, 2002, p. 178. A report by a group of experts from the EU also mentions that “internet filtering mechanisms, given their increasing personalization potential, could create more isolated and less engaged social communities within the general public.”17x Vīķe-Freiberga et al., A free and pluralistic media to sustain European democracy, The Report of the High Level Group on Media Freedom and Pluralism, 2013, p. 31. The phenomenon thus suggests that people exclusively or overwhelmingly encounter news that reflects their interests, opinions and beliefs, i.e. that they are stuck in ‘echo chambers’ on social media.18x Katrina Lee, ‘Your Honor, on Social Media: The Judicial Ethics of Bots and Bubbles’, Nevada Law Journal, Vol. 19, Issue 3, 2019, p. 808. According to research, people repeatedly exposed to biased content that favors a political position close to their own will eventually develop a more extreme stance and will be less tolerant of opposing viewpoints.19x Borgesius et al. 2016, p. 8.
      Another commonly cited expression in relation to echo chambers is ‘filter bubble’. Filter bubble is a term coined by Eli Pariser, and it refers to a state of mental isolation that can arise from personalized searches when a website’s algorithm guesses what information the users want to see based on the information acquired about them (such as location, past click behavior, and search history).20x See Eli Pariser, The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think, Penguin Publishing, New York, 2012. As a result, users encounter little or no information that does not fit into their perspective, effectively closing them into their own cultural or ideological bubbles.21x Cass R. Sunstein, Republic.com 2.0, Wolters Kluwer, Budapest, 2013, p. 18. Some studies have shown that users prefer news that makes them feel good and confirms their existing opinions and biases.22x Cass R. Sunstein, #Republic: Divided Democracy in the Age of Social Media, Princeton University Press, Princeton, 2017, pp. 104-108. The way social networking sites work and our ability to quickly and easily share our views with crowds through them further deepens existing cleavages between individual social groups, with new bubbles being created every day.23x Neill Fitzpatrick, ‘Media Manipulation 2.0: The Impact of Social Media on News, Competition, and Accuracy’, Athens Journal of Mass Media and Communications, Vol. 4, Issue 1, 2018, p. 56. According to some authors, the existence of filter bubbles thereby threatens democracy itself.24x Sara J. Benson, ‘@PublicForum: The argument for a Public Forum Analysis of Government Officials’ Social Media Accounts’, Washington University Jurisprudence Review, Vol. 12, Issue 1, 2019, p. 109. Daniel Maggen makes three main claims about filter bubbles. First, he notes that most of the scholars working on the subject focus purely on possible errors in algorithms, ignoring the dangers inherent in the typical operation of algorithms. Secondly, he points out that filter bubbles also cause policymakers to consider a narrower range of information, so their decisions are often themselves influenced by filter bubbles. Finally, he asserts that of the two main activities of filter bubbles, i.e. recommending content in harmony with a particular worldview and excluding information that may be contrary to it, the latter is the much larger problem.25x Daniel Maggen, ‘Law In, Law Out: Legalistic Filter Bubbles and the Algorithmic Prevention of Nonconsensual Pornography’, Cardozo Law Review, Vol. 43, Issue 5, 2022, pp. 1751-1753. Some studies go so far as to conclude that the phenomenon of online filter bubbles may have a negative impact even on judicial impartiality.26x Lee 2019, p. 790.
      Although Pariser originally formulated his thesis mainly in relation to search engines, lately he has also discussed the phenomenon in connection with social media sites.27x Eli Pariser, ‘Did Facebook’s Big Study Kill My Filter Bubble Thesis?’ Wired, 7 May 2015, at www.wired.com/2015/05/did-facebooks-big-study-kill-my-filter-bubble-thesis. However, in its new, social media-focused incarnation, the filter bubble concept is increasingly intertwined with the concept of echo chambers. Many articles now use the two terms interchangeably.28x Axel Bruns, ‘Filter Bubble’, Internet Policy Review, Vol. 8, Issue 4, 2019, at https://policyreview.info/concepts/filter-bubble. As Axel Bruns points out, even though quite a lot has been written about this phenomenon, neither Sunstein nor Pariser clearly defined the concepts of filter bubbles and echo chambers; hence, in subsequent research, scholars used the concepts either interchangeably or in rather confusing ways.29x Axel Bruns, It’s Not the Technology, Stupid: How the ‘Echo Chamber’ and ‘Filter Bubble’ Metaphors Have Failed Us, Digital Media Research Centre (QUT), 2019, p. 3. In his opinion, therefore, the

      “debate about ill-considered metaphors such as ‘echo chambers’ and ‘filter bubbles’ is a distraction that we can no longer afford, because it keeps us from confronting far more important matters head-on.”30x Id. p. 9.

      Even if we do not fully consider the phenomenon of filter bubbles and echo chambers to be a ‘distraction’, it should be noted that the two concepts now describe the same phenomenon in both scientific and public discourse; therefore, the present study treats the two terms as identical and refers to them interchangeably.

    • 3. Lack of Practical Justification of the Concept

      The scientific controversy surrounding filter bubbles stems from the fact that it is very difficult to empirically investigate the existence of filter bubbles and echo chambers, so the conclusions formulated in this regard are often disputed.31x Judith Moeller & Natali Helberger, Beyond the filter bubble: concepts, myths, evidence and issues for future debates, University of Amsterdam, Amsterdam, 2018, at https://pure.uva.nl/ws/files/29285427/beyond_the_filter_bubble_concepts_myths_evidence_and_issues_for_future_debates_1_.pdf. Some researchers report a high level of clustering along political lines, characterizing social media platforms as echo chambers, while others found that online ideological segregation is low in absolute terms, and that the online space rather serves as a context for open ideological discourse and exposure to ideological diversity.32x Pablo Barberá et al., ‘Tweeting From Left to Right: Is Online Political Communication More Than an Echo Chamber?’, Psychological Science, Vol. 26, Issue 10, 2015, p. 1532. To cite another example, one study asked internet users how they use various search sites, social media and other media to obtain political information. The results showed that using social networking sites or internet search is only one aspect of obtaining political information. This means that those interested in politics are not caught in a bubble by online portals. At the same time, the researchers did not find empirical evidence to support the existence of the phenomenon of echo chambers, as the majority of those surveyed check problematic or surprising political information, thereby exposing themselves to different points of view. Thus, according to the researchers, certain views about social networking sites and bubbles not only overestimate technical aspects, but also underestimate how widespread the conscious use of the Internet, social media and search engines has become in society.33x William H. Dutton et al., Social Shaping of the Politics of Internet Search and Networking: Moving Beyond Filter Bubbles, Echo Chambers, and Fake News, at https://ssrn.com/abstract=2944191, p. 1.
      A 2015 study involving 3.8 million Twitter users concluded that the degree of ‘ideological segregation’ present on social media had been overestimated. The researchers concluded that the use of social media as a means of communication within interpersonal networks is not inevitably bounded by ideological contours, especially when it comes to nonpolitical issues and events. The results of the research showed that while liberal-minded users tend to retweet tweets from other liberal users, conservatives are particularly likely to retweet tweets from other conservatives. However, non-political topics such as the 2014 Winter Olympics, the Boston Marathon bombing or the 2014 Super Bowl all crossed ideological boundaries.34x Barberá et al. 2015, p. 1537. According to Katrina Lee, even if a scientific study – one that only looks at Twitter, for example – finds evidence of echo chambers and political polarization, that does not necessarily mean that a particular user lives in an echo chamber. She points out that many studies did not consider other sources of information that the user can access, such as face-to-face conversations, newspapers, or television. For this reason, she says, social media has played a smaller role in political polarization than was feared in the early years of social media.35x Lee 2019, p. 809.
      However, much of the research on the subject ignores the explicit personalization mentioned earlier: on social media, as in real life, individuals do encounter varied information and perspectives, but from the content that appears they themselves choose which information to read in detail, so that they can create their own echo chambers without the help of algorithms. Research conducted by the University of Michigan also emphasized that exposure to information that matches the attitudes of individuals is not solely caused by bubbles created by algorithms but also by the users’ own decisions.36x Eytan Bakshy et al., ‘Exposure to ideologically diverse news and opinion on Facebook’, Science, Vol. 348, Issue 6239, 2015, pp. 1130-1132. This is because the effects of implicit personalization can be counterbalanced by other forces. For example, people who consciously choose content on their own and consume mostly pre-selected content in their Facebook feed can still be avid users of non-personalized news sites or news channels.37x Borgesius et al. 2016, p. 10. Other authors are also of the view that today, in a large-scale, wide-variety media environment, only a very small part of the population is likely to find themselves in an echo chamber. They claim that the reason why most studies examining echo chambers and filter bubbles are flawed is that they fail to test the theory in the real context of a vastly heterogeneous media environment.38x Elizabeth Dubois & Grant Blank, ‘The echo chamber is overstated: The moderating effect of political interest and diverse media’, Information, Communication & Society, Vol. 21, Issue 5, 2018, p. 744.
      Despite all of the above, however, it can at least be said that the arrangement of information through algorithms can certainly have an impact on users and, consequently, on the democratic public sphere. Empirical research has also shown that, from the point of view of individual users, social media content is much less diverse and balanced,39x Misinformation and biases infect social media, both intentionally and accidentally. See at https://theconversation.com/misinformation-and-biases-infect-social-media-both-intentionally-and-accidentally-97148. which effectively contributes to the polarization of society.40x Political bias on social media emerges from users, not platforms. See at https://research.impact.iu.edu/key-areas/social-sciences/stories/social-media-platform-bias.html. Ágnes Veszelszki is also of the opinion that although the filter bubble phenomenon may be called into question based on empirical data, it can certainly be argued that the algorithms operating behind online interfaces influence what information a person encounters on the Internet. In her view, even if we were to reject the existence of the filter bubble phenomenon, the so-called repetition effect is still at work. This means that where the user encounters the same information several times, even from the same source, in a redundant way, it will become much more convincing to them through repetition.41x Veszelszki 2021, p. 96.
      In addition, it should be emphasized that it is not only the information displayed that can significantly impact society, but also its placement, namely, when certain information is placed further down the timeline or is hidden altogether. Thus, we can see that although the phenomenon of filter bubbles seems logical from a theoretical point of view, its actual existence cannot be discerned in reality. One reason for this is the user’s autonomous filtering activity (i.e. explicit personalization), another reason is the widely available, diverse media environment. In his study, Swedish researcher Peter M. Dahlgren puts forward 9 reasons to refute the existence of the filter bubble phenomenon, the most important of which are the following: the digital behavior of users does not necessarily match their real preferences; the fact that people seek connection with like-minded individuals does not mean that they avoid those who think differently; different media platforms satisfy different needs; and finally, the concept of filter bubbles is still very confusing in scientific research. He notes that it is very easy to point the finger at online platforms and ‘say something provocative’ about their power to destabilize democracy. However, it is much more difficult to substantiate this claim, especially when the users themselves are capable of shaping how the technology works. He points out that it may, for example, be true that personalized algorithms can lead to the formation of filter bubbles from a technical point of view. However, the filter bubble thesis and the related literature go beyond the technological side of the issue and draw far-reaching conclusions about the long-term impact on society and democracy. Most of these conclusions, however, fly in the face of empirical research or at least lack support in evidence.42x Peter M. Dahlgren, ‘A critical review of filter bubbles and a comparison with selective exposure’, Nordicom Review, Vol. 42, Issue 1, 2021, pp. 15-33.

    • 4. Polarizing Effect of Media and Online Platforms

      If users conduct a significant portion of their social life through social networks, in other words, if they connect with friends and family, inform themselves and consume entertainment content there, then the influence exerted by social networking sites on them is also likely to be much greater. Where personal experience is reduced to social media, its ability to influence increases radically, and this phenomenon seems to be unfolding as online platforms are increasingly becoming the primary platform for reading the news.43x Erin C. Carroll, ‘Making News: Balancing Newsworthiness and Privacy in the Age of Algorithms’, Georgetown Law Journal, Vol. 106, Issue 1, 2017, p. 82. Studies show that half of US adults get news at least sometimes from social media, and that three-quarters of the Hungarian ‘Facebooking’ population uses the service for news consumption at least weekly and is mainly informed by the news that is presented to them there.44x Bernát Török et al., Internetes attitűdök – Régiónk és a világháló, Nemzetközi Közszolgálati Egyetem, 2020, at https://www.ludovika.hu/wp-content/uploads/2020/12/EJKK_ITKI_Kozep-Europa_kutatasi_jelentes_v.pdf. It follows that online platforms are transforming the flow of news and information, having a direct impact on journalism and the mass media, thereby shaping their operating practices. Although many people think of the algorithms used by platforms as unbiased organizing tools based on simple mathematical operations, there are human factors behind them, and their creators seriously influence journalism through their activities. As such, they serve as a kind of virtual bouncer for the public, deciding what information may enter the ‘community house party’. Mike Ananny and Kate Crawford describe this phenomenon as though the designers of platforms and algorithms acted as a kind of ‘liminal press’.45x Mike Ananny & Kate Crawford, ‘A Liminal Press’, Digital Journalism, Vol. 3, Issue 2, 2015, pp. 192-208.
      As the role of the information gatekeeper shifts from journalists in news agencies to coders and programmers, the nature of the press and the news it produces is also changing.46x Carroll 2017, p. 71. Just as television does not directly influence the activity of viewers, but does so in the long run, in a much more subtle way,47x Kimberlianne Podlas, ‘Reconsidering the Nomos in Today’s Media Environment’, Touro Law Review, Vol. 37, Issue 4, 2022, pp. 2219-2220. algorithms do not exert their effects immediately, either. Lam Thuy Vo identifies three new trends in this area emerging from the activities of online platforms: (i) information segregation, i.e. information does not reach everyone in the same way; each user has only a personalized, narrowed circle of news; (ii) the consumption of information has become a political act, because the platform shapes our profile according to the way we consume or react to information, so that each reaction (or lack thereof) conveys our political opinion to the platform; (iii) as a consequence, parallel realities of different users appear.48x Lam Thuy Vo, ‘How the Internet Created Multiple Publics’, Georgetown Law Technology Review, Vol. 4, Issue 2, 2020, p. 412.
      Nowadays, the news press and cable television have also become more opinion-based, meaning that the phenomenon of polarization can also be observed in the context of traditional media. However, this is not a new phenomenon, because a certain degree of personalization has always been present in the press and with the appearance of cable television this became tangible on the television market as well. Viewers can choose the content they want to watch or read based on their own preference and avoid channels or forums that express opinions contrary to theirs. And viewers take advantage of this opportunity to watch channels more in line with their own views.49x Podlas 2022, p. 2240.
      The responsibility for creating parallel realities, therefore, does not lie solely with online platforms. For users to be caught in a bubble of politically or ideologically biased content, someone must also produce this content. The rise of the Internet has not yet marked the end of news consumption through television and the press, and research shows that news consumption via social media and the Internet has not replaced, but only complements, television news consumption. It may seem strange that, despite all this, we use the possibility of personalization much more, i.e. we consume more news today than ever before, but from far fewer and less diverse sources.50x Id. p. 2247. Consequently, users think that because they read a lot of news, they are well-informed, which gives them the illusion of a healthy marketplace of ideas.51x Madhura Bhandarkar, ‘Legality of Social Media Algorithms: How They Shape Our Minds and Democracy’, International Journal of Law Management & Humanities, Vol. 4, Issue 1, 2021, p. 1919. The vast majority of users recognize bias in a news item or news source and treat the information obtained in this way with that in mind, meaning they consistently engage in ‘selective’ news sharing on social media sites. This means that they typically read and share news that is more aligned with their own ideology; at the same time, they are not ‘naïve newsreaders’ who share every piece of information that comes their way. Thus, the potential negative effects of social media on social polarization are mainly manifested when the news (source) in question is itself biased.52x Kirill Pogorelskiy & Matthew Shum, News We Like to Share: How News Sharing on Social Networks Influences Voting Outcomes, at https://ssrn.com/abstract=2972231, p. 3.
      As a result of the above phenomena, it is not only the consumption of news that is becoming polarized, but the entire ‘news media’ is undergoing transformation, with television channels, press products and the online press operating in a way defined by ideologies.53x Podlas 2022, p. 2243. Of course, this does not absolve online platforms of liability, as robots and bubbles on social media pose potentially unprecedented risks that cannot be found in other types of media, such as television or radio. These higher risks stem from the speed and frequency with which social media networks are capable of distributing and amplifying low-quality content and inaccurate information.54x Lee 2019, p. 791. The scale of the ranking, moderation, filtering, removal and monetization policies of Facebook, Twitter and other platforms is clearly beginning to resemble editorial responsibility. Consequently, news is becoming increasingly polarized, as astonishing, shocking news generates the engagement news sites need to survive in this new environment.55x Oranburg 2022, p. 610. Journalists are now interested in attracting more viewers and generating as much user interaction as possible under their articles, not in whether the article in question is a product of quality journalism.56x Carroll 2017, p. 70. However, most interactions and comments come from gimmicky, clickbait, often inaccurate articles, formulated in a way to elicit intense emotional reactions. In addition, it is also clear that people’s trust in the media is steadily declining, owing to the fact that today anyone can be a journalist. And in this regard, some journalists even say that the very idea of ever having an informed public is in danger.57x Carol Pauli, ‘The “End” Of Neutrality: Tumultuous Times Require a Deeper Value’, Cardozo Journal of Conflict Resolution, Vol. 23, Issue 3, 2022, p. 566.

    • 5. What Can We Do about Bubbles?

      There is a relatively widespread view that the Internet has created parallel realities. One of the results is that different groups of users see completely different content, so there is less and less overlap between the information that appears, and less and less of a common social experience. Without these, a heterogeneous society will have a harder time dealing with social problems. People can come to see each other as different, alien beings and even enemies, which contributes to an even deeper rift within the different spheres of society.58x Cass Sunstein, Is Social Media Good or Bad for Democracy?, at https://sur.conectas.org/en/is-social-media-good-or-bad-for-democracy/.
      Even if, as explained above, it is not entirely true that this development is down to online platforms – since it is not only the Internet that has created parallel realities – it cannot in any way be said that the Internet, and online platforms, have merely been idle and innocent onlookers in this process. This is because even if they did not create them themselves, online platforms deepen and perpetuate the process of social polarization and can rightly be expected to take action against this development with the means at their disposal. The issue of regulating online platforms and social media is complex, and here I merely want to address the issue of combating filter bubbles and echo chambers.59x For details, see János Tamás Papp, A közösségi media szabályozása a demokratikus nyilvánosság védelmében, Wolters Kluwer, Budapest, 2022.
      Social media platforms are the new city centers where users have their own soapbox.60x Rachel Casey, ‘John Stuart Mill and Social Media: Evaluating the Ethics of De-Platforming’, University of Central Florida Department of Legal Studies Law Journal, Vol. 4, Issue 1, 2021, p. 33. Thus, they require an entirely new regulatory approach. Since these platforms operate on a global scale, effective regulation should be achieved mainly, albeit not exclusively, on a transnational level. In recent years, the EU has recognized the dangers posed by online platforms and is employing various tools to remedy the problem. These include self-regulatory codes,61x E.g. Code of Practice on Disinformation, at https://ec.europa.eu/info/strategy/priorities-2019-2024/new-push-european-democracy/european-democracy-action-plan/strengthened-eu-code-practice-disinformation_hu. directives62x E.g. Directive (EU) 2017/541 on combating terrorism and Directive (EU) 2019/790 on copyright and related rights in the Digital Single Market. and regulations,63x E.g. P2B – Regulation (EU) 2019/1150 on promoting fairness and transparency for business users of online intermediation services. but the most significant piece of legislation, the real gamechanger, is the Digital Services Act (DSA).64x Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a single market for digital services and amending Directive 2000/31/EC (Digital Services Act, DSA).
      The text of the DSA does not include the terms filter bubble or echo chamber, but it does identify and, to some extent, regulate the content recommendation systems of online platforms. A recommender system is defined as any

      “fully or partially automated system used by an online platform to suggest in its online interface specific information to recipients of the service or prioritize that information, including as a result of a search initiated by the recipient of the service or otherwise determining the relative order or prominence of the information displayed.”65x DSA, Article 3(s).

      Based on this, algorithms that lock users in a bubble are considered to be recommender systems, whether they provide search results, offer specific content, or merely determine the relative order in which content is displayed. The parameters of such recommender systems should be clearly and comprehensibly defined in the terms and conditions,66x Id. Article 27(1). and where the platform uses more than one such recommendation method, the user should be given a choice as to which one they wish to use on the service.67x Id. Article 27(3). In other words, users should be provided with information explaining why certain information is being recommended to them, i.e. the criteria that are most relevant to the recommendations.68x Id. Article 27(2). In addition, providers of very large online platforms should offer an option to convey information to users without any profiling,69x Id. Article 38. they should ensure the transparency of their algorithms70x Id. Article 69. and advertising systems,71x Id. Article 26. and certain vetted researchers should be given access to additional data on recommender systems.72x Id. Article 40. All of these measures cannot have a complete and direct impact, and therefore do not constitute a straightforward solution to the phenomenon of filter bubble effects. However, they can nevertheless contribute to a more conscious, informed consumption of information by users, as well as to researchers’ understanding of the different recommendation processes.
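      Purely by way of illustration, the sketch below imagines how the obligations just described could surface in a platform’s user-facing settings: several recommender options with their main parameters disclosed, and at least one option that relies on no profiling. The structure, names and parameters are my own assumptions for this paper, not anything prescribed verbatim by the DSA or implemented by any platform.

      # Hypothetical sketch of recommender options a platform could expose in the
      # spirit of DSA Articles 27 and 38: each option discloses its main parameters,
      # and one of them works without profiling. All names here are illustrative.
      from dataclasses import dataclass

      @dataclass
      class RecommenderOption:
          name: str
          uses_profiling: bool
          main_parameters: list[str]   # the criteria most significant for the recommendation

      RECOMMENDER_OPTIONS = [
          RecommenderOption(
              name="Personalized feed",
              uses_profiling=True,
              main_parameters=["past interactions", "followed accounts", "inferred interests"],
          ),
          RecommenderOption(
              name="Chronological feed (no profiling)",
              uses_profiling=False,     # the Article 38-style alternative
              main_parameters=["recency", "accounts the user explicitly follows"],
          ),
      ]

      def describe_options() -> str:
          """Build the plain-language explanation a user could be shown when choosing a feed."""
          lines = []
          for opt in RECOMMENDER_OPTIONS:
              profiling = "uses profiling" if opt.uses_profiling else "does not use profiling"
              lines.append(f"{opt.name}: {profiling}; main parameters: {', '.join(opt.main_parameters)}")
          return "\n".join(lines)

      print(describe_options())

      Even such a simple disclosure-and-choice structure shows why these rules inform rather than dissolve filter bubbles: the user still has to notice the choice and actively opt for the less personalized feed.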
      However, the DSA also seeks to guarantee content consumption opportunities for less conscious users in other ways. A specific stipulation is that service providers operating an online platform may not design or operate their interfaces in a way that

      “deceives or manipulates the recipients of their service or in a way that otherwise materially distorts or impairs the ability of the recipients of their service to make free and informed decisions.”73x Id. Article 25.

      Very large online platforms should put reasonable, proportionate and effective risk mitigation measures in place to address the systemic risks they have identified, including in cases of any actual or foreseeable negative effects on civic discourse and electoral processes.74x Id. Article 35.
      In addition to all this, the DSA provisions relating to the role of very large online platforms in the news media are of particular interest. For example, providers of very large online platforms should pay particular attention to freedom of expression and information, including the freedom and pluralism of the media,75x Id. Recital 47. and identify and address systemic risks that jeopardize media freedom and pluralism.76x Id. Article 34. These provisions, therefore, require platforms to preserve and guarantee media pluralism, thereby providing, inter alia, the means to achieve a more balanced mass media. This proposed solution is based on the principle that it is possible to reduce polarization through diverse content and that filter bubbles can be neutralized by introducing various types of news into them.77x Ron Berman & Zsolt Katona, ‘Curation Algorithms and Filter Bubbles in Social Networks’, Marketing Science, Vol. 39, Issue 2, 2020, pp. 296-316. However, given the bias of traditional media, the overall picture becomes more complicated, and the above conclusion may not hold. This approach, namely, ignores the phenomenon of the so-called hostile news bias, which suggests that users reject otherwise objective and neutral information that contradicts their worldview, perceiving it as hostile news.78x Erin Carroll, ‘Making News: Balancing Newsworthiness and Privacy in the Age of Algorithms’, Georgetown Law Journal, Vol. 106, Issue 1, 2017, p. 72. A study published by an American research group examined how users react to opposing views on Twitter: Republican test subjects were shown a multitude of Democratic content, and vice versa. After examining attitudes before and after the experiment, the researchers found that, as a result of the messages received from the opposite side, the users’ attitudes became even more polarized; indeed, their original views were strongly reinforced.79x Christopher Bail et al., ‘Exposure to opposing views on social media can increase political polarization’, Proceedings of the National Academy of Sciences, Vol. 115, Issue 1, 2018, pp. 9216-9221.
      As a starting point, the most basic requirement may be that platforms refrain from influencing the public through algorithms. Clear accountability, responsibility and redress mechanisms would be needed to address potential harm resulting from automated decision-making and algorithms. Measures to ensure the impartiality, neutrality and transparency of algorithms and data sets should form the cornerstone of legislation on digital services, and these measures should also cover the sorting of the content displayed to users by algorithms.80x Giancarlo Frosio, ‘Platform Responsibility in the Digital Services Act: Constitutionalising, Regulating and Governing Private Ordering’, in Andrej Savin & Jan Trzaskowski (eds.), Research Handbook on EU Internet Law, Edward Elgar, Cheltenham, 2023, forthcoming. See at https://ssrn.com/abstract=4236510. This can reduce some implicit personalization in the online space, but it is questionable what to do with the phenomenon of explicit personalization. Can an online platform be expected to offer the user diverse content, even against the user’s own preferences and settings?
      The issue of diverse information also forms the basis of regular debates in the context of traditional media, and the related practical challenges are increasing in the online space. How can diversity be measured? What are the indicators that help an online platform decide whether the service it provides meets the conditions of diverse media? What exactly does diverse media mean in the online space at all? Obviously, it is impossible to expect an online platform to balance the content presented to individual users on a scale along political biases, so beyond the aforementioned neutrality, what positive obligations can we prescribe for platforms? In addition, there is another big issue: the verifiability of the measures taken. This is because online service providers and advertisers have integrated personalized technologies into the user experience to such an extent that we have lost the ability to objectively verify the information displayed.81x Thomas Beretich, How online tracking and the filter bubble have come to define who we are, at https://ssrn.com/abstract=2878750, p. 14. In the case of a traditional media stream, everyone sees the same thing, including, where appropriate, the authorities that supervise the area. Meanwhile, on social media, the content supplied is different for each user, so it is impossible to apply centralized content regulation that is universally valid. We have now entered the era of the ‘network-based information economy’, where users generally produce information in a decentralized way, and since this can actually be done free of charge, a very large number of people make use of this opportunity, making it almost impossible to control what content is displayed to each user and how.82x Giovanni Pitruzzella & Oreste Pollicino, Disinformation and hate speech. A European Constitutional Perspective, Bocconi University Press, Milano, 2020, p. 7.

    • 6. Conclusion

      It is clear that the issue of personalization is not a purely technological or legal problem, but requires a solution in both respects. Technology cannot be combated by mere restrictions or regulations, as its growth is inevitable, and restrictions can quickly become unrealistic and obsolete. The issue of personalization, filter bubbles and echo chambers also touches upon the issue of individual responsibility. In the case of possible legal regulation, the value choice to be made by the legislator is, in fact, an echo of the question of responsibility that every citizen, as a member of civil society, must ask himself.83x Krzysztof J. Jankowski, Living in the Filter Bubble: Is what we lose something we need to preserve?, at https://ssrn.com/abstract=2982025, p. 26. Society must be able to decide where the line should be drawn between individual responsibility and the protection afforded by regulation. It is also a mistake to treat social media as the origin of all evil, since this ultimately takes the responsibility off the shoulders of the individual. Some believe that the issue should not even be dealt with from the platforms’ perspective, but that the responsibility of journalists should be emphasized instead, since, as the role of the press has changed in the age of algorithms, journalists are relied on less and less as providers of real, unbiased information.84x Carroll 2017, p. 80. According to S.I. Strong, we need a powerful interdisciplinary process to reverse the “psychological, neurological and social processes” that have caused such a decline of trust in the media.85x S.I. Strong, ‘Alternative Facts and the Post-Truth Society: Meeting the Challenge’, University of Pennsylvania Law Review, Vol. 165, Issue 1, 2017, p. 145. On a related note, Sue Robinson argues that despite the aforementioned loss of trust, the press could be the right platform for journalists to reach through multiple parallel realities to recreate a common ground and build bridges between different bubbles based on shared values.86x Sue Robinson, ‘Crisis of Shared Public Discourses: Journalism and How It All Begins and Ends with Trust’, Journalism, Vol. 20, Issue 1, 2018, p. 58. DeVito emphasizes that since the way algorithms operate is very different from the operation of traditional news media in terms of content and underlying structure, it is necessary to closely monitor the serious social consequences of these codes in relation to news consumption. This, he says, involves applying methods suited to monitoring and understanding this new era of journalism, particularly in-depth ethnographic research into content recommendation systems and related processes. It also requires that online platforms publicly acknowledge and address their growing influence on information consumption.87x Michael A. DeVito, ‘From Editors to Algorithms. A values-based approach to understanding story selection in the Facebook news feed’, Digital Journalism, Vol. 5, Issue 6, 2016, p. 772.

    Noten

    • 1 Hours of video uploaded to YouTube every minute, see at www.statista.com/statistics/259477/hours-of-video-uploaded-to-youtube-every-minute/.

    • 2 Social media algorithms are a set of computational procedures used by social media platforms to curate and present content to users based on various factors such as their past behavior, connections, and preferences. These algorithms help in personalizing user experience, showing relevant content, and facilitating engagement on the platform. See Yakun Li et al., Social media recommendation algorithm based on user’s interest and behavior, at www.ncbi.nlm.nih.gov/pmc/articles/PMC6619984/.

    • 3 Ágnes Veszelszki, ‘deepFAKEnews: Az információmanipuláció új módszerei’, in László Balázs (ed.), Digitális kommunikáció és tudatosság, Hungarovox, Budapest, 2021, p. 96.

    • 4 Birgit Stark et al., Are Algorithms a Threat to Democracy? The Rise of Intermediaries: A Challenge for Public Discourse, Algorithm Watch, 2020, p. 12, at https://algorithmwatch.org/en/wp-content/uploads/2020/05/Governing-Platforms-communications-study-Stark-May-2020-AlgorithmWatch.pdf.

    • 5 Frederik J. Zuiderveen Borgesius et al., ‘Should we worry about filter bubbles?’ Internet Policy Review, Vol. 5, Issue 1, 2016, p. 7.

    • 6 Richard Fletcher, The truth behind filter bubbles: Bursting some myths, at https://reutersinstitute.politics.ox.ac.uk/news/truth-behind-filter-bubbles-bursting-some-myths.

    • 7 Borgesius et al. 2016, p. 3.

    • 8 Tarleton Gillespie, ‘The Politics of Platforms’, New Media & Society, Vol. 12, Issue 3, 2010, p. 348.

    • 9 Tarleton Gillespie, ‘The Relevance of Algorithms’, in Media Technologies: Essays on Communication, Materiality, and Society, 2017, p. 167.

    • 10 Sofia Grafanaki, ‘Platforms, the First Amendment and Online Speech Regulating the Filters’, Pace Law Review, Vol. 39, Issue 1, 2018, p. 119.

    • 11 Id. p. 126.

    • 12 Paul Bernal, The Internet, Warts and All, Cambridge University Press, Cambridge, 2018, p. 92.

    • 13 Cited by Seth Oranburg, ‘Social Media and Democracy after the Capitol Riot – The Cautionary Tale of the Giant Goldfish’, Mercer Law Review, Vol. 73, Issue 2, 2022, p. 594.

    • 14 Samuel C. Woolley & Philip N. Howard (eds.), Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media, Oxford Studies in Digital Politics, New York, 2018, pp. 2-5.

    • 15 See Cass R. Sunstein, Republic.com, Princeton University Press, Princeton, 2001.

    • 16 Cass R. Sunstein, ‘The Law of Group Polarization’, The Journal of Political Philosophy, Vol. 10, Issue 2, 2002, p. 178.

    • 17 Vīķe-Freiberga et al., A free and pluralistic media to sustain European democracy, The Report of the High Level Group on Media Freedom and Pluralism, 2013, p. 31.

    • 18 Katrina Lee, ‘Your Honor, on Social Media: The Judicial Ethics of Bots and Bubbles’, Nevada Law Journal, Vol. 19, Issue 3, 2019, p. 808.

    • 19 Borgesius et al. 2016, p. 8.

    • 20 See Eli Pariser, The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think, Penguin Publishing, New York, 2012.

    • 21 Cass R. Sunstein, Republic.com 2.0, Wolters Kluwer, Budapest, 2013, p. 18.

    • 22 Cass R. Sunstein, #Republic: Divided Democracy in the Age of Social Media, Princeton University Press, Princeton, 2017, pp. 104-108.

    • 23 Neill Fitzpatrick, ‘Media Manipulation 2.0: The Impact of Social Media on News, Competition, and Accuracy’, Athens Journal of Mass Media and Communications, Vol. 4, Issue 1, 2018, p. 56.

    • 24 Sara J. Benson, ‘@PublicForum: The argument for a Public Forum Analysis of Government Officials’ Social Media Accounts’, Washington University Jurisprudence Review, Vol. 12, Issue 1, 2019, p. 109.

    • 25 Daniel Maggen, ‘Law In, Law Out: Legalistic Filter Bubbles and the Algorithmic Prevention of Nonconsensual Pornography’, Cardozo Law Review, Vol. 43, Issue 5, 2022, pp. 1751-1753.

    • 26 Lee 2019, p. 790.

    • 27 Eli Pariser, ‘Did Facebook’s Big Study Kill My Filter Bubble Thesis?’ Wired, 7 May 2015, at www.wired.com/2015/05/did-facebooks-big-study-kill-my-filter-bubble-thesis.

    • 28 Axel Bruns, ‘Filter Bubble’, Internet Policy Review, Vol. 8, Issue 4, 2019, at https://policyreview.info/concepts/filter-bubble.

    • 29 Axel Bruns, It’s Not the Technology, Stupid: How the ‘Echo Chamber’ and ‘Filter Bubble’ Metaphors Have Failed Us, Digital Media Research Centre (QUT), 2019, p. 3.

    • 30 Id. p. 9.

    • 31 Judith Moeller & Natali Helberger, Beyond the filter bubble: concepts, myths, evidence and issues for future debates, University of Amsterdam, Amsterdam, 2018, at https://pure.uva.nl/ws/files/29285427/beyond_the_filter_bubble_concepts_myths_evidence_and_issues_for_future_debates_1_.pdf.

    • 32 Pablo Barberá et al., ‘Tweeting From Left to Right: Is Online Political Communication More Than an Echo Chamber?’, Psychological Science, Vol. 26, Issue 10, 2015, p. 1532.

    • 33 William H. Dutton et al., Social Shaping of the Politics of Internet Search and Networking: Moving Beyond Filter Bubbles, Echo Chambers, and Fake News, at https://ssrn.com/abstract=2944191, p. 1.

    • 34 Barberá et al. 2015, p. 1537.

    • 35 Lee 2019, p. 809.

    • 36 Eytan Bakshy et al., ‘Exposure to ideologically diverse news and opinion on Facebook’, Science, Vol. 348, Issue 6239, 2015, pp. 1130-1132.

    • 37 Borgesius et al. 2016, p. 10.

    • 38 Elizabeth Dubois & Grant Blank, ‘The echo chamber is overstated: The moderating effect of political interest and diverse media’, Information, Communication & Society, Vol. 21, Issue 5, 2018, p. 744.

    • 39 Misinformation and biases infect social media, both intentionally and accidentally. See at https://theconversation.com/misinformation-and-biases-infect-social-media-both-intentionally-and-accidentally-97148.

    • 40 Political bias on social media emerges from users, not platforms. See at https://research.impact.iu.edu/key-areas/social-sciences/stories/social-media-platform-bias.html.

    • 41 Veszelszki 2021, p. 96.

    • 42 Peter M. Dahlgren, ‘A critical review of filter bubbles and a comparison with selective exposure’, Nordicom Review, Vol. 42, Issue 1, 2021, pp. 15-33.

    • 43 Erin C. Carroll, ‘Making News: Balancing Newsworthiness and Privacy in the Age of Algorithms’, Georgetown Law Journal, Vol. 106, Issue 1, 2017, p. 82.

    • 44 Bernát Török et al., Internetes attitűdök – Régiónk és a világháló, Nemzetközi Közszolgálati Egyetem, 2020, at https://www.ludovika.hu/wp-content/uploads/2020/12/EJKK_ITKI_Kozep-Europa_kutatasi_jelentes_v.pdf.

    • 45 Mike Ananny & Kate Crawford, ‘A Liminal Press’, Digital Journalism, Vol. 3, Issue 2, 2015, pp. 192-208.

    • 46 Carroll 2017, p. 71.

    • 47 Kimberlianne Podlas, ‘Reconsidering the Nomos in Today’s Media Environment’, Touro Law Review, Vol. 37, Issue 4, 2022, pp. 2219-2220.

    • 48 Lam Thuy Vo, ‘How the Internet Created Multiple Publics’, Georgetown Law Technology Review, Vol. 4, Issue 2, 2020, p. 412.

    • 49 Podlas 2022, p. 2240.

    • 50 Id. p. 2247.

    • 51 Madhura Bhandarkar, ‘Legality of Social Media Algorithms: How They Shape Our Minds and Democracy’, International Journal of Law Management & Humanities, Vol. 4, Issue 1, 2021, p. 1919.

    • 52 Kirill Pogorelskiy & Matthew Shum, News We Like to Share: How News Sharing on Social Networks Influences Voting Outcomes, at https://ssrn.com/abstract=2972231, p. 3.

    • 53 Podlas 2022, p. 2243.

    • 54 Lee 2019, p. 791.

    • 55 Oranburg 2022, p. 610.

    • 56 Carroll 2017, p. 70.

    • 57 Carol Pauli, ‘The “End” Of Neutrality: Tumultuous Times Require a Deeper Value’, Cardozo Journal of Conflict Resolution, Vol. 23, Issue 3, 2022, p. 566.

    • 58 Cass Sunstein, Is Social Media Good or Bad for Democracy?, at https://sur.conectas.org/en/is-social-media-good-or-bad-for-democracy/.

    • 59 For details, see János Tamás Papp, A közösségi media szabályozása a demokratikus nyilvánosság védelmében, Wolters Kluwer, Budapest, 2022.

    • 60 Rachel Casey, ‘John Stuart Mill and Social Media: Evaluating the Ethics of De-Platforming’, University of Central Florida Department of Legal Studies Law Journal, Vol. 4, Issue 1, 2021, p. 33.

    • 61 E.g. Code of Practice on Disinformation, at https://ec.europa.eu/info/strategy/priorities-2019-2024/new-push-european-democracy/european-democracy-action-plan/strengthened-eu-code-practice-disinformation_hu.

    • 62 E.g. Directive (EU) 2017/541 on combating terrorism and Directive (EU) 2019/790 on copyright and related rights in the Digital Single Market.

    • 63 E.g. P2B – Regulation (EU) 2019/1150 on promoting fairness and transparency for business users of online intermediation services.

    • 64 Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a single market for digital services and amending Directive 2000/31/EC (Digital Services Act, DSA).

    • 65 DSA, Article 3(s).

    • 66 Id. Article 27(1).

    • 67 Id. Article 27(3).

    • 68 Id. Article 27(2).

    • 69 Id. Article 38.

    • 70 Id. Article 69.

    • 71 Id. Article 26.

    • 72 Id. Article 40.

    • 73 Id. Article 25.

    • 74 Id. Article 35.

    • 75 Id. Recital 47.

    • 76 Id. Article 34.

    • 77 Ron Berman & Zsolt Katona, ‘Curation Algorithms and Filter Bubbles in Social Networks’, Marketing Science, Vol. 39, Issue 2, 2020, pp. 296-316.

    • 78 Erin Carroll, ‘Making News: Balancing Newsworthiness and Privacy in the Age of Algorithms’, Georgetown Law Journal, Vol. 106, Issue 1, 2017, p. 72.

    • 79 Christopher Bail et al., ‘Exposure to opposing views on social media can increase political polarization’, Proceedings of the National Academy of Sciences, Vol. 115, Issue 1, 2018, pp. 9216-9221.

    • 80 Giancarlo Frosio, ‘Platform Responsibility in the Digital Services Act: Constitutionalising, Regulating and Governing Private Ordering’, in Andrej Savin & Jan Trzaskowski (eds.), Research Handbook on EU Internet Law, Edward Elgar, Cheltenham, 2023, forthcoming. See at https://ssrn.com/abstract=4236510.

    • 81 Thomas Beretich, How online tracking and the filter bubble have come to define who we are, at https://ssrn.com/abstract=2878750, p. 14.

    • 82 Giovanni Pitruzzella & Oreste Pollicino, Disinformation and hate speech. A European Constitutional Perspective, Bocconi University Press, Milano, 2020, p. 7.

    • 83 Krzysztof J. Jankowski, Living in the Filter Bubble: Is what we lose something we need to preserve?, at https://ssrn.com/abstract=2982025, p. 26.

    • 84 Carroll 2017, p. 80.

    • 85 S.I. Strong, ‘Alternative Facts and the Post-Truth Society: Meeting the Challenge’, University of Pennsylvania Law Review, Vol. 165, Issue 1, 2017, p. 145.

    • 86 Sue Robinson, ‘Crisis of Shared Public Discourses: Journalism and How It All Begins and Ends with Trust’, Journalism, Vol. 20, Issue 1, 2018, p. 58.

    • 87 Michael A. DeVito, ‘From Editors to Algorithms. A values-based approach to understanding story selection in the Facebook news feed’, Digital Journalism, Vol. 5, Issue 6, 2016, p. 772.

