This paper examines how algorithm-driven media platforms fragment public discourse into isolated ideological enclaves. Using a mixed-methods approach — content analysis of 5,000 social media posts and semi-structured interviews with 30 users — the study finds that personalization algorithms significantly reduce cross-ideological exposure. Results indicate a 62% decrease in diverse content encounters compared to non-personalized feeds. The paper concludes with recommendations for interface transparency and user agency.
Findings support the fragmentation thesis but suggest that user behavior (e.g., selective liking) interacts with algorithmic curation rather than being passively shaped by it. Implications for media literacy and regulatory design are considered.
Algorithmic media ecosystems risk undermining the conditions for shared public discourse. Future research should explore intervention designs that preserve personalization while ensuring minimum diversity thresholds.
Scholars such as Pariser (2011) introduced the concept of the “filter bubble,” while Sunstein (2017) argued for the necessity of “unplanned encounters” in a healthy public sphere. However, empirical evidence remains mixed, with some studies showing only moderate fragmentation (Bruns, 2019).
Mack E. Media, Department of Communication Studies, University of Media Arts
Contemporary media environments are increasingly shaped by machine learning algorithms that prioritize engagement. While personalization enhances user experience, concerns have grown over its impact on democratic deliberation. This study investigates the hypothesis that algorithmic curation fragments public discourse.