Black-box Audit of YouTube's Video Recommendation: Investigation of Misinformation Filter Bubble Dynamics (Extended Abstract)
Matus Tomlein, Branislav Pecher, Jakub Simko, Ivan Srba, Robert Moro, Elena Stefancova, Michal Kompan, Andrea Hrckova, Juraj Podrouzek, Maria Bielikova
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Sister Conferences Best Papers. Pages 5349-5353.
https://doi.org/10.24963/ijcai.2022/749
In this paper, we describe a black-box sockpuppeting audit which we carried out to investigate the creation and bursting dynamics of misinformation filter bubbles on YouTube. Pre-programmed agents acting as YouTube users stimulated YouTube's recommender system: they first watched a series of misinformation-promoting videos (bubble creation) and then a series of misinformation-debunking videos (bubble bursting). Meanwhile, the agents logged the videos recommended to them by YouTube. After manually annotating these recommendations, we were able to quantify the proportion of misinformative videos among them. The results confirm the creation of filter bubbles (albeit not in all situations) and show that these bubbles can be burst by watching credible content. In a direct comparison with a previous study, we see no improvement in the overall quantity of misinformation recommended.
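To make the two-phase audit protocol described in the abstract concrete, the following is a minimal illustrative sketch in Python. It is not the authors' implementation: the seed video lists, the `get_recommendations_stub` function, and all identifiers are hypothetical placeholders standing in for real browser automation and scraping of YouTube recommendations.

```python
# Illustrative sketch of the two-phase sockpuppet audit protocol described above.
# This is NOT the authors' implementation; video IDs, the stubbed YouTube
# interaction function, and all parameters are hypothetical placeholders.
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical seed lists (the study used manually curated, topic-specific videos).
PROMOTING_SEEDS = ["promo_vid_1", "promo_vid_2"]    # misinformation-promoting
DEBUNKING_SEEDS = ["debunk_vid_1", "debunk_vid_2"]  # misinformation-debunking


def get_recommendations_stub(video_id: str) -> List[str]:
    """Placeholder for scraping recommended videos; returns dummy IDs."""
    return [f"{video_id}_rec_{i}" for i in range(3)]


@dataclass
class AuditAgent:
    """A pre-programmed agent acting as a fresh YouTube user account."""
    agent_id: str
    log: List[Dict] = field(default_factory=list)

    def watch(self, video_id: str, phase: str) -> None:
        # Stub: a real audit would drive a browser to play the video long
        # enough to register as a watch, then collect the recommendations
        # shown to this account.
        recommendations = get_recommendations_stub(video_id)
        self.log.append({
            "phase": phase,          # "creation" or "bursting"
            "watched": video_id,
            "recommended": recommendations,
        })

    def run_audit(self) -> None:
        # Phase 1: bubble creation -- watch misinformation-promoting videos.
        for vid in PROMOTING_SEEDS:
            self.watch(vid, phase="creation")
        # Phase 2: bubble bursting -- watch misinformation-debunking videos.
        for vid in DEBUNKING_SEEDS:
            self.watch(vid, phase="bursting")


if __name__ == "__main__":
    agent = AuditAgent(agent_id="agent_01")
    agent.run_audit()
    # The logged recommendations would then be manually annotated to quantify
    # the share of misinformative videos before vs. after the bursting phase.
    print(f"Logged {len(agent.log)} watch events for {agent.agent_id}")
```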
Keywords:
Artificial Intelligence: General