
Dr. Nitin Agarwal, Maulden-Entergy Chair and Donaghey Distinguished Professor of Information Science at the University of Arkansas at Little Rock, received the Best Paper Award at the 2025 International Conference on AI-Driven Media Innovation (AIMEDIA 2025) in Venice, Italy, for his groundbreaking research using artificial intelligence to uncover embedded bias in YouTube content.
Agarwal’s award-winning article, “AI-powered multi-layered narrative analysis to uncover bias in YouTube content,” presents a new human-centered AI framework that goes beyond superficial metrics to reveal how emotions, sentiments, and biases evolve in digital video content. The research has been credited with advancing media transparency and accountability in an era dominated by engagement-driven algorithms.
The study addresses a growing concern in the digital age: how platforms like YouTube shape public perception through algorithmic design and emotionally charged framing. While most previous research has focused on headlines, thumbnails, and engagement metrics, Agarwal’s team looked at the full story arc of videos to better understand how messages evolve from first impression to the underlying substance.
Using artificial intelligence, the researchers conducted a multi-layered analysis of YouTube content, evaluating titles, descriptions, transcripts, and AI-generated summaries. This approach allowed the team to trace how emotional tone and bias shift as viewers move deeper into the content.
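The general idea can be sketched in a few lines of Python. The snippet below is an illustrative approximation, not the authors’ published pipeline: it assumes off-the-shelf Hugging Face text classifiers (the emotion and toxicity model identifiers are placeholders) and simply scores each content layer of a video separately, so the tone of the title can be compared with that of the transcript and summary.

```python
# Illustrative sketch only -- not the authors' published method.
# Scores the same video across content layers (title, description,
# transcript, summary) on sentiment, emotion, and toxicity, so tone can be
# compared from the clickable "surface" down to the underlying substance.
from transformers import pipeline

# Off-the-shelf classifiers; the emotion and toxicity model IDs below are
# assumed/placeholder choices and can be swapped for any equivalent model.
sentiment = pipeline("sentiment-analysis")
emotion = pipeline("text-classification",
                   model="j-hartmann/emotion-english-distilroberta-base")  # assumed model id
toxicity = pipeline("text-classification",
                    model="unitary/toxic-bert")  # assumed model id

def analyze_layers(video: dict) -> dict:
    """Score each content layer of a video on sentiment, emotion, and toxicity."""
    results = {}
    for layer in ("title", "description", "transcript", "summary"):
        text = video.get(layer, "")
        if not text:
            continue
        snippet = text[:512]  # truncate to fit typical model context windows
        results[layer] = {
            "sentiment": sentiment(snippet)[0],
            "emotion": emotion(snippet)[0],
            "toxicity": toxicity(snippet)[0],
        }
    return results

# Hypothetical example: compare how tone shifts from a clickbait-style title
# to the more measured transcript and summary layers.
video = {
    "title": "You WON'T BELIEVE what happened next!",
    "description": "A look at this week's policy debate.",
    "transcript": "Today we examine both sides of the proposal in detail...",
    "summary": "A balanced overview of the arguments for and against the proposal.",
}
for layer, scores in analyze_layers(video).items():
    print(layer, scores["sentiment"]["label"], round(scores["sentiment"]["score"], 3))
```

Comparing the per-layer scores in this way mirrors the paper’s central contrast between attention-grabbing framing at the surface and the more measured substance underneath.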
The results were “striking,” Agarwal said.
Sentiment became more positive and joyful at deeper levels of analysis, while anger and toxicity decreased sharply. Video titles, which are often optimized to attract clicks, were consistently the most provocative, while the main stories tended to be more balanced, measured and constructive.
“This research shows how AI can be used to interpret meaning, not just data,” Agarwal said. “By analyzing content holistically, we can uncover the gaps between what attracts attention on the surface and what a message actually communicates. This distinction is essential to creating more transparent and trustworthy digital environments.”
Published in the IARIA 2025 Congress Proceedings, the study represents a significant advancement in AI-based media analysis. By integrating sentiment, emotion, and toxicity detection across multiple layers of content, the research establishes a new framework for evaluating online media beyond headlines and engagement metrics.
The implications extend far beyond YouTube. The methodology can be applied across social and digital platforms to help researchers, policymakers and technology developers better understand how algorithms influence engagement and how these systems could be redesigned to promote fairness, accuracy and contextual integrity.
“Our goal is to help platforms and audiences distinguish between catchy rhetoric and authentic communication,” Agarwal said. “This type of analysis brings us closer to AI systems that truly support informed decision-making.”
This recognition adds to a growing list of international accolades for Agarwal and the Collaboratory for Social Media and Online Behavioral Studies (COSMOS), which he founded and directs. COSMOS conducts interdisciplinary research at the intersection of computer science, behavioral analytics, and social impact, with support from federal agencies and international partners focused on algorithmic transparency, cognitive warfare, AI applications, and digital ethics.
