Browsing by Author "arabzadehghahyazi, negar"

Evaluation of Information Access Systems in the Generative Era
(University of Waterloo, 2025-06-20) arabzadehghahyazi, negar

The rapid advancement of information access technologies, including neural retrieval models and generative information-seeking systems, has outpaced traditional evaluation methodologies, exposing fundamental gaps in assessing their effectiveness. Existing evaluation frameworks struggle to adapt, particularly in the presence of sparse relevance labels, limiting their ability to fairly and comprehensively compare retrieval and generation-based systems. The emergence of large language models (LLMs) further complicates evaluation, as they challenge conventional assessment paradigms while offering new opportunities for automated evaluation. To address these issues, it is crucial to first identify flaws in current evaluation methodologies and then develop more robust, efficient, and adaptable assessment strategies. This thesis begins by demonstrating that evaluation based on sparse labeling introduces substantial biases and inconsistencies in system rankings, often failing to recognize genuine improvements in retrieval effectiveness. We show that in traditional IR benchmarks, stronger models may retrieve highly relevant but unjudged documents, leading to underestimation of their performance. To mitigate this, we propose an alternative evaluation approach based on the distributions of retrieved results and labeled data, using Fréchet Distance. This method not only improves robustness in the presence of sparse labels but also facilitates direct comparison between retrieval-based and generative models on a common evaluation scale. We then investigate how LLMs can be leveraged to evaluate IR systems, distinguishing between their use for evaluating retrieval-based methods and generative IR systems. A key focus of this work is the role of LLMs in automated relevance judgments. We systematically compare different LLM-based relevance assessment methodologies, highlighting the lack of standardization in evaluating these approaches. To address this gap, we propose a structured framework that evaluates relevance judgment methods based on their alignment with human labels and their impact on system rankings. Furthermore, we examine the effect of prompt formulation on LLM-based evaluation, demonstrating how prompt variations can significantly influence the consistency and reliability of assessment outcomes. Finally, we extend our study beyond retrieval-based evaluation to assessing generated content across multiple applications. We explore retrieval-assisted methods for evaluating generative textual content, IR-inspired approaches for assessing text-to-image generation models, and a broader framework for evaluating LLM-powered applications. These contributions lay the foundation for a new generation of evaluation methodologies that keep pace with evolving information access technologies, ensuring that improvements in retrieval and generative AI systems can be accurately and meaningfully assessed.
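The Fréchet-distance comparison mentioned in the abstract can be illustrated with a minimal sketch: fit a Gaussian to the embeddings of a system's retrieved documents and another to the embeddings of the judged-relevant documents, then compute the closed-form Fréchet distance between the two Gaussians (as in FID). The embedding step and function names below are illustrative assumptions, not the thesis implementation.

    # Minimal sketch of a Frechet-distance-style retrieval evaluation.
    # Assumes document embeddings are available; `embed()` is a hypothetical
    # placeholder, not code from the thesis.
    import numpy as np
    from scipy.linalg import sqrtm

    def frechet_distance(x: np.ndarray, y: np.ndarray) -> float:
        """Frechet distance between Gaussians fitted to two embedding sets.

        x, y: arrays of shape (n_docs, dim) holding document embeddings.
        """
        mu_x, mu_y = x.mean(axis=0), y.mean(axis=0)
        cov_x = np.cov(x, rowvar=False)
        cov_y = np.cov(y, rowvar=False)

        # Matrix square root of the covariance product; drop tiny imaginary
        # parts introduced by numerical error.
        cov_sqrt = sqrtm(cov_x @ cov_y)
        if np.iscomplexobj(cov_sqrt):
            cov_sqrt = cov_sqrt.real

        diff = mu_x - mu_y
        return float(diff @ diff + np.trace(cov_x + cov_y - 2.0 * cov_sqrt))

    # Usage sketch: score a run by how close the distribution of its retrieved
    # documents is to the distribution of labeled relevant documents
    # (lower distance = closer match).
    # retrieved_emb = embed(run_documents)          # hypothetical embedding step
    # relevant_emb = embed(judged_relevant_docs)
    # score = frechet_distance(retrieved_emb, relevant_emb)

Because the same distance can be computed over retrieved documents or generated text embeddings, a distributional score of this kind places retrieval-based and generative systems on a common evaluation scale, which is the property the abstract highlights.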
