Interpretable Machine Learning (IML) Methods: Classification and Solutions for Transparent Models
Date
2024-09-18
Advisor
Chenouri, Shoja'eddin
Publisher
University of Waterloo
Abstract
This thesis explores the realm of machine learning (ML), focusing on techniques for enhancing model interpretability, collectively known as interpretable machine learning (IML). The first chapter provides a comprehensive overview of ML models, including supervised, unsupervised, reinforcement, and hybrid learning methods, emphasizing their applications across diverse sectors. The second chapter delves into the methodologies and categorization of interpretable models. The research advocates for transparent and understandable IML models, which are particularly crucial in high-stakes decision-making scenarios. By integrating theoretical insights with practical solutions, this work contributes to the growing field of IML, aiming to bridge the gap between complex ML algorithms and their real-world applications.
Keywords
Interpretable Machine Learning (IML), Explainable Machine Learning (EML), Machine Learning (ML), Machine Learning Classification, Transparent ML Models