Inherent Limitations of Dimensions for Characterizing Learnability
Date
2024-09-24
Authors
Advisor
Ben-David, Shai
Publisher
University of Waterloo
Abstract
The fundamental theorem of statistical learning establishes the equivalence of various notions of both agnostic and realizable Probably Approximately Correct (PAC) learnability for binary classification with the finiteness of the VC dimension. Motivated by these foundational results, this work explores whether similar characterizations of learnability and sample complexity can be given through dimensions in other learning models. We introduce the concept of a scale-invariant dimension and demonstrate that, for a range of learning tasks including distribution learning, any such dimension fails to capture learnability. Additionally, we define Computable PAC (CPAC) learning and show that not all classes are properly computably PAC learnable, highlighting a significant limitation of classical PAC frameworks. Our analysis further reveals that, unlike in binary classification, realizable learnability does not always imply agnostic learnability in settings such as distribution learning and CPAC learning. Finally, we address learning under data corruption, whether by adversaries or by self-manipulating agents, and show that the extent of prior knowledge about the manipulation capabilities can significantly affect learnability. We discuss several ways of overcoming uncertainty about manipulation capabilities: learning the manipulation capabilities from distribution shifts, relying on additional oracle access, or allowing abstentions. We also show that this uncertainty can sometimes be overcome without additional oracle access in the realizable case, while the agnostic case requires such resources. These findings further underscore that, in order to characterize agnostic learnability, it is not always sufficient to understand the realizable case.
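For reference, one standard form of the characterization alluded to in the opening sentence (not stated in the abstract itself) is: a binary hypothesis class \(\mathcal{H} \subseteq \{0,1\}^{\mathcal{X}}\) is agnostic (equivalently, realizable) PAC learnable if and only if \(\mathrm{VCdim}(\mathcal{H}) < \infty\), with agnostic sample complexity
\[
m_{\mathcal{H}}(\epsilon,\delta) \;=\; \Theta\!\left(\frac{\mathrm{VCdim}(\mathcal{H}) + \log(1/\delta)}{\epsilon^{2}}\right).
\]
The thesis asks whether analogous dimension-based characterizations exist in other learning models and shows that, in several of them, they do not.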