A huge part of the recent success of highly parametrized ML models is due to their apparent ability to generalize to unseen data. This ability is seemingly in tension with mathematical results from traditional statistics (e.g., the bias-variance trade-off) and statistical learning theory (e.g., PAC theorems), which rely heavily on either strong assumptions about the underlying probability distribution or restrictions on the hypothesis class. The predominant engineering epistemology holds that ML theory has failed here and suggests that contemporary ML models generalize well even beyond the classical overfitting regime.
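The claim about generalization "beyond the classical overfitting regime" refers to phenomena such as double descent, where test error can fall again once a model has enough parameters to interpolate the training data. The following is a minimal NumPy sketch of this effect, not part of the workshop material; the random-ReLU-feature model, data, and parameter ranges are illustrative assumptions. It fits minimum-norm least squares and prints test error as the number of features crosses the interpolation threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative assumption): scalar inputs, smooth target plus noise.
n_train, n_test = 30, 500
x_train = rng.uniform(-1, 1, n_train)
x_test = rng.uniform(-1, 1, n_test)
f = lambda x: np.sin(2 * np.pi * x)
y_train = f(x_train) + 0.3 * rng.standard_normal(n_train)
y_test = f(x_test)

def relu_features(x, W, b):
    # Random ReLU feature map: phi(x) = max(0, x*W + b).
    return np.maximum(0.0, np.outer(x, W) + b)

for p in [5, 10, 20, 30, 40, 100, 1000]:
    W = rng.standard_normal(p)
    b = rng.standard_normal(p)
    Phi_train = relu_features(x_train, W, b)
    Phi_test = relu_features(x_test, W, b)
    # lstsq returns the minimum-norm solution when p > n_train,
    # i.e. the interpolating model with the smallest weight norm.
    w, *_ = np.linalg.lstsq(Phi_train, y_train, rcond=None)
    mse = np.mean((Phi_test @ w - y_test) ** 2)
    print(f"p = {p:5d}  test MSE = {mse:.3f}")
```

In this sketch the test error typically peaks near p ≈ n_train (where the model just barely interpolates the noisy training data) and then decreases again as p grows, which is exactly the behavior that sits uneasily with the classical bias-variance picture.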
This workshop aims to shed light on this tension between generalization and overfitting and will address a range of related questions (see the schedule linked below).
WebEx Link: https://unistuttgart.webex.com/unistuttgart/j.php?MTID=m909ba73ec2c98127421ac0424f22863d
Should you want to join in person, please write to nico.formanek(at)hlrs.de.
A schedule can be found at https://philo.hlrs.de/?p=415.
Participants (confirmed):
* Tom Sterkenburg (ML Epistemology, LMU Munich)
* Timo Freiesleben (ML Epistemology, Uni Tübingen)
* Jan-Willem Romeijn (Philosophy of Statistics, U Groningen)
* Petr Špelda (Philosophy of Induction, Charles University)
* Vít Střítecký (Philosophy of AI, Charles University)
HLRS, Nobelstrasse 19, 70569 Stuttgart-Vaihingen, Room Berkeley/Shanghai.
May 28, 2024 09:30
Stuttgart