Why does machine learning work so well, and what are the theoretical constraints on what it can learn?
A Universal Law of Robustness via Isoperimetry
Sébastien Bubeck, Mark Sellke
2021
ArXiv
PAPER
Training Neural Networks is ∃ℝ-complete
Mikkel Abrahamsen, L. Kleist, Tillmann Miltzow
et al.
2021
ArXiv
PAPER
Understanding deep learning requires rethinking generalization
Chiyuan Zhang, Samy Bengio, Moritz Hardt
et al.
2016
ICLR
PAPER
The Lack of A Priori Distinctions Between Learning Algorithms
D. Wolpert
1996
Neural Computation
PAPER
Deep Learning in Neural Networks: An Overview
J. Schmidhuber
2014
Neural Networks
PAPER
On the Number of Linear Regions of Deep Neural Networks
Guido Montúfar, Razvan Pascanu, Kyunghyun Cho
et al.
2014
NIPS
PAPER
An exact mapping between the Variational Renormalization Group and Deep Learning
Pankaj Mehta, D. Schwab
2018
PAPER
Deep learning via Hessian-free optimization
James Martens
2010
ICML
PAPER
Why Does Deep and Cheap Learning Work So Well?
Henry W. Lin, Max Tegmark
2016
ArXiv
PAPER
Efficient BackProp
Yann LeCun, L. Bottou, G. Orr
et al.
2012
PAPER
Neural Networks and the Bias/Variance Dilemma
Stuart Geman, Elie Bienenstock, Rene Doursat
et al.
1992
Neural Computation
PAPER
A Theoretically Grounded Application of Dropout in Recurrent Neural Networks
Y. Gal, Zoubin Ghahramani
2015
NIPS
PAPER
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin
2018
ICLR
PAPER
What's hidden in the hidden layers?
D. Touretzky, D. Pomerleau
1989
PAPER
The Description Length of Deep Learning models
Léonard Blier, Y. Ollivier
2018
NeurIPS
PAPER
Provable Bounds for Learning Some Deep Representations
Sanjeev Arora, Aditya Bhaskara, Rong Ge
et al.
2013
ICML
PAPER
Bottom-up Deep Learning using the Hebbian Principle
Aseem Wadhwa, Upamanyu Madhow
2016
PAPER
Group theoretical methods in machine learning
Risi Kondor
2008
BOOK
Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design
Yoav Levine, David Yakira, Nadav Cohen
et al.
2017
ICLR
PAPER
Learning pathway: Theoretical concerns in machine learning