Deep learning is one of the most successful recent techniques in computer vision and automated data processing in general. The basic idea of supervised machine learning is to define a parameterized function, called a network, and to optimize its parameters such that the resulting function maps given inputs x to desired outputs y on a training set of pairs (x, y) -- a process referred to as training the network. The term deep learning refers to the design of network architectures as deeply nested compositions of simple building blocks. The ultimate goal of machine learning is to approximate the true underlying but unknown relation between input and output, such that the trained network makes good predictions even on examples outside the training set -- a property referred to as generalization.
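The following is a minimal sketch of this training procedure, not part of the lecture material: a one-hidden-layer network, fitted by plain gradient descent to training pairs (x, y). The target function y = sin(x), the network size, and all hyperparameters are illustrative assumptions.

```python
# Illustrative sketch of supervised training (all choices here are
# assumptions, not from the lecture): fit a small network to pairs (x, y).
import numpy as np

rng = np.random.default_rng(0)

# Training set: inputs x and desired outputs y (here y = sin(x), an assumption).
x = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(x)

# Parameterized function ("network"): x -> W2 @ tanh(W1 x + b1) + b2
hidden = 32
W1 = rng.normal(0.0, 0.5, size=(1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.5, size=(hidden, 1))
b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)          # hidden activations, shape (N, hidden)
    pred = h @ W2 + b2                # network outputs, shape (N, 1)
    residual = pred - y
    loss = np.mean(residual ** 2)     # mean squared error on the training set

    # Backward pass: gradients of the loss w.r.t. all parameters.
    n = x.shape[0]
    grad_pred = 2.0 * residual / n            # dL/dpred
    grad_W2 = h.T @ grad_pred
    grad_b2 = grad_pred.sum(axis=0)
    grad_h = grad_pred @ W2.T
    grad_z = grad_h * (1.0 - h ** 2)          # tanh'(z) = 1 - tanh(z)^2
    grad_W1 = x.T @ grad_z
    grad_b1 = grad_z.sum(axis=0)

    # Gradient-descent update: "training the network".
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

    if step % 500 == 0:
        print(f"step {step:4d}  training loss {loss:.4f}")
```

Generalization would then be assessed by evaluating the same loss on pairs (x, y) that were held out from training.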
The rapidly evolving field of deep learning has enabled impressive applications across many domains.
This lecture will give an introduction to deep learning, describe common building blocks of network architectures, introduce optimization algorithms for their training, and discuss strategies that improve generalization. In particular, we will cover the following topics:
Besides the lecture notes, the relevant literature for this course includes:
- Lecturer: Alexander Auras
- Lecturer: Marius Bock
- Lecturer: Kanchana Vaishnavi Gandikota
- Lecturer: Natacha Kuete Meli
- Lecturer: Zorah Lähner
- Lecturer: Michael Möller
- Lecturer: Ulrich Schipper
- Lecturer: Jan Philipp Schneider