During the course, you will find the documents for the theoretical and practical exercises here, together with the corresponding solutions.
It is suggested that you implement the exercises in Python (recommended, though MATLAB is also accepted). If you want to install Python, the easiest way is probably the free Anaconda Python distribution. The distribution is available for Linux, Mac, and Windows, and some of its advantages are listed here. Please make sure that you have successfully installed Python (or MATLAB) before the first lecture, so that you have more time for the actual exercises.
In the lecture, we introduced the general concept of greedy algorithms, which make locally optimal choices. In this exercise, we apply this idea to the knapsack problem: we will not only formally define the algorithm but also implement it.
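As an illustration, here is a minimal sketch of such a greedy heuristic in Python, picking items in decreasing order of their value-to-weight ratio; the example instance at the bottom is hypothetical:

```python
def greedy_knapsack(values, weights, capacity):
    """Greedy heuristic: take items in order of value/weight ratio."""
    # Sort item indices by descending value-to-weight ratio.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    chosen, total_weight, total_value = [], 0, 0
    for i in order:
        if total_weight + weights[i] <= capacity:  # item still fits
            chosen.append(i)
            total_weight += weights[i]
            total_value += values[i]
    return chosen, total_value

# Hypothetical example instance
print(greedy_knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))
```

On this instance, the greedy choice reaches value 160 while the optimum is 220: locally optimal choices need not be globally optimal.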
In the lecture, we have seen the general concept of dynamic programming; the purpose of this exercise is to apply it to the knapsack problem. We will not only formally define the algorithm but also implement it.
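For comparison, a minimal sketch of the classic dynamic-programming solution over capacities (integer weights assumed; the instance is again hypothetical):

```python
def dp_knapsack(values, weights, capacity):
    """Exact 0-1 knapsack via dynamic programming over capacities."""
    # best[c] = maximum value achievable with total weight <= c
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downwards so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Hypothetical example instance: the optimum here is 220 (items 2 and 3)
print(dp_knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))
```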
Note: this exercise was not covered in the lecture, but is offered here as optional practice for the exam.
In this exercise, we will implement two simple (randomized) search heuristics for the 0-1 knapsack problem in order to get acquainted with stochastic algorithms and to see how easy they can be to apply.
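As one example of how simple such algorithms can be, here is a minimal sketch of a (1+1) evolutionary algorithm, a common randomized search heuristic; the penalty for infeasible solutions and the instance are illustrative assumptions, not necessarily the heuristics of the exercise:

```python
import random

def fitness(x, values, weights, capacity):
    """Total value of selection x; overweight solutions get a negative penalty."""
    w = sum(wi for wi, xi in zip(weights, x) if xi)
    v = sum(vi for vi, xi in zip(values, x) if xi)
    return v if w <= capacity else capacity - w

def one_plus_one_ea(values, weights, capacity, iterations=10_000):
    """(1+1) EA: flip each bit with probability 1/n, keep the mutant if not worse."""
    n = len(values)
    x = [0] * n  # start from the empty knapsack
    for _ in range(iterations):
        y = [xi ^ (random.random() < 1 / n) for xi in x]
        if fitness(y, values, weights, capacity) >= fitness(x, values, weights, capacity):
            x = y
    return x, fitness(x, values, weights, capacity)

# Hypothetical example instance
print(one_plus_one_ea(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))
```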
In the lecture, we have seen the concepts of gradients, Hessian matrices, and level sets. Here, we will apply these concepts to specific convex quadratic functions.
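For a quadratic f(x) = x^T A x with symmetric A, the gradient is ∇f(x) = 2Ax and the Hessian is the constant matrix 2A. A minimal sketch verifying the gradient against a finite-difference estimate (the matrix A and the point x0 are hypothetical examples):

```python
import numpy as np

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])  # hypothetical symmetric positive definite matrix

def f(x):
    return x @ A @ x  # f(x) = x^T A x

def grad_f(x):
    return 2 * A @ x  # gradient of x^T A x for symmetric A

x0 = np.array([1.0, -1.0])
# Compare the analytic gradient with a central finite-difference estimate.
eps = 1e-6
fd = np.array([(f(x0 + eps * e) - f(x0 - eps * e)) / (2 * eps) for e in np.eye(2)])
print(grad_f(x0), fd)  # the two should agree up to numerical error
```

For positive definite A, the level sets {x : f(x) = c} are ellipses whose axes point along the eigenvectors of A.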
In the lecture, we have seen the general idea of numerical optimization algorithms that follow descent directions, a few concrete examples of descent directions, as well as line search variants. Here, we will have a closer look at their performance on specific convex quadratic functions of the form f(x) = x^T A x.
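As an illustration, a minimal sketch of steepest descent with exact line search on f(x) = x^T A x: for this f, the optimal step length along the negative gradient g = 2Ax has the closed form α = (g^T g) / (2 g^T A g). The matrix A and the starting point are hypothetical:

```python
import numpy as np

A = np.array([[10.0, 0.0],
              [0.0, 1.0]])  # hypothetical ill-conditioned SPD matrix

def f(x):
    return x @ A @ x  # f(x) = x^T A x

x = np.array([1.0, 1.0])  # hypothetical starting point
for k in range(20):
    g = 2 * A @ x  # gradient of x^T A x
    if np.linalg.norm(g) < 1e-10:
        break
    alpha = (g @ g) / (2 * g @ A @ g)  # exact line search step for this f
    x = x - alpha * g                  # steepest-descent step
    print(k, f(x))
```

Running this for matrices with growing condition number shows the characteristic zigzagging and slowdown of steepest descent.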