Thesis topics

Master’s Thesis Projects

Efficient Monte-Carlo Estimation Using Control Variates in Chemical Reaction Networks

Motivation: Continuous-time Markov chains that describe networks of chemical reactions are a popular stochastic modeling framework in quantitative and systems biology. Monte-Carlo estimation for such models requires a large number of simulated trajectories to obtain accurate estimates of the measures of interest. The goal of this project is to employ control variates to reduce the variance of the considered estimators and thus the number of trajectories that have to be generated.
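
To give a flavor of the technique, here is a minimal sketch (Python, assuming numpy; the birth-death network, its rates, and the quantity of interest are purely illustrative, not a model from the project): to estimate a quantity such as P(X(T) > 12), one additionally records a control variate whose expectation is known, here X(T) itself, since its mean follows exactly from the moment ODE for linear propensities, and corrects the plain Monte-Carlo estimator by a term proportional to the deviation of the control from its known mean.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative birth-death network:  0 --k1--> X,  X --k2--> 0
    k1, k2, x0, T = 10.0, 1.0, 0, 5.0

    def ssa_final_state(rng):
        # Gillespie simulation (SSA) up to time T; returns X(T)
        t, x = 0.0, x0
        while True:
            rates = np.array([k1, k2 * x])
            total = rates.sum()
            t += rng.exponential(1.0 / total)
            if t > T:
                return x
            x += 1 if rng.random() < rates[0] / total else -1

    n = 10_000
    xT = np.array([ssa_final_state(rng) for _ in range(n)])

    # Quantity of interest: P(X(T) > 12); plain Monte-Carlo estimate uses f alone
    f = (xT > 12).astype(float)

    # Control variate: X(T); for linear propensities its mean is known exactly
    # from the moment ODE  dE[X]/dt = k1 - k2 * E[X]
    ex = k1 / k2 * (1 - np.exp(-k2 * T)) + x0 * np.exp(-k2 * T)

    # (Near-)optimal coefficient  c* = Cov(f, X(T)) / Var(X(T)), estimated from the samples
    c = np.cov(f, xT)[0, 1] / np.var(xT, ddof=1)

    print("plain estimate:          ", f.mean())
    print("control-variate estimate:", (f - c * (xT - ex)).mean())

If the estimator and the control are strongly correlated, the corrected estimate has a (possibly much) smaller variance at essentially no additional simulation cost; the project would study how to construct and exploit suitable controls systematically.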

Challenges: derivation of the corresponding mathematical equations; efficient implementation

Prerequisites: no biological background needed; background in stochastic modeling; Monte-Carlo simulation of continuous-time Markov chains

Co-Supervisor: Michael Backenköhler

Information Spreading in Networks

Motivation: Modeling the stochastic dynamics of diffusion processes in complex networks has many applications, e.g. understanding, predicting, and controlling the outbreak of epidemics, the spread of rumors and memes, and the interactions of interconnected neurons.
We offer several topics in this area, for instance:

  • Efficient stochastic simulation of spreading processes (see the sketch after this list)
  • Developing vaccination strategies for epidemics on large networks
  • Inferring the underlying network structure given time-series data
  • Deriving differential equations describing the mean behavior of the stochastic dynamics, taking community structure into account
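
To make the first topic concrete, here is a minimal sketch (Python, assuming numpy and networkx; the graph model, rates, and parameters are purely illustrative) of an exact, Gillespie-style simulation of an SIS epidemic on a random network:

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)

    # Illustrative setting: SIS epidemic on an Erdos-Renyi random graph
    G = nx.erdos_renyi_graph(n=200, p=0.05, seed=0)
    beta, gamma, t_max = 0.1, 1.0, 20.0   # infection rate per SI edge, recovery rate

    infected = {0}                        # patient zero
    t = 0.0

    while infected and t < t_max:
        # Susceptible-infected edges are the candidates for an infection event
        si_edges = [(u, v) for u in infected for v in G.neighbors(u)
                    if v not in infected]
        rate_inf = beta * len(si_edges)
        rate_rec = gamma * len(infected)
        total = rate_inf + rate_rec

        t += rng.exponential(1.0 / total)  # exponential waiting time to the next event
        if t >= t_max:
            break
        if rng.random() < rate_inf / total:
            infected.add(si_edges[rng.integers(len(si_edges))][1])            # infection
        else:
            infected.remove(list(infected)[rng.integers(len(infected))])      # recovery

    print(f"{len(infected)} infected nodes at the end of the simulation")

Note that this sketch recomputes the susceptible-infected edges in every step; designing data structures and algorithms that avoid exactly this kind of overhead on large networks is a core part of the first topic.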

Challenges: derivation of the corresponding equations; efficient implementation

Prerequisites: background in probability theory and programming

Co-Supervisor: Gerrit Großmann

Master’s/Bachelor’s Thesis Projects

Learning(?) Course Allocation

Motivation: Given real-world data of students' applications to a large number of courses, each with limited capacity, finding an optimal allocation quickly becomes a hard combinatorial optimization problem. Currently, (deep) reinforcement learning is attracting growing interest for such optimization problems. We would like to investigate whether this technique can be applied here and compare it to commonly used optimization techniques, e.g. linear programming or genetic algorithms.
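
As a point of reference for such a comparison, the allocation can be written as a (relaxed) linear program; a minimal sketch (Python, assuming numpy and scipy; the data, dimensions, and constraints are purely illustrative) could look as follows:

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)

    n_students, n_courses = 30, 5
    capacity = np.array([8, 8, 6, 6, 4])         # illustrative course capacities
    pref = rng.random((n_students, n_courses))   # illustrative preference scores

    # Decision variable x[s, c] in [0, 1]: LP relaxation of the binary assignment
    n_vars = n_students * n_courses
    obj = -pref.ravel()                          # linprog minimizes, so negate preferences

    # Each student is assigned to at most one course: sum_c x[s, c] <= 1
    A_student = np.zeros((n_students, n_vars))
    for s in range(n_students):
        A_student[s, s * n_courses:(s + 1) * n_courses] = 1.0

    # Each course respects its capacity: sum_s x[s, c] <= capacity[c]
    A_course = np.zeros((n_courses, n_vars))
    for j in range(n_courses):
        A_course[j, j::n_courses] = 1.0

    res = linprog(obj,
                  A_ub=np.vstack([A_student, A_course]),
                  b_ub=np.concatenate([np.ones(n_students), capacity]),
                  bounds=(0, 1), method="highs")

    assignment = res.x.reshape(n_students, n_courses)
    print("total preference of the relaxed optimum:", -res.fun)

In this sketch each student receives at most one course; the real allocation problem typically has richer constraints (several courses per student, priorities, timetable clashes), which is part of what makes the comparison between learned policies and classical optimization interesting.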

Challenges: deep reinforcement learning; comparing different solution approaches

Prerequisites: knowledge about ML/RL is helpful but not mandatory

Co-Supervisor: Timo P. Gros