Dr. Gerrit Großmann
Prof. Dr. Verena Wolf
For any issues regarding the seminar, please e-mail Gerrit Großmann and have [DeepDiffusion2023]
(including the brackets) in the subject line.
In case you are interested in writing a thesis on this topic, please also contact Gerrit Großmann.
You can join the waiting list by emailing us. (Currently, there is one open spot.)
The seminar takes place from 14:15 to 16:00 in room 1.06 (E1.1), in person. First session: Friday, October 27.
Milestone I - Send three topic preferences to Gerrit
Milestone II - Submit slides + Q&A session, topics 1-6
Milestone III - Submit and present tutorial notebook
Milestone IV - Submit reviews
Milestone V - Submit final tutorial notebook

Diffusion models have recently transformed the field of generative deep learning. This seminar will explore this vibrant area of research.
This seminar is targeted at students who already have a background in deep learning (theoretical and practical) and are keen to dive deeper into probabilistic diffusion and related concepts such as stochastic processes and normalizing flows. Experience with diffusion models will be beneficial but is not required. We do not aim to provide comprehensive coverage of the topic; instead, we have selected papers that we find exceptionally engaging, fun, and thought-provoking.
The seminar is equally divided into two segments: a classical part, where students present a concept based on a research paper, and a practical part, where students develop a tutorial notebook inspired by the ideas from that paper.
General background knowledge and practical experience in deep learning are strongly recommended. Experience with diffusion models will be helpful but is not necessary.
To pass the seminar, you have to attend all sessions and:
… with a passing grade. The final grade is based on your presentation (40%), tutorial notebook (50%), and reviews (10%). You will fail the whole seminar if you receive a failing grade in any of the three parts. In borderline cases, we will also take your participation in the discussions into account.
Identify the key ideas and concepts and give a self-contained presentation explaining these concepts to your fellow students.
The presentation should be 15 to 20 minutes long.
Here are some suggestions for a good presentation (we will use this as a basis for grading the presentations):
Your task is to create a self-contained tutorial within a Jupyter notebook that elucidates the key concepts of the selected paper through code examples. The scope of this project may vary depending on the complexity of the paper: a complete re-implementation may be achievable, or it may be more appropriate to concentrate on a small toy problem focusing on a single concept. The notebook should be well structured, containing text and figures that clarify the concepts, as well as well-documented code. For guidance and inspiration, you may refer to projects like the Annotated Diffusion Model and the TeachOpenCADD platform (for instance, E(3)-invariant Graph Neural Networks). You can also check out the Stanford CS224W Graph ML Tutorials.
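To give a rough idea of what "a small toy problem focusing on a single concept" could look like, here is a minimal, hypothetical notebook cell that visualizes the forward (noising) process of a DDPM-style diffusion on a 2D toy dataset. All names, schedules, and parameters are illustrative and not tied to any assigned paper.

```python
# Hypothetical toy example: the forward (noising) process of a
# DDPM-style diffusion applied to a 2D point cloud.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Toy data: points on a circle (stand-in for a "real" dataset).
angles = rng.uniform(0, 2 * np.pi, size=500)
x0 = np.stack([np.cos(angles), np.sin(angles)], axis=1)

# Linear beta schedule (illustrative values).
T = 100
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

# Visualize how the data structure is gradually destroyed over time.
fig, axes = plt.subplots(1, 4, figsize=(12, 3))
for ax, t in zip(axes, [0, 10, 50, 99]):
    xt = q_sample(x0, t)
    ax.scatter(xt[:, 0], xt[:, 1], s=4)
    ax.set_title(f"t = {t}")
plt.show()
```

A full tutorial would typically continue from such a cell, for example by training a small network to predict the added noise and then sampling from the learned reverse process, with accompanying text and figures explaining each step.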
Formalities:
Topic03_Grossmann_GeometricLatentDiffusionModelsFor3DMoleculeGeneration.ipynb
We will assign three tutorial notebooks to each student to review. The primary objective of this review is to offer suggestions for improvement and to identify potential mistakes or ambiguities, whether technical, conceptual, or grammatical. Each review should be approximately one to two pages long and must be sent to the author and Gerrit via email.
Topic | Student | Paper |
---|---|---|
1 | Salaheldin Y. A. Mohamed | Structured Denoising Diffusion Models in Discrete State-Spaces |
2 | - | Efficient and Degree-Guided Graph Generation via Discrete Diffusion Modeling |
3 | - | Geometric Latent Diffusion Models for 3D Molecule Generation |
4 | Bartłomiej Pogodziński | Consistency Models |
5 | Akansh Maurya | TrojDiff: Trojan Attacks on Diffusion Models with Diverse Targets |
6 | Davronbek Islamov | Training Diffusion Models with Reinforcement Learning |
7 | Yasin Esfandiari | Graphically Structured Diffusion Models |
8 | Soumava Paul | Generative Modelling with Inverse Heat Dissipation |
9 | Monseej Purkayastha | Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise |
10 | - | Equivariant flow matching |
11 | - | Generative Modeling with Optimal Transport Maps |
12 | - | Improving and Generalizing Flow-Based Generative Models with Minibatch Optimal Transport |
The campus library will provide a semester reserve featuring the books Probabilistic Machine Learning: Advanced Topics and Deep Learning with PyTorch: Build, train, and tune neural networks using Python tools.
Otherwise, we recommend:
For the implementation, we also recommend: