Research
Dynamical Systems on Statistical Manifolds for Image and Data Analysis
This project is funded by the DFG within the Priority Programme on the Theoretical Foundations of Deep Learning
Scope. We exploit basic statistical manifolds to devise dynamical system models for image analysis that exhibit favorable properties in comparison to established convex and non-convex variational models: smoothness, probabilistic interpretation, efficiently converging parallel and sparse Riemannian numerical updates that scale up to large problem sizes. Concepts of information geometry support the design of well-understood networks that perform context-sensitive inference and enable comprehensible decisions in applications.
The current focus is on the assignment manifold and assignment flows for image labeling, which apply more generally to the context-sensitive classification of arbitrary data on arbitrary graphs. See here for an introduction:
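In brief, a simplified sketch of the basic flow (precise definitions of the similarity map and the underlying Fisher-Rao geometry are given in the introduction and in the papers below): each node i of the graph carries an assignment vector $W_i$ in the relative interior of the probability simplex, and all assignments evolve jointly according to
\[
\dot W_i = R_{W_i} S_i(W), \qquad W_i(0) = \tfrac{1}{c}\mathbb{1}, \qquad R_{W_i} = \mathrm{Diag}(W_i) - W_i W_i^{\top},
\]
where $S_i(W)$ denotes the similarity vector obtained by geometrically averaging data likelihoods over the neighborhood of node $i$. Labelings emerge as the flow approaches integral (unit-vector) assignments.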
Mathematical aspects. Information geometry, coupled and regularized optimal transport, statistical manifolds, geometric numerical integration; design of deep networks without a `black-box gap'; statistical performance guarantees.
Our recent work includes (in chronological order):
Linearized deep assignment flows can be turned into stochastic predictors for classifying metric data on graphs. Adopting the PAC-Bayes framework, we show that state-of-the-art risk bounds can be used as training objectives for learning posterior distributions on the hypothesis space, and that tight out-of-sample risk certificates can be computed efficiently.
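As background (the specific bound optimized in the paper is stated there), PAC-Bayesian kl-bounds assert that for a fixed prior $P$ and any posterior $Q$ on the hypothesis space, with probability at least $1-\delta$ over an i.i.d. training sample of size $n$,
\[
\mathrm{kl}\big(\widehat{R}_n(Q)\,\|\,R(Q)\big) \;\le\; \frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{n}}{\delta}}{n},
\]
where $\widehat{R}_n(Q)$ and $R(Q)$ denote the empirical and the population risk of the $Q$-randomized predictor. Inverting the binary kl-divergence in its second argument turns the right-hand side into a numerical out-of-sample risk certificate.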
A reparametrization of assignment flows yields a novel non-local graph-PDE for metric data labeling. The underlying information-geometric formulation reveals a connection between geometric numerical integration and accelerated optimization of a non-convex DC (difference of convex functions) potential. Various established approaches to nonlocal image processing (denoising, inpainting, etc.) are recognised as special cases.
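For reference, a DC decomposition writes the non-convex potential as $J(W) = g(W) - h(W)$ with $g, h$ convex, and the classical DC algorithm linearizes the concave part at the current iterate,
\[
W^{(k+1)} \in \arg\min_{W}\; \big\{\, g(W) - \langle \nabla h(W^{(k)}),\, W \rangle \,\big\};
\]
the connection established in the paper is to accelerated optimization of such a potential via geometric integration of the reparametrized flow.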
The parameter gradient of linearized assignment flows can be computed in closed form and efficiently evaluated numerically using a low-rank approximation.
Labelings determined by the assignment flow correspond to geodesics with respect to a particular metric and to critical points of a related action functional.
Convergence and stability of the assignment flow have been established under suitable assumptions on the network parameters. In particular, data labelings correspond to equilibria of the flow and to fixed points of geometric numerical schemes for integrating it, each with a corresponding basin of attraction.
A preliminary extension from graphs to the continuous domain in the `zero-scale limit' (local interaction only) reveals the interplay between the underlying geometry and variational aspects.
A more classical additive variational reformulation provides a smooth geometric version of the continuous cut approach.
Parameter learning. We study how the weights for geometric diffusion that parametrize the adaptivity of the assignment flow can be learned from data. Symplectic integration ensures that discretization and optimization commute. Our results reveal the steerability of the assignment flow and its potential for pattern formation.
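A minimal sketch of this discretize-then-optimize viewpoint, assuming a generic parametrized update in place of the actual geometric integration step and similarity map (flow_step, omega and the loss below are hypothetical placeholders for illustration only):

    import jax
    import jax.numpy as jnp

    def flow_step(W, omega, h):
        # Hypothetical stand-in for one geometric integration step of a
        # parametrized assignment flow; the true step uses the similarity
        # map with neighborhood weights omega (see the papers below).
        V = h * jnp.tanh(W @ omega)              # placeholder vector field
        U = W * jnp.exp(V)                       # multiplicative (lifting-map style) update
        return U / U.sum(axis=1, keepdims=True)  # renormalize rows onto the simplex

    def loss(omega, W0, target, h=0.1, steps=20):
        # Unroll the discretized flow and compare final assignments to target labels.
        W = W0
        for _ in range(steps):
            W = flow_step(W, omega, h)
        return -jnp.sum(target * jnp.log(W))     # cross-entropy on the final assignments

    grad_omega = jax.grad(loss)                  # exact gradient of the discrete scheme

Computing parameter gradients of the discretized objective in this way is where the commutativity property matters: with a symplectic discretization, differentiating the discrete scheme agrees with discretizing the continuous adjoint.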
Unsupervised label learning. Our recent work concerns the emergence of labels in a completely unsupervised way through data self-assignment. The resulting self-assignment flow has connections to low-rank matrix factorization and to spatially regularized discrete optimal mass transport, which we explore in current work.
We extended the assignment flow to unsupervised scenarios, where label evolution on a feature manifold is performed simultaneously with label assignment to the given data. The following papers introduce the corresponding unsupervised assignment flow.
Geometric numerical integration. We conducted a comprehensive study of geometric integration techniques, including automatic step-size adaptation, for numerically computing the assignment flow in a stable, efficient and parameter-free way.
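Schematically (the precise schemes, higher-order variants and the automatic step-size selection are detailed in the paper), the simplest geometric scheme replaces the explicit Euler step by an update along the lifting map,
\[
W_i^{(k+1)} = \exp_{W_i^{(k)}}\!\big(h_k\, S_i(W^{(k)})\big),
\qquad
\exp_{W}(V) = \frac{W\, e^{V}}{\langle W,\, e^{V} \rangle},
\]
with componentwise multiplication and exponentiation, which keeps every iterate on the assignment manifold for any step size $h_k > 0$.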
Evaluation of discrete graphical models. We applied our approach to solve the MAP labeling problem for a given graphical model in a novel way, by smoothly combining a geometric reformulation of the local polytope relaxation with rounding to an integral solution. Key ingredients are local `Wasserstein messages' that couple local assignment measures along edges.
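For reference, the underlying local polytope relaxation of the MAP problem reads (standard form; the geometric reformulation and the coupling by Wasserstein messages are detailed in the paper)
\[
\min_{\mu \in \mathcal{L}_G}\; \sum_{i \in V} \langle \theta_i, \mu_i \rangle \;+\; \sum_{ij \in E} \langle \theta_{ij}, \mu_{ij} \rangle,
\]
where $\theta$ collects the potentials of the graphical model and $\mathcal{L}_G$ denotes the local polytope of node and edge marginals $\mu_i, \mu_{ij}$ that are consistent under marginalization.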
Kick-off paper that introduces the basic approach: