Check out our paper on amortized solutions of model-constrained Bayesian inverse problems using surrogate-driven measure transport.
Figure: Total speedups of our surrogate-driven mMALA over model-driven mMALA for asymptotically exact sampling from the posterior. This speedup measure includes the upfront surrogate construction cost. Our method achieves up to 8x speedups in generating effective posterior samples, and the surrogate construction cost breaks even after collecting 10-66 effective posterior samples.
This work considers using a neural surrogate of the parameter-to-observable (PtO) map to accelerate infinite-dimensional geometric MCMC. In particular, the surrogate predicts the posterior geometric quantities related to the log-posterior gradient and Hessian. While this idea is conceptually simple, we demonstrate that conventional supervised learning of neural surrogates is insufficient to accelerate geometric MCMC, even with a large number of training samples. The key reason is that conventional supervised learning does not directly control the surrogate Jacobian error, and this error leads to low-quality proposal moves.
We demonstrate that by using samples of the PtO map Jacobian to directly control the surrogate Jacobian error during training, the resulting derivative-informed neural surrogate (DINO) achieves 2-8x speedups in geometric MCMC. Additionally, the surrogate construction cost breaks even after collecting merely 10-66 effective posterior samples.
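The idea of augmenting the training loss with a Jacobian-misfit term can be illustrated with a minimal numpy sketch. Everything below is a hypothetical toy (a linear ground-truth map `B`, an affine surrogate `A m + b`, assumed step size and loss weight), not the actual DINO architecture, function spaces, or training setup:

```python
import numpy as np

rng = np.random.default_rng(0)

d = 3
B = rng.normal(size=(d, d))          # toy ground-truth linear PtO map: F(m) = B m
M = rng.normal(size=(200, d))        # parameter samples
Y = M @ B.T                          # observable samples F(m_i)
J = np.repeat(B[None, :, :], len(M), axis=0)  # Jacobian samples dF/dm (constant here)

# Affine surrogate F~(m) = A m + b, whose Jacobian is A everywhere.
A = np.zeros((d, d))
b = np.zeros(d)
lr, lam = 0.05, 1.0                  # step size and Jacobian-loss weight (assumed values)
for _ in range(1000):
    r = M @ A.T + b - Y              # output misfit on the training set
    gA = r.T @ M / len(M)            # gradient of the output (L2) misfit wrt A
    gb = r.mean(axis=0)
    gA += lam * (A - J.mean(axis=0)) # gradient of the Jacobian-misfit term
    A -= lr * gA
    b -= lr * gb
```

The second gradient term is what distinguishes derivative-informed training from conventional supervised learning: without it, nothing in the loss penalizes a surrogate whose outputs fit well but whose Jacobian (and hence the predicted log-posterior gradient) is wrong.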
[preprint]
In this work, we aim to understand and improve the reliability of neural operators as surrogates of parametric nonlinear PDEs in infinite-dimensional Bayesian inverse problems (BIPs). We first derive an a priori bound that allows us to understand how the error in operator learning controls the error in the posterior distribution of BIPs. We then propose a post-training error correction strategy. This strategy enhances the accuracy of a trained neural operator by solving a linear variational problem based on the neural operator's predictions. We demonstrate that this correction step results in a quadratic reduction of the approximation error for well-trained neural operators.
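The quadratic error reduction has the same flavor as a single Newton-type correction: linearize the governing residual at the surrogate's prediction and solve once. The scalar sketch below is a hypothetical stand-in (a toy algebraic residual, not the paper's variational formulation), meant only to show why one linear solve squares the error:

```python
import numpy as np

# Toy nonlinear residual R(u) = 0 with exact solution u* = sqrt(2).
def residual(u):
    return u**2 - 2.0

def residual_derivative(u):
    return 2.0 * u

u_star = np.sqrt(2.0)
u_surrogate = u_star + 1e-3          # stand-in for a trained surrogate's prediction

# One linear correction based on the prediction:
# solve R'(u~) * delta = -R(u~), then set u_c = u~ + delta.
delta = -residual(u_surrogate) / residual_derivative(u_surrogate)
u_corrected = u_surrogate + delta

err_before = abs(u_surrogate - u_star)   # O(e)
err_after = abs(u_corrected - u_star)    # O(e^2)
```

The better the trained surrogate (smaller `err_before`), the more dramatic the gain from the single linear correction, matching the "well-trained" caveat in the result.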
[journal]
Figure: Predictions before collecting data, the collected microscopy data, and predictions after collecting data.
In a series of works, we formulate suitable methodologies for the Bayesian calibration of models for diblock copolymer self-assembly. In particular, the model calibration procedure accounts for the aleatoric randomness represented by the metastable state of self-assembly. We advocate for using likelihood-free inference methodologies in conjunction with constructing summary statistics to facilitate effective parameter inference. We explore inference methodologies, including pseudo-marginal methods and triangular transport maps. We design summary statistics based on power spectra and energy functionals from microscopy data. The utility of these summary statistics is quantified by estimating expected information gains.
Measure transport: [journal]
In this work, we formulate and analyze self-consistent field (SCF) calculations of diblock copolymers in the framework of PDE-constrained optimization. We derive the Hessian action of the SCF optimization and show that the semi-implicit Seidel (SIS) scheme proposed by Ceniceros and Fredrickson [link] assumes a particular block diagonal Hessian approximation. We extend the SIS scheme from a Fourier-based scheme to a real space–based one utilizing Laplacian operators. This extension allows us to accelerate SCF optimization on domains with complex geometries.
[thesis, chap. 3]
Figure: Superior convergence speed in SCF calculations for diblock copolymer thin films with strong immiscibility.
Figure: 3D simulation of diblock copolymer thin films with quadratic reduction of the residual norm value (bottom right).
In this work, we analyze and propose a fast and robust algorithm for directly minimizing the Ohta–Kawasaki free energy. We formulate a mass-conservative Newton iteration for energy minimization. We utilize an adaptive Gauss–Newton convexification of the Hessian operator and an inexact line search to ensure that the iteration is monotonically energy decreasing. The algorithm is used to study the effects of polymer–substrate interactions in the directed self-assembly of diblock copolymers via chemoepitaxy.
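The Gauss–Newton-plus-line-search pattern can be sketched on a toy least-squares energy. This is a hypothetical stand-in, not the Ohta–Kawasaki functional: the residual, tolerances, and Armijo constant are assumed, and the mass-conservation constraint is omitted:

```python
import numpy as np

# Toy energy E(u) = 0.5 * ||r(u)||^2 with a nonlinear residual r.
def res(u):
    return np.array([u[0]**2 + u[1] - 3.0,
                     u[0] + u[1]**3 - 2.0])

def jac(u):
    return np.array([[2.0 * u[0], 1.0],
                     [1.0, 3.0 * u[1]**2]])

def energy(u):
    r = res(u)
    return 0.5 * r @ r

u = np.array([2.0, 2.0])
energies = [energy(u)]
for _ in range(30):
    r, Jm = res(u), jac(u)
    g = Jm.T @ r                     # gradient of E
    if np.linalg.norm(g) < 1e-10:    # converged
        break
    # Gauss-Newton convexification: J^T J drops the indefinite curvature
    # term of the true Hessian, so the model Hessian is positive semidefinite.
    H = Jm.T @ Jm
    step = np.linalg.solve(H, -g)
    # Inexact (backtracking/Armijo) line search: guarantees each accepted
    # step is monotonically energy decreasing.
    t = 1.0
    while energy(u + t * step) > energies[-1] + 1e-4 * t * (g @ step):
        t *= 0.5
    u = u + t * step
    energies.append(energy(u))
```

Because the convexified Hessian is positive (semi)definite, the Gauss–Newton step is always a descent direction, so the backtracking loop terminates and the energy sequence is guaranteed to decrease monotonically.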