16:00, online and in the NORCE 4th floor conference room.
Talk by Matthias Morzfeld of Scripps (UCSD).



Randomized optimization, also known as randomized maximum likelihood or randomize-then-optimize, has proven very useful for the solution of large-scale inverse problems and the associated uncertainty quantification (UQ). We present a hierarchical extension of these ideas that enables the solution of inverse problems in which the prior is partially unknown. We call our extension the RTO-TKO because (i) it applies randomized optimization (RTO) twice, once to sample a model and again to sample hyper-parameters; and (ii) it is a technical knockout (TKO) because our sampler is not bias-free (as is true of every unweighted, optimization-based sampler). I will demonstrate the ideas on a 2D inversion of marine electromagnetic data collected offshore New Jersey, and I will also emphasize the need for grid invariance for adequate UQ in gridded models. This is joint work with Dani Blatter, Kerry Key and Steven Constable.
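To make the randomize-then-optimize idea concrete, here is a minimal sketch on a hypothetical linear-Gaussian toy problem (not the marine EM application of the talk, and all problem sizes and matrices below are illustrative assumptions). Each sample is produced by perturbing the data and the prior mean with draws from their respective covariances and then minimizing the perturbed least-squares objective; in the linear-Gaussian special case this yields exact posterior samples, so no bias or weighting arises.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward model y = G x + noise (3 data, 2 parameters).
G = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.3, 0.3]])
R = 0.1 * np.eye(3)        # data-noise covariance
C = np.eye(2)              # prior covariance
x0 = np.zeros(2)           # prior mean
x_true = np.array([1.0, -1.0])
y = G @ x_true + rng.multivariate_normal(np.zeros(3), R)

Rinv = np.linalg.inv(R)
Cinv = np.linalg.inv(C)

def rto_sample():
    # Randomize: perturb the data and the prior mean.
    y_pert = y + rng.multivariate_normal(np.zeros(3), R)
    x0_pert = x0 + rng.multivariate_normal(np.zeros(2), C)
    # Optimize: minimize ||y_pert - G x||^2_Rinv + ||x - x0_pert||^2_Cinv.
    # For a linear forward map the minimizer has a closed form;
    # in a nonlinear problem this step would be a numerical optimization.
    A = G.T @ Rinv @ G + Cinv
    b = G.T @ Rinv @ y_pert + Cinv @ x0_pert
    return np.linalg.solve(A, b)

samples = np.array([rto_sample() for _ in range(20000)])

# Analytic Gaussian posterior for comparison.
A = G.T @ Rinv @ G + Cinv
post_cov = np.linalg.inv(A)
post_mean = post_cov @ (G.T @ Rinv @ y + Cinv @ x0)

print("sample mean:", samples.mean(axis=0))
print("posterior mean:", post_mean)
```

The hierarchical RTO-TKO of the talk repeats this randomize-then-optimize step at a second level, for the hyper-parameters of a partially unknown prior; that nonlinear, hierarchical case is where the (bounded) bias the "TKO" refers to enters.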