is Professor of Astrophysics, Cosmology, and Artificial Intelligence at LMU Munich.
He is working towards a holistic approach to data-driven cosmology that integrates expertise in observational data collection and calibration, statistical analysis, and machine learning with analytical insights into cosmic structure formation, galaxy evolution, and fundamental physics. His group applies these techniques and tests its own models.
In order to compress and more easily interpret Lyman-α forest (LyαF) datasets, summary statistics such as the power spectrum are commonly used. However, such summaries unavoidably lose some information, weakening the constraining power on the parameters of interest. Recently, machine learning (ML)-based summary approaches have been proposed as an alternative to human-defined statistical measures. This raises a question: can ML-based summaries contain the full information captured by traditional statistics, and vice versa? In this study, we apply three human-defined techniques and one ML-based approach to summarize mock LyαF data from hydrodynamical simulations and infer two thermal parameters of the intergalactic medium, assuming a power-law temperature-density relation. We introduce a metric that measures the improvement in the figure of merit when two summaries are combined. Using this metric, we demonstrate that the ML-based summary not only contains almost all of the information from the human-defined statistics, but also provides significantly stronger constraints, shrinking the posterior volume on the temperature-density relation parameters by a ratio of better than 1:3.
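As a rough illustration of how such a combination metric could be computed, the sketch below uses a common two-parameter figure of merit (inverse square root of the posterior covariance determinant) and a simple ratio as the improvement measure. The function names and definitions are illustrative assumptions, not the paper's actual metric.

```python
import numpy as np

def figure_of_merit(posterior_samples):
    """FoM ~ 1 / sqrt(det(posterior covariance)).

    A common two-parameter definition; the paper's exact metric may differ.
    `posterior_samples` has shape (n_samples, n_parameters).
    """
    cov = np.cov(posterior_samples, rowvar=False)
    return 1.0 / np.sqrt(np.linalg.det(cov))

def fom_improvement(samples_single, samples_combined):
    """Hypothetical improvement metric: ratio of the combined-summary FoM
    to the single-summary FoM. Values > 1 mean the combination adds information."""
    return figure_of_merit(samples_combined) / figure_of_merit(samples_single)
```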
Making inferences about physical properties of the Universe requires knowledge of the data likelihood. A Gaussian distribution is commonly assumed for the uncertainties, with a covariance matrix estimated from a set of simulations. The noise in such covariance estimates causes two problems: it distorts the width of the parameter contours, and it adds scatter to the location of those contours that is not captured by the widths themselves. For non-Gaussian likelihoods, an approximation may be derived via Simulation-Based Inference (SBI). It is often implicitly assumed that parameter constraints from SBI analyses, which do not use covariance matrices, are unaffected by the problems that afflict parameter estimation with a covariance matrix estimated from simulations. We investigate whether SBI suffers from effects similar to those of covariance estimation in Gaussian likelihoods. We use Neural Posterior and Likelihood Estimation with continuous and masked autoregressive normalizing flows for density estimation. We fit our approximate posterior models to simulations drawn from a Gaussian linear model, so that the SBI result can be compared to the true posterior. We test linear and neural-network-based compression, demonstrating that neither method circumvents the issues of covariance estimation. SBI suffers an inflation of posterior variance that is equal to or greater than the analytical result for covariance estimation in Gaussian likelihoods with the same number of simulations. The assumption that SBI requires fewer simulations than covariance estimation for a Gaussian likelihood analysis is inaccurate. The limitations of traditional likelihood analysis with a simulation-based covariance remain for SBI with a finite simulation budget. Despite these issues, we show that SBI correctly draws the true posterior contour given enough simulations.
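For context, the covariance-noise problem referred to above can be sketched in a few lines: with a finite number of Gaussian simulations, the sample covariance is noisy, and its raw inverse is biased high, which is conventionally corrected with the Hartlap factor. The data dimension and simulation count below are arbitrary illustrative choices, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_sims = 10, 50          # data dimension and number of simulations (illustrative values)
true_cov = np.eye(p)

# Estimate the covariance from a finite set of Gaussian-distributed simulations.
sims = rng.multivariate_normal(np.zeros(p), true_cov, size=n_sims)
cov_hat = np.cov(sims, rowvar=False)

# Hartlap (2007) debiasing factor for the inverse covariance:
# the inverse of a noisy covariance estimate is biased high by (n - 1) / (n - p - 2).
hartlap = (n_sims - p - 2) / (n_sims - 1)
precision_hat = hartlap * np.linalg.inv(cov_hat)
```

Even after this debiasing, the estimated precision matrix still scatters around the truth, which is the residual effect the abstract argues also appears in SBI with a finite simulation budget.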
In a typical Bayesian inference problem, the data likelihood is not known. In recent years, however, machine learning methods for density estimation have made it possible to perform inference with an estimator of the data likelihood. This likelihood estimator is fit with neural networks that are trained on simulations to maximise the likelihood of the simulation-parameter pairs, one of the many available tools for Simulation-Based Inference (SBI; Cranmer et al., 2020)…
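A minimal sketch of this workflow, assuming the publicly available sbi package and its SNPE (neural posterior estimation) interface together with a toy linear simulator; the paper's own flow architectures, compression, and code are not reproduced here.

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

torch.manual_seed(0)

# Toy simulator: a linear model with Gaussian noise, standing in for a real simulation.
n_sims, n_params, n_data = 5000, 2, 10
design = torch.randn(n_params, n_data)

prior = bound = BoxUniform(low=torch.zeros(n_params), high=torch.ones(n_params))
theta = prior.sample((n_sims,))                          # parameter draws from the prior
x = theta @ design + 0.1 * torch.randn(n_sims, n_data)   # simulated data vectors

# Neural posterior estimation: a normalizing flow is trained on the
# (theta, x) pairs so that it approximates p(theta | x) directly.
inference = SNPE(prior=prior)
inference.append_simulations(theta, x)
density_estimator = inference.train()
posterior = inference.build_posterior(density_estimator)

# Draw samples from the approximate posterior at one "observed" data vector.
samples = posterior.sample((1000,), x=x[0])
```

Neural likelihood estimation follows the same pattern, except that the flow models p(x | theta) and the posterior is then obtained by combining the learned likelihood with the prior, e.g. via MCMC.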