
Sparse Gaussian Neural Processes

MCML Authors

Vincent Fortuin

Dr.

Associate

Abstract

Despite significant recent advances in probabilistic meta-learning, it is common for practitioners to avoid using deep learning models due to a comparative lack of interpretability. Instead, many practitioners simply use non-meta-learned models such as Gaussian processes with interpretable priors, and conduct the tedious procedure of training their model from scratch for each task they encounter. While this is justifiable for tasks with a limited number of data points, the cubic computational cost of exact Gaussian process inference renders this prohibitive when each task has many observations. To remedy this, we introduce a family of models that meta-learn sparse Gaussian process inference. Not only does this enable rapid prediction on new tasks with sparse Gaussian processes, but since our models have clear interpretations as members of the neural process family, it also allows manual elicitation of priors in a neural process for the first time. In meta-learning regimes for which the number of observed tasks is small or for which expert domain knowledge is available, this offers a crucial advantage.
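For intuition on the cost argument in the abstract: exact GP regression inverts the N × N kernel matrix of all observations, which scales as O(N³), whereas sparse methods summarize the data through M ≪ N inducing points at O(NM²) cost. The sketch below is purely illustrative and is not the paper's model: a minimal NumPy implementation of a standard inducing-point (SGPR-style) predictive mean with an RBF kernel. All names and parameters (`rbf_kernel`, `sparse_gp_predict`, the toy data) are assumptions made for this example.

```python
# Minimal sketch (illustrative, NOT the paper's method): sparse GP
# regression with M inducing points. Exact GP inference inverts an
# N x N matrix at O(N^3); here the dominant costs are O(N M^2) for
# Kzx @ Kzx.T and O(M^3) for the linear solve, with M << N.
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sparse_gp_predict(x, y, z, x_star, noise=0.1):
    """Predictive mean at x_star given inducing inputs z (SGPR-style)."""
    Kzz = rbf_kernel(z, z) + 1e-6 * np.eye(len(z))  # M x M, jittered
    Kzx = rbf_kernel(z, x)                          # M x N
    Ksz = rbf_kernel(x_star, z)                     # S x M
    # mu_* = sigma^-2 K_*z (K_zz + sigma^-2 K_zx K_xz)^-1 K_zx y
    A = Kzz + (Kzx @ Kzx.T) / noise**2              # only an M x M system
    return Ksz @ np.linalg.solve(A, Kzx @ y) / noise**2

# Toy usage: N = 500 observations summarized by M = 20 inducing points.
rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 500)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)
z = np.linspace(-3.0, 3.0, 20)
print(sparse_gp_predict(x, y, z, np.array([0.0, 1.5])))
```

The design point the abstract builds on is visible here: all expensive operations touch at most an M × N matrix, so once the inducing machinery is meta-learned, prediction on a new task with many observations stays cheap.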



AABI 2025

7th Symposium on Advances in Approximate Bayesian Inference, co-located with the 13th International Conference on Learning Representations (ICLR 2025). Singapore, Apr 29, 2025. To be published; preprint available.

Authors

T. Rochussen • V. Fortuin

Research Area

A1 | Statistical Foundations & Explainability

BibTeX key: RF25 (inproceedings)
