Gaussian processes (GPs) are widely used as surrogate models in Bayesian optimization (BO). However, their predictive performance is highly sensitive to the choice of hyperparameters, often leading to markedly different posterior predictions. Hierarchical BO addresses this issue by marginalizing over hyperparameters to produce an aggregated posterior, which is then evaluated using an acquisition function (AF). Yet, this aggregation can obscure the disagreement among individual GP posteriors, an informative source of uncertainty that could be exploited for more robust decision-making. To overcome this limitation, we propose Imprecise Acquisitions in Bayesian Optimization (IABO), which maintains a set of GP models and evaluates the AF separately under each one. This results in an imprecise, set-valued AF whose spread naturally captures model disagreement. We investigate two aggregation strategies applied at different stages: (i) acquisition-level aggregation, where AF values are combined into a single scalar via an aggregated AF, and (ii) decision-level aggregation, where each AF is optimized independently and the resulting maximizers are compared using stochastic dominance criteria. Our approach is applicable to arbitrary AFs, and experiments show that our decision-level strategies are highly competitive, often outperforming standard BO baselines across a range of benchmark problems.
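The following is a minimal sketch of the set-valued acquisition idea on a toy 1-D problem, assuming a hand-rolled RBF-kernel GP posterior, expected improvement (EI) as the AF, and a candidate grid in place of a continuous AF optimizer. The hyperparameter set, the pointwise lower-bound rule in (i), and the mean-EI tie-break in (ii) are illustrative stand-ins for the paper's aggregation and stochastic-dominance procedures, not the exact method.

```python
import numpy as np
from scipy.stats import norm


def gp_posterior(X_train, y_train, X_test, lengthscale, signal_var, noise_var=1e-6):
    """GP posterior mean/std under an RBF kernel with the given hyperparameters."""
    def k(A, B):
        return signal_var * np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / lengthscale ** 2)
    K = k(X_train, X_train) + noise_var * np.eye(len(X_train))
    Ks = k(X_train, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = signal_var - np.sum(v ** 2, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))


def expected_improvement(mu, sigma, best_y):
    """EI for minimization: expected improvement over the incumbent best_y."""
    z = (best_y - mu) / sigma
    return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)


# Toy observations and a candidate grid standing in for a continuous AF optimizer.
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, 5)
y_train = np.sin(6 * X_train) + 0.1 * rng.standard_normal(5)
X_cand = np.linspace(0, 1, 201)
best_y = y_train.min()

# The model set: several plausible (lengthscale, signal variance) settings kept
# alive instead of one fitted point estimate (values made up for illustration).
hyper_set = [(0.05, 1.0), (0.15, 1.0), (0.30, 0.5)]

# Set-valued AF: one EI curve per GP in the set; the spread across rows
# reflects model disagreement.
ei_set = np.stack([
    expected_improvement(*gp_posterior(X_train, y_train, X_cand, ls, sv), best_y)
    for ls, sv in hyper_set
])

# (i) Acquisition-level aggregation: collapse the set pointwise into one scalar
# AF (here a conservative lower bound) and maximize that aggregated curve.
x_acq = X_cand[np.argmax(ei_set.min(axis=0))]

# (ii) Decision-level aggregation: optimize each AF separately, then compare the
# resulting maximizers; averaging each maximizer's EI across the whole model set
# is a crude stand-in for the paper's stochastic-dominance comparison.
maximizers = np.argmax(ei_set, axis=1)
x_dec = X_cand[maximizers[np.argmax(ei_set[:, maximizers].mean(axis=0))]]

print(f"acquisition-level choice: x = {x_acq:.3f}")
print(f"decision-level choice:    x = {x_dec:.3f}")
```

Because the AF is only ever evaluated pointwise, either model set or aggregation rule can be swapped in without touching the rest of the loop, which is what makes the approach applicable to arbitrary AFs.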
inproceedings
BibTeXKey: MHR+25a