Consensus-based optimization (CBO) is a powerful and versatile zero-order multi-particle method designed to provably solve high-dimensional global optimization problems, including those that are genuinely nonconvex or nonsmooth. The method relies on a balance between stochastic exploration and contraction toward a consensus point, which is defined via the Laplace principle as a proxy for the global minimizer. In this paper, we introduce new CBO variants that address practical and theoretical limitations of the original formulation. First, we propose a model called δ-CBO, which incorporates nonvanishing diffusion to prevent premature collapse to suboptimal states. We also develop a numerically stable implementation, the Consensus Freezing scheme, which remains robust even for arbitrarily large time steps by freezing the consensus point over time intervals. We connect these models through appropriate asymptotic limits. Furthermore, by suitable time rescaling and asymptotics, we derive from the Consensus Freezing scheme a further algorithm, the Consensus Hopping scheme, which can be interpreted as a form of (1,λ)-Evolution Strategy. For all these schemes, we characterize the invariant measures for the first time and establish global convergence results, including exponential convergence rates.
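To make the mechanism concrete, here is a minimal sketch of the standard CBO dynamics the abstract builds on: particles drift toward a consensus point computed as an exponentially weighted average (the Laplace-principle proxy for the global minimizer), plus multiplicative noise. This illustrates the baseline method only, not the paper's δ-CBO, Consensus Freezing, or Consensus Hopping variants; all parameter values (α, λ, σ, dt) are illustrative assumptions.

```python
import numpy as np

def consensus_point(X, f_vals, alpha):
    """Weighted average of particles; by the Laplace principle the
    weights exp(-alpha * f) concentrate on the best particles as
    alpha grows, approximating the global minimizer."""
    w = np.exp(-alpha * (f_vals - f_vals.min()))  # shift for numerical stability
    return (w[:, None] * X).sum(axis=0) / w.sum()

def cbo_step(X, f, alpha=30.0, lam=1.0, sigma=0.8, dt=0.01, rng=None):
    """One Euler-Maruyama step of the basic isotropic CBO SDE:
    drift toward the consensus point plus noise scaled by the
    distance to it (so the noise vanishes at consensus)."""
    rng = np.random.default_rng() if rng is None else rng
    f_vals = np.apply_along_axis(f, 1, X)
    c = consensus_point(X, f_vals, alpha)
    diff = X - c
    noise = rng.standard_normal(X.shape)
    dist = np.linalg.norm(diff, axis=1, keepdims=True)
    return X - lam * diff * dt + sigma * dist * np.sqrt(dt) * noise

# Usage: minimize a smooth objective with global minimum at (1, 1).
def f(x):
    return np.sum((x - 1.0) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 2))  # 200 particles in 2D
for _ in range(500):
    X = cbo_step(X, f, rng=rng)
```

After the loop, the particle cloud has contracted near the minimizer (1, 1); the nonvanishing-diffusion and frozen-consensus modifications studied in the paper alter this baseline scheme's diffusion term and consensus update, respectively.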
BibTeXKey: FHK+26