
Finding Optimal Arms in Non-Stochastic Combinatorial Bandits With Semi-Bandit Feedback and Finite Budget

MCML Authors

Eyke Hüllermeier

Prof. Dr.

Principal Investigator

Abstract

We consider the combinatorial bandits problem with semi-bandit feedback under finite sampling budget constraints, in which the learner can carry out its action only a limited number of times, specified by an overall budget. The action is to choose a set of arms, whereupon feedback for each arm in the chosen set is received. Unlike existing works, we study this problem in a non-stochastic setting with subset-dependent feedback, i.e., the semi-bandit feedback received could be generated by an oblivious adversary and may also depend on the chosen set of arms. In addition, we consider a general feedback scenario covering both the numerical and the preference-based case, and introduce a sound theoretical framework for this setting that guarantees sensible notions of optimal arms, which a learner seeks to find. We suggest a generic algorithm that covers the full spectrum of conceivable arm elimination strategies, from aggressive to conservative. Theoretical questions about the sufficient and necessary budget for the algorithm to find the best arm are answered and complemented by lower bounds that hold for any learning algorithm in this problem scenario.
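
To make the setting concrete, the following Python sketch illustrates a generic budget-constrained arm-elimination loop with semi-bandit feedback. It is purely illustrative and is not the algorithm from the paper: the feedback oracle play_subset, the round schedule, and the drop_fraction parameter (which moves the elimination step between conservative and aggressive) are all hypothetical stand-ins, and the paper's actual query sets, elimination criteria, and budget analysis are not reproduced here.

import random

def budgeted_elimination(arms, budget, subset_size, play_subset, drop_fraction=0.25):
    # Spend the sampling budget in rounds; after each round, eliminate a
    # fraction of the weakest-looking active arms. drop_fraction tunes the
    # spectrum from conservative (small) to aggressive (large).
    active = list(arms)
    total = {a: 0.0 for a in active}
    count = {a: 0 for a in active}
    used = 0
    per_round = max(1, int(budget ** 0.5))  # crude round length, for illustration only

    while used < budget and len(active) > 1:
        for _ in range(min(per_round, budget - used)):
            # Choose a subset of the currently active arms to play.
            subset = random.sample(active, min(subset_size, len(active)))
            # Semi-bandit feedback: one value per arm in the chosen subset,
            # possibly depending on the subset itself.
            feedback = play_subset(subset)
            used += 1
            for arm, value in zip(subset, feedback):
                total[arm] += value
                count[arm] += 1
        # Rank the active arms by empirical mean and drop the weakest fraction.
        ranked = sorted(active, key=lambda a: total[a] / max(count[a], 1), reverse=True)
        keep = max(1, len(ranked) - int(len(ranked) * drop_fraction))
        active = ranked[:keep]

    # Return the arm that looks best once the budget is spent or one arm remains.
    return max(active, key=lambda a: total[a] / max(count[a], 1))

A toy usage example, with a stochastic environment standing in for the (in the paper, possibly adversarial and subset-dependent) feedback generator:

# Hypothetical usage: 10 arms, subsets of size 3, budget of 500 queries.
means = [0.1 * i for i in range(10)]
best = budgeted_elimination(
    arms=list(range(10)),
    budget=500,
    subset_size=3,
    play_subset=lambda S: [means[a] + random.gauss(0, 0.1) for a in S],
)
print(best)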

inproceedings


NeurIPS 2022

36th Conference on Neural Information Processing Systems. New Orleans, LA, USA, Nov 28-Dec 09, 2022.
A* Conference

Authors

J. Brandt • V. Bengs • B. Haddenhorst • E. Hüllermeier


Research Area

 A3 | Computational Models

BibTeX Key: BBH+22
