09.10.2025

Rethinking AI in Public Institutions - Balancing Prediction and Capacity

Researcher in Focus: Unai Fischer Abaigar

Unai Fischer Abaigar is a researcher at MCML whose work focuses on improving decision-making in public institutions by developing AI systems that are both fair and effective in practice.

What is your research about?

A core theme of my research is to think holistically about decision-making systems. I don’t just look at predictive algorithms in isolation, but at how they are designed, deployed, and integrated into broader institutional processes. My focus is on public institutions that face high-stakes decisions under capacity constraints, for example, employment agencies identifying jobseekers at risk of long-term unemployment, hospitals deciding which patients to triage first, or fraud investigators determining which cases to look into.

Much of your work relates to the application of ML in the public sector. Can you explain the main challenges we are currently facing in this regard?

One major challenge is that while predictive models are increasingly adopted, their actual downstream value is often unclear. The difficulty is that public institutions rarely have clearly specified measurable goals: policy objectives are often fuzzy or contested. That makes it hard to align technical design with what institutions really care about, and to ensure that predictions actually improve decision-making. On top of that, there are the practical constraints: working with sensitive data under strict privacy rules, adapting to legal and organizational constraints, and negotiating with stakeholders who may have different priorities or varying levels of trust in the technology.

«My focus is on public institutions that face high-stakes decisions under capacity constraints, for example, employment agencies identifying jobseekers at risk of long-term unemployment, hospitals deciding which patients to triage first.»


Unai Fischer Abaigar

MCML Junior Member

Recently your paper “The Value of Prediction in Identifying the Worst-Off” won an award at ICML 2025. It explores whether, in the context of AI-driven predictions for resource allocation, improving prediction accuracy is more valuable than expanding capacity. Could you tell us which issues led you to investigate this research question?

We were partly inspired by earlier empirical work on early-warning systems in the Wisconsin school system. There, predictive models were used to flag students at risk of dropping out so that schools could better target support. But the findings showed that most of the dropout risk was actually concentrated in a few schools, and once you looked within a single school, the conditions were fairly homogeneous. That meant the real challenge wasn’t individual-level prediction, it was providing more and better support at the school level.

This led us to ask in our work: when are these predictive systems actually worth it from the perspective of a social planner interested in downstream welfare? Sometimes, instead of investing in making predictions more accurate, institutions might achieve greater benefits by expanding capacity, for example, by hiring more caseworkers and processing more cases overall.

Can you walk us through the formal structure of your theoretical model? What are the main assumptions and simplifications?

In the theory part, we deliberately start with very simple models (i.e., linear models under Gaussian assumptions). These stylized setups allow us to derive clean results and understand the fundamental trade-offs between prediction quality and institutional capacity. Of course, these assumptions are quite strong and don’t reflect the complexity of real-world decision-making.
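As a purely illustrative sketch (not the paper’s actual formulation), a linear-Gaussian setup of this kind can be simulated in a few lines: true risk and its prediction are jointly Gaussian with correlation r, and the institution flags the top fraction of individuals by predicted risk. All names and numbers here are my own assumptions.

```python
import numpy as np

def avg_risk_among_flagged(r, capacity, n=200_000, seed=0):
    """Stylized linear-Gaussian model (illustrative only): true risk y and
    a prediction correlated r with it; the institution flags the top
    `capacity` fraction by predicted risk, and we report the mean true
    risk among those flagged (higher = better targeting of the worst-off)."""
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(n)                               # true risk
    pred = r * y + np.sqrt(1 - r**2) * rng.standard_normal(n)
    k = int(capacity * n)
    flagged = np.argpartition(-pred, k)[:k]                  # top-k by prediction
    return y[flagged].mean()

# Better predictions concentrate the fixed capacity on genuinely
# high-risk individuals:
print(avg_risk_among_flagged(r=0.3, capacity=0.1))
print(avg_risk_among_flagged(r=0.9, capacity=0.1))
```

Even this toy version exhibits the trade-off the interview describes: at fixed capacity, targeting quality rises with prediction accuracy, while the number of people reached is set entirely by capacity.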

What’s interesting, though, is that when we move to our empirical setting, using administrative data from government employment agencies to study long-term unemployment, the central insights still hold. Even though the distributional assumptions break down and the data are far more complex, we see similar trade-offs to those found in the theory.

«The difficulty is that public institutions rarely have clearly specified measurable goals: policy objectives are often fuzzy or contested. That makes it hard to align technical design with what institutions really care about, and to ensure that predictions actually improve decision-making.»


Unai Fischer Abaigar

MCML Junior Member

The Prediction-Access ratio is a core element of your research. Can you tell us more about what it is and how it’s used in your research?

The Prediction-Access ratio compares how much welfare improves when an institution slightly expands its capacity versus when it slightly improves prediction accuracy. The motivation is that institutions rarely overhaul their systems entirely; they usually make incremental choices under tight budgets. If the ratio is high, then adding a unit of capacity (say, more caseworkers) generates much larger gains than an additional unit of predictive accuracy; if it is low, the reverse is true. We make this precise by examining how small changes shift a welfare-based value function defined for individuals at risk.
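Under the same kind of stylized Gaussian assumptions (my own illustration, not the welfare function from the paper), one can approximate such a ratio with finite differences. Here the welfare measure is the share of the true worst-off decile that gets flagged:

```python
import numpy as np

def reach_of_worst_off(r, capacity, n=200_000, seed=1):
    """Share of the true worst-off decile flagged when the top `capacity`
    fraction is selected by a prediction correlated r with true risk.
    Illustrative welfare proxy only."""
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(n)                               # true risk
    pred = r * y + np.sqrt(1 - r**2) * rng.standard_normal(n)
    k = int(capacity * n)
    flagged = np.zeros(n, dtype=bool)
    flagged[np.argpartition(-pred, k)[:k]] = True
    worst = y >= np.quantile(y, 0.9)                         # true worst-off decile
    return flagged[worst].mean()

def prediction_access_ratio(r, capacity, dr=0.05, dc=0.05):
    """Finite-difference sketch: marginal welfare gain from a bit more
    capacity divided by the gain from a bit more accuracy."""
    base = reach_of_worst_off(r, capacity)
    gain_capacity = reach_of_worst_off(r, capacity + dc) - base
    gain_accuracy = reach_of_worst_off(r + dr, capacity) - base
    return gain_capacity / gain_accuracy

print(prediction_access_ratio(r=0.5, capacity=0.10))
```

A ratio above one would suggest, in this toy setting, that the next unit of budget is better spent on capacity than on model accuracy; the step sizes `dr` and `dc` are arbitrary choices for the sketch.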

Why do you think institutions overvalue improvements in predictive accuracy relative to capacity?

I wouldn’t say this is universally true; it likely depends on the institution and the specific stakeholders involved. Our paper was actually directed more at the research community, especially computer science work that focuses on “ML for social good.” What we wanted to highlight is that connecting those technical advances more directly to concrete institutional challenges could make the work more impactful. In particular, we hope to encourage more research on allocation problems.

«Looking ahead, I want to formalize the notion of what counts as a “good allocation” in practice and to keep working with public institutions so the research stays closely linked to their real-world challenges.»


Unai Fischer Abaigar

MCML Junior Member

What are the future opportunities for this research? Do you plan to extend this work?

I see this as a very promising research direction: developing algorithms that can meaningfully support resource allocation in socially sensitive settings. Looking ahead, I want to formalize the notion of what counts as a “good allocation” in practice and to keep working with public institutions so the research stays closely linked to their real-world challenges. There are also many practical questions to consider. For example, how to design systems that enhance, rather than limit, the expertise and discretion of caseworkers. More broadly, the key question is which aspects of institutional processes can be abstracted into algorithms, and where human judgment remains essential.





