Recent advances in Large Reasoning Models (LRMs) have rekindled hopes for broadly capable Artificial Intelligence (AI), i.e., Artificial General Intelligence (AGI). Yet these systems exhibit low generalization efficiency once computational cost and developer engineering effort are taken into account. Most contemporary research focuses on scaling performance rather than on understanding why systems generalize and how to achieve this efficiently. We analyze insights from neuroscience, historical machine learning breakthroughs, Large Language Model (LLM)-based approaches, neuro-symbolic strategies, and the Abstraction and Reasoning Corpus (ARC)-AGI challenge series through the lens of Skill-Acquisition Efficiency (SAE). We synthesize five design aspects that we hypothesize collectively underpin efficient generalization: Model Specificity and Scope, Meaningful Representations, Abstractions and Hierarchies, Knowledge Dynamics, and Integration and Emergent Synergy. Our central claim is that efficient generalization can arise as an engineered emergent property of the deliberate integration of these aspects, in contrast to the opaque emergence observed in monolithic scaling. A comparative analysis of representative systems illustrates how addressing, and integrating, more of these aspects coincides with higher SAE. We offer a design agenda for the research community, shifting focus from monolithic scaling toward deliberate multi-aspect integration as a complementary path to efficient generalization.