Agentic systems are already being deployed in high-stakes settings such as robotics, autonomous web interaction, and software maintenance, and their capabilities ultimately hinge on memory. While LLM memorization typically refers to static, in-weights retention of training data or recent context, agent memory is online, interaction-driven, and under the agent's control. Agentic systems must operate over extended horizons, learn from interaction, and adapt as goals and contexts shift. The limiting factor is increasingly not raw model capability but memory: how agents encode, retain, retrieve, and consolidate experience into knowledge useful for future decisions. Consistent with this view, recent commentary has argued that reinforcement learning can finally generalize when supplied with strong priors and explicit reasoning; however, current evaluations often underplay the sequential accumulation of experience, where memory becomes decisive.

In this context, we propose a workshop devoted to the memory layer for LLM-based agentic systems. Our premise is that long-lived, safe, and useful agents require a principled memory substrate that supports single-shot learning of instances, context-aware retrieval, and consolidation into generalizable knowledge. This workshop aims to advance the design of the memory layer for agentic systems and to convene interdisciplinary researchers across reinforcement learning, memory research, large language models, agentic systems, and neuroscience, with an organizing team that spans these communities.
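To make the three operations named above concrete, the following is a minimal, illustrative sketch of such a memory substrate, assuming a toy token-overlap retriever in place of a learned one; the names `MemoryLayer`, `Episode`, `encode`, `retrieve`, and `consolidate` are hypothetical and do not correspond to any existing system or proposed API.

```python
# Illustrative sketch only: a toy memory layer exhibiting the three operations
# the proposal names (single-shot encoding of instances, context-aware
# retrieval, and consolidation into generalizable knowledge). All names here
# are hypothetical; retrieval uses token overlap purely for illustration.
from __future__ import annotations

from collections import Counter
from dataclasses import dataclass, field


@dataclass
class Episode:
    """One interaction, stored after a single exposure (single-shot learning)."""
    text: str
    tags: set[str] = field(default_factory=set)


class MemoryLayer:
    def __init__(self) -> None:
        self.episodes: list[Episode] = []        # instance-level memories
        self.semantic: Counter[str] = Counter()  # consolidated regularities

    def encode(self, text: str) -> None:
        """Store an experience immediately, with no gradient update."""
        self.episodes.append(Episode(text, tags=set(text.lower().split())))

    def retrieve(self, context: str, k: int = 3) -> list[Episode]:
        """Context-aware retrieval: rank episodes by token overlap with the query."""
        query = set(context.lower().split())
        ranked = sorted(self.episodes, key=lambda e: len(e.tags & query), reverse=True)
        return ranked[:k]

    def consolidate(self) -> None:
        """Distill recurring features across episodes into semantic memory."""
        for episode in self.episodes:
            self.semantic.update(episode.tags)


if __name__ == "__main__":
    memory = MemoryLayer()
    memory.encode("deploy failed because the API token expired")
    memory.encode("retry succeeded after refreshing the API token")
    memory.consolidate()
    for hit in memory.retrieve("why did the deploy fail?"):
        print(hit.text)
```

A realistic substrate would replace the overlap score with learned embeddings and run consolidation offline; the sketch only fixes the interface boundary between instance storage, contextual retrieval, and generalization that the proposal identifies.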