Large reasoning models have demonstrated the ability to solve competition-level math problems, answer “deep research” questions, and address complex coding needs. Much of this progress has been enabled by scaling data: pre-training data to learn vast knowledge, fine-tuning data to learn natural language reasoning, and RL environments to refine that reasoning. In this talk, I will describe the current LLM reasoning paradigm, its boundaries, and the future of LLM reasoning beyond scaling. First, I will describe the state of reasoning models and where I think scaling can lead to some additional (though perhaps limited) successes. I will then shift to discussing more fundamental issues with models that scaling will not resolve in the next few years. I will touch on four current limitations: outdated knowledge, generator-validator gaps, limited creativity, and poor compositional generalization. In all cases, fundamental limitations of LLMs, or of supervised learning in general, make these problems challenging, inviting future study and novel solutions beyond scaling.
Wednesday, September 24, 2025
Atlanta, USA
CODA Building, 9th Floor Atrium & Virtual