Silicon Valley’s leading AI researchers are facing a challenge to conventional thinking about artificial general intelligence (AGI). Yann LeCun, Meta’s former chief AI scientist and a prominent critic of the large language model (LLM) approach, is backing a startup called Logical Intelligence that is pioneering a different method. LeCun argues that the current obsession with LLMs – systems that predict the next word in a sequence – is a dead end. Instead, he believes the path to true AI lies in systems that reason rather than just guess.

Energy-Based Reasoning: A New Approach to AI

Logical Intelligence has developed an “energy-based model” (EBM), which learns the constraints of a problem rather than predicting likely outcomes. Unlike LLMs, EBMs operate within explicitly defined rules – such as those of a Sudoku puzzle – and solve problems without trial-and-error guessing. The startup claims this method requires significantly less computing power and eliminates the kind of mistakes LLMs routinely make.
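
The article does not detail how Logical Intelligence’s model works internally, but the general energy-based framing can be illustrated with a toy sketch: define an energy function that counts how many Sudoku rules a candidate grid violates, and treat solving as driving that energy to zero. Everything in the snippet below – the 4x4 grid size, the hand-written energy function, and the simple search loop – is an illustrative assumption, not the company’s method.

```python
import random

# Toy illustration of the energy-based framing: an energy function assigns
# zero energy to grids that satisfy every Sudoku rule and higher energy to
# grids that break them, so "solving" becomes finding a minimum-energy state.
# Grid size, energy definition, and the search loop are illustrative
# assumptions for this sketch, not Logical Intelligence's actual model.

N, BOX = 4, 2  # a 4x4 Sudoku with 2x2 boxes keeps the example small

def energy(grid):
    """Count rule violations: duplicate digits in any row, column, or box."""
    groups = [list(row) for row in grid]                           # rows
    groups += [[grid[r][c] for r in range(N)] for c in range(N)]   # columns
    for br in range(0, N, BOX):                                    # boxes
        for bc in range(0, N, BOX):
            groups.append([grid[br + i][bc + j]
                           for i in range(BOX) for j in range(BOX)])
    return sum(len(g) - len(set(g)) for g in groups)  # duplicates = energy

def solve(clues, steps=50000, seed=0):
    """Stochastic local search: fill blank cells randomly, then keep any
    single-cell change that does not raise the energy (with an occasional
    uphill move to escape plateaus) until the energy reaches zero."""
    rng = random.Random(seed)
    grid = [[v if v else rng.randint(1, N) for v in row] for row in clues]
    free = [(r, c) for r in range(N) for c in range(N) if not clues[r][c]]
    for _ in range(steps):
        if energy(grid) == 0:
            break
        r, c = rng.choice(free)
        old, before = grid[r][c], energy(grid)
        grid[r][c] = rng.randint(1, N)
        if energy(grid) > before and rng.random() > 0.05:
            grid[r][c] = old  # usually revert moves that increase energy
    return grid

clues = [[1, 0, 0, 0],   # 0 marks an empty cell
         [0, 0, 3, 0],
         [0, 4, 0, 0],
         [0, 0, 0, 2]]
solution = solve(clues)
print(*solution, sep="\n")
print("energy:", energy(solution))
```

The point of the sketch is the framing rather than the solver: a grid is accepted only when its energy is zero, so any zero-energy answer satisfies every rule by definition. Real EBMs learn their energy function from data rather than hand-coding it, and use far more sophisticated inference than this toy loop.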

The company’s first model, Kona 1.0, outperformed leading LLMs at solving Sudoku puzzles while running on a single Nvidia H100 GPU, suggesting that efficient reasoning does not require the massive scale of today’s LLMs. The company describes Kona as the first working EBM, an approach that had previously remained largely a research concept.

Beyond Language: The Future of AI

The startup envisions EBMs tackling real-world problems where accuracy is critical, such as optimizing energy grids or automating complex manufacturing. Founder and CEO Eve Bodnia emphasizes that these tasks are “anything but language,” arguing that the most valuable applications lie outside the linguistic strengths of LLMs.

Logical Intelligence plans to collaborate with AMI Labs, another startup founded by LeCun, which is developing “world models” – AI systems that understand the physical world, retain memory, and predict outcomes. The ultimate goal is to combine these approaches: LLMs for human interaction, EBMs for reasoning, and world models for real-world action.

A Shift in Perspective

The core argument is that current AI development is misdirected. LLMs rely on sheer scale and statistical prediction, while true intelligence requires a more fundamental capacity for reasoning. LeCun and Bodnia suggest that mimicking human language is not the key to unlocking AGI; instead, AI should focus on abstract problem-solving unconstrained by language.

The team expects to deploy EBMs across industries such as energy, pharmaceuticals, and manufacturing. The company’s approach of developing smaller, specialized models for specific tasks—rather than one universal AI—may offer a more practical path forward.

The road to AGI, according to Logical Intelligence, begins with a layered approach that combines different types of AI, each optimized for specific functions.

This new model represents a bold challenge to the prevailing narrative in Silicon Valley, suggesting that the future of AI may lie not in bigger LLMs, but in smarter reasoning systems.