Notes on AIE003: Act 1 — Mental Models

How AI Thinks

Replace the idea of intelligence with the simpler and more accurate idea of prediction.

Full Explanation

LLMs are neural networks trained on vast text datasets. They identify patterns in connections between words and topics. This training gives them one primary capability: Prediction. They predict the next word or token based on the input context.

Is this different from humans? Yes and no. The human brain is also a predictive engine, but the Grounding differs:

  1. AI Grounding is Wide: It is grounded in the textual representation of the world (all the books, the entire web). It knows "about" everything but "experiences" nothing.
  2. Human Grounding is Deep: It is grounded in reality—senses, emotions, and physical existence.

Therefore, the proper mental model for "AI Thinking" is not a Silicon Valley brain in a vat, but a sophisticated Prediction Engine continually asking, "Given what I've seen so far, what comes next?"
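To make the "Prediction Engine" idea concrete, here is a deliberately tiny sketch: a bigram model that counts which word tends to follow which in a toy corpus, then answers "given what I've seen, what comes next?" Real LLMs use neural networks over tokens, not word counts, but the core loop of predicting the most likely continuation is the same in spirit. The corpus and function names here are illustrative, not from the original.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "vast text datasets" LLMs train on.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which word: the simplest possible prediction engine.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Given the word seen so far, return its most frequent follower."""
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # "sat" is always followed by "on" in this corpus
```

The point of the sketch is the mental model, not the mechanics: the model has no concept of cats or rugs, only statistics over word sequences. Scaling the same idea up (longer contexts, learned representations instead of raw counts) is what makes an LLM's predictions look like understanding.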

Resources

No dedicated resources for this episode yet.

Alexey Makarov

AI Enablement Strategist and Educator. Leading the AI Center of Excellence at SEFE. Creator of the Unreasonable AI YouTube channel. Based in Berlin.
