Episode Pipeline
Vote for which episode I cover next. Top-voted episodes move up the queue.
How it works
Browse the queue
Find an episode you want to see in the queue below. Episodes are grouped by act and sorted by votes.
Vote with your email
Click Vote and enter your email. No password, no account — takes 10 seconds.
Get notified when it's live
When the episode publishes, you'll get one email with the link. That's it — no spam.
Act 2 — Behavior & Limits: Why AI Feels Weird
11 episodes · 1 vote
Explain how bloated context and topic mixing degrade outputs.
Distinguish two different failure modes that users often confuse.
Explain how different message roles steer model behavior.
Expose fluency bias as the reason confident answers can still be wrong.
Introduce RAG as a response to context and memory limits.
Show how grounding reduces hallucinations by constraining sources.
Summarize practical habits for maintaining output quality.
Teach how surfacing assumptions reduces wrong answers.
Show how structure improves reliability and usefulness.
Provide lightweight ways to sanity-check outputs.
Introduce a staged model for trusting AI outputs.
Act 3 — Steering & Trust: Becoming AI-Literate
16 episodes · 0 votes
Teach a repeatable prompt structure that works across tasks.
Explain how examples teach behavior without retraining.
Show how breaking work into steps improves quality.
Demonstrate iterative improvement as a core AI skill.
Explain when and why models should clarify before answering.
Show how to steer tone explicitly and professionally.
Define hallucinations accurately without mystique.
Build intuition for low-risk vs. high-risk use cases.
Teach fast signals that outputs may be unreliable.
Show how to ask about confidence without false precision.
Explain what sources mean in AI outputs and their limits.
Teach how to spot internal inconsistencies quickly.
Introduce prompt injection as a real-world security risk.
Provide a minimal verification workflow.
Explain tradeoffs between consistency and originality.
Tie trust concepts into a usable decision workflow.
Act 4 — Training & Knowledge: Where Behavior Comes From
15 episodes · 0 votes
Reinforce why models do not learn from conversations.
Explain how models learn general patterns at scale.
Show why chat models follow instructions.
Explain why models feel helpful and polite.
Clarify when fine-tuning helps and when it misleads.
Explain why models lack recent information.
Distinguish recall from pattern learning.
Explain overfitting with simple intuition.
Show what benchmarks reveal and what they hide.
Present the full training lifecycle in one mental picture.
Explain why private knowledge is inaccessible by default.
Introduce agents as loops of models, tools, and memory.
Clarify division of labor between reasoning and action.
Explain why chat history feels like memory but is not.
Separate what models know from how they behave.
Act 5 — Constraints & Risk: Cost, Speed, Privacy
17 episodes · 0 votes
Explain how token flow drives cost.
Reveal hidden cost sources in AI interactions.
Explain why longer context increases cost and latency.
Teach habits that reduce AI costs.
Explain routing decisions between models.
Present the fundamental tradeoff in AI systems.
Explain why systems throttle or fail unexpectedly.
Show how reuse reduces cost and latency.
Explain why AI business models changed.
Define what should never be shared with AI.
Distinguish risk levels of different data types.
Explain safer alternatives to sharing secrets.
Explain what data is typically stored and why.
Explain why organizations impose stricter rules.
Show real-world examples of prompt attacks.
Teach how to remove identifiers without losing meaning.
Summarize practical personal safety rules.
Act 6 — Beyond Chat: Multimodal & Knowledge Systems
18 episodes · 0 votes
Explain what seeing means for AI systems.
Clarify why screenshots confuse models.
Explain how images are generated from noise.
Show how partial image editing works.
Explain what is easy and what is hard today.
Explain factors affecting transcription accuracy.
Explain tradeoffs between naturalness and control.
Show how to combine text, image, and audio inputs.
Reconnect memory limits to system design.
Explain semantic meaning as coordinates.
Explain why document splitting matters.
Explain what vector stores actually store.
Separate evidence retrieval from text generation.
Explain how the best evidence is selected.
Explain how grounded answers are produced.
Explain how systems update knowledge safely.
Explain how to test retrieval quality.
Tie all components into a single system blueprint.