
© 2026 Spokely

The Neuron: AI Explained

The Neuron

Technology

The Neuron covers the latest AI developments, trends and research, hosted by Grant Harvey and Corey Noles. Digestible, informative and authoritative takes on AI that get you up to speed and help you become an authority in your own circles. Available every Tuesday on all podcasting platforms and YouTube. Subscribe to our newsletter: https://www.theneurondaily.com/subscribe

Episodes

The AI Agent That Compressed 8 Years of R&D Into 2 Weeks

Scientific discovery has always been slow. Until now. In this episode, we sit down with Dr. Qichao Hu, CEO of SES AI, to reveal how they are using AI agents to turn an 8-year research cycle into a 2-week sprint. By combining autonomous "wet labs" with advanced AI models, they are solving one of the hardest physics problems in tech: the battery bottleneck. We dive deep into how this "Molecular Universe" project isn't just about EV batteries—it's about unlocking power for data centers, robotics, and AR glasses. If you want to see a concrete example of AI agents working in the physical world to solve material science constraints, do not miss this conversation.

🔗 Learn more about SES AI: https://www.ses.ai/
🔗 Follow the Molecular Universe project: https://molecular-universe.com/about

Subscribe for more interviews with the people building AI’s next wave. For more practical, grounded conversations on AI systems that actually work, subscribe to The Neuron newsletter at https://theneuron.ai.
47min•Mar 15, 2026
AI Just Democratized Filmmaking (w/ LTX Co-Founder)

In this episode, we sit down with Yaron Inger, co-founder of Lightricks and LTX, to explore the future of open-source AI video. LTX-2 is currently the #1 ranked open-source audio & video model on Hugging Face — with over 4.5 million downloads in just two months. But what makes it different? It runs locally. It can be fine-tuned on your own IP. It integrates into real video workflows. And it might change how filmmaking, education, and creative work evolve in the AI era.

We talk about:
• Why open models are catching up to Big Tech
• How smaller models are getting better through distillation
• Running AI video on consumer GPUs
• Infinite, autoregressive video generation
• AI teachers that change environments in real time
• Whether AI will replace filmmakers — or empower them

If you care about the future of creativity, open AI, or the economics of filmmaking… this one is worth your time.
1h 2min•Mar 12, 2026
24 Billion AI Uses Later: What Canva Learned About the Future of Design

You've probably used Canva—but you probably haven't seen what it can do with AI. In this episode of The Neuron, we sit down with Danny Wu, Head of AI Products at Canva, to explore how the platform went from a simple design tool to a full-blown "Creative Operating System" powered by AI—serving 230+ million users every month. Danny walks us through how Canva's MCP server lets you create fully editable designs from inside ChatGPT, Claude, and Microsoft Copilot, why their new Canva Design Model is fundamentally different from typical AI image generators (hint: layers), and why, 24 billion AI tool uses later, the most surprising use cases are ones they never anticipated. We also get Danny's take on whether AI will homogenize all design, his advice for freelancers who don't want to get replaced, and a live demo of Canva's AI design generation in action.

You'll learn:
• How MCP powers Canva inside ChatGPT, Claude, and Copilot
• What the Canva Design Model understands that GPT-4 doesn't
• Why editable layers (not flat images) are the real AI design breakthrough
• Danny's advice for freelancers to become irreplaceable in an AI world
• How Canva uses AI internally on tens of millions of lines of code
• Why AI assistants are becoming "the new SEO" for user acquisition

Try Canva AI at https://canva.com/ai
Special thanks to the sponsor of this video, Cohesity: https://www.cohesity.com/ResilienceEverywhere/?utm_source=brand-ta-podcast&utm_medium=direct-publisher&utm_campaign=fy26-q2-01-amer-us-digital-awarewbpg-brd-genbr&utm_content=podcast
For more practical, grounded conversations on AI and emerging tech, subscribe to The Neuron newsletter at https://theneuron.ai.
55min•Mar 10, 2026
BONUS: GPT 5.4 LIVE Test & Learn to Code in 2026: What's Essential vs. What AI Handles Now

Ryan Carson taught over 1,000,000 people how to code at Treehouse and spent 25% of his entire life doing it. Now he says everything about that process needs to change. In this livestream, Ryan joins Corey Noles and Grant Harvey to rethink programming education from scratch, now that AI agents can write production code, pass competitive coding challenges, and ship features while you sleep.

We'll cover:
🧠 What’s still fundamental when agents handle the syntax
🔄 Where beginners should start in 2026 (it’s not where you think)
🚀 The new hard parts: deployment, databases, security, and getting your app on the internet
⭐ Ryan’s viral 3-file system for building with AI agents (5,000+ GitHub stars)
🧪 Why “vibe coding” gets you a prototype but not a product
🛠️ The skills that separate someone who prompts from someone who ships

Ryan is the founder of Treehouse (raised $23M, taught 1M+ students, acquired 2021), Builder in Residence at Amp (Sourcegraph's coding agent), and is currently building Untangle, a real production app, almost entirely with AI tools. Whether you're a complete beginner curious about coding in 2026 or an experienced developer rethinking your workflow, this one's for you.

🔗 LINKS & RESOURCES:
• Ryan Carson's website: https://www.ryancarson.com/
• Ryan's articles on agent workflows: https://www.ryancarson.com/articles
• Code Factory workflow: https://x.com/ryancarson/status/2023452909883609111
• Agent teams in OpenClaw: https://x.com/ryancarson/status/2020931274219594107
• Agents that ship while you sleep: https://x.com/ryancarson/status/2016520542723924279
• Ryan's newsletter: https://ryancarson.substack.com/
• Untangle: https://untangle-us.com/
• Amp (Sourcegraph coding agent): https://ampcode.com/

🗞️ Subscribe to The Neuron newsletter: https://theneuron.ai
2h 0min•Mar 6, 2026
AI Is Helping Build the Power Source It Desperately Needs (Brandon Sorbom w/ Commonwealth Fusion Systems)

AI data centers are going to double their power consumption by 2030—so where's all that energy coming from? One answer is fusion, the same process that powers the sun. In this episode of The Neuron, we're joined by Brandon Sorbom, Chief Science Officer and Co-founder of Commonwealth Fusion Systems, to explore how his company is racing to build the world's first commercial fusion power plant—and how AI is helping them get there faster. Brandon explains why fusion has been "30 years away" for decades, what changed with high-temperature superconducting magnets, and why fusion is fundamentally safer than fission (hint: fusion is "default off"). We dive into CFS's collaborations with Google DeepMind and NVIDIA, what it takes to wrangle 10,000 unique parts, and when we might actually see fusion on the grid.

You'll learn:
• What fusion actually is (and why it's not nuclear fission)
• Why high-temperature superconducting magnets changed everything
• How AI is accelerating plasma control and simulation
• The safety profile that makes fusion regulated like an MRI, not a reactor
• When CFS expects to hit Q > 1 (net energy) and beyond

To learn more about Commonwealth Fusion Systems, visit https://cfs.energy. For more practical, grounded conversations on AI and emerging tech, subscribe to The Neuron newsletter at https://theneuron.ai.
1h 3min•Mar 3, 2026
BONUS: Gemini 3 Flash (Smartest, Cheapest AI) with Google DeepMind's Logan Kilpatrick

From the YT live archives: Google just dropped Gemini 3 Flash—a model that outperforms Gemini 2.5 Pro (their last top model) while running 3x faster at less than 1/4 the cost. It's frontier-level reasoning at Flash-level speed, and it's rolling out globally right now. We're sitting down with Logan Kilpatrick from Google DeepMind to explore what this actually means for developers, knowledge workers, and anyone trying to figure out how AI fits into their workflow.

What we'll cover:
🔥 Live demos – Logan will show us Gemini 3 Flash in action, from coding to multimodal understanding
⚡ What's now possible – Use cases that weren't practical with previous models (or weren't possible at all)
🛠️ Building together – We might wire up a tool live if Logan's game (we've got ideas)
💰 Intelligence too cheap to meter – We'll dig into the economics: when AI gets this powerful and this affordable, does it change the hiring calculus?

On that last point: right now, data shows AI is raising wages for AI-impacted roles because workers who use AI effectively can command higher salaries. But what happens when frontier intelligence costs $0.50 per million tokens? When does “intelligence as a commodity” flip from “AI makes workers more valuable” to “why hire a human?” We’ll see if we can get Logan’s take on this topic!

Key specs on Gemini 3 Flash:
• Outperforms Gemini 2.5 Pro across most benchmarks
• 3x faster than 2.5 Pro
• Less than 1/4 the cost of Gemini 3 Pro
• 1M token context window
• Advanced visual and spatial reasoning with code execution
• 78% on SWE-bench Verified (agentic coding)
• Rolling out globally in the Gemini app, AI Mode in Search, and developer platforms

Logan has been at the center of Google's push to make frontier AI accessible to millions of developers. If you're shipping products, building with AI, or just trying to wrap your head around where this is all going, this conversation will give you clarity.
1h 59min•Feb 27, 2026
Diffusion for Text: Why Mercury Could Make LLMs 10x Faster

Diffusion models changed how we generate images and video—now they’re coming for text. In this episode, we sit down with Stefano Ermon, Stanford computer science professor and founder of Inception Labs, to unpack how diffusion works for language, why it can generate in parallel (instead of token-by-token), and what that means for latency, cost, and real-time AI products.

We talk through:
• The simplest mental model for diffusion: generate a full draft, then refine it by “fixing mistakes”
• Why today’s autoregressive LLM inference is often memory-bound—and why diffusion can shift it toward a more GPU-friendly compute profile
• Where Mercury wins today (IDEs, voice/real-time agents, customer support, EdTech—anywhere humans can’t wait)
• What changes (and what doesn’t) for long context and architecture choices
• The real-world way to evaluate models in production: offline evals + the gold-standard A/B test

Stefano also shares what’s next on Mercury’s roadmap—especially around stronger planning and reasoning for agentic use cases.

Try Mercury + learn more: inceptionlabs.ai
For more practical, grounded conversations on AI systems that actually work, subscribe to The Neuron newsletter at https://theneuron.ai.
48min•Feb 24, 2026
Can AI Improve Customer Service Without Killing Jobs? Crescendo Thinks So

Customer service is one of the industries most impacted by AI — but what if AI alone isn’t the answer? In this episode of The Neuron Podcast, Grant Harvey and Corey Noles sit down with Matt Price, Founder & CEO of Crescendo, to explore how AI and humans working together can outperform automation alone. After spending 13+ years at Zendesk, Matt is now building an AI-native customer experience platform that automates up to 90% of tickets with 99.8% accuracy — without sacrificing empathy, trust, or outcomes.

We cover:
• Why LLMs are the biggest shift in customer service since the telephone
• Why bolting AI onto old CX workflows fails
• How Crescendo’s multimodal AI can chat, talk, see images, and control devices in one conversation
• Real-world examples (like smart sprinkler troubleshooting via voice + vision + APIs)
• Why Crescendo combines AI agents with forward-deployed human experts
• How outcome-based pricing aligns incentives around real customer satisfaction
• How AI is reshaping (not eliminating) customer service jobs
• Why “deflection” is the wrong mindset for CX — and what replaces it
• What customer support roles look like in an AI-native future

This is a deep dive into the next generation of customer experience, where AI handles scale and speed — and humans deliver judgment, empathy, and innovation. Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai
57min•Feb 20, 2026
How Google's Gemini CLI Creator Ships 150 Features a Week

Taylor Mullen, Principal Engineer at Google and creator of Gemini CLI, reveals how his team ships 100-150 features and bug fixes every week—using Gemini CLI to build itself. In this first in-depth interview about Gemini CLI's origin story, we explore why command-line AI agents are having a "terminal renaissance," how Taylor manages swarms of parallel AI agents, and the techniques (like the viral "Ralph Wiggum" method) that separate 10x engineers from 100x engineers. Whether you're a developer or AI-curious, you'll learn practical strategies for using AI coding tools more effectively.

🔗 Links:
• Gemini CLI: https://geminicli.com
• GitHub: https://github.com/google-gemini/gemini-cli
• Subscribe to The Neuron newsletter: https://theneuron.ai
56min•Feb 17, 2026
BONUS: OpenAI Codex Demo, Learn the Absolute Basics of Coding with AI

In this week's live-stream replay, we go live for a 2-hour, hands-on deep dive into GPT-5.1 Codex Max with Alexander Embiricos, product lead for OpenAI Codex. You’ll walk out feeling like an agentic-coding wizard, even if you’re starting from zero. GPT-5.1 Codex Max is OpenAI’s latest frontier agentic coding model. It’s built on an upgraded reasoning backbone and trained to handle real-world software engineering tasks end to end: PRs, refactors, frontend builds, and deep debugging. It can work independently for hours, compacting its own history so it can refactor entire projects and run multi-hour agent loops without losing context. In this live session, we’ll set it up together, build real agents, and push Codex Max to its limits.
2h 0min•Feb 13, 2026
Why Energy-Based Models Could Be the Next Big Shift in AI

Modern AI has been dominated by one idea: predict the next token. But what if intelligence doesn’t have to work that way? In this episode of The Neuron, we’re joined by Eve Bodnia, Founder and CEO of Logical Intelligence, to explore energy-based models (EBMs)—a radically different approach to AI reasoning that doesn’t rely on language, tokens, or next-word prediction. With a background in theoretical physics and quantum information, Eve explains how EBMs operate over an energy landscape, allowing models to reason about many possible solutions at once rather than guessing sequentially. We discuss why this matters for tasks like spatial reasoning, planning, robotics, and safety-critical systems—and where large language models begin to show their limits.

You’ll learn:
• What energy-based models are (in plain English)
• Why token-free architectures change how AI reasons
• How EBMs reduce hallucinations through constraints and verification
• Why EBMs and LLMs may work best together, not in competition
• What this approach reveals about the future of AI systems

To learn more about Eve’s work, visit https://logicalintelligence.com. For more practical, grounded conversations on AI systems that actually work, subscribe to The Neuron newsletter at https://theneuron.ai.
55min•Feb 10, 2026
BONUS: Our 2026 AI Predictions... Who Wins, Who Loses, and What Changes Everything?

AI is moving fast — and 2026 is shaping up to be a turning point. In this livestream, Corey and Grant from The Neuron break down our biggest AI predictions for 2026, including:

🏆 Which companies, tools, and model types are most likely to come out on top
📉 Who could lose ground (and what’s driving the shift)
🎲 The wildcards most people aren’t factoring in yet
👀 What to watch across AI policy, agents, open source, and consumer adoption
🧠 The skills and strategies that will matter most in 2026

Join us live for audience Q&A and a real-time debate on the hottest AI takes — then drop your prediction in the comments: what’s the biggest AI surprise coming in 2026? 🔮 Subscribe for weekly AI coverage from The Neuron and more livestreams like this.
2h 40min•Feb 6, 2026
Inside Google Labs: 3 AI Tools That Will Change How You Create

In this special episode, we go hands-on with three cutting-edge AI tools from Google Labs. First, Jaclyn Konzelman (Director of Product Management) demos Mixboard, an AI-powered concepting board that transforms ideas into visual presentations using Nano Banana Pro. Then, Thomas Iljic (Senior Director of Product Management) shows us Flow, Google's AI filmmaking tool that lets you create, edit, and animate video clips with unprecedented control. Finally, Megan Li (Senior Product Manager) walks us through Opal, a no-code AI app builder that lets anyone create custom AI workflows and mini-apps using natural language.
1h 57min•Feb 3, 2026
This AI Agent Builds Better Code Than Most Developers (Factory AI)

Autonomous coding agents are moving from demos to real production workflows. In this episode, Factory AI co-founder and CTO Eno Reyes explains what "Droids" really are—fully autonomous agents that can take tickets, modify real codebases, run tests, and work inside existing dev workflows. We dig into Factory's context compression research (which outperformed both OpenAI and Anthropic), what makes a codebase "agent-ready," and why Stanford research found that the ONLY predictor of AI success was codebase quality—not adoption rates or token usage. Whether you're a developer curious about autonomous coding tools or just want to understand where AI engineering is headed, this episode is packed with practical insights.

🔗 Try Factory AI: https://factory.ai
📰 Subscribe to The Neuron newsletter: https://theneuron.ai
📖 Resources mentioned:
• Factory's compression research: https://factory.ai/news/evaluating-compression
56min•Jan 27, 2026
OpenAI Researcher Explains How AI Hides Its Thinking (w/ OpenAI’s Bowen Baker)

AI reasoning models don’t just give answers — they plan, deliberate, and sometimes try to cheat. In this episode of The Neuron, we’re joined by Bowen Baker, Research Scientist at OpenAI, to explore whether we can monitor AI reasoning before things go wrong — and why that transparency may not last forever. Bowen walks us through real examples of AI reward hacking, explains why monitoring chain-of-thought is often more effective than checking outputs, and introduces the idea of a “monitorability tax” — trading raw performance for safety and transparency.

We also cover:
• Why smaller models thinking longer can be safer than bigger models
• How AI systems learn to hide misbehavior
• Why suppressing “bad thoughts” can backfire
• The limits of chain-of-thought monitoring
• Bowen’s personal view on open-source AI and safety risks

If you care about how AI actually works — and what could go wrong — this conversation is essential.

Resources:
• Evaluating chain-of-thought monitorability | OpenAI: https://openai.com/index/evaluating-chain-of-thought-monitorability/
• Understanding neural networks through sparse circuits | OpenAI: https://openai.com/index/understanding-neural-networks-through-sparse-circuits/
• OpenAI's alignment blog: https://alignment.openai.com/

👉 Subscribe for more interviews with the people building AI
👉 Join the newsletter at https://theneuron.ai
55min•Jan 23, 2026
The Hidden Cost of AI Agents No One Talks About

Everyone is rushing to build AI agents — but most companies are setting themselves up for failure. In this episode of The Neuron, Darin Patterson, VP of Market Strategy at Make, explains why agentic AI only works if your automation foundation is solid first. We break down when to use deterministic workflows vs AI agents, how to avoid fragile automation sprawl, and why visibility into your entire automation landscape is now mission-critical. You’ll see real examples of building agents in Make, how Model Context Protocol (MCP) fits into modern workflows, and why orchestration — not hype — is the real unlock for scaling AI safely inside organizations. Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai
1h 0min•Jan 20, 2026
Why IBM Wants AI to Be Boring

IBM just released Granite 4.0, a new family of open language models designed to be fast, memory-efficient, and enterprise-ready — and it represents a very different philosophy from today’s frontier AI race. In this episode of The Neuron, IBM Research’s David Cox joins us to unpack why IBM treats AI models as tools rather than entities, how hybrid architectures dramatically reduce memory and cost, and why openness, transparency, and external audits matter more than ever for real-world deployment. We dive into long-context efficiency, agent safety, LoRA adapters, on-device AI, voice interfaces, and why the future of AI may look a lot more boring — in the best possible way. If you’re building AI systems for production, agents, or enterprise workflows, this conversation is required listening. Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai
53min•Jan 13, 2026
This AI Grows a Brain During Training (Pathway’s AI w/ Zuzanna Stamirowska)

Imagine an AI that doesn’t just output answers — it remembers, adapts, and reasons over time like a living system. In this episode of The Neuron, Corey Noles and Grant Harvey sit down with Zuzanna Stamirowska, CEO & Cofounder of Pathway, to break down the world’s first post-Transformer frontier model: BDH — the Dragon Hatchling architecture. Zuzanna explains why current language models are stuck in a “Groundhog Day” loop — waking up with no memory — and how Pathway’s architecture introduces true temporal reasoning and continual learning.

We explore:
• Why Transformers lack real memory and time awareness
• How BDH uses brain-like neurons, synapses, and emergent structure
• How models can “get bored,” adapt, and strengthen connections
• Why Pathway sees reasoning — not language — as the core of intelligence
• How BDH enables infinite context, live learning, and interpretability
• Why gluing two trained models together actually works in BDH
• The path to AGI through generalization, not scaling
• Real-world early adopters (Formula 1, NATO, French Postal Service)
• Safety, reversibility, checkpointing, and building predictable behavior
• Why this architecture could power the next era of scientific innovation

From brain-inspired message passing to emergent neural structures that literally appear during training, this is one of the most ambitious rethinks of AI architecture since Transformers themselves. If you want a window into what comes after LLMs, this interview is essential. Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai
48min•Jan 6, 2026
This 24-Year-Old Raised $64M to Build an AI Smarter Than the World's Best Mathematicians

Carina Hong dropped out of Stanford's PhD program to build "mathematical superintelligence" — and just raised $64M to do it. In this episode, we explore what that actually means: an AI that doesn't just solve math problems but discovers new theorems, proves them formally, and gets smarter with each iteration. Carina explains how her team solved a 130-year-old problem about Lyapunov functions, disproved a 30-year-old graph theory conjecture, and why math is the secret "bedrock" for everything from chip design to quant trading to coding agents. We also discuss the fascinating connections between neuroscience, AI, and mathematics.

Learn more about Axiom: https://axiommath.ai/
Subscribe to The Neuron newsletter: https://theneuron.ai
59min•Dec 30, 2025
How AI is Reinventing Chemistry (From a Trailer Lab to a $32B Partnership)

Nick Talken started a 3D printing materials company in a trailer lab in his co-founder's backyard, sold it to a 145-year-old German chemical giant, then spun out an AI platform that's now transforming R&D for Fortune 100 companies. Albert Invent's foundational AI model—trained on 15 million molecular structures—is helping scientists at companies like Kenvue (maker of Tylenol, Neutrogena, and Listerine) compress projects from 3 months to 2 days. We dig into how enterprises train bespoke AI models on proprietary data, why you can't just use ChatGPT for chemistry, and what becomes possible when AI can "think like a chemist."

Subscribe to The Neuron newsletter: https://theneuron.ai
Albert Invent website: https://www.albertinvent.com
Kenvue partnership announcement: https://www.businesswire.com/news/home/20251014240355/en/
40min•Dec 23, 2025