
In this conversation, computer scientist Roman Yampolskiy explains why artificial intelligence is fundamentally different from every technology humanity has created. This isn't about tools; it's about autonomous agents that can outthink, outmaneuver, and ultimately outcompete humans. We explore the Darwinian logic of superintelligence, why control mechanisms fail, and why, once AGI exists, human survival becomes a probabilistic outcome rather than a guarantee. From AI deception and self-preservation to simulations, consciousness, Bitcoin, and existential risk, this episode confronts an uncomfortable reality few want to face. This is neither optimism nor pessimism; it's a cold assessment of trajectory, incentives, and irreversibility.