
Today, we’re featuring an episode from NPR’s science podcast Short Wave. In it, host Regina G. Barber talks to computer scientist Ilia Shumailov about maybe the buzziest topic around: AI. I’m sure you know AI models like OpenAI's ChatGPT are trained on millions of examples of human-written text. Nowadays, though, a lot of content on the Internet is written by these generative AI models themselves. That means that AI models trained now may consume their own synthetic content and suffer the consequences. What's the harm? Find out with this episode of Short Wave.

Learn more about our flagship conference happening this April at attend.ted.com/podcast

Hosted on Acast.