<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <atom:link href="https://feeds.megaphone.fm/CSIS8973369418" rel="self" type="application/rss+xml"/>
    <title>The AI Policy Podcast</title>
    <language>en</language>
    <copyright></copyright>
    <description>Join CSIS’s Gregory C. Allen, senior adviser with the Wadhwani AI Center, on a deep dive into the world of AI policy. Every two weeks, tune in for insightful discussions on AI regulation, innovation, national security, and geopolitics. The AI Policy Podcast is produced by the Wadhwani Center for AI and Advanced Technologies at CSIS, a bipartisan think tank in Washington, D.C.</description>
    <image>
      <url>https://megaphone.imgix.net/podcasts/95599230-b558-11ee-b177-1b0512fd2559/image/d9868d4d78bd1a72307bd99f23febe50.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress</url>
      <title>The AI Policy Podcast</title>
    </image>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>The AI Policy Podcast</itunes:subtitle>
    <itunes:author>Center for Strategic and International Studies</itunes:author>
    <itunes:summary>Join CSIS’s Gregory C. Allen, senior adviser with the Wadhwani AI Center, on a deep dive into the world of AI policy. Every two weeks, tune in for insightful discussions on AI regulation, innovation, national security, and geopolitics. The AI Policy Podcast is produced by the Wadhwani Center for AI and Advanced Technologies at CSIS, a bipartisan think tank in Washington, D.C.</itunes:summary>
    <content:encoded>
      <![CDATA[<p>Join CSIS’s Gregory C. Allen, senior adviser with the Wadhwani AI Center, on a deep dive into the world of AI policy. Every two weeks, tune in for insightful discussions on AI regulation, innovation, national security, and geopolitics. The AI Policy Podcast is produced by the Wadhwani Center for AI and Advanced Technologies at CSIS, a bipartisan think tank in Washington, D.C.</p>]]>
    </content:encoded>
    <itunes:owner>
      <itunes:name>CSIS</itunes:name>
      <itunes:email>podcasts@csis.org</itunes:email>
    </itunes:owner>
    <itunes:image href="https://megaphone.imgix.net/podcasts/95599230-b558-11ee-b177-1b0512fd2559/image/d9868d4d78bd1a72307bd99f23febe50.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
    <itunes:category text="News">
      <itunes:category text="Tech News"/>
    </itunes:category>
    <itunes:category text="Technology">
    </itunes:category>
    <itunes:category text="Government">
    </itunes:category>
    <item>
      <title>The Next Chapter of the AI Policy Podcast</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this episode, founding host Gregory C. Allen announces his departure from CSIS and introduces Aalok Mehta, Director of the Wadhwani AI Center, as the new host of the AI Policy Podcast.</description>
      <pubDate>Thu, 16 Apr 2026 14:48:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, founding host Gregory C. Allen announces his departure from CSIS and introduces Aalok Mehta, Director of the Wadhwani AI Center, as the new host of the AI Policy Podcast.</itunes:subtitle>
      <itunes:summary>In this episode, founding host Gregory C. Allen announces his departure from CSIS and introduces Aalok Mehta, Director of the Wadhwani AI Center, as the new host of the AI Policy Podcast.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, founding host Gregory C. Allen announces his departure from CSIS and introduces Aalok Mehta, Director of the Wadhwani AI Center, as the new host of the AI Policy Podcast.</p>]]>
      </content:encoded>
      <itunes:duration>481</itunes:duration>
      <guid isPermaLink="false"><![CDATA[4754a8dc-39a3-11f1-99eb-dbda7986e048]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS2244667553.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Unpacking Russian Military AI with Kateryna Bondar</title>
      <description>In this episode, we are joined by Wadhwani AI Center fellow Kateryna Bondar to discuss her recent reports on Russia's military AI, "How Russia Is Building a Sovereign Drone Ecosystem for AI-Driven Autonomy" and "How Russia Is Reshaping Command and Control for AI-Enabled Warfare."

We cover Kateryna's background (1:07) before doing a deep dive into the role technological innovation has played in the conflict in Ukraine (7:49). Kateryna then explains why AI capabilities in warfare "cannot be built, can only be grown" (22:24) and unpacks the report's claim that Russia has likely fielded a fully autonomous unmanned system in combat (53:02).

Read Kateryna's report on Russia's AI-enabled C2 architecture: https://www.csis.org/analysis/how-russia-reshaping-command-and-control-ai-enabled-warfare

Read her report on Russia's sovereign drone ecosystem: https://www.csis.org/analysis/how-russia-building-sovereign-drone-ecosystem-ai-driven-autonomy</description>
      <pubDate>Tue, 14 Apr 2026 20:31:00 -0000</pubDate>
      <itunes:title>Unpacking Russian Military AI with Kateryna Bondar</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/de0c9f78-3840-11f1-a17b-3353f014580b/image/3825f1e91ac44162c5832343377dcd2e.jpeg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, we are joined by Wadhwani AI Center fellow Kateryna Bondar to discuss her recent reports on Russia's military AI, "How Russia Is Building a Sovereign Drone Ecosystem for AI-Driven Autonomy" and "How Russia Is Reshaping Command and Control for AI-Enabled Warfare." </itunes:subtitle>
      <itunes:summary>In this episode, we are joined by Wadhwani AI Center fellow Kateryna Bondar to discuss her recent reports on Russia's military AI, "How Russia Is Building a Sovereign Drone Ecosystem for AI-Driven Autonomy" and "How Russia Is Reshaping Command and Control for AI-Enabled Warfare."

We cover Kateryna's background (1:07) before doing a deep dive into the role technological innovation has played in the conflict in Ukraine (7:49). Kateryna then explains why AI capabilities in warfare "cannot be built, can only be grown" (22:24) and unpacks the report's claim that Russia has likely fielded a fully autonomous unmanned system in combat (53:02).

Read Kateryna's report on Russia's AI-enabled C2 architecture: https://www.csis.org/analysis/how-russia-reshaping-command-and-control-ai-enabled-warfare

Read her report on Russia's sovereign drone ecosystem: https://www.csis.org/analysis/how-russia-building-sovereign-drone-ecosystem-ai-driven-autonomy</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we are joined by Wadhwani AI Center fellow Kateryna Bondar to discuss her recent reports on Russia's military AI, "How Russia Is Building a Sovereign Drone Ecosystem for AI-Driven Autonomy" and "How Russia Is Reshaping Command and Control for AI-Enabled Warfare." </p>
<p>We cover Kateryna's background (1:07) before doing a deep dive into the role technological innovation has played in the conflict in Ukraine (7:49). Kateryna then explains why AI capabilities in warfare "cannot be built, can only be grown" (22:24) and unpacks the report's claim that Russia has likely fielded a fully autonomous unmanned system in combat (53:02).</p>
<p>Read Kateryna's report on Russia's AI-enabled C2 architecture <a href="https://www.csis.org/analysis/how-russia-reshaping-command-and-control-ai-enabled-warfare">here</a>. </p>
<p>Read her report on Russia's sovereign drone ecosystem <a href="https://www.csis.org/analysis/how-russia-building-sovereign-drone-ecosystem-ai-driven-autonomy">here</a>. </p>]]>
      </content:encoded>
      <itunes:duration>4508</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[de0c9f78-3840-11f1-a17b-3353f014580b]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS4975917959.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Inside Project Maven and AI-Powered Warfare with Katrina Manson</title>
      <description>In this special episode, we sit down with Katrina Manson, author of Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare. We explore what drew Katrina to this story (2:31), trace the turbulent history of Project Maven and the obstacles it has overcome (8:03), examine how the U.S. leveraged Maven to support Ukraine against Russia (31:32), and discuss Katrina's latest reporting on Maven's role in the ongoing conflict with Iran (47:40).

You can order a copy of Katrina's book at https://wwnorton.com/books/project-maven.</description>
      <pubDate>Thu, 26 Mar 2026 18:30:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this special episode, we sit down with Katrina Manson, author of Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare.</itunes:subtitle>
      <itunes:summary>In this special episode, we sit down with Katrina Manson, author of Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare. We explore what drew Katrina to this story (2:31), trace the turbulent history of Project Maven and the obstacles it has overcome (8:03), examine how the U.S. leveraged Maven to support Ukraine against Russia (31:32), and discuss Katrina's latest reporting on Maven's role in the ongoing conflict with Iran (47:40).

You can order a copy of Katrina's book at https://wwnorton.com/books/project-maven.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this special episode, we sit down with Katrina Manson, author of <em>Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare</em>. We explore what drew Katrina to this story (2:31), trace the turbulent history of Project Maven and the obstacles it has overcome (8:03), examine how the U.S. leveraged Maven to support Ukraine against Russia (31:32), and discuss Katrina's latest reporting on Maven's role in the ongoing conflict with Iran (47:40).</p>
<p>You can order a copy of Katrina's book <a href="https://wwnorton.com/books/project-maven">here</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3689</itunes:duration>
      <guid isPermaLink="false"><![CDATA[d33a3a14-2941-11f1-8042-7733fcb4cbf2]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS7027089480.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Trump's National AI Framework and Super Micro's Chip Smuggling Indictment</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this episode, we unpack President Trump's new national framework for AI legislation (11:15), including reactions from experts and policymakers (39:44). We also discuss the indictment of a Super Micro co-founder for smuggling Nvidia chips into China (42:59) and Nvidia receiving permission to sell H200 chips to China (57:04).</description>
      <pubDate>Tue, 24 Mar 2026 15:31:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we unpack President Trump's new national framework for AI legislation, including reactions from experts and policymakers.</itunes:subtitle>
      <itunes:summary>In this episode, we unpack President Trump's new national framework for AI legislation (11:15), including reactions from experts and policymakers (39:44). We also discuss the indictment of a Super Micro co-founder for smuggling Nvidia chips into China (42:59) and Nvidia receiving permission to sell H200 chips to China (57:04).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we unpack President Trump's new national framework for AI legislation (11:15), including reactions from experts and policymakers (39:44). We also discuss the indictment of a Super Micro co-founder for smuggling Nvidia chips into China (42:59) and Nvidia receiving permission to sell H200 chips to China (57:04).</p>]]>
      </content:encoded>
      <itunes:duration>3761</itunes:duration>
      <guid isPermaLink="false"><![CDATA[8926b30a-2796-11f1-b1fe-93cd3b707911]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS8196044363.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Anthropic Goes to Court While Claude Goes to War in Iran</title>
      <description>In this episode, we provide a detailed update on the Anthropic-Pentagon clash, including the Trump Administration's decision to label Anthropic "a supply chain risk" (4:17), the lawsuits Anthropic has filed in response (11:45), and what these lawsuits and recent reporting reveal about how Claude has been used in the war in Iran (43:20).</description>
      <pubDate>Wed, 11 Mar 2026 19:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we explore the Anthropic-Pentagon clash following recent updates. </itunes:subtitle>
      <itunes:summary>In this episode, we provide a detailed update on the Anthropic-Pentagon clash, including the Trump Administration's decision to label Anthropic "a supply chain risk" (4:17), the lawsuits Anthropic has filed in response (11:45), and what these lawsuits and recent reporting reveal about how Claude has been used in the war in Iran (43:20).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we provide a detailed update on the Anthropic-Pentagon clash, including the Trump Administration's decision to label Anthropic "a supply chain risk" (4:17), the lawsuits Anthropic has filed in response (11:45), and what these lawsuits and recent reporting reveal about how Claude has been used in the war in Iran (43:20).</p>]]>
      </content:encoded>
      <itunes:duration>3797</itunes:duration>
      <guid isPermaLink="false"><![CDATA[a4f5fe8e-1d7c-11f1-b324-9797dc8e2ccb]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS6362563890.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>A Crash Course on AI Standards with Google DeepMind's Owen Larter</title>
      <description>In this episode, we're joined by Owen Larter, Head of Frontier Policy and Public Affairs at Google DeepMind, to explore the often-overlooked world of AI standards and the role they play in shaping how AI is developed and governed. We discuss what standards are and why they matter for technological progress (2:53), how standards are developed and the key organizations involved (16:05), the relationship between standards and AI regulation like the EU AI Act (26:58), and more.</description>
      <pubDate>Fri, 06 Mar 2026 19:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Owen Larter, Head of Frontier Policy and Public Affairs at Google DeepMind, joins to explore the often-overlooked world of AI standards and the role they play in shaping how AI is developed and governed.</itunes:subtitle>
      <itunes:summary>In this episode, we're joined by Owen Larter, Head of Frontier Policy and Public Affairs at Google DeepMind, to explore the often-overlooked world of AI standards and the role they play in shaping how AI is developed and governed. We discuss what standards are and why they matter for technological progress (2:53), how standards are developed and the key organizations involved (16:05), the relationship between standards and AI regulation like the EU AI Act (26:58), and more.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we're joined by Owen Larter, Head of Frontier Policy and Public Affairs at Google DeepMind, to explore the often-overlooked world of AI standards and the role they play in shaping how AI is developed and governed. We discuss what standards are and why they matter for technological progress (2:53), how standards are developed and the key organizations involved (16:05), the relationship between standards and AI regulation like the EU AI Act (26:58), and more.</p>]]>
      </content:encoded>
      <itunes:duration>2324</itunes:duration>
      <guid isPermaLink="false"><![CDATA[d6e56d84-198b-11f1-be68-777d409375b5]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS8490758679.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Andreessen Horowitz's Jai Ramaswamy, Matt Perault: AI Regulation &amp; Innovation</title>
      <description>In this episode of the AI Policy Podcast, Wadhwani AI Center senior adviser Gregory C. Allen is joined by Andreessen Horowitz Chief Legal and Policy Officer Jai Ramaswamy and head of AI policy Matt Perault for a discussion on a16z's AI policy agenda. They cover a16z's entrance into politics, their position on state and federal AI regulation, and how to ensure AI benefits society.

Jai Ramaswamy is Chief Legal and Policy Officer at Andreessen Horowitz, overseeing the firm's legal, compliance, and government affairs functions. Previously, he was Chief Risk and Compliance Officer at cLabs. He has also served as the Head of Enterprise Risk Management at Capital One and Global Head of AML Compliance Risk Management at Bank of America/Merrill Lynch. Before joining the private sector, Jai worked for over a decade at the Justice Department, including as Chief of the Asset Forfeiture and Money Laundering Section. 

Matt Perault is the head of AI policy at Andreessen Horowitz, where he oversees the firm's policy strategy on AI and helps portfolio companies navigate the AI policy landscape. Before joining a16z, he was the director of the Center on Technology Policy at University of North Carolina Chapel Hill. He also previously served as head of global policy development at Facebook. Matt is a fellow at the Center on Technology Policy at New York University, the Abundance Institute, and the National Security Institute at the George Mason University Antonin Scalia Law School.</description>
      <pubDate>Tue, 03 Mar 2026 15:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Gregory C. Allen is joined by Jai Ramaswamy and Matt Perault for a discussion on a16z's AI policy agenda.</itunes:subtitle>
      <itunes:summary>In this episode of the AI Policy Podcast, Wadhwani AI Center senior adviser Gregory C. Allen is joined by Andreessen Horowitz Chief Legal and Policy Officer Jai Ramaswamy and head of AI policy Matt Perault for a discussion on a16z's AI policy agenda. They cover a16z's entrance into politics, their position on state and federal AI regulation, and how to ensure AI benefits society.

Jai Ramaswamy is Chief Legal and Policy Officer at Andreessen Horowitz, overseeing the firm's legal, compliance, and government affairs functions. Previously, he was Chief Risk and Compliance Officer at cLabs. He has also served as the Head of Enterprise Risk Management at Capital One and Global Head of AML Compliance Risk Management at Bank of America/Merrill Lynch. Before joining the private sector, Jai worked for over a decade at the Justice Department, including as Chief of the Asset Forfeiture and Money Laundering Section. 

Matt Perault is the head of AI policy at Andreessen Horowitz, where he oversees the firm's policy strategy on AI and helps portfolio companies navigate the AI policy landscape. Before joining a16z, he was the director of the Center on Technology Policy at University of North Carolina Chapel Hill. He also previously served as head of global policy development at Facebook. Matt is a fellow at the Center on Technology Policy at New York University, the Abundance Institute, and the National Security Institute at the George Mason University Antonin Scalia Law School.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode of the AI Policy Podcast, Wadhwani AI Center senior adviser Gregory C. Allen is joined by Andreessen Horowitz Chief Legal and Policy Officer Jai Ramaswamy and head of AI policy Matt Perault for a discussion on a16z's AI policy agenda. They cover a16z's entrance into politics, their position on state and federal AI regulation, and how to ensure AI benefits society.</p>
<p>Jai Ramaswamy is Chief Legal and Policy Officer at Andreessen Horowitz, overseeing the firm's legal, compliance, and government affairs functions. Previously, he was Chief Risk and Compliance Officer at cLabs. He has also served as the Head of Enterprise Risk Management at Capital One and Global Head of AML Compliance Risk Management at Bank of America/Merrill Lynch. Before joining the private sector, Jai worked for over a decade at the Justice Department, including as Chief of the Asset Forfeiture and Money Laundering Section.</p>
<p>Matt Perault is the head of AI policy at Andreessen Horowitz, where he oversees the firm's policy strategy on AI and helps portfolio companies navigate the AI policy landscape. Before joining a16z, he was the director of the Center on Technology Policy at the University of North Carolina at Chapel Hill. He also previously served as head of global policy development at Facebook. Matt is a fellow at the Center on Technology Policy at New York University, the Abundance Institute, and the National Security Institute at the George Mason University Antonin Scalia Law School.</p>]]>
      </content:encoded>
      <itunes:duration>4220</itunes:duration>
      <guid isPermaLink="false"><![CDATA[0d59d834-170f-11f1-ade9-536cc69e410f]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS2815867538.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Inside Anthropic's Standoff with the Pentagon and What It Means for Military AI</title>
      <description>In this episode, we break down the escalating Anthropic-Pentagon clash, including the best arguments for either side, Defense Secretary Pete Hegseth's ultimatum, and the potential consequences of designating Anthropic as a "supply chain risk" or invoking the Defense Production Act (00:34). We then discuss several recent stories that are sparking discourse about the economic impacts of AI (28:58) and a senior government official's claim that DeepSeek's forthcoming model was trained using Nvidia's Blackwell chips and frontier model distillation (45:51).</description>
      <pubDate>Wed, 25 Feb 2026 17:19:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>We unpack the Anthropic-Pentagon clash, Pete Hegseth’s ultimatum, DPA risks, AI’s economic impacts, and claims DeepSeek used Nvidia Blackwell chips.</itunes:subtitle>
      <itunes:summary>In this episode, we break down the escalating Anthropic-Pentagon clash, including the best arguments for either side, Defense Secretary Pete Hegseth's ultimatum, and the potential consequences of designating Anthropic as a "supply chain risk" or invoking the Defense Production Act (00:34). We then discuss several recent stories that are sparking discourse about the economic impacts of AI (28:58) and a senior government official's claim that DeepSeek's forthcoming model was trained using Nvidia's Blackwell chips and frontier model distillation (45:51).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we break down the escalating Anthropic-Pentagon clash, including the best arguments for either side, Defense Secretary Pete Hegseth's ultimatum, and the potential consequences of designating Anthropic as a "supply chain risk" or invoking the Defense Production Act (00:34). We then discuss several recent stories that are sparking discourse about the economic impacts of AI (28:58) and a senior government official's claim that DeepSeek's forthcoming model was trained using Nvidia's Blackwell chips and frontier model distillation (45:51).</p>]]>
      </content:encoded>
      <itunes:duration>3607</itunes:duration>
      <guid isPermaLink="false"><![CDATA[47146e22-126e-11f1-ba3b-4b1f6035e319]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS1646920278.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Live from New Delhi: Our Takeaways from the India AI Impact Summit</title>
      <description>This special episode was recorded in India on the last day of the India AI Impact Summit. We discuss the highlights of our experience at the Summit (00:25), whether India accomplished its goals for the event (10:25), major AI investments announced (16:01), and key messages from speeches by AI CEOs (19:45) and government officials (28:35).</description>
      <pubDate>Fri, 20 Feb 2026 19:19:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>This special episode was recorded in India on the last day of the India AI Impact Summit. </itunes:subtitle>
      <itunes:summary>This special episode was recorded in India on the last day of the India AI Impact Summit. We discuss the highlights of our experience at the Summit (00:25), whether India accomplished its goals for the event (10:25), major AI investments announced (16:01), and key messages from speeches by AI CEOs (19:45) and government officials (28:35).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>This special episode was recorded in India on the last day of the India AI Impact Summit. We discuss the highlights of our experience at the Summit (00:25), whether India accomplished its goals for the event (10:25), major AI investments announced (16:01), and key messages from speeches by AI CEOs (19:45) and government officials (28:35).</p>]]>
      </content:encoded>
      <itunes:duration>3050</itunes:duration>
      <guid isPermaLink="false"><![CDATA[159aa806-0e91-11f1-af33-537da37910a7]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS6671834803.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Inside The Second International AI Safety Report with Writers Stephen Clare and Stephen Casper</title>
      <description>The second International AI Safety Report, released on February 3, brings together insights from over 100 AI experts across 30 countries to assess the current state of frontier AI systems. The report examines advanced models' capabilities, the risks they pose, and the technical and governance measures needed to ensure their safe development and deployment. 

In this episode of the AI Policy Podcast, Wadhwani AI Center senior adviser Gregory C. Allen is joined by lead writer Stephen Clare and MIT Ph.D. student Stephen Casper, who authored the section on technical safeguards. They discuss how the latest Safety Report compares to the first edition published last year, explore the Report’s findings on technical safeguards, and unpack the document’s key policy implications.</description>
      <pubDate>Tue, 10 Feb 2026 15:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Lead writer Stephen Clare and MIT Ph.D. student Stephen Casper discuss how the latest Safety Report compares to the first edition, explore its findings on technical safeguards, and unpack its key policy implications.</itunes:subtitle>
      <itunes:summary>The second International AI Safety Report, released on February 3, brings together insights from over 100 AI experts across 30 countries to assess the current state of frontier AI systems. The report examines advanced models' capabilities, the risks they pose, and the technical and governance measures needed to ensure their safe development and deployment. 

In this episode of the AI Policy Podcast, Wadhwani AI Center senior adviser Gregory C. Allen is joined by lead writer Stephen Clare and MIT Ph.D. student Stephen Casper, who authored the section on technical safeguards. They discuss how the latest Safety Report compares to the first edition published last year, explore the Report’s findings on technical safeguards, and unpack the document’s key policy implications.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>The second <a href="https://internationalaisafetyreport.org/publication/international-ai-safety-report-2026"><strong>International AI Safety Report</strong></a>, released on February 3, brings together insights from over 100 AI experts across 30 countries to assess the current state of frontier AI systems. The report examines advanced models' capabilities, the risks they pose, and the technical and governance measures needed to ensure their safe development and deployment. </p>
<p>In this episode of the <strong>AI Policy Podcast</strong>, Wadhwani AI Center senior adviser <strong>Gregory C. Allen </strong>is joined by lead writer <strong>Stephen Clare </strong>and MIT Ph.D. student <strong>Stephen Casper</strong>, who authored the section on technical safeguards. They discuss how the latest Safety Report compares to the first edition published last year, explore the Report’s findings on technical safeguards, and unpack the document’s key policy implications.</p>]]>
      </content:encoded>
      <itunes:duration>5634</itunes:duration>
      <guid isPermaLink="false"><![CDATA[8ef5defa-068f-11f1-8a27-032693a2d85d]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS5925850823.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Jennifer Pahlka on Reforming Government for the AI Era</title>
      <description>In this special episode recorded at Fathom’s 2026 Ashby Workshops, Greg sits down with Jennifer Pahlka, founder of Code for America and author of Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better. Jennifer walks us through her career journey, from filing paperwork at a child welfare agency to helping pioneer the U.S. Digital Service in the Obama administration (3:45). She describes the need for upstream policy reform (11:29), and discusses AI’s potential to both empower public servants to challenge antiquated practices and help policymakers simplify complex regulations (28:03). Finally, Jennifer shares some AI use cases she’s particularly excited about in government (59:34).

Jennifer Pahlka is a senior fellow at the Niskanen Center and the Federation of American Scientists and a senior advisor at the Abundance Network. She previously served as U.S. Deputy Chief Technology Officer, helping start the U.S. Digital Service under the second Obama administration, and as a member of the Defense Innovation Network.

Read Jennifer’s book Recoding America and check out her Substack, Eating Policy.

Jennifer’s recommended reading:

- Hack Your Bureaucracy by Marina Nitze &amp; Nick Sinai
- Crisis Engineering by Marina Nitze, Matthew Weaver, &amp; Mikey Dickerson
- The Procedure Fetish by Nicholas Bagley
- Why Nothing Works by Marc J. Dunkelman
- Kill It with Fire by Marianne Bellotti</description>
      <pubDate>Thu, 05 Feb 2026 14:56:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Greg sits down with Jennifer Pahlka, founder of Code for America and author of Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better. </itunes:subtitle>
      <itunes:summary>In this special episode recorded at Fathom’s 2026 Ashby Workshops, Greg sits down with Jennifer Pahlka, founder of Code for America and author of Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better. Jennifer walks us through her career journey, from filing paperwork at a child welfare agency to helping pioneer the U.S. Digital Service in the Obama administration (3:45). She describes the need for upstream policy reform (11:29) and discusses AI’s potential to both empower public servants to challenge antiquated practices and help policymakers simplify complex regulations (28:03). Finally, Jennifer shares some AI use cases she’s particularly excited about in government (59:34).

 

Jennifer Pahlka is a senior fellow at the Niskanen Center and the Federation of American Scientists and a senior advisor at the Abundance Network. She previously served as U.S. Deputy Chief Technology Officer, helping start the U.S. Digital Service under the second Obama administration, and as a member of the Defense Innovation Board.

 

Read Jennifer’s book Recoding America and check out her Substack Eating Policy.

 

Jennifer’s recommended reading:

- Hack Your Bureaucracy by Marina Nitze &amp; Nick Sinai
- Crisis Engineering by Marina Nitze, Matthew Weaver, &amp; Mikey Dickerson
- The Procedure Fetish by Nicholas Bagley
- Why Nothing Works by Marc J. Dunkelman
- Kill It with Fire by Marianne Bellotti</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this special episode recorded at Fathom’s 2026 Ashby Workshops, Greg sits down with Jennifer Pahlka, founder of Code for America and author of <em>Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better</em>. Jennifer walks us through her career journey, from filing paperwork at a child welfare agency to helping pioneer the U.S. Digital Service in the Obama administration (3:45). She describes the need for upstream policy reform (11:29) and discusses AI’s potential to both empower public servants to challenge antiquated practices and help policymakers simplify complex regulations (28:03). Finally, Jennifer shares some AI use cases she’s particularly excited about in government (59:34).</p>
<p> </p>
<p>Jennifer Pahlka is a senior fellow at the Niskanen Center and the Federation of American Scientists and a senior advisor at the Abundance Network. She previously served as U.S. Deputy Chief Technology Officer, helping start the U.S. Digital Service under the second Obama administration, and as a member of the Defense Innovation Board.</p>
<p> </p>
<p>Read Jennifer’s book <a href="https://www.recodingamerica.us/">Recoding America</a> and check out her Substack <a href="https://www.eatingpolicy.com/">Eating Policy</a>.</p>
<p> </p>
<p>Jennifer’s recommended reading:</p>
<ul>
  <li>
<p><a href="https://www.hackyourbureaucracy.com/">Hack Your Bureaucracy</a> by Marina Nitze &amp; Nick Sinai</p>
</li>
  <li>
<p><a href="https://www.hachettebookgroup.com/titles/marina-nitze/crisis-engineering/9781668652060/?lens=balance">Crisis Engineering</a> by Marina Nitze, Matthew Weaver, &amp; Mikey Dickerson</p>
</li>
  <li>
<p><a href="https://www.niskanencenter.org/the-procedure-fetish/">The Procedure Fetish</a> by Nicholas Bagley</p>
</li>
  <li>
<p><a href="https://www.hachettebookgroup.com/titles/marc-j-dunkelman/why-nothing-works/9781541700215/">Why Nothing Works</a> by Marc J. Dunkelman</p>
</li>
  <li>
<p><a href="https://www.penguinrandomhouse.ca/books/667571/kill-it-with-fire-by-marianne-bellotti/9781718501188">Kill It with Fire</a> by Marianne Bellotti</p>
</li>
</ul>]]>
      </content:encoded>
      <itunes:duration>4200</itunes:duration>
      <guid isPermaLink="false"><![CDATA[de3de688-02a2-11f1-9913-e7c32f1590cd]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS9063680554.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The Indian and French Ambassadors to the US on Global AI Summits</title>
      <description>This episode cross-posts a fireside chat with the Ambassadors of India and France to the United States, Amb. Vinay Kwatra and Amb. Laurent Bili. The discussion was recorded at the Wadhwani AI Center’s January 30 conference, “Exploring Global AI Policy Priorities Ahead of the India AI Impact Summit.” A full recording of the conference, including additional panels and speakers, can be found here.</description>
      <pubDate>Fri, 30 Jan 2026 21:25:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>This episode cross-posts a fireside chat with the Ambassadors of India and France to the United States, Amb. Vinay Kwatra and Amb. Laurent Bili.</itunes:subtitle>
      <itunes:summary>This episode cross-posts a fireside chat with the Ambassadors of India and France to the United States, Amb. Vinay Kwatra and Amb. Laurent Bili. The discussion was recorded at the Wadhwani AI Center’s January 30 conference, “Exploring Global AI Policy Priorities Ahead of the India AI Impact Summit.” A full recording of the conference, including additional panels and speakers, can be found here.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>This episode cross-posts a fireside chat with the Ambassadors of India and France to the United States, Amb. Vinay Kwatra and Amb. Laurent Bili. The discussion was recorded at the Wadhwani AI Center’s January 30 conference, “Exploring Global AI Policy Priorities Ahead of the India AI Impact Summit.” A full recording of the conference, including additional panels and speakers, can be found <a href="https://www.csis.org/events/exploring-global-ai-policy-priorities-ahead-india-ai-impact-summit">here</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2182</itunes:duration>
      <guid isPermaLink="false"><![CDATA[466c2272-fe22-11f0-b1df-7fd930cbb7c4]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS4588669248.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The Future of Nvidia’s H200 in China and the Pentagon's New AI Strategy</title>
      <description>In this episode, we discuss and evaluate BIS's new export policy for Nvidia's H200 chips (00:31) before turning to Beijing's decision to block H200 imports (20:18). We then unpack the Pentagon's recently published AI Strategy, including the shift it represents in DOW's approach to AI integration (29:17).

 

Read the CNAS commentary "Unpacking the H200 Export Policy" here.</description>
      <pubDate>Thu, 22 Jan 2026 14:19:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>We talk export policy for Nvidia's H200 chips and the Pentagon's new AI Strategy. </itunes:subtitle>
      <itunes:summary>In this episode, we discuss and evaluate BIS's new export policy for Nvidia's H200 chips (00:31) before turning to Beijing's decision to block H200 imports (20:18). We then unpack the Pentagon's recently published AI Strategy, including the shift it represents in DOW's approach to AI integration (29:17).

 

Read the CNAS commentary "Unpacking the H200 Export Policy" here.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss and evaluate BIS's new export policy for Nvidia's H200 chips (00:31) before turning to Beijing's decision to block H200 imports (20:18). We then unpack the Pentagon's recently published AI Strategy, including the shift it represents in DOW's approach to AI integration (29:17).</p>
<p> </p>
<p>Read the CNAS commentary "Unpacking the H200 Export Policy" <a href="https://www.cnas.org/publications/commentary/cnas-insights-unpacking-the-h200-export-policy">here</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3576</itunes:duration>
      <guid isPermaLink="false"><![CDATA[fc965e7c-f79d-11f0-a909-e39ec2709480]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS8028055133.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>xAI's Latest Controversy and New York's New AI Safety Bill</title>
      <description>In this episode, we examine Grok’s public posting of child sexual abuse material and non-consensual intimate imagery (00:27), the legal consequences xAI may face (12:41), and the international policy community's response (19:05). We then unpack New York’s RAISE Act, including the politics leading up to Gov. Hochul’s signature (22:51) and the final outcome of negotiations (28:16).</description>
      <pubDate>Fri, 09 Jan 2026 14:52:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>We examine Grok’s CSAM controversy, xAI’s legal risk, global reactions, and New York’s RAISE Act.</itunes:subtitle>
      <itunes:summary>In this episode, we examine Grok’s public posting of child sexual abuse material and non-consensual intimate imagery (00:27), the legal consequences xAI may face (12:41), and the international policy community's response (19:05). We then unpack New York’s RAISE Act, including the politics leading up to Gov. Hochul’s signature (22:51) and the final outcome of negotiations (28:16).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we examine Grok’s public posting of child sexual abuse material and non-consensual intimate imagery (00:27), the legal consequences xAI may face (12:41), and the international policy community's response (19:05). We then unpack New York’s RAISE Act, including the politics leading up to Gov. Hochul’s signature (22:51) and the final outcome of negotiations (28:16).<br></p>]]>
      </content:encoded>
      <itunes:duration>2630</itunes:duration>
      <guid isPermaLink="false"><![CDATA[cebabc60-ed6a-11f0-88c2-abf3ced10b9c]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS6553896119.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>China's EUV Manhattan Project and Export Control Mythbusting with Chris McGuire</title>
      <description>In this episode, we're joined by Chris McGuire for a conversation about AI and semiconductor export controls. We begin by discussing Chris' career path into AI and national security (1:55), then turn to his views on recent developments, including reports about a Chinese EUV prototype (11:07). We spend the rest of the episode rating common arguments against AI export controls as fact, fiction, or somewhere in-between (40:25).

 

Chris is a Senior Fellow for China and Emerging Technologies at the Council on Foreign Relations and a leading expert on U.S.-China AI competition. Before joining CFR, he served as a career government official for over a decade, including as Deputy Senior Director for Technology and National Security at the National Security Council (NSC) from 2022 to 2024. Links to some of Chris' recent work, as discussed in the podcast, are included below.

 

China’s AI Chip Deficit: Why Huawei Can’t Catch Nvidia and U.S. Export Controls Should Remain

 

Testimony on Strengthening Export Controls on Semiconductor Manufacturing Equipment</description>
      <pubDate>Tue, 06 Jan 2026 15:40:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we're joined by Chris McGuire for a conversation about AI and semiconductor export controls. </itunes:subtitle>
      <itunes:summary>In this episode, we're joined by Chris McGuire for a conversation about AI and semiconductor export controls. We begin by discussing Chris' career path into AI and national security (1:55), then turn to his views on recent developments, including reports about a Chinese EUV prototype (11:07). We spend the rest of the episode rating common arguments against AI export controls as fact, fiction, or somewhere in-between (40:25).

 

Chris is a Senior Fellow for China and Emerging Technologies at the Council on Foreign Relations and a leading expert on U.S.-China AI competition. Before joining CFR, he served as a career government official for over a decade, including as Deputy Senior Director for Technology and National Security at the National Security Council (NSC) from 2022 to 2024. Links to some of Chris' recent work, as discussed in the podcast, are included below.

 

China’s AI Chip Deficit: Why Huawei Can’t Catch Nvidia and U.S. Export Controls Should Remain

 

Testimony on Strengthening Export Controls on Semiconductor Manufacturing Equipment</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we're joined by Chris McGuire for a conversation about AI and semiconductor export controls. We begin by discussing Chris' career path into AI and national security (1:55), then turn to his views on recent developments, including reports about a Chinese EUV prototype (11:07). We spend the rest of the episode rating common arguments against AI export controls as fact, fiction, or somewhere in-between (40:25).</p>
<p> </p>
<p>Chris is a Senior Fellow for China and Emerging Technologies at the Council on Foreign Relations and a leading expert on U.S.-China AI competition. Before joining CFR, he served as a career government official for over a decade, including as Deputy Senior Director for Technology and National Security at the National Security Council (NSC) from 2022 to 2024. Links to some of Chris' recent work, as discussed in the podcast, are included below.</p>
<p> </p>
<p><a href="https://www.cfr.org/article/chinas-ai-chip-deficit-why-huawei-cant-catch-nvidia-and-us-export-controls-should-remain">China’s AI Chip Deficit: Why Huawei Can’t Catch Nvidia and U.S. Export Controls Should Remain</a></p>
<p> </p>
<p><a href="https://www.cfr.org/report/protecting-foundation-strengthening-export-controls-semiconductor-manufacturing-equipment">Testimony on Strengthening Export Controls on Semiconductor Manufacturing Equipment</a></p>]]>
      </content:encoded>
      <itunes:duration>5308</itunes:duration>
      <guid isPermaLink="false"><![CDATA[25946ba8-dd21-11f0-8745-938d0ca0b632]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS2284500845.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Previewing India's AI Impact Summit with MeitY Secretary S. Krishnan</title>
      <description>Since 2023, a series of global AI summits has brought together world leaders to advance international dialogue and cooperation on artificial intelligence. Building on this momentum, Prime Minister Narendra Modi announced the India AI Impact Summit, which will take place in New Delhi in February 2026. As the first summit in the series to be hosted in a Global South country, the AI Impact Summit aims to amplify Global South perspectives and advance concrete action to address both the opportunities and risks of AI. 

On December 8, 2025, the CSIS Wadhwani AI Center hosted S. Krishnan, Secretary of India’s Ministry of Electronics and Information Technology (MeitY), for a livestreamed fireside chat with Wadhwani AI Center Senior Adviser Gregory C. Allen. Secretary Krishnan, who leads India’s national AI strategy, outlined India’s policy priorities and shared insights into the goals and global aspirations shaping the upcoming AI Impact Summit. He also offered a comprehensive look at the central role MeitY plays in driving innovation across India’s AI ecosystem.

Secretary Krishnan brings more than 35 years of experience in public service, having joined the Indian Administrative Service in 1989. Prior to his current role, he served as the Additional Chief Secretary of the Industries, Investment Promotion and Commerce Department in the Government of Tamil Nadu. He has also served as Senior Advisor in the Office of the Executive Director for India, Sri Lanka, Bangladesh, and Bhutan at the International Monetary Fund, and has represented India in the G20 Expert Groups on International Financial Architecture and Global Financial Safety Nets. Secretary Krishnan holds a bachelor’s degree from St. Stephen’s College in Delhi.</description>
      <pubDate>Mon, 29 Dec 2025 05:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Greg speaks with S. Krishnan, Secretary of India’s Ministry of Electronics and Information Technology.</itunes:subtitle>
      <itunes:summary>Since 2023, a series of global AI summits has brought together world leaders to advance international dialogue and cooperation on artificial intelligence. Building on this momentum, Prime Minister Narendra Modi announced the India AI Impact Summit, which will take place in New Delhi in February 2026. As the first summit in the series to be hosted in a Global South country, the AI Impact Summit aims to amplify Global South perspectives and advance concrete action to address both the opportunities and risks of AI. 

On December 8, 2025, the CSIS Wadhwani AI Center hosted S. Krishnan, Secretary of India’s Ministry of Electronics and Information Technology (MeitY), for a livestreamed fireside chat with Wadhwani AI Center Senior Adviser Gregory C. Allen. Secretary Krishnan, who leads India’s national AI strategy, outlined India’s policy priorities and shared insights into the goals and global aspirations shaping the upcoming AI Impact Summit. He also offered a comprehensive look at the central role MeitY plays in driving innovation across India’s AI ecosystem.

Secretary Krishnan brings more than 35 years of experience in public service, having joined the Indian Administrative Service in 1989. Prior to his current role, he served as the Additional Chief Secretary of the Industries, Investment Promotion and Commerce Department in the Government of Tamil Nadu. He has also served as Senior Advisor in the Office of the Executive Director for India, Sri Lanka, Bangladesh, and Bhutan at the International Monetary Fund, and has represented India in the G20 Expert Groups on International Financial Architecture and Global Financial Safety Nets. Secretary Krishnan holds a bachelor’s degree from St. Stephen’s College in Delhi.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Since 2023, a series of global AI summits has brought together world leaders to advance international dialogue and cooperation on artificial intelligence. Building on this momentum, Prime Minister Narendra Modi announced the India AI Impact Summit, which will take place in New Delhi in February 2026. As the first summit in the series to be hosted in a Global South country, the AI Impact Summit aims to amplify Global South perspectives and advance concrete action to address both the opportunities and risks of AI. 

On December 8, 2025, the CSIS Wadhwani AI Center hosted S. Krishnan, Secretary of India’s Ministry of Electronics and Information Technology (MeitY), for a livestreamed fireside chat with Wadhwani AI Center Senior Adviser Gregory C. Allen. Secretary Krishnan, who leads India’s national AI strategy, outlined India’s policy priorities and shared insights into the goals and global aspirations shaping the upcoming AI Impact Summit. He also offered a comprehensive look at the central role MeitY plays in driving innovation across India’s AI ecosystem.

Secretary Krishnan brings more than 35 years of experience in public service, having joined the Indian Administrative Service in 1989. Prior to his current role, he served as the Additional Chief Secretary of the Industries, Investment Promotion and Commerce Department in the Government of Tamil Nadu. He has also served as Senior Advisor in the Office of the Executive Director for India, Sri Lanka, Bangladesh, and Bhutan at the International Monetary Fund, and has represented India in the G20 Expert Groups on International Financial Architecture and Global Financial Safety Nets. Secretary Krishnan holds a bachelor’s degree from St. Stephen’s College in Delhi.</p>]]>
      </content:encoded>
      <itunes:duration>3272</itunes:duration>
      <guid isPermaLink="false"><![CDATA[c04658d4-dc4d-11f0-8dd2-2be934c243f9]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS1972066315.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Trump Signs EO Targeting State AI Laws While Meta Showcases Risks of Weak Tech Regulation</title>
      <description>In this episode, we unpack President Trump’s new executive order targeting state AI laws, including how the final version compares to an earlier draft (1:26), and the legal and political challenges it is likely to face (14:46). We then discuss recent Reuters reporting on Meta’s reliance on scam-driven ad revenue (22:12) and what the social media experience suggests about the risks of failing to regulate AI (45:21).</description>
      <pubDate>Thu, 18 Dec 2025 05:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>We talk Trump's new EO on state AI laws and Meta's reliance on scam-driven ad revenue.</itunes:subtitle>
      <itunes:summary>In this episode, we unpack President Trump’s new executive order targeting state AI laws, including how the final version compares to an earlier draft (1:26), and the legal and political challenges it is likely to face (14:46). We then discuss recent Reuters reporting on Meta’s reliance on scam-driven ad revenue (22:12) and what the social media experience suggests about the risks of failing to regulate AI (45:21).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we unpack President Trump’s new executive order targeting state AI laws, including how the final version compares to an earlier draft (1:26), and the legal and political challenges it is likely to face (14:46). We then discuss recent Reuters reporting on Meta’s reliance on scam-driven ad revenue (22:12) and what the social media experience suggests about the risks of failing to regulate AI (45:21).</p>]]>
      </content:encoded>
      <itunes:duration>3393</itunes:duration>
      <guid isPermaLink="false"><![CDATA[56eac4da-db8d-11f0-bb2d-a7ccdbfaeb64]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS3164207986.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>White House Greenlights H200 Exports, DOE Unveils Genesis Mission, and Insurers Move to Limit AI Coverage</title>
      <description>In this episode, we break down the White House’s decision to let Nvidia’s H200 chips be exported to China and Greg’s case against the move (00:33). We then discuss Trump’s planned “One Rule” executive order to preempt state AI laws (18:59), examine the NDAA's proposed AI Futures Steering Committee (23:09), and analyze the Genesis Mission executive order (26:07), comparing its ambitions and funding reality to the Manhattan Project and Apollo program. We close by looking at why major insurers are seeking to exclude AI risks from corporate policies and how that could impact AI adoption, regulation, and governance (40:29).</description>
      <pubDate>Tue, 09 Dec 2025 19:06:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>We talk the White House’s decision to export H200 chips to China, the One Rule and Genesis Mission executive orders, and major insurers’ relationship with AI.</itunes:subtitle>
      <itunes:summary>In this episode, we break down the White House’s decision to let Nvidia’s H200 chips be exported to China and Greg’s case against the move (00:33). We then discuss Trump’s planned “One Rule” executive order to preempt state AI laws (18:59), examine the NDAA's proposed AI Futures Steering Committee (23:09), and analyze the Genesis Mission executive order (26:07), comparing its ambitions and funding reality to the Manhattan Project and Apollo program. We close by looking at why major insurers are seeking to exclude AI risks from corporate policies and how that could impact AI adoption, regulation, and governance (40:29).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we break down the White House’s decision to let Nvidia’s H200 chips be exported to China and Greg’s case against the move (00:33). We then discuss Trump’s planned “One Rule” executive order to preempt state AI laws (18:59), examine the NDAA's proposed AI Futures Steering Committee (23:09), and analyze the Genesis Mission executive order (26:07), comparing its ambitions and funding reality to the Manhattan Project and Apollo program. We close by looking at why major insurers are seeking to exclude AI risks from corporate policies and how that could impact AI adoption, regulation, and governance (40:29).</p>]]>
      </content:encoded>
      <itunes:duration>3175</itunes:duration>
      <guid isPermaLink="false"><![CDATA[3e7fd3d8-d532-11f0-991e-3f8021865fc5]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS4917609083.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Trump’s Draft AI Preemption Order, EU AI Act Delays, and Anthropic's Cyberattack Report</title>
      <description>In this episode, we start by discussing Greg's trip to India and the upcoming India AI Impact Summit in February 2026 (00:29). We then unpack the Trump Administration’s draft executive order to preempt state AI laws (07:46) and break down the European Commission’s new “digital omnibus” package, including proposed adjustments to the AI Act and broader regulatory simplification efforts (17:51). Finally, we discuss Anthropic’s report on a China-backed “highly sophisticated cyber espionage campaign" using Claude and the mixed reactions from cybersecurity and AI policy experts (37:37).</description>
      <pubDate>Fri, 21 Nov 2025 16:39:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Greg’s India trip, Trump AI order, EU digital package, and Anthropic’s report on a China-linked cyber campaign.</itunes:subtitle>
      <itunes:summary>In this episode, we start by discussing Greg's trip to India and the upcoming India AI Impact Summit in February 2026 (00:29). We then unpack the Trump Administration’s draft executive order to preempt state AI laws (07:46) and break down the European Commission’s new “digital omnibus” package, including proposed adjustments to the AI Act and broader regulatory simplification efforts (17:51). Finally, we discuss Anthropic’s report on a China-backed “highly sophisticated cyber espionage campaign" using Claude and the mixed reactions from cybersecurity and AI policy experts (37:37).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we start by discussing Greg's trip to India and the upcoming India AI Impact Summit in February 2026 (00:29). We then unpack the Trump Administration’s draft executive order to preempt state AI laws (07:46) and break down the European Commission’s new “digital omnibus” package, including proposed adjustments to the AI Act and broader regulatory simplification efforts (17:51). Finally, we discuss Anthropic’s report on a China-backed “highly sophisticated cyber espionage campaign" using Claude and the mixed reactions from cybersecurity and AI policy experts (37:37).</p>]]>
      </content:encoded>
      <itunes:duration>3266</itunes:duration>
      <guid isPermaLink="false"><![CDATA[a5231052-c6f8-11f0-8178-cf9f93304476]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS3244123205.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>What Selling Nvidia's Blackwell Chips to China Would Mean for the AI Race</title>
      <description>In this episode, Georgia Adamson and Saif Khan from the Institute for Progress join Greg to unpack their October 25 paper, "Should the US Sell Blackwell Chips to China?" They discuss the geopolitical context of the paper (3:26), how the rumored B30A would compare to other advanced AI chips (11:37), and the potential consequences if the US were to permit B30A exports to China (32:00).

Their paper is available here.</description>
      <pubDate>Wed, 05 Nov 2025 18:15:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Georgia Adamson and Saif Khan join Greg to unpack the geopolitical context of the B30A chip and the implications of US exports.</itunes:subtitle>
      <itunes:summary>In this episode, Georgia Adamson and Saif Khan from the Institute for Progress join Greg to unpack their October 25 paper, "Should the US Sell Blackwell Chips to China?" They discuss the geopolitical context of the paper (3:26), how the rumored B30A would compare to other advanced AI chips (11:37), and the potential consequences if the US were to permit B30A exports to China (32:00).

Their paper is available here.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, Georgia Adamson and Saif Khan from the Institute for Progress join Greg to unpack their October 25 paper, "Should the US Sell Blackwell Chips to China?" They discuss the geopolitical context of the paper (3:26), how the rumored B30A would compare to other advanced AI chips (11:37), and the potential consequences if the US were to permit B30A exports to China (32:00).</p>
<p>Their paper is available <a href="https://ifp.org/the-b30a-decision/#selling-b30as-to-china-would-undermine-the-trump-administration-s-policy-goals">here</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3869</itunes:duration>
      <guid isPermaLink="false"><![CDATA[4b4a0a8e-ba75-11f0-a457-f3fb33d38928]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS9442590819.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>How to Build a Career in AI Policy</title>
      <description>One of the most common questions we get from listeners is how to build a successful career in AI policy—so we dedicated an entire episode to answering it. We cover the most formative experiences from Greg's career journey (3:30), general principles for professional success (45:09), and actionable tips specific to breaking into the AI policy space (1:11:52).</description>
      <pubDate>Thu, 30 Oct 2025 16:48:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Greg and Matt discuss career advice for aspiring AI policy professionals.</itunes:subtitle>
      <itunes:summary>One of the most common questions we get from listeners is how to build a successful career in AI policy—so we dedicated an entire episode to answering it. We cover the most formative experiences from Greg's career journey (3:30), general principles for professional success (45:09), and actionable tips specific to breaking into the AI policy space (1:11:52).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>One of the most common questions we get from listeners is how to build a successful career in AI policy—so we dedicated an entire episode to answering it. We cover the most formative experiences from Greg's career journey (3:30), general principles for professional success (45:09), and actionable tips specific to breaking into the AI policy space (1:11:52).</p>]]>
      </content:encoded>
      <itunes:duration>6740</itunes:duration>
      <guid isPermaLink="false"><![CDATA[072753c6-b597-11f0-a980-372d40618439]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS4073996750.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Sora 2 and the Deepfake Boom</title>
      <description>In this episode, we cover OpenAI’s latest video-generation model Sora 2 (1:02), concrete harms and potential risks from deepfakes (5:18), the underlying technology and its history (27:03), and how policy can mitigate harms (36:31).</description>
      <pubDate>Thu, 23 Oct 2025 14:38:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>We cover OpenAI's latest video-generation model Sora 2 and concrete harms and potential risks from deepfakes. </itunes:subtitle>
      <itunes:summary>In this episode, we cover OpenAI’s latest video-generation model Sora 2 (1:02), concrete harms and potential risks from deepfakes (5:18), the underlying technology and its history (27:03), and how policy can mitigate harms (36:31).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we cover OpenAI’s latest video-generation model Sora 2 (1:02), concrete harms and potential risks from deepfakes (5:18), the underlying technology and its history (27:03), and how policy can mitigate harms (36:31).</p>]]>
      </content:encoded>
      <itunes:duration>3663</itunes:duration>
      <guid isPermaLink="false"><![CDATA[ebb7d490-b01d-11f0-ab2f-afaac79d2971]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS8236313062.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Congressman Jay Obernolte on the Future of U.S. AI Regulation</title>
      <description>In this episode, we are joined by Rep. Jay Obernolte, one of Congress’s leading voices on AI policy. We discuss his path from developing video games to serving in Congress (00:49), the work of the bipartisan House Task Force on AI and its final report (9:39), competing approaches to designing AI regulation in Congress (16:38), and prospects for federal preemption of state AI legislation (40:32).

Congressman Obernolte has represented California’s 23rd district since 2021. He co-chaired the bipartisan House Task Force on AI, leading the development of an extensive December 2024 report outlining a congressional agenda for AI. He also serves as vice-chair of the Congressional AI Caucus and is the only current member of Congress with an advanced degree in Artificial Intelligence, which he earned from UCLA in 1997. Rep. Obernolte previously served in the California State Legislature.</description>
      <pubDate>Tue, 21 Oct 2025 04:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Greg is joined by Rep. Jay Obernolte, one of Congress’s leading voices on AI policy.</itunes:subtitle>
      <itunes:summary>In this episode, we are joined by Rep. Jay Obernolte, one of Congress’s leading voices on AI policy. We discuss his path from developing video games to serving in Congress (00:49), the work of the bipartisan House Task Force on AI and its final report (9:39), competing approaches to designing AI regulation in Congress (16:38), and prospects for federal preemption of state AI legislation (40:32).

Congressman Obernolte has represented California’s 23rd district since 2021. He co-chaired the bipartisan House Task Force on AI, leading the development of an extensive December 2024 report outlining a congressional agenda for AI. He also serves as vice-chair of the Congressional AI Caucus and is the only current member of Congress with an advanced degree in Artificial Intelligence, which he earned from UCLA in 1997. Rep. Obernolte previously served in the California State Legislature.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we are joined by Rep. Jay Obernolte, one of Congress’s leading voices on AI policy. We discuss his path from developing video games to serving in Congress (00:49), the work of the bipartisan House Task Force on AI and its final report (9:39), competing approaches to designing AI regulation in Congress (16:38), and prospects for federal preemption of state AI legislation (40:32).</p>
<p>Congressman Obernolte has represented California’s 23rd district since 2021. He co-chaired the bipartisan House Task Force on AI, leading the development of an extensive December 2024 report outlining a congressional agenda for AI. He also serves as vice-chair of the Congressional AI Caucus and is the only current member of Congress with an advanced degree in Artificial Intelligence, which he earned from UCLA in 1997. Rep. Obernolte previously served in the California State Legislature.</p>]]>
      </content:encoded>
      <itunes:duration>3514</itunes:duration>
      <guid isPermaLink="false"><![CDATA[b37bc5d2-adf2-11f0-9046-af04d42c28a3]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS5107542909.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The Impact of AI on Labor with Harry Holzer</title>
      <description>In this episode, we are joined by economist Harry Holzer to discuss how AI is set to transform labor. Holzer was Chief Economist at the U.S. Department of Labor during the Clinton administration and is currently a Professor of Public Policy at Georgetown University. We break down the fundamentals of the labor market (4:00) and the current and future impact of AI automation (10:30). Holzer also reacts to Anthropic CEO Dario Amodei's warning that AI could eliminate half of entry-level white-collar jobs (23:32) and explains why we need better data capturing AI's impact on the labor market (52:53).

Harry Holzer recently co-authored a white paper titled "Proactively Developing &amp; Assisting the Workforce in the Age of AI," which is available here.</description>
      <pubDate>Fri, 17 Oct 2025 17:47:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Economist Harry Holzer joins to discuss how AI is set to transform labor.</itunes:subtitle>
      <itunes:summary>In this episode, we are joined by economist Harry Holzer to discuss how AI is set to transform labor. Holzer was Chief Economist at the U.S. Department of Labor during the Clinton administration and is currently a Professor of Public Policy at Georgetown University. We break down the fundamentals of the labor market (4:00) and the current and future impact of AI automation (10:30). Holzer also reacts to Anthropic CEO Dario Amodei's warning that AI could eliminate half of entry-level white-collar jobs (23:32) and explains why we need better data capturing AI's impact on the labor market (52:53).

Harry Holzer recently co-authored a white paper titled "Proactively Developing &amp; Assisting the Workforce in the Age of AI," which is available here.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we are joined by economist Harry Holzer to discuss how AI is set to transform labor. Holzer was Chief Economist at the U.S. Department of Labor during the Clinton administration and is currently a Professor of Public Policy at Georgetown University. We break down the fundamentals of the labor market (4:00) and the current and future impact of AI automation (10:30). Holzer also reacts to Anthropic CEO Dario Amodei's warning that AI could eliminate half of entry-level white-collar jobs (23:32) and explains why we need better data capturing AI's impact on the labor market (52:53).</p>
<p>Harry Holzer recently co-authored a white paper titled "Proactively Developing &amp; Assisting the Workforce in the Age of AI," which is available <a href="https://ari.us/wp-content/uploads/2025/08/ARI_Notre-Dame-Workforce-Report.pdf">here</a>.</p>]]>
      </content:encoded>
      <itunes:duration>4030</itunes:duration>
      <guid isPermaLink="false"><![CDATA[5e1de2f0-ab81-11f0-aedb-83065de682ad]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS7244256072.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>What California's SB 53 Means for AI Safety</title>
      <description>In this episode, we dive into California's new AI transparency law, SB 53. We explore the bill's history (00:30), contrast it with the more controversial SB 1047 (6:43), break down the specific disclosure requirements for AI labs of different scales (13:38), and discuss how industry stakeholders and policy experts have responded to the legislation (29:47).</description>
      <pubDate>Thu, 09 Oct 2025 17:55:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Greg and Sadie dive into California's new AI transparency law, SB 53.</itunes:subtitle>
      <itunes:summary>In this episode, we dive into California's new AI transparency law, SB 53. We explore the bill's history (00:30), contrast it with the more controversial SB 1047 (6:43), break down the specific disclosure requirements for AI labs of different scales (13:38), and discuss how industry stakeholders and policy experts have responded to the legislation (29:47).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we dive into California's new AI transparency law, SB 53. We explore the bill's history (00:30), contrast it with the more controversial SB 1047 (6:43), break down the specific disclosure requirements for AI labs of different scales (13:38), and discuss how industry stakeholders and policy experts have responded to the legislation (29:47).</p>]]>
      </content:encoded>
      <itunes:duration>2242</itunes:duration>
      <guid isPermaLink="false"><![CDATA[2a047342-a539-11f0-bbe5-6b7a61003aac]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS8985475318.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The Energy Cost of AI with Joseph Majkut</title>
      <description>In this episode, we're joined by Joseph Majkut, Director of CSIS's Energy Security and Climate Change Program, to take an in-depth look at energy's role in AI. We explore the current state of the U.S. electrical grid (11:34), bottlenecks in the AI data center buildout (43:45), how U.S. energy efforts compare internationally (1:16:06), and more.

Joseph has co-authored three reports on AI and energy: AI for the Grid: Opportunities, Risks, and Safeguards (September 2025), The Electricity Supply Bottleneck on U.S. AI Dominance (March 2025), and The AI Power Surge: Growth Scenarios for GenAI Datacenters Through 2030 (March 2025).</description>
      <pubDate>Thu, 02 Oct 2025 17:01:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Joseph Majkut joins to take an in-depth look at energy's role in AI. </itunes:subtitle>
      <itunes:summary>In this episode, we're joined by Joseph Majkut, Director of CSIS's Energy Security and Climate Change Program, to take an in-depth look at energy's role in AI. We explore the current state of the U.S. electrical grid (11:34), bottlenecks in the AI data center buildout (43:45), how U.S. energy efforts compare internationally (1:16:06), and more.

Joseph has co-authored three reports on AI and energy: AI for the Grid: Opportunities, Risks, and Safeguards (September 2025), The Electricity Supply Bottleneck on U.S. AI Dominance (March 2025), and The AI Power Surge: Growth Scenarios for GenAI Datacenters Through 2030 (March 2025).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we're joined by Joseph Majkut, Director of CSIS's Energy Security and Climate Change Program, to take an in-depth look at energy's role in AI. We explore the current state of the U.S. electrical grid (11:34), bottlenecks in the AI data center buildout (43:45), how U.S. energy efforts compare internationally (1:16:06), and more.</p>
<p>Joseph has co-authored three reports on AI and energy: <a href="https://www.csis.org/analysis/ai-grid-opportunities-risks-and-safeguards">AI for the Grid: Opportunities, Risks, and Safeguards</a> (September 2025), <a href="https://www.csis.org/analysis/electricity-supply-bottleneck-us-ai-dominance">The Electricity Supply Bottleneck on U.S. AI Dominance</a> (March 2025), and <a href="https://www.csis.org/analysis/ai-power-surge-growth-scenarios-genai-datacenters-through-2030">The AI Power Surge: Growth Scenarios for GenAI Datacenters Through 2030</a> (March 2025).</p>]]>
      </content:encoded>
      <itunes:duration>5149</itunes:duration>
      <guid isPermaLink="false"><![CDATA[811b6c2c-9fb1-11f0-83dc-dffbac0669e9]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS4367491481.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Is China Done with Nvidia’s AI Chips?</title>
      <description>In this episode, we discuss how today’s massive AI infrastructure investments compare to the Manhattan Project (00:33), China’s reported ban on Nvidia chips and its implications for export control policy (13:41), Anthropic’s $1.5 billion copyright settlement with authors (33:49), and recent multibillion-dollar AI investments by Nvidia and ASML (44:42).</description>
      <pubDate>Wed, 24 Sep 2025 13:52:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>We compare today's AI infrastructure investments to the Manhattan Project and discuss China's Nvidia chip ban, Anthropic’s copyright settlement, and new AI investments by Nvidia and ASML.</itunes:subtitle>
      <itunes:summary>In this episode, we discuss how today’s massive AI infrastructure investments compare to the Manhattan Project (00:33), China’s reported ban on Nvidia chips and its implications for export control policy (13:41), Anthropic’s $1.5 billion copyright settlement with authors (33:49), and recent multibillion-dollar AI investments by Nvidia and ASML (44:42).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss how today’s massive AI infrastructure investments compare to the Manhattan Project (00:33), China’s reported ban on Nvidia chips and its implications for export control policy (13:41), Anthropic’s $1.5 billion copyright settlement with authors (33:49), and recent multibillion-dollar AI investments by Nvidia and ASML (44:42).</p>]]>
      </content:encoded>
      <itunes:duration>3467</itunes:duration>
      <guid isPermaLink="false"><![CDATA[dbeb72f6-994d-11f0-8cec-7b9d21aa6c8a]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS6637508424.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Why Is China's AI Sector Booming?</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this episode, we discuss China's focus on AI adoption (00:58), the underlying factors driving investor enthusiasm (14:51), and the national security implications of China's booming AI industry (31:47).</description>
      <pubDate>Fri, 12 Sep 2025 16:05:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we discuss China's focus on AI adoption, the underlying factors driving investor enthusiasm, and the national security implications of China's booming AI industry.</itunes:subtitle>
      <itunes:summary>In this episode, we discuss China's focus on AI adoption (00:58), the underlying factors driving investor enthusiasm (14:51), and the national security implications of China's booming AI industry (31:47).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss China's focus on AI adoption (00:58), the underlying factors driving investor enthusiasm (14:51), and the national security implications of China's booming AI industry (31:47).</p>]]>
      </content:encoded>
      <itunes:duration>2799</itunes:duration>
      <guid isPermaLink="false"><![CDATA[0e96c4b0-8ff0-11f0-b84d-cbbc0927fd9d]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS3853681810.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Unpacking the EU AI Act Code of Practice with Marietje Schaake</title>
      <description>In this episode, we are joined by Marietje Schaake, former Member of the European Parliament, to unpack the EU AI Act Code of Practice. Schaake served as Chair of the Working Group on Internal Risk Management and Governance of General-Purpose AI Providers for the Code of Practice, with a focus on AI model safety and security. We discuss the development and drafting of the EU AI Act and Code of Practice (16:47), break down how the Code helps AI companies demonstrate compliance with the Act (28:25), and explore the kinds of systemic risks the AI Act seeks to address (32:00).</description>
      <pubDate>Fri, 05 Sep 2025 04:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Marietje Schaake joins to unpack the EU AI Act Code of Practice, its drafting, compliance, and systemic risks.</itunes:subtitle>
      <itunes:summary>In this episode, we are joined by Marietje Schaake, former Member of the European Parliament, to unpack the EU AI Act Code of Practice. Schaake served as Chair of the Working Group on Internal Risk Management and Governance of General-Purpose AI Providers for the Code of Practice, with a focus on AI model safety and security. We discuss the development and drafting of the EU AI Act and Code of Practice (16:47), break down how the Code helps AI companies demonstrate compliance with the Act (28:25), and explore the kinds of systemic risks the AI Act seeks to address (32:00).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we are joined by Marietje Schaake, former Member of the European Parliament, to unpack the EU AI Act Code of Practice. Schaake served as Chair of the Working Group on Internal Risk Management and Governance of General-Purpose AI Providers for the Code of Practice, with a focus on AI model safety and security. We discuss the development and drafting of the EU AI Act and Code of Practice (16:47), break down how the Code helps AI companies demonstrate compliance with the Act (28:25), and explore the kinds of systemic risks the AI Act seeks to address (32:00).</p>]]>
      </content:encoded>
      <itunes:duration>3053</itunes:duration>
      <guid isPermaLink="false"><![CDATA[0d395ae8-89d7-11f0-8fca-43b7f7620b7b]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS9143410021.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>U.S. Takes 10% Stake in Intel and Nvidia Halts H20 Production for China</title>
      <description>In this episode, we unpack the Trump administration’s $8.9 billion deal to acquire a 9.9% stake in Intel, examining the underlying logic, financial terms, and political reactions from across the spectrum (00:33). We then cover Nvidia’s sudden halt in H20 chip production for China, its plans for a Blackwell alternative, and what Beijing’s self-sufficiency push means for the AI race (28:18).</description>
      <pubDate>Wed, 27 Aug 2025 04:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>We unpack the Trump admin deal with Intel, and Nvidia halting H20 chip production. </itunes:subtitle>
      <itunes:summary>In this episode, we unpack the Trump administration’s $8.9 billion deal to acquire a 9.9% stake in Intel, examining the underlying logic, financial terms, and political reactions from across the spectrum (00:33). We then cover Nvidia’s sudden halt in H20 chip production for China, its plans for a Blackwell alternative, and what Beijing’s self-sufficiency push means for the AI race (28:18).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we unpack the Trump administration’s $8.9 billion deal to acquire a 9.9% stake in Intel, examining the underlying logic, financial terms, and political reactions from across the spectrum (00:33). We then cover Nvidia’s sudden halt in H20 chip production for China, its plans for a Blackwell alternative, and what Beijing’s self-sufficiency push means for the AI race (28:18).</p>]]>
      </content:encoded>
      <itunes:duration>2239</itunes:duration>
      <guid isPermaLink="false"><![CDATA[c144b85c-82c0-11f0-a88f-dbc3847e0b94]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS6979130927.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Trump's AI Chip Licensing Deal, AI Spending Bubble Concerns, &amp; New Advances for Gov't AI Adoption</title>
      <description>In this episode, we'll break down the Trump administration’s new licensing agreement with Nvidia and AMD for semiconductor exports and what this development means for U.S. national security (00:35), explore concerns about an AI-driven economic bubble (22:17), and unpack recent advances in the federal government's adoption of AI after the U.S. General Services Administration approved OpenAI, Anthropic, and Google as vendors (37:18).</description>
      <pubDate>Thu, 14 Aug 2025 06:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Trump admin’s new Nvidia &amp; AMD export deal, AI economic bubble fears, and federal AI adoption milestones.</itunes:subtitle>
      <itunes:summary>In this episode, we'll break down the Trump administration’s new licensing agreement with Nvidia and AMD for semiconductor exports and what this development means for U.S. national security (00:35), explore concerns about an AI-driven economic bubble (22:17), and unpack recent advances in the federal government's adoption of AI after the U.S. General Services Administration approved OpenAI, Anthropic, and Google as vendors (37:18).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we'll break down the Trump administration’s new licensing agreement with Nvidia and AMD for semiconductor exports and what this development means for U.S. national security (00:35), explore concerns about an AI-driven economic bubble (22:17), and unpack recent advances in the federal government's adoption of AI after the U.S. General Services Administration approved OpenAI, Anthropic, and Google as vendors (37:18).</p>]]>
      </content:encoded>
      <itunes:duration>2766</itunes:duration>
      <guid isPermaLink="false"><![CDATA[24a70e7a-78d2-11f0-b006-f7fd26027f9e]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS1170215439.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>H20 Export Dispute and Industry Responses to the EU AI Code of Practice</title>
      <description>In this episode, we cover the renewed debate over U.S. approval of Nvidia’s H20 chip exports to China, from political pushback in Washington to reactions in Beijing (00:30). We also examine how the AI industry is responding to the EU AI Code of Practice and the reasons some companies are choosing not to sign (44:53).

Read Gregory C. Allen's report on DeepSeek here.

Watch or listen to our event with OSTP Director Michael Kratsios here.</description>
      <pubDate>Thu, 07 Aug 2025 16:09:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>We unpack U.S. backlash to Nvidia’s H20 chip exports to China and why firms are hesitant to sign the EU AI Code.</itunes:subtitle>
      <itunes:summary>In this episode, we cover the renewed debate over U.S. approval of Nvidia’s H20 chip exports to China, from political pushback in Washington to reactions in Beijing (00:30). We also examine how the AI industry is responding to the EU AI Code of Practice and the reasons some companies are choosing not to sign (44:53).

Read Gregory C. Allen's report on DeepSeek here.

Watch or listen to our event with OSTP Director Michael Kratsios here.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we cover the renewed debate over U.S. approval of Nvidia’s H20 chip exports to China, from political pushback in Washington to reactions in Beijing (00:30). We also examine how the AI industry is responding to the EU AI Code of Practice and the reasons some companies are choosing not to sign (44:53).</p>
<p>Read Gregory C. Allen's report on DeepSeek <a href="https://www.csis.org/analysis/deepseek-huawei-export-controls-and-future-us-china-ai-race">here</a>.</p>
<p>Watch or listen to our event with OSTP Director Michael Kratsios <a href="https://www.csis.org/podcasts/ai-policy-podcast/unpacking-white-house-ai-action-plan-ostp-director-michael-kratsios">here</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3940</itunes:duration>
      <guid isPermaLink="false"><![CDATA[db2a44ca-73a8-11f0-b395-af0fc6391b57]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS9459286585.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Unpacking the White House AI Action Plan with OSTP Director Michael Kratsios</title>
      <description>On July 30, the CSIS Wadhwani AI Center hosted Michael Kratsios, Director of the White House Office of Science and Technology Policy, for a discussion breaking down the recently released AI Action Plan and the Trump administration’s vision for U.S. AI leadership and innovation amid strategic competition with China.

As the thirteenth Director of the White House OSTP, Mr. Kratsios oversees the development and execution of the nation’s science and technology policy agenda. He leads the Trump administration’s efforts to ensure American leadership in scientific discovery and technological innovation, including in critical and emerging technologies such as artificial intelligence, quantum computing, and biotechnology. In the first Trump administration, he served as the fourth Chief Technology Officer of the United States at the White House and as Under Secretary of Defense for Research and Engineering at the Pentagon.

Watch the full event or read the transcript here: Unpacking the White House AI Action Plan with OSTP Director Michael Kratsios </description>
      <pubDate>Thu, 31 Jul 2025 19:03:00 -0000</pubDate>
      <itunes:title>Unpacking the White House AI Action Plan with OSTP Director Michael Kratsios</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Michael Kratsios, Director of the White House Office of Science and Technology Policy, breaks down the AI Action Plan and the Trump administration’s vision for U.S. AI leadership.</itunes:subtitle>
      <itunes:summary>On July 30, the CSIS Wadhwani AI Center hosted Michael Kratsios, Director of the White House Office of Science and Technology Policy, for a discussion breaking down the recently released AI Action Plan and the Trump administration’s vision for U.S. AI leadership and innovation amid strategic competition with China.

As the thirteenth Director of the White House OSTP, Mr. Kratsios oversees the development and execution of the nation’s science and technology policy agenda. He leads the Trump administration’s efforts to ensure American leadership in scientific discovery and technological innovation, including in critical and emerging technologies such as artificial intelligence, quantum computing, and biotechnology. In the first Trump administration, he served as the fourth Chief Technology Officer of the United States at the White House and as Under Secretary of Defense for Research and Engineering at the Pentagon.

Watch the full event or read the transcript here: Unpacking the White House AI Action Plan with OSTP Director Michael Kratsios </itunes:summary>
      <content:encoded>
        <![CDATA[<p>On July 30, the CSIS Wadhwani AI Center hosted Michael Kratsios, Director of the White House Office of Science and Technology Policy, for a discussion breaking down the recently released AI Action Plan and the Trump administration’s vision for U.S. AI leadership and innovation amid strategic competition with China.</p>
<p>As the thirteenth Director of the White House OSTP, Mr. Kratsios oversees the development and execution of the nation’s science and technology policy agenda. He leads the Trump administration’s efforts to ensure American leadership in scientific discovery and technological innovation, including in critical and emerging technologies such as artificial intelligence, quantum computing, and biotechnology. In the first Trump administration, he served as the fourth Chief Technology Officer of the United States at the White House and as Under Secretary of Defense for Research and Engineering at the Pentagon.</p>
<p>Watch the full event or read the transcript here: <a href="https://www.csis.org/events/unpacking-white-house-ai-action-plan-ostp-director-michael-kratsios">Unpacking the White House AI Action Plan with OSTP Director Michael Kratsios</a> </p>]]>
      </content:encoded>
      <itunes:duration>2967</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[12159ba0-6e41-11f0-a519-af80383ac7f7]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS3891885078.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>In Memoriam: Andrew Schwartz, Our Friend and Colleague</title>
      <description>In this special episode, we honor the life of Andrew Schwartz, Chief Communications Officer at CSIS and beloved co-host of this podcast. Andrew was a mentor, a friend, and a tireless champion of the CSIS Wadhwani AI Center’s work. His humor, personal stories, and passion shaped this show and left a lasting impact on all of us. Our team, our community, and CSIS will miss him deeply.</description>
      <pubDate>Tue, 29 Jul 2025 18:50:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>In this special episode, we honor the life of Andrew Schwartz, Chief Communications Officer at CSIS and beloved co-host of this podcast. Andrew was a mentor, a friend, and a tireless champion of the CSIS Wadhwani AI Center’s work. His humor, personal stories, and passion shaped this show and left a lasting impact on all of us. Our team, our community, and CSIS will miss him deeply.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this special episode, we honor the life of Andrew Schwartz, Chief Communications Officer at CSIS and beloved co-host of this podcast. Andrew was a mentor, a friend, and a tireless champion of the CSIS Wadhwani AI Center’s work. His humor, personal stories, and passion shaped this show and left a lasting impact on all of us. Our team, our community, and CSIS will miss him deeply.</p>]]>
      </content:encoded>
      <itunes:duration>562</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[0666509a-6cae-11f0-b742-1397c20ef907]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS5575968171.mp3?updated=1753815814" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>China's AI Industrial Policy with Kyle Chan</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this episode, we are joined by Kyle Chan, postdoctoral researcher at Princeton’s Sociology Department and adjunct researcher at the RAND Corporation, to explore China's approach to AI industrial policy. We discuss the fundamentals of industrial policy and how it operates in China's digital technology sector (4:15), the evolution of China's AI industrial policy toolkit and its impact on companies (19:29), China's current AI priorities, protectionism strategies, and adoption patterns (47:05), and the future trajectory of China's AI industrial policy amid US-China competition (1:12:22).

Kyle co-authored RAND's June 26 report "Full Stack: China’s Evolving Industrial Policy for AI," which is available here: https://www.rand.org/pubs/perspectives/PEA4012-1.html</description>
      <pubDate>Wed, 16 Jul 2025 14:30:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Kyle Chan joins to break down China’s AI industrial policy, its tools, priorities, and impact on tech and US-China rivalry.</itunes:subtitle>
      <itunes:summary>In this episode, we are joined by Kyle Chan, postdoctoral researcher at Princeton’s Sociology Department and adjunct researcher at the RAND Corporation, to explore China's approach to AI industrial policy. We discuss the fundamentals of industrial policy and how it operates in China's digital technology sector (4:15), the evolution of China's AI industrial policy toolkit and its impact on companies (19:29), China's current AI priorities, protectionism strategies, and adoption patterns (47:05), and the future trajectory of China's AI industrial policy amid US-China competition (1:12:22).

Kyle co-authored RAND's June 26 report "Full Stack: China’s Evolving Industrial Policy for AI," which is available here: https://www.rand.org/pubs/perspectives/PEA4012-1.html</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we are joined by Kyle Chan, postdoctoral researcher at Princeton’s Sociology Department and adjunct researcher at the RAND Corporation, to explore China's approach to AI industrial policy. We discuss the fundamentals of industrial policy and how it operates in China's digital technology sector (4:15), the evolution of China's AI industrial policy toolkit and its impact on companies (19:29), China's current AI priorities, protectionism strategies, and adoption patterns (47:05), and the future trajectory of China's AI industrial policy amid US-China competition (1:12:22).</p>
<p><br></p>
<p>Kyle co-authored RAND's June 26 report "Full Stack: China’s Evolving Industrial Policy for AI," which is available <a href="https://www.rand.org/pubs/perspectives/PEA4012-1.html">here</a>.</p>
<p><br></p>]]>
      </content:encoded>
      <itunes:duration>5123</itunes:duration>
      <guid isPermaLink="false"><![CDATA[50337732-61c0-11f0-ac0a-337779c412d5]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS5256225320.mp3?updated=1752676534" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Senate Strikes AI Law Moratorium, Courts Rule on Copyright Cases, and Congress Talks AGI</title>
      <description>In this episode, we cover the Senate's vote to remove the moratorium on state AI laws from the reconciliation bill (00:38), the latest AI copyright court rulings involving Meta and Anthropic (7:38), key takeaways from the House Select Committee on China's AI hearing (20:55), and the latest developments surrounding DeepSeek, including export control impacts and military ties (27:45).</description>
      <pubDate>Wed, 02 Jul 2025 15:45:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>In this episode, we cover the Senate's vote to remove the moratorium on state AI laws from the reconciliation bill (00:38), the latest AI copyright court rulings involving Meta and Anthropic (7:38), key takeaways from the House Select Committee on China's AI hearing (20:55), and the latest developments surrounding DeepSeek, including export control impacts and military ties (27:45).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we cover the Senate's vote to remove the moratorium on state AI laws from the reconciliation bill (00:38), the latest AI copyright court rulings involving Meta and Anthropic (7:38), key takeaways from the House Select Committee on China's AI hearing (20:55), and the latest developments surrounding DeepSeek, including export control impacts and military ties (27:45).</p>]]>
      </content:encoded>
      <itunes:duration>2032</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[9b6d19ca-575b-11f0-b6eb-7bb6cd09d8c1]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS3161805662.mp3?updated=1751471497" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>AI, Cybersecurity, and Securing Model Weights with Miles Brundage and Chris Rohlf</title>
      <description>In this episode, we’re joined by Miles Brundage, independent AI policy researcher and former Head of Policy Research at OpenAI, and Chris Rohlf, Security Engineer at Meta and cybersecurity expert. We cover the fundamentals of cybersecurity today (9:20), whether AI is tipping the offense-defense balance (21:00), the critical challenge of securing AI model weights (34:55), the debate over “AI security doomerism” (1:03:15), and how policymakers can strengthen incentives to secure AI systems (1:08:46).</description>
      <pubDate>Fri, 27 Jun 2025 14:09:00 -0000</pubDate>
      <itunes:title>AI, Cybersecurity, and Securing Model Weights with Miles Brundage and Chris Rohlf</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, Miles Brundage and Chris Rohlf join us to explore how AI is reshaping cybersecurity—from the offense-defense balance to securing model weights. We also discuss “AI security doomerism” and what policymakers can do to strengthen AI system protections.</itunes:subtitle>
      <itunes:summary>In this episode, we’re joined by Miles Brundage, independent AI policy researcher and former Head of Policy Research at OpenAI, and Chris Rohlf, Security Engineer at Meta and cybersecurity expert. We cover the fundamentals of cybersecurity today (9:20), whether AI is tipping the offense-defense balance (21:00), the critical challenge of securing AI model weights (34:55), the debate over “AI security doomerism” (1:03:15), and how policymakers can strengthen incentives to secure AI systems (1:08:46).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we’re joined by Miles Brundage, independent AI policy researcher and former Head of Policy Research at OpenAI, and Chris Rohlf, Security Engineer at Meta and cybersecurity expert. We cover the fundamentals of cybersecurity today (9:20), whether AI is tipping the offense-defense balance (21:00), the critical challenge of securing AI model weights (34:55), the debate over “AI security doomerism” (1:03:15), and how policymakers can strengthen incentives to secure AI systems (1:08:46).</p>]]>
      </content:encoded>
      <itunes:duration>4633</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5c0739e0-5360-11f0-a96a-23fc27beb185]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS7311940577.mp3?updated=1751033678" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>AI Safety Institute Rebrand, Congressional Hearing on Export Controls, and Meta's New Superintelligence Lab</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this episode, we discuss the U.S. AI Safety Institute's rebrand to the Center for AI Standards and Innovation (00:37), BIS Undersecretary Jeffrey Kessler's testimony on semiconductor export controls (10:36), and Meta's new AI superintelligence lab and accompanying $15 billion investment in Scale AI (22:26). </description>
      <pubDate>Wed, 18 Jun 2025 18:06:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>We cover the U.S. AI Safety Institute rebrand, BIS export control updates, and Meta’s $15B bet on AI superintelligence.</itunes:subtitle>
      <itunes:summary>In this episode, we discuss the U.S. AI Safety Institute's rebrand to the Center for AI Standards and Innovation (00:37), BIS Undersecretary Jeffrey Kessler's testimony on semiconductor export controls (10:36), and Meta's new AI superintelligence lab and accompanying $15 billion investment in Scale AI (22:26). </itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss the U.S. AI Safety Institute's rebrand to the Center for AI Standards and Innovation (00:37), BIS Undersecretary Jeffrey Kessler's testimony on semiconductor export controls (10:36), and Meta's new AI superintelligence lab and accompanying $15 billion investment in Scale AI (22:26). </p>]]>
      </content:encoded>
      <itunes:duration>1897</itunes:duration>
      <guid isPermaLink="false"><![CDATA[07694dbc-4c6f-11f0-b66b-9bfbc03a4752]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS7427522848.mp3?updated=1750270320" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Shield AI’s Ryan Tseng on Building an Autonomous Future for the DOD</title>
      <description>On June 9, 2025, the CSIS Wadhwani AI Center hosted Ryan Tseng, Co-Founder and President of Shield AI, a company building AI-powered software to enable autonomous capabilities for defense and national security.

Mr. Tseng leads strategic partnerships with defense and policy leaders across the United States and internationally. Under his leadership, Shield AI secured major contracts with the U.S. Special Operations Command, Air Force, Marine Corps, and Navy, while expanding internationally with offices opening in Ukraine and the UAE.

Watch the full event here: https://www.csis.org/events/shield-ais-ryan-tseng-building-autonomous-future-dod</description>
      <pubDate>Tue, 10 Jun 2025 14:49:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>CSIS hosted Shield AI’s Ryan Tseng on June 9 to discuss AI, autonomy, and defense innovation. </itunes:subtitle>
      <itunes:summary>On June 9, 2025, the CSIS Wadhwani AI Center hosted Ryan Tseng, Co-Founder and President of Shield AI, a company building AI-powered software to enable autonomous capabilities for defense and national security.

Mr. Tseng leads strategic partnerships with defense and policy leaders across the United States and internationally. Under his leadership, Shield AI secured major contracts with the U.S. Special Operations Command, Air Force, Marine Corps, and Navy, while expanding internationally with offices opening in Ukraine and the UAE.

Watch the full event here: https://www.csis.org/events/shield-ais-ryan-tseng-building-autonomous-future-dod</itunes:summary>
      <content:encoded>
        <![CDATA[<p>On June 9, 2025, the CSIS Wadhwani AI Center hosted <strong>Ryan Tseng</strong>, Co-Founder and President of Shield AI, a company building AI-powered software to enable autonomous capabilities for defense and national security.</p>
<p> </p>
<p>Mr. Tseng leads strategic partnerships with defense and policy leaders across the United States and internationally. Under his leadership, Shield AI secured major contracts with the U.S. Special Operations Command, Air Force, Marine Corps, and Navy, while expanding internationally with offices opening in Ukraine and the UAE. </p>
<p> </p>
<p>Watch the full event <a href="https://www.csis.org/events/shield-ais-ryan-tseng-building-autonomous-future-dod">here</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3051</itunes:duration>
      <guid isPermaLink="false"><![CDATA[2d6adb9a-460a-11f0-be6b-3f9e20ee97f3]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS4179781888.mp3?updated=1749567298" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>AI in the “Big Beautiful Bill” and Safety Concerns About Anthropic’s Newest Model</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this episode, we discuss House Republicans’ proposed moratorium on state and local AI laws (00:57), break down AI-related appropriations across the executive branch (18:54), and unpack the safety issues and safeguards of Anthropic's newest model, Claude Opus 4 (26:51).

Correction: In this episode, a quote was incorrectly attributed directly to Rep. Laurel Lee (R-Fla.). The statement—“Should the provision be stripped from the Senate reconciliation bill, some Republicans are eyeing separate legislation.”—was reported by The Hill as a paraphrase of Rep. Lee’s comments.</description>
      <pubDate>Wed, 04 Jun 2025 15:36:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>We discuss House GOP's AI law moratorium, federal AI funding, and safety concerns with Anthropic’s Claude Opus 4.</itunes:subtitle>
      <itunes:summary>In this episode, we discuss House Republicans’ proposed moratorium on state and local AI laws (00:57), break down AI-related appropriations across the executive branch (18:54), and unpack the safety issues and safeguards of Anthropic's newest model, Claude Opus 4 (26:51).

Correction: In this episode, a quote was incorrectly attributed directly to Rep. Laurel Lee (R-Fla.). The statement—“Should the provision be stripped from the Senate reconciliation bill, some Republicans are eyeing separate legislation.”—was reported by The Hill as a paraphrase of Rep. Lee’s comments.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss House Republicans’ proposed moratorium on state and local AI laws (00:57), break down AI-related appropriations across the executive branch (18:54), and unpack the safety issues and safeguards of Anthropic's newest model, Claude Opus 4 (26:51).</p>
<p><br></p>
<p><em>Correction: In this episode, a quote was incorrectly attributed directly to Rep. Laurel Lee (R-Fla.). The statement—“Should the provision be stripped from the Senate reconciliation bill, some Republicans are eyeing separate legislation.”—was reported by The Hill as a paraphrase of Rep. Lee’s comments.</em></p>]]>
      </content:encoded>
      <itunes:duration>2005</itunes:duration>
      <guid isPermaLink="false"><![CDATA[a3e492d0-4159-11f0-a3b1-93149da0e086]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS6668378450.mp3?updated=1749051671" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>China's AI Industrial Policy, New Controls on Huawei's Ascend Chips, and President Trump's Middle East Trip</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this episode, we discuss Princeton researcher Kyle Chan's op-ed in the New York Times on China's industrial policy for AI and advanced technologies (0:35), what the Bureau of Industry and Security's new controls on Huawei's Ascend chips mean for China's AI ecosystem (10:09), and our biggest takeaways from President Trump's visit to the Middle East (19:07).</description>
      <pubDate>Wed, 21 May 2025 10:00:00 -0000</pubDate>
      <itunes:title>China's AI Industrial Policy, New Controls on Huawei's Ascend Chips, and President Trump's Middle East Trip</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>We cover China’s AI policy, new US chip controls on Huawei, and Trump’s Middle East trip.</itunes:subtitle>
      <itunes:summary>In this episode, we discuss Princeton researcher Kyle Chan's op-ed in the New York Times on China's industrial policy for AI and advanced technologies (0:35), what the Bureau of Industry and Security's new controls on Huawei's Ascend chips mean for China's AI ecosystem (10:09), and our biggest takeaways from President Trump's visit to the Middle East (19:07).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss Princeton researcher Kyle Chan's op-ed in the New York Times on China's industrial policy for AI and advanced technologies (0:35), what the Bureau of Industry and Security's new controls on Huawei's Ascend chips mean for China's AI ecosystem (10:09), and our biggest takeaways from President Trump's visit to the Middle East (19:07).</p>]]>
      </content:encoded>
      <itunes:duration>1695</itunes:duration>
      <guid isPermaLink="false"><![CDATA[5fa196a2-35da-11f0-bfeb-f38ae2be0cbb]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS8722665214.mp3?updated=1747787547" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The AI Diffusion Rule is Rescinded, AI Executives Testify Before Congress, &amp; AI Adoption in the IRS</title>
      <description>In this episode, we discuss the Trump administration’s decision to rescind the AI Diffusion Framework (1:34), the message of top AI executives in their recent Senate testimony (20:03), what AI adoption could mean for the IRS (35:15), the U.S. Copyright Office’s latest report on generative AI training (44:44), and what AI policy might look like in the new papacy (49:24).</description>
      <pubDate>Wed, 14 May 2025 18:04:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode: Trump on AI Diffusion, execs testify to Senate, IRS adoption, Copyright Office report, AI &amp; the papacy.</itunes:subtitle>
      <itunes:summary>In this episode, we discuss the Trump administration’s decision to rescind the AI Diffusion Framework (1:34), the message of top AI executives in their recent Senate testimony (20:03), what AI adoption could mean for the IRS (35:15), the U.S. Copyright Office’s latest report on generative AI training (44:44), and what AI policy might look like in the new papacy (49:24).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss the Trump administration’s decision to rescind the AI Diffusion Framework (1:34), the message of top AI executives in their recent Senate testimony (20:03), what AI adoption could mean for the IRS (35:15), the U.S. Copyright Office’s latest report on generative AI training (44:44), and what AI policy might look like in the new papacy (49:24).</p>]]>
      </content:encoded>
      <itunes:duration>3193</itunes:duration>
      <guid isPermaLink="false"><![CDATA[86a6437c-3037-11f0-9af4-77de689cdaf3]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS8055574506.mp3?updated=1747246168" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Fiscal Year 2026 Budget Request, AI Diffusion Framework Update, and the Politburo's Study Session on AI</title>
      <description>In this episode, we discuss what the Trump administration's Fiscal Year 2026 budget request means for federal AI spending, what might happen to the AI Diffusion Framework before its May 15 implementation deadline, and what the Chinese Communist Party Politburo's Study Session on AI indicates about China's AI ambitions. </description>
      <pubDate>Wed, 07 May 2025 10:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Greg and Andrew talk Trump’s FY2026 budget, the AI Diffusion deadline, and what China’s Politburo study session reveals about its global AI ambitions.</itunes:subtitle>
      <itunes:summary>In this episode, we discuss what the Trump administration's Fiscal Year 2026 budget request means for federal AI spending, what might happen to the AI Diffusion Framework before its May 15 implementation deadline, and what the Chinese Communist Party Politburo's Study Session on AI indicates about China's AI ambitions. </itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss what the Trump administration's Fiscal Year 2026 budget request means for federal AI spending, what might happen to the AI Diffusion Framework before its May 15 implementation deadline, and what the Chinese Communist Party Politburo's Study Session on AI indicates about China's AI ambitions. </p>]]>
      </content:encoded>
      <itunes:duration>2090</itunes:duration>
      <guid isPermaLink="false"><![CDATA[b2ceb954-2ac2-11f0-8bc3-b76a84858f4e]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS6887071082.mp3?updated=1746567937" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Scale AI’s Alexandr Wang on Securing U.S. AI Leadership</title>
      <description>On May 1, 2025, the CSIS Wadhwani AI Center hosted Alexandr Wang, Founder and CEO of Scale AI, a company accelerating AI development by delivering expert-level data and technology solutions to leading AI labs, multinational enterprises, and governments. He shared his insights on key issues shaping the future of AI policy, such as U.S.-China AI competition, international AI governance, and the new administration’s approach to AI innovation, regulation, and global standards.

Alexandr founded Scale AI in 2016 as a 19-year-old MIT student with the vision of providing the critical data and infrastructure needed for complex AI projects. Under his leadership, Scale AI has grown to nearly a $14 billion valuation, serving hundreds of customers across industries ranging from finance to government agencies, and creating flexible, impactful AI work for hundreds of thousands of people worldwide.

Watch the full event at the following link: https://www.csis.org/analysis/scale-ais-alexandr-wang-securing-us-ai-leadership

 </description>
      <pubDate>Fri, 02 May 2025 21:03:00 -0000</pubDate>
      <itunes:title>Scale AI’s Alexandr Wang on Securing U.S. AI Leadership</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>On May 1, 2025, the CSIS Wadhwani AI Center hosted Alexandr Wang, Founder and CEO of Scale AI, a company accelerating AI development by delivering expert-level data and technology solutions to leading AI labs, multinational enterprises, and governments. He shared his insights on key issues shaping the future of AI policy, such as U.S.-China AI competition, international AI governance, and the new administration’s approach to AI innovation, regulation, and global standards.  </itunes:subtitle>
      <itunes:summary>On May 1, 2025, the CSIS Wadhwani AI Center hosted Alexandr Wang, Founder and CEO of Scale AI, a company accelerating AI development by delivering expert-level data and technology solutions to leading AI labs, multinational enterprises, and governments. He shared his insights on key issues shaping the future of AI policy, such as U.S.-China AI competition, international AI governance, and the new administration’s approach to AI innovation, regulation, and global standards.

Alexandr founded Scale AI in 2016 as a 19-year-old MIT student with the vision of providing the critical data and infrastructure needed for complex AI projects. Under his leadership, Scale AI has grown to nearly a $14 billion valuation, serving hundreds of customers across industries ranging from finance to government agencies, and creating flexible, impactful AI work for hundreds of thousands of people worldwide.

Watch the full event at the following link: https://www.csis.org/analysis/scale-ais-alexandr-wang-securing-us-ai-leadership

 </itunes:summary>
      <content:encoded>
        <![CDATA[<p>On <strong>May 1, 2025</strong>, the CSIS Wadhwani AI Center hosted <strong>Alexandr Wang</strong>, Founder and CEO of Scale AI, a company accelerating AI development by delivering expert-level data and technology solutions to leading AI labs, multinational enterprises, and governments. He shared his insights on key issues shaping the future of AI policy, such as U.S.-China AI competition, international AI governance, and the new administration’s approach to AI innovation, regulation, and global standards.</p>
<p>Alexandr founded Scale AI in 2016 as a 19-year-old MIT student with the vision of providing the critical data and infrastructure needed for complex AI projects. Under his leadership, Scale AI has grown to nearly a $14 billion valuation, serving hundreds of customers across industries ranging from finance to government agencies, and creating flexible, impactful AI work for hundreds of thousands of people worldwide.</p>
<p>Watch the full event <a href="https://www.csis.org/analysis/scale-ais-alexandr-wang-securing-us-ai-leadership">here</a>.</p>
<p> </p>]]>
      </content:encoded>
      <itunes:duration>3053</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e12045b2-278d-11f0-b08f-3767dd15a439]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS5865529465.mp3?updated=1746215747" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Michael Kratsios’s Vision for U.S. Tech Leadership, H20 Export Controls, and Huawei’s Ascend 920</title>
      <description>In this episode, we discuss OSTP Director Michael Kratsios’s recent speech on US technology policy at the Endless Frontiers Retreat (0:19), the Trump administration’s decision to control the Nvidia H20 chip (10:48), and what Huawei’s announcement of the Ascend 920 chip means for U.S.-China AI competition (18:24).</description>
      <pubDate>Wed, 23 Apr 2025 10:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>We discuss Michael Kratsios's recent speech on US technology policy, the Trump administration's decision on Nvidia H20 chips, and the Huawei announcement of the Ascend 920 chip.</itunes:subtitle>
      <itunes:summary>In this episode, we discuss OSTP Director Michael Kratsios’s recent speech on US technology policy at the Endless Frontiers Retreat (0:19), the Trump administration’s decision to control the Nvidia H20 chip (10:48), and what Huawei’s announcement of the Ascend 920 chip means for U.S.-China AI competition (18:24).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss OSTP Director Michael Kratsios’s recent speech on US technology policy at the Endless Frontiers Retreat (0:19), the Trump administration’s decision to control the Nvidia H20 chip (10:48), and what Huawei’s announcement of the Ascend 920 chip means for U.S.-China AI competition (18:24).</p>]]>
      </content:encoded>
      <itunes:duration>1748</itunes:duration>
      <guid isPermaLink="false"><![CDATA[3744e122-1fe7-11f0-998d-37ef3ab04429]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS3445789072.mp3?updated=1745374138" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>A New Vision for Advancing AI Governance with Andrew Freedman</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this episode of the AI Policy Podcast, Wadhwani AI Center Director Gregory C. Allen is joined by Andrew Freedman, Chief Strategic Officer at Fathom, an organization whose mission is to find, build, and scale the solutions needed to help society transition to a world with AI. They discuss the origins and purpose of Fathom, key initiatives shaping AI policy around the country such as California Senate Bill 813, and the new administration's approach to AI governance. They also unpack the concept of “Private AI Governance” and what it means for the future of the U.S. AI ecosystem.

Andrew Freedman is the Chief Strategic Officer at Fathom, with over 15 years of expertise in emerging industries and regulatory frameworks. Previously, he was a Partner at Forbes Tate Partners, where he led the firm's coalition work in technology and emerging regulatory sectors. Andrew has advised governments in California, Canada, and Massachusetts, and has been a speaker at major conferences like Code Conference and Aspen Ideas Fest. Earlier in his career, Andrew served as Chief of Staff to Colorado's Lieutenant Governor, where he established the Office of Early Childhood and secured a $45 million Race to the Top Grant. He also managed the Colorado Commits to Kids campaign, raising $11 million in three months for education funding. Andrew holds a J.D. from Harvard Law School and a B.A. from Tufts University.</description>
      <pubDate>Wed, 16 Apr 2025 14:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>In this episode of the AI Policy Podcast, Wadhwani AI Center Director Gregory C. Allen is joined by Andrew Freedman, Chief Strategic Officer at Fathom, an organization whose mission is to find, build, and scale the solutions needed to help society transition to a world with AI. They discuss the origins and purpose of Fathom, key initiatives shaping AI policy around the country such as California Senate Bill 813, and the new administration's approach to AI governance. They also unpack the concept of “Private AI Governance” and what it means for the future of the U.S. AI ecosystem.

Andrew Freedman is the Chief Strategic Officer at Fathom, with over 15 years of expertise in emerging industries and regulatory frameworks. Previously, he was a Partner at Forbes Tate Partners, where he led the firm's coalition work in technology and emerging regulatory sectors. Andrew has advised governments in California, Canada, and Massachusetts, and has been a speaker at major conferences like Code Conference and Aspen Ideas Fest. Earlier in his career, Andrew served as Chief of Staff to Colorado's Lieutenant Governor, where he established the Office of Early Childhood and secured a $45 million Race to the Top Grant. He also managed the Colorado Commits to Kids campaign, raising $11 million in three months for education funding. Andrew holds a J.D. from Harvard Law School and a B.A. from Tufts University.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode of the <strong>AI Policy Podcast</strong>, Wadhwani AI Center Director <strong>Gregory C. Allen</strong> is joined by <strong>Andrew Freedman</strong>, Chief Strategic Officer at Fathom, an organization whose mission is to find, build, and scale the solutions needed to help society transition to a world with AI. They discuss the origins and purpose of Fathom, key initiatives shaping AI policy around the country such as <a href="https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260SB813">California Senate Bill 813</a>, and the new administration's approach to AI governance. They also unpack the concept of “Private AI Governance” and what it means for the future of the U.S. AI ecosystem. </p><p><br></p><p>Andrew Freedman is the Chief Strategic Officer at Fathom, with over 15 years of expertise in emerging industries and regulatory frameworks. Previously, he was a Partner at Forbes Tate Partners, where he led the firm's coalition work in technology and emerging regulatory sectors. Andrew has advised governments in California, Canada, and Massachusetts, and has been a speaker at major conferences like Code Conference and Aspen Ideas Fest. Earlier in his career, Andrew served as Chief of Staff to Colorado's Lieutenant Governor, where he established the Office of Early Childhood and secured a $45 million Race to the Top Grant. He also managed the Colorado Commits to Kids campaign, raising $11 million in three months for education funding. Andrew holds a J.D. from Harvard Law School and a B.A. from Tufts University.</p>]]>
      </content:encoded>
      <itunes:duration>2883</itunes:duration>
      <guid isPermaLink="false"><![CDATA[4ce62d4c-1a24-11f0-b6c6-8f840df6c71a]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS4409839624.mp3?updated=1744740666" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Tariffs, Nvidia’s H20 Export Control Exemption, and OMB’s New AI Guidance</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this episode, we discuss what the Trump administration’s tariffs mean for the US AI ecosystem (2:42), reporting that Nvidia’s H20s will be exempt from export controls (8:58), the latest AI guidance from the White House Office of Management and Budget (OMB) (12:48), and the EU’s AI Continent Action Plan (17:07).</description>
      <pubDate>Thu, 10 Apr 2025 13:07:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we talk about the Trump administration’s tariffs and what they mean for the US AI ecosystem, and more. </itunes:subtitle>
      <itunes:summary>In this episode, we discuss what the Trump administration’s tariffs mean for the US AI ecosystem (2:42), reporting that Nvidia’s H20s will be exempt from export controls (8:58), the latest AI guidance from the White House Office of Management and Budget (OMB) (12:48), and the EU’s AI Continent Action Plan (17:07).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss what the Trump administration’s tariffs mean for the US AI ecosystem (2:42), reporting that Nvidia’s H20s will be exempt from export controls (8:58), the latest AI guidance from the White House Office of Management and Budget (OMB) (12:48), and the EU’s AI Continent Action Plan (17:07).</p>]]>
      </content:encoded>
      <itunes:duration>1225</itunes:duration>
      <guid isPermaLink="false"><![CDATA[bb319924-160c-11f0-92db-27cde4fb590d]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS2873355520.mp3?updated=1744291205" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Mapping Chinese AI Regulation with Matt Sheehan</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this episode, we are joined by Matt Sheehan, fellow at the Carnegie Endowment for International Peace. We discuss the evolution of China's AI policymaking process over the past decade (6:45), the key institutions shaping Chinese AI policy today (44:30), and the changing nature of China's attitude to AI safety (50:55). </description>
      <pubDate>Wed, 02 Apr 2025 04:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Matt Sheehan, fellow at the Carnegie Endowment for International Peace, joins us to discuss China and AI today. </itunes:subtitle>
      <itunes:summary>In this episode, we are joined by Matt Sheehan, fellow at the Carnegie Endowment for International Peace. We discuss the evolution of China's AI policymaking process over the past decade (6:45), the key institutions shaping Chinese AI policy today (44:30), and the changing nature of China's attitude to AI safety (50:55). </itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we are joined by Matt Sheehan, fellow at the Carnegie Endowment for International Peace. We discuss the evolution of China's AI policymaking process over the past decade (6:45), the key institutions shaping Chinese AI policy today (44:30), and the changing nature of China's attitude to AI safety (50:55). </p>]]>
      </content:encoded>
      <itunes:duration>4114</itunes:duration>
      <guid isPermaLink="false"><![CDATA[791200a6-0f54-11f0-bf56-d754e0e30e0a]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS4220442603.mp3?updated=1743551894" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>AI Action Plan RFI, California's AI Policy Working Group Report, and Why Programming Jobs Are Disappearing</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this episode, we discuss AI companies' responses to the White House AI Action Plan Request For Information (RFI) related to key areas like export controls and AI governance (00:51), the release of the Joint California Policy Working Group on AI Frontier Models draft report (24:45), and how AI might be affecting the computer programming job market (40:10). </description>
      <pubDate>Wed, 26 Mar 2025 14:16:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>AI firms respond to the White House AI Action Plan, California’s AI policy draft, and AI’s impact on programming jobs.</itunes:subtitle>
      <itunes:summary>In this episode, we discuss AI companies' responses to the White House AI Action Plan Request For Information (RFI) related to key areas like export controls and AI governance (00:51), the release of the Joint California Policy Working Group on AI Frontier Models draft report (24:45), and how AI might be affecting the computer programming job market (40:10). </itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss AI companies' responses to the White House AI Action Plan Request For Information (RFI) related to key areas like export controls and AI governance (00:51), the release of the Joint California Policy Working Group on AI Frontier Models draft report (24:45), and how AI might be affecting the computer programming job market (40:10). </p>]]>
      </content:encoded>
      <itunes:duration>2896</itunes:duration>
      <guid isPermaLink="false"><![CDATA[e52bbc62-0a4c-11f0-91a7-73bc9a4b4b4c]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS7651461802.mp3?updated=1742998883" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The State and Local AI Regulation Landscape with Dean Ball</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this episode of the AI Policy Podcast, Wadhwani AI Center Director Gregory C. Allen is joined by Dean Ball, Research Fellow in the Artificial Intelligence &amp; Progress Project at George Mason University’s Mercatus Center. They will discuss how state and local governments are approaching AI regulation, what factors are shaping these efforts, where state and local efforts intersect, and how a fractured approach to governance might affect the AI policy landscape.

In addition to his role at George Mason University’s Mercatus Center, Dean Ball is the author of the Substack Hyperdimensional. Previously, he was Senior Program Manager for the Hoover Institution's State and Local Governance Initiative. Prior to his position at the Hoover Institution, he served as Executive Director of the Calvin Coolidge Presidential Foundation, based in Plymouth, Vermont, and Washington, D.C. He also worked as the Deputy Director of State and Local Policy at the Manhattan Institute for Policy Research from 2014–2018.</description>
      <pubDate>Wed, 19 Mar 2025 15:00:00 -0000</pubDate>
      <itunes:title>The State and Local AI Regulation Landscape with Dean Ball</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Dean Ball, Research Fellow in the Artificial Intelligence &amp; Progress Project at George Mason University’s Mercatus Center, joins to discuss the AI policy landscape. </itunes:subtitle>
      <itunes:summary>In this episode of the AI Policy Podcast, Wadhwani AI Center Director Gregory C. Allen is joined by Dean Ball, Research Fellow in the Artificial Intelligence &amp; Progress Project at George Mason University’s Mercatus Center. They will discuss how state and local governments are approaching AI regulation, what factors are shaping these efforts, where state and local efforts intersect, and how a fractured approach to governance might affect the AI policy landscape.

In addition to his role at George Mason University’s Mercatus Center, Dean Ball is the author of the Substack Hyperdimensional. Previously, he was Senior Program Manager for the Hoover Institution's State and Local Governance Initiative. Prior to his position at the Hoover Institution, he served as Executive Director of the Calvin Coolidge Presidential Foundation, based in Plymouth, Vermont, and Washington, D.C. He also worked as the Deputy Director of State and Local Policy at the Manhattan Institute for Policy Research from 2014–2018.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode of the AI Policy Podcast, Wadhwani AI Center Director Gregory C. Allen is joined by Dean Ball, Research Fellow in the Artificial Intelligence &amp; Progress Project at George Mason University’s Mercatus Center. They will discuss how state and local governments are approaching AI regulation, what factors are shaping these efforts, where state and local efforts intersect, and how a fractured approach to governance might affect the AI policy landscape.</p><p><br></p><p>In addition to his role at George Mason University’s Mercatus Center, Dean Ball is the author of the Substack Hyperdimensional. Previously, he was Senior Program Manager for the Hoover Institution's State and Local Governance Initiative. Prior to his position at the Hoover Institution, he served as Executive Director of the Calvin Coolidge Presidential Foundation, based in Plymouth, Vermont, and Washington, D.C. He also worked as the Deputy Director of State and Local Policy at the Manhattan Institute for Policy Research from 2014–2018. </p>]]>
      </content:encoded>
      <itunes:duration>3181</itunes:duration>
      <guid isPermaLink="false"><![CDATA[61c337d2-0426-11f0-a65d-53d0aca80f54]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS7475980534.mp3?updated=1742322635" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>DeepSeek Report, Manus AI, and the DOD’s New Acquisition Approach</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this episode, we discuss the Wadhwani AI Center’s latest publication on the implications of DeepSeek for the future of export controls (0:40), Chinese company Manus AI (9:05), what Secretary Hegseth’s memo means for the DOD AI ecosystem (15:27), and xAI’s acquisition of 1 million square feet for its new data center in Memphis (21:28). </description>
      <pubDate>Wed, 12 Mar 2025 15:12:00 -0000</pubDate>
      <itunes:title>DeepSeek Report, Manus AI, and the DOD’s New Acquisition Approach</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we discuss the Wadhwani AI Center’s latest publication on the implications of DeepSeek for the future of export controls (0:40), Chinese company Manus AI (9:05), what Secretary Hegseth’s memo means for the DOD AI ecosystem (15:27), and xAI’s acquisition of 1 million square feet for its new data center in Memphis (21:28). </itunes:subtitle>
      <itunes:summary>In this episode, we discuss the Wadhwani AI Center’s latest publication on the implications of DeepSeek for the future of export controls (0:40), Chinese company Manus AI (9:05), what Secretary Hegseth’s memo means for the DOD AI ecosystem (15:27), and xAI’s acquisition of 1 million square feet for its new data center in Memphis (21:28). </itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss the Wadhwani AI Center’s <a href="https://www.csis.org/analysis/deepseek-huawei-export-controls-and-future-us-china-ai-race">latest publication</a> on the implications of DeepSeek for the future of export controls (0:40), Chinese company Manus AI (9:05), what Secretary Hegseth’s memo means for the DOD AI ecosystem (15:27), and xAI’s acquisition of 1 million square feet for its new data center in Memphis (21:28). </p>]]>
      </content:encoded>
      <itunes:duration>1516</itunes:duration>
      <guid isPermaLink="false"><![CDATA[94899de0-ff54-11ef-bf13-ab1f64979bc9]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS5476178078.mp3?updated=1741792737" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The UAE's AI Ambitions</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this special episode, we are joined by Georgia Adamson, Research Associate at the CSIS Wadhwani AI Center, Lennart Heim, Associate Information Scientist at RAND, and Sam Winter-Levy, Fellow for Technology and International Affairs at the Carnegie Endowment for International Peace. We outline the biggest takeaways from our recent report about the UAE's role in the global AI race (2:34), the details of the Microsoft-G42 deal (17:21), our assessment of the UAE-China relationship when it comes to AI technology (25:45), and the future of export controls (44:07).</description>
      <pubDate>Tue, 04 Mar 2025 18:58:00 -0000</pubDate>
      <itunes:title>The UAE's AI Ambitions</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>We outline the biggest takeaways from our recent report about the UAE's role in the global AI race, the details of the Microsoft-G42 deal, our assessment of the UAE-China relationship when it comes to AI technology, and the future of export controls.</itunes:subtitle>
      <itunes:summary>In this special episode, we are joined by Georgia Adamson, Research Associate at the CSIS Wadhwani AI Center, Lennart Heim, Associate Information Scientist at RAND, and Sam Winter-Levy, Fellow for Technology and International Affairs at the Carnegie Endowment for International Peace. We outline the biggest takeaways from our recent report about the UAE's role in the global AI race (2:34), the details of the Microsoft-G42 deal (17:21), our assessment of the UAE-China relationship when it comes to AI technology (25:45), and the future of export controls (44:07).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this special episode, we are joined by Georgia Adamson, Research Associate at the CSIS Wadhwani AI Center, Lennart Heim, Associate Information Scientist at RAND, and Sam Winter-Levy, Fellow for Technology and International Affairs at the Carnegie Endowment for International Peace. We outline the biggest takeaways from our recent report about the UAE's role in the global AI race (2:34), the details of the Microsoft-G42 deal (17:21), our assessment of the UAE-China relationship when it comes to AI technology (25:45), and the future of export controls (44:07).</p>]]>
      </content:encoded>
      <itunes:duration>3202</itunes:duration>
      <guid isPermaLink="false"><![CDATA[0c5eb696-f92b-11ef-8985-6b8e782c0deb]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS6668209410.mp3?updated=1741115176" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Grok 3, DOGE, and Xi Jinping's Meeting with DeepSeek</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In our first video episode, we discuss xAI's release of the Grok 3 family of models, the Department of Government Efficiency's (DOGE) impact on the federal AI workforce, Xi Jinping's meeting with major Chinese AI company executives, and what the Evo-2 model could mean for the future of biology. </description>
      <pubDate>Tue, 25 Feb 2025 21:24:07 -0000</pubDate>
      <itunes:title>Grok 3, DOGE, and Xi Jinping's Meeting with DeepSeek</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>We discuss xAI's release of the Grok 3 family of models, the Department of Government Efficiency's (DOGE) impact on the federal AI workforce, Xi Jinping's meeting with major Chinese AI company executives, and what the Evo-2 model could mean for the future of biology. </itunes:subtitle>
      <itunes:summary>In our first video episode, we discuss xAI's release of the Grok 3 family of models, the Department of Government Efficiency's (DOGE) impact on the federal AI workforce, Xi Jinping's meeting with major Chinese AI company executives, and what the Evo-2 model could mean for the future of biology. </itunes:summary>
      <content:encoded>
        <![CDATA[<p>In our first video episode, we discuss xAI's release of the Grok 3 family of models, the Department of Government Efficiency's (DOGE) impact on the federal AI workforce, Xi Jinping's meeting with major Chinese AI company executives, and what the Evo-2 model could mean for the future of biology. </p>]]>
      </content:encoded>
      <itunes:duration>2107</itunes:duration>
      <guid isPermaLink="false"><![CDATA[f624d87c-f3be-11ef-a03a-db03c3724fee]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS7622568878.mp3?updated=1740518998" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Unpacking the AI Action Summit</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this special episode, Greg breaks down his biggest takeaways from the Paris AI Action Summit. He discusses France’s goals for the summit (5:05), Vice President JD Vance’s speech about the US vision for AI (12:16), the EU’s approach to the convening (17:13), why the US and UK did not sign the summit declaration (20:50), and the rebranded UK AI Security Institute (23:20).</description>
      <pubDate>Fri, 14 Feb 2025 21:23:00 -0000</pubDate>
      <itunes:title>Unpacking the AI Action Summit</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this special episode, Greg breaks down his biggest takeaways from the Paris AI Action Summit.</itunes:subtitle>
      <itunes:summary>In this special episode, Greg breaks down his biggest takeaways from the Paris AI Action Summit. He discusses France’s goals for the summit (5:05), Vice President JD Vance’s speech about the US vision for AI (12:16), the EU’s approach to the convening (17:13), why the US and UK did not sign the summit declaration (20:50), and the rebranded UK AI Security Institute (23:20).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this special episode, Greg breaks down his biggest takeaways from the Paris AI Action Summit. He discusses France’s goals for the summit (5:05), Vice President JD Vance’s speech about the US vision for AI (12:16), the EU’s approach to the convening (17:13), why the US and UK did not sign the summit declaration (20:50), and the rebranded UK AI Security Institute (23:20).</p>]]>
      </content:encoded>
      <itunes:duration>1970</itunes:duration>
      <guid isPermaLink="false"><![CDATA[1e54d106-eb1a-11ef-b2b8-9358e7c1ad8d]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS2095510558.mp3?updated=1739627784" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>DeepSeek Deep Dive</title>
      <description>In this crossover episode with Truth of the Matter, we discuss the origins of Chinese AI company DeepSeek (0:55), the release of its DeepSeek R1 model and what it means for the future of U.S.-China AI competition (3:05), why it prompted such a massive reaction from U.S. policymakers and the U.S. stock market (14:04), and the Trump administration's response (24:03). </description>
      <pubDate>Wed, 29 Jan 2025 17:35:00 -0000</pubDate>
      <itunes:title>DeepSeek Deep Dive</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this crossover episode with Truth of the Matter, we discuss the origins of Chinese AI company DeepSeek, the release of its DeepSeek R1 model and what it means for the future of U.S.-China AI competition, why it prompted such a massive reaction from U.S. policymakers and the U.S. stock market, and the Trump administration's response.</itunes:subtitle>
      <itunes:summary>In this crossover episode with Truth of the Matter, we discuss the origins of Chinese AI company DeepSeek (0:55), the release of its DeepSeek R1 model and what it means for the future of U.S.-China AI competition (3:05), why it prompted such a massive reaction from U.S. policymakers and the U.S. stock market (14:04), and the Trump administration's response (24:03). </itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this crossover episode with Truth of the Matter, we discuss the origins of Chinese AI company DeepSeek (0:55), the release of its DeepSeek R1 model and what it means for the future of U.S.-China AI competition (3:05), why it prompted such a massive reaction from U.S. policymakers and the U.S. stock market (14:04), and the Trump administration's response (24:03). </p>]]>
      </content:encoded>
      <itunes:duration>1753</itunes:duration>
      <guid isPermaLink="false"><![CDATA[8284dab4-de67-11ef-93e5-13e5a30545ef]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS6314922510.mp3?updated=1738175605" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Week 1 of the Trump administration, AGI timelines, and DeepSeek</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this episode, we break down President Trump's repeal of the Biden administration's Executive Order (EO) on AI (1:00), the release of the America First Trade Policy memorandum (9:52), and the Trump administration's own AI EO (15:02). We are then joined by Lennart Heim, Senior Information Scientist at the RAND Corporation, to discuss the Stargate announcement (20:40), how AI company CEOs are talking about AGI (38:36), and why the latest models from DeepSeek matter (52:02).</description>
      <pubDate>Mon, 27 Jan 2025 19:08:00 -0000</pubDate>
      <itunes:title>Week 1 of the Trump administration, AGI timelines, and DeepSeek</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>The AI Policy Podcast is joined by Lennart Heim, Senior Information Scientist at the RAND Corporation, to talk about the Trump administration's AI EO and more. </itunes:subtitle>
      <itunes:summary>In this episode, we break down President Trump's repeal of the Biden administration's Executive Order (EO) on AI (1:00), the release of the America First Trade Policy memorandum (9:52), and the Trump administration's own AI EO (15:02). We are then joined by Lennart Heim, Senior Information Scientist at the RAND Corporation, to discuss the Stargate announcement (20:40), how AI company CEOs are talking about AGI (38:36), and why the latest models from DeepSeek matter (52:02).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we break down President Trump's repeal of the Biden administration's Executive Order (EO) on AI (1:00), the release of the America First Trade Policy memorandum (9:52), and the Trump administration's own AI EO (15:02). We are then joined by Lennart Heim, Senior Information Scientist at the RAND Corporation, to discuss the Stargate announcement (20:40), how AI company CEOs are talking about AGI (38:36), and why the latest models from DeepSeek matter (52:02).</p>]]>
      </content:encoded>
      <itunes:duration>4894</itunes:duration>
      <guid isPermaLink="false"><![CDATA[d2c2ffa6-daa7-11ef-843a-078e0a87408e]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS8565921126.mp3?updated=1738005658" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Executive Order on AI and Energy Infrastructure - Emergency Podcast 2.0</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this special episode of the AI Policy Podcast, Andrew, Greg, and CSIS Energy Security and Climate Change Program Director Joseph Majkut discuss the Biden administration's Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure. They consider the motivation for this measure and its primary goals (1:07), its reception among AI and hyperscaler companies (12:18), and how the Trump administration might approach AI and energy (17:50). </description>
      <pubDate>Tue, 14 Jan 2025 22:35:00 -0000</pubDate>
      <itunes:title>Executive Order on AI and Energy Infrastructure - Emergency Podcast 2.0</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>CSIS Energy Security and Climate Change Program Director Joseph Majkut discusses the Biden administration's Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure. </itunes:subtitle>
      <itunes:summary>In this special episode of the AI Policy Podcast, Andrew, Greg, and CSIS Energy Security and Climate Change Program Director Joseph Majkut discuss the Biden administration's Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure. They consider the motivation for this measure and its primary goals (1:07), its reception among AI and hyperscaler companies (12:18), and how the Trump administration might approach AI and energy (17:50). </itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this special episode of the AI Policy Podcast, Andrew, Greg, and CSIS Energy Security and Climate Change Program Director Joseph Majkut discuss the Biden administration's Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure. They consider the motivation for this measure and its primary goals (1:07), its reception among AI and hyperscaler companies (12:18), and how the Trump administration might approach AI and energy (17:50). </p>]]>
      </content:encoded>
      <itunes:duration>1358</itunes:duration>
      <guid isPermaLink="false"><![CDATA[028d127c-d2c8-11ef-87b4-d7b59cb548e8]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS6493721342.mp3?updated=1736895251" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>AI Diffusion Framework - Emergency Podcast</title>
      <description>In this pressing episode, we break down the release of the Biden administration's Framework for Artificial Intelligence Diffusion. We discuss the rationale for this latest control (0:52), and its reception among major AI and semiconductor firms (8:14), U.S. allies (17:15), and the incoming administration (19:48). </description>
      <pubDate>Mon, 13 Jan 2025 22:18:00 -0000</pubDate>
      <itunes:title>AI Diffusion Framework - Emergency Podcast</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this pressing episode, we break down the release of the Biden administration's Framework for Artificial Intelligence Diffusion. We discuss the rationale for this latest control, and its reception among major AI and semiconductor firms, U.S. allies, and the incoming administration. </itunes:subtitle>
      <itunes:summary>In this pressing episode, we break down the release of the Biden administration's Framework for Artificial Intelligence Diffusion. We discuss the rationale for this latest control (0:52), and its reception among major AI and semiconductor firms (8:14), U.S. allies (17:15), and the incoming administration (19:48). </itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this pressing episode, we break down the release of the Biden administration's Framework for Artificial Intelligence Diffusion. We discuss the rationale for this latest control (0:52), and its reception among major AI and semiconductor firms (8:14), U.S. allies (17:15), and the incoming administration (19:48). </p>]]>
      </content:encoded>
      <itunes:duration>1686</itunes:duration>
      <guid isPermaLink="false"><![CDATA[84f6431c-d1f8-11ef-85c2-f32b5eafcce1]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS9993569891.mp3?updated=1736807086" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Export Control Update, White House AI Czar, OpenAI-Anduril Team-up, and China’s Loyal Wingman</title>
      <description>In this episode, we discuss the December 2nd semiconductor export control update (0:45), the Trump administration’s appointment of David Sacks as the White House AI czar (5:35), the OpenAI and Anduril partnership and its implication for national security (9:31), and the latest from China’s autonomous fighter aircraft program (16:39).  </description>
      <pubDate>Thu, 19 Dec 2024 17:57:14 -0000</pubDate>
      <itunes:title>Export Control Update, White House AI Czar, OpenAI-Anduril Team-up, and China’s Loyal Wingman</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>We discuss the December 2nd semiconductor export control update, the Trump administration’s appointment of David Sacks as the White House AI czar, the OpenAI and Anduril partnership and its implication for national security, and the latest from China’s autonomous fighter aircraft program.  </itunes:subtitle>
      <itunes:summary>In this episode, we discuss the December 2nd semiconductor export control update (0:45), the Trump administration’s appointment of David Sacks as the White House AI czar (5:35), the OpenAI and Anduril partnership and its implication for national security (9:31), and the latest from China’s autonomous fighter aircraft program (16:39).  </itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss the December 2nd semiconductor export control update (0:45), the Trump administration’s appointment of David Sacks as the White House AI czar (5:35), the OpenAI and Anduril partnership and its implication for national security (9:31), and the latest from China’s autonomous fighter aircraft program (16:39).  </p>]]>
      </content:encoded>
      <itunes:duration>1214</itunes:duration>
      <guid isPermaLink="false"><![CDATA[234d5f94-be32-11ef-98a2-fbac89029d4a]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS2813932900.mp3?updated=1734632341" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>U.S. Priorities for Domestic and International AI Governance with Ben Buchanan</title>
      <description>On this special episode, Wadhwani AI Center director Gregory C. Allen is joined by Dr. Ben Buchanan, the White House Special Advisor for AI. They discuss the Biden administration's biggest AI policy achievements, including the AI Bill of Rights, the AI Safety Institute, the Hiroshima AI process, and the National Security Memorandum on AI.</description>
      <pubDate>Tue, 17 Dec 2024 05:00:00 -0000</pubDate>
      <itunes:title>U.S. Priorities for Domestic and International AI Governance with Ben Buchanan</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Wadhwani AI Center director Gregory C. Allen is joined by Dr. Ben Buchanan, the White House Special Advisor for AI.</itunes:subtitle>
      <itunes:summary>On this special episode, Wadhwani AI Center director Gregory C. Allen is joined by Dr. Ben Buchanan, the White House Special Advisor for AI. They discuss the Biden administration's biggest AI policy achievements, including the AI Bill of Rights, the AI Safety Institute, the Hiroshima AI process, and the National Security Memorandum on AI.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>On this special episode, Wadhwani AI Center director Gregory C. Allen is joined by Dr. Ben Buchanan, the White House Special Advisor for AI. They discuss the Biden administration's biggest AI policy achievements, including the AI Bill of Rights, the AI Safety Institute, the Hiroshima AI process, and the National Security Memorandum on AI.</p>]]>
      </content:encoded>
      <itunes:duration>3013</itunes:duration>
      <guid isPermaLink="false"><![CDATA[03d8ad8e-bbf4-11ef-9a5c-5b2507754819]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS2394260407.mp3?updated=1734384519" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>AI Policy in Trump 2.0</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>On this special episode, New York Times reporter Ana Swanson is joined by Neil Chilson, Head of AI Policy at The Abundance Institute, Kara Frederick, Director, Tech Policy Center at The Heritage Foundation, and Brandon Pugh, Director and Senior Fellow, Cybersecurity and Emerging Threats at R Street Institute. They discuss what we can expect from the incoming Trump administration when it comes to AI policy. </description>
      <pubDate>Mon, 16 Dec 2024 16:00:00 -0000</pubDate>
      <itunes:title>AI Policy in Trump 2.0</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>New York Times reporter Ana Swanson is joined by Neil Chilson, Head of AI Policy at The Abundance Institute, Kara Frederick, Director, Tech Policy Center at The Heritage Foundation, and Brandon Pugh, Director and Senior Fellow, Cybersecurity and Emerging Threats at R Street Institute.</itunes:subtitle>
      <itunes:summary>On this special episode, New York Times reporter Ana Swanson is joined by Neil Chilson, Head of AI Policy at The Abundance Institute, Kara Frederick, Director, Tech Policy Center at The Heritage Foundation, and Brandon Pugh, Director and Senior Fellow, Cybersecurity and Emerging Threats at R Street Institute. They discuss what we can expect from the incoming Trump administration when it comes to AI policy. </itunes:summary>
      <content:encoded>
        <![CDATA[<p>On this special episode, New York Times reporter Ana Swanson is joined by Neil Chilson, Head of AI Policy at The Abundance Institute, Kara Frederick, Director, Tech Policy Center at The Heritage Foundation, and Brandon Pugh, Director and Senior Fellow, Cybersecurity and Emerging Threats at R Street Institute. They discuss what we can expect from the incoming Trump administration when it comes to AI policy. </p>]]>
      </content:encoded>
      <itunes:duration>3444</itunes:duration>
      <guid isPermaLink="false"><![CDATA[e47bec56-bbc3-11ef-a049-6be17e3f3d0b]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS6349118411.mp3?updated=1734382272" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Understanding the AI Policy Landscape with Alondra Nelson</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this episode, we are joined by Alondra Nelson, the Harold F. Linder Chair in the School of Social Science at the Institute for Advanced Study, and the former acting director of the White House Office of Science and Technology Policy (OSTP). We discuss her background in AI policy (1:30), the Blueprint for an AI Bill of Rights (9:43), its relationship to the White House Executive Order on AI (23:47), the Senate AI Insight Forums (29:55), the European approach to AI governance (29:55), state-level AI regulation (41:20), and how the incoming administration should approach AI policy (47:04).</description>
      <pubDate>Mon, 09 Dec 2024 14:07:49 -0000</pubDate>
      <itunes:title>Understanding the AI Policy Landscape with Alondra Nelson</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we are joined by Alondra Nelson, the Harold F. Linder Chair in the School of Social Science at the Institute for Advanced Study, and the former acting director of the White House Office of Science and Technology Policy (OSTP).</itunes:subtitle>
      <itunes:summary>In this episode, we are joined by Alondra Nelson, the Harold F. Linder Chair in the School of Social Science at the Institute for Advanced Study, and the former acting director of the White House Office of Science and Technology Policy (OSTP). We discuss her background in AI policy (1:30), the Blueprint for an AI Bill of Rights (9:43), its relationship to the White House Executive Order on AI (23:47), the Senate AI Insight Forums (29:55), the European approach to AI governance (29:55), state-level AI regulation (41:20), and how the incoming administration should approach AI policy (47:04).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we are joined by Alondra Nelson, the Harold F. Linder Chair in the School of Social Science at the Institute for Advanced Study, and the former acting director of the White House Office of Science and Technology Policy (OSTP). We discuss her background in AI policy (1:30), the Blueprint for an AI Bill of Rights (9:43), its relationship to the White House Executive Order on AI (23:47), the Senate AI Insight Forums (29:55), the European approach to AI governance (29:55), state-level AI regulation (41:20), and how the incoming administration should approach AI policy (47:04).</p>]]>
      </content:encoded>
      <itunes:duration>3014</itunes:duration>
      <guid isPermaLink="false"><![CDATA[3670f30e-b637-11ef-af18-63a2a45f4b61]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS6741667441.mp3?updated=1733753673" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Scaling Laws, a Chinese AI Ecosystem Update, and a Manhattan Project for AI</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this episode, we discuss recent reporting that so-called "scaling laws" are slowing and the potential implications for the policy community (0:37), the latest models coming out of the Chinese AI ecosystem (12:37), the U.S.-China Economic and Security Review Commission's recommendation for a Manhattan Project for AI (19:02), and the biggest takeaways from the first draft of the European Union's General Purpose AI Code of Practice (25:46).
https://www.csis.org/analysis/eu-code-practice-general-purpose-ai-key-takeaways-first-draft
https://www.csis.org/analysis/ai-safety-institute-international-network-next-steps-and-recommendations
https://www.csis.org/analysis/understanding-military-ai-ecosystem-ukraine
https://www.csis.org/events/international-ai-policy-outlook-2025

Correction: hyperscalers Meta, Microsoft, Google, and Amazon are expected to invest $300 billion in AI and AI infrastructure in 2025.</description>
      <pubDate>Mon, 25 Nov 2024 20:29:00 -0000</pubDate>
      <itunes:title>Scaling Laws, a Chinese AI Ecosystem Update, and a Manhattan Project for AI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we discuss recent reporting that so-called "scaling laws" are slowing and the potential implications for the policy community (0:37), the latest models coming out of the Chinese AI ecosystem (12:37), the U.S.-China Economic and Security Review Commission's recommendation for a Manhattan Project for AI (19:02), and the biggest takeaways from the first draft of the European Union's General Purpose AI Code of Practice (25:46).</itunes:subtitle>
      <itunes:summary>In this episode, we discuss recent reporting that so-called "scaling laws" are slowing and the potential implications for the policy community (0:37), the latest models coming out of the Chinese AI ecosystem (12:37), the U.S.-China Economic and Security Review Commission's recommendation for a Manhattan Project for AI (19:02), and the biggest takeaways from the first draft of the European Union's General Purpose AI Code of Practice (25:46).
https://www.csis.org/analysis/eu-code-practice-general-purpose-ai-key-takeaways-first-draft
https://www.csis.org/analysis/ai-safety-institute-international-network-next-steps-and-recommendations
https://www.csis.org/analysis/understanding-military-ai-ecosystem-ukraine
https://www.csis.org/events/international-ai-policy-outlook-2025

Correction: hyperscalers Meta, Microsoft, Google, and Amazon are expected to invest $300 billion in AI and AI infrastructure in 2025.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss recent reporting that so-called "scaling laws" are slowing and the potential implications for the policy community (0:37), the latest models coming out of the Chinese AI ecosystem (12:37), the U.S.-China Economic and Security Review Commission's recommendation for a Manhattan Project for AI (19:02), and the biggest takeaways from the first draft of the European Union's General Purpose AI Code of Practice (25:46).</p><p><a href="https://www.csis.org/analysis/eu-code-practice-general-purpose-ai-key-takeaways-first-draft">https://www.csis.org/analysis/eu-code-practice-general-purpose-ai-key-takeaways-first-draft</a></p><p><a href="https://www.csis.org/analysis/ai-safety-institute-international-network-next-steps-and-recommendations">https://www.csis.org/analysis/ai-safety-institute-international-network-next-steps-and-recommendations</a></p><p><a href="https://www.csis.org/analysis/understanding-military-ai-ecosystem-ukraine">https://www.csis.org/analysis/understanding-military-ai-ecosystem-ukraine</a></p><p><a href="https://www.csis.org/events/international-ai-policy-outlook-2025">https://www.csis.org/events/international-ai-policy-outlook-2025</a></p><p><br></p><p><em>Correction: hyperscalers Meta, Microsoft, Google, and Amazon are expected to invest $300 billion in AI and AI infrastructure in 2025.</em></p>]]>
      </content:encoded>
      <itunes:duration>1958</itunes:duration>
      <guid isPermaLink="false"><![CDATA[fe4389a4-ab6b-11ef-8770-8b41ea7eb6a2]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS8884686653.mp3?updated=1733258361" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The Promise of AI Governance with Vilas Dhar</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this episode, we are joined by Vilas Dhar, President and Trustee of the Patrick J. McGovern Foundation, a 21st century $1.5 billion philanthropy advancing AI and data solutions to create a thriving, equitable, and sustainable future for all. We discuss his background (1:26), the foundation and its approach to AI philanthropy (4:11), building public sector capacity in AI (13:00), the definition of AI governance (20:07), ongoing multilateral governance efforts (23:01), how liberal and authoritarian norms affect AI (28:35), and what the future of AI might look like (30:30).</description>
      <pubDate>Wed, 20 Nov 2024 20:39:00 -0000</pubDate>
      <itunes:title>The Promise of AI Governance with Vilas Dhar</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we are joined by Vilas Dhar, President and Trustee of the Patrick J. McGovern Foundation, a 21st century $1.5 billion philanthropy advancing AI and data solutions to create a thriving, equitable, and sustainable future for all.</itunes:subtitle>
      <itunes:summary>In this episode, we are joined by Vilas Dhar, President and Trustee of the Patrick J. McGovern Foundation, a 21st century $1.5 billion philanthropy advancing AI and data solutions to create a thriving, equitable, and sustainable future for all. We discuss his background (1:26), the foundation and its approach to AI philanthropy (4:11), building public sector capacity in AI (13:00), the definition of AI governance (20:07), ongoing multilateral governance efforts (23:01), how liberal and authoritarian norms affect AI (28:35), and what the future of AI might look like (30:30).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we are joined by Vilas Dhar, President and Trustee of the Patrick J. McGovern Foundation, a 21st century $1.5 billion philanthropy advancing AI and data solutions to create a thriving, equitable, and sustainable future for all. We discuss his background (1:26), the foundation and its approach to AI philanthropy (4:11), building public sector capacity in AI (13:00), the definition of AI governance (20:07), ongoing multilateral governance efforts (23:01), how liberal and authoritarian norms affect AI (28:35), and what the future of AI might look like (30:30).</p>]]>
      </content:encoded>
      <itunes:duration>2145</itunes:duration>
      <guid isPermaLink="false"><![CDATA[9f555c00-a77f-11ef-8707-afeec334d0de]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS3562481949.mp3?updated=1732135505" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Predicting AI Policy in the Second Trump Administration</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this episode, we discuss what AI policy might look like under the second Trump administration. We dive into the first Trump administration's achievements (0:50), how the Trump campaign handled AI policy (3:37), and where the new administration might fall on key issue areas like national security (5:59), safety (7:37), export controls (11:27), open-source (14:04), and more. </description>
      <pubDate>Thu, 07 Nov 2024 22:28:55 -0000</pubDate>
      <itunes:title>Predicting AI Policy in the Second Trump Administration</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we discuss what AI policy might look like under the second Trump administration. </itunes:subtitle>
      <itunes:summary>In this episode, we discuss what AI policy might look like under the second Trump administration. We dive into the first Trump administration's achievements (0:50), how the Trump campaign handled AI policy (3:37), and where the new administration might fall on key issue areas like national security (5:59), safety (7:37), export controls (11:27), open-source (14:04), and more. </itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss what AI policy might look like under the second Trump administration. We dive into the first Trump administration's achievements (0:50), how the Trump campaign handled AI policy (3:37), and where the new administration might fall on key issue areas like national security (5:59), safety (7:37), export controls (11:27), open-source (14:04), and more. </p>]]>
      </content:encoded>
      <itunes:duration>1654</itunes:duration>
      <guid isPermaLink="false"><![CDATA[baba0874-9d57-11ef-b023-9719a09cc012]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS2156488215.mp3?updated=1731018861" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The National Security Memorandum on AI</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>In this special episode, we discuss the National Security Memorandum on AI that the Biden administration released on October 24th, its primary audience and main objectives, and what the upcoming U.S. election could mean for its implementation.</description>
      <pubDate>Fri, 25 Oct 2024 00:14:00 -0000</pubDate>
      <itunes:title>The National Security Memorandum on AI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this special episode, we discuss the National Security Memorandum on AI that the Biden administration released on October 24th, its primary audience and main objectives, and what the upcoming U.S. election could mean for its implementation.</itunes:subtitle>
      <itunes:summary>In this special episode, we discuss the National Security Memorandum on AI that the Biden administration released on October 24th, its primary audience and main objectives, and what the upcoming U.S. election could mean for its implementation.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this special episode, we discuss the National Security Memorandum on AI that the Biden administration released on October 24th, its primary audience and main objectives, and what the upcoming U.S. election could mean for its implementation.</p>]]>
      </content:encoded>
      <itunes:duration>1804</itunes:duration>
      <guid isPermaLink="false"><![CDATA[a515464a-9263-11ef-bd9e-0f14b778f6bd]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS9321445235.mp3?updated=1729814515" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>AI and Advanced Technologies in the Fight: Combatant Command and Service Collaboration</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>On this special episode, Wadhwani AI Center director Gregory C. Allen is joined by Schuyler Moore, the first-ever Chief Technology Officer of U.S. Central Command (CENTCOM), Justin Fanelli, the Chief Technology Officer of the Department of the Navy, and Dr. Alex Miller, the Chief Technology Officer for the Chief of Staff of the Army for a discussion on the warfighter's adoption of emerging technologies. They discuss how U.S. Central Command (CENTCOM), in conjunction with the Army and Navy, has been driving the use of AI and other advanced technologies through a series of exercises such as Desert Sentry, Digital Falcon Oasis, Desert Guardian, and Project Convergence.</description>
      <pubDate>Wed, 02 Oct 2024 08:00:00 -0000</pubDate>
      <itunes:title>AI and Advanced Technologies in the Fight</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>Wadhwani AI Center director Gregory C. Allen is joined by Schuyler Moore, the first-ever Chief Technology Officer of U.S. Central Command (CENTCOM), Justin Fanelli, the Chief Technology Officer of the Department of the Navy, and Dr. Alex Miller, the Chief Technology Officer for the Chief of Staff of the Army for a discussion on the warfighter's adoption of emerging technologies.</itunes:subtitle>
      <itunes:summary>On this special episode, Wadhwani AI Center director Gregory C. Allen is joined by Schuyler Moore, the first-ever Chief Technology Officer of U.S. Central Command (CENTCOM), Justin Fanelli, the Chief Technology Officer of the Department of the Navy, and Dr. Alex Miller, the Chief Technology Officer for the Chief of Staff of the Army for a discussion on the warfighter's adoption of emerging technologies. They discuss how U.S. Central Command (CENTCOM), in conjunction with the Army and Navy, has been driving the use of AI and other advanced technologies through a series of exercises such as Desert Sentry, Digital Falcon Oasis, Desert Guardian, and Project Convergence.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>On this special episode, Wadhwani AI Center director Gregory C. Allen is joined by Schuyler Moore, the first-ever Chief Technology Officer of U.S. Central Command (CENTCOM), Justin Fanelli, the Chief Technology Officer of the Department of the Navy, and Dr. Alex Miller, the Chief Technology Officer for the Chief of Staff of the Army for a discussion on the warfighter's adoption of emerging technologies. They discuss how U.S. Central Command (CENTCOM), in conjunction with the Army and Navy, has been driving the use of AI and other advanced technologies through a series of exercises such as Desert Sentry, Digital Falcon Oasis, Desert Guardian, and Project Convergence.</p>]]>
      </content:encoded>
      <itunes:duration>5016</itunes:duration>
      <guid isPermaLink="false"><![CDATA[de823788-803b-11ef-afe9-ef15b4662cf6]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS6294373457.mp3?updated=1727818310" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The European Perspective on AI Governance with Dragoș Tudorache</title>
      <description>In this episode, we are joined by former MEP Dragoș Tudorache, co-rapporteur of the EU AI Act and Chair of the Special Committee on AI in the Digital Age. We discuss where we are in the EU AI Act roadmap (2:37), how to balance innovation and regulation (11:20), the future of the EU AI Office (25:00), and the increasing energy infrastructure demands of AI (42:30).
The European Approach to Regulating Artificial Intelligence</description>
      <pubDate>Mon, 16 Sep 2024 23:45:00 -0000</pubDate>
      <itunes:title>The European Perspective on AI Governance with Dragoș Tudorache</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we are joined by former MEP Dragoș Tudorache, co-rapporteur of the EU AI Act and Chair of the Special Committee on AI in the Digital Age. We discuss where we are in the EU AI Act roadmap, how to balance innovation and regulation, the future of the EU AI Office, and the increasing energy infrastructure demands of AI.</itunes:subtitle>
      <itunes:summary>In this episode, we are joined by former MEP Dragoș Tudorache, co-rapporteur of the EU AI Act and Chair of the Special Committee on AI in the Digital Age. We discuss where we are in the EU AI Act roadmap (2:37), how to balance innovation and regulation (11:20), the future of the EU AI Office (25:00), and the increasing energy infrastructure demands of AI (42:30).
The European Approach to Regulating Artificial Intelligence</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we are joined by former MEP <strong>Dragoș Tudorache</strong>, co-rapporteur of the EU AI Act and Chair of the Special Committee on AI in the Digital Age. We discuss where we are in the EU AI Act roadmap (2:37), how to balance innovation and regulation (11:20), the future of the EU AI Office (25:00), and the increasing energy infrastructure demands of AI (42:30).</p><p><a href="https://www.youtube.com/watch?v=BBmq4T_550U&amp;t=2038s">The European Approach to Regulating Artificial Intelligence</a></p>]]>
      </content:encoded>
      <itunes:duration>3278</itunes:duration>
      <guid isPermaLink="false"><![CDATA[04cbf36e-7486-11ef-a247-6f05b6ea7685]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS8802942183.mp3?updated=1726532112" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Nvidia's Earnings Report, Gallium and Germanium Export Controls, and OpenAI's National Security Demo</title>
      <description>In this episode, we discuss Nvidia's earnings report and its implications for the AI industry (0:53), the impact of China's gallium and germanium export controls on the global semiconductor competition (9:50), and why OpenAI is demonstrating its capabilities for the national security community (18:00).</description>
      <pubDate>Thu, 05 Sep 2024 21:29:00 -0000</pubDate>
      <itunes:title>Nvidia's Earnings Report, Gallium and Germanium Export Controls, and OpenAI's National Security Demo</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we discuss Nvidia's earnings report and its implications for the AI industry, the impact of China's gallium and germanium export controls on the global semiconductor competition, and why OpenAI is demonstrating its capabilities for the national security community.</itunes:subtitle>
      <itunes:summary>In this episode, we discuss Nvidia's earnings report and its implications for the AI industry (0:53), the impact of China's gallium and germanium export controls on the global semiconductor competition (9:50), and why OpenAI is demonstrating its capabilities for the national security community (18:00).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss Nvidia's earnings report and its implications for the AI industry (0:53), the impact of China's gallium and germanium export controls on the global semiconductor competition (9:50), and why OpenAI is demonstrating its capabilities for the national security community (18:00).</p>]]>
      </content:encoded>
      <itunes:duration>1509</itunes:duration>
      <guid isPermaLink="false"><![CDATA[cc321148-6bc3-11ef-8a9f-0fdb1242505a]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS5372620963.mp3?updated=1725572532" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The Past, Present, and Future of Technology Forecasting with Jeff Alstott</title>
      <description>In this episode, we are joined by Jeff Alstott, an expert at the National Science Foundation (NSF) and director of the Center for Technology and Security Policy at RAND, to discuss past technology forecasting across the national security community (20:45) and a new NSF initiative called Assessing and Predicting Technology Outcomes (APTO) (31:30).
https://urldefense.com/v3/__https:/new.nsf.gov/tip/updates/nsf-invests-nearly-52m-align-science-technology__;!!KRhing!eOu1AsJT51VVjrOK6T3-do43HgthGjQ9H0JkwgwH774TXBgeHKT2IweoShOS_F8P27yWUnkbispIRQ$</description>
      <pubDate>Fri, 30 Aug 2024 21:25:00 -0000</pubDate>
      <itunes:title>The Past, Present, and Future of Technology Forecasting with Jeff Alstott</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we are joined by Jeff Alstott, an expert at the National Science Foundation (NSF) and director of the Center for Technology and Security Policy at RAND, to discuss past technology forecasting across the national security community and a new NSF initiative called Assessing and Predicting Technology Outcomes (APTO).</itunes:subtitle>
      <itunes:summary>In this episode, we are joined by Jeff Alstott, an expert at the National Science Foundation (NSF) and director of the Center for Technology and Security Policy at RAND, to discuss past technology forecasting across the national security community (20:45) and a new NSF initiative called Assessing and Predicting Technology Outcomes (APTO) (31:30).
https://urldefense.com/v3/__https:/new.nsf.gov/tip/updates/nsf-invests-nearly-52m-align-science-technology__;!!KRhing!eOu1AsJT51VVjrOK6T3-do43HgthGjQ9H0JkwgwH774TXBgeHKT2IweoShOS_F8P27yWUnkbispIRQ$</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we are joined by <strong>Jeff Alstott</strong>, an expert at the National Science Foundation (NSF) and director of the Center for Technology and Security Policy at RAND, to discuss past technology forecasting across the national security community (20:45) and a new NSF initiative called Assessing and Predicting Technology Outcomes (APTO) (31:30).</p><p>https://urldefense.com/v3/__https:/new.nsf.gov/tip/updates/nsf-invests-nearly-52m-align-science-technology__;!!KRhing!eOu1AsJT51VVjrOK6T3-do43HgthGjQ9H0JkwgwH774TXBgeHKT2IweoShOS_F8P27yWUnkbispIRQ$</p><p><br></p>]]>
      </content:encoded>
      <itunes:duration>3566</itunes:duration>
      <guid isPermaLink="false"><![CDATA[19aa94b6-6716-11ef-b946-b3d495ad4361]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS1188402538.mp3?updated=1725054234" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>New Collaborative Combat Aircraft (CCA) Report, AI Chip Smuggling, California's SB 1047, and an EU AI Act Update</title>
      <description>In this episode, we discuss the CSIS Wadhwani Center for AI and Advanced Technologies' latest report on the DOD's Collaborative Combat Aircraft (CCA) program (0:58), what recent news about AI chip smuggling means for U.S. export controls (13:40), how California's SB 1047 might affect AI regulation (23:18), and our biggest takeaways from the EU AI Act going into force (33:52).
Collaborative Combat Aircraft Program: Good News, Bad News, and Unanswered Questions</description>
      <pubDate>Fri, 23 Aug 2024 19:06:00 -0000</pubDate>
      <itunes:title>New Collaborative Combat Aircraft (CCA) Report, AI Chip Smuggling, California's SB 1047, and an EU AI Act Update</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we discuss the CSIS Wadhwani Center for AI and Advanced Technologies' latest report on the DOD's Collaborative Combat Aircraft (CCA) program, what recent news about AI chip smuggling means for U.S. export controls, how California's SB 1047 might affect AI regulation, and our biggest takeaways from the EU AI Act going into force.</itunes:subtitle>
      <itunes:summary>In this episode, we discuss the CSIS Wadhwani Center for AI and Advanced Technologies' latest report on the DOD's Collaborative Combat Aircraft (CCA) program (0:58), what recent news about AI chip smuggling means for U.S. export controls (13:40), how California's SB 1047 might affect AI regulation (23:18), and our biggest takeaways from the EU AI Act going into force (33:52).
Collaborative Combat Aircraft Program: Good News, Bad News, and Unanswered Questions</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss the CSIS Wadhwani Center for AI and Advanced Technologies' latest report on the DOD's Collaborative Combat Aircraft (CCA) program (0:58), what recent news about AI chip smuggling means for U.S. export controls (13:40), how California's SB 1047 might affect AI regulation (23:18), and our biggest takeaways from the EU AI Act going into force (33:52).</p><p><a href="https://www.csis.org/analysis/department-defenses-collaborative-combat-aircraft-program-good-news-bad-news-and">Collaborative Combat Aircraft Program: Good News, Bad News, and Unanswered Questions</a></p>]]>
      </content:encoded>
      <itunes:duration>2238</itunes:duration>
      <guid isPermaLink="false"><![CDATA[c651e1d6-5e67-11ef-b13e-9f8e5580c547]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS8933833072.mp3?updated=1724440379" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Understanding What Intellectual Property Regulation Means For AI with Andrei Iancu</title>
      <description>In this episode, we are joined by Andrei Iancu, former Undersecretary of Commerce for Intellectual Property and former Director of the US Patent and Trademark Office (USPTO), to discuss whether AI-generated works can be copyrighted (15:52), what the latest USPTO guidance means for the patent subject matter eligibility of AI systems (22:31), who can claim inventorship for AI-facilitated inventions (36:00), and the use of AI by patent and trademark applicants and the USPTO (53:43).</description>
      <pubDate>Mon, 05 Aug 2024 17:28:00 -0000</pubDate>
      <itunes:title>Understanding What Intellectual Property Regulation Means For AI with Andrei Iancu</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we are joined by Andrei Iancu, former Undersecretary of Commerce for Intellectual Property and former Director of the US Patent and Trademark Office (USPTO), to discuss whether AI-generated works can be copyrighted, what the latest USPTO guidance means for the patent subject matter eligibility of AI systems, who can claim inventorship for AI-facilitated inventions, and the use of AI by patent and trademark applicants and the USPTO.</itunes:subtitle>
      <itunes:summary>In this episode, we are joined by Andrei Iancu, former Undersecretary of Commerce for Intellectual Property and former Director of the US Patent and Trademark Office (USPTO), to discuss whether AI-generated works can be copyrighted (15:52), what the latest USPTO guidance means for the patent subject matter eligibility of AI systems (22:31), who can claim inventorship for AI-facilitated inventions (36:00), and the use of AI by patent and trademark applicants and the USPTO (53:43).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we are joined by <strong>Andrei Iancu</strong>, former Undersecretary of Commerce for Intellectual Property and former Director of the US Patent and Trademark Office (USPTO), to discuss whether AI-generated works can be copyrighted (15:52), what the latest USPTO guidance means for the patent subject matter eligibility of AI systems (22:31), who can claim inventorship for AI-facilitated inventions (36:00), and the use of AI by patent and trademark applicants and the USPTO (53:43).</p><p><br></p>]]>
      </content:encoded>
      <itunes:duration>3592</itunes:duration>
      <guid isPermaLink="false"><![CDATA[5d1c92fe-5351-11ef-9689-5b0c553e82ea]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS7769595933.mp3?updated=1722884120" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The U.S. Vision for AI Safety: A Conversation with Elizabeth Kelly, Director of the U.S. AI Safety Institute</title>
      <description>On this special episode, the CSIS Wadhwani Center for AI and Advanced Technologies is pleased to host Elizabeth Kelly, Director of the United States Artificial Intelligence Safety Institute at the National Institute of Standards and Technology (NIST) at the U.S. Department of Commerce.
The U.S. AI Safety Institute (AISI) was announced by Vice President Kamala Harris at the UK AI Safety Summit in November 2023. The institute was established to advance the science, practice, and adoption of AI safety in the face of risks including those to national security, public safety, and individual rights. Director Kelly will discuss the U.S. AISI’s recently released Strategic Vision, its activities under President Biden’s AI Executive Order, and its approach to the AISI global network announced at the AI Seoul Summit. </description>
      <pubDate>Thu, 01 Aug 2024 20:02:00 -0000</pubDate>
      <itunes:title>The U.S. Vision for AI Safety: A Conversation with Elizabeth Kelly, Director of the U.S. AI Safety Institute</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>On this special episode, the CSIS Wadhwani Center for AI and Advanced Technologies is pleased to host Elizabeth Kelly, Director of the United States Artificial Intelligence Safety Institute at the National Institute of Standards and Technology (NIST) at the U.S. Department of Commerce.  The U.S. AI Safety Institute (AISI) was announced by Vice President Kamala Harris at the UK AI Safety Summit in November 2023. The institute was established to advance the science, practice, and adoption of AI safety in the face of risks including those to national security, public safety, and individual rights. Director Kelly will discuss the U.S. AISI’s recently released Strategic Vision, its activities under President Biden’s AI Executive Order, and its approach to the AISI global network announced at the AI Seoul Summit. </itunes:subtitle>
      <itunes:summary>On this special episode, the CSIS Wadhwani Center for AI and Advanced Technologies is pleased to host Elizabeth Kelly, Director of the United States Artificial Intelligence Safety Institute at the National Institute of Standards and Technology (NIST) at the U.S. Department of Commerce.
The U.S. AI Safety Institute (AISI) was announced by Vice President Kamala Harris at the UK AI Safety Summit in November 2023. The institute was established to advance the science, practice, and adoption of AI safety in the face of risks including those to national security, public safety, and individual rights. Director Kelly will discuss the U.S. AISI’s recently released Strategic Vision, its activities under President Biden’s AI Executive Order, and its approach to the AISI global network announced at the AI Seoul Summit. </itunes:summary>
      <content:encoded>
        <![CDATA[<p>On this special episode, the CSIS Wadhwani Center for AI and Advanced Technologies is pleased to host <strong>Elizabeth Kelly</strong>, Director of the United States Artificial Intelligence Safety Institute at the National Institute of Standards and Technology (NIST) at the U.S. Department of Commerce.</p><p>The U.S. AI Safety Institute (AISI) was announced by Vice President Kamala Harris at the UK AI Safety Summit in November 2023. The institute was established to advance the science, practice, and adoption of AI safety in the face of risks including those to national security, public safety, and individual rights. Director Kelly will discuss the U.S. AISI’s recently released <a href="https://www.nist.gov/aisi/strategic-vision">Strategic Vision</a>, its activities under President Biden’s AI Executive Order, and its approach to the <a href="https://www.commerce.gov/news/press-releases/2024/05/us-secretary-commerce-gina-raimondo-releases-strategic-vision-ai-safety#:~:text=The%20AISI%20will%20focus%20on,and%20coordination%20around%20AI%20safety.">AISI global network</a> announced at the AI Seoul Summit. </p>]]>
      </content:encoded>
      <itunes:duration>3003</itunes:duration>
      <guid isPermaLink="false"><![CDATA[2d909434-5042-11ef-a988-47623b086957]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS8796970480.mp3?updated=1722545381" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>How are the Harris and Trump Campaigns Thinking about AI Policy?</title>
      <description>In this episode, we discuss what AI policy might look like after the 2024 U.S. presidential election. We dive into the past (1:00), present (9:50), and future (22:50) of both the Trump and Harris campaigns’ AI policy positions and where they fall on the key issue areas like safety (23:01), open-source (25:17), energy infrastructure (33:27), and more.</description>
      <pubDate>Mon, 29 Jul 2024 19:07:00 -0000</pubDate>
      <itunes:title>How are the Harris and Trump Campaigns Thinking about AI Policy?</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we discuss what AI policy might look like after the 2024 U.S. presidential election. We dive into the past, present, and future of both the Trump and Harris campaigns’ AI policy positions and where they fall on the key issue areas like safety, open-source, energy infrastructure, and more.</itunes:subtitle>
      <itunes:summary>In this episode, we discuss what AI policy might look like after the 2024 U.S. presidential election. We dive into the past (1:00), present (9:50), and future (22:50) of both the Trump and Harris campaigns’ AI policy positions and where they fall on the key issue areas like safety (23:01), open-source (25:17), energy infrastructure (33:27), and more.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss what AI policy might look like after the 2024 U.S. presidential election. We dive into the past (1:00), present (9:50), and future (22:50) of both the Trump and Harris campaigns’ AI policy positions and where they fall on the key issue areas like safety (23:01), open-source (25:17), energy infrastructure (33:27), and more.</p><p><br></p>]]>
      </content:encoded>
      <itunes:duration>2198</itunes:duration>
      <guid isPermaLink="false"><![CDATA[e3a30a7a-4ddd-11ef-ad70-1f6a80ab9a27]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS7136059324.mp3?updated=1722282861" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>AI Transformation at the DOD: A Conversation with Chief Digital and AI Officer, Dr. Radha Plumb</title>
      <description>In this episode, DOD Chief Digital and AI Officer Dr. Radha Plumb joins Greg Allen to discuss the Chief Digital and Artificial Intelligence Office (CDAO)'s current role at the department and a preview of its upcoming projects. The CDAO was established in 2022 to create, implement, and steer the DOD’s digital transformation and adoption of AI. Under Dr. Plumb’s leadership, the CDAO recently announced a new initiative, Open Data and Applications Government-owned Interoperable Repositories (Open DAGIR), which will enable a multi-vendor ecosystem connecting DOD end users with innovative software solutions. Dr. Plumb discusses the role of Open DAGIR and a series of other transformative projects currently underway at the CDAO.</description>
      <pubDate>Tue, 16 Jul 2024 14:39:00 -0000</pubDate>
      <itunes:title>AI Transformation at the DOD: A Conversation with Chief Digital and AI Officer, Dr. Radha Plumb</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, DOD Chief Digital and AI Officer Dr. Radha Plumb joins Greg Allen to discuss the Chief Digital and Artificial Intelligence Office (CDAO)'s current role at the department and a preview of its upcoming projects.</itunes:subtitle>
      <itunes:summary>In this episode, DOD Chief Digital and AI Officer Dr. Radha Plumb joins Greg Allen to discuss the Chief Digital and Artificial Intelligence Office (CDAO)'s current role at the department and a preview of its upcoming projects. The CDAO was established in 2022 to create, implement, and steer the DOD’s digital transformation and adoption of AI. Under Dr. Plumb’s leadership, the CDAO recently announced a new initiative, Open Data and Applications Government-owned Interoperable Repositories (Open DAGIR), which will enable a multi-vendor ecosystem connecting DOD end users with innovative software solutions. Dr. Plumb discusses the role of Open DAGIR and a series of other transformative projects currently underway at the CDAO.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, DOD Chief Digital and AI Officer Dr. Radha Plumb joins Greg Allen to discuss the Chief Digital and Artificial Intelligence Office (CDAO)'s current role at the department and a preview of its upcoming projects. The CDAO was established in 2022 to create, implement, and steer the DOD’s digital transformation and adoption of AI. Under Dr. Plumb’s leadership, the CDAO recently announced a new initiative, Open Data and Applications Government-owned Interoperable Repositories (Open DAGIR), which will enable a multi-vendor ecosystem connecting DOD end users with innovative software solutions. Dr. Plumb discusses the role of Open DAGIR and a series of other transformative projects currently underway at the CDAO.</p>]]>
      </content:encoded>
      <itunes:duration>3801</itunes:duration>
      <guid isPermaLink="false"><![CDATA[489c4444-4381-11ef-8d5e-0be9c0938afa]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS4981579706.mp3?updated=1721141103" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Autonomous Weapons in Ukraine, Overturning the Chevron Doctrine, and the Delayed Deployment of Apple Intelligence in the EU</title>
      <description>In this episode, we discuss the state of autonomous weapons systems adoption in Ukraine (00:55), our takeaways from the Supreme Court's decision to overturn the Chevron Doctrine and the implications for AI regulation (17:35), the delayed deployment of Apple Intelligence in the EU (30:55), and a breakdown of Nvidia's deal to sell its technology to data centers in the Middle East (41:30).</description>
      <pubDate>Mon, 08 Jul 2024 20:24:32 -0000</pubDate>
      <itunes:title>Autonomous Weapons in Ukraine, Overturning the Chevron Doctrine, and the Delayed Deployment of Apple Intelligence in the EU</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we discuss the state of autonomous weapons systems adoption in Ukraine, our takeaways from the Supreme Court's decision to overturn the Chevron Doctrine and the implications for AI regulation, the delayed deployment of Apple Intelligence in the EU, and a breakdown of Nvidia's deal to sell its technology to data centers in the Middle East. </itunes:subtitle>
      <itunes:summary>In this episode, we discuss the state of autonomous weapons systems adoption in Ukraine (00:55), our takeaways from the Supreme Court's decision to overturn the Chevron Doctrine and the implications for AI regulation (17:35), the delayed deployment of Apple Intelligence in the EU (30:55), and a breakdown of Nvidia's deal to sell its technology to data centers in the Middle East (41:30).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss the state of autonomous weapons systems adoption in Ukraine (00:55), our takeaways from the Supreme Court's decision to overturn the Chevron Doctrine and the implications for AI regulation (17:35), the delayed deployment of Apple Intelligence in the EU (30:55), and a breakdown of Nvidia's deal to sell its technology to data centers in the Middle East (41:30).</p>]]>
      </content:encoded>
      <itunes:duration>3056</itunes:duration>
      <guid isPermaLink="false"><![CDATA[c16d8c88-3d51-11ef-af15-d3af9aba92d8]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS1067112583.mp3?updated=1720461620" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>AI at the G7 Leaders' Summit, Apple and OpenAI's Partnership, and Saudi Arabia's Zhipu AI Investment</title>
      <description>In this episode, we discuss our biggest takeaways from the AI agenda at the G7 Leaders' Summit (0:41), the details of the Apple-OpenAI partnership announcement (8:05), and why Saudi Aramco's investment in Zhipu AI represents a groundbreaking moment in China-Saudi Arabia relations (16:25).</description>
      <pubDate>Tue, 18 Jun 2024 21:22:00 -0000</pubDate>
      <itunes:title>AI at the G7 Leaders' Summit, Apple and OpenAI's Partnership, and Saudi Arabia's Zhipu AI Investment</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we discuss our biggest takeaways from the AI agenda at the G7 Leaders' Summit, the details of the Apple-OpenAI partnership announcement, and why Saudi Aramco's investment in Zhipu AI represents a groundbreaking moment in China-Saudi Arabia relations.</itunes:subtitle>
      <itunes:summary>In this episode, we discuss our biggest takeaways from the AI agenda at the G7 Leaders' Summit (0:41), the details of the Apple-OpenAI partnership announcement (8:05), and why Saudi Aramco's investment in Zhipu AI represents a groundbreaking moment in China-Saudi Arabia relations (16:25).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss our biggest takeaways from the AI agenda at the G7 Leaders' Summit (0:41), the details of the Apple-OpenAI partnership announcement (8:05), and why Saudi Aramco's investment in Zhipu AI represents a groundbreaking moment in China-Saudi Arabia relations (16:25).</p>]]>
      </content:encoded>
      <itunes:duration>1236</itunes:duration>
      <guid isPermaLink="false"><![CDATA[f5462324-2db8-11ef-b8d3-47c71fd6be24]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS6501314982.mp3?updated=1718746089" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>AI Seoul Summit, U.S.-China AI Safety Readout, and the Zhousidun Dataset</title>
      <description>In this episode, we break down the policy outcomes from the AI Seoul Summit (0:22), the latest news from the U.S.-China AI safety talks (7:59), and why the Zhousidun (Zeus's Shield) dataset matters (16:30). </description>
      <pubDate>Mon, 03 Jun 2024 14:26:00 -0000</pubDate>
      <itunes:title>AI Seoul Summit, U.S.-China AI Safety Readout, and the Zhousidun Dataset</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we break down the policy outcomes from the AI Seoul Summit, the latest news from the U.S.-China AI safety talks, and why the Zhousidun (Zeus's Shield) dataset matters. </itunes:subtitle>
      <itunes:summary>In this episode, we break down the policy outcomes from the AI Seoul Summit (0:22), the latest news from the U.S.-China AI safety talks (7:59), and why the Zhousidun (Zeus's Shield) dataset matters (16:30). </itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we break down the policy outcomes from the AI Seoul Summit (0:22), the latest news from the U.S.-China AI safety talks (7:59), and why the Zhousidun (Zeus's Shield) dataset matters (16:30). </p>]]>
      </content:encoded>
      <itunes:duration>1288</itunes:duration>
      <guid isPermaLink="false"><![CDATA[7c9926d8-21b5-11ef-8f26-33d4f373ed72]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS1718864765.mp3?updated=1717425687" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Schumer's AI Policy Roadmap, U.S.-China AI Safety Dialogue, and the Replicator Initiative Announcement</title>
      <description>In this episode, we discuss our biggest takeaways from the bipartisan AI policy roadmap led by Senate Majority Leader Chuck Schumer (1:10), what to expect from the U.S.-China AI safety dialogue (9:55), recent updates to the DOD’s Replicator Initiative (19:25), and Microsoft’s new Intelligence Community AI Tool (29:31).</description>
      <pubDate>Mon, 20 May 2024 18:40:00 -0000</pubDate>
      <itunes:title>Schumer's AI Policy Roadmap, U.S.-China AI Safety Dialogue, and the Replicator Initiative Announcement</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we discuss our biggest takeaways from the bipartisan AI policy roadmap led by Senate Majority Leader Chuck Schumer, what to expect from the U.S.-China AI safety dialogue, recent updates to the DOD’s Replicator Initiative, and Microsoft’s new Intelligence Community AI Tool.</itunes:subtitle>
      <itunes:summary>In this episode, we discuss our biggest takeaways from the bipartisan AI policy roadmap led by Senate Majority Leader Chuck Schumer (1:10), what to expect from the U.S.-China AI safety dialogue (9:55), recent updates to the DOD’s Replicator Initiative (19:25), and Microsoft’s new Intelligence Community AI Tool (29:31).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss our biggest takeaways from the bipartisan AI policy roadmap led by Senate Majority Leader Chuck Schumer (1:10), what to expect from the U.S.-China AI safety dialogue (9:55), recent updates to the DOD’s Replicator Initiative (19:25), and Microsoft’s new Intelligence Community AI Tool (29:31).</p>]]>
      </content:encoded>
      <itunes:duration>2201</itunes:duration>
      <guid isPermaLink="false"><![CDATA[3597bb4e-16d6-11ef-8264-0b0e5e8b5e9b]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS9212921113.mp3?updated=1716230967" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>E.U.-Japan AI Safety Cooperation, U.S. AI Safety Institute Update, and the Air Force's CCA Contract Award</title>
      <description>In this episode, we discuss our biggest takeaways from the new E.U.-Japan AI safety cooperation agreement (0:39), why the latest staffing update from the U.S. AI Safety Institute matters (4:57), and how the Air Force's Collaborative Combat Aircraft (CCA) contract award is changing the way the DOD develops autonomous systems (11:40).

aipolicypodcast@csis.org
Wadhwani Center for AI and Advanced Technologies | CSIS</description>
      <pubDate>Fri, 03 May 2024 17:48:42 -0000</pubDate>
      <itunes:title>E.U.-Japan AI Safety Cooperation, U.S. AI Safety Institute Update, and the Air Force's CCA Contract Award</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we discuss our biggest takeaways from the new E.U.-Japan AI safety cooperation agreement, why the latest staffing update from the U.S. AI Safety Institute matters, and how the Air Force's Collaborative Combat Aircraft (CCA) contract award is changing the way the DOD develops autonomous systems.</itunes:subtitle>
      <itunes:summary>In this episode, we discuss our biggest takeaways from the new E.U.-Japan AI safety cooperation agreement (0:39), why the latest staffing update from the U.S. AI Safety Institute matters (4:57), and how the Air Force's Collaborative Combat Aircraft (CCA) contract award is changing the way the DOD develops autonomous systems (11:40).

aipolicypodcast@csis.org
Wadhwani Center for AI and Advanced Technologies | CSIS</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss our biggest takeaways from the new E.U.-Japan AI safety cooperation agreement (0:39), why the latest staffing update from the U.S. AI Safety Institute matters (4:57), and how the Air Force's Collaborative Combat Aircraft (CCA) contract award is changing the way the DOD develops autonomous systems (11:40).</p><p><br></p><p>aipolicypodcast@csis.org</p><p><a href="https://www.csis.org/programs/wadhwani-center-ai-and-advanced-technologies">Wadhwani Center for AI and Advanced Technologies | CSIS</a></p>]]>
      </content:encoded>
      <itunes:duration>1312</itunes:duration>
      <guid isPermaLink="false"><![CDATA[a7573e48-0971-11ef-a83c-3f290842e1a7]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS5258692997.mp3?updated=1714758897" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>G42's Partnership with Microsoft, the IDF's Reported AI System, and Meta's Watermarking Update</title>
      <description>In this episode, we discuss Microsoft's investment in G42 and questions surrounding G42's ties to China (1:12), the latest reporting about the Israeli military's use of AI and the policy implications of advanced technologies in warfare (9:23), and Meta's new watermarking policy (23:01).

aipolicypodcast@csis.org
Wadhwani Center for AI and Advanced Technologies | CSIS
The DARPA Perspective on AI and Autonomy at the DOD | CSIS Events
Scaling AI-enabled Capabilities at the DOD: Government and Industry Perspectives
The State of DOD AI and Autonomy Policy</description>
      <pubDate>Fri, 19 Apr 2024 21:15:00 -0000</pubDate>
      <itunes:title>G42's Partnership with Microsoft, the IDF's Reported AI System, and Meta's Watermarking Update</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we discuss Microsoft's investment in G42 and questions surrounding G42's ties to China, the latest reporting about the Israeli military's use of AI and the policy implications of advanced technologies in warfare, and Meta's new watermarking policy.</itunes:subtitle>
      <itunes:summary>In this episode, we discuss Microsoft's investment in G42 and questions surrounding G42's ties to China (1:12), the latest reporting about the Israeli military's use of AI and the policy implications of advanced technologies in warfare (9:23), and Meta's new watermarking policy (23:01).

aipolicypodcast@csis.org
Wadhwani Center for AI and Advanced Technologies | CSIS
The DARPA Perspective on AI and Autonomy at the DOD | CSIS Events
Scaling AI-enabled Capabilities at the DOD: Government and Industry Perspectives
The State of DOD AI and Autonomy Policy</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss Microsoft's investment in G42 and questions surrounding G42's ties to China (1:12), the latest reporting about the Israeli military's use of AI and the policy implications of advanced technologies in warfare (9:23), and Meta's new watermarking policy (23:01).</p><p><br></p><p>aipolicypodcast@csis.org</p><p><a href="https://www.csis.org/programs/wadhwani-center-ai-and-advanced-technologies">Wadhwani Center for AI and Advanced Technologies | CSIS</a></p><p><a href="https://www.csis.org/events/darpa-perspective-ai-and-autonomy-dod">The DARPA Perspective on AI and Autonomy at the DOD | CSIS Events</a></p><p><a href="https://www.csis.org/events/scaling-ai-enabled-capabilities-dod-government-and-industry-perspectives">Scaling AI-enabled Capabilities at the DOD: Government and Industry Perspectives</a></p><p><a href="https://www.csis.org/events/state-dod-ai-and-autonomy-policy">The State of DOD AI and Autonomy Policy</a></p>]]>
      </content:encoded>
      <itunes:duration>1798</itunes:duration>
      <guid isPermaLink="false"><![CDATA[fb8f623c-fe91-11ee-ad14-773d09167712]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS5786821054.mp3?updated=1713561865" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The Three Horizons of AI Policy</title>
      <description>In this episode, we discuss a framework for understanding the rapidly changing AI policy landscape (0:53), the first-of-its-kind U.S. and U.K. partnership on AI Safety (8:20), OpenAI's Voice Engine system (10:53), OMB's latest AI policy announcement (18:00), and Mexico's new role in AI infrastructure (21:50).</description>
      <pubDate>Fri, 05 Apr 2024 19:47:00 -0000</pubDate>
      <itunes:title>The Three Horizons of AI Policy</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we discuss a framework for understanding the rapidly changing AI policy landscape, the first-of-its-kind U.S. and U.K. partnership on AI Safety, OpenAI's Voice Engine system, OMB's latest AI policy announcement, and Mexico's new role in AI infrastructure.</itunes:subtitle>
      <itunes:summary>In this episode, we discuss a framework for understanding the rapidly changing AI policy landscape (0:53), the first-of-its-kind U.S. and U.K. partnership on AI Safety (8:20), OpenAI's Voice Engine system (10:53), OMB's latest AI policy announcement (18:00), and Mexico's new role in AI infrastructure (21:50).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss a framework for understanding the rapidly changing AI policy landscape (0:53), the first-of-its-kind U.S. and U.K. partnership on AI Safety (8:20), OpenAI's Voice Engine system (10:53), OMB's latest AI policy announcement (18:00), and Mexico's new role in AI infrastructure (21:50).</p>]]>
      </content:encoded>
      <itunes:duration>1658</itunes:duration>
      <guid isPermaLink="false"><![CDATA[62db397e-f385-11ee-8d62-1b06a33485f3]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS6080435075.mp3?updated=1712346773" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>How is the G7 Thinking about AI?</title>
      <description>In this episode, we give an insider's view on the G7's March Digital and Tech Ministerial meetings (6:15), the Hiroshima Code of Conduct and its interaction with the EU AI Act (10:50), a breakdown of the new TikTok bill (23:25), and how AI use impacts the already overstressed power grids (40:49).

AI and Advanced Technology Insights
Wadhwani Center for AI &amp; Advanced Technologies Website
Our New Report: Advancing the Hiroshima AI Process Code of Conduct under the 2024 Italian G7 Presidency: Timeline and Recommendations
Contact us: aipolicypodcast@csis.org</description>
      <pubDate>Fri, 29 Mar 2024 21:20:00 -0000</pubDate>
      <itunes:title>How is the G7 Thinking about AI?</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we give an insider's view on the G7's March Digital and Tech Ministerial meetings, the Hiroshima Code of Conduct and its interaction with the EU AI Act, a breakdown of the new TikTok bill, and how AI use impacts the already overstressed power grids.</itunes:subtitle>
      <itunes:summary>In this episode, we give an insider's view on the G7's March Digital and Tech Ministerial meetings (6:15), the Hiroshima Code of Conduct and its interaction with the EU AI Act (10:50), a breakdown of the new TikTok bill (23:25), and how AI use impacts the already overstressed power grids (40:49).

AI and Advanced Technology Insights
Wadhwani Center for AI &amp; Advanced Technologies Website
Our New Report: Advancing the Hiroshima AI Process Code of Conduct under the 2024 Italian G7 Presidency: Timeline and Recommendations
Contact us: aipolicypodcast@csis.org</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we give an insider's view on the G7's March Digital and Tech Ministerial meetings (6:15), the Hiroshima Code of Conduct and its interaction with the EU AI Act (10:50), a breakdown of the new TikTok bill (23:25), and how AI use impacts the already overstressed power grids (40:49).</p><p><br></p><p><a href="https://pardot.csis.org/webmail/906722/2565990477/040ea2cde2daf0938583cbb952ed0c03dbcbbdee2f8b412b885c5e9b4f50bf5f">AI and Advanced Technology Insights</a></p><p><a href="https://www.csis.org/programs/wadhwani-center-ai-and-advanced-technologies">Wadhwani Center for AI &amp; Advanced Technologies Website</a></p><p>Our New Report: <a href="https://www.csis.org/analysis/advancing-hiroshima-ai-process-code-conduct-under-2024-italian-g7-presidency-timeline-and">Advancing the Hiroshima AI Process Code of Conduct under the 2024 Italian G7 Presidency: Timeline and Recommendations</a></p><p>Contact us: <a href="mailto:aipolicypodcast@csis.org">aipolicypodcast@csis.org</a></p>]]>
      </content:encoded>
      <itunes:duration>3040</itunes:duration>
      <guid isPermaLink="false"><![CDATA[2aebf288-ee12-11ee-b29a-5f1be8c53955]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS7129620399.mp3?updated=1711747553" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The Future of AI and Autonomy in the Automotive Industry with Volvo CEO Jim Rowan</title>
      <description>In this episode, we are joined by Volvo CEO Jim Rowan to discuss how AI and autonomy are transforming Volvo's business (3:28), the fragmented regulatory environment around autonomous driving (17:00), navigating the increasingly tense U.S.-China relationship (23:10), its implications for the EV industry (33:14), and upcoming policy changes to look out for (52:00).</description>
      <pubDate>Fri, 15 Mar 2024 21:22:00 -0000</pubDate>
      <itunes:title>The Future of AI and Autonomy in the Automotive Industry with Volvo CEO Jim Rowan</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we are joined by Volvo CEO Jim Rowan to discuss how AI and autonomy are transforming Volvo's business, the fragmented regulatory environment around autonomous driving, navigating the increasingly tense U.S.-China relationship, its implications for the EV industry, and upcoming policy changes to look out for.</itunes:subtitle>
      <itunes:summary>In this episode, we are joined by Volvo CEO Jim Rowan to discuss how AI and autonomy are transforming Volvo's business (3:28), the fragmented regulatory environment around autonomous driving (17:00), navigating the increasingly tense U.S.-China relationship (23:10), its implications for the EV industry (33:14), and upcoming policy changes to look out for (52:00).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we are joined by <strong>Volvo CEO Jim Rowan</strong> to discuss how AI and autonomy are transforming Volvo's business (3:28), the fragmented regulatory environment around autonomous driving (17:00), navigating the increasingly tense U.S.-China relationship (23:10), its implications for the EV industry (33:14), and upcoming policy changes to look out for (52:00).</p>]]>
      </content:encoded>
      <itunes:duration>3260</itunes:duration>
      <guid isPermaLink="false"><![CDATA[0c9f67b2-e311-11ee-9554-c3a71f0fa0fb]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS5370286572.mp3?updated=1710538027" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>OpenAI's legal troubles, Microsoft-Mistral Partnership, and the AI Safety Institute Consortium</title>
      <description>In this episode, we discuss the latest legal issues facing OpenAI (0:38), the new Microsoft-Mistral partnership (15:25), and what we can expect from the newly founded U.S. AI Safety Institute Consortium (18:36).</description>
      <pubDate>Thu, 07 Mar 2024 21:11:00 -0000</pubDate>
      <itunes:title>OpenAI's legal troubles, Microsoft-Mistral Partnership, and the AI Safety Institute Consortium</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>In this episode, we discuss the latest legal issues facing OpenAI, the new Microsoft-Mistral partnership, and what we can expect from the newly founded U.S. AI Safety Institute Consortium.</itunes:subtitle>
      <itunes:summary>In this episode, we discuss the latest legal issues facing OpenAI (0:38), the new Microsoft-Mistral partnership (15:25), and what we can expect from the newly founded U.S. AI Safety Institute Consortium (18:36).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we discuss the latest legal issues facing OpenAI (0:38), the new Microsoft-Mistral partnership (15:25), and what we can expect from the newly founded U.S. AI Safety Institute Consortium (18:36).</p>]]>
      </content:encoded>
      <itunes:duration>1338</itunes:duration>
      <guid isPermaLink="false"><![CDATA[fa18e964-dcc7-11ee-9ed1-7bc0cef608a9]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS3020986907.mp3?updated=1712676668" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The State of the Global Semiconductor Competition with Chris Miller</title>
      <description>On this special episode of the AI Policy Podcast, we are joined by Chris Miller, author of Chip War: The Fight for the World's Most Critical Technology, and Professor of International History at Tufts University. We discuss Secretary of Commerce Gina Raimondo's CHIPS Act announcement (1:38), how the semiconductor landscape has changed since Chip War was published (6:39), why U.S. export controls on Russia and China are leaky (12:29), and the latest news from the Chinese semiconductor industry (22:58).</description>
      <pubDate>Wed, 28 Feb 2024 16:22:00 -0000</pubDate>
      <itunes:title>The State of the Global Semiconductor Competition with Chris Miller</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>On this special episode of the AI Policy Podcast, we are joined by Chris Miller, author of Chip War: The Fight for the World's Most Critical Technology, and Professor of International History at Tufts University. We discuss Secretary of Commerce Gina Raimondo's CHIPS Act announcement, how the semiconductor landscape has changed since Chip War was published, why U.S. export controls on Russia and China are leaky, and the latest news from the Chinese semiconductor industry.</itunes:subtitle>
      <itunes:summary>On this special episode of the AI Policy Podcast, we are joined by Chris Miller, author of Chip War: The Fight for the World's Most Critical Technology, and Professor of International History at Tufts University. We discuss Secretary of Commerce Gina Raimondo's CHIPS Act announcement (1:38), how the semiconductor landscape has changed since Chip War was published (6:39), why U.S. export controls on Russia and China are leaky (12:29), and the latest news from the Chinese semiconductor industry (22:58).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>On this special episode of the AI Policy Podcast, we are joined by <strong>Chris Miller</strong>, author of <em>Chip War: The Fight for the World's Most Critical Technology</em>, and Professor of International History at Tufts University. We discuss Secretary of Commerce Gina Raimondo's CHIPS Act announcement (1:38), how the semiconductor landscape has changed since <em>Chip War</em> was published (6:39), why U.S. export controls on Russia and China are leaky (12:29), and the latest news from the Chinese semiconductor industry (22:58).</p>]]>
      </content:encoded>
      <itunes:duration>2026</itunes:duration>
      <guid isPermaLink="false"><![CDATA[13e81946-d656-11ee-b22b-2f4790fc7062]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS5058018319.mp3?updated=1709321414" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>OpenAI Releases Sora</title>
      <description>On this episode of the AI Policy Podcast, we discuss the release of OpenAI's Sora tool (0:35), its implications for media (4:10), the risks associated with the model (7:05), and what it all means for elections (18:33) and copyright (22:17).</description>
      <pubDate>Thu, 22 Feb 2024 16:30:00 -0000</pubDate>
      <itunes:title>OpenAI Releases Sora</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>On this episode of the AI Policy Podcast, we discuss the release of OpenAI's Sora tool (0:35), its implications for media (4:10), the risks associated with the model (7:05), and what it all means for elections (18:33) and copyright (22:17).</itunes:subtitle>
      <itunes:summary>On this episode of the AI Policy Podcast, we discuss the release of OpenAI's Sora tool (0:35), its implications for media (4:10), the risks associated with the model (7:05), and what it all means for elections (18:33) and copyright (22:17).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>On this episode of the AI Policy Podcast, we discuss the release of OpenAI's Sora tool (0:35), its implications for media (4:10), the risks associated with the model (7:05), and what it all means for elections (18:33) and copyright (22:17).</p>]]>
      </content:encoded>
      <itunes:duration>1466</itunes:duration>
      <guid isPermaLink="false"><![CDATA[aea927cc-d1a1-11ee-b161-6745a3d2bd22]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS9855359340.mp3?updated=1708620943" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>E.U. AI Act Recap, Chip War Update, and U.S.-China AI Safety Dialogues</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>On this episode, we discuss the most recent E.U. AI Act milestone (1:19), the latest from the AI chip war (12:23), and U.S.-China AI safety dialogues (22:15).</description>
      <pubDate>Thu, 08 Feb 2024 13:58:00 -0000</pubDate>
      <itunes:title>E.U. AI Act Recap, Chip War Update, and U.S.-China AI Safety Dialogues</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>On this episode, we discuss the most recent E.U. AI Act milestone (1:19), the latest from the AI chip war (12:23), and U.S.-China AI safety dialogues (22:15).</itunes:subtitle>
      <itunes:summary>On this episode, we discuss the most recent E.U. AI Act milestone (1:19), the latest from the AI chip war (12:23), and U.S.-China AI safety dialogues (22:15).</itunes:summary>
      <content:encoded>
        <![CDATA[<p>On this episode, we discuss the most recent E.U. AI Act milestone (1:19), the latest from the AI chip war (12:23), and U.S.-China AI safety dialogues (22:15).</p>]]>
      </content:encoded>
      <itunes:duration>1595</itunes:duration>
      <guid isPermaLink="false"><![CDATA[462219c4-c68a-11ee-9334-2bf028fcd04d]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS5707623149.mp3?updated=1707402366" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Global Technology Competition in the Age of AI</title>
      <link>https://www.csis.org/podcasts/ai-policy-podcast</link>
      <description>On this episode of the AI Policy Podcast, we discuss global technology competition with Senators Michael Bennet and Todd Young.</description>
      <pubDate>Mon, 29 Jan 2024 18:42:49 -0000</pubDate>
      <itunes:title>Global Technology Competition in the Age of AI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>On this episode of the AI Policy Podcast, we discuss global technology competition with Senators Michael Bennet and Todd Young.</itunes:subtitle>
      <itunes:summary>On this episode of the AI Policy Podcast, we discuss global technology competition with Senators Michael Bennet and Todd Young.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>On this episode of the AI Policy Podcast, we discuss global technology competition with Senators Michael Bennet and Todd Young.</p>]]>
      </content:encoded>
      <itunes:duration>2879</itunes:duration>
      <guid isPermaLink="false"><![CDATA[406a5dcc-bed6-11ee-a672-f703ba8bd1f1]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS4255840588.mp3?updated=1706554091" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>AI 2024 Outlook</title>
      <description>On this episode of the AI Policy Podcast, we discuss our outlook for AI in 2024.  </description>
      <pubDate>Tue, 23 Jan 2024 15:57:00 -0000</pubDate>
      <itunes:title>AI 2024 Outlook</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>2</itunes:episode>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>On this episode of the AI Policy Podcast, we discuss our outlook for AI in 2024.  </itunes:subtitle>
      <itunes:summary>On this episode of the AI Policy Podcast, we discuss our outlook for AI in 2024.  </itunes:summary>
      <content:encoded>
        <![CDATA[<p>On this episode of the AI Policy Podcast, we discuss our outlook for AI in 2024.  </p>]]>
      </content:encoded>
      <itunes:duration>1983</itunes:duration>
      <guid isPermaLink="false"><![CDATA[36abf2be-ba08-11ee-bf58-17e0de65e521]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS9501071553.mp3?updated=1706027500" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>AI 2023 Year in Review</title>
      <description>On the first episode of the AI Policy Podcast, listen to our 2023 AI year in review. Learn about the biggest developments, newest policies, and our responses to it all. </description>
      <pubDate>Tue, 23 Jan 2024 15:18:00 -0000</pubDate>
      <itunes:title>AI 2023 Year in Review</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1</itunes:episode>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>On the first episode of the AI Policy Podcast, listen to our 2023 AI year in review. Learn about the biggest developments, newest policies, and our responses to it all. </itunes:subtitle>
      <itunes:summary>On the first episode of the AI Policy Podcast, listen to our 2023 AI year in review. Learn about the biggest developments, newest policies, and our responses to it all. </itunes:summary>
      <content:encoded>
        <![CDATA[<p>On the first episode of the AI Policy Podcast, listen to our 2023 AI year in review. Learn about the biggest developments, newest policies, and our responses to it all. </p>]]>
      </content:encoded>
      <itunes:duration>2139</itunes:duration>
      <guid isPermaLink="false"><![CDATA[29647a56-ba06-11ee-a219-17ef2bb241cb]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS9203950964.mp3?updated=1706024913" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The AI Policy Podcast Trailer</title>
      <description>CSIS’s Gregory C. Allen, Director of the Wadhwani Center for AI and Advanced Technologies, is joined by cohost H. Andrew Schwartz on a deep dive into the world of AI policy. Every two weeks, join for insightful discussions regarding AI policy regulation, innovation, national security, and geopolitics. The AI Policy Podcast is by the Wadhwani Center for AI and Advanced Technologies at CSIS, a bipartisan think-tank in Washington, D.C.</description>
      <pubDate>Wed, 17 Jan 2024 17:04:00 -0000</pubDate>
      <itunes:title>The AI Policy Podcast Trailer</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Center for Strategic and International Studies</itunes:author>
      <itunes:subtitle>CSIS’s Gregory C. Allen, Director of the Wadhwani Center for AI and Advanced Technologies, is joined by cohost H. Andrew Schwartz on a deep dive into the world of AI policy. Every two weeks, join for insightful discussions regarding AI policy regulation, innovation, national security, and geopolitics. The AI Policy Podcast is by the Wadhwani Center for AI and Advanced Technologies at CSIS, a bipartisan think-tank in Washington, D.C.</itunes:subtitle>
      <itunes:summary>CSIS’s Gregory C. Allen, Director of the Wadhwani Center for AI and Advanced Technologies, is joined by cohost H. Andrew Schwartz on a deep dive into the world of AI policy. Every two weeks, join for insightful discussions regarding AI policy regulation, innovation, national security, and geopolitics. The AI Policy Podcast is by the Wadhwani Center for AI and Advanced Technologies at CSIS, a bipartisan think-tank in Washington, D.C.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>CSIS’s Gregory C. Allen, Director of the Wadhwani Center for AI and Advanced Technologies, is joined by cohost H. Andrew Schwartz on a deep dive into the world of AI policy. Every two weeks, join for insightful discussions regarding AI policy regulation, innovation, national security, and geopolitics. The AI Policy Podcast is by the Wadhwani Center for AI and Advanced Technologies at CSIS, a bipartisan think-tank in Washington, D.C.</p>]]>
      </content:encoded>
      <itunes:duration>60</itunes:duration>
      <guid isPermaLink="false"><![CDATA[81955ab6-b55a-11ee-91b5-bb69d983b987]]></guid>
      <enclosure url="https://traffic.megaphone.fm/CSIS9994082948.mp3?updated=1705596035" length="0" type="audio/mpeg"/>
    </item>
  </channel>
</rss>
