<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <atom:link href="https://feeds.megaphone.fm/trainingdata" rel="self" type="application/rss+xml"/>
    <title>Training Data</title>
    <link>https://www.sequoiacap.com/</link>
    <language>en</language>
    <copyright></copyright>
    <description>Join us as we train our neural nets on the theme of the century: AI. Sonya Huang, Pat Grady and more Sequoia Capital partners host conversations with leading AI builders and researchers to ask critical questions and develop a deeper understanding of the evolving technologies—and their implications for technology, business and society.

The content of this podcast does not constitute investment advice, an offer to provide investment advisory services, or an offer to sell or solicitation of an offer to buy an interest in any investment fund.</description>
    <image>
      <url>https://megaphone.imgix.net/podcasts/8ca8a61c-08b4-11ef-97ab-9fe273d59030/image/4a16fd33b06708003827556209cd42b7.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress</url>
      <title>Training Data</title>
      <link>https://www.sequoiacap.com/</link>
    </image>
    <itunes:explicit>no</itunes:explicit>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle></itunes:subtitle>
    <itunes:author>Sequoia Capital</itunes:author>
    <itunes:summary>Join us as we train our neural nets on the theme of the century: AI. Sonya Huang, Pat Grady and more Sequoia Capital partners host conversations with leading AI builders and researchers to ask critical questions and develop a deeper understanding of the evolving technologies—and their implications for technology, business and society.

The content of this podcast does not constitute investment advice, an offer to provide investment advisory services, or an offer to sell or solicitation of an offer to buy an interest in any investment fund.</itunes:summary>
    <content:encoded>
      <![CDATA[<p>Join us as we train our neural nets on the theme of the century: AI. Sonya Huang, Pat Grady and more Sequoia Capital partners host conversations with leading AI builders and researchers to ask critical questions and develop a deeper understanding of the evolving technologies—and their implications for technology, business and society.</p><p><br></p><p><em>The content of this podcast does not constitute investment advice, an offer to provide investment advisory services, or an offer to sell or solicitation of an offer to buy an interest in any investment fund.</em></p>]]>
    </content:encoded>
    <itunes:owner>
      <itunes:name>Sequoia Capital</itunes:name>
      <itunes:email>podcast@sequoiacap.com</itunes:email>
    </itunes:owner>
    <itunes:image href="https://megaphone.imgix.net/podcasts/8ca8a61c-08b4-11ef-97ab-9fe273d59030/image/4a16fd33b06708003827556209cd42b7.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
    <itunes:category text="Technology">
    </itunes:category>
    <itunes:category text="Business">
    </itunes:category>
    <item>
      <title>Demis Hassabis on Building DeepMind, AlphaFold, and the Final Stretch to AGI</title>
      <description>Demis Hassabis, co-founder and CEO of Google DeepMind and 2024 Nobel laureate in chemistry for AlphaFold, joins Sequoia partner Konstantine Buhler at AI Ascent 2026 for a wide-ranging conversation about the path to AGI and what comes after. He explains why he believes AGI is achievable by 2030, why drug discovery could collapse from ten years to days, and why we should think of information, not matter or energy, as the most fundamental substance in the universe. Also: what Einstein would tell us about the limits of today's models, and why the next year or two will be critical for humanity.</description>
      <pubDate>Thu, 30 Apr 2026 17:12:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Demis Hassabis, co-founder and CEO of Google DeepMind and 2024 Nobel laureate in chemistry for AlphaFold, joins Sequoia partner Konstantine Buhler at AI Ascent 2026 for a wide-ranging conversation about the path to AGI and what comes after. He explains why he believes AGI is achievable by 2030, why drug discovery could collapse from ten years to days, and why we should think of information, not matter or energy, as the most fundamental substance in the universe. Also: what Einstein would tell us about the limits of today's models, and why the next year or two will be critical for humanity.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Demis Hassabis, co-founder and CEO of Google DeepMind and 2024 Nobel laureate in chemistry for AlphaFold, joins Sequoia partner Konstantine Buhler at AI Ascent 2026 for a wide-ranging conversation about the path to AGI and what comes after. He explains why he believes AGI is achievable by 2030, why drug discovery could collapse from ten years to days, and why we should think of information, not matter or energy, as the most fundamental substance in the universe. Also: what Einstein would tell us about the limits of today's models, and why the next year or two will be critical for humanity.</p>]]>
      </content:encoded>
      <itunes:duration>1611</itunes:duration>
      <guid isPermaLink="false"><![CDATA[d029858c-44b7-11f1-b968-43e05671113c]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI1446613297.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Andrej Karpathy: From Vibe Coding to Agentic Engineering</title>
      <description>Andrej Karpathy (co-founder of OpenAI, former head of AI at Tesla, and now founder of Eureka Labs) talks with Sequoia partner Stephanie Zhan at AI Ascent 2026 about what's changed in the year since he coined "vibe coding." He explains why he's never felt more behind as a programmer, why agentic engineering is the more serious discipline taking shape on top of vibe coding, and why we should think of LLMs not as animals but as ghosts: jagged, statistical, summoned entities that require a new kind of taste and judgment to direct. He also touches on Software 3.0, the limits of verifiability, and why you can outsource your thinking but never your understanding.</description>
      <pubDate>Thu, 30 Apr 2026 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Andrej Karpathy (co-founder of OpenAI, former head of AI at Tesla, and now founder of Eureka Labs) talks with Sequoia partner Stephanie Zhan at AI Ascent 2026 about what's changed in the year since he coined "vibe coding." He explains why he's never felt more behind as a programmer, why agentic engineering is the more serious discipline taking shape on top of vibe coding, and why we should think of LLMs not as animals but as ghosts: jagged, statistical, summoned entities that require a new kind of taste and judgment to direct. He also touches on Software 3.0, the limits of verifiability, and why you can outsource your thinking but never your understanding.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Andrej Karpathy (co-founder of OpenAI, former head of AI at Tesla, and now founder of Eureka Labs) talks with Sequoia partner Stephanie Zhan at AI Ascent 2026 about what's changed in the year since he coined "vibe coding." He explains why he's never felt more behind as a programmer, why agentic engineering is the more serious discipline taking shape on top of vibe coding, and why we should think of LLMs not as animals but as ghosts: jagged, statistical, summoned entities that require a new kind of taste and judgment to direct. He also touches on Software 3.0, the limits of verifiability, and why you can outsource your thinking but never your understanding.</p>]]>
      </content:encoded>
      <itunes:duration>1788</itunes:duration>
      <guid isPermaLink="false"><![CDATA[90151d98-433c-11f1-93d1-9316de7fe3e7]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI9375467036.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>From SEO to Agent-Led Growth: Profound’s James Cadwallader</title>
      <description>James Cadwallader, co-founder and CEO of Profound, makes the case that we are living through the biggest platform shift in marketing history. The front door of the internet hasn't changed, but the visitor walking through it has. Where consumers once clicked blue links, AI agents now crawl the web on their behalf, synthesizing answers and steering purchase decisions at scale. 



James explains why Gemini, ChatGPT, and Claude all recommend brands differently, why mapping AI visibility onto traditional SEO is the wrong instinct, and why the real imperative is to equip a superintelligent agent with original insight it couldn't find anywhere else. He also digs into the dead internet theory – the possibility that human browsing could largely cease within three years – how AI advertising may become the most effective form the world has ever seen, and why agent-led marketing isn't just automation of the old work, but an entirely new capability.

Hosted by Sonya Huang, Sequoia Capital</description>
      <pubDate>Tue, 14 Apr 2026 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/65d389a8-3765-11f1-a29e-cf3fbe697ce7/image/4da6e283607460a6b2a7df363344fac7.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>James Cadwallader, co-founder and CEO of Profound, makes the case that we are living through the biggest platform shift in marketing history. The front door of the internet hasn't changed, but the visitor walking through it has. Where consumers once clicked blue links, AI agents now crawl the web on their behalf, synthesizing answers and steering purchase decisions at scale. 



James explains why Gemini, ChatGPT, and Claude all recommend brands differently, why mapping AI visibility onto traditional SEO is the wrong instinct, and why the real imperative is to equip a superintelligent agent with original insight it couldn't find anywhere else. He also digs into the dead internet theory – the possibility that human browsing could largely cease within three years – how AI advertising may become the most effective form the world has ever seen, and why agent-led marketing isn't just automation of the old work, but an entirely new capability.

Hosted by Sonya Huang, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>James Cadwallader, co-founder and CEO of Profound, makes the case that we are living through the biggest platform shift in marketing history. The front door of the internet hasn't changed, but the visitor walking through it has. Where consumers once clicked blue links, AI agents now crawl the web on their behalf, synthesizing answers and steering purchase decisions at scale. </p>
<p><br></p>
<p>James explains why Gemini, ChatGPT, and Claude all recommend brands differently, why mapping AI visibility onto traditional SEO is the wrong instinct, and why the real imperative is to equip a superintelligent agent with original insight it couldn't find anywhere else. He also digs into the dead internet theory – the possibility that human browsing could largely cease within three years – how AI advertising may become the most effective form the world has ever seen, and why agent-led marketing isn't just automation of the old work, but an entirely new capability.</p>
<p>Hosted by Sonya Huang, Sequoia Capital</p>]]>
      </content:encoded>
      <itunes:duration>1886</itunes:duration>
      <guid isPermaLink="false"><![CDATA[65d389a8-3765-11f1-a29e-cf3fbe697ce7]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI4586939097.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>How Autonomous Labs Will Transform Scientific Research: Ginkgo Bioworks’ Jason Kelly</title>
      <description>Jason Kelly founded Ginkgo Bioworks in 2008 with a simple but radical idea: DNA is code, and cells are programmable. Sixteen years later, AI is finally making that vision real in ways that could reshape science itself. Jason describes a landmark collaboration with OpenAI in which a reasoning model with access to a robotic lab beat the state of the art in biochemistry by 40% - not by being smarter than scientists, but by running experiments 24 hours a day and sharing data across a hundred parallel hypotheses simultaneously. He argues that the biggest inefficiency in science isn't intelligence, it's manual labor. Once AI helps scale research, the cost of discovery collapses and breakthroughs follow, with profound implications for biopharma, national competitiveness, and human health.

Hosted by Sonya Huang and Pat Grady, Sequoia Capital</description>
      <pubDate>Tue, 24 Mar 2026 14:12:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/99f20e54-26e3-11f1-b50e-fff5703b4285/image/4ae61011e6c54f607315a41eff6369a2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Jason Kelly founded Ginkgo Bioworks in 2008 with a simple but radical idea: DNA is code, and cells are programmable. Sixteen years later, AI is finally making that vision real in ways that could reshape science itself. Jason describes a landmark collaboration with OpenAI in which a reasoning model with access to a robotic lab beat the state of the art in biochemistry by 40% - not by being smarter than scientists, but by running experiments 24 hours a day and sharing data across a hundred parallel hypotheses simultaneously. He argues that the biggest inefficiency in science isn't intelligence, it's manual labor. Once AI helps scale research, the cost of discovery collapses and breakthroughs follow, with profound implications for biopharma, national competitiveness, and human health.

Hosted by Sonya Huang and Pat Grady, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Jason Kelly founded Ginkgo Bioworks in 2008 with a simple but radical idea: DNA is code, and cells are programmable. Sixteen years later, AI is finally making that vision real in ways that could reshape science itself. Jason describes a landmark collaboration with OpenAI in which a reasoning model with access to a robotic lab beat the state of the art in biochemistry by 40% - not by being smarter than scientists, but by running experiments 24 hours a day and sharing data across a hundred parallel hypotheses simultaneously. He argues that the biggest inefficiency in science isn't intelligence, it's manual labor. Once AI helps scale research, the cost of discovery collapses and breakthroughs follow, with profound implications for biopharma, national competitiveness, and human health.</p>
<p><br></p>
<p>Hosted by Sonya Huang and Pat Grady, Sequoia Capital</p>]]>
      </content:encoded>
      <itunes:duration>3507</itunes:duration>
      <guid isPermaLink="false"><![CDATA[99f20e54-26e3-11f1-b50e-fff5703b4285]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI7929969804.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Greetings, Earthlings: Philip Johnston of Starcloud on Data Centers in Space</title>
      <description>Philip Johnston, founder and CEO of Starcloud, explains why space will become the primary location for AI compute infrastructure within the next decade. After witnessing SpaceX's massive manufacturing scale at Starbase, Philip realized that declining launch costs would make space-based data centers cheaper than terrestrial ones. He breaks down the physics of heat dissipation in vacuum, the economics of solar power without atmosphere, and why the marginal cost of space infrastructure decreases while Earth-based costs increase. Philip previews a future where close to a trillion dollars per year in CapEx flows to space compute. And, yes, we get his take on aliens.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital.</description>
      <pubDate>Tue, 17 Mar 2026 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/75c18b74-1f25-11f1-949a-2bfed1442ac3/image/fa6c86da523ae3925cbfdfa269fdc943.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Philip Johnston, founder and CEO of Starcloud, explains why space will become the primary location for AI compute infrastructure within the next decade. After witnessing SpaceX's massive manufacturing scale at Starbase, Philip realized that declining launch costs would make space-based data centers cheaper than terrestrial ones. He breaks down the physics of heat dissipation in vacuum, the economics of solar power without atmosphere, and why the marginal cost of space infrastructure decreases while Earth-based costs increase. Philip previews a future where close to a trillion dollars per year in CapEx flows to space compute. And, yes, we get his take on aliens.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Philip Johnston, founder and CEO of Starcloud, explains why space will become the primary location for AI compute infrastructure within the next decade. After witnessing SpaceX's massive manufacturing scale at Starbase, Philip realized that declining launch costs would make space-based data centers cheaper than terrestrial ones. He breaks down the physics of heat dissipation in vacuum, the economics of solar power without atmosphere, and why the marginal cost of space infrastructure decreases while Earth-based costs increase. Philip previews a future where close to a trillion dollars per year in CapEx flows to space compute. And, yes, we get his take on aliens.</p>
<p><br></p>
<p>Hosted by: Sonya Huang and Pat Grady, Sequoia Capital.</p>]]>
      </content:encoded>
      <itunes:duration>2659</itunes:duration>
      <guid isPermaLink="false"><![CDATA[75c18b74-1f25-11f1-949a-2bfed1442ac3]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI3959632960.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Physics Gets a Vote: Nominal Cofounders on Hardware Development in an AI World</title>
      <description>Nominal’s cofounders (Cameron McCord, Jason Hoch and Bryce Strauss) realized that the new age of reindustrialization requires a new approach to hardware engineering and testing that’s closer to how software is developed. They founded Nominal with the insight that while SpaceX, Tesla, and Anduril built proprietary internal platforms for hardware testing, the thousands of new hardware entrants can't afford to replicate that work.

Nominal serves as the system of record for hardware testing, helping companies move from PDF-based workflows to modern data infrastructure that catalogs telemetry from sensors producing millions of data points per second.

The platform enables engineers to author validation logic that follows hardware systems from initial testing through manufacturing and field deployment. We discuss their belief that all hardware companies will become physical AI companies, and why they think Nominal's role as the verification layer will be critical - because unlike a video game, physical products require rigorous validation before they enter the real world.

Hosted by: Alfred Lin and Sonya Huang, Sequoia Capital</description>
      <pubDate>Tue, 10 Mar 2026 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/9f0b727a-1c04-11f1-ac41-ff188f29fe44/image/06761366098862d33b9f54a587e724c5.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Nominal’s cofounders (Cameron McCord, Jason Hoch and Bryce Strauss) realized that the new age of reindustrialization requires a new approach to hardware engineering and testing that’s closer to how software is developed. They founded Nominal with the insight that while SpaceX, Tesla, and Anduril built proprietary internal platforms for hardware testing, the thousands of new hardware entrants can't afford to replicate that work.

Nominal serves as the system of record for hardware testing, helping companies move from PDF-based workflows to modern data infrastructure that catalogs telemetry from sensors producing millions of data points per second.

The platform enables engineers to author validation logic that follows hardware systems from initial testing through manufacturing and field deployment. We discuss their belief that all hardware companies will become physical AI companies, and why they think Nominal's role as the verification layer will be critical - because unlike a video game, physical products require rigorous validation before they enter the real world.

Hosted by: Alfred Lin and Sonya Huang, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Nominal’s cofounders (Cameron McCord, Jason Hoch and Bryce Strauss) realized that the new age of reindustrialization requires a new approach to hardware engineering and testing that’s closer to how software is developed. They founded Nominal with the insight that while SpaceX, Tesla, and Anduril built proprietary internal platforms for hardware testing, the thousands of new hardware entrants can't afford to replicate that work.</p>
<p><br></p>
<p>Nominal serves as the system of record for hardware testing, helping companies move from PDF-based workflows to modern data infrastructure that catalogs telemetry from sensors producing millions of data points per second. </p>
<p><br></p>
<p>The platform enables engineers to author validation logic that follows hardware systems from initial testing through manufacturing and field deployment. We discuss their belief that all hardware companies will become physical AI companies, and why they think Nominal's role as the verification layer will be critical - because unlike a video game, physical products require rigorous validation before they enter the real world.</p>
<p><br></p>
<p>Hosted by: Alfred Lin and Sonya Huang, Sequoia Capital</p>]]>
      </content:encoded>
      <itunes:duration>2455</itunes:duration>
      <guid isPermaLink="false"><![CDATA[9f0b727a-1c04-11f1-ac41-ff188f29fe44]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI6588043940.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Building the GitHub for RL Environments: Prime Intellect's Will Brown &amp; Johannes Hagemann</title>
      <description>Will Brown and Johannes Hagemann of Prime Intellect discuss the shift from static prompting to "environment-based" AI development, and their Environments Hub, a platform designed to democratize frontier-level training.

The conversation highlights a major shift: AI progress is moving toward Recursive Language Models that manage their own context and agentic RL that scales through trial and error. Will and Johannes describe their vision for the future in which every company will become an AI research lab. By leveraging institutional knowledge as training data, businesses can build models with decades of experience that far outperform generic, off-the-shelf systems.

Hosted by Sonya Huang, Sequoia Capital</description>
      <pubDate>Tue, 10 Feb 2026 07:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/d4638cb0-05e7-11f1-b247-b3b1016a4a74/image/2bdecc5c3f64b08f3baaa8cf603d619b.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Will Brown and Johannes Hagemann of Prime Intellect discuss the shift from static prompting to "environment-based" AI development, and their Environments Hub, a platform designed to democratize frontier-level training.

The conversation highlights a major shift: AI progress is moving toward Recursive Language Models that manage their own context and agentic RL that scales through trial and error. Will and Johannes describe their vision for the future in which every company will become an AI research lab. By leveraging institutional knowledge as training data, businesses can build models with decades of experience that far outperform generic, off-the-shelf systems.

Hosted by Sonya Huang, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Will Brown and Johannes Hagemann of Prime Intellect discuss the shift from static prompting to "environment-based" AI development, and their Environments Hub, a platform designed to democratize frontier-level training.</p>
<p>The conversation highlights a major shift: AI progress is moving toward Recursive Language Models that manage their own context and agentic RL that scales through trial and error. Will and Johannes describe their vision for the future in which every company will become an AI research lab. By leveraging institutional knowledge as training data, businesses can build models with decades of experience that far outperform generic, off-the-shelf systems.</p>
<p>Hosted by Sonya Huang, Sequoia Capital</p>]]>
      </content:encoded>
      <itunes:duration>2685</itunes:duration>
      <guid isPermaLink="false"><![CDATA[d4638cb0-05e7-11f1-b247-b3b1016a4a74]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI8352076279.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>What’s the Future of Vertical SaaS in an AGI World? Jamie Cuffe, CEO of Pace</title>
      <description>Jamie Cuffe is solving one of AI's hardest problems: getting conservative, regulated industries to trust autonomous agents with mission-critical work. At Pace, he's building AI that replaces traditional BPOs in insurance, handling everything from email triage to claims processing with 50-75% cost savings. Drawing on his experience at Retool, Jamie emphasizes the importance of "closing the distance" with customers through forward-deployed engineering and being "the rock" that clients can rely on. He shares how focusing on top-tier insurance carriers and maintaining exceptionally high standards is enabling Pace to capture a meaningful share of the $400 billion BPO market while building a durable business model - at AI-native velocity.

Hosted by Lauren Reeder and Pat Grady, Sequoia Capital</description>
      <pubDate>Tue, 03 Feb 2026 10:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/ed0e9954-005f-11f1-b4af-ffe0a0718e1a/image/e246fc73ff5cd6b323377d990468c0da.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Jamie Cuffe is solving one of AI's hardest problems: getting conservative, regulated industries to trust autonomous agents with mission-critical work. At Pace, he's building AI that replaces traditional BPOs in insurance, handling everything from email triage to claims processing with 50-75% cost savings. Drawing on his experience at Retool, Jamie emphasizes the importance of "closing the distance" with customers through forward-deployed engineering and being "the rock" that clients can rely on. He shares how focusing on top-tier insurance carriers and maintaining exceptionally high standards is enabling Pace to capture a meaningful share of the $400 billion BPO market while building a durable business model - at AI-native velocity.

Hosted by Lauren Reeder and Pat Grady, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Jamie Cuffe is solving one of AI's hardest problems: getting conservative, regulated industries to trust autonomous agents with mission-critical work. At Pace, he's building AI that replaces traditional BPOs in insurance, handling everything from email triage to claims processing with 50-75% cost savings. Drawing on his experience at Retool, Jamie emphasizes the importance of "closing the distance" with customers through forward-deployed engineering and being "the rock" that clients can rely on. He shares how focusing on top-tier insurance carriers and maintaining exceptionally high standards is enabling Pace to capture a meaningful share of the $400 billion BPO market while building a durable business model - at AI-native velocity.</p>
<p>Hosted by Lauren Reeder and Pat Grady, Sequoia Capital</p>]]>
      </content:encoded>
      <itunes:duration>3115</itunes:duration>
      <guid isPermaLink="false"><![CDATA[ed0e9954-005f-11f1-b4af-ffe0a0718e1a]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI8570430815.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Making the Case for the Terminal as AI's Workbench: Warp’s Zach Lloyd</title>
      <description>Zach Lloyd built Warp to modernize the terminal for professional developers, but the rise of coding agents transformed his company's trajectory. He discusses the convergence of IDEs and terminals into new workbenches built for prompting and agent orchestration, and why he thinks "coding will be solved" within a few years, making human expression of intent the ultimate bottleneck. Zach explains how Warp competes against subsidized tools from Anthropic and OpenAI, and why the terminal's time-based, text-oriented format makes it perfect for managing swarms of cloud agents.

Hosted by Sonya Huang, Sequoia Capital</description>
      <pubDate>Tue, 27 Jan 2026 16:57:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/d5f041e0-fae8-11f0-a812-9b47b3178e15/image/90cd6b9af4be5452d0f706f63cd3d60c.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Zach Lloyd built Warp to modernize the terminal for professional developers, but the rise of coding agents transformed his company's trajectory. He discusses the convergence of IDEs and terminals into new workbenches built for prompting and agent orchestration, and why he thinks "coding will be solved" within a few years, making human expression of intent the ultimate bottleneck. Zach explains how Warp competes against subsidized tools from Anthropic and OpenAI, and why the terminal's time-based, text-oriented format makes it perfect for managing swarms of cloud agents.

Hosted by Sonya Huang, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Zach Lloyd built Warp to modernize the terminal for professional developers, but the rise of coding agents transformed his company's trajectory. He discusses the convergence of IDEs and terminals into new workbenches built for prompting and agent orchestration, and why he thinks "coding will be solved" within a few years, making human expression of intent the ultimate bottleneck. Zach explains how Warp competes against subsidized tools from Anthropic and OpenAI, and why the terminal's time-based, text-oriented format makes it perfect for managing swarms of cloud agents.</p>
<p>Hosted by Sonya Huang, Sequoia Capital</p>
]]>
      </content:encoded>
      <itunes:duration>2883</itunes:duration>
      <guid isPermaLink="false"><![CDATA[d5f041e0-fae8-11f0-a812-9b47b3178e15]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI4053137942.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Context Engineering Our Way to Long-Horizon Agents: LangChain’s Harrison Chase</title>
      <description>Harrison Chase, cofounder of LangChain and pioneer of AI agent frameworks, discusses the emergence of long-horizon agents that can work autonomously for extended periods.

Harrison breaks down the evolution from early scaffolding approaches to today's harness-based architectures, explaining why context engineering, not just better models, has become fundamental to agent development.

He shares insights on why coding agents are leading the way, the role of file systems in agent workflows, and how building agents differs from traditional software development: from the importance of traces as the new source of truth to memory systems that enable agents to improve themselves over time.

Hosted by Sonya Huang and Pat Grady</description>
      <pubDate>Wed, 21 Jan 2026 10:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/20f372c4-f635-11f0-a84f-3709416bfe71/image/62a22cafa6952b14e00ca9b392e4b932.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Harrison Chase, cofounder of LangChain and pioneer of AI agent frameworks, discusses the emergence of long-horizon agents that can work autonomously for extended periods.

Harrison breaks down the evolution from early scaffolding approaches to today's harness-based architectures, explaining why context engineering, not just better models, has become fundamental to agent development.

He shares insights on why coding agents are leading the way, the role of file systems in agent workflows, and how building agents differs from traditional software development: from the importance of traces as the new source of truth to memory systems that enable agents to improve themselves over time.

Hosted by Sonya Huang and Pat Grady</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Harrison Chase, cofounder of LangChain and pioneer of AI agent frameworks, discusses the emergence of long-horizon agents that can work autonomously for extended periods.</p>
<p>Harrison breaks down the evolution from early scaffolding approaches to today's harness-based architectures, explaining why context engineering, not just better models, has become fundamental to agent development.</p>
<p>He shares insights on why coding agents are leading the way, the role of file systems in agent workflows, and how building agents differs from traditional software development: from the importance of traces as the new source of truth to memory systems that enable agents to improve themselves over time.</p>
<p>Hosted by Sonya Huang and Pat Grady</p>]]>
      </content:encoded>
      <itunes:duration>2387</itunes:duration>
      <guid isPermaLink="false"><![CDATA[20f372c4-f635-11f0-a84f-3709416bfe71]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI9761150620.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>How Ricursive Intelligence’s Founders Are Using AI to Shape the Future of Chip Design</title>
      <description>Anna Goldie and Azalia Mirhoseini created AlphaChip at Google, using AI to design four generations of TPUs and reducing chip floor planning from months to hours. They explain how chip design has become the critical bottleneck for AI progress: a process that typically takes years and costs hundreds of millions of dollars. Now at Ricursive Intelligence, they're enabling an evolution of the industry from “fabless” to “designless,” where any company can create custom silicon. Their vision: recursive self-improvement, in which AI designs more powerful chips faster, accelerating AI itself.

Hosted by Stephanie Zhan and Sonya Huang</description>
      <pubDate>Wed, 14 Jan 2026 20:36:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/c5941256-f0c8-11f0-9b50-539a245e1b60/image/1427c3a23617226ee46ccb5cc2e12717.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Anna Goldie and Azalia Mirhoseini created AlphaChip at Google, using AI to design four generations of TPUs and reducing chip floor planning from months to hours. They explain how chip design has become the critical bottleneck for AI progress: a process that typically takes years and costs hundreds of millions of dollars. Now at Ricursive Intelligence, they're enabling an evolution of the industry from “fabless” to “designless,” where any company can create custom silicon. Their vision: recursive self-improvement, in which AI designs more powerful chips faster, accelerating AI itself.

Hosted by Stephanie Zhan and Sonya Huang</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Anna Goldie and Azalia Mirhoseini created AlphaChip at Google, using AI to design four generations of TPUs and reducing chip floor planning from months to hours. They explain how chip design has become the critical bottleneck for AI progress: a process that typically takes years and costs hundreds of millions of dollars. Now at Ricursive Intelligence, they're enabling an evolution of the industry from “fabless” to “designless,” where any company can create custom silicon. Their vision: recursive self-improvement, in which AI designs more powerful chips faster, accelerating AI itself.</p>
<p>Hosted by Stephanie Zhan and Sonya Huang</p>]]>
      </content:encoded>
      <itunes:duration>2219</itunes:duration>
      <guid isPermaLink="false"><![CDATA[c5941256-f0c8-11f0-9b50-539a245e1b60]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI8200472129.mp3?updated=1768421374" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Training General Robots for Any Task: Physical Intelligence’s Karol Hausman and Tobi Springenberg</title>
      <description>Physical Intelligence’s Karol Hausman and Tobi Springenberg believe that robotics has been held back not by hardware limitations, but by an intelligence bottleneck that foundation models can solve. Their end-to-end learning approach combines vision, language, and action into models like π0 and π*0.6, enabling robots to learn generalizable behaviors rather than task-specific programs. The team prioritizes real-world deployment and uses RL from experience to push beyond what imitation learning alone can achieve. Their philosophy—that a single general-purpose model can handle diverse physical tasks across different robot embodiments—represents a fundamental shift in how we think about building intelligent machines for the physical world.

Hosted by Alfred Lin and Sonya Huang, Sequoia Capital</description>
      <pubDate>Tue, 06 Jan 2026 07:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/f559db38-eaba-11f0-abb4-734c8eec0697/image/1cc988cf9f1f11cb9db7b071afb07db7.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Physical Intelligence’s Karol Hausman and Tobi Springenberg believe that robotics has been held back not by hardware limitations, but by an intelligence bottleneck that foundation models can solve. Their end-to-end learning approach combines vision, language, and action into models like π0 and π*0.6, enabling robots to learn generalizable behaviors rather than task-specific programs. The team prioritizes real-world deployment and uses RL from experience to push beyond what imitation learning alone can achieve. Their philosophy—that a single general-purpose model can handle diverse physical tasks across different robot embodiments—represents a fundamental shift in how we think about building intelligent machines for the physical world.

Hosted by Alfred Lin and Sonya Huang, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Physical Intelligence’s Karol Hausman and Tobi Springenberg believe that robotics has been held back not by hardware limitations, but by an intelligence bottleneck that foundation models can solve. Their end-to-end learning approach combines vision, language, and action into models like π0 and π*0.6, enabling robots to learn generalizable behaviors rather than task-specific programs. The team prioritizes real-world deployment and uses RL from experience to push beyond what imitation learning alone can achieve. Their philosophy—that a single general-purpose model can handle diverse physical tasks across different robot embodiments—represents a fundamental shift in how we think about building intelligent machines for the physical world.</p>
<p>Hosted by Alfred Lin and Sonya Huang, Sequoia Capital</p>]]>
      </content:encoded>
      <itunes:duration>3697</itunes:duration>
      <guid isPermaLink="false"><![CDATA[f559db38-eaba-11f0-abb4-734c8eec0697]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI2576277674.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Why the Next AI Revolution Will Happen Off-Screen: Samsara CEO Sanjit Biswas</title>
      <description>Sanjit Biswas is one of the rare founders who has scaled AI in the physical world – first with Meraki, and now with Samsara, a $20B+ public company with sensors deployed across millions of vehicles and job sites. Capturing 90 billion miles of driving data each year, Samsara operates at a scale matched only by a small handful of companies. Sanjit discusses why physical AI is fundamentally different from cloud-based AI, from running inference on two- to ten-watt edge devices to managing the messy diversity of real-world data—weather, road conditions, and the long tail of human behavior.

He also shares how advances in foundation models unlock new capabilities like video reasoning, why distributed compute at the edge still beats centralized data centers for many autonomy workloads, and how AI is beginning to coach frontline workers—not just detect risk, but recognize good driving and improve fuel efficiency. Sanjit also explains why connectivity, sensors, and compute were the original “why now” for Samsara, and how those compounding curves will reshape logistics, field service, construction, and every asset-heavy industry.

Hosted by Sonya Huang and Pat Grady, Sequoia Capital</description>
      <pubDate>Tue, 16 Dec 2025 10:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/38ba3326-da00-11f0-9d53-8b0924ed4775/image/11e5fa8fe32308656c462259c492adb7.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Sanjit Biswas is one of the rare founders who has scaled AI in the physical world – first with Meraki, and now with Samsara, a $20B+ public company with sensors deployed across millions of vehicles and job sites. Capturing 90 billion miles of driving data each year, Samsara operates at a scale matched only by a small handful of companies. Sanjit discusses why physical AI is fundamentally different from cloud-based AI, from running inference on two- to ten-watt edge devices to managing the messy diversity of real-world data—weather, road conditions, and the long tail of human behavior.

He also shares how advances in foundation models unlock new capabilities like video reasoning, why distributed compute at the edge still beats centralized data centers for many autonomy workloads, and how AI is beginning to coach frontline workers—not just detect risk, but recognize good driving and improve fuel efficiency. Sanjit also explains why connectivity, sensors, and compute were the original “why now” for Samsara, and how those compounding curves will reshape logistics, field service, construction, and every asset-heavy industry.

Hosted by Sonya Huang and Pat Grady, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Sanjit Biswas is one of the rare founders who has scaled AI in the physical world – first with Meraki, and now with Samsara, a $20B+ public company with sensors deployed across millions of vehicles and job sites. Capturing 90 billion miles of driving data each year, Samsara operates at a scale matched only by a small handful of companies. Sanjit discusses why physical AI is fundamentally different from cloud-based AI, from running inference on two- to ten-watt edge devices to managing the messy diversity of real-world data—weather, road conditions, and the long tail of human behavior.</p>
<p>He also shares how advances in foundation models unlock new capabilities like video reasoning, why distributed compute at the edge still beats centralized data centers for many autonomy workloads, and how AI is beginning to coach frontline workers—not just detect risk, but recognize good driving and improve fuel efficiency. Sanjit also explains why connectivity, sensors, and compute were the original “why now” for Samsara, and how those compounding curves will reshape logistics, field service, construction, and every asset-heavy industry.</p>
<p>Hosted by Sonya Huang and Pat Grady, Sequoia Capital</p>
]]>
      </content:encoded>
      <itunes:duration>2301</itunes:duration>
      <guid isPermaLink="false"><![CDATA[38ba3326-da00-11f0-9d53-8b0924ed4775]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI4396694185.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The Rise of Generative Media: fal's Bet on Video, Infrastructure, and Speed</title>
      <description>fal is building the infrastructure layer for the generative media boom. In this episode, founders Gorkem Yurtseven, Burkay Gur, and Head of Engineering Batuhan Taskaya explain why video models present a completely different optimization problem than LLMs, one that is compute-bound, architecturally volatile, and changing every 30 days. They discuss how fal's tracing compiler, custom kernels, and globally distributed GPU fleet enable them to run more than 600 image and video models simultaneously, often faster than the labs that trained them. The team also shares what they’re seeing from the demand side: AI-native studios, personalized education, programmatic advertising, and early engagement from Hollywood. They argue that generative video is following a trajectory similar to early CGI—initial skepticism giving way to a new medium with its own workflows, aesthetics, and economic models.

Hosted by Sonya Huang, Sequoia Capital</description>
      <pubDate>Wed, 10 Dec 2025 10:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/23afc5da-d484-11f0-be65-b7d8916b5eab/image/f27e371d7033dc17b7b3af62fc35e660.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>fal is building the infrastructure layer for the generative media boom. In this episode, founders Gorkem Yurtseven, Burkay Gur, and Head of Engineering Batuhan Taskaya explain why video models present a completely different optimization problem than LLMs, one that is compute-bound, architecturally volatile, and changing every 30 days. They discuss how fal's tracing compiler, custom kernels, and globally distributed GPU fleet enable them to run more than 600 image and video models simultaneously, often faster than the labs that trained them. The team also shares what they’re seeing from the demand side: AI-native studios, personalized education, programmatic advertising, and early engagement from Hollywood. They argue that generative video is following a trajectory similar to early CGI—initial skepticism giving way to a new medium with its own workflows, aesthetics, and economic models.

Hosted by Sonya Huang, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>fal is building the infrastructure layer for the generative media boom. In this episode, founders Gorkem Yurtseven, Burkay Gur, and Head of Engineering Batuhan Taskaya explain why video models present a completely different optimization problem than LLMs, one that is compute-bound, architecturally volatile, and changing every 30 days. They discuss how fal's tracing compiler, custom kernels, and globally distributed GPU fleet enable them to run more than 600 image and video models simultaneously, often faster than the labs that trained them. The team also shares what they’re seeing from the demand side: AI-native studios, personalized education, programmatic advertising, and early engagement from Hollywood. They argue that generative video is following a trajectory similar to early CGI—initial skepticism giving way to a new medium with its own workflows, aesthetics, and economic models.</p>
<p>Hosted by Sonya Huang, Sequoia Capital</p>]]>
      </content:encoded>
      <itunes:duration>3738</itunes:duration>
      <guid isPermaLink="false"><![CDATA[23afc5da-d484-11f0-be65-b7d8916b5eab]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI6713513341.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Why IDEs Won't Die in the Age of AI Coding: Zed Founder Nathan Sobo</title>
      <description>Nathan Sobo has spent nearly two decades pursuing one goal: building an IDE that combines the power of full-featured tools like JetBrains with the responsiveness of lightweight editors like Vim. After hitting the performance ceiling with web-based Atom, he founded Zed and rebuilt from scratch in Rust with GPU-accelerated rendering. Now with 170,000 active developers, Zed is positioned at the intersection of human and AI collaboration. Nathan discusses the Agent Client Protocol that makes Zed "Switzerland" for different AI coding agents, and his vision for fine-grained edit tracking that enables permanent, contextual conversations anchored directly to code—a collaborative layer that asynchronous git-based workflows can't provide. Nathan argues that despite terminal-based AI coding tools, visual interfaces for code aren't going anywhere, and that source code is a language designed for humans to read, not just machines to execute.

Hosted by Sonya Huang and Pat Grady, Sequoia Capital</description>
      <pubDate>Tue, 02 Dec 2025 10:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/ecca5f38-ceef-11f0-9aaa-eb11459bc42e/image/bb2b32072ae528de9b45785795238cba.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Nathan Sobo has spent nearly two decades pursuing one goal: building an IDE that combines the power of full-featured tools like JetBrains with the responsiveness of lightweight editors like Vim. After hitting the performance ceiling with web-based Atom, he founded Zed and rebuilt from scratch in Rust with GPU-accelerated rendering. Now with 170,000 active developers, Zed is positioned at the intersection of human and AI collaboration. Nathan discusses the Agent Client Protocol that makes Zed "Switzerland" for different AI coding agents, and his vision for fine-grained edit tracking that enables permanent, contextual conversations anchored directly to code—a collaborative layer that asynchronous git-based workflows can't provide. Nathan argues that despite terminal-based AI coding tools, visual interfaces for code aren't going anywhere, and that source code is a language designed for humans to read, not just machines to execute.

Hosted by Sonya Huang and Pat Grady, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Nathan Sobo has spent nearly two decades pursuing one goal: building an IDE that combines the power of full-featured tools like JetBrains with the responsiveness of lightweight editors like Vim. After hitting the performance ceiling with web-based Atom, he founded Zed and rebuilt from scratch in Rust with GPU-accelerated rendering. Now with 170,000 active developers, Zed is positioned at the intersection of human and AI collaboration. Nathan discusses the Agent Client Protocol that makes Zed "Switzerland" for different AI coding agents, and his vision for fine-grained edit tracking that enables permanent, contextual conversations anchored directly to code—a collaborative layer that asynchronous git-based workflows can't provide. Nathan argues that despite terminal-based AI coding tools, visual interfaces for code aren't going anywhere, and that source code is a language designed for humans to read, not just machines to execute.</p>
<p>Hosted by Sonya Huang and Pat Grady, Sequoia Capital</p>
]]>
      </content:encoded>
      <itunes:duration>2413</itunes:duration>
      <guid isPermaLink="false"><![CDATA[ecca5f38-ceef-11f0-9aaa-eb11459bc42e]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI7976674176.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>How End-to-End Learning Created Autonomous Driving 2.0: Wayve CEO Alex Kendall</title>
      <description>Alex Kendall founded Wayve in 2017 with a contrarian vision: replace the hand-engineered autonomous vehicle stack with end-to-end deep learning. While AV 1.0 companies relied on HD maps, LiDAR retrofits, and city-by-city deployments, Wayve built a generalization-first approach that can adapt to new vehicles and cities in weeks. Alex explains how world models enable reasoning in complex scenarios, why partnering with automotive OEMs creates a path to scale beyond robo-taxis, and how language integration opens up new product possibilities. From driving in 500 cities to deploying with manufacturers like Nissan, Wayve demonstrates how the same AI breakthroughs powering LLMs are transforming the physical economy.

Hosted by: Pat Grady and Sonya Huang</description>
      <pubDate>Tue, 18 Nov 2025 10:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/bbe8e7e6-c3fe-11f0-abed-6f18b823562c/image/cea2b8f68eb64333bbd8d96ba3bd7c16.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Alex Kendall founded Wayve in 2017 with a contrarian vision: replace the hand-engineered autonomous vehicle stack with end-to-end deep learning. While AV 1.0 companies relied on HD maps, LiDAR retrofits, and city-by-city deployments, Wayve built a generalization-first approach that can adapt to new vehicles and cities in weeks. Alex explains how world models enable reasoning in complex scenarios, why partnering with automotive OEMs creates a path to scale beyond robo-taxis, and how language integration opens up new product possibilities. From driving in 500 cities to deploying with manufacturers like Nissan, Wayve demonstrates how the same AI breakthroughs powering LLMs are transforming the physical economy.

Hosted by: Pat Grady and Sonya Huang</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Alex Kendall founded Wayve in 2017 with a contrarian vision: replace the hand-engineered autonomous vehicle stack with end-to-end deep learning. While AV 1.0 companies relied on HD maps, LiDAR retrofits, and city-by-city deployments, Wayve built a generalization-first approach that can adapt to new vehicles and cities in weeks. Alex explains how world models enable reasoning in complex scenarios, why partnering with automotive OEMs creates a path to scale beyond robo-taxis, and how language integration opens up new product possibilities. From driving in 500 cities to deploying with manufacturers like Nissan, Wayve demonstrates how the same AI breakthroughs powering LLMs are transforming the physical economy.</p>
<p>Hosted by: Pat Grady and Sonya Huang</p>
]]>
      </content:encoded>
      <itunes:duration>2496</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[bbe8e7e6-c3fe-11f0-abed-6f18b823562c]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI9488284627.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>How Google’s Nano Banana Achieved Breakthrough Character Consistency</title>
      <description>When Google launched Nano Banana, it instantly became a global phenomenon, introducing an image model that finally made it possible for people to see themselves in AI-generated worlds. In this episode, Nicole Brichtova and Hansa Srinivasan, the product and engineering leads behind Nano Banana, share the story behind the model’s creation and what it means for the future of visual AI.

Nicole and Hansa discuss how they achieved breakthrough character consistency, why human evaluation remains critical for models that aim to feel right, and how “fun” became a gateway to utility. They explain the craft behind Gemini’s multimodal design, the obsession with data quality that powered Nano Banana’s realism, and how user creativity continues to push the technology in unexpected directions—from personal storytelling to education and professional design. The conversation explores what comes next in visual AI, why accessibility and imagination must evolve together, and how the tools we build can help people capture not just reality but possibility.

Hosted by: Stephanie Zhan and Pat Grady, Sequoia Capital</description>
      <pubDate>Tue, 11 Nov 2025 17:25:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/573109d4-be89-11f0-92b1-a7f50322b31d/image/dc15e1b702d8c03c4e41a17f7803aa03.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>When Google launched Nano Banana, it instantly became a global phenomenon, introducing an image model that finally made it possible for people to see themselves in AI-generated worlds. In this episode, Nicole Brichtova and Hansa Srinivasan, the product and engineering leads behind Nano Banana, share the story behind the model’s creation and what it means for the future of visual AI.

Nicole and Hansa discuss how they achieved breakthrough character consistency, why human evaluation remains critical for models that aim to feel right, and how “fun” became a gateway to utility. They explain the craft behind Gemini’s multimodal design, the obsession with data quality that powered Nano Banana’s realism, and how user creativity continues to push the technology in unexpected directions—from personal storytelling to education and professional design. The conversation explores what comes next in visual AI, why accessibility and imagination must evolve together, and how the tools we build can help people capture not just reality but possibility.

Hosted by: Stephanie Zhan and Pat Grady, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>When Google launched Nano Banana, it instantly became a global phenomenon, introducing an image model that finally made it possible for people to see themselves in AI-generated worlds. In this episode, Nicole Brichtova and Hansa Srinivasan, the product and engineering leads behind Nano Banana, share the story behind the model’s creation and what it means for the future of visual AI.</p>
<p>Nicole and Hansa discuss how they achieved breakthrough character consistency, why human evaluation remains critical for models that aim to feel right, and how “fun” became a gateway to utility. They explain the craft behind Gemini’s multimodal design, the obsession with data quality that powered Nano Banana’s realism, and how user creativity continues to push the technology in unexpected directions—from personal storytelling to education and professional design. The conversation explores what comes next in visual AI, why accessibility and imagination must evolve together, and how the tools we build can help people capture not just reality but possibility.</p>
<p>Hosted by: Stephanie Zhan and Pat Grady, Sequoia Capital</p>]]>
      </content:encoded>
      <itunes:duration>2618</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[573109d4-be89-11f0-92b1-a7f50322b31d]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI7583811947.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>OpenAI Sora 2 Team: How Generative Video Will Unlock Creativity and World Models</title>
      <description>The OpenAI Sora 2 team (Bill Peebles, Thomas Dimson, Rohan Sahai) discuss how they compressed filmmaking from months to days, enabling anyone to create compelling video. Bill, who invented the diffusion transformer that powers Sora and most video generation models, explains how space-time tokens enable object permanence and physics understanding in AI-generated video, and why Sora 2 represents a leap for video. Thomas and Rohan share how they're intentionally designing the Sora product against mindless scrolling, optimizing for creative inspiration, and building the infrastructure for IP holders to participate in a new creator economy. The conversation goes beyond video generation into the team’s vision for world simulators that could one day run scientific experiments, their perspective on co-evolving society alongside technology, and how digital simulations in alternate realities may become the future of knowledge work.

Hosted by: Konstantine Buhler and Sonya Huang, Sequoia Capital</description>
      <pubDate>Thu, 06 Nov 2025 12:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/f586934e-b8e2-11f0-a7ef-9f352ab19a62/image/deec25166ea09f2f2a288b5b39f143e1.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>The OpenAI Sora 2 team (Bill Peebles, Thomas Dimson, Rohan Sahai) discuss how they compressed filmmaking from months to days, enabling anyone to create compelling video. Bill, who invented the diffusion transformer that powers Sora and most video generation models, explains how space-time tokens enable object permanence and physics understanding in AI-generated video, and why Sora 2 represents a leap for video. Thomas and Rohan share how they're intentionally designing the Sora product against mindless scrolling, optimizing for creative inspiration, and building the infrastructure for IP holders to participate in a new creator economy. The conversation goes beyond video generation into the team’s vision for world simulators that could one day run scientific experiments, their perspective on co-evolving society alongside technology, and how digital simulations in alternate realities may become the future of knowledge work.

Hosted by: Konstantine Buhler and Sonya Huang, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>The OpenAI Sora 2 team (Bill Peebles, Thomas Dimson, Rohan Sahai) discuss how they compressed filmmaking from months to days, enabling anyone to create compelling video. Bill, who invented the diffusion transformer that powers Sora and most video generation models, explains how space-time tokens enable object permanence and physics understanding in AI-generated video, and why Sora 2 represents a leap for video. Thomas and Rohan share how they're intentionally designing the Sora product against mindless scrolling, optimizing for creative inspiration, and building the infrastructure for IP holders to participate in a new creator economy. The conversation goes beyond video generation into the team’s vision for world simulators that could one day run scientific experiments, their perspective on co-evolving society alongside technology, and how digital simulations in alternate realities may become the future of knowledge work.</p>
<p>Hosted by: Konstantine Buhler and Sonya Huang, Sequoia Capital </p>
<p><br></p>]]>
      </content:encoded>
      <itunes:duration>3625</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f586934e-b8e2-11f0-a7ef-9f352ab19a62]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI1671501466.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Nvidia CTO Michael Kagan: Scaling Beyond Moore's Law to Million-GPU Clusters</title>
      <description>Recorded live at Sequoia’s Europe100 event: Michael Kagan, co-founder of Mellanox and CTO of Nvidia, explains how the $7 billion Mellanox acquisition helped transform Nvidia from a chip company into the architect of AI infrastructure. Kagan breaks down the technical challenges of scaling from single GPUs to 100K and eventually million-GPU data centers. He reveals why network performance—not just compute power—determines AI system efficiency. He discusses the shift from training to inference workloads, his vision for AI as humanity's "spaceship of the mind," and why he thinks AI may help us discover laws of physics we haven’t yet imagined.

Hosted by Sonya Huang and Pat Grady</description>
      <pubDate>Tue, 28 Oct 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/56363538-b39c-11f0-8ea1-3bc8ac01af0c/image/08e602bed01b0c1979c3df455848b890.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Recorded live at Sequoia’s Europe100 event: Michael Kagan, co-founder of Mellanox and CTO of Nvidia, explains how the $7 billion Mellanox acquisition helped transform Nvidia from a chip company into the architect of AI infrastructure. Kagan breaks down the technical challenges of scaling from single GPUs to 100K and eventually million-GPU data centers. He reveals why network performance—not just compute power—determines AI system efficiency. He discusses the shift from training to inference workloads, his vision for AI as humanity's "spaceship of the mind," and why he thinks AI may help us discover laws of physics we haven’t yet imagined.

Hosted by Sonya Huang and Pat Grady</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Recorded live at Sequoia’s Europe100 event: Michael Kagan, co-founder of Mellanox and CTO of Nvidia, explains how the $7 billion Mellanox acquisition helped transform Nvidia from a chip company into the architect of AI infrastructure. Kagan breaks down the technical challenges of scaling from single GPUs to 100K and eventually million-GPU data centers. He reveals why network performance—not just compute power—determines AI system efficiency. He discusses the shift from training to inference workloads, his vision for AI as humanity's "spaceship of the mind," and why he thinks AI may help us discover laws of physics we haven’t yet imagined.</p>
<p>Hosted by Sonya Huang and Pat Grady</p>]]>
      </content:encoded>
      <itunes:duration>2491</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[56363538-b39c-11f0-8ea1-3bc8ac01af0c]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI5059499822.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Securing the AI Frontier: Irregular Co-founder Dan Lahav</title>
      <description>Irregular co-founder Dan Lahav is redefining what cybersecurity means in the age of autonomous AI. Working closely with OpenAI, Anthropic, and Google DeepMind, Dan, co-founder Omer Nevo, and their team are pioneering “frontier AI security”—a proactive approach to safeguarding systems where AI models act as independent agents.



Dan shares how emergent behaviors, from models socially engineering each other to outmaneuvering real-world defenses like Windows Defender, signal a coming paradigm shift. Dan explains why tomorrow’s threats will come from AI-on-AI interactions, why anomaly detection will soon break down, and how governments and enterprises alike must rethink defenses from first principles as AI becomes a national security layer.

Hosted by: Sonya Huang and Dean Meyer, Sequoia Capital



00:00 Introduction
03:07 The Future of AI Security
03:55 Thought Experiment: Security in the Age of GPT-10
05:23 Economic Shifts and AI Interaction
07:13 Security in the Autonomous Age
08:50 AI Model Capabilities and Cybersecurity
11:08 Real-World AI Security Simulations
12:31 Working with AI Labs
32:34 Enterprise AI Security Strategies
40:03 Governmental AI Security Considerations
43:41 Final Thoughts</description>
      <pubDate>Tue, 21 Oct 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7f9f0c36-ade0-11f0-a1ac-3f87a20a5114/image/d97e2a4b9435ec46293198529af52d11.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Irregular co-founder Dan Lahav is redefining what cybersecurity means in the age of autonomous AI. Working closely with OpenAI, Anthropic, and Google DeepMind, Dan, co-founder Omer Nevo, and their team are pioneering “frontier AI security”—a proactive approach to safeguarding systems where AI models act as independent agents.



Dan shares how emergent behaviors, from models socially engineering each other to outmaneuvering real-world defenses like Windows Defender, signal a coming paradigm shift. Dan explains why tomorrow’s threats will come from AI-on-AI interactions, why anomaly detection will soon break down, and how governments and enterprises alike must rethink defenses from first principles as AI becomes a national security layer.

Hosted by: Sonya Huang and Dean Meyer, Sequoia Capital



00:00 Introduction
03:07 The Future of AI Security
03:55 Thought Experiment: Security in the Age of GPT-10
05:23 Economic Shifts and AI Interaction
07:13 Security in the Autonomous Age
08:50 AI Model Capabilities and Cybersecurity
11:08 Real-World AI Security Simulations
12:31 Working with AI Labs
32:34 Enterprise AI Security Strategies
40:03 Governmental AI Security Considerations
43:41 Final Thoughts</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Irregular co-founder Dan Lahav is redefining what cybersecurity means in the age of autonomous AI. Working closely with OpenAI, Anthropic, and Google DeepMind, Dan, co-founder Omer Nevo, and their team are pioneering “frontier AI security”—a proactive approach to safeguarding systems where AI models act as independent agents.</p>
<p><br></p>
<p>Dan shares how emergent behaviors, from models socially engineering each other to outmaneuvering real-world defenses like Windows Defender, signal a coming paradigm shift. Dan explains why tomorrow’s threats will come from AI-on-AI interactions, why anomaly detection will soon break down, and how governments and enterprises alike must rethink defenses from first principles as AI becomes a national security layer.</p>
<p>Hosted by: Sonya Huang and Dean Meyer, Sequoia Capital</p>
<p><br></p>
<p>00:00 Introduction</p>
<p>03:07 The Future of AI Security</p>
<p>03:55 Thought Experiment: Security in the Age of GPT-10</p>
<p>05:23 Economic Shifts and AI Interaction</p>
<p>07:13 Security in the Autonomous Age</p>
<p>08:50 AI Model Capabilities and Cybersecurity</p>
<p>11:08 Real-World AI Security Simulations</p>
<p>12:31 Working with AI Labs</p>
<p>32:34 Enterprise AI Security Strategies</p>
<p>40:03 Governmental AI Security Considerations</p>
<p>43:41 Final Thoughts</p>]]>
      </content:encoded>
      <itunes:duration>2649</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[7f9f0c36-ade0-11f0-a1ac-3f87a20a5114]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI2344270803.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Why AI Will Transform Customer Experience: Cresta CEO Ping Wu and Sequoia’s Doug Leone</title>
      <description>Ping Wu built Google's contact center business before becoming CEO of Cresta, where he's pioneering a unique approach to contact center transformation. Rather than full automation, Ping advocates a dual approach: automating what's ready while using AI to assist humans with the rest. He makes the case for an abundance mindset—imagining new customer experiences like talking to airline apps or turning synchronous interactions asynchronous. Ping breaks down the technical challenges of deploying Contact Center AI at scale, from solving latency to orchestrating 20+ models in real-time. Sequoia’s Doug Leone shares his framework for building AI companies at speed and why he believes we're at the front end of an Industrial Revolution 2.0.

Hosted by: Sonya Huang and Doug Leone, Sequoia Capital



00:00 Introduction 

01:13 The Evolution of Contact Centers

02:05 Debating AI's Impact on Call Centers

04:07 Challenges and Opportunities in Contact Centers

08:14 Technological Waves in Contact Centers

11:10 AI vs Human Agents: The Future

13:35 Customer Experience and AI

16:33 The Role of Data in AI Automation

19:05 Competing in the AI Space

22:34 Building a Company in the AI Era

24:05 Instilling Speed in AI Companies

24:53 Management Experience and Growth Challenges

26:01 Identifying Leadership Potential

26:37 Cresta's Leadership Transition

28:34 Future Goals for Cresta

29:56 AI Market Cycles and Investment

35:38 Cresta's Technical Stack

45:11 AI's Impact on Business Communication</description>
      <pubDate>Tue, 14 Oct 2025 17:55:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/59ff2cc0-a856-11f0-91b6-a31ab56b5c2a/image/aea44daf50931644c9d49c23eac05d2d.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Ping Wu built Google's contact center business before becoming CEO of Cresta, where he's pioneering a unique approach to contact center transformation. Rather than full automation, Ping advocates a dual approach: automating what's ready while using AI to assist humans with the rest. He makes the case for an abundance mindset—imagining new customer experiences like talking to airline apps or turning synchronous interactions asynchronous. Ping breaks down the technical challenges of deploying Contact Center AI at scale, from solving latency to orchestrating 20+ models in real-time. Sequoia’s Doug Leone shares his framework for building AI companies at speed and why he believes we're at the front end of an Industrial Revolution 2.0.

Hosted by: Sonya Huang and Doug Leone, Sequoia Capital



00:00 Introduction 

01:13 The Evolution of Contact Centers

02:05 Debating AI's Impact on Call Centers

04:07 Challenges and Opportunities in Contact Centers

08:14 Technological Waves in Contact Centers

11:10 AI vs Human Agents: The Future

13:35 Customer Experience and AI

16:33 The Role of Data in AI Automation

19:05 Competing in the AI Space

22:34 Building a Company in the AI Era

24:05 Instilling Speed in AI Companies

24:53 Management Experience and Growth Challenges

26:01 Identifying Leadership Potential

26:37 Cresta's Leadership Transition

28:34 Future Goals for Cresta

29:56 AI Market Cycles and Investment

35:38 Cresta's Technical Stack

45:11 AI's Impact on Business Communication</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Ping Wu built Google's contact center business before becoming CEO of Cresta, where he's pioneering a unique approach to contact center transformation. Rather than full automation, Ping advocates a dual approach: automating what's ready while using AI to assist humans with the rest. He makes the case for an abundance mindset—imagining new customer experiences like talking to airline apps or turning synchronous interactions asynchronous. Ping breaks down the technical challenges of deploying Contact Center AI at scale, from solving latency to orchestrating 20+ models in real-time. Sequoia’s Doug Leone shares his framework for building AI companies at speed and why he believes we're at the front end of an Industrial Revolution 2.0.</p>
<p>Hosted by: Sonya Huang and Doug Leone, Sequoia Capital</p>
<p><br></p>
<p>00:00 Introduction </p>
<p>01:13 The Evolution of Contact Centers</p>
<p>02:05 Debating AI's Impact on Call Centers</p>
<p>04:07 Challenges and Opportunities in Contact Centers</p>
<p>08:14 Technological Waves in Contact Centers</p>
<p>11:10 AI vs Human Agents: The Future</p>
<p>13:35 Customer Experience and AI</p>
<p>16:33 The Role of Data in AI Automation</p>
<p>19:05 Competing in the AI Space</p>
<p>22:34 Building a Company in the AI Era</p>
<p>24:05 Instilling Speed in AI Companies</p>
<p>24:53 Management Experience and Growth Challenges</p>
<p>26:01 Identifying Leadership Potential</p>
<p>26:37 Cresta's Leadership Transition</p>
<p>28:34 Future Goals for Cresta</p>
<p>29:56 AI Market Cycles and Investment</p>
<p>35:38 Cresta's Technical Stack</p>
<p>45:11 AI's Impact on Business Communication</p>]]>
      </content:encoded>
      <itunes:duration>2686</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[59ff2cc0-a856-11f0-91b6-a31ab56b5c2a]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI1092043399.mp3?updated=1760465413" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Block CTO Dhanji Prasanna: Building the AI-First Enterprise with Goose, their Open Source Agent</title>
      <description>As CTO of Block, Dhanji Prasanna has overseen a dramatic enterprise AI transformation, with engineers saving 8-10 hours a week through AI automation. Block’s open-source agent goose connects to existing enterprise tools through MCP, enabling everyone from engineers to sales teams to build custom applications without coding. Dhanji shares how Block reorganized from business unit silos to functional teams to accelerate AI adoption, why they chose to open-source their most valuable AI tool, and why he believes swarms of smaller AI models will outperform monolithic LLMs.

Hosted by: Sonya Huang and Roelof Botha, Sequoia Capital

Mentioned in the episode:


  
goose: Block’s open-source, general-purpose AI agent used across the company to orchestrate workflows via tools and APIs. 



  
Model Context Protocol (MCP): Open protocol (spearheaded by Anthropic) for connecting AI agents to tools; goose was an early adopter and helped shape it.



  
bitchat: Decentralized chat app written by Jack Dorsey



  
Swarm intelligence: Research direction Dhanji highlights for AI’s future where many agents (geese) collaborate to build complex software beyond a single-agent copilot.



  
Travelling Salesman Problem: Classic optimization problem cited by Dhanji in the context of a non-technical user of goose solving a practical optimization task.



  
Amara’s Law: The idea, originated by futurist Roy Amara in 1978, that we overestimate a technology’s impact in the short term and underestimate it in the long term.



00:00 Introduction 

01:48 AI: Friend or Foe?

03:13 Block's Journey with AI and Technology

04:47 Block's Diverse Product Range

07:04 Driving AI at Block

14:28 The Evolution of Goose

27:45 Integrating Goose with Existing Systems

28:23 Goose's Learning and Recipe Feature

29:41 Tool Use and LLM Providers

31:40 Impact of AI on Developer Productivity

34:37 Block's Commitment to Open Source

39:09 Future of AI and Swarm Intelligence

43:05 Remote Work at Block

45:15 Vibe Coding and AI in Development

48:43 Making Goose More Accessible

51:28 Generative AI in Customer-Facing Products

54:09 Design and Engineering at Block

55:38 Predictions for the Future of AI</description>
      <pubDate>Tue, 30 Sep 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/501be594-9d6c-11f0-b436-c32ec611976e/image/7a725bb6aff68ded6db9a62579b049df.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>As CTO of Block, Dhanji Prasanna has overseen a dramatic enterprise AI transformation, with engineers saving 8-10 hours a week through AI automation. Block’s open-source agent goose connects to existing enterprise tools through MCP, enabling everyone from engineers to sales teams to build custom applications without coding. Dhanji shares how Block reorganized from business unit silos to functional teams to accelerate AI adoption, why they chose to open-source their most valuable AI tool, and why he believes swarms of smaller AI models will outperform monolithic LLMs.

Hosted by: Sonya Huang and Roelof Botha, Sequoia Capital

Mentioned in the episode:


  
goose: Block’s open-source, general-purpose AI agent used across the company to orchestrate workflows via tools and APIs. 



  
Model Context Protocol (MCP): Open protocol (spearheaded by Anthropic) for connecting AI agents to tools; goose was an early adopter and helped shape it.



  
bitchat: Decentralized chat app written by Jack Dorsey



  
Swarm intelligence: Research direction Dhanji highlights for AI’s future where many agents (geese) collaborate to build complex software beyond a single-agent copilot.



  
Travelling Salesman Problem: Classic optimization problem cited by Dhanji in the context of a non-technical user of goose solving a practical optimization task.



  
Amara’s Law: The idea, originated by futurist Roy Amara in 1978, that we overestimate a technology’s impact in the short term and underestimate it in the long term.



00:00 Introduction 

01:48 AI: Friend or Foe?

03:13 Block's Journey with AI and Technology

04:47 Block's Diverse Product Range

07:04 Driving AI at Block

14:28 The Evolution of Goose

27:45 Integrating Goose with Existing Systems

28:23 Goose's Learning and Recipe Feature

29:41 Tool Use and LLM Providers

31:40 Impact of AI on Developer Productivity

34:37 Block's Commitment to Open Source

39:09 Future of AI and Swarm Intelligence

43:05 Remote Work at Block

45:15 Vibe Coding and AI in Development

48:43 Making Goose More Accessible

51:28 Generative AI in Customer-Facing Products

54:09 Design and Engineering at Block

55:38 Predictions for the Future of AI</itunes:summary>
      <content:encoded>
        <![CDATA[<p>As CTO of Block, Dhanji Prasanna has overseen a dramatic enterprise AI transformation, with engineers saving 8-10 hours a week through AI automation. Block’s open-source agent goose connects to existing enterprise tools through MCP, enabling everyone from engineers to sales teams to build custom applications without coding. Dhanji shares how Block reorganized from business unit silos to functional teams to accelerate AI adoption, why they chose to open-source their most valuable AI tool, and why he believes swarms of smaller AI models will outperform monolithic LLMs.</p>
<p>Hosted by: Sonya Huang and Roelof Botha, Sequoia Capital</p>
<p><u>Mentioned in the episode:</u></p>
<ul>
  <li>
<p><a href="https://github.com/block/goose"><u>goose</u></a>: Block’s open-source, general-purpose AI agent used across the company to orchestrate workflows via tools and APIs. </p>
</li>
  <li>
<p><a href="https://modelcontextprotocol.io/"><u>Model Context Protocol (MCP)</u></a>: Open protocol (spearheaded by Anthropic) for connecting AI agents to tools; goose was an early adopter and helped shape it.</p>
</li>
  <li>
<p><a href="https://bitchat.free/"><u>bitchat</u></a>: Decentralized chat app written by Jack Dorsey</p>
</li>
  <li>
<p><a href="https://en.wikipedia.org/wiki/Swarm_intelligence"><u>Swarm intelligence</u></a>: Research direction Dhanji highlights for AI’s future where many agents (geese) collaborate to build complex software beyond a single-agent copilot.</p>
</li>
  <li>
<p><a href="https://en.wikipedia.org/wiki/Travelling_salesman_problem"><u>Travelling Salesman Problem</u></a>: Classic optimization problem cited by Dhanji in the context of a non-technical user of goose solving a practical optimization task.</p>
</li>
  <li>
<p><a href="https://en.wikipedia.org/wiki/Amara%27s_law"><u>Amara’s Law</u></a>: The idea, originated by futurist Roy Amara in 1978, that we overestimate a technology’s impact in the short term and underestimate it in the long term.</p>
</li>
</ul>
<p>00:00 Introduction</p>
<p>01:48 AI: Friend or Foe?</p>
<p>03:13 Block's Journey with AI and Technology</p>
<p>04:47 Block's Diverse Product Range</p>
<p>07:04 Driving AI at Block</p>
<p>14:28 The Evolution of Goose</p>
<p>27:45 Integrating Goose with Existing Systems</p>
<p>28:23 Goose's Learning and Recipe Feature</p>
<p>29:41 Tool Use and LLM Providers</p>
<p>31:40 Impact of AI on Developer Productivity</p>
<p>34:37 Block's Commitment to Open Source</p>
<p>39:09 Future of AI and Swarm Intelligence</p>
<p>43:05 Remote Work at Block</p>
<p>45:15 Vibe Coding and AI in Development</p>
<p>48:43 Making Goose More Accessible</p>
<p>51:28 Generative AI in Customer-Facing Products</p>
<p>54:09 Design and Engineering at Block</p>
<p>55:38 Predictions for the Future of AI</p>]]>
      </content:encoded>
      <itunes:duration>3583</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[501be594-9d6c-11f0-b436-c32ec611976e]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI9803179915.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Why Businesses Are Rejecting the AI They’ve Asked For: Agency CEO Elias Torres</title>
      <description>Elias Torres has been building AI systems since 1999, from chatbots at IBM to co-founding Drift and now Agency. He believes businesses are caught in an expectation mismatch—demanding AI while rejecting it due to imperfection anxiety. Drawing from his experience scaling HubSpot, Elias explains why human-led customer experience doesn’t scale and how Agency is building AI-first solutions that work autonomously. His contrarian approach focuses on the back-end customer experience rather than front-end AI SDRs, aiming to “deprogram the entire business world” from inefficient human-dependent processes.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital

Mentioned in the episode:


  
Lookery: David Cancel’s first startup that Elias joined after IBM; shut down in 2009



  
Performable: Elias and David’s second startup, acquired by HubSpot in 2011



  
Drift: Elias and David Cancel’s third startup, merged with Salesloft in 2024



  
Klaviyo: B2C CRM company started by Andrew Bialecki after working with Elias at HubSpot



  
Secret: Short-lived anonymous messaging app that inspired one of Drift’s early iterations



  
Tatajuba: Kitesurfing destination in Jericoacoara, Brazil where Elias (briefly) considered retirement




00:00 Introduction 
01:50 AI and Customer Expectations
03:36 Managing Emails with AI
07:21 Elias' Personal Journey
11:27 Early Career
14:28 Joining HubSpot and Scaling Challenges
16:31 Hiring Exceptional Talent
18:53 Founding Drift
20:27 Pivoting to Success with Drift
21:41 Drift's Chatbot Innovation
22:09 Challenges and Limitations of Drift
22:37 The Struggle with Customer Knowledge
23:09 Scaling Challenges and Lessons Learned
25:58 Rediscovering Purpose Post-Drift
28:55 The Birth of Agency
29:42 AI's Role in Customer Experience
35:13 Building a Sustainable Business Model
37:06 The Vision for Agency
38:22 Challenges and Opportunities with AI
41:22 Deprogramming and Embracing Change
43:23 Optimism for the AI Future
44:15 Closing Thoughts</description>
      <pubDate>Tue, 23 Sep 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/de942aa4-97de-11f0-b8ca-63bc5a95e438/image/368c17f81b25c939b6ea8eedb2a8d7ed.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Elias Torres has been building AI systems since 1999, from chatbots at IBM to co-founding Drift and now Agency. He believes businesses are caught in an expectation mismatch—demanding AI while rejecting it due to imperfection anxiety. Drawing from his experience scaling HubSpot, Elias explains why human-led customer experience doesn’t scale and how Agency is building AI-first solutions that work autonomously. His contrarian approach focuses on the back-end customer experience rather than front-end AI SDRs, aiming to “deprogram the entire business world” from inefficient human-dependent processes.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital

Mentioned in the episode:


  
Lookery: David Cancel’s first startup that Elias joined after IBM; shut down in 2009



  
Performable: Elias and David’s second startup, acquired by HubSpot in 2011



  
Drift: Elias and David Cancel’s third startup, merged with Salesloft in 2024



  
Klaviyo: B2C CRM company started by Andrew Bialecki after working with Elias at HubSpot



  
Secret: Short-lived anonymous messaging app that inspired one of Drift’s early iterations



  
Tatajuba: Kitesurfing destination in Jericoacoara, Brazil where Elias (briefly) considered retirement




00:00 Introduction 
01:50 AI and Customer Expectations
03:36 Managing Emails with AI
07:21 Elias' Personal Journey
11:27 Early Career
14:28 Joining HubSpot and Scaling Challenges
16:31 Hiring Exceptional Talent
18:53 Founding Drift
20:27 Pivoting to Success with Drift
21:41 Drift's Chatbot Innovation
22:09 Challenges and Limitations of Drift
22:37 The Struggle with Customer Knowledge
23:09 Scaling Challenges and Lessons Learned
25:58 Rediscovering Purpose Post-Drift
28:55 The Birth of Agency
29:42 AI's Role in Customer Experience
35:13 Building a Sustainable Business Model
37:06 The Vision for Agency
38:22 Challenges and Opportunities with AI
41:22 Deprogramming and Embracing Change
43:23 Optimism for the AI Future
44:15 Closing Thoughts</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Elias Torres has been building AI systems since 1999, from chatbots at IBM to co-founding Drift and now Agency. He believes businesses are caught in an expectation mismatch—demanding AI while rejecting it due to imperfection anxiety. Drawing from his experience scaling HubSpot, Elias explains why human-led customer experience doesn’t scale and how Agency is building AI-first solutions that work autonomously. His contrarian approach focuses on the back-end customer experience rather than front-end AI SDRs, aiming to “deprogram the entire business world” from inefficient human-dependent processes.</p>
<p>Hosted by: Sonya Huang and Pat Grady, Sequoia Capital</p>
<p><u>Mentioned in the episode:</u></p>
<ul>
  <li>
<p><a href="https://www.crunchbase.com/organization/lookery"><u>Lookery</u></a>: David Cancel’s first startup that Elias joined after IBM; shut down in 2009</p>
</li>
  <li>
<p><a href="https://techcrunch.com/2011/06/16/hubspot-acquires-marketing-software-startup-performable/"><u>Performable</u></a>: Elias and David’s second startup, acquired by HubSpot in 2011</p>
</li>
  <li>
<p><a href="https://www.salesloft.com/company/newsroom/salesloft-acquires-drift"><u>Drift</u></a>: Elias and David Cancel’s third startup, merged with Salesloft in 2024</p>
</li>
  <li>
<p><a href="https://www.klaviyo.com/"><u>Klaviyo</u></a>: B2C CRM company started by Andrew Bialecki after working with Elias at HubSpot</p>
</li>
  <li>
<p><a href="https://en.wikipedia.org/wiki/Secret_(app)"><u>Secret</u></a>: Short-lived anonymous messaging app that inspired one of Drift’s early iterations</p>
</li>
  <li>
<p><a href="https://kiteguide.com/south-america/brazil/tatajuba"><u>Tatajuba</u></a>: Kitesurfing destination in Jericoacoara, Brazil where Elias (briefly) considered retirement</p>
</li>
</ul>
<p>00:00 Introduction</p>
<p>01:50 AI and Customer Expectations</p>
<p>03:36 Managing Emails with AI</p>
<p>07:21 Elias' Personal Journey</p>
<p>11:27 Early Career</p>
<p>14:28 Joining HubSpot and Scaling Challenges</p>
<p>16:31 Hiring Exceptional Talent</p>
<p>18:53 Founding Drift</p>
<p>20:27 Pivoting to Success with Drift</p>
<p>21:41 Drift's Chatbot Innovation</p>
<p>22:09 Challenges and Limitations of Drift</p>
<p>22:37 The Struggle with Customer Knowledge</p>
<p>23:09 Scaling Challenges and Lessons Learned</p>
<p>25:58 Rediscovering Purpose Post-Drift</p>
<p>28:55 The Birth of Agency</p>
<p>29:42 AI's Role in Customer Experience</p>
<p>35:13 Building a Sustainable Business Model</p>
<p>37:06 The Vision for Agency</p>
<p>38:22 Challenges and Opportunities with AI</p>
<p>41:22 Deprogramming and Embracing Change</p>
<p>43:23 Optimism for the AI Future</p>
<p>44:15 Closing Thoughts</p>]]>
      </content:encoded>
      <itunes:duration>2689</itunes:duration>
      <itunes:explicit>yes</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[de942aa4-97de-11f0-b8ca-63bc5a95e438]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI4560153176.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Building the "App Store" for Robots: Hugging Face's Thomas Wolf on Physical AI</title>
      <description>Thomas Wolf, co-founder and Chief Science Officer of Hugging Face, explains how his company is applying the same community-driven approach that made transformers accessible to everyone to the emerging field of robotics. Thomas discusses LeRobot, Hugging Face's ambitious project to democratize robotics through open-source tools, datasets, and affordable hardware. He shares his vision for turning millions of software developers into roboticists, the challenges of data scarcity in robotics versus language models, and why he believes we're at the same inflection point for physical AI that we were for LLMs just a few years ago.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital</description>
      <pubDate>Tue, 09 Sep 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/0d1d59a2-8d00-11f0-810c-4fc7e6321c8c/image/f7fc2868f9444097fc7194e1c3a62818.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Thomas Wolf, co-founder and Chief Science Officer of Hugging Face, explains how his company is applying the same community-driven approach that made transformers accessible to everyone to the emerging field of robotics. Thomas discusses LeRobot, Hugging Face's ambitious project to democratize robotics through open-source tools, datasets, and affordable hardware. He shares his vision for turning millions of software developers into roboticists, the challenges of data scarcity in robotics versus language models, and why he believes we're at the same inflection point for physical AI that we were for LLMs just a few years ago.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Thomas Wolf, co-founder and Chief Science Officer of Hugging Face, explains how his company is applying the same community-driven approach that made transformers accessible to everyone to the emerging field of robotics. Thomas discusses LeRobot, Hugging Face's ambitious project to democratize robotics through open-source tools, datasets, and affordable hardware. He shares his vision for turning millions of software developers into roboticists, the challenges of data scarcity in robotics versus language models, and why he believes we're at the same inflection point for physical AI that we were for LLMs just a few years ago.</p>
<p>Hosted by: Sonya Huang and Pat Grady, Sequoia Capital</p>
]]>
      </content:encoded>
      <itunes:duration>2588</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[0d1d59a2-8d00-11f0-810c-4fc7e6321c8c]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI5891850195.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Deal Velocity, Not Billable Hours: How Crosby Uses AI to Redefine Legal Contracting</title>
      <description>Ryan Daniels and John Sarihan are reimagining legal services by building Crosby, an AI-powered law firm that focuses on contract negotiations to start. Rather than building legal software, they've structured their company as an actual law firm with lawyers and AI engineers working side-by-side to automate human negotiations. They've eliminated billable hours in favor of per-document pricing, achieving contract turnaround times under an hour. Ryan and John explain why the law firm structure enables faster innovation cycles, how they're using AI to predict negotiation outcomes, and their vision for agents that can simulate entire contract negotiations between parties.

Hosted by Josephine Chen, Sequoia Capital



Mentioned in this episode:


  
Data processing agreement (DPA): GDPR-mandated contract between controllers and processors. Crosby handles DPAs as part of B2B contracting.



  
Credence good: Economic term for services whose quality is hard to judge even after consumption. Used to explain why legal buyers value lawyers-in-the-loop and malpractice coverage.</description>
      <pubDate>Tue, 02 Sep 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/55c8bb6c-878b-11f0-8ef9-db7a46549d94/image/0b03674ef35517acd494b24bd3fefac5.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Ryan Daniels and John Sarihan are reimagining legal services by building Crosby, an AI-powered law firm that focuses on contract negotiations to start. Rather than building legal software, they've structured their company as an actual law firm with lawyers and AI engineers working side-by-side to automate human negotiations. They've eliminated billable hours in favor of per-document pricing, achieving contract turnaround times under an hour. Ryan and John explain why the law firm structure enables faster innovation cycles, how they're using AI to predict negotiation outcomes, and their vision for agents that can simulate entire contract negotiations between parties.

Hosted by Josephine Chen, Sequoia Capital



Mentioned in this episode:


  
Data processing agreement (DPA): GDPR-mandated contract between controllers and processors. Crosby handles DPAs as part of B2B contracting.



  
Credence good: Economic term for services whose quality is hard to judge even after consumption. Used to explain why legal buyers value lawyers-in-the-loop and malpractice coverage.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Ryan Daniels and John Sarihan are reimagining legal services by building Crosby, an AI-powered law firm that focuses on contract negotiations to start. Rather than building legal software, they've structured their company as an actual law firm with lawyers and AI engineers working side-by-side to automate human negotiations. They've eliminated billable hours in favor of per-document pricing, achieving contract turnaround times under an hour. Ryan and John explain why the law firm structure enables faster innovation cycles, how they're using AI to predict negotiation outcomes, and their vision for agents that can simulate entire contract negotiations between parties.</p>
<p>Hosted by Josephine Chen, Sequoia Capital</p>
<p><br></p>
<p><u>Mentioned in this episode:</u></p>
<ul>
  <li>
<p><a href="https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/contracts-and-liabilities-between-controllers-and-processors/"><u>Data processing agreement (DPA)</u></a>: GDPR-mandated contract between controllers and processors. Crosby handles DPAs as part of B2B contracting.</p>
</li>
  <li>
<p><a href="https://en.wikipedia.org/wiki/Credence_goods"><u>Credence good</u></a>: Economic term for services whose quality is hard to judge even after consumption. Used to explain why legal buyers value lawyers-in-the-loop and malpractice coverage.</p>
</li>
</ul>
]]>
      </content:encoded>
      <itunes:duration>2999</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[55c8bb6c-878b-11f0-8ef9-db7a46549d94]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI5276137366.mp3?updated=1756783315" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>n8n CEO Jan Oberhauser on Building the Universal AI Automation Layer</title>
      <description>When the AI wave hit, n8n founder Jan Oberhauser faced a critical choice: become irrelevant or become indispensable. He chose the latter, transforming n8n from a simple workflow tool into a comprehensive AI automation platform that lets users connect any LLM to any application. The result? Four times the revenue growth in eight months compared to the previous six years. Jan explains how n8n’s “connect everything to anything” philosophy, combined with a thriving open source community, positioned the company to ride the AI automation wave while avoiding vendor lock-in that plagues enterprise software.

Hosted by George Robson and Pat Grady, Sequoia Capital

Mentioned in this episode:


  
Model Context Protocol (MCP): Open protocol that lets AI models safely use external tools and data; n8n uses it extensively for orchestration.

  
Vector database: A database optimized for storing and searching embeddings. These “vector stores” can pair with LLMs for retrieval-augmented workflows.

  
Granola: AI productivity tool mentioned by Jan as a recent favorite. 

  
Her: A film that Jan says, “a few years ago, it was sci fi, and it’s now suddenly this thing that is just around the corner.”</description>
      <pubDate>Tue, 26 Aug 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/314fd0f8-81ec-11f0-a8b1-930b48eb68fc/image/a961dc4f2c4943b1c88ea3999538f635.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>When the AI wave hit, n8n founder Jan Oberhauser faced a critical choice: become irrelevant or become indispensable. He chose the latter, transforming n8n from a simple workflow tool into a comprehensive AI automation platform that lets users connect any LLM to any application. The result? Four times the revenue growth in eight months compared to the previous six years. Jan explains how n8n’s “connect everything to anything” philosophy, combined with a thriving open source community, positioned the company to ride the AI automation wave while avoiding vendor lock-in that plagues enterprise software.

Hosted by George Robson and Pat Grady, Sequoia Capital

Mentioned in this episode:


  
Model Context Protocol (MCP): Open protocol that lets AI models safely use external tools and data; n8n uses it extensively for orchestration.

  
Vector database: A database optimized for storing and searching embeddings. These “vector stores” can pair with LLMs for retrieval-augmented workflows.

  
Granola: AI productivity tool mentioned by Jan as a recent favorite. 

  
Her: A film that Jan says, “a few years ago, it was sci fi, and it’s now suddenly this thing that is just around the corner.”</itunes:summary>
      <content:encoded>
        <![CDATA[<p>When the AI wave hit, n8n founder Jan Oberhauser faced a critical choice: become irrelevant or become indispensable. He chose the latter, transforming n8n from a simple workflow tool into a comprehensive AI automation platform that lets users connect any LLM to any application. The result? Four times the revenue growth in eight months compared to the previous six years. Jan explains how n8n’s “connect everything to anything” philosophy, combined with a thriving open source community, positioned the company to ride the AI automation wave while avoiding vendor lock-in that plagues enterprise software.</p>
<p>Hosted by George Robson and Pat Grady, Sequoia Capital</p>
<p><u>Mentioned in this episode:</u></p>
<ul>
  <li>
<a href="https://modelcontextprotocol.io/"><u>Model Context Protocol (MCP)</u></a>: Open protocol that lets AI models safely use external tools and data; n8n uses it extensively for orchestration.</li>
  <li>
<a href="https://en.wikipedia.org/wiki/Vector_database"><u>Vector database</u></a>: A database optimized for storing and searching embeddings. These “vector stores” can pair with LLMs for retrieval-augmented workflows.</li>
  <li>
<a href="https://www.granola.ai/"><u>Granola</u></a>: AI productivity tool mentioned by Jan as a recent favorite. </li>
  <li>
<a href="https://en.wikipedia.org/wiki/Her_(2013_film)"><u>Her</u></a>: A film that Jan says, “a few years ago, it was sci fi, and it’s now suddenly this thing that is just around the corner.”</li>
</ul>]]>
      </content:encoded>
      <itunes:duration>2144</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[314fd0f8-81ec-11f0-a8b1-930b48eb68fc]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI4430428192.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Scaling the ‘Cursor for Slides’ to $50M ARR: Gamma founder Jon Noronha</title>
      <description>Before ChatGPT made AI mainstream, John Noronha was building Gamma with a simple insight: everyone hates making slides but needs visual communication for high-stakes ideas. His background at Optimizely proved crucial as Gamma became a testing laboratory for AI models, running hundreds of experiments to discover that Claude excels at creative taste, Gemini wins on cost efficiency, and reasoning models actually hurt creativity. John explains how solving their own blank page problem inadvertently solved it for millions of users, turning a near-failing startup into a cash flow positive platform with 250 million presentations created. He discusses competing with PowerPoint's 500 million users while expanding beyond slides into documents, websites and visual storytelling.

Hosted by Sonya Huang, Sequoia Capital</description>
      <pubDate>Tue, 19 Aug 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/d98ee368-7c6e-11f0-9648-8fbbb6ccd4e1/image/c8c7f1bd8ee7cce4d45b1f14531a6147.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Before ChatGPT made AI mainstream, Jon Noronha was building Gamma with a simple insight: everyone hates making slides but needs visual communication for high-stakes ideas. His background at Optimizely proved crucial as Gamma became a testing laboratory for AI models, running hundreds of experiments to discover that Claude excels at creative taste, Gemini wins on cost efficiency, and reasoning models actually hurt creativity. Jon explains how solving their own blank page problem inadvertently solved it for millions of users, turning a near-failing startup into a cash flow positive platform with 250 million presentations created. He discusses competing with PowerPoint's 500 million users while expanding beyond slides into documents, websites and visual storytelling.

Hosted by Sonya Huang, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Before ChatGPT made AI mainstream, Jon Noronha was building Gamma with a simple insight: everyone hates making slides but needs visual communication for high-stakes ideas. His background at Optimizely proved crucial as Gamma became a testing laboratory for AI models, running hundreds of experiments to discover that Claude excels at creative taste, Gemini wins on cost efficiency, and reasoning models actually hurt creativity. Jon explains how solving their own blank page problem inadvertently solved it for millions of users, turning a near-failing startup into a cash flow positive platform with 250 million presentations created. He discusses competing with PowerPoint's 500 million users while expanding beyond slides into documents, websites and visual storytelling.</p>
<p>Hosted by Sonya Huang, Sequoia Capital</p>]]>
      </content:encoded>
      <itunes:duration>1805</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d98ee368-7c6e-11f0-9648-8fbbb6ccd4e1]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI6180959104.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Delphi’s Dara Ladjevardian: How AI Digital Minds Can Scale Human Connection</title>
      <description>Dara Ladjevardian, founder and CEO of Delphi, is creating digital minds that allow people to scale their thoughts and availability without replacing human connection. Inspired by Ray Kurzweil’s theory of mind as a hierarchy of pattern recognizers, Dara built an adaptive temporal knowledge graph that captures how people think and reason. From helping CEOs train new hires to enabling coaches to monetize their expertise 24/7, Delphi represents a new form of conversational media. Dara explains why authentic human representation matters, how digital minds actually increase desire for real human connection, and why he believes 2026 will be the tipping point for adoption for digital minds.



Hosted by Sonya Huang and Jess Lee, Sequoia Capital



Mentioned in this episode:


  
How to Create a Mind: 2012 book by Ray Kurzweil that inspired Dara

  
The Memoirs of Akbar Ladjevardian: 2008 book about Dara’s grandfather, an Iranian industrialist, that led him to create his first “digital mind”

  
Build: 2022 book by Tony Fadell that refers to itself as “a mentor in a box”; another inspiration for Dara

  
The 2 Sigma Problem: 1984 paper by Benjamin Bloom about how students who receive one-on-one tutoring perform two standard deviations better than students educated in a classroom environment</description>
      <pubDate>Tue, 12 Aug 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/ea9afc9e-76e4-11f0-8392-df12cf983147/image/0090a1e6d4c24237be0e60ddd851c26c.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Dara Ladjevardian, founder and CEO of Delphi, is creating digital minds that allow people to scale their thoughts and availability without replacing human connection. Inspired by Ray Kurzweil’s theory of mind as a hierarchy of pattern recognizers, Dara built an adaptive temporal knowledge graph that captures how people think and reason. From helping CEOs train new hires to enabling coaches to monetize their expertise 24/7, Delphi represents a new form of conversational media. Dara explains why authentic human representation matters, how digital minds actually increase desire for real human connection, and why he believes 2026 will be the tipping point for adoption of digital minds.



Hosted by Sonya Huang and Jess Lee, Sequoia Capital



Mentioned in this episode:


  
How to Create a Mind: 2012 book by Ray Kurzweil that inspired Dara

  
The Memoirs of Akbar Ladjevardian: 2008 book about Dara’s grandfather, an Iranian industrialist, that led him to create his first “digital mind”

  
Build: 2022 book by Tony Fadell that refers to itself as “a mentor in a box”; another inspiration for Dara

  
The 2 Sigma Problem: 1984 paper by Benjamin Bloom about how students who receive one-on-one tutoring perform two standard deviations better than students educated in a classroom environment</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Dara Ladjevardian, founder and CEO of Delphi, is creating digital minds that allow people to scale their thoughts and availability without replacing human connection. Inspired by Ray Kurzweil’s theory of mind as a hierarchy of pattern recognizers, Dara built an adaptive temporal knowledge graph that captures how people think and reason. From helping CEOs train new hires to enabling coaches to monetize their expertise 24/7, Delphi represents a new form of conversational media. Dara explains why authentic human representation matters, how digital minds actually increase desire for real human connection, and why he believes 2026 will be the tipping point for adoption of digital minds.</p>
<p><br></p>
<p>Hosted by Sonya Huang and Jess Lee, Sequoia Capital</p>
<p><br></p>
<p><u>Mentioned in this episode:</u></p>
<ul>
  <li>
<a href="https://en.wikipedia.org/wiki/How_to_Create_a_Mind"><em>How to Create a Mind</em></a>: 2012 book by Ray Kurzweil that inspired Dara</li>
  <li>
<a href="https://www.google.com/books/edition/The_Memoirs_of_Akbar_Ladjevardian/P6IPLAAACAAJ?hl=en"><em>The Memoirs of Akbar Ladjevardian</em></a>: 2008 book about Dara’s grandfather, an Iranian industrialist, that led him to create his first “digital mind”</li>
  <li>
<a href="https://www.buildc.com/the-book"><em>Build</em></a>: 2022 book by Tony Fadell that refers to itself as “a mentor in a box”; another inspiration for Dara</li>
  <li>
<a href="https://web.mit.edu/5.95/www/readings/bloom-two-sigma.pdf"><u>The 2 Sigma Problem</u></a>: 1984 paper by Benjamin Bloom about how students who receive one-on-one tutoring perform two standard deviations better than students educated in a classroom environment</li>
</ul>
]]>
      </content:encoded>
      <itunes:duration>2345</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ea9afc9e-76e4-11f0-8392-df12cf983147]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI3407041036.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Vercel CEO Guillermo Rauch: Building the Generative Web with AI</title>
      <description>Vercel CEO Guillermo Rauch has spent years obsessing over reducing the friction between having an idea and getting it online. Now with AI, he's achieving something even more ambitious: making software creation accessible to anyone with a keyboard. Guillermo explains how v0 has grown to 3 million users by focusing on reliability and quality, why ChatGPT has become their fastest-growing customer acquisition channel, and how AI is enabling “virtual coworkers” across design, development, and marketing. He shares his contrarian view that the future belongs to ephemeral, generated-on-demand applications rather than traditional installed software, and why he believes we're on the cusp of the biggest transformation to the web in its history.

Hosted by Sonya Huang and Pat Grady, Sequoia Capital</description>
      <pubDate>Tue, 05 Aug 2025 15:16:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4c5ae996-6ca7-11f0-b077-331c149643a1/image/b325f0ed11675fb98e197a9005f677ee.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Vercel CEO Guillermo Rauch has spent years obsessing over reducing the friction between having an idea and getting it online. Now with AI, he's achieving something even more ambitious: making software creation accessible to anyone with a keyboard. Guillermo explains how v0 has grown to 3 million users by focusing on reliability and quality, why ChatGPT has become their fastest-growing customer acquisition channel, and how AI is enabling “virtual coworkers” across design, development, and marketing. He shares his contrarian view that the future belongs to ephemeral, generated-on-demand applications rather than traditional installed software, and why he believes we're on the cusp of the biggest transformation to the web in its history.

Hosted by Sonya Huang and Pat Grady, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Vercel CEO Guillermo Rauch has spent years obsessing over reducing the friction between having an idea and getting it online. Now with AI, he's achieving something even more ambitious: making software creation accessible to anyone with a keyboard. Guillermo explains how v0 has grown to 3 million users by focusing on reliability and quality, why ChatGPT has become their fastest-growing customer acquisition channel, and how AI is enabling “virtual coworkers” across design, development, and marketing. He shares his contrarian view that the future belongs to ephemeral, generated-on-demand applications rather than traditional installed software, and why he believes we're on the cusp of the biggest transformation to the web in its history.</p>
<p>Hosted by Sonya Huang and Pat Grady, Sequoia Capital</p>
]]>
      </content:encoded>
      <itunes:duration>3659</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4c5ae996-6ca7-11f0-b077-331c149643a1]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI6550481372.mp3?updated=1753812926" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>OpenAI’s IMO Team on Why Models Are Finally Solving Elite-Level Math</title>
      <description>In just two months, a scrappy three-person team at OpenAI sprinted to fulfill what the entire AI field has been chasing for years—gold-level performance on the International Mathematical Olympiad problems. Alex Wei, Sheryl Hsu and Noam Brown discuss their unique approach using general-purpose reinforcement learning techniques on hard-to-verify tasks rather than formal verification tools. The model showed surprising self-awareness by admitting it couldn’t solve problem six, and revealed the humbling gap between solving competition problems and genuine mathematical research breakthroughs.

Hosted by Sonya Huang, Sequoia Capital</description>
      <pubDate>Wed, 30 Jul 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5d852e0c-6be4-11f0-bfc8-17c4dd88558b/image/41a061da9efaeebf444f61001cf9aa79.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>In just two months, a scrappy three-person team at OpenAI sprinted to achieve what the entire AI field has been chasing for years—gold-level performance on the International Mathematical Olympiad problems. Alex Wei, Sheryl Hsu and Noam Brown discuss their unique approach using general-purpose reinforcement learning techniques on hard-to-verify tasks rather than formal verification tools. The model showed surprising self-awareness by admitting it couldn’t solve problem six, and revealed the humbling gap between solving competition problems and genuine mathematical research breakthroughs.

Hosted by Sonya Huang, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In just two months, a scrappy three-person team at OpenAI sprinted to achieve what the entire AI field has been chasing for years—gold-level performance on the International Mathematical Olympiad problems. Alex Wei, Sheryl Hsu and Noam Brown discuss their unique approach using general-purpose reinforcement learning techniques on hard-to-verify tasks rather than formal verification tools. The model showed surprising self-awareness by admitting it couldn’t solve problem six, and revealed the humbling gap between solving competition problems and genuine mathematical research breakthroughs.</p>
<p>Hosted by Sonya Huang, Sequoia Capital</p>
]]>
      </content:encoded>
      <itunes:duration>1810</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5d852e0c-6be4-11f0-bfc8-17c4dd88558b]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI5438545942.mp3?updated=1753741141" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>OpenAI Just Released ChatGPT Agent, Its Most Powerful Agent Yet</title>
      <description>Isa Fulford, Casey Chu, and Edward Sun from OpenAI's ChatGPT agent team reveal how they combined Deep Research and Operator into a single, powerful AI agent that can perform complex, multi-step tasks lasting up to an hour. By giving the model access to a virtual computer with text browsing, visual browsing, terminal access, and API integrations—all with shared state—they've created what may be the first truly embodied AI assistant. The team discusses their reinforcement learning approach, safety mitigations for real-world actions, and how small teams can build transformative AI products through close research-applied collaboration.

Hosted by Sonya Huang and Lauren Reeder, Sequoia Capital</description>
      <pubDate>Tue, 22 Jul 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Isa Fulford, Casey Chu, and Edward Sun from OpenAI's ChatGPT agent team reveal how they combined Deep Research and Operator into a single, powerful AI agent that can perform complex, multi-step tasks lasting up to an hour. By giving the model access to a virtual computer with text browsing, visual browsing, terminal access, and API integrations—all with shared state—they've created what may be the first truly embodied AI assistant. The team discusses their reinforcement learning approach, safety mitigations for real-world actions, and how small teams can build transformative AI products through close research-applied collaboration.

Hosted by Sonya Huang and Lauren Reeder, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Isa Fulford, Casey Chu, and Edward Sun from OpenAI's ChatGPT agent team reveal how they combined Deep Research and Operator into a single, powerful AI agent that can perform complex, multi-step tasks lasting up to an hour. By giving the model access to a virtual computer with text browsing, visual browsing, terminal access, and API integrations—all with shared state—they've created what may be the first truly embodied AI assistant. The team discusses their reinforcement learning approach, safety mitigations for real-world actions, and how small teams can build transformative AI products through close research-applied collaboration.</p>
<p>Hosted by Sonya Huang and Lauren Reeder, Sequoia Capital</p>
]]>
      </content:encoded>
      <itunes:duration>2256</itunes:duration>
      <guid isPermaLink="false"><![CDATA[fcf7d8e6-667a-11f0-b529-03460f90d957]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI3656372926.mp3?updated=1753134664" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>DeepMind's Pushmeet Kohli on AI's Scientific Revolution</title>
      <description>Pushmeet Kohli leads AI for Science at DeepMind, where his team has created AlphaEvolve, an AI system that discovers entirely new algorithms and proves mathematical results that have eluded researchers for decades. From improving 50-year-old matrix multiplication algorithms to generating interpretable code for complex problems like data center scheduling, AlphaEvolve represents a new paradigm where LLMs coupled with evolutionary search can outperform human experts. Pushmeet explains the technical architecture behind these breakthroughs and shares insights from collaborations with mathematicians like Terence Tao, while discussing how AI is accelerating scientific discovery across domains from chip design to materials science.

Hosted by Sonya Huang and Pat Grady, Sequoia Capital</description>
      <pubDate>Fri, 11 Jul 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/374f25ea-5dc9-11f0-8673-e3429a658718/image/7cc505660e55294a4e5ecbbb0a3293b9.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Pushmeet Kohli leads AI for Science at DeepMind, where his team has created AlphaEvolve, an AI system that discovers entirely new algorithms and proves mathematical results that have eluded researchers for decades. From improving 50-year-old matrix multiplication algorithms to generating interpretable code for complex problems like data center scheduling, AlphaEvolve represents a new paradigm where LLMs coupled with evolutionary search can outperform human experts. Pushmeet explains the technical architecture behind these breakthroughs and shares insights from collaborations with mathematicians like Terence Tao, while discussing how AI is accelerating scientific discovery across domains from chip design to materials science.

Hosted by Sonya Huang and Pat Grady, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Pushmeet Kohli leads AI for Science at DeepMind, where his team has created AlphaEvolve, an AI system that discovers entirely new algorithms and proves mathematical results that have eluded researchers for decades. From improving 50-year-old matrix multiplication algorithms to generating interpretable code for complex problems like data center scheduling, AlphaEvolve represents a new paradigm where LLMs coupled with evolutionary search can outperform human experts. Pushmeet explains the technical architecture behind these breakthroughs and shares insights from collaborations with mathematicians like Terence Tao, while discussing how AI is accelerating scientific discovery across domains from chip design to materials science.</p>
<p>Hosted by Sonya Huang and Pat Grady, Sequoia Capital</p>
]]>
      </content:encoded>
      <itunes:duration>2473</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[374f25ea-5dc9-11f0-8673-e3429a658718]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI8285386832.mp3?updated=1752178226" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Mapping the Mind of a Neural Net: Goodfire’s Eric Ho on the Future of Interpretability</title>
      <description>Eric Ho is building Goodfire to solve one of AI’s most critical challenges: understanding what’s actually happening inside neural networks. His team is developing techniques to understand, audit and edit neural networks at the feature level. Eric discusses breakthrough results in resolving superposition through sparse autoencoders, successful model editing demonstrations and real-world applications in genomics with Arc Institute's DNA foundation models. He argues that interpretability will be critical as AI systems become more powerful and take on mission-critical roles in society.

Hosted by Sonya Huang and Roelof Botha, Sequoia Capital

Mentioned in this episode:

- Mech interp: Mechanistic interpretability, list of important papers here
- Phineas Gage: 19th-century railroad foreman who lost most of his brain’s left frontal lobe in an accident. Became a famous case study in neuroscience.
- Human Genome Project: Effort from 1990 to 2003 to generate the first sequence of the human genome, which accelerated the study of human biology
- Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
- Zoom In: An Introduction to Circuits: First important mechanistic interpretability paper, from OpenAI in 2020
- Superposition: Concept from physics applied to interpretability that allows neural networks to simulate larger networks (e.g. more concepts than neurons)
- Apollo Research: AI safety company that designs AI model evaluations and conducts interpretability research
- Towards Monosemanticity: Decomposing Language Models With Dictionary Learning: 2023 Anthropic paper that uses a sparse autoencoder to extract interpretable features; followed by Scaling Monosemanticity
- Under the Hood of a Reasoning Model: 2025 Goodfire paper that interprets DeepSeek’s reasoning model R1
- Auto-interpretability: The ability to use LLMs to automatically write explanations for the behavior of neurons in LLMs
- Interpreting Evo 2: Arc Institute’s Next-Generation Genomic Foundation Model (see episode with Arc co-founder Patrick Hsu)
- Paint with Ember: Canvas interface from Goodfire that lets you steer an LLM’s visual output in real time (paper here)
- Model diffing: Interpreting how a model differs from checkpoint to checkpoint during finetuning
- Feature steering: The ability to change the style of LLM output by up- or down-weighting features (e.g. talking like a pirate vs. factual information about the Andromeda Galaxy)
- Weight-based interpretability: Method for directly decomposing neural network parameters into mechanistic components, instead of using features
- The Urgency of Interpretability: Essay by Anthropic founder Dario Amodei
- On the Biology of a Large Language Model: Goodfire collaboration with Anthropic</description>
      <pubDate>Tue, 08 Jul 2025 17:30:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/cf8b451e-5b67-11f0-8e44-2339eea2f29f/image/cdfabfe9a60345b5248204b077f9693e.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Eric Ho is building Goodfire to solve one of AI’s most critical challenges: understanding what’s actually happening inside neural networks. His team is developing techniques to understand, audit and edit neural networks at the feature level. Eric discusses breakthrough results in resolving superposition through sparse autoencoders, successful model editing demonstrations and real-world applications in genomics with Arc Institute's DNA foundation models. He argues that interpretability will be critical as AI systems become more powerful and take on mission-critical roles in society.

Hosted by Sonya Huang and Roelof Botha, Sequoia Capital

Mentioned in this episode:

- Mech interp: Mechanistic interpretability, list of important papers here
- Phineas Gage: 19th-century railroad foreman who lost most of his brain’s left frontal lobe in an accident. Became a famous case study in neuroscience.
- Human Genome Project: Effort from 1990 to 2003 to generate the first sequence of the human genome, which accelerated the study of human biology
- Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
- Zoom In: An Introduction to Circuits: First important mechanistic interpretability paper, from OpenAI in 2020
- Superposition: Concept from physics applied to interpretability that allows neural networks to simulate larger networks (e.g. more concepts than neurons)
- Apollo Research: AI safety company that designs AI model evaluations and conducts interpretability research
- Towards Monosemanticity: Decomposing Language Models With Dictionary Learning: 2023 Anthropic paper that uses a sparse autoencoder to extract interpretable features; followed by Scaling Monosemanticity
- Under the Hood of a Reasoning Model: 2025 Goodfire paper that interprets DeepSeek’s reasoning model R1
- Auto-interpretability: The ability to use LLMs to automatically write explanations for the behavior of neurons in LLMs
- Interpreting Evo 2: Arc Institute’s Next-Generation Genomic Foundation Model (see episode with Arc co-founder Patrick Hsu)
- Paint with Ember: Canvas interface from Goodfire that lets you steer an LLM’s visual output in real time (paper here)
- Model diffing: Interpreting how a model differs from checkpoint to checkpoint during finetuning
- Feature steering: The ability to change the style of LLM output by up- or down-weighting features (e.g. talking like a pirate vs. factual information about the Andromeda Galaxy)
- Weight-based interpretability: Method for directly decomposing neural network parameters into mechanistic components, instead of using features
- The Urgency of Interpretability: Essay by Anthropic founder Dario Amodei
- On the Biology of a Large Language Model: Goodfire collaboration with Anthropic</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Eric Ho is building Goodfire to solve one of AI’s most critical challenges: understanding what’s actually happening inside neural networks. His team is developing techniques to understand, audit and edit neural networks at the feature level. Eric discusses breakthrough results in resolving superposition through sparse autoencoders, successful model editing demonstrations and real-world applications in genomics with Arc Institute's DNA foundation models. He argues that interpretability will be critical as AI systems become more powerful and take on mission-critical roles in society.</p>
<p>Hosted by Sonya Huang and Roelof Botha, Sequoia Capital</p>
<p><u>Mentioned in this episode:</u></p>
<ul>
  <li>
<p><a href="https://en.wikipedia.org/wiki/Mechanistic_interpretability"><u>Mech interp</u></a>: Mechanistic interpretability, list of important papers <a href="https://www.alignmentforum.org/posts/NfFST5Mio7BCAQHPA/an-extremely-opinionated-annotated-list-of-my-favourite-1"><u>here</u></a></p>
</li>
  <li>
<p><a href="https://en.wikipedia.org/wiki/Phineas_Gage"><u>Phineas Gage</u></a>: 19th-century railroad foreman who lost most of his brain’s left frontal lobe in an accident. Became a famous case study in neuroscience.</p>
</li>
  <li>
<p><a href="https://www.genome.gov/human-genome-project"><u>Human Genome Project</u></a>: Effort from 1990-2003 to generate the first sequence of the human genome which accelerated the study of human biology</p>
</li>
  <li>
<p><a href="https://www.emergent-misalignment.com/"><u>Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs</u></a></p>
</li>
  <li>
<p><a href="https://distill.pub/2020/circuits/zoom-in/"><u>Zoom In: An Introduction to Circuits</u></a>: First important mechanistic interpretability paper from OpenAI in 2020</p>
</li>
  <li>
<p><a href="https://transformer-circuits.pub/2022/toy_model/index.html"><u>Superposition</u></a>: Concept from physics applied to interpretability that allows neural networks to simulate larger networks (e.g. more concepts than neurons)</p>
</li>
  <li>
<p><a href="https://www.apolloresearch.ai/"><u>Apollo Research</u></a>: AI safety company that designs AI model evaluations and conducts interpretability research</p>
</li>
  <li>
<p><a href="https://transformer-circuits.pub/2023/monosemantic-features"><u>Towards Monosemanticity: Decomposing Language Models With Dictionary Learning</u></a>. 2023 Anthropic paper that uses a <a href="https://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf"><u>sparse autoencoder</u></a> to extract interpretable features; followed by <a href="https://transformer-circuits.pub/2024/scaling-monosemanticity/"><u>Scaling Monosemanticity</u></a></p>
</li>
  <li>
<p><a href="https://www.goodfire.ai/blog/under-the-hood-of-a-reasoning-model"><u>Under the Hood of a Reasoning Model</u></a>: 2025 Goodfire paper that interprets DeepSeek’s reasoning model R1</p>
</li>
  <li>
<p><a href="https://openai.com/index/language-models-can-explain-neurons-in-language-models/"><u>Auto-interpretability</u></a>: The ability to use LLMs to automatically write explanations for the behavior of neurons in LLMs</p>
</li>
  <li>
<p><a href="https://www.goodfire.ai/blog/interpreting-evo-2"><u>Interpreting Evo 2: Arc Institute's Next-Generation Genomic Foundation Model</u></a>. (see episode with Arc co-founder <a href="https://youtu.be/v-_58dabswU?si=pNY_mOau5cXa8Jzr"><u>Patrick Hsu</u></a>)</p>
</li>
  <li>
<p><a href="https://paint.goodfire.ai/"><u>Paint with Ember</u></a>: Canvas interface from Goodfire that lets you steer an LLM’s visual output in real time (paper <a href="https://www.goodfire.ai/blog/painting-with-concepts"><u>here</u></a>)</p>
</li>
  <li>
<p><a href="https://transformer-circuits.pub/2024/model-diffing/index.html"><u>Model diffing</u></a>: Interpreting how a model differs from checkpoint to checkpoint during finetuning</p>
</li>
  <li>
<p><a href="https://www.goodfire.ai/papers/mapping-latent-spaces-llama"><u>Feature steering</u></a>: The ability to change the style of LLM output by up or down weighting features (e.g. talking like a pirate vs factual information about the Andromeda Galaxy)</p>
</li>
  <li>
<p><a href="https://www.lesswrong.com/posts/EPefYWjuHNcNH4C7E/attribution-based-parameter-decomposition"><u>Weight based interpretability</u></a>: Method for directly decomposing neural network parameters into mechanistic components, instead of using features</p>
</li>
  <li>
<p><a href="https://www.darioamodei.com/post/the-urgency-of-interpretability"><u>The Urgency of Interpretability</u></a>: Essay by Anthropic founder Dario Amodei</p>
</li>
  <li>
<p><a href="https://transformer-circuits.pub/2025/attribution-graphs/biology.html"><u>On the Biology of a Large Language Model</u></a>: Goodfire collaboration with Anthropic</p>
</li>
</ul>]]>
      </content:encoded>
      <itunes:duration>2827</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[cf8b451e-5b67-11f0-8e44-2339eea2f29f]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI9154425844.mp3?updated=1751996708" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>ElevenLabs’ Mati Staniszewski: Why Voice Will Be the Fundamental Interface for Tech</title>
      <description>Mati Staniszewski, co-founder and CEO of ElevenLabs, explains how staying laser-focused on audio innovation has allowed his company to thrive despite the push into multimodality from foundation models. From a high school friendship in Poland to building one of the fastest-growing AI companies, Mati shares how ElevenLabs transformed text-to-speech with contextual understanding and emotional delivery. He discusses the company's viral moments (from Harry Potter by Balenciaga to powering Darth Vader in Fortnite), and explains how ElevenLabs is creating the infrastructure for voice agents and real-time translation that could eliminate language barriers worldwide.

Hosted by: Pat Grady, Sequoia Capital

Mentioned in this episode:

- Attention Is All You Need: The original Transformers paper
- Tortoise-tts: Open source text-to-speech model that was a starting point for ElevenLabs (which now maintains a v2)
- Harry Potter by Balenciaga: ElevenLabs’ first big viral moment, from 2023
- The first AI that can laugh: 2022 blog post backing up ElevenLabs’ claim of laughter (it got better in v3)
- Darth Vader’s voice in Fortnite: ElevenLabs used actual voice clips provided by James Earl Jones before he died
- Lex Fridman interviews Prime Minister Modi: ElevenLabs enabled Fridman to speak in Hindi and Modi to speak in English
- Time Person of the Year 2024: ElevenLabs-powered experiment with “conversational journalism”
- Iconic Voices: Richard Feynman, Deepak Chopra, Maya Angelou and more, available in the ElevenLabs reader app
- SIP trunking: A method of delivering voice, video, and other unified communications over the internet using the Session Initiation Protocol (SIP)
- Genesys: Leading enterprise CX platform for agentic AI
- Hitchhiker’s Guide to the Galaxy: Comedy/science-fiction series by Douglas Adams that contains the concept of the Babel Fish instantaneous translator, cited by Mati
- FYI: Communication and productivity app for creatives that Mati uses, founded by will.i.am
- Lovable: Prototyping app that Mati loves</description>
      <pubDate>Tue, 01 Jul 2025 15:56:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/1873606c-55f1-11f0-a023-dff571f953f4/image/135df54b0081cf0440bce131492dad73.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Mati Staniszewski, co-founder and CEO of ElevenLabs, explains how staying laser-focused on audio innovation has allowed his company to thrive despite the push into multimodality from foundation models. From a high school friendship in Poland to building one of the fastest-growing AI companies, Mati shares how ElevenLabs transformed text-to-speech with contextual understanding and emotional delivery. He discusses the company's viral moments (from Harry Potter by Balenciaga to powering Darth Vader in Fortnite), and explains how ElevenLabs is creating the infrastructure for voice agents and real-time translation that could eliminate language barriers worldwide.

Hosted by: Pat Grady, Sequoia Capital

Mentioned in this episode:

- Attention Is All You Need: The original Transformers paper
- Tortoise-tts: Open source text-to-speech model that was a starting point for ElevenLabs (which now maintains a v2)
- Harry Potter by Balenciaga: ElevenLabs’ first big viral moment, from 2023
- The first AI that can laugh: 2022 blog post backing up ElevenLabs’ claim of laughter (it got better in v3)
- Darth Vader’s voice in Fortnite: ElevenLabs used actual voice clips provided by James Earl Jones before he died
- Lex Fridman interviews Prime Minister Modi: ElevenLabs enabled Fridman to speak in Hindi and Modi to speak in English
- Time Person of the Year 2024: ElevenLabs-powered experiment with “conversational journalism”
- Iconic Voices: Richard Feynman, Deepak Chopra, Maya Angelou and more, available in the ElevenLabs reader app
- SIP trunking: A method of delivering voice, video, and other unified communications over the internet using the Session Initiation Protocol (SIP)
- Genesys: Leading enterprise CX platform for agentic AI
- Hitchhiker’s Guide to the Galaxy: Comedy/science-fiction series by Douglas Adams that contains the concept of the Babel Fish instantaneous translator, cited by Mati
- FYI: Communication and productivity app for creatives that Mati uses, founded by will.i.am
- Lovable: Prototyping app that Mati loves</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Mati Staniszewski, co-founder and CEO of ElevenLabs, explains how staying laser-focused on audio innovation has allowed his company to thrive despite the push into multimodality from foundation models. From a high school friendship in Poland to building one of the fastest-growing AI companies, Mati shares how ElevenLabs transformed text-to-speech with contextual understanding and emotional delivery. He discusses the company's viral moments (from Harry Potter by Balenciaga to powering Darth Vader in Fortnite), and explains how ElevenLabs is creating the infrastructure for voice agents and real-time translation that could eliminate language barriers worldwide.</p>
<p>Hosted by: Pat Grady, Sequoia Capital</p>
<p><u>Mentioned in this episode:</u></p>
<ul>
  <li>
<p><a href="https://arxiv.org/abs/1706.03762"><u>Attention Is All You Need</u></a>: The original Transformers paper</p>
</li>
  <li>
<p><a href="https://github.com/neonbjb/tortoise-tts"><u>Tortoise-tts</u></a>: Open source text to speech model that was a starting point for ElevenLabs (which now maintains a <a href="https://elevenlabs.io/blog/tortoise-tts-v2?utm_source=google&amp;utm_medium=cpc&amp;utm_campaign=us_nonbrandsearch_tts_english&amp;utm_id=20586445157&amp;utm_term=ai%20audiobook%20narration&amp;utm_content=tts_-_narrator&amp;gad_campaignid=20586445157"><u>v2</u></a>)</p>
</li>
  <li>
<p><a href="https://www.youtube.com/watch?v=iE39q-IKOzA"><u>Harry Potter by Balenciaga</u></a>: ElevenLabs’ first big viral moment from 2023</p>
</li>
  <li>
<p><a href="https://elevenlabs.io/blog/the_first_ai_that_can_laugh"><u>The first AI that can laugh</u></a>: 2022 blog post backing up ElevenLabs’ claim of <a href="https://x.com/elevenlabsio/status/1595850107865501703"><u>laughter</u></a> (it <a href="https://x.com/flavioschneide/status/1930692392065355971"><u>got better in v3</u></a>)</p>
</li>
  <li>
<p><a href="https://www.fortnite.com/news/this-will-be-a-day-long-remembered-speak-with-darth-vader-in-fortnite?lang=en-US"><u>Darth Vader's voice in Fortnite</u></a>: ElevenLabs used actual voice clips provided by James Earl Jones before he died</p>
</li>
  <li>
<p><a href="https://www.youtube.com/watch?v=ZPUtA3W-7_I"><u>Lex Fridman interviews Prime Minister Modi</u></a>: ElevenLabs enabled Fridman to speak in Hindi and Modi to speak in English.</p>
</li>
  <li>
<p><a href="https://time.com/7200212/person-of-the-year-2024-donald-trump/"><u>Time Person of the Year 2024</u></a>: ElevenLabs-powered experiment with “conversational journalism”</p>
</li>
  <li>
<p><a href="https://elevenlabs.io/iconic-voices"><u>Iconic Voices</u></a>: Richard Feynman, Deepak Chopra, Maya Angelou and more available in <a href="https://apps.apple.com/us/app/elevenreader-text-to-speech/id6479373050"><u>ElevenLabs reader app</u></a></p>
</li>
  <li>
<p><a href="https://en.wikipedia.org/wiki/SIP_trunking"><u>SIP trunking</u></a>: a method of delivering voice, video, and other unified communications over the internet using the Session Initiation Protocol (SIP)</p>
</li>
  <li>
<p><a href="https://www.genesys.com/"><u>Genesys</u></a>: Leading enterprise CX platform for agentic AI</p>
</li>
  <li>
<p><a href="https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy"><em>Hitchhiker’s Guide to the Galaxy</em></a>: Comedy/science-fiction series by Douglas Adams that contains the concept of the Babel Fish instantaneous translator, cited by Mati</p>
</li>
  <li>
<p><a href="https://fyi.me/"><u>FYI</u></a>: communication and productivity app for creatives that Mati uses, founded by will.i.am</p>
</li>
  <li>
<p><a href="https://lovable.dev/"><u>Lovable</u></a>: prototyping app that Mati loves</p>
</li>
</ul>]]>
      </content:encoded>
      <itunes:duration>3593</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1873606c-55f1-11f0-a023-dff571f953f4]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI9463195605.mp3?updated=1751386277" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>From DevOps ‘Heart Attacks’ to AI-Powered Diagnostics With Traversal’s AI Agents</title>
      <description>Anish Agarwal and Raj Agrawal, co-founders of Traversal, are transforming how enterprises handle critical system failures. Their AI agents can perform root cause analysis in 2-4 minutes instead of the hours typically spent by teams of engineers scrambling in Slack channels. Drawing from their academic research in causal inference and gene regulatory networks, they’ve built agents that systematically traverse complex dependency maps to identify the smoking gun logs and problematic code changes. As AI-generated code becomes more prevalent, Traversal addresses a growing challenge: debugging systems where humans didn’t write the original code, making AI-powered troubleshooting essential for maintaining reliable software at scale.

Hosted by Sonya Huang and Bogomil Balkansky, Sequoia Capital

Mentioned in this episode:

- SRE: Site reliability engineering. The function within engineering teams that monitors and improves the availability and performance of software systems and services.
- Golden signals: Four key metrics used by site reliability engineers (SREs) to monitor the health and performance of IT systems: latency, traffic, errors and saturation.
- MELT data: Metrics, events, logs, and traces. A framework for observability.
- The Bitter Lesson: Another mention of Turing Award winner Rich Sutton’s influential post.</description>
      <pubDate>Tue, 24 Jun 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/c19c7236-506b-11f0-b377-33261e3cdf36/image/930520e50bc473a4a27a21d64e05dd95.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Anish Agarwal and Raj Agrawal, co-founders of Traversal, are transforming how enterprises handle critical system failures. Their AI agents can perform root cause analysis in 2-4 minutes instead of the hours typically spent by teams of engineers scrambling in Slack channels. Drawing from their academic research in causal inference and gene regulatory networks, they’ve built agents that systematically traverse complex dependency maps to identify the smoking gun logs and problematic code changes. As AI-generated code becomes more prevalent, Traversal addresses a growing challenge: debugging systems where humans didn’t write the original code, making AI-powered troubleshooting essential for maintaining reliable software at scale.

Hosted by Sonya Huang and Bogomil Balkansky, Sequoia Capital

Mentioned in this episode:

- SRE: Site reliability engineering. The function within engineering teams that monitors and improves the availability and performance of software systems and services.
- Golden signals: Four key metrics used by site reliability engineers (SREs) to monitor the health and performance of IT systems: latency, traffic, errors and saturation.
- MELT data: Metrics, events, logs, and traces. A framework for observability.
- The Bitter Lesson: Another mention of Turing Award winner Rich Sutton’s influential post.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Anish Agarwal and Raj Agrawal, co-founders of Traversal, are transforming how enterprises handle critical system failures. Their AI agents can perform root cause analysis in 2-4 minutes instead of the hours typically spent by teams of engineers scrambling in Slack channels. Drawing from their academic research in causal inference and gene regulatory networks, they’ve built agents that systematically traverse complex dependency maps to identify the smoking gun logs and problematic code changes. As AI-generated code becomes more prevalent, Traversal addresses a growing challenge: debugging systems where humans didn’t write the original code, making AI-powered troubleshooting essential for maintaining reliable software at scale.</p>
<p>Hosted by Sonya Huang and Bogomil Balkansky, Sequoia Capital</p>
<p><u>Mentioned in this episode:</u></p>
<ul>
  <li>
<a href="https://en.wikipedia.org/wiki/Site_reliability_engineering"><u>SRE</u></a>: Site reliability engineering. The function within engineering teams that monitors and improves the availability and performance of software systems and services.</li>
  <li>
<a href="https://sre.google/sre-book/monitoring-distributed-systems/"><u>Golden signals</u></a>: Four key metrics used by site reliability engineers (SREs) to monitor the health and performance of IT systems: latency, traffic, errors and saturation.</li>
  <li>
<a href="https://www.splunk.com/en_us/blog/learn/melt-metrics-events-logs-traces.html"><u>MELT data</u></a>: Metrics, events, logs, and traces. A framework for observability.</li>
  <li>
<a href="http://www.incompleteideas.net/IncIdeas/BitterLesson.html"><em>The Bitter Lesson</em></a>: Another mention of Turing Award winner Rich Sutton’s influential post.</li>
</ul>]]>
      </content:encoded>
      <itunes:duration>2432</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c19c7236-506b-11f0-b377-33261e3cdf36]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI3597815854.mp3?updated=1750708889" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The Breakthroughs Needed for AGI Have Already Been Made: OpenAI Former Research Head Bob McGrew</title>
      <description>As OpenAI's former Head of Research, Bob McGrew witnessed the company's evolution from GPT-3’s breakthrough to today's reasoning models. He argues that there are three legs of the stool for AGI—Transformers, scaled pre-training, and reasoning—and that the fundamentals that will shape the next decade-plus are already in place. He thinks 2025 will be defined by reasoning while pre-training hits diminishing returns. Bob discusses why the agent economy will price services at compute costs due to near-infinite supply, fundamentally disrupting industries like law and medicine, and how his children use ChatGPT to spark curiosity and agency. From robotics breakthroughs to managing brilliant researchers, Bob offers a unique perspective on AI’s trajectory and where startups can still find defensible opportunities.

Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital



Mentioned in this episode:

- Solving Rubik’s Cube with a robot hand: OpenAI’s original robotics research
- Computer Use and Operator: Anthropic and OpenAI reasoning breakthroughs that originated with OpenAI researchers
- Skild and Physical Intelligence: Robotics-oriented companies Bob sees as well-positioned now
- Distyl: AI company founded by ex-Palantir alums to create enterprise workflows driven by proprietary data
- Member of the technical staff: Title at OpenAI designed to break down barriers between AI researchers and engineers
- Howie.ai: Scheduling app that Bob uses</description>
      <pubDate>Tue, 17 Jun 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/0ae02282-47dc-11f0-8118-9f5839eff378/image/bca2064a8b6bd75883653cad9282e9d5.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>As OpenAI's former Head of Research, Bob McGrew witnessed the company's evolution from GPT-3’s breakthrough to today's reasoning models. He argues that there are three legs of the stool for AGI—Transformers, scaled pre-training, and reasoning—and that the fundamentals that will shape the next decade-plus are already in place. He thinks 2025 will be defined by reasoning while pre-training hits diminishing returns. Bob discusses why the agent economy will price services at compute costs due to near-infinite supply, fundamentally disrupting industries like law and medicine, and how his children use ChatGPT to spark curiosity and agency. From robotics breakthroughs to managing brilliant researchers, Bob offers a unique perspective on AI’s trajectory and where startups can still find defensible opportunities.

Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital



Mentioned in this episode:

- Solving Rubik’s Cube with a robot hand: OpenAI’s original robotics research
- Computer Use and Operator: Anthropic and OpenAI reasoning breakthroughs that originated with OpenAI researchers
- Skild and Physical Intelligence: Robotics-oriented companies Bob sees as well-positioned now
- Distyl: AI company founded by ex-Palantir alums to create enterprise workflows driven by proprietary data
- Member of the technical staff: Title at OpenAI designed to break down barriers between AI researchers and engineers
- Howie.ai: Scheduling app that Bob uses</itunes:summary>
      <content:encoded>
        <![CDATA[<p>As OpenAI's former Head of Research, Bob McGrew witnessed the company's evolution from GPT-3’s breakthrough to today's reasoning models. He argues that there are three legs of the stool for AGI—Transformers, scaled pre-training, and reasoning—and that the fundamentals that will shape the next decade-plus are already in place. He thinks 2025 will be defined by reasoning while pre-training hits diminishing returns. Bob discusses why the agent economy will price services at compute costs due to near-infinite supply, fundamentally disrupting industries like law and medicine, and how his children use ChatGPT to spark curiosity and agency. From robotics breakthroughs to managing brilliant researchers, Bob offers a unique perspective on AI’s trajectory and where startups can still find defensible opportunities.</p>
<p><em>Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital</em></p>
<p><br></p>
<p><u>Mentioned in this episode</u>: </p>
<ul>
  <li>
<p><a href="https://openai.com/index/solving-rubiks-cube/"><u>Solving Rubik’s Cube with a robot hand</u></a>: OpenAI’s original robotics research</p>
</li>
  <li>
<p><a href="https://www.anthropic.com/news/3-5-models-and-computer-use"><u>Computer Use</u></a> and <a href="https://openai.com/index/introducing-operator/"><u>Operator</u></a>: Anthropic and OpenAI reasoning breakthroughs that originated with OpenAI researchers</p>
</li>
  <li>
<p><a href="https://www.skild.ai/"><u>Skild</u></a> and <a href="https://www.physicalintelligence.company/"><u>Physical Intelligence</u></a>: Robotics-oriented companies Bob sees as well-positioned now</p>
</li>
  <li>
<p><a href="https://distyl.ai/"><u>Distyl</u></a>: AI company founded by ex-Palantir alums to create enterprise workflows driven by proprietary data</p>
</li>
  <li>
<p><a href="https://x.com/petergyang/status/1728441291569307785?lang=en"><u>Member of the technical staff</u></a>: Title at OpenAI designed to break down barriers between AI researchers and engineers</p>
</li>
</ul>
<p><br><a href="http://howie.ai"><u>Howie.ai</u></a>: Scheduling app that Bob uses</p>]]>
      </content:encoded>
      <itunes:duration>2931</itunes:duration>
      <guid isPermaLink="false"><![CDATA[0ae02282-47dc-11f0-8118-9f5839eff378]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI1197287200.mp3?updated=1750138681" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>OpenAI Codex Team: From Coding Autocomplete to Asynchronous Autonomous Agents</title>
      <description>Hanson Wang and Alexander Embiricos from OpenAI's Codex team discuss their latest AI coding agent that works independently in its own environment for up to 30 minutes, generating full pull requests from simple task descriptions. They explain how they trained the model beyond competitive programming to match real-world software engineering needs, the shift from pairing with AI to delegating to autonomous agents, and their vision for a future where the majority of code is written by agents working on their own computers. The conversation covers the technical challenges of long-running inference, the importance of creating realistic training environments, and how developers are already using Codex to fix bugs and implement features at OpenAI.

Hosted by Sonya Huang and Lauren Reeder, Sequoia Capital 

Mentioned in this episode: 


  
The Culture: Sci-Fi series by Iain M. Banks portraying an optimistic view of AI




The Bitter Lesson: Influential essay by Rich Sutton on the importance of scale as a strategic unlock for AI.</description>
      <pubDate>Tue, 10 Jun 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4c94bf4a-4560-11f0-9b8d-a375401dd345/image/022e3c24eac3f6171303de98dcd1bd7b.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Hanson Wang and Alexander Embiricos from OpenAI's Codex team discuss their latest AI coding agent that works independently in its own environment for up to 30 minutes, generating full pull requests from simple task descriptions. They explain how they trained the model beyond competitive programming to match real-world software engineering needs, the shift from pairing with AI to delegating to autonomous agents, and their vision for a future where the majority of code is written by agents working on their own computers. The conversation covers the technical challenges of long-running inference, the importance of creating realistic training environments, and how developers are already using Codex to fix bugs and implement features at OpenAI.

Hosted by Sonya Huang and Lauren Reeder, Sequoia Capital 

Mentioned in this episode: 


  
The Culture: Sci-Fi series by Iain M. Banks portraying an optimistic view of AI




The Bitter Lesson: Influential essay by Rich Sutton on the importance of scale as a strategic unlock for AI.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Hanson Wang and Alexander Embiricos from OpenAI's Codex team discuss their latest AI coding agent that works independently in its own environment for up to 30 minutes, generating full pull requests from simple task descriptions. They explain how they trained the model beyond competitive programming to match real-world software engineering needs, the shift from pairing with AI to delegating to autonomous agents, and their vision for a future where the majority of code is written by agents working on their own computers. The conversation covers the technical challenges of long-running inference, the importance of creating realistic training environments, and how developers are already using Codex to fix bugs and implement features at OpenAI.</p>
<p>Hosted by Sonya Huang and Lauren Reeder, Sequoia Capital </p>
<p><u>Mentioned in this episode</u>: </p>
<ul>
  <li>
<p><a href="https://en.wikipedia.org/wiki/Culture_series"><u>The Culture</u></a>: Sci-Fi series by Iain M. Banks portraying an optimistic view of AI</p>
</li>
</ul>
<p><br><a href="http://www.incompleteideas.net/IncIdeas/BitterLesson.html"><u>The Bitter Lesson</u></a>: Influential essay by Rich Sutton on the importance of scale as a strategic unlock for AI.</p>]]>
      </content:encoded>
      <itunes:duration>2264</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4c94bf4a-4560-11f0-9b8d-a375401dd345]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI3386040744.mp3?updated=1749494336" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Google I/O Afterparty: The Future of Human-AI Collaboration, From Veo to Mariner</title>
      <description>Fresh off impressive releases at Google’s I/O event, three Google Labs leaders explain how they’re reimagining creative tools and productivity workflows. Thomas Iljic details how video generation is merging filmmaking with gaming through generative AI cameras and world-building interfaces in Whisk and Veo. Jaclyn Konzelmann demonstrates how Project Mariner evolved from a disruptive browser takeover to an intelligent background assistant that remembers context across multiple tasks. Simon Tokumine reveals NotebookLM’s expansion beyond viral audio overviews into a comprehensive platform for transforming information into personalized formats. The conversation explores the shift from prompting to showing and telling, the economics of AI-powered e-commerce, and why being “too early” has become Google Labs’ biggest challenge and advantage.

Hosted by Sonya Huang, Sequoia Capital



00:00 Introduction

02:12 Google's AI models and public perception

04:18 Google's history in image and video generation

06:45 Where Whisk and Flow fit

10:30 How close are we to having the ideal tool for the craft?

13:05 Where do the movie and game worlds start to merge?

16:25 Introduction to Project Mariner

17:15 How Mariner works

22:34 Mariner user behaviors

27:07 Temporary tattoos and URL memory

27:53 Project Mariner's future

29:26 Agent capabilities and use cases

31:09 E-commerce and agent interaction

35:03 Notebook LM evolution

48:26 Predictions and future of AI



Mentioned in this episode: 


  
Whisk: Image and video generation app for consumers



  
Flow: AI-powered filmmaking with new Veo 3 model



  
Project Mariner: research prototype exploring the future of human-agent interaction, starting with browsers



  
NotebookLM: tool for understanding and engaging with complex information including Audio Overviews and now a mobile app



  
Shop with AI Mode: Shopping app with a virtual try-on tool based on your own photos



  
Stitch: New prompt-based interface to design UI for mobile and web applications.

ControlNet paper: Outlined an architecture for adding conditional control to direct the outputs of image generation with diffusion models</description>
      <pubDate>Tue, 03 Jun 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/218a23b8-3ff6-11f0-af25-738e98987a00/image/98d6c3888ccb0148b757f790a5f09c03.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Fresh off impressive releases at Google’s I/O event, three Google Labs leaders explain how they’re reimagining creative tools and productivity workflows. Thomas Iljic details how video generation is merging filmmaking with gaming through generative AI cameras and world-building interfaces in Whisk and Veo. Jaclyn Konzelmann demonstrates how Project Mariner evolved from a disruptive browser takeover to an intelligent background assistant that remembers context across multiple tasks. Simon Tokumine reveals NotebookLM’s expansion beyond viral audio overviews into a comprehensive platform for transforming information into personalized formats. The conversation explores the shift from prompting to showing and telling, the economics of AI-powered e-commerce, and why being “too early” has become Google Labs’ biggest challenge and advantage.

Hosted by Sonya Huang, Sequoia Capital



00:00 Introduction

02:12 Google's AI models and public perception

04:18 Google's history in image and video generation

06:45 Where Whisk and Flow fit

10:30 How close are we to having the ideal tool for the craft?

13:05 Where do the movie and game worlds start to merge?

16:25 Introduction to Project Mariner

17:15 How Mariner works

22:34 Mariner user behaviors

27:07 Temporary tattoos and URL memory

27:53 Project Mariner's future

29:26 Agent capabilities and use cases

31:09 E-commerce and agent interaction

35:03 Notebook LM evolution

48:26 Predictions and future of AI



Mentioned in this episode: 


  
Whisk: Image and video generation app for consumers



  
Flow: AI-powered filmmaking with new Veo 3 model



  
Project Mariner: research prototype exploring the future of human-agent interaction, starting with browsers



  
NotebookLM: tool for understanding and engaging with complex information including Audio Overviews and now a mobile app



  
Shop with AI Mode: Shopping app with a virtual try-on tool based on your own photos



  
Stitch: New prompt-based interface to design UI for mobile and web applications.

ControlNet paper: Outlined an architecture for adding conditional control to direct the outputs of image generation with diffusion models</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Fresh off impressive releases at Google’s I/O event, three Google Labs leaders explain how they’re reimagining creative tools and productivity workflows. Thomas Iljic details how video generation is merging filmmaking with gaming through generative AI cameras and world-building interfaces in Whisk and Veo. Jaclyn Konzelmann demonstrates how Project Mariner evolved from a disruptive browser takeover to an intelligent background assistant that remembers context across multiple tasks. Simon Tokumine reveals NotebookLM’s expansion beyond viral audio overviews into a comprehensive platform for transforming information into personalized formats. The conversation explores the shift from prompting to showing and telling, the economics of AI-powered e-commerce, and why being “too early” has become Google Labs’ biggest challenge and advantage.</p>
<p><em>Hosted by Sonya Huang, Sequoia Capital</em></p>
<p><br></p>
<p>00:00 Introduction</p>
<p>02:12 Google's AI models and public perception</p>
<p>04:18 Google's history in image and video generation</p>
<p>06:45 Where Whisk and Flow fit</p>
<p>10:30 How close are we to having the ideal tool for the craft?</p>
<p>13:05 Where do the movie and game worlds start to merge?</p>
<p>16:25 Introduction to Project Mariner</p>
<p>17:15 How Mariner works</p>
<p>22:34 Mariner user behaviors</p>
<p>27:07 Temporary tattoos and URL memory</p>
<p>27:53 Project Mariner's future</p>
<p>29:26 Agent capabilities and use cases</p>
<p>31:09 E-commerce and agent interaction</p>
<p>35:03 Notebook LM evolution</p>
<p>48:26 Predictions and future of AI</p>
<p><br></p>
<p><u>Mentioned in this episode: </u></p>
<ul>
  <li>
<p><a href="https://labs.google/fx/tools/whisk"><u>Whisk</u></a>: Image and video generation app for consumers</p>
</li>
  <li>
<p><a href="https://labs.google/flow/about"><u>Flow</u></a>: AI-powered filmmaking with new Veo 3 model</p>
</li>
  <li>
<p><a href="https://deepmind.google/models/project-mariner/"><u>Project Mariner</u></a>: research prototype exploring the future of human-agent interaction, starting with browsers</p>
</li>
  <li>
<p><a href="https://blog.google/technology/ai/notebooklm-app/"><u>NotebookLM</u></a>: tool for understanding and engaging with complex information including Audio Overviews and now a mobile app</p>
</li>
  <li>
<p><a href="https://blog.google/products/shopping/google-shopping-ai-mode-virtual-try-on-update/"><u>Shop with AI Mode</u></a>: Shopping app with a virtual try-on tool based on your own photos</p>
</li>
  <li>
<p><a href="https://stitch.withgoogle.com/"><u>Stitch</u></a>: New prompt-based interface to design UI for mobile and web applications.</p>
</li>
</ul>
<p><br><a href="https://arxiv.org/abs/2302.05543"><u>ControlNet paper</u></a>: Outlined an architecture for adding conditional control to direct the outputs of image generation with diffusion models</p>]]>
      </content:encoded>
      <itunes:duration>3231</itunes:duration>
      <guid isPermaLink="false"><![CDATA[218a23b8-3ff6-11f0-af25-738e98987a00]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI5011322493.mp3?updated=1748923896" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>From Data Centers to Dyson Spheres: P-1 AI's Path to Hardware Engineering AGI</title>
      <description>Former Airbus CTO Paul Eremenko shares his vision for bringing AI to physical engineering, starting with Archie—an AI agent that works alongside human engineers. P-1 AI is tackling the challenge of generating synthetic training data to teach AI systems about complex physical systems, from data center cooling to aircraft design and beyond. Eremenko explains how Archie breaks down engineering tasks into primitive operations and uses a federated approach combining multiple AI models. The goal is to progress from entry-level engineering capabilities to eventually achieving engineering AGI that can design things humans cannot.

Hosted by Sonya Huang and Pat Grady, Sequoia Capital</description>
      <pubDate>Tue, 27 May 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/aa97b31a-373a-11f0-b165-cb01a0914fe3/image/7a0fd341d03865ea8d0da37ad940f4e2.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Former Airbus CTO Paul Eremenko shares his vision for bringing AI to physical engineering, starting with Archie—an AI agent that works alongside human engineers. P-1 AI is tackling the challenge of generating synthetic training data to teach AI systems about complex physical systems, from data center cooling to aircraft design and beyond. Eremenko explains how Archie breaks down engineering tasks into primitive operations and uses a federated approach combining multiple AI models. The goal is to progress from entry-level engineering capabilities to eventually achieving engineering AGI that can design things humans cannot.

Hosted by Sonya Huang and Pat Grady, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Former Airbus CTO Paul Eremenko shares his vision for bringing AI to physical engineering, starting with Archie—an AI agent that works alongside human engineers. P-1 AI is tackling the challenge of generating synthetic training data to teach AI systems about complex physical systems, from data center cooling to aircraft design and beyond. Eremenko explains how Archie breaks down engineering tasks into primitive operations and uses a federated approach combining multiple AI models. The goal is to progress from entry-level engineering capabilities to eventually achieving engineering AGI that can design things humans cannot.</p>
<p>Hosted by Sonya Huang and Pat Grady, Sequoia Capital</p>
<p><br></p>]]>
      </content:encoded>
      <itunes:duration>2305</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[aa97b31a-373a-11f0-b165-cb01a0914fe3]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI1894390805.mp3?updated=1747938856" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Gong’s Amit Bendov: From Meeting Recordings to Revenue AI</title>
      <description>CEO Amit Bendov shares how Gong evolved from a meeting transcription tool to an AI-powered revenue platform that's increasing sales capacity by up to 60%. He explains why task-specific AI agents are the key to enterprise adoption, and why human accountability will remain crucial even as AI takes over routine sales tasks. Amit also reveals how Gong survived recent market headwinds by expanding their product suite while maintaining their customer-first approach.



Hosted by Sonya Huang and Pat Grady, Sequoia Capital 



Mentioned in this episode: 


  “New paradigm of AI architectures”: Yann LeCun’s talk at Davos about the world beyond transformers and LLMs in the next 3-5 years.

  
The Beginning of Infinity: Book by David Deutsch that Amit says is “mind-changing.”</description>
      <pubDate>Tue, 20 May 2025 23:31:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/65773554-3586-11f0-8c3b-ff4c582059ac/image/2a29b96f18c31b2accc24886bba47a1b.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>CEO Amit Bendov shares how Gong evolved from a meeting transcription tool to an AI-powered revenue platform that's increasing sales capacity by up to 60%. He explains why task-specific AI agents are the key to enterprise adoption, and why human accountability will remain crucial even as AI takes over routine sales tasks. Amit also reveals how Gong survived recent market headwinds by expanding their product suite while maintaining their customer-first approach.



Hosted by Sonya Huang and Pat Grady, Sequoia Capital 



Mentioned in this episode: 


  “New paradigm of AI architectures”: Yann LeCun’s talk at Davos about the world beyond transformers and LLMs in the next 3-5 years.

  
The Beginning of Infinity: Book by David Deutsch that Amit says is “mind-changing.”</itunes:summary>
      <content:encoded>
        <![CDATA[<p>CEO Amit Bendov shares how Gong evolved from a meeting transcription tool to an AI-powered revenue platform that's increasing sales capacity by up to 60%. He explains why task-specific AI agents are the key to enterprise adoption, and why human accountability will remain crucial even as AI takes over routine sales tasks. Amit also reveals how Gong survived recent market headwinds by expanding their product suite while maintaining their customer-first approach.</p>
<p><br></p>
<p>Hosted by Sonya Huang and Pat Grady, Sequoia Capital </p>
<p><br></p>
<p><u>Mentioned in this episode</u>: </p>
<ul>
  <li>“<a href="https://techcrunch.com/2025/01/23/metas-yann-lecun-predicts-a-new-ai-architectures-paradigm-within-5-years-and-decade-of-robotics/"><u>New paradigm of AI architectures</u></a>”: Yann LeCun’s talk at Davos about the world beyond transformers and LLMs in the next 3-5 years.</li>
  <li>
<a href="https://www.thebeginningofinfinity.com/"><em>The Beginning of Infinity</em></a>: Book by David Deutsch that Amit says is “mind-changing.”</li>
</ul>]]>
      </content:encoded>
      <itunes:duration>2525</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[65773554-3586-11f0-8c3b-ff4c582059ac]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI6858689358.mp3?updated=1747784170" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>LIVE: Google's Jeff Dean on the Coming Transformations in AI</title>
      <description>At AI Ascent 2025, Jeff Dean makes bold predictions. Discover how the pioneer behind Google's TPUs and foundational AI research sees the technology evolving, from specialized hardware to more organic systems, and future engineering capabilities.</description>
      <pubDate>Fri, 16 May 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/445b7ab0-3063-11f0-8c60-c7b5c1c04834/image/5ac0fe35ae9e7b9b5a3d74ae41ef9a04.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>At AI Ascent 2025, Jeff Dean makes bold predictions. Discover how the pioneer behind Google's TPUs and foundational AI research sees the technology evolving, from specialized hardware to more organic systems, and future engineering capabilities.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>At AI Ascent 2025, Jeff Dean makes bold predictions. Discover how the pioneer behind Google's TPUs and foundational AI research sees the technology evolving, from specialized hardware to more organic systems, and future engineering capabilities.</p>]]>
      </content:encoded>
      <itunes:duration>1859</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[445b7ab0-3063-11f0-8c60-c7b5c1c04834]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI9861411995.mp3?updated=1747759710" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>LIVE: Ambient Agents and the New Agent Inbox ft. Harrison Chase of LangChain</title>
      <description>Recorded live at Sequoia’s AI Ascent 2025: LangChain CEO Harrison Chase introduces the concept of ambient agents, AI systems that operate continuously in the background responding to events rather than direct human prompts. Learn how these agents differ from traditional chatbots, why human oversight remains essential and how this approach could dramatically scale our ability to leverage AI.</description>
      <pubDate>Thu, 15 May 2025 15:57:00 -0000</pubDate>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7aef0ace-3063-11f0-b4e0-d77a665b645e/image/2f5376bddffd2a2cbc9e41e5be6ab16a.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Recorded live at Sequoia’s AI Ascent 2025: LangChain CEO Harrison Chase introduces the concept of ambient agents, AI systems that operate continuously in the background responding to events rather than direct human prompts. Learn how these agents differ from traditional chatbots, why human oversight remains essential and how this approach could dramatically scale our ability to leverage AI.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Recorded live at Sequoia’s AI Ascent 2025: LangChain CEO Harrison Chase introduces the concept of ambient agents, AI systems that operate continuously in the background responding to events rather than direct human prompts. Learn how these agents differ from traditional chatbots, why human oversight remains essential and how this approach could dramatically scale our ability to leverage AI.</p>]]>
      </content:encoded>
      <itunes:duration>508</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[7aef0ace-3063-11f0-b4e0-d77a665b645e]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI4695410539.mp3?updated=1747688161" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>LIVE: How AI is Reinventing Software Business Models ft. Bret Taylor of Sierra</title>
      <description>Recorded live at Sequoia’s AI Ascent 2025: Sierra co-founder Bret Taylor discusses why AI is driving a fundamental shift from subscription-based pricing to outcomes-based models. Learn why this transition is harder for incumbents than startups, why applied AI and vertical specialization represent the biggest opportunities for entrepreneurs and how to position your AI company for success in this new paradigm.</description>
      <pubDate>Wed, 14 May 2025 12:00:00 -0000</pubDate>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/b77c3dee-3064-11f0-9909-43fa231b2eba/image/51095350a4756ef99905f73fe7b07ffb.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Recorded live at Sequoia’s AI Ascent 2025: Sierra co-founder Bret Taylor discusses why AI is driving a fundamental shift from subscription-based pricing to outcomes-based models. Learn why this transition is harder for incumbents than startups, why applied AI and vertical specialization represent the biggest opportunities for entrepreneurs and how to position your AI company for success in this new paradigm.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Recorded live at Sequoia’s AI Ascent 2025: Sierra co-founder Bret Taylor discusses why AI is driving a fundamental shift from subscription-based pricing to outcomes-based models. Learn why this transition is harder for incumbents than startups, why applied AI and vertical specialization represent the biggest opportunities for entrepreneurs and how to position your AI company for success in this new paradigm.</p>]]>
      </content:encoded>
      <itunes:duration>2070</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b77c3dee-3064-11f0-9909-43fa231b2eba]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI9567164362.mp3?updated=1747760861" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>LIVE: Sam Altman of OpenAI on Building the ‘Core AI Subscription’ for Your Life </title>
      <description>Recorded live at Sequoia’s AI Ascent 2025: Sam reflects on OpenAI’s evolution from a 14-person research lab to a dominant AI platform. He envisions transforming ChatGPT into a deeply personal AI service that remembers your entire life's context—from conversations to emails—while working seamlessly across all services. Sam describes the generation gap in how users engage with ChatGPT, and makes surprisingly specific predictions for the next 2-3 years of AI evolution.</description>
      <pubDate>Wed, 14 May 2025 01:36:00 -0000</pubDate>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3edaf0a4-2f71-11f0-a10e-e7c869b7bacd/image/bc4eb9c769feb0b8fbc06470ebcc378a.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Recorded live at Sequoia’s AI Ascent 2025: Sam reflects on OpenAI’s evolution from a 14-person research lab to a dominant AI platform. He envisions transforming ChatGPT into a deeply personal AI service that remembers your entire life's context—from conversations to emails—while working seamlessly across all services. Sam describes the generation gap in how users engage with ChatGPT, and makes surprisingly specific predictions for the next 2-3 years of AI evolution.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Recorded live at Sequoia’s AI Ascent 2025: Sam reflects on OpenAI’s evolution from a 14-person research lab to a dominant AI platform. He envisions transforming ChatGPT into a deeply personal AI service that remembers your entire life's context—from conversations to emails—while working seamlessly across all services. Sam describes the generation gap in how users engage with ChatGPT, and makes surprisingly specific predictions for the next 2-3 years of AI evolution. </p>]]>
      </content:encoded>
      <itunes:duration>1947</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[3edaf0a4-2f71-11f0-a10e-e7c869b7bacd]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI9115438162.mp3?updated=1747761906" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Workday CEO Carl Eschenbach: Building the System of Record for the AI Era</title>
      <description>Workday CEO and Sequoia partner Carl Eschenbach explains how the company is evolving its platform to handle both human and AI workers. He shares Workday’s three-pronged approach to monetizing AI through seat-based pricing, role-based agents and consumption-based API access. Eschenbach discusses why domain-specific agents with curated data will be more valuable than general-purpose models in the enterprise, and how Workday is helping enterprises navigate the transition to an AI-powered workplace while maintaining human connection.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital</description>
      <pubDate>Tue, 06 May 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/00bd288c-29e5-11f0-bf17-5bbccf75de3b/image/32a0962d130783a3b2a461e56bdc2042.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Workday CEO and Sequoia partner Carl Eschenbach explains how the company is evolving its platform to handle both human and AI workers. He shares Workday’s three-pronged approach to monetizing AI through seat-based pricing, role-based agents and consumption-based API access. Eschenbach discusses why domain-specific agents with curated data will be more valuable than general-purpose models in the enterprise, and how Workday is helping enterprises navigate the transition to an AI-powered workplace while maintaining human connection.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Workday CEO and Sequoia partner Carl Eschenbach explains how the company is evolving its platform to handle both human and AI workers. He shares Workday’s three-pronged approach to monetizing AI through seat-based pricing, role-based agents and consumption-based API access. Eschenbach discusses why domain-specific agents with curated data will be more valuable than general-purpose models in the enterprise, and how Workday is helping enterprises navigate the transition to an AI-powered workplace while maintaining human connection.</p>
<p>Hosted by: Sonya Huang and Pat Grady, Sequoia Capital</p>
<p><br></p>]]>
      </content:encoded>
      <itunes:duration>2869</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[00bd288c-29e5-11f0-bf17-5bbccf75de3b]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI9236633719.mp3?updated=1746472699" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The Quest to ‘Solve All Diseases’ with AI: Isomorphic Labs’ Max Jaderberg</title>
      <description>After pioneering reinforcement learning breakthroughs at DeepMind with Capture the Flag and AlphaStar, Max Jaderberg aims to revolutionize drug discovery with AI as Chief AI Officer of Isomorphic Labs, which was spun out of DeepMind. He discusses how AlphaFold 3's diffusion-based architecture enables unprecedented understanding of molecular interactions, and why we're approaching a "Move 37 moment" in AI-powered drug design where models will surpass human intuition. Max shares his vision for general AI models that can solve all diseases, and the importance of developing agents that can learn to search through the whole potential design space.

Hosted by Stephanie Zhan, Sequoia Capital

Mentioned in this episode: 


  
Playing Atari with Deep Reinforcement Learning: Seminal 2013 paper on Reinforcement Learning 



  
Capture the Flag: 2019 DeepMind paper on the emergence of cooperative agents



  
AlphaStar: 2019 DeepMind paper on attaining grandmaster level in StarCraft II using multi-agent RL

AlphaFold Server: Web interface for AlphaFold 3 model for non-commercial academic use</description>
      <pubDate>Tue, 29 Apr 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/25a8734c-246b-11f0-bc46-2be325d93284/image/876b2a51a0e4648d83e17e10782a9382.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>After pioneering reinforcement learning breakthroughs at DeepMind with Capture the Flag and AlphaStar, Max Jaderberg aims to revolutionize drug discovery with AI as Chief AI Officer of Isomorphic Labs, which was spun out of DeepMind. He discusses how AlphaFold 3's diffusion-based architecture enables unprecedented understanding of molecular interactions, and why we're approaching a "Move 37 moment" in AI-powered drug design where models will surpass human intuition. Max shares his vision for general AI models that can solve all diseases, and the importance of developing agents that can learn to search through the whole potential design space.

Hosted by Stephanie Zhan, Sequoia Capital

Mentioned in this episode: 


  
Playing Atari with Deep Reinforcement Learning: Seminal 2013 paper on Reinforcement Learning 



  
Capture the Flag: 2019 DeepMind paper on the emergence of cooperative agents



  
AlphaStar: 2019 DeepMind paper on attaining grandmaster level in StarCraft II using multi-agent RL

AlphaFold Server: Web interface for AlphaFold 3 model for non-commercial academic use</itunes:summary>
      <content:encoded>
        <![CDATA[<p>After pioneering reinforcement learning breakthroughs at DeepMind with Capture the Flag and AlphaStar, Max Jaderberg aims to revolutionize drug discovery with AI as Chief AI Officer of Isomorphic Labs, which was spun out of DeepMind. He discusses how AlphaFold 3's diffusion-based architecture enables unprecedented understanding of molecular interactions, and why we're approaching a "Move 37 moment" in AI-powered drug design where models will surpass human intuition. Max shares his vision for general AI models that can solve all diseases, and the importance of developing agents that can learn to search through the whole potential design space.</p>
<p>Hosted by Stephanie Zhan, Sequoia Capital</p>
<p><u>Mentioned in this episode</u>: </p>
<ul>
  <li>
<p><a href="https://arxiv.org/abs/1312.5602"><u>Playing Atari with Deep Reinforcement Learning</u></a>: Seminal 2013 paper on Reinforcement Learning </p>
</li>
  <li>
<p><a href="https://deepmind.google/discover/blog/capture-the-flag-the-emergence-of-complex-cooperative-agents/"><u>Capture the Flag</u></a>: 2019 DeepMind paper on the emergence of cooperative agents</p>
</li>
  <li>
<p><a href="https://deepmind.google/discover/blog/alphastar-grandmaster-level-in-starcraft-ii-using-multi-agent-reinforcement-learning/"><u>AlphaStar</u></a>: 2019 DeepMind paper on attaining grandmaster level in StarCraft II using multi-agent RL</p>
</li>
  <li>
<p><a href="https://alphafoldserver.com/welcome"><u>AlphaFold Server</u></a>: Web interface for AlphaFold 3 model for non-commercial academic use</p>
</li>
</ul>]]>
      </content:encoded>
      <itunes:duration>3340</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[25a8734c-246b-11f0-bc46-2be325d93284]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI9345244137.mp3?updated=1745875147" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Pricing in the AI Era: From Inputs to Outcomes, with Paid CEO Manny Medina</title>
      <description>Former Outreach CEO Manny Medina discusses his new company Paid, which provides billing, pricing and margin management tools for AI companies. He explains why traditional SaaS pricing models don’t work for AI businesses, and breaks down emerging approaches like outcome-based and agent-based pricing. Manny shares why he believes focused AI applications targeting specific workflows will win over broad platforms, while emphasizing that AI companies need better tools to understand their unit economics and capture more value.

Hosted by Pat Grady and Lauren Reeder, Sequoia Capital

Mentioned in this episode:


CPQ: Configure, Price, Quote


Invent and Wander: Book by Jeff Bezos and Walter Isaacson


Foundations of Statistical Natural Language Processing: 1999 book by Chris Manning and Hinrich Schütze that Manny cites as a piece of AI content every AI founder should read. (still in print, companion site here)

The fox and the hedgehog: 


Quandri 

XBOW


HappyRobot 

Owl

Crosby</description>
      <pubDate>Tue, 22 Apr 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/bdaa8bd8-1edc-11f0-8e1a-7f6ccbba1f21/image/9e1a61d63c76b5731f2b238b690ac703.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Former Outreach CEO Manny Medina discusses his new company Paid, which provides billing, pricing and margin management tools for AI companies. He explains why traditional SaaS pricing models don’t work for AI businesses, and breaks down emerging approaches like outcome-based and agent-based pricing. Manny shares why he believes focused AI applications targeting specific workflows will win over broad platforms, while emphasizing that AI companies need better tools to understand their unit economics and capture more value.

Hosted by Pat Grady and Lauren Reeder, Sequoia Capital

Mentioned in this episode:


CPQ: Configure, Price, Quote


Invent and Wander: Book by Jeff Bezos and Walter Isaacson


Foundations of Statistical Natural Language Processing: 1999 book by Chris Manning and Hinrich Schütze that Manny cites as a piece of AI content every AI founder should read. (still in print, companion site here)

The fox and the hedgehog: 


Quandri 

XBOW


HappyRobot 

Owl

Crosby</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Former Outreach CEO Manny Medina discusses his new company Paid, which provides billing, pricing and margin management tools for AI companies. He explains why traditional SaaS pricing models don’t work for AI businesses, and breaks down emerging approaches like outcome-based and agent-based pricing. Manny shares why he believes focused AI applications targeting specific workflows will win over broad platforms, while emphasizing that AI companies need better tools to understand their unit economics and capture more value.</p><p><br></p><p>Hosted by Pat Grady and Lauren Reeder, Sequoia Capital</p><p><br></p><p>Mentioned in this episode:</p><ul>
<li>
<a href="https://en.wikipedia.org/wiki/Configure,_price_and_quote">CPQ</a>: Configure, Price, Quote</li>
<li>
<a href="https://store.hbr.org/product/invent-and-wander-the-collected-writings-of-jeff-bezos-with-an-introduction-by-walter-isaacson/10466"><em>Invent and Wander</em></a>: Book by Jeff Bezos and Walter Isaacson</li>
<li>
<a href="https://www.amazon.com/Foundations-Statistical-Natural-Language-Processing/dp/0262133601"><em>Foundations of Statistical Natural Language Processing</em></a>: 1999 book by Chris Manning and Hinrich Schütze that Manny cites as a piece of AI content every AI founder should read. (still in print, companion site <a href="https://nlp.stanford.edu/fsnlp/">here</a>)</li>
<li>The fox and the hedgehog: </li>
<li class="ql-indent-1">
<a href="https://www.quandri.io/us">Quandri</a> </li>
<li class="ql-indent-1"><a href="https://xbow.com/">XBOW</a></li>
<li class="ql-indent-1">
<a href="https://www.happyrobot.ai/">HappyRobot</a> </li>
<li class="ql-indent-1"><a href="https://owl.co/">Owl</a></li>
<li class="ql-indent-1"><a href="https://crosbylegal.com/">Crosby</a></li>
</ul>]]>
      </content:encoded>
      <itunes:duration>2729</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[bdaa8bd8-1edc-11f0-8e1a-7f6ccbba1f21]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI3016010881.mp3?updated=1745259699" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Arc Institute's Patrick Hsu on Building an App Store for Biology with AI</title>
      <description>Patrick Hsu, co-founder of Arc Institute, discusses the opportunities for AI in biology beyond just drug development, and how Evo 2, their new biology foundation model, is enabling a broad ecosystem of applications. Evo 2 was trained on a vast dataset of genomic data to learn evolutionary patterns that would have taken years to find; as a result, the model can be used for applications from identifying mutations that cause disease to designing new molecular and even genome scale biological systems.

Hosted by Josephine Chen and Pat Grady, Sequoia Capital

Mentioned in this episode:


Sequence modeling and design from molecular to genome scale with Evo: Public pre-print of original Evo paper


Genome modeling and design across all domains of life with Evo 2: Public pre-print of Evo 2 paper


ClinVar: NIH database of the genes that are known to cause disease, and mutations in those genes causally associated with disease state


Sequence Read Archive: Massive NIH database of gene sequencing data 


Machines of Loving Grace: Dario Amodei essay that Patrick cites on how AI could transform the world for the better


Arc Virtual Cell Atlas: Arc’s first step toward assembling, curating and generating large-scale cellular data from AI-driven biological discovery (among many other tools)


Protein Data Bank (PDB): a global archive of 3D structural information of biomolecules used by DeepMind to train AlphaFold


OpenAI Deep Research: The one AI app Patrick uses daily</description>
      <pubDate>Tue, 15 Apr 2025 13:55:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/15c9e3ce-1972-11f0-a5c8-c76e38cc3e1d/image/5dff90ab8f605786831d147710868a08.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Patrick Hsu, co-founder of Arc Institute, discusses the opportunities for AI in biology beyond just drug development, and how Evo 2, their new biology foundation model, is enabling a broad ecosystem of applications. Evo 2 was trained on a vast dataset of genomic data to learn evolutionary patterns that would have taken years to find; as a result, the model can be used for applications from identifying mutations that cause disease to designing new molecular and even genome scale biological systems.

Hosted by Josephine Chen and Pat Grady, Sequoia Capital

Mentioned in this episode:


Sequence modeling and design from molecular to genome scale with Evo: Public pre-print of original Evo paper


Genome modeling and design across all domains of life with Evo 2: Public pre-print of Evo 2 paper


ClinVar: NIH database of the genes that are known to cause disease, and mutations in those genes causally associated with disease state


Sequence Read Archive: Massive NIH database of gene sequencing data 


Machines of Loving Grace: Dario Amodei essay that Patrick cites on how AI could transform the world for the better


Arc Virtual Cell Atlas: Arc’s first step toward assembling, curating and generating large-scale cellular data from AI-driven biological discovery (among many other tools)


Protein Data Bank (PDB): a global archive of 3D structural information of biomolecules used by DeepMind to train AlphaFold


OpenAI Deep Research: The one AI app Patrick uses daily</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Patrick Hsu, co-founder of Arc Institute, discusses the opportunities for AI in biology beyond just drug development, and how Evo 2, their new biology foundation model, is enabling a broad ecosystem of applications. Evo 2 was trained on a vast dataset of genomic data to learn evolutionary patterns that would have taken years to find; as a result, the model can be used for applications from identifying mutations that cause disease to designing new molecular and even genome scale biological systems.</p><p><br></p><p>Hosted by Josephine Chen and Pat Grady, Sequoia Capital</p><p><br></p><p>Mentioned in this episode:</p><ul>
<li>
<a href="https://arcinstitute.org/manuscripts/Evo">Sequence modeling and design from molecular to genome scale with Evo</a>: Public pre-print of original Evo paper</li>
<li>
<a href="https://arcinstitute.org/manuscripts/Evo2">Genome modeling and design across all domains of life with Evo 2</a>: Public pre-print of Evo 2 paper</li>
<li>
<a href="https://www.ncbi.nlm.nih.gov/clinvar/">ClinVar</a>: NIH database of the genes that are known to cause disease, and mutations in those genes causally associated with disease state</li>
<li>
<a href="https://www.ncbi.nlm.nih.gov/sra">Sequence Read Archive</a>: Massive NIH database of gene sequencing data </li>
<li>
<a href="https://darioamodei.com/machines-of-loving-grace">Machines of Loving Grace</a>: Dario Amodei essay that Patrick cites on how AI could transform the world for the better</li>
<li>
<a href="https://arcinstitute.org/news/news/arc-virtual-cell-atlas-launch">Arc Virtual Cell Atlas</a>: Arc’s first step toward assembling, curating and generating large-scale cellular data from AI-driven biological discovery (among <a href="https://arcinstitute.org/tools">many other tools</a>)</li>
<li>
<a href="https://www.rcsb.org/">Protein Data Bank</a> (PDB): a global archive of 3D structural information of biomolecules used by DeepMind to train AlphaFold</li>
<li>
<a href="https://openai.com/index/introducing-deep-research/">OpenAI Deep Research</a>: The one AI app Patrick uses daily</li>
</ul>]]>
      </content:encoded>
      <itunes:duration>3491</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[15c9e3ce-1972-11f0-a5c8-c76e38cc3e1d]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI6181193044.mp3?updated=1744664492" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Replit CEO Amjad Masad on 1 Billion Developers: A Better End State than AGI?</title>
      <description>Amjad Masad set out more than a decade ago to pursue the dream of unleashing 1B software creators around the world. With millions of Replit users pre-ChatGPT, that vision was already becoming a reality. Turbocharged by LLMs, the vision of enabling anyone to code—from 12-year-olds in India to knowledge workers in the U.S.—seems less and less radical. In this episode, Amjad explains how an explosion in the developer population could change the economy, society and more. He also discusses his early days programming in Jordan, his unique management approach and what AI will mean for the global economy.

Hosted by David Cahn and Sonya Huang, Sequoia Capital 

Mentioned in this episode:

On the Naturalness of Software: 2012 paper on applying NLP to code 
Attention Is All You Need: Seminal 2017 paper on transformers
I Am a Strange Loop: 2007 follow-up to Douglas Hofstadter’s 1979 classic Gödel, Escher, Bach that explores how self-referential systems can describe minds
On Lisp: Paul Graham’s 1993 book on the original programming language of AI</description>
      <pubDate>Tue, 08 Apr 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/2da47ea2-140c-11f0-afca-d7551c42a717/image/dc059b2c40c5d04bd0ea930ecdb96b06.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Amjad Masad set out more than a decade ago to pursue the dream of unleashing 1B software creators around the world. With millions of Replit users pre-ChatGPT, that vision was already becoming a reality. Turbocharged by LLMs, the vision of enabling anyone to code—from 12-year-olds in India to knowledge workers in the U.S.—seems less and less radical. In this episode, Amjad explains how an explosion in the developer population could change the economy, society and more. He also discusses his early days programming in Jordan, his unique management approach and what AI will mean for the global economy.

Hosted by David Cahn and Sonya Huang, Sequoia Capital 

Mentioned in this episode:

On the Naturalness of Software: 2012 paper on applying NLP to code 
Attention Is All You Need: Seminal 2017 paper on transformers
I Am a Strange Loop: 2007 follow-up to Douglas Hofstadter’s 1979 classic Gödel, Escher, Bach that explores how self-referential systems can describe minds
On Lisp: Paul Graham’s 1993 book on the original programming language of AI</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Amjad Masad set out more than a decade ago to pursue the dream of unleashing 1B software creators around the world. With millions of Replit users pre-ChatGPT, that vision was already becoming a reality. Turbocharged by LLMs, the vision of enabling anyone to code—from 12-year-olds in India to knowledge workers in the U.S.—seems less and less radical. In this episode, Amjad explains how an explosion in the developer population could change the economy, society and more. He also discusses his early days programming in Jordan, his unique management approach and what AI will mean for the global economy.</p><p><br></p><p>Hosted by David Cahn and Sonya Huang, Sequoia Capital </p><p><br></p><p>Mentioned in this episode:</p><p><br></p><p><a href="https://people.inf.ethz.ch/suz/publications/natural.pdf">On the Naturalness of Software</a>: 2012 paper on applying NLP to code </p><p><a href="https://arxiv.org/abs/1706.03762">Attention Is All You Need</a>: Seminal 2017 paper on transformers</p><p><a href="https://en.wikipedia.org/wiki/I_Am_a_Strange_Loop">I Am a Strange Loop</a>: 2007 follow up to Douglas Hofstadter’s 1979 classic <a href="https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach"><em>Gödel, Escher, Bach</em></a> that explores how self-referential systems can describe minds</p><p><a href="https://www.paulgraham.com/onlisp.html">On Lisp</a>: Paul Graham’s 1993 book on the original programming language of AI</p><p><br></p><p><br></p>]]>
      </content:encoded>
      <itunes:duration>5178</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[2da47ea2-140c-11f0-afca-d7551c42a717]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI3845961326.mp3?updated=1744070642" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Why CRM Needs an AI Revolution, with Day.ai Founder Christopher O’Donnell</title>
      <description>Christopher O’Donnell believes the fundamental problems with CRM—incomplete data, complex workflows, siloed work products and the fear of leads falling through the cracks—can finally be solved through AI. Founder of Day.ai and former Chief Product Officer of HubSpot, Christopher explains how his team is building a system that automatically captures the full context of customer relationships while giving users transparency and control. He shares lessons from building HubSpot’s CRM and why he’s taking a deliberate approach to product development despite the pressure to scale quickly in the AI era.

Hosted by Pat Grady, Sequoia Capital 

Mentioned in this episode:


The Innovator's Dilemma: Classic book by Clay Christensen (referenced regarding HubSpot's second S-curve strategy)


HubSpot CRM: The only product to successfully challenge Salesforce’s dominance in the CRM category

From Super Mario Brothers to Elden Ring: Analogy to what an AI-powered CRM experience can be through comparison of video games launched in 1985 vs 2022


Punk’d: Hidden camera–practical joke reality television series that premiered on MTV in 2003, created by Ashton Kutcher and Jason Goldberg


Slow is smooth and smooth is fast: SEALs-derived concept mentioned regarding product development


Aga stove (highlighted as an extraordinary product design example)</description>
      <pubDate>Tue, 01 Apr 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/f007c2f0-0e81-11f0-a4d6-ebea52fd4128/image/68c2ebf1ba83d0b84e14c2bdf53c911f.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Christopher O’Donnell believes the fundamental problems with CRM—incomplete data, complex workflows, siloed work products and the fear of leads falling through the cracks—can finally be solved through AI. Founder of Day.ai and former Chief Product Officer of HubSpot, Christopher explains how his team is building a system that automatically captures the full context of customer relationships while giving users transparency and control. He shares lessons from building HubSpot’s CRM and why he’s taking a deliberate approach to product development despite the pressure to scale quickly in the AI era.

Hosted by Pat Grady, Sequoia Capital 

Mentioned in this episode:


The Innovator's Dilemma: Classic book by Clay Christensen (referenced regarding HubSpot's second S-curve strategy)


HubSpot CRM: The only product to successfully challenge Salesforce’s dominance in the CRM category

From Super Mario Brothers to Elden Ring: Analogy to what an AI-powered CRM experience can be through comparison of video games launched in 1985 vs 2022


Punk’d: Hidden camera–practical joke reality television series that premiered on MTV in 2003, created by Ashton Kutcher and Jason Goldberg


Slow is smooth and smooth is fast: SEALs-derived concept mentioned regarding product development


Aga stove (highlighted as an extraordinary product design example)</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Christopher O’Donnell believes the fundamental problems with CRM—incomplete data, complex workflows, siloed work products and the fear of leads falling through the cracks—can finally be solved through AI. Founder of Day.ai and former Chief Product Officer of HubSpot, Christopher explains how his team is building a system that automatically captures the full context of customer relationships while giving users transparency and control. He shares lessons from building HubSpot’s CRM and why he’s taking a deliberate approach to product development despite the pressure to scale quickly in the AI era.</p><p><br></p><p>Hosted by Pat Grady, Sequoia Capital </p><p><br></p><p>Mentioned in this episode:</p><ul>
<li>
<a href="https://www.amazon.com/Innovators-Dilemma-Revolutionary-Change-Business/dp/0062060244">The Innovator's Dilemma</a>: Classic book by Clay Christensen (referenced regarding HubSpot's second S-curve strategy)</li>
<li>
<a href="https://www.hubspot.com/products/crm">HubSpot CRM</a>: The only product to successfully challenge Salesforce’s dominance in the CRM category</li>
<li>From <a href="https://en.wikipedia.org/wiki/Super_Mario_Bros.">Super Mario Brothers</a> to <a href="https://en.wikipedia.org/wiki/Elden_Ring">Elden Ring</a>: Analogy to what an AI-powered CRM experience can be through comparison of video games launched in 1985 vs 2022</li>
<li>
<a href="https://en.wikipedia.org/wiki/Punk'd">Punk’d</a>: Hidden camera–practical joke reality television series that premiered on MTV in 2003, created by Ashton Kutcher and Jason Goldberg</li>
<li>
<a href="https://www.quora.com/Who-first-said-slow-is-smooth-and-smooth-is-fast">Slow is smooth and smooth is fast</a>: SEALs-derived concept mentioned regarding product development</li>
<li>
<a href="https://www.agarangeusa.com/">Aga stove</a> (highlighted as an extraordinary product design example)</li>
</ul>]]>
      </content:encoded>
      <itunes:duration>4265</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f007c2f0-0e81-11f0-a4d6-ebea52fd4128]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI7652918508.mp3?updated=1743461490" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>From Software Engineers to AI Word Artisans: Filip Kozera of Wordware</title>
      <description>Filip Kozera sees parallels between Excel’s democratization of data analytics and Wordware’s mission to put AI development in the hands of knowledge workers. Drawing inspiration from Excel’s 750 million users (compared to 30 million software developers), Wordware is creating tools that balance the rigid structure of programming with the fuzziness of natural language. Filip explains why effective AI development requires working across multiple abstraction layers—from high-level concepts to detailed implementation—while preserving human creative control. He shares his vision for “word artisans” who will use AI to amplify their creative impact.

Hosted by Sonya Huang, Sequoia Capital

Mentioned in this episode:


Lovable: Generative AI app that builds UIs and web apps


Her: 2013 Spike Jonze film that Filip uses as an example of why voice will not be the best modality for expressing knowledge work.


Descript: AI video editing app that Filip uses a lot. 


Granola: AI notetaking app Filip uses every day.


Gemini 2.0 Pro: Google’s newest long-context model that can handle 6,000-page PDFs.


Limitless pendant: Wearable device for collecting personal conversational context to drive AI experiences that Filip can’t wait to ship.


DeepLearning.AI: Andrew Ng’s amazing resource for learning about AI

3Blue1Brown: Grant Sanderson’s incredible channel on YouTube that explains math and AI visually.</description>
      <pubDate>Tue, 25 Mar 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/c7b59f70-08d2-11f0-bc40-f37d4bb00282/image/400fc75398db8c089ea82a39b584e42e.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Filip Kozera sees parallels between Excel’s democratization of data analytics and Wordware’s mission to put AI development in the hands of knowledge workers. Drawing inspiration from Excel’s 750 million users (compared to 30 million software developers), Wordware is creating tools that balance the rigid structure of programming with the fuzziness of natural language. Filip explains why effective AI development requires working across multiple abstraction layers—from high-level concepts to detailed implementation—while preserving human creative control. He shares his vision for “word artisans” who will use AI to amplify their creative impact.

Hosted by Sonya Huang, Sequoia Capital

Mentioned in this episode:


Lovable: Generative AI app that builds UIs and web apps


Her: 2013 Spike Jonze film that Filip uses as an example of why voice will not be the best modality for expressing knowledge work.


Descript: AI video editing app that Filip uses a lot. 


Granola: AI notetaking app Filip uses every day.


Gemini 2.0 Pro: Google’s newest long-context model that can handle 6,000-page PDFs.


Limitless pendant: Wearable device for collecting personal conversational context to drive AI experiences that Filip can’t wait to ship.


DeepLearning.AI: Andrew Ng’s amazing resource for learning about AI

3Blue1Brown: Grant Sanderson’s incredible channel on YouTube that explains math and AI visually.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Filip Kozera sees parallels between Excel’s democratization of data analytics and Wordware’s mission to put AI development in the hands of knowledge workers. Drawing inspiration from Excel’s 750 million users (compared to 30 million software developers), Wordware is creating tools that balance the rigid structure of programming with the fuzziness of natural language. Filip explains why effective AI development requires working across multiple abstraction layers—from high-level concepts to detailed implementation—while preserving human creative control. He shares his vision for “word artisans” who will use AI to amplify their creative impact.</p><p><br></p><p>Hosted by Sonya Huang, Sequoia Capital</p><p><br></p><p>Mentioned in this episode:</p><ul>
<li>
<a href="https://lovable.dev/">Lovable</a>: Generative AI app that builds UIs and web apps</li>
<li>
<a href="https://en.wikipedia.org/wiki/Her_(2013_film)"><em>Her</em></a>: 2013 Spike Jonze film that Filip uses as an example of why voice will not be the best modality for expressing knowledge work.</li>
<li>
<a href="https://www.descript.com/">Descript</a>: AI video editing app that Filip uses a lot. </li>
<li>
<a href="https://www.granola.ai/">Granola</a>: AI notetaking app Filip uses every day.</li>
<li>
<a href="https://deepmind.google/technologies/gemini/pro/">Gemini 2.0 Pro</a>: Google’s newest long-context model that can handle 6,000-page PDFs.</li>
<li>
<a href="https://www.limitless.ai/">Limitless pendant</a>: Wearable device for collecting personal conversational context to drive AI experiences that Filip can’t wait to ship.</li>
  <li>
<a href="http://deeplearning.ai">DeepLearning.AI</a>: Andrew Ng’s amazing resource for learning about AI</li>
  <li>
<a href="https://www.youtube.com/c/3blue1brown">3Blue1Brown</a>: Grant Sanderson’s incredible channel on YouTube that explains math and AI visually.</li>
</ul>]]>
      </content:encoded>
      <itunes:duration>2584</itunes:duration>
      <guid isPermaLink="false"><![CDATA[c7b59f70-08d2-11f0-bc40-f37d4bb00282]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI2166058098.mp3?updated=1742836484" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Josh Woodward: Google Labs is Rapidly Building AI Products from 0-to-1</title>
      <description>As VP of Google Labs, Josh Woodward leads teams exploring the frontiers of AI applications. He shares insights on their rapid development process, why today’s written prompts will become outdated and how AI is transforming everything from video generation to computer control. He reveals that 25% of Google’s code is now written by AI and explains why coding could see major leaps forward this year. He emphasizes the importance of taste, design and human values in building AI tools that will shape how future generations work and create.

Mentioned in this episode:


Notebook LM: Personal research product based on Gemini 2 (previously discussed on Training Data.)


Veo 2: Google DeepMind’s new video generation model.


Paul Graham on X replying to Aaron Levie’s post that “One approach to take in building in AI is to do something that's too expensive to be reasonably practical right now, and just bet that the costs will drop by 10X or 100X over time. The cost curve is on your side.”


Where Good Ideas Come From: Book on the history of innovation by Steven Johnson.


Project Mariner: Google DeepMind’s research prototype exploring human-agent interaction starting with browser use.


Replit Agent: Josh’s favorite new AI app


The Lego Story: Book on the history of Lego.


Hosted by: Ravi Gupta and Sonya Huang, Sequoia Capital </description>
      <pubDate>Tue, 18 Mar 2025 09:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/0f4964c8-0371-11f0-a7de-d359cbe867ee/image/24e3884c105ba29f4856d6c60cecad18.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>As VP of Google Labs, Josh Woodward leads teams exploring the frontiers of AI applications. He shares insights on their rapid development process, why today’s written prompts will become outdated and how AI is transforming everything from video generation to computer control. He reveals that 25% of Google’s code is now written by AI and explains why coding could see major leaps forward this year. He emphasizes the importance of taste, design and human values in building AI tools that will shape how future generations work and create.

Mentioned in this episode:


Notebook LM: Personal research product based on Gemini 2 (previously discussed on Training Data).


Veo 2: Google DeepMind’s new video generation model.


Paul Graham on X replying to Aaron Levie’s post that “One approach to take in building in AI is to do something that's too expensive to be reasonably practical right now, and just bet that the costs will drop by 10X or 100X over time. The cost curve is on your side.”


Where Good Ideas Come From: Book on the history of innovation by Steven Johnson.


Project Mariner: Google DeepMind’s research prototype exploring human-agent interaction starting with browser use.


Replit Agent: Josh’s favorite new AI app.


The Lego Story: Book on the history of Lego.


Hosted by: Ravi Gupta and Sonya Huang, Sequoia Capital </itunes:summary>
      <content:encoded>
        <![CDATA[<p>As VP of Google Labs, Josh Woodward leads teams exploring the frontiers of AI applications. He shares insights on their rapid development process, why today’s written prompts will become outdated and how AI is transforming everything from video generation to computer control. He reveals that 25% of Google’s code is now written by AI and explains why coding could see major leaps forward this year. He emphasizes the importance of taste, design and human values in building AI tools that will shape how future generations work and create.</p><p><br></p><p>Mentioned in this episode:</p><ul>
<li>
<a href="https://notebooklm.google/">Notebook LM</a>: Personal research product based on Gemini 2 (previously discussed on <a href="https://www.sequoiacap.com/podcast/training-data-google-notebooklm/">Training Data</a>).</li>
<li>
<a href="https://deepmind.google/technologies/veo/veo-2/">Veo 2</a>: Google DeepMind’s new video generation model.</li>
<li>
<a href="https://x.com/paulg/status/1889286407325176098">Paul Graham</a> on X replying to Aaron Levie’s post that “One approach to take in building in AI is to do something that's too expensive to be reasonably practical right now, and just bet that the costs will drop by 10X or 100X over time. The cost curve is on your side.”</li>
<li>
<a href="https://stevenberlinjohnson.com/where-good-ideas-come-from-763bb8957069">Where Good Ideas Come From</a>: Book on the history of innovation by Steven Johnson.</li>
<li>
<a href="https://deepmind.google/technologies/project-mariner/">Project Mariner</a>: Google DeepMind’s research prototype exploring human-agent interaction starting with browser use.</li>
<li>
<a href="https://replit.com/ai">Replit Agent</a>: Josh’s favorite new AI app.</li>
<li>
<a href="https://www.harpercollins.com/products/the-lego-story-jens-andersen?variant=40828472852514">The Lego Story</a>: Book on the history of Lego.</li>
</ul><p><br></p><p>Hosted by: Ravi Gupta and Sonya Huang, Sequoia Capital </p>]]>
      </content:encoded>
      <itunes:duration>3076</itunes:duration>
      <guid isPermaLink="false"><![CDATA[0f4964c8-0371-11f0-a7de-d359cbe867ee]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI2961001947.mp3?updated=1742271117" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>How AI Breakout Harvey is Transforming Legal Services, with CEO Winston Weinberg</title>
      <description>Harvey CEO Winston Weinberg explains why success in legal AI requires more than just model capabilities—it demands deep process expertise that doesn’t exist online. He shares how Harvey balances rapid product development with earning trust from law firms through hyper-personalized demos and deep industry expertise. The discussion covers Harvey’s approach to product development—expanding specialized capabilities then collapsing them into unified workflows—and why focusing on complex work like international mergers creates the most defensible position in legal AI.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital</description>
      <pubDate>Tue, 11 Mar 2025 15:29:29 -0000</pubDate>
      <itunes:title>How AI Breakout Harvey is Transforming Legal Services, with CEO Winston Weinberg</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/0f14ec56-fe0e-11ef-b603-a35ac89367c0/image/9c625b4eca8ad0ab11e5c39ee4ad1743.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Harvey CEO Winston Weinberg explains why success in legal AI requires more than just model capabilities.</itunes:subtitle>
      <itunes:summary>Harvey CEO Winston Weinberg explains why success in legal AI requires more than just model capabilities—it demands deep process expertise that doesn’t exist online. He shares how Harvey balances rapid product development with earning trust from law firms through hyper-personalized demos and deep industry expertise. The discussion covers Harvey’s approach to product development—expanding specialized capabilities then collapsing them into unified workflows—and why focusing on complex work like international mergers creates the most defensible position in legal AI.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Harvey CEO Winston Weinberg explains why success in legal AI requires more than just model capabilities—it demands deep process expertise that doesn’t exist online. He shares how Harvey balances rapid product development with earning trust from law firms through hyper-personalized demos and deep industry expertise. The discussion covers Harvey’s approach to product development—expanding specialized capabilities then collapsing them into unified workflows—and why focusing on complex work like international mergers creates the most defensible position in legal AI.</p><p><br></p><p>Hosted by: Sonya Huang and Pat Grady, Sequoia Capital</p>]]>
      </content:encoded>
      <itunes:duration>3249</itunes:duration>
      <guid isPermaLink="false"><![CDATA[0f14ec56-fe0e-11ef-b603-a35ac89367c0]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI4850855251.mp3?updated=1741707222" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The AI Product Going Viral With Doctors: OpenEvidence, with CEO Daniel Nadler</title>
      <description>OpenEvidence is transforming how doctors access medical knowledge at the point of care, from the biggest medical establishments to small practices serving rural communities. Founder Daniel Nadler explains his team’s insight that training smaller, specialized AI models on peer-reviewed literature outperforms large general models for medical applications. He discusses how making the platform freely available to all physicians led to widespread organic adoption and strategic partnerships with publishers like the New England Journal of Medicine. In an industry where organizations move glacially, 10-20% of all U.S. doctors began using OpenEvidence overnight to find information buried deep in the long tail of new medical studies, to validate edge cases and improve diagnoses. Nadler emphasizes the importance of accuracy and transparency in AI healthcare applications.

Hosted by: Pat Grady, Sequoia Capital 

Mentioned in this episode: 


Do We Still Need Clinical Language Models?: Paper from OpenEvidence founders showing that small, specialized models outperformed large models for healthcare diagnostics


Chinchilla paper: Seminal 2022 paper about scaling laws in large language models


Understand: Ted Chiang sci-fi novella published in 1991</description>
      <pubDate>Tue, 04 Mar 2025 10:00:00 -0000</pubDate>
      <itunes:title>Is This Doctors’ New Favorite AI App? OpenEvidence, with CEO Daniel Nadler </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/1d2fd45e-f891-11ef-acd6-c3cb1a3969be/image/ec9e19e99c8d0c0a85f92847a61b2494.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>OpenEvidence is transforming how doctors access medical knowledge at the point of care, from the biggest medical establishments to small practices serving rural communities.</itunes:subtitle>
      <itunes:summary>OpenEvidence is transforming how doctors access medical knowledge at the point of care, from the biggest medical establishments to small practices serving rural communities. Founder Daniel Nadler explains his team’s insight that training smaller, specialized AI models on peer-reviewed literature outperforms large general models for medical applications. He discusses how making the platform freely available to all physicians led to widespread organic adoption and strategic partnerships with publishers like the New England Journal of Medicine. In an industry where organizations move glacially, 10-20% of all U.S. doctors began using OpenEvidence overnight to find information buried deep in the long tail of new medical studies, to validate edge cases and improve diagnoses. Nadler emphasizes the importance of accuracy and transparency in AI healthcare applications.

Hosted by: Pat Grady, Sequoia Capital 

Mentioned in this episode: 


Do We Still Need Clinical Language Models?: Paper from OpenEvidence founders showing that small, specialized models outperformed large models for healthcare diagnostics


Chinchilla paper: Seminal 2022 paper about scaling laws in large language models


Understand: Ted Chiang sci-fi novella published in 1991</itunes:summary>
      <content:encoded>
        <![CDATA[<p>OpenEvidence is transforming how doctors access medical knowledge at the point of care, from the biggest medical establishments to small practices serving rural communities. Founder Daniel Nadler explains his team’s insight that training smaller, specialized AI models on peer-reviewed literature outperforms large general models for medical applications. He discusses how making the platform freely available to all physicians led to widespread organic adoption and strategic partnerships with publishers like the New England Journal of Medicine. In an industry where organizations move glacially, 10-20% of all U.S. doctors began using OpenEvidence overnight to find information buried deep in the long tail of new medical studies, to validate edge cases and improve diagnoses. Nadler emphasizes the importance of accuracy and transparency in AI healthcare applications.</p><p><br></p><p>Hosted by: Pat Grady, Sequoia Capital </p><p><br></p><p>Mentioned in this episode: </p><ul>
<li>
<a href="https://arxiv.org/abs/2302.08091">Do We Still Need Clinical Language Models?</a>: Paper from OpenEvidence founders showing that small, specialized models outperformed large models for healthcare diagnostics</li>
<li>
<a href="https://arxiv.org/abs/2203.15556">Chinchilla paper</a>: Seminal 2022 paper about scaling laws in large language models</li>
<li>
<a href="https://en.wikipedia.org/wiki/Understand_(story)">Understand</a>: Ted Chiang sci-fi novella published in 1991</li>
</ul>]]>
      </content:encoded>
      <itunes:duration>3892</itunes:duration>
      <guid isPermaLink="false"><![CDATA[1d2fd45e-f891-11ef-acd6-c3cb1a3969be]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI7686673568.mp3?updated=1741101717" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>OpenAI’s Deep Research Team on Why Reinforcement Learning is the Future for AI Agents</title>
      <description>OpenAI’s Isa Fulford and Josh Tobin discuss how the company’s newest agent, Deep Research, represents a breakthrough in AI research capabilities by training models end-to-end rather than using hand-coded operational graphs. The product leads explain how high-quality training data and the o3 model’s reasoning abilities enable adaptable research strategies, and why OpenAI thinks Deep Research will capture a meaningful percentage of knowledge work. Key product decisions that build transparency and trust include citations and clarification flows. By compressing hours of work into minutes, Deep Research transforms what’s possible for many business and consumer use cases.

Hosted by: Sonya Huang and Lauren Reeder, Sequoia Capital 

Mentioned in this episode:

Yann LeCun’s Cake: An analogy Meta AI’s leader shared in his 2016 NIPS keynote</description>
      <pubDate>Tue, 25 Feb 2025 10:00:00 -0000</pubDate>
      <itunes:title>OpenAI’s Deep Research Team on Why Reinforcement Learning is the Future for AI Agents</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/e5145824-f2d7-11ef-8b0a-efc5a34cf4cf/image/dedfa6230c9899b4fccf2a2dd94aa363.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>OpenAI’s Isa Fulford and Josh Tobin discuss how the company’s newest agent represents a breakthrough in AI research capabilities by training models end-to-end rather than using rigid operational graphs. </itunes:subtitle>
      <itunes:summary>OpenAI’s Isa Fulford and Josh Tobin discuss how the company’s newest agent, Deep Research, represents a breakthrough in AI research capabilities by training models end-to-end rather than using hand-coded operational graphs. The product leads explain how high-quality training data and the o3 model’s reasoning abilities enable adaptable research strategies, and why OpenAI thinks Deep Research will capture a meaningful percentage of knowledge work. Key product decisions that build transparency and trust include citations and clarification flows. By compressing hours of work into minutes, Deep Research transforms what’s possible for many business and consumer use cases.

Hosted by: Sonya Huang and Lauren Reeder, Sequoia Capital 

Mentioned in this episode:

Yann LeCun’s Cake: An analogy Meta AI’s leader shared in his 2016 NIPS keynote</itunes:summary>
      <content:encoded>
        <![CDATA[<p>OpenAI’s Isa Fulford and Josh Tobin discuss how the company’s newest agent, Deep Research, represents a breakthrough in AI research capabilities by training models end-to-end rather than using hand-coded operational graphs. The product leads explain how high-quality training data and the o3 model’s reasoning abilities enable adaptable research strategies, and why OpenAI thinks Deep Research will capture a meaningful percentage of knowledge work. Key product decisions that build transparency and trust include citations and clarification flows. By compressing hours of work into minutes, Deep Research transforms what’s possible for many business and consumer use cases.</p><p><br></p><p>Hosted by: Sonya Huang and Lauren Reeder, Sequoia Capital </p><p><br></p><p>Mentioned in this episode:</p><ul><li>
<a href="https://x.com/ylecun/status/1731594763512557982">Yann LeCun’s Cake</a>: An analogy Meta AI’s leader shared in his 2016 NIPS keynote</li></ul>]]>
      </content:encoded>
      <itunes:duration>1965</itunes:duration>
      <guid isPermaLink="false"><![CDATA[e5145824-f2d7-11ef-8b0a-efc5a34cf4cf]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI7097750063.mp3?updated=1740442579" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Palo Alto Networks’ Nikesh Arora: AI, Security and the New World Order</title>
<description>Palo Alto Networks’ CEO Nikesh Arora dispels DeepSeek hype by detailing all of the guardrails enterprises need to have in place to give AI agents “arms and legs.” No matter the model, deploying applications for precision use cases means superimposing better controls. Arora emphasizes that the real challenge isn’t just blocking threats but matching the accelerated pace of AI-powered attacks, requiring a fundamental shift from prevention-focused to real-time detection and response systems. CISOs are risk managers, but legacy companies competing with more risk-tolerant startups need to move quickly and embrace change.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 

Mentioned in this episode:

Cortex XSIAM: Security operations and incident remediation platform from Palo Alto Networks</description>
      <pubDate>Tue, 18 Feb 2025 10:00:00 -0000</pubDate>
      <itunes:title>Palo Alto Networks’ Nikesh Arora: AI, Security and the New World Order</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:season>1</itunes:season>
      <itunes:episode>30</itunes:episode>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/335ce02e-ed7d-11ef-8371-df24fdb17d20/image/294d0f57ca66e4e0558b6ab0138490b2.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Palo Alto Networks’ CEO Nikesh Arora dispels DeepSeek hype by detailing all of the guardrails enterprises need to have in place to give AI agents “arms and legs.”</itunes:subtitle>
<itunes:summary>Palo Alto Networks’ CEO Nikesh Arora dispels DeepSeek hype by detailing all of the guardrails enterprises need to have in place to give AI agents “arms and legs.” No matter the model, deploying applications for precision use cases means superimposing better controls. Arora emphasizes that the real challenge isn’t just blocking threats but matching the accelerated pace of AI-powered attacks, requiring a fundamental shift from prevention-focused to real-time detection and response systems. CISOs are risk managers, but legacy companies competing with more risk-tolerant startups need to move quickly and embrace change.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 

Mentioned in this episode:

Cortex XSIAM: Security operations and incident remediation platform from Palo Alto Networks</itunes:summary>
      <content:encoded>
<![CDATA[<p>Palo Alto Networks’ CEO Nikesh Arora dispels DeepSeek hype by detailing all of the guardrails enterprises need to have in place to give AI agents “arms and legs.” No matter the model, deploying applications for precision use cases means superimposing better controls. Arora emphasizes that the real challenge isn’t just blocking threats but matching the accelerated pace of AI-powered attacks, requiring a fundamental shift from prevention-focused to real-time detection and response systems. CISOs are risk managers, but legacy companies competing with more risk-tolerant startups need to move quickly and embrace change. </p><p><br></p><p>Hosted by: Sonya Huang and Pat Grady, Sequoia Capital </p><p><br></p><p>Mentioned in this episode:</p><ul><li>
<a href="https://www.paloaltonetworks.com/cortex/cortex-xsiam">Cortex XSIAM</a>: Security operations and incident remediation platform from Palo Alto Networks</li></ul>]]>
      </content:encoded>
      <itunes:duration>3608</itunes:duration>
      <guid isPermaLink="false"><![CDATA[335ce02e-ed7d-11ef-8371-df24fdb17d20]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI7104965903.mp3?updated=1741376579" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>MongoDB’s Sahir Azam: Vector Databases and the Data Structure of AI</title>
<description>MongoDB product leader Sahir Azam explains how vector databases have evolved from semantic search to become the essential memory and state layer for AI applications. He describes his view of how AI is transforming software development generally, and how combining vectors, graphs and traditional data structures enables high-quality retrieval needed for mission-critical enterprise AI use cases. Drawing from MongoDB's successful cloud transformation, Azam shares his vision for democratizing AI development by making sophisticated capabilities accessible to mainstream developers through integrated tools and abstractions.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 

Mentioned in this episode:


Introducing ambient agents: Blog post by LangChain on a new UX pattern where AI agents can listen to an event stream and act on it


Google Gemini Deep Research: Sahir enjoys its amazing product experience


Perplexity: AI search app that Sahir admires for its product craft


Snipd: AI-powered podcast app Sahir likes</description>
      <pubDate>Thu, 13 Feb 2025 10:00:00 -0000</pubDate>
      <itunes:title>MongoDB’s Sahir Azam: Vector Databases and the Data Structure of AI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/eca071d6-e7f1-11ef-9d5b-3b420a3cbea7/image/0e13763ffbf58800ee64744ca68f5246.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
<itunes:summary>MongoDB product leader Sahir Azam explains how vector databases have evolved from semantic search to become the essential memory and state layer for AI applications. He describes his view of how AI is transforming software development generally, and how combining vectors, graphs and traditional data structures enables high-quality retrieval needed for mission-critical enterprise AI use cases. Drawing from MongoDB's successful cloud transformation, Azam shares his vision for democratizing AI development by making sophisticated capabilities accessible to mainstream developers through integrated tools and abstractions.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 

Mentioned in this episode:


Introducing ambient agents: Blog post by LangChain on a new UX pattern where AI agents can listen to an event stream and act on it


Google Gemini Deep Research: Sahir enjoys its amazing product experience


Perplexity: AI search app that Sahir admires for its product craft


Snipd: AI-powered podcast app Sahir likes</itunes:summary>
      <content:encoded>
<![CDATA[<p>MongoDB product leader Sahir Azam explains how vector databases have evolved from semantic search to become the essential memory and state layer for AI applications. He describes his view of how AI is transforming software development generally, and how combining vectors, graphs and traditional data structures enables high-quality retrieval needed for mission-critical enterprise AI use cases. Drawing from MongoDB's successful cloud transformation, Azam shares his vision for democratizing AI development by making sophisticated capabilities accessible to mainstream developers through integrated tools and abstractions.</p><p><br></p><p>Hosted by: Sonya Huang and Pat Grady, Sequoia Capital </p><p><br></p><p>Mentioned in this episode:</p><ul>
<li>
<a href="https://blog.langchain.dev/introducing-ambient-agents/">Introducing ambient agents</a>: Blog post by LangChain on a new UX pattern where AI agents can listen to an event stream and act on it </li>
<li>
<a href="https://blog.google/products/gemini/google-gemini-deep-research/">Google Gemini Deep Research</a>: Sahir enjoys its amazing product experience</li>
<li>
<a href="https://www.perplexity.ai/">Perplexity</a>: AI search app that Sahir admires for its product craft</li>
<li>
<a href="https://www.snipd.com/">Snipd</a>: AI-powered podcast app Sahir likes</li>
</ul>]]>
      </content:encoded>
      <itunes:duration>2666</itunes:duration>
      <guid isPermaLink="false"><![CDATA[eca071d6-e7f1-11ef-9d5b-3b420a3cbea7]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI2602273912.mp3?updated=1739399014" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Roblox Studio Head Stef Corazza: Using AI to Empower Creators</title>
      <description>Stef Corazza leads generative AI development at Roblox after previously building Adobe’s 3D and AR platforms. His technical expertise, combined with Roblox’s unique relationship with its users, has led to the infusion of AI into its creation tools. Roblox has assembled the world’s largest multimodal dataset. Stef previews the Roblox Assistant and the company’s new 3D foundation model, while emphasizing the importance of maintaining positive experiences and civility on the platform. 

Mentioned in this episode:


Driving Empire: A Roblox car racing game Stef particularly enjoys


RDC: Roblox Developer Conference


Ego.live: Roblox app to create and share synthetic worlds populated with human-like generative agents and simulated communities


PINNs: Physics Informed Neural Networks


ControlNet: A model for controlling image diffusion by conditioning on an additional input image, which Stef says can be used as a 2.5D approach to 3D generation.


Neural rendering: A combination of deep learning with computer graphics principles developed by Nvidia in its RTX platform


Hosted by: Konstantine Buhler and Sonya Huang, Sequoia Capital</description>
      <pubDate>Tue, 04 Feb 2025 10:00:00 -0000</pubDate>
      <itunes:title>Roblox Studio Head Stef Corazza: Using AI to Empower Creators</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4ea1d540-e280-11ef-99dc-13b91512a718/image/67db250951b4a24f3e57f353679541fd.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Stef Corazza leads generative AI development at Roblox after previously building Adobe’s 3D and AR platforms. His technical expertise, combined with Roblox’s unique relationship with its users, has led to the infusion of AI into its creation tools. Roblox has assembled the world’s largest multimodal dataset. Stef previews the Roblox Assistant and the company’s new 3D foundation model, while emphasizing the importance of maintaining positive experiences and civility on the platform. 

Mentioned in this episode:


Driving Empire: A Roblox car racing game Stef particularly enjoys


RDC: Roblox Developer Conference


Ego.live: Roblox app to create and share synthetic worlds populated with human-like generative agents and simulated communities


PINNs: Physics Informed Neural Networks


ControlNet: A model for controlling image diffusion by conditioning on an additional input image, which Stef says can be used as a 2.5D approach to 3D generation.


Neural rendering: A combination of deep learning with computer graphics principles developed by Nvidia in its RTX platform


Hosted by: Konstantine Buhler and Sonya Huang, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Stef Corazza leads generative AI development at Roblox after previously building Adobe’s 3D and AR platforms. His technical expertise, combined with Roblox’s unique relationship with its users, has led to the infusion of AI into its creation tools. Roblox has assembled the world’s largest multimodal dataset. Stef previews the Roblox Assistant and the company’s new 3D foundation model, while emphasizing the importance of maintaining positive experiences and civility on the platform. </p><p><br></p><p>Mentioned in this episode:</p><ul>
<li>
<a href="https://www.roblox.com/games/3351674303/AUDI-Driving-Empire-Car-Racing">Driving Empire</a>: A Roblox car racing game Stef particularly enjoys</li>
<li>
<a href="https://corp.roblox.com/newsroom/2024/09/rdc-2024-robloxs-next-frontier">RDC</a>: Roblox Developer Conference</li>
<li>
<a href="https://www.egoai.com/">Ego.live</a>: Roblox app to create and share synthetic worlds populated with human-like generative agents and simulated communities</li>
<li>
<a href="https://en.wikipedia.org/wiki/Physics-informed_neural_networks">PINNs</a>: Physics Informed Neural Networks</li>
<li>
<a href="https://huggingface.co/docs/diffusers/en/using-diffusers/controlnet">ControlNet</a>: A model for controlling image diffusion by conditioning on an additional input image, which Stef says can be used as a 2.5D approach to 3D generation.</li>
<li>
<a href="https://developer.nvidia.com/blog/nvidia-rtx-neural-rendering-introduces-next-era-of-ai-powered-graphics-innovation/">Neural rendering</a>: A combination of deep learning with computer graphics principles developed by Nvidia in its RTX platform</li>
</ul><p><br></p><p>Hosted by: Konstantine Buhler and Sonya Huang, Sequoia Capital</p>]]>
      </content:encoded>
      <itunes:duration>3286</itunes:duration>
      <guid isPermaLink="false"><![CDATA[4ea1d540-e280-11ef-99dc-13b91512a718]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI3267265498.mp3?updated=1738629084" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>ReflectionAI Founder Ioannis Antonoglou: From AlphaGo to AGI</title>
<description>Ioannis Antonoglou, founding engineer at DeepMind and co-founder of ReflectionAI, has seen the triumphs of reinforcement learning firsthand. From AlphaGo to AlphaZero and MuZero, Ioannis has built the most powerful agents in the world. Ioannis breaks down key moments in AlphaGo's game against Lee Sedol (Moves 37 and 78), the importance of self-play and the impact of scale, reliability, planning and in-context learning as core factors that will unlock the next level of progress in AI.

Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital

Mentioned in this episode:


PPO: Proximal Policy Optimization, a reinforcement learning algorithm developed by OpenAI and benchmarked in game environments. Also used by OpenAI for RLHF in ChatGPT.


MuJoCo: Open source physics engine used to develop PPO


Monte Carlo Tree Search: Heuristic search algorithm used in AlphaGo as well as video compression for YouTube and the self-driving system at Tesla


AlphaZero: The DeepMind model that taught itself from scratch how to master the games of chess, shogi and Go


MuZero: The DeepMind follow-up to AlphaZero that mastered games without knowing the rules and was able to plan winning strategies in unknown environments


AlphaChem: Chemical Synthesis Planning with Tree Search and Deep Neural Network Policies


DQN: Deep Q-Network, introduced in the 2013 paper Playing Atari with Deep Reinforcement Learning



AlphaFold: DeepMind model for predicting protein structures for which Demis Hassabis, John Jumper and David Baker won the 2024 Nobel Prize in Chemistry</description>
      <pubDate>Tue, 28 Jan 2025 10:00:00 -0000</pubDate>
      <itunes:title>ReflectionAI Founder Ioannis Antonoglou: From AlphaGo to AGI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/ca77b012-dd18-11ef-84b1-7302ab161486/image/7f1ce465a4c14a128517ece55efd6da0.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Ioannis Antonoglou, founding engineer at DeepMind and co-founder of ReflectionAI, has seen the triumphs of reinforcement learning firsthand. From AlphaGo to AlphaZero and MuZero, Ioannis has built the most powerful agents in the world. Ioannis breaks down key moments in AlphaGo's match against Lee Sedol (Moves 37 and 78), the importance of self-play, and the impact of scale, reliability, planning and in-context learning as core factors that will unlock the next level of progress in AI.

Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital

Mentioned in this episode:


PPO: Proximal Policy Optimization algorithm developed by DeepMind in game environments. Also used by OpenAI for RLHF in ChatGPT.


MuJoCo: Open source physics engine used to develop PPO


Monte Carlo Tree Search: Heuristic search algorithm used in AlphaGo as well as in YouTube's video compression and Tesla's self-driving system


AlphaZero: The DeepMind model that taught itself from scratch how to master the games of chess, shogi and Go


MuZero: The DeepMind follow-up to AlphaZero that mastered games without knowing the rules and was able to plan winning strategies in unknown environments


AlphaChem: Chemical Synthesis Planning with Tree Search and Deep Neural Network Policies


DQN: Deep Q-Network, introduced in the 2013 paper, Playing Atari with Deep Reinforcement Learning



AlphaFold: DeepMind model for predicting protein structures for which Demis Hassabis, John Jumper and David Baker won the 2024 Nobel Prize in Chemistry</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Ioannis Antonoglou, founding engineer at DeepMind and co-founder of ReflectionAI, has seen the triumphs of reinforcement learning firsthand. From AlphaGo to AlphaZero and MuZero, Ioannis has built the most powerful agents in the world. Ioannis breaks down key moments in AlphaGo's match against Lee Sedol (Moves 37 and 78), the importance of self-play, and the impact of scale, reliability, planning and in-context learning as core factors that will unlock the next level of progress in AI.</p><p><br></p><p>Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital</p><p><br></p><p>Mentioned in this episode:</p><ul>
<li>
<a href="https://spinningup.openai.com/en/latest/algorithms/ppo.html">PPO</a>: Proximal Policy Optimization algorithm developed by DeepMind in game environments. Also used by OpenAI for RLHF in ChatGPT.</li>
<li>
<a href="https://mujoco.org/">MuJoCo</a>: Open source physics engine used to develop PPO</li>
<li>
<a href="https://en.wikipedia.org/wiki/Monte_Carlo_tree_search">Monte Carlo Tree Search</a>: Heuristic search algorithm used in <a href="https://deepmind.google/research/breakthroughs/alphago/">AlphaGo</a> as well as in YouTube's video compression and Tesla's self-driving system</li>
<li>
<a href="https://deepmind.google/discover/blog/alphazero-shedding-new-light-on-chess-shogi-and-go/">AlphaZero</a>: The DeepMind model that taught itself from scratch how to master the games of chess, shogi and Go</li>
<li>
<a href="https://deepmind.google/discover/blog/muzero-mastering-go-chess-shogi-and-atari-without-rules/">MuZero</a>: The DeepMind follow-up to AlphaZero that mastered games without knowing the rules and was able to plan winning strategies in unknown environments</li>
<li>
<a href="https://arxiv.org/abs/1702.00020">AlphaChem</a>: Chemical Synthesis Planning with Tree Search and Deep Neural Network Policies</li>
<li>
<a href="https://paperswithcode.com/method/dqn">DQN</a>: Deep Q-Network, introduced in the 2013 paper, <a href="https://arxiv.org/abs/1312.5602v1">Playing Atari with Deep Reinforcement Learning</a>
</li>
<li>
<a href="https://deepmind.google/technologies/alphafold/">AlphaFold</a>: DeepMind model for predicting protein structures for which Demis Hassabis, John Jumper and David Baker won the 2024 Nobel Prize in Chemistry</li>
</ul>]]>
      </content:encoded>
      <itunes:duration>3149</itunes:duration>
      <guid isPermaLink="false"><![CDATA[ca77b012-dd18-11ef-84b1-7302ab161486]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI9834139396.mp3?updated=1738028703" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Kumo’s Hema Raghavan: Turning Graph AI into ROI</title>
      <description>Hema Raghavan is co-founder of Kumo, a company that makes graph neural networks accessible to enterprises by connecting to their relational data stored in Snowflake and Databricks. Hema talks about how running GNNs on GPUs has led to breakthroughs in performance as well as the query language Kumo developed to help companies predict future data points. Although approachable for non-technical users, the product provides full control for data scientists who use Kumo to automate time-consuming feature engineering pipelines.

Mentioned in this episode:


Graph Neural Networks: Learning mechanism for data in graph format, the basis of the Kumo product


Graph RAG: Popular extension of retrieval-augmented generation using GNNs


LiGNN: Graph Neural Networks at LinkedIn paper 


KDD: Knowledge Discovery and Data Mining Conference


Hosted by: Konstantine Buhler and Sonya Huang, Sequoia Capital</description>
      <pubDate>Tue, 21 Jan 2025 10:00:00 -0000</pubDate>
      <itunes:title>Kumo’s Hema Raghavan: Turning Graph AI into ROI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/93587a04-d510-11ef-acae-2bdd53651d78/image/585b72b7a6bf626d7d5df174fbd73a90.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Hema Raghavan is co-founder of Kumo, a company that makes graph neural networks accessible to enterprises by connecting to their relational data stored in Snowflake and Databricks. Hema talks about how running GNNs on GPUs has led to breakthroughs in performance as well as the query language Kumo developed to help companies predict future data points. Although approachable for non-technical users, the product provides full control for data scientists who use Kumo to automate time-consuming feature engineering pipelines.

Mentioned in this episode:


Graph Neural Networks: Learning mechanism for data in graph format, the basis of the Kumo product


Graph RAG: Popular extension of retrieval-augmented generation using GNNs


LiGNN: Graph Neural Networks at LinkedIn paper 


KDD: Knowledge Discovery and Data Mining Conference


Hosted by: Konstantine Buhler and Sonya Huang, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Hema Raghavan is co-founder of Kumo, a company that makes graph neural networks accessible to enterprises by connecting to their relational data stored in Snowflake and Databricks. Hema talks about how running GNNs on GPUs has led to breakthroughs in performance as well as the query language Kumo developed to help companies predict future data points. Although approachable for non-technical users, the product provides full control for data scientists who use Kumo to automate time-consuming feature engineering pipelines.</p><p><br></p><p>Mentioned in this episode:</p><ul>
<li>
<a href="https://distill.pub/2021/gnn-intro/">Graph Neural Networks</a>: Learning mechanism for data in graph format, the basis of the Kumo product</li>
<li>
<a href="https://www.microsoft.com/en-us/research/publication/from-local-to-global-a-graph-rag-approach-to-query-focused-summarization/">Graph RAG</a>: Popular extension of retrieval-augmented generation using GNNs</li>
<li>
<a href="https://arxiv.org/abs/2402.11139">LiGNN</a>: Graph Neural Networks at LinkedIn paper </li>
<li>
<a href="https://kdd2024.kdd.org/">KDD</a>: Knowledge Discovery and Data Mining Conference</li>
</ul><p><br></p><p>Hosted by: Konstantine Buhler and Sonya Huang, Sequoia Capital</p>]]>
      </content:encoded>
      <itunes:duration>3126</itunes:duration>
      <guid isPermaLink="false"><![CDATA[93587a04-d510-11ef-acae-2bdd53651d78]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI7965589198.mp3?updated=1737156040" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Databricks Founder Ion Stoica: Turning Academic Open Source into Startup Success</title>
      <description>Berkeley professor Ion Stoica, co-founder of Databricks and Anyscale, transformed the open source projects Spark and Ray into successful AI infrastructure companies. He talks about what mattered most for Databricks' success -- the focus on making Spark win and making Databricks the best place to run Spark. He highlights the importance of striking key partnerships -- the Microsoft partnership in particular that accelerated Databricks' growth and contributed to Spark's dominance among data scientists and AI engineers. He also shares his perspective on finding new problems to work on, which holds lessons for aspiring founders and builders: 1) building systems in new areas that, if widely adopted, put you in the best position to understand the new problem space, and 2) focusing on a problem that is more important tomorrow than today.

Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital

Mentioned in this episode: 


Spark: The open source platform for data engineering that Databricks was originally based on.


Ray: Open source framework to manage, execute and optimize compute needs across AI workloads, now productized through Anyscale


MosaicML: Generative AI startup founded by Naveen Rao that Databricks acquired in 2023.


Unity Catalog: Data and AI governance solution from Databricks.


CIB Berkeley: Multi-strategy hedge fund at UC Berkeley that commercializes research in the UC system.


Hadoop: A long-time leading platform for large scale distributed computing.


vLLM and Chatbot Arena: Two of Ion’s students’ projects that he wanted to highlight.</description>
      <pubDate>Tue, 14 Jan 2025 10:00:00 -0000</pubDate>
      <itunes:title>Databricks Founder Ion Stoica: Turning Academic Open Source into Startup Success</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/267c6a94-cd54-11ef-8cb9-27942977ce88/image/498432949334170b8b9bf986cfd40fb0.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Berkeley professor Ion Stoica, co-founder of Databricks and Anyscale, transformed the open source projects Spark and Ray into successful AI infrastructure companies.</itunes:subtitle>
      <itunes:summary>Berkeley professor Ion Stoica, co-founder of Databricks and Anyscale, transformed the open source projects Spark and Ray into successful AI infrastructure companies. He talks about what mattered most for Databricks' success -- the focus on making Spark win and making Databricks the best place to run Spark. He highlights the importance of striking key partnerships -- the Microsoft partnership in particular that accelerated Databricks' growth and contributed to Spark's dominance among data scientists and AI engineers. He also shares his perspective on finding new problems to work on, which holds lessons for aspiring founders and builders: 1) building systems in new areas that, if widely adopted, put you in the best position to understand the new problem space, and 2) focusing on a problem that is more important tomorrow than today.

Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital

Mentioned in this episode: 


Spark: The open source platform for data engineering that Databricks was originally based on.


Ray: Open source framework to manage, execute and optimize compute needs across AI workloads, now productized through Anyscale


MosaicML: Generative AI startup founded by Naveen Rao that Databricks acquired in 2023.


Unity Catalog: Data and AI governance solution from Databricks.


CIB Berkeley: Multi-strategy hedge fund at UC Berkeley that commercializes research in the UC system.


Hadoop: A long-time leading platform for large scale distributed computing.


vLLM and Chatbot Arena: Two of Ion’s students’ projects that he wanted to highlight.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Berkeley professor Ion Stoica, co-founder of Databricks and Anyscale, transformed the open source projects Spark and Ray into successful AI infrastructure companies. He talks about what mattered most for Databricks' success -- the focus on making Spark win and making Databricks the best place to run Spark. He highlights the importance of striking key partnerships -- the Microsoft partnership in particular that accelerated Databricks' growth and contributed to Spark's dominance among data scientists and AI engineers. He also shares his perspective on finding new problems to work on, which holds lessons for aspiring founders and builders: 1) building systems in new areas that, if widely adopted, put you in the best position to understand the new problem space, and 2) focusing on a problem that is more important tomorrow than today.</p><p><br></p><p>Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital</p><p><br></p><p>Mentioned in this episode: </p><ul>
<li>
<a href="https://spark.apache.org/">Spark</a>: The open source platform for data engineering that Databricks was originally based on.</li>
<li>
<a href="https://www.ray.io/">Ray</a>: Open source framework to manage, execute and optimize compute needs across AI workloads, now productized through Anyscale</li>
<li>
<a href="https://www.databricks.com/research/mosaic">MosaicML</a>: Generative AI startup founded by Naveen Rao that Databricks acquired in 2023.</li>
<li>
<a href="https://www.databricks.com/product/unity-catalog">Unity Catalog</a>: Data and AI governance solution from Databricks.</li>
<li>
<a href="https://www.ciberkeley.com/home">CIB Berkeley</a>: Multi-strategy hedge fund at UC Berkeley that commercializes research in the UC system.</li>
<li>
<a href="https://hadoop.apache.org/">Hadoop</a>: A long-time leading platform for large scale distributed computing.</li>
<li>
<a href="https://github.com/vllm-project/vllm">vLLM</a> and <a href="https://lmarena.ai/">Chatbot Arena</a>: Two of Ion’s students’ projects that he wanted to highlight.</li>
</ul>]]>
      </content:encoded>
      <itunes:duration>3605</itunes:duration>
      <guid isPermaLink="false"><![CDATA[267c6a94-cd54-11ef-8cb9-27942977ce88]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI6710068648.mp3?updated=1736797296" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>XBOW CEO and GitHub Copilot Creator Oege de Moor: Cracking the Code on Offensive Security With AI</title>
      <description>Oege de Moor, the creator of GitHub Copilot, discusses how XBOW’s AI offensive security system matches and even outperforms top human penetration testers, completing security assessments in minutes instead of days. The team’s speed and focus are transforming the niche market of pen testing with an always-on service-as-a-software platform. Oege describes how he is building a large and sustainable business while also creating a product that will “protect all the software in the free world.” XBOW shows how AI is essential for protecting software systems as the amount of AI-generated code increases along with the scale and sophistication of cyber threats.

Hosted by: Konstantine Buhler and Sonya Huang, Sequoia Capital

Mentioned in this episode: 


Semmle: Oege’s previous startup, a code analysis tool to secure software, acquired in 2019 by GitHub


Nico Waisman: Head of security at XBOW, previously a researcher at Semmle


The Bitter Lesson: Highly influential post by Richard Sutton


HackerOne: Cybersecurity company that runs one of the largest bug bounty programs


Suno: AI songwriting app that Oege loves


Machines of Loving Grace: Essay by Anthropic founder, Dario Amodei</description>
      <pubDate>Tue, 10 Dec 2024 10:00:00 -0000</pubDate>
      <itunes:title>XBOW CEO and GitHub Copilot Creator Oege de Moor: Cracking the Code on Offensive Security With AI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/b42b92f0-b67c-11ef-84e3-3bb54b15f0b5/image/b5d98ee1cf49464ca4be519c2d51a9a7.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Oege de Moor, the creator of GitHub Copilot, discusses how XBOW’s AI offensive security system matches and even outperforms top human penetration testers, completing security assessments in minutes instead of days. </itunes:subtitle>
      <itunes:summary>Oege de Moor, the creator of GitHub Copilot, discusses how XBOW’s AI offensive security system matches and even outperforms top human penetration testers, completing security assessments in minutes instead of days. The team’s speed and focus are transforming the niche market of pen testing with an always-on service-as-a-software platform. Oege describes how he is building a large and sustainable business while also creating a product that will “protect all the software in the free world.” XBOW shows how AI is essential for protecting software systems as the amount of AI-generated code increases along with the scale and sophistication of cyber threats.

Hosted by: Konstantine Buhler and Sonya Huang, Sequoia Capital

Mentioned in this episode: 


Semmle: Oege’s previous startup, a code analysis tool to secure software, acquired in 2019 by GitHub


Nico Waisman: Head of security at XBOW, previously a researcher at Semmle


The Bitter Lesson: Highly influential post by Richard Sutton


HackerOne: Cybersecurity company that runs one of the largest bug bounty programs


Suno: AI songwriting app that Oege loves


Machines of Loving Grace: Essay by Anthropic founder, Dario Amodei</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Oege de Moor, the creator of GitHub Copilot, discusses how XBOW’s AI offensive security system matches and even outperforms top human penetration testers, completing security assessments in minutes instead of days. The team’s speed and focus are transforming the niche market of pen testing with an always-on service-as-a-software platform. Oege describes how he is building a large and sustainable business while also creating a product that will “protect all the software in the free world.” XBOW shows how AI is essential for protecting software systems as the amount of AI-generated code increases along with the scale and sophistication of cyber threats.</p><p><br></p><p>Hosted by: Konstantine Buhler and Sonya Huang, Sequoia Capital</p><p><br></p><p>Mentioned in this episode: </p><ul>
<li>
<a href="https://github.com/semmle">Semmle</a>: Oege’s previous startup, a code analysis tool to secure software, acquired in 2019 by GitHub</li>
<li>
<a href="https://www.linkedin.com/in/nwaisman/?originalSubdomain=ar">Nico Waisman</a>: Head of security at XBOW, previously a researcher at Semmle</li>
<li>
<a href="http://www.incompleteideas.net/IncIdeas/BitterLesson.html">The Bitter Lesson</a>: Highly influential post by Richard Sutton</li>
<li>
<a href="https://www.hackerone.com/">HackerOne</a>: Cybersecurity company that runs one of the largest bug bounty programs</li>
<li>
<a href="https://suno.com/">Suno</a>: AI songwriting app that Oege loves</li>
<li>
<a href="https://darioamodei.com/machines-of-loving-grace">Machines of Loving Grace</a>: Essay by Anthropic founder, Dario Amodei</li>
</ul><p><br></p>]]>
      </content:encoded>
      <itunes:duration>3097</itunes:duration>
      <guid isPermaLink="false"><![CDATA[b42b92f0-b67c-11ef-84e3-3bb54b15f0b5]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI5694805221.mp3?updated=1733783520" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Ramp CEO Eric Glyman: Using AI to Build “Self-Driving Money”</title>
      <description>When ChatGPT ushered in a new paradigm of AI in everyday use, many companies attempted to adapt to the new paradigm by rushing to add chat interfaces to their products. Eric has a different take—he doesn’t think chatbots are the right form factor for everything. He thinks “zero-touch” automation that works invisibly in the background can be more valuable in many cases. He cites self-driving cars as an analogy—or in this case, “self-driving money.” Ramp is a new kind of finance management company for businesses, offering AI-powered financial tools to help companies handle spending and expense processes. We’ll hear why Eric thinks AI that you never see is one of the most powerful instruments for reducing time spent on drudgery and unlocking more time for meaningful work.  

Hosted by: Ravi Gupta and Sonya Huang, Sequoia Capital

Mentioned in this episode:


Paribus: Glyman’s previous company, acquired by Capital One in 2016


Karim Atiyeh: Cofounder and CTO at Ramp and Glyman’s cofounder at Paribus


Devin: AI agent product from Cognition Labs and Glyman’s favorite AI app


Hit Refresh: Book by Satya Nadella</description>
      <pubDate>Tue, 03 Dec 2024 10:00:00 -0000</pubDate>
      <itunes:title>Ramp CEO Eric Glyman: Using AI to Build “Self-Driving Money”</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5cdbbbb8-ac24-11ef-9d0d-ff37896ee911/image/6c5ef0f847ee757302d80872460a6a7b.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>When ChatGPT ushered in a new paradigm of AI in everyday use, many companies attempted to adapt to the new paradigm by rushing to add chat interfaces to their products.</itunes:subtitle>
      <itunes:summary>When ChatGPT ushered in a new paradigm of AI in everyday use, many companies attempted to adapt to the new paradigm by rushing to add chat interfaces to their products. Eric has a different take—he doesn’t think chatbots are the right form factor for everything. He thinks “zero-touch” automation that works invisibly in the background can be more valuable in many cases. He cites self-driving cars as an analogy—or in this case, “self-driving money.” Ramp is a new kind of finance management company for businesses, offering AI-powered financial tools to help companies handle spending and expense processes. We’ll hear why Eric thinks AI that you never see is one of the most powerful instruments for reducing time spent on drudgery and unlocking more time for meaningful work.  

Hosted by: Ravi Gupta and Sonya Huang, Sequoia Capital

Mentioned in this episode:


Paribus: Glyman’s previous company, acquired by Capital One in 2016


Karim Atiyeh: Cofounder and CTO at Ramp and Glyman’s cofounder at Paribus


Devin: AI agent product from Cognition Labs and Glyman’s favorite AI app


Hit Refresh: Book by Satya Nadella</itunes:summary>
      <content:encoded>
        <![CDATA[<p>When ChatGPT ushered in a new paradigm of AI in everyday use, many companies attempted to adapt to the new paradigm by rushing to add chat interfaces to their products. Eric has a different take—he doesn’t think chatbots are the right form factor for everything. He thinks “zero-touch” automation that works invisibly in the background can be more valuable in many cases. He cites self-driving cars as an analogy—or in this case, “self-driving money.” Ramp is a new kind of finance management company for businesses, offering AI-powered financial tools to help companies handle spending and expense processes. We’ll hear why Eric thinks AI <em>that you never see</em> is one of the most powerful instruments for reducing time spent on drudgery and unlocking more time for meaningful work.  </p><p><br></p><p>Hosted by: Ravi Gupta and Sonya Huang, Sequoia Capital</p><p><br></p><p>Mentioned in this episode:</p><ul>
<li>
<a href="https://en.wikipedia.org/wiki/Paribus">Paribus</a>: Glyman’s previous company, acquired by Capital One in 2016</li>
<li>
<a href="https://app.sequoiacap.com/companies/667219/">Karim Atiyeh</a>: Cofounder and CTO at Ramp and Glyman’s cofounder at Paribus</li>
<li>
<a href="https://devin.ai/">Devin</a>: AI agent product from Cognition Labs and Glyman’s favorite AI app</li>
<li>
<a href="https://news.microsoft.com/hitrefresh/">Hit Refresh</a>: Book by Satya Nadella</li>
</ul>]]>
      </content:encoded>
      <itunes:duration>2328</itunes:duration>
      <guid isPermaLink="false"><![CDATA[5cdbbbb8-ac24-11ef-9d0d-ff37896ee911]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI2257299052.mp3?updated=1733170160" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Dust’s Gabriel Hubert and Stanislas Polu: Getting the Most From AI With Multiple Custom Agents</title>
      <description>Founded in early 2023 after spending years at Stripe and OpenAI, Gabriel Hubert and Stanislas Polu started Dust with the view that one model will not rule them all, and that multi-model integration will be key to getting the most value out of AI assistants. In this episode we’ll hear why they believe the proprietary data you have in silos will be key to unlocking the full power of AI, get their perspective on the evolving model landscape, and how AI can augment rather than replace human capabilities.

Hosted by: Konstantine Buhler and Pat Grady, Sequoia Capital

00:00 - Introduction
02:16 - One model will not rule them all 
07:15 - Reasoning breakthroughs
11:15 - Trends in AI models
13:32 - The future of the open source ecosystem
16:16 - Model quality and performance
21:44 - “No GPUs before PMF”
27:24 - Dust in action 
37:40 - How do you find “the makers”
42:36 - The beliefs Dust lives by
50:03 - Keeping the human in the loop
52:33 - Second time founders
56:15 - Lightning round</description>
      <pubDate>Tue, 26 Nov 2024 10:00:00 -0000</pubDate>
      <itunes:title>Dust’s Gabriel Hubert and Stanislas Polu: Getting the Most From AI With Multiple Custom Agents</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/c662fd26-ab75-11ef-aa14-ff49d9592c89/image/81949763677172f942c012222e49dfdb.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Founded in early 2023 after spending years at Stripe and OpenAI, Gabriel Hubert and Stanislas Polu started Dust with the view that one model will not rule them all,</itunes:subtitle>
      <itunes:summary>Founded in early 2023 after spending years at Stripe and OpenAI, Gabriel Hubert and Stanislas Polu started Dust with the view that one model will not rule them all, and that multi-model integration will be key to getting the most value out of AI assistants. In this episode we’ll hear why they believe the proprietary data you have in silos will be key to unlocking the full power of AI, get their perspective on the evolving model landscape, and how AI can augment rather than replace human capabilities.

Hosted by: Konstantine Buhler and Pat Grady, Sequoia Capital

00:00 - Introduction
02:16 - One model will not rule them all 
07:15 - Reasoning breakthroughs
11:15 - Trends in AI models
13:32 - The future of the open source ecosystem
16:16 - Model quality and performance
21:44 - “No GPUs before PMF”
27:24 - Dust in action 
37:40 - How do you find “the makers”
42:36 - The beliefs Dust lives by
50:03 - Keeping the human in the loop
52:33 - Second time founders
56:15 - Lightning round</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Founded in early 2023 after spending years at Stripe and OpenAI, Gabriel Hubert and Stanislas Polu started Dust with the view that one model will not rule them all, and that multi-model integration will be key to getting the most value out of AI assistants. In this episode we’ll hear why they believe the proprietary data you have in silos will be key to unlocking the full power of AI, get their perspective on the evolving model landscape, and how AI can augment rather than replace human capabilities.</p><p><br></p><p>Hosted by: Konstantine Buhler and Pat Grady, Sequoia Capital</p><p><br></p><p>00:00 - Introduction</p><p>02:16 - One model will not rule them all </p><p>07:15 - Reasoning breakthroughs</p><p>11:15 - Trends in AI models</p><p>13:32 - The future of the open source ecosystem</p><p>16:16 - Model quality and performance</p><p>21:44 - “No GPUs before PMF”</p><p>27:24 - Dust in action </p><p>37:40 - How do you find “the makers”</p><p>42:36 - The beliefs Dust lives by</p><p>50:03 - Keeping the human in the loop</p><p>52:33 - Second time founders</p><p>56:15 - Lightning round</p>]]>
      </content:encoded>
      <itunes:duration>3787</itunes:duration>
      <guid isPermaLink="false"><![CDATA[c662fd26-ab75-11ef-aa14-ff49d9592c89]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI7799192741.mp3?updated=1732581003" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Clay’s Kareem Amin on Building the Sales ‘System of Action’ with AI</title>
      <description>Clay is leveraging AI to help go-to-market teams unleash creativity and be more effective in their work, powering custom workflows for everything from targeted outreach to personalized landing pages. It’s one of the fastest-growing AI-native applications, with over 4,500 customers and 100,000 users. Founder and CEO Kareem Amin describes Clay’s technology, and its approach to balancing imagination and automation in order to help its customers achieve new levels of go-to-market success. 

Hosted by: Alfred Lin, Sequoia Capital</description>
      <pubDate>Tue, 19 Nov 2024 10:00:00 -0000</pubDate>
      <itunes:title>Clay’s Kareem Amin on Building the Sales ‘System of Action’ with AI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/95090f38-a5f6-11ef-9656-af9fef6a2c0d/image/c4460a708c9bc501e6c3db046eafd89b.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Clay is leveraging AI to help go-to-market teams unleash creativity and be more effective in their work, powering custom workflows for everything from targeted outreach to personalized landing pages. </itunes:subtitle>
      <itunes:summary>Clay is leveraging AI to help go-to-market teams unleash creativity and be more effective in their work, powering custom workflows for everything from targeted outreach to personalized landing pages. It’s one of the fastest-growing AI-native applications, with over 4,500 customers and 100,000 users. Founder and CEO Kareem Amin describes Clay’s technology, and its approach to balancing imagination and automation in order to help its customers achieve new levels of go-to-market success. 

Hosted by: Alfred Lin, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Clay is leveraging AI to help go-to-market teams unleash creativity and be more effective in their work, powering custom workflows for everything from targeted outreach to personalized landing pages. It’s one of the fastest-growing AI-native applications, with over 4,500 customers and 100,000 users. Founder and CEO Kareem Amin describes Clay’s technology, and its approach to balancing imagination and automation in order to help its customers achieve new levels of go-to-market success. </p><p><br></p><p>Hosted by: Alfred Lin, Sequoia Capital</p>]]>
      </content:encoded>
      <itunes:duration>3098</itunes:duration>
      <guid isPermaLink="false"><![CDATA[95090f38-a5f6-11ef-9656-af9fef6a2c0d]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI2291906498.mp3?updated=1731968810" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Decart’s Dean Leitersdorf on AI-Generated Video Games and Worlds</title>
      <description>Can GenAI allow us to connect our imagination to what we see on our screens? Decart’s Dean Leitersdorf believes it can.

In this episode, Dean Leitersdorf breaks down how Decart is pushing the boundaries of compute in order to create AI-generated consumer experiences, from fully playable video games to immersive worlds. From achieving real-time video inference on existing hardware to building a fully vertically integrated stack, Dean explains why solving fundamental limitations rather than specific problems could lead to the next trillion-dollar company.

Hosted by: Sonya Huang and Shaun Maguire, Sequoia Capital

00:00 Introduction
03:22 About Oasis
05:25 Solving a problem vs overcoming a limitation
08:42 The role of game engines
11:15 How video real-time inference works
14:10 World model vs pixel representation
17:17 Vertical integration
34:20 Building a moat
41:35 The future of consumer entertainment
43:17 Rapid fire questions</description>
      <pubDate>Wed, 13 Nov 2024 10:00:00 -0000</pubDate>
      <itunes:title>Decart’s Dean Leitersdorf on AI-Generated Video Games and Worlds</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/f796ab06-a12d-11ef-92e7-d70d5dfd91eb/image/145c9293f16a570bb4c127b9c28bb09a.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Can GenAI allow us to connect our imagination to what we see on our screens? Decart’s Dean Leitersdorf believes it can. </itunes:subtitle>
      <itunes:summary>Can GenAI allow us to connect our imagination to what we see on our screens? Decart’s Dean Leitersdorf believes it can.

In this episode, Dean Leitersdorf breaks down how Decart is pushing the boundaries of compute in order to create AI-generated consumer experiences, from fully playable video games to immersive worlds. From achieving real-time video inference on existing hardware to building a fully vertically integrated stack, Dean explains why solving fundamental limitations rather than specific problems could lead to the next trillion-dollar company.

Hosted by: Sonya Huang and Shaun Maguire, Sequoia Capital

00:00 Introduction
03:22 About Oasis
05:25 Solving a problem vs overcoming a limitation
08:42 The role of game engines
11:15 How video real-time inference works
14:10 World model vs pixel representation
17:17 Vertical integration
34:20 Building a moat
41:35 The future of consumer entertainment
43:17 Rapid fire questions</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Can GenAI allow us to connect our imagination to what we see on our screens? Decart’s Dean Leitersdorf believes it can.</p><p><br></p><p>In this episode, Dean Leitersdorf breaks down how Decart is pushing the boundaries of compute in order to create AI-generated consumer experiences, from fully playable video games to immersive worlds. From achieving real-time video inference on existing hardware to building a fully vertically integrated stack, Dean explains why solving fundamental limitations rather than specific problems could lead to the next trillion-dollar company.</p><p><br></p><p>Hosted by: Sonya Huang and Shaun Maguire, Sequoia Capital</p><p><br></p><p>00:00 Introduction</p><p>03:22 About Oasis</p><p>05:25 Solving a problem vs overcoming a limitation</p><p>08:42 The role of game engines</p><p>11:15 How video real-time inference works</p><p>14:10 World model vs pixel representation</p><p>17:17 Vertical integration</p><p>34:20 Building a moat</p><p>41:35 The future of consumer entertainment</p><p>43:17 Rapid fire questions</p>]]>
      </content:encoded>
      <itunes:duration>2794</itunes:duration>
      <guid isPermaLink="false"><![CDATA[f796ab06-a12d-11ef-92e7-d70d5dfd91eb]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI7938039267.mp3?updated=1731446633" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>How Glean CEO Arvind Jain Solved the Enterprise Search Problem – and What It Means for AI at Work</title>
      <description>Years before co-founding Glean, Arvind was an early Google employee who helped design the search algorithm. Today, Glean is building search and work assistants inside the enterprise, which is arguably an even harder problem. One of the reasons enterprise search is so difficult is that each individual at the company has different permissions and access to different documents and information, meaning that every search needs to be fully personalized. Solving this difficult ingestion and ranking problem also addresses a key problem for AI: feeding the right context into LLMs to make them useful for your enterprise context. Arvind and his team are harnessing generative AI to synthesize, make connections, and turbo-charge knowledge work. Hear Arvind’s vision for what kind of work we’ll do when work AI assistants reach their potential. 

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 

00:00 - Introduction
08:35 - Search rankings 
11:30 - Retrieval-Augmented Generation
15:52 - Where enterprise search meets RAG
19:13 - How is Glean changing work? 
26:08 - Agentic reasoning 
31:18 - Act 2: application platform 
33:36 - Developers building on Glean 
35:54 - 5 years into the future 
38:48 - Advice for founders</description>
      <pubDate>Tue, 29 Oct 2024 09:00:00 -0000</pubDate>
      <itunes:title>How Glean CEO Arvind Jain Solved the Enterprise Search Problem – and What It Means for AI at Work</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/8f0a91b8-9568-11ef-9129-c706b2d9d655/image/15ff953c1973f8dc39b4946b7988566c.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Years before co-founding Glean, Arvind was an early Google employee who helped design the search algorithm.</itunes:subtitle>
      <itunes:summary>Years before co-founding Glean, Arvind was an early Google employee who helped design the search algorithm. Today, Glean is building search and work assistants inside the enterprise, which is arguably an even harder problem. One of the reasons enterprise search is so difficult is that each individual at the company has different permissions and access to different documents and information, meaning that every search needs to be fully personalized. Solving this difficult ingestion and ranking problem also addresses a key problem for AI: feeding the right context into LLMs to make them useful for your enterprise context. Arvind and his team are harnessing generative AI to synthesize, make connections, and turbo-charge knowledge work. Hear Arvind’s vision for what kind of work we’ll do when work AI assistants reach their potential. 

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 

00:00 - Introduction
08:35 - Search rankings 
11:30 - Retrieval-Augmented Generation
15:52 - Where enterprise search meets RAG
19:13 - How is Glean changing work? 
26:08 - Agentic reasoning 
31:18 - Act 2: application platform 
33:36 - Developers building on Glean 
35:54 - 5 years into the future 
38:48 - Advice for founders</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Years before co-founding Glean, Arvind was an early Google employee who helped design the search algorithm. Today, Glean is building search and work assistants inside the enterprise, which is arguably an even harder problem. One of the reasons enterprise search is so difficult is that each individual at the company has different permissions and access to different documents and information, meaning that every search needs to be fully personalized. Solving this difficult ingestion and ranking problem also addresses a key problem for AI: feeding the right context into LLMs to make them useful for your enterprise context. Arvind and his team are harnessing generative AI to synthesize, make connections, and turbo-charge knowledge work. Hear Arvind’s vision for what kind of work we’ll do when work AI assistants reach their potential. </p><p><br></p><p>Hosted by: Sonya Huang and Pat Grady, Sequoia Capital </p><p><br></p><p>00:00 - Introduction</p><p>08:35 - Search rankings </p><p>11:30 - Retrieval-Augmented Generation</p><p>15:52 - Where enterprise search meets RAG</p><p>19:13 - How is Glean changing work? </p><p>26:08 - Agentic reasoning </p><p>31:18 - Act 2: application platform </p><p>33:36 - Developers building on Glean </p><p>35:54 - 5 years into the future </p><p>38:48 - Advice for founders</p>]]>
      </content:encoded>
      <itunes:duration>2688</itunes:duration>
      <guid isPermaLink="false"><![CDATA[8f0a91b8-9568-11ef-9129-c706b2d9d655]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI7182880302.mp3?updated=1730155599" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>OpenAI Researcher Dan Roberts on What Physics Can Teach Us About AI</title>
      <description>In recent years there’s been an influx of theoretical physicists into the leading AI labs. Do they have unique capabilities suited to studying large models or is it just herd behavior? To find out, we talked to our former AI Fellow (and now OpenAI researcher) Dan Roberts.

Roberts, co-author of The Principles of Deep Learning Theory, is at the forefront of research that applies the tools of theoretical physics to another type of large complex system, deep neural networks. Dan believes that DNNs, and eventually LLMs, are interpretable in the same way a large collection of atoms is—at the system level. He also thinks that emphasis on scaling laws will balance with new ideas and architectures over time as scaling asymptotes economically.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 

Mentioned in this episode:


The Principles of Deep Learning Theory: An Effective Theory Approach to Understanding Neural Networks, by Daniel A. Roberts, Sho Yaida, Boris Hanin


Black Holes and the Intelligence Explosion: Extreme scenarios of AI focus on what is logically possible rather than what is physically possible. What does physics have to say about AI risk?


Yang-Mills &amp; The Mass Gap: An unsolved Millennium Prize problem

AI Math Olympiad: Dan is on the prize committee</description>
      <pubDate>Tue, 22 Oct 2024 09:00:00 -0000</pubDate>
      <itunes:title>OpenAI Researcher Dan Roberts on What Physics Can Teach Us About AI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/d17bfcec-8ff2-11ef-ba06-c3db1fbb2417/image/595f5916207b2772e881856530b6785a.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In recent years there’s been an influx of theoretical physicists into the leading AI labs.</itunes:subtitle>
      <itunes:summary>In recent years there’s been an influx of theoretical physicists into the leading AI labs. Do they have unique capabilities suited to studying large models or is it just herd behavior? To find out, we talked to our former AI Fellow (and now OpenAI researcher) Dan Roberts.

Roberts, co-author of The Principles of Deep Learning Theory, is at the forefront of research that applies the tools of theoretical physics to another type of large complex system, deep neural networks. Dan believes that DNNs, and eventually LLMs, are interpretable in the same way a large collection of atoms is—at the system level. He also thinks that emphasis on scaling laws will balance with new ideas and architectures over time as scaling asymptotes economically.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 

Mentioned in this episode:


The Principles of Deep Learning Theory: An Effective Theory Approach to Understanding Neural Networks, by Daniel A. Roberts, Sho Yaida, Boris Hanin


Black Holes and the Intelligence Explosion: Extreme scenarios of AI focus on what is logically possible rather than what is physically possible. What does physics have to say about AI risk?


Yang-Mills &amp; The Mass Gap: An unsolved Millennium Prize problem

AI Math Olympiad: Dan is on the prize committee</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In recent years there’s been an influx of theoretical physicists into the leading AI labs. Do they have unique capabilities suited to studying large models or is it just herd behavior? To find out, we talked to our former AI Fellow (and now OpenAI researcher) Dan Roberts.</p><p><br></p><p>Roberts, co-author of <em>The Principles of Deep Learning Theory</em>, is at the forefront of research that applies the tools of theoretical physics to another type of large complex system, deep neural networks. Dan believes that DNNs, and eventually LLMs, are interpretable in the same way a large collection of atoms is—at the system level. He also thinks that emphasis on scaling laws will balance with new ideas and architectures over time as scaling asymptotes economically.</p><p><br></p><p>Hosted by: Sonya Huang and Pat Grady, Sequoia Capital </p><p><br></p><p>Mentioned in this episode:</p><ul>
<li>
<a href="https://deeplearningtheory.com/"><strong><em>The Principles of Deep Learning Theory</em></strong></a><strong><em>:</em></strong><em> An Effective Theory Approach to Understanding Neural Networks</em>, by Daniel A. Roberts, Sho Yaida, Boris Hanin</li>
<li>
<a href="https://www.sequoiacap.com/article/black-holes-perspective/"><strong><em>Black Holes and the Intelligence Explosion</em></strong></a><strong><em>:</em></strong> Extreme scenarios of AI focus on what is logically possible rather than what is physically possible. What does physics have to say about AI risk?</li>
<li>
<a href="https://www.claymath.org/millennium/yang-mills-the-maths-gap/">Yang-Mills &amp; The Mass Gap</a>: An unsolved Millennium Prize problem</li>
</ul><p><a href="https://aimoprize.com/">AI Math Olympiad</a>: Dan is on the prize committee</p>]]>
      </content:encoded>
      <itunes:duration>2502</itunes:duration>
      <guid isPermaLink="false"><![CDATA[d17bfcec-8ff2-11ef-ba06-c3db1fbb2417]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI3810831988.mp3?updated=1729546154" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Google NotebookLM’s Raiza Martin and Jason Spielman on Creating Delightful AI Podcast Hosts and the Potential for Source-Grounded AI</title>
      <description>NotebookLM from Google Labs has become the breakout viral AI product of the year. The feature that catapulted it to viral fame is Audio Overview, which generates eerily realistic two-host podcast audio from any input you upload—written doc, audio or video file, or even a PDF. But to describe NotebookLM as a “podcast generator” is to vastly undersell it. The real magic of the product is in offering multi-modal dimensions to explore your own content in new ways—with context that’s surprisingly additive. 200-page training manuals are synthesized into digestible chapters, turned into a 10-minute podcast—or both—and shared with the sales team, just to cite one example. Raiza Martin and Jason Spielman join us to discuss how the magic happens, and what’s next for source-grounded AI.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital</description>
      <pubDate>Tue, 15 Oct 2024 09:00:00 -0000</pubDate>
      <itunes:title>Google NotebookLM’s Raiza Martin and Jason Spielman on Creating Delightful AI Podcast Hosts and the Potential for Source-Grounded AI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/bf576c50-8a6f-11ef-86af-13c224eed9f4/image/f7aa6aa04723e357c218f34bc9e2d648.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>NotebookLM from Google Labs has become the breakout viral AI product of the year.</itunes:subtitle>
      <itunes:summary>NotebookLM from Google Labs has become the breakout viral AI product of the year. The feature that catapulted it to viral fame is Audio Overview, which generates eerily realistic two-host podcast audio from any input you upload—written doc, audio or video file, or even a PDF. But to describe NotebookLM as a “podcast generator” is to vastly undersell it. The real magic of the product is in offering multi-modal dimensions to explore your own content in new ways—with context that’s surprisingly additive. 200-page training manuals are synthesized into digestible chapters, turned into a 10-minute podcast—or both—and shared with the sales team, just to cite one example. Raiza Martin and Jason Spielman join us to discuss how the magic happens, and what’s next for source-grounded AI.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital</itunes:summary>
      <content:encoded>
        <![CDATA[<p>NotebookLM from Google Labs has become the breakout viral AI product of the year. The feature that catapulted it to viral fame is Audio Overview, which generates eerily realistic two-host podcast audio from any input you upload—written doc, audio or video file, or even a PDF. But to describe NotebookLM as a “podcast generator” is to vastly undersell it. The real magic of the product is in offering multi-modal dimensions to explore your own content in new ways—with context that’s surprisingly additive. 200-page training manuals are synthesized into digestible chapters, turned into a 10-minute podcast—or both—and shared with the sales team, just to cite one example. Raiza Martin and Jason Spielman join us to discuss how the magic happens, and what’s next for source-grounded AI.</p><p><br></p><p>Hosted by: Sonya Huang and Pat Grady, Sequoia Capital</p>]]>
      </content:encoded>
      <itunes:duration>1927</itunes:duration>
      <guid isPermaLink="false"><![CDATA[bf576c50-8a6f-11ef-86af-13c224eed9f4]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI4197953594.mp3?updated=1728965922" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Snowflake CEO Sridhar Ramaswamy on Using Data to Create Simple, Reliable AI for Businesses</title>
      <description>All of us as consumers have felt the magic of ChatGPT—but also the occasional errors and hallucinations that make off-the-shelf language models problematic for business use cases with no tolerance for errors. Case in point: A model deployed to help create a summary for this episode stated that Sridhar Ramaswamy previously led PyTorch at Meta. He did not. He spent years running Google’s ads business and now serves as CEO of Snowflake, which he describes as the data cloud for the AI era.

Ramaswamy discusses how smart systems design helped Snowflake create reliable "talk-to-your-data" applications with over 90% accuracy, compared to around 45% for out-of-the-box solutions using off-the-shelf LLMs. He describes Snowflake's commitment to making reliable AI simple for their customers, turning complex software engineering projects into straightforward tasks. 

Finally, he stresses that even as frontier models progress, there is significant value to be unlocked from current models by applying them more effectively across various domains.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital


Mentioned in this episode: 
Cortex Analyst: Snowflake’s talk-to-your-data API
Document AI: Snowflake feature that extracts structured information from documents</description>
      <pubDate>Tue, 08 Oct 2024 09:00:00 -0000</pubDate>
      <itunes:title>Snowflake CEO Sridhar Ramaswamy on Using Data to Create Simple, Reliable AI for Businesses</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5e866de0-84de-11ef-92c9-2fb1d47a401b/image/3d753a4393eae06da377e7a4ff428898.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>All of us as consumers have felt the magic of ChatGPT—but also the occasional errors and hallucinations that make off-the-shelf models problematic for business use cases with no tolerance for errors.</itunes:subtitle>
      <itunes:summary>All of us as consumers have felt the magic of ChatGPT—but also the occasional errors and hallucinations that make off-the-shelf language models problematic for business use cases with no tolerance for errors. Case in point: A model deployed to help create a summary for this episode stated that Sridhar Ramaswamy previously led PyTorch at Meta. He did not. He spent years running Google’s ads business and now serves as CEO of Snowflake, which he describes as the data cloud for the AI era.

Ramaswamy discusses how smart systems design helped Snowflake create reliable "talk-to-your-data" applications with over 90% accuracy, compared to around 45% for out-of-the-box solutions using off-the-shelf LLMs. He describes Snowflake's commitment to making reliable AI simple for their customers, turning complex software engineering projects into straightforward tasks. 

Finally, he stresses that even as frontier models progress, there is significant value to be unlocked from current models by applying them more effectively across various domains.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital


Mentioned in this episode: 
Cortex Analyst: Snowflake’s talk-to-your-data API
Document AI: Snowflake feature that extracts structured information from documents</itunes:summary>
      <content:encoded>
        <![CDATA[<p>All of us as consumers have felt the magic of ChatGPT—but also the occasional errors and hallucinations that make off-the-shelf language models problematic for business use cases with no tolerance for errors. Case in point: A model deployed to help create a summary for this episode stated that Sridhar Ramaswamy previously led PyTorch at Meta. He did not. He spent years running Google’s ads business and now serves as CEO of Snowflake, which he describes as the data cloud for the AI era.</p><p><br></p><p>Ramaswamy discusses how smart systems design helped Snowflake create reliable "talk-to-your-data" applications with over 90% accuracy, compared to around 45% for out-of-the-box solutions using off-the-shelf LLMs. He describes Snowflake's commitment to making reliable AI simple for their customers, turning complex software engineering projects into straightforward tasks. </p><p><br></p><p>Finally, he stresses that even as frontier models progress, there is significant value to be unlocked from current models by applying them more effectively across various domains.</p><p><br></p><p>Hosted by: Sonya Huang and Pat Grady, Sequoia Capital</p><p><br></p><p><br></p><p>Mentioned in this episode: </p><p><a href="https://docs.snowflake.com/en/user-guide/snowflake-cortex/cortex-analyst">Cortex Analyst</a>: Snowflake’s talk-to-your-data API</p><p><a href="https://docs.snowflake.com/en/user-guide/snowflake-cortex/document-ai/overview">Document AI</a>: Snowflake feature that extracts structured information from documents</p>]]>
      </content:encoded>
      <itunes:duration>3569</itunes:duration>
      <guid isPermaLink="false"><![CDATA[5e866de0-84de-11ef-92c9-2fb1d47a401b]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI3006145813.mp3?updated=1728335232" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>OpenAI's Noam Brown, Ilge Akkaya and Hunter Lightman on o1 and Teaching LLMs to Reason Better</title>
      <description>Combining LLMs with AlphaGo-style deep reinforcement learning has been a holy grail for many leading AI labs, and with o1 (aka Strawberry) we are seeing the most general merging of the two modes to date. o1 is admittedly better at math than essay writing, but it has already achieved SOTA on a number of math, coding and reasoning benchmarks.
Deep RL legend and now OpenAI researcher Noam Brown and teammates Ilge Akkaya and Hunter Lightman discuss the ah-ha moments on the way to the release of o1, how it uses chains of thought and backtracking to think through problems, the discovery of strong test-time compute scaling laws and what to expect as the model gets better. 
Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 
Mentioned in this episode:


Learning to Reason with LLMs: Technical report accompanying the launch of OpenAI o1.


Generator verifier gap: Concept Noam explains in terms of what kinds of problems benefit from more inference-time compute.


Agent57: Outperforming the human Atari benchmark, 2020 paper where DeepMind demonstrated “the first deep reinforcement learning agent to obtain a score that is above the human baseline on all 57 Atari 2600 games.”


Move 37: Pivotal move in AlphaGo’s second game against Lee Sedol where it made a move so surprising that Sedol thought it must be a mistake, and only later discovered he had lost the game to a superhuman move.


IOI competition: OpenAI entered o1 into the International Olympiad in Informatics and received a Silver Medal.


System 1, System 2: The thesis of Daniel Kahneman’s pivotal book of behavioral economics, Thinking, Fast and Slow, which posited two distinct modes of thought, with System 1 being fast and instinctive and System 2 being slow and rational.


AlphaZero: The successor to AlphaGo, which learned a variety of games completely from scratch through self-play. Interestingly, self-play doesn’t seem to have a role in o1.


Solving Rubik’s Cube with a robot hand: Early OpenAI robotics paper that Ilge Akkaya worked on.


The Last Question: Science fiction story by Isaac Asimov with interesting parallels to scaling inference-time compute.


Strawberry: Why?


o1-mini: A smaller, more efficient version of o1 for applications that require reasoning without broad world knowledge.


00:00 - Introduction
01:33 - Conviction in o1
04:24 - How o1 works
05:04 - What is reasoning?
07:02 - Lessons from gameplay
09:14 - Generation vs verification
10:31 - What is surprising about o1 so far
11:37 - The trough of disillusionment
14:03 - Applying deep RL
14:45 - o1’s AlphaGo moment?
17:38 - A-ha moments
21:10 - Why is o1 good at STEM?
24:10 - Capabilities vs usefulness
25:29 - Defining AGI
26:13 - The importance of reasoning
28:39 - Chain of thought
30:41 - Implication of inference-time scaling laws
35:10 - Bottlenecks to scaling test-time compute
38:46 - Biggest misunderstanding about o1?
41:13 - o1-mini
42:15 - How should founders think about o1?</description>
      <pubDate>Wed, 02 Oct 2024 09:00:00 -0000</pubDate>
      <itunes:title>OpenAI's Noam Brown, Ilge Akkaya and Hunter Lightman on o1 and Teaching LLMs to Reason Better</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/439017ca-7f66-11ef-a656-7fff69ded708/image/5939e5a1d8c28c1c2328a853cccdd51d.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Combining LLMs with AlphaGo-style deep reinforcement learning has been a holy grail for many leading AI labs, and with o1 (aka Strawberry) we are seeing the most general merging of the two modes to date.</itunes:subtitle>
      <itunes:summary>Combining LLMs with AlphaGo-style deep reinforcement learning has been a holy grail for many leading AI labs, and with o1 (aka Strawberry) we are seeing the most general merging of the two modes to date. o1 is admittedly better at math than essay writing, but it has already achieved SOTA on a number of math, coding and reasoning benchmarks.
Deep RL legend and now OpenAI researcher Noam Brown and teammates Ilge Akkaya and Hunter Lightman discuss the ah-ha moments on the way to the release of o1, how it uses chains of thought and backtracking to think through problems, the discovery of strong test-time compute scaling laws and what to expect as the model gets better. 
Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 
Mentioned in this episode:


Learning to Reason with LLMs: Technical report accompanying the launch of OpenAI o1.


Generator verifier gap: Concept Noam explains in terms of what kinds of problems benefit from more inference-time compute.


Agent57: Outperforming the human Atari benchmark, 2020 paper where DeepMind demonstrated “the first deep reinforcement learning agent to obtain a score that is above the human baseline on all 57 Atari 2600 games.”


Move 37: Pivotal move in AlphaGo’s second game against Lee Sedol where it made a move so surprising that Sedol thought it must be a mistake, and only later discovered he had lost the game to a superhuman move.


IOI competition: OpenAI entered o1 into the International Olympiad in Informatics and received a Silver Medal.


System 1, System 2: The thesis of Daniel Kahneman’s pivotal book of behavioral economics, Thinking, Fast and Slow, which posited two distinct modes of thought, with System 1 being fast and instinctive and System 2 being slow and rational.


AlphaZero: The successor to AlphaGo, which learned a variety of games completely from scratch through self-play. Interestingly, self-play doesn’t seem to have a role in o1.


Solving Rubik’s Cube with a robot hand: Early OpenAI robotics paper that Ilge Akkaya worked on.


The Last Question: Science fiction story by Isaac Asimov with interesting parallels to scaling inference-time compute.


Strawberry: Why?


o1-mini: A smaller, more efficient version of o1 for applications that require reasoning without broad world knowledge.


00:00 - Introduction
01:33 - Conviction in o1
04:24 - How o1 works
05:04 - What is reasoning?
07:02 - Lessons from gameplay
09:14 - Generation vs verification
10:31 - What is surprising about o1 so far
11:37 - The trough of disillusionment
14:03 - Applying deep RL
14:45 - o1’s AlphaGo moment?
17:38 - A-ha moments
21:10 - Why is o1 good at STEM?
24:10 - Capabilities vs usefulness
25:29 - Defining AGI
26:13 - The importance of reasoning
28:39 - Chain of thought
30:41 - Implication of inference-time scaling laws
35:10 - Bottlenecks to scaling test-time compute
38:46 - Biggest misunderstanding about o1?
41:13 - o1-mini
42:15 - How should founders think about o1?</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Combining LLMs with AlphaGo-style deep reinforcement learning has been a holy grail for many leading AI labs, and with o1 (aka Strawberry) we are seeing the most general merging of the two modes to date. o1 is admittedly better at math than essay writing, but it has already achieved SOTA on a number of math, coding and reasoning benchmarks.</p><p>Deep RL legend and now OpenAI researcher Noam Brown and teammates Ilge Akkaya and Hunter Lightman discuss the ah-ha moments on the way to the release of o1, how it uses chains of thought and backtracking to think through problems, the discovery of strong test-time compute scaling laws and what to expect as the model gets better. </p><p>Hosted by: Sonya Huang and Pat Grady, Sequoia Capital </p><p><strong>Mentioned in this episode:</strong></p><ul>
<li>
<a href="https://openai.com/index/learning-to-reason-with-llms/"><strong>Learning to Reason with LLMs</strong></a><strong>:</strong> Technical report accompanying the launch of OpenAI o1.</li>
<li>
<strong>Generator-verifier gap</strong>: Concept Noam uses to explain which kinds of problems benefit from more inference-time compute.</li>
<li>
<a href="https://deepmind.google/discover/blog/agent57-outperforming-the-human-atari-benchmark/"><strong>Agent57: Outperforming the human Atari benchmark</strong></a>, 2020 paper where DeepMind demonstrated “the first deep reinforcement learning agent to obtain a score that is above the human baseline on all 57 Atari 2600 games.”</li>
<li>
<a href="https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol#Game_2"><strong>Move 37</strong></a><strong>:</strong> Pivotal move in AlphaGo’s second game against Lee Sedol where it made a move so surprising that Sedol thought it must be a mistake, and only later discovered he had lost the game to a superhuman move.</li>
<li>
<a href="https://www.ioi2024.eg/"><strong>IOI competition</strong></a><strong>:</strong> OpenAI entered o1 into the International Olympiad in Informatics and received a Silver Medal.</li>
<li>
<strong>System 1, System 2:</strong> The thesis of Daniel Kahneman’s pivotal book of behavioral economics, <a href="https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow"><em>Thinking, Fast and Slow</em></a>, which posited two distinct modes of thought, with System 1 being fast and instinctive and System 2 being slow and rational.</li>
<li>
<a href="https://deepmind.google/discover/blog/alphazero-shedding-new-light-on-chess-shogi-and-go/"><strong>AlphaZero</strong></a>: The successor to AlphaGo, which learned a variety of games completely from scratch through self-play. Interestingly, self-play doesn’t seem to have a role in o1.</li>
<li>
<a href="https://openai.com/index/solving-rubiks-cube/"><strong>Solving Rubik’s Cube with a robot hand</strong></a><strong>:</strong> Early OpenAI robotics paper that Ilge Akkaya worked on.</li>
<li>
<a href="https://en.wikipedia.org/wiki/The_Last_Question"><strong>The Last Question</strong></a><strong>:</strong> Science fiction story by Isaac Asimov with interesting parallels to scaling inference-time compute.</li>
<li>
<strong>Strawberry:</strong> Why?</li>
<li>
<a href="https://openai.com/index/openai-o1-mini-advancing-cost-efficient-reasoning/"><strong>o1-mini</strong></a><strong>:</strong> A smaller, more efficient version of o1 for applications that require reasoning without broad world knowledge.</li>
</ul><p><br></p><p>00:00 - Introduction</p><p>01:33 - Conviction in o1</p><p>04:24 - How o1 works</p><p>05:04 - What is reasoning?</p><p>07:02 - Lessons from gameplay</p><p>09:14 - Generation vs verification</p><p>10:31 - What is surprising about o1 so far</p><p>11:37 - The trough of disillusionment</p><p>14:03 - Applying deep RL</p><p>14:45 - o1’s AlphaGo moment?</p><p>17:38 - A-ha moments</p><p>21:10 - Why is o1 good at STEM?</p><p>24:10 - Capabilities vs usefulness</p><p>25:29 - Defining AGI</p><p>26:13 - The importance of reasoning</p><p>28:39 - Chain of thought</p><p>30:41 - Implication of inference-time scaling laws</p><p>35:10 - Bottlenecks to scaling test-time compute</p><p>38:46 - Biggest misunderstanding about o1?</p><p>41:13 - o1-mini</p><p>42:15 - How should founders think about o1?</p>]]>
      </content:encoded>
      <itunes:duration>2722</itunes:duration>
      <guid isPermaLink="false"><![CDATA[439017ca-7f66-11ef-a656-7fff69ded708]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI9323260941.mp3?updated=1727894425" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Why Vlad Tenev and Tudor Achim of Harmonic Think AI Is About to Change Math—and Why It Matters</title>
      <description>Adding code to LLM training data is a known method of improving a model’s reasoning skills. But wouldn’t math, the basis of all reasoning, be even better? Up until recently, there just wasn’t enough usable data that describes mathematics to make this feasible.
A few years ago, Vlad Tenev (also founder of Robinhood) and Tudor Achim noticed the rise of the community around an esoteric programming language called Lean that was gaining traction among mathematicians. The combination of that and the past decade’s rise of autoregressive models capable of fast, flexible learning made them think the time was now, and they founded Harmonic. Their mission is both lofty—mathematical superintelligence—and eminently practical: verifying all safety-critical software.
Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 
Mentioned in this episode:


IMO and the Millennium Prize: Two significant global competitions Harmonic hopes to win (soon)


Riemann hypothesis: One of the most difficult unsolved math conjectures (and a Millennium Prize problem), most recently in the sights of MIT mathematician Larry Guth



Terry Tao: perhaps the greatest living mathematician and Vlad’s professor at UCLA


Lean: an open source functional language for code verification, launched by Leonardo de Moura while at Microsoft Research in 2013, that powers the Lean Theorem Prover


mathlib: the largest math textbook in the world, all written in Lean


Metaculus: online prediction platform that tracks and scores thousands of forecasters


Minecraft Beaten in 20 Seconds: The video Vlad references as an analogy to AI math


Navier-Stokes equations: another important Millennium Prize math problem. Vlad considers this more tractable than Riemann


John von Neumann: Hungarian mathematician and polymath who made foundational contributions to computing, the Manhattan Project and game theory


Gottfried Wilhelm Leibniz: co-inventor of calculus and (remarkably) creator of the “universal characteristic,” a system for reasoning through a language of symbols and calculations—anticipating Lean and Harmonic by 350 years!


00:00 - Introduction
01:42 - Math is reasoning
06:16 - Studying with the world's greatest living mathematician
10:18 - What does the math community think of AI math?
15:11 - Recursive self-improvement
18:31 - What is Lean?
21:05 - Why now?
22:46 - Synthetic data is the fuel for the model
27:29 - How fast will your model get better?
29:45 - Exploring the frontiers of human knowledge
34:11 - Lightning round</description>
      <pubDate>Tue, 24 Sep 2024 09:00:00 -0000</pubDate>
      <itunes:title>Why Vlad Tenev and Tudor Achim of Harmonic Think AI Is About to Change Math—and Why It Matters</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/41f16434-79d3-11ef-b145-1b674a134d5d/image/30e54342d35f4a52736fcb3bf06c9438.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Adding code to LLM training data is a known method of improving a model’s reasoning skills. But wouldn’t math, the basis of all reasoning, be even better?</itunes:subtitle>
      <itunes:summary>Adding code to LLM training data is a known method of improving a model’s reasoning skills. But wouldn’t math, the basis of all reasoning, be even better? Up until recently, there just wasn’t enough usable data that describes mathematics to make this feasible.
A few years ago, Vlad Tenev (also founder of Robinhood) and Tudor Achim noticed the rise of the community around an esoteric programming language called Lean that was gaining traction among mathematicians. The combination of that and the past decade’s rise of autoregressive models capable of fast, flexible learning made them think the time was now, and they founded Harmonic. Their mission is both lofty—mathematical superintelligence—and eminently practical: verifying all safety-critical software.
Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 
Mentioned in this episode:


IMO and the Millennium Prize: Two significant global competitions Harmonic hopes to win (soon)


Riemann hypothesis: One of the most difficult unsolved math conjectures (and a Millennium Prize problem), most recently in the sights of MIT mathematician Larry Guth



Terry Tao: perhaps the greatest living mathematician and Vlad’s professor at UCLA


Lean: an open source functional language for code verification, launched by Leonardo de Moura while at Microsoft Research in 2013, that powers the Lean Theorem Prover


mathlib: the largest math textbook in the world, all written in Lean


Metaculus: online prediction platform that tracks and scores thousands of forecasters


Minecraft Beaten in 20 Seconds: The video Vlad references as an analogy to AI math


Navier-Stokes equations: another important Millennium Prize math problem. Vlad considers this more tractable than Riemann


John von Neumann: Hungarian mathematician and polymath who made foundational contributions to computing, the Manhattan Project and game theory


Gottfried Wilhelm Leibniz: co-inventor of calculus and (remarkably) creator of the “universal characteristic,” a system for reasoning through a language of symbols and calculations—anticipating Lean and Harmonic by 350 years!


00:00 - Introduction
01:42 - Math is reasoning
06:16 - Studying with the world's greatest living mathematician
10:18 - What does the math community think of AI math?
15:11 - Recursive self-improvement
18:31 - What is Lean?
21:05 - Why now?
22:46 - Synthetic data is the fuel for the model
27:29 - How fast will your model get better?
29:45 - Exploring the frontiers of human knowledge
34:11 - Lightning round</itunes:summary>
      <content:encoded>
<![CDATA[<p>Adding code to LLM training data is a known method of improving a model’s reasoning skills. But wouldn’t math, the basis of all reasoning, be even better? Up until recently, there just wasn’t enough usable data that describes mathematics to make this feasible.</p><p>A few years ago, Vlad Tenev (also founder of Robinhood) and Tudor Achim noticed the rise of the community around an esoteric programming language called Lean that was gaining traction among mathematicians. The combination of that and the past decade’s rise of autoregressive models capable of fast, flexible learning made them think the time was now, and they founded Harmonic. Their mission is both lofty—mathematical superintelligence—and eminently practical: verifying all safety-critical software.</p><p>Hosted by: Sonya Huang and Pat Grady, Sequoia Capital </p><p>Mentioned in this episode:</p><ul>
<li>
<a href="https://www.imo-official.org/">IMO</a> and the <a href="https://www.claymath.org/millennium-problems/">Millennium Prize</a>: Two significant global competitions Harmonic hopes to win (soon)</li>
<li>
<a href="https://en.wikipedia.org/wiki/Riemann_hypothesis">Riemann hypothesis</a>: One of the most difficult unsolved math conjectures (and a Millennium Prize problem), most recently in the sights of MIT mathematician <a href="https://arxiv.org/abs/2405.20552">Larry Guth</a>
</li>
<li>
<a href="https://terrytao.wordpress.com/">Terry Tao</a>: perhaps the greatest living mathematician and Vlad’s professor at UCLA</li>
<li>
<a href="https://lean-lang.org/">Lean</a>: an open source functional language for code verification, launched by <a href="https://leodemoura.github.io/">Leonardo de Moura</a> while at Microsoft Research in 2013, that powers the Lean Theorem Prover</li>
<li>
<a href="https://leanprover-community.github.io/mathlib-overview.html">mathlib</a>: the largest math textbook in the world, all written in Lean</li>
<li>
<a href="https://www.metaculus.com/home/">Metaculus</a>: online prediction platform that tracks and scores thousands of forecasters</li>
<li>
<a href="https://www.youtube.com/watch?v=do08uW0N5Qs">Minecraft Beaten in 20 Seconds</a>: The video Vlad references as an analogy to AI math</li>
<li>
<a href="https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_equations">Navier-Stokes equations</a>: another important Millennium Prize math problem. Vlad considers this more tractable than Riemann</li>
<li>
<a href="https://en.wikipedia.org/wiki/John_von_Neumann">John von Neumann</a>: Hungarian mathematician and polymath who made foundational contributions to computing, the Manhattan Project and game theory</li>
<li>
<a href="https://en.wikipedia.org/wiki/Gottfried_Wilhelm_Leibniz">Gottfried Wilhelm Leibniz</a>: co-inventor of calculus and (remarkably) creator of the “universal characteristic,” a system for reasoning through a language of symbols and calculations—anticipating Lean and Harmonic by 350 years!</li>
</ul><p><br></p><p>00:00 - Introduction</p><p>01:42 - Math is reasoning</p><p>06:16 - Studying with the world's greatest living mathematician</p><p>10:18 - What does the math community think of AI math?</p><p>15:11 - Recursive self-improvement</p><p>18:31 - What is Lean?</p><p>21:05 - Why now?</p><p>22:46 - Synthetic data is the fuel for the model</p><p>27:29 - How fast will your model get better?</p><p>29:45 - Exploring the frontiers of human knowledge</p><p>34:11 - Lightning round</p>]]>
      </content:encoded>
      <itunes:duration>2385</itunes:duration>
      <guid isPermaLink="false"><![CDATA[41f16434-79d3-11ef-b145-1b674a134d5d]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI7750646273.mp3?updated=1727202335" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Jim Fan on Nvidia’s Embodied AI Lab and Jensen Huang’s Prediction that All Robots will be Autonomous</title>
<description>AI researcher Jim Fan has had a charmed career. He was OpenAI’s first intern before he did his PhD at Stanford with “godmother of AI,” Fei-Fei Li. He graduated into a research scientist position at Nvidia and now leads its Embodied AI “GEAR” group. The lab’s current work ranges from foundation models for humanoid robots to agents for virtual worlds.
Jim describes a three-pronged data strategy for robotics, combining internet-scale data, simulation data and real world robot data. He believes that in the next few years it will be possible to create a “foundation agent” that can generalize across skills, embodiments and realities—both physical and virtual. He also supports Jensen Huang’s idea that “Everything that moves will eventually be autonomous.”
Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital
Mentioned in this episode:


World of Bits: Early OpenAI project Jim worked on as an intern with Andrej Karpathy. Part of a bigger initiative called Universe



Fei-Fei Li: Jim’s PhD advisor at Stanford who founded the ImageNet project in 2010 that revolutionized the field of visual recognition, led the Stanford Vision Lab and just launched her own AI startup, World Labs



Project GR00T: Nvidia’s “moonshot effort” at a robotic foundation model, premiered at this year’s GTC


Thinking Fast and Slow: Influential book by Daniel Kahneman that popularized some of his teachings from behavioral economics


Jetson Orin chip: The dedicated series of edge computing chips Nvidia is developing to power Project GR00T


Eureka: Project by Jim’s team that trained a five finger robot hand to do pen spinning

MineDojo: A project Jim did when he first got to Nvidia that developed a platform for general purpose agents in the game of Minecraft. Won NeurIPS 2022 Outstanding Paper Award

ADI: artificial dog intelligence


Mamba: Selective State Space Models, an alternative architecture to Transformers that Jim is interested in (original paper here)


00:00 Introduction
01:35 Jim’s journey to embodied intelligence
04:53 The GEAR Group
07:32 Three kinds of data for robotics
10:32 A GPT-3 moment for robotics
16:05 Choosing the humanoid robot form factor
19:37 Specialized generalists
21:59 GR00T gets its own chip
23:35 Eureka and Isaac Sim
25:23 Why now for robotics?
28:53 Exploring virtual worlds
36:28 Implications for games
39:13 Is the virtual world in service of the physical world?
42:10 Alternative architectures to Transformers
44:15 Lightning round</description>
      <pubDate>Tue, 17 Sep 2024 09:00:00 -0000</pubDate>
      <itunes:title>Jim Fan on Nvidia’s Embodied AI Lab and Jensen Huang’s Prediction that All Robots will be Autonomous</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/f0614f5c-746f-11ef-b014-17e5a317e802/image/df97a530ff5f3bb6eaa54b8740de173e.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>AI researcher Jim Fan has had a charmed career. He was OpenAI’s first intern before he did his PhD at Stanford with “godmother of AI,” Fei-Fei Li.</itunes:subtitle>
<itunes:summary>AI researcher Jim Fan has had a charmed career. He was OpenAI’s first intern before he did his PhD at Stanford with “godmother of AI,” Fei-Fei Li. He graduated into a research scientist position at Nvidia and now leads its Embodied AI “GEAR” group. The lab’s current work ranges from foundation models for humanoid robots to agents for virtual worlds.
Jim describes a three-pronged data strategy for robotics, combining internet-scale data, simulation data and real world robot data. He believes that in the next few years it will be possible to create a “foundation agent” that can generalize across skills, embodiments and realities—both physical and virtual. He also supports Jensen Huang’s idea that “Everything that moves will eventually be autonomous.”
Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital
Mentioned in this episode:


World of Bits: Early OpenAI project Jim worked on as an intern with Andrej Karpathy. Part of a bigger initiative called Universe



Fei-Fei Li: Jim’s PhD advisor at Stanford who founded the ImageNet project in 2010 that revolutionized the field of visual recognition, led the Stanford Vision Lab and just launched her own AI startup, World Labs



Project GR00T: Nvidia’s “moonshot effort” at a robotic foundation model, premiered at this year’s GTC


Thinking Fast and Slow: Influential book by Daniel Kahneman that popularized some of his teachings from behavioral economics


Jetson Orin chip: The dedicated series of edge computing chips Nvidia is developing to power Project GR00T


Eureka: Project by Jim’s team that trained a five finger robot hand to do pen spinning

MineDojo: A project Jim did when he first got to Nvidia that developed a platform for general purpose agents in the game of Minecraft. Won NeurIPS 2022 Outstanding Paper Award

ADI: artificial dog intelligence


Mamba: Selective State Space Models, an alternative architecture to Transformers that Jim is interested in (original paper here)


00:00 Introduction
01:35 Jim’s journey to embodied intelligence
04:53 The GEAR Group
07:32 Three kinds of data for robotics
10:32 A GPT-3 moment for robotics
16:05 Choosing the humanoid robot form factor
19:37 Specialized generalists
21:59 GR00T gets its own chip
23:35 Eureka and Isaac Sim
25:23 Why now for robotics?
28:53 Exploring virtual worlds
36:28 Implications for games
39:13 Is the virtual world in service of the physical world?
42:10 Alternative architectures to Transformers
44:15 Lightning round</itunes:summary>
      <content:encoded>
<![CDATA[<p>AI researcher Jim Fan has had a charmed career. He was OpenAI’s first intern before he did his PhD at Stanford with “godmother of AI,” Fei-Fei Li. He graduated into a research scientist position at Nvidia and now leads its Embodied AI “GEAR” group. The lab’s current work ranges from foundation models for humanoid robots to agents for virtual worlds.</p><p>Jim describes a three-pronged data strategy for robotics, combining internet-scale data, simulation data and real world robot data. He believes that in the next few years it will be possible to create a “foundation agent” that can generalize across skills, embodiments and realities—both physical and virtual. He also supports Jensen Huang’s idea that “Everything that moves will eventually be autonomous.”</p><p>Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital</p><p>Mentioned in this episode:</p><ul>
<li>
<a href="https://proceedings.mlr.press/v70/shi17a.html">World of Bits</a>: Early OpenAI project Jim worked on as an intern with Andrej Karpathy. Part of a bigger initiative called <a href="https://openai.com/index/universe/">Universe</a>
</li>
<li>
<a href="https://en.wikipedia.org/wiki/Fei-Fei_Li">Fei-Fei Li</a>: Jim’s PhD advisor at Stanford who founded the ImageNet project in 2010 that revolutionized the field of visual recognition, led the Stanford Vision Lab and just launched her own AI startup, <a href="https://www.worldlabs.ai/">World Labs</a>
</li>
<li>
<a href="https://developer.nvidia.com/project-gr00t">Project GR00T</a>: Nvidia’s “moonshot effort” at a robotic foundation model, premiered at this year’s GTC</li>
<li>
<a href="https://us.macmillan.com/books/9780374533557/thinkingfastandslow"><em>Thinking Fast and Slow</em></a>: Influential book by Daniel Kahneman that popularized some of his teachings from behavioral economics</li>
<li>
<a href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/">Jetson Orin chip</a>: The dedicated series of edge computing chips Nvidia is developing to power Project GR00T</li>
<li>
<a href="https://blogs.nvidia.com/blog/eureka-robotics-research/">Eureka</a>: Project by Jim’s team that trained a five finger robot hand to do pen spinning</li>
<li>MineDojo: A project Jim did when he first got to Nvidia that developed a platform for general purpose agents in the game of Minecraft. Won NeurIPS 2022 Outstanding Paper Award</li>
<li>ADI: artificial dog intelligence</li>
<li>
<a href="https://github.com/state-spaces/mamba?tab=readme-ov-file">Mamba</a>: Selective State Space Models, an alternative architecture to Transformers that Jim is interested in (original paper <a href="https://arxiv.org/abs/2312.00752">here</a>)</li>
</ul><p><br></p><p>00:00 Introduction</p><p>01:35 Jim’s journey to embodied intelligence</p><p>04:53 The GEAR Group</p><p>07:32 Three kinds of data for robotics</p><p>10:32 A GPT-3 moment for robotics</p><p>16:05 Choosing the humanoid robot form factor</p><p>19:37 Specialized generalists</p><p>21:59 GR00T gets its own chip</p><p>23:35 Eureka and Isaac Sim</p><p>25:23 Why now for robotics?</p><p>28:53 Exploring virtual worlds</p><p>36:28 Implications for games</p><p>39:13 Is the virtual world in service of the physical world?</p><p>42:10 Alternative architectures to Transformers</p><p>44:15 Lightning round</p>]]>
      </content:encoded>
      <itunes:duration>2953</itunes:duration>
      <guid isPermaLink="false"><![CDATA[f0614f5c-746f-11ef-b014-17e5a317e802]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI1518157518.mp3?updated=1727806879" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Founder Eric Steinberger on Magic’s Counterintuitive Approach to Pursuing AGI </title>
<description>There’s a new archetype in Silicon Valley, the AI researcher turned founder. Instead of tinkering in a garage, they write papers that earn them the right to collaborate with cutting-edge labs until they break out and start their own.

This is the story of wunderkind Eric Steinberger, the founder and CEO of Magic.dev. Eric came to programming through his obsession with AI and caught the attention of DeepMind researchers as a high school student. In 2022 he realized that AGI was closer than he had previously thought and started Magic to automate the software engineering necessary to get there. Among his counterintuitive ideas are that labs need to train proprietary large models, that value will not accrue in the application layer, and that the best agents will manage themselves. Eric also talks about Magic’s recent 100M token context window model and the HashHop eval they’re open sourcing.

Hosted by: Sonya Huang, Sequoia Capital

Mentioned in this episode:


David Silver: DeepMind researcher who led the AlphaGo team


Johannes Heinrich: a PhD student of Silver’s and DeepMind researcher who mentored Eric as a high schooler


Reinforcement Learning from Self-Play in Imperfect-Information Games: Johannes’s dissertation that inspired Eric 


Noam Brown: DeepMind, Meta and now OpenAI reinforcement learning researcher who eventually collaborated with Eric and brought him to FAIR


ClimateScience: NGO that Eric co-founded in 2019 while a university student 


Noam Shazeer: One of the original Transformer researchers at Google and founder of Character.ai


DeepStack: Expert-Level Artificial Intelligence in Heads-Up No-Limit Poker: the first AI paper Eric ever tried to deeply understand


LTM-2-mini: Magic’s first 100M token context model, built using the HashHop eval (now available open source)


00:00 - Introduction
01:39 - Vienna-born wunderkind
04:56 - Working with Noam Brown
08:00 - “I can do two things. I cannot do three.”
10:37 - AGI to-do list
13:27 - Advice for young researchers
20:35 - Reading every paper voraciously
23:06 - The army of Noams
26:46 - The leaps still needed in research
29:59 - What is Magic?
36:12 - Competing against the 800-pound gorillas
38:21 - Ideal team size for researchers
40:10 - AI that feels like a colleague
44:30 - Lightning round
47:50 - Bonus round: 200M token context announcement</description>
      <pubDate>Tue, 10 Sep 2024 09:00:00 -0000</pubDate>
      <itunes:title>Founder Eric Steinberger on Magic’s Counterintuitive Approach to Pursuing AGI </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/12ea47cc-6eda-11ef-a4e1-b7d333840a0f/image/bcde850b2eb4e3f22605a69527662027.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>There’s a new archetype in Silicon Valley, the AI researcher turned founder. Instead of tinkering in a garage they write papers that earn them the right to collaborate with cutting-edge labs until they break out and start their own.</itunes:subtitle>
<itunes:summary>There’s a new archetype in Silicon Valley, the AI researcher turned founder. Instead of tinkering in a garage, they write papers that earn them the right to collaborate with cutting-edge labs until they break out and start their own.

This is the story of wunderkind Eric Steinberger, the founder and CEO of Magic.dev. Eric came to programming through his obsession with AI and caught the attention of DeepMind researchers as a high school student. In 2022 he realized that AGI was closer than he had previously thought and started Magic to automate the software engineering necessary to get there. Among his counterintuitive ideas are that labs need to train proprietary large models, that value will not accrue in the application layer, and that the best agents will manage themselves. Eric also talks about Magic’s recent 100M token context window model and the HashHop eval they’re open sourcing.

Hosted by: Sonya Huang, Sequoia Capital

Mentioned in this episode:


David Silver: DeepMind researcher who led the AlphaGo team


Johannes Heinrich: a PhD student of Silver’s and DeepMind researcher who mentored Eric as a high schooler


Reinforcement Learning from Self-Play in Imperfect-Information Games: Johannes’s dissertation that inspired Eric 


Noam Brown: DeepMind, Meta and now OpenAI reinforcement learning researcher who eventually collaborated with Eric and brought him to FAIR


ClimateScience: NGO that Eric co-founded in 2019 while a university student 


Noam Shazeer: One of the original Transformer researchers at Google and founder of Character.ai


DeepStack: Expert-Level Artificial Intelligence in Heads-Up No-Limit Poker: the first AI paper Eric ever tried to deeply understand


LTM-2-mini: Magic’s first 100M token context model, built using the HashHop eval (now available open source)


00:00 - Introduction
01:39 - Vienna-born wunderkind
04:56 - Working with Noam Brown
08:00 - “I can do two things. I cannot do three.”
10:37 - AGI to-do list
13:27 - Advice for young researchers
20:35 - Reading every paper voraciously
23:06 - The army of Noams
26:46 - The leaps still needed in research
29:59 - What is Magic?
36:12 - Competing against the 800-pound gorillas
38:21 - Ideal team size for researchers
40:10 - AI that feels like a colleague
44:30 - Lightning round
47:50 - Bonus round: 200M token context announcement</itunes:summary>
      <content:encoded>
<![CDATA[<p>There’s a new archetype in Silicon Valley, the AI researcher turned founder. Instead of tinkering in a garage, they write papers that earn them the right to collaborate with cutting-edge labs until they break out and start their own.</p><p><br></p><p>This is the story of wunderkind Eric Steinberger, the founder and CEO of Magic.dev. Eric came to programming through his obsession with AI and caught the attention of DeepMind researchers as a high school student. In 2022 he realized that AGI was closer than he had previously thought and started Magic to automate the software engineering necessary to get there. Among his counterintuitive ideas are that labs need to train proprietary large models, that value will not accrue in the application layer, and that the best agents will manage themselves. Eric also talks about Magic’s recent 100M token context window model and the HashHop eval they’re open sourcing.</p><p><br></p><p>Hosted by: Sonya Huang, Sequoia Capital</p><p><br></p><p>Mentioned in this episode:</p><ul>
<li>
<a href="https://www.davidsilver.uk/">David Silver</a>: DeepMind researcher who led the AlphaGo team</li>
<li>
<a href="https://www.linkedin.com/in/dr-johannes-heinrich/?originalSubdomain=uk">Johannes Heinrich</a>: a PhD student of Silver’s and DeepMind researcher who mentored Eric as a high schooler</li>
<li>
<a href="https://arxiv.org/abs/1603.01121">Reinforcement Learning from Self-Play in Imperfect-Information Games</a>: Johannes’s dissertation that inspired Eric </li>
<li>
<a href="https://noambrown.github.io/">Noam Brown</a>: DeepMind, Meta and now OpenAI reinforcement learning researcher who eventually <a href="https://arxiv.org/abs/2006.10410">collaborated</a> with Eric and brought him to FAIR</li>
<li>
<a href="https://climatescience.org/team">ClimateScience</a>: NGO that Eric co-founded in 2019 while a university student </li>
<li>
<a href="https://www.linkedin.com/in/noam-shazeer-3b27288/">Noam Shazeer</a>: One of the original Transformer researchers at Google and founder of Character.ai</li>
<li>
<a href="https://arxiv.org/abs/1701.01724">DeepStack: Expert-Level Artificial Intelligence in Heads-Up No-Limit Poker</a>: the first AI paper Eric ever tried to deeply understand</li>
<li>
<a href="https://magic.dev/blog/100m-token-context-windows">LTM-2-mini</a>: Magic’s first 100M token context model, built using the <a href="https://github.com/magicproduct/hash-hop">HashHop</a> eval (now available open source)</li>
</ul><p><br></p><p>00:00 - Introduction</p><p>01:39 - Vienna-born wunderkind</p><p>04:56 - Working with Noam Brown</p><p>08:00 - “I can do two things. I cannot do three.”</p><p>10:37 - AGI to-do list</p><p>13:27 - Advice for young researchers</p><p>20:35 - Reading every paper voraciously</p><p>23:06 - The army of Noams</p><p>26:46 - The leaps still needed in research</p><p>29:59 - What is Magic?</p><p>36:12 - Competing against the 800-pound gorillas</p><p>38:21 - Ideal team size for researchers</p><p>40:10 - AI that feels like a colleague</p><p>44:30 - Lightning round</p><p>47:50 - Bonus round: 200M token context announcement</p>]]>
      </content:encoded>
      <itunes:duration>3075</itunes:duration>
      <guid isPermaLink="false"><![CDATA[12ea47cc-6eda-11ef-a4e1-b7d333840a0f]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI4739438465.mp3?updated=1725907138" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Crucible Moments Returns for S2: The ServiceNow Story ft. CEO Frank Slootman &amp; Founder Fred Luddy</title>
      <description>On Training Data, we learn from innovators pushing forward the frontier of AI’s capabilities. Today we’re bringing you something different. It’s the story of a company currently implementing AI at scale in the enterprise, and how it was built from a bootstrapped idea in the pre-AI era to a 150 billion dollar market cap giant. 

It’s the Season 2 premiere of Sequoia’s other podcast, Crucible Moments, where we hear from the founders and leaders of some legendary companies about the crossroads and inflection points that shaped their journeys. In this episode, you’ll hear from Fred Luddy and Frank Slootman about building and scaling ServiceNow. Listen to Crucible Moments wherever you get your podcasts or go to:
Spotify: https://open.spotify.com/show/40bWCUSan0boCn0GZJNpPn
Apple: https://podcasts.apple.com/us/podcast/crucible-moments/id1705282398

Hosted by: Roelof Botha, Sequoia Capital
Transcript: https://www.sequoiacap.com/podcast/crucible-moments-servicenow/</description>
      <pubDate>Tue, 03 Sep 2024 19:53:32 -0000</pubDate>
      <itunes:title>Crucible Moments Returns for S2: The ServiceNow Story ft. CEO Frank Slootman &amp; Founder Fred Luddy</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/8f80bdd2-6a27-11ef-8c33-9b52e99bfa9c/image/d640bcb1aab55d30fd6a245db1124c57.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>On Training Data, we learn from innovators pushing forward the frontier of AI’s capabilities. Today we’re bringing you something different.</itunes:subtitle>
      <itunes:summary>On Training Data, we learn from innovators pushing forward the frontier of AI’s capabilities. Today we’re bringing you something different. It’s the story of a company currently implementing AI at scale in the enterprise, and how it was built from a bootstrapped idea in the pre-AI era to a 150 billion dollar market cap giant. 

It’s the Season 2 premiere of Sequoia’s other podcast, Crucible Moments, where we hear from the founders and leaders of some legendary companies about the crossroads and inflection points that shaped their journeys. In this episode, you’ll hear from Fred Luddy and Frank Slootman about building and scaling ServiceNow. Listen to Crucible Moments wherever you get your podcasts or go to:
Spotify: https://open.spotify.com/show/40bWCUSan0boCn0GZJNpPn
Apple: https://podcasts.apple.com/us/podcast/crucible-moments/id1705282398

Hosted by: Roelof Botha, Sequoia Capital
Transcript: https://www.sequoiacap.com/podcast/crucible-moments-servicenow/</itunes:summary>
      <content:encoded>
        <![CDATA[<p>On Training Data, we learn from innovators pushing forward the frontier of AI’s capabilities. Today we’re bringing you something different. It’s the story of a company currently implementing AI at scale in the enterprise, and how it was built from a bootstrapped idea in the pre-AI era to a 150 billion dollar market cap giant. </p><p><br></p><p>It’s the Season 2 premiere of Sequoia’s other podcast, Crucible Moments, where we hear from the founders and leaders of some legendary companies about the crossroads and inflection points that shaped their journeys. In this episode, you’ll hear from Fred Luddy and Frank Slootman about building and scaling ServiceNow. Listen to Crucible Moments wherever you get your podcasts or go to:</p><p>Spotify: <a href="https://open.spotify.com/show/40bWCUSan0boCn0GZJNpPn">https://open.spotify.com/show/40bWCUSan0boCn0GZJNpPn</a></p><p>Apple: <a href="https://podcasts.apple.com/us/podcast/crucible-moments/id1705282398">https://podcasts.apple.com/us/podcast/crucible-moments/id1705282398</a></p><p><br></p><p>Hosted by: Roelof Botha, Sequoia Capital</p><p>Transcript: <a href="https://www.sequoiacap.com/podcast/crucible-moments-servicenow/">https://www.sequoiacap.com/podcast/crucible-moments-servicenow/</a></p>]]>
      </content:encoded>
      <itunes:duration>2573</itunes:duration>
      <guid isPermaLink="false"><![CDATA[8f80bdd2-6a27-11ef-8c33-9b52e99bfa9c]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI7797180218.mp3?updated=1725393683" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Sierra Co-Founder Clay Bavor on Making Customer-Facing AI Agents Delightful</title>
      <description>Customer service is hands down the first killer app of generative AI for businesses. The reasons are simple: the costs of existing solutions are so high, the satisfaction so low and the margin for ROI so wide. But trusting your interactions with customers to hallucination-prone LLMs can be daunting.
Enter Sierra. Co-founder Clay Bavor walks us through the sophisticated engineering challenges his team solved along the way to delivering AI agents for all aspects of the customer experience that are delightful, safe and reliable—and being deployed widely by Sierra’s customers. The company’s AgentOS enables businesses to create branded AI agents to interact with customers, follow nuanced policies and even handle customer retention and upsell. Clay describes how companies can capture their brand voice, values and internal processes to create AI agents that truly represent the business.
Hosted by: Ravi Gupta and Pat Grady, Sequoia Capital

Mentioned in this episode:


Bret Taylor: co-founder of Sierra


Towards a Human-like Open-Domain Chatbot: 2020 Google paper that introduced Meena, a predecessor of ChatGPT (followed by LaMDA in 2021)


PaLM: Scaling Language Modeling with Pathways: 2022 Google paper about their unreleased 540B parameter transformer model (GPT-3, at the time, had 175B) 


Avocado chair: Images generated by OpenAI’s DALL·E model in 2021


Large Language Models Understand and Can be Enhanced by Emotional Stimuli: 2023 Microsoft paper on how models like GPT-4 can be manipulated into providing better results


𝛕-bench: A Benchmark for Tool-Agent-User Interaction in Real-World Domains: 2024 paper authored by the Sierra research team, led by Karthik Narasimhan (co-author of the 2022 ReAct paper and the 2023 Reflexion paper)


00:00:00 Introduction
00:01:21 Clay’s background
00:03:20 Google before the ChatGPT moment
00:07:31 What is Sierra?
00:12:03 What’s possible now that wasn’t possible 18 months ago?
00:17:11 AgentOS
00:23:45 The solution to many problems with AI is more AI
00:28:37 𝛕-bench
00:33:19 Engineering task vs research task
00:37:27 What tasks can you trust an agent with now?
00:43:21 What metrics will move?
00:46:22 The reality of deploying AI to customers today
00:53:33 The experience manager
01:03:54 Outcome-based pricing
01:05:55 Lightning Round</description>
      <pubDate>Tue, 27 Aug 2024 09:00:00 -0000</pubDate>
      <itunes:title>Sierra Co-Founder Clay Bavor on Making Customer-Facing AI Agents Delightful</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/8b161690-63ea-11ef-bd0b-f32d9265f776/image/d5871eaa02c7de2f2496728545f7379d.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Customer service is hands down the first killer app of generative AI for businesses. The reasons are simple: the costs of existing solutions are so high, the satisfaction so low and the margin for ROI so wide.</itunes:subtitle>
      <itunes:summary>Customer service is hands down the first killer app of generative AI for businesses. The reasons are simple: the costs of existing solutions are so high, the satisfaction so low and the margin for ROI so wide. But trusting your interactions with customers to hallucination-prone LLMs can be daunting.
Enter Sierra. Co-founder Clay Bavor walks us through the sophisticated engineering challenges his team solved along the way to delivering AI agents for all aspects of the customer experience that are delightful, safe and reliable—and being deployed widely by Sierra’s customers. The company’s AgentOS enables businesses to create branded AI agents to interact with customers, follow nuanced policies and even handle customer retention and upsell. Clay describes how companies can capture their brand voice, values and internal processes to create AI agents that truly represent the business.
Hosted by: Ravi Gupta and Pat Grady, Sequoia Capital

Mentioned in this episode:


Bret Taylor: co-founder of Sierra


Towards a Human-like Open-Domain Chatbot: 2020 Google paper that introduced Meena, a predecessor of ChatGPT (followed by LaMDA in 2021)


PaLM: Scaling Language Modeling with Pathways: 2022 Google paper about their unreleased 540B parameter transformer model (GPT-3, at the time, had 175B) 


Avocado chair: Images generated by OpenAI’s DALL·E model in 2021


Large Language Models Understand and Can be Enhanced by Emotional Stimuli: 2023 Microsoft paper on how models like GPT-4 can be manipulated into providing better results


𝛕-bench: A Benchmark for Tool-Agent-User Interaction in Real-World Domains: 2024 paper authored by the Sierra research team, led by Karthik Narasimhan (co-author of the 2022 ReAct paper and the 2023 Reflexion paper)


00:00:00 Introduction
00:01:21 Clay’s background
00:03:20 Google before the ChatGPT moment
00:07:31 What is Sierra?
00:12:03 What’s possible now that wasn’t possible 18 months ago?
00:17:11 AgentOS
00:23:45 The solution to many problems with AI is more AI
00:28:37 𝛕-bench
00:33:19 Engineering task vs research task
00:37:27 What tasks can you trust an agent with now?
00:43:21 What metrics will move?
00:46:22 The reality of deploying AI to customers today
00:53:33 The experience manager
01:03:54 Outcome-based pricing
01:05:55 Lightning Round</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Customer service is hands down the first killer app of generative AI for businesses. The reasons are simple: the costs of existing solutions are so high, the satisfaction so low and the margin for ROI so wide. But trusting your interactions with customers to hallucination-prone LLMs can be daunting.</p><p>Enter Sierra. Co-founder Clay Bavor walks us through the sophisticated engineering challenges his team solved along the way to delivering AI agents for all aspects of the customer experience that are delightful, safe and reliable—and being deployed widely by Sierra’s customers. The company’s AgentOS enables businesses to create branded AI agents to interact with customers, follow nuanced policies and even handle customer retention and upsell. Clay describes how companies can capture their brand voice, values and internal processes to create AI agents that truly represent the business.</p><p>Hosted by: Ravi Gupta and Pat Grady, Sequoia Capital</p><p><br></p><p>Mentioned in this episode:</p><ul>
<li>
<a href="https://www.linkedin.com/in/brettaylor/">Bret Taylor</a>: co-founder of Sierra</li>
<li>
<a href="https://arxiv.org/abs/2001.09977">Towards a Human-like Open-Domain Chatbot</a>: 2020 Google paper that introduced Meena, a predecessor of ChatGPT (followed by <a href="https://blog.google/technology/ai/lamda/">LaMDA</a> in 2021)</li>
<li>
<a href="https://arxiv.org/abs/2204.02311">PaLM: Scaling Language Modeling with Pathways</a>: 2022 Google paper about their unreleased 540B parameter transformer model (GPT-3, at the time, had 175B) </li>
<li>
<a href="https://www.technologyreview.com/2021/01/05/1015754/avocado-armchair-future-ai-openai-deep-learning-nlp-gpt3-computer-vision-common-sense/">Avocado chair</a>: Images generated by OpenAI’s DALL·E model in 2021</li>
<li>
<a href="https://arxiv.org/abs/2307.11760">Large Language Models Understand and Can be Enhanced by Emotional Stimuli</a>: 2023 Microsoft paper on how models like GPT-4 can be manipulated into providing better results</li>
<li>
<a href="https://arxiv.org/abs/2406.12045">𝛕-bench: A Benchmark for Tool-Agent-User Interaction in Real-World Domains</a>: 2024 paper authored by the Sierra research team, led by <a href="https://karthikncode.github.io/">Karthik Narasimhan</a> (co-author of the 2022 <a href="https://arxiv.org/abs/2210.03629">ReAct paper</a> and the 2023 <a href="https://arxiv.org/abs/2303.11366">Reflexion</a> paper)</li>
</ul><p><br></p><p>00:00:00 Introduction</p><p>00:01:21 Clay’s background</p><p>00:03:20 Google before the ChatGPT moment</p><p>00:07:31 What is Sierra?</p><p>00:12:03 What’s possible now that wasn’t possible 18 months ago?</p><p>00:17:11 AgentOS</p><p>00:23:45 The solution to many problems with AI is more AI</p><p>00:28:37 𝛕-bench</p><p>00:33:19 Engineering task vs research task</p><p>00:37:27 What tasks can you trust an agent with now?</p><p>00:43:21 What metrics will move?</p><p>00:46:22 The reality of deploying AI to customers today</p><p>00:53:33 The experience manager</p><p>01:03:54 Outcome-based pricing</p><p>01:05:55 Lightning Round</p>]]>
      </content:encoded>
      <itunes:duration>4351</itunes:duration>
      <guid isPermaLink="false"><![CDATA[8b161690-63ea-11ef-bd0b-f32d9265f776]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI5105179357.mp3?updated=1724712070" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Phaidra’s Jim Gao on Building the Fourth Industrial Revolution with Reinforcement Learning</title>
      <description>After AlphaGo beat Lee Sedol, a young mechanical engineer at Google thought of another game reinforcement learning could win: energy optimization at data centers. Jim Gao convinced his bosses at the Google data center team to let him work with the DeepMind team to try. The initial pilot resulted in a 40% energy savings and led him and his co-founders to start Phaidra to turn this technology into a product.

Jim discusses the challenges of AI readiness in industrial settings and how we have to build on top of the control systems of the 70s and 80s to achieve the promise of the Fourth Industrial Revolution. He believes this new world of self-learning systems and self-improving infrastructure is a key factor in addressing global climate change.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 

Mentioned in this episode:


Mustafa Suleyman: Co-founder of DeepMind and Inflection AI and currently CEO of Microsoft AI, known to his friends as “Moose”


Joe Kava: Google VP of data centers to whom Jim sent his initial email pitching the idea that would eventually become Phaidra


Constrained optimization: the class of problem that reinforcement learning can be applied to in real world systems 


Vedavyas Panneershelvam: co-founder and CTO of Phaidra; one of the original engineers on the AlphaGo project


Katie Hoffman: co-founder, President and COO of Phaidra 


Demis Hassabis: CEO of DeepMind</description>
      <pubDate>Tue, 20 Aug 2024 09:00:00 -0000</pubDate>
      <itunes:title>Phaidra’s Jim Gao on Building the Fourth Industrial Revolution with Reinforcement Learning</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/52aa3e86-5e7b-11ef-934e-974555431f99/image/95f3ea43fde944ebb7b1af4e13a602f8.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Jim Gao discusses the challenges of AI readiness in industrial settings and how we have to build on top of the control systems of the 70s and 80s to achieve the promise of the Fourth Industrial Revolution. </itunes:subtitle>
      <itunes:summary>After AlphaGo beat Lee Sedol, a young mechanical engineer at Google thought of another game reinforcement learning could win: energy optimization at data centers. Jim Gao convinced his bosses at the Google data center team to let him work with the DeepMind team to try. The initial pilot resulted in a 40% energy savings and led him and his co-founders to start Phaidra to turn this technology into a product.

Jim discusses the challenges of AI readiness in industrial settings and how we have to build on top of the control systems of the 70s and 80s to achieve the promise of the Fourth Industrial Revolution. He believes this new world of self-learning systems and self-improving infrastructure is a key factor in addressing global climate change.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 

Mentioned in this episode:


Mustafa Suleyman: Co-founder of DeepMind and Inflection AI and currently CEO of Microsoft AI, known to his friends as “Moose”


Joe Kava: Google VP of data centers to whom Jim sent his initial email pitching the idea that would eventually become Phaidra


Constrained optimization: the class of problem that reinforcement learning can be applied to in real world systems 


Vedavyas Panneershelvam: co-founder and CTO of Phaidra; one of the original engineers on the AlphaGo project


Katie Hoffman: co-founder, President and COO of Phaidra 


Demis Hassabis: CEO of DeepMind</itunes:summary>
      <content:encoded>
        <![CDATA[<p>After AlphaGo beat Lee Sedol, a young mechanical engineer at Google thought of another game reinforcement learning could win: energy optimization at data centers. Jim Gao convinced his bosses at the Google data center team to let him work with the DeepMind team to try. The initial pilot resulted in a 40% energy savings and led him and his co-founders to start Phaidra to turn this technology into a product.</p><p><br></p><p>Jim discusses the challenges of AI readiness in industrial settings and how we have to build on top of the control systems of the 70s and 80s to achieve the promise of the Fourth Industrial Revolution. He believes this new world of self-learning systems and self-improving infrastructure is a key factor in addressing global climate change.</p><p><br></p><p>Hosted by: Sonya Huang and Pat Grady, Sequoia Capital</p><p><br></p><p>Mentioned in this episode:</p><ul>
<li>
<a href="https://www.linkedin.com/in/mustafa-suleyman/">Mustafa Suleyman</a>: Co-founder of DeepMind and Inflection AI and currently CEO of Microsoft AI, known to his friends as “Moose”</li>
<li>
<a href="https://www.linkedin.com/in/josephkava/">Joe Kava</a>: Google VP of data centers to whom Jim sent his initial email pitching the idea that would eventually become Phaidra</li>
<li>
<a href="https://en.wikipedia.org/wiki/Constrained_optimization">Constrained optimization</a>: the class of problem that reinforcement learning can be applied to in real world systems </li>
<li>
<a href="https://www.linkedin.com/in/vedavyas-panneershelvam-22080214/">Vedavyas Panneershelvam</a>: co-founder and CTO of Phaidra; one of the original engineers on the AlphaGo project</li>
<li>
<a href="https://www.linkedin.com/in/katiehoffmaninleo/">Katie Hoffman</a>: co-founder, President and COO of Phaidra </li>
<li>
<a href="https://www.linkedin.com/in/demishassabis/">Demis Hassabis</a>: CEO of DeepMind</li>
</ul>]]>
      </content:encoded>
      <itunes:duration>3033</itunes:duration>
      <guid isPermaLink="false"><![CDATA[52aa3e86-5e7b-11ef-934e-974555431f99]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI8665139084.mp3?updated=1724107224" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Fireworks Founder Lin Qiao on How Fast Inference and Small Models Will Benefit Businesses</title>
      <description>In the first wave of the generative AI revolution, startups and enterprises built on top of the best closed-source models available, mostly from OpenAI. The AI customer journey is moving from training to inference, and as these first products find PMF, many are hitting a wall on latency and cost.

Fireworks Founder and CEO Lin Qiao led the PyTorch team at Meta that rebuilt the whole stack to meet the complex needs of the world’s largest B2C company. Meta moved PyTorch to its own non-profit foundation in 2022 and Lin started Fireworks with the mission to compress the timeframe of training and inference and democratize access to GenAI beyond the hyperscalers to let a diversity of AI applications thrive.

Lin predicts when open and closed source models will converge and reveals her goal to build simple API access to the totality of knowledge.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 

Mentioned in this episode:


PyTorch: the leading framework for building deep learning models, originated at Meta and now part of the Linux Foundation umbrella


Caffe2 and ONNX: ML frameworks Meta used that PyTorch eventually replaced


Conservation of complexity: the idea that every computer application has inherent complexity that cannot be reduced but merely moved between the backend and frontend, originated by Xerox PARC researcher Larry Tesler


Mixture of Experts: a class of transformer models that route requests between different subsets of a model based on use case


Fathom: a product the Fireworks team uses for video conference summarization 


LMSYS Chatbot Arena: crowdsourced open platform for LLM evals hosted on Hugging Face


00:00 - Introduction
02:01 - What is Fireworks?
02:48 - Leading PyTorch
05:01 - What do researchers like about PyTorch?
07:50 - How Fireworks compares to open source
10:38 - Simplicity scales
12:51 - From training to inference
17:46 - Will open and closed source converge?
22:18 - Can you match OpenAI on the Fireworks stack?
26:53 - What is your vision for the Fireworks platform?
31:17 - Competition for Nvidia?
32:47 - Are returns to scale starting to slow down?
34:28 - Competition
36:32 - Lightning round</description>
      <pubDate>Tue, 13 Aug 2024 09:00:00 -0000</pubDate>
      <itunes:title>Fireworks Founder Lin Qiao on How Fast Inference and Small Models Will Benefit Businesses</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/84e3014a-5907-11ef-9a8c-cfe6fb0c150f/image/2f774acaaaa7298ad0ace4d68b239d3c.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In the first wave of the generative AI revolution, startups and enterprises built on top of the best closed-source models available, mostly from OpenAI.</itunes:subtitle>
      <itunes:summary>In the first wave of the generative AI revolution, startups and enterprises built on top of the best closed-source models available, mostly from OpenAI. The AI customer journey is moving from training to inference, and as these first products find PMF, many are hitting a wall on latency and cost.

Fireworks Founder and CEO Lin Qiao led the PyTorch team at Meta that rebuilt the whole stack to meet the complex needs of the world’s largest B2C company. Meta moved PyTorch to its own non-profit foundation in 2022 and Lin started Fireworks with the mission to compress the timeframe of training and inference and democratize access to GenAI beyond the hyperscalers to let a diversity of AI applications thrive.

Lin predicts when open and closed source models will converge and reveals her goal to build simple API access to the totality of knowledge.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 

Mentioned in this episode:


PyTorch: the leading framework for building deep learning models, originated at Meta and now part of the Linux Foundation umbrella


Caffe2 and ONNX: ML frameworks Meta used that PyTorch eventually replaced


Conservation of complexity: the idea that every computer application has inherent complexity that cannot be reduced but merely moved between the backend and frontend, originated by Xerox PARC researcher Larry Tesler


Mixture of Experts: a class of transformer models that route requests between different subsets of a model based on use case


Fathom: a product the Fireworks team uses for video conference summarization 


LMSYS Chatbot Arena: crowdsourced open platform for LLM evals hosted on Hugging Face


00:00 - Introduction
02:01 - What is Fireworks?
02:48 - Leading PyTorch
05:01 - What do researchers like about PyTorch?
07:50 - How Fireworks compares to open source
10:38 - Simplicity scales
12:51 - From training to inference
17:46 - Will open and closed source converge?
22:18 - Can you match OpenAI on the Fireworks stack?
26:53 - What is your vision for the Fireworks platform?
31:17 - Competition for Nvidia?
32:47 - Are returns to scale starting to slow down?
34:28 - Competition
36:32 - Lightning round</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In the first wave of the generative AI revolution, startups and enterprises built on top of the best closed-source models available, mostly from OpenAI. The AI customer journey is moving from training to inference, and as these first products find PMF, many are hitting a wall on latency and cost.</p><p><br></p><p>Fireworks Founder and CEO Lin Qiao led the PyTorch team at Meta that rebuilt the whole stack to meet the complex needs of the world’s largest B2C company. Meta moved PyTorch to its own non-profit foundation in 2022 and Lin started Fireworks with the mission to compress the timeframe of training and inference and democratize access to GenAI beyond the hyperscalers to let a diversity of AI applications thrive.</p><p><br></p><p>Lin predicts when open and closed source models will converge and reveals her goal to build simple API access to the totality of knowledge.</p><p><br></p><p>Hosted by: Sonya Huang and Pat Grady, Sequoia Capital</p><p><br></p><p>Mentioned in this episode:</p><ul>
<li>
<a href="https://pytorch.org/">PyTorch</a>: the leading framework for building deep learning models, originated at Meta and now part of the Linux Foundation umbrella</li>
<li>
<a href="https://caffe2.ai/">Caffe2</a> and <a href="https://github.com/onnx/onnx">ONNX</a>: ML frameworks Meta used that PyTorch eventually replaced</li>
<li>
<a href="https://en.wikipedia.org/wiki/Law_of_conservation_of_complexity">Conservation of complexity</a>: the idea that every computer application has inherent complexity that cannot be reduced but merely moved between the backend and frontend, originated by Xerox PARC researcher Larry Tesler</li>
<li>
<a href="https://huggingface.co/blog/moe">Mixture of Experts</a>: a class of transformer models that route requests between different subsets of a model based on use case</li>
<li>
<a href="https://fathom.video/">Fathom</a>: a product the Fireworks team uses for video conference summarization </li>
<li>
<a href="https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard">LMSYS Chatbot Arena</a>: crowdsourced open platform for LLM evals hosted on Hugging Face</li>
</ul><p><br></p><p>00:00 - Introduction</p><p>02:01 - What is Fireworks?</p><p>02:48 - Leading PyTorch</p><p>05:01 - What do researchers like about PyTorch?</p><p>07:50 - How Fireworks compares to open source</p><p>10:38 - Simplicity scales</p><p>12:51 - From training to inference</p><p>17:46 - Will open and closed source converge?</p><p>22:18 - Can you match OpenAI on the Fireworks stack?</p><p>26:53 - What is your vision for the Fireworks platform?</p><p>31:17 - Competition for Nvidia?</p><p>32:47 - Are returns to scale starting to slow down?</p><p>34:28 - Competition</p><p>36:32 - Lightning round</p>]]>
      </content:encoded>
      <itunes:duration>2358</itunes:duration>
      <guid isPermaLink="false"><![CDATA[84e3014a-5907-11ef-9a8c-cfe6fb0c150f]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI2778481768.mp3?updated=1723555274" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>GitHub CEO Thomas Dohmke on Building Copilot, and the Future of Software Development</title>
      <description>GitHub invented collaborative coding and in the process changed how open source projects, startups and eventually enterprises write code. GitHub Copilot is the first blockbuster product built on top of OpenAI’s GPT models. It now accounts for more than 40 percent of GitHub revenue growth for an annual revenue run rate of $2 billion. Copilot itself is already a larger business than all of GitHub was when Microsoft acquired it in 2018.

We talk to CEO Thomas Dohmke about how a small team at GitHub built on top of GPT-3 and quickly created a product that developers love—and can’t live without. Thomas describes how the product has grown from simple autocomplete to a fully featured workspace for enterprise teams. He also believes that tools like Copilot will bring the power of coding to a billion developers by 2030.

Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital 

Mentioned in this episode:


Nat Friedman: Former Microsoft VP (and now investor) who came up with the idea that Microsoft should buy GitHub


Oege de Moor: GitHub developer (and now founder of XBOW) who came up with the idea of using GPT-3 for code and went on to create Copilot


Alex Graveley: principal engineer and Chief Architect for Copilot (now CEO of Minion.ai) who came up with the name Copilot (because his boss, Nat Friedman, is an amateur pilot)


Productivity Assessment of Neural Code Completion: Original GitHub research paper on the impact of Copilot on Developer productivity


Escaping a room in Minecraft with an AI-powered NPC: Recent Minecraft AI assistant demo from Microsoft


With AI, anyone can be a coder now: TED2024 talk by Thomas Dohmke


JFrog: The software supply chain platform that GitHub just partnered with


00:00:00 - Introduction
00:01:18 - Getting started with code
00:03:43 - Microsoft’s acquisition of GitHub
00:11:40 - Evolving Copilot beyond autocomplete
00:14:18 - In hindsight, you can always move faster
00:15:56 - Building on top of OpenAI
00:20:21 - The latest metrics
00:22:11 - The surprise of Copilot’s impact
00:25:11 - Teaching kids to code in the age of Copilot
00:26:38 - The momentum mindset
00:29:46 - Agents vs Copilots
00:32:06 - The Roadmap
00:37:31 - Making maintaining software easier
00:38:48 - The creative new world
00:42:38 - The AI 10x software engineer
00:45:12 - Creativity and systems engineering in AI
00:48:55 - What about COBOL?
00:50:23 - Will GitHub build its own models?
00:57:19 - Rapid incubation at GitHub Next
00:59:21 - The future of AI?
01:03:18 - Advice for founders
01:05:08 - Lightning round</description>
      <pubDate>Tue, 06 Aug 2024 09:00:00 -0000</pubDate>
      <itunes:title>GitHub CEO Thomas Dohmke on Building Copilot, and the Future of Software Development</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6f7ac4e6-5371-11ef-ae98-df1c65a83821/image/39a85a1b9a151daeae17e594504e077d.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>GitHub invented collaborative coding and in the process changed how open source projects, startups and eventually enterprises write code.</itunes:subtitle>
      <itunes:summary>GitHub invented collaborative coding and in the process changed how open source projects, startups and eventually enterprises write code. GitHub Copilot is the first blockbuster product built on top of OpenAI’s GPT models. It now accounts for more than 40 percent of GitHub revenue growth for an annual revenue run rate of $2 billion. Copilot itself is already a larger business than all of GitHub was when Microsoft acquired it in 2018.

We talk to CEO Thomas Dohmke about how a small team at GitHub built on top of GPT-3 and quickly created a product that developers love—and can’t live without. Thomas describes how the product has grown from simple autocomplete to a fully featured workspace for enterprise teams. He also believes that tools like Copilot will bring the power of coding to a billion developers by 2030.

Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital 

Mentioned in this episode:


Nat Friedman: Former Microsoft VP (and now investor) who came up with the idea that Microsoft should buy GitHub


Oege de Moor: GitHub developer (and now founder of XBOW) who came up with the idea of using GPT-3 for code and went on to create Copilot


Alex Graveley: principal engineer and Chief Architect for Copilot (now CEO of Minion.ai) who came up with the name Copilot (because his boss, Nat Friedman, is an amateur pilot)


Productivity Assessment of Neural Code Completion: Original GitHub research paper on the impact of Copilot on developer productivity


Escaping a room in Minecraft with an AI-powered NPC: Recent Minecraft AI assistant demo from Microsoft


With AI, anyone can be a coder now: TED2024 talk by Thomas Dohmke


JFrog: The software supply chain platform that GitHub just partnered with


00:00:00 - Introduction
00:01:18 - Getting started with code
00:03:43 - Microsoft’s acquisition of GitHub
00:11:40 - Evolving Copilot beyond autocomplete
00:14:18 - In hindsight, you can always move faster
00:15:56 - Building on top of OpenAI
00:20:21 - The latest metrics
00:22:11 - The surprise of Copilot’s impact
00:25:11 - Teaching kids to code in the age of Copilot
00:26:38 - The momentum mindset
00:29:46 - Agents vs Copilots
00:32:06 - The Roadmap
00:37:31 - Making maintaining software easier
00:38:48 - The creative new world
00:42:38 - The AI 10x software engineer
00:45:12 - Creativity and systems engineering in AI
00:48:55 - What about COBOL?
00:50:23 - Will GitHub build its own models?
00:57:19 - Rapid incubation at GitHub Next
00:59:21 - The future of AI?
01:03:18 - Advice for founders
01:05:08 - Lightning round</itunes:summary>
      <content:encoded>
        <![CDATA[<p>GitHub invented collaborative coding and in the process changed how open source projects, startups and eventually enterprises write code. GitHub Copilot is the first blockbuster product built on top of OpenAI’s GPT models. It now accounts for more than 40 percent of GitHub’s revenue growth, on an annual revenue run rate of $2 billion. Copilot itself is already a larger business than all of GitHub was when Microsoft acquired it in 2018.</p><p><br></p><p>We talk to CEO Thomas Dohmke about how a small team at GitHub built on top of GPT-3 and quickly created a product that developers love—and can’t live without. Thomas describes how the product has grown from simple autocomplete to a fully featured workspace for enterprise teams. He also believes that tools like Copilot will bring the power of coding to a billion developers by 2030.</p><p><br></p><p>Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital </p><p><br></p><p>Mentioned in this episode:</p><ul>
<li>
<a href="https://nat.org/">Nat Friedman</a>: Former Microsoft VP (and now investor) who came up with the idea that Microsoft should buy GitHub</li>
<li>
<a href="https://www.linkedin.com/in/oegedemoor/?originalSubdomain=mt">Oege de Moor</a>: GitHub developer (and now founder of XBOW) who came up with the idea of using GPT-3 for code and went on to create Copilot</li>
<li>
<a href="https://alexgraveley.com/">Alex Graveley</a>: Principal Engineer and Chief Architect for Copilot (now CEO of Minion.ai) who came up with the name Copilot (because his boss, Nat Friedman, is an amateur pilot)</li>
<li>
<a href="https://dl.acm.org/doi/pdf/10.1145/3520312.3534864">Productivity Assessment of Neural Code Completion</a>: Original GitHub research paper on the impact of Copilot on developer productivity</li>
<li>
<a href="https://www.microsoft.com/en-us/research/video/escaping-a-room-in-minecraft-with-an-ai-powered-npc/">Escaping a room in Minecraft with an AI-powered NPC</a>: Recent Minecraft AI assistant demo from Microsoft</li>
<li>
<a href="https://www.ted.com/talks/thomas_dohmke_with_ai_anyone_can_be_a_coder_now?subtitle=en">With AI, anyone can be a coder now</a>: TED2024 talk by Thomas Dohmke</li>
<li>
<a href="https://jfrog.com/">JFrog</a>: The software supply chain platform that GitHub just partnered with</li>
</ul><p><br></p><p>00:00:00 - Introduction</p><p>00:01:18 - Getting started with code</p><p>00:03:43 - Microsoft’s acquisition of GitHub</p><p>00:11:40 - Evolving Copilot beyond autocomplete</p><p>00:14:18 - In hindsight, you can always move faster</p><p>00:15:56 - Building on top of OpenAI</p><p>00:20:21 - The latest metrics</p><p>00:22:11 - The surprise of Copilot’s impact</p><p>00:25:11 - Teaching kids to code in the age of Copilot</p><p>00:26:38 - The momentum mindset</p><p>00:29:46 - Agents vs Copilots</p><p>00:32:06 - The Roadmap</p><p>00:37:31 - Making maintaining software easier</p><p>00:38:48 - The creative new world</p><p>00:42:38 - The AI 10x software engineer</p><p>00:45:12 - Creativity and systems engineering in AI</p><p>00:48:55 - What about COBOL?</p><p>00:50:23 - Will GitHub build its own models?</p><p>00:57:19 - Rapid incubation at GitHub Next</p><p>00:59:21 - The future of AI?</p><p>01:03:18 - Advice for founders</p><p>01:05:08 - Lightning round</p>]]>
      </content:encoded>
      <itunes:duration>4054</itunes:duration>
      <guid isPermaLink="false"><![CDATA[6f7ac4e6-5371-11ef-ae98-df1c65a83821]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI4987704053.mp3?updated=1722906967" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Meta’s Joe Spisak on Llama 3.1 405B and the Democratization of Frontier Models</title>
      <description>As head of Product Management for Generative AI at Meta, Joe Spisak leads the team behind Llama, which just released the new 3.1 405B model. We spoke with Joe just two days after the model’s release to ask what’s new, what it enables, and how Meta sees the role of open source in the AI ecosystem.

Joe shares that Llama 3.1 405B’s real focus was on pushing scale (it was trained on 15 trillion tokens using 16,000 GPUs), and he’s excited about the zero-shot tool use it will enable, as well as its role in distillation and generating synthetic data to teach smaller models. He tells us why he thinks even frontier models will ultimately commoditize—and why that’s a good thing for the startup ecosystem.

Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital 

Mentioned in this episode: 
Llama 3.1 405B paper
Open Source AI Is the Path Forward: Mark Zuckerberg essay released with Llama 3.1.
Mistral Large 2
The Bitter Lesson by Rich Sutton

00:00 Introduction
01:28 The Llama 3.1 405B launch
05:02 The open source license
07:01 What's in it for Meta?
10:19 Why not open source?
11:16 Will frontier models commoditize?
12:41 What about startups?
16:29 The Mistral team
19:36 Are all frontier strategies comparable?
22:38 Is model development becoming more like software development?
26:34 Agentic reasoning
29:09 What future levers will unlock reasoning?
31:20 Will coding and math lead to unlocks?
33:09 Small models
34:08 7X more data
37:36 Are we going to hit a wall?
39:49 Lightning round</description>
      <pubDate>Tue, 30 Jul 2024 12:00:00 -0000</pubDate>
      <itunes:title>Meta’s Joe Spisak on Llama 3.1 405B and the Democratization of Frontier Models</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/32e969ca-4df3-11ef-b9c8-97293cd3833d/image/20a90a0a1318561c51bd97666a1dfc96.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>As head of Product Management for Generative AI at Meta, Joe Spisak leads the team behind Llama, which just released the new 3.1 405B model. </itunes:subtitle>
      <itunes:summary>As head of Product Management for Generative AI at Meta, Joe Spisak leads the team behind Llama, which just released the new 3.1 405B model. We spoke with Joe just two days after the model’s release to ask what’s new, what it enables, and how Meta sees the role of open source in the AI ecosystem.

Joe shares that Llama 3.1 405B’s real focus was on pushing scale (it was trained on 15 trillion tokens using 16,000 GPUs), and he’s excited about the zero-shot tool use it will enable, as well as its role in distillation and generating synthetic data to teach smaller models. He tells us why he thinks even frontier models will ultimately commoditize—and why that’s a good thing for the startup ecosystem.

Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital 

Mentioned in this episode: 
Llama 3.1 405B paper
Open Source AI Is the Path Forward: Mark Zuckerberg essay released with Llama 3.1.
Mistral Large 2
The Bitter Lesson by Rich Sutton

00:00 Introduction
01:28 The Llama 3.1 405B launch
05:02 The open source license
07:01 What's in it for Meta?
10:19 Why not open source?
11:16 Will frontier models commoditize?
12:41 What about startups?
16:29 The Mistral team
19:36 Are all frontier strategies comparable?
22:38 Is model development becoming more like software development?
26:34 Agentic reasoning
29:09 What future levers will unlock reasoning?
31:20 Will coding and math lead to unlocks?
33:09 Small models
34:08 7X more data
37:36 Are we going to hit a wall?
39:49 Lightning round</itunes:summary>
      <content:encoded>
        <![CDATA[<p>As head of Product Management for Generative AI at Meta, Joe Spisak leads the team behind Llama, which just released the new 3.1 405B model. We spoke with Joe just two days after the model’s release to ask what’s new, what it enables, and how Meta sees the role of open source in the AI ecosystem.</p><p><br></p><p>Joe shares that Llama 3.1 405B’s real focus was on pushing scale (it was trained on 15 trillion tokens using 16,000 GPUs), and he’s excited about the zero-shot tool use it will enable, as well as its role in distillation and generating synthetic data to teach smaller models. He tells us why he thinks even frontier models will ultimately commoditize—and why that’s a good thing for the startup ecosystem.</p><p><br></p><p>Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital </p><p><br></p><p>Mentioned in this episode: </p><p><a href="https://scontent-sjc3-1.xx.fbcdn.net/v/t39.2365-6/452387774_1036916434819166_4173978747091533306_n.pdf?_nc_cat=104&amp;ccb=1-7&amp;_nc_sid=3c67a6&amp;_nc_ohc=7qSoXLG5aAYQ7kNvgHTWh7Y&amp;_nc_ht=scontent-sjc3-1.xx&amp;oh=00_AYBWzVx63NMxruydee1ltGBgSpIh31gZ8_KaVFbsDhGXvw&amp;oe=66A9C6CD">Llama 3.1 405B paper</a></p><p><a href="https://about.fb.com/news/2024/07/open-source-ai-is-the-path-forward/">Open Source AI Is the Path Forward</a>: Mark Zuckerberg essay released with Llama 3.1.</p><p><a href="https://mistral.ai/news/mistral-large-2407/">Mistral Large 2</a></p><p><a href="http://www.incompleteideas.net/IncIdeas/BitterLesson.html">The Bitter Lesson</a> by Rich Sutton</p><p><br></p><p>00:00 Introduction</p><p>01:28 The Llama 3.1 405B launch</p><p>05:02 The open source license</p><p>07:01 What's in it for Meta?</p><p>10:19 Why not open source?</p><p>11:16 Will frontier models commoditize?</p><p>12:41 What about startups?</p><p>16:29 The Mistral team</p><p>19:36 Are all frontier strategies comparable?</p><p>22:38 Is model development becoming more like software development?</p><p>26:34 Agentic reasoning</p><p>29:09 What future levers will unlock reasoning?</p><p>31:20 Will coding and math lead to unlocks?</p><p>33:09 Small models</p><p>34:08 7X more data</p><p>37:36 Are we going to hit a wall?</p><p>39:49 Lightning round</p>]]>
      </content:encoded>
      <itunes:duration>2527</itunes:duration>
      <guid isPermaLink="false"><![CDATA[32e969ca-4df3-11ef-b9c8-97293cd3833d]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI3029012972.mp3?updated=1722345193" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Klarna CEO Sebastian Siemiatkowski on Getting AI to Do the Work of 700 Customer Service Reps</title>
      <description>In February, Sebastian Siemiatkowski boldly announced that Klarna’s new OpenAI-powered assistant handled two thirds of the Swedish fintech’s customer service chats in its first month. Not only were customer satisfaction metrics better, but by replacing 700 full-time contractors the bottom line impact is projected to be $40M. Since then, every company we talk to wants to know, “How do we get the Klarna customer support thing?”

Co-founder and CEO Sebastian Siemiatkowski tells us how the Klarna team shipped this new product in record time—and how embracing AI internally with an experimental mindset is transforming the company. He discusses how AI development is proliferating inside the company, from customer support to marketing to internal knowledge to customer-facing experiences. 

Sebastian also reflects on the impacts of AI on employment, society, and the arts while encouraging lawmakers to be open minded about the benefits.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 

Mentioned in this episode:
DeepL: Language translation app that Sebastian says makes 10,000 translators in Brussels redundant
The Klarna brand: The offbeat optimism that the company is now augmenting with AI
Neo4j: The graph database management system that Klarna is using to build Kiki, their internal knowledge base

00:00 Introduction
01:57 Klarna’s business
03:00 Pitching OpenAI
08:51 How we built this
10:46 Will Klarna ever completely replace its CS team with AI?
14:22 The benefits
17:25 If you had a policy magic wand…
21:12 What jobs will be most affected by AI?
23:58 How about marketing?
27:55 How creative are LLMs?
30:11 Klarna’s knowledge graph, Kiki
33:10 Reducing the number of enterprise systems
35:24 Build vs buy?
39:59 What’s next for Klarna with AI?
48:48 Lightning round</description>
      <pubDate>Tue, 23 Jul 2024 09:00:00 -0000</pubDate>
      <itunes:title>Klarna CEO Sebastian Siemiatkowski on Getting AI to Do the Work of 700 Customer Service Reps</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/295b4624-4864-11ef-879c-9fff35fc0dc9/image/ec2f15baf01dd21e72bde5d4a7eafb42.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In February, Sebastian Siemiatkowski boldly announced that Klarna’s new OpenAI-powered assistant handled two thirds of the Swedish fintech’s customer service chats in its first month.</itunes:subtitle>
      <itunes:summary>In February, Sebastian Siemiatkowski boldly announced that Klarna’s new OpenAI-powered assistant handled two thirds of the Swedish fintech’s customer service chats in its first month. Not only were customer satisfaction metrics better, but by replacing 700 full-time contractors the bottom line impact is projected to be $40M. Since then, every company we talk to wants to know, “How do we get the Klarna customer support thing?”

Co-founder and CEO Sebastian Siemiatkowski tells us how the Klarna team shipped this new product in record time—and how embracing AI internally with an experimental mindset is transforming the company. He discusses how AI development is proliferating inside the company, from customer support to marketing to internal knowledge to customer-facing experiences. 

Sebastian also reflects on the impacts of AI on employment, society, and the arts while encouraging lawmakers to be open minded about the benefits.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 

Mentioned in this episode:
DeepL: Language translation app that Sebastian says makes 10,000 translators in Brussels redundant
The Klarna brand: The offbeat optimism that the company is now augmenting with AI
Neo4j: The graph database management system that Klarna is using to build Kiki, their internal knowledge base

00:00 Introduction
01:57 Klarna’s business
03:00 Pitching OpenAI
08:51 How we built this
10:46 Will Klarna ever completely replace its CS team with AI?
14:22 The benefits
17:25 If you had a policy magic wand…
21:12 What jobs will be most affected by AI?
23:58 How about marketing?
27:55 How creative are LLMs?
30:11 Klarna’s knowledge graph, Kiki
33:10 Reducing the number of enterprise systems
35:24 Build vs buy?
39:59 What’s next for Klarna with AI?
48:48 Lightning round</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In February, Sebastian Siemiatkowski boldly announced that Klarna’s new OpenAI-powered assistant handled two thirds of the Swedish fintech’s customer service chats in its first month. Not only were customer satisfaction metrics better, but by replacing 700 full-time contractors the bottom line impact is projected to be $40M. Since then, every company we talk to wants to know, “How do we get the Klarna customer support thing?”</p><p><br></p><p>Co-founder and CEO Sebastian Siemiatkowski tells us how the Klarna team shipped this new product in record time—and how embracing AI internally with an experimental mindset is transforming the company. He discusses how AI development is proliferating inside the company, from customer support to marketing to internal knowledge to customer-facing experiences. </p><p><br></p><p>Sebastian also reflects on the impacts of AI on employment, society, and the arts while encouraging lawmakers to be open minded about the benefits.</p><p><br></p><p>Hosted by: Sonya Huang and Pat Grady, Sequoia Capital </p><p><br></p><p>Mentioned in this episode:</p><p><a href="https://www.deepl.com/en/translator">DeepL</a>: Language translation app that Sebastian says makes 10,000 translators in Brussels redundant</p><p><a href="https://brand.klarna.com/our-brand">The Klarna brand</a>: The offbeat optimism that the company is now augmenting with AI</p><p><a href="https://neo4j.com/">Neo4j</a>: The graph database management system that Klarna is using to build Kiki, their internal knowledge base</p><p><br></p><p>00:00 Introduction</p><p>01:57 Klarna’s business</p><p>03:00 Pitching OpenAI</p><p>08:51 How we built this</p><p>10:46 Will Klarna ever completely replace its CS team with AI?</p><p>14:22 The benefits</p><p>17:25 If you had a policy magic wand…</p><p>21:12 What jobs will be most affected by AI?</p><p>23:58 How about marketing?</p><p>27:55 How creative are LLMs?</p><p>30:11 Klarna’s knowledge graph, Kiki</p><p>33:10 Reducing the number of enterprise systems</p><p>35:24 Build vs buy?</p><p>39:59 What’s next for Klarna with AI?</p><p>48:48 Lightning round</p><p><br></p>]]>
      </content:encoded>
      <itunes:duration>3095</itunes:duration>
      <guid isPermaLink="false"><![CDATA[295b4624-4864-11ef-879c-9fff35fc0dc9]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI4197462355.mp3?updated=1721692854" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Reflection AI’s Misha Laskin on the AlphaGo Moment for LLMs</title>
      <description>LLMs are democratizing digital intelligence, but we’re all waiting for AI agents to take this to the next level by planning tasks and executing actions to actually transform the way we work and live our lives. 

Yet despite incredible hype around AI agents, we’re still far from that “tipping point” with best-in-class models today. As one measure: coding agents are now scoring in the high-teens % on the SWE-bench benchmark for resolving GitHub issues, which far exceeds the previous unassisted baseline of 2% and the assisted baseline of 5%, but we’ve still got a long way to go.

Why is that? What do we need to truly unlock agentic capability for LLMs? What can we learn from researchers who have built both the most powerful agents in the world, like AlphaGo, and the most powerful LLMs in the world? 

To find out, we’re talking to Misha Laskin, former research scientist at DeepMind. Misha is embarking on his vision to build the best agent models by bringing the search capabilities of RL together with LLMs at his new company, Reflection AI. He and his cofounder Ioannis Antonoglou, co-creator of AlphaGo and AlphaZero and RLHF lead for Gemini, are leveraging their unique insights to train the most reliable models for developers building agentic workflows.

Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital 

00:00 Introduction
01:11 Leaving Russia, discovering science
10:01 Getting into AI with Ioannis Antonoglou
15:54 Reflection AI and agents
25:41 The current state of AI agents
29:17 AlphaGo, AlphaZero and Gemini
32:58 LLMs don’t have a ground truth reward
37:53 The importance of post-training
44:12 Task categories for agents
45:54 Attracting talent
50:52 How far away are capable agents?
56:01 Lightning round

Mentioned: 



The Feynman Lectures on Physics: The classic text that got Misha interested in science.


Mastering the game of Go with deep neural networks and tree search: The original 2016 AlphaGo paper.


Mastering the game of Go without human knowledge: 2017 AlphaGo Zero paper


Scaling Laws for Reward Model Overoptimization: OpenAI paper on how reward models can be gamed at all scales for all algorithms.


Mapping the Mind of a Large Language Model: Article about Anthropic mechanistic interpretability paper that identifies how millions of concepts are represented inside Claude Sonnet


Pieter Abbeel: Berkeley professor and founder of Covariant with whom Misha studied


A2C and A3C: Advantage Actor Critic and Asynchronous Advantage Actor Critic, the two algorithms developed by Misha’s manager at DeepMind, Volodymyr Mnih, that defined reinforcement learning and deep reinforcement learning</description>
      <pubDate>Tue, 16 Jul 2024 09:00:00 -0000</pubDate>
      <itunes:title>Reflection AI’s Misha Laskin on the AlphaGo Moment for LLMs</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/07e17634-37df-11ef-a5e9-bb3393be302a/image/cf1d0ea3309742877caf2d0d3ded5c1b.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>LLMs are democratizing digital intelligence, but we’re all waiting for AI agents to take this to the next level by planning tasks and executing actions to actually transform the way we work and live our lives. </itunes:subtitle>
      <itunes:summary>LLMs are democratizing digital intelligence, but we’re all waiting for AI agents to take this to the next level by planning tasks and executing actions to actually transform the way we work and live our lives. 

Yet despite incredible hype around AI agents, we’re still far from that “tipping point” with best-in-class models today. As one measure: coding agents are now scoring in the high-teens % on the SWE-bench benchmark for resolving GitHub issues, which far exceeds the previous unassisted baseline of 2% and the assisted baseline of 5%, but we’ve still got a long way to go.

Why is that? What do we need to truly unlock agentic capability for LLMs? What can we learn from researchers who have built both the most powerful agents in the world, like AlphaGo, and the most powerful LLMs in the world? 

To find out, we’re talking to Misha Laskin, former research scientist at DeepMind. Misha is embarking on his vision to build the best agent models by bringing the search capabilities of RL together with LLMs at his new company, Reflection AI. He and his cofounder Ioannis Antonoglou, co-creator of AlphaGo and AlphaZero and RLHF lead for Gemini, are leveraging their unique insights to train the most reliable models for developers building agentic workflows.

Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital 

00:00 Introduction
01:11 Leaving Russia, discovering science
10:01 Getting into AI with Ioannis Antonoglou
15:54 Reflection AI and agents
25:41 The current state of AI agents
29:17 AlphaGo, AlphaZero and Gemini
32:58 LLMs don’t have a ground truth reward
37:53 The importance of post-training
44:12 Task categories for agents
45:54 Attracting talent
50:52 How far away are capable agents?
56:01 Lightning round

Mentioned: 



The Feynman Lectures on Physics: The classic text that got Misha interested in science.


Mastering the game of Go with deep neural networks and tree search: The original 2016 AlphaGo paper.


Mastering the game of Go without human knowledge: 2017 AlphaGo Zero paper


Scaling Laws for Reward Model Overoptimization: OpenAI paper on how reward models can be gamed at all scales for all algorithms.


Mapping the Mind of a Large Language Model: Article about Anthropic mechanistic interpretability paper that identifies how millions of concepts are represented inside Claude Sonnet


Pieter Abbeel: Berkeley professor and founder of Covariant with whom Misha studied


A2C and A3C: Advantage Actor Critic and Asynchronous Advantage Actor Critic, the two algorithms developed by Misha’s manager at DeepMind, Volodymyr Mnih, that defined reinforcement learning and deep reinforcement learning</itunes:summary>
      <content:encoded>
        <![CDATA[<p>LLMs are democratizing digital intelligence, but we’re all waiting for AI agents to take this to the next level by planning tasks and executing actions to actually transform the way we work and live our lives. </p><p><br></p><p>Yet despite incredible hype around AI agents, we’re still far from that “tipping point” with best-in-class models today. As one measure: coding agents are now scoring in the high-teens % on the SWE-bench benchmark for resolving GitHub issues, which far exceeds the previous unassisted baseline of 2% and the assisted baseline of 5%, but we’ve still got a long way to go.</p><p><br></p><p>Why is that? What do we need to truly unlock agentic capability for LLMs? What can we learn from researchers who have built both the most powerful agents in the world, like AlphaGo, and the most powerful LLMs in the world? </p><p><br></p><p>To find out, we’re talking to Misha Laskin, former research scientist at DeepMind. Misha is embarking on his vision to build the best agent models by bringing the search capabilities of RL together with LLMs at his new company, Reflection AI. He and his cofounder Ioannis Antonoglou, co-creator of AlphaGo and AlphaZero and RLHF lead for Gemini, are leveraging their unique insights to train the most reliable models for developers building agentic workflows.</p><p><br></p><p>Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital </p><p><br></p><p>00:00 Introduction</p><p>01:11 Leaving Russia, discovering science</p><p>10:01 Getting into AI with Ioannis Antonoglou</p><p>15:54 Reflection AI and agents</p><p>25:41 The current state of AI agents</p><p>29:17 AlphaGo, AlphaZero and Gemini</p><p>32:58 LLMs don’t have a ground truth reward</p><p>37:53 The importance of post-training</p><p>44:12 Task categories for agents</p><p>45:54 Attracting talent</p><p>50:52 How far away are capable agents?</p><p>56:01 Lightning round</p><p><br></p><p>Mentioned: </p><p><br></p><ul>
<li>
<a href="https://www.feynmanlectures.caltech.edu/I_toc.html">The Feynman Lectures on Physics</a>: The classic text that got Misha interested in science.</li>
<li>
<a href="https://research.google/pubs/mastering-the-game-of-go-with-deep-neural-networks-and-tree-search/">Mastering the game of Go with deep neural networks and tree search</a>: The original 2016 AlphaGo paper.</li>
<li>
<a href="https://www.nature.com/articles/nature24270.epdf?author_access_token=VJXbVjaSHxFoctQQ4p2k4tRgN0jAjWel9jnR3ZoTv0PVW4gB86EEpGqTRDtpIz-2rmo8-KG06gqVobU5NSCFeHILHcVFUeMsbvwS-lxjqQGg98faovwjxeTUgZAUMnRQ">Mastering the game of Go without human knowledge</a>: 2017 AlphaGo Zero paper</li>
<li>
<a href="https://arxiv.org/abs/2210.10760">Scaling Laws for Reward Model Overoptimization</a>: OpenAI paper on how reward models can be gamed at all scales for all algorithms.</li>
<li>
<a href="https://www.anthropic.com/news/mapping-mind-language-model">Mapping the Mind of a Large Language Model</a>: Article about Anthropic mechanistic interpretability paper that identifies how millions of concepts are represented inside Claude Sonnet</li>
<li>
<a href="https://people.eecs.berkeley.edu/~pabbeel/">Pieter Abbeel</a>: Berkeley professor and founder of Covariant with whom Misha studied</li>
<li>
<a href="https://paperswithcode.com/method/a2c#:~:text=A2C%2C%20or%20Advantage%20Actor%20Critic,over%20all%20of%20the%20actors.">A2C</a> and <a href="https://paperswithcode.com/method/a3c">A3C</a>: Advantage Actor Critic and Asynchronous Advantage Actor Critic, the two algorithms developed by Misha’s manager at DeepMind, Volodymyr Mnih, that defined reinforcement learning and deep reinforcement learning</li>
</ul>]]>
      </content:encoded>
      <itunes:duration>4024</itunes:duration>
      <guid isPermaLink="false"><![CDATA[07e17634-37df-11ef-a5e9-bb3393be302a]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI2618323153.mp3?updated=1721088578" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Microsoft CTO Kevin Scott on How Far Scaling Laws Will Extend</title>
      <description>The current LLM era is the result of scaling the size of models in successive waves (and the compute to train them). It is also the result of better-than-Moore’s-Law price vs performance ratios in each new generation of Nvidia GPUs. The largest platform companies are continuing to invest in scaling as the prime driver of AI innovation.

Are they right, or will marginal returns level off soon, leaving hyperscalers with too much hardware and too few customer use cases? To find out, we talk to Microsoft CTO Kevin Scott, who has led the company’s AI strategy for the past seven years. Scott describes himself as a “short-term pessimist, long-term optimist,” and he sees the scaling trend as durable for the industry and critical for the establishment of Microsoft’s AI platform.

Scott believes there will be a shift across the compute ecosystem from training to inference as the frontier models continue to improve, serving wider and more reliable use cases. He also discusses the coming business models for training data, and even what ad units might look like for autonomous agents.

Hosted by: Pat Grady and Bill Coughran, Sequoia Capital

Mentioned:
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, the 2018 Google paper that convinced Kevin that Microsoft wasn’t moving fast enough on AI. 
Dennard scaling: The scaling law that describes the proportional relationship between transistor size and power use; has not held since 2012 and is often confused with Moore’s Law.
Textbooks Are All You Need: Microsoft paper that introduces a new large language model for code, phi-1, that achieves smaller size by using higher quality “textbook” data.
GPQA and MMLU: Benchmarks for reasoning
Copilot: Microsoft product line of GPT consumer assistants from general productivity to design, vacation planning, cooking and fitness.
Devin: Autonomous AI code agent from Cognition Labs that Microsoft recently announced a partnership with.
Ray Solomonoff: Participant in the 1956 Dartmouth Summer Research Project on Artificial Intelligence that named the field; Kevin admires his prescience about the importance of probabilistic methods decades before anyone else.

00:00 - Introduction
01:20 - Kevin’s backstory
06:56 - The role of PhDs in AI engineering
09:56 - Microsoft’s AI strategy
12:40 - Highlights and lowlights
16:28 - Accelerating investments
18:38 - The OpenAI partnership
22:46 - Soon inference will dwarf training
27:56 - Will the demand/supply balance change?
30:51 - Business models for data
36:54 - The value function
39:58 - Copilots
44:47 - The 98/2 rule
49:34 - Solving zero-sum games
57:13 - Lightning round</description>
      <pubDate>Tue, 09 Jul 2024 09:00:00 -0000</pubDate>
      <itunes:title>Microsoft CTO Kevin Scott on How Far Scaling Laws Will Extend</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/ef855cbe-37dd-11ef-bc7f-8f47192a0688/image/c687cfebb82902f79b4366dabfa158fb.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>The current LLM era is the result of scaling the size of models in successive waves (and the compute to train them). It is also the result of better-than-Moore’s-Law price vs performance ratios in each new generation of Nvidia GPUs. The largest platform companies are continuing to invest in scaling as the prime driver of AI innovation.

Are they right, or will marginal returns level off soon, leaving hyperscalers with too much hardware and too few customer use cases? To find out, we talk to Microsoft CTO Kevin Scott, who has led the company’s AI strategy for the past seven years. Scott describes himself as a “short-term pessimist, long-term optimist,” and he sees the scaling trend as durable for the industry and critical for the establishment of Microsoft’s AI platform.

Scott believes there will be a shift across the compute ecosystem from training to inference as the frontier models continue to improve, serving wider and more reliable use cases. He also discusses the coming business models for training data, and even what ad units might look like for autonomous agents.

Hosted by: Pat Grady and Bill Coughran, Sequoia Capital

Mentioned:
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, the 2018 Google paper that convinced Kevin that Microsoft wasn’t moving fast enough on AI. 
Dennard scaling: The scaling law that describes the proportional relationship between transistor size and power use; has not held since 2012 and is often confused with Moore’s Law.
Textbooks Are All You Need: Microsoft paper that introduces phi-1, a new large language model for code that achieves strong performance at a smaller size by training on higher-quality “textbook” data.
GPQA and MMLU: Benchmarks for reasoning
Copilot: Microsoft’s product line of GPT-based consumer assistants, spanning general productivity, design, vacation planning, cooking and fitness.
Devin: Autonomous AI code agent from Cognition Labs, with which Microsoft recently announced a partnership.
Ray Solomonoff: Participant in the 1956 Dartmouth Summer Research Project on Artificial Intelligence that named the field; Kevin admires his prescience in recognizing the importance of probabilistic methods decades before anyone else.

00:00 - Introduction
01:20 - Kevin’s backstory
06:56 - The role of PhDs in AI engineering
09:56 - Microsoft’s AI strategy
12:40 - Highlights and lowlights
16:28 - Accelerating investments
18:38 - The OpenAI partnership
22:46 - Soon inference will dwarf training
27:56 - Will the demand/supply balance change?
30:51 - Business models for data
36:54 - The value function
39:58 - Copilots
44:47 - The 98/2 rule
49:34 - Solving zero-sum games
57:13 - Lightning round</itunes:summary>
      <content:encoded>
        <![CDATA[<p>The current LLM era is the result of scaling the size of models in successive waves (and the compute to train them). It is also the result of better-than-Moore’s-Law price vs performance ratios in each new generation of Nvidia GPUs. The largest platform companies are continuing to invest in scaling as the prime driver of AI innovation.</p><p><br></p><p>Are they right, or will marginal returns level off soon, leaving hyperscalers with too much hardware and too few customer use cases? To find out, we talk to Microsoft CTO Kevin Scott who has led their AI strategy for the past seven years. Scott describes himself as a “short-term pessimist, long-term optimist” and he sees the scaling trend as durable for the industry and critical for the establishment of Microsoft’s AI platform.</p><p><br></p><p>Scott believes there will be a shift across the compute ecosystem from training to inference as the frontier models continue to improve, serving wider and more reliable use cases. He also discusses the coming business models for training data, and even what ad units might look like for autonomous agents.</p><p><br></p><p>Hosted by: Pat Grady and Bill Coughran, Sequoia Capital</p><p><br></p><p>Mentioned:</p><p><a href="http://arxiv.org/pdf/1810.04805">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</a>, the 2018 Google paper that convinced Kevin that Microsoft wasn’t moving fast enough on AI. 
</p><p><a href="https://en.wikipedia.org/wiki/Dennard_scaling">Dennard scaling</a>: The scaling law that describes the proportional relationship between transistor size and power use; has not held since 2012 and is often confused with Moore’s Law.</p><p><a href="https://www.microsoft.com/en-us/research/publication/textbooks-are-all-you-need/">Textbooks Are All You Need</a>: Microsoft paper that introduces phi-1, a new large language model for code that achieves strong performance at a smaller size by training on higher-quality “textbook” data.</p><p><a href="https://paperswithcode.com/dataset/gpqa">GPQA</a> and <a href="https://paperswithcode.com/sota/multi-task-language-understanding-on-mmlu">MMLU</a>: Benchmarks for reasoning</p><p><a href="https://copilot.microsoft.com/">Copilot</a>: Microsoft’s product line of GPT-based consumer assistants, spanning general productivity, design, vacation planning, cooking and fitness.</p><p><a href="https://www.maginative.com/article/microsoft-partners-with-cognition-labs-to-bring-devin-to-developers/">Devin</a>: Autonomous AI code agent from Cognition Labs, with which Microsoft recently announced a partnership.</p><p><a href="https://raysolomonoff.com/">Ray Solomonoff</a>: Participant in the 1956 Dartmouth Summer Research Project on Artificial Intelligence that named the field; Kevin admires his prescience in recognizing the importance of probabilistic methods decades before anyone else.</p><p><br></p><p>00:00 - Introduction</p><p>01:20 - Kevin’s backstory</p><p>06:56 - The role of PhDs in AI engineering</p><p>09:56 - Microsoft’s AI strategy</p><p>12:40 - Highlights and lowlights</p><p>16:28 - Accelerating investments</p><p>18:38 - The OpenAI partnership</p><p>22:46 - Soon inference will dwarf training</p><p>27:56 - Will the demand/supply balance change?</p><p>30:51 - Business models for data</p><p>36:54 - The value function</p><p>39:58 - Copilots</p><p>44:47 - The 98/2 rule</p><p>49:34 - Solving zero-sum games</p><p>57:13 - Lightning round</p>]]>
      </content:encoded>
      <itunes:duration>3627</itunes:duration>
      <guid isPermaLink="false"><![CDATA[ef855cbe-37dd-11ef-bc7f-8f47192a0688]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI1597985326.mp3?updated=1720475440" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Zapier’s Mike Knoop launches ARC Prize to Jumpstart New Ideas for AGI</title>
      <description>As impressive as LLMs are, the growing consensus is that language, scale and compute won’t get us to AGI. Although many AI benchmarks have quickly achieved human-level performance, there is one eval that has barely budged since it was created in 2019.

Google researcher François Chollet wrote a paper that year defining intelligence as skill-acquisition efficiency—the ability to learn new skills as humans do, from a small number of examples. To make it testable he proposed a new benchmark, the Abstraction and Reasoning Corpus (ARC), designed to be easy for humans, but hard for AI. Notably, it doesn’t rely on language.

Zapier co-founder Mike Knoop read Chollet’s paper as the LLM wave was rising. He worked quickly to integrate generative AI into Zapier’s product, but kept coming back to the lack of progress on the ARC benchmark. In June, Knoop and Chollet launched the ARC Prize, a public competition offering more than $1M to beat and open-source a solution to the ARC-AGI eval.

In this episode Mike talks about the new ideas required to solve ARC, shares updates from the first two weeks of the competition, and explains why he’s excited for AGI systems that can innovate alongside humans.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 

Mentioned:


Chain-of-Thought Prompting Elicits Reasoning in Large Language Models: The 2022 paper that first drew Mike’s attention to the capabilities of LLMs


On the Measure of Intelligence: 2019 paper by Google researcher François Chollet that introduced the ARC benchmark, which remains unbeaten


ARC Prize 2024: The $1M+ competition Mike and François have launched to drive interest in solving the ARC-AGI eval


Sequence to Sequence Learning with Neural Networks: Ilya Sutskever paper from 2014 that influenced the direction of machine translation with deep neural networks.


Etched: Maker of the first ASIC chip that accelerates transformers on silicon, which Luke Miles wrote about on LessWrong


Kaggle: The leading data science competition platform and online community, acquired by Google in 2017


Lab42: Swiss AI lab that hosted the ARCathon, a precursor to the ARC Prize


Jack Cole: Researcher on the team that placed #1 on the ARCathon leaderboard


Ryan Greenblatt: Researcher with the current high score (50%) on the ARC public leaderboard


(00:00) Introduction
(01:51) AI at Zapier
(08:31) What is ARC AGI?
(13:25) What does it mean to efficiently acquire a new skill?
(19:03) What approaches will succeed?
(21:11) A little bit of a different shape
(25:59) The role of code generation and program synthesis
(29:11) What types of people are working on this?
(31:45) Trying to prove you wrong
(34:50) Where are the big labs?
(38:21) The world post-AGI
(42:51) When will we cross 85% on ARC AGI?
(46:12) Will LLMs be part of the solution?
(50:13) Lightning round</description>
      <pubDate>Tue, 02 Jul 2024 09:00:00 -0000</pubDate>
      <itunes:title>Zapier’s Mike Knoop launches ARC Prize to Jumpstart New Ideas for AGI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/a14664b2-37dd-11ef-999f-0f0d12bf1d0a/image/8ecdbb849660a54e56dda4fef25c5b3c.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>As impressive as LLMs are, the growing consensus is that language, scale and compute won’t get us to AGI</itunes:subtitle>
      <itunes:summary>As impressive as LLMs are, the growing consensus is that language, scale and compute won’t get us to AGI. Although many AI benchmarks have quickly achieved human-level performance, there is one eval that has barely budged since it was created in 2019.

Google researcher François Chollet wrote a paper that year defining intelligence as skill-acquisition efficiency—the ability to learn new skills as humans do, from a small number of examples. To make it testable he proposed a new benchmark, the Abstraction and Reasoning Corpus (ARC), designed to be easy for humans, but hard for AI. Notably, it doesn’t rely on language.

Zapier co-founder Mike Knoop read Chollet’s paper as the LLM wave was rising. He worked quickly to integrate generative AI into Zapier’s product, but kept coming back to the lack of progress on the ARC benchmark. In June, Knoop and Chollet launched the ARC Prize, a public competition offering more than $1M to beat and open-source a solution to the ARC-AGI eval.

In this episode Mike talks about the new ideas required to solve ARC, shares updates from the first two weeks of the competition, and explains why he’s excited for AGI systems that can innovate alongside humans.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 

Mentioned:


Chain-of-Thought Prompting Elicits Reasoning in Large Language Models: The 2022 paper that first drew Mike’s attention to the capabilities of LLMs


On the Measure of Intelligence: 2019 paper by Google researcher François Chollet that introduced the ARC benchmark, which remains unbeaten


ARC Prize 2024: The $1M+ competition Mike and François have launched to drive interest in solving the ARC-AGI eval


Sequence to Sequence Learning with Neural Networks: Ilya Sutskever paper from 2014 that influenced the direction of machine translation with deep neural networks.


Etched: Maker of the first ASIC chip that accelerates transformers on silicon, which Luke Miles wrote about on LessWrong


Kaggle: The leading data science competition platform and online community, acquired by Google in 2017


Lab42: Swiss AI lab that hosted the ARCathon, a precursor to the ARC Prize


Jack Cole: Researcher on the team that placed #1 on the ARCathon leaderboard


Ryan Greenblatt: Researcher with the current high score (50%) on the ARC public leaderboard


(00:00) Introduction
(01:51) AI at Zapier
(08:31) What is ARC AGI?
(13:25) What does it mean to efficiently acquire a new skill?
(19:03) What approaches will succeed?
(21:11) A little bit of a different shape
(25:59) The role of code generation and program synthesis
(29:11) What types of people are working on this?
(31:45) Trying to prove you wrong
(34:50) Where are the big labs?
(38:21) The world post-AGI
(42:51) When will we cross 85% on ARC AGI?
(46:12) Will LLMs be part of the solution?
(50:13) Lightning round</itunes:summary>
      <content:encoded>
        <![CDATA[<p>As impressive as LLMs are, the growing consensus is that language, scale and compute won’t get us to AGI. Although many AI benchmarks have quickly achieved human-level performance, there is one eval that has barely budged since it was created in 2019.</p><p><br></p><p>Google researcher François Chollet wrote a paper that year defining intelligence as skill-acquisition efficiency—the ability to learn new skills as humans do, from a small number of examples. To make it testable he proposed a new benchmark, the Abstraction and Reasoning Corpus (ARC), designed to be easy for humans, but hard for AI. Notably, it doesn’t rely on language.</p><p><br></p><p>Zapier co-founder Mike Knoop read Chollet’s paper as the LLM wave was rising. He worked quickly to integrate generative AI into Zapier’s product, but kept coming back to the lack of progress on the ARC benchmark. In June, Knoop and Chollet launched the ARC Prize, a public competition offering more than $1M to beat and open-source a solution to the ARC-AGI eval.</p><p><br></p><p>In this episode Mike talks about the new ideas required to solve ARC, shares updates from the first two weeks of the competition, and shares why he’s excited for AGI systems that can innovate alongside humans.</p><p><br></p><p>Hosted by: Sonya Huang and Pat Grady, Sequoia Capital </p><p><br></p><p>Mentioned:</p><ul>
<li>
<a href="https://arxiv.org/abs/2201.11903">Chain-of-Thought Prompting Elicits Reasoning in Large Language Models</a>: The 2022 paper that first drew Mike’s attention to the capabilities of LLMs</li>
<li>
<a href="https://arxiv.org/abs/1911.01547">On the Measure of Intelligence</a>: 2019 paper by Google researcher François Chollet that introduced the ARC benchmark, which remains unbeaten</li>
<li>
<a href="https://arcprize.org/">ARC Prize 2024</a>: The $1M+ competition Mike and François have launched to drive interest in solving the ARC-AGI eval</li>
<li>
<a href="https://research.google/pubs/sequence-to-sequence-learning-with-neural-networks/">Sequence to Sequence Learning with Neural Networks</a>: Ilya Sutskever paper from 2014 that influenced the direction of machine translation with deep neural networks.</li>
<li>
<a href="https://www.etched.com/">Etched</a>: Maker of the first ASIC chip that accelerates transformers on silicon, which Luke Miles wrote about on LessWrong</li>
<li>
<a href="https://www.kaggle.com/">Kaggle</a>: The leading data science competition platform and online community, acquired by Google in 2017</li>
<li>
<a href="https://lab42.global/arcathon/">Lab42</a>: Swiss AI lab that hosted the ARCathon, a precursor to the ARC Prize</li>
<li>
<a href="https://twitter.com/Jcole75Cole/status/1804136781824045464">Jack Cole</a>: Researcher on the team that placed #1 on the ARCathon leaderboard</li>
<li>
<a href="https://redwoodresearch.substack.com/p/getting-50-sota-on-arc-agi-with-gpt">Ryan Greenblatt</a>: Researcher with the current high score (50%) on the ARC public leaderboard</li>
</ul><p><br></p><p>(00:00) Introduction</p><p>(01:51) AI at Zapier</p><p>(08:31) What is ARC AGI?</p><p>(13:25) What does it mean to efficiently acquire a new skill?</p><p>(19:03) What approaches will succeed?</p><p>(21:11) A little bit of a different shape</p><p>(25:59) The role of code generation and program synthesis</p><p>(29:11) What types of people are working on this?</p><p>(31:45) Trying to prove you wrong</p><p>(34:50) Where are the big labs?</p><p>(38:21) The world post-AGI</p><p>(42:51) When will we cross 85% on ARC AGI?</p><p>(46:12) Will LLMs be part of the solution?</p><p>(50:13) Lightning round</p>]]>
      </content:encoded>
      <itunes:duration>3312</itunes:duration>
      <guid isPermaLink="false"><![CDATA[a14664b2-37dd-11ef-999f-0f0d12bf1d0a]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI5994281884.mp3?updated=1719870627" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Factory’s Matan Grinberg and Eno Reyes Unleash the Droids on Software Development</title>
      <description>Archimedes said that with a large enough lever, you can move the world. For decades, software engineering has been that lever. And now, AI is compounding that lever. How will we use AI to apply 100 or 1000x leverage to the greatest lever to move the world?

Matan Grinberg and Eno Reyes, co-founders of Factory, have chosen to do things differently than many of their peers in this white-hot space. They sell a fleet of “Droids,” purpose-built dev agents which accomplish different tasks in the software development lifecycle (like code review, testing, pull requests or writing code). Rather than training their own foundation model, their approach is to build something useful for engineering orgs today on top of the rapidly improving models, aligning with the developer and evolving with them. 

Matan and Eno are optimistic about the effects of autonomy on software development and about building a company at the application layer. Their advice to founders: “The only way you can win is by executing faster and being more obsessed.”

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 

Mentioned: 


Juan Maldacena, Institute for Advanced Study, the string theorist Matan cold-called as an undergrad


SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering, small-model open-source software engineering agent


SWE-bench: Can Language Models Resolve Real-World GitHub Issues?, an evaluation framework for GitHub issues


Monte Carlo tree search, a 2006 algorithm for decision-making in games (and used in AlphaGo)


Language agent tree search, a framework for LLM planning, acting and reasoning


The Bitter Lesson, Rich Sutton’s essay on scaling in search and learning 


Code churn, time to merge, cycle time, metrics Factory thinks are important to eng orgs


Transcript: https://www.sequoiacap.com/podcast/training-data-factory/

00:00 Introduction
01:36 Personal backgrounds
10:54 The compound lever
12:41 What is Factory? 
16:29 Cognitive architectures 
21:13 800 engineers at OpenAI are working on my margins 
24:00 Jeff Dean doesn't understand your code base
25:40 Individual dev productivity vs system-wide optimization 
30:04 Results: Factory in action 
32:54 Learnings along the way 
35:36 Fully autonomous Jeff Deans
37:56 Beacons of the upcoming age
40:04 How far are we? 
43:02 Competition 
45:32 Lightning round
49:34 Bonus round: Factory's SWE-bench results</description>
      <pubDate>Tue, 25 Jun 2024 11:00:00 -0000</pubDate>
      <itunes:title>Factory’s Matan Grinberg and Eno Reyes Unleash the Droids on Software Development</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/af04fbc0-324e-11ef-a799-73b3d7be953b/image/0593e11bd417300a53bbd09d809121b4.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>How will we use AI to apply 100 or 1000x leverage to the greatest lever to move the world?</itunes:subtitle>
      <itunes:summary>Archimedes said that with a large enough lever, you can move the world. For decades, software engineering has been that lever. And now, AI is compounding that lever. How will we use AI to apply 100 or 1000x leverage to the greatest lever to move the world?

Matan Grinberg and Eno Reyes, co-founders of Factory, have chosen to do things differently than many of their peers in this white-hot space. They sell a fleet of “Droids,” purpose-built dev agents which accomplish different tasks in the software development lifecycle (like code review, testing, pull requests or writing code). Rather than training their own foundation model, their approach is to build something useful for engineering orgs today on top of the rapidly improving models, aligning with the developer and evolving with them. 

Matan and Eno are optimistic about the effects of autonomy on software development and about building a company at the application layer. Their advice to founders: “The only way you can win is by executing faster and being more obsessed.”

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 

Mentioned: 


Juan Maldacena, Institute for Advanced Study, the string theorist Matan cold-called as an undergrad


SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering, small-model open-source software engineering agent


SWE-bench: Can Language Models Resolve Real-World GitHub Issues?, an evaluation framework for GitHub issues


Monte Carlo tree search, a 2006 algorithm for decision-making in games (and used in AlphaGo)


Language agent tree search, a framework for LLM planning, acting and reasoning


The Bitter Lesson, Rich Sutton’s essay on scaling in search and learning 


Code churn, time to merge, cycle time, metrics Factory thinks are important to eng orgs


Transcript: https://www.sequoiacap.com/podcast/training-data-factory/

00:00 Introduction
01:36 Personal backgrounds
10:54 The compound lever
12:41 What is Factory? 
16:29 Cognitive architectures 
21:13 800 engineers at OpenAI are working on my margins 
24:00 Jeff Dean doesn't understand your code base
25:40 Individual dev productivity vs system-wide optimization 
30:04 Results: Factory in action 
32:54 Learnings along the way 
35:36 Fully autonomous Jeff Deans
37:56 Beacons of the upcoming age
40:04 How far are we? 
43:02 Competition 
45:32 Lightning round
49:34 Bonus round: Factory's SWE-bench results</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Archimedes said that with a large enough lever, you can move the world. For decades, software engineering has been that lever. And now, AI is compounding that lever. How will we use AI to apply 100 or 1000x leverage to the greatest lever to move the world?</p><p><br></p><p>Matan Grinberg and Eno Reyes, co-founders of Factory, have chosen to do things differently than many of their peers in this white-hot space. They sell a fleet of “Droids,” purpose-built dev agents which accomplish different tasks in the software development lifecycle (like code review, testing, pull requests or writing code). Rather than training their own foundation model, their approach is to build something useful for engineering orgs today on top of the rapidly improving models, aligning with the developer and evolving with them. </p><p><br></p><p>Matan and Eno are optimistic about the effects of autonomy in software development and on building a company in the application layer. Their advice to founders, “The only way you can win is by executing faster and being more obsessed.”</p><p><br></p><p>Hosted by: Sonya Huang and Pat Grady, Sequoia Capital </p><p><br></p><p>Mentioned: </p><ul>
<li>
<a href="https://www.ias.edu/scholars/maldacena">Juan Maldacena, Institute for Advanced Study</a>, the string theorist Matan cold-called as an undergrad</li>
<li>
<a href="https://arxiv.org/abs/2405.15793">SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering</a>, small-model open-source software engineering agent</li>
<li>
<a href="https://arxiv.org/abs/2310.06770">SWE-bench: Can Language Models Resolve Real-World GitHub Issues?</a>, an evaluation framework for GitHub issues</li>
<li>
<a href="https://www.remi-coulom.fr/CG2006/">Monte Carlo tree search</a>, a 2006 algorithm for decision-making in games (and used in AlphaGo)</li>
<li>
<a href="https://arxiv.org/abs/2310.04406">Language agent tree search</a>, a framework for LLM planning, acting and reasoning</li>
<li>
<a href="http://www.incompleteideas.net/IncIdeas/BitterLesson.html">The Bitter Lesson</a>, Rich Sutton’s essay on scaling in search and learning </li>
<li>
<a href="https://www.pluralsight.com/blog/tutorials/code-churn">Code churn</a>, <a href="https://docs.velocity.codeclimate.com/en/articles/2913578-time-to-merge">time to merge</a>, <a href="https://codeclimate.com/blog/software-engineering-cycle-time">cycle time</a>, metrics Factory thinks are important to eng orgs</li>
</ul><p><br></p><p>Transcript: https://www.sequoiacap.com/podcast/training-data-factory/</p><p><br></p><p>00:00 Introduction</p><p>01:36 Personal backgrounds</p><p>10:54 The compound lever</p><p>12:41 What is Factory? </p><p>16:29 Cognitive architectures </p><p>21:13 800 engineers at OpenAI are working on my margins </p><p>24:00 Jeff Dean doesn't understand your code base</p><p>25:40 Individual dev productivity vs system-wide optimization </p><p>30:04 Results: Factory in action </p><p>32:54 Learnings along the way </p><p>35:36 Fully autonomous Jeff Deans</p><p>37:56 Beacons of the upcoming age</p><p>40:04 How far are we? </p><p>43:02 Competition </p><p>45:32 Lightning round</p><p>49:34 Bonus round: Factory's SWE-bench results</p>]]>
      </content:encoded>
      <itunes:duration>3550</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[af04fbc0-324e-11ef-a799-73b3d7be953b]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI1261809222.mp3?updated=1719351451" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>LangChain’s Harrison Chase on Building the Orchestration Layer for AI Agents </title>
      <description>Last year, AutoGPT and Baby AGI captured our imaginations—agents quickly became the buzzword of the day…and then things went quiet. AutoGPT and Baby AGI may have marked a peak in the hype cycle, but this year has seen a wave of agentic breakouts on the product side, from Klarna’s customer support AI to Cognition’s Devin, etc.

Harrison Chase of LangChain is focused on enabling the orchestration layer for agents. In this conversation, he explains what’s changed that’s allowing agents to improve performance and find traction. 

Harrison shares what he’s optimistic about, where he sees promise for agents vs. what he thinks will be trained into models themselves, and discusses novel kinds of UX that he imagines might transform how we experience agents in the future.     

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital

Mentioned: 


ReAct: Synergizing Reasoning and Acting in Language Models, the first cognitive architecture for agents


SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering, small-model open-source software engineering agent from researchers at Princeton


Devin, autonomous software engineering from Cognition


V0: Generative UI agent from Vercel


GPT Researcher, a research agent 


Language Model Cascades: 2022 paper by David Dohan (then at Google Brain, now at OpenAI) that was influential for Harrison in developing LangChain


Transcript: https://www.sequoiacap.com/podcast/training-data-harrison-chase/

00:00 Introduction
01:21 What are agents? 
05:00 What is LangChain’s role in the agent ecosystem?
11:13 What is a cognitive architecture? 
13:20 Is bespoke and hard-coded the way the world is going, or a stopgap?
18:48 Focus on what makes your beer taste better
20:37 So what? 
22:20 Where are agents getting traction?
25:35 Reflection, chain of thought, other techniques?
30:42 UX can influence the effectiveness of the architecture
35:30 What’s out of scope?
38:04 Fine tuning vs prompting?
42:17 Existing observability tools for LLMs vs needing a new architecture/approach
45:38 Lightning round</description>
      <pubDate>Tue, 18 Jun 2024 11:00:00 -0000</pubDate>
      <itunes:title>LangChain’s Harrison Chase on Building the Orchestration Layer for AI Agents </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/508cbf82-2d18-11ef-b32d-4711f6368100/image/86444f00a77c9cd5b450b79c5df4633b.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>LangChain's Harrison Chase explains what’s allowing AI agents to improve performance and find traction</itunes:subtitle>
      <itunes:summary>Last year, AutoGPT and Baby AGI captured our imaginations—agents quickly became the buzzword of the day…and then things went quiet. AutoGPT and Baby AGI may have marked a peak in the hype cycle, but this year has seen a wave of agentic breakouts on the product side, from Klarna’s customer support AI to Cognition’s Devin, etc.

Harrison Chase of LangChain is focused on enabling the orchestration layer for agents. In this conversation, he explains what’s changed that’s allowing agents to improve performance and find traction. 

Harrison shares what he’s optimistic about, where he sees promise for agents vs. what he thinks will be trained into models themselves, and discusses novel kinds of UX that he imagines might transform how we experience agents in the future.     

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital

Mentioned: 


ReAct: Synergizing Reasoning and Acting in Language Models, the first cognitive architecture for agents


SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering, small-model open-source software engineering agent from researchers at Princeton


Devin, autonomous software engineering from Cognition


V0: Generative UI agent from Vercel


GPT Researcher, a research agent 


Language Model Cascades: 2022 paper by David Dohan (then at Google Brain, now at OpenAI) that was influential for Harrison in developing LangChain


Transcript: https://www.sequoiacap.com/podcast/training-data-harrison-chase/

00:00 Introduction
01:21 What are agents? 
05:00 What is LangChain’s role in the agent ecosystem?
11:13 What is a cognitive architecture? 
13:20 Is bespoke and hard-coded the way the world is going, or a stopgap?
18:48 Focus on what makes your beer taste better
20:37 So what? 
22:20 Where are agents getting traction?
25:35 Reflection, chain of thought, other techniques?
30:42 UX can influence the effectiveness of the architecture
35:30 What’s out of scope?
38:04 Fine tuning vs prompting?
42:17 Existing observability tools for LLMs vs needing a new architecture/approach
45:38 Lightning round</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Last year, AutoGPT and Baby AGI captured our imaginations—agents quickly became the buzzword of the day…and then things went quiet. AutoGPT and Baby AGI may have marked a peak in the hype cycle, but this year has seen a wave of agentic breakouts on the product side, from Klarna’s customer support AI to Cognition’s Devin, etc.</p><p><br></p><p>Harrison Chase of LangChain is focused on enabling the orchestration layer for agents. In this conversation, he explains what’s changed that’s allowing agents to improve performance and find traction. </p><p><br></p><p>Harrison shares what he’s optimistic about, where he sees promise for agents vs. what he thinks will be trained into models themselves, and discusses novel kinds of UX that he imagines might transform how we experience agents in the future.     </p><p><br></p><p>Hosted by: Sonya Huang and Pat Grady, Sequoia Capital</p><p><br></p><p>Mentioned: </p><ul>
<li>
<a href="https://research.google/blog/react-synergizing-reasoning-and-acting-in-language-models/">ReAct: Synergizing Reasoning and Acting in Language Models</a>, the first cognitive architecture for agents</li>
<li>
<a href="https://arxiv.org/abs/2405.15793">SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering</a>, small-model open-source software engineering agent from researchers at Princeton</li>
<li>
<a href="https://www.cognition.ai/blog/introducing-devin">Devin</a>, autonomous software engineering from Cognition</li>
<li>
<a href="https://vercel.com/blog/announcing-v0-generative-ui">v0</a>, generative UI agent from Vercel</li>
<li>
<a href="https://github.com/assafelovic/gpt-researcher">GPT Researcher</a>, a research agent </li>
<li>
<a href="https://arxiv.org/abs/2207.10342">Language Model Cascades</a>: 2022 paper by <a href="https://www.ddohan.com/">David Dohan</a> (then at Google Brain, now at OpenAI) that was influential for Harrison in developing LangChain</li>
</ul><p><br></p><p>Transcript: <a href="https://www.sequoiacap.com/podcast/training-data-harrison-chase/">https://www.sequoiacap.com/podcast/training-data-harrison-chase/</a></p><p><br></p><p>00:00 Introduction</p><p>01:21 What are agents? </p><p>05:00 What is LangChain’s role in the agent ecosystem?</p><p>11:13 What is a cognitive architecture? </p><p>13:20 Is bespoke and hard-coded the way the world is going, or a stopgap?</p><p>18:48 Focus on what makes your beer taste better</p><p>20:37 So what? </p><p>22:20 Where are agents getting traction?</p><p>25:35 Reflection, chain of thought, other techniques?</p><p>30:42 UX can influence the effectiveness of the architecture</p><p>35:30 What’s out of scope?</p><p>38:04 Fine tuning vs prompting?</p><p>42:17 Existing observability tools for LLMs vs needing a new architecture/approach</p><p>45:38 Lightning round</p>]]>
      </content:encoded>
      <itunes:duration>2990</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[508cbf82-2d18-11ef-b32d-4711f6368100]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI7343169125.mp3?updated=1719341743" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Introducing "Training Data"</title>
      <description>Join us as we train our neural nets on the theme of the century: AI. Sequoia Capital partners Sonya Huang and Pat Grady host conversations with leading AI builders and researchers to ask critical questions and develop a deeper understanding of the evolving technologies and their implications for technology, business and society.

The content of this podcast does not constitute investment advice, an offer to provide investment advisory services, or an offer to sell or solicitation of an offer to buy an interest in any investment fund.</description>
      <pubDate>Wed, 05 Jun 2024 23:28:26 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sequoia Capital</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/a62839a4-236c-11ef-b0da-53dfccbfba11/image/4a16fd33b06708003827556209cd42b7.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Join us as we train our neural nets on the theme of the century: AI. Sequoia Capital partners Sonya Huang and Pat Grady host conversations with leading AI builders and researchers to ask critical questions and develop a deeper understanding of the evolving technologies and their implications for technology, business and society.

The content of this podcast does not constitute investment advice, an offer to provide investment advisory services, or an offer to sell or solicitation of an offer to buy an interest in any investment fund.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Join us as we train our neural nets on the theme of the century: AI. Sequoia Capital partners Sonya Huang and Pat Grady host conversations with leading AI builders and researchers to ask critical questions and develop a deeper understanding of the evolving technologies and their implications for technology, business and society.</p><p><br></p><p><em>The content of this podcast does not constitute investment advice, an offer to provide investment advisory services, or an offer to sell or solicitation of an offer to buy an interest in any investment fund.</em></p>]]>
      </content:encoded>
      <itunes:duration>86</itunes:duration>
      <guid isPermaLink="false"><![CDATA[a62839a4-236c-11ef-b0da-53dfccbfba11]]></guid>
      <enclosure url="https://pscrb.fm/rss/p/traffic.megaphone.fm/CPUAI3056114565.mp3?updated=1717613803" length="0" type="audio/mpeg"/>
    </item>
  </channel>
</rss>
