<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <atom:link href="https://feeds.megaphone.fm/UTEAU6270845142" rel="self" type="application/rss+xml"/>
    <title>Beyond The Pilot: Enterprise AI in Action</title>
    <language>en</language>
    <copyright>© 2025 VentureBeat. All rights reserved.</copyright>
    <description>AI gets real here. On “Beyond the Pilot,” top business execs share what actually happens after the AI proof of concept — from infrastructure and org design to wins, failures, and ROI. Not theory, but deep dives into how they scaled AI that works.</description>
    <image>
      <url>https://megaphone.imgix.net/podcasts/2eacf1e6-89b2-11f0-8a6f-13263c2b3be9/image/303aa3d4fcd77adce1002a0f6735a3fb.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress</url>
      <title>Beyond The Pilot: Enterprise AI in Action</title>
    </image>
    <itunes:explicit>no</itunes:explicit>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Enterprise AI in Action</itunes:subtitle>
    <itunes:author>VentureBeat</itunes:author>
    <itunes:summary>AI gets real here. On “Beyond the Pilot,” top business execs share what actually happens after the AI proof of concept — from infrastructure and org design to wins, failures, and ROI. Not theory, but deep dives into how they scaled AI that works.</itunes:summary>
    <content:encoded>
      <![CDATA[<p>AI gets real here. On “Beyond the Pilot,” top business execs share what actually happens after the AI proof of concept — from infrastructure and org design to wins, failures, and ROI. Not theory, but deep dives into how they scaled AI that works. </p>]]>
    </content:encoded>
    <itunes:owner>
      <itunes:name>VentureBeat</itunes:name>
      <itunes:email>podcasts@venturebeat.com</itunes:email>
    </itunes:owner>
    <itunes:image href="https://megaphone.imgix.net/podcasts/2eacf1e6-89b2-11f0-8a6f-13263c2b3be9/image/303aa3d4fcd77adce1002a0f6735a3fb.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
    <itunes:category text="News">
      <itunes:category text="Tech News"/>
      <itunes:category text="Business News"/>
    </itunes:category>
    <itunes:category text="Technology"/>
    <item>
      <title>The Protocol Stack AI Is Missing</title>
      <description>Cisco's OutShift deployed a multi-agent network configuration system that raised error detection from 10–15% to 100% and cut full change validation from 2–3 weeks to 6–7 minutes. The reason it worked — and why most enterprise multi-agent deployments still fail — comes down to a single gap nobody is talking about: agents can connect, but they cannot think together.

Vijoy Pandey, SVP and General Manager of OutShift by Cisco, joins Matt and Sam to explain why A2A, MCP, and existing agent protocols solve connectivity but leave out an entire layer: shared cognition. OutShift's research identifies this as a missing "Layer 9" — a semantic and cognitive communication stack above today's syntactic protocols — and they're already building it.

The conversation covers the four pillars of enterprise-grade multi-agent infrastructure (discovery, identity/access, communication, observability), why standard IAM models break when agents enter the picture, and how OutShift extended OpenTelemetry with Microsoft to cover multi-agent evaluation. Vijoy introduces three new cognition-state protocols — SSTP (Semantic State Transfer), LSTP (Latent Space Transfer), and CSTP (Compressed State Transfer) — and explains the staged rollout path for each, including a published MIT collaboration called the Ripple Effect Protocol.

The healthcare scheduling case study is particularly instructive: three independent third-party agents — insurance, diagnostics, scheduling — each with competing optimization functions and siloed context, and zero shared intent. That's the real multi-vendor, multi-org enterprise problem. Vijoy explains what an orchestrator can't fix, and what a cognitive fabric layer would.

🎙️ GUEST: Vijoy Pandey | SVP &amp; General Manager, OutShift by Cisco

🎙️ HOSTS: Matt Marshall | VentureBeat, Sam Witteveen | VentureBeat

---

**CHAPTERS**

00:00 Intro &amp; Cold Open: Agents Connect But Can't Think Together

00:03 Welcome &amp; Guest Introduction: Vijoy Pandey, OutShift by Cisco

00:04 Do Agents Work Outside Coding &amp; Customer Support? Challenging Amjad Masad's Diagnosis

00:05 What's Wrong With A2A and MCP? The Four Pillars of AGNTCY

00:08 Identity &amp; Access Management for Agents: Why IAM Breaks and What TBAC Fixes

00:12 The Network Digital Twin: How OutShift Achieved 100% Error Detection in Production

00:13 From 2–3 Weeks to 6–7 Minutes: Real Results From Deployed Multi-Agent Networking

00:15 Agents Can Connect But Can't Think Together: The Core Thesis

00:20 The Cognitive Revolution Analogy: Shared Intent, Shared Context, Collective Innovation

00:25 The Healthcare Scheduling Case Study: Three Competing Agents, Zero Shared Intent

00:31 Why Orchestrators Fail in Multi-Vendor, Multi-Org Environments

00:36 Introducing Layer 9: SSTP, LSTP, and CSTP — The Cognition-State Protocol Stack

00:41 What OutShift Is Building Now: Protocols, Fabric, and Cognition Engines

00:44 MIT Collaboration: The Ripple Effect Protocol and Phase One Rollout

00:46 Cisco's 40-Year Networking Playbook Applied to the Internet of Cognition

00:49 Closing: Where to Find the Research, AGNTCY, and OpenClaw Integration

---

Subscribe to VentureBeat: https://www.youtube.com/@VentureBeat

Apple Podcasts: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239

Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4

Website: https://venturebeat.com

LinkedIn: https://www.linkedin.com/company/venturebeat

Newsletter: https://venturebeat.com/newsletters

#EnterpriseAI #AIAgents #MultiAgentSystems #AIInfrastructure #LLM

—

“Scaling Out Superintelligence” by Vijoy Pandey, January 2026. The technical whitepaper detailing the Internet of Cognition architecture, three-layer stack, and cognition-state protocols.

Internet of Cognition Interactive Demo: A clickable walkthrough showing per-agent activity, intent, context, and collective reasoning across a multi-agent SRE system.

“A Layered Protocol Architecture for the Internet of Agents” by Fleming, Muscariello, Pandey, Kompella. The OSI Layer 8/9 extension.

AGNTCY: Open-source multi-agent infrastructure under Linux Foundation governance. Covers discovery, identity, communication, observability. Formative members: Cisco, Dell Technologies, Google Cloud, Oracle, Red Hat.

Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Wed, 15 Apr 2026 18:48:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:season>1</itunes:season>
      <itunes:episode>11</itunes:episode>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4c2c91fc-3837-11f1-891e-bfec444e15fa/image/303aa3d4fcd77adce1002a0f6735a3fb.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Cisco's OutShift deployed a multi-agent network configuration system that raised error detection from 10–15% to 100% and cut full change validation from 2–3 weeks to 6–7 minutes. The reason it worked — and why most enterprise multi-agent deployments still fail — comes down to a single gap nobody is talking about: agents can connect, but they cannot think together.

Vijoy Pandey, SVP and General Manager of OutShift by Cisco, joins Matt and Sam to explain why A2A, MCP, and existing agent protocols solve connectivity but leave out an entire layer: shared cognition. OutShift's research identifies this as a missing "Layer 9" — a semantic and cognitive communication stack above today's syntactic protocols — and they're already building it.

The conversation covers the four pillars of enterprise-grade multi-agent infrastructure (discovery, identity/access, communication, observability), why standard IAM models break when agents enter the picture, and how OutShift extended OpenTelemetry with Microsoft to cover multi-agent evaluation. Vijoy introduces three new cognition-state protocols — SSTP (Semantic State Transfer), LSTP (Latent Space Transfer), and CSTP (Compressed State Transfer) — and explains the staged rollout path for each, including a published MIT collaboration called the Ripple Effect Protocol.

The healthcare scheduling case study is particularly instructive: three independent third-party agents — insurance, diagnostics, scheduling — each with competing optimization functions and siloed context, and zero shared intent. That's the real multi-vendor, multi-org enterprise problem. Vijoy explains what an orchestrator can't fix, and what a cognitive fabric layer would.

🎙️ GUEST: Vijoy Pandey | SVP &amp; General Manager, OutShift by Cisco

🎙️ HOSTS: Matt Marshall | VentureBeat, Sam Witteveen | VentureBeat

---

**CHAPTERS**

00:00 Intro &amp; Cold Open: Agents Connect But Can't Think Together

00:03 Welcome &amp; Guest Introduction: Vijoy Pandey, OutShift by Cisco

00:04 Do Agents Work Outside Coding &amp; Customer Support? Challenging Amjad Masad's Diagnosis

00:05 What's Wrong With A2A and MCP? The Four Pillars of AGNTCY

00:08 Identity &amp; Access Management for Agents: Why IAM Breaks and What TBAC Fixes

00:12 The Network Digital Twin: How OutShift Achieved 100% Error Detection in Production

00:13 From 2–3 Weeks to 6–7 Minutes: Real Results From Deployed Multi-Agent Networking

00:15 Agents Can Connect But Can't Think Together: The Core Thesis

00:20 The Cognitive Revolution Analogy: Shared Intent, Shared Context, Collective Innovation

00:25 The Healthcare Scheduling Case Study: Three Competing Agents, Zero Shared Intent

00:31 Why Orchestrators Fail in Multi-Vendor, Multi-Org Environments

00:36 Introducing Layer 9: SSTP, LSTP, and CSTP — The Cognition-State Protocol Stack

00:41 What OutShift Is Building Now: Protocols, Fabric, and Cognition Engines

00:44 MIT Collaboration: The Ripple Effect Protocol and Phase One Rollout

00:46 Cisco's 40-Year Networking Playbook Applied to the Internet of Cognition

00:49 Closing: Where to Find the Research, AGNTCY, and OpenClaw Integration

---

Subscribe to VentureBeat: https://www.youtube.com/@VentureBeat

Apple Podcasts: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239

Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4

Website: https://venturebeat.com

LinkedIn: https://www.linkedin.com/company/venturebeat

Newsletter: https://venturebeat.com/newsletters

#EnterpriseAI #AIAgents #MultiAgentSystems #AIInfrastructure #LLM

—

“Scaling Out Superintelligence” by Vijoy Pandey, January 2026. The technical whitepaper detailing the Internet of Cognition architecture, three-layer stack, and cognition-state protocols.

Internet of Cognition Interactive Demo: A clickable walkthrough showing per-agent activity, intent, context, and collective reasoning across a multi-agent SRE system.

“A Layered Protocol Architecture for the Internet of Agents” by Fleming, Muscariello, Pandey, Kompella. The OSI Layer 8/9 extension.

AGNTCY: Open-source multi-agent infrastructure under Linux Foundation governance. Covers discovery, identity, communication, observability. Formative members: Cisco, Dell Technologies, Google Cloud, Oracle, Red Hat.

Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Cisco's OutShift deployed a multi-agent network configuration system that raised error detection from 10–15% to 100% and cut full change validation from 2–3 weeks to 6–7 minutes. The reason it worked — and why most enterprise multi-agent deployments still fail — comes down to a single gap nobody is talking about: agents can connect, but they cannot think together.</p>
<p>Vijoy Pandey, SVP and General Manager of OutShift by Cisco, joins Matt and Sam to explain why A2A, MCP, and existing agent protocols solve connectivity but leave out an entire layer: shared cognition. OutShift's research identifies this as a missing "Layer 9" — a semantic and cognitive communication stack above today's syntactic protocols — and they're already building it.</p>
<p>The conversation covers the four pillars of enterprise-grade multi-agent infrastructure (discovery, identity/access, communication, observability), why standard IAM models break when agents enter the picture, and how OutShift extended OpenTelemetry with Microsoft to cover multi-agent evaluation. Vijoy introduces three new cognition-state protocols — SSTP (Semantic State Transfer), LSTP (Latent Space Transfer), and CSTP (Compressed State Transfer) — and explains the staged rollout path for each, including a published MIT collaboration called the Ripple Effect Protocol.</p>
<p>The healthcare scheduling case study is particularly instructive: three independent third-party agents — insurance, diagnostics, scheduling — each with competing optimization functions and siloed context, and zero shared intent. That's the real multi-vendor, multi-org enterprise problem. Vijoy explains what an orchestrator can't fix, and what a cognitive fabric layer would.</p>
<p>🎙️ GUEST: Vijoy Pandey | SVP &amp; General Manager, OutShift by Cisco</p>
<p>🎙️ HOSTS: Matt Marshall | VentureBeat, Sam Witteveen | VentureBeat</p>
<p>---</p>
<p>**CHAPTERS**</p>
<p>00:00 Intro &amp; Cold Open: Agents Connect But Can't Think Together</p>
<p>00:03 Welcome &amp; Guest Introduction: Vijoy Pandey, OutShift by Cisco</p>
<p>00:04 Do Agents Work Outside Coding &amp; Customer Support? Challenging Amjad Masad's Diagnosis</p>
<p>00:05 What's Wrong With A2A and MCP? The Four Pillars of AGNTCY</p>
<p>00:08 Identity &amp; Access Management for Agents: Why IAM Breaks and What TBAC Fixes</p>
<p>00:12 The Network Digital Twin: How OutShift Achieved 100% Error Detection in Production</p>
<p>00:13 From 2–3 Weeks to 6–7 Minutes: Real Results From Deployed Multi-Agent Networking</p>
<p>00:15 Agents Can Connect But Can't Think Together: The Core Thesis</p>
<p>00:20 The Cognitive Revolution Analogy: Shared Intent, Shared Context, Collective Innovation</p>
<p>00:25 The Healthcare Scheduling Case Study: Three Competing Agents, Zero Shared Intent</p>
<p>00:31 Why Orchestrators Fail in Multi-Vendor, Multi-Org Environments</p>
<p>00:36 Introducing Layer 9: SSTP, LSTP, and CSTP — The Cognition-State Protocol Stack</p>
<p>00:41 What OutShift Is Building Now: Protocols, Fabric, and Cognition Engines</p>
<p>00:44 MIT Collaboration: The Ripple Effect Protocol and Phase One Rollout</p>
<p>00:46 Cisco's 40-Year Networking Playbook Applied to the Internet of Cognition</p>
<p>00:49 Closing: Where to Find the Research, AGNTCY, and OpenClaw Integration</p>
<p>---</p>
<p>Subscribe to VentureBeat: https://www.youtube.com/@VentureBeat</p>
<p>Apple Podcasts: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239</p>
<p>Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4</p>
<p>Website: https://venturebeat.com</p>
<p>LinkedIn: https://www.linkedin.com/company/venturebeat</p>
<p>Newsletter: https://venturebeat.com/newsletters</p>
<p>#EnterpriseAI #AIAgents #MultiAgentSystems #AIInfrastructure #LLM</p>
<p>—</p>
<p><a href="https://outshift.cisco.com/internet-of-cognition/whitepaper?utm_campaign=fy26q3_ioc_ww_paid-media_ioc-vbep1-wp_podcast&amp;utm_channel=podcast&amp;utm_source=podcast"><strong>“Scaling Out Superintelligence”</strong></a> by Vijoy Pandey, January 2026. The technical whitepaper detailing the Internet of Cognition architecture, three-layer stack, and cognition-state protocols.</p>
<p><br></p>
<p><a href="https://outshift.cisco.com/internet-of-cognition/explore?utm_campaign=fy26q3_ioc_ww_paid-media_ioc-vbep1-wpdemo_podcast&amp;utm_channel=podcast&amp;utm_source=podcast"><strong>Internet of Cognition Interactive Demo</strong></a>: A clickable walkthrough showing per-agent activity, intent, context, and collective reasoning across a multi-agent SRE system.</p>
<p><br></p>
<p><a href="https://arxiv.org/abs/2511.19699"><strong>“A Layered Protocol Architecture for the Internet of Agents”</strong></a> by Fleming, Muscariello, Pandey, Kompella. The OSI Layer 8/9 extension.</p>
<p><br></p>
<p><a href="https://agntcy.org/"><strong>AGNTCY</strong></a>: Open-source multi-agent infrastructure under Linux Foundation governance. Covers discovery, identity, communication, observability.</p>
<p>Formative members: Cisco, Dell Technologies, Google Cloud, Oracle, Red Hat.</p>
<p><br></p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>3059</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[10bc082a-39b1-11f1-a4d7-83a1e8ef997b]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU1290725613.mp3?updated=1776208199" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>100M Agents: Scaling the New Execution Stack with Intuit</title>
      <description>A QuickBooks customer discovered significant fraud by asking their AI assistant follow-up questions about transaction amounts that didn't add up. This isn't a demo — it's one of 3 million customers now using Intuit's AI agents in production, with 80.5% returning to use them again.

Marianna Tessel, EVP and GM of QuickBooks (formerly CTO of Intuit), walks through the architecture decisions behind one of the first enterprise AI deployments at true scale. Intuit's "done-for-you" agents now automate book closing, reconciliation, transaction categorization, and payroll — but the breakthrough came when they realized chatbots alone weren't enough. Businesses wanted human experts integrated directly into AI workflows, creating what Intuit calls the "AI + HI" model (artificial intelligence + human intelligence). The results: invoices paid 5 days faster, 90% more paid in full, 30% reduction in manual work, and 62% of users reporting bookkeeping is easier.

Tessel reveals the technical evolution: moving from monolithic agents to a dynamic orchestration layer that routes queries across multiple LLMs (including Intuit's proprietary FinLM built on open-source), 24,000 bank connections, and 600,000 customer attributes. The system now handles proactive anomaly detection, benchmarking against similar businesses, and even nascent vibe coding — all without requiring users to understand they're essentially programming workflows through natural language. She also addresses the "SaaS apocalypse" narrative head-on, explaining why QuickBooks saw 18% growth last quarter while competitors faced market pressure: durable data advantages and customer trust in financial accuracy matter more than ever when AI enters the mix.

For enterprise builders navigating agent architecture, data grounding, and human-in-the-loop design, this is a rare look inside a working system serving millions.

🎙️ GUEST: Marianna Tessel | EVP &amp; GM, QuickBooks (Intuit)

🎙️ HOSTS: Matt Marshall | VentureBeat, Sam Witteveen | VentureBeat

00:00 Intro — Customer discovers fraud using QuickBooks AI

03:26 Intuit Intelligence: Agents, BI, and human expertise integration

05:20 First-time AI users and going beyond chatbots

08:02 How Intuit decides which workflows to automate

10:16 Sponsor: Outshift by Cisco

10:38 Human-in-the-loop: When to insert experts vs. full automation

13:00 The AI + HI model: Why customers want human verification

15:24 Human expertise as confidence layer, not just AI check

16:14 Proprietary data advantage: 24K bank connections, 600K attributes

18:39 Benchmarking: "Businesses like me" — using aggregate data for competitive insights

19:52 First-party vs. third-party data strategy

21:38 Addressing the "SaaS apocalypse" narrative — why Intuit grew 18% last quarter

24:39 Proactive AI: Anomaly detection for marketing expense spikes

25:20 Builder perspective: Leaning on LLM orchestration, not use-case-by-use-case builds

27:32 Architecture evolution: From monolithic agents to dynamic tools and skills

29:10 Composite UX: Chat side-by-side with traditional workflows

30:35 Multi-model strategy: Genos platform, FinLM, and model routing

31:16 Vibe coding and actions: Letting users automate without realizing they're coding

32:47 Personalization wave: Memory, persistence, and user-defined workflows

35:08 Docker background and primitives that survive disruption

36:00 Open Claw and agent automation: Real revolution or risky experimentation?

#EnterpriseAI #AIAgents #QuickBooks #Intuit #LLMOrchestration #AgenticAI

Presented by Outshift by Cisco. Outshift is Cisco’s emerging-tech incubation engine and a driver of agentic AI, quantum, and next-gen infrastructure. Learn more at outshift.cisco.com.

About VentureBeat: VentureBeat equips enterprise technology leaders with the clearest, expert guidance on AI – and on the data and security foundations that turn it into working reality.

🔗 CONNECT WITH US

Subscribe to our Newsletters for technical breakdowns: https://venturebeat.com/newsletters

Visit VentureBeat: https://venturebeat.com

Subscribe to VentureBeat on YouTube: https://www.youtube.com/@VentureBeat

Subscribe to the full podcast here:

Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239

Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4

YouTube: https://www.youtube.com/VentureBeat


Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Wed, 01 Apr 2026 10:28:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:season>1</itunes:season>
      <itunes:episode>10</itunes:episode>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/c45b3668-2a47-11f1-948e-53a59cd954de/image/303aa3d4fcd77adce1002a0f6735a3fb.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>A QuickBooks customer discovered significant fraud by asking their AI assistant follow-up questions about transaction amounts that didn't add up. This isn't a demo — it's one of 3 million customers now using Intuit's AI agents in production, with 80.5% returning to use them again.

Marianna Tessel, EVP and GM of QuickBooks (formerly CTO of Intuit), walks through the architecture decisions behind one of the first enterprise AI deployments at true scale. Intuit's "done-for-you" agents now automate book closing, reconciliation, transaction categorization, and payroll — but the breakthrough came when they realized chatbots alone weren't enough. Businesses wanted human experts integrated directly into AI workflows, creating what Intuit calls the "AI + HI" model (artificial intelligence + human intelligence). The results: invoices paid 5 days faster, 90% more paid in full, 30% reduction in manual work, and 62% of users reporting bookkeeping is easier.

Tessel reveals the technical evolution: moving from monolithic agents to a dynamic orchestration layer that routes queries across multiple LLMs (including Intuit's proprietary FinLM built on open-source), 24,000 bank connections, and 600,000 customer attributes. The system now handles proactive anomaly detection, benchmarking against similar businesses, and even nascent vibe coding — all without requiring users to understand they're essentially programming workflows through natural language. She also addresses the "SaaS apocalypse" narrative head-on, explaining why QuickBooks saw 18% growth last quarter while competitors faced market pressure: durable data advantages and customer trust in financial accuracy matter more than ever when AI enters the mix.

For enterprise builders navigating agent architecture, data grounding, and human-in-the-loop design, this is a rare look inside a working system serving millions.

🎙️ GUEST: Marianna Tessel | EVP &amp; GM, QuickBooks (Intuit)

🎙️ HOSTS: Matt Marshall | VentureBeat, Sam Witteveen | VentureBeat

00:00 Intro — Customer discovers fraud using QuickBooks AI

03:26 Intuit Intelligence: Agents, BI, and human expertise integration

05:20 First-time AI users and going beyond chatbots

08:02 How Intuit decides which workflows to automate

10:16 Sponsor: Outshift by Cisco

10:38 Human-in-the-loop: When to insert experts vs. full automation

13:00 The AI + HI model: Why customers want human verification

15:24 Human expertise as confidence layer, not just AI check

16:14 Proprietary data advantage: 24K bank connections, 600K attributes

18:39 Benchmarking: "Businesses like me" — using aggregate data for competitive insights

19:52 First-party vs. third-party data strategy

21:38 Addressing the "SaaS apocalypse" narrative — why Intuit grew 18% last quarter

24:39 Proactive AI: Anomaly detection for marketing expense spikes

25:20 Builder perspective: Leaning on LLM orchestration, not use-case-by-use-case builds

27:32 Architecture evolution: From monolithic agents to dynamic tools and skills

29:10 Composite UX: Chat side-by-side with traditional workflows

30:35 Multi-model strategy: Genos platform, FinLM, and model routing

31:16 Vibe coding and actions: Letting users automate without realizing they're coding

32:47 Personalization wave: Memory, persistence, and user-defined workflows

35:08 Docker background and primitives that survive disruption

36:00 Open Claw and agent automation: Real revolution or risky experimentation?

#EnterpriseAI #AIAgents #QuickBooks #Intuit #LLMOrchestration #AgenticAI

Presented by Outshift by Cisco. Outshift is Cisco’s emerging-tech incubation engine and a driver of agentic AI, quantum, and next-gen infrastructure. Learn more at outshift.cisco.com.

About VentureBeat: VentureBeat equips enterprise technology leaders with the clearest, expert guidance on AI – and on the data and security foundations that turn it into working reality.

🔗 CONNECT WITH US

Subscribe to our Newsletters for technical breakdowns: https://venturebeat.com/newsletters

Visit VentureBeat: https://venturebeat.com

Subscribe to VentureBeat on YouTube: https://www.youtube.com/@VentureBeat

Subscribe to the full podcast here:

Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239

Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4

YouTube: https://www.youtube.com/VentureBeat


Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>A QuickBooks customer discovered significant fraud by asking their AI assistant follow-up questions about transaction amounts that didn't add up. This isn't a demo — it's one of 3 million customers now using Intuit's AI agents in production, with 80.5% returning to use them again.</p>
<p>Marianna Tessel, EVP and GM of QuickBooks (formerly CTO of Intuit), walks through the architecture decisions behind one of the first enterprise AI deployments at true scale. Intuit's "done-for-you" agents now automate book closing, reconciliation, transaction categorization, and payroll — but the breakthrough came when they realized chatbots alone weren't enough. Businesses wanted human experts integrated directly into AI workflows, creating what Intuit calls the "AI + HI" model (artificial intelligence + human intelligence). The results: invoices paid 5 days faster, 90% more paid in full, 30% reduction in manual work, and 62% of users reporting bookkeeping is easier.</p>
<p>Tessel reveals the technical evolution: moving from monolithic agents to a dynamic orchestration layer that routes queries across multiple LLMs (including Intuit's proprietary FinLM built on open-source), 24,000 bank connections, and 600,000 customer attributes. The system now handles proactive anomaly detection, benchmarking against similar businesses, and even nascent vibe coding — all without requiring users to understand they're essentially programming workflows through natural language. She also addresses the "SaaS apocalypse" narrative head-on, explaining why QuickBooks saw 18% growth last quarter while competitors faced market pressure: durable data advantages and customer trust in financial accuracy matter more than ever when AI enters the mix.</p>
<p>For enterprise builders navigating agent architecture, data grounding, and human-in-the-loop design, this is a rare look inside a working system serving millions.</p>
<p>🎙️ GUEST: Marianna Tessel | EVP &amp; GM, QuickBooks (Intuit)</p>
<p>🎙️ HOSTS: Matt Marshall | VentureBeat, Sam Witteveen | VentureBeat</p>
<p>00:00 Intro — Customer discovers fraud using QuickBooks AI</p>
<p>03:26 Intuit Intelligence: Agents, BI, and human expertise integration</p>
<p>05:20 First-time AI users and going beyond chatbots</p>
<p>08:02 How Intuit decides which workflows to automate</p>
<p>10:16 Sponsor: Outshift by Cisco</p>
<p>10:38 Human-in-the-loop: When to insert experts vs. full automation</p>
<p>13:00 The AI + HI model: Why customers want human verification</p>
<p>15:24 Human expertise as confidence layer, not just AI check</p>
<p>16:14 Proprietary data advantage: 24K bank connections, 600K attributes</p>
<p>18:39 Benchmarking: "Businesses like me" — using aggregate data for competitive insights</p>
<p>19:52 First-party vs. third-party data strategy</p>
<p>21:38 Addressing the "SaaS apocalypse" narrative — why Intuit grew 18% last quarter</p>
<p>24:39 Proactive AI: Anomaly detection for marketing expense spikes</p>
<p>25:20 Builder perspective: Leaning on LLM orchestration, not use-case-by-use-case builds</p>
<p>27:32 Architecture evolution: From monolithic agents to dynamic tools and skills</p>
<p>29:10 Composite UX: Chat side-by-side with traditional workflows</p>
<p>30:35 Multi-model strategy: Genos platform, FinLM, and model routing</p>
<p>31:16 Vibe coding and actions: Letting users automate without realizing they're coding</p>
<p>32:47 Personalization wave: Memory, persistence, and user-defined workflows</p>
<p>35:08 Docker background and primitives that survive disruption</p>
<p>36:00 Open Claw and agent automation: Real revolution or risky experimentation?</p>
<p>#EnterpriseAI #AIAgents #QuickBooks #Intuit #LLMOrchestration #AgenticAI</p>
<p><strong>Presented by Outshift by Cisco.</strong> Outshift is Cisco’s emerging-tech incubation engine and a driver of agentic AI, quantum, and next-gen infrastructure. Learn more at <a href="https://outshift.cisco.com"><u>outshift.cisco.com</u></a>.</p>
<p><strong>About VentureBeat:</strong> VentureBeat equips enterprise technology leaders with the clearest, expert guidance on AI – and on the data and security foundations that turn it into working reality.</p>
<p><strong>🔗 CONNECT WITH US</strong></p>
<p>Subscribe to our Newsletters for technical breakdowns: https://venturebeat.com/newsletters</p>
<p>Visit VentureBeat: https://venturebeat.com</p>
<p>Subscribe to VentureBeat on YouTube: https://www.youtube.com/@VentureBeat</p>
<p>Subscribe to the full podcast here:</p>
<p>Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239</p>
<p>Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4</p>
<p>YouTube: https://www.youtube.com/VentureBeat</p>
<p><br></p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>2304</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c4a055a4-2a47-11f1-948e-83b7d7f81c2d]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU5556156969.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>The AI War For Your Personal Context</title>
      <description>Major SaaS companies including Salesforce, Intuit, and ServiceNow saw stock drops of 45-50% as enterprises shift from bloated software suites to personalized AI agents that users can control directly. Microsoft just capitulated this week, opening Copilot to allow Claude Cowork-style functionality — a clear signal that the "build vs. buy" calculus for enterprise software has fundamentally changed.

Matt Marshall and Sam Witteveen break down why personalization is no longer optional for enterprise products. Companies like Zoom now offer personalized workflows that access your conversation history and profile context. Infrastructure decisions are moving fast: token budgets must account for per-user context, identity management has become the biggest technical challenge for agent deployments, and "skills" (not just MCP) are emerging as the key abstraction layer.

Zoom's Li Juan explains how their AI Companion moved beyond generic templates to user-controlled personalization: tracking opinion divergence in meetings, generating follow-up emails with specific context controls, and giving users explicit prompt examples instead of "good luck with your prompt." This is the new standard. If your product can't reason over which tools to use, which skills to apply, and which context to pull — all personalized to the individual user — you're competing with something that can be built in 10 days (Cowork's timeline).

The agents-are-taking-over reality is here: multi-user agent architectures require thinking about context contamination, security postures for computer-use capabilities, and whether you're building internal agents or buying SaaS that will adapt. Sam's take: "AGI is agentic, and we're well along that continuum now."

🎙️ HOSTS: Matt Marshall | CEO, VentureBeat &amp; Sam Witteveen | VentureBeat

📺 CHAPTERS:

00:00 Intro — The SaaS Apocalypse

01:00 The Personalization Imperative

02:00 Microsoft Copilot Capitulates to Cowork

03:00 From Template Selection to Skill Generation

04:00 The Land Grab for User Context

05:00 Zoom's Li Juan on Personalized Meeting Intelligence

06:00 Why Context = Magic in Enterprise AI

07:00 Product-Market Fit in the Agent Era

08:00 Metrics That Matter: JP Morgan's 30,000 Agents

09:00 Build vs. Buy: The New Calculus

10:00 Why Slack Might Win on Agent Identity Management

11:00 Zoom's AI Companion: Control Over Randomness

13:00 Li Juan on Purposeful Prompts and Reference Control

15:00 Multi-Agent vs. Multi-User: The Critical Distinction

16:00 LinkedIn's GPU Optimization Strategy

17:00 AGI Is Agentic

Presented by Outshift by Cisco. Outshift is Cisco’s emerging tech incubation engine and driver of Agentic AI, quantum, and next-gen infrastructure. Learn more at outshift.cisco.com.

About VentureBeat: VentureBeat equips enterprise technology leaders with the clearest, expert guidance on AI – and on the data and security foundations that turn it into working reality.

🔗 CONNECT WITH US

Subscribe to our Newsletters for technical breakdowns: https://venturebeat.com/newsletters

Visit VentureBeat: https://venturebeat.com


Subscribe to the full podcast here:

Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239

Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4

YouTube: https://www.youtube.com/VentureBeat


Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Wed, 18 Mar 2026 10:31:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:season>1</itunes:season>
      <itunes:episode>9</itunes:episode>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/96c75746-228a-11f1-aebd-1b25c26954a5/image/303aa3d4fcd77adce1002a0f6735a3fb.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Major SaaS companies including Salesforce, Intuit, and ServiceNow saw stock drops of 45-50% as enterprises shift from bloated software suites to personalized AI agents that users can control directly. Microsoft just capitulated this week, opening Copilot to allow Claude Cowork-style functionality — a clear signal that the "build vs. buy" calculus for enterprise software has fundamentally changed.

Matt Marshall and Sam Witteveen break down why personalization is no longer optional for enterprise products. Companies like Zoom now offer personalized workflows that access your conversation history and profile context. Infrastructure decisions are moving fast: token budgets must account for per-user context, identity management has become the biggest technical challenge for agent deployments, and "skills" (not just MCP) are emerging as the key abstraction layer.

Zoom's Li Juan explains how their AI Companion moved beyond generic templates to user-controlled personalization: tracking opinion divergence in meetings, generating follow-up emails with specific context controls, and giving users explicit prompt examples instead of "good luck with your prompt." This is the new standard. If your product can't reason over which tools to use, which skills to apply, and which context to pull — all personalized to the individual user — you're competing with something that can be built in 10 days (Cowork's timeline).

The agents-are-taking-over reality is here: multi-user agent architectures require thinking about context contamination, security postures for computer-use capabilities, and whether you're building internal agents or buying SaaS that will adapt. Sam's take: "AGI is agentic, and we're well along that continuum now."

🎙️ HOSTS: Matt Marshall | CEO, VentureBeat &amp; Sam Witteveen | VentureBeat

📺 CHAPTERS:

00:00 Intro — The SaaS Apocalypse

01:00 The Personalization Imperative

02:00 Microsoft Copilot Capitulates to Cowork

03:00 From Template Selection to Skill Generation

04:00 The Land Grab for User Context

05:00 Zoom's Li Juan on Personalized Meeting Intelligence

06:00 Why Context = Magic in Enterprise AI

07:00 Product-Market Fit in the Agent Era

08:00 Metrics That Matter: JP Morgan's 30,000 Agents

09:00 Build vs. Buy: The New Calculus

10:00 Why Slack Might Win on Agent Identity Management

11:00 Zoom's AI Companion: Control Over Randomness

13:00 Li Juan on Purposeful Prompts and Reference Control

15:00 Multi-Agent vs. Multi-User: The Critical Distinction

16:00 LinkedIn's GPU Optimization Strategy

17:00 AGI Is Agentic

Presented by Outshift by Cisco. Outshift is Cisco’s emerging tech incubation engine and driver of Agentic AI, quantum, and next-gen infrastructure. Learn more at outshift.cisco.com.

About VentureBeat: VentureBeat equips enterprise technology leaders with the clearest, expert guidance on AI – and on the data and security foundations that turn it into working reality.

🔗 CONNECT WITH US

Subscribe to our Newsletters for technical breakdowns: https://venturebeat.com/newsletters

Visit VentureBeat: https://venturebeat.com


Subscribe to the full podcast here:

Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239

Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4

YouTube: https://www.youtube.com/VentureBeat


Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Major SaaS companies including Salesforce, Intuit, and ServiceNow saw stock drops of 45-50% as enterprises shift from bloated software suites to personalized AI agents that users can control directly. Microsoft just capitulated this week, opening Copilot to allow Claude Cowork-style functionality — a clear signal that the "build vs. buy" calculus for enterprise software has fundamentally changed.</p>
<p>Matt Marshall and Sam Witteveen break down why personalization is no longer optional for enterprise products. Companies like Zoom now offer personalized workflows that access your conversation history and profile context. Infrastructure decisions are moving fast: token budgets must account for per-user context, identity management has become the biggest technical challenge for agent deployments, and "skills" (not just MCP) are emerging as the key abstraction layer.</p>
<p>Zoom's Li Juan explains how their AI Companion moved beyond generic templates to user-controlled personalization: tracking opinion divergence in meetings, generating follow-up emails with specific context controls, and giving users explicit prompt examples instead of "good luck with your prompt." This is the new standard. If your product can't reason over which tools to use, which skills to apply, and which context to pull — all personalized to the individual user — you're competing with something that can be built in 10 days (Cowork's timeline).</p>
<p>The agents-are-taking-over reality is here: multi-user agent architectures require thinking about context contamination, security postures for computer-use capabilities, and whether you're building internal agents or buying SaaS that will adapt. Sam's take: "AGI is agentic, and we're well along that continuum now."</p>
<p>🎙️ HOSTS: Matt Marshall | CEO, VentureBeat &amp; Sam Witteveen | VentureBeat</p>
<p>📺 CHAPTERS:</p>
<p>00:00 Intro — The SaaS Apocalypse</p>
<p>01:00 The Personalization Imperative</p>
<p>02:00 Microsoft Copilot Capitulates to Cowork</p>
<p>03:00 From Template Selection to Skill Generation</p>
<p>04:00 The Land Grab for User Context</p>
<p>05:00 Zoom's Li Juan on Personalized Meeting Intelligence</p>
<p>06:00 Why Context = Magic in Enterprise AI</p>
<p>07:00 Product-Market Fit in the Agent Era</p>
<p>08:00 Metrics That Matter: JP Morgan's 30,000 Agents</p>
<p>09:00 Build vs. Buy: The New Calculus</p>
<p>10:00 Why Slack Might Win on Agent Identity Management</p>
<p>11:00 Zoom's AI Companion: Control Over Randomness</p>
<p>13:00 Li Juan on Purposeful Prompts and Reference Control</p>
<p>15:00 Multi-Agent vs. Multi-User: The Critical Distinction</p>
<p>16:00 LinkedIn's GPU Optimization Strategy</p>
<p>17:00 AGI Is Agentic</p>
<p><strong>Presented by Outshift by Cisco.</strong> Outshift is Cisco’s emerging tech incubation engine and driver of Agentic AI, quantum, and next-gen infrastructure. Learn more at <a href="https://outshift.cisco.com"><u>outshift.cisco.com</u></a>.</p>
<p><strong>About VentureBeat:</strong> VentureBeat equips enterprise technology leaders with the clearest, expert guidance on AI – and on the data and security foundations that turn it into working reality.</p>
<p><strong>🔗 CONNECT WITH US</strong></p>
<p>Subscribe to our Newsletters for technical breakdowns: https://venturebeat.com/newsletters</p>
<p>Visit VentureBeat: https://venturebeat.com</p>
<p>Subscribe to the full podcast here:</p>
<p>Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239</p>
<p>Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4</p>
<p>YouTube: https://www.youtube.com/VentureBeat</p>
<p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>1247</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[96df8d02-228a-11f1-aebd-9b3bc50bc648]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU6615521414.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>LangChain: What OpenClaw Got Right (And Why Enterprises Can't Have It)</title>
      <description>LangChain told employees they cannot install OpenClaw on company laptops due to "massive security risk" — yet this unhinged approach is exactly what makes it work. Harrison Chase unpacks why OpenClaw succeeds where AutoGPT failed, and why context engineering, not just smarter models, separates demo agents from production-ready systems.

The shift is architectural: Modern agent harnesses like Claude Code now dump 40,000-token API responses to file systems instead of cramming them into message history. LangChain's Deep Agents framework emerged from reverse-engineering Claude Code, Codex, and Deep Research — discovering they all use planning via to-do lists, subagents for focused work, file systems for context control, and 2000-line system prompts. Harrison explains why coding agents make surprisingly good general-purpose agents, how prompt caching creates accuracy trade-offs, and why "context engineering" — bringing the right information in the right format to the LLM at the right time — matters more than framework choice.

For enterprise teams: Harrison breaks down LangGraph (agent runtime with durable execution), LangChain (unopinionated agent framework), and Deep Agents (batteries-included harness). The conversation covers when to use graphs vs. loops, how skills differ from tools and subagents, and why nine months ago marked the inflection point where models could finally run reliably in autonomous loops.

🎙️ GUEST: Harrison Chase | Co-founder &amp; CEO, LangChain

🎙️ HOSTS: Matt Marshall | CEO, VentureBeat | Sam Witteveen | VentureBeat

CHAPTERS:

00:00 Intro — OpenClaw security warning

01:00 LangChain's origin story: From open source library to company

03:00 Early LLM patterns: RAG and SQL agents before ChatGPT

05:00 Why OpenClaw works where AutoGPT failed

08:00 Step change in agent capability: The summer 2024 inflection

11:00 Deep Agents unpacked: Planning, subagents, file systems, prompting

14:00 Skills vs tools vs subagents

16:00 LangGraph, LangChain, and Deep Agents architecture

19:00 Context engineering: What the LLM sees vs what developers see

21:00 File systems for context management vs AutoGPT's approach

LINKS:

Subscribe to VentureBeat: https://www.youtube.com/@VentureBeat

Apple Podcasts: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239

Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4

Website: https://venturebeat.com

LinkedIn: https://www.linkedin.com/company/venturebeat

Newsletter: https://venturebeat.com/newsletters

#EnterpriseAI #AIAgents #LangChain #AgenticAI #LLMInfrastructure


Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Wed, 04 Mar 2026 11:33:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:season>1</itunes:season>
      <itunes:episode>8</itunes:episode>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/c9816a7c-1723-11f1-b90b-c3cd539c1138/image/303aa3d4fcd77adce1002a0f6735a3fb.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>LangChain told employees they cannot install OpenClaw on company laptops due to "massive security risk" — yet this unhinged approach is exactly what makes it work. Harrison Chase unpacks why OpenClaw succeeds where AutoGPT failed, and why context engineering, not just smarter models, separates demo agents from production-ready systems.

The shift is architectural: Modern agent harnesses like Claude Code now dump 40,000-token API responses to file systems instead of cramming them into message history. LangChain's Deep Agents framework emerged from reverse-engineering Claude Code, Codex, and Deep Research — discovering they all use planning via to-do lists, subagents for focused work, file systems for context control, and 2000-line system prompts. Harrison explains why coding agents make surprisingly good general-purpose agents, how prompt caching creates accuracy trade-offs, and why "context engineering" — bringing the right information in the right format to the LLM at the right time — matters more than framework choice.

For enterprise teams: Harrison breaks down LangGraph (agent runtime with durable execution), LangChain (unopinionated agent framework), and Deep Agents (batteries-included harness). The conversation covers when to use graphs vs. loops, how skills differ from tools and subagents, and why nine months ago marked the inflection point where models could finally run reliably in autonomous loops.

🎙️ GUEST: Harrison Chase | Co-founder &amp; CEO, LangChain

🎙️ HOSTS: Matt Marshall | CEO, VentureBeat | Sam Witteveen | VentureBeat

CHAPTERS:

00:00 Intro — OpenClaw security warning

01:00 LangChain's origin story: From open source library to company

03:00 Early LLM patterns: RAG and SQL agents before ChatGPT

05:00 Why OpenClaw works where AutoGPT failed

08:00 Step change in agent capability: The summer 2024 inflection

11:00 Deep Agents unpacked: Planning, subagents, file systems, prompting

14:00 Skills vs tools vs subagents

16:00 LangGraph, LangChain, and Deep Agents architecture

19:00 Context engineering: What the LLM sees vs what developers see

21:00 File systems for context management vs AutoGPT's approach

LINKS:

Subscribe to VentureBeat: https://www.youtube.com/@VentureBeat

Apple Podcasts: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239

Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4

Website: https://venturebeat.com

LinkedIn: https://www.linkedin.com/company/venturebeat

Newsletter: https://venturebeat.com/newsletters

#EnterpriseAI #AIAgents #LangChain #AgenticAI #LLMInfrastructure


Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>LangChain told employees they cannot install OpenClaw on company laptops due to "massive security risk" — yet this unhinged approach is exactly what makes it work. Harrison Chase unpacks why OpenClaw succeeds where AutoGPT failed, and why context engineering, not just smarter models, separates demo agents from production-ready systems.</p>
<p>The shift is architectural: Modern agent harnesses like Claude Code now dump 40,000-token API responses to file systems instead of cramming them into message history. LangChain's Deep Agents framework emerged from reverse-engineering Claude Code, Codex, and Deep Research — discovering they all use planning via to-do lists, subagents for focused work, file systems for context control, and 2000-line system prompts. Harrison explains why coding agents make surprisingly good general-purpose agents, how prompt caching creates accuracy trade-offs, and why "context engineering" — bringing the right information in the right format to the LLM at the right time — matters more than framework choice.</p>
<p>For enterprise teams: Harrison breaks down LangGraph (agent runtime with durable execution), LangChain (unopinionated agent framework), and Deep Agents (batteries-included harness). The conversation covers when to use graphs vs. loops, how skills differ from tools and subagents, and why nine months ago marked the inflection point where models could finally run reliably in autonomous loops.</p>
<p>🎙️ GUEST: Harrison Chase | Co-founder &amp; CEO, LangChain</p>
<p>🎙️ HOSTS: Matt Marshall | CEO, VentureBeat | Sam Witteveen | VentureBeat</p>
<p><strong>CHAPTERS:</strong></p>
<p>00:00 Intro — OpenClaw security warning</p>
<p>01:00 LangChain's origin story: From open source library to company</p>
<p>03:00 Early LLM patterns: RAG and SQL agents before ChatGPT</p>
<p>05:00 Why OpenClaw works where AutoGPT failed</p>
<p>08:00 Step change in agent capability: The summer 2024 inflection</p>
<p>11:00 Deep Agents unpacked: Planning, subagents, file systems, prompting</p>
<p>14:00 Skills vs tools vs subagents</p>
<p>16:00 LangGraph, LangChain, and Deep Agents architecture</p>
<p>19:00 Context engineering: What the LLM sees vs what developers see</p>
<p>21:00 File systems for context management vs AutoGPT's approach</p>
<p><strong>LINKS:</strong></p>
<p>Subscribe to VentureBeat: https://www.youtube.com/@VentureBeat</p>
<p>Apple Podcasts: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239</p>
<p>Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4</p>
<p>Website: https://venturebeat.com</p>
<p>LinkedIn: https://www.linkedin.com/company/venturebeat</p>
<p>Newsletter: https://venturebeat.com/newsletters</p>
<p>#EnterpriseAI #AIAgents #LangChain #AgenticAI #LLMInfrastructure</p>
<p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>3382</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c9abd4b0-1723-11f1-b90b-8f41482373b9]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU5140953619.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>LexisNexis on Why Standard RAG Fails in Law</title>
      <description>On February 2nd, a single plugin wiped nearly $800 billion off the enterprise software market. Wall Street is terrified that AI agents are about to eat the legal industry's lunch. But LexisNexis isn't scared—they're building the moat.



In this episode of Beyond the Pilot, Min Chen (Chief AI Officer, LexisNexis) reveals the sophisticated architecture they built to counter the "LLM wrapper" revolution. Moving beyond standard RAG, Min breaks down their move to "GraphRAG", their deployment of Agentic workflows (using Planner and Reflection agents), and why they created a proprietary "Usefulness Score" because standard accuracy metrics weren't good enough for lawyers.

AI Gets Real Here. No theory, just the execution roadmap for deploying AI in a zero-error environment.

In this episode, we cover:

- The "Dangerous RAG" Problem: Why semantic search fails in professional domains (retrieving "relevant" but overruled cases) and how "Point of Law" knowledge graphs fix it.

- The "Usefulness" Metric: The 8 sub-metrics LexisNexis uses (including Authority, Comprehensiveness, and Fluency) to grade AI quality.

- Agentic ROI: How deploying a "Planner Agent" to break down complex questions increased answer usefulness by 20%.

- The "Reflection Agent": Using a secondary agent to critique and refine drafts in real-time.

- Hallucination Detection: Why you should never rely on an LLM to judge its own hallucinations (and the deterministic code they use instead).

⏱️ TIMESTAMPS

00:00 - Intro: The $800 Billion AI Threat to Legal Tech

02:18 - Min Chen’s Journey: From Feature Engineering to Chief AI Officer

05:55 - Why Standard RAG Fails in Law (and How GraphRAG Fixes It)

10:40 - "Accuracy" is a Vanity Metric: The 8-Point Usefulness Score

14:20 - The "Auto-Eval" Framework: Human-in-the-Loop at Scale

16:40 - The Secret Sauce: Don't Use LLMs to Detect Hallucinations

21:15 - Agentic AI: How "Planner Agents" Drove a 20% Gain

22:00 - The "Reflection Agent": Self-Critique Loops for Drafting

30:30 - Distillation: Balancing Cost, Speed, and Quality

32:45 - Min’s Advice: Don't Build the Product First (Build the Metrics)



Presented by Outshift by Cisco. Outshift is Cisco’s emerging tech incubation engine and driver of Agentic AI, quantum, and next-gen infrastructure. Learn more at outshift.cisco.com.

About VentureBeat: VentureBeat equips enterprise technology leaders with the clearest, expert guidance on AI – and on the data and security foundations that turn it into working reality.

🔗 CONNECT WITH US

Subscribe to our Newsletters for technical breakdowns: https://venturebeat.com/newsletters

Visit VentureBeat: https://venturebeat.com


Subscribe to the full podcast here:

Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239

Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4

YouTube: https://www.youtube.com/VentureBeat


Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Wed, 18 Feb 2026 11:32:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:season>1</itunes:season>
      <itunes:episode>7</itunes:episode>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/21b052c6-0c4d-11f1-8396-7b38a45837ad/image/303aa3d4fcd77adce1002a0f6735a3fb.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>On February 2nd, a single plugin wiped nearly $800 billion off the enterprise software market. Wall Street is terrified that AI agents are about to eat the legal industry's lunch. But LexisNexis isn't scared—they're building the moat.



In this episode of Beyond the Pilot, Min Chen (Chief AI Officer, LexisNexis) reveals the sophisticated architecture they built to counter the "LLM wrapper" revolution. Moving beyond standard RAG, Min breaks down their move to "GraphRAG", their deployment of Agentic workflows (using Planner and Reflection agents), and why they created a proprietary "Usefulness Score" because standard accuracy metrics weren't good enough for lawyers.

AI Gets Real Here. No theory, just the execution roadmap for deploying AI in a zero-error environment.

In this episode, we cover:

- The "Dangerous RAG" Problem: Why semantic search fails in professional domains (retrieving "relevant" but overruled cases) and how "Point of Law" knowledge graphs fix it.

- The "Usefulness" Metric: The 8 sub-metrics LexisNexis uses (including Authority, Comprehensiveness, and Fluency) to grade AI quality.

- Agentic ROI: How deploying a "Planner Agent" to break down complex questions increased answer usefulness by 20%.

- The "Reflection Agent": Using a secondary agent to critique and refine drafts in real-time.

- Hallucination Detection: Why you should never rely on an LLM to judge its own hallucinations (and the deterministic code they use instead).

⏱️ TIMESTAMPS

00:00 - Intro: The $800 Billion AI Threat to Legal Tech

02:18 - Min Chen’s Journey: From Feature Engineering to Chief AI Officer

05:55 - Why Standard RAG Fails in Law (and How GraphRAG Fixes It)

10:40 - "Accuracy" is a Vanity Metric: The 8-Point Usefulness Score

14:20 - The "Auto-Eval" Framework: Human-in-the-Loop at Scale

16:40 - The Secret Sauce: Don't Use LLMs to Detect Hallucinations

21:15 - Agentic AI: How "Planner Agents" Drove a 20% Gain

22:00 - The "Reflection Agent": Self-Critique Loops for Drafting

30:30 - Distillation: Balancing Cost, Speed, and Quality

32:45 - Min’s Advice: Don't Build the Product First (Build the Metrics)



Presented by Outshift by Cisco. Outshift is Cisco’s emerging tech incubation engine and driver of Agentic AI, quantum, and next-gen infrastructure. Learn more at outshift.cisco.com.

About VentureBeat: VentureBeat equips enterprise technology leaders with the clearest, expert guidance on AI – and on the data and security foundations that turn it into working reality.

🔗 CONNECT WITH US

Subscribe to our Newsletters for technical breakdowns: https://venturebeat.com/newsletters

Visit VentureBeat: https://venturebeat.com


Subscribe to the full podcast here:

Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239

Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4

YouTube: https://www.youtube.com/VentureBeat


Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>On February 2nd, a single plugin wiped nearly $800 billion off the enterprise software market. Wall Street is terrified that AI agents are about to eat the legal industry's lunch. But LexisNexis isn't scared—they're building the moat.</p>
<p><br></p>
<p>In this episode of <strong>Beyond the Pilot</strong>, Min Chen (Chief AI Officer, LexisNexis) reveals the sophisticated architecture they built to counter the "LLM wrapper" revolution. Moving beyond standard RAG, Min breaks down their move to <strong>"GraphRAG"</strong>, their deployment of <strong>Agentic workflows</strong> (using Planner and Reflection agents), and why they created a proprietary <strong>"Usefulness Score"</strong> because standard accuracy metrics weren't good enough for lawyers.</p>
<p><strong>AI Gets Real Here.</strong> No theory, just the execution roadmap for deploying AI in a zero-error environment.</p>
<p><strong>In this episode, we cover:</strong></p>
<ul>
  <li>
<p><strong>The "Dangerous RAG" Problem:</strong> Why semantic search fails in professional domains (retrieving "relevant" but overruled cases) and how "Point of Law" knowledge graphs fix it.</p>
</li>
  <li>
<p><strong>The "Usefulness" Metric:</strong> The 8 sub-metrics LexisNexis uses (including Authority, Comprehensiveness, and Fluency) to grade AI quality.</p>
</li>
  <li>
<p><strong>Agentic ROI:</strong> How deploying a "Planner Agent" to break down complex questions increased answer usefulness by 20%.</p>
</li>
  <li>
<p><strong>The "Reflection Agent":</strong> Using a secondary agent to critique and refine drafts in real-time.</p>
</li>
  <li>
<p><strong>Hallucination Detection:</strong> Why you should never rely on an LLM to judge its own hallucinations (and the deterministic code they use instead).</p>
</li>
</ul>
<p><strong>⏱️ TIMESTAMPS</strong></p>
<p>00:00 - Intro: The $800 Billion AI Threat to Legal Tech</p>
<p>02:18 - Min Chen’s Journey: From Feature Engineering to Chief AI Officer</p>
<p>05:55 - Why Standard RAG Fails in Law (and How GraphRAG Fixes It)</p>
<p>10:40 - "Accuracy" is a Vanity Metric: The 8-Point Usefulness Score</p>
<p>14:20 - The "Auto-Eval" Framework: Human-in-the-Loop at Scale</p>
<p>16:40 - The Secret Sauce: Don't Use LLMs to Detect Hallucinations</p>
<p>21:15 - Agentic AI: How "Planner Agents" Drove a 20% Gain</p>
<p>22:00 - The "Reflection Agent": Self-Critique Loops for Drafting</p>
<p>30:30 - Distillation: Balancing Cost, Speed, and Quality</p>
<p>32:45 - Min’s Advice: Don't Build the Product First (Build the Metrics)</p>
<p><br></p>
<p><strong>Presented by Outshift by Cisco.</strong> Outshift is Cisco’s emerging tech incubation engine and driver of Agentic AI, quantum, and next-gen infrastructure. Learn more at <a href="https://outshift.cisco.com"><u>outshift.cisco.com</u></a>.</p>
<p><strong>About VentureBeat:</strong> VentureBeat equips enterprise technology leaders with the clearest, expert guidance on AI – and on the data and security foundations that turn it into working reality.</p>
<p><strong>🔗 CONNECT WITH US</strong></p>
<p>Subscribe to our Newsletters for technical breakdowns: https://venturebeat.com/newsletters</p>
<p>Visit VentureBeat: https://venturebeat.com</p>
<p>Subscribe to the full podcast here:</p>
<p>Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239</p>
<p>Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4</p>
<p>YouTube: https://www.youtube.com/VentureBeat</p>
<p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>2146</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[21fa2c0c-0c4d-11f1-8396-df9ca5faf449]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU4952214811.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Mastercard's 160 Billion Transactions: AI's Biggest Test</title>
      <description>While most of the world is still running GenAI pilots, Mastercard is running AI inference on 160 billion transactions a year—with a hard latency limit of 50 milliseconds per score.

In this episode of Beyond the Pilot, Johan Gerber (EVP of Security Solutions) and Chris Merz (SVP of Data Science) open the hood on one of the world's largest production AI systems: Decision Intelligence Pro. They reveal how they moved beyond legacy rules engines to build Recurrent Neural Networks (RNNs) that act as "inverse recommenders"—predicting legitimate behavior faster than the blink of an eye.

AI Gets Real Here. This isn't just about defense. Johan and Chris detail how they are taking the fight to criminals by leveraging Generative AI to engage scammers with "honeypots," expose mule accounts, and map fraud networks globally.

In this episode, we cover:

- The 50ms Inference Challenge: How Mastercard optimized their RNNs to score transactions at a peak rate of 70,000 per second.
- "Scamming the Scammers": How GenAI agents are being used to automate honeypot conversations and extract mule account data.
- The "Inverse Recommender" Architecture: Why Mastercard treats fraud detection as a recommendation problem (predicting the next likely merchant).
- Org Design for Scale: The "Data Science Engineering Requirements Document" (DSERD) Chris used to align four separate engineering teams.
- The Hybrid Infrastructure: Why moving to Databricks and the cloud was necessary to cut innovation cycles from months to hours.

🚀 CHAPTERS

00:00 - Intro: 160 Billion Transactions &amp; 50ms Decisions

02:08 - Thinking Like a Criminal: Johan’s Law Enforcement Background

06:22 - Org Design: Why AI is the "Middle Lane" of Engineering

11:00 - The Scale: 70k Transactions Per Second

15:47 - Decision Intelligence Pro: The "Inverse Recommender" RNN

23:00 - The "Lego Block" Strategy: Aligning Data Science &amp; Engineering

33:00 - Infrastructure: Why Cloud/Databricks was Non-Negotiable

37:00 - GenAI Offensive: Threat Hunting &amp; "Scamming the Scammers"

46:40 - "Honeypots" and Detecting Mule Accounts

52:00 - Advice for Technical Leaders: Talent &amp; Prioritization



Presented by Outshift by Cisco. Outshift is Cisco’s emerging tech incubation engine and driver of Agentic AI, quantum, and next-gen infrastructure. Learn more at outshift.cisco.com.

About VentureBeat: VentureBeat equips enterprise technology leaders with the clearest expert guidance on AI – and on the data and security foundations that turn it into working reality.

🔗 CONNECT WITH US

Subscribe to our Newsletters for technical breakdowns: https://venturebeat.com/newsletters

Visit VentureBeat: https://venturebeat.com


Subscribe to the full podcast here:

Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239

Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4

YouTube: https://www.youtube.com/VentureBeat


Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Wed, 04 Feb 2026 11:30:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:season>1</itunes:season>
      <itunes:episode>6</itunes:episode>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/21b4bc6a-010a-11f1-97b6-6fe4862baee3/image/303aa3d4fcd77adce1002a0f6735a3fb.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>While most of the world is still running GenAI pilots, Mastercard is running AI inference on 160 billion transactions a year—with a hard latency limit of 50 milliseconds per score.

In this episode of Beyond the Pilot, Johan Gerber (EVP of Security Solutions) and Chris Merz (SVP of Data Science) open the hood on one of the world's largest production AI systems: Decision Intelligence Pro. They reveal how they moved beyond legacy rules engines to build Recurrent Neural Networks (RNNs) that act as "inverse recommenders"—predicting legitimate behavior faster than the blink of an eye.

AI Gets Real Here. This isn't just about defense. Johan and Chris detail how they are taking the fight to criminals by leveraging Generative AI to engage scammers with "honeypots," expose mule accounts, and map fraud networks globally.

In this episode, we cover:

- The 50ms Inference Challenge: How Mastercard optimized their RNNs to score transactions at a peak rate of 70,000 per second.
- "Scamming the Scammers": How GenAI agents are being used to automate honeypot conversations and extract mule account data.
- The "Inverse Recommender" Architecture: Why Mastercard treats fraud detection as a recommendation problem (predicting the next likely merchant).
- Org Design for Scale: The "Data Science Engineering Requirements Document" (DSERD) Chris used to align four separate engineering teams.
- The Hybrid Infrastructure: Why moving to Databricks and the cloud was necessary to cut innovation cycles from months to hours.

🚀 CHAPTERS

00:00 - Intro: 160 Billion Transactions &amp; 50ms Decisions

02:08 - Thinking Like a Criminal: Johan’s Law Enforcement Background

06:22 - Org Design: Why AI is the "Middle Lane" of Engineering

11:00 - The Scale: 70k Transactions Per Second

15:47 - Decision Intelligence Pro: The "Inverse Recommender" RNN

23:00 - The "Lego Block" Strategy: Aligning Data Science &amp; Engineering

33:00 - Infrastructure: Why Cloud/Databricks was Non-Negotiable

37:00 - GenAI Offensive: Threat Hunting &amp; "Scamming the Scammers"

46:40 - "Honeypots" and Detecting Mule Accounts

52:00 - Advice for Technical Leaders: Talent &amp; Prioritization



Presented by Outshift by Cisco. Outshift is Cisco’s emerging tech incubation engine and driver of Agentic AI, quantum, and next-gen infrastructure. Learn more at outshift.cisco.com.

About VentureBeat: VentureBeat equips enterprise technology leaders with the clearest expert guidance on AI – and on the data and security foundations that turn it into working reality.

🔗 CONNECT WITH US

Subscribe to our Newsletters for technical breakdowns: https://venturebeat.com/newsletters

Visit VentureBeat: https://venturebeat.com


Subscribe to the full podcast here:

Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239

Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4

YouTube: https://www.youtube.com/VentureBeat


Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>While most of the world is still running GenAI pilots, Mastercard is running AI inference on 160 billion transactions a year—with a hard latency limit of 50 milliseconds per score.</p>
<p>In this episode of Beyond the Pilot, Johan Gerber (EVP of Security Solutions) and Chris Merz (SVP of Data Science) open the hood on one of the world's largest production AI systems: Decision Intelligence Pro. They reveal how they moved beyond legacy rules engines to build Recurrent Neural Networks (RNNs) that act as "inverse recommenders"—predicting legitimate behavior faster than the blink of an eye.</p>
<p>AI Gets Real Here. This isn't just about defense. Johan and Chris detail how they are taking the fight to criminals by leveraging Generative AI to engage scammers with "honeypots," expose mule accounts, and map fraud networks globally.</p>
<p>In this episode, we cover:</p>
<ul>
  <li>
<p>The 50ms Inference Challenge: How Mastercard optimized their RNNs to score transactions at a peak rate of 70,000 per second.</p>
</li>
  <li>
<p>"Scamming the Scammers": How GenAI agents are being used to automate honeypot conversations and extract mule account data.</p>
</li>
  <li>
<p>The "Inverse Recommender" Architecture: Why Mastercard treats fraud detection as a recommendation problem (predicting the next likely merchant).</p>
</li>
  <li>
<p>Org Design for Scale: The "Data Science Engineering Requirements Document" (DSERD) Chris used to align four separate engineering teams.</p>
</li>
  <li>
<p>The Hybrid Infrastructure: Why moving to Databricks and the cloud was necessary to cut innovation cycles from months to hours.</p>
</li>
</ul>
<p><strong>🚀 CHAPTERS</strong></p>
<p>00:00 - Intro: 160 Billion Transactions &amp; 50ms Decisions</p>
<p>02:08 - Thinking Like a Criminal: Johan’s Law Enforcement Background</p>
<p>06:22 - Org Design: Why AI is the "Middle Lane" of Engineering</p>
<p>11:00 - The Scale: 70k Transactions Per Second</p>
<p>15:47 - Decision Intelligence Pro: The "Inverse Recommender" RNN</p>
<p>23:00 - The "Lego Block" Strategy: Aligning Data Science &amp; Engineering</p>
<p>33:00 - Infrastructure: Why Cloud/Databricks was Non-Negotiable</p>
<p>37:00 - GenAI Offensive: Threat Hunting &amp; "Scamming the Scammers"</p>
<p>46:40 - "Honeypots" and Detecting Mule Accounts</p>
<p>52:00 - Advice for Technical Leaders: Talent &amp; Prioritization</p>
<p><br></p>
<p><strong>Presented by Outshift by Cisco.</strong> Outshift is Cisco’s emerging tech incubation engine and driver of Agentic AI, quantum, and next-gen infrastructure. Learn more at <a href="https://outshift.cisco.com"><u>outshift.cisco.com</u></a>.</p>
<p><br></p>
<p><strong>About VentureBeat:</strong> VentureBeat equips enterprise technology leaders with the clearest expert guidance on AI – and on the data and security foundations that turn it into working reality.</p>
<p><strong>🔗 CONNECT WITH US</strong></p>
<p>Subscribe to our Newsletters for technical breakdowns: https://venturebeat.com/newsletters</p>
<p>Visit VentureBeat: https://venturebeat.com</p>
<p>Subscribe to the full podcast here:</p>
<p>Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239</p>
<p>Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4</p>
<p>YouTube: https://www.youtube.com/VentureBeat</p>
<p><br></p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>3352</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[21e0fa28-010a-11f1-97b6-bb10062d79e2]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU8805467254.mp3" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Inside LinkedIn’s AI Engineering Playbook</title>
      <description>While the rest of the industry chases massive models, LinkedIn quietly achieved a major engineering breakthrough by going small.

In this episode of Beyond the Pilot, Erran Berger (VP of Product Engineering, LinkedIn) opens the "cookbook" on how they distilled massive 7B parameter models down to ultra-efficient 600M parameter "student" models—scaling AI to 1.2 billion users without breaking the bank.

AI Gets Real Here. This isn't theory. Erran details the exact architecture, the "Multi-Teacher" distillation process, and the organizational shift that forced Product Managers to write evals instead of specs.

In this episode, we cover:

- The Distillation Pipeline: How to train a 7B "Teacher" and distill it to a 1.7B intermediate and 0.6B "Student" for production.
- Synthetic Data Strategy: Using GPT-4 to generate the "Golden Dataset" for training.
- Multi-Teacher Architecture: Why they separated "Product Policy" and "Click Prediction" into different teacher models to solve alignment issues.
- 10x Efficiency Hacks: Specific techniques (Pruning, Quantization, Context Compression) that slashed latency.
- Org Design: Why the "Eval First" culture is the new requirement for AI engineering teams.

🚀 CHAPTERS

00:00 - Intro: LinkedIn's Massive "Small Model" Feat

04:00 - Why Commercial Models Failed at LinkedIn Scale

08:00 - The "Product Policy" Funnel &amp; Synthetic Data Generation

12:00 - The Pipeline: 7B → 1.7B → 600M Parameters

19:00 - The "Multi-Teacher" Breakthrough (Relevance vs. Clicks)

23:00 - How They Achieved 10x Latency Reduction (Pruning/Compression)

31:00 - Changing the Culture: Why PMs Must Write Evals

35:00 - The "Bright Green Matrix": Measuring Success &amp; Future Roadmap

Presented by Outshift by Cisco. Outshift is Cisco’s emerging tech incubation engine and driver of Agentic AI, quantum, and next-gen infrastructure. Learn more at outshift.cisco.com.

About VentureBeat: VentureBeat equips enterprise technology leaders with the clearest expert guidance on AI – and on the data and security foundations that turn it into working reality.

🔗 CONNECT WITH US

Subscribe to our Newsletters for technical breakdowns: https://venturebeat.com/newsletters

Visit VentureBeat: https://venturebeat.com


Subscribe to the full podcast here:

Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239

Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4

YouTube: https://www.youtube.com/VentureBeat

#EnterpriseAI #LLMDistillation #LinkedInEngineering #SmallLanguageModels #AIArchitecture #TechLeadership


Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Wed, 21 Jan 2026 11:29:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:season>1</itunes:season>
      <itunes:episode>5</itunes:episode>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/c30f81fe-f326-11f0-8113-4791369b592a/image/303aa3d4fcd77adce1002a0f6735a3fb.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>While the rest of the industry chases massive models, LinkedIn quietly achieved a major engineering breakthrough by going small.

In this episode of Beyond the Pilot, Erran Berger (VP of Product Engineering, LinkedIn) opens the "cookbook" on how they distilled massive 7B parameter models down to ultra-efficient 600M parameter "student" models—scaling AI to 1.2 billion users without breaking the bank.

AI Gets Real Here. This isn't theory. Erran details the exact architecture, the "Multi-Teacher" distillation process, and the organizational shift that forced Product Managers to write evals instead of specs.

In this episode, we cover:

- The Distillation Pipeline: How to train a 7B "Teacher" and distill it to a 1.7B intermediate and 0.6B "Student" for production.
- Synthetic Data Strategy: Using GPT-4 to generate the "Golden Dataset" for training.
- Multi-Teacher Architecture: Why they separated "Product Policy" and "Click Prediction" into different teacher models to solve alignment issues.
- 10x Efficiency Hacks: Specific techniques (Pruning, Quantization, Context Compression) that slashed latency.
- Org Design: Why the "Eval First" culture is the new requirement for AI engineering teams.

🚀 CHAPTERS

00:00 - Intro: LinkedIn's Massive "Small Model" Feat

04:00 - Why Commercial Models Failed at LinkedIn Scale

08:00 - The "Product Policy" Funnel &amp; Synthetic Data Generation

12:00 - The Pipeline: 7B → 1.7B → 600M Parameters

19:00 - The "Multi-Teacher" Breakthrough (Relevance vs. Clicks)

23:00 - How They Achieved 10x Latency Reduction (Pruning/Compression)

31:00 - Changing the Culture: Why PMs Must Write Evals

35:00 - The "Bright Green Matrix": Measuring Success &amp; Future Roadmap

Presented by Outshift by Cisco. Outshift is Cisco’s emerging tech incubation engine and driver of Agentic AI, quantum, and next-gen infrastructure. Learn more at outshift.cisco.com.

About VentureBeat: VentureBeat equips enterprise technology leaders with the clearest expert guidance on AI – and on the data and security foundations that turn it into working reality.

🔗 CONNECT WITH US

Subscribe to our Newsletters for technical breakdowns: https://venturebeat.com/newsletters

Visit VentureBeat: https://venturebeat.com


Subscribe to the full podcast here:

Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239

Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4

YouTube: https://www.youtube.com/VentureBeat

#EnterpriseAI #LLMDistillation #LinkedInEngineering #SmallLanguageModels #AIArchitecture #TechLeadership


Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>While the rest of the industry chases massive models, LinkedIn quietly achieved a major engineering breakthrough by going small.</p>
<p>In this episode of <strong>Beyond the Pilot</strong>, Erran Berger (VP of Product Engineering, LinkedIn) opens the "cookbook" on how they distilled massive 7B parameter models down to ultra-efficient 600M parameter "student" models—scaling AI to 1.2 billion users without breaking the bank.</p>
<p><strong>AI Gets Real Here.</strong> This isn't theory. Erran details the exact architecture, the "Multi-Teacher" distillation process, and the organizational shift that forced Product Managers to write evals instead of specs.</p>
<p><strong>In this episode, we cover:</strong></p>
<ul>
  <li>
<p><strong>The Distillation Pipeline:</strong> How to train a 7B "Teacher" and distill it to a 1.7B intermediate and 0.6B "Student" for production.</p>
</li>
  <li>
<p><strong>Synthetic Data Strategy:</strong> Using GPT-4 to generate the "Golden Dataset" for training.</p>
</li>
  <li>
<p><strong>Multi-Teacher Architecture:</strong> Why they separated "Product Policy" and "Click Prediction" into different teacher models to solve alignment issues.</p>
</li>
  <li>
<p><strong>10x Efficiency Hacks:</strong> Specific techniques (Pruning, Quantization, Context Compression) that slashed latency.</p>
</li>
  <li>
<p><strong>Org Design:</strong> Why the "Eval First" culture is the new requirement for AI engineering teams.</p>
</li>
</ul>
<p><strong>🚀 CHAPTERS</strong></p>
<p>00:00 - Intro: LinkedIn's Massive "Small Model" Feat</p>
<p>04:00 - Why Commercial Models Failed at LinkedIn Scale</p>
<p>08:00 - The "Product Policy" Funnel &amp; Synthetic Data Generation</p>
<p>12:00 - The Pipeline: 7B → 1.7B → 600M Parameters</p>
<p>19:00 - The "Multi-Teacher" Breakthrough (Relevance vs. Clicks)</p>
<p>23:00 - How They Achieved 10x Latency Reduction (Pruning/Compression)</p>
<p>31:00 - Changing the Culture: Why PMs Must Write Evals</p>
<p>35:00 - The "Bright Green Matrix": Measuring Success &amp; Future Roadmap</p>
<p><strong>Presented by Outshift by Cisco.</strong> Outshift is Cisco’s emerging tech incubation engine and driver of Agentic AI, quantum, and next-gen infrastructure. Learn more at <a href="https://outshift.cisco.com"><u>outshift.cisco.com</u></a>.</p>
<p><strong>About VentureBeat:</strong> VentureBeat equips enterprise technology leaders with the clearest expert guidance on AI – and on the data and security foundations that turn it into working reality.</p>
<p><strong>🔗 CONNECT WITH US</strong></p>
<p>Subscribe to our Newsletters for technical breakdowns: https://venturebeat.com/newsletters</p>
<p>Visit VentureBeat: https://venturebeat.com</p>
<p>Subscribe to the full podcast here:</p>
<p>Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239</p>
<p>Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4</p>
<p>YouTube: https://www.youtube.com/VentureBeat</p>
<p>#EnterpriseAI #LLMDistillation #LinkedInEngineering #SmallLanguageModels #AIArchitecture #TechLeadership</p>
<p><br></p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>2438</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c369c5b0-f326-11f0-8113-c792987f52f7]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU9861507389.mp3?updated=1768929450" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Most Enterprise AI Agents Are Slop: Here’s Why They Fail</title>
      <description>The "TAM" for AI Agents isn't software. And there is a $10 Trillion opportunity.

In this episode, Replit CEO Amjad Masad reveals why 99% of today's enterprise AI agents are just "Slop"—unreliable, generic toys that fail in production. We dive deep into the engineering reality of building autonomous agents that actually work, moving beyond simple chatbots to systems that can navigate the messy reality of enterprise infrastructure.

Amjad breaks down Replit’s "Computer Use" hack that makes agents 10x cheaper than generic models, explains why "Vibe Coding" is the future of the C-Suite, and issues a warning to technical leaders: If you want to ship fast in the AI era, you need to kill your product roadmap.

In this episode, we cover:

- The "Slop" Problem: Why most LLM outputs are generic and how to inject "taste" back into software.
- The Computer Use "Hack": How Replit built a programmatic verifier loop that outperforms vision-based models.
- Vibe Coding: Why non-technical domain experts (HR, Sales, Marketing) will build the next generation of enterprise software.
- The $10T Market: Why the Junior Developer role is disappearing and being replaced by the "Manager of Agents."

🚀 CHAPTERS

00:00 - Intro: Why most AI Agents are "Toys"

03:02 - The only 2 AI use cases making money right now

06:00 - The "Crappy Product" Strategy (Shipping fast)

10:00 - What is "AI Slop"? (And how to fix it)

14:30 - The "Deleted Database" Incident: Solving Reliability

18:00 - The "Squishy" Divide: Why Marketing Agents fail

21:45 - Vibe Coding in the Enterprise

26:00 - Model Wars: Claude Opus vs. Gemini vs. OpenAI

28:10 - The "Computer Use" Hack (10x Cheaper, 3x Faster)

36:00 - Why Product Roadmaps are Dead

43:00 - Replit is the #1 Software Vendor (Ramp Data)

49:00 - The Unit Economics of Agents (Token Costs vs. Value)

53:00 - Open Source vs. Closed: The "Cathedral of Bazaars"

59:00 - The $10 Trillion Opportunity: Replacing Labor

🔗 CONNECT WITH US

Subscribe to our Newsletters for technical breakdowns: https://venturebeat.com/newsletters

Visit VentureBeat: https://venturebeat.com

#AgenticAI #Replit #VibeCoding #EnterpriseAI #LLM #SoftwareEngineering #FutureOfWork #AmjadMasad #ArtificialIntelligence #DevOps


Subscribe to the full podcast here:

Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239

Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4

YouTube: https://www.youtube.com/VentureBeat


Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Wed, 07 Jan 2026 11:30:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:season>1</itunes:season>
      <itunes:episode>4</itunes:episode>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/fea0a462-eb58-11f0-a4df-db3bd5ca8bd1/image/66da1260e058cf084fd69c5fb366c0c9.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>The "TAM" for AI Agents isn't software. And there is a $10 Trillion opportunity.

In this episode, Replit CEO Amjad Masad reveals why 99% of today's enterprise AI agents are just "Slop"—unreliable, generic toys that fail in production. We dive deep into the engineering reality of building autonomous agents that actually work, moving beyond simple chatbots to systems that can navigate the messy reality of enterprise infrastructure.

Amjad breaks down Replit’s "Computer Use" hack that makes agents 10x cheaper than generic models, explains why "Vibe Coding" is the future of the C-Suite, and issues a warning to technical leaders: If you want to ship fast in the AI era, you need to kill your product roadmap.

In this episode, we cover:

- The "Slop" Problem: Why most LLM outputs are generic and how to inject "taste" back into software.
- The Computer Use "Hack": How Replit built a programmatic verifier loop that outperforms vision-based models.
- Vibe Coding: Why non-technical domain experts (HR, Sales, Marketing) will build the next generation of enterprise software.
- The $10T Market: Why the Junior Developer role is disappearing and being replaced by the "Manager of Agents."

🚀 CHAPTERS

00:00 - Intro: Why most AI Agents are "Toys"

03:02 - The only 2 AI use cases making money right now

06:00 - The "Crappy Product" Strategy (Shipping fast)

10:00 - What is "AI Slop"? (And how to fix it)

14:30 - The "Deleted Database" Incident: Solving Reliability

18:00 - The "Squishy" Divide: Why Marketing Agents fail

21:45 - Vibe Coding in the Enterprise

26:00 - Model Wars: Claude Opus vs. Gemini vs. OpenAI

28:10 - The "Computer Use" Hack (10x Cheaper, 3x Faster)

36:00 - Why Product Roadmaps are Dead

43:00 - Replit is the #1 Software Vendor (Ramp Data)

49:00 - The Unit Economics of Agents (Token Costs vs. Value)

53:00 - Open Source vs. Closed: The "Cathedral of Bazaars"

59:00 - The $10 Trillion Opportunity: Replacing Labor

🔗 CONNECT WITH US

Subscribe to our Newsletters for technical breakdowns: https://venturebeat.com/newsletters

Visit VentureBeat: https://venturebeat.com

#AgenticAI #Replit #VibeCoding #EnterpriseAI #LLM #SoftwareEngineering #FutureOfWork #AmjadMasad #ArtificialIntelligence #DevOps


Subscribe to the full podcast here:

Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239

Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4

YouTube: https://www.youtube.com/VentureBeat


Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p><strong>The "TAM" for AI Agents isn't software.</strong> And there is a $10 Trillion opportunity.</p>
<p>In this episode, Replit CEO Amjad Masad reveals why 99% of today's enterprise AI agents are just "Slop"—unreliable, generic toys that fail in production. We dive deep into the engineering reality of building autonomous agents that actually work, moving beyond simple chatbots to systems that can navigate the messy reality of enterprise infrastructure.</p>
<p>Amjad breaks down Replit’s "Computer Use" hack that makes agents 10x cheaper than generic models, explains why "Vibe Coding" is the future of the C-Suite, and issues a warning to technical leaders: If you want to ship fast in the AI era, you need to kill your product roadmap.</p>
<p><strong>In this episode, we cover:</strong></p>
<ul>
  <li>
<p><strong>The "Slop" Problem:</strong> Why most LLM outputs are generic and how to inject "taste" back into software.</p>
</li>
  <li>
<p><strong>The Computer Use "Hack":</strong> How Replit built a programmatic verifier loop that outperforms vision-based models.</p>
</li>
  <li>
<p><strong>Vibe Coding:</strong> Why non-technical domain experts (HR, Sales, Marketing) will build the next generation of enterprise software.</p>
</li>
  <li>
<p><strong>The $10T Market:</strong> Why the Junior Developer role is disappearing and being replaced by the "Manager of Agents."</p>
</li>
</ul>
<p><strong>🚀 CHAPTERS</strong></p>
<p>00:00 - Intro: Why most AI Agents are "Toys"</p>
<p>03:02 - The only 2 AI use cases making money right now</p>
<p>06:00 - The "Crappy Product" Strategy (Shipping fast)</p>
<p>10:00 - What is "AI Slop"? (And how to fix it)</p>
<p>14:30 - The "Deleted Database" Incident: Solving Reliability</p>
<p>18:00 - The "Squishy" Divide: Why Marketing Agents fail</p>
<p>21:45 - Vibe Coding in the Enterprise</p>
<p>26:00 - Model Wars: Claude Opus vs. Gemini vs. OpenAI</p>
<p>28:10 - The "Computer Use" Hack (10x Cheaper, 3x Faster)</p>
<p>36:00 - Why Product Roadmaps are Dead</p>
<p>43:00 - Replit is the #1 Software Vendor (Ramp Data)</p>
<p>49:00 - The Unit Economics of Agents (Token Costs vs. Value)</p>
<p>53:00 - Open Source vs. Closed: The "Cathedral of Bazaars"</p>
<p>59:00 - The $10 Trillion Opportunity: Replacing Labor</p>
<p><strong>🔗 CONNECT WITH US</strong></p>
<p>Subscribe to our Newsletters for technical breakdowns: https://venturebeat.com/newsletters</p>
<p>Visit VentureBeat: https://venturebeat.com</p>
<p><strong>#AgenticAI #Replit #VibeCoding #EnterpriseAI #LLM #SoftwareEngineering #FutureOfWork #AmjadMasad #ArtificialIntelligence #DevOps</strong></p>
<p>Subscribe to the full podcast here:</p>
<p>Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239</p>
<p>Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4</p>
<p>YouTube: https://www.youtube.com/VentureBeat</p>
<p><br></p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>3781</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[fec972de-eb58-11f0-a4df-27029a01cc97]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU1135065127.mp3?updated=1767788493" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>How JPMorgan Engineered a 30K AI Agent Economy</title>
      <description>Inside the 'Agent Economy': How 30,000 AI Assistants Took Over JPMorgan

While most enterprises were scrambling after ChatGPT launched, JPMorgan Chase was already two years ahead. 🚀

In this episode of Beyond the Pilot, we sit down with Derek Waldron, Chief Analytics Officer at JPMorgan Chase, to reveal how the world’s largest bank built an internal AI platform that is now used by 1 in 2 employees daily.

Derek shares the contrarian insight that drove their strategy: AI models are commodities; the real moat is connectivity.

Learn how they scaled from zero to 250,000+ users, why they empowered employees to build 30,000+ of their own "Personal Agents," and how they are solving the data privacy challenge at an enterprise scale.

🔥 IN THIS EPISODE:

- The "Super Intelligence" Thought Experiment: Why raw intelligence is useless without enterprise connectivity.
- The Agent Economy: How JPM enabled non-technical staff to build 30,000 custom AI assistants.
- The Adoption Playbook: How to break through the "30% wall" and get the majority of your workforce using AI.
- Build vs. Buy: Why JPM built their own "LLM Suite" instead of waiting for vendors.

⏳ CHAPTERS:

00:00 - Introduction: The JPMorgan AI Story

01:45 - The 3 Core Principles Behind JPM’s Strategy

03:25 - The "Super Intelligence" Thought Experiment

05:00 - Data Privacy: Why JPM Doesn't Train Public Models

06:00 - Viral Adoption: From 0 to 250k Users

09:20 - Evolution of LLM Suite: From RAG to Ecosystem

14:00 - The "Moat" is Connectivity, Not the Model

23:00 - The Agent Economy: 30,000 Employee-Built Assistants

31:00 - Governance &amp; Guardrails for AI Agents

33:00 - Crossing the Chasm: Getting to 60% Adoption

40:00 - The "Product" Mindset: Solving Business Problems First

42:30 - The Future: End-to-End Process Transformation

46:25 - The "Unsolved" Problem Derek Wants to Fix

🙏 SPECIAL THANKS TO OUR SPONSOR:

This episode is presented by Outshift by Cisco.

Learn more about their work on the Internet of Agents and the open-source Linux Foundation project:

🔗 https://www.agentcy.org

🎙️ GUEST: 

Derek Waldron | Chief Analytics Officer, JPMorgan Chase

HOSTS:

Matt Marshall | VentureBeat

Sam Witteveen | VentureBeat

#EnterpriseAI #JPMorgan #GenerativeAI #AgenticAI #FinTech #ArtificialIntelligence #Innovation #BeyondThePilot


Subscribe to VentureBeat on YouTube: https://www.youtube.com/@VentureBeat


Subscribe to the full podcast here:

Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239

Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4

YouTube: https://www.youtube.com/VentureBeat

https://www.youtube.com/playlist?list=PLMQoSwszBxm5dCv2bdqGnJ0QAL9n7Ds4_

🔗 CONNECT WITH VENTUREBEAT:

Website: https://venturebeat.com

LinkedIn: https://www.linkedin.com/company/venturebeat

X (Twitter): https://twitter.com/VentureBeat

Newsletter: https://venturebeat.com/newsletters

Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Wed, 17 Dec 2025 11:30:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:season>1</itunes:season>
      <itunes:episode>3</itunes:episode>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/c4e87d06-da8e-11f0-8205-1b0ccddf8d20/image/303aa3d4fcd77adce1002a0f6735a3fb.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Inside the 'Agent Economy': How 30,000 AI Assistants Took Over JPMorgan

While most enterprises were scrambling after ChatGPT launched, JPMorgan Chase was already two years ahead. 🚀

In this episode of Beyond the Pilot, we sit down with Derek Waldron, Chief Analytics Officer at JPMorgan Chase, to reveal how the world’s largest bank built an internal AI platform that is now used by 1 in 2 employees daily.

Derek shares the contrarian insight that drove their strategy: AI models are commodities; the real moat is connectivity.

Learn how they scaled from zero to 250,000+ users, why they empowered employees to build 30,000+ of their own "Personal Agents," and how they are solving the data privacy challenge at enterprise scale.

🔥 IN THIS EPISODE:

The "Super Intelligence" Thought Experiment: Why raw intelligence is useless without enterprise connectivity.

The Agent Economy: How JPM enabled non-technical staff to build 30,000 custom AI assistants.

The Adoption Playbook: How to break through the "30% wall" and get the majority of your workforce using AI.

Build vs. Buy: Why JPM built their own "LLM Suite" instead of waiting for vendors.

⏳ CHAPTERS:

00:00 - Introduction: The JPMorgan AI Story

01:45 - The 3 Core Principles Behind JPM’s Strategy

03:25 - The "Super Intelligence" Thought Experiment

05:00 - Data Privacy: Why JPM Doesn't Train Public Models

06:00 - Viral Adoption: From 0 to 250k Users

09:20 - Evolution of LLM Suite: From RAG to Ecosystem

14:00 - The "Moat" is Connectivity, Not the Model

23:00 - The Agent Economy: 30,000 Employee-Built Assistants

31:00 - Governance &amp; Guardrails for AI Agents

33:00 - Crossing the Chasm: Getting to 60% Adoption

40:00 - The "Product" Mindset: Solving Business Problems First

42:30 - The Future: End-to-End Process Transformation

46:25 - The "Unsolved" Problem Derek Wants to Fix

🙏 SPECIAL THANKS TO OUR SPONSOR:

This episode is presented by Outshift by Cisco.

Learn more about their work on the Internet of Agents and the open-source Linux Foundation project:

🔗 https://www.agentcy.org

🎙️ GUEST: 

Derek Waldron | Chief Analytics Officer, JPMorgan Chase

HOSTS:

Matt Marshall | VentureBeat

Sam Witteveen | VentureBeat

#EnterpriseAI #JPMorgan #GenerativeAI #AgenticAI #FinTech #ArtificialIntelligence #Innovation #BeyondThePilot


Subscribe to VentureBeat on YouTube: https://www.youtube.com/@VentureBeat


Subscribe to the full podcast here:

Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239

Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4

YouTube: https://www.youtube.com/VentureBeat

https://www.youtube.com/playlist?list=PLMQoSwszBxm5dCv2bdqGnJ0QAL9n7Ds4_

🔗 CONNECT WITH VENTUREBEAT:

Website: https://venturebeat.com

LinkedIn: https://www.linkedin.com/company/venturebeat

X (Twitter): https://twitter.com/VentureBeat

Newsletter: https://venturebeat.com/newsletters

Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Inside the 'Agent Economy': How 30,000 AI Assistants Took Over JPMorgan</p>
<p>While most enterprises were scrambling after ChatGPT launched, JPMorgan Chase was already two years ahead. 🚀</p>
<p>In this episode of <em>Beyond the Pilot</em>, we sit down with Derek Waldron, Chief Analytics Officer at JPMorgan Chase, to reveal how the world’s largest bank built an internal AI platform that is now used by 1 in 2 employees daily.</p>
<p>Derek shares the contrarian insight that drove their strategy: <strong>AI models are commodities; the real moat is connectivity.</strong></p>
<p>Learn how they scaled from zero to 250,000+ users, why they empowered employees to build 30,000+ of their own "Personal Agents," and how they are solving the data privacy challenge at enterprise scale.</p>
<p><strong>🔥 IN THIS EPISODE:</strong></p>
<ul>
  <li>
<p><strong>The "Super Intelligence" Thought Experiment:</strong> Why raw intelligence is useless without enterprise connectivity.</p>
</li>
  <li>
<p><strong>The Agent Economy:</strong> How JPM enabled non-technical staff to build 30,000 custom AI assistants.</p>
</li>
  <li>
<p><strong>The Adoption Playbook:</strong> How to break through the "30% wall" and get the majority of your workforce using AI.</p>
</li>
  <li>
<p><strong>Build vs. Buy:</strong> Why JPM built their own "LLM Suite" instead of waiting for vendors.</p>
</li>
</ul>
<p>⏳ CHAPTERS:</p>
<p>00:00 - Introduction: The JPMorgan AI Story</p>
<p>01:45 - The 3 Core Principles Behind JPM’s Strategy</p>
<p>03:25 - The "Super Intelligence" Thought Experiment</p>
<p>05:00 - Data Privacy: Why JPM Doesn't Train Public Models</p>
<p>06:00 - Viral Adoption: From 0 to 250k Users</p>
<p>09:20 - Evolution of LLM Suite: From RAG to Ecosystem</p>
<p>14:00 - The "Moat" is Connectivity, Not the Model</p>
<p>23:00 - The Agent Economy: 30,000 Employee-Built Assistants</p>
<p>31:00 - Governance &amp; Guardrails for AI Agents</p>
<p>33:00 - Crossing the Chasm: Getting to 60% Adoption</p>
<p>40:00 - The "Product" Mindset: Solving Business Problems First</p>
<p>42:30 - The Future: End-to-End Process Transformation</p>
<p>46:25 - The "Unsolved" Problem Derek Wants to Fix</p>
<p>🙏 SPECIAL THANKS TO OUR SPONSOR:</p>
<p>This episode is presented by Outshift by Cisco.</p>
<p>Learn more about their work on the Internet of Agents and the open-source Linux Foundation project:</p>
<p>🔗 https://www.agentcy.org</p>
<p>🎙️ GUEST: </p>
<p>Derek Waldron | Chief Analytics Officer, JPMorgan Chase</p>
<p>HOSTS:</p>
<p>Matt Marshall | VentureBeat</p>
<p>Sam Witteveen | VentureBeat</p>
<p>#EnterpriseAI #JPMorgan #GenerativeAI #AgenticAI #FinTech #ArtificialIntelligence #Innovation #BeyondThePilot</p>
<p>Subscribe to VentureBeat on YouTube: <a href="https://www.youtube.com/@VentureBeat">@VentureBeat</a></p>
<p>Subscribe to the full podcast here:</p>
<p>Apple: <a href="https://podcasts.apple.com/us/podcast/venturebeat/id1839285239"><u>https://podcasts.apple.com/us/podcast/venturebeat/id1839285239</u></a></p>
<p>Spotify: <a href="https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4"><u>https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4</u></a></p>
<p>YouTube: <a href="https://www.youtube.com/VentureBeat"><u>https://www.youtube.com/VentureBeat</u></a></p>
<p><a href="https://www.youtube.com/playlist?list=PLMQoSwszBxm5dCv2bdqGnJ0QAL9n7Ds4_"><u>https://www.youtube.com/playlist?list=PLMQoSwszBxm5dCv2bdqGnJ0QAL9n7Ds4_</u></a></p>
<p><strong>🔗 CONNECT WITH VENTUREBEAT:</strong></p>
<ul>
  <li>
<p>Website:<a href="https://venturebeat.com/"> <u>https://venturebeat.com</u></a></p>
</li>
  <li>
<p>LinkedIn:<a href="https://www.linkedin.com/company/venturebeat"> <u>https://www.linkedin.com/company/venturebeat</u></a></p>
</li>
  <li>
<p>X (Twitter):<a href="https://twitter.com/VentureBeat"> <u>https://twitter.com/VentureBeat</u></a></p>
</li>
  <li>
<p>Newsletter:<a href="https://venturebeat.com/newsletters"> <u>https://venturebeat.com/newsletters</u></a></p>
</li>
</ul>
<p><br></p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>2789</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c5354bae-da8e-11f0-8205-fbc1bd4c38e0]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU1200266656.mp3?updated=1765904887" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>How Booking.com Boosted Agent Accuracy 2x with Mini LLMs with Pranav Pathak</title>
      <description>We built AI agents by accident... and it worked. 🤯

In this episode of VentureBeat’s Beyond the Pilot, we go inside the engineering brain of Booking.com with Pranav Pathak (Director of Product Machine Learning). Pranav reveals how they "stumbled" into agentic architectures before the term even existed, how a simple text box revealed a massive missed revenue opportunity (the "Hot Tub" story), and exactly how they stack LLMs, RAG, and Orchestrators to handle millions of travelers without breaking the bank.

If you are building Enterprise AI, this is the blueprint for moving from "cool demo" to production scale.

🚀 In this episode, we cover:

The "Hot Tub" Revelation: How free-text AI search exposed features customers desperately wanted but couldn't find.

Real ROI Metrics: How LLMs drove a 2x increase in topic detection accuracy and freed up 1.5x of agent bandwidth.

The Booking.com AI Stack: A full breakdown of their Orchestrator → Moderation → Agent → RAG architecture.

Latency vs. Intelligence: Why they don't use GPT-5 for everything and how they decide between small models and big brains.

The Memory Problem: How to build AI that remembers user preferences without being "creepy."

00:00 Introduction to Agentic Architectures

00:30 Meet Pranav Pathak from Booking.com

01:24 Evolution of Travel Recommendations

03:41 Impact of Gen AI on Customer Service

07:29 Building an Effective AI Stack

10:32 Agentic Systems and Best Practices

13:45 Choosing Between Building and Buying AI Solutions

18:51 Evaluating AI Models for Business Use

24:10 Challenges in Human Evaluation

25:06 Recommendation System and Data Utilization

27:26 Innovations in Travel Search

29:04 Journey and Challenges in ML Integration

32:08 Managing Memory and User Data

38:07 Future of Travel Assistance

41:33 Advice for New AI Integrations

43:57 Final Thoughts and Farewell

🔗 LINKS &amp; RESOURCES:

OutShift by Cisco (Sponsor): outshift.cisco.com

VentureBeat: www.venturebeat.com

#ArtificialIntelligence #GenAI #Bookingcom #MachineLearning #AgenticAI #LLM #TechPodcast #EnterpriseAI


Subscribe to VentureBeat on YouTube: https://www.youtube.com/@VentureBeat


Subscribe to the full podcast here:

Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239

Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4

YouTube: https://www.youtube.com/VentureBeat


Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Wed, 03 Dec 2025 11:30:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:season>1</itunes:season>
      <itunes:episode>2</itunes:episode>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/2d73e790-cf89-11f0-a060-177515061a7a/image/985528ce61579a7444d741dd0ecd85e1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Inside Booking.com’s Agentic AI Stack</itunes:subtitle>
      <itunes:summary>We built AI agents by accident... and it worked. 🤯

In this episode of VentureBeat’s Beyond the Pilot, we go inside the engineering brain of Booking.com with Pranav Pathak (Director of Product Machine Learning). Pranav reveals how they "stumbled" into agentic architectures before the term even existed, how a simple text box revealed a massive missed revenue opportunity (the "Hot Tub" story), and exactly how they stack LLMs, RAG, and Orchestrators to handle millions of travelers without breaking the bank.

If you are building Enterprise AI, this is the blueprint for moving from "cool demo" to production scale.

🚀 In this episode, we cover:

The "Hot Tub" Revelation: How free-text AI search exposed features customers desperately wanted but couldn't find.

Real ROI Metrics: How LLMs drove a 2x increase in topic detection accuracy and freed up 1.5x of agent bandwidth.

The Booking.com AI Stack: A full breakdown of their Orchestrator → Moderation → Agent → RAG architecture.

Latency vs. Intelligence: Why they don't use GPT-5 for everything and how they decide between small models and big brains.

The Memory Problem: How to build AI that remembers user preferences without being "creepy."

00:00 Introduction to Agentic Architectures

00:30 Meet Pranav Pathak from Booking.com

01:24 Evolution of Travel Recommendations

03:41 Impact of Gen AI on Customer Service

07:29 Building an Effective AI Stack

10:32 Agentic Systems and Best Practices

13:45 Choosing Between Building and Buying AI Solutions

18:51 Evaluating AI Models for Business Use

24:10 Challenges in Human Evaluation

25:06 Recommendation System and Data Utilization

27:26 Innovations in Travel Search

29:04 Journey and Challenges in ML Integration

32:08 Managing Memory and User Data

38:07 Future of Travel Assistance

41:33 Advice for New AI Integrations

43:57 Final Thoughts and Farewell

🔗 LINKS &amp; RESOURCES:

OutShift by Cisco (Sponsor): outshift.cisco.com

VentureBeat: www.venturebeat.com

#ArtificialIntelligence #GenAI #Bookingcom #MachineLearning #AgenticAI #LLM #TechPodcast #EnterpriseAI


Subscribe to VentureBeat on YouTube: https://www.youtube.com/@VentureBeat


Subscribe to the full podcast here:

Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239

Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4

YouTube: https://www.youtube.com/VentureBeat


Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p><strong>We built AI agents by accident... and it worked.</strong> 🤯</p>
<p>In this episode of VentureBeat’s <em>Beyond the Pilot</em>, we go inside the engineering brain of Booking.com with Pranav Pathak (Director of Product Machine Learning). Pranav reveals how they "stumbled" into agentic architectures before the term even existed, how a simple text box revealed a massive missed revenue opportunity (the "Hot Tub" story), and exactly how they stack LLMs, RAG, and Orchestrators to handle millions of travelers without breaking the bank.</p>
<p>If you are building Enterprise AI, this is the blueprint for moving from "cool demo" to production scale.</p>
<p><strong>🚀 In this episode, we cover:</strong></p>
<ul>
  <li>
<p><strong>The "Hot Tub" Revelation:</strong> How free-text AI search exposed features customers desperately wanted but couldn't find.</p>
</li>
  <li>
<p><strong>Real ROI Metrics:</strong> How LLMs drove a 2x increase in topic detection accuracy and freed up 1.5x of agent bandwidth.</p>
</li>
  <li>
<p><strong>The Booking.com AI Stack:</strong> A full breakdown of their Orchestrator → Moderation → Agent → RAG architecture.</p>
</li>
  <li>
<p><strong>Latency vs. Intelligence:</strong> Why they <em>don't</em> use GPT-5 for everything and how they decide between small models and big brains.</p>
</li>
  <li>
<p><strong>The Memory Problem:</strong> How to build AI that remembers user preferences without being "creepy."</p>
</li>
</ul>
<p>00:00 Introduction to Agentic Architectures</p>
<p>00:30 Meet Pranav Pathak from Booking.com</p>
<p>01:24 Evolution of Travel Recommendations</p>
<p>03:41 Impact of Gen AI on Customer Service</p>
<p>07:29 Building an Effective AI Stack</p>
<p>10:32 Agentic Systems and Best Practices</p>
<p>13:45 Choosing Between Building and Buying AI Solutions</p>
<p>18:51 Evaluating AI Models for Business Use</p>
<p>24:10 Challenges in Human Evaluation</p>
<p>25:06 Recommendation System and Data Utilization</p>
<p>27:26 Innovations in Travel Search</p>
<p>29:04 Journey and Challenges in ML Integration</p>
<p>32:08 Managing Memory and User Data</p>
<p>38:07 Future of Travel Assistance</p>
<p>41:33 Advice for New AI Integrations</p>
<p>43:57 Final Thoughts and Farewell</p>
<p><strong>🔗 LINKS &amp; RESOURCES:</strong></p>
<ul>
  <li>
<p>OutShift by Cisco (Sponsor): outshift.cisco.com</p>
</li>
  <li>
<p>VentureBeat: www.venturebeat.com</p>
</li>
</ul>
<p>#ArtificialIntelligence #GenAI #Bookingcom #MachineLearning #AgenticAI #LLM #TechPodcast #EnterpriseAI</p>
<p>Subscribe to VentureBeat on YouTube: <a href="https://www.youtube.com/@VentureBeat">@VentureBeat</a></p>
<p>Subscribe to the full podcast here:</p>
<p>Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239</p>
<p>Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4</p>
<p>YouTube: https://www.youtube.com/VentureBeat</p>
<p><br></p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>2747</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[2d8f120e-cf89-11f0-a060-1bb0e4f9a88b]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU2545255799.mp3?updated=1764696558" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Beyond the Pilot - NOTION - Unpacking Ryan Nystrom's AI Journey: From Challenges to Custom Agents</title>
      <description>In our inaugural episode, we sit down with Ryan Nystrom, leader of the AI team at Notion, to pull back the curtain on Notion 3.0. Ryan reveals the journey of integrating powerful AI agents into the productivity platform and draws fascinating parallels between the current AI era and the mobile revolution he witnessed at Instagram. He shares exclusive insights into the development challenges, the critical role of tools, context, and curation, and how custom agents are poised to reshape work. Plus, Ryan offers essential advice for any company diving into the AI space.
Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Wed, 19 Nov 2025 11:00:00 -0000</pubDate>
      <itunes:title>NOTION - Unpacking Ryan Nystrom's AI Journey: From Challenges to Custom Agents</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:season>1</itunes:season>
      <itunes:episode>1</itunes:episode>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/a6f86d2a-c1a1-11f0-ae21-9fffdf88f396/image/303aa3d4fcd77adce1002a0f6735a3fb.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>In our inaugural episode, we sit down with Ryan Nystrom, leader of the AI team at Notion, to pull back the curtain on Notion 3.0. Ryan reveals the journey of integrating powerful AI agents into the productivity platform and draws fascinating parallels between the current AI era and the mobile revolution he witnessed at Instagram. He shares exclusive insights into the development challenges, the critical role of tools, context, and curation, and how custom agents are poised to reshape work. Plus, Ryan offers essential advice for any company diving into the AI space.
Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In our inaugural episode, we sit down with Ryan Nystrom, leader of the AI team at Notion, to pull back the curtain on Notion 3.0. Ryan reveals the journey of integrating powerful AI agents into the productivity platform and draws fascinating parallels between the current AI era and the mobile revolution he witnessed at Instagram. He shares exclusive insights into the development challenges, the critical role of tools, context, and curation, and how custom agents are poised to reshape work. Plus, Ryan offers essential advice for any company diving into the AI space.</p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>5356</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a766e69c-c1a1-11f0-ae21-0be86a13dfa6]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU4261455718.mp3?updated=1763160764" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>Venture Beat in Conversation: MongoDB - From Lift-and-Shift to AI-Ready Data</title>
      <description>Lift-and-shift isn’t enough. MongoDB’s Vinod Bagal breaks down how to modernize your data for AI — and why waiting could cost you your competitive edge.



Host: Sean Michael Kerner

Guest: Vinod Bagal



For more stories visit venturebeat.com
Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Mon, 20 Oct 2025 10:00:00 -0000</pubDate>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Lift-and-shift isn’t enough. MongoDB’s Vinod Bagal breaks down how to modernize your data for AI — and why waiting could cost you your competitive edge.



Host: Sean Michael Kerner

Guest: Vinod Bagal



For more stories visit venturebeat.com
Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Lift-and-shift isn’t enough. MongoDB’s Vinod Bagal breaks down how to modernize your data for AI — and why waiting could cost you your competitive edge.</p>
<p><br></p>
<p>Host: Sean Michael Kerner</p>
<p>Guest: Vinod Bagal</p>
<p><br></p>
<p>For more stories visit venturebeat.com</p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>801</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[924f0eea-acf4-11f0-b728-3367e84252e2]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU3167071938.mp3?updated=1761670988" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>VentureBeat in Conversation: FBI Veterans on AI Cyber Threats &amp; Future Defenders</title>
      <description>AI is accelerating the cyber arms race — and former FBI agents Paul Bingham and Mike Morris say most enterprises aren’t ready. In this VB in Conversation, they break down the real-world threats targeting critical infrastructure, how AI is changing the attack surface, and why smart, layered defense starts with training the next-gen cyber workforce.
Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Mon, 13 Oct 2025 10:00:00 -0000</pubDate>
      <itunes:title>FBI Veterans on AI Cyber Threats &amp; Future Defenders</itunes:title>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>AI is accelerating the cyber arms race — and former FBI agents Paul Bingham and Mike Morris say most enterprises aren’t ready. In this VB in Conversation, they break down the real-world threats targeting critical infrastructure, how AI is changing the attack surface, and why smart, layered defense starts with training the next-gen cyber workforce.
Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>AI is accelerating the cyber arms race — and former FBI agents Paul Bingham and Mike Morris say most enterprises aren’t ready. In this VB in Conversation, they break down the real-world threats targeting critical infrastructure, how AI is changing the attack surface, and why smart, layered defense starts with training the next-gen cyber workforce.</p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>733</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[189d6e86-9889-11f0-9a3b-03b1de0c6bc9]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU2554000195.mp3?updated=1761670937" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>AI CHAT: 70% of Enterprises Adopt AI Agents: The Real-World Impact</title>
      <description>Key Insights and Takeaways from VentureBeat Transform Event: AI and Enterprise Innovation

In this episode, Matt and Sam recount their experiences and insights from the recent VentureBeat Transform event, an annual gathering focused on enterprise AI. They discuss the significant takeaways, including the increasing adoption of AI agents in production, the lack of dominance by any single hyperscaler in the AI model space, the focus on practical AI solutions over super-intelligence hype, and the evolving structure of teams in the AI-driven workplace. Highlights include insights from speakers at major companies like American Express, Google, IBM, and Zoom, as well as discussions on AI safety and the changing management dynamics with AI agents. Tune in to get a comprehensive overview of the current state and future of AI in enterprise settings.

00:00 Introduction and Event Overview
00:51 Key Takeaways from VentureBeat Transform
02:46 AI Deployment in Enterprises
04:37 Insights from Industry Leaders
08:48 Hyperscalers and Model Dominance
12:12 Real-World AI Applications
14:04 Focus on Practical AI Solutions
24:08 The Future of AI Teams
30:37 Conclusion and Final Thoughts
Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Wed, 08 Oct 2025 10:00:00 -0000</pubDate>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Key Insights and Takeaways from VentureBeat Transform Event: AI and Enterprise Innovation

In this episode, Matt and Sam recount their experiences and insights from the recent VentureBeat Transform event, an annual gathering focused on enterprise AI. They discuss the significant takeaways, including the increasing adoption of AI agents in production, the lack of dominance by any single hyperscaler in the AI model space, the focus on practical AI solutions over super-intelligence hype, and the evolving structure of teams in the AI-driven workplace. Highlights include insights from speakers at major companies like American Express, Google, IBM, and Zoom, as well as discussions on AI safety and the changing management dynamics with AI agents. Tune in to get a comprehensive overview of the current state and future of AI in enterprise settings.

00:00 Introduction and Event Overview
00:51 Key Takeaways from VentureBeat Transform
02:46 AI Deployment in Enterprises
04:37 Insights from Industry Leaders
08:48 Hyperscalers and Model Dominance
12:12 Real-World AI Applications
14:04 Focus on Practical AI Solutions
24:08 The Future of AI Teams
30:37 Conclusion and Final Thoughts
Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Key Insights and Takeaways from VentureBeat Transform Event: AI and Enterprise Innovation

In this episode, Matt and Sam recount their experiences and insights from the recent VentureBeat Transform event, an annual gathering focused on enterprise AI. They discuss the significant takeaways, including the increasing adoption of AI agents in production, the lack of dominance by any single hyperscaler in the AI model space, the focus on practical AI solutions over super-intelligence hype, and the evolving structure of teams in the AI-driven workplace. Highlights include insights from speakers at major companies like American Express, Google, IBM, and Zoom, as well as discussions on AI safety and the changing management dynamics with AI agents. Tune in to get a comprehensive overview of the current state and future of AI in enterprise settings.

00:00 Introduction and Event Overview
00:51 Key Takeaways from VentureBeat Transform
02:46 AI Deployment in Enterprises
04:37 Insights from Industry Leaders
08:48 Hyperscalers and Model Dominance
12:12 Real-World AI Applications
14:04 Focus on Practical AI Solutions
24:08 The Future of AI Teams
30:37 Conclusion and Final Thoughts</p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>2160</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[6c20e872-958b-11f0-9c25-c78b922af29a]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU8261563427.mp3?updated=1761671821" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>VentureBeat in Conversation: Visa’s $3.5B Bet on AI</title>
      <description>Visa’s SVP of Data &amp; AI, Sam Hamilton, joins VentureBeat to break down the hidden costs, trade-offs, and infrastructure realities behind running over 400 AI solutions incorporating 300 AI models at global scale.
Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Mon, 06 Oct 2025 10:00:00 -0000</pubDate>
      <itunes:title>Visa’s $3.5B Bet on AI</itunes:title>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Visa’s SVP of Data &amp; AI, Sam Hamilton, joins VentureBeat to break down the hidden costs, trade-offs, and infrastructure realities behind running over 400 AI solutions incorporating 300 AI models at global scale.
Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Visa’s SVP of Data &amp; AI, Sam Hamilton, joins VentureBeat to break down the hidden costs, trade-offs, and infrastructure realities behind running over 400 AI solutions incorporating 300 AI models at global scale.</p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>794</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b379999a-958f-11f0-8ee5-57e47fb14772]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU8852093282.mp3?updated=1761671608" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>AI CHAT: Interview with Richard Seroter: Unveiling Google Cloud's AI Strategy</title>
      <description>Google Cloud Next: In-Depth Discussion with Chief Evangelist Richard Seroter

Join us for an exclusive interview with Richard Seroter, Chief Evangelist of Google Cloud, as he discusses the latest developments and insights from Google Cloud Next. Dive into conversations about AI advancements, the new agent development kit, and the multi-agent protocol, and how they are reshaping the future of cloud services and enterprise solutions. Learn about the balance between pre-built and custom agents, and Google's commitment to open-source and multi-cloud flexibility. Don't miss out on this insider look at the cutting edge of AI and cloud technology.

00:00 Introduction and Recap of Google Cloud Next
02:17 Interview with Richard Seroter Begins
02:51 Discussing the Developer Keynote and Agent Technology
03:31 The Evolution and Readiness of AI Agents
05:19 Google's Approach to AI and Agent Development
07:20 Comparing Google with Competitors in AI
09:02 Agent Development Kit and Industry Adoption
10:51 The Future of Multi-Agent Systems
16:04 Google's Open Source Strategy and Cloud Integration
21:11 Exploring Google's Interest in Agent Technology
21:45 The Future of Agent Marketplaces
22:27 Google's Role in Payment Processing for Agents
23:17 Community Adoption of Agent Protocols
26:20 Enterprise Applications of Agents
28:43 The Evolution of Agent Space
33:48 The Rise of Personal Agents
36:04 Balancing Innovation Across Google Cloud and Labs
38:00 The Impact of Pre-Built Agents
40:18 Conclusion and Final Thoughts
Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Wed, 01 Oct 2025 10:00:00 -0000</pubDate>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Google Cloud Next: In-Depth Discussion with Chief Evangelist Richard Seroter

Join us for an exclusive interview with Richard Seroter, Chief Evangelist of Google Cloud, as he discusses the latest developments and insights from Google Cloud Next. Dive into conversations about AI advancements, the new agent development kit, and the multi-agent protocol, and how they are reshaping the future of cloud services and enterprise solutions. Learn about the balance between pre-built and custom agents, and Google's commitment to open-source and multi-cloud flexibility. Don't miss out on this insider look at the cutting edge of AI and cloud technology.

00:00 Introduction and Recap of Google Cloud Next
02:17 Interview with Richard Seroter Begins
02:51 Discussing the Developer Keynote and Agent Technology
03:31 The Evolution and Readiness of AI Agents
05:19 Google's Approach to AI and Agent Development
07:20 Comparing Google with Competitors in AI
09:02 Agent Development Kit and Industry Adoption
10:51 The Future of Multi-Agent Systems
16:04 Google's Open Source Strategy and Cloud Integration
21:11 Exploring Google's Interest in Agent Technology
21:45 The Future of Agent Marketplaces
22:27 Google's Role in Payment Processing for Agents
23:17 Community Adoption of Agent Protocols
26:20 Enterprise Applications of Agents
28:43 The Evolution of Agent Space
33:48 The Rise of Personal Agents
36:04 Balancing Innovation Across Google Cloud and Labs
38:00 The Impact of Pre-Built Agents
40:18 Conclusion and Final Thoughts
Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Google Cloud Next: In-Depth Discussion with Chief Evangelist Richard Seroter

Join us for an exclusive interview with Richard Seroter, Chief Evangelist of Google Cloud, as he discusses the latest developments and insights from Google Cloud Next. Dive into conversations about AI advancements, the new agent development kit, and the multi-agent protocol, and how they are reshaping the future of cloud services and enterprise solutions. Learn about the balance between pre-built and custom agents, and Google's commitment to open-source and multi-cloud flexibility. Don't miss out on this insider look at the cutting edge of AI and cloud technology.

00:00 Introduction and Recap of Google Cloud Next
02:17 Interview with Richard Seroter Begins
02:51 Discussing the Developer Keynote and Agent Technology
03:31 The Evolution and Readiness of AI Agents
05:19 Google's Approach to AI and Agent Development
07:20 Comparing Google with Competitors in AI
09:02 Agent Development Kit and Industry Adoption
10:51 The Future of Multi-Agent Systems
16:04 Google's Open Source Strategy and Cloud Integration
21:11 Exploring Google's Interest in Agent Technology
21:45 The Future of Agent Marketplaces
22:27 Google's Role in Payment Processing for Agents
23:17 Community Adoption of Agent Protocols
26:20 Enterprise Applications of Agents
28:43 The Evolution of Agent Space
33:48 The Rise of Personal Agents
36:04 Balancing Innovation Across Google Cloud and Labs
38:00 The Impact of Pre-Built Agents
40:18 Conclusion and Final Thoughts</p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>2451</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4cd3a9c4-958a-11f0-86b6-1b2a5a50bc0a]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU8965492606.mp3?updated=1761671880" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>VentureBeat in Conversation: The AI Surge Is Coming — Is Your Network Ready?</title>
      <description>Cisco’s Anurag Dhingra joins VentureBeat’s Matt Marshall to unpack what it means to build a truly AI-ready network. From smart switching to agentic operations, Dhingra explains how enterprises must stay ahead of surging network demands — and why network intelligence is now foundational to scaling AI.
Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Mon, 29 Sep 2025 10:00:00 -0000</pubDate>
      <itunes:title>The AI Surge Is Coming — Is Your Network Ready?</itunes:title>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Cisco’s Anurag Dhingra joins VentureBeat’s Matt Marshall to unpack what it means to build a truly AI-ready network. From smart switching to agentic operations, Dhingra explains how enterprises must stay ahead of surging network demands — and why network intelligence is now foundational to scaling AI.
Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Cisco’s Anurag Dhingra joins VentureBeat’s Matt Marshall to unpack what it means to build a truly AI-ready network. From smart switching to agentic operations, Dhingra explains how enterprises must stay ahead of surging network demands — and why network intelligence is now foundational to scaling AI.</p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>907</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f2ff3cd8-958e-11f0-a6b2-e75c1e30ca5c]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU3309804502.mp3?updated=1761671624" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>AI CHAT: The Future of AI Agents: How Napkin AI Is Redefining Design</title>
      <description>Exploring Napkin.ai: Revolutionizing Graphic Design with AI Agents

Join Matt Marshall, founder and CEO of VentureBeat, and Sam Witteveen as they interview Pramod Sharma, CEO of Napkin.ai, and co-founder Jerome Scholler, about their innovative AI-powered graphic design tool. Discover how Napkin.ai has rapidly grown to 2 million beta users with its unique ability to transform text into compelling graphics effortlessly. Learn about the sophisticated backend structure involving multiple specialized AI agents and the recently introduced custom styles feature that allows users and companies to define and perfect their graphic outputs. Perfect for anyone interested in the future of AI in graphic design.

00:00 Introduction to Napkin AI
00:24 Interview with Pramod Sharma
00:44 How Napkin AI Works
00:59 The AI Agency Model
01:56 Deep Dive into Napkin AI's Features
04:34 User Experience and Customization
08:00 Future Plans and Innovations
15:11 Technical Insights and Agent Structure
28:09 Conclusion and Final Thoughts
Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Wed, 24 Sep 2025 10:00:00 -0000</pubDate>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Exploring Napkin.ai: Revolutionizing Graphic Design with AI Agents

Join Matt Marshall, founder and CEO of VentureBeat, and Sam Witteveen as they interview Pramod Sharma, CEO of Napkin.ai, and co-founder Jerome Scholler, about their innovative AI-powered graphic design tool. Discover how Napkin.ai has rapidly grown to 2 million beta users with its unique ability to transform text into compelling graphics effortlessly. Learn about the sophisticated backend structure involving multiple specialized AI agents and the recently introduced custom styles feature that allows users and companies to define and perfect their graphic outputs. Perfect for anyone interested in the future of AI in graphic design.

00:00 Introduction to Napkin AI
00:24 Interview with Pramod Sharma
00:44 How Napkin AI Works
00:59 The AI Agency Model
01:56 Deep Dive into Napkin AI's Features
04:34 User Experience and Customization
08:00 Future Plans and Innovations
15:11 Technical Insights and Agent Structure
28:09 Conclusion and Final Thoughts
Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Exploring Napkin.ai: Revolutionizing Graphic Design with AI Agents

Join Matt Marshall, founder and CEO of VentureBeat, and Sam Witteveen as they interview Pramod Sharma, CEO of Napkin.ai, and co-founder Jerome Scholler, about their innovative AI-powered graphic design tool. Discover how Napkin.ai has rapidly grown to 2 million beta users with its unique ability to transform text into compelling graphics effortlessly. Learn about the sophisticated backend structure involving multiple specialized AI agents and the recently introduced custom styles feature that allows users and companies to define and perfect their graphic outputs. Perfect for anyone interested in the future of AI in graphic design.

00:00 Introduction to Napkin AI
00:24 Interview with Pramod Sharma
00:44 How Napkin AI Works
00:59 The AI Agency Model
01:56 Deep Dive into Napkin AI's Features
04:34 User Experience and Customization
08:00 Future Plans and Innovations
15:11 Technical Insights and Agent Structure
28:09 Conclusion and Final Thoughts</p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>1819</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4b88a624-9589-11f0-a30f-b3a5ceb81bf5]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU4812007120.mp3?updated=1761671689" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>VentureBeat in Conversation: From RSAC 2025 - Inside the Cybersecurity-First AI Model &amp; Built in, Always on</title>
      <description>Part 1: Inside the Cybersecurity-First AI Model

LLMs are inherently non-deterministic — but more data isn’t always better. As Cisco’s Jeetu Patel explains, Cisco Foundation AI distilled 900 billion tokens down to the most relevant 5 billion to create the industry’s first AI model purpose-built for security.

Part 2: AI Security: Built-in, Always-On

Cisco’s Tom Gillis and Splunk’s Mike Horn reveal how distributed enforcement, self-upgrading firewalls, and AI-powered infrastructure are redefining security architecture and transforming what a SOC can do in the AI era.
Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Mon, 22 Sep 2025 10:00:00 -0000</pubDate>
      <itunes:title>From RSAC 2025 - Inside the Cybersecurity-First AI Model &amp; Built in, Always on</itunes:title>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/411e1b54-933d-11f0-a801-ebd754ff0e24/image/b2ff5d6fc76d8bcf490574a41dedce31.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Part 1: Inside the Cybersecurity-First AI Model

LLMs are inherently non-deterministic — but more data isn’t always better. As Cisco’s Jeetu Patel explains, Cisco Foundation AI distilled 900 billion tokens down to the most relevant 5 billion to create the industry’s first AI model purpose-built for security.

Part 2: AI Security: Built-in, Always-On

Cisco’s Tom Gillis and Splunk’s Mike Horn reveal how distributed enforcement, self-upgrading firewalls, and AI-powered infrastructure are redefining security architecture and transforming what a SOC can do in the AI era.
Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Part 1: Inside the Cybersecurity-First AI Model</p>
<p>LLMs are inherently non-deterministic — but more data isn’t always better. As Cisco’s Jeetu Patel explains, Cisco Foundation AI distilled 900 billion tokens down to the most relevant 5 billion to create the industry’s first AI model purpose-built for security.</p>
<p><br></p>
<p>Part 2: AI Security: Built-in, Always-On</p>
<p>Cisco’s Tom Gillis and Splunk’s Mike Horn reveal how distributed enforcement, self-upgrading firewalls, and AI-powered infrastructure are redefining security architecture and transforming what a SOC can do in the AI era.</p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>1627</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[41820e48-933d-11f0-a801-039375376deb]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU4232299358.mp3?updated=1761671738" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>VentureBeat in Conversation: Credit Karma’s path to scalable AI</title>
      <description>Originally published in September 2024

As orgs race to progress from proof of concept to full, scalable production in AI, there are many lessons and resets. Key to these are choices made around infrastructure. Intuit’s Credit Karma, with over 100 million users, has been one of AI’s success stories. Vishnu Ram, Credit Karma’s VP of Engineering, spoke with VB about those lessons – and how the learning will continue.
Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Wed, 17 Sep 2025 10:00:00 -0000</pubDate>
      <itunes:title>Credit Karma’s path to scalable AI</itunes:title>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/b990dc5c-9338-11f0-bdd8-736a0bb785ed/image/f2b98d3a692c55160d4322db957b3766.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Originally published in September 2024

As orgs race to progress from proof of concept to full, scalable production in AI, there are many lessons and resets. Key to these are choices made around infrastructure. Intuit’s Credit Karma, with over 100 million users, has been one of AI’s success stories. Vishnu Ram, Credit Karma’s VP of Engineering, spoke with VB about those lessons – and how the learning will continue.
Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Originally published in September 2024

As orgs race to progress from proof of concept to full, scalable production in AI, there are many lessons and resets. Key to these are choices made around infrastructure. Intuit’s Credit Karma, with over 100 million users, has been one of AI’s success stories. Vishnu Ram, Credit Karma’s VP of Engineering, spoke with VB about those lessons – and how the learning will continue.</p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>1067</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b9add780-9338-11f0-bdd8-57f00b153db2]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU7636656651.mp3?updated=1761671029" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>VentureBeat in Conversation: Inside the Super Bowl’s cyber war: NFL CISO on AI threats, deepfakes and defending the big game</title>
      <description>NFL CISO Tomás Maldonado speaks with VentureBeat about defending Super Bowl LIX from adversarial attacks that potentially include weaponized AI, endpoint attacks, deepfakes, and finely tuned social engineering – and require collaboration with the FBI and Secret Service.
Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Mon, 15 Sep 2025 10:00:00 -0000</pubDate>
      <itunes:title>Inside the Super Bowl’s cyber war: NFL CISO on AI threats, deepfakes and defending the big game</itunes:title>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/fee4ddce-91c3-11f0-8374-4f87f04fa6d9/image/da3faf59ea333339df9d3e8d31372944.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>NFL CISO Tomás Maldonado speaks with VentureBeat about defending Super Bowl LIX from adversarial attacks that potentially include weaponized AI, endpoint attacks, deepfakes, and finely tuned social engineering – and require collaboration with the FBI and Secret Service.
Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>NFL CISO Tomás Maldonado speaks with VentureBeat about defending Super Bowl LIX from adversarial attacks that potentially include weaponized AI, endpoint attacks, deepfakes, and finely tuned social engineering – and require collaboration with the FBI and Secret Service.</p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>924</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a7493032-91c4-11f0-b722-6f82f0d8f3f5]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU3697602352.mp3?updated=1761671021" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>VentureBeat in Conversation: Handling Greater Demands in the Data Center with Ken Spangler, FedEx</title>
      <description>Originally published in July 2023

FedEx handles 100 billion daily transactions, from tracking packages to routing flights. To keep up with this massive data demand, the company has invested in data center performance, especially in high-performance computing. Ken Spangler, EVP of IT and CIO of Global Operations Technology, FedEx, talks with VB Editor-in-Chief Matt Marshall about how the company has adopted the co-location model for the data center, which allows it to leverage the benefits of cloud computing while maintaining control of the technology stack. He also reveals how FedEx is partnering with edge computing providers to deploy “mini data centers” that can reduce latency and enhance security for its customers – and, of course, the role of AI and automation.
Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Sat, 13 Sep 2025 10:00:00 -0000</pubDate>
      <itunes:title>Handling Greater Demands in the Data Center with Ken Spangler, FedEx</itunes:title>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/00e65790-8e60-11f0-9a60-1b0dea1e8424/image/c2d71184e4846057066baa8be42231d2.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Originally published in July 2023

FedEx handles 100 billion daily transactions, from tracking packages to routing flights. To keep up with this massive data demand, the company has invested in data center performance, especially in high-performance computing. Ken Spangler, EVP of IT and CIO of Global Operations Technology, FedEx, talks with VB Editor-in-Chief Matt Marshall about how the company has adopted the co-location model for the data center, which allows it to leverage the benefits of cloud computing while maintaining control of the technology stack. He also reveals how FedEx is partnering with edge computing providers to deploy “mini data centers” that can reduce latency and enhance security for its customers – and, of course, the role of AI and automation.
Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Originally published in July 2023

FedEx handles 100 billion daily transactions, from tracking packages to routing flights. To keep up with this massive data demand, the company has invested in data center performance, especially in high-performance computing. Ken Spangler, EVP of IT and CIO of Global Operations Technology, FedEx, talks with VB Editor-in-Chief Matt Marshall about how the company has adopted the co-location model for the data center, which allows it to leverage the benefits of cloud computing while maintaining control of the technology stack. He also reveals how FedEx is partnering with edge computing providers to deploy “mini data centers” that can reduce latency and enhance security for its customers – and, of course, the role of AI and automation.</p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>770</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a7490440-91c4-11f0-b722-53253a3decc0]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU3560493034.mp3?updated=1761671595" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>VentureBeat in Conversation: Payment giant PayPal tackles the data sustainability challenge</title>
      <description>Originally published in October 2023

VentureBeat Editor-in-Chief Matt Marshall talks with Archana Deskus, EVP &amp; CIO of PayPal, on how the payment giant, with more than 400 million users, is meeting the need for more computation, more power and more storage, particularly during peak periods — all while committing to sustainability and greater efficiency. Deskus dives into bursting capacity, data center consolidation, asset utilization, efficient code and more.
Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Sat, 13 Sep 2025 10:00:00 -0000</pubDate>
      <itunes:title>Payment giant PayPal tackles the data sustainability challenge</itunes:title>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/9cc31004-8e60-11f0-ad7f-a7c4efc023f5/image/cce36ee1c2d7dc41369b249145f58c5f.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Originally published in October 2023

VentureBeat Editor-in-Chief Matt Marshall talks with Archana Deskus, EVP &amp; CIO of PayPal, on how the payment giant, with more than 400 million users, is meeting the need for more computation, more power and more storage, particularly during peak periods — all while committing to sustainability and greater efficiency. Deskus dives into bursting capacity, data center consolidation, asset utilization, efficient code and more.
Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Originally published in October 2023

VentureBeat Editor-in-Chief Matt Marshall talks with Archana Deskus, EVP &amp; CIO of PayPal, on how the payment giant, with more than 400 million users, is meeting the need for more computation, more power and more storage, particularly during peak periods — all while committing to sustainability and greater efficiency. Deskus dives into bursting capacity, data center consolidation, asset utilization, efficient code and more.</p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>773</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a7494be4-91c4-11f0-b722-3f7371c703a1]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU7238738716.mp3?updated=1761671546" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>VentureBeat in Conversation: Handling Greater Demands in the Data Center with Kamran Ziaee, Verizon</title>
      <description>Originally published in July 2023

Kamran Ziaee, SVP, Technology Strategy &amp; Global Infrastructure at Verizon, sits down with VB Editor-in-Chief Matt Marshall to discuss how data centers now must handle greater and greater demands. While high-performance computing used to be reserved for exceptionally data-intensive applications, like gene sequencing or self-driving cars, every industry is seeing real use cases for adopting HPC. Verizon is an example of a company that says HPC is now table stakes for many applications it runs in its data centers. While Verizon has consolidated down to three data centers from nine, it has also invested in multi-access edge computing (MEC) to ensure rapid response time for customers. We explore his unique approach to an optimized hybrid cloud, where many critical applications stay on-prem and highly elastic applications migrate to the public cloud to enjoy performance gains there. We touch on the Gen AI craze and the proof of concepts the company is running, which also point to a hybrid future.
Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Sat, 13 Sep 2025 10:00:00 -0000</pubDate>
      <itunes:title>Handling Greater Demands in the Data Center with Kamran Ziaee, Verizon</itunes:title>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/0a67ba9c-8e61-11f0-b892-ff2956451b0c/image/c8ea7bfb26087d20a334c6308cceb0e8.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Originally published in July 2023

Kamran Ziaee, SVP, Technology Strategy &amp; Global Infrastructure at Verizon, sits down with VB Editor-in-Chief Matt Marshall to discuss how data centers now must handle greater and greater demands. While high-performance computing used to be reserved for exceptionally data-intensive applications, like gene sequencing or self-driving cars, every industry is seeing real use cases for adopting HPC. Verizon is an example of a company that says HPC is now table stakes for many applications it runs in its data centers. While Verizon has consolidated down to three data centers from nine, it has also invested in multi-access edge computing (MEC) to ensure rapid response time for customers. We explore his unique approach to an optimized hybrid cloud, where many critical applications stay on-prem and highly elastic applications migrate to the public cloud to enjoy performance gains there. We touch on the Gen AI craze and the proof of concepts the company is running, which also point to a hybrid future.
Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Originally published in July 2023

Kamran Ziaee, SVP, Technology Strategy &amp; Global Infrastructure at Verizon, sits down with VB Editor-in-Chief Matt Marshall to discuss how data centers now must handle greater and greater demands. While high-performance computing used to be reserved for exceptionally data-intensive applications, like gene sequencing or self-driving cars, every industry is seeing real use cases for adopting HPC. Verizon is an example of a company that says HPC is now table stakes for many applications it runs in its data centers. While Verizon has consolidated down to three data centers from nine, it has also invested in multi-access edge computing (MEC) to ensure rapid response time for customers. We explore his unique approach to an optimized hybrid cloud, where many critical applications stay on-prem and highly elastic applications migrate to the public cloud to enjoy performance gains there. We touch on the Gen AI craze and the proof of concepts the company is running, which also point to a hybrid future.</p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>788</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a74922d6-91c4-11f0-b722-8b2ab71c3154]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU3298705986.mp3?updated=1761671574" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>VentureBeat in Conversation: Gen AI: What the enterprise is getting right, and wrong</title>
      <description>Originally released in February 2024

In his role as Global Leader for Tech and Digital Advantage at the Boston Consulting Group, Vlad Lukic has overseen hundreds of AI deployments. Now working with scores of enterprise companies racing to embrace gen AI, he finds that critical considerations are often overlooked, such as readiness of data and pivotal financial factors. Missteps such as deploying AI where it’s not really needed and neglecting the essential task of managing organizational change for seamless AI adoption can also tank the best intentions to capitalize on gen AI. He also has lots of advice on how to do it right.
Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Sat, 13 Sep 2025 10:00:00 -0000</pubDate>
      <itunes:title>Gen AI: What the enterprise is getting right, and wrong</itunes:title>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/cfbf8296-8e5e-11f0-bebf-5fdc8391da6c/image/e564accd679a3fc673375bd7a6e97e0e.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Originally released in February 2024

In his role as Global Leader for Tech and Digital Advantage at the Boston Consulting Group, Vlad Lukic has overseen hundreds of AI deployments. Now working with scores of enterprise companies racing to embrace gen AI, he finds that critical considerations are often overlooked, such as readiness of data and pivotal financial factors. Missteps such as deploying AI where it’s not really needed and neglecting the essential task of managing organizational change for seamless AI adoption can also tank the best intentions to capitalize on gen AI. He also has lots of advice on how to do it right.
Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Originally released in February 2024

In his role as Global Leader for Tech and Digital Advantage at the Boston Consulting Group, Vlad Lukic has overseen hundreds of AI deployments. Now working with scores of enterprise companies racing to embrace gen AI, he finds that critical considerations are often overlooked, such as readiness of data and pivotal financial factors. Missteps such as deploying AI where it’s not really needed and neglecting the essential task of managing organizational change for seamless AI adoption can also tank the best intentions to capitalize on gen AI. He also has lots of advice on how to do it right.</p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>768</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a7495288-91c4-11f0-b722-e3652bc736ad]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU1406840488.mp3?updated=1761671614" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>VentureBeat in Conversation: eBay doubles down on data efficiency</title>
      <description>Originally published in October 2023

VentureBeat Editor-in-Chief Matt Marshall talks with Mazen Rawashdeh, SVP &amp; CTO of eBay, about how one of the world’s largest online marketplaces is using innovative technologies to power its data centers and reduce its environmental impact while meeting the growing demand for data processing in the age of generative AI. Rawashdeh explores the importance of fungibility between CPUs and GPUs, hybrid cloud, open source, running its data centers at 85% utilization and more.
Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Sat, 13 Sep 2025 10:00:00 -0000</pubDate>
      <itunes:title>eBay doubles down on data efficiency</itunes:title>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/2abd14a6-8e5f-11f0-bb61-6beba991795e/image/ccff4d1132ccb09ef607fa1d90ea6130.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Originally published in October 2023

VentureBeat Editor-in-Chief Matt Marshall talks with Mazen Rawashdeh, SVP &amp; CTO of eBay, about how one of the world’s largest online marketplaces is using innovative technologies to power its data centers and reduce its environmental impact while meeting the growing demand for data processing in the age of generative AI. Rawashdeh explores the importance of fungibility between CPUs and GPUs, hybrid cloud, open source, running its data centers at 85% utilization and more.
Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Originally published in October 2023

VentureBeat Editor-in-Chief Matt Marshall talks with Mazen Rawashdeh, SVP &amp; CTO of eBay, about how one of the world’s largest online marketplaces is using innovative technologies to power its data centers and reduce its environmental impact while meeting the growing demand for data processing in the age of generative AI. Rawashdeh explores the importance of fungibility between CPUs and GPUs, hybrid cloud, open source, running its data centers at 85% utilization and more.</p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>761</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a74945b8-91c4-11f0-b722-a75398a4dafa]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU6647536098.mp3?updated=1761670957" length="0" type="audio/mpeg"/>
    </item>
    <item>
      <title>VentureBeat: Scaling Up with Databricks - Achieving the potential of generative AI</title>
      <description>Originally released in October 2023

How do companies identify the best use cases for gen AI — and then navigate the complexity of AI-first products and tools to drive that innovation at scale? VB’s editorial director, Michael Nuñez, speaks with Databricks co-founder Reynold Xin and Vijoy Pandey, SVP at Outshift by Cisco, about the many factors that will determine how organizations succeed, or don’t, during a time of, as Xin says, bottom-up innovation.
Learn more about your ad choices. Visit megaphone.fm/adchoices</description>
      <pubDate>Wed, 10 Sep 2025 17:01:00 -0000</pubDate>
      <itunes:title>Scaling Up with Databricks: Achieving the potential of generative AI</itunes:title>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>VentureBeat</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/093234e4-8e62-11f0-b88c-0f2a7d4cfe07/image/4dc8eaa150e5f5799cf150c9f483eac4.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle></itunes:subtitle>
      <itunes:summary>Originally released in October 2023

How do companies identify the best use cases for gen AI — and then navigate the complexity of AI-first products and tools to drive that innovation at scale? VB’s editorial director, Michael Nuñez, speaks with Databricks co-founder Reynold Xin and Vijoy Pandey, SVP at Outshift by Cisco, about the many factors that will determine how organizations succeed, or don’t, during a time of, as Xin says, bottom-up innovation.
Learn more about your ad choices. Visit megaphone.fm/adchoices</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Originally released in October 2023</p>
<p>How do companies identify the best use cases for gen AI — and then navigate the complexity of AI-first products and tools to drive that innovation at scale? VB’s editorial director, Michael Nuñez, speaks with Databricks co-founder Reynold Xin and Vijoy Pandey, SVP at Outshift by Cisco, about the many factors that will determine how organizations succeed, or don’t, during a time of, as Xin says, bottom-up innovation.</p><p> </p><p>Learn more about your ad choices. Visit <a href="https://megaphone.fm/adchoices">megaphone.fm/adchoices</a></p>]]>
      </content:encoded>
      <itunes:duration>844</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a7493e42-91c4-11f0-b722-07d4e2087782]]></guid>
      <enclosure url="https://traffic.megaphone.fm/UTEAU9846164791.mp3?updated=1761670971" length="0" type="audio/mpeg"/>
    </item>
  </channel>
</rss>
