What to prioritize, what to expect, and what questions to ask before you leave Las Vegas.
Mandalay Bay · April 22–24
TL;DR — 3 things to do before you land in Vegas
If you read nothing else, read this.
Jeremy Hehl — Chief Evangelist, Foresite Cybersecurity
Presenting three sessions at Google Cloud Next 2026
Jeremy is the only practitioner at this conference presenting a complete governance framework for Agentic SOC — from Glass Box audit trails to MCP-powered investigations. Want 15 minutes with him at the conference? Book here →
SECLT108 · Lightning Talk · Jeremy Hehl, Foresite
Beyond the Black Box: Defensible Governance for the Agentic Era ↗
The most direct answer to the question every CISO is sitting with: when AI agents make decisions autonomously, what does accountability look like? Glass Box audit trails, least-privilege enforcement, visible and defensible outcomes. If you attend one Foresite session, make it this one.
BRK1-015 · Breakout · Jeremy Hehl, Foresite + Wunan Li, Google Cloud
AI-Driven, Human-Led: Accelerate Security Outcomes with an Agentic SOC ↗
How agentic SOC works in production: faster threat detection, human-in-the-loop validation, and AI agents in Google Security Operations supercharging security teams. Not a product demo — a real operational framework.
SECLT118 · Lightning Talk · Jeremy Hehl, Foresite + Jonas Kelley, Google Cloud
The Engine and the Operator: Realizing High-Fidelity Outcomes with Google SecOps and MCP ↗
The technical layer underneath the Agentic SOC story. How MCP connects Gemini agents to partner tools to enrich investigations — and where Foresite's human-in-the-loop validation makes remediation repeatable. Send your architects here.
BRK1-087 · Breakout · Anton Chuvakin, Google Cloud + Allie Mellen, Forrester
AI SOC vs. AI in your SOC: What is real today and what is coming ↗
The most important session for CISOs who want to cut through the AI hype. A Google security leader debates a Forrester analyst on what's actually operational versus what's 18 months away. Rare: you'll get real pushback at a vendor conference.
BRK1-085 · Breakout · Spencer Lichtenstein + Wendy Willner, Google Cloud
Agentic defense: Security operations at machine speed ↗
Google's product roadmap for agentic SOC, live demo included. Triage, Investigation, Enrichment, and Response Plan agents working together. Ask about governance controls that prevent false-positive suppression from becoming a liability.
BRK1-103 · Breakout · Nick Bennett + Jurgen Kutscher, Mandiant / Google Cloud
Cybercrime trends: Lessons from the front lines ↗
The annual M-Trends session is consistently the most intelligence-dense content at GCN. This year: how threat actors are leveraging AI, and how organizations are adapting. The Mandiant team has more real IR data than anyone else presenting this week. Don't skip it.
BRK2-205 · Breakout · Google Cloud Offensive Security Services
Securing AI: Real-world lessons and a roadmap for proactive defense ↗
A year of practical lessons securing production AI systems — vulnerabilities found, governance gaps, AI-specific threat modeling in practice. More useful than the conceptual sessions because it's grounded in what actually broke in real deployments.
BRK1-086 · Breakout · Vicente Diaz, Google Cloud + Alexander Pabst, CISO, Allianz SE
Agentic Threat Intelligence: Your new AI security teammate ↗
GTI with a conversational AI interface, demonstrated alongside a real enterprise CISO who's running it in production. The Allianz co-presenter is the credibility signal — this is a peer conversation, not a product demo.
SECW101 · Capture the Flag Lab · Google Cloud Security
Agentic SOC Experience ↗
Not a competitive CTF — an immersive lab where you work directly with Google's SecOps Alert Triage Agent and Agentic GTI. Send your practitioners here. It's the fastest way to form a real opinion about whether agentic SOC is ready for your environment.
The Security Hub is where the real conversations happen. Don't spend your time watching demos you could watch on YouTube. Come with these.
"Every vendor at this conference will use the word agentic. The question is what it means when the AI gets it wrong."
There's an irony in attending a conference about enterprise AI adoption without having audited your own.
The average mid-market organization has dozens of unsanctioned AI tools running across its workforce — LLM integrations, AI coding assistants, shadow agents built by individual teams, third-party SaaS with AI features enabled by default. Most security teams have no visibility into them.
Employees paste sensitive data into AI tools that store, train on, or transmit that data outside your control. Most acceptable use policies don't cover tools that didn't exist when they were written.
AI agents built without security review can be granted permissions, make API calls, and take actions in your environment. If you haven't mapped what's running, you can't govern it.
Third-party AI integrations introduce model dependencies you haven't vetted. A compromised or misconfigured model in your stack is a new class of supply chain risk.
The first step is discovery. Before you can govern AI use in your organization, you need to know what's actually running. That's where most security teams are right now — and it's a conversation worth having at the Security Hub this week.
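As an illustrative sketch of what that discovery step can look like (this is a hypothetical example, not Foresite's methodology), a first pass is often as simple as scanning egress or proxy logs for requests to known AI service domains. The domain list and log format below are placeholder assumptions:

```python
# Illustrative sketch only: build a first inventory of shadow AI use
# by counting egress-log requests to known AI service domains.
# The domain list and log format here are hypothetical examples.
from collections import Counter

AI_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com"}

def shadow_ai_hits(log_lines):
    """Count requests to known AI endpoints, grouped by domain."""
    hits = Counter()
    for line in log_lines:
        # Assumes whitespace-separated fields with the destination
        # domain as one field (a simplified, hypothetical log format).
        for field in line.split():
            if field in AI_DOMAINS:
                hits[field] += 1
    return hits

logs = [
    "10.0.0.5 api.openai.com GET /v1/chat/completions",
    "10.0.0.7 claude.ai POST /chat",
    "10.0.0.5 api.openai.com GET /v1/embeddings",
]
print(shadow_ai_hits(logs))
```

A real deployment would pull from DNS, proxy, and CASB telemetry and maintain a far larger domain list, but even a crude count like this surfaces which teams are already using which tools — exactly the visibility the Security Hub conversation starts from.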
While you're at the Security Hub, your engineering and architecture teams should be in the AI platform sessions. The infrastructure being announced there — Gemini Enterprise, Vertex AI Agent Builder, MCP at scale — is what every security announcement is built on. Understanding it gives you better questions to ask vendors.
Four sessions worth routing your team to: