Google Cloud Next '26 opened on April 22 with a Wiz and Google Security Operations announcement that most of the coverage is getting wrong. Not factually wrong — most of the trade press is accurate on what was announced. But the framing is small. The dominant story is "Google and Wiz shipped a better integration." The actual story is that the architectural center of gravity for a cloud-native SOC is moving, and this announcement is the first release where that movement becomes unmistakable.
Context is moving upstream, from post-detection enrichment to pre-detection resolution. Detection authoring is moving from a craft to a managed supply chain. Remediation is moving from ticket-driven to agent-driven. And the governance layer that used to be a compliance afterthought is becoming the defining design decision for whether these systems are safe to operate at all. The SOCs that read this announcement as a tooling update will spend the next 18 months catching up to the SOCs that read it as an architectural signal.
The thesis I want to put on the table: runbook latency is now an attack surface, and unaudited autonomy is now a liability surface. Mandiant's M-Trends 2026 report shows the median time between initial access and hand-off to a secondary threat actor has collapsed from more than eight hours to 22 seconds in three years. Every delay built into your workflow is something attackers are exploiting. And every autonomous action an agent takes without an audit trail is something a regulator, an auditor, or a board will ask you to defend after an incident. Those two truths, together, are what this announcement is really responding to.
That's the lens I'm reading Next '26 through, and I think it's the lens SOC leaders need to read it through to make the right calls about how to invest between now and 2027.
The four architectural shifts Foresite sees in the Wiz and Google SecOps integration announcement. Each is explored in the observations that follow.
The piece of the announcement getting the most amplification is the updated integration between Wiz Defend and Google Security Operations. I'm already seeing it described as "native" in secondary coverage. Google's own wording is more careful. They said they "updated how we integrate security detections from Wiz Defend with Google Security Operations and Mandiant Threat Defense to help analysts more easily configure automatic threat information forwarding." That is meaningful, but it is not a replacement for the existing Wiz log parser or the Chronicle UDM ingestion path.
For SOC operators right now, this means something specific: if you have custom parsing, normalization, or SOAR content that sits between Wiz and Chronicle UDM today, none of that is automatically obsolete. Read Google's Wiz log parser documentation alongside the Next '26 wording and decide where the new forwarding path adds value versus where your existing pipeline is already doing the work. Inventory every custom parser and normalization rule that touches Wiz data, and label each as still needed, potentially redundant, or in need of a side-by-side test.
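One way to make that inventory concrete is a simple per-parser record with a named owner and a disposition. The sketch below is illustrative Python, not anything from Google's or Wiz's documentation; the parser names, owners, and statuses are placeholders you would replace with your own.

```python
# Illustrative sketch of a Wiz-to-UDM parser inventory. Parser names,
# owners, and statuses are hypothetical, not drawn from the announcement.
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    STILL_NEEDED = "still needed"                     # no overlap with the new forwarding path
    POTENTIALLY_REDUNDANT = "potentially redundant"   # new path may already cover it
    NEEDS_SIDE_BY_SIDE = "needs side-by-side test"    # run both, compare the UDM output


@dataclass
class ParserRecord:
    name: str          # custom parser or normalization rule identifier
    wiz_source: str    # which Wiz data it touches (findings, issues, audit log, ...)
    downstream: str    # what consumes the normalized events (detections, SOAR, dashboards)
    owner: str         # named owner responsible for the disposition decision
    status: Status


inventory = [
    ParserRecord("wiz-issues-custom-severity", "Wiz Issues export",
                 "priority-routing SOAR playbook", "detection-eng", Status.NEEDS_SIDE_BY_SIDE),
    ParserRecord("wiz-audit-log-normalizer", "Wiz audit log",
                 "admin-activity detections", "cloud-sec", Status.STILL_NEEDED),
]

for record in inventory:
    print(f"{record.name}: {record.status.value} (owner: {record.owner})")
```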
But the larger signal matters more than the immediate audit. Google and Wiz are converging on a model where context travels with the detection, not after it. The direction of travel is clear even if this specific release is a bridge rather than a destination. SOC architectures that assume Wiz is a source to be parsed and Google SecOps is a platform to be filled are going to age badly. The teams that win the next cycle are the ones treating this integration as the first wave of a fundamental collapse of the Wiz-to-SOC boundary, not as a forwarding configuration change.
Google announced three new agents for Google Security Operations. Threat Hunting and Detection Engineering are in preview. Third-Party Context is listed as coming soon to preview. They are at different maturity levels and should not be treated as a single capability.
The Detection Engineering agent is the one I'm watching most closely. Google's description is that it "can identify coverage gaps and create new detections for threat scenarios." The observation I want SOC leaders to sit with is this: agent-generated detections don't reduce the amount of work your team does. They move it. Authoring time goes down. Validation time goes up. You now need a review step between the agent's output and your production rule set, and that step has to be repeatable, documented, and fast enough not to become the bottleneck.
I don't have enough reps on this yet to know what the right ratio looks like for our team. What I do know is that the teams who will get hurt are the ones who treat agent-generated detections like human-authored ones and push them straight to production. That produces false positive fatigue inside a sprint.
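One way to make that review step concrete is a promotion gate every agent-generated rule has to clear before it touches the production rule set. The sketch below is a minimal illustration; the checks, thresholds, and field names are my assumptions about what a gate could verify, not anything Google has published about the Detection Engineering agent.

```python
# Minimal sketch of a promotion gate for agent-generated detections.
# Fields and thresholds are illustrative assumptions, not part of
# Google SecOps or the Detection Engineering agent.
from dataclasses import dataclass, field


@dataclass
class CandidateDetection:
    rule_id: str
    source: str                                   # "agent" or "human"
    mitre_techniques: list[str] = field(default_factory=list)
    backtest_alert_count: int = 0                 # alerts produced over a historical replay
    backtest_days: int = 0
    reviewer: str | None = None                   # named human who signed off


def promotion_blockers(rule: CandidateDetection,
                       max_alerts_per_day: float = 5.0) -> list[str]:
    """Return the reasons a candidate rule cannot go to production yet."""
    blockers = []
    if not rule.mitre_techniques:
        blockers.append("no ATT&CK mapping, coverage claim is unverifiable")
    if rule.backtest_days < 7:
        blockers.append("backtest window shorter than 7 days")
    elif rule.backtest_alert_count / rule.backtest_days > max_alerts_per_day:
        blockers.append("projected alert volume exceeds the false-positive budget")
    if rule.source == "agent" and rule.reviewer is None:
        blockers.append("agent-generated rule has no named human reviewer")
    return blockers


candidate = CandidateDetection(rule_id="agent-gcp-sa-key-abuse-001", source="agent",
                               mitre_techniques=["T1098.001"],
                               backtest_alert_count=12, backtest_days=14)
print(promotion_blockers(candidate))  # -> ['agent-generated rule has no named human reviewer']
```

The specific checks matter less than the fact that the gate is scripted, versioned, and fast enough to run on every candidate, because that is what keeps validation from becoming the new bottleneck.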
But the bigger point for how SOCs will be built: detection engineering is becoming a supply chain discipline. The question stops being "can our detection engineer write a good rule?" and starts being "can our team validate, version, and govern a stream of agent-generated detections at rate?" That's a different hiring profile, a different process model, and a different set of tools. Most SOCs are still organized for the first model. The ones that re-organize for the second one fastest will have a coverage advantage that compounds quarter over quarter. The ones that don't will fall behind not because their analysts are worse, but because their operating model is wrong.
A point of clarity worth stating: Wiz Blue Agent became generally available on March 30, 2026, not at Next. It has been available for a few weeks. My team has had hands on it. What Next '26 added was tighter integration with the Google stack, not the agent itself.
What Blue Agent does, in operator terms: when my analyst is investigating a cloud incident, they no longer start by opening three tabs (Wiz Security Graph, identity provider, runtime telemetry) and stitching context together by hand. Blue Agent pulls from the Security Graph and related sources and presents context alongside the investigation. Wiz's own positioning describes it as an investigation partner that compresses manual correlation time.
What I'm watching in my own environment is whether the context the agent surfaces is the context my analysts would have looked for. If it is, investigation time compresses. If it surfaces a different set of context, we've added an evaluation step where the analyst has to decide whether to trust the agent's framing. Both are fine outcomes. They require different training.
What I think this means for SOC leadership is more important than the tool itself. For 15 years, the profile of a good tier-one analyst has been "can execute a playbook under pressure." For the next five, it's going to be "can evaluate an agent's framing under pressure." Those are different skills. The first is procedural. The second is judgmental. If you're hiring and training for the first and deploying agents that require the second, your bench depth is going to erode quietly until it fails loudly. This is a hiring and development problem dressed up as a tooling announcement.
Wiz Green Agent is in public preview. It generates environment-specific remediation guidance drawn from the Wiz Security Graph, identity ownership, and historical remediation patterns. Wiz Workflows supports human-in-the-loop approvals. Those are the facts.
Here's the operational question for SOC leaders who run change management: where does human approval live in your current process? For most teams I've talked to, approval happens at the ticket stage. A finding gets opened, a ticket is created, someone approves the change, and then something gets executed. When a remediation agent is in the loop, approval at the ticket stage is arguably too late — the agent has already drafted the action, surfaced the context, and is effectively waiting on a gate.
The conversation I'm having with my team is about defining, explicitly, where within Wiz Workflows human approval is required before agent execution. Not as a policy statement. As a configuration step, documented, with a named owner. If you don't define it, the default behavior becomes your policy.
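For illustration, here is roughly what that documented artifact could look like. This is not Wiz Workflows syntax; it is the policy document the Workflows configuration gets checked against, and the action categories, conditions, and owners are placeholder assumptions.

```python
# Sketch of an approval-gate policy as a version-controlled artifact.
# This is NOT Wiz Workflows configuration syntax; action categories,
# conditions, and owners are illustrative placeholders.
APPROVAL_GATES = {
    "revoke_iam_key": {
        "human_approval_required": True,
        "approver_role": "cloud-sec on-call",
        "owner": "j.doe (cloud security lead)",      # named owner of this gate
        "rationale": "identity changes can break production workloads",
    },
    "quarantine_workload": {
        "human_approval_required": True,
        "approver_role": "SOC shift lead",
        "owner": "a.smith (SOC manager)",
        "rationale": "availability impact; requires incident owner sign-off",
    },
    "tag_resource_for_review": {
        "human_approval_required": False,            # low blast radius, agent may proceed
        "owner": "a.smith (SOC manager)",
        "rationale": "no configuration change, audit trail only",
    },
}


def requires_human(action: str) -> bool:
    """Fail closed: an action with no documented gate always requires approval."""
    gate = APPROVAL_GATES.get(action)
    return True if gate is None else gate["human_approval_required"]
```

The fail-closed default in requires_human is the design choice that matters most here: an action nobody wrote a gate for should require approval rather than inherit whatever the platform default happens to be.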
The larger point: the governance question around autonomous remediation has been something SOCs could defer for the last three years. You could say "we'll figure it out when it matters." It matters now. If my team acts on Green Agent guidance and something goes wrong, the board will ask me what triggered the action, what data the agent used, and who validated it. I'd rather be able to answer that question from the audit trail than from memory.
This is where I think the industry is underestimating the shift. The gap between the SOCs that operate these tools well and the SOCs that have a bad quarter is not going to be technical. It's going to be governance. The teams that treated "accountability for autonomous actions" as a design requirement from day one will be fine. The teams that treated it as a future problem will find out in front of a regulator or a board that the future started in April 2026.
Not sure where your approval gates should live? Talk to a Catalyst engineer →
This one is less about Next '26 specifically and more about what happens when you stand up Wiz Workflows alongside an existing SOC on-call rotation. Most teams I've seen have two separate escalation paths: one for cloud and Wiz-side issues (often routed to cloud engineering or a Wiz administrator) and one for SOC-side detections (routed to the on-call analyst). Agentic workflows don't respect that boundary. A remediation action that started from a Wiz finding can now trigger a response that lands in the SOC queue, and vice versa.
What breaks: two teams reaching for the same incident with no defined handoff. Or worse, both teams assuming the other has it.
What I'm asking my team to do is define a single incident owner for cross-surface events, with the cloud and SOC paths feeding the same rotation, not two parallel ones. This is also where the 22-second hand-off number lives in practice. If the escalation path has more latency than the adversary's hand-off window, the tooling speed doesn't help — the human decision path is the bottleneck.
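A crude but useful exercise is to put numbers on that comparison: sum the stages of your cross-surface escalation path and set the total against the hand-off window. The stage names and durations below are placeholders to be replaced with timings pulled from your own incident records.

```python
# Crude latency-budget check for a cross-surface escalation path.
# Stage durations are placeholders; measure your own from real incident
# timestamps before drawing any conclusions.
ADVERSARY_HANDOFF_SECONDS = 22  # the M-Trends 2026 median cited above

escalation_path_seconds = {
    "wiz_finding_to_secops_alert": 45,   # forwarding and ingestion
    "alert_to_on_call_page": 120,        # queue triage and paging policy
    "page_to_acknowledged": 300,         # human pickup
    "cross_team_handoff": 600,           # cloud team to SOC, or the reverse
}

total = sum(escalation_path_seconds.values())
print(f"Escalation path: {total}s vs adversary hand-off: {ADVERSARY_HANDOFF_SECONDS}s")
if total > ADVERSARY_HANDOFF_SECONDS:
    # The goal is not to hit 22 seconds with humans in the loop; it is to see
    # which stages are organizational rather than technical.
    slowest = max(escalation_path_seconds, key=escalation_path_seconds.get)
    print(f"Largest contributor: {slowest} ({escalation_path_seconds[slowest]}s)")
```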
The structural point: the organizational separation between "cloud security" and "SOC" was an artifact of two tool stacks that didn't talk to each other. Those tool stacks now do talk to each other. The org chart that was designed around the gap between them is now the gap. SOCs that reorganize around incident type and speed — not around tool category — will be faster than competitors that preserve the historical boundary because that's how the budget was carved up three years ago.
Remote Google Cloud Model Context Protocol (MCP) server support for Google Security Operations is now generally available. I've seen this conflated with the Wiz detection forwarding story in a few places, so worth separating them clearly.
MCP GA is about building your own agents against Google SecOps. It gives you a documented interface for agents — yours, partners', or third-party — to read from and act on Google SecOps. Google also shipped a preview capability to access the MCP server client directly from the Google SecOps chat interface. That's a separate step forward for analysts who want to interact with their own SIEM through an agent.
This is not about how Wiz data gets into Chronicle UDM. That's the log parser and ingestion story, which is a different code path and a different team. If you've been planning MCP-based agent work against Google SecOps, this moves that work from experimental to supported. If you've been planning parser work against Wiz logs, MCP GA doesn't change anything for you.
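For teams scoping that work, the shape of it is roughly: connect an MCP client to the remote server, discover the tools it exposes, and call them from your agent framework. The sketch below uses the open-source MCP Python SDK; the endpoint, auth header, and tool name are placeholders rather than Google's published values, and the transport your deployment expects may differ, so treat it as a starting shape rather than working integration code.

```python
# Minimal sketch of an agent-side MCP client using the open-source MCP
# Python SDK (pip install mcp). URL, headers, and tool name are
# placeholders, not Google's published endpoints or tool catalog.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

SERVER_URL = "https://<your-regional-endpoint>/mcp"   # placeholder endpoint
HEADERS = {"Authorization": "Bearer <token>"}          # placeholder auth


async def main() -> None:
    async with sse_client(SERVER_URL, headers=HEADERS) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover what the server actually exposes before hard-coding anything.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Hypothetical tool call; real tool names come from the discovery
            # step above, not from this sketch.
            result = await session.call_tool(
                "search_udm_events", arguments={"query": "<udm query>"}
            )
            print(result)


asyncio.run(main())
```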
Why I think this is the under-covered announcement of the week: it's the beginning of the SIEM as an agent platform, not just a data platform. For 20 years, SOC tooling has been about getting data into a platform and running queries against it. MCP going GA means Google SecOps becomes a surface that custom and third-party agents can be built against. The teams that see this early and start building against the interface will gain workflow advantages; the teams that wait to see how it plays out will spend 18 months catching up. This is the quiet architectural shift. It will be louder a year from now.
I said earlier that runbook latency is now an attack surface and unaudited autonomy is now a liability surface. This is the observation that ties the whole announcement together.
Agentic workflows don't just move faster — they act. That shifts the question my team has to answer from "did we detect it?" to "can we defend the decision the system made?" If my SOC can't show why an agent acted, what data it used, and what validated the action, then speed becomes exposure. An agentic workflow that can't be audited isn't a security control — it's an unmanaged process running inside the security stack.
The practical implication is that every autonomous action in our environment needs to produce an auditable trail. What triggered it. What the agent reasoned from. What it executed. Whether a human validated it and who. That's not something you add after an incident. It has to be designed into the workflow before you let the agent run.
This is the work I think separates teams that will operate these tools well from teams that will have a bad quarter. It's not about the tools — everyone will have the same tools. It's about whether accountability was designed in before the agents went live. The SOCs that win the next cycle are the ones that treat auditability as a prerequisite, not a feature. The ones that lose are the ones that treat it as something to be solved for later, after something goes wrong.
The internal name for this design approach at Foresite is the Glass Box model. Every autonomous action in our environment produces a structured record of what triggered it, what the agent reasoned from, what it executed, and whether a human validated it. The name is ours, but the principle is not proprietary — any SOC operating agentic workflows needs some version of this. The difference is whether you design it in before the agents go live or try to reconstruct it after an incident.
Foresite Glass Box model overview
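For illustration, here is a sketch of what one of those structured records could look like as a data structure. The field names and the defensibility check are illustrative assumptions, not a Wiz or Google schema.

```python
# Sketch of a structured audit record for a single autonomous action.
# Field names are illustrative; the point is that every field is populated
# at execution time, not reconstructed after an incident.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class AgentActionRecord:
    action_id: str
    triggered_by: str              # the finding, detection, or workflow that started it
    inputs_considered: list[str]   # graph nodes, events, identities the agent reasoned from
    action_taken: str              # what was actually executed on the target system
    executed_at: datetime
    approved_by: str | None        # named human, or None if the gate allowed autonomous execution
    approval_gate: str             # which documented gate applied (see the approval policy above)

    def is_defensible(self) -> bool:
        """The question a board or auditor will ask, reduced to a predicate."""
        return bool(self.triggered_by and self.inputs_considered and self.action_taken
                    and (self.approved_by or self.approval_gate))


record = AgentActionRecord(
    action_id="act-2026-04-22-0153",
    triggered_by="wiz-issue-7f3a (exposed service account key)",
    inputs_considered=["security-graph:vm-prod-12", "iam:sa-build-runner", "vpc-flow:last-24h"],
    action_taken="disabled service account key for sa-build-runner",
    executed_at=datetime.now(timezone.utc),
    approved_by="j.doe",
    approval_gate="revoke_iam_key",
)
assert record.is_defensible()
```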
The architectural shift I've been describing is not an abstract industry trend. It produces three specific consequences on the CISO's P&L and risk register, and each one compounds over time if the transition is deferred.
Liability becomes personal. Autonomous decisions made inside your security stack are now decisions your organization owns. If they aren't auditable — if the SOC cannot show what triggered an agent action, what data it reasoned from, and what validated the execution — they are not defensible to a regulator, an auditor, or a board the morning after an incident. The liability shift has already happened at the tooling layer. The question is whether your governance model has shifted with it.
Operational risk compounds quietly. The cost of running a pre-Next '26 operating model against a post-Next '26 toolset is not a single moment of pain — it's a slow drift of inconsistency. Agent-generated detections validated in one team, human-authored detections validated in another. Wiz context resolved pre-detection in some workflows, post-detection in others. Cross-surface incidents handled by whichever team grabs them first. Each inconsistency is small. In aggregate, they produce the kind of response variance that shows up in incident postmortems as "we saw it but we didn't correlate it in time."
Financial exposure doubles before it halves. Running parallel enrichment, detection, and remediation workflows — the old model and the new model, side by side during transition — doubles analyst effort and slows response. The new architecture is designed to eliminate that overhead once you've fully committed to it. The teams that move fastest to a single operating model capture the cost savings the architecture is designed to deliver. The teams that straddle both for 18 months pay double for the privilege of straddling.
The question for a security leader reading this is not "when should we adapt?" It's "what does the cost look like if we don't, measured in audit exposure, response variance, and analyst headcount?" That math is what drives the build-versus-offload decision in the next section.
The architectural shift I've described above is going to happen whether your SOC is ready for it or not. The question for security leaders is not whether to adopt the new model — it's whether to build the capability to operate it in-house or to offload it to a team that already does.
Both are legitimate answers. They fit different organizations.
Build it yourself if your security team has the headcount to re-specialize detection engineering as a supply chain function, the organizational authority to consolidate cloud and SOC escalation paths, and the governance maturity to design the Glass Box equivalent before your first agent action hits production. This is the right path for large, mature security organizations that already operate sophisticated in-house detection and response programs and view cloud-native SOC operations as a strategic capability to own.
Offload it to a partner if your team is capable but stretched, if your priority is outcomes rather than capability-building, or if your organization doesn't have the volume of cloud-native incidents to justify building the operating model from scratch. The offload path makes sense when the cost of getting the architecture wrong — in detection quality, governance audit trails, or escalation latency — is higher than the cost of running the capability through a specialist team.
Foresite operates both paths. Catalyst Citadel is our MDR offering — we run the full model for you, from parser architecture through detection engineering validation through Glass Box governance and 24×7 operations. Catalyst Bridge is our enablement offering — we bring the operating model to your SOC, work alongside your team to stand up the capability in your environment, and transfer the practice rather than running it in ours.
Which path is right depends less on the tools and more on what your security organization is actually trying to be: a security operator, or a security outcome buyer. Both are valid positions. Neither is wrong. But the Next '26 announcement makes it urgent to know which one you're committing to.
The prescriptive version of everything above is short. The SOC that operates these tools well in 2027 will look different from the SOC that operates well today in five specific ways.
It will treat context resolution as a pipeline input, not a post-detection enrichment step — which means tier-one analyst workflow and detection-side tooling are designed together, not stitched together. It will treat detection engineering as a supply chain — which means it has a validation pipeline, not just an authoring workflow. It will treat remediation as a governed autonomous loop — which means approval configurations are documented artifacts with named owners, not policy statements. It will treat incident ownership as cross-surface — which means the org chart reflects how incidents actually flow, not how the budget was carved up three years ago. And it will treat audit trails as a design requirement, not a compliance bolt-on.
None of these are new in concept. What's new is that the tools now force the question. The old organizational model will work against the new tools, and the new tools will expose the old organizational model. That's the shift the Next '26 announcement is accelerating.
Foresite is already operating this way, through Citadel for customers who offload the full capability to us and through Bridge for customers building it inside their own SOCs with our team alongside theirs. Parser inventory work is in motion across both offerings. Our Detection Engineering agent validation workflow is being documented now, before we run agent output at production volume. Blue Agent is in our tier-one analysts' hands. Green Agent and Wiz Workflows approval configuration are moving in parallel, with named owners for the approval gates. Our escalation paths are being consolidated around incident type rather than tool origin. MCP we're watching and scoping, not building against yet. Third-Party Context we'll evaluate when it hits preview.
Where we don't have enough production reps yet, I'll say so: Blue Agent under sustained load, Detection Engineering agent output at scale, and Green Agent in environments with complex identity ownership. Those are the places where I'll have more to say in 60 days than I do today.
Foresite has been deploying on Google Security Operations (formerly Chronicle) since the product launched in 2018. Catalyst is the platform we operate it on, whether we're running it for you through Citadel MDR or standing it up inside your SOC through Bridge.
Foresite Cybersecurity is the 2026 Google Cloud Security Partner of the Year (North America). Catalyst delivers Practitioner-Governed Agentic SOC operations built natively on Google Security Operations, Mandiant, and Wiz.