MXDR for Google Cloud
A Practitioner’s Analysis: What the M-Trends Hand-Off Finding Means for Your SOC
M-Trends 2026 Analysis · SOC Strategy · Foresite
TL;DR
M-Trends 2026 documents a sharp rise in attacker “hand-offs” — initial access brokers passing environment access to ransomware operators, sometimes in under 30 seconds. That finding doesn’t make human judgment obsolete. It makes the case for getting AI agents to do the investigative groundwork faster, so practitioners can make better decisions sooner. Speed without governance is just faster chaos. The right answer is both.
The New Reality: From Initial Access to Ransomware Hand-Off in Seconds
Every year the M-Trends report lands and security teams mine it for the headline dwell time number. This year that number rose to 14 days, up from 11 in 2024 — and the report is clear on why: long-term espionage operations and North Korean IT worker campaigns, which deliberately move slowly and averaged a 122-day dwell time, pulled the global median up. For ransomware specifically, dwell time actually dropped to a median of nine days. The picture is more segmented than the headline suggests.
But the finding worth spending real time on is buried in the ransomware section, and it’s about speed of a different kind.
Mandiant documented a significant rise in attacker “hand-offs”: an initial access broker compromises an environment — often through opportunistic techniques like malicious advertisements or compromised websites — then passes that foothold to a second-stage ransomware operator. Prior compromise was the most frequently confirmed initial infection vector for ransomware-related incidents Mandiant investigated in 2025, doubling from 15% to 30% year-over-year — nearly a third of the ransomware cases they worked.
In a third, less well-defined model Mandiant describes — where the two groups’ activity is consistent with what the report calls a distribution cluster — the time between the initial access partner’s earliest activity and the secondary group’s earliest attributed activity is generally under 30 seconds. Across the broader “division of labor” model, Mandiant observed a median handover time of 22 seconds. To be precise about what those numbers mean: they measure the point at which the secondary group gains access, not necessarily when it begins hands-on-keyboard activity. But the direction of travel is clear.
The M-Trends 2026 report documents the time between key phases in the attacker hand-off model, with the secondary group gaining access in some cases in under 30 seconds.
The report’s own framing of this is worth quoting directly: “alerts traditionally considered ‘lower priority’ can very quickly become significant compromises.” That is not a theoretical risk. It is a documented pattern that more than doubled as an infection vector within Mandiant’s ransomware investigations in a single year.
Mandiant also notes a defender advantage embedded in this model: initial access partners with known relationships to specific ransomware operators can be tracked. When a FAKEUPDATES alert fires, for instance, defenders who understand UNC1543’s relationship with UNC2165 can treat that alert with elevated criticality and hunt for follow-on activity immediately. Intelligence about the hand-off pattern is itself a detection asset — if your operation is set up to use it.
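That hand-off intelligence can be operationalized as a simple prioritization rule. A minimal sketch in Python — the mapping, field names, and priority labels are illustrative assumptions, not drawn from the report or any real product:

```python
# Hypothetical sketch: escalate alerts whose detection family is tied to a
# known initial-access-broker -> ransomware-operator hand-off relationship.
# The mapping below is illustrative; real pairings come from a threat-intel feed.
HANDOFF_PAIRS = {
    # detection family: (access broker, known follow-on operator)
    "FAKEUPDATES": ("UNC1543", "UNC2165"),
}

def triage_priority(alert: dict) -> str:
    """Return a priority label, elevating alerts that match hand-off patterns."""
    family = alert.get("detection_family", "")
    if family in HANDOFF_PAIRS:
        broker, operator = HANDOFF_PAIRS[family]
        # Attach the context a hunter needs to act immediately.
        alert["context"] = (
            f"{family} is associated with {broker}, which hands off "
            f"access to {operator}; hunt for follow-on activity now."
        )
        return "critical"
    return alert.get("default_priority", "low")

alert = {"detection_family": "FAKEUPDATES", "default_priority": "low"}
print(triage_priority(alert))  # "critical"
```

The point of the sketch is that the escalation decision requires no human in the loop — only the hunt that follows does.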
The Wrong Conclusion: “Our Analysts Need to Be Faster”
The instinctive response to faster attacker timelines is to demand faster human response: tighter SLAs, more headcount, quicker triage. That instinct is understandable. It’s also aimed at the wrong bottleneck.
The bottleneck isn’t analyst intent. It’s the volume and velocity of work that reaches them before they can exercise any judgment. Google’s own agentic SecOps research puts the false positive rate at 83% — the vast majority of what analysts process never becomes a real incident. (Source: Google Future of SecOps Infographic.) Compressing the SLA on that process doesn’t fix it. It just means analysts are moving faster through work that shouldn’t be reaching them at all.
You cannot compress human decision-making to 22 seconds. You can only decide what work reaches a human inside that window — and how prepared they are when it does.
The human cost of that design is real. 71% of SOC staff rate their workplace pain at 6–9 out of 10. Half of all CISOs are projected to change roles due to stress. The industry is short 4.8 million people globally. (Sources: SANS/Devo SOC Survey; Gartner; ISC2.) Those figures don’t come from M-Trends, but they describe the environment inside which M-Trends’ findings land. Asking more of analysts inside a model that already burns them out is not a strategy. It’s acceleration toward a cliff.
The UNC1543 and UNC2165 case study in the report is useful here. In that specific incident, the hand-off to UNC2165 took approximately 70 minutes, with another ~45 minutes before their earliest interactive activity. That is a window a well-run SOC can work with. The sub-30-second figure applies to a tighter, distribution-cluster model — not every hand-off. The point is not that humans are always too late. It’s that the investigation workload arriving at human desks needs to be triaged and enriched before it gets there, so the analyst is making a decision, not starting from scratch.
The Right Conclusion: Agents Do the Groundwork. Practitioners Make the Call.
The Practitioner-Governed Agentic SOC is not a model that removes humans from security operations. It’s a model that removes humans from the work that doesn’t require them, so they’re fully present for the work that does. That distinction matters, because most of what’s sold as “agentic security” today either hands the AI too much authority or hands the customer too much responsibility for governing it.
Autonomous triage at machine speed. An AI agent — running on a platform like Google Cloud, where Gemini-powered agents operate across the full telemetry stack — handles initial investigation before a human analyst is paged. It correlates signals, enriches identity context, maps indicators, and presents a fully assembled case. The analyst doesn’t start an investigation. They review one. That shift — from doing to deciding — is where the time compression happens.
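That doing-to-deciding shift can be sketched concretely. The following is a hypothetical illustration — the data shapes, enrichment sources, and field names are invented for this example, not a real Google Cloud or Gemini API:

```python
from dataclasses import dataclass, field

# Illustrative sketch of "review, don't start": an agent correlates signals,
# enriches identity context, and maps indicators into an assembled case
# before any human is paged.

@dataclass
class Case:
    alert_id: str
    related_signals: list = field(default_factory=list)
    identity_context: dict = field(default_factory=dict)
    indicators: list = field(default_factory=list)
    summary: str = ""

def assemble_case(alert: dict, telemetry: list, identity_store: dict, intel: set) -> Case:
    case = Case(alert_id=alert["id"])
    # 1. Correlate: pull signals sharing the alerting host.
    case.related_signals = [s for s in telemetry if s["host"] == alert["host"]]
    # 2. Enrich identity: who owns this account, and is it privileged?
    case.identity_context = identity_store.get(alert["user"], {})
    # 3. Map indicators against threat intelligence.
    case.indicators = [i for i in alert["iocs"] if i in intel]
    case.summary = (
        f"{len(case.related_signals)} correlated signals, "
        f"{len(case.indicators)} known-bad indicators."
    )
    return case  # handed to a practitioner for a decision, not an investigation
```

By the time the analyst opens the case, steps 1–3 are done; what remains is judgment.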
Practitioner governance at every decision point. This is the part most agentic pitches omit. Google’s SAIF framework is explicit: autonomous agents require human controllers. The question is who provides that control and who is accountable for the outcome. In the practitioner-governed model, a named practitioner reviews every autonomous investigation and authorizes high-impact actions before they execute. Containment, isolation, escalation — all staged for human approval. The agent compresses the time to decision. The practitioner owns the decision.
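A minimal sketch of that staging discipline — the action names, impact tiers, and approval interface are invented for illustration:

```python
import enum

# Hypothetical sketch: high-impact actions are staged, never executed,
# until a named practitioner authorizes them. Low-impact actions run
# autonomously but still land in the audit trail.

class Action(enum.Enum):
    ENRICH = "enrich"            # low impact: agent may run it autonomously
    ISOLATE_HOST = "isolate"     # high impact: requires human approval
    CONTAIN_ACCOUNT = "contain"  # high impact: requires human approval

HIGH_IMPACT = {Action.ISOLATE_HOST, Action.CONTAIN_ACCOUNT}

class ActionQueue:
    def __init__(self):
        self.staged = []    # awaiting a practitioner's decision
        self.executed = []  # audit trail: (action, target, who approved)

    def request(self, action: Action, target: str) -> str:
        if action in HIGH_IMPACT:
            self.staged.append((action, target))
            return "staged for approval"
        self.executed.append((action, target, "agent"))
        return "executed autonomously"

    def approve(self, practitioner: str) -> None:
        # A named human, not the agent, appears in the audit record.
        while self.staged:
            action, target = self.staged.pop(0)
            self.executed.append((action, target, practitioner))

q = ActionQueue()
print(q.request(Action.ENRICH, "host-01"))        # executed autonomously
print(q.request(Action.ISOLATE_HOST, "host-01"))  # staged for approval
q.approve("j.doe")
```

The design choice worth noting: the audit trail records a named person for every high-impact action, which is exactly what regulators, insurers, and boards ask for.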
This is one way to operationalize what Mandiant is recommending: reduce behavioral variety, enrich alerts with contextual data, and give analysts the space to investigate low-impact events rapidly before they become high-impact ones. An agent that enriches and contextualizes before the analyst sees the alert is doing exactly that work. The practitioner layer ensures the output is verified, explainable, and defensible to regulators, insurers, and the board.
The black box problem that defines many MSSP relationships — where AI actions happen inside a proprietary platform with no audit trail — doesn’t exist when a named practitioner authorizes every significant action inside your own environment. That’s not just a better security model. It’s a better governance model.
Is Your Security Partner Built for This?
M-Trends 2026 documents a threat landscape that has become measurably more segmented, more automated, and faster in its most dangerous phases. Internal detection rates actually improved in 2025 — 52% of organizations detected malicious activity internally, up from 43% — which shows that better human-led operations are possible and are happening. The report is not a case against human judgment. It’s a case for putting human judgment where it belongs: at the decision point, not the bottom of an alert queue.
The practical question for any security leader reading M-Trends is whether their current operation — or their current partner’s operation — is structured to act on that. Can your team treat a FAKEUPDATES detection with elevated criticality in under a minute because they have the threat intelligence context loaded and the investigation pre-built? Or does that alert sit in a queue until a human manually opens it and starts from scratch?
As a leading Google Cloud partner specializing in this practitioner-governed model, we help organizations move beyond the limitations of traditional security operations. The mission is to deliver resilience and trusted outcomes — not just a queue of alerts.
The Resilience Maturity Matrix from M-Trends 2026 maps security posture across two axes: Minimal Viable Security (prevention friction) and Recovery Path Reliability. Active Resilience — the target state — combines hardened identity with a recovery environment severed from the attack surface.
See the model in action
If the M-Trends hand-off data has you questioning whether your current operation is structured to respond at this speed, the right next step is a direct conversation. We offer a 15-minute whiteboard session with one of our practitioners — mapping your current detection model against this threat profile and identifying where the gaps are. No slides, no pitch. An operational conversation.
Read the source
The M-Trends 2026 report was published this month. The hand-off analysis, the ransomware infection vector data, and Mandiant’s “Active Resilience” framework — their term for severing the recovery path from the attack surface, hardening identity controls, and treating ransomware as a resilience problem rather than purely a detection one — are worth reading directly, not through a vendor summary. Download the full report.
