AI Security and Shadow Agent Risk: What's Running in Your Organization Right Now
Brian Pepperdine · April 9, 2026 · 5 min read


The biggest AI security threat to most organizations isn't an outside attacker. It's the AI tools your own employees are already using.

Ask a CISO whether their organization has an AI security policy and most will say yes. Ask them how many AI tools are currently running across their workforce and most will go quiet.

The gap between those two answers is where shadow agent risk lives.

"Shadow IT" has been in the security vocabulary for years — unauthorized SaaS tools, personal cloud storage, consumer apps on corporate devices. Security teams learned to manage it, mostly. But shadow AI is a different problem in two important ways: the tools are more capable, and the consequences of exposure are more severe.

An employee using an unsanctioned file sharing app risks some data leakage. An employee using an unsanctioned AI agent with access to internal systems risks something considerably worse.

 

What shadow agents actually are

A shadow agent is any AI system operating in your environment without explicit security review, defined permissions, or organizational accountability.

That definition is broader than the one most security teams currently work with. It includes:

Consumer AI tools used for work
ChatGPT, Claude, Gemini — used directly by employees to process work content. When an analyst pastes a customer record into a consumer AI to summarize it, that data leaves your environment. Whether it's stored, used for training, or transmitted elsewhere depends on terms of service that most employees haven't read.

AI features in existing SaaS
Most enterprise SaaS platforms now have AI capabilities enabled by default. Salesforce, Microsoft 365, Slack, Notion — all have AI features that may be processing your data without your explicit knowledge or consent. If your security team didn't review the AI features when they reviewed the platform, you have an unreviewed AI system running on your data.

Internally built agents
Individual teams — engineering, operations, finance — are building AI tools using APIs and low-code platforms. These agents may have access to internal databases, APIs, and systems. They're often built without security review, documented nowhere, and owned by no one when the person who built them leaves.

Third-party integrations
AI tools integrated with your environment through APIs, webhooks, or OAuth connections. Every integration is a data flow. Every data flow is an attack surface.

 

The three exposure vectors

Shadow agents create security risk through three primary mechanisms:

  1. Data exfiltration
    The most immediate and common risk. Employees process sensitive data through AI tools that store, log, or train on that content. Customer PII, financial data, internal strategy documents, source code — all of it can end up in a model's training corpus or a vendor's data store. Most acceptable use policies were written before these tools existed and don't cover them.

  2. Unauthorized action
    This is where shadow agents become qualitatively different from shadow IT. An AI agent with tool-use capabilities — the ability to make API calls, send emails, query databases, execute code — can take actions in your environment. An agent built without security review may have been granted excessive permissions. An agent connected to a compromised model may be operating under external influence. Unlike a passive data leak, an agent acting with excessive permissions can cause active harm.

  3. Supply chain exposure
    Every third-party AI integration introduces a dependency on a model or service you haven't vetted. A compromised or misconfigured model in your supply chain can influence the outputs your employees trust, inject malicious content into workflows, or serve as an entry point for a more sophisticated attack. Most organizations don't yet have a framework for evaluating this.

 

Why discovery is harder than it sounds

Traditional shadow IT discovery relied on network traffic analysis and DNS monitoring. An employee accessing an unauthorized SaaS tool generates recognizable patterns: new domain, new authentication flow, sustained data transfer to an external endpoint.

Shadow agents are harder to find.

The attack surface changes faster than the tooling. New AI tools, new API integrations, new agent frameworks ship weekly. Point-in-time discovery is not enough — shadow agent risk requires continuous monitoring.

They often use authorized channels. An employee using ChatGPT via browser generates web traffic indistinguishable from any other HTTPS session. An internally built agent using the OpenAI API uses the same endpoint as a sanctioned integration.

They're not centrally registered. Enterprise SaaS goes through procurement. AI tools built by individual contributors don't. There's no purchase order, no vendor review, no record of what was built or what it can access.

Many SaaS vendors have been quietly enabling AI features on existing contracts. If you're not actively auditing your SaaS portfolio for new AI capabilities, you may have AI systems processing your data under a contract you approved years ago for a completely different product, with those features switched on while you weren't watching.
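One practical starting point for discovery is matching outbound traffic logs against a list of known AI service endpoints. The sketch below is illustrative: the log format is assumed, and the domain list is a small sample, not exhaustive. It also carries the caveat noted above: sanctioned and unsanctioned use of the same endpoint look identical at this layer, so a hit is a lead to investigate, not a verdict.

```python
# Sketch: flag outbound connections to known AI service domains
# from proxy or DNS logs. Log format and domain list are illustrative.

KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def flag_ai_traffic(log_lines):
    """Assumed log line format: '<timestamp> <source_host> <destination_domain>'.

    Returns a dict mapping each source host to the set of AI domains it contacted.
    """
    hits = {}
    for line in log_lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip malformed lines rather than fail the whole pass
        _, src, dest = parts
        if dest in KNOWN_AI_DOMAINS:
            hits.setdefault(src, set()).add(dest)
    return hits
```

Run continuously against fresh logs, a pass like this feeds the live inventory described in the governance section rather than producing a one-off snapshot.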

 

What governance actually looks like

Discovery is the first step. Governance is what you do with what you find. Effective AI governance for shadow agents has four components:

  1. Inventory
    You can't govern what you can't see. Build a continuous process for discovering AI tools and agents across your environment — network monitoring, SaaS audit, developer tooling review, employee disclosure programs. The goal is a live inventory, not a quarterly snapshot.

  2. Classification
    Not all shadow agents carry equal risk. An employee using an AI writing tool to draft internal emails is different from an agent with API access to your CRM. Classify each discovered tool by data sensitivity, permission scope, and action capability. Risk-tier the inventory so remediation effort is proportional.

  3. Policy
    Define clearly what is and isn't permitted. Which AI tools are approved for what categories of data? What review process is required before an internally built agent can access internal systems? What happens when an employee discovers an unsanctioned tool they want to use? The policy needs to be specific enough to answer those questions — "don't use unauthorized AI" is not a policy.

  4. Identity and access governance for agents
    Treat AI agents as managed identities. Every agent that can take actions in your environment should have defined permissions, a documented scope, an owner, and a review cadence. The same least-privilege principles that apply to human accounts apply to agents — and they need to be enforced, not assumed.


 
Google Cloud Next 2026 · Las Vegas · April 22–24

Find out what's actually running in your environment

Foresite is running Shadow Agent Risk Assessments for organizations that want to understand their exposure before it shows up in an incident report. Find our team at Security Hub Booth #4001, Kiosk #15 — or request an assessment now.

Request a Shadow Agent Risk Assessment →

Google-funded assessments available. Eligibility confirmed during scoping.


 

Foresite Cybersecurity — Google Cloud Managed Security Services Partner

SOC 2 Type II
Google SecOps
Premier Co-sell Partner

 

Brian Pepperdine
Brian Pepperdine is VP of Customer Engineering at Foresite.
