21 Apr 2026
by Iain Batty, Waterstons

Generally, this use of unsanctioned tools is well-intentioned and an attempt at being more productive, but what teams often don’t realise is that the behaviour increases the organisation’s exposure to security incidents, compliance failures, and intellectual property loss.

Common examples include staff submitting internal content to public GenAI services for summarisation or rewriting, using unauthorised cloud applications for file conversion, or relying on browser-based AI extensions with broad access to corporate data. These activities typically occur outside approved controls, logging, and governance, creating material enterprise risk with limited visibility.

What’s happening

The key issue isn’t ‘AI adoption’ itself, but uncontrolled use of external services that could process or store corporate information without the organisation’s knowledge. In practice, this creates Shadow IT and Shadow AI pathways where sensitive information can leave approved environments without detection or audit trails.

Why it matters

From a risk management perspective, unmanaged AI and cloud services create four primary exposures:

  • Data leakage and intellectual property loss: Public AI platforms and consumer cloud services might retain content, use it to improve models, or store it in unknown locations. Once data is uploaded, control over retention, onward use, and jurisdiction is often unclear.
  • Loss of oversight and control: If usage is not monitored, security and risk teams cannot quantify exposure, enforce data classification rules, or respond effectively to incidents.
  • Regulatory and contractual non-compliance: Uploading personal data, client data, or restricted material to external tools can breach GDPR, sector regulations, and contractual confidentiality obligations.
  • Third-party and supply chain risk: Unapproved services introduce unknown vendors, unclear security commitments, limited or reduced service levels, and constrained action following a breach.

The changing threat landscape

Threat actors increasingly exploit weak points created by cloud sprawl, misconfigured integrations, and poorly understood API connections. When staff use unsanctioned tools, organisations lose the ability to reliably detect abnormal behaviour, control data movement, and enforce DLP and identity/access controls - creating blind spots that can be readily exploited.

What ‘good’ looks like 

Mitigating Shadow Cloud and Shadow AI typically requires a shift from reactive blocking to a governance-led approach that combines visibility, guardrails, and workforce enablement. CASB/SSE/SASE capabilities (for example, Netskope and equivalents) are commonly used to establish visibility of AI/cloud usage, assess application risk, and apply granular controls including real-time DLP and GenAI-specific policy enforcement.
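The tiering idea behind such policies can be sketched in a few lines. This is illustrative only, assuming a hypothetical two-tier catalogue of AI destinations; it is not the policy syntax of Netskope or any other product.

```python
# Illustrative sketch: tiered AI-access policy logic mapping data
# sensitivity to the set of AI destinations that tier may reach.
# The tier names and destination labels here are hypothetical.
from enum import Enum


class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3


# Hypothetical policy table: which GenAI destinations each data tier may use.
POLICY = {
    Sensitivity.PUBLIC: {"public_genai", "approved_enterprise_ai"},
    Sensitivity.INTERNAL: {"approved_enterprise_ai"},
    Sensitivity.CONFIDENTIAL: set(),  # no AI destinations permitted
}


def is_allowed(data_tier: Sensitivity, destination: str) -> bool:
    """Return True if policy permits sending this data tier to the destination."""
    return destination in POLICY[data_tier]
```

In practice the lookup would also take the user’s role into account, as the control set below suggests, but the principle is the same: the decision is made against a declared policy table rather than left to individual judgement.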

Practically, risk managers should drive a focused control set:

  • Establish visibility of AI and cloud application usage, including what data is moving and by whom.
  • Define policy-based access to approved AI tools, with tiering based on data sensitivity and user role.
  • Implement enterprise DLP controls to reduce the likelihood of source code, financial data, or client information being uploaded to unmanaged services.
  • Publish a clear AI use policy that sets expectations on permitted tools and acceptable data handling.
  • Provide sanctioned alternatives so staff can achieve productivity gains without resorting to personal or public tools.
  • Assure third parties by reviewing AI vendors’ security posture, data handling, retention, and compliance alignment before enabling access.
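The DLP control in the list above can be illustrated with a toy pre-upload check. This is a sketch only: the pattern names are hypothetical, and real enterprise DLP relies on classifiers, document fingerprinting, and exact-data matching rather than simple regexes.

```python
# Illustrative only: a toy DLP-style check that scans outbound text for
# common sensitive patterns before it reaches an unmanaged service.
import re

# Hypothetical pattern set; a real deployment would use far richer detectors.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}


def scan(text: str) -> list[str]:
    """Return the names of sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]


def allow_upload(text: str) -> bool:
    """Permit the upload only if no sensitive pattern matches."""
    return not scan(text)
```

Even a crude gate like this changes the default from “anything can leave” to “flagged content is stopped and logged”, which is the behaviour the control set aims for.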

It’s not always easy to know where to start with AI – or where to go next – especially if your teams are already using unsanctioned tools. But given AI’s increasing prevalence in the workplace and everyday life, it’s more important than ever to manage your AI journey effectively.
