May 5, 2026 · 9 min read

Shadow AI: How to Inventory Unauthorised Tools Before the Auditors Do

Employees are pasting customer data into ChatGPT. Finance analysts are routing sensitive queries through personal Claude accounts. Marketing teams are generating campaign assets with Midjourney on freemium tiers. None of this appears in any IT register.

This is shadow AI: the use of artificial intelligence tools by employees without validation from IT, security, or the Data Protection Officer. And it's not a fringe problem. Studies published between 2024 and 2025 indicate that between 41% and 78% of companies are affected, with higher prevalence in larger organisations.

The EU AI Act doesn't directly prohibit shadow AI. But it imposes documentation, supervision, and traceability obligations that undeclared AI usage makes impossible to fulfil. Article 26 requires a register and logs for high-risk systems. Article 4 mandates AI literacy for all operators, applicable since 2 February 2025. An undeclared tool cannot be assessed, documented, or audited, which means the organisation carries the regulatory risk alone.

In Brief

  • 41-78% of organisations have employees using AI tools without IT approval
  • Article 26 of the EU AI Act requires registers and logs for high-risk AI systems
  • 5 detection methods combine to create a reliable inventory: surveys, expense analysis, network scanning, business unit interviews, and voluntary declaration
  • Blocking without alternatives fails; sustainable governance requires approved tools, clear policy, and training

This is exactly the kind of operational governance challenge we'll be unpacking at Human x AI Europe on May 19 in Vienna, where the people building Europe's AI future are making the hard decisions together.

Why Shadow AI Spreads

Consumer AI tools are designed for immediate adoption. A single email address creates an account. Freemium tiers eliminate financial barriers. The pattern is always the same: an individual finds a tool that makes their job easier, starts using it, doesn't mention it, and may not even think of it as "using AI."

Three factors accelerate the spread:

Productivity pressure. Employees need to move fast. AI appears as a lever to save time on drafting, analysis, or code generation. Business units, often ahead of IT departments, adopt these tools without waiting for internal approval.

Lack of approved alternatives. If the organisation doesn't offer compliant, easy-to-use AI tools, employees turn to external solutions. This is particularly pronounced in sectors where AI needs are emerging, such as legal or communications.

Invisible integration. Shadow AI doesn't always look like a new application. It often appears as a feature toggle inside an existing SaaS platform, an API integration that inherits access silently, or a background automation acting through a non-human identity.

The Compliance Problem

The risks are concrete, not theoretical.

Data exposure is instant. Shadow IT usually meant a file sync that could later be found and killed. Shadow AI means the prompt, with its contents, is already on a third-party server the moment the user hits send. Prompts can't be recalled.

GDPR violations compound. A prompt containing customer data sent to a non-EU API may constitute an unlawful transfer of personal data under GDPR Article 44, even if the tool is used in good faith for internal purposes. Personal LLM sessions often don't have audit logs, don't have data residency controls, and may be used as training data by default.

AI Act obligations become impossible. Article 26 requires deployers of high-risk AI systems to maintain a register and automatic logs. Article 50 imposes transparency obligations for systems interacting with natural persons, applicable from November 2026. None of this can be satisfied if the system hasn't been identified.

Undocumented biases enter business processes. A text generation or data analysis tool may produce biased outputs without the organisation being aware. These biases can have legal consequences in areas such as recruitment or risk assessment.

The 5-Step Inventory Method

No single detection method is sufficient. Combining all five steps ensures a complete and reliable inventory.

1. Employee survey. Send an anonymous questionnaire to identify tools in use. Sample questions: "Which AI tools do you use in your work?" "For which tasks?" "Did you create a personal account to access them?" The response rate is directly linked to the level of trust perceived by the teams. Frame this as securing AI usage, not sanctioning employees.

2. IT expenditure analysis. Review corporate card statements and invoices to identify subscriptions to AI tools. Paid versions of ChatGPT, Midjourney, or similar services are often purchased directly by business units, outside IT procurement.
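As a minimal sketch of this step, the filter below flags card transactions whose merchant name matches a known AI vendor. The vendor keyword list and the transaction field names (`merchant`, `amount`) are assumptions; adapt them to your card provider's export format.

```python
# Hypothetical vendor keywords to match against card statements; extend as needed.
AI_VENDORS = ["openai", "anthropic", "midjourney", "perplexity"]

def flag_ai_spend(transactions):
    """Return transactions whose merchant name contains a known AI vendor keyword.

    Each transaction is a dict with at least 'merchant' and 'amount' keys.
    """
    return [
        t for t in transactions
        if any(vendor in t["merchant"].lower() for vendor in AI_VENDORS)
    ]
```

Even a crude keyword match like this surfaces the paid subscriptions that business units bought on corporate cards without going through IT procurement.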

3. Network and log scanning. Use monitoring tools to detect connections to AI APIs or services. Firewall and proxy logs can reveal undeclared usage, particularly towards endpoints such as api.openai.com or generativelanguage.googleapis.com. If the SWG (Secure Web Gateway) inspects SSL on-device, every AI interaction produces a log line with the destination, time, user, and OAuth provider.
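A first pass over those gateway logs can be as simple as the sketch below, which counts hits per user to a watchlist of AI API endpoints in a CSV export. The endpoint list and the column names (`user`, `destination_host`) are assumptions; most SWG and proxy products let you export logs in a comparable tabular format.

```python
import csv
from collections import Counter

# Watchlist of AI API hostnames to flag; extend with tools from your survey.
AI_ENDPOINTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def scan_proxy_log(path):
    """Count (user, endpoint) hits to known AI APIs in a CSV proxy log.

    Assumes columns named 'user' and 'destination_host' — adjust both
    to match your gateway's export schema.
    """
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["destination_host"] in AI_ENDPOINTS:
                hits[(row["user"], row["destination_host"])] += 1
    return hits
```

Sorting the resulting counter by volume gives a first priority order for the business unit interviews in the next step.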

4. Business unit interviews. Run workshops with teams to understand their real needs and usage patterns. These exchanges surface tools unknown to IT and gather feedback on perceived effectiveness. As one security practitioner noted, "Shadow IT in all its forms comes from a need for functionality. So the users have done some consulting for you."

5. Voluntary declaration form. Set up a simple channel for employees to declare their AI usage. Fields to include: name of the AI tool, primary use, frequency of use, types of data processed, account type used (corporate, personal, or freemium). The form must be accessible, non-judgmental, and accompanied by clear communication about its purpose.
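The form fields listed above translate directly into a record structure. The sketch below is one hypothetical schema for storing declarations behind such a form; field names and types are illustrative, not prescribed.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the declaration form fields described above.
@dataclass
class AIUsageDeclaration:
    tool_name: str          # name of the AI tool, e.g. "ChatGPT"
    primary_use: str        # main task, e.g. "drafting client emails"
    frequency: str          # e.g. "daily", "weekly", "occasional"
    data_types: list[str]   # e.g. ["customer data", "source code"]
    account_type: str       # "corporate", "personal", or "freemium"
```

Keeping the schema this small lowers the barrier to declaring; richer risk metadata can be added later during classification.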

What to Do With the Inventory

Discovery is the starting point, not the destination. A realistic 30-day remediation plan looks like this:

Week 1: Discover. Pull SWG, CASB (Cloud Access Security Broker), and IdP (Identity Provider) logs. Build a single inventory of AI tools, user volume, and account type. Don't block anything yet.

Week 2: Classify. For each tool, answer three questions: What data is being sent? Is it approved? Is there a corporate-tenant alternative? Score each tool as Green (approved), Yellow (needs controls), or Red (block candidate).
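The three classification questions above can be encoded as a simple triage rule. This is a sketch of one possible mapping from answers to Green/Yellow/Red scores, not a standard; tune the branching to your own risk appetite.

```python
def classify_tool(sends_sensitive_data, approved, has_corporate_alternative):
    """Triage a discovered AI tool into Green / Yellow / Red.

    Green  = already approved; Red = block candidate (sensitive data,
    no corporate-tenant alternative); Yellow = allowed with controls.
    """
    if approved:
        return "Green"
    if sends_sensitive_data and not has_corporate_alternative:
        return "Red"
    return "Yellow"
```

Making the rule explicit, even in five lines, keeps Week 3's approved/restricted/block lists consistent and defensible when employees ask why a tool was blocked.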

Week 3: Decide and communicate. Put together the approved list, the restricted list, and the block list. Write a two-page AI usage policy. Communicate before enforcing. Surprise policies generate helpdesk tickets and workarounds.

Week 4: Enforce and monitor. Push the policies via SWG and access controls. Move approved tools to corporate-tenant-only access. Enable endpoint DLP (Data Loss Prevention) in Monitor mode for sensitive categories. Watch the first week of logs, tune.

After the first month, this becomes a monthly rhythm. New AI tools appear constantly. Discovery has to be continuous.

Why Blocking Alone Fails

Banning shadow AI without offering alternatives is ineffective. The needs remain; the tools change. Employees find workarounds: mobile access, personal VPNs, or undetected alternative tools. Repression breeds mistrust and makes invisible what was merely discreet.

An effective strategy rests on three complementary levers:

Awareness. Train employees on concrete risks: data leakage, GDPR violations, personal liability. Short e-learning modules are more effective than long, unread policy documents. Article 4 of the AI Act imposes an AI literacy obligation on all operators.

Approved solutions. Offer vetted and compliant alternatives. Corporate access to certified tools, with appropriate contractual terms, significantly reduces recourse to shadow AI. The market now offers professional versions of most consumer tools.

Clear framework. Publish a simple, operational AI usage policy. Specify which tools are permitted, for which uses, and with which data. The goal is not to eliminate AI use by employees, but to channel it.

Well-governed AI usage is a competitive asset. Undocumented usage is a regulatory risk the organisation carries alone.

Frequently Asked Questions

Q: What is shadow AI?

A: Shadow AI refers to the use of AI tools by employees without validation from IT, security, or the Data Protection Officer. Studies indicate that between 41% and 78% of companies are affected, with higher prevalence in larger organisations.

Q: Does the EU AI Act prohibit shadow AI?

A: The AI Act does not directly prohibit shadow AI, but it imposes documentation, supervision, and traceability obligations that undeclared AI usage makes impossible to fulfil. Article 26 requires a register and logs for high-risk systems.

Q: How can organisations detect unauthorised AI tools?

A: Detection requires combining five methods: anonymous employee surveys, IT expenditure analysis, network and log scanning, business unit interviews, and voluntary declaration forms. No single method is sufficient on its own.

Q: What data is exposed by shadow AI?

A: Any data entered into an external AI tool may be transmitted to servers outside the EU, without confidentiality guarantees. Every prompt containing a name, email address, or any identifying information constitutes processing of personal data under the GDPR.

Q: When do AI Act transparency obligations take effect?

A: Article 4's AI literacy obligation has been applicable since 2 February 2025. Article 50's transparency obligations for systems interacting with natural persons apply from November 2026.

Q: What happens if we just block all AI tools?

A: Blocking without offering approved alternatives fails. Employees find workarounds through mobile access, personal VPNs, or undetected tools. Sustainable governance requires approved solutions, clear policy, and training alongside enforcement.
