If you're managing IT at a mid-market company, you have a blind spot problem. In our State of AI in the Workplace 2025 report, we analyzed 160+ organizations and found that IT teams have visibility into less than 20% of the AI applications employees actually use.
In some organizations, employees have adopted 100+ unique AI tools while IT knows about fewer than 20 of them.
This isn't a future problem. It's already happening. Across the mid-market companies we studied, AI adoption is accelerating while governance visibility is collapsing.
Here's why this matters: every AI-related risk, whether it's a data leak, a compliance violation, identity sprawl, or a security threat, stems directly from this visibility deficit.
The security tools you already use (SSO logs, CASB solutions, expense reports, network monitoring) are catching less than one-fifth of AI adoption in your organization.
The other 80% operates in the shadows, processing your company's data, accessing sensitive systems, and creating compliance exposures you won't discover until something goes wrong.
Why Traditional Security Can't See AI
In our work with mid-market IT teams, we've identified a pattern: traditional security tools weren't designed to catch AI adoption, which happens faster and more quietly than any software rollout you've managed before.
The Four Blind Spots
Blind Spot 1: Embedded AI Features
AI copilots and assistants are now baked into applications you already approved. Microsoft 365, Notion, Zoom, and Google Workspace all ship with AI features that users can toggle on without IT involvement. No new SSO login events, procurement requests, or notifications.
Employees enable Microsoft Copilot, Notion AI, or Zoom AI Companion with a single click. From an IT perspective, they're just using the same tools they've always had. From a risk perspective, they've granted an AI system access to emails, calendars, documents, and chat history.
Blind Spot 2: Personal Card Purchases
Employees expense AI subscriptions under vague categories like "Software" or "Marketing Tools." ChatGPT Plus costs $20/month, Grammarly Premium is $12/month, and Midjourney is $10/month. These purchases slip through expense reports as miscellaneous software.
Free tier usage bypasses procurement entirely. ChatGPT, Claude, Gemini, and Perplexity all offer powerful free versions. No corporate card, no paper trail, completely invisible to IT.
Blind Spot 3: Developer-Deployed Models
Engineering teams don't wait for IT approval. Based on our analysis of mid-market organizations, this pattern shows up consistently wherever development teams control their own cloud budgets. They spin up AI services directly on cloud platforms, deploy open-source models like DeepSeek or Llama, and integrate code generation tools into their workflows.
According to our report, Product and Engineering teams collectively use 200+ unique AI applications—the highest of any department. These tools rarely touch your identity systems. They connect via API keys, run in development environments, and operate outside your corporate infrastructure.
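One way to get partial visibility here is to scan your own repositories for the conventional environment variable names that AI providers use for API keys. The sketch below is illustrative, not a definitive detector: it assumes you have local checkouts to scan, and the key names are common conventions rather than a complete list.

```python
#!/usr/bin/env python3
"""Sketch: surface developer AI usage by scanning repos for provider API key names.

Assumptions: you have local checkouts to scan, and your teams use the
conventional environment-variable names below (an illustrative list only).
"""
import re
from pathlib import Path

# Common env-var names for popular AI providers (extend for your environment).
AI_KEY_PATTERN = re.compile(
    r"\b(OPENAI_API_KEY|ANTHROPIC_API_KEY|GEMINI_API_KEY|"
    r"HF_TOKEN|REPLICATE_API_TOKEN)\b"
)

def scan_repo(root: Path) -> list[tuple[Path, int, str]]:
    """Return (file, line number, matched key name) for every hit under root."""
    hits = []
    for path in root.rglob("*"):
        if path.is_dir() or path.suffix in {".png", ".jpg", ".zip", ".bin"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            match = AI_KEY_PATTERN.search(line)
            if match:
                hits.append((path, lineno, match.group(1)))
    return hits

if __name__ == "__main__":
    for file, lineno, key in scan_repo(Path(".")):
        print(f"{file}:{lineno}: references {key}")
```

A hit doesn't prove an approved or unapproved deployment, but it tells you which teams are wiring AI providers into code, which is more than your identity systems will show you.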
Blind Spot 4: Department-Led Procurement
In mid-market companies, this is particularly challenging. You don't have the centralized procurement controls that enterprises use, but you're too large to track every purchase manually. Marketing, Sales, and Product teams often have budget authority to buy tools directly.
Marketing buys Jasper for content creation, Sales purchases Clay for prospecting automation, Design subscribes to Midjourney for visual assets, and Customer Success implements conversational intelligence platforms.
By the time you discover these tools, they're deeply embedded in workflows. Hundreds of prompts have been entered. Sensitive data has already been processed. And employees resist any attempt to remove tools they depend on.
Why Your Current Tools Miss AI
SSO logs only catch tools that integrate with your identity provider. Many AI platforms offer direct signup and never touch your SSO.
CASB solutions struggle with browser-based AI tools. Personal ChatGPT accounts accessed via HTTPS just look like regular web traffic.
Expense reports are too slow and too generic. AI purchases hide in broad software categories. By the time finance flags something suspicious, months of usage have occurred.
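You can claw back some of this signal by matching expense exports against known AI vendors. Here's a minimal sketch, assuming a CSV export with merchant, category, and amount columns (the column names and vendor list are illustrative):

```python
import csv

# Illustrative list of AI vendors to match against merchant names.
AI_VENDORS = {"openai", "anthropic", "midjourney", "grammarly",
              "jasper", "perplexity", "copy.ai", "writer"}

def flag_ai_expenses(csv_path: str) -> list[dict]:
    """Return expense rows whose merchant field matches a watched AI vendor."""
    flagged = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):  # assumes 'merchant'/'category'/'amount' columns
            merchant = row.get("merchant", "").lower()
            if any(vendor in merchant for vendor in AI_VENDORS):
                flagged.append(row)
    return flagged

for row in flag_ai_expenses("expenses.csv"):
    print(f"{row['merchant']}: {row.get('amount', '?')} ({row.get('category', '?')})")
```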
Network monitoring can identify domains, but encrypted traffic reveals nothing about what data is being shared or how AI tools are being used.
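Domain matching is still worth automating, though, because it at least tells you which AI services are being reached, even if not what's sent to them. A minimal sketch, assuming plain-text proxy or DNS logs and an illustrative watchlist of domains:

```python
from collections import Counter

# Illustrative AI service domains; extend with your own watchlist.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai",
              "gemini.google.com", "www.perplexity.ai"}

def count_ai_lookups(log_path: str) -> Counter:
    """Count log lines mentioning a watched AI domain (assumes plain-text logs)."""
    counts = Counter()
    with open(log_path) as f:
        for line in f:
            for domain in AI_DOMAINS:
                if domain in line:
                    counts[domain] += 1
    return counts

for domain, n in count_ai_lookups("dns.log").most_common():
    print(f"{domain}: {n} lookups")
```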
Manual surveys fail because employees don't think of AI as "software they need to report." They see it as a web service, similar to Google search. Or they forget the tools they use casually.
The Risks Created by Invisible AI Tools
Based on our analysis of mid-market organizations, we've identified that every AI risk category stems from the same root cause: you can't control what you can't see.
Risk 1: Data Security & Privacy Exposure
Employees input sensitive information into AI prompts every day: source code, customer PII, financial records, strategic plans, and proprietary algorithms. Without visibility into which AI tools employees use, you can't make informed decisions about where your data is going.
Our report reveals that Sales and Marketing teams alone use 170+ unique AI applications, many of which process customer data and prospect information. Customer Success and Support teams use 140+ AI apps, frequently handling PHI and PII.
An engineer debugging code pastes a proprietary algorithm into ChatGPT. The AI provides helpful suggestions. The problem is solved. But that code has now left your environment and sits on a third party's servers, subject to retention and training policies nobody reviewed. Your IP is out there, and you have no idea it happened.
Risk 2: Compliance & Regulatory Violations
Every unmonitored AI tool is a potential compliance violation waiting to be discovered during an audit.
GDPR, HIPAA, SOC 2, and PCI DSS all require organizations to control how sensitive data is processed. When you don't know which AI systems your teams use, you can't demonstrate those controls.
As one Head of Procurement at a global fintech company explained:
"We got a lot of pushback because of heavy FinTech and government regulations. People want to use AI, but they're sharing company information with public tools, which we can't allow due to confidential information and critical client data."
When regulators or auditors request your AI usage logs, incomplete visibility means you're providing incomplete answers. You're disclosing the 20% you know about while remaining unaware of the other 80%. Missing systems from your disclosure isn't just embarrassing—it's often treated as evidence of inadequate controls.
Risk 3: Identity & Access Chaos
Each AI tool creates another set of user accounts, permissions, and access paths. The complexity multiplies when you consider that AI copilots often inherit the full access permissions of the user: email, calendar, files, chat history, everything.
Your identity management system wasn't built to track AI-specific accounts. When employees sign up for AI tools independently, those accounts exist outside your identity lifecycle management.
Our report identifies identity and access sprawl as a major risk: "Orphaned accounts with lingering access to AI platforms create unauthorized entry points for hackers, posing significant security risks."
If a user account is compromised and that user has an AI copilot with full access permissions, the attacker gets an AI assistant that can rapidly search through emails, documents, and communications to find precisely what they're looking for.
Offboarding becomes a nightmare. You deactivate Salesforce, Slack, email, and VPN access when an employee leaves. But what about the 15 AI tools that employee signed up for independently? In our work with mid-market IT teams, we've seen cases where sales reps who left months ago still have access to conversational intelligence recordings and AI-powered prospect research tools.
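One pragmatic check: cross-reference your HR termination list against whatever AI account inventories you can assemble, such as vendor admin exports, expense data, or survey responses. A minimal sketch, assuming two CSV files keyed by email address (file names and columns are hypothetical):

```python
import csv

def load_emails(csv_path: str, column: str) -> set[str]:
    """Load a set of lowercase email addresses from one CSV column."""
    with open(csv_path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f)}

# Hypothetical exports: departed employees from HR, accounts from an AI vendor.
departed = load_emails("hr_terminations.csv", column="email")
ai_accounts = load_emails("ai_tool_accounts.csv", column="email")

# Accounts that should have been deactivated but still exist.
orphaned = departed & ai_accounts
for email in sorted(orphaned):
    print(f"Orphaned AI account: {email}")
```

This only catches tools you already know to export from, which is exactly the point: the inventory comes first.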
Risk 4: Operational & Cost Risks
When your Marketing, Sales, and Product teams adopt AI tools independently, you end up with massive redundancy.
For example, Marketing pays for Jasper, Product uses Writer, and Sales expenses Copy.ai. All three are AI content generation tools with overlapping capabilities. You're paying three vendors for nearly identical functionality because nobody has visibility into what's already been purchased.
Microsoft Copilot costs $30 per user per month. If you deployed it enterprise-wide but only achieved 30% adoption, you're burning thousands monthly on unused licenses while departments expense tools that duplicate its capabilities.
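The waste is easy to quantify once you plug in your own numbers. A minimal sketch with illustrative figures:

```python
# Illustrative numbers; substitute your own license count and adoption rate.
seats = 300
price_per_seat = 30.00   # Microsoft Copilot, USD per user per month
adoption = 0.30          # share of licensed users actually active

monthly_spend = seats * price_per_seat       # $9,000/month
wasted = monthly_spend * (1 - adoption)      # $6,300/month on unused seats
print(f"Monthly spend: ${monthly_spend:,.0f}; unused: ${wasted:,.0f}")
```

At 300 seats, 70% of a $9,000 monthly bill, $6,300, goes to seats nobody uses, before you count the duplicate tools departments are expensing on top.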
One VP of Business Systems at a major IT consulting firm explained why this keeps happening:
"Gemini and ChatGPT are great for generic tasks, but they falter for specific marketing content generation. When employees find specialized marketing platforms trained specifically for content creation, they gravitate to those tools that save them time, and we're right back where we were."
Without visibility, IT teams can't optimize spend, consolidate tools, or negotiate volume discounts.
Risk 5: Security Threats & Attack Surface
Every unknown AI tool is an unvetted entry point into your environment. These tools haven't gone through security reviews. You haven't threat modeled them, and you lack incident response procedures in case of a compromise.
Our report highlights "inadequate security controls" as a critical gap. "Many AI apps lack essential features such as RBAC, MFA, and SSO, resulting in monitoring gaps that traditional security tools can't address."
Consider this scenario: An employee's account is compromised. The attacker discovers the employee is using an unsanctioned AI coding tool, leverages it to generate malware designed to evade your security stack, then exfiltrates data through the AI tool's API, bypassing your DLP and CASB entirely.
Why the 80% Stays Hidden: The Discovery Gap
Based on our analysis of mid-market organizations, AI adoption patterns don't match anything IT teams have dealt with before.
Speed asymmetry creates perpetual lag. AI tools can be adopted in minutes through free-tier signups. Your discovery processes run quarterly or annually. By the time you detect one wave of tools, dozens more have appeared.
Categorization is genuinely hard. Our report uses G2's taxonomy of 37+ distinct AI categories: AI Chatbots, AI Writing Assistants, AI Code Generation, Voice Recognition, Conversational Intelligence, AI Image Generators, Text to Speech, AI Content Creation, AI Agents/Work AI, AI Video Generators, and many more. Tools often span multiple functions. Is Cursor a code editor or an AI copilot? Both.
Department variance makes patterns invisible. The adoption rates vary wildly across teams:
- Product & Engineering: 200+ unique applications
- Sales & Marketing: 170+ unique applications
- Customer Success & Support: 140+ unique applications
- Business Strategy & Operations: 100+ unique applications
- Design: 90+ unique applications
Each team has unique workflows, which means they have unique tool preferences. Tracking adoption manually is impossible.
Shadow AI persists despite approved alternatives. Even when IT provides sanctioned options, shadow adoption continues. Employees try the approved tool, find it doesn't fit their specific workflow, and quietly adopt specialized alternatives.
Start With Visibility
A visibility rate below 20% isn't just a troubling statistic; it's the foundation of every AI risk your organization faces.
Ritish, co-founder and CEO of Zluri, puts it simply:
"The organizations achieving effective AI governance started with a simple recognition: you need to see the problem before you can solve it. They didn't try to build perfect governance frameworks on day one. They focused on discovery first."
That's where you should start. Not with policies. Not with controls. With visibility.
We built Zluri specifically for mid-market IT teams managing this challenge. Our platform provides automated discovery across 56+ AI categories, department-level usage tracking, and access control capabilities, helping teams move from less than 20% visibility to comprehensive oversight.