Generative AI has quietly become part of the SaaS ecosystem businesses use every day. Platforms like Zoom, Slack, Microsoft 365, and Salesforce now ship AI assistants that summarize meetings, draft content, and handle routine tasks. A recent survey found that 95% of U.S. businesses now use generative AI, a sharp increase from the year before. But this rapid spread of AI features worries security leaders: without adequate controls, sensitive information can be leaked or misused.
Shadow AI and Its Far-Reaching Risks
When employees use AI apps without IT's knowledge or approval, the result is shadow AI. It is the shadow IT problem of unsanctioned cloud apps all over again, now with AI services. Unauthorized use of AI platforms can expose organizations, often without their knowledge, to data privacy issues, compliance violations, and even disinformation risks.
We're already seeing these risks play out.
Samsung engineers accidentally leaked sensitive code to ChatGPT, prompting the company to temporarily ban generative AI use on corporate devices. Privacy and sovereignty issues have also emerged: Italy's regulators briefly banned ChatGPT over privacy violations, and multiple countries (as well as U.S. agencies like NASA and the Navy) have blocked or banned DeepSeek due to national security concerns around its data practices.
Part of the concern is that DeepSeek's privacy policy allows user data to be sent to servers in China, where the government can access it freely under Chinese law. DeepSeek also lacks safety controls: a Cisco study found it failed to block any harmful prompts, making it more exploitable by cybercriminals than other AI models. Security researchers have already observed cybercriminals using DeepSeek to generate malware and bypass fraud controls.
Why "Just Ban It" Isn't a Real Solution
Seeing these risks, some organizations simply block popular AI tools, but outright bans are a blunt instrument that rarely works. Generative AI is now embedded in so many applications that it is hard to fully disable. Employees routinely bypass bans: over half of U.S. workers report using GenAI tools at work without IT's approval. That shadow usage leaves security teams with less visibility and control, not more. Bans can also stifle innovation and competitiveness. Instead of blocking AI, security leaders should focus on governance, enabling employees to use it safely so the business gains its efficiency and insights without the attendant risks.
Embracing SaaS AI Governance
SaaS AI governance is the set of rules, procedures, and controls that ensure AI is used safely and responsibly across the business. Good governance means AI tools are used in ways that meet the company's security requirements, legal obligations, and ethical standards, rather than leaving everyone to do whatever they want. In a world where data constantly flows to third-party services, this kind of governance is essential for keeping track of where your data goes. The goal is to make AI safe to use, not to stop it from being used at all.
5 Key Steps for Effective AI Governance
To address the risks without losing the benefits, every security leader should put a SaaS AI governance plan on their agenda.
Here are some actionable steps to get started:
1. Inventory AI Usage
Begin by shining a light on the shadow. You can't govern what you don't know exists. Conduct a thorough audit of your environment to identify every AI-enhanced application, feature, or integration in use. Build a centralized inventory listing each AI tool, what it does, which teams use it, and what data it touches.
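As a concrete starting point, here is a minimal sketch of what that discovery can look like in a Microsoft 365 tenant: it enumerates the service principals (third-party apps) consented in the directory via the Graph API and flags names that suggest AI functionality. The Graph endpoints are standard v1.0 calls, but the token handling, keyword list, and "AI app" heuristic are illustrative assumptions to adapt to your environment.

```python
# Minimal sketch: list apps consented in a Microsoft 365 tenant via the
# Graph API and flag likely AI tools by name. Assumes an access token with
# Application.Read.All / Directory.Read.All; the keyword heuristic is
# illustrative only, not a reliable detection method.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "..."  # obtain via your usual OAuth client-credentials flow
AI_KEYWORDS = ("ai", "gpt", "copilot", "assistant", "llm")  # illustrative list

def list_service_principals(token: str):
    """Yield all service principals (apps) known to the tenant."""
    url = f"{GRAPH}/servicePrincipals?$select=id,displayName,appId"
    headers = {"Authorization": f"Bearer {token}"}
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        yield from data.get("value", [])
        url = data.get("@odata.nextLink")  # follow Graph pagination

if __name__ == "__main__":
    for sp in list_service_principals(TOKEN):
        name = (sp.get("displayName") or "").lower()
        if any(k in name for k in AI_KEYWORDS):
            print(f"Possible AI app: {sp['displayName']} (appId={sp['appId']})")
```

A name-based heuristic will miss AI features embedded inside approved suites (such as Copilot in Microsoft 365), so treat the script's output as a seed for the inventory, not the inventory itself.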
2. Define Usage Policies
Establish an AI acceptable use policy. Much like your standard IT usage policy, this should spell out which AI tools are approved (and any that are off-limits), what kinds of data can be fed into AI systems, and the process for vetting/approving new AI solutions.
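One way to keep such a policy enforceable rather than aspirational is to express it as data that tooling can consume. The sketch below is a hypothetical Python example: the tool names, data classifications, and is_permitted check are assumptions, not a standard, and a real deployment would wire this into approval workflows or DLP rules.

```python
# Minimal sketch of an AI acceptable-use policy expressed as data, so it can
# drive tooling (approval workflows, DLP rules, browser controls). Tool names
# and data classes are illustrative assumptions.
from dataclasses import dataclass

APPROVED_TOOLS = {"ChatGPT Enterprise", "Microsoft 365 Copilot"}
BANNED_TOOLS = {"DeepSeek"}  # per the security concerns discussed above
ALLOWED_DATA_CLASSES = {"public", "internal"}  # no confidential/regulated data

@dataclass
class AIUsageRequest:
    tool: str
    data_class: str  # classification of the data being shared

def is_permitted(req: AIUsageRequest) -> bool:
    """Return True only for approved tools handling low-sensitivity data."""
    if req.tool in BANNED_TOOLS:
        return False
    return req.tool in APPROVED_TOOLS and req.data_class in ALLOWED_DATA_CLASSES

print(is_permitted(AIUsageRequest("DeepSeek", "internal")))          # False
print(is_permitted(AIUsageRequest("ChatGPT Enterprise", "public")))  # True
```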
3. Monitor Data Access
Once AI tools are in play, put technical controls in place to monitor their activity and enforce least-privilege access. Ensure AI integrations only access the minimum data necessary. Use whatever admin consoles or logs your SaaS platforms provide (or consider a SaaS security platform) to keep an eye on AI integrations and data flows.
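To make that concrete, the sketch below scans an exported audit log (one JSON event per line) and alerts when an AI integration uses a scope outside its allow-list. The log schema here (actor, scope, resource fields) is hypothetical; map it to whatever fields your SaaS platform's admin logs actually emit.

```python
# Minimal sketch: scan an exported SaaS audit log (JSON lines) for AI
# integrations touching data outside an allow-list. The event schema is a
# hypothetical export format, not a specific vendor's.
import json

AI_INTEGRATIONS = {"meeting-summarizer-bot", "crm-ai-assistant"}  # from your inventory
ALLOWED_SCOPES = {
    "meeting-summarizer-bot": {"calendar.read", "transcripts.read"},
    "crm-ai-assistant": {"contacts.read"},
}

def flag_violations(log_path: str):
    """Yield (actor, scope, resource) for AI events outside allowed scopes."""
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)
            actor = event.get("actor")
            if actor in AI_INTEGRATIONS:
                scope = event.get("scope", "")
                if scope not in ALLOWED_SCOPES.get(actor, set()):
                    yield actor, scope, event.get("resource")

for actor, scope, resource in flag_violations("audit_log.jsonl"):
    print(f"ALERT: {actor} used unexpected scope {scope} on {resource}")
```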
4. Educate Employees
Make employees aware of the risks of unsanctioned AI tools and the importance of safe AI practices. Train staff on what is (and isn't) acceptable to share with AI platforms (for instance, no proprietary code or personal data in public chatbots). Make sure they understand the new AI usage policy and the reasons behind it.
5. Review and Adapt
Regularly scan for any new AI services or features popping up in your SaaS environment, and evaluate any updates to vendors' AI offerings. Stay informed on AI threats and vulnerabilities, for example, new prompt injection exploits or data leakage incidents, and update your policies accordingly.
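A simple way to operationalize this review is to diff each new inventory scan against a saved baseline and alert on anything that appeared since last time. The sketch below assumes the inventory from step 1 is available as a set of app names and persists the baseline to a plain text file; both are illustrative choices.

```python
# Minimal sketch: diff the current app inventory against a saved baseline to
# catch newly appeared AI services. Persisting to a text file is an
# illustrative choice; adapt to your own inventory store.
from pathlib import Path

BASELINE = Path("ai_inventory_baseline.txt")

def load_baseline() -> set[str]:
    return set(BASELINE.read_text().splitlines()) if BASELINE.exists() else set()

def review(current_apps: set[str]) -> set[str]:
    """Return apps not seen in the last review, then update the baseline."""
    new = current_apps - load_baseline()
    BASELINE.write_text("\n".join(sorted(current_apps)))
    return new

# current_apps would come from the inventory step above
for app in review({"meeting-summarizer-bot", "new-ai-notetaker"}):
    print(f"New AI app since last review: {app}")
```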
It's Time to Govern, Not Block
Welcome or not, generative AI is now part of the SaaS ecosystem, and it isn't going anywhere. That's why SaaS AI governance belongs on every CISO's agenda. Security leaders can no longer ignore these tools or hope employees won't use them. At the same time, banning AI outright is a blunt approach that is likely to backfire.
The safest, most responsible way to use AI is to manage it proactively. Establish boundaries so your company can capture AI's benefits without taking on undue risk. By making AI use visible, setting clear rules, and enforcing smart controls, companies can turn AI from a looming risk into a well-managed asset.
Gal Nakash — CPO and Cofounder at Reco