
Cybersecurity Awareness Month: Using AI Securely at Work

Author: Sayers
Date: October 3, 2025

AI is everywhere—and for good reason. Used well, it accelerates routine work, helps us learn faster, and takes the tedium out of repetitive tasks. In fact, three in four knowledge workers now use AI on the job to save time, boost creativity, and focus on higher-value work.

It’s equally important, however, to remember the risks—particularly when using AI tools that aren’t approved by your organization. Unvetted tools can expose sensitive data, violate regulations, or create legal discovery obligations you didn’t intend.

Interesting fact: According to Gartner, 78% of employees who use AI at work are “bringing their own AI”—adopting consumer tools on their own rather than waiting for a sanctioned solution. That convenience can create “shadow AI” risk if data handling isn’t governed. 

Following Policy

Being familiar with internal policy is the single best way to ensure your AI usage doesn’t turn into a data incident (or a discovery headache). Start with your organization’s AI and information security policies—these spell out what data can be shared, where, and under which controls.

Most corporate frameworks are converging on widely recognized standards:

• The NIST AI Risk Management Framework emphasizes governing, mapping, measuring, and managing AI risks—practical guardrails for anyone deploying or using AI.

• ISO/IEC 42001 (the first AI management system standard) provides a certifiable structure to manage AI risks and responsibilities across the lifecycle.

• In the EU, the AI Act is now law, with phased enforcement that will touch high-risk uses and transparency. Even if you’re not in Europe, multinational operations and vendors may bring these obligations to your doorstep.

Many companies are enabling secure AI usage through approved platforms (e.g., Microsoft 365 Copilot) that respect existing access controls and explicitly do not use your tenant data to train foundation models. Others will choose a more limited posture based on risk tolerance. Either approach is valid—what matters is that your behavior matches policy. 

The Dangers of Free Tools

1) Default training and data reuse. Many free AI tools improve their models using your chats by default. For instance, OpenAI notes that consumer ChatGPT conversations may be used to train models unless you opt out (business/enterprise tiers differ). If a question includes confidential details, you may have effectively shared them with a third party. 

2) Insecure storage and breaches. Misconfigurations happen—and when they do, user content can be exposed. In January 2025, researchers reported that DeepSeek had left an internet-exposed database containing over a million log lines, chat history, backend details, and secret keys, all accessible without authentication, before the issue was closed. It’s a vivid example of why unsanctioned services can amplify risk.

3) Cross-user data exposure incidents. In 2023, a bug in ChatGPT briefly allowed some users to see other users’ chat titles (and for a small subset, portions of first messages and limited payment metadata for ~1.2% of Plus subscribers). OpenAI hotfixed the issue, but the lesson stands: consumer services can have incidents, and you don’t control their stack.

4) Hallucinations and false confidence. Free AI tools can generate convincing but entirely fabricated information—known as hallucinations. These aren’t just minor factual slips; they can include made-up citations, non-existent tools, or incorrect technical advice delivered with confidence. Relying on such outputs without verification can lead to reputational damage, compliance issues, or even operational risk.

OpenAI Is Legally Required to Store Chat History Indefinitely

As of mid-2025, OpenAI is subject to a court-ordered legal hold in ongoing litigation that requires retaining user content indefinitely, including deleted chats, for certain tiers (ChatGPT Free/Plus/Pro/Team and API without Zero Data Retention). OpenAI has said the order does not apply to ChatGPT Enterprise/Edu or API customers using Zero Data Retention and that it is appealing the ruling. Practically, this means consumer-tier chats may persist under legal hold even after you hit “delete.”

If you must use ChatGPT personally, review your data controls and opt-out options—but the safest approach for work content is to route usage through sanctioned, enterprise-grade AI with contractual protections and tenant-isolated data handling.

Positive Ways to Interact with AI

Being aware of the risks associated with AI is key to leveraging its power securely. AI is an incredibly powerful technology that is changing the world around us. Here are some ways you can use it to positively change yours:

Upskill faster. Treat AI as your on-demand coach: ask for short learning plans, flashcards, scenario-based quizzes, and code/algebra walkthroughs. In controlled studies, AI tutoring has been shown to materially improve learning outcomes—sometimes in less time than traditional methods.

Automate the mundane—securely. Within approved tools, use AI to draft emails, summarize long threads, create first-draft documents, pull action items from meetings, or generate project plans from templates. Microsoft’s 2025 Work Trend Index reports that 30% of leaders say AI saves them over an hour a day.

Know the “jagged frontier.” AI supercharges creative ideation and drafting but can struggle with nuanced reasoning or domain-specific facts outside its competence. A large field experiment with 750+ Boston Consulting Group consultants found ~40% performance gains on creative tasks but performance declines when AI was used for tasks it wasn’t good at. Use AI where it shines; apply human review where accuracy is critical.

Use enterprise AI with strong data protections. Platforms like Microsoft 365 Copilot respect your existing permissions and do not use your tenant data to train foundation models; they also align with Zero Trust and existing compliance commitments. That’s the right place for corporate work.

Quick checklist before you prompt (with corporate data):

1. Is this an approved AI tool? 

2. What’s the data classification of what I’m about to paste? 

3. Do I need to mask or redact? 

4. Will the output be stored or shared? 

5. Who else could access this artifact later (internally or via discovery)? 
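Step 3 of the checklist—masking or redacting before you paste—can even be partially automated. Below is a minimal, illustrative Python sketch of a pre-prompt redaction pass; the pattern names and regexes are assumptions for demonstration only, and a real deployment would rely on your organization’s DLP tooling and data classification rules rather than a hand-rolled filter.

```python
import re

# Illustrative patterns only -- a real tool would use your organization's
# approved DLP library and classification policy, not ad-hoc regexes.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Mask common sensitive tokens before a prompt leaves your machine.

    Returns the redacted text plus the list of pattern categories that
    matched, so the caller can decide whether to send the prompt at all.
    """
    hits = []
    for name, pattern in PATTERNS.items():
        if pattern.search(text):
            hits.append(name)
            text = pattern.sub(f"[REDACTED-{name.upper()}]", text)
    return text, hits

clean, hits = redact("Contact jane.doe@example.com, key sk-abc123def456ghi789")
print(hits)   # categories that matched, e.g. email and api_key
print(clean)  # prompt text with matches masked
```

A check like this is a seatbelt, not a substitute for judgment: it catches obvious patterns, but the classification question in step 2 of the checklist still requires a human decision.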

Conclusion

AI can be a phenomenal accelerator for knowledge work—when used within policy and on approved platforms. Know your organization’s rules (and why they exist), prefer enterprise-grade AI with built-in protections, and match the task to the tech: ideation and drafting are ideal; regulated data and high-stakes decisions require caution and human oversight. Use these practices and resources to harness AI’s benefits while keeping your team—and your data—safe.
