Navigating Enterprise AI Conversations
Posted July 17, 2025 by Jamy Sneeden

AI remains a top priority across enterprises. Whether in boardroom strategy or IT operations, AI adoption and investments are accelerating. Venture capital firms reflect this momentum with one study reported an increase of AI application retention rate from 41% to 63% along with increased spending last year.
However, AI is an umbrella term that encompasses disparate use cases. Terms like “generative”, “agents/agentic”, and “AI powered” get thrown around and add more confusion than value. It’s important to understand different AI use cases will come with vastly different operational and security implications.
To bring clarity to enterprise AI conversations and use-cases, we will divide these broader AI conversations into three categories. We are going use these three separate tracks of conversation as a practical framework for categorizing enterprise AI initiatives.
- Utilizing Existing AI Solutions
- Developing Custom AI Workflows
- Training Models from Scratch
We can represent these categories into a ‘maturity pyramid’ as enterprises often start with step 1, leveraging out of the box AI solutions for relatively easy and immediate return to value, and progress to more advanced stages as their AI needs grow more complex.

Utilizing Existing AI Solutions
The most popular existing solutions being utilized in enterprises today include Microsoft CoPilot, Salesforce Agentforce, Anthropic Claude, OpenAI ChatGPT and more. There are also a large variety of solutions that act as ‘wrappers’ around these large generative AI backends that provide access to models with additional security functionality.
This is the natural starting point for enterprise AI. The power of AI can be harnessed and ready for use with very little time investment. In the case of CoPilot, it can be as simple as purchasing licenses and configuring access.
Although ease of implementation is a major advantage, that does not mean enterprises will be ready to see a positive ROI or that they will be safe from AI related cyber-attacks.
There are many reasons an enterprise may be considering investing into a holistic solution. Some examples include: M365 productivity enhancements (CoPilot can increase the speed of working in nearly all of the M365 applications), personal assistance by having an LLM act as a ‘personal assistant’ that has access to end user’s calendars and other proprietary information to help manage personal work data, software development support via integrated tools that help autocomplete code and reduce time spent on writing boilerplate code, and many more.
Enterprises may consider investing in out-of-the box AI solutions for several use cases such as:
- M365 productivity enhancements – Copilot can significantly accelerate workflows across Microsoft 365 applications.
- Personal assistance – LLMs can act as digital assistants, accessing calendars and empowering employees to work with less friction in their day-to-day tasks.
- Software development support – Integrated tools can autocomplete code and reduce time spent on boilerplate programming.
- Customer service – Chatbots can assist end users for common use cases and questions.
- And many more – There are countless possibilities an enterprise could deploy to increase productivity and output.
An enterprise concern is combating shadow AI. Shadow AI is unauthorized use of AI in the workplace. As AI becomes more prevalent in our day-to-day lives, and gets integrated into more and more places online, it becomes easier for the average employee to turn to AI to help with daily tasks. The advantage for the employee is an increase in productivity and potentially large time savings. The large risk for the enterprise, however, is accidently losing proprietary data to unvetted and unmanaged third parties. Providing a policy supported alternative can help deter shadow AI.
Implementing these solutions bring their own security considerations. Data is the core power of AI. Ensuring AI only has access to data the enterprise wants it to see is a major logistical problem. And that the end user interfacing with the AI can only access a specific subset of that data that they should have access to.
If we take CoPilot for example. CoPilot allows users to ask about and interface with any data that has ever been shared with them, shared with everyone in the company, or any ‘public’ data sources; for which is contained in the MS suite of solutions. That means end users can now quickly query for information and get immediate responses for over provisioned data. It also means ‘dark data’ from old repositories or previous employees that may not have been cleaned up properly becomes easily available again.
ROT data – data that is Redundant, Obsolete, or Trivial – can also impact the quality of an AI solution. Chatbot responses may rely on this older data, shadow data, or ROT data and provide inaccurate responses.
Data is also the core to compliance and regulatory concerns. If AI solutions are misconfigured, data is mislabeled, or users are using unauthorized AI tools, there could be data leakage which could potentially lead to fines. An example would be PHI being handled improperly leading to HIPAA violations.
And, with any AI solution, there is the potential for prompt injection. Prompt injection is an attack that manipulates responses by delivering specific instructions. This can be used for many malicious reasons including: Allowing the AI to output malicious instructions, poisoning future responses for the AI, forcing the AI to reveal other internal instruction sets, and more.
Securing AI deployments begins with clearly identifying the specific business use cases for AI. The identification of use-cases alone could be a journey, but worth the investment up front to ensure a solid foundation and direction for the enterprise AI goals for which can be measured down the road. Varying AI solutions have different built in security capabilities and will need to be augmented for true enterprise protection. For example, if an enterprise is worried about Shadow AI and their employees accidently leaking data to untrusted AI websites, there are solutions that will act as ‘firewalls’ in front public AIs to ensure sensitive data does not accidently get publicly exposed.
Another common use case for Microsoft CoPilot is increasing M365 productivity and related workflows. CoPilot has built in functionality to allow creation of custom GPTs, this allows tuning of various response quality settings, and other security parameters. However, the enterprise may need to use additional data observability solutions to ensure the M365 data CoPilot will be utilizing is labeled correctly and not being misused.
These solutions typically have built in ways to adjust the model guidelines to ensure quality responses, but it is up to the enterprise to ensure their users are utilizing AI securely and that the AI itself is accessing data appropriately.
For data, there are numerous observability, labelling, and monitoring solutions available. For AI usage, there are varying enterprise solutions that can act as AI firewalls, AI gateways, Shadow AI monitors, or a combination of these functionalities.
While off-the-shelf tools offer a fast path to value, some enterprises require more tailored solutions – At this stage, an enterprise may consider investing in larger AI operation teams and creating customized AI pipelines.
Developing Custom AI Workflows
The next level of AI adoption maturity is developing customized AI workflows. This can start with fine tuning existing models and can get as complex as augmenting user prompts with attached proprietary data sources.
Finetuning AI can help meld general chatbots into assistants with precise outputs that match specific use cases. These use cases can include adjusting prompt length, focusing on a specific domain of knowledge, adding a new skill (such as a new programming language), or using tools (such as calling functions in response to certain queries).
Beyond fine-tuning, retrieval-augmented generation (RAG) enhances user prompts by injecting relevant context from predetermined datasets. This approach allows enterprises to unlock proprietary data for AI use while also reducing hallucinations. By requiring the AI to cite from a trusted dataset, organizations can significantly improve the precision of their AI pipelines.
This stage is more complicated to secure as there are more moving pieces. However, there are common characteristics that are considerations for all AI Operation workflows. This includes model file integrity, proper testing, deployment, and monitoring of pipelines, protection against AI attacks such as prompt injection, and more.
Model scanning is important to ensure supply chain security when developing AI pipelines. This preliminary scanning analyzes the model itself for vulnerabilities, obfuscated malicious code, and other security risks.
Real time monitoring of model responses stops undesired prompts and outputs. Whether there are malicious prompt injection attempts or the model is outputting undesirable content, monitoring can help alert and keep these pipelines operational.
Automated red teaming is a solution that can continuously attempt to break AI pipelines autonomously. This is extremely helpful to ensure AI pipelines in development aren’t vulnerable to jailbreaks, prompt injecting, and other AI related attacks.
Whether you are creating agent-to-agent AI pipelines or running AI infrastructure locally for internal use, it is important to maintain observability and ensure secure operations of the entire pipeline, starting from the supply chain and ending in the end user’s hands.
Data is still a concern at this stage as well. With more sophisticated AI workflows, AI may be touching more internal data sources. This demands good data observability and labeling so the enterprise is aware of what data the AI has access to, how it is using it, and what data users are able to access through the AI. In addition, before AI can even be leveraged, the data may need to be swept, sanitized and audited for preparedness for AI consumption.
Training Models from Scratch
Pretraining large models from scratch is the most expensive part of the AI process. This involves taking a massive dataset (like roughly the size of the internet huge) and using that to tune weights in a model architecture to reach a desired output. OpenAI has indicated ChatGPT took over $100 million to train, and Anthropic has claimed they have model training costs of an expected $1 billion currently underway.
Those costs are hard to justify when utilizing existing (already trained) and readily available models can be relatively inexpensive. There is a trend of AI startups simply ‘wrapping’ around existing models like OpenAI’s ChatGPT to provide additional functionality without having to invest hundreds of millions of dollars into their own proprietary models. Even Microsoft uses a combination of OpenAI’s infrastructure for CoPilot’s base model.
As a result, it is exceedingly rare for the average enterprise to pursue pretraining their own models from scratch because of the enormous cost. It is much more common for businesses to finetune existing models and create custom AI-to-AI workflows as opposed to starting from the ground up.
If an enterprise would like to attempt to train their own models, the first hurdle is hardware. The requirements are a massive amount of compute, a large network to handle traffic between the compute, storage to store the large amounts of data and finished models, and a place to power and cool the infrastructure.
These are traditional datacenter requirements; however, AI training demands additional computing power leveraging advanced GPUs which increases the cost all around. The GPUs are expensive themselves, but they also significantly increase power requirements, they produce more heat, and they will need exceptionally advanced networks to commute data to work together seamlessly.
Once the hardware is ready to go, the business can begin the process of training their model which includes data collection, data preprocessing, model architecture, pretraining, evaluation, fine-tuning, then deployment. Each step of the process is a difficult task and will require significant investment in human talent which may require additional hiring in spaces like data science and machine learning engineers.
Conclusion
As enterprises continue to explore the transformative potential of AI, it’s critical to approach these conversations with clarity, structure, and a deep understanding of operational and security implications. By categorizing AI initiatives into three distinct tracks—leveraging existing solutions, customizing workflows, and training models from scratch—organizations can better align their strategies with their capabilities, risk tolerance, and business goals.
Each path presents unique challenges and opportunities. Off-the-shelf tools like Microsoft Copilot or Salesforce Agentforce offer rapid deployment and scalability but require strong governance to mitigate risks like data leakage and shadow AI. Custom workflows enable tailored intelligence and tighter integration with enterprise data but demand robust DevOps and security practices. Meanwhile, training models from scratch remains a niche endeavor, best suited for organizations with hyperscale infrastructure and specialized expertise.
Ultimately, successful enterprise AI adoption hinges on more than just technical implementation—it requires thoughtful conversation and planning. By framing discussions around these categories, stakeholders can more effectively evaluate trade-offs, prioritize investments, and build AI strategies that are not only innovative but also secure, sustainable, and aligned with enterprise values.
If your enterprise would like help ensuring they are prepared to deploy and utilize an AI solution securely, or would like assistance understanding the various enterprise AI security solutions in the space, schedule a call with us by reaching out to: hello@sayers.com