In recent years, generative AI – including systems like ChatGPT – has become part of everyday life in many workplaces. More and more companies and professionals use these tools for brainstorming, problem-solving, explanations, or quickly clarifying technical questions. When used well, they can genuinely help you work more efficiently, react faster, and make better decisions. At the same time, the question of AI and data security cannot be ignored when these tools are used in a workplace environment.
Many people only realize later how much company information they hand over to an external provider in the process – client data, internal documents, source code, financial or strategic information. From an IT and data security perspective, this carries serious risks, especially if there are no clear company rules about what can and cannot be shared with AI.
While AI makes everyday work easier, we often input confidential company information into external systems without real control or internal guidelines. Even an “innocent” prompt can contain client data, contract details, source code, financial information or internal strategies. Especially for an IT company that uses AI daily, it is crucial to strictly separate open, public tools from AI systems running in an internal, controlled environment.
AI and Data Security in the Workplace – What Happens to Your Data?
Whenever you type something into an AI tool (for example, ChatGPT), you are in fact handing over data to an external provider: that data is stored on the provider’s servers at least temporarily, it can be logged, and depending on the settings or contractual terms, it may even be used to further train or improve the model. This is why the question of AI and data security needs to be thought through before these tools are used at work.
It also matters a lot what kind of environment you are using: a private, enterprise AI setup under a corporate agreement, properly configured (an internal, controlled environment), provides a very different level of data security than a simple, public, free or individual account where you just “type in anything”. In a development context this is especially critical: if you share source code, configuration files, API keys or security settings with a public AI, you are essentially outsourcing some of the company’s most important intellectual property and security layers to an external provider, which creates clear AI and data security risks.
Many companies – especially banks, large enterprises and tech players – have therefore restricted or put strict rules around using ChatGPT and similar AI tools, after it turned out that employees were feeding confidential code, meeting notes and internal documents into these systems, raising serious AI and data security concerns.

How Can Data Be Stored or “Leak” from AI Tools?
It’s not that the AI “consciously steals” your data, but technically, when you enter something, it becomes part of the provider’s system: your data may be stored temporarily or even longer-term on their servers, prompts and responses may be used in certain setups to further improve the models (if you don’t opt out of this, or if you’re not using an enterprise contract that explicitly excludes it), and some of the provider’s staff may have access to logs and admin interfaces. If the settings are misconfigured or human error occurs, your data may be exposed to more risk than necessary.
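To make this concrete, here is a minimal sketch (in Python, with a purely hypothetical endpoint, payload format and model name) of what “asking the AI” looks like at the technical level: the prompt is simply the body of an HTTP request sent to someone else’s servers, where it is handled according to the provider’s own rules.

```python
import requests

# Purely illustrative: the endpoint, payload shape and model name are placeholders,
# not any real provider's API. The point is that the prompt is an ordinary HTTP
# request body that leaves your machine and lands on infrastructure you do not control.
prompt = "Summarize this internal report: ..."  # whatever you paste here is transmitted as-is

response = requests.post(
    "https://api.example-ai-provider.com/v1/chat",           # hypothetical external endpoint
    headers={"Authorization": "Bearer <provider-api-key>"},  # your account, their servers
    json={"model": "some-model", "messages": [{"role": "user", "content": prompt}]},
    timeout=30,
)
print(response.status_code, response.text)
```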
It’s also important to understand that ChatGPT and other generative AI systems work based on existing data and patterns: they are trained on huge amounts of previous text, and if the terms of use allow it, they can also use new user inputs to fine-tune the models. This means that the ideas, formulations and structures you feed into the system – even if they don’t appear verbatim – can become part of the “knowledge mass” from which the AI generates answers for other users as well.
From an IT perspective, this is very similar to any third-party hosted SaaS tool: whatever you put into it no longer exists “only with you”. That’s why a key question is which data you allow to leave your own controlled environment, and what you keep in internal, secure AI solutions where data does not mix with public models and cannot be used to generate responses for other users.
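As a rough illustration of that second option, the sketch below assumes a model hosted entirely on the company’s own infrastructure (here, a local Ollama instance on its default port, with an open-weight model such as llama3 already pulled – both are assumptions about the setup). The prompt and the answer never leave machines the company controls.

```python
import requests

# Sketch of the "internal, controlled environment" idea: the model runs on your own
# infrastructure (assumed here: a local Ollama instance on its default port), so
# prompts and responses stay inside the company's network.
OLLAMA_URL = "http://localhost:11434/api/generate"  # local endpoint, no external provider

resp = requests.post(
    OLLAMA_URL,
    json={
        "model": "llama3",   # assumption: an open-weight model already pulled locally
        "prompt": "Explain the difference between optimistic and pessimistic locking.",
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["response"])
```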
What You Should Never Share with ChatGPT or Other AI Tools
At work, there are several types of data and information that you should never share with a public AI tool (such as the freely accessible version of ChatGPT). These include all kinds of access credentials: usernames, passwords, API keys and tokens, private keys, as well as VPN, database and server access details. Once these get out, they become an immediate, serious security risk, so under no circumstances should you share them with AI – not even under the label of “code generation” or “help me configure this”.
Likewise, you should not input client data or personally identifiable information (PII): names, addresses, emails, phone numbers, client IDs, contract numbers, as well as health, financial or other sensitive data. These are extremely risky from a data protection (e.g. GDPR) and contractual standpoint and could cause serious reputational damage if they leak in any way. You should also avoid sharing internal financial, strategic and business information such as pricing structures, margins, internal forecasts, business plans, M&A details or negotiation positions, because sharing these can easily breach contracts, internal policies or even competition rules.
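Where you do want AI help with a text that currently contains such details, a sensible first step is to pseudonymize it before it ever reaches the tool. The sketch below is deliberately minimal: the regular expressions and the client-ID format are assumptions, personal names are not caught at all, and this kind of automated redaction still needs a human check before anything is pasted into a public AI.

```python
import re

# Rough sketch of pseudonymizing free text before it is pasted into a public AI tool.
# The patterns are illustrative and deliberately simple; real client-ID or
# contract-number formats will differ and need their own rules.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),      # e-mail addresses
    (re.compile(r"\+?\d[\d\s/-]{7,}\d"), "<PHONE>"),          # phone-like number sequences
    (re.compile(r"\b(?:CUST|CTR)-\d{4,}\b"), "<CLIENT_ID>"),  # hypothetical ID/contract format
]

def redact(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Summarize: John Doe (john.doe@client.com, +36 30 123 4567), contract CTR-20231."))
# -> Summarize: John Doe (<EMAIL>, <PHONE>), contract <CLIENT_ID>.
# Note that the personal name slips through – simple rules are a help, not a guarantee.
```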
Internal documents, contracts and HR materials (employment contracts, NDAs, internal policies, compliance documents, performance reviews, HR notes) are typically classified as “internal only”, so it is not safe to paste them into an AI tool just to “improve the wording”. From a development perspective, security-critical source code and architecture are particularly sensitive: any code that implements security logic, encryption or access control, the full application architecture, or configuration files with sensitive settings. As a developer, it is tempting to “just ask AI”, but sharing such content can lead to serious security gaps and intellectual property (IP) risks.

Tellingly, if you ask the AI itself, it will clearly state that you should not share sensitive data with it, and that confidential company information should be kept out of these tools.
What You Can Share – And How to Do It Safely
The goal is not to ban AI entirely, but to use it wisely. Sharing general, anonymized text is usually low-risk: for example, asking for a summary on a generic topic, generating a template-style email or description, or using examples that do not contain real names, company names, numbers, identifiers or confidential details.
As a developer, you can safely ask general algorithmic or technical questions and work with non-sensitive, anonymized code. However, your code must not contain access credentials (passwords, API keys), encryption keys or security logic, and domain names, hostnames, IP addresses and database names should be masked or replaced with dummy values.
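A hedged sketch of what that masking can look like in practice is shown below. The patterns (API-key-style assignments, IPv4 addresses, internal-looking hostnames) are assumptions about common formats rather than an exhaustive filter, so the masked snippet still deserves a quick human review before it is shared.

```python
import re

# Illustrative helper for masking secrets and infrastructure details in a code snippet
# before it is shared with a public AI tool. The patterns are assumptions about common
# formats and will not catch everything.
MASKS = [
    (re.compile(r"""(api[_-]?key|password|secret|token)\s*=\s*['"][^'"]+['"]""", re.I),
     r"\1 = '<REDACTED>'"),                                        # credential assignments
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "203.0.113.10"),  # IPv4 -> documentation range
    (re.compile(r"\b[\w-]+\.(?:internal|corp|local)\.[\w.]+\b"), "db.example.com"),  # internal hostnames
]

def mask_snippet(code: str) -> str:
    for pattern, replacement in MASKS:
        code = pattern.sub(replacement, code)
    return code

snippet = 'db_host = "billing.internal.acme.hu"\napi_key = "sk-live-1234567890"\nallow_ip = "10.0.12.7"'
print(mask_snippet(snippet))
```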
If some content will be public anyway (e.g. public descriptions, marketing materials, certain parts of documentation), AI can safely be involved as a creative or stylistic helper – but even then, you should avoid pasting unnecessary internal, non-public details.
Special attention should be paid to IDE plugins, that is, AI tools integrated into the development environment (such as code completion tools, AI assistants, automatic refactoring or code analysis plugins). These plugins often run in the background and may send the following to external AI services for processing:
- parts of the source code, or even all of it,
- error reports and logs,
- the project structure, file names and configurations.
Because of this, it is particularly important to:
- use only IDE plugins and AI integrations that are approved within the company,
- understand the data handling practices of the given tool (what it sends, where, how long it is stored, whether it is used for training, etc.),
- disable or restrict, where the plugin settings allow it, the transmission of sensitive code, files, folders or configurations (a simple file-level check is sketched after this section),
- never allow the plugin to send access credentials, encryption keys, security logic or client-specific, contract-protected code to any third party.
For such AI tools integrated into the development environment, you should always assume that you are effectively sending data to an external third party – so the same rules apply as for any public AI chat: only anonymized, non-sensitive, non-contractually protected content may be shared.
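As a rough illustration of the “disable or restrict” point above, the sketch below walks a project directory and flags files that an AI integration should arguably never see. The glob patterns are assumptions about typically sensitive files; adapt them to your own repository layout and to whatever exclusion mechanism your approved plugin actually offers.

```python
from pathlib import Path

# Sketch of a project-level "never expose this to an AI integration" check.
# The patterns below are assumptions about where secrets typically live.
SENSITIVE_PATTERNS = [
    "**/.env", "**/*.pem", "**/*.key", "**/id_rsa*",
    "**/secrets*.*", "**/*.tfstate", "**/docker-compose*.yml",
]

def sensitive_files(project_root: str) -> list[Path]:
    root = Path(project_root)
    hits: set[Path] = set()
    for pattern in SENSITIVE_PATTERNS:
        hits.update(root.glob(pattern))
    return sorted(hits)

if __name__ == "__main__":
    for path in sensitive_files("."):
        print(f"Exclude from AI plugin context: {path}")
```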
How to Establish Safe AI Usage Practices Inside a Company
The biggest risk arises when everyone simply relies on their “gut feeling” to decide what they can share with an AI. For a software development or any IT company, this is particularly dangerous, because employees work with client projects, source code and internal systems. It is therefore worth creating an internal AI policy that clearly defines what is strictly forbidden to share, what can be shared and how it must be prepared (anonymization, dummy data), which AI tools are approved, and who to contact in case of doubt.
It is especially important to highlight that at most software companies, the source code is partly or entirely the property of the parent company or the client. Contracts typically prohibit sharing this with, or making it available to, any third party. In this sense, an external, public AI service also qualifies as a third party, which means that source code – and in many cases the related documentation and data – must not be shared with it for contractual reasons either.
Team education is also essential: you should demonstrate the risks with simple examples and make it clear that whatever you type into an AI no longer exists “only with you”. It also makes sense to separate public and internal AI usage: public AI tools can be used for creative, non-confidential tasks, while for client projects, source code and internal data, a much safer approach is to use an internal, controlled AI solution.
Summary: Should You Use It or Not?
The answer is not black and white. You should use AI if you want to save time on repetitive tasks, need ideas, structure or inspiration, or if you are creating templates and general, anonymized content.
However, you should not use public AI for solving password, access or security configuration issues, for processing client or personal data, for drafting/rephrasing internal contracts, financial or strategic documents, or for sharing critical source code and architecture details.
Think in terms of systems: have a company-wide AI policy, educate your team on what AI and data security mean in practice, and if you want to integrate AI more seriously into your processes, choose a solution where AI runs in an internal, controlled environment.
FAQ:
Is it safe to use public AI tools at work?
When you use a public AI tool, everything you type leaves your company’s controlled environment, can be stored on the provider’s servers, logged, and in some cases used to further train the model. Because of this, you must not enter sensitive or confidential data (passwords, client data, internal financial, legal or strategic information, full source code, HR documents). Public AI should only be used for general, anonymized, non-confidential content that you would not mind a third-party provider seeing.
What kind of data should never be shared with AI tools?
You should never share any access credentials (passwords, API keys, tokens, private keys, VPN or server access details), client or personal data (names, contact details, identifiers, contract or customer numbers), internal financial, strategic or legal information (pricing, margins, forecasts, business plans, contracts, HR documents), or critical source code and configuration (security logic, encryption, access control, full architecture). If a contract or internal policy says you must not share something with a third party, you must not put it into an AI service either.
How can a company introduce AI usage safely?
Safe AI usage requires an internal policy that clearly defines what must never be shared with AI, what kind of anonymized content is allowed, which tools and plugins may be used, and under which settings. Training is essential so everyone understands that anything entered into an AI tool technically goes to an external provider. If the company wants to use AI more seriously, it is advisable to set up an internal, controlled AI environment where company and client data do not mix with public models and are not used to serve other users.
Thank you for reading our article! If you’d like to dive deeper into AI, be sure to check out our other related articles on the topic.
