
Five rules to govern AI in your business right now

Your staff are already using AI tools. Probably without telling you, probably with good intentions, and possibly with your client data. Here is what to do about it before it becomes a problem.

Dave Lane·7 min read·May 2025

I ask every business I work with the same question: does your team use AI tools like ChatGPT or Copilot?

The usual answer from leadership is "probably, I'm not sure." The usual answer from staff, when asked directly, is "yes, all the time."

That gap is the problem. AI tools are free, they are built into software people already use, and they genuinely save time. Of course people are using them. The question is not whether they are; it is whether anyone has thought about what goes into them.

This is not a technology issue. It is a governance issue. And it does not require a lengthy policy document. It requires five clear rules that people can actually apply.

Rule 1: What goes in stays out there

When a member of your team types something into ChatGPT, that text leaves your business and goes to a server you do not control. Depending on the tool and its settings, it may be used to train future AI models. And once it has gone, you have no practical way to get it back.

The rule is simple: no client data, no personal data, no contracts, no financials, nothing you would not want to see on the front page of a newspaper. Not because these tools are malicious (they are not), but because data shared outside your business is data you no longer control.

This happens more than you would expect. Staff copy a client email into an AI to help draft a reply. They paste a contract to get a summary. They include financial figures to get help with a presentation. All of it with entirely good intentions and no awareness of the risk.

Rule 2: A human checks everything before it goes out

AI tools produce confident-sounding output that is sometimes wrong. They will state incorrect facts, cite sources that do not exist, and produce text that reads well but is not accurate.

The rule is that nothing AI-generated leaves the business without being checked by the person responsible for it. Not because the AI is unreliable (it is often very useful), but because your name is on whatever goes out, not the AI's.

I have seen businesses send client proposals that contained AI-generated errors. I have seen reports go to boards with AI-fabricated statistics in them. In both cases, the person sending them had trusted the output without reading it properly.

Rule 3: Log significant use

You do not need to monitor every use of AI. But for work that matters (client-facing documents, board reports, analysis that feeds decisions), there should be a record that AI was involved and who reviewed it.

This is partly about accountability. If a decision was made on the basis of AI-generated analysis that later proved wrong, you need to be able to trace that. It is also about culture: logging significant use encourages people to think consciously about when they are using AI, rather than reaching for it reflexively without asking whether it is the right tool for the task.
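
To make that concrete, here is a sketch of what a single log entry might capture. The fields are illustrative, not a standard, so adapt them to your business:

Date: 14 May 2025
Task: Q2 board report, market analysis section
Tool: Microsoft Copilot (work account)
AI contribution: first draft of the summary
Reviewed by: [name], 15 May 2025

A shared spreadsheet with those five columns is enough. The format does not matter. What matters is that someone can answer "was AI involved here, and who checked it?" six months later.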

Rule 4: Enterprise tools, not personal accounts

There is a significant difference between someone using a free personal ChatGPT account and someone using Microsoft Copilot through your Microsoft 365 business subscription. The enterprise versions of these tools have data protection terms that personal accounts do not.

If your staff are using personal accounts, your data is likely being processed under consumer terms, not business terms. That is worth fixing. Most of the tools your team are already using have enterprise AI options that you may already be paying for and not using.

Rule 5: Review the policy every year

AI is moving faster than almost any other technology I have seen in 25 years of IT. The tools available today are fundamentally different from those available 12 months ago. Regulations are still developing. Guidance from insurers, professional bodies, and the ICO is evolving.

A policy written today will have gaps in a year. Schedule a review. The question is not whether the policy still sounds right; it is whether it still covers the tools and risks that are actually relevant to how your business uses AI now.

What most AI policies get wrong

I have seen AI policies that are three words long ("use AI responsibly") and ones that run to 40 pages that nobody has read. Both are useless, for different reasons.

A useful policy is one that a member of staff can apply to a real decision in the moment. It needs to be specific enough to answer real questions. It needs to reflect how your business actually works, not a generic template. And it needs to come from a conversation with the people who actually use these tools, not just the leadership team deciding what they think should happen.
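
To illustrate the level of specificity that works, here is one way a clause might read. This is a sketch to adapt, not wording to copy:

"You may use Copilot through your work Microsoft 365 account to draft and summarise documents. Do not paste client names, contract text, or financial figures into any AI tool. Read anything AI helped produce in full before it leaves the business; your name is on it, not the tool's."

Three sentences, and a member of staff can apply each one to a decision they will face this week.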

"The risk with AI is not that your staff will use it. It is that nobody has told them what is and is not acceptable. The gap between those two things is where problems happen."

If you want help building an AI policy that actually works for your business, policy and governance is part of what I cover. Start with a conversation. It does not need to be complicated.

Dave Lane

Fractional IT Director

25 years working across IT infrastructure, cyber security, risk, and governance. I work with business owners and MDs as their independent IT director. No vendor commissions. No managed services to sell.

Sound familiar?

If any of this resonates, let's talk. No sales process. Just an honest conversation about what you're dealing with.