Someone in your team is already using ChatGPT (or something close to it). They’re drafting emails, summarising notes, shaping proposals, smoothing rough writing into something presentable. In a small business, that speed has real value. But it also creates problems.
Without guidance, different people make different calls. One team member might use AI to polish a paragraph, while another drops entire client documents into a chatbot. The problem isn’t the tool; it’s the lack of shared boundaries.
That’s why an AI usage policy is so important. It sets practical, business-specific rules around what’s okay, what’s not, and what needs a second look. And once it’s in place, your team can use AI productively and with confidence.
Curious how this plays out with tools like Microsoft Copilot? We explore this in more detail in Microsoft Copilot Readiness Assessment: Is Your Business Ready for AI?
Australia’s Regulatory and Legal Context
How AI Fits Into Australia’s Legal Framework
There’s no single law in Australia that covers AI. Instead, AI use is governed through existing legislation, most notably the Privacy Act 1988 (Cth) and the Australian Privacy Principles it contains.
If your business collects or uses personal information, including through generative AI tools, those obligations apply. They govern how you handle personal information regardless of the systems or tools involved.
What the Privacy Regulator Expects From AI Use
The Office of the Australian Information Commissioner (OAIC) has published practical guidance on using commercially available AI tools in a business setting.
That guidance makes two points especially clear:
- Personal information entered into an AI system is still subject to privacy law.
- Outputs that include personal information, or that could reasonably be used to identify someone, must be handled carefully, even when they are AI-generated.
The OAIC recommends that businesses avoid entering personal or sensitive data into public tools, due to the difficulty of controlling where that data ends up.
If you’re collecting, using, or disclosing personal information, even through tools powered by large language models, you’re responsible for how that data is handled.
Note: From 10 June 2025, individuals can take direct legal action under a new statutory tort for serious invasions of privacy. This raises the legal stakes for how personal information is managed, including when using AI. Learn more on the reforms from the Attorney-General’s Department.
As AI becomes part of everyday operations, it’s worth seeing how broader digital change is reshaping how businesses work. Learn more with AI for Small Business: Where to Start, What to Avoid, and What to Expect.
Practical Steps to Draft the Policy in a Small Business
Creating an AI usage policy doesn’t need to be a long, bureaucratic exercise. These five steps help you build a usable, effective policy that reflects how your team works.
Step 1: Map What’s Already Happening
Before writing any rules, get a snapshot of current behaviour. Many policies fail because they assume everyone is following procedure, when in reality people are already using AI tools informally.
Start by gathering information like:
- Who is using AI now: teams, roles, individuals?
- Which tools are in use: ChatGPT, Microsoft Copilot, Grammarly, Notion AI?
- How are these tools accessed: company accounts, personal logins, browser extensions?
- What kind of data is being entered: emails, meeting notes, proposals, pricing, client details?
You can gather this through short interviews, quick surveys, or even reviewing installed extensions. The goal is to understand behaviour so your policy reflects reality.
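If you want to go a step further than surveys, a short script can help with the inventory. The sketch below is a rough illustration only: it assumes Chrome’s default profile locations on Windows and macOS, which will vary across machines and browsers.

```python
# A minimal sketch for inventorying browser extensions on a machine.
# The profile paths below are assumptions (Chrome defaults on Windows
# and macOS); adjust them for the browsers your team actually uses.
import json
from pathlib import Path

CANDIDATE_DIRS = [
    Path.home() / "AppData/Local/Google/Chrome/User Data/Default/Extensions",      # Windows
    Path.home() / "Library/Application Support/Google/Chrome/Default/Extensions",  # macOS
]

def list_extensions():
    """Yield (extension_id, display_name) for each extension found."""
    for base in CANDIDATE_DIRS:
        if not base.is_dir():
            continue
        for ext_dir in base.iterdir():
            # Each extension ID folder contains one subfolder per installed
            # version, and each version holds a manifest.json with the name.
            for manifest in ext_dir.glob("*/manifest.json"):
                try:
                    name = json.loads(manifest.read_text(encoding="utf-8")).get("name", "?")
                except (OSError, json.JSONDecodeError):
                    name = "?"
                # Localised extensions show a placeholder like "__MSG_appName__".
                yield ext_dir.name, name
                break  # one version is enough for an inventory

if __name__ == "__main__":
    for ext_id, name in sorted(list_extensions()):
        print(f"{ext_id}\t{name}")
```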
Step 2: Choose Your Approved-Tool Posture
Every policy needs a firm line around what’s allowed and what’s not. Your approach will depend on how you balance opportunity with risk management.
Options include:
- Approved tools list: only tools reviewed and listed by the business are allowed.
- Approved accounts only: staff must use company-managed access to authorised tools.
- Restrictions on high-risk use cases: for example, anything involving customer data or internal financials.
This choice also affects long-term risk and productivity, so it needs to be strategic. Whichever posture you take, be clear: if a tool isn’t approved, it shouldn’t be used, and staff should know who to ask when in doubt. An IT Consulting & IT Strategy partner can make these decisions much easier.
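However you decide, the register itself can be tiny. Here’s a minimal sketch assuming the approved-tools-list posture; the tool names are placeholders, not a recommended product list.

```python
# A minimal sketch of an approved-tools register under the
# "approved tools list" posture. Entries are illustrative placeholders.
APPROVED_AI_TOOLS = {
    "Microsoft Copilot (company tenant)",
    "Grammarly Business (company account)",
}

def is_approved(tool: str) -> bool:
    """True only if the tool appears on the company register."""
    return tool in APPROVED_AI_TOOLS

# Anything not on the list defaults to "not approved - ask first".
print(is_approved("ChatGPT (personal account)"))  # False
```

The design choice worth copying is the default: anything not on the list is out until someone reviews it.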
Step 3: Define Proprietary Data in Plain Language
Your business holds valuable confidential data like templates, pricing models, workflows and documents. These shouldn’t end up in a prompt by accident.
Your policy therefore needs to define what counts as protected information. For example:
- Templates, scripts, and proposal frameworks
- Pricing logic or commercial models
- Strategic plans or project documents
- Source code, macros, automation scripts
- CRM exports or client strategy notes
You also need to make handling rules easy to understand. Is this content never to be used with AI? Is it permitted within internal-only systems with strict controls? Reduce ambiguity by spelling it out.
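One way to spell it out is a simple category-to-rule map that lives alongside the policy. The sketch below is illustrative: the categories echo the list above, and the rules are assumptions you would replace with your own.

```python
# A minimal sketch of a data-classification map: each protected category
# from the policy paired with a plain-language AI handling rule.
# Categories and rules here are illustrative, not recommendations.
HANDLING_RULES = {
    "templates and proposal frameworks": "Internal, company-managed AI tools only.",
    "pricing logic and commercial models": "Never enter into any AI tool.",
    "strategic plans and project documents": "Never enter into any AI tool.",
    "source code, macros, automation scripts": "Approved company-managed tools only.",
    "CRM exports and client strategy notes": "Never enter into any AI tool.",
}

for category, rule in HANDLING_RULES.items():
    print(f"{category}: {rule}")
```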
Your AI policy is one part of protecting business data. IT Security Services can also help.
Step 4: Create a “Paste Test” That Works in Real Life
Many privacy breaches happen because someone pastes too much, too fast. A simple human oversight check can prevent this.
Before pasting anything into an AI tool, ask yourself:
- Would I hesitate to share this outside the business?
- Does it identify a person?
- Could this information cause harm if exposed?
If the answer to any of these isn’t a confident “no,” pause and ask. This isn’t about blocking productivity; it’s about data protection and safe habits.
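You can back the human check with a lightweight automated one. The sketch below is a rough guardrail, not a data-loss-prevention system: the two regex patterns (email addresses and Australian-style phone numbers) are illustrative assumptions, and it will miss plenty, so the questions above still apply.

```python
# A minimal sketch of an automated "paste test". The patterns are
# illustrative assumptions; this catches only the most obvious
# identifiers and is no substitute for human judgement.
import re

PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone number": re.compile(r"(?<!\d)(?:\+61|0)[ -]?\d(?:[ -]?\d){8}\b"),
}

def paste_test(text: str) -> list[str]:
    """Return reasons the text should NOT be pasted into an AI tool."""
    return [
        f"looks like it contains a {label}"
        for label, pattern in PATTERNS.items()
        if pattern.search(text)
    ]

for warning in paste_test("Follow up with jane@example.com on 0412 345 678."):
    print("Stop and ask:", warning)
```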
Step 5: Set Up a Breach and Misuse Pathway
Even with a clear policy, mistakes will happen. What matters is how your team responds.
Your policy should explain:
- What counts as suspected misuse, such as uploading client data to a public tool.
- Who to contact if something feels off.
- What to do immediately: stop, document, escalate.
- How the business will review and respond.
Make it safe to speak up. If people are afraid of getting it wrong, they won’t report early. And small issues become real risks. Thinking this through is similar to Creating a Business Continuity Management Plan That Actually Works.
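To make “stop, document, escalate” concrete, it helps to capture every report in the same shape. Here’s a minimal sketch assuming a simple append-only log; the fields and file location are placeholders.

```python
# A minimal sketch of the "document" step in a breach pathway: append a
# consistent, timestamped record as soon as misuse is suspected.
# The fields and the log location are illustrative assumptions.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_incident_log.jsonl")  # hypothetical location

def record_incident(reporter: str, tool: str, summary: str) -> dict:
    """Append a timestamped incident record and return it for escalation."""
    record = {
        "reported_at": datetime.now(timezone.utc).isoformat(),
        "reporter": reporter,
        "tool": tool,
        "summary": summary,
        "status": "open",  # closed later by whoever owns the policy
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

record_incident("A. Staff", "public chatbot", "Client proposal pasted in error.")
```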
What a Responsible Use of AI Policy Looks Like
You don’t need pages of legal language to create a useful policy. What matters is that it’s clear and usable.
A good responsible AI use policy does five things:
- It’s specific. It names tools, use cases, and limits.
- It’s practical. It fits how your team actually works.
- It’s readable. Staff can scan it and know what to do.
- It’s actionable. It guides fast, safe decisions.
- It’s owned. Someone maintains and updates it.
If your policy does these things, it will be used. Not ignored.
Ready to Get Your AI Usage Policy Under Control?
Deployus works with GMs, practice managers, and internal IT leads who don’t have time to chase vague policies or untangle tech risks after the fact. If your team is already using AI tools, it’s time to make sure you’ve got the right rules in place.
An AI policy isn’t about slowing people down. It’s about making sure smart tools don’t create problems like data exposure or bad decision-making. When the rules make sense, teams can move faster.
The right policy doesn’t have to be long or complicated. It just needs to be specific enough to guide behaviour, and flexible enough to grow with your business.
If you’re ready to put real structure behind your AI use, start with AI Consulting.
Frequently Asked Questions
What is an AI usage policy?
An AI usage policy is a set of rules that outlines how your team can and can’t use AI tools at work. It covers things like approved platforms, data handling rules, and what to do if something goes wrong. The goal is to make AI useful without putting your business, clients, or data at risk.
How do I protect proprietary data when using AI?
Start by defining what counts as proprietary: internal templates, pricing logic, client documents, and similar material. Then make it policy that these must not be entered into public AI tools. Use company-managed accounts, keep inputs minimal, and review outputs before use. If you’re unsure, don’t paste.
What are the risks of unauthorised AI tool use?
When staff use AI tools without approval or oversight, it can lead to privacy breaches, data leaks, and poor-quality outputs. It also makes it hard to track what’s being shared or relied on. Unauthorised use creates blind spots that can quickly turn into real business risks.
How often should AI policies be updated?
Review your AI usage policy at least once a year, or sooner if tools change, regulations shift, or new risks emerge. AI is evolving fast, and your policy needs to keep up. Make sure someone owns it, and that version history is tracked.