First Published: 30 April 2026

Most organisations believe they have their AI use under control. They have a policy, a tool list, and a sense that someone is paying attention. What they actually have is AI distributed across every person in the organisation, data flowing into systems outside their jurisdiction, and a governance framework built for a version of AI that no longer matches how it is being used.
The gap between that assumption and the reality is where the real ethics risk sits.
There is a compounding problem. In real business deployments, not lab conditions, frontier AI agents violate ethical constraints 30 to 50 percent of the time (Ozturkcan and Bozdag, 2025). The tools your team is already using may be operating outside the parameters you think they have as often as half the time. And most marketing teams do not have anyone explicitly owning that problem.
In New Zealand, 69 percent of people use AI regularly. Only 34 percent trust it. And 44 percent believe the risks outweigh the benefits (KPMG, 2025). That is an audience watching to see whether organisations handle this responsibly. Marketing teams sit at the intersection of AI adoption and customer trust. How you manage the gap matters more here than in larger, more anonymous markets.
This guide covers where the real risks are, what four things you must get right regardless of where your team sits, and what to do this week. A one-page AI Usage Principles template is available to download alongside this guide at marketing.org.nz/resources.
Why This Matters Now for New Zealand Marketers
New Zealand's regulatory environment is shifting. In July 2025, the Government released its first AI strategy alongside MBIE's Responsible AI Guidance for Businesses. Neither document creates new obligations. Both confirm that existing law applies fully to AI use.
The laws that apply to marketers are already in force. The Privacy Act 2020 requires that personal information be accurate, relevant and not misleading before use (Privacy Act 2020, IPP 8). The Fair Trading Act 1986 prohibits misleading or deceptive conduct in trade (Fair Trading Act 1986, s 9). AI-generated images that misrepresent a product, deepfake endorsements, or content that misleads consumers about pricing or availability are covered by the Fair Trading Act regardless of how they were produced. The Advertising Standards Authority has confirmed its codes apply to AI-generated advertising content. Legal, decent, honest and truthful: those standards do not change because the content was machine-generated.
Internationally, the EU AI Act is already phasing in, with most obligations applying from August 2026 (European Parliament, 2024). If your clients, partners or parent companies operate in the EU, their compliance obligations will reach into your work. New Zealand's light-touch approach does not insulate you from that.
The practical position is this: you are not waiting for regulation to define your obligations. The frameworks already exist. Organisations building ethical practice now will adapt as requirements tighten. Those treating ethics as a future concern will face expensive retrofitting when they can least afford it.
The Risks Change as Your Capabilities Mature
Ethics is not a one-time assessment. The risks that matter when your team is experimenting with ChatGPT are different from the risks that matter when you are running automated campaigns at scale, which are different again when AI is embedded in how your organisation allocates budget, segments customers, and plans strategy.
I see this in every workshop I run. Teams at the early stage underestimate basic data hygiene risks. Teams further into automation have not asked what happens when their system runs a million times. Teams with AI embedded at the strategic level often have no governance structure at all, because they moved fast and assumed the framework would catch up.
Early adoption: the data hygiene stage
When your team is starting to use ChatGPT, Claude, Copilot or Gemini, the ethics question is primarily about data and verification. Someone pastes client data or customer lists into a free-tier tool. Someone publishes AI-generated content with fabricated statistics and does not notice until it is live. Someone uses a tool whose terms permit training on user inputs, without checking what that means for confidential client information.
The Privacy Act's Information Privacy Principle 11 restricts disclosure of personal information without authorisation (Privacy Act 2020, IPP 11). Entering customer data into an AI tool that trains on user inputs, without the customer's knowledge, is a real legal and reputational exposure. Check the data handling terms of every tool your team uses before allowing it near client data.
The practical minimum at this stage is straightforward: know what data is safe to share; verify every output before publication; disclose when AI generated or substantially shaped your content; and give people a clear path to escalate concerns. Most teams skip even this, and wonder why problems surface later.
Workflow automation: the bias at scale stage
Once you are automating workflows, the question changes. You are no longer asking whether this individual output is accurate. You are asking what patterns emerge when this process runs a thousand times. Who gets shown which offers? Whose messages look different?
Programmatic advertising algorithms that show different offers to different demographic groups, based on proxies like location, device type or browsing behaviour, are a documented discrimination risk (Brookings Institution, 2019). Personalisation engines trained on historical campaign data will optimise toward the demographics your past campaigns reached, and actively deprioritise those they did not. The Human Rights Act 1993 prohibits discrimination in the provision of goods and services on grounds including race, age and disability (Human Rights Act 1993, s 44). That applies to automated marketing systems.
Content generation at scale compounds the problem. When AI writes 500 email variants, reviewing each one is impractical. The biases in the training data surface in aggregate, in patterns your compliance review was not designed to catch.
At this stage you need human review at the high-impact decision points, not everywhere. You need regular audits of automated systems for disparate impact across demographic groups. And you need anomalies tracked. If your AI-driven campaign consistently underperforms with Māori or Pasifika audiences, that is a signal worth investigating before it becomes a complaint.
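As an illustration of what such an audit can look like, the sketch below compares the rate at which each audience group receives a favourable outcome against the best-served group, and flags large gaps. The column names, the example data and the 0.8 threshold (the common "four-fifths" heuristic) are assumptions for the sketch, not a compliance standard.

```python
# Minimal disparate-impact check over a campaign log: for each audience
# group, compute the rate of a favourable outcome (e.g. shown a premium
# offer) and compare it to the best-served group. A ratio below 0.8 is a
# common flag for further investigation, not a legal test.
from collections import defaultdict

def disparate_impact(records, group_key="audience_group", outcome_key="shown_offer"):
    shown = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r[group_key]] += 1
        if r[outcome_key]:
            shown[r[group_key]] += 1
    rates = {g: shown[g] / total[g] for g in total}
    best = max(rates.values()) or 1  # guard against an all-zero log
    # Ratio of each group's rate to the best-served group's rate.
    return {g: round(rate / best, 2) for g, rate in rates.items()}

campaign_log = [
    {"audience_group": "A", "shown_offer": True},
    {"audience_group": "A", "shown_offer": True},
    {"audience_group": "B", "shown_offer": True},
    {"audience_group": "B", "shown_offer": False},
]
print(disparate_impact(campaign_log))  # {'A': 1.0, 'B': 0.5} -> group B flagged
```

Run something like this over your campaign logs on a regular cycle and the pattern questions above stop being hypothetical.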
Strategic integration: the governance stage
When AI is embedded in how your organisation makes decisions, allocates budget and plans strategy, the risks are different again. Competitive pressure to bypass ethical guardrails because everyone else is doing it. Reputational events from decisions that technically comply with policy but violate public trust. Over-reliance on AI for decisions that require cultural context, community relationships or values judgements that no model can make.
New Zealand is a small market. Reputation damage travels fast and recovery takes years. "The algorithm did it" is not accountability. When AI makes a decision that affects customers, someone human must own the outcome.
The governance frameworks many teams built two years ago are probably insufficient for what they are doing now. Build in regular review cycles. Do not assume last year's principles fit this year's capabilities.
Four Things You Must Get Right
Regardless of where your team sits, these four principles apply. Get them right and you build ethical AI capability. Skip them and you are exposed.
1. Transparency about what AI can and cannot do
AI tools make mistakes. Be honest about them with clients, customers and your own team. If your chatbot cannot handle complex queries, say so. If your AI-generated content has a known error rate, own it. Overpromising destroys the trust that transparency builds.
New Zealand consumers have higher trust expectations than most markets. Seventy-one percent say advances in AI make trust more important, above the global average (Salesforce, 2025). Only 34 percent currently extend that trust to AI. Label AI-generated content. MBIE recommends it. The global direction is toward mandatory disclosure. Getting ahead of it now costs nothing and builds credibility you will need later.
2. Accountability for autonomous decisions
When AI makes a decision, someone human must own the outcome. Clear ownership, escalation paths and review mechanisms are not optional. The Privacy Commissioner has specifically recommended processes for human review of automated decisions (Office of the Privacy Commissioner, 2024). Build this into your team structure as an explicit function, not an addition to someone's existing workload.
For marketers, that means knowing who is responsible when the automated bidding system spends budget on the wrong audience. Who answers when the AI chatbot gives a customer incorrect information. Who reviews the outputs of an automated segmentation system before it determines which customers receive which offers. Name those people now, before you need them.
3. Bias mitigation as ongoing practice
Do not wait for annual reviews to check for bias. Build detection into your workflows. Monitor patterns. Give people a way to flag concerns without fear, and make it clear that flagging is expected, not discouraged.
In New Zealand, this has specific cultural dimensions that global frameworks do not account for. AI systems trained predominantly on North American or European data carry biases that are more consequential in a bicultural and multicultural context. Ask whether your AI tools understand the difference between Auckland's North Shore and South Auckland. Ask whether they can navigate cultural appropriateness within a diverse Pacific community. If you cannot answer those questions, you have not tested for it.
Te Mana Raraunga, the Māori Data Sovereignty Network, establishes that Māori data should be subject to Māori governance. Te Kāhui Raraunga has built on this with the Māori Data Governance Model and the Māori AI Governance Framework, which set out how the principle applies in practice. AI models trained on Māori cultural data, language or imagery without consent represent a specific reputational, ethical and cultural risk that is unique to this market. Any campaign or content workflow that engages with te reo Māori, tikanga, or Māori community data requires cultural oversight. This is not a box to tick. It is a relationship to build.
4. Human judgment in contexts that require it
Some decisions need human involvement. Customer complaints should not be resolved entirely by AI. Sensitive communications around health, finance or personal circumstances need human oversight. Content that engages cultural values, community identity or sensitive social topics needs human review. AI handles the volume. Humans handle the judgment.
Making Ethics Practical
The most common response I see when teams try to build AI ethics into their workflow is to look for a product that solves the problem. A governance platform. A compliance tool. An AI that checks the other AI.
That instinct is the problem.
There is no product that makes your team's AI use responsible. There is no software that replaces the judgment call about whether a piece of AI-generated content is accurate, appropriate, or culturally safe. Responsible AI use is a capability. You build it through clear principles, practiced judgment, and honest accountability. The tools keep changing. The mindset for using them well is what stays.
Name it right
Call it "AI usage principles" rather than "ethics policy." The framing matters. People engage with practical guidance. They resist compliance documents. Write a one-page version, give it to everyone using AI tools, and review it every six months. A downloadable template is available at marketing.org.nz/resources.
Build judgment through scenarios
Rules tell people what not to do. Scenarios build the muscle for situations they have not encountered yet. Walk your team through real examples before they encounter them:
A client asks you to use AI to generate testimonials for their website. How do you respond?
Your AI tool produces ad copy that performs well in testing but relies on fear-based messaging targeting elderly consumers. What do you do?
You discover your automated email segmentation is sending premium offers predominantly to higher-income postcodes. What is your next step?
Ethical judgment is a skill. You build it through practice, not policy documents.
Calibrate your decisions
Give your team a simple way to decide how much scrutiny a decision needs. Impact multiplied by irreversibility works well. High-impact decisions that cannot easily be reversed need more scrutiny. Low-impact decisions that can be corrected quickly do not.
An AI-generated social post: low impact, easily deleted. Move fast. An AI-driven pricing strategy that varies offers by customer segment: high impact, hard to reverse once customers notice the pattern. Slow down, review, get sign-off.
This prevents both the paralysis of treating every decision as high stakes and the complacency of treating none of them that way.
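For teams that want the heuristic written down, here is a minimal sketch. The 1-to-5 scales, the thresholds and the tier wording are illustrative assumptions, not a formal standard.

```python
# A minimal sketch of the impact x irreversibility triage described above.
# Scales, thresholds and tier names are illustrative assumptions.

def scrutiny_tier(impact: int, irreversibility: int) -> str:
    """Score a decision on two 1-5 scales and map it to a review tier.

    impact: how many customers the decision touches, and how materially.
    irreversibility: how hard it is to unwind once live
    (1 = deleted in seconds, 5 = effectively permanent).
    """
    score = impact * irreversibility  # range 1..25
    if score >= 15:
        return "slow down: senior review and sign-off before launch"
    if score >= 8:
        return "second pair of eyes before it ships"
    return "move fast: spot-check after the fact"

# An AI-generated social post: low impact, easily deleted.
print(scrutiny_tier(impact=1, irreversibility=1))
# AI-driven pricing that varies offers by segment: high impact, hard to reverse.
print(scrutiny_tier(impact=5, irreversibility=4))
```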
Keep your process lightweight
You do not need an ethics board. You need a named person responsible for AI ethics questions, a simple escalation path, and a quarterly review of flagged issues. A senior marketer can hold this responsibility. It does not require a dedicated hire. It requires someone who takes it seriously and has the authority to act.
The Business Case
The competitive case for getting AI ethics right is usually framed as risk mitigation. That understates it.
The tools are commoditised. ChatGPT, Claude, Copilot: available to every marketing team in New Zealand. Your competitors have the same capabilities. What is not commoditised is how trustworthy your use of those tools is. In a market where 34 percent of consumers trust AI and 71 percent say trust matters more than ever (KPMG, 2025; Salesforce, 2025), the gap between those two numbers is where competitive advantage sits.
In a small, relationship-driven market like New Zealand, trust compounds. One agency known for responsible AI use gets referrals from clients who care. One agency caught misusing customer data gets discussed at every industry event for the next three years. The reputational dynamics here are not the same as a large, anonymous market.
There is also a talent argument. The best people in marketing increasingly want to work somewhere that takes AI ethics seriously. And a regulatory readiness argument: New Zealand's light-touch approach will not hold indefinitely. Australia is developing its own frameworks. EU obligations are already reaching NZ organisations through client and parent company relationships. Teams that build ethical practice now will not be scrambling when requirements tighten.
What to Do This Week
If your team is just starting with AI: write the one-page AI Usage Principles document. Cover what data is safe to share, when outputs need to be verified, how to disclose AI use, and who to contact with concerns. A downloadable template is available at marketing.org.nz/resources.
If your team is automating workflows: map your highest-volume automated processes. Identify where bias could enter and where errors could compound. Add human review at the decision points that matter most. Start logging anomalies.
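Logging anomalies does not need tooling; a shared spreadsheet works, as does a few lines of script. The sketch below shows one possible shape for an anomaly record; the fields and file name are assumptions to adapt, not a prescribed format.

```python
# One possible shape for an anomaly log: a shared CSV that captures enough
# context for a quarterly review. Fields and file name are illustrative.
import csv
import datetime
import os

ANOMALY_LOG = "ai_anomaly_log.csv"  # hypothetical path; point at a shared location
FIELDS = ["date", "process", "description", "affected_segment", "severity", "owner"]

def log_anomaly(process, description, affected_segment, severity, owner):
    """Append one anomaly record so patterns survive staff turnover."""
    write_header = not os.path.exists(ANOMALY_LOG)
    with open(ANOMALY_LOG, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "date": datetime.date.today().isoformat(),
            "process": process,
            "description": description,
            "affected_segment": affected_segment,
            "severity": severity,
            "owner": owner,
        })

log_anomaly(
    process="email segmentation",
    description="premium offers skewing to higher-income postcodes",
    affected_segment="lower-income postcodes",
    severity="high",
    owner="marketing ops lead",
)
```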
If AI is embedded in your strategy: establish clear governance with named accountability for AI decisions. Build in review cycles. Do not assume the framework you wrote two years ago fits what your team is doing now.
Regardless of your stage, three actions apply immediately: name the person who owns AI ethics questions, check the data-handling terms of every tool your team uses, and start disclosing when AI generated or substantially shaped your content.
The marketing teams that will be in a strong position from here are not the ones with the most sophisticated AI. They are the ones their clients and customers trust to use it responsibly.
Source: Peter Mangin, with support from the Data Thought Leader group and Te Kāhui Raraunga, 30 April 2026