AI Ethics for Small Business
Navigate bias, transparency, and responsible AI use as a small business, including when and how to tell customers you are using AI.
In this lesson, you will:
- Understand the core principles of ethical AI use: transparency, fairness, and consent
- Spot and address bias in AI-generated content
- Know when and how to tell customers you’re using AI
Imagine you’re the owner of a small New Zealand winery that sells online. You’ve used an AI tool to generate product descriptions for your website, hoping to save time and make your listings more engaging. One day, a customer messages you complaining that the AI-generated description for a red wine used phrases like “bold and aggressive,” which they found off-putting and stereotypical. You’re confused — how could AI say something like that? You’re not sure how to address the issue or prevent it from happening again.
This scenario is more common than you might think. AI tools can unintentionally produce content that reflects biases, stereotypes, or inaccuracies present in the data they were trained on. As a small business owner, you don’t need to be an AI expert to avoid these pitfalls, but you do need to understand the basics of AI ethics and how to apply them in your work.
All company names and scenarios used in this course are fictitious and created for illustration and training purposes only. Any resemblance to real businesses or organisations is coincidental.
What is AI Ethics?
AI ethics is about using artificial intelligence in a way that is fair, respectful, and transparent. It’s not just about avoiding harm; it’s about ensuring your use of AI aligns with your values and the expectations of your customers, employees, and the wider community.
Here are three key principles to keep in mind:
1. Transparency
Be clear with your customers and stakeholders about when and how you use AI. For example:
- If you use AI to generate marketing content, include a note like, “This text was created with AI assistance.”
- If you use AI for customer service (e.g., chatbots), let customers know they’re interacting with AI and provide an option to speak to a human if needed.
Transparency builds trust. It also helps you take responsibility if AI makes a mistake.
2. Avoiding Bias
AI systems can unintentionally reinforce biases present in the data they’re trained on. For example:
- A local café using AI to create social media posts might end up with content that stereotypes certain groups (e.g., “men love our burgers, women prefer our salads”).
- A small accounting firm using AI for client emails might receive responses that inadvertently use gendered language (e.g., “Dear Sir” instead of a neutral greeting).
Bias in AI can harm your business’s reputation and alienate customers. To spot it:
- Review AI-generated content for stereotypes, assumptions, or unfair language.
- Ask someone from a different background to read it and give feedback.
3. Respecting Consent and Privacy
AI should never be used in ways that harm people or violate their rights. For example:
- Don’t use AI to generate content that could mislead customers (e.g., fake reviews or exaggerated claims).
- If you use AI to process customer data (e.g., for personalisation), ensure you have clear consent and follow privacy laws like the Privacy Act 2020, as covered in Lesson 8.
The AI for Good Principles
The AI for Good initiative, supported by the UN, outlines key principles for ethical AI use. These are particularly relevant for small businesses:
- Fairness: Ensure AI doesn’t disadvantage any group (e.g., avoid AI that recommends lower prices to certain customers based on biased data).
- Accountability: Take responsibility for AI’s outputs and have a plan for when things go wrong.
- Respect for People: Use AI in ways that protect dignity, privacy, and rights.
Cultural Sensitivity in Aotearoa
In Aotearoa New Zealand, ethical AI use includes being mindful of cultural context. AI tools are trained primarily on English-language data and may not reflect the values, language, or perspectives of Māori and Pasifika communities. Consider these points:
- Te Reo Māori: AI tools have limited capability with Te Reo Māori. If your business uses Te Reo in branding, communications, or product names, always have AI-generated content reviewed by a competent Te Reo speaker. Do not rely on AI for translations or cultural references.
- Tikanga: AI cannot understand tikanga Māori (customs and protocols). Content involving mihi, pepeha, karakia, or references to taonga should be written by people with the appropriate cultural knowledge, not generated by AI.
- Māori data sovereignty: Be aware that Māori data sovereignty principles (such as those outlined by Te Mana Raraunga) emphasise Māori rights and interests in data. If your business works with Māori communities or data, consider these principles when choosing AI tools.
- Inclusive representation: Review AI-generated content to ensure it represents the diversity of Aotearoa, including Māori, Pasifika, and other communities.
What to Do When AI Gets It Wrong
Even with care, AI can make mistakes. If it does:
- Acknowledge the error promptly. For example, if a customer receives an AI-generated email with incorrect information, respond quickly to correct it.
- Review your prompts and the tool’s settings to work out why the mistake happened. (As a user of an off-the-shelf AI tool, you can’t inspect its training data, but you can adjust how you use it.)
- Update your processes to prevent similar issues in the future (e.g., adding a human review step).
Common Misconceptions or Pitfalls
1. Thinking AI is “Too Big” to Affect You
Small businesses are not immune to AI ethics issues. In fact, because AI tools are often used without strict oversight in smaller organisations, the risk of bias or errors can be higher. For example, a local retailer using AI for pricing might accidentally set prices that are unfairly high for certain products due to flawed data.
2. Assuming AI Is Always Neutral
AI systems are not inherently neutral. They reflect the data they’re trained on, which can include historical biases. For example, AI used in hiring might favour candidates from certain backgrounds if the training data is skewed.
3. Ignoring the Human Element
AI should support, not replace, human judgment. Relying solely on AI without human oversight can lead to mistakes. Always have a plan to review AI outputs, especially for content that affects customers directly (e.g., marketing, customer service).
Try This: Check for Bias in AI-Generated Content
What you’ll need:
- A free AI tool: Microsoft Copilot (copilot.microsoft.com).
- A sample task, such as generating a product description, social media post, or customer email.
Steps to try today:
- Generate content with AI: Use a free AI tool to create a short piece of content relevant to your business (e.g., a social media post about your products).
- Review it for bias or inaccuracies:
  - Does it use stereotypes (e.g., “men love this, women prefer that”)?
  - Is the language respectful and inclusive?
  - Are there any factual errors?
  - Is the content accessible (clear language, appropriate for a wide audience)?
- Revise as needed: Edit the content to remove bias or inaccuracies. For example, change “men love our burgers” to “our burgers are loved by many customers.”
- Reflect: How could you make this a regular practice? Maybe set aside 10 minutes each week to review AI-generated content.
This simple step helps you stay ethical and maintain your business’s reputation.
Key Takeaway
AI ethics is about using technology responsibly. To avoid issues:
- Be transparent about when AI is used.
- Check AI-generated content for bias, stereotypes, or inaccuracies.
- Respect customer privacy and consent.
If AI makes a mistake, correct it quickly and learn from it. Small steps today can build trust and ensure your use of AI aligns with your values.