Governance, Risk, and Responsible Use
Of course, with any powerful technology, especially one as fast-moving as AI, we must talk about governance and risk.
The good news is, this isn’t uncharted territory.
Many of you have already built strong governance processes for machine learning models, especially in areas like “next best action”, fraud detection, and risk modelling. These frameworks include testing for bias, explainability, performance monitoring, and human oversight.
But generative AI is different. It produces open-ended content, not just scores or classifications. It learns from broader and less structured datasets. And it generates entirely new responses, rather than making predictions from a defined set of variables.
This introduces new questions:
- How do you govern probabilistic outputs?
- How do you audit AI-generated language or imagery?
- How do you explain decisions when the logic isn’t always linear?
We now have the opportunity, and the obligation, to revisit and upgrade those existing governance frameworks, ensuring they are fit for purpose in a world where AI powers personalised conversations, investment recommendations, or disclosure narratives.
That means:
- Clear ownership and accountability for AI models
- Cultural competence, particularly around the safe and respectful use of Māori language and perspectives
- Technical competence to assess design, limitations, and risks
- Transparency for customers when AI influences a recommendation or decision
Good AI governance requires more than compliance; it demands ethical foresight, interdisciplinary skill, and a commitment to fairness.
It’s not a reason to hold back. It’s a reason to move forward with purpose and with care.