
GOV-042973
Generative Artificial Intelligence Policy

Owner: Head of Information and Technology Integration
Approver: Deputy Chief Executive – Technology & Data
Date approved: June 2024
Date of next review: June 2025
1. Objective
The objective of this policy is to ensure the responsible, ethical, and sustainable use of
Generative AI Models and Services at ACC, and to ensure that ACC complies with all relevant
legislation and government guidance.
2. Scope
This policy applies to all our people, including permanent and temporary employees,
consultants, contractors, and organisations (including vendors and other third parties)
engaged to undertake work on behalf of ACC.
This policy covers:
▪ All Generative Artificial Intelligence (AI) Models and Services (Generative AI Models and Services), and
▪ All data types, including all ACC intellectual property (IP), data sets, and personally identifiable information (PII).
3. Policy statements
3.1 We are transparent about all Generative AI usage.
ACC staff are transparent about the use of Generative AI Models and Services, including any
potential limitations or biases.
3.2 We always have human oversight throughout the use of Generative AI Models or
Services
Human oversight must be in place throughout and at the conclusion of the use of any
Generative AI Model or Service to monitor outputs and intervene if necessary to ensure the
output is accurate and the technology is being used ethically and responsibly.
Accident Compensation Corporation
Page 1 of 6
Generative Artificial Intelligence Policy
Approval date: June 2024
Review date: June 2025
3.3 We will make data security and privacy paramount in the design and use of
Generative AI Models and Services
All Generative AI Models and Services must be designed, developed, and used with privacy
and security in mind. Appropriate security controls and measures must be implemented to
protect against cyber threats and any unauthorised access to or sharing of information.
All new instances of AI technology, as well as any new features or use cases for existing AI
systems, must follow standard approvals and governance processes. This includes obtaining
the necessary clearances through the Certification and Accreditation process and securing
approval from the Change Advisory Board or Solution Alignment Board (as applicable) before
any deployment can proceed.
3.4 We will actively protect Mātauranga Māori, tikanga, and taonga (Māori Protected
Materials).
Māori Protected Materials must not be entered into Generative AI Models and Services where
doing so could threaten the integrity of the materials, Māori control over the materials, or the
cultural, economic, or other potential of the materials to Māori.
3.5 We will comply with all applicable laws and associated policies.
ACC staff will ensure that the use of any Generative AI Model or Service is compliant with
applicable laws and ACC policies, and data is protected through appropriate data privacy and
security safeguards.
3.6 We will apply an ethical lens to all Generative AI Models and Services use
All large scale (or structured) uses of Generative AI Models and Services at ACC must be
reviewed by the ACC Ethics Panel via the Privacy and Ethics Risk Assessment process prior
to implementation or use.
3.7 We will consider and take reasonable steps to protect and respect ACC and third-
party intellectual property rights.
ACC staff should not enter ACC or third-party intellectual property into third-party Generative
AI Models or Services if doing so would put ACC’s intellectual property at risk or infringe third-
party intellectual property rights.
ACC staff must not:
▪ use or publish outputs generated by Generative AI Models or Services (particularly
images, audio, music or video) where doing so would raise a real risk of infringing third-
party intellectual property rights
▪ use external Generative AI Models or Services for the development of business
outputs or tools (such as software, software applications) for use in ACC’s business, if
ACC’s ownership of, or right to use, cannot be assured.
3.8 When an incident or breach occurs, we fix it and learn from it
When an issue occurs, we fix it and learn from it. We do this in a transparent and constructive
way and seek longer term solutions that help prevent similar events from occurring in the
future.
4. Accountabilities
The Deputy Chief Executive – Technology & Data is accountable to the Chief Executive for
implementing this policy and together with the Executive has overall management
responsibility for the appropriate use of Generative AI Models and Services at ACC.
The Head of Information and Technology Integration is accountable to the Deputy Chief
Executive – Technology & Data for implementing and monitoring compliance with this policy.
5. Roles and responsibilities

Deputy Chief Executive – Technology & Data
▪ Overall accountability for ensuring effective and responsible use of Generative AI Models and Services at ACC in line with ACC’s risk appetite.
▪ Approve this policy.

Deputy Chief Executives
▪ Overall responsibility for managing risks relating to their people’s use of Generative AI Models and Services.
▪ Ensure quarterly monitoring of their Business Group’s compliance with this policy is undertaken and results provided to Enterprise Risk & Compliance.

Head of Information and Technology Integration (Policy Owner)
▪ Implementation of this policy.
▪ Ensure suitable communication, training and guidance are provided to business groups to embed this policy and related guidelines and standards into business activities.
▪ Provide advice and support to business groups in relation to this policy.
▪ Regularly monitor overall compliance with this policy and any associated procedures, reporting to Enterprise Risk & Compliance as required.
▪ Assist the business with breach management and mitigation activities as required.
▪ Review and update this policy in line with the Corporate Policy Governance Framework.

Head of Information and Technology Integration (Policy Lead)
▪ Participate in investigations of breaches of this policy as required, and in the development and review of this policy and related guidelines.
▪ Ensure the policy owner is informed of any potential future changes that may affect this policy and related guidelines.
▪ Ensure associated standards, procedures and guidelines are maintained.

Manager of Privacy
▪ Reviews requests to extend the use cases for new AI Models and Services via the Privacy Risk Assessment process.
▪ Supports the ongoing review of AI-related research, quality improvement activities and data requests in ACC via the Ethics Panel.
▪ Ensures support is available for AI-related ethical decision making.

Our People
▪ Be aware of their responsibilities under this policy.
▪ Comply with this policy and related procedures relevant to their role.
▪ Complete all mandatory Generative AI Models and Services training.
▪ Remain alert to potential breaches of this policy and report potential and actual breaches to their manager.

Additionally, People Leaders:
▪ Ensure all people in their team are aware of this policy and the Generative AI Models and Services usage guidelines.
▪ Support the embedding of effective Generative AI Models and Services practices within their teams.
▪ Ensure their people complete mandatory Generative AI Models and Services training.
▪ Ensure that:
   i. any and all breaches brought to their attention are documented; and
   ii. notification of the breach is provided to the policy owner as soon as is reasonable.
6. Measures of success and compliance management
The policy owner will assess the effectiveness of this policy based on the following measures
of success:
▪ Quantum of significant policy breaches.
▪ NIST AI Risk Management Framework control enhancement actions completion rates.
The policy owner will monitor compliance with the policy as follows:
▪ Completion of mandatory training modules in line with enterprise targets.
▪ Policy compliance rates (via attestation, monitoring tools, audits).
▪ NIST AI Risk Management Framework controls testing results.
▪ A central register recording breaches of the Generative AI Models and Services Policy is
held and maintained by the policy owner.
7. Non-compliance
Failure to comply with this policy may be considered a breach of the Code of Conduct.
Any action taken because of a breach (actual or potential) of any of the obligations set out in
this policy will be conducted in good faith, a fair process will be followed, and the person
involved will have a full opportunity to respond to the concerns or allegations and have access
to appropriate support, advice, or representation.
8. Contacts
For any enquiries about issues of interpretation or management of the policy please contact
the Head of Information and Technology Integration.
9. Definitions
In this policy the following definitions apply:
Generative AI Models and Services: Generative Artificial Intelligence (Gen AI) describes algorithms that can be used to create new content, including code, images, text, simulations, and videos. Gen AI can use prompts or questions to generate text or images that closely resemble human-created content.

Large scale (or structured) uses of Generative AI Models and Services: Any usage which involves the systematic application of generative AI models to generate content, insights, or solutions within systems, platforms or solutions used as part of formal business processes.

Compliance: An ongoing process and the outcome of ACC meeting its compliance obligations.

Issue: A set of circumstances that gives rise to the realisation of a risk, control failure or incident that requires management attention.

Human Oversight: The process of having humans monitor, review, and guide the outputs of Generative AI Models and Services within the organisation. It ensures that AI solutions operate in alignment with ACC’s policies and standards.

Ethical Use: The use of AI systems in a manner that respects human rights, avoids discrimination, and ensures transparency and human oversight at all relevant steps of the development lifecycle or use.
10. References
This policy should be read in conjunction with:
Cloud Computing Policy
Code of Conduct
Personal Information and Privacy Policy
Social Media Policy
Information Management Policy
Information Security Policy
Generative AI Models and Services Policy Guidelines
11. Version control

Version  Date        Change reason
0.1      02/08/2023  Full update
0.2      03/10/2023  Policy review date updated
0.3      14/12/2023  Minor updates from reviews
0.4      13/02/2024  Minor update to contact details
0.5      24/05/2024  Minor update to a link
0.6      13/06/2024  Updated to new Policy Governance Framework, including new template
1.0      06/08/2024  Approved for release
1.1      17/12/2024  Minor update to business group references

Generative AI Models and Services Policy
Guidelines
Objective of the policy
The Generative AI Models and Services Policy outlines ACC's position on the appropriate uses of any Generative
AI Models or Services, including artificial intelligence large language models (LLMs).
Our goal is to make sure that this kind of technology is used and developed in a way that is responsible, ethical,
and sustainable, focusing on the long-term welfare of New Zealanders.
Objective of the guidelines
These guidelines provide information on the correct usage standards that our people need to follow when using
Generative AI Models and Services. Before using Generative AI, our people should consider the people and
communities who are affected by or involved in the expected use case. To get assistance with this assessment,
please reach out to the Privacy and Ethics team via [email address].
These guidelines complement our Generative AI Models and Services Policy. Please also think about how these
guidelines should be applied in the area that you work.
Scope of these Guidelines
These guidelines mainly target Generative AI Models and Services that employ LLM techniques to create text.
However, Generative AI can learn to produce content in almost any media format, such as (but not limited to)
text, images, video, audio, interactive media, or any mix of them. These guidelines should be followed for all
Generative AI Models and Services that ACC evaluates, no matter what the format of the media created by that
Model or Service is.
Who this applies to
These guidelines apply to all ACC people, including employees, secondees, accredited employer providers, and
independent contractors.
For specific Accountabilities, Roles, and Responsibilities, please see our Generative AI Models and Services
Policy.
People and their data matter
When we use Generative AI, we must respect people and their data. We need to use Generative AI in a way that
is trustworthy and reliable, and to do that, we need to consider the people and communities involved, being
careful and thoughtful about how we use Generative AI and making sure we always have human supervision
along the way.
Use only ACC approved Generative AI Models or Services
ACC people should only ever use an ACC approved Generative AI Model or Service when carrying out any ACC
business or work, or while using any ACC property. A list of all ACC approved Generative AI Models and Services
can be found on the AI Te Pātaka page.
Protection of certain kinds of information is paramount
The use of Generative AI must support the protection of personal information of our clients, the intellectual
property and business confidentiality of our organisation, and the information accessible to our staff. The use of
Generative AI should always comply with the Privacy Act and treat personal information as taonga.
Take our guiding principles into account
When considering whether a use case for a Generative AI Model or Service is acceptable, the guiding principles
of our Huakina Te Rā Strategy should be used to inform decision-making. For clarity, the principles are:
• Whāia te tika | We strive to do what is right.
• Whāia te pono | We undertake to act justly.
• Whāia te aroha | We are considerate of everyone.
• Mo te oranga whānau | We improve the lives of whānau.
• Ki te ao mārama | We strive to grow and evolve.
Use privacy settings when you can
Where privacy settings are available on the Generative AI technology, users should opt out of sharing
information with the Generative AI Model or Service and opt in to software filters. Results-Based Accountability
settings should be maintained despite Generative AI use; that is, if the information cannot be shared with the
rest of the organisation, it is not appropriate for use in Generative AI.
Examples of acceptable usage:
These are broken into two categories: Small and Medium/Large scale.
Once an ACC employee has read the Generative AI Models and Services Policy and the associated Generative AI
Models and Services Guidelines, and completed any necessary training, they will be expected to make
judgement calls on a case-by-case basis for small scale, low risk uses of approved tools and services.
In the case of Medium/Large scale use, established controls on privacy, security, information management and
technology must be followed, and Manager approval should be gained in all cases.
1. Small Scale, Low Risk Use: It is acceptable to use an ACC approved Generative AI Tool or Service to:
o Draft Communications
o Draft Standard Operating Procedure (SOPs)
o Draft business rules
o Draft FLIS documents
o Review and summarise existing publications
o Draft survey questions
o Draft other business-related documentation (subject to policy settings and guidelines)
(In these instances, the draft documents must always be reviewed, edited, and finalised by a human,
checking for, amongst other things, accuracy, factual correctness, and bias, and that principles have been
adhered to, with subjective judgements captured alongside rationale.)
o Use an ACC approved Generative AI Model or Service for coding development, provided no
bespoke or proprietary code is shared with the technology. All code output needs to be
reviewed by a professional software developer for accuracy
o Use an ACC approved Generative AI Model or Service as a look-up function, asking questions to
seek information. However, Generative AI should not be used as a single source of truth. Any
facts, statistics or other data generated by the tool should be verified independently by the user,
to manage any biases, assumptions or hallucinations made by the tool when summarising and
presenting information.
2. Medium/Large Scale Use, Medium Risk: These are use cases that may be acceptable, but only after
prior privacy, security, information management and technology assessments, and manager approval.
Subject to these review processes, and using only an ACC approved Generative AI Model or Service, it
may be acceptable to:
o Procure, design, or co-design Generative AI Models or Services for use in our organisation, provided
risks are formally identified, management plans are recorded, and relevant controls occur at
determined steps of the delivery lifecycle
o Use Generative AI Models or Services in analysis, research, and quality improvement
activities. Research and quality improvement activities that use Generative AI and health
information are subject to the National Ethical Standards for Health and Disability Research
and Quality Improvement, in particular Standards 13.1 - 13.8 and require ethical review by
the ACC Ethics Panel, who may refer teams on to the Health and Disability Ethics
Committees for approval, prior to proceeding
o Use Generative AI Models or Services to provide initial review, summary, and high-level analysis of
publicly available business documents (such as policy documents) that either ACC owns or is
permitted to copy and input into a Generative AI Model or Service. Any text input into the
Generative AI Model or Service must not include any personal information (including reference to
elected officials) and any generated text must be reviewed by a relevant Subject Matter Expert to
ensure political neutrality, manage bias, and remove any potentially disparaging remarks regarding
ACC, ACC people, or any elected officials
o Use Generative AI Models or Services with ethnicity, gender or social deprivation status or data as
part of a research question or hypothesis, so long as:
o the use is consulted on with representatives of the affected group
o all inputs and outputs are reviewed by an ethics committee
o such uses or purposes are not related to client care and do not involve the entry of
any personal or health information into an external Generative AI Model or Service
o results are routinely audited by a professional for accuracy and biases
o Input images into a Generative AI Model or Service for activities such as ‘photoshopping’, where
there can be an assurance that use of the original image does not infringe a third party’s intellectual
property or other rights, the resulting image does not infringe any third party rights, and the
resulting image does not identify individuals unless those individuals have consented to both the use
of the image by ACC in Generative AI Models or Services and ACC’s use of the resulting image
o Generate images using Generative AI Models or Services, where we can be assured the resulting
image does not infringe third-party intellectual property or other rights, does not create bias,
perpetuate stereotypes or risk being perceived as defamatory, and is factually correct, culturally
safe, and ethically sound.
Unacceptable usage
It is unacceptable to:
• Use Generative AI Models or Services to:
o Generate hate speech or materials promoting violence or illegal activity
o Intentionally generate text that perpetuates stereotypes, reinforces biases or promotes violence
o Generate content that does not align with ACC’s political neutrality, including (but not limited
to) the generation of:
▪ Disparaging remarks regarding elected officials (by name or by role/title)
▪ Disparaging remarks regarding ACC or ACC staff (by name or by role/title)
▪ Content that is critical of ACC’s policies and practices (excluding constructive criticisms
as part of policy analysis)
▪ Content that disparages the New Zealand Government, New Zealand political parties or
New Zealand Government policies (unless as part of legitimate policy analysis)
• Input into external Generative AI Models or Services, for any purpose:
o ACC’s commercially sensitive or business confidential information
o Legally privileged information or material
o The personal information of our people, providers, vendors, customers or their family or
associates. This includes sensitive information, health information and ID documentation such as
birth certificates or passports, except for those exceptions noted in this guideline
o Sensitive information as defined by the Protective Security Requirements
• Use external Generative AI Models or Services to:
o Link data sets by inputting the data sets into the Generative AI Model or Service
o Seek personal or relationship advice (as this may require personal information or create or
contribute to mental health risks).
o Undertake social credit reporting
o Undertake online manipulation such as hyper-personalised marketing communications, tailored
for high influence
• Use external Generative AI Models or Services that might:
o Sell or commercialise information held by ACC
o Use biometrics, or photos of real individuals, such as clients or our people
o Undertake real time facial recognition (such as surveillance technologies)
• Use Generative AI Models or Services to assess or decide on:
o Clients’ care
o Ongoing entitlements or financial entitlements, including weekly compensation
o Claims, claim activity, or the identification of claimant activity perceived to be fraudulent or
financially improper
o Risk, where that risk assessment directly results in a consequence for real people, such as care,
surveillance, financial benefit, or loss, or increase or reduction of services
o Screening of candidates in hiring processes, or hiring
• Provide any data held by ACC to a third party for the use in, or generation of, Generative AI Models or
Services (i.e., third party app designer requests data for AI modelling) without a formal procurement
process around the design and limitations of those models, led by ACC
• Enter into MOUs and/or AISAs which enable data sharing for Generative AI purposes occurring in other
organisations
• Enter into MOUs and/or AISAs to provide data for use in Generative AI Models or Services in
organisations which do not have similar restrictions on Generative AI.
Intellectual property rights and other third-party rights
The rights
It is important to think about intellectual property rights and other third-party rights when entering information
into a Generative AI Model or Service and when using outputs from a Generative AI Model or Service.
Intellectual property rights include copyright, designs, patents, and trademarks. For example, an original and
sufficiently creative report or story will be protected by copyright. Confidential information is not an intellectual
'property right' but it is often grouped in with the other rights as something deserving of similar protection.
Other third-party rights include privacy rights, and in some countries, what are known as 'personality rights'.
Input
On the input side, there are two important topics to consider. Firstly, it's important to check whether you could
be releasing ACC's intellectual property into a Generative AI Model or Service. If you could be doing that, you
need to consider whether you're authorised to do so and whether the model or service will 'consume' that
intellectual property as part of its ongoing learning. If the Generative AI Model or Service will consume the
intellectual property, you need to consider whether that is problematic from ACC's perspective. If you're using a
publicly available Generative AI Model or Service, it could be problematic because ACC could lose control of the
intellectual property and, at least in theory, some of it could emerge when someone outside of ACC inputs a
prompt into the model or service on the same subject matter in the future. If in doubt, you should check with
your manager or the Legal team before entering the information.
The second topic on the input side concerns third-party intellectual property rights, that is, intellectual property
rights not owned by ACC. If you enter material comprising intellectual property owned by someone else into a
Generative AI Model or Service without their permission, ACC could be infringing their intellectual property
rights. For this reason, third party material should not be entered into a Generative AI Model or Service, unless
you are sure that ACC has all required rights to do so. If in doubt, consult the Legal team. The same applies to
material protected by other third-party rights, particularly privacy rights. For example, you should not enter a
photo or an audio or video clip of someone else into a Generative AI Model or Service, whether directly or by
linking to a publicly accessible web page or media file, without their fully informed consent.
Output
Turning to the output side, as noted in the Policy, there are unresolved legal issues regarding the materials on
which some large language models have been trained, how they generate their outputs, whether the outputs
are protected by copyright and, if so, who owns that copyright.
The Policy requires our people to take reasonable steps to ensure they do not use or publish outputs generated
by Generative AI Models and Services where doing so would raise a real risk of infringing third party intellectual
property rights (and the same applies to third party privacy or personality rights).
The Policy also requires that our people should not use Generative AI Models or Services for the development of
ACC business outputs or tools (such as software, software applications) for use in ACC’s business, if ACC’s
ownership of, or right to use, the intellectual property rights comprised in such outputs or tools cannot be
assured. Whether or not this assurance can be given will need to be considered on a case-by-case basis and our
people should speak to the Legal team about this issue.
In this context, our people need to understand the limitations of Generative AI Models’ or Services’ terms and
conditions. Those terms and conditions may state that the user owns all ‘rights, title and interest in and to the
output’. However, that doesn't necessarily mean that ACC does in fact own the intellectual property rights.
Copyright will usually be the relevant right in question, but there may be no copyright in the output, either
because there’s no relevant author (there’s an open issue here under New Zealand copyright law) or because
the output isn’t original.
This is a complicated and evolving area. What we can say now is that the risk of infringement is low, if not very
low, where you're asking a Generative AI Model or Service to interpret, summarise, paraphrase, or translate
something you input into the Model or Service that ACC owns. By contrast, the risk increases if you use a
creative output that the Generative AI Model or Service purports to have created from scratch, such as an image or an
audio or video file. In some instances, the risk may well be low, but in others it may not. The challenge lies in
making an educated assessment of where the level of risk lies. Each situation needs to be considered on a case-
by-case basis. If in doubt, ask the Legal team. In addition, and as noted in the Policy, our people should never
openly publish any output created by a Generative AI Model or Service (particularly images, audio, or video).
Current approach is risk averse
We recognise our current approach to Generative AI use is risk averse. If in doubt, staff should seek advice from
the Privacy and Ethics team (who may wish to consult the Legal team), or not use Generative AI Models or
Services. For any large-scale usage, the Privacy, Legal, Ethics, Information Management and Security teams
should be engaged in the design phase of your Generative AI work to ensure all risks are assessed prior to
implementation. Once Generative AI Models or Services are in use, or data has been shared with Generative AI
Models or Services, there may be limited opportunities to retrieve or rescind the ongoing access and secondary
uses of the information shared.