
AI systems and OIA section 22 and 23

Johniel Bocacao made this Official Information request to Office of the Ombudsman

This request has an unknown status. We're waiting for Johniel Bocacao to read a recent response and update the status.

From: Johniel Bocacao

Kia ora Office of the Ombudsman,

I am a Masters student at VUW looking into government use and oversight of AI in decision-making, with an interest in how OIA sections 22 and 23 are interpreted as they relate to AI. This covers both the predictive kind used by government for decades (applying the OECD definition of AI, I include statistical models such as Corrections' RoC*RoI risk scoring) and the new generative kind exemplified by ChatGPT. This inquiry is independent but cognisant of the latest government direction around AI use in the public service. This inquiry is not an OIA request.

I will soon be publishing a whitepaper outlining my preliminary findings on how the OIA would apply to decision-making or decision-recommending AI. I would appreciate any clarifications to the interpretation outlined below, recognising this is general advice and application of the OIA is case-by-case. I know this office has no statutory time limit to respond, but I would appreciate a response by COB 23 January/SOB 26 January, with the intent to publish the paper by the end of January. I am making this request via FYI.org.nz to retain this correspondence as a public record.

- Outputs of any algorithm, model or AI system are considered decisions or recommendations under section 23 as long as they are designed to process data regarding an individual requestor and the output of the AI is relevant only to that requestor.
- The rules, parameters, or weights of an AI model or algorithm are considered a document that contains rules with which decisions or recommendations are made under section 22, with caveats as above.
- Routine internal administrative AI and algorithms that do not lead to a decision about a person are not subject to OIA section 22 or 23.
- Evidence-generating models used for operational or policy research can only be subject to section 23 if they make intermediate determinations at the individual level before the final aggregate decision. For example, international migration modelling or overall public feedback theming are not section 23 requestable, but an individual’s provisional migrant classification or how their feedback was themed is section 23 requestable.
- Any decision made by AI or algorithms regarding an individual must be logically explainable, linking the requestor’s data (“material issues of fact”) through every step of reasoning to the conclusion (“the reasons for the decision”). Reasons that only approximate how the model reached an output after the fact, instead of showing the actual computations that led to the output, are insufficient to meet these requirements for a section 23 response.
- Decisions informed by the outputs of such systems, e.g. cancelling a contract for insufficient return on investment (based on an internal numeric prediction of the financial benefit of a requesting service provider’s contract), are considered “adopting reports and recommendations” in the Ombudsman’s section 23 guidance. The underlying “findings on material issues of fact” and “reasons for a decision” must still be fully explained and connected as above.

Please do let me know as soon as possible whether this is a request the Office of the Ombudsman can assist with within the given timeframe (by 23 January), or who else would be best placed to assist.

Ngā mihi mahana,
Johniel Bocacao
School of Engineering and Computer Science - Te Kura Mātai Pūkaha, Pūrorohiko
Te Herenga Waka - Victoria University of Wellington


From: Gareth Derby
Office of the Ombudsman




Tēnā koe Johniel

 

Thank you for your query, made via FYI.org. As you have noted, this is not
a request under the OIA.

 

You've queried how sections 22 and 23 of the OIA apply to AI-assisted
decision-making. You've suggested how you consider the Act may apply to
such information, and seek our views on your suggestions.

 

As a first step, it may be useful to meet with us to help square away some
of the parameters of the query and to discuss some of your starting
presumptions. If you're still wanting a response via FYI at that stage, we
can also discuss what that might look like.

 

Would you be available to chat sometime in early January? We'd be happy to
host you in our Office, if you wish to meet in person.

 

Nāku noa, nā 

Gareth Derby

Principal Advisor Strategic Advice

Office of the Ombudsman | Tari o te Kaitiaki Mana Tangata

 

DDI 04 460 9701 | Phone 04 473 9533

Email [1][email address] |
[2]www.ombudsman.parliament.nz

PO Box 10152, Level 7, SolNet House, 70 The Terrace, Wellington

 


 

IMPORTANT: The information contained in this email may be confidential or
legally privileged. It is intended solely for the recipient or recipients
named in this message. Please note that if you are not the intended
recipient you are not authorised to use, copy or distribute the email or
any information contained in it. If you have received this email in error,
please advise the sender immediately and destroy the original message and
any attachments.

 

 

 

References

Visible links
1. mailto:[email address]
2. http://www.ombudsman.parliament.nz/


