AI systems and OIA sections 22 and 23
Johniel Bocacao made this Official Information request to Office of the Ombudsman
Currently waiting for a response from the Office of the Ombudsman, which must respond promptly.
From: Johniel Bocacao
Kia ora Office of the Ombudsman,
I am a Masters student at VUW looking into government use and oversight of AI in decision-making, with an interest in how OIA sections 22 and 23 are interpreted as they relate to AI. This covers both the predictive kind used by government for decades (under the OECD definition of AI, I include statistical models such as Corrections’ RoC*RoI risk scoring) and the new generative kind employed by the likes of ChatGPT. This inquiry is independent but cognisant of the latest government direction around AI use in the public service. This inquiry is not an OIA request.
I will soon be publishing a whitepaper outlining my preliminary findings, which interpret how the OIA would apply to decision-making or decision-recommending AI. I would appreciate any clarifications to the interpretation outlined below, recognising that any response would be general advice and that application of the OIA is case-by-case. I know this office has no statutory time limit to respond, but I would appreciate a response by COB 23 January/SOB 26 January, with the intent to publish the paper by the end of January. I am making this request via FYI.org.nz to retain this correspondence as a public record.
- Outputs of any algorithm, model or AI system are considered decisions or recommendations under section 23, as long as the system is designed to process data regarding an individual requestor and its output is relevant only to that requestor.
- The rules, parameters, or weights of an AI model or algorithm are considered a document that contains rules with which decisions or recommendations are made under section 22, with caveats as above.
- Routine internal administrative AI and algorithms that do not lead to a decision about a person are not subject to OIA sections 22 or 23.
- Evidence-generating models used for operational or policy research can only be subject to section 23 if they make intermediate determinations at the individual level before the final aggregate decision. For example, international migration modelling or overall public feedback theming are not section 23 requestable, but an individual’s provisional migrant classification or how their feedback was themed is section 23 requestable.
- Any decision made by AI or algorithms regarding an individual must be logically explainable, linking the requestor’s data (“material issues of fact”) through every step of reasoning to the conclusion (“the reasons for the decision”). Reasons that only approximate how the model reached an output after the fact, instead of showing the actual computations that led to the output, are insufficient to meet these requirements for a section 23 response (a sketch illustrating this distinction follows this list).
- Decisions informed by the outputs of such systems, e.g. cancelling a contract due to insufficient return on investment (based on an internal numeric prediction of the financial benefit of a requesting service provider’s contract), are considered “adopting reports and recommendations” in the Ombudsman’s section 23 guidance. The underlying “findings on material issues of fact” and “reasons for a decision” must still be fully explained and connected as above.
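To make the explainability distinction above concrete, here is a minimal, hypothetical sketch in Python. The scoring rule, feature names, weights and thresholds are all invented for illustration and do not describe RoC*RoI or any real government system. The first function emits the actual computation trace linking the input facts to the output; the second only fits an after-the-fact linear approximation of the same model, which is the kind of reasoning argued above to be insufficient for a section 23 response.

# Hypothetical illustration only: a faithful step-by-step trace versus a
# post-hoc approximation of the same model's behaviour. All feature names,
# weights and thresholds are invented for this sketch.

import numpy as np


def score_with_trace(age, prior_convictions):
    """Toy scoring rule that records every computation step, so the output
    can be linked back to the input facts (a faithful explanation)."""
    trace = []
    score = 0.1
    trace.append(f"start with base score {score:.2f}")
    if prior_convictions > 2:
        score += 0.3
        trace.append(f"prior_convictions={prior_convictions} > 2: add 0.3 -> {score:.2f}")
    if age < 25:
        score += 0.2
        trace.append(f"age={age} < 25: add 0.2 -> {score:.2f}")
    trace.append(f"final score {score:.2f}")
    return score, trace


def post_hoc_surrogate(samples):
    """Fit a linear surrogate to (input, output) pairs after the fact.
    Its coefficients only approximate the model's behaviour; they are not
    the actual computation that produced any particular output."""
    X = np.array([[age, priors, 1.0] for age, priors in samples])
    y = np.array([score_with_trace(age, priors)[0] for age, priors in samples])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs  # approximate weights for age, prior_convictions, intercept


if __name__ == "__main__":
    score, steps = score_with_trace(age=22, prior_convictions=4)
    print("faithful trace (actual computations):")
    for step in steps:
        print("  " + step)
    coeffs = post_hoc_surrogate([(22, 4), (40, 0), (30, 3), (19, 1)])
    print("post-hoc surrogate coefficients (approximation only):", coeffs)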
Please do let me know as soon as possible whether this is a request the Office of the Ombudsman can assist with within the given timeframe (by 23 January), or who else would be best placed to assist.
Ngā mihi mahana,
Johniel Bocacao
School of Engineering and Computer Science - Te Kura Mātai Pūkaha, Pūrorohiko
Te Herenga Waka - Victoria University of Wellington