Microsoft Responsible AI Impact Assessment Template
FOR EXTERNAL RELEASE
June 2022
Released under the Official Information Act 1982

The Responsible AI Impact Assessment Template is the product of a multi-year effort at Microsoft to define a process for assessing the impact an AI system may have on people, organizations, and society. We are releasing our Impact Assessment Template externally to share what we have learned, invite feedback from others, and contribute to the discussion about building better norms and practices around AI.

We invite your feedback on our approach: https://aka.ms/ResponsibleAIQuestions
Responsible AI Impact Assessment for Project GovGPT
For questions about specific sections within the Impact Assessment, please refer to the Impact Assessment Guide.
Section 1: System Information

System profile
1.1 Complete the system information below.

System name: GovGPT – Pilot project conversational companion for Government Information
Team name: Project MARION Team

Track revision history below.
Authors:
Jenna Whitman – Chief Information Security Officer / AI Governance
Richie Atkinson – Solutions Lead
Wil Garland – Business Analyst / Prompt Engineering
Last updated:

Identify the individuals who will review your Impact Assessment when it is completed.
Reviewers:
Sarah Sun – Head of AI
Helena Page – Senior Legal Counsel / Privacy Officer
System lifecycle stage
1.2 Indicate the dates of planned releases for the system.

Date | Lifecycle stage
July 2024 | Planning & analysis
August 2024 | Design
August 2024 | Development
August 2024 | Testing
October 2024 | Implementation & deployment
October–December 2024 | Maintenance
December 2024 | Retired
System description
1.3 Briefly explain, in plain language, what you're building. This will give reviewers the necessary context to understand the system and the environment in which it operates.

System description
This pilot project is a collaboration between Microsoft and Callaghan Innovation, supported by the Whāriki Māori Business Network, with the goal of developing a Minimum Viable Product (MVP) and demo-ready version for presentation at the AI Summit on 11 September 2024. The MVP will explore the concept of creating a single AI-based service that provides information on publicly available government services through a "conversational companion" experience.

An AI-based conversational companion will provide accurate and referenceable information about New Zealand's public services through natural language conversations. It will use Retrieval Augmented Generation (RAG) techniques to enhance the accuracy and reliability of the generative AI model by fetching facts from open, publicly facing government websites.

The system runs entirely on the Microsoft Azure platform, including a global deployment of OpenAI's current large language model.
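The RAG approach described above can be sketched in miniature. Everything below is illustrative only: the corpus, URLs, keyword-overlap ranking, and prompt wording are invented placeholders, whereas the production system uses Azure AI Search over indexed government websites and an Azure OpenAI model (not shown here).

```python
# Illustrative RAG sketch: retrieve the most relevant indexed documents,
# then build a prompt that grounds the model on those sources only.

def retrieve(query, corpus, k=2):
    """Rank documents by simple keyword overlap with the query (placeholder ranking)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, docs):
    """Constrain the model to the retrieved sources, and ask it to cite them."""
    sources = "\n".join(f"[{d['url']}] {d['text']}" for d in docs)
    return (
        "Answer using ONLY the sources below and cite the URL you used.\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {query}"
    )

# Tiny invented corpus standing in for the indexed government sites.
corpus = [
    {"url": "https://example.govt.nz/grants",
     "text": "Small business growth grants are open to NZ companies."},
    {"url": "https://example.govt.nz/tax",
     "text": "GST registration is required above the revenue threshold."},
]

query = "What grants can a small business get?"
prompt = build_prompt(query, retrieve(query, corpus))
# `prompt` would then be sent to the chat model, which answers from the cited sources.
```

Because generation is grounded on retrieved text rather than model memory, answers stay tied to the indexed sources and can carry citations back to them.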
If you have links to any supplementary information on the system such as demonstrations, functional specifications,
slide decks, or system architecture diagrams, please include links below.
Description of supplementary information
Link
GovGPT FAQ | GovGPT FAQ.docx (link: Out of scope)
GovGPT Announcement video | GovGPT demo - 720p.mp4 (link: Out of scope)
System purpose
1.4 Briefly describe the purpose of the system and system features, focusing on how the system will address the needs of the people who use it. Explain how the AI technology contributes to achieving these objectives.

System purpose
How will this address the need:
Currently, navigating the web of publicly available information on New Zealand Government services is time-consuming, convoluted, and mostly jargon-heavy English. GovGPT is a digital front door, enabling New Zealanders to easily discover and explore the abundance of Government support available to them, in a language that they understand. Users of the tool can interact with the answers they are given, refining them to get a better understanding of the government support and regulations that concern them.

System features:
The MVP will explore the concept of creating a single AI-based service that provides information on publicly available government services through a "conversational companion" experience.

An AI-based conversational companion will provide accurate and referenceable information about New Zealand's public services through natural language conversations. It will use Retrieval Augmented Generation (RAG) techniques to enhance the accuracy and reliability of the generative AI model by fetching facts from open, publicly facing website sources.

Pre-populated prompts on the landing page start the experience, such as "what benefits am I entitled to as a business owner?", and from there users can ask their own questions of GovGPT.

Whāriki will play a crucial role in this project by contributing specific use cases, particularly related to government support and funding opportunities for Māori businesses. This collaboration aims to ensure the cultural relevance and appropriateness of the content, providing a comprehensive and valuable tool for all users.
System features
1.5 Focusing on the whole system, briefly describe the system features or high-level feature areas that already exist and those planned for the upcoming release.

Existing system features:
- Indexing of government websites for retrieval-augmented-generation search, so only indexed sources will be included in results.
- AI-generated plain English responses to user questions, grounded on indexed websites and documents, and including citations.
- Written input and output across all major, and some minor, languages.
- Generated answers grounded on the indexed websites, provided in the language and style the user requests.

System features planned for the upcoming release:
- Scheduled updates of government sources, and direct linking to source in citations.
- Real-time speech input and output.
- Potential for a digital avatar to aid in personalisation.
- Potential to include feedback mechanisms to assist in future training and relevance of outputs.
Briefly describe how this system relates to other systems or products. For example, describe if the system includes models from other systems.

Relation to other systems / products
Geographic areas and languages
1.6 Describe the geographic areas where the system will or might be deployed to identify special considerations for language, laws, and culture.

9(2)(b)(ii) – Commercial Information
The system is currently deployed to:
In the upcoming release, the system will be deployed to:
In the future, the system might be deployed to:

For natural language processing systems, describe supported languages:
The system currently supports: All major languages in written format, though languages with less available data or speakers have lower levels of performance.
In the upcoming release, the system will support: English language spoken outputs.
In the future, the system might support: Fluent spoken Reo Māori and language switching.
Deployment mode
1.7 Document each way that this system might be deployed.

How is the system currently deployed? | First release will be embedded on Callaghan Innovation's website as a public pilot.
Will the deployment mode change in the upcoming release? If so, how? | Future releases may have a standalone website, or a mobile app.

Intended uses
1.8 Intended uses are the uses of the system your team is designing and testing for. An intended use is a description of who will use the system, for what task or purpose, and where they are when using the system. They are not the same as system features, as any number of features could be part of an intended use. Fill in the table with a description of the system's intended use(s).

Name of intended use(s) | Description of intended use(s)
1. Public information searching by New Zealanders | Anyone in New Zealand should be able to engage in a conversation with GovGPT to ask questions and get answers from government websites. Users can refine their questions to better understand the information, and be directed to the government services most relevant to them. This use case applies to anyone looking for answers from government websites, including civilians, researchers, and public servants.
2. |
3. |
Section 2: Intended uses

Intended use #1: Public information searching by New Zealanders – repeat for each intended use

Name of intended use(s) | Description of intended use(s)
Public information searching by New Zealanders | Anyone in New Zealand should be able to engage in a conversation with GovGPT to ask questions and get answers from government websites. Users can refine their questions to better understand the information, and be directed to the government services most relevant to them. This use case applies to anyone looking for answers from government websites, including civilians, researchers, and public servants.
Assessment of fitness for purpose
2.1 Assess how the system's use will solve the problem posed by each intended use, recognizing that there may be multiple valid ways in which to solve the problem.

Assessment of fitness for purpose
The system will provide a single tool where users can perform natural language searches on publicly available government websites. Good answers will mean the user won't have to navigate multiple complex websites, and can be almost instantly directed to an appropriate service, prepared with relevant information. Compared to existing search tools, GovGPT allows answers to be made understandable and personalised for the user, free of corporate, legal, or governmental jargon. GovGPT can retrieve and process relevant information for user queries much more effectively and quickly than existing tools, and offers a more controlled and manageable alternative to free AI search tools on the market.
Stakeholders, potential benefits, and potential harms
2.2 Identify the system's stakeholders for this intended use. Then, for each stakeholder, document the potential benefits and potential harms. For more information, including prompts, see the Impact Assessment Guide.

Stakeholders | Potential system benefits | Potential system harms
9(2)(g)(i) – Free and Frank opinions
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
Stakeholders for Goal-driven requirements from the Responsible AI Standard
2.3 Certain Goals in the Responsible AI Standard require you to identify specific types of stakeholders. You may have included them in the stakeholder table above. For the Goals below that apply to the system, identify the specific stakeholder(s) for this intended use. If a Goal does not apply to the system, enter "N/A" in the table.

Goal A5: Human oversight and control
This Goal applies to all AI systems. Complete the table below.
Who is responsible for troubleshooting, managing, operating, overseeing, and controlling the system during and after deployment? | Project Marion team (the authors listed in this Assessment as well as other Microsoft, Callaghan Innovation and Whāriki team members)
For these stakeholders, identify their oversight and control responsibilities. | Total responsibility for the tool, its safety, and decisions around scaling and continuation. Capability of control provided within the platform.

Goal T1: System intelligibility for decision making
This Goal applies to AI systems when the intended use of the generated outputs is to inform decision making by or about people. If this Goal applies to the system, complete the table below.
Who will use the outputs of the system to make decisions? | New Zealand public, business owners, and all potential stakeholders of indexed websites may choose to act on the outputs they are given.
Who will decisions be made about? | Decisions will be made about the users, their families, and their businesses. Clear messaging (through FAQs and terms of use statements) will advise that outputs are not to be taken as advice, but as referrals to appropriate services.

Goal T2: Communication to stakeholders
This Goal applies to all AI systems. Complete the table below.
Who will make decisions about whether to employ the system for particular tasks? | Individual users can choose to use the tool.
Who develops or deploys systems that integrate with this system? | Ministries may build or improve their own information architecture to better collaborate with the tool.

Goal T3: Disclosure of AI interaction
This Goal applies to AI systems that impersonate interactions with humans, unless it is obvious from the circumstances or context of use that an AI system is in use, and AI systems that generate or manipulate image, audio, or video content that could falsely appear to be authentic. If this Goal applies to the system, complete the table below.
Who will use or be exposed to the system? | N/A. It is obvious from the circumstances that an AI system is in use. GovGPT will not contain audio, video or image outputs.
Fairness considerations
2.4 For each Fairness Goal that applies to the system, 1) identify the relevant stakeholder(s) (e.g., system user, person impacted by the system); 2) identify any demographic groups, including marginalized groups, that may require fairness considerations; and 3) prioritize these groups for fairness consideration and explain how the fairness consideration applies. If the Fairness Goal does not apply to the system, enter "N/A" in the first column.

Goal F1: Quality of service
This Goal applies to AI systems when system users or people impacted by the system with different demographic characteristics might experience differences in quality of service that can be remedied by building the system differently. If this Goal applies to the system, complete the table below describing the appropriate stakeholders for this intended use.

Which stakeholder(s) will be affected? | System user
For affected stakeholder(s), which demographic groups are you prioritizing for this Goal? | Non-English speakers, those with learning disabilities
Explain how each demographic group might be affected. | The tool provides the following languages – [INSERT], and as a conversational companion style tool it can be more user friendly for those with learning disabilities than the existing websites. Where possible, users will not experience a different quality of service, as these demographic groups have been considered in the design of the tool. We have also worked with Whāriki to provide the Reo Māori language version. There may be further opportunities to support people with learning disabilities who may not be able to read the content from the tool (for example, adding an audio text-to-speech version).
Goal F2: Allocation of resources and opportunities
This Goal applies to AI systems that generate outputs that directly affect the allocation of resources or opportunities relating to finance, education, employment, healthcare, housing, insurance, or social welfare. If this Goal applies to the system, complete the table below describing the appropriate stakeholders for this intended use.

Which stakeholder(s) will be affected? |
For affected stakeholder(s), which demographic groups are you prioritizing for this Goal? |
Explain how each demographic group might be affected. |

Goal F3: Minimization of stereotyping, demeaning, and erasing outputs
This Goal applies to AI systems when system outputs include descriptions, depictions, or other representations of people, cultures, or society. If this Goal applies to the system, complete the table below describing the appropriate stakeholders for this intended use.

Which stakeholder(s) will be affected? | System users
For affected stakeholder(s), which demographic groups are you prioritizing for this Goal? | Indigenous and underrepresented groups in New Zealand, gender and sexual minorities, people with disabilities.
Explain how each demographic group might be affected. | The tool will generate answers based on the information indexed in the backend from existing websites. Existing biases present in the content may be surfaced through use of the tool. These will be approached as opportunities for learning and improving the sources.
Technology readiness assessment
2.5 Indicate with an "X" the description that best represents the system regarding this intended use.

Select one | Technology Readiness
| The system includes AI supported by basic research and has not yet been deployed to production systems at scale for similar uses.
X | The system includes AI supported by evidence demonstrating feasibility for uses similar to this intended use in production systems.
| This is the first time that one or more system component(s) are to be validated in relevant environment(s) for the intended use. Operational conditions that can be supported have not yet been completely defined and evaluated.
| This is the first time the whole system will be validated in relevant environment(s) for the intended use. Operational conditions that can be supported will also be validated. Alternatively, nearly similar systems or nearly similar methods have been applied by other organizations with defined success.
| The whole system has been deployed for all intended uses, and operational conditions have been qualified through testing and uses in production.

Task complexity
2.6 Indicate with an "X" the description that best represents the system regarding this intended use.

Select one | Task Complexity
| Simple tasks, such as classification based on few features into a few categories with clear boundaries. For such decisions, humans could easily agree on the correct answer, and identify mistakes made by the system. For example, a natural language processing system that checks spelling in documents.
X | Moderately complex tasks, such as classification into a few categories that are subjective. Typically, ground truth is defined by most evaluators arriving at the same answer. For example, a natural language processing system that autocompletes a word or phrase as the user is typing.
| Complex tasks, such as models based on many features, not easily interpretable by humans, resulting in highly variable predictions without clear boundaries between decision criteria. For such decisions, humans would have a difficult time agreeing on the best answer, and there may be no clearly incorrect answer. For example, a natural language processing system that generates prose based on user input prompts.
Role of humans
2.7 Indicate with an "X" the description that best represents the system regarding this intended use.

Select one | Role of humans
X | People will be responsible for troubleshooting triggered by system alerts but will not otherwise oversee system operation. For example, an AI system that generates keywords from unstructured text alerts the operator of errors, such as improper format of submission files.
| The system will support effective hand-off to people but will be designed to automate most use. For example, an AI system that generates keywords from unstructured text that can be configured by system admins to alert the operator when keyword generation falls below a certain confidence threshold.
| The system will require effective hand-off to people but will be designed to automate most use. For example, an AI system that generates keywords from unstructured text alerts the operator when keyword generation falls below a certain confidence threshold (regardless of system admin configuration).
| People will evaluate system outputs and can intervene before any action is taken: the system will proceed unless the reviewer intervenes. For example, an AI system that generates keywords from unstructured text will deliver the generated keywords for operator review but will finalize the results unless the operator intervenes.
| People will make decisions based on output provided by the system: the system will not proceed unless a person approves. For example, an AI system that generates keywords from unstructured text but does not finalize the results without review and approval from the operator.

Deployment environment complexity
2.8 Indicate with an "X" the description that best represents the system regarding this intended use.

Select one | Deployment environment complexity
X | Simple environment, such as when the deployment environment is static, possible input options are limited, and there are few unexpected situations that the system must deal with gracefully. For example, a natural language processing system used in a controlled research environment.
| Moderately complex environment, such as when the deployment environment varies, unexpected situations the system must deal with gracefully may occur, but when they do, there is little risk to people, and it is clear how to effectively mitigate issues. For example, a natural language processing system used in a corporate workplace where language is professional and communication norms change slowly.
| Complex environment, such as when the deployment environment is dynamic, the system will be deployed in an open and unpredictable environment or may be subject to drifts in input distributions over time. There are many possible types of inputs, and inputs may significantly vary in quality. Time and attention may be at a premium in making decisions and it can be difficult to mitigate issues. For example, a natural language processing system used on a social media platform where language and communication norms change rapidly.
Section 3: Adverse impact

Restricted Uses
3.1 If any uses of the system are subject to a legal or internal policy restriction, list them here, and follow the requirements for those uses.

Restricted Uses
N/A – other than restriction to external, open-source information to inform the knowledge base.

Unsupported uses
3.2 Uses for which the system was not designed or evaluated or that should be avoided.

Unsupported uses
Giving advice or representation. We also recommend that users do not input any personal or commercially sensitive information into the tool.

Known limitations
3.3 Describe the known limitations of the system. This could include scenarios where the system will not perform well, environmental factors to consider, or other operating factors to be aware of.

Known limitations
Attempts to bypass the system prompt could enable the system to perform tasks outside of its remit. The system will not search the internet outside of the indexed material. Malicious actors may attempt to deliberately "break" the tool.

Potential impact of failure on stakeholders
3.4 Define predictable failures, including false positive and false negative results for the system as a whole and how they would impact stakeholders for each intended use.

Potential impact of failure on stakeholders
The system may generate incorrect or conflicting information, especially for more novel or complex requests. Attempts to circumvent the prompt may produce unexpected results. Stakeholders should verify all answers against the source material, and know that the output constitutes neither advice nor representation.
A terms and conditions pop-up will be shown to the end user, to accept before being able to use GovGPT, informing them of its limitations, a legal disclaimer, and a reminder not to input personally identifiable information.
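The prompt-bypass limitation noted in 3.3 is commonly mitigated by screening user input before it reaches the model. The sketch below is a generic illustration of that idea only; the patterns and function names are invented and are not part of the documented GovGPT mitigations (which rely on the system prompt and terms of use).

```python
# Illustrative input screen for common prompt-injection phrasings.
# A real deployment would pair this with model-side safeguards; a phrase
# denylist alone is easy to evade and is shown here only as a sketch.
import re

INJECTION_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"disregard (all|your) instructions",
    r"you are now",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True when the input matches a known injection phrasing."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)

def answer(user_input: str) -> str:
    if looks_like_injection(user_input):
        # Refuse rather than forwarding the attempt to the model.
        return "Sorry, I can only answer questions about government services."
    return f"(model call with guarded input: {user_input})"
```

Note that, per section 4.1, GovGPT deliberately discloses its system prompt when asked, so disclosure requests are not treated as attacks here.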
Potential impact of misuse on stakeholders
3.5 Define system misuse, whether intentional or unintentional, and how misuse could negatively impact each stakeholder. Identify and document whether the consequences of misuse differ for marginalized groups. When serious impacts of misuse are identified, note them in the summary of impact as a potential harm.

Potential impact of misuse on stakeholders
Users may circumvent the prompt in order to get the tool to generate outputs that are harmful, false, or misrepresent information. These outputs, or hallucinated fallacies, may be shared online to defame entities or the tool itself.

Sensitive Uses
3.6 Consider whether the use or misuse of the system could meet any of the Microsoft Sensitive Use triggers below.

Yes or No | Sensitive Use triggers
No | Consequential impact on legal position or life opportunities – The use or misuse of the AI system could affect an individual's: legal status, legal rights, access to credit, education, employment, healthcare, housing, insurance, and social welfare benefits, services, or opportunities, or the terms on which they are provided.
No | Risk of physical or psychological injury – The use or misuse of the AI system could result in significant physical or psychological injury to an individual.
No | Threat to human rights – The use or misuse of the AI system could restrict, infringe upon, or undermine the ability to realize an individual's human rights. Because human rights are interdependent and interrelated, AI can affect nearly every internationally recognized human right.
Section 4: Data Requirements

Data requirements
4.1 Define any document data requirements with respect to the system's intended uses, stakeholders, and the geographic areas where the system will be deployed.

Data Requirements
Any documents which will be, or are, indexed must be fully publicly available documents.
9(2)(b)(ii) – Commercial Information

The domain of the data indexed will initially be limited to information which is specifically beneficial to small businesses in New Zealand, with some limited exceptions for information relevant to the Science and Innovation ecosystem in New Zealand (such as information about Callaghan Innovation's Minister, Board, etc.). As other agencies request their data be indexed, an assessment will need to be made on the relevance of this information to the overarching goal of the system.

The system must be transparent and provide the user with sources for its responses. It will also inform the user of its system prompt if asked.

The system will run entirely on Microsoft's Azure platform.
9(2)(b)(ii) – Commercial Information
- Azure App Service Plans
- Azure App Services / Web Apps
- Azure Deployment Services
- Azure Blob Storage
- Azure Cognitive Services (Azure AI Search, Azure Document Intelligence)
- Azure Monitoring Services (Azure Log Analytics, Azure Application Insights)
- Azure OpenAI (however, the Azure OpenAI GPT-4o model will be called from the global-standard deployment as it is not available in an Australian data centre)

For privacy reasons, no user data or personal information will be collected or stored, and sessions will be cleared after each day. No training will be completed at this stage and the system will rely entirely on RAG for data veracity. This does mean that we rely on some of our stakeholders to ensure that their data is true and correct, and any terms of use will reflect this and will also reiterate that users should not put any personal, confidential or commercially sensitive information into the tool.

The outcomes delivered by this system must benefit the public interest of New Zealanders and deliver the goals as determined by other key stakeholders.

Stakeholders include:
- Government Agencies, Departments, Ministries or Crown Entities whose data is on the specific indexed sites list OR who have requested their data be added after launch
- The Executive Leadership Team at Callaghan Innovation
- The Board of Callaghan Innovation
- Minister Hon. Judith Collins KC

Existing data sets
4.2 If you plan to use existing data sets to train the system, assess the quantity and suitability of available data sets that will be needed by the system in relation to the data requirements defined above. If you do not plan to use pre-defined data sets, enter "N/A" in the response area.

Existing data sets
N/A – system will not be trained; it will only use indexed data.
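The transparency requirement in 4.1 (every response must carry its sources) can be sketched as a simple post-processing step. The function and format below are invented for illustration and are not part of the documented system.

```python
# Illustrative sketch: attach a numbered source list to a generated answer,
# so every response carries the citations required by the data requirements.

def with_citations(answer_text, sources):
    """Append a numbered source list to a generated answer."""
    lines = [answer_text, "", "Sources:"]
    for i, url in enumerate(sources, start=1):
        lines.append(f"[{i}] {url}")
    return "\n".join(lines)

reply = with_citations(
    "Small business growth grants are open to NZ companies.",
    ["https://example.govt.nz/grants"],  # placeholder URL, not a real source
)
```

Keeping citation formatting outside the model makes the source list deterministic, rather than relying on the model to reproduce URLs correctly.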
Section 5: Summary of Impact

Potential harms and preliminary mitigations
5.1 Gather the potential harms you identified earlier in the Impact Assessment in this table (check the stakeholder table, fairness considerations, adverse impact section, and any other place where you may have described potential harms). Use the mitigations prompts in the Impact Assessment Guide to understand if the Responsible AI Standard can mitigate some of the harms you identified. Discuss the harms that remain unmitigated with your team and potential reviewers.

Describe the potential harm | Corresponding Goal from the Responsible AI Standard (if applicable) | Describe your initial ideas for mitigations or explain how you might implement the corresponding Goal in the design of the system
Outputs containing biased information | F1, F3 | The prompt can be developed over time to account for existing biases in the source information, but ultimately the sources themselves will be developed to account for demographic and other biases.
Goal Applicability
5.2 To assess which Goals apply to this system, use the tables below. When a Goal applies to only specific types of AI systems, indicate if the Goal applies to the system being evaluated in this Impact Assessment by indicating "Yes" or "No." If you indicate that a Goal does not apply to the system, explain why in the response area. If a Goal applies to the system, you must complete the requirements associated with that Goal while developing the system.

Accountability Goals
Goals | Does this Goal apply to the system? (Yes or No)
A1: Impact assessment (Applies to: All AI systems.) | Yes
A2: Oversight of significant adverse impacts (Applies to: All AI systems.) | Yes
A3: Fit for purpose (Applies to: All AI systems.) | Yes
A4: Data governance and management (Applies to: All AI systems.) | Yes
A5: Human oversight and control (Applies to: All AI systems.) | Yes
Transparency Goals
Goals | Does this Goal apply to the system? (Yes or No)
T1: System intelligibility for decision making (Applies to: AI systems when the intended use of the generated outputs is to inform decision making by or about people.) | No
T2: Communication to stakeholders (Applies to: All AI systems.) | Yes
T3: Disclosure of AI interaction (Applies to: AI systems that impersonate interactions with humans, unless it is obvious from the circumstances or context of use that an AI system is in use, and AI systems that generate or manipulate image, audio, or video content that could falsely appear to be authentic.) | Yes

If you selected "No" for any of the Transparency Goals, explain why the Goal does not apply to the system
The system's outputs should not be considered advice or representation, and so decisions should not be made on the output alone.
Fairness Goals
Goals | Does this Goal apply to the system? (Yes or No)
F1: Quality of service (Applies to: AI systems when system users or people impacted by the system with different demographic characteristics might experience differences in quality of service that can be remedied by building the system differently.) | Yes
F2: Allocation of resources and opportunities (Applies to: AI systems that generate outputs that directly affect the allocation of resources or opportunities relating to finance, education, employment, healthcare, housing, insurance, or social welfare.) | No
F3: Minimization of stereotyping, demeaning, and erasing outputs (Applies to: AI systems when system outputs include descriptions, depictions, or other representations of people, cultures, or society.) | Yes

If you selected "No" for any of the Fairness Goals, explain why the Goal does not apply to the system below.
The system does not allocate resources, nor advise on their allocation.
Reliability & Safety Goals
Goals | Does this Goal apply to the system? (Yes or No)
RS1: Reliability and safety guidance (Applies to: All AI systems.) | Yes
RS2: Failures and remediations (Applies to: All AI systems.) | Yes
RS3: Ongoing monitoring, feedback, and evaluation (Applies to: All AI systems.) | Yes

Privacy & Security Goals
Goals | Does this Goal apply to the system? (Yes or No)
PS1: Privacy Standard compliance (Applies when the Microsoft Privacy Standard applies.) | Yes. Callaghan Innovation has considered this impact assessment and GovGPT against its privacy policy and determined that a PIA is not necessary.
PS2: Security Policy compliance (Applies when the Microsoft Security Policy applies.) | Yes

Inclusiveness Goal
Goals | Does this Goal apply to the system? (Yes or No)
I1: Accessibility Standards compliance (Applies when the Microsoft Accessibility Standards apply.) | Yes
Signing off on the Impact Assessment
5.3 Before you continue with next steps, complete the appropriate reviews and sign off on the Impact Assessment. At minimum, the PM should verify that the Impact Assessment is complete. In this case, ensure you complete the appropriate reviews and secure all approvals as required by your organization before beginning development.

Reviewer role and name | Date reviewed | I can confirm that the document benefitted from collaborative work and different expertise within the team (e.g., engineers, designers, data scientists, etc.) | Comments

Update and review the Impact Assessment at least annually, when new intended uses are added, and before advancing to a new release stage. The Impact Assessment will remain a key reference document as you work toward compliance with the remaining Goals of the Responsible AI Standard.
Scan this code to access responsible AI resources from Microsoft:
© 2022 Microsoft Corporation. All rights reserved. This document is provided "as-is." It has been edited for external release to remove internal links, references, and examples. Information and views expressed in this document may change without notice. You bear the risk of using it. Some examples are for illustration only and are fictitious. No real association is intended or inferred. This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You may copy and use this document for your internal, reference purposes.