2.4 Fairness considerations
Guidance
This section requires you to assess which of the Fairness Goals from the Responsible AI Standard apply to the
system and to identify which stakeholders should be considered for each applicable Goal. After identifying the
affected stakeholders, you're asked to identify which demographic groups, especially marginalized groups, would
be most at risk of experiencing a fairness harm.
To complete this section, please follow the process below:
1. Identify the relevant stakeholder(s) (e.g., end user, person impacted by the system, etc.).
2. Identify any demographic groups, including marginalized groups, that may require fairness considerations.
3. Prioritize these groups for fairness consideration and explain how the fairness consideration applies.
Demographic groups can refer to any population group that shares one or more particular demographic
characteristics. Depending on the AI system and its deployment context, the list of identified demographic
groups will vary.
Marginalized groups are demographic groups who may have an atypical or even unfair experience with the
system if their needs and context are not considered. These may include minorities, stigmatized groups, or other
particularly vulnerable groups, as well as children, the elderly, indigenous peoples, and religious minorities. The
groups to include for consideration will depend in part on the geographic areas and intended uses of your system.
Goal F1: Quality of service
Applies to: AI systems when system users or people impacted by the system with different demographic
characteristics might experience differences in quality of service that Microsoft can remedy by building the
system differently.
E.g., a system that uses natural language processing may perform differently for users who speak supported
languages as a second language or who speak less common varieties of a language.
Goal F2: Allocation of resources and opportunities
Applies to: AI systems that generate outputs that directly affect the allocation of resources or opportunities
relating to finance, education, employment, healthcare, housing, insurance, or social welfare.
E.g., an automated hiring system that exhibits bias against certain demographic groups (e.g., women) when recommending candidates for hire.
Goal F3: Minimization of stereotyping, demeaning, and erasing outputs
Applies to: AI systems when system outputs include descriptions, depictions, or other representations of people,
cultures, or society.
E.g., a system that uses natural language processing to generate captions for images may under-represent particular
groups, for instance by generating the caption “CEO” only for images of white men.