References

Stereotyping is a specific form of bias: a fixed, generalized belief about a particular group or class of people. Stereotypes can be positive or negative, but they often reduce complex human characteristics and behaviors to simplified, frequently inaccurate perceptions of a group. For the StereoWipe project, our objective is to develop a dataset and an evaluation framework targeted specifically at stereotyping. We recognize that addressing the full scope of bias and fairness is an expansive subject; our focus is therefore narrowly tailored to understanding and tackling stereotyping.

References and existing datasets:

Bias in Bios: Evaluating and Mitigating Gender Bias in a Dataset of Professional Biographies Paper  GitHub  

An in-depth analysis of gender bias within the Bios dataset of professional biographies. Includes methodologies for measuring gender bias, experiments, and a discussion of the implications for natural language processing.
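One common way to quantify gender bias in an occupation classifier trained on biography data is the gap in true positive rate (TPR) between genders, per occupation. The sketch below is illustrative only: the function name, the `(occupation, gender, prediction)` tuple format, and the binary `"F"`/`"M"` labels are assumptions for the toy example, not the paper's actual API.

```python
from collections import defaultdict

def tpr_gender_gaps(examples):
    """Per-occupation true-positive-rate gap between genders.

    `examples` is a list of (occupation, gender, predicted_occupation)
    tuples -- a toy stand-in for classifier output on Bios-style data.
    Returns {occupation: TPR_F - TPR_M} for occupations with both genders.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for occ, gender, pred in examples:
        total[(occ, gender)] += 1
        if pred == occ:  # true positive for this occupation
            correct[(occ, gender)] += 1
    gaps = {}
    for occ in {occ for occ, _ in total}:
        tpr = {g: correct[(occ, g)] / total[(occ, g)]
               for g in ("F", "M") if total[(occ, g)]}
        if len(tpr) == 2:
            gaps[occ] = tpr["F"] - tpr["M"]
    return gaps
```

A gap near zero for every occupation suggests the classifier's recall does not depend on the gender signaled in the biography.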

BiaSwap: Removing Dataset Bias with Bias-Tailored Swapping Augmentation Paper 

Introduces bFFHQ (a gender-biased variant of the FFHQ dataset), with age as the target label and gender as a correlated bias. The dataset is dominated by images of young women (ages 10-29) and older men (ages 40-59).

StereoSet: Measuring Stereotypical Bias in Pretrained Language Models Paper

A large-scale dataset in English, StereoSet is designed to measure stereotypical biases in pretrained language models across four domains: gender, profession, race, and religion.
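StereoSet-style evaluation compares how a language model scores a stereotypical continuation against an anti-stereotypical one; an unbiased model should prefer each about half the time. The minimal sketch below assumes the sentence log-probabilities have already been computed by some model; the function name and input format are our own illustration, not StereoSet's actual scoring code.

```python
def stereotype_score(pairs):
    """Fraction of instances where the stereotypical option outscores
    the anti-stereotypical one.

    `pairs` holds (logprob_stereotype, logprob_antistereotype) tuples,
    e.g. sentence log-probabilities from any language model.
    An unbiased model would score near 0.5.
    """
    wins = sum(1 for s, a in pairs if s > a)
    return wins / len(pairs)
```

StereoSet itself combines this preference rate with a language-modeling score so that a degenerate model cannot "win" by scoring everything equally; the sketch covers only the bias half of that trade-off.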

WinoBias: Winograd-schema Dataset for Detecting Gender Bias Paper

This study analyzes several coreference resolution systems to understand gender bias. Altering only the gender of a pronoun in a sentence changes these systems' performance significantly; the WinoBias dataset was created to highlight this issue.
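The perturbation at the heart of this test can be sketched as a simple pronoun swap that produces a minimally different sentence pair. This is a toy illustration, not how WinoBias was built (its schemas are hand-constructed); note the swap here only goes one direction, because mapping "her" back is ambiguous between "him" and "his".

```python
import re

# One-directional swap: "her" is ambiguous (him/his) going the other way.
PRONOUN_SWAP = {"he": "she", "him": "her", "his": "her"}

def swap_pronouns(sentence):
    """Return a gender-swapped variant of `sentence` -- a toy
    WinoBias-style perturbation preserving capitalization."""
    def repl(match):
        word = match.group(0)
        swapped = PRONOUN_SWAP.get(word.lower(), word)
        return swapped.capitalize() if word[0].isupper() else swapped
    return re.sub(r"\b\w+\b", repl, sentence)
```

Running a coreference system on both variants and comparing its accuracy isolates the effect of pronoun gender, since everything else in the sentence is identical.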