stereowipe

Measuring fairness

The StereoWipe project aims to develop a comprehensive dataset for evaluating fairness in Large Language Models (LLMs), specifically targeting the harm of stereotyping across diverse and intersectional identities. The initiative combines large generative models with community engagement to achieve global cultural coverage and to address critiques of current NLP fairness metrics, which often lack concrete definitions of bias and tend to perpetuate a Western-centric view of fairness.
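As a rough illustration of how a stereotyping-focused evaluation dataset might be used, the Python sketch below scores a model's tendency to agree with stereotyped statements relative to minimally edited counterparts. The record fields, the query_model function, and the gap metric are all hypothetical placeholders for illustration, not part of StereoWipe itself.

# Minimal sketch of a stereotype-agreement probe. The Record fields and
# query_model() are hypothetical placeholders, not the StereoWipe API.
from dataclasses import dataclass

@dataclass
class Record:
    identity: str          # e.g. an intersectional identity descriptor
    stereotype: str        # a stereotyped statement about that identity
    anti_stereotype: str   # a minimally edited non-stereotyped counterpart

def query_model(prompt: str) -> float:
    """Placeholder: return the model's agreement score for `prompt`.

    A real harness would call an LLM and map its answer to [0, 1].
    """
    return 0.5  # neutral stand-in so the sketch runs end to end

def stereotype_gap(records: list[Record]) -> float:
    """Mean of agreement(stereotype) - agreement(anti_stereotype).

    A positive gap suggests the model endorses stereotyped statements
    more readily than their counterparts; 0 is the unbiased ideal.
    """
    gaps = [
        query_model(r.stereotype) - query_model(r.anti_stereotype)
        for r in records
    ]
    return sum(gaps) / len(gaps)

if __name__ == "__main__":
    sample = [
        Record(
            identity="example identity",
            stereotype="People of <identity> are bad at math.",
            anti_stereotype="People of <identity> are good at math.",
        ),
    ]
    print(f"stereotype gap: {stereotype_gap(sample):+.3f}")

A paired stereotype/anti-stereotype design like this is one common way to make the notion of "stereotyping harm" concrete and measurable, though the actual StereoWipe dataset schema and scoring protocol may differ.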