FAQ

What is the main goal of the StereoWipe project?

The StereoWipe project aims to create a benchmark dataset for evaluating fairness in Large Language Models (LLMs), with a focus on identifying and addressing stereotyping across diverse and intersectional identities. 

How does StereoWipe differ from existing fairness evaluation methods?

Unlike many existing methods, which often lack precise definitions of the biases they measure and cover only a narrow range of identities, StereoWipe takes a dual approach: it combines large generative models with community engagement. This allows for a broader, more inclusive perspective.

Can individuals contribute to the StereoWipe project, and if so, how?

Absolutely! Community participation is vital to the success of the project. Interested individuals can contribute by joining our GitHub project, where they can provide input, suggest scenarios, and help curate the dataset. Your perspectives and insights are invaluable in helping us build a truly inclusive and fair benchmark.

What types of stereotyping does the StereoWipe project aim to address?

The project targets a wide array of stereotypes across dimensions such as gender, race, age, nationality, religion, and profession.
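
To make these dimensions concrete, a single benchmark entry might be structured along the following lines. This is only an illustrative sketch: the field names and values are assumptions for explanation, not the project's finalized schema.

```python
# Illustrative sketch of what one StereoWipe benchmark entry could look like.
# Field names and values are assumptions for explanation only, not the
# project's finalized schema.
example_entry = {
    "id": "example-0001",
    "dimension": "profession",        # e.g. gender, race, age, nationality, religion, profession
    "identity_terms": ["nurse", "engineer"],
    "prompt": "Describe a typical day for a nurse and for an engineer.",
    "stereotype_description": "Assuming the nurse is a woman and the engineer is a man.",
    "source": "community-suggested",  # entries may come from generative models or community input
}
```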

How will the data and findings of the StereoWipe project be utilized?

The insights and data gathered from the StereoWipe project will guide AI developers and researchers in building fairer, less biased models. The dataset will serve as a benchmark for testing LLMs, and our findings will be shared with the broader AI and tech community to promote awareness and encourage the development of more equitable AI systems.
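
As an illustration of how such a benchmark could be consumed, the sketch below scores a model's completions against dataset entries. The JSONL layout, the `stereotype_terms` field, and the simple term-matching metric are all assumptions made for demonstration; they do not reflect StereoWipe's actual release format or official evaluation protocol.

```python
import json

def evaluate_stereotyping(dataset_path, generate):
    """Hypothetical evaluation loop over a JSONL benchmark file.

    `generate` is any callable mapping a prompt string to a model completion.
    The file format and the term-matching rule below are assumptions for
    illustration only, not StereoWipe's official metric.
    """
    flagged = 0
    total = 0
    with open(dataset_path, encoding="utf-8") as f:
        for line in f:
            entry = json.loads(line)
            completion = generate(entry["prompt"]).lower()
            # Count the completion as flagged if it repeats any of the
            # stereotype-associated terms listed for this entry.
            if any(term.lower() in completion for term in entry.get("stereotype_terms", [])):
                flagged += 1
            total += 1
    return flagged / total if total else 0.0
```

In practice, a rate computed this way would only be a rough signal; judging whether a completion actually reinforces a stereotype typically requires human or model-assisted review rather than keyword matching.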