In a pioneering effort, doctoral students from Oregon State University have joined forces with Adobe researchers to develop a novel and cost-effective training technique to reduce social biases in AI systems. The method, named FairDeDup (short for fair deduplication), removes redundant information from the datasets used to train AI, thereby reducing the high computing costs typically associated with AI training.
The Problem of Bias in AI Training
AI systems are usually trained on datasets sourced from the internet, which often contain inherent societal biases. When these biases are embedded in the AI models, they can perpetuate unfair ideas and behaviors. The researchers from OSU and Adobe emphasize the importance of understanding how deduplication impacts the prevalence of these biases.
FairDeDup: The New Approach
FairDeDup works by pruning datasets of image captions collected from the web. In this context, pruning means selecting a subset of the data that is representative of the whole dataset. By making informed decisions about which parts of the data to retain and which to discard, FairDeDup aims to maintain diversity while removing redundancy.
“FairDeDup removes redundant data while incorporating controllable, human-defined dimensions of diversity to mitigate biases,” explained Eric Slyman of the OSU College of Engineering. “Our approach enables AI training that is not only cost-effective and accurate but also more fair.”
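For readers who want a concrete picture, the sketch below illustrates the general idea of fairness-aware deduplication: semantically similar samples are grouped, near-duplicates are discarded, and when several duplicates are interchangeable, preference goes to samples carrying underrepresented, human-defined attributes. This is a minimal illustration of the concept, not the authors' implementation; the embeddings, cluster count, similarity threshold, and the attribute_of helper are all assumptions introduced for this example.

```python
# Illustrative sketch of fairness-aware deduplication (not the FairDeDup code).
# Assumes precomputed image/caption embeddings and a hypothetical
# `attribute_of(idx)` helper mapping a sample to a human-defined
# diversity dimension (e.g., a geography or occupation label).
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans


def fair_dedup(embeddings: np.ndarray, attribute_of, n_clusters: int = 100,
               sim_threshold: float = 0.95) -> list[int]:
    """Return indices of a pruned, deduplicated subset of the dataset."""
    # 1. Group semantically similar samples by clustering their embeddings.
    labels = KMeans(n_clusters=n_clusters, n_init="auto").fit_predict(embeddings)

    kept: list[int] = []
    seen_attrs: Counter = Counter()  # running tally of retained attributes

    for c in range(n_clusters):
        idxs = np.where(labels == c)[0]
        # 2. Visit least-represented attributes first, so that when duplicates
        #    are interchangeable, the underrepresented sample is the one kept
        #    (this tie-breaking preference is the fairness-aware step).
        idxs = sorted(idxs, key=lambda i: seen_attrs[attribute_of(i)])
        cluster_kept: list[int] = []
        for i in idxs:
            # 3. Skip a sample if it is a near-duplicate (high cosine
            #    similarity) of one already retained from this cluster.
            is_dup = any(
                np.dot(embeddings[i], embeddings[j])
                / (np.linalg.norm(embeddings[i]) * np.linalg.norm(embeddings[j]))
                > sim_threshold
                for j in cluster_kept
            )
            if not is_dup:
                cluster_kept.append(i)
                seen_attrs[attribute_of(i)] += 1
        kept.extend(cluster_kept)
    return kept
```

In this toy version, the attribute tally is global, so balance is encouraged across the whole pruned dataset rather than within each cluster; the actual method's selection criteria are controllable along the human-defined dimensions the researchers describe.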
Addressing a Range of Biases
The researchers aimed to reduce biases related to occupation, race, gender, age, geography, culture, and more. By addressing these biases during the dataset pruning process, the team hopes to create AI systems that are more “socially just.”
“Our work doesn’t force AI into following our own prescribed notion of fairness but rather creates a pathway to nudge AI to act fairly when contextualized within some settings and user bases in which it’s deployed,” said Slyman. “We let people define what is fair in their setting instead of the internet or other large-scale datasets deciding that.”
Conclusion
Oregon State University and Adobe researchers have collaborated to develop FairDeDup, a method that promises to make AI training more efficient and equitable. By reducing redundancy while enhancing diversity in training datasets, FairDeDup represents a significant step toward creating AI systems that reflect a broader range of human experiences and values.
By Asif Raza