Fair Data (Re)weighting
- Mentors
- Pranjal Awasthi, Jenny Hamer
- Organization
- Responsible AI and Human-Centered Technology
- Technologies
- Python, TensorFlow
- Topics
- machine learning, fairness, Responsible AI
This project seeks to experimentally examine and benchmark the performance of fair data reweighting (FDW), a fairness preprocessing technique, as part of TensorFlow's Model Remediation library.
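FDW's exact weighting scheme is not detailed in this summary, so the sketch below only illustrates the general preprocessing pattern: each example receives a weight based on its sensitive group, and training proceeds with an ordinary Keras `sample_weight`. The inverse-group-frequency rule, the helper name, and the toy data are assumptions for illustration, not the library's API.

```python
import numpy as np
import tensorflow as tf

def inverse_frequency_weights(group_ids):
    """Weight each example inversely to its group's frequency.

    A simple stand-in for a reweighting scheme; FDW itself may derive
    weights differently (e.g. by optimizing a fairness objective).
    """
    group_ids = np.asarray(group_ids)
    groups, counts = np.unique(group_ids, return_counts=True)
    freq = {g: c / len(group_ids) for g, c in zip(groups, counts)}
    weights = np.array([1.0 / freq[g] for g in group_ids])
    # Normalize so the average weight is 1, keeping the effective dataset size stable.
    return weights / weights.mean()

# Hypothetical training data: features, binary labels, and a sensitive-group id per example.
x_train = np.random.rand(1000, 8).astype("float32")
y_train = np.random.randint(0, 2, size=1000).astype("float32")
group_train = np.random.choice([0, 1], size=1000, p=[0.9, 0.1])

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Reweighting is a preprocessing step: the weights feed into ordinary training.
model.fit(x_train, y_train,
          sample_weight=inverse_frequency_weights(group_train),
          epochs=5, batch_size=32, verbose=0)
```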
The approach is a set of experiments that evaluate the impact of FDW both on traditional model metrics such as accuracy and R², and on fairness metrics (prediction disparity between groups; accuracy gap between groups), across a range of datasets, machine learning models, and hyperparameter configurations. FDW is then benchmarked against other fairness remediation techniques, such as MinDiff, on the same indicators.
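To make the two fairness indicators above concrete, here is a minimal NumPy sketch that computes the positive-prediction disparity and the per-group accuracy gap for a binary classifier. The helper name and example data are hypothetical, not part of TensorFlow Model Remediation.

```python
import numpy as np

def group_metrics(y_true, y_pred, group_ids):
    """Return the gap in positive-prediction rate and in accuracy across groups."""
    y_true, y_pred, group_ids = map(np.asarray, (y_true, y_pred, group_ids))
    rates, accs = [], []
    for g in np.unique(group_ids):
        mask = group_ids == g
        rates.append(y_pred[mask].mean())                    # positive-prediction rate
        accs.append((y_pred[mask] == y_true[mask]).mean())   # per-group accuracy
    return {
        "prediction_disparity": max(rates) - min(rates),
        "accuracy_gap": max(accs) - min(accs),
    }

# Example: evaluate hard predictions on a held-out split with a binary sensitive group.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(group_metrics(y_true, y_pred, groups))
```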
Further, documentation will be created to ease understanding and use of FDW. This includes a text tutorial for TensorFlow's Responsible AI site and Colab notebooks demonstrating how to use FDW on demo datasets.
The minimum deliverables are a report of the insights gained from the experiments and a set of documentation, comprising tutorials and Colab code samples, showing how to use FDW. Optional additional deliverables may include changes to the FDW implementation itself or further integration into the TensorFlow ecosystem.