Publication
ICLR 2021
Conference paper

Individually Fair Rankings

Abstract

We develop an algorithm to train individually fair learning-to-rank (LTR) models. The proposed approach ensures items from minority groups appear alongside similar items from majority groups. This notion of fair ranking is based on the definition of individual fairness from supervised learning and is more nuanced than prior fair LTR approaches that simply ensure the ranking model provides underrepresented items with a basic level of exposure. The crux of our method is an optimal transport-based regularizer that enforces individual fairness and an efficient algorithm for optimizing the regularizer. We show that our approach leads to certifiably individually fair LTR models and demonstrate the efficacy of our method on ranking tasks subject to demographic biases.

One-sentence Summary: We present an algorithm for training individually fair learning-to-rank systems using optimal transport tools.
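The paper's actual regularizer and optimization algorithm are defined in the full text; as a rough illustration of the general idea behind an optimal-transport-based fairness penalty, the sketch below computes the 1-D transport cost between the ranking scores of comparable items from two groups and scales it into a training penalty. All names (`wasserstein_1d`, `fair_ranking_penalty`, `weight`) are hypothetical and not from the paper.

```python
def wasserstein_1d(a, b):
    # Exact 1-D optimal transport cost between two equal-size empirical
    # distributions: sort both samples and average the pairwise gaps.
    assert len(a) == len(b), "sketch assumes equal-size samples"
    a, b = sorted(a), sorted(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)


def fair_ranking_penalty(scores_group_a, scores_group_b, weight=1.0):
    # Hypothetical regularizer: penalize the distributional gap between the
    # model's scores for comparable items from the two groups, so that
    # similar items receive similar rankings regardless of group membership.
    return weight * wasserstein_1d(scores_group_a, scores_group_b)


# If comparable items from both groups score identically, the penalty is zero;
# a systematic score gap between groups yields a positive penalty.
no_gap = fair_ranking_penalty([0.9, 0.5, 0.2], [0.9, 0.5, 0.2])
gap = fair_ranking_penalty([0.9, 0.5, 0.2], [0.7, 0.3, 0.0])
```

In training, a term like this would be added to the ranking loss so that gradient descent trades off relevance against score parity for comparable items.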

Date

03 May 2021