Our society collects data on people for a wide range of applications, from building a census for policy evaluation to running meaningful clinical trials. To collect data, we typically sample individuals with the goal of accurately representing a population of interest. However, current sampling processes often collect data opportunistically from available sources, which can lead to datasets that are biased and unrepresentative, i.e., the collected dataset does not accurately reflect the demographic distribution of the true population. This is a concern because subgroups within the population can be under- or over-represented in a dataset, which may harm generalizability and lead to an unequal distribution of benefits and harms from downstream tasks that use such datasets (e.g., algorithmic bias in medical decision-making algorithms). In this paper, we assess the relationship between the representativeness of a dataset and the group fairness of classifiers trained on that dataset. We demonstrate that there is a natural tension between dataset representativeness and classifier fairness; empirically, we observe that training datasets with better representativeness can frequently result in classifiers with higher rates of unfairness. We provide some intuition as to why this occurs via a set of theoretical results for the case of univariate classifiers. We also find that over-sampling underrepresented groups can result in classifiers that exhibit greater bias against those groups. Lastly, we observe that fairness-aware sampling strategies (i.e., those specifically designed to select data that yields high downstream fairness) often over-sample members of majority groups. These results demonstrate that the relationship between dataset representativeness and downstream classifier fairness is complex; balancing the two requires special care from both model designers and dataset designers.
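To make the two quantities in the abstract concrete, the following minimal Python sketch (not the paper's experimental code; the data-generating process, group proportions, and all function names are illustrative assumptions) trains a classifier on a demographically representative versus a skewed sample of a synthetic population and reports accuracy alongside the demographic parity gap, one standard group-fairness metric.

```python
# Illustrative sketch only: measures how training-set representativeness
# relates to a downstream group-fairness metric on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_population(n, p_minority=0.3):
    """Synthetic population: group label a in {0,1}, one feature x, label y.
    The two groups have shifted feature distributions, so a single decision
    boundary cannot fit both equally well (a hypothetical setup)."""
    a = (rng.random(n) < p_minority).astype(int)
    x = rng.normal(loc=np.where(a == 1, 1.0, 0.0), scale=1.0)
    y = (x + 0.5 * a + rng.normal(0, 0.5, n) > 0.5).astype(int)
    return x.reshape(-1, 1), y, a

def demographic_parity_gap(clf, x, a):
    """|P(yhat=1 | a=1) - P(yhat=1 | a=0)|: one common group-fairness metric."""
    yhat = clf.predict(x)
    return abs(yhat[a == 1].mean() - yhat[a == 0].mean())

# Held-out data drawn from the true population (30% minority group here).
x_test, y_test, a_test = make_population(20_000)

# Compare a representative training sample against one that under-samples
# the minority group, recording accuracy alongside the fairness gap.
for label, p_min_train in [("representative", 0.3), ("skewed", 0.05)]:
    x_tr, y_tr, _ = make_population(2_000, p_minority=p_min_train)
    clf = LogisticRegression().fit(x_tr, y_tr)
    acc = clf.score(x_test, y_test)
    gap = demographic_parity_gap(clf, x_test, a_test)
    print(f"{label:14s} acc={acc:.3f} parity_gap={gap:.3f}")
```

The demographic parity gap is only one choice of group-fairness measure; an equalized-odds-style gap could be substituted in the same harness without changing its structure.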