
Dataset Representativeness and Downstream Task Fairness


Basic Information

DOI: --
Publication date: 2024
Journal: --
Impact factor: --
Corresponding author: Yevgeniy Vorobeychik
CAS (中科院) journal division: --
Document type: --
Authors: Victor A. Borza; Andrew Estornell; Chien; Bradley A. Malin; Yevgeniy Vorobeychik
Research area: --
MeSH terms: --
Keywords: --
Source link: PubMed detail page

Abstract

Our society collects data on people for a wide range of applications, from building a census for policy evaluation to running meaningful clinical trials. To collect data, we typically sample individuals with the goal of accurately representing a population of interest. However, current sampling processes often collect data opportunistically from data sources, which can lead to datasets that are biased and not representative, i.e., the collected dataset does not accurately reflect the distribution of demographics of the true population. This is a concern because subgroups within the population can be under- or over-represented in a dataset, which may harm generalizability and lead to an unequal distribution of benefits and harms from downstream tasks that use such datasets (e.g., algorithmic bias in medical decision-making algorithms). In this paper, we assess the relationship between dataset representativeness and group-fairness of classifiers trained on that dataset. We demonstrate that there is a natural tension between dataset representativeness and classifier fairness; empirically we observe that training datasets with better representativeness can frequently result in classifiers with higher rates of unfairness. We provide some intuition as to why this occurs via a set of theoretical results in the case of univariate classifiers. We also find that over-sampling underrepresented groups can result in classifiers which exhibit greater bias to those groups. Lastly, we observe that fairness-aware sampling strategies (i.e., those which are specifically designed to select data with high downstream fairness) will often over-sample members of majority groups. These results demonstrate that the relationship between dataset representativeness and downstream classifier fairness is complex; balancing these two quantities requires special care from both model- and dataset-designers.
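The abstract's two central quantities can be made concrete with a small experiment. Below is a minimal, hypothetical sketch (not the authors' code, data, or experimental setup): representativeness is measured as the gap between a training sample's minority share and an assumed population share, and unfairness as the demographic-parity gap of a logistic-regression classifier on a representative test set. The synthetic group distributions, `POP_MINORITY_FRAC`, and all helper names are assumptions made for illustration only.

```python
# Minimal sketch of the representativeness-vs-fairness tension described in
# the abstract. Synthetic data; all distributions below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

POP_MINORITY_FRAC = 0.3  # assumed true minority share in the population

def make_group(n, group):
    """Draw (x, y) for one group; the groups have different feature and
    label distributions, which is what creates the tension."""
    if group == 0:  # majority group
        x = rng.normal(0.0, 1.0, n)
        y = (x + rng.normal(0, 0.5, n) > 0.0).astype(int)
    else:           # minority group: shifted feature/label distribution
        x = rng.normal(1.0, 1.0, n)
        y = (x + rng.normal(0, 0.5, n) > 1.0).astype(int)
    return x.reshape(-1, 1), y

def sample_dataset(n, minority_frac):
    """Sample a dataset with a chosen minority fraction."""
    n1 = int(n * minority_frac)
    x0, y0 = make_group(n - n1, 0)
    x1, y1 = make_group(n1, 1)
    X = np.vstack([x0, x1])
    y = np.concatenate([y0, y1])
    g = np.concatenate([np.zeros(n - n1), np.ones(n1)])  # group labels
    return X, y, g

# Large test set drawn at the true population proportions.
X_te, y_te, g_te = sample_dataset(20000, POP_MINORITY_FRAC)

for frac in [0.1, 0.3, 0.5]:  # under-, exactly-, and over-represented
    X_tr, y_tr, _ = sample_dataset(2000, frac)
    clf = LogisticRegression().fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    # Representativeness gap: distance of sample share from population share.
    repr_gap = abs(frac - POP_MINORITY_FRAC)
    # Unfairness: demographic-parity gap |P(yhat=1|g=0) - P(yhat=1|g=1)|.
    parity_gap = abs(pred[g_te == 0].mean() - pred[g_te == 1].mean())
    print(f"minority_frac={frac:.1f}  repr_gap={repr_gap:.2f}  "
          f"parity_gap={parity_gap:.3f}")
```

Because the pooled decision boundary shifts as the minority fraction changes, the parity gap need not be smallest at the most representative sample, which is the kind of tension the paper studies; the specific numbers here depend entirely on the assumed synthetic distributions.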
References (2)
Cited by (0)
Nargesian, Fatemeh; Asudeh, Abolfazl; Jagadish, H. V. "Tailoring Data Source Distributions for Fairness-aware Data Integration." Proceedings of the VLDB Endowment, 2021. DOI: --. Impact factor: 2.5. Corresponding author: Jagadish, H. V.
Jin, Zhongjun; Xu, Mengjing; Sun, Chenkai; Asudeh, Abolfazl; Jagadish, H. V. "MithraCoverage: A System for Investigating Population Bias for Intersectional Fairness." Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, 2020. DOI: 10.1145/3318464.3384689. Impact factor: 0. Corresponding author: Jagadish, H. V.


Yevgeniy Vorobeychik
Mailing address: --
Affiliation: --
Email: --