In privacy-sensitive systems where participants operate under pseudonyms, timing information can be exploited to compromise anonymity. Motivated by applications in online peer review forums and cryptocurrency transactions, we consider the deanonymization risk arising from batching: multiple actions taken by a user at nearly the same time. We formulate privacy against batching attacks for an adversary that knows a probabilistic model generating the data. We then give a queue-based algorithm that introduces delays into the system to prevent linking actions to the same user, together with theoretical results showing that formal privacy guarantees are achievable without adding excessive delay. Finally, we show that, under the problem's constraints, it is not possible to defend against the stronger adversary modeled by standard differential privacy definitions. In particular, we prove that an algorithm that must release all the data it receives, and no fake data, cannot be differentially private in our setting.
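To make the batching threat and the delay-based defense concrete, the sketch below illustrates the general idea of releasing actions through a queue with independently drawn random delays, so that actions submitted together no longer leave the system together. This is only an illustrative sketch under assumed choices (a uniform delay bounded by a hypothetical max_delay parameter, and made-up names DelayedReleaseQueue, submit, release_due); it is not the algorithm or the privacy guarantee developed in the paper.

```python
import heapq
import random
from dataclasses import dataclass, field


@dataclass(order=True)
class _Pending:
    release_time: float
    action: object = field(compare=False)


class DelayedReleaseQueue:
    """Hold incoming actions and release each one only after a random delay,
    so that a batch of actions from one user is spread out over time.
    Illustrative sketch only; the delay distribution is an assumption."""

    def __init__(self, max_delay: float, rng: random.Random = None):
        self.max_delay = max_delay
        self.rng = rng if rng is not None else random.Random()
        self._heap = []  # min-heap of _Pending, ordered by release_time

    def submit(self, action, now: float) -> None:
        # Draw an independent delay for each action (uniform here, as an assumption).
        delay = self.rng.uniform(0.0, self.max_delay)
        heapq.heappush(self._heap, _Pending(now + delay, action))

    def release_due(self, now: float) -> list:
        # Release every action whose scheduled release time has passed.
        released = []
        while self._heap and self._heap[0].release_time <= now:
            released.append(heapq.heappop(self._heap).action)
        return released
```

In this toy version, the delay bound trades off timing privacy against latency; the paper's results concern how much delay is actually needed for a formal guarantee, which this sketch does not attempt to capture.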