Mitigating Privacy Conflicts with Computational Theory of Mind

2025-01-01
Erdogan, Emre
Aydın, Hüseyin
Dignum, Frank
Verbrugge, Rineke
Yolum, Pınar
Multiagent systems bring together agents that represent different users with possibly different concerns. When these agents interact to make decisions, conflicts can occur. A well-known case involves privacy. Agents often need to manage the privacy of content that belongs to multiple users, such as group pictures shared on social media. When agents have different expectations about how the content should be shared, multi-party privacy conflicts can arise. How should we design agents to deal with such conflicts? We conducted an empirical user study to understand the effect of group dynamics in various multi-party privacy settings. Our findings show that as users' beliefs and knowledge about others evolve, their privacy expectations shift as well. Inspired by this, we propose computational agents that employ a human-inspired Theory of Mind (ToM) model to help their users preserve their privacy in multi-party privacy conflicts. The agents can express empathy when others are in need but can also fight for their own privacy. We evaluate our approach in multiagent simulations with varying decision-making strategies. Our results demonstrate that ToM-enabled agents improve privacy preservation for all parties, and even more so when their understanding of others is dynamically updated through learning.
24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025
Citation Formats
E. Erdogan, H. Aydın, F. Dignum, R. Verbrugge, and P. Yolum, “Mitigating Privacy Conflicts with Computational Theory of Mind,” presented at the 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025, Michigan, United States, 2025, Accessed: 00, 2025. [Online]. Available: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=105009759652&origin=inward.