About

The impacts of rapid developments in artificial intelligence (“AI”) on society, both real and not yet realized, raise deep and pressing questions about our philosophical ideals and institutional arrangements. AI is currently applied in a wide range of fields, such as medical diagnosis, criminal sentencing, online content moderation, and public resource management, but it is only just beginning to realize its potential to influence practically all areas of human life, including geopolitical power balances. As these technologies advance and increasingly come to mediate our everyday lives, it becomes necessary to consider how they may reflect prevailing philosophical perspectives and preferences. We must also assess how the architectural design of AI technologies today might influence human values in the future. This step is essential in order to identify the positive opportunities presented by AI and to unleash these technologies’ capabilities in the most socially advantageous way possible, while remaining mindful of potential harms. Critics question the extent to which individual engineers and proprietors of AI should take responsibility for the direction of these developments, or whether centralized policies are needed to steer growth and incentives in the right direction. What even is the right direction? How can it best be achieved?

The Princeton Dialogues on AI and Ethics is a research collaboration between Princeton’s University Center for Human Values (UCHV) and the Center for Information Technology Policy (CITP) that seeks to explore these questions, as well as many more. Research focuses on the emerging field of artificial intelligence (broadly defined) and its interaction with ethics and political theory. The aim of this project is to develop a set of intellectual reasoning tools to guide practitioners and policy makers, both current and future, in developing the ethical frameworks that will ultimately underpin their technical and legislative decisions. More than ever before, individual-level engineering choices are poised to impact the course of our societies and human values. And yet there have been limited opportunities for AI technology actors, academics, and policy makers to come together to discuss these outcomes and their broader social implications in a systematic fashion. This project aims to provide such opportunities for interdisciplinary discussion and in-depth reflection, in the form of public conferences, invitation-only workshops, and other outreach efforts.

The Princeton Dialogues on AI and Ethics is part of a wider effort at Princeton University to investigate the intersection of AI technology, politics, and philosophy. This project places particular emphasis on the ways in which the interconnected forces of technology and its governance simultaneously influence and are influenced by the broader social structures in which they are situated. The Princeton Dialogues on AI and Ethics draws on the university’s exceptional strengths in computer science, public policy, and philosophy. The project also seeks opportunities for cooperation with existing projects both within and outside of academia.

This project would not have been possible without the generous support of the University Center for Human Values and the Center for Information Technology Policy. We would also like to thank the many individuals from academia, industry, and government who have shared their time and knowledge with us in the development of this program. Special thanks to those who participated in the case study workshops and contributed comments to drafts at various stages.
