Trust is vital for much of what we know and do. Yet consensus about how best to understand the nature and norms of trust remains elusive. In a series of papers, I explore how trust can come in different forms (what I call pluralism about trust) and how this pluralism bears on assessments of when a potential trustee is worthy of trust.
Most philosophers of science agree that values play some role in science; whether and how they should do so is hotly contested. Science is also at the center of public decision-making in most democracies. In a pair of papers, I engage with empirical literature on trust to explain how values disagreements can affect science and science communication. I argue that managing values in science is itself value-laden and lies at the heart of integrating science into society.
Across higher education, government, and industry, there are calls for norms and guidelines to ensure AI’s trustworthiness. However, AI differs in important respects from other paradigmatic trustees (humans, non-human animals, organizations, etc.). I argue that it is important to distinguish between two types of questions here. On the one hand, there are ontological questions about whether trust in AI is possible. For instance, one might think that AI lacks capacities necessary for trust. On the other hand, there are normative questions about whether one should trust AI. That is, even supposing such trust is possible, it remains to be seen whether it is good, fitting, obligatory, and so on. While I maintain that trust in AI is possible, I argue that there is a hard problem in sorting out the normative questions, especially when it comes to operationalizing standards for measuring trustworthiness.