Trust is vital for much of what we know and do. Yet, consensus about how best to understand the nature and norms of trust remains elusive. In a series of papers, I explore how trust can come in different forms—what I call pluralism about trust—and the impact of pluralism on assessments of when a potential trustee is worthy of trust.
Most philosophers of science agree that values play some role in science, though whether and how they should is hotly contested. Science is also at the center of public decision making in most democracies. In a pair of papers, I engage with the empirical literature on trust to explain how disagreements about values can affect science and science communication. I argue that managing values in science is itself value-laden and lies at the heart of integrating science into society.
Across higher education, government, and industry, there are calls for norms and guidelines to ensure AI’s trustworthiness. However, AI differs in important respects from other paradigmatic trustees (humans, non-human animals, organizations, etc.). I argue that it is important to distinguish between two types of questions here. On the one hand, there are ontological questions about whether trust in AI is possible. For instance, one might think that AI lacks certain capacities necessary for trust. On the other hand, there are normative questions about whether one should trust AI. That is, supposing trust in AI is possible, it remains to be seen whether it is good, fitting, obligatory, and so on. While I maintain that trust in AI is possible, I argue that there is a hard problem in settling the normative questions, especially when operationalizing standards for measuring trustworthiness.
2023 “The intentions of information sources can affect what information people think qualifies as true.” Scientific Reports 13, 7718. https://doi.org/10.1038/s41598-023-34806-4 (with I.J. Handley-Miner, R. Atkins, et al.)
2018 “Peircean Faith: Perception, Trust, and Religious Belief in the Conduct of Life.” Transactions of the Charles S. Peirce Society 54 (4): 457–482.
Papers Under Review and Drafts in Preparation
A paper about trust’s pluralistic nature (Under Review; draft available upon request)
A paper developing a function-first approach to trust (Under Review; draft available upon request)
A paper about value-based disagreements and public trust in science (Under Review; draft available upon request)
A paper defending a norm-based approach to trustworthy AI (Under Review; draft available upon request)
“Fiduciary Duties and the Ethics of Expertise” (90% complete; draft available upon request)
“Trust between Outlaws: On Immoral and Amoral Forms of Trust” (85% complete; draft available upon request)
“Against Epistemic Ownership” (90% complete; draft available upon request)
2026 (confirmed) “A Norm-based Approach to Trustworthy AI: Trust within Human Limits,” Symposium on Technology and the Human Person in the Age of AI, Baylor University.
2026 (confirmed) “Possibility Gaps: Rethinking Technological Value Embedding, Limits, and Justice,” APA Central 2026, group session of the Concerned Philosophers for Peace.
2025 “Pursuing Learning Goals with Generative AI,” Annual Peer-Led Team Learning International Society Meeting, Los Angeles, California (Invited Keynote).
2025 “Pragmatic Pluralism about Trust,” Pacific Meeting of the American Philosophical Association, San Francisco, CA.
2024 “Value Divergence and Democratic Science,” Science and Democracy, organized jointly by the PERITIA Network and the University of Oslo, University College Dublin, Ireland.
2024 “The Ethics of AI in the Classroom,” The Derek Bok Center for Teaching and Learning, Harvard University (Invited).
2024 “Pragmatic Pluralism about Trust,” 51st Meeting of the Society for the Advancement of American Philosophy, Boston.
2023 “The Conditions of Trust,” PERITIA Conference: “Rethinking Policy, Expertise, and Trust,” University College Dublin, Ireland.
2023 “An Ethics for Trust in Inquiry,” invited panel entitled “Ethics and Inquiry,” organized for the Charles S. Peirce Society meeting at APA Central, Denver, Colorado.
2022 “Warranted Trust and Divergent Values,” International Conference on Engaging Ethics and Epistemology in Science, Hannover, Germany.
2022 “Pluralism about Trust,” Graduate Cross-training Workshop, Calvin University, USA.
2022 “Rarely pure and never simple: Truth evaluations and the role of intent in expert testimony,” (with Isaac Handley-Miner) Society for Philosophy & Psychology Meeting, Milan, Italy.
2022 “Rarely pure and never simple: Truth, Trust, and the Role of Intent in Expert Testimony,” (with Isaac Handley-Miner) VMST 2022: Values in Medicine, Science, and Technology, University of Texas, Dallas.
2021 “Warranted Trust and Divergent Values,” SAS21: Trust in Science, HLRS University Stuttgart, Germany.
2021 “Warranted Trust and Divergent Values,” Graduate Philosophy Conference, University of Tennessee, Knoxville.
2020 “Trust the Process: Expert Consensus and the Self-Correction of Induction,” Boston University Graduate Conference in Philosophy. (Cancelled due to COVID-19)
2019 “Trust the Process: Peirce on the Self-Correction of Induction,” John J. Reilly Center for Science, Technology, and Values, Conference: “Science and the Public.”