My work is also available for download at my PhilPeople profile, which I more frequently update. Some of the papers archived for download are penultimate drafts. Please cite the final, published version.
A Manifesto for Moral Knowledge
Under contract with Cambridge University Press as part of their Elements series.
Easy Knowledge, Junk Knowledge, Knowledge Nonetheless
Forthcoming in Di Paolo and Oliveira (eds.), Hilary Kornblith and His Critics. Wiley-Blackwell.
Hilary Kornblith argues that bootstrapping is legitimate when it is itself a reliable process. I disagree: even reliable bootstrapping is bad. I argue that this is because reliabilist “knowledge” is insufficient as a basis for reasoning in the absence of higher-order evidence of reliability. I then draw a broader lesson about the theoretical costs of reliabilism, the connection between knowledge and reasoning, and the possible roles knowledge might play.
Open-Mindedness, Rational Confidence, and Belief Change
Social Epistemology Review and Reply Collective, 12(2): 33–44. (PDF)
It’s intuitive to think that (a) the more sure you are of something, the harder it’ll be to change your mind about it, and (b) you can’t be open-minded about something if you’re very sure about it. If these thoughts are right, then, with minimal assumptions, it follows that you can’t be in a good position both to escape echo chambers and to be rationally resistant to fake news: the former requires open-mindedness, but the latter is inimical to it. I argue that neither thought is true and that believing them will get us all mixed up. I show that you can be open-minded and have confidently held beliefs, and that beliefs in which you are less sure are not, thereby, more fragile. I close with some reflections on the nature of rational belief change and open-mindedness and a brief sketch of what might actually help us in the fight against misinformation and belief polarization.
(Jeremy Fantl replies to this paper here.)
The Limits of Rational Belief Revision: A Dilemma for the Darwinian Debunker
Noûs, 55(3): 717–734, 2021. (Penultimate version.)
We are fallible creatures, prone to making all sorts of mistakes. So, we should be open to evidence of error. But what constitutes such evidence? And what is it to rationally accommodate it? I approach these questions by considering an evolutionary debunking argument according to which (a) we have good, scientific, reason to think our moral beliefs are mistaken, and (b) rationally accommodating this requires revising our confidence in, or altogether abandoning the suspect beliefs. I present a dilemma for such debunkers, which shows that either we have no reason to worry about our moral beliefs, or we do but we can self-correct. Either way, moral skepticism doesn’t follow. That the evolutionary debunking argument fails is important; also important, however, is what its failure reveals about rational belief revision. Specifically, it suggests that getting evidence of error is a non-trivial endeavor and that we cannot learn that we are likely to be mistaken about some matter from a neutral stance on that matter.
Review of Berislav Marušić’s Evidence and Agency: Norms of Belief for Promising and Resolving.
Ethics, 128(3): 687–695, 2018. (Penultimate version.)
Deliberation and Prediction: It’s Complicated
Episteme, 13(4): 529–538, 2016. (Penultimate version.)
Alan Hájek attacks the idea that deliberation crowds out prediction—that when we are deliberating about what to do, we cannot rationally accommodate evidence about what we are likely to do. Although Hájek rightly diagnoses the problems with some of the arguments for the view, his treatment falls short in crucial ways. In particular, he fails to consider the most plausible version of the argument, the best argument for it, and why anyone would ever believe it in the first place. In doing so, he misses a deep puzzle about deliberation and prediction—a puzzle which all of us, as agents, face, and which we may be able to resolve by recognizing the complicated relationship between deliberation and prediction.
Irrelevant Influences
Philosophy and Phenomenological Research, 96(1): 134–152, 2018. Early View, 2016. (Penultimate version.)
It can be disturbing to realize that your belief reflects the influence of irrelevant factors. But should it be? Such influence is pervasive. If we are to avoid mass agnosticism, we must determine when evidence of irrelevant belief influence is undermining and when it is not. I provide a principled way to do this. I explain why belief revision is required when it is, and why it isn’t when it isn’t. I argue that rational humility requires us to revise our beliefs in response to such evidence. I explain the nature and import of such humility: what it is and what it is to accommodate it. In doing so, I bring to light a little-discussed epistemic challenge, and explain its significance in a way that provides insight into the role of rational humility in our epistemic lives.
Evolutionary Debunking of Moral Realism
Philosophy Compass, 10(2): 104–116, 2015. (Penultimate version.)
Evolutionary debunking arguments move from a premise about the influence of evolutionary forces on our moral beliefs to a skeptical conclusion about those beliefs. My primary aim is to clarify this empirically grounded epistemological challenge. I begin by distinguishing among importantly different sorts of epistemological attacks. I then demonstrate that instances of each appear in the literature under the ‘evolutionary debunking’ title. Distinguishing them clears up some confusions and helps us better understand the structure and potential of evolutionary debunking arguments.
Moral Disagreement and Moral Skepticism
Philosophical Perspectives, 28(1): 302–333, 2014. (Penultimate version.)
The fact of moral disagreement, when conjoined with Conciliationism, an independently attractive view about the epistemic significance of disagreement, seems to entail moral skepticism. This worries those who like Conciliationism but dislike moral skepticism. Others, equally inclined against moral skepticism, think this is a reductio of Conciliationism. I argue that they are both wrong. There is no reductio and we have nothing to worry about.
Debunking Evolutionary Debunking
Oxford Studies in Metaethics, R. Shafer-Landau (ed.), Vol. 9: 76–101, 2014. (Penultimate version.)
Evolutionary debunking arguments start with a premise about the influence of evolutionary forces on our evaluative beliefs, and conclude that we are not justified in those beliefs. The value realist holds that there are attitude-independent evaluative truths. But the debunker argues that we have no reason to think that the evolutionary forces that shaped human evaluative attitudes would track those truths. Worse yet, we seem to have good reason to think that they wouldn’t: evolutionary forces select for creatures with characteristics that correlate with survival and genetic fitness, and beliefs that increase a creature’s fitness and chances of survival plausibly come apart from the true evaluative beliefs. My aim in this paper is to show that no plausible evolutionary debunking argument can both have force against the value realist and not collapse into a more general skeptical argument. I begin by presenting what I take to be the best version of the debunker’s challenge. I then argue that we have good reason to be suspicious of evolutionary debunking arguments. The trouble with these arguments stems precisely from their evolutionary premise. The most ambitious of them threaten to be self-defeating: they rely on an evolutionary premise that is too strong. In more modest debunking arguments, on the other hand, the evolutionary premise is idle. I conclude that there is little hope for evolutionary debunking arguments. This is bad news for the debunker who hoped that the cold, hard scientific facts about our origins would debunk our evaluative beliefs. She has much to do to convince us that her challenge is successful.
Confidence, Evidence, and Disagreement
Erkenntnis, 79(S1):173–183, 2014. (Penultimate version.)
Should learning that we disagree about p lead you to reduce confidence in p? Some who think it should want to except beliefs in which you are rationally highly confident. Against this I show that quite the opposite holds: factors that justify low confidence in p also make disagreement about p less significant. I examine two such factors: your antecedent expectations about your peers’ opinions and the difficulty of evaluating your evidence. I close by showing how this initially surprising result can help us think about confidence, evidence, and disagreement.
Review of Hilary Kornblith’s On Reflection (with Nishi Shah)
Ethics, 124 (3): 632–636, 2014. (Penultimate version.)