Suppose you’re studying the nutritional content of, say, parsnips. You test a hundred samples, and all of them have relatively high amounts of calcium. So you conclude that parsnips are a good source of calcium. That conclusion makes good sense. Of course, there are thousands of other potential parsnip samples out there, and it is possible that all the other ones are actually low in calcium. You can be a sceptic if you like, but most of us are ordinarily happy to accept claims on even less evidence than these hundred samples provide. Remember the claim from the earlier post that you don’t need to survey all the possible evidence in order to form a rational belief. This is just another example of that.
But now suppose that another researcher also happened to be studying the nutritional content of parsnips, and her hundred samples turned out not to have much calcium at all. There could be all sorts of explanations for this. Maybe you’re in different areas and parsnips respond differently to different soil types. But never mind the possible explanations for the discrepancy in research findings. The point I’m interested in is that when you hear about this other researcher’s study, it calls into question the conclusion you drew from your own findings. You should no longer be sure that parsnips in general are a good source of calcium. In fact, you might well think that you should withhold judgement on the question altogether, on the grounds that there is equal evidence pro and con.
This case, and countless others in which we would react the same way, might suggest a general principle: when two people (or two groups of people) of roughly equal intelligence and experience disagree about something, both sides should recognize the opposing evidence and withhold judgement at least until some further evidence becomes available. That’s a rather rough and ready formulation, but I don’t think we need to worry about the details at the moment.
There are lots of cases where that principle seems to get things exactly right. Some people think it always does. Others think it might not hold in every case: political, moral, and religious disagreements might be thought to be exceptions, for example. Suppose half the population thinks it is wrong to kill dogs for food and the other half thinks it is fine. And suppose further that there is no obvious indication that one side has far more vicious people on it, or anything like that. You might think you’re entitled to keep holding your moral view even though half the population disagrees with you.
I’m inclined to think so too. But I do think the fact that half the population disagrees with you is a reason to be cautious about your belief, and perhaps to spend some time thinking about whether you might have gotten it wrong. In other words, disagreement with peers may not mean that you need to drop your belief, but it surely calls for some humility. Ridiculing the other side for being stupid is, well, stupid. Think about the researchers again. Most of us wouldn’t think much of you if you decided the other researcher was stupid merely because she came to a different conclusion than you did.
So where am I going with this? Recall yesterday’s post, in which I argued that you were rational in deciding to support your party. In case you didn’t notice it on first reading, the argument applies equally well whichever party you support. But once you notice this, you have reason to hold your own view with more humility and with more caution (this has got to be the least-followed advice ever given). Being a Democrat doesn’t mean that you’re stupid. Being a Republican doesn’t mean that you’re stupid. But maybe thinking that someone is stupid because they belong to the other party is stupid.
Sydney
i like what you’ve been writing.
*sigh* You’re prompting me to think really hard, Sydney.