Friday, March 20, 2009

But What If It Goes Wrong?

A particularly common argument against doing something (usually something new, but not always) is to point out that it will lead to unacceptable or undesirable consequences, e.g.
But if we allow gays to marry, what's to stop polygamists from getting married?
Rebutting an argument of this form is tricky business, because there are a lot of ways in which it can be wrong. And sometimes, it isn't -- sometimes, it's okay to argue this way. (Check back next week for an exploration of the slippery slope fallacy.)

First of all, in order for an argument of this type to be effective, the undesirable consequences need to actually be undesirable. For example, it would be unreasonable for most of us to ask,
But if we make higher education available and affordable, what's to stop everyone from getting a university education?
This is because it's hard for us to accept that everyone getting a university education is something that we want to avoid or preclude. So our first defense against this sort of argument is to ask for a demonstration of why that consequence is unacceptable. (Personally, I'm not yet convinced that there's any reason to stop the polygamists from being able to marry.)

Second, in order for an argument of this type to be effective, the causal relationship between the antecedent (the "if" clause) and the consequent (the "then" clause) must be plausible. For example, it would be unreasonable for most of us to ask,
But if we make higher education available and affordable, what's to stop farmers from growing too many yams?
Even if we accept that the potential outcome is unacceptable (pretend yams killed your father or something), the argument fails because there's no obvious link between availability of education and yam-growing. So our second defense against this sort of argument is to ask for a demonstration of the causal connection between the antecedent and the consequent.

Third, in order for an argument of this type to be effective, the causal link must not admit of exceptions. For example, it would be unreasonable for most of us to ask,
But if we raise teachers' salaries, what's to stop logicians from demanding/expecting a raise?
Even if we accept that logicians demanding a raise is unacceptable (they perhaps exhibit an excessive fondness for yams), and we accept that an increase in teachers' salaries should lead to an increase in logicians' salaries, the argument might still fail because a logician's work is substantively different from a teacher's, meaning that the seemingly legitimate causal link doesn't actually hold in this case. So our third defense against this sort of argument is to challenge the validity of what appears to be a legitimate causal link.
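
In other words, the three defenses amount to a checklist. Purely as an illustrative sketch (the function name, parameters, and return strings here are invented for the occasion, not anything from formal logic), it might look like this in Python:

def rebut_bad_consequences(consequence_is_bad, link_is_plausible, link_applies_here):
    """Return the first available defense against a 'but what if it goes wrong?'
    argument, or None if the argument survives all three checks."""
    if not consequence_is_bad:
        return "Ask why that consequence is supposed to be unacceptable."
    if not link_is_plausible:
        return "Ask for a demonstration of the causal connection."
    if not link_applies_here:
        return "Challenge whether the seemingly legitimate causal link applies to this case."
    return None  # no defense left on this list; argue particulars and probabilities instead

# The yam example: the outcome is (let's say) bad, but it isn't linked to the antecedent.
rebut_bad_consequences(consequence_is_bad=True, link_is_plausible=False, link_applies_here=False)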

If all of these defenses have been exhausted to no avail, we're left with arguments about the particulars of the situation in question, and about the probabilities of various consequences. For example, if someone were to ask,
If we research genetic modification, what's to stop us from resurrecting eugenics?
The consequences do seem pretty unacceptable (to most of us, though not to me personally), and the causal link does seem pretty valid. What to do? The best defense available at this point is something like,
Well, yes, it could turn out that way, but this is why it won't (or probably won't)...
Ultimately, it usually boils down to agreeing on the chances of the unacceptable thing happening. If the unacceptable thing is 10% likely to happen, you can probably go ahead. If it's 90% likely to happen, you probably shouldn't. Of course, it also depends on the magnitude of the Bad Thing™ -- if letting gays marry is 10% likely to cause polygamists to be able to marry, that's probably acceptable; if letting gays marry is 10% likely to cause G-d to flatten the Earth with giant fiery fists, that's probably not acceptable. 

Once we've agreed on the chances of Bad Things™, it's just a matter of deciding whether or not we're willing to accept that risk. Tragically for those of us who love rational rigor, this decision is one that's personal and fairly arbitrary, and one that depends a lot on each person's ethical system (a Utilitarian would accept much higher chances of Bad Things™ than would a Kantian, for example).
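
To put some toy numbers on that (and they are just numbers I'm making up for illustration -- the "magnitude" scale and the tolerance threshold aren't anything rigorous), you can think of it as comparing probability times magnitude against however much risk your particular ethical system lets you stomach:

def acceptable_risk(chance_of_bad_thing, magnitude_of_bad_thing, personal_tolerance):
    """Crude expected-harm check: probability times magnitude, measured against
    how much risk a given person is willing to tolerate."""
    return chance_of_bad_thing * magnitude_of_bad_thing <= personal_tolerance

# A 10% chance that polygamists also get to marry (a minor Bad Thing, say magnitude 1):
acceptable_risk(0.10, 1, personal_tolerance=0.5)      # True -- probably go ahead
# A 10% chance that the Earth gets flattened by giant fiery fists (magnitude 1000):
acceptable_risk(0.10, 1000, personal_tolerance=0.5)   # False -- probably don't

A Utilitarian and a Kantian would plug in very different values for personal_tolerance, which is exactly where the rational rigor runs out.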

At the end of the day, we assume all sorts of risks in our lives, both in the long term and in the short term. The argument that "We shouldn't do X because it might have bad consequences" is generally born of a desire to not take any risks at all. It's often a good idea to "play it safe" -- but not always.
