Tuesday, May 5, 2009

Fallacy of the Week #3

First, perfunctory apologies for the recent hiatus. Now.

The fallacy I'd like to begin with today is called argumentum ad hominem. It is a type of non sequitur, which is an exceedingly broad category of error and which, therefore, comprises many subtypes. You're committing a non sequitur fallacy whenever you make irrelevant or unsubstantiated claims, or when you make excessively large leaps of inference. A few examples of non sequitur errors:
(1) Jack: I think we should nationalize the banks.
     Jim:  The moon may or may not be made of cheese.

(2) If you like yams, you'll probably hate Beethoven.
Note that even if the claim you're making is true (it is tautologically true to claim that "Object X has or does not have Trait Y"), it can still be irrelevant.

So, back to the ad hominem. Over time, two different kinds of bad arguments have been called ad hominems:
(1) In the past, an ad hominem argument was one that appealed to the audience's emotions -- their sympathy, pride, compassion, greed, etc. These days, this fallacy is unimaginatively called appeal to emotion.
(2) More recently, an ad hominem argument is one that challenges a source's character and uses that as justification for rejecting that source's claims (i.e., "Jill likes yams, so we shouldn't listen to her opinions on politics").
As is usually the case with such common "errors," they're so common because they're not always errors. It is sometimes quite reasonable to argue in a way very similar to this. Consider the following:
(1) Jill likes yams, so we shouldn't listen to her opinions on politics.
(2) Jill likes yams, so we shouldn't listen to her opinions on what to have for dinner.
Now, (1) is clearly an example of a non sequitur -- there's no reasonable connection between having terrible taste and being qualified to make political decisions. But (2) is not a non sequitur -- there is a reasonable connection between having terrible taste and being qualified to make culinary decisions. Generally speaking, if you attempt either (1) or (2), you will probably be called out for using an ad hominem. If what you tried was (1), then you should probably just hang your head in shame and fall upon your sword. But if what you tried was (2), then you have a defense.

You see, there's a substantive difference between impeaching someone's character and impeaching their qualifications. Suppose, for example, that I had written and published a treatise on quantum mechanics, or Sumerian cuneiform, or whatever. You would be wrong to respond,
Peter's an asshole, so this is all bunk. (I'll even bet he likes yams.)
This is because whether or not I'm an asshole (and whether or not I have terrible taste) has no bearing on my hypothetical explanation of quantum mechanics or cuneiform. However, you would be perfectly justified in responding,
Peter doesn't know anything about quantum mechanics or Sumerian cuneiform, so this is all bunk.
This is because my knowledge of a subject does have bearing on my attempts to hold discourse on that subject.

All of this is pretty straightforward. The ambiguity really arises because we so rarely bother to pay attention to what we're actually talking about, and even when we are paying attention, it's not always easy to know. In academic discourse, we usually know whether we're talking about literary analysis or class struggle or the potential effects of nanotechnology on medicine, because we usually enter that discourse with intent and awareness. But in "real life," in a world where political and religious ideologies mingle with social norms, economic conditions, Hollywood, the manufacture of needs, and so on... well, it really is legitimately difficult to figure out where and why your opponent stands on the issues, and what the issues are in the first place.

It gets a little more complicated because there are definitely times when we want to feel justified in impeaching someone's character -- as a rule, we're not even going to bother listening to someone's positions on education reform if we know (or believe) him to be a murderer, a rapist, a child molester, etc. Formally, we are wrong in doing so, unless the topic of discourse is relevant to the prior offense. But we don't really care -- in extreme cases, we generally reserve the freedom to reject someone's arguments on the basis of their character.

It gets a little more complicated still because it's unclear what constitutes an extreme case. Schoolteachers? Probably not. Cannibals? Probably. Drug traffickers? Probably. Politicians? Probably not. Gays? Apparently so, for a lot of people.

And for an extra dose of complexity: sometimes, the very things we use to impeach someone's credibility/qualification on a subject -- to exclude them from a given discourse -- are the things that make them participate in that discourse. For example, a claim like
Jill is gay, so we should ignore her opinions on legalizing gay marriage [on the grounds of her obvious bias]
is admissible given the guidelines we've established so far. Her gayness does have bearing on her position on gay marriage. But is this really grounds to dismiss those opinions?

So the moral of the story is this: pay attention. Pay attention to who says what and why. Pay attention to the accusation -- both to its veracity ("Does Tim like yams?") and its defensibility ("Is liking yams really such a failing?" [... yes, yes it is]). Pay attention to whether or not you're dealing with an extreme case, as well as to how people may differ over what constitutes such a case.

And just remember... if after all this, someone continues to disagree with you, it's probably because they're stupid.

Wednesday, April 8, 2009

Random Rants #1

So here's the short of it: on one of the forums I post on, someone broke the Law of the Internet by taking something from this forum and posting it verbatim on a different, related one. It was a personal thing and there was all sorts of drama, and we're all very upset, etc., etc. This is all completely irrelevant. I bring it up only as background for the discussion that followed, with lots of people lamenting the betrayal of their trust, expressing worries about continuing to post on the (ostensibly private) forum, and so on. I don't mean to trivialize these people's reactions of hurt and worry, though I myself don't share them.

Anyway, the discussion turned to "What can we do to prevent this from happening again?" I suggested that we stop feeling compelled to keep secrets, as this would prevent the breaching of trust from ever becoming a problem. Everyone laughed.

Part 1 of my rant
Why is it that even when we're looking for solutions to a problem, we're unwilling to look at the greater context that makes the problem a problem in the first place? Imagine someone saying, "I'm building a house, but I don't have enough nails. Where can I find more?" Would it be reasonable for this person to scoff at a reply like, "I don't have any nails, but why don't you just build the house without any? Here, I'll show you how"?

Or, more generally: if I'm playing by rules A, B, and C, and I'm worried about someone breaking rule C... as long as rule C isn't absolutely intrinsic to the game, wouldn't it make more sense to just abandon rule C yourself, rather than creating a rule D that says, in effect, "follow rule C"? (Incidentally, this is related to a number of problems regarding logic, and is very well-illustrated in Lewis Carroll's What the Tortoise Said to Achilles.)

And if we insist on continuing to play by the old rules, doesn't it behoove us to accept the risks and vulnerabilities that playing by those rules entails?

Part 2 of my rant
One of the responses to my suggestion was something like, "WTF at Peter turning this into a philosophical discussion." I'm not sure what exactly constitutes a philosophical discussion -- I'm inclined to think that a philosophical discussion is characterized by its style and tenor rather than by its content. 

Anyway, this is something I've been seeing a lot lately -- when we don't want to deal with an idea, we just say "Oh, that's so philosophical" and write it off, as if that made it somehow inapplicable to the world we live in.

And maybe it's true that sometimes, we philosophers contribute tidbits that are both true and irrelevant. But asking "What should we do?" and then rejecting a "This is what you should do" response on the grounds that it's just "philosophical"... well. Honestly, I'm not even sure where to begin with this one. Suffice it to say that you shouldn't do it.

Sunday, March 29, 2009

You Can Think It, Just Don't Say It

This has been a pet peeve of mine for a long time, and an online friend of mine just reminded me of it. She was complaining about her roommate, who is a fairly conservative Christian and has the gall to outwardly disapprove of behavior of which her religion compels her to disapprove inwardly. A discussion ensued in which everybody recounted cases of people being "judgmental" as a result of their moral codes (which were generally the results of religious beliefs). The overall tenor of the conversation was that it's okay to hold a position as long as you don't criticize others for not holding that position. This is all very politically correct.

It's also preposterous, for at least two reasons.

First, when we do something, we are also asserting that it is the right thing to do -- if we believed that something else was the right thing to do, we would do that thing instead. (This is a slight extension of Socrates' critique of akrasia -- acting against our own better judgment -- which Socrates asserts we are unable to engage in.) In a certain sense, whenever there are two possible courses of action and I select one while someone else selects the other, I am asserting that the other person is in error. (Obviously, this applies only to actions performed with intent. When we trip and fall down by accident, we are not also asserting that others should trip and fall down.)

Now, there's a difference between describing a state ("Yams are starchy tubers") and describing an action ("I destroy yams") because actions carry the implied assertion that acting otherwise is unreasonable or immoral.

I believe it would be an error to regard statements like "I am a Christian" as descriptions of states. There is a fundamental difference between statements like "I am a Christian" or "I am a Democrat" and statements like "I am a man" or "I am a Wisconsinite." When we belong to a group defined by its ideology, we do so with intent. This is why "I am a Zoroastrian" implies that it is wrong to be other than a Zoroastrian.

Now, the position I'm objecting to here is that people are free to have their beliefs as long as they don't criticize others for not holding those same beliefs. The first part of my objection is that we are already criticizing others, simply by virtue of disagreeing with them. But one is not compelled to accept the impossibility of akrasia as a premise, as I have done, or to accept that the implicit criticism is as significant as explicit criticism (stating aloud that "If you have sex with your boyfriend, you'll go to hell," etc.) as I have also done.

So, secondly, it's important to remember that what we're talking about here are beliefs about how one ought to act. But of course, we don't act in a vacuum -- we act in response to or as a result of states of affairs ("Yams are gross, so I will destroy them"), other actions ("Yams killed my father, so I will destroy them"), and the beliefs that we hold ("Yams are evil, so I will destroy them"). And, of course, the beliefs that we hold aren't just about objects in the world, but also about actions ("It is immoral to beat your kids").

It turns out that we hold lots of beliefs about how one ought to act as a result of our beliefs ("We must bear witness to the greatness of our Lord and Savior").

This is why it is impossible to suggest that a person should A) hold their beliefs, and B) be silent about their beliefs -- because some or many of their beliefs involve the call to speech. Personally, I get on people's nerves a lot because I just keep arguing -- but I am driven by the belief that to do otherwise, to allow someone to continue to be wrong, would be immoral on my part. Similarly, the adherents of many religions are required to proselytize or outwardly express their positions.

And here's some food for thought: the belief that "You can believe what you want, just don't push it on others" is an example of a belief about how one should act as a result of one's beliefs. It happens to be a very popular one, these days (that is, it's one that we love to push on people).

Telling people who disagree with you that they can think what they want as long as they're quiet about it is, at worst, profoundly hypocritical. At best, it is a stumbling into the Cretan paradox or pseudomenon: an assertion that undermines its own validity.

Friday, March 27, 2009

Fallacy of the Week #2

Whenever you turn on the TV or pick up a newspaper, you're likely to see someone shouting about one thing leading to another, leading to another, leading to something horrible. Whenever you see this, you're seeing an example of the slippery slope fallacy. The curious thing about this fallacy is that while it isn't a valid form of argument, it nevertheless doesn't invalidate the conclusions. As we discussed last week, the following is more or less a true statement (and is, regardless, a reasonable thing to say):
This mixture of cyanide and arsenic is poisonous, because cyanide is poisonous and arsenic is poisonous.
We can restate this as a conditional, just to make things a little easier later:
If cyanide is poisonous and arsenic is poisonous, then the mixture of cyanide and arsenic is poisonous.
But that's not quite right. In our daily discourse, we generally don't bother to distinguish between the claims we make and the meanings they express (or the conclusions they bring us to). As a general rule, our arguments consist of three types of elements:
  1. logical connectors (of which conditionals are one subtype)
  2. premises (information that we're arguing from, and which we assume to be true)
  3. conclusions
In order for an argument to be effective, all three must "work." Typically, when we go to the trouble of making a claim like this:
It's a good idea to study a lot, because knowledge is power,
the structure of our argument is something like this:
  1. if you study, then you will gain knowledge
  2. power is desirable; knowledge really is power
  3. it's a good idea to study a lot
Or, more generally:
  1. If A, then B
  2. A
  3. Therefore B
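This general form is the classical modus ponens, and its validity can be checked mechanically: enumerate every truth assignment and confirm there is no case where both premises hold while the conclusion fails. Here is a minimal sketch in Python (the function and variable names are my own, chosen for illustration):

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false
    return (not p) or q

# Modus ponens is valid iff there is no assignment where both premises
# ("If A then B" and "A") are true while the conclusion ("B") is false.
counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if implies(a, b) and a and not b
]
print(counterexamples)  # [] -- no counterexamples; the form is valid
```

The empty list is what "valid" means here: the form can never carry you from true premises to a false conclusion.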
To illustrate this, let's consider an example where the conditional holds but the premise and conclusion are false:
If I were a teapot, I could be used to boil water.
I am (fortunately) not a teapot and I (tragically) cannot be used to boil water, but the conditional nevertheless holds. On the other hand, we can come up with statements where the premise and conclusion are true but the conditional fails:
If the Earth is roughly spherical, its core must be very hot.
While the Earth is roughly spherical, and while its core is very hot, the conditional itself fails. This obviously doesn't mean that the Earth's core isn't hot, but it does mean that we cannot conclude from the Earth's roundness that its core is hot. 

We can come up with lots of other examples of arguments that fail because one or more of those three elements fails. The point, though, is to really get a feel for the difference between the assertion that "If A, then B" and the conclusion that "B is the case."

To get back to slippery slopes, we can reconstruct such an argument more or less like this:
  1. If A, then B
  2. If B, then C
  3. If C, then D
  4. A
  5. Therefore D
So why is this problematic? Well, it's really not, assuming that all of those conditionals work. The problem comes in because each of those added steps might fail, causing the whole chain to fail. That is, increasingly complex systems face increasing margins of doubt. Because most of the reasoning we do is inductive (arguing from particular cases to general principles) and therefore not truth-preserving (we are not guaranteed true conclusions, even with true premises and working conditionals), each additional step increases the chances of failure.
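That last point -- each added link compounds the doubt -- can be put in rough numbers. Assuming, purely for illustration, that each conditional in the chain independently holds with probability 0.9 (real arguments rarely have independent, quantifiable steps), the chance that the whole chain holds is the product of the steps:

```python
# Reliability of an n-step chain of conditionals, assuming each step
# independently holds with probability p. The numbers are invented
# for illustration, not measured.
def chain_reliability(p: float, n: int) -> float:
    return p ** n

for n in (1, 2, 4, 8):
    print(f"{n} steps: {chain_reliability(0.9, n):.3f}")
```

Even with individually plausible steps (90% each), an eight-step chain holds less than half the time -- which is why the long version of the stock-market argument is so much weaker than the short one, even when every link looks reasonable on its own.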

This is why it's generally reasonable to make claims like,
If we all stop buying stocks, the stock market will suffer and stock prices will drop,
but not claims like,
If we all stop buying stocks, the stock market will suffer and stock prices will drop, and the government will privatize banks, and we'll start wars across the globe to stimulate industry, and everyone will hate us, and then we'll all nuke each other and the human race will die out,
even if
we can accept that every step along that chain is a pretty reasonable one.

Friday, March 20, 2009

But What If It Goes Wrong?

A particularly common argument against doing something (usually something new, but not always) is to point out that it will lead to unacceptable or undesirable consequences, e.g.
But if we allow gays to marry, what's to stop polygamists from getting married?
Rebutting an argument of this form is tricky business, because there are a lot of ways in which it can be wrong. And sometimes, it isn't -- sometimes, it's okay to argue this way. (Check back next week for an exploration of the slippery slope fallacy.)

First of all, in order for an argument of this type to be effective, the undesirable consequences need to actually be undesirable. For example, it would be unreasonable for most of us to ask,
But if we make higher education available and affordable, what's to stop everyone from getting a university education?
This is because it's hard for us to accept that everyone getting a university education is something that we want to avoid or preclude. So our first defense against this sort of argument is to ask for a demonstration of why that consequence is unacceptable. (Personally, I'm not yet convinced that there's any reason to stop the polygamists from being able to marry.)

Second, in order for an argument of this type to be effective, the causal relationship between the antecedent (the "if" clause) and the consequent (the "then" clause) must be plausible. For example, it would be unreasonable for most of us to ask,
But if we make higher education available and affordable, what's to stop farmers from growing too many yams?
Even if we accept that the potential outcome is unacceptable (pretend yams killed your father or something), the argument fails because there's no obvious link between availability of education and yam-growing. So our second defense against this sort of argument is to ask for a demonstration of the causal connection between the antecedent and the consequent.

Third, in order for an argument of this type to be effective, the causal link must not admit of exceptions. For example, it would be unreasonable for most of us to ask,
But if we raise teachers' salaries, what's to stop logicians from demanding/expecting a raise?
Even if we accept that logicians demanding a raise is unacceptable (they perhaps exhibit an excessive fondness for yams), and we accept that an increase in teachers' salaries should lead to an increase in logicians' salaries, the argument might still fail because a logician's work is substantively different from a teacher's, so the causal link doesn't actually apply. So our third defense against this sort of argument is to challenge the validity of what appears to be a legitimate causal link.

If all of these defenses have been exhausted to no avail, we're left with arguments about the particulars of the situation in question, and about the probabilities of various consequences. For example, if someone were to ask,
If we research genetic modification, what's to stop us from resurrecting eugenics?
The consequences do seem pretty unacceptable (to most of us, though not to me personally), and the causal link does seem pretty valid. What to do? The best defense available at this point is something like,
Well, yes, it could turn out that way, but this is why it won't (or probably won't)...
Ultimately, it usually boils down to agreeing on the chances of the unacceptable thing happening. If the unacceptable thing is 10% likely to happen, you can probably go ahead. If it's 90% likely to happen, you probably shouldn't. Of course, it also depends on the magnitude of the Bad Thing™ -- if letting gays marry is 10% likely to cause polygamists to be able to marry, that's probably acceptable; if letting gays marry is 10% likely to cause G-d to flatten the Earth with giant fiery fists, that's probably not acceptable. 

Once you've agreed on the chances of Bad Things™, it's just a matter of deciding whether or not we're willing to accept that risk. Tragically for those of us who love rational rigor, this decision is one that's personal and fairly arbitrary, and one that depends a lot on each person's ethical system (a Utilitarian would accept much higher chances of Bad Things™ than would a Kantian, for example).
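The weighing described above -- chance of the Bad Thing™ times how bad it would be -- is, in effect, an expected-cost calculation. A hedged sketch with invented numbers (nothing here measures real probabilities or harms):

```python
# Expected cost = probability of the bad outcome * magnitude of the harm.
# All figures are invented for illustration.
def expected_cost(probability: float, magnitude: float) -> float:
    return probability * magnitude

# Same 10% chance, wildly different magnitudes:
minor = expected_cost(0.10, 5.0)         # a minor harm
catastrophic = expected_cost(0.10, 1e6)  # a catastrophic harm
print(minor, catastrophic)
```

Note that the formula only makes the trade-off explicit; whether either figure counts as "acceptable" is exactly the personal, ethical-system-dependent threshold the paragraph above describes, and no calculation supplies it.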

At the end of the day, we assume all sorts of risks in our lives, both in the long term and in the short term. The argument that "We shouldn't do X because it might have bad consequences" is generally born of a desire to not take any risks at all. It's often a good idea to "play it safe" -- but not always.

Monday, March 16, 2009

Fallacy of the Week #1

To follow up from the previous post concerning the value of "natural"ness, I'd like to talk a little bit about the genetic fallacy.

Put simply, you're committing a genetic fallacy when your argument is that a thing is good now because it was good before or had its origins in something good. An example might be a claim like
I know that Jack is a good person because I know his parents, and they're good people.
Of course, we can use predicates other than "good" and still be committing a genetic fallacy, e.g:
NaCl (table salt) is poisonous because Na is poisonous and Cl is poisonous.
It's important to keep in mind that this is a claim about knowledge (how I know that something is the case) and not one about causes (why something is the case). The examples provided in the Wikipedia article linked to above are pretty clear cut, but many instances of this fallacy in our everyday lives are more ambiguous. 

It feels silly and unnatural to us to disclaim our statements as being about knowledge as opposed to causes, so that's something we have to keep an eye out for, not to mention the fact that we often make claims about causes and knowledge simultaneously, as if they were the same thing. We ought to be especially vigilant because it is often the case, as with our friend Jack above, that the claim about causes has merit (it is not unreasonable to claim that being raised by good people causes you to be a good person) while the claim about knowledge does not (it is unreasonable to claim that we have knowledge of a person's goodness based on knowledge of his/her parents' goodness, unless we've already accepted the validity of the causal claim).

In the claim that a thing is bad because it is "unnatural," a reverse genetic fallacy is being committed:
X is bad because it does not have its origins in Y, and Y is/was good.
X is good because it does not have its origins in Y, and Y is/was bad.
A very obvious example of why this is usually a bad plan:
Murder is good because it does not have its origins in Nazism, and Nazism was bad.
And a slightly less obvious example:
Homosexuality is bad because it does not have its origins in nature, and nature is good.
Bear in mind that because genetic fallacies are informal (that is, their form sends up a red flag to reevaluate their merits, but does not automatically disqualify the conclusions), the conclusions they lead to are not always wrong:
This mixture of cyanide and arsenic is poisonous, because cyanide is poisonous and arsenic is poisonous.

Friday, March 13, 2009


One argument I hear all the time is that something is wrong, or that we should avoid something, or that we should regard something with scorn or scepticism, because it is "unnatural." The most common examples that come to mind are those of homosexuality, contraception, and genetic modification.

This sort of argument faces a whole host of problems.

First, it is unclear what constitutes "natural"ness. Is a thing natural simply because it occurs in the world without human intervention? Are only those things that occur in the world without human intervention natural?

But let's suppose that we've been provided with a satisfactory definition of "natural." Even so, it is a stretch to claim that "X is wrong because it is unnatural" because regardless of how we define naturalness, there are many, many things that are unnatural, but which we don't want to write off as being wrong. If our definition of "natural" is very strict, then such unnatural things include tools of any kind. If our definition is very loose, then such unnatural things might include science, government, culture, airplanes, hygiene, music, medicine and so on.

But let's suppose further that we've managed to find a definition of "natural" that admits of all of the nice things that we'd like to hang on to, like forks and schools and doctors and movies and string theory and flush toilets. Even still, the fact remains that our category of all things natural contains things that we do want to write off as being wrong. It's not much of a stretch to suggest that all seven of the "deadly sins," which most of us tend to agree are things to be avoided, are pretty natural behaviors -- if they weren't, we wouldn't need to prohibit them. As such, the category of "good" or "right" cannot be coextensive with the category of "natural."

Another related problem: the things about us as humans that we tend to think of as making us human -- things like mercy and forgiveness and law and order -- are arguably unnatural and are essentially understood as such. Mercy is the act of refraining from the natural impulse of wrath. Forgiveness is the act of refraining from the natural impulse of anger. Law and order are the result of willingly and unnaturally submitting to an outside authority, and of refraining from the natural impulses of selfishness and violence. 
This particular counterargument only works, of course, when you and the person/people with whom you're arguing share a comparable set of assumptions about things like mercy vs. wrath and humaneness vs. beastliness.

Now, historically, lots and lots of things have been bludgeoned with the "It's unnatural, so it's wrong" argument. As a rule of thumb, when people say that "X is wrong because it's unnatural," what they really mean to say is that "X is wrong because I'm unaccustomed to it." Which is a very weak reason for making a very strong claim. Everything from women's education/suffrage to interracial marriage to airplanes to James Joyce to rock music has weathered this particular storm.

So maybe the best rebuttal is just to say, "Check back in 20 years."