A pervasive matter of debate in the field of philosophy is what constitutes prudential value; that is to say, what things in life are good in themselves (non-instrumentally good)? Let's look at something like eating your vegetables. Most people wouldn't say that eating vegetables is good in itself, but rather that it's good because it serves another purpose, or is instrumentally good. In this case, eating your vegetables is good because it increases your overall health. Is being healthy non-instrumentally good? Although you could argue that being healthy is in and of itself a good thing, it again would seem that a more reasonable explanation is that being healthy is good in that it causes some other effect. For example, being healthy could be said to increase your overall happiness. We can then ask another question: is happiness non-instrumentally good? It would seem to be the viewpoint of many people that being happy is good in and of itself (and, conversely, that pain is bad in and of itself). After all, I can't imagine that most people want to become happy to further some other end. Moreover, if we were to compare two people, it would not be unreasonable to say that, all else being equal, the one who had experienced more happiness and/or less pain in life was better off than the other (indeed, this might seem intuitive).

This viewpoint is one theory of prudential value, known as hedonism, which has gone on to be the foundation for a number of influential philosophical works, including (perhaps most famously) John Stuart Mill's Utilitarianism. Although hedonism is a subject well worth covering, I won't be doing so today. Instead, I'll be presenting a theory of prudential value known as "Desire Fulfillment Theory", or "DFT". DFT's take on why health is good for you is that you have a desire to be healthy, which being healthy fulfills (rather self-explanatory, isn't it?). A more formalized definition of DFT might be as follows:
Something is good for someone if and only if (and because) it fulfills their desires.

Again, we can provide an example of how this might work: Al and Bob are identical brothers who have led identical lives. Both of them want a candy-cane. Al is given one, fulfilling his candy-cane desire, whereas Bob is not. DFT would then say that Al is (however slightly) better off than Bob. Make sense so far?
Something is good for someone if and only if (and because) it fulfills the desires they would have in some idealized condition C.

Call this "Idealized Desire Fulfillment Theory", or "IDFT". So, if we again take the case of the addict, in an "idealized condition" they would not be addicted, would be well informed of all the pros and cons of abusing whatever substance, and would (presumably) not have a desire to take that substance (or even a desire to not take that substance). Hooray! Are we done?
Something is good for someone if and only if (and because) it fulfills a self-regarding desire that they would have in some idealized condition C.

Call this "Self-Regarding Idealized Desire Fulfillment Theory", or "SRIDFT". Obviously, this solves the problem of Charlie but, as you might have guessed, raises some questions. What counts as self-regarding? Let's say that Charlie hopes that his mother has a good day. Will the fulfillment of this desire be sufficiently self-regarding that Charlie could benefit from it? If so, how far out from Charlie can we go before his desire no longer counts? His distant uncle? A cousin of a cousin? On the other hand, should we say that the only desires that could benefit Charlie are those which directly concern Charlie himself, it would seem that the fulfillment of his desire that his own son lead a good life would make Charlie no better off, which would seem to be a clear violation of our intuitions. This same question applies to the "idealized condition C" under which we've been operating since the move to IDFT: what exactly is the condition? Is it tantamount to omniscience? If our desires are supposed to involve some level of abstraction from our actual lives (e.g. so that we don't make decisions based on addictions), how far removed are they supposed to be? John Rawls offers an amusing reductio ad absurdum in A Theory of Justice, quoted here as re-stated by Roger Crisp in the Stanford Encyclopedia of Philosophy:
"Imagine a brilliant Harvard mathematician, fully informed about the options available to her, who develops an overriding desire to count the blades of grass on the lawns of Harvard. […] This case is another example of philosophical ‘bedrock’. Some will believe that, if she really is informed, and not suffering from some neurosis, then the life of grass-counting will be the best for her."Overall, the problem seems to be that, with the addition of each modification of DFT, we are moving away from actual human desires and, in some sense, rejecting their validity. Instead, it would seem that SRIDFT's theory of prudential value is something along the lines of "It's good for you to get what you want, and what you want should be what's good for you." This statement not only fails to really provide a framework for what has prudential value and what does not, but it's almost tautological! So is this it, then? Should we abandon DFT entirely?
According to second-order desire fulfillment theories (2DFT), A is bad for me if and only if I desire that A not occur, and my desire that A not occur is either endorsed (the stronger variant) or not disendorsed (the weaker variant). A desire is "endorsed" if and only if I have a desire to have that desire. A desire is "disendorsed" if and only if I have a desire not to have that desire.

The positive counterpart of this might be "the fulfillment of a desire is good for one if and only if the desire is either endorsed or not disendorsed." Ergo, if I want to go skydiving and either A) want to want to go skydiving or B) don't have a desire not to want to go skydiving, then skydiving would be a good thing for me. But say I were an addict: although I most certainly want whatever I'm addicted to, I don't want to want it. I want not to want it. In this way 2DFT brings out a more "idealized" version of my desires without having to resort to IDFT. We can also look at the case of Charlie yet again: he may have a passing desire for the old man he saw to have a good day, but does he truly want to want him to have a good day? Is his desire strong enough to explicitly provoke this second-order desire, as the stronger variant of 2DFT would require? It would seem unlikely. Thus, we can also avoid the pitfalls of SRIDFT. Most importantly, we have stopped in their tracks any objections about vague hypotheticals and defining the good in terms of the good: second-order desires are real, human things that have been well explored in the field of psychology. (Readers who find second-order desires intriguing may want to investigate the works of Harry Frankfurt and Richard Holton, especially Holton's excellent work on akrasia, a.k.a. weakness of will.)