Read This First – My Beliefs

First published 5 July 2017. Last updated 7 July 2017.

This post clarifies some of my most important philosophical beliefs, which I take for granted in much of my other writing.

I am a consequentialist and a utilitarian. I’ve written about my flavor of utilitarianism here. In brief, that flavor is {hedonic, non-negative, total, act} as opposed to, for example, {preference, negative, average, rule}. In briefer, I think Bentham pretty much had it right. Utilitarianism beats other ethical systems because of its simplicity, in virtue of which it’s sort of correct by definition, so I try to keep it that way. Most of the complexity is in the application, not the theory, and many post-Bentham strains of utilitarianism confuse application with theory. This simple view doesn’t mean that utilitarianism is trivial; it just means that it doesn’t do what people usually expect ethical systems to do. It tells us to do the calculus (to add up all the good and bad impacts to get the net impact) for each possible action, and to take the action with the best net impact.
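To make “do the calculus” concrete, here is a minimal sketch in Python. Everything in it is my own hypothetical placeholder (the action names, the affected beings, the utility numbers); the only point is that act utilitarianism reduces to summing impacts per action and taking the action with the largest sum.

```python
# A toy illustration of {hedonic, total, act}-utilitarian calculus.
# All action names, beings, and numbers are invented placeholders.

# Each candidate action maps to the change in well-being it would cause
# for each affected being (positive = good, negative = bad).
actions = {
    "donate": {"me": -1.0, "distant_stranger": +50.0},
    "buy_gadget": {"me": +2.0},
    "do_nothing": {},
}

def net_impact(impacts):
    """Total view: sum every impact, with no discount for distance or omission."""
    return sum(impacts.values())

# Act utilitarianism: take the action with the best net impact.
best_action = max(actions, key=lambda a: net_impact(actions[a]))
print(best_action)  # -> donate
```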

What does this mean for how we ought to actually live on a daily basis? My thinking on this question borrows several points from Peter Singer’s Famine, Affluence, and Morality:

  • There’s no such thing as supererogation.
  • There’s no intrinsic ethical difference between, for example, doing a bad thing and failing to do an equally good thing.
  • The proximity of an event to you does not intrinsically affect the event’s ethical importance.

These observations leave us with an exceptionally demanding notion of what we’re ethically obligated to do. That’s something I’m still wrestling with personally, but its personal difficulty does not impugn its philosophical correctness.

Deciding how to actually execute utilitarian calculus is often a question for philosophy of mind, not for utilitarianism. Utilitarianism did its entire job by telling us to do the calculus in the first place, rather than by giving us a sketchy list of contrived rules to follow. Philosophy of mind is relevant here because utilitarianism only cares about actions which affect the well-being (utility) of “conscious” beings (whatever “conscious” means!). So it’s up to philosophy of mind (and cognitive science, etc.) to tell us which beings we should care about ethically, and to what degree, and how those beings are impacted by certain actions. Of course, forecasting how an action will affect beings is facilitated by having a generally accurate world model. To that end, I am a naturalist.
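Continuing the toy sketch above: philosophy of mind would supply the moral weights (which beings count, and how much), and a world model would supply the forecasts. Every weight, probability, and impact below is an invented placeholder, not a claim about actual moral weights.

```python
# Extending the toy calculus: weight each being's impact by a (hypothetical)
# moral weight from philosophy of mind, and by a forecast probability from
# our world model. All numbers here are invented placeholders.

moral_weight = {"human": 1.0, "pig": 0.5, "insect": 0.01}

# For each action: per-kind lists of (probability, impact) forecasts.
forecasts = {
    "eat_meat":   {"human": [(1.0, +1.0)], "pig": [(1.0, -10.0)]},
    "eat_plants": {"human": [(1.0, +0.8)], "insect": [(0.5, -0.1)]},
}

def expected_net_impact(forecast):
    return sum(
        moral_weight[kind] * p * impact
        for kind, outcomes in forecast.items()
        for p, impact in outcomes
    )

best = max(forecasts, key=lambda a: expected_net_impact(forecasts[a]))
print(best)  # -> eat_plants
```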

My beliefs in the philosophy of mind are messier than my beliefs in ethics. I guess I’m basically a computationalist, and I believe pretty strongly in substrate independence. I’m not sure how we should treat fuzzy or nested minds. As long as I find a theory convincing, I try to take its implications seriously, rather than revising the theory to get more comfortable results. That leaves me taking seriously things such as wild animal suffering (including that of insects and other simple minds), Boltzmann brains, and panpsychism.
