This post arises out of a comment on the previous entry.
The following is excerpted from William MacAskill's dissertation 'Normative Uncertainty'.
Susan and the Medicine - II
Susan is a doctor, who faces three sick individuals, Greg, Harold and Harry.
Greg is a human patient, whereas Harold and Harry are chimpanzees. They all suffer from the same condition.
She has a vial of a drug, D. If she administers all of drug D to Greg, he will be completely cured, and if she administers all of drug D to the chimpanzees, they will both be completely cured (health 100%). If she splits the drug between the three, then Greg will be almost completely cured (health 99%), and Harold and Harry will be partially cured (health 50%). She is unsure about the value of the welfare of non-human animals: she thinks it is equally likely that chimpanzees’ welfare has no moral value and that chimpanzees’ welfare has the same moral value as human welfare. And, let us suppose, there is no way that she can improve her epistemic state with respect to the relative value of humans and chimpanzees.
Using numbers to represent how good each outcome is: Susan is certain that completely curing Greg is of value 100 and that partially curing Greg is of value 99. If chimpanzee welfare is of moral value, then curing one of the chimpanzees is of value 100, and partially curing one of the chimpanzees is of value 50. Her three options are as follows:
A: Give all of the drug to Greg
B: Split the drug
C: Give all of the drug to Harold and Harry
Finally, suppose that, according to the true moral theory, chimpanzee welfare is of the same moral value as human welfare and that therefore, she should give all of the drug to Harold and Harry. What should she do? According to (some ethical theory) both A and C are appropriate options, but B is inappropriate. But that seems wrong. B seems like the appropriate option, because, in choosing either A or C, Susan is risking grave wrongdoing. B seems like the best hedge between the two theories in which she has credence. But if so, then any metanormative theory according to which what it’s appropriate to do is always what it’s maximally choice-worthy to do according to some theory in which one has credence (including the theories called MFT and MFO, and variants thereof) is false. Moreover, this case shows that one understanding of the central metanormative question that has been given in the literature is wrong. Jacob Ross seems to think that the central metanormative question is “what ethical theories are worthy of acceptance and what ethical theories should be rejected,” where Ross defines acceptance as follows: 'to accept a theory is to aim to choose whatever option this theory would recommend, or in other words, to aim to choose the option that one would regard as best on the assumption that this theory is true. For example, to accept utilitarianism is to aim to act in such a way as to produce as much total welfare as possible, to accept Kantianism is to aim to act only on maxims that one could will as universal laws, and to accept the Mosaic Code is to aim to perform only actions that conform to its Ten Commandments' (Ross 2006, 743).
The above case shows that this cannot be the right way of thinking about things. Option B is wrong according to all theories in which Susan has credence: she is certain that it’s wrong. The central metanormative question is therefore not about which first-order normative theory to accept: indeed, in cases like Susan’s there’s no moral theory that she should accept. Instead, it’s about which option it’s appropriate to choose.
What mistake is the author making here? He thinks people should maximize expected utility under uncertainty even if that uncertainty stretches to catastrophic consequences. This is not the case. What they should do, what portfolio managers do, indeed, what Evolution does, is 'minimize regret'.
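The contrast the author draws can be checked against the numbers of MacAskill's own case. A minimal sketch (the choice-worthiness values are the ones quoted above; 'T1' is the theory on which chimpanzee welfare has no value, 'T2' the theory on which it equals human welfare) computing both the expected-choice-worthiness ranking and the minimax-regret ranking:

```python
# Choice-worthiness of Susan's options under the two moral theories,
# using the values quoted from the dissertation. Credence 0.5 in each.
payoffs = {
    "A (all to Greg)":   {"T1": 100, "T2": 100},
    "B (split)":         {"T1": 99,  "T2": 99 + 50 + 50},
    "C (all to chimps)": {"T1": 0,   "T2": 100 + 100},
}
credence = {"T1": 0.5, "T2": 0.5}

# Expected choice-worthiness: credence-weighted average over theories.
expected = {o: sum(credence[t] * v for t, v in row.items())
            for o, row in payoffs.items()}

# Regret of an option under a theory: shortfall from the best option there.
best = {t: max(row[t] for row in payoffs.values()) for t in credence}
max_regret = {o: max(best[t] - row[t] for t in credence)
              for o, row in payoffs.items()}

print(expected)    # B has the highest expected choice-worthiness (149)
print(max_regret)  # B also has by far the smallest worst-case regret (1)
```

On these numbers the two criteria agree on B; what B fails to be is maximally choice-worthy according to any single theory, which is the point against 'accept your favourite theory' views.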
The author is aware of this possibility, but dismisses it in a footnote: 'One could say that, in Susan’s case, she should accept a theory that represents a hedge between the two theories in which she has credence. But why should she accept a theory that she knows to be false? This seems to be an unintuitive way of describing the situation, for no additional benefit.' The answer here is that the first order normative theory which fulfills 'regret minimization' is the one which maximizes her welfare given her preferences- be they altruistic or otherwise. It also has a lot of other neat properties- e.g. it can give rise to a Parrondo game- a combination of losing games which is winning- because MUWA (multiplicative weights update algorithm) regret minimization strategies are higher entropy- as well as more effectively guarding against catastrophic risk.
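A sketch of the multiplicative-weights idea being appealed to, in the standard 'experts' setting: each first-order theory is an expert, and its weight decays exponentially in the loss it incurs. The loss sequence and learning rate below are illustrative assumptions, not anything in the dissertation. Note the property relied on above: no theory's weight ever reaches zero, so diversity is preserved even as the mixture tilts toward the better-performing theory.

```python
import math

def mwua_weights(losses, eta=0.5):
    """Multiplicative weights update over 'experts' (first-order theories).

    losses: per-round lists of losses in [0, 1], one entry per expert.
    Returns the final normalized weight vector (the hedged mixture).
    """
    n = len(losses[0])
    w = [1.0] * n
    for round_losses in losses:
        # Each expert's weight shrinks exponentially in its incurred loss.
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, round_losses)]
    total = sum(w)
    return [wi / total for wi in w]

# Two rival theories: theory 0 errs once, theory 1 errs three times.
losses = [[0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
mix = mwua_weights(losses)
print(mix)  # weight shifts toward theory 0, yet theory 1 keeps positive weight
```

The standard guarantee for this rule is that cumulative regret against the best single expert grows only logarithmically in the number of experts, which is the sense in which it 'hedges' between theories rather than accepting one.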
A normative theory deals with things like guilt, remorse as well as the satisfaction of having done the right thing at high personal cost. Regret minimization is a desirable quality in a first order normative theory and, under the author's scheme, such a theory must always exist though it may not be known. Thus 'normative uncertainty' is a mere artifact. We have normative certainty about the regret minimizing first order normative theory- it represents the best we can do, all things considered- though we don't know its details. We may use some calculus- though not the one MacAskill prescribes, because it isn't Muth Rational- which uses other first order Normative Theories to arrive at an approximation to the true regret-minimizing theory, but this does not make it a second order theory. It is first order simply.
'Metanormativity' is a delusion. It is the sort of hysteresis effect that arises when a theory is not Muth Rational- i.e. when agents are constrained not to do what all would agree would be the best thing to do. It is not 'economic' because it is not ergodic.
In a future post, I hope to put flesh on the bare bones of the following intuition-
Regret minimization by means of the multiplicative weights update algorithm is Muth Rational because it preserves diversity. It can easily be incorporated into a first order theory such that 'overlapping consensus' prescriptivity is, so to speak, built in. There is absolutely no good reason why scarce resources should be diverted from doing good into studying false theories which mischievously claim that some people and organizations with expert knowledge who are doing good aren't as 'effective' as some hare-brained scheme invented by an ignorant academic without any expert knowledge.
Wednesday, 26 August 2015
Tuesday, 25 August 2015
Metanormativism is throwing up in the sink instead of doing the washing up.
This is from William MacAskill's Doctoral dissertation, titled 'Normative Uncertainty'. My comments are in bold.
'Normative uncertainty is a fact of life.
'Suppose that I have £20 to spend. With that money, I could eat out at a delightful Indian restaurant. Or I could pay for four long-lasting insecticide-treated bednets that would protect eight children against malaria. In comparing these two options, let us suppose that I know all the morally relevant facts about what that £20 could do. Even so, I still don’t know whether I’m obligated to donate that money or whether it’s permissible for me to pay for the meal out, because I just don’t know how strong my moral obligations to distant strangers are. So I don’t ultimately know what I ought to do.'
This is a good reason to hold that normative uncertainty can never be a fact of a truly ethical life but merely a fallacy that a self-publicist may strategically cultivate. Why? Well, a person whose life is truly ethical can never have disposable income for indulgence in a luxury while some people lack necessities. Thus, an ethical person never has £20 to spend on a 'delightful' Indian meal at a restaurant because she is already eating at a langar- or Community Soup kitchen- and handing over her entire earnings to those in need.
'For an example of normative uncertainty on a larger scale, suppose that the members of a government are making a decision about whether to tax carbon emissions. They know, let us suppose, all the relevant facts about what would happen as a result of the tax: it would make presently existing people worse off, as they would consume less oil and coal, and therefore be less economically productive; but it would slow the onset of climate change, thereby increasing the welfare of people living in the future. But the members of the government don’t know how to weigh the interest of future people against the interests of presently existing people. So, again, those in this government don't ultimately know what they ought to do.'
Members of a Government are not principals, thus their own normative preferences are irrelevant, they are agents simply. If they know 'all the relevant facts about what would happen as a result of a tax', their duty is to inform their principal- viz. the citizens on behalf of whom they exercise authority. It is up to the citizens to decide how to allocate resources between generations. Once again, normative uncertainty can't arise unless members of a Government are violating their duty to act as agents, not principals, and thus are not living an ethical life.
'In both of these cases, the uncertainty in question is not uncertainty about what will happen, but rather is fundamental normative uncertainty. Recently, some philosophers have suggested that there are norms that govern how one ought to act that take into account one’s fundamental normative uncertainty. I call this suggestion metanormativism. '
Actually, in both these cases, people who are not living an ethical life are simply pretending that the reason for this is because they haven't yet made up their mind as to what type of ethical life they ought to adopt. Thus 'metanormativism' isn't normative, it is pathological. Indeed MacAskill himself writes 'Metanormativism isn’t about normativity, in the way that meta-ethics is about ethics, or that a meta-language is about a language. Rather, ‘meta’ is used in the sense of ‘over’ or ‘beyond’: that is, in the sense used in the word ‘metacarpal’, where, the metacarpal bones in the hand are located beyond the carpal bones. Regarding metanormativism, there is a clear analogy with the debate about the subjective or objective ought in moral theory (that is, whether moral norms are evidence-relative or belief-relative in some way). However, using the term ‘normative subjectivism’ instead of ‘metanormativism’ would have had misleading associations with subjectivism in meta-ethics. So I went with ‘metanormativism’ – with the caveat that this shouldn’t be confused with the study of normativity'
If you do the cooking, it is normative that I do the washing up. Meta-normativity is like my claiming I'm actually doing 'meta-washing-up' by getting drunk and vomiting in the sink in which you piled up the dishes.
If you are intelligent, you will say to me 'Fuck off. Meta-normativity is meaningless cognitivist shite. I'm going to beat you till you sober up and clean that sink.'
However, if you are stupid- for example if you subscribe to computational cognitivism- then you are obliged to take my claim seriously. Following MacAskill's 'Maximal Expected Choiceworthiness' framework, you will be distressed to find that I am ethically superior to you because I have caused you to devote scarce resources to 'Philosophical research' which stupid people like you (i.e. computational cognitivists) consider a very good thing even though sensible people condemn it for 'crowding out' socially beneficial actions.
'There are two main motivations for metanormativism. The first is simply an appeal to intuitions about cases. Consider the following example:
Moral Dominance
'Jane is at dinner, and she can either choose foie gras, or the vegetarian risotto. Let’s suppose that, according to the true moral theory, both of these options are equally choice-worthy: animal welfare is not of moral value so there is no moral reason for choosing one meal over another, and Jane would find either meal equally tasty, and so she has no prudential reason for preferring one over the other. Let’s suppose that Jane has high credence in that view. But she also finds plausible the view that animal welfare is of moral value, according to which the risotto is the more choice-worthy option. In this situation, choosing the risotto over the foie gras is more choice-worthy according to some moral views in which she has credence, and less choice-worthy according to none. In the language of decision-theory, the risotto dominates the foie gras. So it seems very clear that, in some sense of ‘ought’, Jane ought to choose the risotto, and ought not to buy the foie gras. But, if so, then there must be a sense of ‘ought’ that takes into account Jane’s first-order normative uncertainty.'
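The dominance argument in the quoted passage is mechanical and can be checked directly. The numbers below are illustrative assumptions (the case itself gives none); all that the argument uses is the ordering:

```python
# States are the moral views in which Jane has credence; entries are
# choice-worthiness. Numbers are illustrative; only the ordering matters.
table = {
    "foie gras": {"animal welfare matters": 0,   "it does not": 100},
    "risotto":   {"animal welfare matters": 100, "it does not": 100},
}

def dominates(a, b):
    """a dominates b: at least as choice-worthy on every view in which
    Jane has credence, and strictly more choice-worthy on at least one."""
    views = table[a].keys()
    weakly = all(table[a][v] >= table[b][v] for v in views)
    strictly = any(table[a][v] > table[b][v] for v in views)
    return weakly and strictly

print(dominates("risotto", "foie gras"))  # True
print(dominates("foie gras", "risotto"))  # False
```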
Jane finds two options, which cost the same, equally good. Should she starve, like Buridan's ass, or should she make a choice based on an irrelevant alternative? Obviously, she should make a choice, finish her meal quickly, and get back to work. In this case, choosing the risotto represents compliance with a deontics that isn't 'true' because it includes supererogatory prohibitions of no ethical worth but which may have some signalling or strategic function.
There is no first order normative uncertainty here because we are told that an accessible 'true moral theory' obtains.
Decision theory is irrelevant. It doesn't matter what she eats. What matters is that she finish her meal quickly and get back to work.
'A second motivation for metanormativism is based on the idea of action-guidingness. There has been a debate concerning whether there is a sense of ‘ought’ that is relative to the decision-maker’s beliefs or credences (a ‘subjective’ sense of ought), in addition to a sense of ‘ought’ that is not relative to the decision-maker’s beliefs or credences (an ‘objective’ sense of ought). The principal argument for thinking that there must be a subjective sense of ‘ought’ is because the objective sense of ‘ought’ is not sufficiently action-guiding.'
Once again, we find that the claimed motivation for metanormativism arises from the refusal to grant that some actions have no ethical or deontic status- they are 'supererogatory'. This is a good thing if Knightian Uncertainty obtains because the more 'free' choices each agent can make, the faster and more thoroughly the fitness landscape can be investigated. Suppose Knightian Uncertainty is small whereas the risk of a catastrophe is known to be high- e.g. 90 per cent. In this case, it might make sense to require that subjectivity be conditioned to show a preference for 'metanormativity' iff
1) there is always a null option- i.e. a choice which has negligible effect
2) no scarce resources are used up as a result
In other words, metanormativism is not empty or pathological provided the people to whom it is touted can do no good but, at the margin, might do some harm. In this case, it makes sense to baffle them with bullshit.
However, there is a superior alternative. Tell them they are shite and they have a duty to resign from any responsible office or position of power or authority.
Consider the following case
Susan and the Medicine - I
Susan is a doctor, who has a sick patient, Greg. Susan is unsure whether Greg has condition X or condition Y: she thinks each possibility is equally likely. And it is impossible for her to gain any evidence that will help her improve her state of knowledge any further. She has a choice of three drugs that she can give Greg: drugs A, B, and C. If she gives him drug A, and he has condition X, then he will be completely cured; but if she gives him drug A, and he has condition Y, then he will die. If she gives him drug C, and he has condition Y, then he will be completely cured; but if she gives him drug C, and he has condition X, then he will die. If she gives him drug B, then he will be almost completely cured, whichever condition he has, but not completely cured. Her decision can be represented in the following table, using numbers to represent how good each outcome would be:

     Greg has condition X (50%)    Greg has condition Y (50%)
A    100                           0
B    99                            99
C    0                             100

Finally, suppose that, as a matter of fact, Greg has condition Y. So giving Greg drug C would completely cure him.
What should Susan do? Obviously, she should give him drug B. It's called 'regret minimization' or 'hedging your bets'. But, since you are a Professor of Ethics or some such shite, you aren't gonna say 'D'uh! The answer is B.' because the way you guys get tenure is by saying the stupidest possible thing.
'In some sense, it seems that Susan ought to give Greg drug C: doing so is what will actually cure Greg. But given that she doesn’t know that Greg has condition Y, it seems that it would be reckless for Susan to administer drug C. As far as she knows, in doing so she’d be taking a 50% risk of Greg’s death. And so it also seems that there’s a sense of ‘ought’ according to which she ought to administer drug B. In this case, the objective consequentialist’s recommendation — “do what actually has the best consequences” — is not useful advice for Susan. It is not a piece of advice that she can act on, because she does not know, and is not able to come to know, what action actually has the best consequences. So one might worry that the objective consequentialist’s recommendation is not sufficiently action-guiding: it’s very rare that a decision-maker will be in a position to know what she ought to do. In contrast, so the argument goes, if there is a subjective sense of ‘ought’ then the decision-maker will very often know what she ought to do. So the thought that there should be at least some sense of ‘ought’ that is sufficiently action-guiding motivates the idea that there is a subjective sense of ‘ought’. Similar considerations motivate metanormativism. Just as one is very often not in a position to know what the consequences of one’s actions are, one is very often not in a position to know which moral norms are true; in which case a sufficiently action-guiding sense of ‘ought’ must take into account normative uncertainty as well.'
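For what it's worth, on this table the 'obvious' answer and the subjectivist's answer coincide: drug B both maximizes expected value and minimizes worst-case regret. A quick check, with the values taken straight from the quoted table:

```python
# Susan's decision table from the quoted case: states are Greg's possible
# conditions (probability 0.5 each); entries are how good the outcome is.
table = {
    "A": {"X": 100, "Y": 0},
    "B": {"X": 99,  "Y": 99},
    "C": {"X": 0,   "Y": 100},
}
p = {"X": 0.5, "Y": 0.5}

# Expected value of each drug under the 50/50 probabilities.
expected = {d: sum(p[s] * v for s, v in row.items()) for d, row in table.items()}

# Worst-case regret: shortfall from the best drug in each state.
best = {s: max(row[s] for row in table.values()) for s in p}
max_regret = {d: max(best[s] - row[s] for s in p) for d, row in table.items()}

print(expected)    # A: 50.0, B: 99.0, C: 50.0  ->  expected value picks B
print(max_regret)  # A: 100,  B: 1,   C: 100    ->  minimax regret picks B
```

So this case, unlike one with asymmetric stakes, does not by itself separate the two decision rules.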
A Doctor is an agent, not a Principal. The Doctor only gains salience in a Decision situation if there is a 'skill' or information asymmetry- in which case there is a dilemma re. operationalizing informed consent. In this case, however, nothing of the sort obtains. Since Susan is posited as someone for whom advice from an Ethicist could be 'useful', it must be the case that she is as stupid as shit and thus a shite Doctor. She should resign. Why? Because it is 'impossible for her to gain any evidence that will help her improve her state of knowledge any further.' In other words, she will learn nothing from a failure. Consequently, in obedience to the Hippocratic oath, she has a duty to give the guardian of the Patient all the information quoted above, return any fees she received, and quit the role of Doctor. There is no 'normative uncertainty' here, unless she is living an unethical life and is happy to continue doing so.
Metanormativism, MacAskill tells us, is motivated by wanting to continue acting in an ethical capacity even when one knows one ought not to so act by reason of ignorance or stupidity or lack of competence. But such metanormativism isn't part of Normative Decision making any more than my throwing up in the sink is part of my duty of doing the washing up. However, as a matter of fact, not theory, if you invite me to dinner and I promise to do the washing up, what actually happens is I get drunk and vomit all over the plates you have piled up in the sink. You manage to get me into a taxi and hope you've seen the last of me. I send you an Email the next day showing, using MacAskill's 'Maximal Expected Choice Worthiness' decision framework, how my actions at your dinner party were actually highly commendable from the Ethical p.o.v. After all, you could have left the dishes in the sink for a couple of days without being greatly inconvenienced- in other words, the duty of doing the washing up at the soonest possible time was supererogatory to some degree. By throwing up in your sink, I made the action of cleaning it and the dishes that much more urgent. This tackled a lacuna in MacAskill's theory which neglects supererogatory duties. Another lacuna in his theory arises from the neglect of culpa levis in concreto type implicit delegation of duties such that a required action is better or more thoroughly or more predictably performed. Clearly, my duty of doing the washing up can be delegated to you if I am incapacitated. By throwing up in the sink, the duty of cleaning the dishes and the sink has become more urgent- you had to perform it right away. Furthermore, you are better at cleaning sinks whereas I'm good at making them dirty. Thus, my actions at your dinner party did not result in the dishes not getting washed. They were washed, probably more thoroughly than would otherwise have been the case.
However, it remains the case that you may think there was a Normative failure on my part. This is quite untrue. You are actually suffering from Normative Uncertainty. You don't understand that though Metanormativism has nothing to do with Normative Behavior, nevertheless, if MacAskill aint talking utter bollocks, by causing you to devote more resources to a purely philosophical argument- viz. my claim that my behavior at your dinner party was super ethical- I am advancing the cause of Ethical Altruism which is a true Moral Theory.
As MacAskill says 'Moral philosophy provides a bargain in terms of gaining new information: doing just a bit of philosophical study or research can radically alter the value of one’s options. So individuals, philanthropists, and governments should all spend a lot more resources on researching and studying ethics than they currently do.'
By throwing up in your sink, and then sending you this email, I have caused you to devote more resources to 'philosophical study' and thus made you an immeasurably better man. Thus getting drunk at dinner parties and throwing up in the sink instead of doing the dishes is prescriptive for Effective Altruists provided Normative Uncertainty is ubiquitous or computational cognitivism aint shite.
'Normative uncertainty is a fact of life.
'Suppose that I have £20 to spend. With that money, I could eat out at a delightful Indian restaurant. Or I could pay for four long-lasting insecticide-treated bednets that would protect eight children against malaria. In comparing these two options, let us suppose that I know all the morally relevant facts about what that £20 could do. Even so, I still don’t know whether I’m obligated to donate that money or whether it’s permissible for me to pay for the meal out, because I just don’t know how strong my moral obligations to distant strangers are. So I don’t ultimately know what I ought to do.'
This is a good reason to hold that normative uncertainty can never be a fact of a truly ethical life but merely a fallacy that a self-publicist may strategically cultivate. Why? Well, a person whose life is truly ethical can never have disposable income for indulgence in a luxury while some people lack necessities. Thus, an ethical person never has £20 to spend on a 'delightful' Indian meal at a restaurant because she is already eating at a langar- or Community Soup kitchen- and handing over her entire earnings to those in need.
'For an example of normative uncertainty on a larger scale, suppose that the members of a government are making a decision about whether to tax carbon emissions. They know, let us suppose, all the relevant facts about what would happen as a result of the tax: it would make presently existing people worse off, as they would consume less oil and coal, and therefore be less economically productive; but it would slow the onset of climate change, thereby increasing the welfare of people living in the future. But the members of the government don’t know how to weigh the interest of future people against the interests of presently existing people. So, again, those in this government don't ultimately know what they ought to do.'
Members of a Government are not principals, thus their own normative preferences are irrelevant, they are agents simply. If they know 'all the relevant facts about what would happen as a result of a tax', their duty is to inform their principal- viz. the citizens on behalf of whom they exercise authority. It is up to the citizens to decide how to allocate resources between generations. Once again, normative uncertainty can't arise unless members of a Government are violating their duty to act as agents, not principals, and thus are not living an ethical life.
'In both of these cases, the uncertainty in question is not uncertainty about what will happen, but rather is fundamental normative uncertainty. Recently, some philosophers have suggested that there are norms that govern how one ought to act that take into account one’s fundamental normative uncertainty. I call this suggestion metanormativism. '
Actually, in both these cases, people who are not living an ethical life are simply pretending that the reason for this is because they haven't yet made up their mind as to what type of ethical life they ought to adopt. Thus 'metanormativism' isn't normative, it is pathological. Indeed MacAskill himself writes 'Metanormativism isn’t about normativity, in the way that meta-ethics is about ethics, or that a meta-language is about a language. Rather, ‘meta’ is used in the sense of ‘over’ or ‘beyond’: that is, in the sense used in the word ‘metacarpal’, where, the metacarpal bones in the hand are located beyond the carpal bones. Regarding metanormativism, there is a clear analogy with the debate about the subjective or objective ought in moral theory (that is, whether moral norms are evidence-relative or belief-relative in some way). However, using the term ‘normative subjectivism’ instead of ‘metanormativism’ would have had misleading associations with subjectivism in meta-ethics. So I went with ‘metanormativism’ – with the caveat that this shouldn’t be confused with the study of normativity'
If you do the cooking, it is normative that I do the washing up. Meta-normativity is like my claiming I'm actually doing 'meta-washing-up' by getting drunk and vomiting in the sink in which you piled up the dishes.
If you are intelligent, you will say to me 'fuck off. Meta-normativity' is meaningless cognitivist shite. I'm going to beat you till you sober up and clean that sink.'
However, if you are stupid- for example if you subscribe to comuptational cognitivism- then you are obliged to take my claim seriously. Following MacAskill's 'Maximal Expected Choiceworthiness' framework, you will be distressed to find that I am ethically superior to you because I have caused you to devote scarce resources to 'Philosophical research' which stupid people like you (i.e. computational cognitivists) consider a very good thing even though sensible people condemn it for 'crowding out' socially beneficial actions.
'There are two main motivations for metanormativism. The first is simply an appeal to intuitions about cases. Consider the following example:
Moral Dominance
'Jane is at dinner, and she can either choose foie gras, or the vegetarian risotto. Let’s suppose that, according to the true moral theory, both of these options are equally choice-worthy: animal welfare is not of moral value so there is no moral reason for choosing one meal over another, and Jane would find either meal equally tasty, and so she has no prudential reason for preferring one over the other. Let’s suppose that Jane has high credence in that view. But she also finds plausible the view that animal welfare is of moral value, according to which the risotto is the more choice-worthy option. In this situation, choosing the risotto over the foie gras is more choice-worthy according to some moral views in which she has credence, and less choice-worthy according to none. In the language of decision-theory, the risotto dominates the foie gras. So it seems very clear that, in some sense of ‘ought’, Jane ought to choose the risotto, and ought not to buy the foie gras. But, if so, then there must be a sense of ‘ought’ that takes into account Jane’s first-order normative uncertainty.
Jane finds 2 options, which cost the same, equally good. Should she starve, like Buridan's ass or should she makes a choice based on an irrelevant alternative? Obviously, she should make a choice, finish her meal quickly, and get back to work. In this case, choosing the risotto represents compliance with a deontics that isn't 'true' because it includes supererogatory prohibitions of no ethical worth but which may have some signalling or strategic function.
There is no first order normative uncertainty here because we are told that an accessible 'true moral theory' obtains.
Decision theory is irrelevant. It doesn't matter what she eats. What matters is that she finish her meal quickly and get back to work.
'A second motivation for metanormativism is based on the idea of action-guidingness. There has been a debate concerning whether there is a sense of ‘ought’ that is relative to the decision-maker’s beliefs or credences (a ‘subjective’ sense of ought), in addition to a sense of ‘ought’ that is not relative to the decision-maker’s beliefs or credences (an ‘objective’ sense of ought). The principal argument for thinking that there must be a subjective sense of ‘ought’ is because the objective sense of ‘ought’ is not sufficiently action-guiding.
Once again, we find that the claimed motivation for metanormativism arises from the refusal to grant that some actions have no ethical or deontic status- they are 'supererogatory'. This is a good thing if Knightian Uncertainty obtains because the more 'free' choices each agent can make, the faster and more thoroughly the fitness landscape can be investigated. Suppose Knightian Uncertainty is small whereas the risk of a catastrophe is known to be high- e.g. 90 per cent. In this case, it might make sense to require that subjectivity be conditioned to show a preference for 'metanormativity' iff
1) there is always a null option- i.e. a choice which has neglibible effect
2) no scarce resources are used up as a result
In other words, metanormativism is not empty or pathological provided the people to whom it is touted can do no good but, at the margin, might do some harm. In this case, it makes sense to baffle them with bullshit.
However, there is a superior alternative. Tell them they are shite and they have a duty to resign from any responsible office or position of power or authority.
Consider the following case
Susan, and the Medicine -
Susan is a doctor, who has a sick patient, Greg. Susan is unsure whether Greg has condition X or condition Y: she thinks each possibility is equally likely. And it is impossible for her to gain any evidence that will help her improve her state of knowledge any further. She has a choice of three drugs that she can give Greg: drugs A, B, and C. If she gives him drug A, and he has condition X, then he will be completely cured; but if she gives him drug A, and he has condition Y, then he will die. If she gives him drug C, and he has condition Y, then he will be completely cured; but if she gives him drug C, and he has condition X, then he will die. If she gives him drug B, then he will be almost completely cured, whichever condition he has, but not completely cured. Her decision can be represented in the following table, using numbers to represent how good each outcome would be: Greg has condition X – 50% Greg has condition Y – 50% A 100 0 B 99 99 C 0 100 Finally, suppose that, as a matter of fact, Greg has condition Y. So giving Greg drug C would completely cure him.
What should Susan do? Obviously, she should give him drug B. It's called 'regret minimization' or 'hedging your bets'. But, since you are a Professor of Ethics or some such shite, you aren't gonna say 'D'uh! The answer is B.' because the way you guys get tenure is by saying the stupidest possible thing. In some sense, it seems that Susan ought to give Greg drug C: doing so is what will actually cure Greg. But given that she doesn't know that Greg has condition Y, it seems that it would be reckless for Susan to administer drug C. As far as she knows, in doing so she'd be taking a 50% risk of Greg's death. And so it also seems that there's a sense of 'ought' according to which she ought to administer drug B.

In this case, the objective consequentialist's recommendation — "do what actually has the best consequences" — is not useful advice for Susan. It is not a piece of advice that she can act on, because she does not know, and is not able to come to know, what action actually has the best consequences. So one might worry that the objective consequentialist's recommendation is not sufficiently action-guiding: it's very rare that a decision-maker will be in a position to know what she ought to do. In contrast, so the argument goes, if there is a subjective sense of 'ought' then the decision-maker will very often know what she ought to do. So the thought that there should be at least some sense of 'ought' that is sufficiently action-guiding motivates the idea that there is a subjective sense of 'ought'. Similar considerations motivate metanormativism. Just as one is very often not in a position to know what the consequences of one's actions are, one is very often not in a position to know which moral norms are true; in which case a sufficiently action-guiding sense of 'ought' must take into account normative uncertainty as well.
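That 'D'uh! The answer is B' verdict takes one line of arithmetic to verify. A minimal sketch, using the payoffs from the table above and a 50/50 credence; the variable names are mine, not MacAskill's:

```python
# Susan's choice, assuming the payoff table above and a 50/50 credence
# over conditions X and Y.
payoffs = {          # drug -> (value if X, value if Y)
    "A": (100, 0),
    "B": (99, 99),
    "C": (0, 100),
}
p_x = 0.5  # credence that Greg has condition X

# Expected value under the 50/50 credence.
ev = {d: p_x * vx + (1 - p_x) * vy for d, (vx, vy) in payoffs.items()}

# Minimax regret: a drug's regret in a state is the best achievable value
# in that state minus what the drug yields there; pick the drug whose
# worst-case regret is smallest.
best_x = max(vx for vx, _ in payoffs.values())   # 100
best_y = max(vy for _, vy in payoffs.values())   # 100
max_regret = {d: max(best_x - vx, best_y - vy) for d, (vx, vy) in payoffs.items()}

print(ev)                                   # A: 50.0, B: 99.0, C: 50.0
print(min(max_regret, key=max_regret.get))  # 'B' (worst-case regret of 1)
```

Drug B wins on both criteria: its expected value (99) nearly doubles that of A or C (50 each), and its worst-case regret is 1 against their 100.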
A Doctor is an agent, not a Principal. The Doctor only gains salience in a Decision situation if there is a 'skill' or information asymmetry- in which case there is a dilemma re. operationalizing informed consent. In this case, however, nothing of the sort obtains. Since Susan is posited as someone for whom advice from an Ethicist could be 'useful', it must be the case that she is as stupid as shit and thus a shite Doctor. She should resign. Why? Because it is 'impossible for her to gain any evidence that will help her improve her state of knowledge any further.' In other words, she will learn nothing from a failure. Consequently, in obedience to the Hippocratic oath, she has a duty to give the guardian of the Patient all the information quoted above, return any fees she received, and quit the role of Doctor. There is no 'normative uncertainty' here, unless she is living an unethical life and is happy to continue doing so.
Metanormativism, MacAskill tells us, is motivated by wanting to continue acting in an ethical capacity even when one knows one ought not to so act by reason of ignorance or stupidity or lack of competence. But such metanormativism isn't part of Normative Decision making any more than my throwing up in the sink is part of my duty of doing the washing up.

However, as a matter of fact, not theory, if you invite me to dinner and I promise to do the washing up, what actually happens is I get drunk and vomit all over the plates you have piled up in the sink. You manage to get me into a taxi and hope you've seen the last of me. I send you an Email the next day showing, using MacAskill's 'Maximal Expected Choice Worthiness' decision framework, how my actions at your dinner party were actually highly commendable from the Ethical p.o.v. After all, you could have left the dishes in the sink for a couple of days without being greatly inconvenienced- in other words, the duty of doing the washing up at the soonest possible time was supererogatory to some degree. By throwing up in your sink, I made the action of cleaning it and the dishes that much more urgent. This tackled a lacuna in MacAskill's theory which neglects supererogatory duties. Another lacuna in his theory arises from the neglect of culpa levis in concreto type implicit delegation of duties such that a required action is better or more thoroughly or more predictably performed. Clearly, my duty of doing the washing up can be delegated to you if I am incapacitated. By throwing up in the sink, the duty of cleaning the dishes and the sink has become more urgent- you had to perform it right away. Furthermore, you are better at cleaning sinks whereas I'm good at making them dirty. Thus, my actions at your dinner party did not result in the dishes not getting washed. They were washed, probably more thoroughly than would otherwise have been the case.
However, it remains the case that you may think there was a Normative failure on my part. This is quite untrue. You are actually suffering from Normative Uncertainty. You don't understand that though Metanormativism has nothing to do with Normative Behavior, nevertheless, if MacAskill aint talking utter bollocks, by causing you to devote more resources to a purely philosophical argument- viz. my claim that my behavior at your dinner party was super ethical- I am advancing the cause of Ethical Altruism which is a true Moral Theory.
As MacAskill says 'Moral philosophy provides a bargain in terms of gaining new information: doing just a bit of philosophical study or research can radically alter the value of one’s options. So individuals, philanthropists, and governments should all spend a lot more resources on researching and studying ethics than they currently do.'
By throwing up in your sink, and then sending you this email, I have caused you to devote more resources to 'philosophical study' and thus made you an immeasurably better man. Thus getting drunk at dinner parties and throwing up in the sink instead of doing the dishes is prescriptive for Effective Altruists provided Normative Uncertainty is ubiquitous or computational cognitivism aint shite.
Friday, 10 July 2015
How to make Effective Altruism bulletproof without rendering it Silly- Part 1 of Zero
Effective Altruism (E.A) appears the easiest Ethical Theory to shoot down since, for any theory of Justified True Belief (J.T.B) necessary for its implementation, every possible course of action can be proven to be consistent with E.A provided it holds at least one course of action to be unambiguously prescriptive at that particular point in time and space.
This is because, if an action is prescriptive, it must be the case that an associated set of compatible Theories of Justified True Belief has instrumental value. Thus it would be rational to devote scarce resources to promoting interest in and research into that set of J.T.Bs which are consistent with the E.A prescription we have posited as occurring.
However, since, in any prescriptive course of action, a sub-optimal variation in a single step can have greater imperative force than its optimal completion- for e.g. in saving a drowning child, we may not complete the step of combing its hair, so that when the T.V cameras arrive, it presents a more piteous aspect, thus giving greater imperative force to whatever E.A prescription we are urging- it therefore follows that either
1) Any course of action is consistent with E.A if it is merely a sub-routine.
Or
2) All E.A sub-routines have the same property as its prescriptive 'courses of action'. For example, they must pass the test of protecting the life of another. Thus, no sub-routine could arise such that you let a child drown no matter how much attention and resources doing so might draw to the good cause.
The issue here is that a sorites type problem occurs in demarcating the last safe moment when a particular sub-routine becomes prescriptive.
If no such sorites problem arises then E.A can't be disambiguated from J.T.B. There is no supervenience or multiple realizability with respect to them. Either E.A is just a 'Turing Oracle' for J.T.B's halting problem or it is its own metalanguage and thus can't prove its own consistency or completeness- i.e. it can't show some of its propositions are prescriptive provided at least one isn't. In other words, it can exist, but only outside 'Public Reason' as an apophatic practice of a Mystic or Pietistic type.
This renders it bulletproof but silly.
In my next post I'll outline a method of saving E.A from silliness.
Edit- Well, I would have outlined such a method, for sure, but Waitrose has some real nice Rum marked down so fill your glass and take a shufti at this instead.
Thu, 2015-07-09 15:13 — Joe Thorpe
Vivek Iyer highlights an important point- the difficulty of calculating long-term consequences, which others flagged as well. But the nice thing about altruism is that your competition to do pure good isn't exactly intense, so it isn't THAT hard to find some genuine good to do. (Although I've heard from travellers that free mosquito nets tend to get used as fish nets, and quickly destroyed, I must admit.) The trouble with Hayekian reasoning is that the market, especially in developed nations, is largely devoted to meeting the requirements of human sexual selection (see Veblen.) So that's not a great way even to find personal happiness. Great way to piss away forests, though.
Thu, 2015-07-09 18:51 — Vivek Iyer
Joe Thorpe raises a very important point re. sexual selection. It could be argued that mimetic consumption of positional goods raises reproductive success, which in turn entails the notion that one has a duty to one's descendants to evade and avoid taxes. It is a short step to Social Darwinism- red in tooth and claw!
Fortunately, Zahavi's theory re. the handicap principle is actually eusocial across species because it sends a signal which allows 'Aumann correlated equilibria'. Thus when birds engage in flocking behaviour against a predator, every individual benefits- including the predator, which is discouraged from a costly attack.
Both hunter-gatherer and agricultural societies saw the need for egalitarian distribution of surpluses. A cynic might say Charity is good because it means you aim for a surplus, so in a bad year it is the 'poor' or marginally connected who starve first. However, aiming for a surplus has positive 'externalities' and Knowledge effects. Furthermore, egalitarian rules for surplus distribution change the fitness landscape for a lot of co-evolved public goods- i.e. you get a better golden path. Interestingly, the Japanese Sage, Ninomiya, thought of 'savings' not as a hedge or 'consumption smoothing' or in terms of 'time preference' (which is important because Mathematical General Equilibrium theory soon becomes 'anything goes' once hedging or Knightian uncertainty etc. are introduced- i.e. the maths doesn't support Ayn Rand type silliness); instead, Ninomiya saw savings as a voluntary foregoing of luxuries so as to allow others to eat. However, since all humans must be treated as equal, there is an obligation to 'repay virtue'- i.e. the system still needs to be doing golden path savings and building Capital- though this can be discharged collectively.
Ken Binmore's evolutionary game theory approach seems to be moving in this direction. If effective altruism makes people happier then we don't need to commit to either consequentialism or worry that the underlying deontics are probably apophatic- i.e. the rule set has no representation.
For the moment the argument from low hanging fruit makes sense. Of course, one has to be sensitive to the dangers posed by what Timur Kuran calls Preference Falsification & Availability Cascades. However, that's the sort of thing co-evolved systems- as opposed to some substantivist Super Computer- are good at doing.
Fri, 2015-07-10 11:32 — Vivek Iyer
Forgive my narcissism in replying to my own comment! I wanted to draw attention to different ways to ground E.A and, if not make it 'bullet-proof', at least motivate useful reflection:
1) Mike Munger's notion of the 'euvoluntary' and the evolution of eusocial behavior. Here, mimetics- neglected in Anglo-America- with its cheap 'out of control' computational solutions to co-ordination and matching problems gains salience and grounds a unification with both Continental theories and 'mirror neuron' type research.
2) Baumol 'superfairness' should be re-examined in the light of Binmore's evolutionary approach. Interestingly, this opens 'Western' discourse to 'Eastern' thought. The Bhagavad Gita, a sacred text for Hindus, for example, is part of a bigger text which stresses the need for the 'Just King' or morally autonomous 'Principal' (as opposed to Agent) to learn Statistical Game Theory to make good decisions.
3) 'Euvoluntary' commitments need to be universalisable such that Hannan consistency- i.e. regret minimization- obtains in a Muth Rational manner.
All 3 avenues, quoted above, are currently neglected in E.A discourse though they provide better solutions and better ways forward than anything I've come across in salient apologetics.
Moreover, they are eminently unifiable under the rubric of Co-Evolution and generate powerful 'regulative' concepts or paradigmatic metaphors.
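Hannan consistency, invoked in point 3 above, has a concrete computational face: a regret-matching learner whose average regret against every fixed action vanishes over time. A minimal sketch, assuming a two-action game against a coin-flipping environment; the payoff matrix and horizon are illustrative choices of mine, not drawn from Binmore or anyone else cited here:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Payoff to the learner: action a against environment state s.
payoff = [[1.0, 0.0],   # action 0 pays off in state 0
          [0.0, 1.0]]   # action 1 pays off in state 1

# Regret matching (in the style of Hart & Mas-Colell): play each action
# with probability proportional to its positive cumulative regret. The
# average regret then tends to zero- i.e. Hannan consistency.
cum_regret = [0.0, 0.0]
T = 5000

for t in range(T):
    pos = [max(r, 0.0) for r in cum_regret]
    total = sum(pos)
    p0 = pos[0] / total if total > 0 else 0.5
    a = 0 if random.random() < p0 else 1
    state = random.choice([0, 1])        # environment flips a fair coin
    for alt in (0, 1):                   # regret vs. always playing alt
        cum_regret[alt] += payoff[alt][state] - payoff[a][state]

avg_regret = max(cum_regret) / T
print(round(avg_regret, 3))   # close to zero for large T
```

The point of the sketch is only that 'regret minimization' is not a vague slogan: it names a well-defined property that simple adaptive rules provably achieve.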
At worst, E.A would cause a person to secure a pure economic rent so as to maximise income, and this rent would be associated with a dead-weight loss. He may distribute much of this rent to individuals who are faking or to schemes which are incentive incompatible or have a design flaw. For example, a person who could be a teacher (with little economic rent) may instead choose to be a monopolist or monopsonist (causing deadweight loss to the economy) and give this money to handicapped children who have actually been maimed by a beggar-king.
Thus, for Hayekian reasons, E.A can't be an optimal information aggregation mechanism. However, it may actually yield more happiness to its practitioners than mindless consumerism.