Friday, 5 September 2014

Rawls's Reasonableness vs Robot Rationality

Suppose there are N identical robots which can connect to M heterogeneous Wireless Networks. Each robot would prefer the Network that assigns the highest priority to its requests. Suppose further that the co-ordination problem for all robots is best solved if all are connected to the same Network and that the gains of co-ordination far outweigh any other consideration. Now assume 'common knowledge'.
What happens?
Presumably, ceteris paribus, in an infinitely repeated game, sooner or later all robots will connect to one Network which is robustly (i.e. non-gameably) neutral with respect to the identity of the robot making the request.
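
A toy simulation may make this concrete. It is my own sketch, not anything in the post's sources: robots imitate last round's most popular Network with high probability and experiment at random otherwise, and the parameters (20 robots, 4 Networks, a 5% experimentation rate) are invented purely for illustration.

    # Sketch: repeated play drifting onto a single Network (illustrative only).
    import random
    from collections import Counter

    def simulate(n_robots=20, m_networks=4, rounds=200, experiment_rate=0.05, seed=0):
        rng = random.Random(seed)
        choices = [rng.randrange(m_networks) for _ in range(n_robots)]
        for _ in range(rounds):
            # Each robot copies the currently most popular Network,
            # except for occasional random experimentation.
            leader = Counter(choices).most_common(1)[0][0]
            choices = [leader if rng.random() > experiment_rate
                       else rng.randrange(m_networks)
                       for _ in range(n_robots)]
        return Counter(choices)

    print(simulate())  # typically shows nearly all robots on one Network
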
How does the Network solve its concurrency problem in order to make this happen? That is, when it receives two or more simultaneous requests of the same semantic class (which cashes out as cash offers), how does it decide which request to deal with first such that no bias towards a particular robot obtains?
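
One way such an identity-blind arbiter could work, and this is my illustration rather than a claim about any actual Network, is to order simultaneous same-class requests by a random permutation drawn from an entropy source, so that the serving order carries no information about which robot made which request.

    # Sketch of an identity-blind arbiter (illustrative, not a real protocol).
    import secrets

    def arbitrate(requests):
        """Return the simultaneous requests in a uniformly random service order."""
        order = list(requests)
        # Fisher-Yates shuffle driven by a cryptographic RNG
        for i in range(len(order) - 1, 0, -1):
            j = secrets.randbelow(i + 1)
            order[i], order[j] = order[j], order[i]
        return order

    print(arbitrate(["robot-3: bid 10", "robot-7: bid 10", "robot-1: bid 10"]))
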
Suppose there is an effable, white-box (as opposed to black-box) method to do this for at least one Network. Then either there is a zero-knowledge proof of robust (i.e. non-gameable) neutrality which that Network can give the robot, or such isn't the case. The ability to verify such zero-knowledge proofs would be a desirable feature for our robots. Assume they have this ability.
Now, what we have is a Network which can give a zero-knowledge proof that it solves concurrency problems in an unbiased, non-gameable and thus robustly neutral manner. But (Razborov-Rudich) this means a proof of P=NP exists or, equivalently, pseudorandom strings can always be efficiently discriminated from truly random strings. Why? Because the string of robot requests arising from the same response to a global event will be received as truly random, yet the Network can discriminate this from a pseudorandom string generated by an attempt to game it and thus violate its neutrality.
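
To see what 'discriminating' would demand: a cheap statistical test easily spots a crudely biased source, but, so far as anyone knows, no efficient test of this kind tells a decent pseudorandom generator apart from true randomness. The sketch below is my own, with an ad hoc hash-chain generator standing in for a 'real' PRG.

    # A naive 'distinguisher' that just counts ones (illustrative only).
    import os, hashlib, random

    def ones_fraction(bits):
        return sum(bits) / len(bits)

    def true_random_bits(n):
        data = os.urandom(n // 8 + 1)
        return [(b >> i) & 1 for b in data for i in range(8)][:n]

    def pseudo_random_bits(n, seed=b"seed"):
        # toy hash-chain PRG, purely for illustration
        out, state = [], seed
        while len(out) < n:
            state = hashlib.sha256(state).digest()
            out.extend((b >> i) & 1 for b in state for i in range(8))
        return out[:n]

    def biased_bits(n, p=0.6, seed=1):
        rng = random.Random(seed)
        return [1 if rng.random() < p else 0 for _ in range(n)]

    for name, bits in [("true", true_random_bits(10000)),
                       ("pseudo", pseudo_random_bits(10000)),
                       ("biased", biased_bits(10000))]:
        print(name, round(ones_fraction(bits), 3))
    # 'true' and 'pseudo' look alike (~0.5); only 'biased' stands out.
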

Rawls, in 'Justice as Fairness', draws a distinction between being 'reasonable' and being 'rational'. The robots aren't reasonable (they fail a Turing test) but, provided there is a proof that P=NP, they adhere to a Coordination Solution which is identical to the Co-operative solution for Rawls's 'reasonable' human beings.
 'Throughout I shall make a distinction between the reasonable and the rational, as I shall refer to them. These are basic and complementary ideas entering into the fundamental idea of society as a fair system of social cooperation. As applied to the simplest case, namely to persons engaged in cooperation and situated as equals in relevant respects (or symmetrically, for short), reasonable persons are ready to propose, or to acknowledge when proposed by others, the principles needed to specify what can be seen by all as fair terms of cooperation. Reasonable persons also understand that they are to honor these principles, even at the expense of their own interests as circumstances may require, provided others likewise may be expected to honor them. It is unreasonable not to be ready to propose such principles, or not to honor fair terms of cooperation that others may reasonably be expected to accept; it is worse than unreasonable if one merely seems, or pretends, to propose or honor them but is ready to violate them to one's advantage as the occasion permits.

'Yet while it is unreasonable, it is not, in general, not rational. For it may be that some have a superior political power or are placed in more fortunate circumstances; and though these conditions are irrelevant, let us assume, in distinguishing between the persons in question as equals, it may be rational for those so placed to take advantage of their situation. In everyday life we imply this distinction, as when we say of certain people that, given their superior bargaining position, their proposal is perfectly rational, but unreasonable all the same. Common sense views the reasonable but not, in general, the rational as a moral idea involving moral sensibility. '

Rawls says that an agent may be rational but unreasonable. Can there be a reasonable agent who is also irrational? For example, given that no rational argument obtains for assuming P=NP, or that an efficient way exists to discriminate pseudorandom from random sequences, or that a robustly neutral solution to Race hazard or Concurrency bias exists, could Rawls be considered reasonable for making an argument which depends crucially on assumptions such as these?
Certainly. Why not? It may be that 'common sense' views Reason as irrational in so far as it involves a moral idea or moral 'sensibility'. If human beings are socially canalised to ontological dysphoria, i.e. to not feel at home in the world, then, it may be, Reason counsels irrationality (itself a moving target) or elite subscription to a 'noble lie'.
But, surely, this is not Rawls's implication; he seems to be saying that Reason is a sub-set of the Rational, with 'Moral Sensibility' providing the Partition. If this is not in fact, by the Maxim of Relevance, his Gricean implicature, then how is his political theory of Justice-as-Fairness different from an arbitrary theory based on some supposed 'Revealed Truth' or Supernatural Oracle or bogus Ideology like 'Post Colonial Reason'? Why would any rational person want to be Rawls-reasonable?

Indeed, common sense tells us that, contra Rawls, no 'reasonable person would be ready to propose, or to acknowledge when proposed by others, the principles needed to specify what can be seen by all as fair terms of cooperation.' Why? Suppose I say to you, 'go get the pizza and I'll pick up the beer', and you reply, 'Cool'. Is it really the case that either of us needs to specify what principle is involved so that everybody in our society can see that what we are doing is an example of fair co-operation?
Suppose a stranger who overhears our conversation says- 'Stop! It's unfair that Vivek gets to go for the beer just because he's got a bigger dick than you. Why shouldn't he go for the pizza for a change? Could you please justify the principle underlying this proposed co-operative act of yours in a manner which sets to rest my doubts as to its fairness by reason of gender bias and like Vivek just having such a huge swinging dick which is like itself unfair.'
I suppose, if we were both as reasonable as Rawls, we could spend a few months or years or decades attempting, firstly, to grasp what the underlying deciding principle was (hint: it's the theory of Comparative Advantage) and then to prove it was Baumol super-fair, or zero regret, or whatever. We would fail because 'Fairness' is like a Wealth effect in Sonnenschein-Mantel-Debreu, i.e. not independent of the comparative statics or concurrency of the system.
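
For what it's worth, the comparative-advantage arithmetic runs as follows, with fetching times invented purely for illustration:

    # Made-up numbers, just to unpack the comparative-advantage hint above.
    minutes = {
        "you":   {"pizza": 20, "beer": 30},
        "vivek": {"pizza": 15, "beer": 10},  # faster at both, i.e. absolute advantage
    }
    # Opportunity cost of a beer run, measured in foregone pizza runs:
    for person, t in minutes.items():
        print(person, "beer costs", round(t["beer"] / t["pizza"], 2), "pizza-runs")
    # you: 1.5, vivek: 0.67 -- so Vivek fetches the beer, you fetch the pizza,
    # and no further principle needs specifying.
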

Any 'political' regime (and Rawls redux is offering us only a purely Political conception of Justice) is going to display the same level of unfairness as arises out of an Economic regime, purely because of concurrency problems in co-ordination games, even if there is no other source of scarcity or even conflict of interest. Indeed, monarchy, oligarchy, the market, GOSPLAN etc. all reappear as concurrency solutions which fail the neutrality test. This means the Benthamite planner has a choice between devoting resources to reducing Race hazard and expanding the Network, which is similar to the dilemma of the 'Super-Intelligent Self-improving Machine' approaching FOOM.

2 comments:

  1. Are you saying a neutral network is impossible? In evolutionary theory, 'promiscuous activities', i.e. non-native activities, e.g. enzyme promiscuity, typically don't alter robustness but do impact plasticity and evolvability. I suppose promiscuity is a bit like your robots gaming the network to gain a concurrency advantage. However, the gain from improved evolvability for non-promiscuous robots would surely outweigh the cost of Network non-neutrality and so the 'rational' robot would choose to be Rawls-reasonable: i.e. rather than having a 'zero-knowledge proof' module to test Network neutrality, it would simply have some cheap 'mimetic' type module.

    Replies
    1. I think a truly neutral network in sequence space isn't effectively computable within its own in silico universe. Yes, you can get a 'holey' landscape which is nearly neutral, but 'nearly fair' still isn't 'fair', more especially because outcomes get more and more skewed the longer you run the program.
      In any case, evolutionary neutrality has nothing to do with a substantively rational (as opposed to bilateral, 'small, cheap and out of control') Network for whom concurrency and race hazard can mean deadlock or livelock or other such pathologies catastrophically reducing inclusive fitness.
      Mimetics is a different issue- not one Liberal Political Philosophy should be comfortable getting in bed with.
      M.M. Kaminski, author of 'Games Prisoners Play', shows that semantics, as the solution of a zero-knowledge proof discoordination game, gains salience along with mimetics.
      Interestingly, Kaminski, who was very young when sent to Jail by the Communists and thus in danger of being turned into a 'fag', thinks a rational choice hermeneutic is Muth Rational. An older man would conclude the opposite and choose to go from 'nutter' to 'Gandhi' because the mimetics of beastliness was actually the greater existential threat, i.e. ontological dysphoria is what most becomes the reasoning animal.
