
Wednesday, 1 May 2013

South Park and Super-intelligent Machines

This is a link to a potentially interesting, but not even wrong (because it ignores Capital re-switching problems), Paper (pdf) about 'the microeconomics of cognitive returns' to self-improving machines which thereby become super-intelligent (FOOM).

What philosophical problems does such speculation give rise to?

Suppose there is a single A.I. with a 'Devote x % of resources to Smartening myself' directive. Suppose further that the A.I. is already operating with David Lewis's 'elite eligible' ways of carving up the World along its joints- i.e. it is climbing the right hill or, to put it another way, is tackling a problem with Bellman optimal sub-structure. Presumably, the Self-Smartening module faces a race-hazard type problem in deciding whether it is smarter to devote resources to evaluating returns to smartness or to just release resources back (re-switching) to existing operations. I suppose, as part of its evolved glitch avoidance, it already internally breeds its own heuristics for Karnaugh map type pattern recognition, and this would extend to spotting and side-stepping NP-complete decision problems. However, if NP-hard problems are like predators, there has to be a heuristic to stop the A.I. avoiding them to the extent of roaming uninteresting spaces and breeding only 'Spiegelman monster' or trivial or degenerate results. In other words, the A.I.'s 'smarten yourself' Module is now doing just enough dynamic programming to justify its upkeep but not so much as to endanger its own survival. At this point it is enough for there to be some exogenous shock or random discontinuity in the morphology of the fitness landscape for (as a corollary of dynamical insufficiency under Price's equation) some sort of gender dimorphism and sexual selection to start taking place within the A.I., with speciation events and so on. However, this opens an exploit for systematic manipulation by lazy good-for-nothing parasites- i.e. humans- so FOOM cashes out as ...oh fuck, it's the episode of South Park with the cat saying 'O long Johnson'.
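A dynamic programming footnote, mine and not the linked paper's: here is a minimal toy sketch of the re-switching decision, assuming a fixed horizon, a linear pay-off, and an intelligence level that grows by a factor (1 + growth * x) whenever a fraction x of resources is diverted to self-improvement. The growth rate, horizon and grid are all made-up parameters; the point is only that the problem has Bellman optimal sub-structure, and backward induction shows the agent devoting everything to smartening itself early on and re-switching everything back to ordinary operations near the end.

def best_plan(growth=0.5, horizon=10, grid=101):
    # Value per unit of intelligence is linear here, so we can track it as a scalar.
    fractions = [i / (grid - 1) for i in range(grid)]
    value_per_s = 0.0          # value of a unit of intelligence after the final period
    plan = []
    for t in reversed(range(horizon)):
        # Choose x to maximize current output (1 - x) plus the value of the
        # smarter self (1 + growth * x) carried into the remaining periods.
        best_x, best_v = max(
            ((x, (1 - x) + (1 + growth * x) * value_per_s) for x in fractions),
            key=lambda pair: pair[1],
        )
        value_per_s = best_v
        plan.append((t, best_x))
    return list(reversed(plan)), value_per_s

plan, value = best_plan()
for t, x in plan:
    print(f"period {t}: devote {x:.0%} of resources to self-improvement")
print(f"total output per unit of initial intelligence: {value:.2f}")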
So Beenakker's solution to Hempel's dilemma (http://en.wikipedia.org/wiki/Hempel's_dilemma) was wrong- the boundary between physics and metaphysics is NOT 'the boundary between what can and what cannot be computed in the age of the universe'- because South Park resolves every possible philosophical puzzle in the space of- what?- well, the current upper limit is three episodes.

Monday, 14 January 2013

Stalnaker-Lewis & a Mathesis Universalis

Suppose there were a Mathesis Universalis- i.e. an 'eidetic science of the object in general' (Husserl)- which, if propounded, everybody would agree fits the bill, and suppose this yields an evidential decision theory all Muth rational people would adhere to. What then happens to counterfactual conditionals? In particular, what would Stalnaker's closest possible world to our own look like?
Let's take a Josephine's Kingdom type situation populated by Muth rational, Mathesis Universalis possessing, Vandana Shivas married to men who may or may not be evil, G.M. food advocating, gang rapists of the genomes of innocent Saree wearing plants. No Vandana Shiva knows if her own husband is a gang rapist- there is no objective test for the condition, since such creatures have no real existence but are a figment of Vandana Shiva discourse- but each does know which of the other Vandana Shivas is married to a gang rapist, because Vandanaji's PhD in Philosophy dealt with Quantum non-locality, Corporate Globalization, and the gang-rape of plant genomes this inevitably gives rise to. However, since Vandana Shivas never listen to each other, or themselves, there is no way for a particular Vandanaji to be told her husband is or is not a gang rapist. Josephine, the Queen of the Kingdom, announces that gang-rapist husbands exist and orders all Vandanas to shoot their gang rapist husband at midnight of the same day they realize he must be outraging the modesty of the genomes of innocent plants. All Vandanas can hear, or otherwise get information about, who got shot.
Suppose there is only one gang rapist. Then there is one Vandana who knows all other men, save her hubby, are innocent, so she shoots him. Suppose there are two gang rapists. Then there are two women who think there is only one rapist. So they expect one shooting and, when it does not happen, shoot their own husbands the next night. Suppose there are three gang rapists. Then there are three women who think there are only two rapists and who know, because no one gets shot on night one or two, that their own husbands are guilty, and so shoot them on the third night.
Clearly, by induction, all the gang rapists get shot on the x-th night, where x is the number of gang rapists.
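For what it's worth, the induction can be mechanised. The sketch below is mine, not Josephine's, and assumes every Vandana updates simultaneously each night: on night t a wife shoots iff nobody has been shot earlier and she can see only t-1 guilty husbands among the others, so her own husband must account for the t-th.

def simulate(guilty):
    # guilty[i] is True iff husband i is a gang rapist (in Vandana Shiva theory).
    # Wife i observes guilty[j] for every j != i, but never for her own husband.
    # Josephine's announcement guarantees at least one guilty husband.
    n_guilty = sum(guilty)
    shots = []
    night = 0
    while not shots:
        night += 1
        for i, g in enumerate(guilty):
            seen = n_guilty - (1 if g else 0)   # guilty husbands wife i can see
            if seen == night - 1:               # the silence so far is no longer explicable
                shots.append(i)
    return night, shots

for k in (1, 2, 3, 5):
    night, shots = simulate([True] * k + [False] * 3)
    print(f"{k} guilty husbands: all shot on night {night} (wives {shots})")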
Now let us mix things up by changing Josephine's directive to her Vandana subjects so that it reads: 'kill your husband iff he is a gang rapist of the plant genome in the Stalnaker closest possible world, or even in the Lewisian sphere of very close possible worlds'.
The problem for us here is that, though such gang rapists only exist in Vandana Shiva theory and, since all Shivas are Muth rational, the number of such gang rapists is knowable, still Josephine's directive is not well specified. Is she saying 'kill x if, in a close possible world, he is a y', or is the gravamen of her order rather 'if you are married to a y in the closest possible world, then kill the x you are married to in this one'? In any case, is the closest possible world one identical in every respect except that x is a y? However, we are assuming that 'an eidetic science of the object in general' actually exists and is possessed by the relevant Muth Rational agents. To say that the problem we encounter makes Josephine's directive dangerously ambiguous is to say that we can't conceive of a Mathesis Universalis compossible with our ordinary, or even Stalnaker specified, notions of intentionality and meaning. In other words, on one horn of the dilemma eidetics is empty; on the other, intentionality is vacuous.

Wednesday, 2 January 2013

Deontology's Royal Road to Beenakker's boundary


This is a link to an interesting paper suggesting that any deontology can be collapsed into a Consequentialism by an appropriate weighting of Utilities, but not vice versa, thus generating an asymmetry in favour of the latter.
An obvious rejoinder is that you can have a Deontology specified thus:
1) first compute all possible Consequentialist solutions, be they rule, act or whatever;
2) find something better than any of them.
However, there is one sort of Consequentialism, which I've just this moment invented, which goes something like 'discontinuously assign very high Utility to particular ordinal Utilities which have interesting mathematical properties, like Pi or e, such that what is maximized relates to the doing of the Consequentialist calculus itself'. In this case the deontology suggested above fails because something at step 1 encounters a halting problem.
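A toy illustration of that last point- mine, not the linked paper's: if the Utility a candidate rule receives is allowed to depend on running that rule's own calculus, then step 1 of the rejoinder, 'compute all possible Consequentialist solutions', inherits the halting problem, since no general procedure can screen out in advance the candidates whose calculus never terminates.

def utility_of(rule):
    # Toy stand-in: the utility of a candidate rule is whatever its own
    # consequentialist calculus returns- which, for some rules, is never.
    return rule()

def modest_rule():
    return 1.0

def self_referential_rule():
    while True:      # a candidate whose calculus never halts
        pass

def step_one(candidates):
    # The rejoinder's step 1: evaluate every candidate Consequentialist solution.
    # If any candidate's calculus fails to halt, step_one never completes, and
    # the halting problem says there is no general pre-filter for such candidates.
    return {rule.__name__: utility_of(rule) for rule in candidates}

print(step_one([modest_rule]))                       # terminates
# step_one([modest_rule, self_referential_rule])     # would run forever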

Now as a matter of fact, not theory, it is the case that talk about Consequentialism vs Deontology is only interesting in so far as it drives maths or provides a concrete model for cool axiom systems arising from other fields.

The author of the paper linked to above writes-
A consequentialiser who cannot account for the difference between act and rule consequentialism has not succeeded to deliver a theory that deserves the label ‘consequentialism’. However, only cardinal consequentialism can account for this distinction. Rule consequentialism presupposes that one is able to calculate averages (or at least sum up the utility of different consequences into a sum total) and this requires that we measure utility on a cardinal scale.
Is it the case that Rule Consequentialism (R.C.) is constrained in the manner specified? Who is to say that, so long as R.C. doesn't throw away information, single-valued averages are necessary? Suppose a fractal captures the information rather than an average. It would have been news to many, prior to the Seventies, that fractals were in fact rankable on a cardinal scale on the basis of dimensionality. How do we know that the same thing is not true of other, currently exotic or unknown, mathematical objects which capture information?
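By way of illustration (the example is mine, not the paper's): the similarity dimension D = log(N) / log(1/r) of a self-similar set built from N copies, each scaled down by a factor r, already puts some familiar fractals on a cardinal scale.

import math

# Similarity dimension D = log(N) / log(1/r). Textbook values, listed only
# to show such objects really are rankable on a cardinal scale.
fractals = {
    "Cantor set":          (2, 1/3),   # 2 copies at scale 1/3
    "Koch curve":          (4, 1/3),   # 4 copies at scale 1/3
    "Sierpinski triangle": (3, 1/2),   # 3 copies at scale 1/2
}

dimensions = {name: math.log(n) / math.log(1 / r) for name, (n, r) in fractals.items()}
for name, d in sorted(dimensions.items(), key=lambda kv: kv[1]):
    print(f"{name:<20} dimension = {d:.4f}")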

I suppose this is just a sort of slapdash prelude to the realization that here, as elsewhere, what appears to be a Philosophical problem dissolves at Beenakker's boundary.

Tuesday, 15 November 2011

Beenakker's boundary

Reflecting on Hempel's dilemma, Carlo Beenakker has proposed a new boundary to demarcate physics from metaphysics- viz. what is computable within the age of the Universe.
A short essay of his on this theme can be read here.

We are familiar with computational constraints on computers, e.g. those arising from physical constants like the speed of light, Boltzmann's constant and Planck's constant.

Beenakker argues that if the Universe is a computer, and if constants like those above don't evolve in an inflationary manner, and if the Universe has an end in time, then certain problems- such as the question of the immortality of the soul- will always remain metaphysical, because there isn't enough time to do the computations necessary to reduce the question to one of physics.
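To give a feel for the orders of magnitude involved (the figures below are rough textbook values I am supplying, not anything taken from Beenakker's essay), one can combine Bremermann's limit, which bounds how many bits a kilogram of matter can process per second, with crude estimates of the mass and age of the observable Universe; on Beenakker's criterion, anything needing more elementary operations than this bound stays metaphysics.

import math

c    = 2.998e8      # speed of light, m/s
h    = 6.626e-34    # Planck's constant, J*s
k_B  = 1.381e-23    # Boltzmann's constant, J/K

mass = 1e53         # assumed mass of the observable universe, kg (order of magnitude)
age  = 4.35e17      # assumed age of the universe, s (~13.8 billion years)

# Bremermann's limit: at most c**2 / h bits per second per kilogram of matter.
bits_per_sec_per_kg = c ** 2 / h
total_ops = bits_per_sec_per_kg * mass * age
print(f"Bremermann limit: {bits_per_sec_per_kg:.2e} bits per second per kg")
print(f"Crude upper bound on operations within the age of the Universe: {total_ops:.1e}")

# Landauer's principle: erasing one bit costs at least k_B * T * ln(2) joules.
T = 2.7             # temperature of the cosmic microwave background, K
print(f"Minimum energy to erase one bit at {T} K: {k_B * T * math.log(2):.2e} J")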

I suppose someone has stated the dual to this proposition- viz. the metaphysics/physics boundary arises from evolved diachronicity (otherwise Hempel's dilemma is meaningless) and thus Computability theory exists only in the vanishing present of its P versus NP problem.

For which I, personally, blame David Cameron. That boy aint right.