
August 14, 2011


Neil Kandalgaonkar

There is an unstated assumption in the Simulation Argument: that a computer can simulate the world in enough detail to deceive the humans within it. This may seem like a simple matter of improving the technology of computation, but computation itself is bounded by physics (assuming the true substrate universe has the same limitations as our own).

For example, in the great science fiction novel Solaris by Stanislaw Lem, at one point the protagonist Kelvin worries that he is dreaming. He devises a quick test: he gets a computer to solve a complicated equation and records the answer. Then he solves the same equation by hand. The answers match. It seems inconceivable that Kelvin's dreaming brain could solve the equation any faster than his conscious mind, so he concludes he is not dreaming.

This is a case of using a computationally difficult task to prove that one is not in a simulation (assuming that the simulation does not also alter Kelvin's memory of the solution). In the novel, Kelvin proved only that his own brain wasn't generating the simulation. So now we just have to find a test that would exceed the computational capabilities of any plausible future simulation.
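The structure of Kelvin's test can be sketched in a few lines. The function names and the toy problem (factoring a semiprime, standing in for the novel's "complicated equation") are my own illustration, not anything from Lem:

```python
def kelvins_test(machine_solve, hand_solve, problem):
    """Kelvin's dream test: have an external computer solve a hard
    problem, then solve it independently and compare the answers."""
    recorded = machine_solve(problem)   # step 1: record the machine's answer
    checked = hand_solve(problem)       # step 2: redo the work yourself
    # Agreement implies the hard work was really done outside your head:
    # a dreaming brain couldn't solve it faster than the conscious mind.
    return recorded == checked

# Toy stand-in for the hard task: factor a semiprime by trial division.
def machine_solve(n):
    p = next(p for p in range(2, int(n**0.5) + 1) if n % p == 0)
    return (p, n // p)

hand_solve = machine_solve  # imagine this step done with pencil and paper

print(kelvins_test(machine_solve, hand_solve, 7907 * 7919))  # True
```

The same shape applies to any proposed "we are not simulated" test: pick a task whose honest solution demands more computation than the suspected simulator could plausibly spend.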

This may not be as hard as it sounds. Of course a simulation could easily fool a human's senses, but the world as we know it is filled with subatomic particles all interacting in parallel, and there's no way that a simulation can be operating at that level of detail. It might be possible to craft a test which exploited these features of the world. We already have DNA-based computers which derive answers to otherwise intractable problems through the parallel interactions of billions of molecules. Perhaps a similar technology can be used to show that we aren't in a simulation.

At the very least, I don't think it is obvious that we *cannot* craft such a test.

Neil Kandalgaonkar

Eh, I see that Bostrom answers my objection in his FAQ -- "If the Universe is not simulated to a quantum degree of accuracy, the simulation can be immediately exposed via Bell's inequality or some similar test."

Bostrom's answer is twofold:

a) that we don't really know how much computing power the "basement" universe has to work with, and maybe universe-smashing forces of computation are available. That seems a bit sketchy to me. It changes the nature of the proposition from a potential historical inevitability, even given the universe we are observing, to a science-fictiony speculation about realities that we have never observed.

b) that the simulation may include censorship of what is viewed or remembered by conscious beings. I find it implausible that a simulation could both be quantum-scale consistent (no matter what test we do in the lab) AND have universal, perfect surveillance of all intelligent beings, like guardian angels.

Still, Bostrom concedes that through such experiments one might be able to chip away at this question a bit. At the very least we could establish lower bounds on how powerful the simulation would have to be. But I think if and when we devise a test that would require (in effect) an entire universe dedicated to the simulation, we can safely discard the Simulation Argument.

Jim Vaughan

Great argument! I am interested in the possibility of infinite regress. The beings who designed our level of simulation would use the same arguments to reason that they too were part of a simulation, those above them would reason likewise, and so on...

As the speed of calculation is limited by the physics of each level (mass, the speed of light, time, etc.), the end of the regress, the "basement" level, would simply consist of all other possibilities existing at once.

Then, it starts to look very much like the universe described by Plotinus, 1700 years ago! Look it up, you'd be amazed.


In my opinion, this argument handily demonstrates some of the perils of deductive philosophy.

There are some whacking great assumptions built in there about what constitutes and is possible with high technology. It may be, as discussed for all of about 5 seconds in the podcast, that this sort of simulation is not possible, or even if it is possible, it may be that it is prohibitively expensive in terms of energy or even space.

Assuming some sort of physical reality that the simulation is running within, or even a regression of simulations, the substrate on which the simulation runs must have some structure upon which the details of the simulation are encoded. Now, as an earlier commenter touched on, if you are trying to simulate something down to the quantum scale (which might be the only feasible way to construct such a simulation), modelling all of the quantum-scale components is likely to take up at least as many components in the substrate, making the simulation itself more complex than the actual physical manifestation of the simulated world would be.

This would completely screw with the probabilistic aspect of the argument, as surely the likelihood of something being a simulation vs reality would then lean towards reality, given the increased complexity inherent in the simulation.

My actual point, however, is that there are some empirical questions concerning the details of technology and simulation that need to be answered well before you start bringing in deductive logic to lay out the landscape of possibilities and assign probabilities to them.

Laurie Thompson

This argument seems to miss one huge elephant in the room. Why do we expect advanced civilisations to be hugely technologically advanced but to have the ethics of Genghis Khan? One very strong argument against this, in my view, is that they wouldn't do it because it's highly immoral. After all, imagine knowingly creating millions of sentient simulants to relive the past 5,000 years of earth history: slavery, war, starvation, patriarchy and racism in abundance, to say nothing of what animals suffer. It's almost unthinkable that anyone would allow it; there would have to be some extraordinary necessity to do it, surely?


I strongly agree with Neil: Bostrom's analysis of future technology and the limitations on ancestor simulations is thoroughly naive. He focuses only on the processing power required and doesn't consider anything else at all. He mentions that using current nanotechnology it would be possible to build a computer of planetary mass that could run an ancestor simulation in a microsecond, as if this lends credence to his argument, but he doesn't qualify this with the fact that there would be extreme difficulties, in both time and energy, involved in building such a computer; he just says it is possible and therefore supports his argument.

Bostrom's paper could have been so much better if he had actually analysed the technological requirements in some depth (there wasn't even any mention of the Landauer limit, for God's sake). It would also have helped if he had derived the correct formula for the fraction of simulated individuals, rather than assuming that the average number of individuals in a post-human civilization is the same as the average number of individuals in non-post-human civilizations, a bizarre assumption to make in the absence of any sure knowledge of the 'real' universe, and thus getting a formula that only holds true in certain special circumstances. His recent paper patching the problem with his formula manages to argue, oddly enough, that from observations in our universe we can make assumptions about the 'real' universe. Frankly I can't help but think that Bostrom is trolling the philosophy community; either that, or he has managed to confuse himself.

There is also the issue of whether Bostrom's formula is meant to be observer dependent, representing fractions observed in this universe, or observer independent, representing fractions for the entire simulation hierarchy; he seems to argue from both perspectives in his papers. If the former, he can't say that his formula represents the probability that an individual is themselves simulated; if the latter, he can't use observations in this universe to argue that the variables in the formula tend to certain values.

Jake Ellison

Well, I hate to just jump in on the negative side here... But perhaps more elegantly: proposition no. 1 is that no tech society makes it to the level of development needed to create a simulated world. This proposition automatically includes all the negative assumptions in the above objections. Our society projects a future in which no tech roadblock will stand, so if that future is closed off, it is so because of a wholesale failure somewhere in our evolution... no matter the specific failure.

But I do have a question related to prop no. 1: if a simulated world simply is not possible, then what happens to the setup? Human beings continue down the road of tech development but never face the dilemma of prop 2, because prop 1 never really becomes an issue?


Re: Jake

Not quite. There is a problem with Bostrom's original formula that he has attempted to address in his recent paper "A Patch for the Simulation Argument". The problem is that he arbitrarily assumes that the average number of individuals in real civilizations and the average number of individuals in civilizations that run ancestor simulations are the same, leading to a formula that is not general and gives erroneous answers in certain circumstances.

While Bostrom discusses the problem in this paper, he does not give the correct form of his formula, which would be f = N/( (A/BF) + N ) rather than f = N/( (1/F) + N ) as he gives it originally. (Feel free to check the formula I give against the example in Bostrom's recent paper; note also that I have slightly rearranged his original formula for simplicity.) Here A is the average number of individuals per real civilization, both post-human and non-post-human; B is the average number of individuals per real post-human civilization; F is the fraction of real civilizations that have run simulations over the period for which the averages A and B are determined; N is the average number of ancestor simulations run by such civilizations; and f is the fraction of simulated individuals, purported to give the probability that we are ourselves simulated. (Erroneously, in my opinion, because any estimate we might make of these factors is observer dependent, not made from the position of one observing the entire simulation hierarchy.)

For the fraction of simulated individuals to tend to 1, we would require that A/BF << N. Now, the relationship between B and F is only defined at the extrema F=1 (where B=A) and F=0 (where B=0). In the first instance, F=1 gives A/BF = 1, reproducing a close approximation of Bostrom's original formula; in the second, F=0 gives N=B=0 and A/BF = infinity, so f=0. These limits are well defined, but between them B is completely undefined. Given that A is also uncertain, f is completely undefined in this region, and it is here that Bostrom's argument fails.
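To make the dependence concrete, here is a quick numerical sketch of the commenter's corrected formula; all the input values below are purely illustrative, not estimates anyone has defended:

```python
def simulated_fraction(A, B, F, N):
    """Fraction of simulated individuals, f = N / (A/(B*F) + N).

    A: average individuals per real civilization (post-human or not)
    B: average individuals per real post-human civilization
    F: fraction of real civilizations that run ancestor simulations
    N: average number of ancestor simulations per simulating civilization
    """
    if F == 0 or B == 0:
        return 0.0  # no simulations are ever run, so f = 0
    return N / (A / (B * F) + N)

# F = 1 with B = A recovers Bostrom's original form, f = N / (1 + N):
print(simulated_fraction(A=1e10, B=1e10, F=1.0, N=100))  # ≈ 0.990

# But if A/(B*F) is large (simulating civilizations are rare and small),
# f stays tiny even for a large N:
print(simulated_fraction(A=1e10, B=1e4, F=0.01, N=100))  # ≈ 1.0e-06
```

Between the two extremes, the answer swings anywhere from near 0 to near 1 depending on values of A, B and F that we have no way to measure, which is exactly the commenter's point.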

Bostrom suggests that we may extrapolate from the experience of our own civilization to give reasonable estimates of A and B; specifically, he contends that A~B. This is ludicrous because, even if we restrict our analysis to ancestor simulations so that we might extrapolate our knowledge up the simulation hierarchy, at each level above our own it is possible that unknown and materially very different civilizations abound, distorting A and B and making the contention that A~B unreasonable. But we need not even take this point to refute the simulation argument as currently presented, because the proposition that most if not all simulations will be ancestor simulations is clearly a weak one, and Bostrom himself admits in the above podcast that simulations may differ from the history of the civilization running them, making his reasoning for the values of A and B illegitimate.

What the simulation argument amounts to, when analysed properly, is that Bostrom's disjunction might be true, since A~B might be true; but we have absolutely no way to estimate the values of these two factors with any reliability, and thus cannot assign a probability to such a situation. Extrapolation from a sample of one does not an argument make.


Our reality is only simulated to very fine detail from our perspective. Scale is irrelevant because we only have our own reference frame.
The most convincing simulation theory, including how it relates to physics, is Brian Whitworth's.

It makes more sense than any other explanation of our reality.

We really are in a simulation, and the sooner we accept that, the sooner we can make real progress in understanding the universe.

Guy Gauvin

The problem is that all of this hinges on his assumption that the simulating realm that creates our universe is just like the simulated realm all about us. Is this not a class violation? In fact, we know nothing about the simulating realm that would create us other than the fact that it can host Turing machines, or universal computation. That's it. This in itself is an interesting attribute and says much, but it does not say what the speaker suggests.

Nick looks about at this present level, draws upon the characteristics of worlds at this level, and draws out how worlds at this level could host technologically mature realms that could simulate other realms. Then he uses this knowledge to conclude things about the inception of this very world, a level previous to his starting point. Inference does not work backwards like this. He is developing arguments all at one level of abstraction to infer things about a previous level, and this leads to problems.

It could be that the realm that gives rise to our level does not have laws of physics as we understand them. Ed Fredkin has suggested this very thing, for example. This meta-world might not even have causality as we understand it. But if it does exist, we know that it can give rise to a simulation of a world (ours) in which these things do indeed arise, including the possibility of technological maturity by sentient beings in that realm, leading them to make a myriad of simulations. That's all we know.


Neil, the assumption that's made in Solaris is that a simulation must be running in "real time" relative to non-simulated reality. But in existing computer simulations, time is just another variable. That is to say: simulated time is not something which just happens; it is actively simulated in the same way as any other phenomenon on the "inside".

Therefore, simulated time can be sped up, slowed down, or even paused relative to real time, and it would be completely undetectable from within the simulation. This allows any complex computations which arise from the simulation to be computed basically at the leisure of the simulator, since it can just refrain from progressing time until it has the correct answer.
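The point can be illustrated with a toy stepping loop: however long each step takes in real (wall-clock) time, the history visible from inside the simulation is identical. Everything here is a made-up sketch, of course, not anyone's actual simulator:

```python
import time

def run_simulation(steps, expensive_computation=None):
    """Advance simulated time step by step. Wall-clock time spent per
    step is invisible from inside: agents only ever see `sim_time`."""
    sim_time = 0
    observations = []
    for _ in range(steps):
        if expensive_computation:
            expensive_computation()  # simulator may stall here as long as it likes
        sim_time += 1                # ...but inside, exactly one tick has passed
        observations.append(sim_time)
    return observations

# Whether each step takes a microsecond or (here) an artificial delay,
# the record seen "from inside" is the same:
fast = run_simulation(5)
slow = run_simulation(5, expensive_computation=lambda: time.sleep(0.01))
assert fast == slow == [1, 2, 3, 4, 5]
```

So a Kelvin-style timing test only constrains the ratio of computation to simulated time, which the simulator fully controls.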


Ha! You're all missing an important point: the simulation is just of one person: me! My bandwidth for perception is fairly limited, and nothing a set of supercomputers couldn't simulate. I personally haven't even seen quantum particles; all that needed to be done was feed me some text to cover that end of the universe.


... also, this simulation probably only lasts for a few microseconds at most... I've just been pre-loaded with fake memories to make me behave more realistically. I'm probably part of some AI project...

Peter Hardy

Wow, I just happened to be reading Bostrom's original paper, and then I thought I'd download some new Philosophy Bites and this had appeared. Like déjà vu and misperception, this sort of coincidence is surely a consideration in favour of the hypothesis.

Scott B.

Laurie Thompson makes an excellent point, but I think the most plausible argument against it would be the same one I would make against the Fermi paradox: a disinclination on the part of technologically advanced forms of life (despite Nigel's finding this option the most implausible on its face).

The Fermi paradox asks: if much of the universe is older than our part of it, why hasn't it had more than enough time to develop intelligent life that should have made itself known by now?

Perhaps because, even though advanced science is still in its infancy on earth, we've already invented drugs that can, for a time, produce a serene state of ecstasy. So one would have to assume that devising the means of altering one's consciousness so as to produce permanent bliss would be achievable far in advance of faster-than-light travel or whatnot.

But once you've mastered that who would give a fuck about space travel, or for that matter, creating simulations!



I read all your postings and I am quite surprised that you all speak about this issue so coolly. I first heard about the theory in a documentary about astrophysics, and I was quite shocked. I had watched The Matrix, but I didn't think that our own universe could be a computer simulation. At first I thought I should take things easier, but after some weeks this theory made me a little depressed, because it seemed that it doesn't matter whether we are alive or dead; we are only computer figures in a game. I could imagine that our complicated brains, our bodies, plants, animals and cells are all computer simulations. But if that is really the case, we have to get used to it. I will also order the book "Programming the Universe" by Seth Lloyd in order to learn more and read what he thinks about this issue.
