Why we are likely NOT living in a simulation

I typically keep this blog about computer science, but I also dabble in a bit of philosophy. Bostrom's simulation argument struck me when I first read it years ago, and since then I've cycled a few times between disputing what I believed were flawed assumptions and cautiously accepting it after realizing my mistake.

The simulation argument in its simplest form is that one of the following must be true:
  1. simulations of humans can't be built, or
  2. simulations of humans won't be built, or
  3. we are almost certainly living in a simulation
I think this argument is absolutely valid, so one of those outcomes is true.

Claiming that #3 is the most likely case is what's known as the simulation hypothesis, and it counts Elon Musk among its proponents. Sabine Hossenfelder recently argued against the simulation hypothesis by essentially asserting that #1 above is plausible, but I actually think #2 is the most likely case. For reference, Bostrom calls future civilizations capable of running ancestor simulations "post-humans", so that's the term I'll use.

I don't think the paper Sabine references demonstrates anything conclusive about the plausibility of the simulation hypothesis, because a) the model studied in the paper may bear no resemblance to how such a simulation would actually operate, and b) Bostrom's argument doesn't really depend on simulating physics.

I think (a) is self-explanatory, but for (b), the argument is as follows:
  1. post-humans must understand how the human mind works up to and including consciousness in order to simulate a civilization of minds,
  2. since the mind is understood, then the contents of a human mind are available to the simulation, including all senses, beliefs, etc.
  3. since simulating physics is not the point of an ancestor simulation (just run a physics simulation in that case), an ancestor simulation will be about simulating human minds and their aggregate behaviour to simulated sensory inputs
Thus, a simulation merely needs to add the relevant sensory facts to our minds (e.g., I sensed that I just typed a key, stopped, and took a bite of an apple); it doesn't actually have to physically simulate every intermediate step, including the apple with a bite taken out of it.

Also, our senses needn't even be consistent with the senses of other minds in the simulation. The unreliability of eyewitness testimony, for example, demonstrates that we have a high tolerance for inconsistent observations in every domain except science.

This covers macroscopic phenomena we can directly sense, but what about actual physics experiments that test physics beyond our sensory capabilities? Since these experiments are all mediated by our senses as well, and since the simulation has access to the contents of our minds, including what physics properties we are currently testing, the simulation really just needs to present data that conforms to the probability distribution for the phenomena being tested.
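As a toy illustration of what "presenting data that conforms to the right distribution" could mean (a Python sketch with made-up parameters, not a claim about how an actual ancestor simulation would work): instead of evolving a wavefunction for a double-slit experiment, a simulator could sample detector hits directly from the known interference pattern.

```python
import math
import random

# Hypothetical sketch: rather than simulating the underlying physics of a
# double-slit experiment, sample detector hits from the known interference
# intensity I(x) proportional to cos^2(pi * d * x / (wavelength * D)),
# using rejection sampling. All parameter values here are arbitrary.
def sample_hit(slit_sep=1.0, wavelength=0.1, screen_dist=10.0):
    while True:
        x = random.uniform(-2.0, 2.0)  # candidate position on the screen
        intensity = math.cos(math.pi * slit_sep * x / (wavelength * screen_dist)) ** 2
        if random.random() < intensity:  # accept in proportion to I(x)
            return x

random.seed(1)
hits = [sample_hit() for _ in range(10_000)]

# Hits cluster at the interference maxima (integer x) and avoid the
# minima (half-integer x), matching what an experimenter would record.
near_max = sum(abs(h) < 0.1 for h in hits)
near_min = sum(abs(h - 0.5) < 0.1 for h in hits)
print(near_max, near_min)
```

The point is that the sampled data is statistically indistinguishable from a real experiment's output, at a tiny fraction of the cost of simulating the physics.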

Of course this may be sounding increasingly implausible, but it is still hypothetically possible for a post-human civilization, so case #1 can't be asserted without some monstrous loopholes. However, if we take a step back and ask ourselves why we simulate things, it seems plausible that post-humans simply won't have much interest in ancestor simulations.

In general, we run simulations when we lack a sufficiently precise or accurate understanding of a problem. For instance, a closed-form mathematical description of a process P negates any reason to simulate P.
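To make this concrete, here's a minimal Python sketch (with illustrative function names of my own choosing) contrasting a closed-form answer with a step-by-step simulation of the same question: once the formula is known, the simulation is redundant.

```python
import math

# Closed form: time for an object dropped from height h to hit the ground,
# derived directly from h = (1/2) g t^2.
def drop_time_closed_form(h, g=9.81):
    return math.sqrt(2 * h / g)

# Simulation: step the same system forward with Euler integration,
# as we would if no closed-form solution were known.
def drop_time_simulated(h, g=9.81, dt=1e-5):
    y, v, t = h, 0.0, 0.0
    while y > 0:
        v += g * dt
        y -= v * dt
        t += dt
    return t

# Both agree (about 1.43 s for h = 10 m); the formula makes the
# simulation pointless.
print(drop_time_closed_form(10.0))
print(drop_time_simulated(10.0))
```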

Asserting that post-humans would run ancestor simulations is to simultaneously assert that post-humans won't have a satisfactory understanding of whatever problems a full ancestor simulation would answer. That by itself already seems less plausible, because a civilization capable of simulating a human mind with perfect fidelity would likely understand human behaviour.

However, it's still a possibility, since aggregate behaviour doesn't always tractably follow from microscopic behaviour. For example, we have a pretty good grasp of how individual molecules behave, but predicting the weather remains intractable because of a) the sheer number of interacting molecules, and b) the fact that the equations governing individual particle motion become chaotic when combined to describe systems of many particles. However, chaotic systems still exhibit regular macroscopic behaviour, which we study under the banner of statistical mechanics.

Stochastic calculations obviate the need for direct simulation in most cases, so even if human behaviour is similarly chaotic in aggregate, there is likely a statistical mechanics for human behaviour that would similarly reduce or eliminate the need for direct simulation of conscious minds.
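The idea that chaotic microscopic behaviour can still have predictable statistics is easy to demonstrate with a toy example in Python (a sketch only, obviously not a claim about post-human methods): the logistic map is chaotic trajectory by trajectory, yet its long-run statistics are known in closed form, so no simulation is needed to predict its averages.

```python
import random

# The logistic map x -> 4x(1 - x) is chaotic: trajectories starting from
# nearby points diverge exponentially, so individual futures are
# unpredictable in practice.
def logistic_trajectory(x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(4 * xs[-1] * (1 - xs[-1]))
    return xs

random.seed(0)
traj = logistic_trajectory(random.random(), 100_000)

# Yet the long-run statistics are regular: the invariant density is
# 1 / (pi * sqrt(x(1 - x))), whose mean is exactly 1/2. A "statistical
# mechanics" of the map predicts the time average without simulating it.
mean = sum(traj) / len(traj)
print(mean)  # close to 0.5
```

A statistical mechanics of human behaviour would play the same role: deliver the aggregate answers directly, making the simulation of individual minds unnecessary.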

Thus, it's not obvious that post-humans would need much from ancestor simulations. I therefore find it fairly unlikely that post-humans would be interested in running such simulations considering the costly resources they would require, and thus, we likely are not living in a simulation.

