Why we are likely NOT living in a simulation

I typically keep this blog about computer science, but I also dabble in a bit of philosophy. I was struck by Bostrom's simulation argument when I first read it years ago, and since then I've cycled a few times between disputing what I believed were some of its assumptions and cautiously accepting it after realizing my mistake.

The simulation argument in its simplest form is that one of the following must be true:
  1. simulations of humans can't be built, or
  2. simulations of humans won't be built, or
  3. we are almost certainly living in a simulation
I think this argument is absolutely valid, so one of those outcomes is true.
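
The "almost certainly" in #3 comes from a counting argument, which Bostrom's paper formalizes along roughly the following lines (the notation here is my own simplified paraphrase):

  \[
    f_{\mathrm{sim}} \;=\; \frac{f_p N}{f_p N + 1}
  \]

where f_p is the fraction of human-level civilizations that reach a post-human stage and choose to run ancestor simulations, and N is the average number of simulated ancestor histories each such civilization runs. If #1 or #2 holds, then f_p N is roughly zero; otherwise f_p N is enormous and f_sim approaches 1, i.e. almost every mind with experiences like ours is a simulated one.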

Claiming that #3 is the most likely case is what's known as the simulation hypothesis, which counts Elon Musk among its proponents. Sabine Hossenfelder recently argued against the simulation hypothesis by basically asserting that #1 above is plausible, but I actually think #2 is the most likely case. For reference, Bostrom calls future civilizations capable of running ancestor simulations "post-humans", so that's the term I'll use.

I don't think the paper Sabine references demonstrates anything conclusive about the plausibility of the simulation hypothesis, because a) the model studied in the paper may bear no resemblance to how such a simulation might actually operate, and b) Bostrom's argument doesn't really depend on simulating physics.

I think (a) is self-explanatory, but for (b), the argument is as follows:
  1. post-humans must understand how the human mind works up to and including consciousness in order to simulate a civilization of minds,
  2. since the mind is understood, the contents of a human mind are available to the simulation, including all senses, beliefs, etc., and
  3. since simulating physics is not the point of an ancestor simulation (just run a physics simulation in that case), an ancestor simulation will be about simulating human minds and their aggregate responses to simulated sensory inputs
Thus, a simulation merely needs to insert the relevant sensory facts into our minds, e.g. the sense that I just typed a key, stopped, and took a bite of an apple; it doesn't actually have to physically simulate every step, including the apple with a bite taken out of it.
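
To make that concrete, here's a minimal sketch of what I mean by a mind-level simulation; the types and names are hypothetical and purely illustrative, not a claim about how such a system would actually be built:

  // Hypothetical sketch: the simulation writes sensory facts directly into a
  // simulated mind instead of deriving them from a physics engine.
  using System;
  using System.Collections.Generic;

  class Mind
  {
      public List<string> Percepts { get; } = new List<string>();
      public void Sense(string percept) => Percepts.Add(percept);
  }

  class AncestorSimulation
  {
      public void Step(Mind mind)
      {
          // No keyboard object, no apple geometry, no physics step: just the
          // sensory facts the mind would have experienced.
          mind.Sense("pressed a key");
          mind.Sense("took a bite of an apple");
      }
  }

  class Program
  {
      static void Main()
      {
          var mind = new Mind();
          new AncestorSimulation().Step(mind);
          foreach (var percept in mind.Percepts)
              Console.WriteLine(percept);
      }
  }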

Also, our senses needn't even be consistent with the senses of other minds in the simulation. Eyewitness testimony is notoriously unreliable, for example, which suggests we have a high tolerance for inconsistent observations in every domain except science.

This covers macroscopic phenomena we can directly sense, but what about actual physics experiments that probe physics beyond our sensory capabilities? Since these experiments are also mediated by our senses, and since the simulation has access to the contents of our minds, including which physical properties we are currently testing, the simulation really just needs to present data that conforms to the probability distribution for the phenomenon being tested.
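
Again as a purely illustrative sketch (the names, and the choice of a Gaussian, are my own assumptions, not a claim about any real experiment): all the simulation needs is a source of readings that match the statistics the experimenters expect.

  // Hypothetical sketch: detector readings are sampled on demand from the
  // distribution predicted for the phenomenon under test, with no underlying
  // physical simulation.
  using System;

  class ExperimentFeed
  {
      private readonly Random rng = new Random();

      // One reading drawn from a Gaussian with the predicted mean and spread,
      // via the Box-Muller transform.
      public double NextReading(double predictedMean, double predictedStdDev)
      {
          double u1 = 1.0 - rng.NextDouble();  // avoid log(0)
          double u2 = rng.NextDouble();
          double standardNormal =
              Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Sin(2.0 * Math.PI * u2);
          return predictedMean + predictedStdDev * standardNormal;
      }
  }

  class Program
  {
      static void Main()
      {
          var feed = new ExperimentFeed();
          for (int i = 0; i < 5; i++)
              Console.WriteLine(feed.NextReading(predictedMean: 0.0, predictedStdDev: 1.0));
      }
  }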

Of course this may sound increasingly implausible, but it is still hypothetically possible for a post-human civilization, so case #1 can't be asserted without carving out some monstrous loopholes. However, if we take a step back and ask ourselves why we simulate things at all, it seems plausible that post-humans simply won't have much interest in ancestor simulations.

In general, we run simulations when we don't have a sufficiently precise or accurate understanding of a problem. For instance, a closed-form mathematical description of a process P negates any reason to simulate P.
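
As a toy example of my own (free fall under gravity, nothing to do with ancestor simulations specifically): once you have the closed form, the step-by-step simulation tells you nothing new.

  // Time for an object to fall h metres from rest: the closed form
  // t = sqrt(2h / g) makes the numerical simulation redundant.
  using System;

  class FallingBody
  {
      const double g = 9.81;

      static double ClosedForm(double h) => Math.Sqrt(2 * h / g);

      static double Simulate(double h, double dt = 1e-6)
      {
          double y = h, v = 0, t = 0;
          while (y > 0)
          {
              v += g * dt;   // naive Euler integration
              y -= v * dt;
              t += dt;
          }
          return t;
      }

      static void Main()
      {
          Console.WriteLine(ClosedForm(10.0));  // ~1.43 s
          Console.WriteLine(Simulate(10.0));    // ~1.43 s, after over a million steps
      }
  }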

To assert that post-humans would run ancestor simulations is to simultaneously assert that post-humans won't have a satisfactory understanding of whatever problems a full ancestor simulation would answer. That by itself already seems less plausible, because a civilization capable of simulating a human mind with perfect fidelity would likely already understand human behaviour.

However, it's still a possibility, since aggregate behaviour doesn't always tractably follow from microscopic behaviour. For example, we have a pretty good grasp of how individual molecules behave, but predicting the weather is still intractable because of a) the sheer number of interacting molecules, and b) the fact that the equations governing individual particle motion become chaotic when combined to describe systems of many particles. However, chaotic systems still have some regular macroscopic behaviour, which we study under "statistical mechanics".

Such stochastic calculations obviate the need for direct simulation in most cases, so even if human behaviour is chaotic in aggregate, there is likely a statistical mechanics of human behaviour that would reduce or eliminate the need to directly simulate conscious minds.
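
A toy version of that point (my own example, with a symmetric random walk standing in for "chaotic aggregate behaviour"): simulating thousands of individual walkers only reproduces a spread we already know in closed form.

  // For a symmetric ±1 random walk, the mean squared displacement after
  // n steps is exactly n; simulating thousands of walkers merely confirms it.
  using System;
  using System.Linq;

  class RandomWalkSpread
  {
      static void Main()
      {
          const int steps = 10_000;
          const int walkers = 2_000;
          var rng = new Random(42);

          // Direct simulation: track every walker individually.
          double meanSquare = Enumerable.Range(0, walkers)
              .Select(_ =>
              {
                  long x = 0;
                  for (int i = 0; i < steps; i++)
                      x += rng.Next(2) == 0 ? -1 : 1;
                  return (double)(x * x);
              })
              .Average();

          // Statistical result: no per-walker simulation needed.
          Console.WriteLine($"simulated <x^2>: {meanSquare:F0}, predicted: {steps}");
      }
  }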

It's thus not obvious that post-humans would need much from ancestor simulations. Considering the costly resources such simulations would require, I find it fairly unlikely that post-humans would be interested in running them, and therefore we are likely not living in a simulation.
