Playing dice: Randomness, determinism and the quantum world

As generative artists we use “randomness” all the time, but what does that actually mean? What’s the difference between unexpected, random and chaotic? Does the universe contain any truly random events, or is it operating like clockwork, ticking from one event to the next?

I got thinking about determinism while discussing randomness in my article ‘What is Generative Art’. After a bit of research, I realised that I needed to know more about quantum physics to be able to investigate determinism in the way I wanted to. I did a 12-week introductory course about the quantum world and, while there’s still much for me to understand, I did discover a lot that is relevant to generativity.

 

Kinda random

In generative art we sometimes throw around the word ‘random’ when we actually mean pseudorandom. A pseudorandom number generator (PRNG) is an algorithm that takes an initial value (known as a seed) and produces a numeric result.

If we run a PRNG 1000 times with 1000 different seeds, it will give us 1000 different results, all seemingly randomly distributed. This is what’s happening when we use a function like Math.random() - it takes a seed value, usually from the computer’s clock, spins it through an algorithm, and gives us a different result for each seed.

 

However, if we run the PRNG 1000 times with the same seed, it will always give us the same result. It’s not random at all; it’s deterministic.
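
To make that concrete, here is a minimal sketch of a seeded PRNG - the well-known mulberry32 algorithm, used purely as an illustration (it is not necessarily what Math.random() uses under the hood in any particular engine):

    // A small seeded PRNG (the mulberry32 algorithm).
    // Identical seeds always produce identical sequences.
    function mulberry32(seed) {
      return function () {
        let t = (seed += 0x6D2B79F5);
        t = Math.imul(t ^ (t >>> 15), t | 1);
        t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
        return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
      };
    }

    const a = mulberry32(42);
    const b = mulberry32(42);
    console.log(a(), a(), a()); // three "random-looking" numbers in [0, 1)
    console.log(b(), b(), b()); // exactly the same three numbers again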

This is usually sufficient for our purposes as generative artists, but it seems to me an important conceptual point.

 

Truly random

‘True’ random number generators also exist, and they’re used in applications like cryptography and banking. These generators often draw data from a natural source, like atmospheric noise or radioactive decay.
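
For contrast with Math.random(), the Web Crypto API (available in browsers and recent Node) fills a buffer from the operating system’s entropy pool - cryptographically strong rather than clock-seeded, though not necessarily quantum in origin:

    // Web Crypto: values drawn from the OS entropy pool, rather than
    // from a clock-seeded algorithm like Math.random().
    const buf = new Uint32Array(4);
    crypto.getRandomValues(buf);
    console.log([...buf].map((n) => n / 2 ** 32)); // four values in [0, 1)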

[Image: dramatic lightning spreading across a purple, cloudy sky]
 

But is chaos really random?

Newtonian physics tells us that all motion is theoretically predictable. If we know all the relevant facts about the start conditions of a situation (masses of objects, velocities, etc.), then we can predict the outcome.

If I toss a coin, I can’t predict how it will land, but that’s just because I don’t have enough information. I don’t know exactly where my thumb will strike the coin, or how fast, so I can’t make an accurate prediction. However, it is possible to build a machine that tosses a coin in a predictable, repeatable way. For example, this one tosses coins that land the same way up as they started, 100% of the time.

[Dynamical Bias in the Coin Toss - Persi Diaconis, Susan Holmes, Richard Montgomery]

 

All of the “randomness” in a natural coin toss is born of the start state - differences in the speed of the flick, the angle of the thumb, and so on. This is similar to the seed in a pseudorandom number generator in code: differences in the seed produce seemingly random results, but identical seeds produce identical results.

If we increase the complexity of the system and chuck 10,000 coins down a spiral staircase at once, it becomes impossible to predict the outcome. At a practical level, we could not produce the same result twice in a row, even if we built a machine to throw the coins.

[Photo by Johannes Plenio on Pexels]

 

One tiny difference in the start state - a speck of dust settled on a coin, or a change in air pressure - causes one coin to hit another with minutely less velocity than before, affecting the spin of the second coin and making it strike a third coin that, last time, it merely passed by… and so on. A small, almost imperceptible change multiplies out, transforming the outcome we see.
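
This sensitivity to initial conditions is easy to demonstrate in code. The sketch below uses the standard logistic map rather than coins (simulating 10,000 colliding coins is beyond a blog post), but the principle is the same: a fully deterministic rule, wildly different outcomes from nearly identical starts.

    // Deterministic chaos in miniature: the logistic map x -> 4x(1 - x).
    // The rule is fully deterministic, yet two start states differing by
    // one part in a billion end up nowhere near each other.
    function iterate(x0, steps) {
      let x = x0;
      for (let i = 0; i < steps; i++) x = 4 * x * (1 - x);
      return x;
    }

    console.log(iterate(0.3, 50));        // some value in (0, 1)
    console.log(iterate(0.3 + 1e-9, 50)); // a completely different value
    console.log(iterate(0.3, 50));        // identical start, identical result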

 

Like clockwork

According to Newtonian physics, a chaotic system, like the weather or 10,000 coins on a staircase, is theoretically no different to a single coin toss. Different outcomes still come from differences in the start state, and the reason we can’t predict them is down to inexactness in measurements and setup.

Theoretically - if we had better control over the environment, if we could build a perfect machine and run it in a temperature-controlled vacuum, if we could guarantee that each coin started in exactly the same position as last time, that there were no dents in the stairs, and so on - then the outcome would be repeatable.

[Image: close-up of cogs inside a watch]
 

Newtonian mechanics tells us that the universe is like a wind-up toy or a Rube Goldberg machine. Each state is determined by the state preceding it, and that state leads predictably to the next. The universe is chaotic, but it is a deterministic chaos, and our inability to accurately predict outcomes in complex systems is a measurement problem, not randomness.

 

Is there any real randomness?

Quantum mechanics tells a different story, of a universe that is probabilistic, not deterministic. If we use a machine to fire tennis balls at a wall, one at a time, we can make them hit the wall in the same place every time. But if we fire electrons at a sensor screen, the positions at which they strike it spread out like a cloud.

As each electron leaves the emitter, its position and velocity can only be determined to a limited level of accuracy. Crucially, Heisenberg’s Uncertainty Principle tells us that this is not an observation problem. The issue is not that we cannot measure the electron precisely, but that it does not exist precisely. The electron has neither an exact position nor an exact velocity; instead, it has a wave of possible states with different probabilities.

We can predict the probability of electrons hitting the sensor in any given position, but we cannot say where any individual electron will land.

Another example of the probabilistic nature of quantum phenomena is radioactive decay. Given a significant number of unstable atoms, we can predict the overall rate of decay, usually stated as the ‘half-life’ - the amount of time it takes for half of the atoms to decay. But we cannot know exactly when any individual atom will decay; we can only calculate the probability that it will have done so after any given amount of time.
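
A toy simulation makes the distinction clear: we can predict the aggregate curve with confidence while having no idea which atom goes next. (Math.random() is, of course, only a pseudorandom stand-in for what in nature is genuine quantum chance, and the numbers here, like the 100-tick half-life, are arbitrary.)

    // Toy half-life simulation. Each tick, every remaining atom decays
    // independently with probability p, chosen so the population halves
    // every HALF_LIFE ticks.
    const HALF_LIFE = 100;                // in ticks, arbitrary units
    const p = 1 - 2 ** (-1 / HALF_LIFE);  // per-tick decay probability
    let remaining = 100_000;

    for (let tick = 1; tick <= 300; tick++) {
      let survivors = 0;
      for (let i = 0; i < remaining; i++) {
        if (Math.random() >= p) survivors++; // this atom survives the tick
      }
      remaining = survivors;
      if (tick % HALF_LIFE === 0) console.log(tick, remaining);
      // roughly 50000 at t=100, 25000 at t=200, 12500 at t=300
    }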

 

Weirdness with waves

When we talk about light being both a particle and a wave, what we mean is that it sometimes exhibits characteristics of waves and sometimes of particles.

Let’s look at an experiment known as “Young’s Double Slits”. Imagine a card with two slits cut into it, held horizontally over a surface, with sand being poured over it. As the grains of sand fall through the slits and land on the surface below, they create two piles. This is particle behaviour.

[Diagram: particle behaviour]

 

Now imagine a similar situation, but with sound instead of sand. Sound travels as a wave so, as it passes through the slits, it diffracts - spreading outwards. The waves coming from the two slits meet each other, creating interference. Where two peaks meet, they reinforce each other (constructive interference), and where a peak meets a trough, they cancel each other out (destructive interference). This creates areas where the sound is louder and areas where it is quieter.

This effect is easy to observe. In high school physics, my teacher took us out to the field, played a tone from two speakers, and had us walk back and forth in front of them; I clearly remember hearing the bands of louder and quieter sound. You can also notice this effect with waves on an otherwise still pool of water.

Diffraction and interference are wave behaviours.

[Diagram: wave behaviour - diffraction]

[Diagram: wave behaviour - interference]

 

Now let’s see what happens if we do this experiment at a quantum scale, with photons. We set up two slits, with a sensor behind them that detects when a photon hits it. If photons are particles, we would expect to see two areas of dots on the sensor (like the two piles of sand). If photons are waves, we would expect to see smooth bands of light and dark - we wouldn’t be able to detect individual photons at all, any more than we can detect individual “sound particles”.

[Diagram: expected pattern if photons are particles]

 

[Diagram: expected pattern if photons are waves]

 

In fact, we see bands of dots.

[Image: photons in Young’s double slit experiment]

 

This implies that photons act as waves while they travel from the emitter to the sensor, causing diffraction and interference, and then act as particles when they hit the sensor, leaving individual marks. In other words, photons act as waves until they interact with something, at which point they act as particles.

There are two more important points here, before we move on to how this relates to determinism.

The first is that this doesn’t just happen with photons. It also happens with electrons and even whole atoms. Wave/particle duality is not just a feature of light; it’s also a feature of matter.

The second involves a modification to the experiment. In the new version, we fire particles one at a time. Remarkably, we still see interference bands, which build up dot by dot as more particles are fired. It’s as if each particle goes through both slits, creating two sets of diffracted waves, which interfere. One particle creates interference with itself.
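
We can mimic this dot-by-dot build-up with a toy model: sample each hit position from a fringe-shaped probability distribution (cos² fringes under a smooth envelope, via rejection sampling). This is probability-made-visible as an illustration, not a solution of the Schrödinger equation.

    // Each "particle" lands at a position sampled from a fringe-shaped
    // probability distribution, via simple rejection sampling.
    function sampleHit() {
      for (;;) {
        const x = (Math.random() - 0.5) * 10;     // position on the screen
        const fringes = Math.cos(3 * x) ** 2;     // interference term
        const envelope = Math.exp(-(x * x) / 8);  // diffraction envelope
        if (Math.random() < fringes * envelope) return x;
      }
    }

    // Accumulate hits into bins: the banded pattern emerges over time.
    const bins = new Array(40).fill(0);
    for (let i = 0; i < 10_000; i++) {
      const x = sampleHit();                   // in (-5, 5)
      bins[Math.floor(((x + 5) / 10) * 40)]++;
    }
    // One line per screen position; line length = number of hits there.
    console.log(bins.map((n) => "#".repeat(Math.round(n / 25))).join("\n"));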

 

A probabilistic universe

Coming back to determinism - the choice of path for each electron seems to be completely (genuinely, truly, really) random. In Newtonian mechanics, if we shoot two balls from a machine and each lands in a different spot, we can conclude that something was different about the two scenarios - perhaps one ball was hit by a gust of wind. In quantum mechanics, two electrons can be fired at a sensor and land in different places, and there is no cause that we know of.

It seems that the universe is actually random - run by probability and chance.

 

Hidden-variable interpretation

To put a spanner in the works - Einstein rejected the probabilistic universe and famously said, “God does not play dice”.

Instead, Einstein supported the hidden-variable theory, which says that there must be some underlying cause for the different paths the electrons take (and other such random-seeming quantum phenomena). Hidden-variable theory assumes there is a hidden world of ‘gusts of wind’ that, once discovered, will explain the different outcomes and bring us back to our predictable, deterministic universe.

Einstein wrote a paper arguing that the idea of a universe run by probability was incomplete, but he withdrew it himself after discovering problems with his specific theory. The hidden-variable interpretation is generally not well supported, and the theories that back it are more complicated than other interpretations.

 

Copenhagen interpretation

When an electron in the double slit experiment leaves the emitter, there are many possible paths it could travel. It could pass through the left slit, it could pass through the right slit, it could hit the card. The Copenhagen interpretation says that the electron exists in superposition - in all of these possible paths at once, as a wave of probabilities. It only collapses into one state when it has to, due to an interaction (in this case, hitting the sensor). This interpretation, mostly attributed to Werner Heisenberg and Niels Bohr, supports a probabilistic universe: the state is “chosen” from all the possible states at random.

The wave of probabilities is described mathematically by the wave function and Schrödinger’s wave equation, which we don’t really need to worry about here - but I bring it up because I wanted to mention, anecdotally, that Schrödinger came up with his equation after sequestering himself in a remote mountain cabin on a skiing holiday with a mistress, where, according to some sources, he worked with pearls in his ears to block out distractions.

Like Einstein, Schrödinger struggled with the idea of chance underpinning the universe, later saying of the Copenhagen interpretation, "I don't like it, and I'm sorry I ever had anything to do with it."

Nevertheless, this interpretation is the most commonly taught in modern physics. It has the virtue of simplicity, but many physicists dislike it because of the special role it gives to measurement.

 

Many-worlds interpretation

This interpretation is similar to Copenhagen, but it states that there is no “collapse” into a single outcome - rather, every possible outcome is played out in a different world. The multiverse as a whole, then, would be deterministic, with all possible outcomes existing somewhere, as determined by what is possible, and each individual universe being only one path through the possibilities.

This idea seems very unwieldy to me. We’re not talking about a Sliding Doors situation where missing a train or not creates two different lives for Gwyneth Paltrow - but rather that every. single. quantum. possibility. creates a whole new world. Surely billions of new worlds a second, most indistinguishable from their neighbours. On a purely intuitive level this seems wrong to me - but there is no real conclusion one way or another.

 

Probability soup

There are more interpretations of quantum behaviour than these, but I like Copenhagen. A universe made not of solid spheres of matter, orbiting one another like planets and jostling around like snooker balls - but rather of ethereal fields of mathematical probabilities, peaking into moments of physical existence only when they interact with one another, cementing a state at random before spreading out into a wave of possibility again for the next scenario. Much in this feels appealing to me.

Einstein and Schrödinger were loath to give up the deterministic universe that 19th-century physics (and religion) had presented them with, but to me there is not much comfort or joy in clockwork. I prefer possibility.

 

Scaling up

One thing I am particularly interested in, and struggling to get a clear picture of, is how much this quantum randomness affects the macro world. How much does it play into everyday experience?

It’s certainly possible to rig up a system where a macro-world event is affected by quantum randomness. For example, imagine a party popper that explodes only when a certain radioactive atom decays. The time at which the party popper goes off would be genuinely random.
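
A sketch of that idea, with the quantum part faked in software - decayDetected() here is a hypothetical stand-in for real hardware, such as a Geiger counter watching an unstable sample:

    // Hypothetical quantum-triggered party popper.
    const P_DECAY = 0.001; // made-up chance of a decay per tick
    function decayDetected() {
      return Math.random() < P_DECAY; // stand-in for a real quantum event
    }

    let tick = 0;
    const timer = setInterval(() => {
      tick++;
      if (decayDetected()) {
        console.log(`Pop! after ${tick} ticks`); // the macro-scale event
        clearInterval(timer);
      }
    }, 10);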

But how much does this happen naturally, and in what scenarios? Does it affect 10,000 coins on a staircase, the weather, our thought processes? The conventional view is that most “randomness” comes from chaos, but there are exceptions and edge cases, and I’m keen to get a fuller picture of these.

 

Generative art

Using a pseudorandom number generator for generative art is certainly convenient, produces interesting results and allows for a world of possibilities. However, I think that on a conceptual and aesthetic level there is a lot to be explored outside of pseudorandomness - sources of natural random and non-random data.

Here are some questions I’m keen to explore in future generative work (see the sketch after this list for one way to start):

  • How do different sources of randomness affect the experience and impression of a work?

  • How can we communicate or demonstrate the random source within or alongside the work?

  • What other sources of data can we use for generation?

  • Does it make a difference to how we think about the work if we use a pseudorandom versus a true random generator?

  • Further, does it make a difference if the “true random” is actually chaotically deterministic, versus quantum random?

  • What other conceptual threads can be pulled at on the topic of quantum behaviour and randomness?
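
One practical way into several of these questions is to make the source of randomness a swappable parameter, so the same piece can be rendered from a clock-seeded PRNG, from system entropy, or from a recorded natural dataset. A sketch - drawPiece is a hypothetical stand-in for whatever actually renders the work:

    // The randomness source is just a function returning values in [0, 1).
    function entropyRand() {
      const buf = new Uint32Array(1);
      crypto.getRandomValues(buf);   // OS entropy via Web Crypto
      return buf[0] / 2 ** 32;
    }
    function fromDataset(values) {   // e.g. logged weather readings in [0, 1)
      let i = 0;
      return () => values[i++ % values.length];
    }

    function drawPiece(rand) {
      for (let k = 0; k < 3; k++) {
        console.log(`mark at (${rand().toFixed(3)}, ${rand().toFixed(3)})`);
      }
    }

    drawPiece(Math.random);                    // clock-seeded pseudorandom
    drawPiece(entropyRand);                    // entropy-backed
    drawPiece(fromDataset([0.12, 0.83, 0.5])); // natural / recorded data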

 

With thanks to EDG and orbita for their physics advice!

 