Randomness and Time
20 February 2011

When someone uses the word "random", part of me immediately wants a definition.[1]
One notion of randomness is essentially that of lawlessness. For example, I was recently slogging through a book that rejects the proposition that quantum-level events are determined by hidden variables, and insists that the universe is instead irreducibly random. The problem that I have with such a claim is that it seems incoherent.
There is no being without being something; the idea of existence is no more or less than that of properties in the extreme abstract. And a property is no more or less than a law of behavior.
Our ordinary discourse does not distinguish between claims about a thing and claims about the idea of a thing. Thus, we can seem to talk about unicorns when we are really talking about the idea of unicorns. When we say that unicorns do not exist, we are really talking about the idea of unicorns, which is how unicorns can be this-or-that without unicorns really being anything.
When it is claimed that a behavior is "random" in the sense of being without law, it seems to me that the behavior and the idea of the behavior have been confused; that, supposedly, there's no property in some dimension, yet it's going to express itself in that dimension.
Another idea of randomness is one of complexity, especially of hopeless complexity. In this case, there's no denial of underlying lawfulness; there's just a throwing-up of the hands at the difficulty in finding a law or in applying a law once found.
This complexity notion makes awfully good sense to me, but it's not quite the notion that I want to present here. What unites the notion of lawlessness with that of complexity is that of practical unpredictability. But I think that we can usefully look at things from a different perspective.
After the recognition that space could be usefully conceptualized within a framework of three orthogonal, arithmetic dimensions, there came a recognition that time could be considered as a fourth arithmetic dimension, orthogonal to the other three. But, as an analogy was sensed amongst these four dimensions, a puzzle presented itself. That puzzle is the arrow of time. If time were just like the other dimensions, why can we not reverse ourselves along that dimension just as along the other three? I don't propose to offer a solution to that puzzle, but I propose to take a critical look at a class of ostensible solutions, reject them, and then pull something from the ashes.
Some authors propose to find the arrow of time in disorder; as they would have it, for a system to move into the future is no more or less than for it to become more disorderly.
One of the implications of this proposition is that time would be macroscopic; in sufficiently small systems, there is neither increase nor decrease in order, so time would be said neither to move forward nor backward. And, as some of these authors note, because the propensity of macroscopic systems to become more disorderly is statistical rather than absolute, it would be possible for time to be reversed, if a macroscopic system happened to become more orderly.
But I immediately want to ask what it would even mean to be reversed here. Reversal is always relative. The universe cannot be pointed in a different direction, unless by "universe" one means something other than everything. Perhaps we could have a local system become more orderly, and thus be reversed in time relative to some other, except, then, that the local system doesn't seem to be closed. And, since the propensity to disorder is statistical, it's possible for it to be reversed for the universe as a whole, even if the odds are not only against that but astronomically against it. What are we to make of a distinction between a universe flying into reverse and a universe just coming to an end? And what are we to make of a universe in which over-all order increases for some time less than the universe has already existed? Couldn't this be, and yet how could it be if the arrow of time were a consequence of disorder?
But I also have a big problem with notions of disorder. In fact, this heads us back in the direction of notions of randomness.
If I take a deck of cards that has been shuffled, hand it to someone, and ask him or her to put it in order, there are multiple ways that he or she might do so. Numbers could be ascending or descending within suits, suits could be separated or interleaved, &c. There are as many possible orderings as there are possible rules for ordering, and for any sequence, there is some rule to fit it. In a very important sense, the cards are always ordered. To describe anything is to fit a rule to it, to find an order for it. That someone whom I asked to put the cards in order would be perfectly correct to just hand them right back to me, unless I'd specified some order other than that in which they already were.
Time's arrow is not found in real disorder generally, because there is always order. One could focus on specific classes of order, but, for reasons noted earlier, I don't see the explanation of time in, say, thermodynamic entropy.
But, return to decks of cards. I could present two decks of cards, with the individual cards still seeming to be in mint state, with one deck ordered familiarly and the other in unfamiliar order. Most people would classify the deck in familiar order as "ordered" and the other as "random"; and most people would think the "ordered" deck more likely straight from the pack than the "random" deck. Unfamiliar orderings of some things are often the same thing as complex orderings, but the familiar orderings of decks of cards are actually conventional. It's only if we use a mapping from a familiar ordering to an unfamiliar ordering that the unfamiliar ordering seems complex. Yet even people who know this are going to think of the deck in less familiar order as likely having gone through something more than the deck in more familiar order. Perhaps it is less fundamentally complexity than experience of the evolution of orderings that causes us to see the unfamiliar orderings as "random". (Note that, in fact, many people insist that unfamiliar things are "complicated" even when they're quite simple, or that familiar things are "simple" even when they're quite complex.)
Even if we do not explain the arrow of time with disorder, we associate "randomness" with the effects of physical processes, which processes take time. Perhaps we could invert the explanation. Perhaps we could operationalize our conception of randomness in terms of what we expect from a class of processes (specifically, those not guided by intelligence) over time.
(Someone might now object that I'm begging the question of the arrow of time, but I didn't propose to explain it, and my readers all have the experience of that arrow; it's not a rabbit pulled from a hat.)
[1] Other words that cause the same reäction are "probability" and "capitalism".
Tags: definitions, disorder, entropy, order, randomness, time
Think of randomness as "without trend". Not chaos, not disorder, but simply without trend. If we are manufacturing widgets, and we wish to analyze the quality (or qualities) of our widgets (in order to profit by maintaining superior quality widgets for a reasonable price), then we measure the characteristics of our widgets. And we WANT randomness. We do not want trends in our widgets because trends tell us that perhaps tooling is failing, or our widget-making machines are not running properly, or perhaps the operator is not loading the raw material as it needs to be loaded, and so on. I highly recommend that you take a look at Deming. As an engineer I happily applied his principles with much success at many places I worked. Achieving randomness (eliminating trend) not only made a better quality product but saved substantial money in the long run.
To say that something is without trend is to say that it is subject to a description that can be decomposed into a hyperplane on the axes of the explanatory variables (typically just a time dimension), and deviations from that hyperplane that are simply not explained by the model. These deviations may not be worth the bother of explanation, but that doesn't mean that they are not chaotic or not disorderly or not anything except that they are not explained.
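That decomposition can be sketched concretely. In the sketch below (all widget measurements are invented purely for illustration), a least-squares line plays the role of the hyperplane on a single time axis, and the residuals are the deviations that the model leaves unexplained:

```python
def linear_trend(ys):
    """Least-squares fit y = a + b*t for t = 0, 1, 2, ..., done by hand."""
    n = len(ys)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(ys) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys)) / \
        sum((t - t_mean) ** 2 for t in ts)
    a = y_mean - b * t_mean
    return a, b

def residuals(ys):
    """Deviations from the fitted line: the part the model does not explain."""
    a, b = linear_trend(ys)
    return [y - (a + b * t) for t, y in enumerate(ys)]

# Hypothetical widget weights drifting upward over successive measurements:
# the trend term captures the drift, and the residuals are what remains.
weights = [10.1, 10.0, 10.3, 10.2, 10.5, 10.4, 10.7]
a, b = linear_trend(weights)
print(round(b, 3))  # a clearly positive slope: a trend, hence something to fix
```

A slope near zero would suggest "without trend" in the comment's sense; the nonzero slope here flags exactly the sort of drift a process engineer would investigate, while the residuals remain unexplained rather than being, in themselves, orderly or disorderly.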
When I have tried to explain to fellow bridge players that a deck of cards should be shuffled, however imperfectly, seven times to thoroughly mix the deck, I am usually met with something of a blank stare. In due course someone insists that three times is sufficient and then I have to cite this work:
http://www.dartmouth.edu/~chance/course/topics/winning_number.html
Specifically, I talk about the bridge players in New York who were found to be taking advantage of the fact that the decks were not being shuffled more than three times at their club:
I steadfastly shuffle seven times whenever the cards are going to be played in a competition. For practice rounds, admittedly, I do not always do the full seven shuffles. Interestingly, there are still bridge players who insist that the computer-dealt hands are "fixed", but I think what is actually occurring is that they are accustomed to people shuffling the cards three times and thinking, "Close enough."
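The inadequacy of three shuffles can be made vivid in simulation. One known result behind the seven-shuffle advice is that a sorted deck riffled k times has at most 2^k "rising sequences" (maximal runs of consecutively numbered cards appearing in order), so after three shuffles at most 8 such runs exist, while a thoroughly mixed 52-card deck typically has around 26. A rough sketch, using the standard Gilbert–Shannon–Reeds model of an imperfect riffle:

```python
import random

def riffle(deck, rng):
    """One Gilbert-Shannon-Reeds riffle: cut at a Binomial(n, 1/2) point,
    then drop cards with probability proportional to remaining packet size."""
    cut = sum(rng.random() < 0.5 for _ in deck)
    left, right = deck[:cut], deck[cut:]
    out = []
    while left or right:
        if rng.random() < len(left) / (len(left) + len(right)):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return out

def rising_sequences(deck):
    """Count maximal runs of consecutively numbered cards in order;
    after k riffles of a sorted deck this is at most 2**k."""
    pos = {card: i for i, card in enumerate(deck)}
    return 1 + sum(pos[c + 1] < pos[c] for c in range(1, len(deck)))

rng = random.Random(0)
deck = list(range(1, 53))
for k in range(1, 8):
    deck = riffle(deck, rng)
    if k in (3, 7):
        print(k, rising_sequences(deck))
```

After three riffles the count cannot exceed 8, so the overwhelming majority of the 52! possible orderings are simply unreachable; only around seven riffles does the deck approach uniformity.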
Perfect shuffles result in relatively easily predicted orderings. Imagine a deck of 52 cards, numbered from 1 to 52 and in ascending order. Perfectly shuffle them, and they are ordered 1, 27, 2, 28, … or 27, 1, 28, 2, …. Of course, the extreme of imperfect shuffling (perfectly imperfect, as it were) basically just cuts the deck only to slap it back together exactly as it was.
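The 1, 27, 2, 28, … pattern is easy to verify directly, and the predictability of perfect shuffles goes further: it is a known curiosity that eight perfect "out-shuffles" (the variant keeping the top card on top) restore a 52-card deck to its original order. A small sketch, modeling the deck as a plain list:

```python
def out_shuffle(deck):
    """Perfect riffle that keeps the top card on top: 1, 27, 2, 28, ..."""
    half = len(deck) // 2
    left, right = deck[:half], deck[half:]
    return [card for pair in zip(left, right) for card in pair]

deck = list(range(1, 53))
print(out_shuffle(deck)[:4])  # [1, 27, 2, 28]

# Eight perfect out-shuffles return a 52-card deck to its starting order.
d = deck
for _ in range(8):
    d = out_shuffle(d)
print(d == deck)  # True
```

The in-shuffle variant (27, 1, 28, 2, …) is the same interleaving started from the other packet; either way, perfection makes the ordering trivially predictable, which is exactly why it fails as mixing.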
Obviously, if an observer knows the prior order of the deck, and the shuffling proceeds in some manner that allows the observer to note and record how many cards are flipped from each side before more come from the other, then the new order could be completely predicted. This would be true no matter how many times the deck were shuffled. So when shuffling works, it is by some combination of speed and of concealment on the part of the shuffler, and of inattention (driven by honor, by sloth, or by limitations of ability) on the part of observers. Any "proof" is going to entail some assumptions (implicit or otherwise) about that to which the observer attends.