
# Computational Irreducibility

“I do think that the history of the universe—and everything in it—is completely determined. But the point about computational irreducibility is that it shows that that doesn't mean it has to be dull. Even though it's determined, it can still be unpredictable and surprising.” – Stephen Wolfram

Aliases

This identifies the pattern and should be representative of the concept it describes. The name should be a noun that can be used easily within a sentence. We would like the pattern to be easily referenced in conversation between practitioners.

Intent

Describes in a single concise sentence the meaning of the pattern.

Motivation

This section describes the reason why this pattern is needed in practice. Other pattern languages indicate this as the Problem. In our pattern language, we express this in a question or several questions and then we provide further explanation behind the question.

Sketch

This section provides alternative descriptions of the pattern in the form of an illustration or alternative formal expression. By looking at the sketch a reader may quickly understand the essence of the pattern.

Discussion

The question in this section is whether Deep Learning neural networks are computationally irreducible. This is likely not true, considering that there are methods to compress the layers of a neural network. However, there is evidence that there is a limit to such compression in CNNs.

Is the learning process computationally irreducible? Likely not: one-shot learning is known to exist. However, it is not known whether the learning process is optimal.

Is the inference process computationally irreducible? No: we may be able to compress or remove layers up to a certain limit. See the equivalence of RNNs and Residual Networks.
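The equivalence referred to above can be made concrete: a stack of weight-tied residual blocks computes exactly the same function as an unrolled recurrent update. A minimal pure-Python sketch (the weights, dimensions, and function names here are arbitrary illustrations, not taken from any cited paper):

```python
import math

# Hypothetical tied weights for a 2-unit block (illustrative only).
W = [[0.3, -0.2], [0.1, 0.4]]

def f(x):
    # One nonlinear layer: tanh(W x)
    return [math.tanh(sum(W[i][j] * x[j] for j in range(2))) for i in range(2)]

def residual_net(x, depth):
    # A stack of weight-tied residual blocks: x <- x + f(x)
    for _ in range(depth):
        x = [xi + fi for xi, fi in zip(x, f(x))]
    return x

def rnn(x, steps):
    # The same computation read as a recurrent update h_{t+1} = h_t + f(h_t)
    h = list(x)
    for _ in range(steps):
        h = [hi + fi for hi, fi in zip(h, f(h))]
    return h
```

Since both functions apply the identical update, `residual_net(x, d)` and `rnn(x, d)` agree exactly; the point of the observed depth-compression limit is that the number of such iterations cannot be reduced arbitrarily without changing the computed function.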

Computational irreducibility is the conjecture that some computations cannot be sped up by any shortcut; the only way to predict what is going to happen is to simulate each step.

Computational irreducibility is a narrower notion than undecidability. Undecidability is the concept that when you ask what will ultimately happen, the answer is undecidable. In mathematics, all theorems are the opposite (i.e., decidable, or provable). Mathematics pursues the convenience of examining subjects wherein its tools have a likelihood of making good progress. In other words, mathematics has navigated through narrow paths and as a result has avoided subjects of undecidability.

One good way to frame this question is in the context of the Principle of Computational Equivalence by Wolfram. Wolfram shows that simple cellular automata are able to exhibit complex behavior that cannot be predicted from the initial conditions or from the simple rules of their evolution. Simple systems can exhibit complex behavior that cannot be reduced to mathematical models that capture that behavior. Examples of systems that exhibit this complex, irreducible behavior are the brain and weather systems.
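Rule 30 is the standard illustration of this: a one-line update rule whose center column appears random and for which no closed-form shortcut is known, so each step must be simulated. A minimal sketch in plain Python (no external dependencies):

```python
def rule30_step(cells):
    # Rule 30: new cell = left XOR (center OR right).
    # Pad with zeros so the pattern can grow one cell per side each step.
    c = [0, 0] + cells + [0, 0]
    return [c[i] ^ (c[i + 1] | c[i + 2]) for i in range(len(c) - 2)]

row = [1]      # start from a single black cell
center = [1]
for _ in range(3):
    row = rule30_step(row)
    center.append(row[len(row) // 2])

print(center)  # first values of the center column: [1, 1, 0, 1]
```

The center column continues with no discernible pattern; to know its value at step n, you run n steps of the loop.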

A Deep Learning system (of the recursive kind) may belong to this class of universal machines, yet still be unable to replicate the behavior of an observed system. Schölkopf draws this conclusion in a paper (“@Page on arxiv.org”). That is, a learning system is only able to derive the Cause while observing the Effect. This tells you that a learning system can't derive the mechanism of, say, how DNA manufactures specific proteins; that is, it cannot predict Effect from Cause.

Known Uses

Here we review several projects or papers that have used this pattern.

Related Patterns

In this section we describe in a diagram how this pattern is conceptually related to other patterns. The relationships may be precise or fuzzy, so we provide further explanation of the nature of each relationship. We also describe other patterns that may not be conceptually related but that work well in combination with this pattern.

Relationship to Canonical Patterns

Relationship to other Patterns

We provide here some additional external material that will help in exploring this pattern in more detail.

References

http://mathworld.wolfram.com/ComputationalIrreducibility.html While many computations admit shortcuts that allow them to be performed more rapidly, others cannot be sped up. Computations that cannot be sped up by means of any shortcut are called computationally irreducible. The principle of computational irreducibility says that the only way to determine the answer to a computationally irreducible question is to perform, or simulate, the computation. Some irreducible computations can be sped up by performing them on faster hardware, as the principle refers only to computation time.

According to Wolfram (2002, p. 741), “if the behavior of a system is obviously simple–and is say either repetitive or nested–then it will always be computationally reducible. But it follows from the principle of computational equivalence that in practically all other cases it will be computationally irreducible.” Here, “practically all” refers to cases that arise naturally or from a simple rule system, as opposed to cases that are constructed artificially, such as the examples given by Wolfram (2002, p. 747).

Israeli and Goldenfeld (2004) have shown that some computationally irreducible elementary cellular automata have properties that are predictable, and so these properties are computationally reducible. In particular, these cellular automata can emulate reducible cellular automata by coarse-graining. One of those is rule 110, which is a universal cellular automaton.

However, as noted by Wolfram (2002, p. 745), “when the underlying rules are simple there is often still some superficial computational reducibility…. For example, in the rule 30 pattern on the right one can tell whether a cell at a given position has any chance of not being white just by doing a short computation that tests whether that position lies outside the center triangular region of the pattern. And in a class 4 cellular automaton such as rule 110 one can readily shortcut the process of evolution at least for a limited number of steps in places where there happen to be only a few well-separated localized structures present.”
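The “short computation” Wolfram mentions for rule 30 can be made explicit: starting from a single black cell, information propagates at most one cell per step, so any cell at distance greater than t from the center is guaranteed white at step t, with no simulation required. A sketch on a fixed-width tape (the tape width is an arbitrary choice for illustration):

```python
WIDTH = 21
CENTER = WIDTH // 2

def rule30_step(c):
    # Rule 30 on a fixed tape with white (0) boundary conditions.
    pad = [0] + c + [0]
    return [pad[i] ^ (pad[i + 1] | pad[i + 2]) for i in range(WIDTH)]

row = [0] * WIDTH
row[CENTER] = 1
for t in range(1, 8):
    row = rule30_step(row)
    # Shortcut test: outside the center triangle, cells must still be white.
    assert all(row[i] == 0 for i in range(WIDTH) if abs(i - CENTER) > t)
```

This shortcut only answers the easy question (“can this cell be black yet?”); the value of a cell inside the triangle still requires running the automaton, which is the superficial reducibility the quote describes.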

On computational irreducibility and the predictability of complex physical systems

Complex physical systems that are computationally irreducible (CIR) might therefore seem to be inherently unpredictable. It is tempting to conclude from this that the enterprise of physics itself is doomed from the outset; rather than attempting to construct solvable mathematical models of physical processes, computational models should be built, explored and empirically analyzed. This argument, however, assumes that infinite precision is required for the prediction of future evolution. Usually coarse-grained or even statistical information is sufficient: indeed, a physical model is usually correct only to a certain level of resolution, so that there is little interest in predictions from such a model on a scale outside its regime of validity.

It seems that a better classification of physical complexity is related to what classes of projection operator are required to coarse-grain the system: local, real space projections or more complex non-geometric projections?

Stephen Wolfram’s theory of computational equivalence suggests that simple, formulaic shortcuts for understanding evolution may never be discovered. We can only run the iterative algorithm forward to see the results, and the various computational steps cannot be skipped.

Thus, if we evolve a complex system, it is a black box defined by its interfaces. We cannot easily apply our design intuition to the improvement of its inner workings. We can’t even partition its subsystems without a serious effort at reverse-engineering. And until we can understand the interfaces between partitions, we can’t hope to transfer a subsystem from one evolved complex system to another.

https://arxiv.org/abs/1809.02942v1 Cellular automata as convolutional neural networks