Crisis is endemic to capitalism, but in fact it is endemic to all organisms (i.e., patterns of matter apparently being driven through time by a logic other than that of matter itself).

This is a fundamental physical principle. The principle is this: every strict subsystem of the universe (given by a state transition logic) eventually decoheres. Let’s call this the **breakdown principle**.

This applies to systems of all different sorts. From this we conclude:

- Every biological organism will die.
- The process of capital accumulation will end.
- Every social order will either collapse or transform into something unrecognizable.

This is a general sort of dialectical principle, which has been expressed in one form or another by many famous Marxists. However, without time bounds, such statements are trivialities that can be deduced from the second law of thermodynamics.

To give these non-trivial content, we must somehow make the breakdown principle effective. I.e., we need a principle that associates to a given system a function which describes the rate of decoherence over time (or at least some bounds on the rate).

I don’t know how to do that, but I would like to at least state the principle in a (provisional) mathematical form as a step towards stating an effective version.

**Definition.** A system can be thought of as a set of possible states $S$ along with a time-indexed collection $\{S_t\}_{t \in \mathbb{R}}$ of $S$-valued random variables^{1}.

The universe will be a system $(U, \{ U_t \}_{t \in \mathbb{R}})$.

A subsystem of the universe is a system $(S, \{S_t\}_{t \in \mathbb{R}})$ as above plus a function $s \colon U \to S$ which tries to pick out the current state as represented in the universe^{2}.

The idea is that the random variables $S_t$ should approximate $s(U_t)$. The degree to which these two families of random variables correspond measures how “coherent”, “actually embedded”, “actually manifest”, or how “real” the system $S$ is in the universe $U$.

Now, there are many conceivable ways to measure the coherence of the system by comparing $S_t$ and $s(U_t)$, but a reasonable choice seems to me to be the conditional entropy $H(s(U_t) \mid S_t)$: how much information is contained in $s(U_t)$ that cannot be recovered from $S_t$. A model system $S$ is coherent if $H(s(U_t) \mid S_t)$ is close to 0, or in other words if the mutual information $I(s(U_t); S_t)$ is close to its maximal value. At the other extreme, a system has decohered if $I(s(U_t); S_t)$ is close to its minimal value, which is 0.

**The breakdown principle would then be something like:**

For every embedded subsystem of the universe $S$, $s \colon U \to S$, which is “constructible” – this is a term that would have to be defined – we have $$\lim_{t \to \infty} I(s(U_t); S_t) = 0$$

In other words, as time goes on, the model system $S_t$ ceases to have any useful information about the state of the actual system in the universe.
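As a toy illustration (this particular model is my own, not part of the principle): suppose the relevant part of the universe's state is a single bit that flips with probability `eps` per time step, while the model system just keeps a frozen copy of the initial bit. Both marginals stay uniform, so $I(s(U_t); S_t)$ can be computed exactly, and it decays to 0:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mi_at_time(t, eps=0.1):
    """I(s(U_t); S_t) for a toy universe: the true state is a bit that flips
    with probability eps each step; the model S_t is a frozen copy of the
    initial bit. Agreement probability after t steps is (1 + (1-2*eps)**t)/2."""
    agree = (1 + (1 - 2 * eps) ** t) / 2
    # Marginals are uniform, so I = H(U_t) - H(U_t | S_t) = 1 - h2(agree).
    return 1.0 - h2(agree)

for t in [0, 1, 5, 20, 100]:
    print(t, mi_at_time(t))  # monotonically decaying toward 0
```

The decay here is exponential; how fast it is depends on the noise level `eps`, which hints at what an effective version of the principle would have to quantify.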

Now there are two directions forward for future development:

- Defining a constructible embedded system. This would necessarily need to relate the system to the embedding, since otherwise for any set $S$ and function $s \colon U \to S$, the system defined by $S_t := s(U_t)$ is perfectly coherent. We need to rule out such systems as somehow “non-constructible”, or in other words non-physical.
- Giving bounds on the rate at which $I(s(U_t) ; S_t)$ converges to 0. Presumably this would be related somehow to the complexity of the system $S$ and to assumptions about the system’s environment.
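To give a sense of what such a rate bound could look like, consider a hypothetical toy model (my own, purely for illustration): the tracked part of the universe's state is a single bit that flips with probability $\varepsilon$ per unit time, and the model holds a frozen copy of the initial bit. The agreement probability after $t$ steps is $\frac{1}{2}\left(1 + (1-2\varepsilon)^t\right)$, and since the marginals are uniform,

$$I(s(U_t); S_t) = 1 - h\!\left(\frac{1 + (1-2\varepsilon)^t}{2}\right) \le (1-2\varepsilon)^{2t},$$

where $h$ is the binary entropy function and the inequality uses the standard bound $h(p) \ge 4p(1-p)$. Decoherence is exponential, with a rate set by the noise level; a general effective principle would presumably replace $(1-2\varepsilon)$ by some contraction coefficient of the system's dynamics relative to its environment.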

In my opinion, this is a step beyond general dialectical pronouncements: we now have some precise language, and an inkling of how to move toward a more concretely usable version.