Dusk Somewhere

Cybernetics must be constructive

Posted Dec 15, 2022 by Izzy Meckler

I have started reading the work of Stafford Beer, beginning with his article “The Will of the People”. In it, he cites a theorem of Conant and Ashby from their paper “Every good regulator of a system must be a model of that system”.

This theorem reflects a deep truth: if you want a system to be able to respond to every possible state of another system (for a given notion of what an appropriate response is), then your system must be at least as complex as the system you’re trying to model, modulo the amount of granularity required to achieve the given notion of an appropriate response. For example, if each of $n$ environment states demands its own distinct response, the regulator needs at least $n$ states.

The title is honest, in that it characterizes potential good regulators and does not claim to construct one. However, in practice we are interested in constructing good regulators for a given notion of good.

The paper “constructs” a good regulator for a given adversarial system and notion of good. However, the construction is abstract: as stated, it would require exhaustive search and an enormous lookup table to implement, and so is not particularly useful.

Real systems must be effectively runnable in whatever sense may be appropriate in one’s context (i.e., can be executed on the computational resources one has, or using the set of people that one has in their organization, etc.).

It’s clear that the idea of the Conant–Ashby theorem can be related to computational complexity. I would refer you to pages 512 and 513 of their paper for a description of the basic setup, but essentially¹ we have the full state of the world parameterized by a set $D$, the state of a “regulator” (which we imagine that we control and that we are trying to design) parameterized by a set $R$, the state of the adversarial environment parameterized by a set $S$, and maps

$$
\begin{aligned}
\phi &\colon D \to S \\
\rho &\colon D \to R \\
\psi &\colon S \times R \to \{ 0, 1 \}
\end{aligned}
$$

There are a few ways we can think about this setup from the point of view of complexity theory that may be useful. One is: $\phi$ is picking out a set of “problems to be solved” or “statements to be proved”. $\rho$ is picking out a corresponding set of solutions, or proofs. $\psi$ is the relation that checks a solution against a given problem (or that a proof proves the given statement).
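To make this concrete, here is a minimal sketch in Python of a toy instance of the setup. The particular sets and maps are my own invented example, not anything from the paper; the point is that the abstract “construction” of a good regulator amounts to an exhaustive-search lookup table:

```python
from itertools import product

# Hypothetical toy instance of the Conant-Ashby setup (the specific sets
# and maps are invented for illustration): the world D is pairs
# (environment state, irrelevant noise), phi projects out the environment
# state, and psi checks whether r is an acceptable response to s.

S = [0, 1, 2, 3]              # environment states
R = [0, 1, 2, 3]              # regulator states
D = list(product(S, [0, 1]))  # world states

def phi(d):                   # D -> S: extract the environment state
    return d[0]

def psi(s, r):                # S x R -> {0, 1}: "r is a good response to s"
    return 1 if (s + r) % 4 == 0 else 0

# The paper's "construction", made literal: build the regulator h : S -> R
# as a lookup table by exhaustive search over R. For real problems |R| is
# astronomical, which is why this is not particularly useful.
h = {s: next(r for r in R if psi(s, r) == 1) for s in S}

def rho(d):                   # D -> R: the induced regulator on world states
    return h[phi(d)]

assert all(psi(phi(d), rho(d)) == 1 for d in D)
```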

Let’s suppose that $\psi$ runs in polynomial time and describes a total relation in the sense that for any $s \in S$ there is some $r \in R$ with $\psi(s, r) = 1$.

Then the map $h \colon S \to R$ from the Conant–Ashby theorem represents a solution to the total NP search problem presented by $S$ for the relation $\psi$. Therefore its complexity may be lower-bounded if $\psi$ represents a known hard problem, etc.
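For instance (a hypothetical instance of my own, not one from the paper), take $S$ to be subset-sum instances and $R$ to be tuples of indices: $\psi$ is then a polynomial-time verifier, and any $h$ we can actually write down, like the exhaustive search below, pays for the hardness of the search problem:

```python
from itertools import combinations

# Hypothetical instance: S = subset-sum problems, R = tuples of indices
# into the instance's numbers. To match the totality assumption above,
# imagine S contains only solvable instances.

def psi(s, r):
    """Polynomial-time check that the indices r pick numbers summing to the target."""
    nums, target = s["nums"], s["target"]
    return 1 if all(i < len(nums) for i in r) and sum(nums[i] for i in r) == target else 0

def h(s):
    """A correct but exponential-time regulator: try every subset.
    Unless P = NP, no implementation of h for this psi runs in poly time."""
    n = len(s["nums"])
    for k in range(n + 1):
        for r in combinations(range(n), k):
            if psi(s, r) == 1:
                return r
    raise ValueError("no solution: the relation was not total after all")

s = {"nums": [3, 34, 4, 12, 5, 2], "target": 9}
print(h(s))  # (2, 4), since nums[2] + nums[4] = 4 + 5 = 9
```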

Another way to think about it: if $\psi$ is not checking in too complex a way, then we may be quite lazy with $R$, and not need to match $S$’s complexity.

What’s interesting is that the whole setup is actually quite symmetric in $S$ and $R$. Perhaps there is no need to force the NP asymmetry between statements and witnesses onto it.

Also, it kind of gives a notion of complexity relative to a given notion of “sameness”. Like, if $\psi$ is equality, then $\rho$ must be exactly as complex as $\phi$. But if $\psi$ is some kind of approximate equality (say, a distinguisher trying to tell them apart), then $\rho$ could be merely pseudo-equal to $\phi$, up to however stupid $\psi$ is.
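As a toy illustration of “pseudo-equal” (my own example, in the spirit of pseudorandomness): if $\psi$ can only see a crude statistic, then a very cheap $\rho$ passes:

```python
import os

# Toy illustration (invented example, not from the paper): phi produces
# high-entropy bits, psi is a weak distinguisher that only compares the
# fraction of ones, and rho is a cheap deterministic generator. Relative
# to this psi, rho is "pseudo-equal" to phi: the test cannot tell them apart.

N = 10_000

def phi(d):
    # high-entropy bits (stand-in for a complex environment)
    return [b & 1 for b in os.urandom(N)]

def rho(d):
    # a tiny linear congruential generator, seeded by d (falling back to 1)
    x, out = d or 1, []
    for _ in range(N):
        x = (1103515245 * x + 12345) % (2**31)
        out.append(x & 1)
    return out

def psi(x, y):
    # a "stupid" distinguisher: it only compares the fraction of ones
    return 1 if abs(sum(x) / len(x) - sum(y) / len(y)) < 0.05 else 0

assert psi(phi(0), rho(0)) == 1  # rho matches phi up to psi's power
```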

It would be interesting to work out a constructive version of the theorem, in which the regulator is required to be practically computable (in whatever sense is appropriate to one’s context). For this, we must pay attention to the complexity of the maps involved, not just to their existence.


  1. They actually have $\psi$ map into a set $Z$ of outcomes and distinguish a “good” subset $G \subseteq Z$, but I think it’s easier for $\psi$ just to check if the outcome looks good or not. ↩︎