Izaak Meckler

Categorified Complexity Theory

Posted on January 31, 2017
\(\newcommand{\PSPACE}{\mathrm{PSPACE}}\) \(\newcommand{\IP}{\mathrm{IP}}\) \(\newcommand{\NP}{\mathrm{NP}}\) \(\newcommand{\P}{\mathrm{P}}\) \(\newcommand{\PH}{\mathrm{PH}}\) \(\newcommand{\Set}[1]{\left\{#1\right\}}\) \(\require{AMScd}\)

Here's a famous theorem from complexity theory:

Theorem \[\PSPACE \subseteq \IP\]

There are many theorems like this. Like Toda's theorem:

Theorem \[\PH \subseteq \P^{\#\P}\]

or the PCP theorem:

Theorem \[\NP \subseteq \mathrm{PCP}[O(\log n), O(1)]\]

In general, many theorems are class inclusions: showing that the set of problems characterized by one definition is included in the set of problems characterized by some other, surprisingly different definition.

Let's focus on \(\PSPACE \subseteq \IP\), because it's the proof I know best. I want to advance the thesis here that this (and all effective proofs -- more on that later -- of class inclusions) really ought to be thought of as the decategorification of constructions of maps of "realizers".

There is a category whose objects are sets of languages (i.e., subsets of \(2^*\)) and whose morphisms are just inclusions of sets. From this point of view, the above theorems are constructions of morphisms in this category.

But they are so much more than that! The proof that \(\PSPACE \subseteq \IP\) is a uniform algorithm which takes a \(\PSPACE\) machine and produces an "\(\IP\) machine", that is, an interactive verifier, in such a way that the language computed by the \(\PSPACE\) machine is equal to the language computed by the \(\IP\) machine.

Let's flesh this out a bit. Instead of thinking of a complexity class as a set of languages, let's think of it as a set of "realizers" \(R\) equipped with a map \(L : R \to \mathcal{P}(2^*)\). So \(L\) maps each realizer to the language which it realizes.

For example, the complexity class \(\P\) has as its realizers pairs \((q, M)\) where \(q\) is a polynomial and \(M\) is a Turing machine running in time \(q(n)\), and the map \(L_\P\) from realizers to languages is \[ L_\P(q, M) = \Set{x \in 2^* : M \text{ accepts } x}. \] As a more interesting example, the class \(\NP\) has as its realizers pairs \((q, V)\) where \(q\) is a polynomial and \(V\) is a polytime Turing machine taking an input string together with a witness string. The interpretation map \(L_\NP\) is \[ L_\NP(q, V) = \Set{x \in 2^* : \exists w \in 2^{q(|x|)},\ V(x, w) \text{ accepts}}. \]
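To make this concrete, here is a toy model in Python -- my own illustrative sketch, not something from the theory itself. Languages are truncated to binary strings of length at most \(n\), a \(\P\)-style realizer is a plain predicate, and an \(\NP\)-style realizer is a verifier together with a witness-length bound. All names (`interp_P`, `interp_NP`, and so on) are made up for illustration.

```python
from itertools import product

def strings_up_to(n):
    """All binary strings of length <= n (a finite stand-in for 2^*)."""
    for length in range(n + 1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

def interp_P(machine, n):
    """L_P: the (truncated) language a deterministic predicate recognizes."""
    return {x for x in strings_up_to(n) if machine(x)}

def interp_NP(verifier, wlen, n):
    """L_NP: x is in the language iff some witness of length wlen(|x|) is accepted."""
    return {
        x
        for x in strings_up_to(n)
        if any(verifier(x, "".join(w)) for w in product("01", repeat=wlen(len(x))))
    }

# A P realizer: strings with an even number of 1s.
even_ones = lambda x: x.count("1") % 2 == 0

# An NP realizer: strings containing a 1. The witness marks such a position.
def has_one_verifier(x, w):
    return any(a == "1" and b == "1" for a, b in zip(x, w))

lang_even = interp_P(even_ones, 3)
lang_has_one = interp_NP(has_one_verifier, lambda m: m, 3)
```

Of course this brute-forces all witnesses, so it only models the *interpretation maps*, not the resource bounds -- but that is exactly the bundle structure described above: a set of realizers and a map sending each one to its language.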

In my head I think of complexity classes in this sense

  • as "diagrams" \[ \begin{CD} R \\ @VVV{L} \\ \mathcal{P}(2^*) \end{CD} \]
  • as a bundle over the space of languages \(\mathcal{P}(2^*)\), above each language sitting the fiber of realizers.
  • or just as the set of associated realizers
A map of complexity classes \((R, L) \to (R', L')\) is essentially a bundle map, or a map of realizers which preserves the interpretation. That is, a map on the space of realizers \(f : R \to R'\) such that \(L' \circ f = L\); equivalently, \(L'(f(r)) = L(r)\) for every realizer \(r \in R\).

In other words, this category of complexity classes is just the over category \(\mathrm{Set} / \mathcal{P}(2^*)\) (pronounced, "sets over languages").
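The over-category condition is easy to state as executable code. Here is a hypothetical sketch in Python (all names invented): objects are a set of realizers together with an interpretation map into languages, and `is_morphism` checks the condition \(L' \circ f = L\) on a sample of realizers.

```python
def is_morphism(f, L_src, L_tgt, realizers):
    """Check the over-category condition: L_tgt(f(r)) == L_src(r) for each r."""
    return all(L_tgt(f(r)) == L_src(r) for r in realizers)

# A tiny universe of strings standing in for 2^*.
UNIVERSE = ["", "0", "1", "00", "01", "10", "11"]

# Source class: realizers are plain predicates.
L_pred = lambda p: frozenset(x for x in UNIVERSE if p(x))

# Target class: realizers are pairs (predicate, extra data); the
# interpretation ignores the extra data.
L_pair = lambda pr: frozenset(x for x in UNIVERSE if pr[0](x))

# The map pairing each predicate with trivial extra data preserves
# interpretation, so it is a morphism in Set / P(2^*).
f = lambda p: (p, None)

sample = [lambda x: len(x) % 2 == 0, lambda x: "1" in x]
assert is_morphism(f, L_pred, L_pair, sample)
```

Composing two such maps clearly preserves the condition again, which is all it takes for these objects and maps to form a category.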

Let's think about how this works out in a really simple example: there's a map \(\iota : \P \to \NP\) given by \[ \iota(q, M) = (q, V_M), \] where the verifier \(V_M(x, w)\) simply runs \(M\) on \(x\) and ignores the witness \(w\). This is a bundle map since for any \((q, M)\) we have \[ L_\NP(\iota(q, M)) = L_\P(q, M). \]

That is, the map transforms the realizer into one which recognizes the same language.

The \(\PSPACE \subseteq \IP\) theorem is similar. It really exhibits a map \(\PSPACE \to \IP\) in our category which transforms a \(\PSPACE\) realizer (a polyspace Turing machine) into an \(\IP\) realizer (a randomized polytime interactive verifier).

Of course, the mere existence of a map \(C \to D\) (for classes \(C\) and \(D\)) is equivalent to a containment \(L(C) \subseteq L(D)\) (where by \(L(C)\) I mean the set of languages in the image of \(C\)'s interpretation map). But most class inclusion proofs (like \(\PSPACE \subseteq \IP\)) are actually doing far more: they are exhibiting effective constructions -- that is, algorithms -- from the set of realizers of one class to the set of realizers of the other.

Apart from being (in my opinion) a more honest way of representing these kinds of facts (since I think everyone probably spends a few brain cells translating class inclusions into maps of realizers), I wonder if this perspective would lead to new insights in complexity theory.

For example, can we get more interesting conclusions if we assume not just inclusions, but uniform inclusions? What happens if there is a "uniform inclusion" \(\NP \to \P/\mathrm{poly}\)? (It's not clear what that should mean, but maybe it would be a \(\P/\mathrm{poly}\) machine which takes an \(\NP\) verifier \((q, V)\) and a string \(1^n\) as input and produces, in time \(\mathrm{poly}(q(n), n)\), a circuit which computes the language corresponding to \(V\) for \(n\)-bit strings.) Could you prove \(\P = \NP\) under this assumption?

Also, I wonder if anything can be gained by thinking of the category of complexity classes as bundles over some base space. The task of complexity theory is then classifying bundles. We have better tools for classifying bundles when the objects have more structure (e.g., fiber bundles, vector bundles, etc.). Is there a model of computation where complexity classes are vector bundles? Can we separate \(\P\) from \(\NP\) there?