
Examples of Syntax/Semantics Theorems Throughout Math

07 Sep 2021

I’ve been promising a blog post on syntax and semantics for a long time now. There’s a lot to say, as this duality underlies much of mathematical logic, but I want to focus on one particular instance of the syntax/semantics connection which shows up everywhere in mathematics. I talked about this briefly in a talk last year (my blog post debriefing from the talk is here), but it’s a kind of squishy and imprecise observation. Because of that, this post is going to be less expository than my usual ones, and is instead going to be a “definition by examples” of sorts. I’ll try to show examples from as many branches of math as possible, and hopefully by the end it becomes clear what the flavor of these theorems is, as well as how ubiquitous they are!

This post has been in the pipeline for a few months now, and I’m glad to have it finished! It was pretty tiring to revise, but was also a welcome break from all the analysis I’ve been reading lately. I hope you find it interesting ^_^!

As a reminder, Syntax is the part of mathematics that deals with symbols, and rules for manipulating them. So the syntax associated to groups consists of variables and the symbols $1$, $\cdot$, and ${}^{-1}$. There are also rules for manipulating the syntax. For instance, $1 \cdot x$ can always be replaced by $x$ (and vice versa).

Syntax, it turns out, gives us a way of asking questions or talking about an object. For instance we can ask the question “$\forall x . \forall y . xy = yx$”. The answer to this question, of course, depends on which group we’re talking about.

Dually, Semantics tell us what the syntax means. If you fix an actual group $G$, then you can interpret the syntax in $G$, and answer the questions that get asked. We typically denote this with the symbol $\models$ which is read “models” or “satisfies”.

As an example, $\mathbb{Z} \models \forall x . \forall y . xy = yx$, but $\mathfrak{S}_3 \not \models \forall x . \forall y . xy=yx$.
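
To make the semantic side concrete, here’s a minimal sketch (in Python, with my own choice of encoding: a permutation $s$ of $\{0,1,2\}$ is the tuple $(s(0), s(1), s(2))$) that checks this sentence in $\mathfrak{S}_3$ by brute force:

```python
# Check "forall x . forall y . xy = yx" semantically in S_3 by brute force.
from itertools import permutations

def compose(s, t):
    """(s * t)(i) = s(t(i))"""
    return tuple(s[t[i]] for i in range(len(t)))

S3 = list(permutations(range(3)))

# S_3 fails the sentence, since it has noncommuting pairs.
print(all(compose(x, y) == compose(y, x) for x in S3 for y in S3))  # False
```

Running the same check over (the elements of) any finite abelian group would print `True` instead.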

This is a very model theoretic notion of syntax and semantics, but I think it’s a good fundamental example[^1]. The kinds of theorems that I’m going to focus on in this post, though, are descriptive in nature. The idea is this:

> If your object is simple syntactically, then it must be simple semantically as well.

Another way to phrase this is

> If your object has a simple description, it can’t be too complicated.

Now let’s see some examples of this idea in action! I would love to hear any other examples that you can think of as well ^_^.


Definable Sets in a Topological Space

Let’s start simple. Say $f, g : X \to \mathbb{R}$ are continuous. Then:

- \(\{ x \mid fx = gx \}\) is closed
- \(\{ x \mid fx \leq gx \}\) is closed
- \(\{ x \mid fx < gx \}\) is open

This is the kind of theorem that you probably know intuitively, but you might have never thought about formally. Notice that just by looking at the syntax (that is, how the set is defined) we can learn something nontrivial about the set itself (in this case, its topological complexity).

As a simple exercise, if you’ve never seen this before, you should prove it!

You should recall that $\mathbb{R}$ is Hausdorff, so \(\Delta = \{ (x,x) \}\) is closed in $\mathbb{R}^2$. Similarly, notice \(\{ (x,y) \mid x \leq y \}\) is closed in $\mathbb{R}^2$.

Do you see how to use this to show the claim?

Let’s make things a little more complicated. Again, say $f, g : X \to \mathbb{R}$ are continuous. Then we can bound the complexity of various sets defined by referencing $f$ and $g$ by counting quantifiers.

First, let’s look at things that are quantifier free. We can turn logical symbols in the definition of a set into boolean operations on sets themselves. So

\[\begin{aligned} \{ x \mid fx = 0 \land gx = 0 \} &= \{ x \mid fx = 0 \} \cap \{ x \mid gx = 0 \} \\ \{ x \mid fx = 0 \lor gx = 0 \} &= \{ x \mid fx = 0 \} \cup \{ x \mid gx = 0 \} \\ \{ x \mid \lnot (fx = 0) \} &= \{ x \mid fx = 0 \}^c \end{aligned}\]

Using the fact from earlier that $fx=0$ is a closed condition, we can immediately see that the first two sets are closed, and the third is open. Any connectives in the definition can be handled in this way.

That actually includes infinite connectives! For instance, say we have a sequence of functions $(f_n)_{n \in \mathbb{N}}$. Then

\[\begin{aligned} \left \{ x \ \middle | \ \bigwedge_n f_n x = 0 \right \} &= \bigcap_n \{ x \mid f_n x = 0 \} \\ \left \{ x \ \middle | \ \bigvee_n f_n x = 0 \right \} &= \bigcup_n \{ x \mid f_n x = 0 \} \end{aligned}\]

A countable conjunction/disjunction is often viewed as a countable quantifier since

\[\forall n \in \mathbb{N} . \varphi(n) \iff \bigwedge_{n \in \mathbb{N}} \varphi(n) \quad \text{and} \quad \exists n \in \mathbb{N} . \varphi(n) \iff \bigvee_{n \in \mathbb{N}} \varphi(n)\]

Now we’ve extended our syntax to be more expressive. We can now use countable conjunctions/disjunctions, or equivalently countable quantifiers. Since our syntax is slightly more complicated, we should be able to describe more complex sets in this way.

As another fun exercise, you should check that every set definable with countable quantifiers is Borel. In fact, there’s a hierarchy of complexity for Borel sets, and the position of \(\{ x \mid \varphi(x) \}\) in this hierarchy is exactly in correspondence with the (countable) quantifier complexity of $\varphi$.

A big part of descriptive set theory is trying to turn real-valued quantifiers into countable quantifiers. For instance, let’s look at the following classic analysis exercise:

> Let $(f_n : X \to \mathbb{R})_{n \in \mathbb{N}}$ be a sequence of measurable functions. Show the set of $x$ where $f_n(x)$ converges is measurable.

We can solve this by writing down what it means to be convergent, and converting this syntactic definition into a semantic one. Since we’re gunning for Borel, we know we’re only allowed to use natural number quantifiers. This leads to the following argument:

\[\begin{aligned} \{ x \mid f_n x \text{ converges} \} &= \left \{ x \ \middle | \ \forall k . \exists N . \forall m, n \geq N . | f_n x - f_m x| \leq \frac{1}{k} \right \} \\ &= \bigcap_k \bigcup_N \bigcap_{m,n \geq N} \left \{ x \ \middle | \ | f_n x - f_m x| \leq \frac{1}{k} \right \} \\ &= \bigcap_k \bigcup_N \bigcap_{m,n \geq N} \left \{ x \ \middle | \ x \in | f_n - f_m |^{-1} \left [ 0, \frac{1}{k} \right ] \right \} \end{aligned}\]

Since $|f_n - f_m|$ is measurable, the set at the end is measurable, which makes our whole set measurable too (measurable sets are closed under countable unions and intersections). If the $f_n$ are assumed to be continuous instead, then we can get a more precise bound: the innermost sets are closed, so the union over $N$ is $\mathbf{\Sigma}^0_2$, and the whole intersection over $k$ is $\mathbf{\Pi}^0_3$.

You might be wondering what happens if we do allow real valued quantifiers. It turns out there are still theorems bounding the complexity of our sets!

Sets with only existential real quantifiers are called Analytic, and sets with only universal real quantifiers are called Coanalytic. A (very!) nontrivial theorem in descriptive set theory says that both classes of sets are universally measurable, have the property of Baire, etc. See here for a proof.

But now you might be wondering what happens if we allow alternating quantifiers! For instance, what if we have a set of the form \(\{ z \mid \forall x . \exists y . \varphi(x,y,z) \}\) where $x$ and $y$ range over the reals? Whether these sets are still so well behaved turns out to be independent of $\mathsf{ZFC}$!

The relevant search term is projective determinacy, which follows from certain large cardinal axioms. I was planning to write up a blog post about this, but I’ve been beaten to the punch! There’s a great introduction at the blog Complex Projective $4$ Space (which you can read here), and if you find set theory and large cardinals interesting, you might want to read Tom Leinster’s take on set theory here. It’s a great introduction so far, and is phrased in a way that I suspect will be a bit more accessible to the generic mathematician than a traditional set theoretic reference might be.


Restricting the Syntax of an Algebraic Structure

Say we have a group $G = \langle g_1, \ldots, g_n \mid R_1, \ldots, R_m \rangle$ defined by generators and relations. This is a syntactic description of the group, since it tells us what symbols to use and what the rules are for pushing those symbols around.

It’s often the case in algebra that we have some syntactic description of an algebraic object like this. A general life pro tip in this situation is to try to make restrictions on the syntax. Oftentimes this will give you surprisingly powerful restrictions on how complicated your object itself is. That is, powerful restrictions on the semantics.

Again, let’s start easy. The Word Problem for a group $G = \langle S \mid R \rangle$ is the set of words[^2] $w \in S^*$ which equal the identity in $G$. What can we say about $G$ if we assume the word problem isn’t too complicated?

The word problem is a subset $W \subseteq S^*$, which we can view as a formal language. There are a few well known classes of languages, of increasing complexity, and we can ask what must be true of our (finitely generated) group if we assume $W$ falls into each of these classes. For instance:

- $W$ is regular if and only if $G$ is finite (Anisimov)
- $W$ is context free if and only if $G$ is virtually free (Muller and Schupp)

Notice how natural restrictions on the syntax of $G$ correspond to (relatively) natural restrictions on the semantics of $G$.
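
As a quick illustration of the simplest of these: when $G$ is finite, the group itself is a deterministic finite automaton recognizing $W$. Here’s a hedged sketch (the encoding of $\mathfrak{S}_3$ and the choice of generators are mine, just for illustration):

```python
# The word problem of a finite group is regular: use the group as a DFA.
# States are elements of S_3, reading a generator multiplies the state,
# and we accept exactly at the identity.
def compose(s, t):
    """(s * t)(i) = s(t(i))"""
    return tuple(s[t[i]] for i in range(len(t)))

IDENTITY = (0, 1, 2)
GENERATORS = {'a': (1, 0, 2),   # the transposition (0 1), its own inverse
              'b': (1, 2, 0)}   # the 3-cycle (0 1 2)

def in_word_problem(word):
    """Run the DFA: does this word over {a, b} equal the identity in S_3?"""
    state = IDENTITY
    for letter in word:
        state = compose(state, GENERATORS[letter])
    return state == IDENTITY

print(in_word_problem('aa'))   # True:  a has order 2
print(in_word_problem('bbb'))  # True:  b has order 3
print(in_word_problem('ab'))   # False
```

Conversely, distinct group elements need distinct DFA states (the suffix $g^{-1}$ tells $g$ apart from everything else), which is the idea behind Anisimov’s theorem.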

One particularly famous result in this vein is Gromov’s theorem on groups of polynomial growth. This theorem says that a finitely generated group $G$ has “polynomial growth” (a syntactic condition, counting how many elements are spelled by words of length at most $n$) if and only if $G$ is virtually nilpotent.

This is also leaving aside all of the work done on one-relator groups. Mostly I’m leaving this aside because I’m not particularly familiar with the area, though I know that there are theorems of this form. For instance, if the single relator is not a proper power, then $G$ is torsion-free. If anyone is more familiar with one-relator groups than I am (and this is not a high bar to clear), I would love to hear about other examples!

But why stop at group theory? The notion of “presentation” exists in commutative algebra as well. Can we find similar theorems there[^3]?

Obviously the answer is “yes”, but I’m still very new to commutative algebra, and I wasn’t familiar with any theorems in this area. I knew that there should be some examples in the theory of monomial ideals, but I didn’t know of any concrete examples. Thankfully, in my first year I got to know Eloísa Grifo and Alessandra Costantini, and even though they’re both leaving[^4], they were nice enough to give me some examples!

First, what is a monomial ideal? Well, it’s more or less what it says on the tin. If we work in a ring $k[x_1, \ldots, x_n]$ of polynomials over a field $k$, then a Monomial Ideal is an ideal $I = (f_1, \ldots, f_m)$ where each $f_i$ is a monomial.

There are lots of algorithms in commutative algebra (Buchberger’s algorithm and Gröbner bases come to mind) which treat polynomials as formal symbols to be pushed around. So putting restrictions on what kinds of polynomials we have to work with is a syntactic condition, which we should expect to give us nice semantic theorems!

Indeed, monomial ideals admit particularly simple primary decompositions. Moreover, there is an extremely simple algorithm to compute the primary decomposition of a monomial ideal, which starkly contrasts the difficulty of the general computation[^5].
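
To get a feel for this, here are two standard examples (the choices are mine): in $k[x, y]$ we have the primary decomposition

\[(x^2, xy) = (x) \cap (x^2, y)\]

and for the squarefree case, the edge ideal of a path decomposes as

\[(xy, yz) = (y) \cap (x, z)\]

where the components can be read off from the minimal vertex covers $\{y\}$ and $\{x, z\}$.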

Monomial ideals (in particular the squarefree monomial ideals) can be studied with combinatorial tools using the dictionary of Stanley-Reisner Theory. This theory associates simplicial complexes to (squarefree) monomial ideals, and geometric conditions on this complex get turned into syntactic conditions on the monomials generating the ideal (and also into semantic theorems on the ideal and the quotient ring).
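
For instance, the squarefree monomial ideal $(xyz) \subseteq k[x,y,z]$ corresponds to the “hollow triangle”: the simplicial complex on vertices $\{x, y, z\}$ with all three edges but no $2$-face, since $xyz$ is exactly the minimal non-face. Geometric facts about this complex (it’s a circle!) then become algebraic facts about the quotient ring $k[x,y,z]/(xyz)$.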

One deep open problem in commutative algebra is the question

> When do the symbolic powers $I^{(n)}$ of an ideal agree with the ordinary powers $I^n$?

In the special case of squarefree monomial ideals whose simplicial complex is a graph, we completely understand this problem! It is a theorem of Gitler, Valencia, and Villarreal that the following are equivalent for the edge ideal $I_G$ of a (simplicial) graph $G$:

  1. $G$ is bipartite
  2. $I_G^{(n)} = I_G^n$ for all $n$
  3. $I_G$ is “packed”[^6]
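
To see why bipartiteness matters, take the smallest non-bipartite graph: the triangle, with edge ideal $I = (xy, yz, xz)$. The minimal primes of $I$ come from minimal vertex covers, so $I = (x,y) \cap (y,z) \cap (x,z)$, and

\[xyz \in (x,y)^2 \cap (y,z)^2 \cap (x,z)^2 = I^{(2)}\]

while $xyz \notin I^2$ for degree reasons (every generator of $I^2$ has degree $4$). So the symbolic and ordinary squares already disagree.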

I’m sure there are other syntax/semantics theorems in commutative algebra, and I would love to hear about them!


Some Miscellany

There are way more results of this form, and I feel like I could keep listing them forever. But this blog post is already getting long, and the thought of revising it is becoming daunting. Plus, I want to include at least a bit of exposition regarding each example so that people with less familiarity can still get something out of it.

With that in mind, I really should stop. But I want to show some examples that don’t quite fit the mold of any of the examples we’ve seen so far. This miscellany section is here to scratch that itch!

This one is a bit more overtly logical in nature, as it’s a corollary of the $o$-minimality of real closed fields. But the statement itself doesn’t require any logical terminology, so I figure it counts:

> If $f : \mathbb{R} \to \mathbb{R}$ is a semialgebraic function, then we can partition $\mathbb{R} = I_1 \cup I_2 \cup \ldots \cup I_n \cup X$, where $X$ is finite and each $I_k$ is an open interval, so that $f$ is continuous on each $I_k$.

Here semialgebraicity says that the graph \(\Gamma_f = \{ (x, fx) \mid x \in \mathbb{R} \}\) of $f$ can be carved out by polynomial inequalities. This theorem says that semialgebraicity guarantees continuity away from finitely many points! This is one manifestation of Brouwer’s folklore result that (for the right definition of “definable”) all definable functions are continuous. You can find a discussion of this “theorem” here.
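
For a simple example, the step function with $fx = 0$ for $x < 0$ and $fx = 1$ for $x \geq 0$ is semialgebraic, since its graph is cut out by polynomial (in)equalities. Here the theorem’s partition is $\mathbb{R} = (-\infty, 0) \cup (0, \infty) \cup \{0\}$: the function is continuous on both open intervals, and all of the discontinuity is concentrated in the finite set $\{0\}$.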

Unrelatedly, we have definable combinatorics. The idea here is that by putting restrictions on how complicated a combinatorial object is allowed to be, we can get sharper extremal bounds for the complexity of that object. I don’t know much about this myself, but Terry Tao has talked about it on his blog (see here).

Lastly, to end on a logical theorem close to my heart, lots of families of logical formulas are decidable. For an easy example, if you promise to only ask questions with bounded quantifiers, I can always tell you whether your formula is true or false in $\mathbb{N}$. There are lots of results which say that the truth of every formula satisfying some syntactic condition is decidable, and so a computer program can tell you whether such a formula is true or false! I talked about this a bit in a talk of mine, but I’m planning to write up a full blog post on it at some point.
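
To see why bounded quantifiers are so tame, notice that each one ranges over a finite set, so brute force already decides the sentence. Here’s a minimal sketch; the nested-tuple representation of formulas is invented purely for illustration:

```python
# Decide bounded-quantifier sentences over the naturals by exhaustive search.
def evaluate(formula, env=None):
    env = env or {}
    kind = formula[0]
    if kind == 'forall':   # ('forall', var, bound, body): forall var < bound . body
        _, var, bound, body = formula
        return all(evaluate(body, {**env, var: n}) for n in range(bound))
    if kind == 'exists':   # ('exists', var, bound, body): exists var < bound . body
        _, var, bound, body = formula
        return any(evaluate(body, {**env, var: n}) for n in range(bound))
    if kind == 'atom':     # ('atom', predicate), where the predicate reads the environment
        return formula[1](env)
    raise ValueError(f"unknown formula kind: {kind}")

# "forall x < 10 . exists y < 20 . x + y = 12" -- true, since 12 - x is at most 12 < 20
sentence = ('forall', 'x', 10,
            ('exists', 'y', 20,
             ('atom', lambda e: e['x'] + e['y'] == 12)))
print(evaluate(sentence))  # True
```

(In $\mathbb{N}$ proper the bounds are terms rather than fixed numerals, but evaluating a term in context is just more of the same recursion.)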

Maybe that’s something to look forward to!


[^1]: Like any good example, this is true but far from the whole story.

    For instance, free groups (and free objects more generally) tend to be made out of pure syntax. We are able to interpret the strings of symbols as themselves having a (purely formal) group structure. This method of building semantics out of syntax is a great way to make sure that you get a “universal” object in some sense. Oftentimes it turns out that something is true of every model if and only if it’s true of this “syntactic” model, though this takes some work to formalize.

    We can go the other way as well, and make our syntax so rich it forces the semantics to have some properties. This trick of turning semantics into syntax is extremely useful throughout model theory, because our big tool, compactness, is syntactic. We do this using a gadget called the diagram of a model, and you can see a simple example of this in my talk about model theory.

[^2]: Let’s assume $S = S^{-1}$ is closed under inverses, for simplicity.

[^3]: Arguably certain homological conditions are “syntactic” too, since we’re putting restrictions on the number of generators/relators/higher syzygies (or, more commonly, on how long we need a resolution to be). This feels a little bit more abstract than the rest of the post, though, so I’m relegating it to a footnote. The stuff in the body is a little bit more down to earth, and is also obviously “syntactic” in a way that homological conditions aren’t.

[^4]: A fact which is extremely sad for me and the department, though I’ll put aside my selfishness and wish them the best! Goodness knows they deserve it ^_^.

[^5]: See Decker, Greuel, and Pfister’s Primary Decomposition: Algorithms and Comparisons for more details.

[^6]: The significance of the definition of “packed” is not one I’ll pretend to understand, but at the very least I can recount it for the interested reader.

    A (squarefree monomial) ideal is called König if it contains a regular sequence of monomials of the same length as its height.

    Now a (squarefree monomial) ideal is called Packed if every ideal obtained by setting any of the variables to $0$ or $1$ is König.

    There is a conjecture that, for a squarefree monomial ideal $I$, the symbolic and ordinary powers coincide if and only if $I$ is packed. This is called the Packing Problem, and it’s a topic of much interest.