Examples of Syntax/Semantics Theorems Throughout Math
07 Sep 2021
I’ve been promising a blog post on syntax and semantics for a long time now. There’s a lot to say, as this duality underlies much of mathematical logic, but I want to focus on one particular instance of the syntax/semantics connection which shows up everywhere in mathematics. I talked about this briefly in a talk last year (my blog post debriefing from the talk is here) but it’s a kind of squishy and imprecise observation. Because of that, this post is going to be less expository than my usual ones, and is instead going to be a “definition by examples” of sorts. I’ll try to show examples from as many branches of math as possible, and hopefully by the end it becomes clear what the flavor of these theorems is, as well as how ubiquitous they are!
This post has been in the pipeline for a few months now, and I’m glad to have it finished! It was pretty tiring to revise, but was also a welcome break from all the analysis I’ve been reading lately. I hope you find it interesting ^_^!
As a reminder, Syntax is the part of mathematics that deals with symbols, and rules for manipulating them. So the syntax associated to groups is variables and the symbols $\cdot$, $(-)^{-1}$, and $e$ (alongside the logical symbols $\forall$, $\exists$, $\land$, $\lor$, etc.).
Syntax, it turns out, gives us a way of asking questions or talking about an object. For instance we can ask the question

$$\forall x . \forall y . x \cdot y = y \cdot x$$

that is, “do any two elements commute?”

Dually, Semantics tells us what the syntax means. If you fix an actual group $G$, each formula becomes true or false when we interpret it in $G$. As an example, the formula above is true in $\mathbb{Z}$, but false in $S_3$.
This is a very model theoretic notion of syntax and semantics, but I think it’s a good fundamental example1. The kind of theorem that I’m going to focus on in this post, though, is descriptive in nature. The idea is this:
If your object is simple syntactically, then it must be simple semantically as well.
Another way to phrase this is
If your object has a simple description, it can’t be too complicated.
Now let’s see some examples of this idea in action! I would love to hear any other examples that you can think of as well ^_^.
Definable Sets in a Topological Space
Let’s start simple. Say $X$ is a topological space, and $f, g : X \to \mathbb{R}$ are continuous. Then:

- $\{ x \mid f(x) \leq g(x) \}$ and $\{ x \mid f(x) = g(x) \}$ are closed
- $\{ x \mid f(x) < g(x) \}$ and $\{ x \mid f(x) \neq g(x) \}$ are open
This is the kind of theorem that you probably know intuitively, but you might have never thought about formally. Notice that just by looking at the syntax (that is, how the set is defined) we can learn something nontrivial about the set itself (in this case, its topological complexity).
As a simple exercise, if you’ve never seen this before you should prove this!
You should recall that $\{ (a, b) \in \mathbb{R}^2 \mid a \leq b \}$ is a closed subset of the plane, and that the map $x \mapsto (f(x), g(x))$ is continuous.
Do you see how to use this to show the claim?
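In case you want to check your answer, here’s one way the computation can go:

$$\{ x \mid f(x) \leq g(x) \} = (f, g)^{-1} \big ( \{ (a, b) \in \mathbb{R}^2 \mid a \leq b \} \big )$$

This exhibits our set as the preimage of a closed set under the continuous map $x \mapsto (f(x), g(x))$, so it’s closed. The set where $f(x) = g(x)$ works the same way (using the diagonal instead), and the other two sets are the complements of these.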
Let’s make things a little more complicated. Again, say $X$ is a topological space and $f, g : X \to \mathbb{R}$ are continuous.
First, let’s look at things that are quantifier free. We can turn logical symbols in the definition of a set into boolean operations on sets themselves. So $\land$ becomes $\cap$, $\lor$ becomes $\cup$, and $\lnot$ becomes complementation.

Using the fact from earlier that the sets defined by the atomic formulas ($=$, $\leq$, $<$, and $\neq$) are closed or open, this tells us that every quantifier free definable set is a finite boolean combination of open and closed sets.
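As a quick example of this dictionary in action:

$$\{ x \mid f(x) < g(x) \ \lor \ f(x) = 0 \} = \{ x \mid f(x) < g(x) \} \cup \{ x \mid f(x) = 0 \}$$

This is a union of an open set and a closed set, and we learned that without knowing anything about $f$ and $g$ beyond their continuity.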
That actually includes infinite connectives! For instance, say we have a sequence of continuous functions $f_n : X \to \mathbb{R}$. Then

$$\{ x \mid \forall n .\ f_n(x) \geq 0 \} = \bigcap_{n \in \mathbb{N}} \{ x \mid f_n(x) \geq 0 \}$$

is closed, since each set in the intersection is.

A countable conjunction/disjunction is often viewed as a countable quantifier, since $\forall n \in \mathbb{N} .\ \varphi_n$ says the same thing as $\bigwedge_{n \in \mathbb{N}} \varphi_n$ (and dually for $\exists$ and $\bigvee$).
Now we’ve extended our syntax to be more expressive. We can now use countable conjunctions/disjunctions, or equivalently countable quantifiers. Since our syntax is slightly more complicated, we should be able to describe more complex sets in this way.
As another fun exercise, you should check that every set definable with countable quantifiers is borel. In fact, there’s a hierarchy of complexity for borel sets, and the position of a definable set in this hierarchy is bounded by the quantifier complexity of its definition.
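If you haven’t seen the borel hierarchy before, here’s a rough picture (I’m glossing over some details, and the hierarchy really continues transfinitely):

$$\Sigma^0_1 = \text{the open sets}, \qquad \Pi^0_1 = \text{the closed sets}$$

$$\Sigma^0_{n+1} = \text{countable unions of } \Pi^0_n \text{ sets}, \qquad \Pi^0_{n+1} = \text{countable intersections of } \Sigma^0_n \text{ sets}$$

Existential countable quantifiers (unions) push you up the $\Sigma$ side, and universal ones (intersections) push you up the $\Pi$ side.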
A big part of descriptive set theory is trying to turn real-valued quantifiers into countable quantifiers. For instance, let’s look at the following classic analysis exercise:
Let $(f_n)$ be a sequence of continuous functions $\mathbb{R} \to \mathbb{R}$. Show that $\{ x \mid \lim_n f_n(x) \text{ exists} \}$ is borel.
We can solve this by writing down what it means to be convergent, and converting this syntactic definition into a semantic one. Since we’re gunning for borel, we know we’re only allowed to use natural number quantifiers. This leads to the following argument:
Since $\mathbb{R}$ is complete, the sequence $(f_n(x))$ converges exactly when it’s cauchy, and we only need to check the cauchy condition for $\epsilon$ of the form $1/k$. So our set is

$$\{ x \mid \forall k .\ \exists N .\ \forall m, n \geq N .\ |f_m(x) - f_n(x)| < 1/k \} = \bigcap_{k} \bigcup_{N} \bigcap_{m, n \geq N} \{ x \mid |f_m(x) - f_n(x)| < 1/k \}$$

Each innermost set is open (since the $f_n$ are continuous!), so the whole thing is borel.
You might be wondering what happens if we do allow real valued quantifiers. It turns out there are still theorems bounding the complexity of our sets!
Sets with only existential real quantifiers are called Analytic, and sets with only universal real quantifiers are called Coanalytic. A (very!) nontrivial theorem in descriptive set theory says that both classes of sets are universally measurable, have the property of baire, etc. See here for a proof.
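Concretely, a set $A$ is analytic when it can be written as

$$A = \{ x \mid \exists y \in \mathbb{R} .\ (x, y) \in B \}$$

for some borel set $B$, and a set is coanalytic when its complement is analytic.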
But now you might be wondering what happens if we allow alternating quantifiers! For instance, what if we have a set of the form $\{ x \mid \exists y .\ \forall z .\ \varphi(x, y, z) \}$, where $y$ and $z$ range over the reals? It turns out the answer is sensitive to your ambient set theory!
The relevant search term is projective determinacy, which follows from
certain large cardinal axioms. I was planning to write up a blog post about
this, but I’ve been beaten to the punch! There’s a great introduction at
the blog Complex Projective 4-Space.
Restricting the Syntax of an Algebraic Structure
Say we have a group $G = \langle S \mid R \rangle$, presented by generators and relations.
It’s often the case in algebra that we have some syntactic description of an algebraic object like this. A general life pro tip in this situation is to try to make restrictions on the syntax. Oftentimes this will give you surprisingly powerful restrictions on how complicated your object itself is. That is, powerful restrictions on the semantics.
Again, let’s start easy. The Word Problem for a group $G$ with a finite generating set $S$2 asks us to decide whether a given word in the generators represents the identity element of $G$.

The word problem is a subset $W \subseteq S^*$ of the set of all finite strings of generators: namely, the strings which multiply out to the identity in $G$. The following are all known:
- $W$ is regular if and only if $G$ is finite.
- $W$ is context free if and only if $G$ is virtually free.
- $W$ is computable (that is, the word problem is decidable, or solvable) if and only if $G$ embeds into every algebraically closed group.
- $W$ is always computably enumerable, so this does not actually pose any restrictions on $G$.
Notice how natural restrictions on the syntax of $G$ (the language $W$) give us natural restrictions on the semantics of $G$ (its group structure)!
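To make this concrete, the word problem for a free group is about as easy as it gets: a word represents the identity exactly when it freely reduces to the empty word. Here’s a quick sketch in python (the encoding of generators is just something I made up for illustration):

```python
# Decide the word problem in the free group on {a, b}.
# A word is a string of generators, where "A" and "B" stand for
# the inverses of "a" and "b".

INVERSE = {"a": "A", "A": "a", "b": "B", "B": "b"}

def is_identity(word):
    """Freely reduce `word` with a stack, cancelling adjacent
    inverse pairs. The word represents the identity exactly
    when everything cancels."""
    stack = []
    for letter in word:
        if stack and stack[-1] == INVERSE[letter]:
            stack.pop()  # cancel a pair like "aA" or "Bb"
        else:
            stack.append(letter)
    return not stack

print(is_identity("abBA"))  # True:  a b b^-1 a^-1 = e
print(is_identity("abAB"))  # False: the commutator of a and b
```

(Reassuringly, this squares with the list above: free groups are virtually free, and the stack in this algorithm is secretly a pushdown automaton witnessing that $W$ is context free.)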
One particularly famous result in this vein is Gromov’s theorem on groups of polynomial growth. This theorem says that a group has polynomial growth if and only if it is virtually nilpotent.
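Here “polynomial growth” is a genuinely syntactic condition: writing $S$ for a finite generating set, it asks that the number of elements we can spell with words of length at most $n$,

$$\gamma_S(n) = \# \{ g \in G \mid g = s_1 s_2 \cdots s_k \text{ with each } s_i \in S \text{ and } k \leq n \},$$

be bounded by a polynomial in $n$.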
This is also leaving aside all of the work done on one relator groups.
Mostly I’m leaving this aside because I’m not particularly familiar with
this area. I know that there are theorems of this form, though. For instance,
if the one relator is not a proper power, then the group is torsion free.
But why stop at group theory? The notion of “presentation” exists in commutative algebra as well. Can we find similar theorems there3?
Obviously the answer is “yes”, but I’m still very new to commutative algebra, and I wasn’t familiar with any theorems in this area. I knew that there should be some examples in the theory of monomial ideals, but I didn’t know of any concrete examples. Thankfully, in my first year I got to know Eloísa Grifo and Alessandra Costantini, and even though they’re both leaving4, they were nice enough to give me some examples!
First, what is a monomial ideal? Well, it’s more or less what it says on the tin. If we work in a polynomial ring $k[x_1, \ldots, x_n]$, a monomial ideal is an ideal generated by monomials.
There are lots of algorithms in commutative algebra (Buchberger’s algorithm and gröbner bases come to mind) which treat polynomials as formal symbols to be pushed around. So putting restrictions on what kinds of polynomials we have to work with is a syntactic condition, which we should expect to give us nice semantic theorems!
Indeed, monomial ideals admit particularly simple primary decompositions. Moreover, there is an extremely simple algorithm to compute the primary decomposition of a monomial ideal, which starkly contrasts the difficulty of the general computation5.
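For instance (a standard example, not from any particular reference): intersections of monomial ideals are computed by taking lcms of generators, so decompositions like

$$(x^2, xy) = (x) \cap (x^2, y)$$

fall out immediately. Here $(x)$ is prime and $(x^2, y)$ is $(x, y)$-primary, so this really is a primary decomposition, and we found it just by looking at the monomials.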
Monomial ideals (in particular the squarefree monomial ideals) can be studied with combinatorial tools using the dictionary of Stanley-Reisner Theory. This theory associates simplicial complexes to (squarefree) monomial ideals, and geometric conditions on this complex get turned into syntactic conditions on the monomials generating the ideal (and also into semantic theorems on the ideal and the quotient ring).
One deep open problem in commutative algebra is the question
When do the symbolic powers $I^{(n)}$ of an ideal coincide with the ordinary powers $I^n$?
In the special case of squarefree monomial ideals whose simplicial complex is a graph, we completely understand this problem! It is a theorem of Gitler, Valencia, and Villarreal that the following are equivalent for the edge ideal $I(G)$ of a (simplicial) graph $G$:

- $I(G)^{(n)} = I(G)^n$ for all $n$
- $G$ is bipartite
- $I(G)$ is “packed6”
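In case you haven’t seen them, edge ideals are exactly what you’d guess: for a graph $G$ on the vertices $\{1, \ldots, n\}$ we set

$$I(G) = \left ( x_i x_j \ \middle | \ \{i, j\} \text{ is an edge of } G \right )$$

inside $k[x_1, \ldots, x_n]$. For example, the path $1 - 2 - 3$ gives $I(G) = (x_1 x_2, x_2 x_3)$, and since paths are bipartite, the theorem says its symbolic and ordinary powers agree.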
I’m sure there are other syntax/semantics theorems in commutative algebra, and I would love to hear about them!
Some Miscellany
There are way more results of this form, and I feel like I could keep listing them forever. But this blog post is already getting long, and the thought of revising it is becoming daunting. Plus, I want to include at least a bit of exposition regarding each example so that people with less familiarity can still get something out of it.
With that in mind, I really should stop. But I want to show some examples that don’t quite fit the mold of any of the examples we’ve seen so far. This miscellany section is here to scratch that itch!
This one is a bit more overtly logical in nature, as it’s a corollary of quantifier elimination for the reals (the Tarski-Seidenberg theorem):

If $f : \mathbb{R} \to \mathbb{R}$ is semialgebraic, then it is continuous outside of finitely many points.

Here semialgebraicity says that the graph of $f$ is cut out by a finite boolean combination of polynomial equations and inequalities.
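For example, $f(x) = |x|$ is semialgebraic, since its graph is

$$\{ (x, y) \in \mathbb{R}^2 \mid y^2 = x^2 \ \land \ y \geq 0 \}$$

even though $|x|$ is not itself a polynomial.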
Unrelatedly, we have definable combinatorics. The idea here is that by putting restrictions on how complicated a combinatorial object is allowed to be, we can get sharper extremal bounds for the complexity of that object. I don’t know much about this myself, but Terry Tao has talked about it on his blog (see here).
Lastly, to end on a logical theorem close to my heart, lots of families of
logical formulas are decidable. For an easy example,
if you promise to only ask questions with bounded quantifiers, I can always
tell you whether your formula is true or false in $(\mathbb{N}, +, \times)$.
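To see why bounded quantifiers are so tame, here’s a toy decision procedure in python. The names here are all invented for illustration, but the moral is faithful: a bounded quantifier is just a finite loop.

```python
# Decide bounded-quantifier statements about natural numbers by
# brute force: a bounded quantifier only ranges over a finite set,
# so we can simply check every case.

def forall_below(bound, phi):
    """Decide 'for all n < bound, phi(n)' exhaustively."""
    return all(phi(n) for n in range(bound))

def exists_below(bound, phi):
    """Decide 'there is an n < bound with phi(n)' exhaustively."""
    return any(phi(n) for n in range(bound))

# "Every n < 30 is a sum a*a + b*b with a, b < 30": false (try n = 3).
print(forall_below(30, lambda n:
    exists_below(30, lambda a:
        exists_below(30, lambda b: a * a + b * b == n))))

# "There is an even prime below 10": true (namely 2).
print(exists_below(10, lambda p:
    p >= 2 and all(p % d != 0 for d in range(2, p)) and p % 2 == 0))
```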
There’s a lot more to say about decidability, but it deserves a post of its own. Maybe that’s something to look forward to!
-
Like any good example, this is true but far from the whole story.
For instance, free groups (and free objects more generally) tend to be made out of pure syntax. We are able to interpret the strings of symbols as themselves having a (purely formal) group structure. This method of building semantics out of syntax is a great way to make sure that you get a “universal” object in some sense. Oftentimes it turns out that something is true of every model if and only if it’s true of this “syntactic” model, though this takes some work to formalize.
We can go the other way as well, and make our syntax so rich it forces the semantics to have some properties. This trick of turning semantics into syntax is extremely useful throughout model theory, because our big tool, compactness, is syntactic. We do this using a gadget called the diagram of a model, and you can see a simple example of this in my talk about model theory. ↩
-
Let’s assume the generating set $S$ is closed under inverses for simplicity. ↩
-
Arguably certain homological conditions are “syntactic” too, since we’re putting restrictions on the number of generators/relators/higher syzygies (or, more commonly, on how long we need a resolution to be). This feels a little bit more abstract than the rest of the post, though, so I’m relegating it to a footnote. The stuff in the body is a little bit more down to earth, and is also obviously “syntactic” in a way that homological conditions aren’t. ↩
-
A fact which is extremely sad for me and the department, though I’ll put aside my selfishness and wish them the best! Goodness knows they deserve it ^_^. ↩
-
See Decker, Greuel, and Pfister’s Primary Decomposition: Algorithms and Comparisons for more details. ↩
-
The significance of the definition of “packed” is not one I’ll pretend to understand, but at the very least I can recount it for the interested reader.
A (squarefree monomial) ideal is called König if it contains a regular sequence of monomials of the same length as its height. Now a (squarefree monomial) ideal is called packed if every ideal obtained by setting any of the variables to $0$ or $1$ is König.
There is a conjecture that for all squarefree monomial ideals, the symbolic and ordinary powers coincide if and only if the ideal is packed. This is called the Packing Problem, and it’s a topic of much interest. ↩