So to start out, I’m just going through the Homotopy Type Theory book. The introduction is really inspiring and exciting! It looks like it’s what I hoped it would be.

Sections 1.1-1.3 seem to be pretty straightforward, although I still feel like I’m missing some of the nuance behind judgements, definitional equalities, and the like. Function types make sense, since I have already run into them through Haskell programming. The universes splitting into an infinite hierarchy was slightly disappointing, but completely expected.
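The universe hierarchy the book describes can actually be poked at directly in a proof assistant. Here's a quick Lean sketch (not from the book — just an illustration of the same idea) showing that each universe is itself a type living one level up:

```lean
-- Each universe lives in the next one up: Type : Type 1 : Type 2 : ...
#check (Nat : Type)        -- an ordinary type lives in the first universe
#check (Type : Type 1)     -- the first universe is itself a type, one level up
#check (Type 1 : Type 2)   -- and so on, forever
```

If `Type : Type` were allowed instead, Girard's paradox would make the system inconsistent, which is exactly why the hierarchy has to be infinite.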

The section on dependent function types is where I stopped having a good intuition for what was going on. The notation for a function type, A → B, makes intuitive sense to me, but for a dependent function type, the notation ∏_{x:A} B(x) is just really hard for me to wrap my head around. Why are they using the product symbol for this? I skipped around a little looking for something that might be helpful, and in section 1.11, they mention that you can think of ∏_{x:A} B(x) as saying: for all x : A, B(x) holds. The idea is that types are like logical propositions, and a type having an object basically means the proposition is true. That helps ground the concept for me, not perfectly, but enough to continue.
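To make the "codomain depends on the input value" idea concrete, here is a minimal Lean sketch (the names `P` and `pick` are mine, not the book's): a type family over Bool and a dependent function into it.

```lean
-- A family of types indexed by Bool: the codomain varies with the input value
def P : Bool → Type
  | true  => Nat
  | false => String

-- An element of ∏ (b : Bool), P b: a function whose *result type*
-- depends on which argument it receives
def pick : (b : Bool) → P b
  | true  => 0        -- here the required type is P true, i.e. Nat
  | false => "zero"   -- here it is P false, i.e. String
```

An ordinary function type A → B is just the special case where the family B(x) is constant.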

Pressing on, section 1.5 has a nice list of what you need to specify a type. Recursion and induction are tricky for me to get at first, but after going through to section 1.10 and seeing many different examples, the concept starts to make sense. The recursor for a type is just a function that you use to define functions on that type, and induction is basically the same thing for dependent functions. For example, if you want to specify a function of type A × B → C, you take a function of type A → B → C, and then plug this into the recursor (along with some other data) to get your function. This kind of reminds me of currying….
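That product-type recursor can be written out in a few lines of Lean (a sketch with my own names — Lean's built-in products have primitive projections, so this is for illustration):

```lean
-- The recursor for A × B: to build f : A × B → C, supply g : A → B → C
def recProd {A B C : Type} (g : A → B → C) : A × B → C :=
  fun p => g p.1 p.2

-- e.g. the first projection, defined via the recursor
def fst' {A B : Type} : A × B → A := recProd fun a _ => a

#eval fst' (3, "hi")   -- 3
```

And the currying hunch is exactly right: `recProd` is essentially `uncurry`, turning a curried function A → B → C into one on pairs.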

There was one thing that made me feel a bit uncomfortable. You can construct an object of type A × B just by taking a tuple (a, b) for a : A and b : B. But it is not assumed that *every* object of type A × B has this form; this can apparently be proved using the inductor. But then, how is the recursor well-defined? It seemed like it already just assumes you can split any object of type A × B into a tuple (a, b). After rereading the section, I realized that the function is defined for *elements that are tuples*, and that’s it! That’s all you need to know to define a function.
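The "every element is a tuple" fact really is a theorem, not an assumption. A Lean sketch of the shape of the argument (Lean's structures come with projections built in, so this isn't literally the book's construction, but it shows how the induction principle does the work):

```lean
-- The induction principle: to prove P p for every p : A × B,
-- it is enough to prove P (a, b) for arbitrary a and b
def indProd {A B : Type} {P : A × B → Prop}
    (g : (a : A) → (b : B) → P (a, b)) : (p : A × B) → P p :=
  fun p => g p.1 p.2

-- Using it, "every element equals a tuple" becomes a theorem
theorem pair_eta {A B : Type} (p : A × B) : p = (p.1, p.2) :=
  indProd (P := fun q => q = (q.1, q.2)) (fun _ _ => rfl) p
```

The key move is that the property only needs to be checked on tuples, and induction then extends it to all elements.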

Also, I thought at first that recursion was just the baby version of induction, but actually, it looks like you need to have the recursor first to be able to even write down the inductor.

Dependent pair types were also a bit confusing, but things are starting to fit together better now. In section 1.8, on the type of booleans 2, it’s finally explained why the product and sum notation for dependent types makes sense. You can think of A × B as ∏_{x:2} rec₂(𝒰, A, B, x), and A + B as Σ_{x:2} rec₂(𝒰, A, B, x). I like that!
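This correspondence can be spelled out in Lean too. Here's a sketch (my names; `fam` plays the role of rec₂(𝒰, A, B, –)) showing one direction of each equivalence:

```lean
-- rec₂(𝒰, A, B, x): the boolean recursor applied at the universe,
-- giving a type family over Bool
def fam (A B : Type) : Bool → Type
  | true  => A
  | false => B

-- A dependent function over Bool packages up exactly an A and a B: a ∏ is a product
def toProd {A B : Type} (f : (b : Bool) → fam A B b) : A × B :=
  (f true, f false)

-- A dependent pair over Bool holds either an A or a B, tagged: a Σ is a sum
def toSum {A B : Type} : (Σ b : Bool, fam A B b) → A ⊕ B
  | ⟨true,  a⟩ => Sum.inl a
  | ⟨false, b⟩ => Sum.inr b
```

So ∏ generalizes × (one component for *every* index) and Σ generalizes + (one component for *some* index), which is exactly why the product and sum symbols were chosen.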

For the natural number type, I was really confused by how we could use the function itself inside its own recursor! Wasn’t the whole point of recursors to tell you how to build functions from that type? Luckily, section 1.10 clears up the issue – this is simply a convenient notational shortcut.
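To see the unsugared version, here is the natural number recursor in Lean (a sketch with my own names), used to define addition the way the book does — by taking the "codomain" C to itself be a function type:

```lean
-- rec_ℕ: to build f : ℕ → C, give a base case c₀ : C
-- and a step cs : ℕ → C → C that gets the *previous result*, not f itself
def recNat {C : Type} (c0 : C) (cs : Nat → C → C) : Nat → C
  | 0     => c0
  | n + 1 => cs n (recNat c0 cs n)

-- Addition with C := ℕ → ℕ, so the recursor returns a function
def add : Nat → Nat → Nat :=
  recNat (fun n => n) (fun _ addM => fun n => Nat.succ (addM n))

#eval add 2 3   -- 5
```

The step function never mentions `add`; it only receives the result for the predecessor (`addM`). Pattern-matching definitions that call themselves are shorthand for exactly this.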

Section 1.11 was really useful for me, and helped me develop a good bit of intuition for what was going on. It was really nice to think about types as being propositions, with objects of that type witnessing the type’s truth. Since there can be multiple objects of a certain type, more information is kept around than in ordinary logic. At first I felt kinda weird about the law of excluded middle being excluded, but after seeing that you could still add it back in if you needed it, that it really just gave you another level of nuance in proofs, *and* that you automatically got nice constructivity properties – it began to feel more natural and good to me.
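The propositions-as-types reading is literal in Lean: a proof is just a term of the proposition's type, and plenty of logic goes through without excluded middle. A small sketch (my own theorem names):

```lean
-- A proposition is a type; proving it means exhibiting a term of that type
theorem andFlip {A B : Prop} : A ∧ B → B ∧ A :=
  fun ⟨a, b⟩ => ⟨b, a⟩   -- take the witnesses apart and repackage them

-- Double-negation *introduction* needs no excluded middle...
theorem doubleNegIntro {A : Prop} : A → ¬¬A :=
  fun a na => na a
```

The other direction, ¬¬A → A, is the one that requires adding excluded middle back in — which is exactly the extra nuance the book is pointing at.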
