The deep linear network (DLN) is a phenomenological matrix model of training dynamics in deep learning, introduced by the computer scientists Arora, Cohen, and Hazan. Recent work by several groups has revealed a rich mathematical structure in this model. This talk will focus on several surprising geometric properties of the DLN and some of their consequences for training dynamics. It is based on joint work with Nadav Cohen (Tel Aviv) and several students at Brown (Lulabel Ruiz Seitz, Zsolt Veraszto, Tianmin Yu).
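For context, a standard formulation of the DLN (the talk's exact setup may differ) overparametrizes a single linear map as a product of $N$ factor matrices and trains the factors by gradient flow on a quadratic loss:

```latex
% Deep linear network: overparametrize the end-to-end map W as a
% product of N factors and evolve each factor by gradient flow.
W \;=\; W_N W_{N-1} \cdots W_1,
\qquad
E(W_1,\dots,W_N) \;=\; \tfrac{1}{2}\,\bigl\| W_N \cdots W_1 - Y \bigr\|_F^2,
\qquad
\dot W_p \;=\; -\,\frac{\partial E}{\partial W_p},
\quad p = 1,\dots,N,
```

where $Y$ is a target matrix. The induced dynamics of the product matrix $W$ differ from plain gradient flow on $E(W)$, and it is this induced flow whose geometric structure the talk concerns.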