The determinant of a matrix is just the product of its columns
This article is part of my migration effort, moving some of my articles over from the excellent Functor Network.
The determinant is unique
A fact I’ve known for a very long time, but never bothered to prove for myself, is the uniqueness of the determinant for square matrices. More precisely, let $R$ be any commutative ring with identity; then the determinant is the only $R$-valued function on $n \times n$ matrices that: (i) is linear in each column, (ii) changes sign when two columns are switched, and (iii) sends the identity matrix to $1$.
A nice way to prove this is to derive an explicit formula for the determinant of a square matrix using only these three properties. Let’s do the $2 \times 2$ matrices for a start. The linearity in the columns means we can decompose the problem into smaller ones:
$$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad \det\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + ab \det\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} + cd \det\begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix} + cb \det\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$
From the fact that switching two columns changes the sign, the determinants in the middle give out zero (a matrix with two equal columns is unchanged by the swap, so its determinant equals its own negative), and the one on the right evaluates to $-1$. Hence the determinant comes out to be $ad - bc$, as expected.
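The expansion above can be sketched as a short Python check (the names `det_basis` and `det2` are mine, not from the article): decompose each column into basis vectors, then sum the coefficients weighted by the determinants of the basis-vector matrices, which are $0$, $1$, or $-1$ by the three axioms.

```python
def det_basis(i, j):
    # determinant of the 2x2 matrix whose columns are e_i, e_j:
    # zero on repeated columns, 1 on the identity, -1 after a swap
    if i == j:
        return 0
    return 1 if (i, j) == (1, 2) else -1

def det2(a, b, c, d):
    # columns: (a, c) = a*e1 + c*e2 and (b, d) = b*e1 + d*e2
    coeffs = {1: {1: a, 2: c}, 2: {1: b, 2: d}}  # column -> basis coefficients
    # bilinearity: expand over all choices of basis vector per column
    return sum(coeffs[1][i] * coeffs[2][j] * det_basis(i, j)
               for i in (1, 2) for j in (1, 2))

print(det2(1, 2, 3, 4))  # ad - bc = 1*4 - 2*3 = -2
```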
The idea of the general proof is contained in the previous $2 \times 2$ case. The argument is as follows. From linearity in the columns, we can split the general determinant problem into computing the determinants of all matrices made up of standard basis vectors (i.e. in each column there is exactly one 1 and the rest are zeroes). As soon as two columns are identical, the determinant vanishes, hence the only determinants left to calculate are those for matrices made up of standard basis vectors that are all different, that is to say, matrices where there’s exactly one 1 in each row and each column, the rest being zeroes. In other words, the only things left are all of the column (or row) permutations of the identity matrix. From that description we can simply read off the known determinant formula:
$$\det A = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, a_{\sigma(1)1} a_{\sigma(2)2} \cdots a_{\sigma(n)n}.$$
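As a sanity check, this formula translates directly into Python (the helper names `sign` and `det` are my own): sum over all permutations, with the sign computed from the inversion count.

```python
from itertools import permutations
from math import prod

def sign(perm):
    # parity of a permutation via its inversion count
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det(A):
    # Leibniz formula: sum over permutations sigma of
    # sgn(sigma) * A[sigma(1)][1] * ... * A[sigma(n)][n]
    n = len(A)
    return sum(sign(p) * prod(A[p[j]][j] for j in range(n))
               for p in permutations(range(n)))

print(det([[1, 2], [3, 4]]))  # -2
```

This is $O(n \cdot n!)$, so it is a proof made executable rather than a practical algorithm.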
Computing the determinant is just multiplying the columns
Let’s get a little more abstract. Consider $V$ to be an $n$-dimensional vector space over some field $k$. We could use free $R$-modules instead, where $R$ is a commutative ring with unity, but I believe you’ll forgive me if we stick with the vector space. The $n$-th exterior power $\Lambda^n V$ is a one-dimensional vector space over $k$. The proof is left to the reader; it’s fun and it uses the same idea as the proof outlined above. Anyways, writing $e_1, \dots, e_n$ for a basis of $V$, the $n$-blade $e_1 \wedge \cdots \wedge e_n$ is a basis for $\Lambda^n V$.
Multilinear alternating maps $f \colon V^n \to k$ are in a natural bijective correspondence with linear maps $\tilde{f} \colon \Lambda^n V \to k$ such that $f = \tilde{f} \circ \wedge$, where $\wedge \colon V^n \to \Lambda^n V$ sends $(v_1, \dots, v_n)$ to $v_1 \wedge \cdots \wedge v_n$. The proof is left to the reader hehe. Identifying an $n \times n$ matrix with the $n$-tuple of its columns, there is hence a unique multilinear alternating map that sends the identity matrix to $1$; we call that map the determinant. What!! Isn’t the determinant a number?? Well, yes and no. We often identify complicated objects with their images for simplicity. The determinant-number is the image of the determinant-map. Notice that we have one determinant-map, which is able to compute determinants for any correctly-sized matrix you throw at it.
Basically as a consequence of the universal property I mentioned in the previous paragraph, defining the determinant in this way gives us the following equation:
$$v_1 \wedge v_2 \wedge \cdots \wedge v_n = \det(v_1, \dots, v_n)\; e_1 \wedge \cdots \wedge e_n.$$
Hence to compute the determinant you just multiply the columns using the wedge product, and reduce in $\Lambda^n V$ until you find it.
For instance,
$$(a e_1 + c e_2) \wedge (b e_1 + d e_2) = ad\, e_1 \wedge e_2 + cb\, e_2 \wedge e_1 = (ad - bc)\, e_1 \wedge e_2.$$
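This reduction mechanizes nicely. A minimal Python sketch (representation and names are my own, not from the article): a $k$-blade combination is a dict from sorted index tuples to coefficients; wedging a vector onto it kills repeated indices and picks up a sign for each transposition needed to re-sort the factors.

```python
def wedge_vector(blade, vec):
    # blade: dict mapping sorted index tuples -> coefficient (element of a Λ^k V)
    # vec:   coefficients of a vector in the basis e_0, ..., e_{n-1}
    out = {}
    for idxs, coef in blade.items():
        for i, x in enumerate(vec):
            if x == 0 or i in idxs:
                continue  # e_i ∧ e_i = 0 kills repeated factors
            pos = sum(1 for j in idxs if j < i)
            # moving e_i past the later factors costs one sign flip each
            s = -1 if (len(idxs) - pos) % 2 else 1
            key = tuple(sorted(idxs + (i,)))
            out[key] = out.get(key, 0) + s * coef * x
    return out

def det(cols):
    # multiply the columns with the wedge product, then read off
    # the coefficient of the top blade e_0 ∧ ... ∧ e_{n-1}
    blade = {(): 1}  # the empty blade, i.e. the scalar 1
    for v in cols:
        blade = wedge_vector(blade, v)
    return blade.get(tuple(range(len(cols))), 0)

print(det([[1, 3], [2, 4]]))  # columns (1,3) and (2,4): 1*4 - 2*3 = -2
```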
If you can imagine that the $n$-blade $e_1 \wedge \cdots \wedge e_n$ represents the volume of the $n$-dimensional unit cube, then this formula says the determinant is the volume of the $n$-parallelotope spanned by the columns of the matrix.