I recently saw a post on Quora asking what people generally find exciting about Linear Algebra, and it really took me back, since Linear Algebra was the first thing in the more modern part of mathematics that I fell in love with, thanks to Dr Erwin. I decided to write a Mathemafrica post on concepts that I believe are foundational in Linear Algebra, or at least concepts whose beauty almost gets me in tears (of course this is only a really small part of what you would expect to see in a proper first Linear Algebra course). I did my best to keep it as fluffy as I saw necessary. I hope you will find some beauty as well in the content. If not, then maybe it will be useful for the memes. The post is incomplete as it stands. It has been suggested that it could be made more accessible to a wider audience by building on it, so I shall work on that, but for now, enjoy this! (I will be happy to explain anything.)
Introduction
So far, there are two posts on Mathemafrica under my name. The first one dealt in a more general sense with counting objects in sets, introducing some ways to do this using functions. The second post gave a very informal introduction to vector spaces and linear independence of vectors, with examples in 3D space. This post shall take some ideas from both, most directly from the second post. In that spirit, without much repetition:
Vector Spaces
The set of real numbers together with the usual addition and multiplication of numbers is an example of a class of mathematical structures known as fields. In what is about to follow, fields will be denoted by $\mathbb{F}$. Feel free to think of $\mathbb{F}$ as being $\mathbb{R}$, but remember that this need not be the case.
A vector space $V$ over $\mathbb{F}$ is a non-empty set of elements called vectors, with two laws of combination: vector addition and scalar multiplication, satisfying the following properties:
1. To every pair of vectors $x, y$, there is an associated vector in $V$ called their sum, denoted by $x + y$.
2. Addition is associative: $(x + y) + z = x + (y + z)$ for all $x, y, z \in V$.
3. There exists a vector, $0$, such that $x + 0 = x$ for all $x \in V$.
4. Each element, $x$, in $V$ has an inverse, $-x$, such that $x + (-x) = 0$.
5. Addition is commutative: $x + y = y + x$.
6. To every scalar, $\alpha$, in $\mathbb{F}$, and vector $x$ in $V$, there is a unique vector called the product of $\alpha$ and $x$, denoted $\alpha x$.
7. Scalar multiplication is associative: $\alpha(\beta x) = (\alpha\beta)x$.
8. Scalar multiplication is distributive with respect to vector addition: $\alpha(x + y) = \alpha x + \alpha y$.
9. Scalar multiplication is distributive with respect to scalar addition: $(\alpha + \beta)x = \alpha x + \beta x$.
10. Lastly, $1x = x$, where $1$ is the unit scalar in $\mathbb{F}$.
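For readers who like to experiment, here is a minimal sketch (not part of the original post) that checks a few of these axioms numerically for $\mathbb{R}^3$ with the usual componentwise operations; the particular vectors and scalars are arbitrary choices made purely for illustration, and numpy is assumed to be available.

```python
import numpy as np

# Work in R^3 with the usual componentwise addition and scalar multiplication.
x = np.array([1.0, -2.0, 3.0])
y = np.array([0.5,  4.0, -1.0])
z = np.array([2.0,  0.0,  7.0])
a, b = 2.0, -3.0          # scalars from the field R
zero = np.zeros(3)        # the zero vector

assert np.allclose((x + y) + z, x + (y + z))    # addition is associative
assert np.allclose(x + zero, x)                 # zero vector acts as the identity
assert np.allclose(x + (-x), zero)              # every vector has an additive inverse
assert np.allclose(x + y, y + x)                # addition is commutative
assert np.allclose(a * (x + y), a * x + a * y)  # distributivity over vector addition
assert np.allclose((a + b) * x, a * x + b * x)  # distributivity over scalar addition
assert np.allclose(1.0 * x, x)                  # the unit scalar does nothing
print("All checked axioms hold for these sample vectors.")
```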
Bases
A basis of a vector space is a set that contains the maximum number of linearly independent vectors in the vector space. As a reminder, it shall be mentioned that a set of vectors $\{v_1, v_2, \dots, v_n\}$ is linearly independent if the following is true: $\alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_n v_n = 0$ only when $\alpha_1 = \alpha_2 = \dots = \alpha_n = 0$ (i.e. if one takes any one of the vectors, it cannot be written in terms of the others in the same set). It is said that the set containing just these vectors, $\{v_1, v_2, \dots, v_n\}$, is linearly independent. If this set has the largest number of linearly independent vectors that one can find in the vector space, then it is said to be a basis of the vector space. Intuitively, this means that any element, $v$, of the vector space can be written as $v = \alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_n v_n$ (i.e. every other element in the vector space can be written using the elements of the basis).
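As a quick illustrative sketch (not from the original post), one can test linear independence numerically by stacking the candidate vectors as the columns of a matrix and checking its rank; the specific vectors below are an arbitrary example, and the rank and solve routines come from numpy.

```python
import numpy as np

# Candidate set of vectors in R^3, stacked as the columns of a matrix.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 0.0])
A = np.column_stack([v1, v2, v3])

# The set is linearly independent exactly when the rank equals the number of vectors.
print(np.linalg.matrix_rank(A))   # 3 -> independent; 3 independent vectors in R^3 form a basis

# Any vector of the space is a combination of the basis: solve for the coefficients.
v = np.array([2.0, -1.0, 5.0])
coeffs = np.linalg.solve(A, v)    # alpha_1, alpha_2, alpha_3 with v = sum alpha_i v_i
print(coeffs, np.allclose(A @ coeffs, v))
```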
Linear Maps
A lot of people know what a linear function is, for instance $f(x) = 2x$. More generally, linear functions are said to be functions of the form $f(x) = ax$ for some constant $a$. These functions tend to be nicer (for instance, they fix the origin), and the idea of a linear function, extended to linear algebra, allows the study of a lot of things related to the structure of vector spaces, but of course a lot of other things in addition to that. Starting off gently, a short discussion is due.
Let $V$ and $W$ be vector spaces over some field $\mathbb{F}$. A linear map is defined as follows: $T: V \to W$ such that $T(\alpha x + \beta y) = \alpha T(x) + \beta T(y)$ for all vectors $x, y \in V$ and scalars $\alpha, \beta \in \mathbb{F}$. This simply means that if one has a linear combination of vectors and they apply the linear map on the linear combination, then the result they obtain is the same as when they 'act' on each vector by the map and then multiply by the scalar that multiplies the corresponding vector. It is worth noting that in the above $\alpha x + \beta y \in V$ while $T(\alpha x + \beta y) \in W$. The theory of such operators is a powerful one that forms the basis of the wonders of linear algebra. Linear maps in this study are usually thought of as transformations of vectors in a vector space (because that is what they are), so this post shall adopt that naming.
The focus of this post is of course not linear transformations, so no extended example shall be provided here, but one can be found at: Linear Transformations.
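Still, a small numerical sketch might help fix the idea: any matrix gives a linear map via matrix-vector multiplication, and one can check the defining property on sample vectors. The matrix and vectors below are random illustrative choices of my own, assuming numpy is available.

```python
import numpy as np

rng = np.random.default_rng(0)

# Any matrix T gives a linear map from R^3 to R^2 via matrix-vector multiplication.
T = rng.normal(size=(2, 3))       # maps V = R^3 into W = R^2
x, y = rng.normal(size=3), rng.normal(size=3)
a, b = 2.5, -1.5

# Linearity: acting on the combination equals combining the images.
lhs = T @ (a * x + b * y)
rhs = a * (T @ x) + b * (T @ y)
print(np.allclose(lhs, rhs))      # True

# Note the domains: a*x + b*y lives in R^3, while its image lives in R^2.
print((a * x + b * y).shape, lhs.shape)   # (3,) (2,)
```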
Conservation of Dimension
Let $V$ and $W$ be vector spaces over some field $\mathbb{F}$ as usual. The image and kernel of a linear transformation (from one of the vector spaces to the other) are defined as follows: Considering $T: V \to W$, the image of the transformation is the set of all elements, $w$, of $W$ such that there is some element, $v$, in $V$ such that $T(v) = w$. The kernel is defined as the set of all elements, $v$, of $V$ such that $T(v) = 0$, where $0$ is the vector with zeros on all entries.
One of the truly remarkable results of linear algebra states that for any linear transformation $T: V \to W$, $\dim(V) = \dim(\mathrm{Im}(T)) + \dim(\mathrm{Ker}(T))$. This result is known as the conservation of dimension. The true power lies in the fact that it makes no mention of the linear transformation directly, nor does it make mention of the space $W$. This means that by considering some linear transformation, and studying the dimensions of its image space and the kernel, one can recover information about the space $V$ whose elements are 'acted' upon by the transformation.
Remark: The dimension here represents the number of linearly independent elements in the corresponding sets. In the beginning, the basis of a space was introduced, and the number of elements of such a basis is known as the dimension of the vector space. Correspondingly, given the image or kernel of a transformation, one can single out linearly independent elements, take the maximum number that they can find, and then count them to find the dimension of the image or, respectively, the kernel.
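Here is a small numerical sketch of the conservation of dimension, assuming numpy and scipy are available; the matrix below is a deliberately rank-deficient example chosen only for illustration.

```python
import numpy as np
from scipy.linalg import null_space

# A rank-deficient map from R^4 to R^3: the third column is the sum of the first two,
# and the fourth is twice the first, so the kernel is non-trivial.
T = np.array([[1.0, 0.0, 1.0, 2.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 1.0, 2.0, 2.0]])

dim_image  = np.linalg.matrix_rank(T)   # dim(Im(T)), the rank
dim_kernel = null_space(T).shape[1]     # dim(Ker(T)), counted from a basis of the kernel
dim_domain = T.shape[1]                 # dim(V) = 4

print(dim_image, dim_kernel)                   # 2 2
print(dim_image + dim_kernel == dim_domain)    # True: conservation of dimension
```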
Quotient Spaces
If $W \subseteq V$ is a subspace (i.e. a subset of a vector space that itself is a vector space), the quotient space $V/W$ is defined by $x \sim y$ if $x - y \in W$. This means that if two elements have a difference that is some element of $W$, then it is said that they are related/similar. It should be clear that this is an equivalence relation to those who are familiar with the notion.
Set-theoretically, $V/W = \{x + W : x \in V\}$. Note that this is a vector space in its own right (one can check that the axioms hold), and the elements of $V/W$ can be regarded as the parallel translates of the subspace $W$. There is a natural surjective (onto) linear map $\pi: V \to V/W$, given by $\pi(x) = x + W$, and it is the case that: $\dim(V/W) = \dim(V) - \dim(W)$.
Intuitively, one can think of this as isolating a subspace inside the vector space, then translating it by considering what happens when they add (to every element of the subspace) some element in the bigger space; taking all such effects, what they get is $V/W$. Observe that the dimension of this new space is $\dim(V) - \dim(W)$, and this emphasises the fact that one clearly defines the region that they are interested in, and this operation 'invalidates' any translations by elements that are already inside the specific subspace. This happens naturally since subspaces are closed under addition of elements and scalar multiplication. Another, more fluffy, way to think about this is to consider some solid object making its way through the air (maybe someone threw it). If one wants to analyse the motion of the object, then internal forces of atoms/molecules are less likely to give any clear detail about the motion. The only variables that have an effect are external variables. One can now think of the effects of translations by elements from the subspace as being internal forces (they keep the space intact), while elements outside do the actual translation.
Example:
Consider $V = \mathbb{R}^3$, then $\mathbb{R}^3 / W \cong \mathbb{R}^2$, where the last symbol ($\cong$) means that the sets are essentially the same, and $W$ is the subspace of $\mathbb{R}^3$ isomorphic to $\mathbb{R}$. Observe that using the idea of a linear map from above, $\pi: \mathbb{R}^3 \to \mathbb{R}^3/W$: $\mathrm{Ker}(\pi) = W \cong \mathbb{R}$, and $\mathrm{Im}(\pi) = \mathbb{R}^3/W \cong \mathbb{R}^2$. Beautiful!
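To make the example a bit more tangible, here is a tiny sketch in which $W$ is taken, as one possible illustrative choice, to be the $z$-axis, so that the natural map can be modelled by simply forgetting the $z$-coordinate; this is an assumption made for illustration, not the only way to realise the isomorphism.

```python
import numpy as np

# Model the natural map pi: R^3 -> R^3/W, with W chosen as the z-axis, by
# forgetting the z-coordinate; two vectors land on the same class exactly
# when their difference lies in W.
def pi(v):
    return v[:2]

u = np.array([1.0, 2.0, 5.0])
w = np.array([1.0, 2.0, -3.0])      # u - w = (0, 0, 8) lies in the z-axis W
print(np.allclose(pi(u), pi(w)))    # True: u and w represent the same class in R^3/W

# Dimensions: dim(Im) = 2 (a copy of R^2), dim(Ker) = 1 (the z-axis itself),
# and 2 + 1 = 3 = dim(R^3), as conservation of dimension demands.
```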
Decomposition of Vector Spaces
Getting to even more golden grounds, more abstract concepts shall be introduced.
Inner Products
Consider a vector space, $V$, over some field, $\mathbb{F}$. An inner product $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{F}$ is a beast that satisfies the following four properties. Let $x, y, z \in V$ be vectors and $\alpha \in \mathbb{F}$ be a scalar, then:
1. $\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$.
2. $\langle \alpha x, y \rangle = \alpha \langle x, y \rangle$.
3. $\langle x, y \rangle = \overline{\langle y, x \rangle}$.
4. $\langle x, x \rangle \geq 0$, with equality if and only if $x = 0$.
The best way to think about inner products is that they are maps that have the exact properties listed above. They might have more, but cannot afford to miss even one of those listed. Of course, $\mathbb{F} = \mathbb{R}$ for our purposes, so the bar (complex conjugation) in the third property can be ignored.
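As a sanity-check sketch of my own (assuming numpy, with $\mathbb{F} = \mathbb{R}$ and randomly chosen sample vectors), one can verify that the usual dot product satisfies the four properties numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
x, y, z = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
a = rng.normal()

inner = np.dot   # the usual dot product on R^3 as our candidate inner product

assert np.isclose(inner(x + y, z), inner(x, z) + inner(y, z))  # additivity in the first slot
assert np.isclose(inner(a * x, y), a * inner(x, y))            # homogeneity in the first slot
assert np.isclose(inner(x, y), inner(y, x))                    # symmetry (no conjugate over R)
assert inner(x, x) >= 0 and np.isclose(inner(np.zeros(3), np.zeros(3)), 0)  # positivity
print("The dot product passes all four checks on these samples.")
```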
Projections within Vector Spaces
Inner products are closely tied to projections in interpretation. As an example, one might consider the dot product as an example of an inner product. The dot product has a wonderful intuition behind it. Given two vectors, it projects one of the vectors onto the other, then multiplies the magnitude of the projected vector and that of the vector along which it has been projected. The dot product will be zero if the vectors in consideration are perpendicular to each other. Let this be the basis of thinking; it might be limiting, but it should help a bit if the above definition seems too mechanical. Moving forward, then:
Consider then any vector space, $V$. Define for any subspace $W \subseteq V$, $W^{\perp} = \{ v \in V : \langle v, w \rangle = 0 \text{ for all } w \in W \}$.
This is called the orthogonal complement of the subspace $W$, and one can think of this as the set of all vectors in $V$ that are perpendicular to all elements of $W$. More intuitively, suppose that there is a basis $B$ of $V$, and a basis $B_W \subseteq B$ for $W$; then one can think of $W^{\perp}$ as being (roughly) $\mathrm{span}(B \setminus B_W)$, i.e. take the basis vectors that make up the whole vector space, and take away those that also belong to the subspace; what remains ultimately is some subspace that is different from that which was in consideration initially, and it is in some sense 'orthogonal' to what was started with (or rather independent, since the definition is the one involving the inner products and not basis vectors directly).
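Here is a short sketch of this idea, under the assumption (mine, for illustration) that $W$ is spanned by the rows of a small matrix: the orthogonal complement is then exactly the null space of that matrix, and scipy's null_space is used to compute a basis of it.

```python
import numpy as np
from scipy.linalg import null_space

# Let W be the plane in R^3 spanned by the rows of B.
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# W-perp consists of the vectors orthogonal to every row of B,
# i.e. the null space of B.
W_perp = null_space(B)               # columns form an orthonormal basis of W-perp
print(W_perp.shape[1])               # 1 = dim(R^3) - dim(W)
print(np.allclose(B @ W_perp, 0))    # True: every basis vector of W-perp is perpendicular to W
```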
Decomposition
Consider $V$ as a vector space over $\mathbb{R}$. The claim is that if $W$ is a subspace of $V$, then $V = W \oplus W^{\perp}$.
Think of $\oplus$ (direct sum) as a way of adding mathematical spaces together, so that given $V = W \oplus W'$, one can write every $v \in V$ uniquely as $v = w + w'$ with $w \in W$ and $w' \in W'$. This is the definition of a direct sum.
The argument of the proof goes as follows: Since the subspaces $W$ and $W^{\perp}$ both sit inside $V$, $\dim(W + W^{\perp}) \leq \dim(V)$. The other direction of the inequality is given by the properties of the inner product (in fact, the famous Pythagoras theorem), used to show that the map $v \mapsto (P(v), v - P(v))$ from $V$ into $W \oplus W^{\perp}$ (with $P$ the projection defined below) has a trivial kernel, and so we get that $\dim(V) \leq \dim(W) + \dim(W^{\perp})$, which is sufficient for a conclusion.
The nerds will find more peace in knowing the formality of the Pythagorean argument in what follows. Suppose $V$ is a vector space, $W \subseteq V$ a subspace, and $\{e_1, \dots, e_k\}$ an orthonormal basis of $W$; then let $v \in V$, and define $P : V \to V$ by $P(v) = \sum_{i=1}^{k} \langle v, e_i \rangle e_i$. Then $P^2 = P$, so call $P$ a projection. Generally, $V = \mathrm{Ker}(P) \oplus \mathrm{Im}(P)$, and this can be seen because for all $v \in V$ one can write $v = P(v) + (v - P(v))$, with $P(v) \in \mathrm{Im}(P)$ and $v - P(v) \in \mathrm{Ker}(P)$. In the case at hand, clearly $\mathrm{Ker}(P) = W^{\perp}$ and $\mathrm{Im}(P) = W$. So $V = W \oplus W^{\perp}$. Lastly, consider any $v \in V$, and write $v = w + w'$ where $w \in W$ and $w' \in W^{\perp}$; then the following is true: $\langle v, v \rangle = \langle w, w \rangle + \langle w', w' \rangle$, using the definition of the product, then from here, Bob's your uncle.
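For those who prefer to see the projection argument numerically, here is a hedged sketch of my own, assuming numpy: an orthonormal basis of a randomly chosen subspace $W$ of $\mathbb{R}^5$ is obtained via QR, the projection $P$ is built from it, and the decomposition and the Pythagoras identity are checked on a sample vector.

```python
import numpy as np

rng = np.random.default_rng(2)

# W is the subspace of R^5 spanned by the columns of A; get an orthonormal basis via QR.
A = rng.normal(size=(5, 2))
Q, _ = np.linalg.qr(A)               # columns of Q: orthonormal basis e_1, e_2 of W
P = Q @ Q.T                          # P(v) = sum_i <v, e_i> e_i, the orthogonal projection onto W

print(np.allclose(P @ P, P))         # True: P is a projection (P^2 = P)

# Decompose an arbitrary vector as v = w + w' with w in W and w' in W-perp.
v = rng.normal(size=5)
w, w_perp = P @ v, v - P @ v
print(np.allclose(Q.T @ w_perp, 0))                  # w' is orthogonal to all of W
print(np.isclose(v @ v, w @ w + w_perp @ w_perp))    # Pythagoras: |v|^2 = |w|^2 + |w'|^2

# Dimensions also add up: dim(W) + dim(W-perp) = 2 + 3 = 5 = dim(R^5).
```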
At this point, it could bring some level of joy (to the reader) to play around with the idea of quotient spaces given the discussion above, and see how the ideas unite.
As a last remark: I think linear algebra is absolutely phenomenal, and I believe that it is the beginning of all the things that make life great. I hope that this post was clear and interesting enough. I surely wish I had learnt these concepts, and a couple of other things, in my second-year linear algebra course. Please let me know if you find any errors (more importantly, logical ones), as not much editing went into this. I shall gladly correct them.
[The formatting could be better, but that is WordPress, not me. Also, I converted LaTeX to WordPress, so there are surely some glitches in terms of formatting. Let me know if you see any remaining. I did my best to keep them minimal.]