Not so long ago, I started reading some linear algebra, just out of interest. I was uncertain about whether or not I would understand the concepts, or if it would be worth it to go through all the trouble. I can now say that it was worth it. Honestly, it was the most frustrating, but at the same time rewarding, experience. I have come to realise that there are things that we often have to accept without knowing the beauty of the logic behind their existence, and the idea presented here is one of them. This post answers a simple question about vector notation.
You might have asked yourself at some point in your life (… or maybe you haven’t, but you should): Why is it “legal” to write a vector, $v$, as $v = a_1e_1 + a_2e_2 + \dots + a_ne_n$, and why can we switch between different notations without finding trouble (for example, we can represent the same vector in the form $v = (a_1, a_2, \dots, a_n)$)?
There are clever ways to represent vectors, and each of them has advantages and shortcomings. This is why it is often advised that the student should learn to switch between different forms. We know that we can represent a vector as an arrow in “space” (if we are working in 3 or fewer dimensions; otherwise this becomes awkward), or we can use the form $a_1e_1 + a_2e_2 + \dots + a_ne_n$ (i.e., $\sum_{i=1}^{n} a_ie_i$ in general, for an arbitrary vector), or my favourite, the parentheses $(a_1, a_2, \dots, a_n)$, as mentioned above. There are a few other ways, but I will focus on just these forms, to keep things manageable. It is worth noting that in the form $a_1e_1 + a_2e_2 + \dots + a_ne_n$, each $a_i$ is a scalar while each $e_i$ is a basis vector.
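To fix ideas with a small made-up example (the numbers are mine, chosen purely for illustration), here is one vector of the plane wearing all three costumes:

$$v = 3e_1 + 4e_2 = (3, 4),$$

which, as an arrow, points from the origin to the point $(3, 4)$. Same vector, three outfits.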
It is interesting to ask whether there is a canonical/natural way of representing vectors, and I intend to do some research around that. The report will be in the form of another post at some later stage.
I will start with a few definitions (some more formal than others). Because the emphasis of this post is not on everything in algebra, we will simply think of a field as a set of numbers that obeys certain properties of addition and multiplication, and this should be sufficient for this adventure.
Definition 1:
A vector space, $V$, over a field $F$ is a non-empty set of elements called vectors, with two laws of combination: vector addition and scalar multiplication, satisfying the following properties:
To every pair of vectors $u$, $v$ in $V$, there is an associated vector in $V$ called their sum, denoted by $u + v$.
Addition is associative: $(u + v) + w = u + (v + w)$.
There exists a vector, $0$, such that $v + 0 = v$ for all $v$ in $V$.
Each element, $v$, in $V$ has an inverse, $-v$, such that $v + (-v) = 0$.
Addition is commutative: $u + v = v + u$.
To every scalar, $a$, in $F$, and vector $v$ in $V$, there is a unique vector called the product of $a$ and $v$, denoted by $av$.
Scalar multiplication is associative: $a(bv) = (ab)v$.
Scalar multiplication is distributive with respect to vector addition: $a(u + v) = au + av$.
Scalar multiplication is distributive with respect to scalar addition: $(a + b)v = av + bv$.
where $a$, $b$ are in $F$ and $u$, $v$, $w$ are in $V$.
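To make the definition less abstract, here is a quick sanity check (my own illustration, not part of the formal definition): $\mathbb{R}^2$ over $\mathbb{R}$ is a vector space under the componentwise operations

$$(u_1, u_2) + (v_1, v_2) = (u_1 + v_1, u_2 + v_2), \qquad a(v_1, v_2) = (av_1, av_2).$$

The zero vector is $(0, 0)$, the inverse of $(v_1, v_2)$ is $(-v_1, -v_2)$, and every remaining axiom follows from the corresponding property of the real numbers.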
It is in fact true that a field, which we did not formally define, is a set in which scalars obey “similar” properties to those mentioned above (the real numbers $\mathbb{R}$ and the rational numbers $\mathbb{Q}$ are the standard examples). In this case, fields are important because that is where we take the coefficients for our vectors.
Adding a little bit of background: It is no secret that vectors live in vector spaces, and vector spaces are made up of vectors (more accurately, are spanned by basis vectors). A linearly independent set is one in which there do not exist scalars with which you can multiply the vectors in the set to get vectors that add up to another vector in the same set.
Example 1:
Suppose that we have $u = (1, 0)$ and $v = (0, 1)$. It is obvious that there is no way to multiply $u$ by a scalar such that you obtain $v$. The two vectors are linearly independent! Again, if we have $u = (1, 0)$ and $w = (2, 0)$, then obviously, $w = 2u$, so the set $\{u, w\}$ is linearly dependent.
In formal terms, a linearly dependent set of vectors is one in which there exists a non-trivial linear relation among them. Otherwise, the set is said to be linearly independent.
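Writing the same thing in symbols (standard notation for the definition just given): the vectors $v_1, v_2, \dots, v_n$ are linearly dependent if there exist scalars $a_1, a_2, \dots, a_n$, not all zero, such that

$$a_1v_1 + a_2v_2 + \dots + a_nv_n = 0,$$

and linearly independent if the only way to satisfy this equation is $a_1 = a_2 = \dots = a_n = 0$.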
You can think of the dimension of a vector space as the maximum number of linearly independent vectors in a set contained in that vector space.
Example 2:
If you have one non-trivial vector, then you have one dimension, e.g. $e_1 = (1, 0, 0)$. This vector is a unit vector on the x-axis. If we take a scalar, $a$, from the real numbers, we can say $v = ae_1$. The scalar, $a$, can be any number, so the vector $ae_1$ describes the set of points on the x-axis. If we put in a second vector, $e_2 = (0, 1, 0)$, then this vector makes up the whole of the y-axis (again, the set of points). If we put the two vectors together, they form the xy-plane. Adding a third vector will extend this space to higher dimensions, on the condition that the vectors you add to our collection are linearly independent of those that are currently there. And so, I present to you, a vector from $\mathbb{R}^3$-space: $v = a_1e_1 + a_2e_2 + a_3e_3$. It’s beautiful, isn’t it? We call such a set of linearly independent vectors a basis of the vector space. A basis is a linearly independent set of vectors that spans a vector space!
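For instance (a made-up vector, just to see both costumes at once):

$$v = 3e_1 - e_2 + 2e_3 = (3, -1, 2).$$

The coefficients on the basis vectors are exactly the entries inside the parentheses, which is precisely the correspondence we are about to make official.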
This is the fun part!
One popular way of representing elements of a set is to use curly brackets. Some people like to write their vectors as $\{a_1, a_2, \dots, a_n\}$, where $a_i$ represents the $i$th component of the vector. This corresponds to $a_1e_1 + a_2e_2 + \dots + a_ne_n$.
In my last post, I mentioned something about mappings. A map is a function, in simple terms. We have special names for mappings, and what we are interested in now is an isomorphism. An isomorphism is basically a mapping that is both a monomorphism (one-to-one) and an epimorphism (onto, or like the youth will say, “Leave no loose ends.” I have never heard anyone say this, but… that is not the point). An epimorphism will make sure that every element in the range has at least one element of the domain tied to it, and a monomorphism will make sure that none of the elements in the range get more than one element assigned to them. We can define an isomorphism, $\varphi$, as follows:

$$\varphi : F^n \to V, \qquad \varphi\big((a_1, a_2, \dots, a_n)\big) = a_1e_1 + a_2e_2 + \dots + a_ne_n.$$
In this way, we send one element from $F^n$ to the corresponding element in $V$. The dimension of $F^n$ will be equal to that of $V$, since each n-tuple and its image are the same vector (represented differently). Both $F^n$ and $V$ could be the same vector space (or not). We can show easily that the mapping, $\varphi$, is an isomorphism (one-to-one and onto) if both sets of vectors are finite, and if both are infinite, then we can use the nifty tricks from my previous post. As a reminder, to show that two countably infinite sets have equal cardinalities, we simply showed that if one had time, one could arrange the elements of a certain set and apply a mapping rule that assigns each element from one set to an element in the second set. Also, if the sets are infinite and uncountable, we then find a bijective mapping! This is precisely the reason why we are allowed to use parentheses to denote vectors, and why we can change from one form of representation to another: there exists an isomorphism between the n-tuples with coefficients in our field of interest and the expansions $a_1e_1 + a_2e_2 + \dots + a_ne_n$ in terms of basis vectors. This means that each vector specified in one form has a unique counterpart in the other form. The fact that this is unique assures us that we will not run into trouble somewhere.
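As a tiny worked example (the same made-up vector as before): in $\mathbb{R}^3$ with the standard basis,

$$\varphi\big((3, -1, 2)\big) = 3e_1 - e_2 + 2e_3,$$

and the inverse simply reads the coefficients back off, $\varphi^{-1}(3e_1 - e_2 + 2e_3) = (3, -1, 2)$. Nothing is lost, and nothing is ambiguous, in either direction.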
Below is a figure showing a basic graphic of an isomorphism:
A visual representation of an isomorphism
An isomorphism goes both ways due to its wonderful properties. I only focused on the forward direction (the backward direction is the inverse), but I hope that is sufficient. The letters $A, B, C, D$ are vector components, and the “primes” $A', B', C', D'$ are the same components written in another form.
Conclusion:
The point of this post is to show why it is legal to have different vector representations, and in answering this, we find that, given any two forms, one may construct an isomorphism that takes a vector written in one form and ‘transforms’ it into the other, yielding a unique representation. The definition of an isomorphism assures us that there is no ambiguity in moving from one form to another, so everything is still well defined. A person may then choose whichever representation they feel is better for their work.