Serlo: EN: Isomorphisms
{{#invoke:Mathe für Nicht-Freaks/Seite|oben}}
Isomorphic Structures and Isomorphisms
Isomorphic Structures
We consider the vector space of polynomials of degree at most <math>n</math> and the coordinate space <math>\mathbb{R}^{n+1}</math>. Vectors in these spaces have a one-to-one correspondence, as we have already seen in the introduction article to vector spaces: {{#lst:Serlo: EN:_Introduction: Vector space|polynom_vektor}} We also found that addition and scalar multiplication work the same way in both vector spaces: {{#lst:Serlo: EN:_Introduction: Vector space|polynom-vektor_1}} {{#lst:Serlo: EN:_Introduction: Vector space|polynom-vektor_2}}
In general, vector spaces can be thought of as sets with some additional structure. In our example, we can match the sets one to one, and the structures (i.e., addition and scalar multiplication) can be matched as well. So both vector spaces "essentially carry the same information", although they formally consist of different objects. In such a case, we call the two vector spaces isomorphic (to each other). The bijection which identifies the two vector spaces is then called an isomorphism.
We now derive the mathematical definition of "two vector spaces <math>V</math> and <math>W</math> are isomorphic":
The identification of the sets is given by a bijective mapping <math>f\colon V\to W</math>. Preserving the structure means that addition and scalar multiplication are preserved when mapping back and forth with <math>f</math> and <math>f^{-1}</math>. But "preserving addition and scalar multiplication" for a mapping between vector spaces is nothing else than "being linear". So we want <math>f</math> and <math>f^{-1}</math> to be linear.
Mathe für Nicht-Freaks: Vorlage:Definition
Let us now return to our example from above. In this case, the identification map we are looking for from the definition sends a polynomial to its coefficient vector: <math>a_0 + a_1x + \dotsb + a_nx^n \mapsto (a_0, a_1, \dots, a_n)</math>.
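As a minimal computational sketch of this identification (the helper names like `poly_to_vec` are our own, and we fix the concrete case of degree at most 2), we can check that the map respects addition and scalar multiplication:

```python
# Identify a polynomial a0 + a1*x + a2*x^2 (stored as its coefficient list)
# with the vector (a0, a1, a2), and verify that this identification
# preserves addition and scalar multiplication.

def poly_to_vec(coeffs):
    """Send a0 + a1*x + a2*x^2 to the vector (a0, a1, a2)."""
    return tuple(coeffs)

def poly_add(p, q):
    return [a + b for a, b in zip(p, q)]

def poly_scale(lam, p):
    return [lam * a for a in p]

def vec_add(v, w):
    return tuple(a + b for a, b in zip(v, w))

def vec_scale(lam, v):
    return tuple(lam * a for a in v)

p, q = [1, 2, 3], [4, 0, -1]   # 1 + 2x + 3x^2  and  4 - x^2

# f(p + q) = f(p) + f(q): addition is preserved
assert poly_to_vec(poly_add(p, q)) == vec_add(poly_to_vec(p), poly_to_vec(q))
# f(5 * p) = 5 * f(p): scalar multiplication is preserved
assert poly_to_vec(poly_scale(5, p)) == vec_scale(5, poly_to_vec(p))
```

Since `poly_to_vec` is also clearly bijective on coefficient lists of length 3, it is exactly the kind of structure-preserving bijection the definition asks for.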
Isomorphism
We also want to give a name to the map introduced above:
Mathe für Nicht-Freaks: Vorlage:Definition
Alternative Derivation Vorlage:Anker
Now let's look at the term "vector space" from a different point of view. We can also think of a vector space as a basis together with all linear combinations of the basis vectors. So we can call two vector spaces "equal" if we can identify the bases one to one and the corresponding linear combinations are built in the same way. In other words, we are looking for a mapping that preserves both bases and linear combinations. What property must the mapping have in order to generate the same linear combinations? The answer is almost in the name: The mapping must be linear.
Let us now turn to the question of which property a linear map needs in order to map bases to bases. A basis is nothing other than a linearly independent generating set. Thus, the map must preserve generating sets and linear independence. A linear map that preserves generating sets is called an epimorphism; this is the same as a surjective linear map. A linear map that preserves linear independence is called a monomorphism; this is the same as an injective linear map. So the map we are looking for is an epimorphism and a monomorphism at the same time, i.e., it is both surjective and injective. Overall, we get a bijective linear map, which we again call an isomorphism. This gives us the alternative definition:
Mathe für Nicht-Freaks: Vorlage:Definition
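The monomorphism intuition above can be sketched numerically (the matrix values are our own choice, not from the article): an injective linear map, given by a matrix of full column rank, sends linearly independent vectors to linearly independent vectors.

```python
import numpy as np

# A: R^2 -> R^3 given by a 3x2 matrix of rank 2, hence injective.
A = np.array([[1., 0.],
              [1., 1.],
              [0., 2.]])
assert np.linalg.matrix_rank(A) == 2        # full column rank: injective

# Images of the standard basis vectors e_1, e_2 under A.
images = A @ np.eye(2)
# The images remain linearly independent, as a monomorphism guarantees.
assert np.linalg.matrix_rank(images) == 2
```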
Inverse Mappings of Linear Bijections are Linear
We have derived two descriptions of isomorphisms, and thus two different definitions. The first one seems to require more than the second: In the first definition, an isomorphism <math>f</math> must additionally satisfy that <math>f^{-1}</math> is linear. Does this give us two different mathematical objects, or does linearity of <math>f</math> already imply linearity of <math>f^{-1}</math>? According to our intuition, both definitions should define the same objects. So <math>f</math> being linear should imply <math>f^{-1}</math> being linear. And indeed, this is the case:
Mathe für Nicht-Freaks: Vorlage:Satz
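The theorem can be illustrated numerically (matrix and vectors below are our own choice): a bijective linear map on <math>\R^3</math> is given by an invertible matrix, and its inverse again satisfies additivity and homogeneity.

```python
import numpy as np

# A bijective linear map R^3 -> R^3 is given by an invertible matrix A.
A = np.array([[2., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])   # det(A) = 3, so A is invertible
A_inv = np.linalg.inv(A)

rng = np.random.default_rng(0)
x, y, lam = rng.normal(size=3), rng.normal(size=3), 2.5

# Additivity of the inverse: A^{-1}(x + y) = A^{-1}x + A^{-1}y
assert np.allclose(A_inv @ (x + y), A_inv @ x + A_inv @ y)
# Homogeneity of the inverse: A^{-1}(lam * x) = lam * A^{-1}x
assert np.allclose(A_inv @ (lam * x), lam * (A_inv @ x))
```

Of course this only checks linearity of the inverse for sample vectors; the theorem above proves it in general.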
Classifying Isomorphic Structures
Bijections of Bases Generate an Isomorphism
In the alternative derivation, we used the intuition that an isomorphism is a linear map that "preserves bases". This means that bases are sent to bases and linear combinations are preserved. Describing it a bit more formally, we considered the following:
We already know the following: If <math>f\colon V\to W</math> is a linear map between two vector spaces and an isomorphism, then <math>f</math> maps bases of <math>V</math> to bases of <math>W</math>.
But we don't know yet whether a linear map that sends a basis to a basis is already an isomorphism. This statement indeed turns out to be true.
Mathe für Nicht-Freaks: Vorlage:Satz
Mathe für Nicht-Freaks: Vorlage:Satz
If an isomorphism <math>f\colon V\to W</math> is induced by a bijection <math>b\colon B\to C</math> between a basis <math>B</math> of <math>V</math> and a basis <math>C</math> of <math>W</math>, then there is a nice description of the inverse of <math>f</math>: We know that <math>f^{-1}</math> is characterized by the conditions <math>f\circ f^{-1}=\operatorname{id}_W</math> and <math>f^{-1}\circ f=\operatorname{id}_V</math>. Further, the principle of linear continuation tells us that we need to know <math>f^{-1}</math> only on a basis of <math>W</math> to describe it completely. Now we have already chosen the basis <math>C</math> of <math>W</math>. That is, we are interested in <math>f^{-1}(c)</math> for <math>c\in C</math>. Because <math>f</math> is bijective, there is exactly one <math>v\in V</math> with <math>f(v)=c</math>. Therefore, we get <math>f^{-1}(c)=v</math> from the above conditions. Now how can we describe this element more precisely? <math>v=b^{-1}(c)</math> is the unique preimage of <math>c</math> under <math>b</math>. So <math>f^{-1}(c)=b^{-1}(c)</math>. In other words, <math>f^{-1}</math> is the linear map induced by <math>b^{-1}</math> from <math>W</math> to <math>V</math>.
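A small numerical sketch of this (the concrete basis vectors are our own choice): the bijection <math>e_i \mapsto c_i</math> between the standard basis of <math>\R^3</math> and another basis induces an isomorphism by linear continuation, and its inverse is induced by the reverse bijection <math>c_i \mapsto e_i</math>.

```python
import numpy as np

# Basis c_1, c_2, c_3 of R^3 as columns of the matrix C (det = 2, so invertible).
c1, c2, c3 = np.array([1., 1., 0.]), np.array([0., 1., 1.]), np.array([1., 0., 1.])
C = np.column_stack([c1, c2, c3])

f = lambda x: C @ x                        # e_i -> c_i, extended linearly
f_inv = lambda y: np.linalg.solve(C, y)    # c_i -> e_i, extended linearly

for i in range(3):
    e_i = np.eye(3)[:, i]
    assert np.allclose(f(e_i), C[:, i])        # f sends e_i to c_i
    assert np.allclose(f_inv(C[:, i]), e_i)    # f^{-1} sends c_i back to e_i
```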
Classification of Finite Dimensional Vector Spaces
When are two finite-dimensional vector spaces isomorphic? If <math>V</math> and <math>W</math> are finite-dimensional vector spaces, then we have bases <math>B</math> of <math>V</math> and <math>C</math> of <math>W</math>. From the previous theorem we know that an isomorphism is uniquely characterized by a bijection between the bases. When do we find a bijection between these two sets? Exactly when they have the same size, i.e., <math>|B|=|C|</math>. Or in other words, if <math>V</math> and <math>W</math> have the same dimension:
Mathe für Nicht-Freaks: Vorlage:Satz
We have shown that all <math>K</math>-vector spaces of dimension <math>n</math> are isomorphic. In particular, all such vector spaces are isomorphic to the coordinate space <math>K^n</math>. Because <math>K^n</math> is a concrete, easily described model for a vector space, let us examine in more detail the isomorphism constructed in the last theorem.
Let <math>V</math> be an <math>n</math>-dimensional <math>K</math>-vector space. We now follow the proof of the last theorem to understand the construction of the isomorphism. We use that bases of <math>V</math> and of <math>K^n</math> have the same size. For the isomorphism, we construct a bijection between a basis of <math>K^n</math> and a basis of <math>V</math>. The space <math>K^n</math> has a kind of "standard basis", given by the canonical basis <math>(e_1, \dots, e_n)</math>.
Following the proof of the last theorem, we see that we must choose a basis of <math>K^n</math> and a basis of <math>V</math>. For <math>K^n</math> we choose the standard basis and for <math>V</math> we choose some basis <math>B</math>. Next, we need a bijection <math>b</math> between the standard basis and the basis <math>B</math>. That is, we need to associate exactly one element of <math>B</math> with each <math>e_i</math>. We can thus name the images of the <math>e_i</math> as <math>v_i := b(e_i)</math>. Because <math>b</math> is bijective, we get <math>B=\{v_1,\dots,v_n\}</math>. In essence, we have used <math>b</math> to number the elements of <math>B</math>. Mathematically, numbering the elements of <math>B</math> is the same as giving a bijection from <math>\{1,\dots,n\}</math> to <math>B</math>, since we can simply map <math>i</math> to the <math>i</math>-th element of <math>B</math>.
The principle of linear continuation now provides us with an isomorphism <math>f\colon K^n\to V</math>. By linear continuation, this isomorphism sends the vector <math>(x_1,\dots,x_n)=\sum_{i=1}^n x_ie_i</math> to the element <math>\sum_{i=1}^n x_iv_i</math>.
Now what about the map that sends <math>v_i</math> to <math>e_i</math>, i.e., the inverse map <math>f^{-1}\colon V\to K^n</math>?
We have already computed above what the mapping <math>f^{-1}</math> looks like in this case: <math>f^{-1}</math> is just the mapping induced by <math>b^{-1}</math> via the principle of linear continuation. That is, for basis vectors, we know that <math>f^{-1}</math> maps <math>v_i</math> to <math>e_i</math>. And where does it map a general vector <math>v\in V</math>? Here, we use the principle of linear continuation: We write <math>v=\sum_{i=1}^n x_iv_i</math> as a linear combination of our basis <math>B</math>. By linearity, the mapping <math>f^{-1}</math> now sends <math>v</math> to <math>\sum_{i=1}^n x_ie_i=(x_1,\dots,x_n)</math>. In particular, the <math>x_i</math> describe where <math>v</math> is located with respect to the basis vectors <math>v_i</math>. This is just like GPS coordinates, which tell you your position with respect to certain anchor points (here, the prime meridian and the equator). Therefore, we can say that <math>f^{-1}</math> sends each vector to its coordinates with respect to the basis <math>B</math>.
Mathe für Nicht-Freaks: Vorlage:Definition
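Computing the coordinate mapping amounts to solving a linear system: with the basis vectors as columns of a matrix, the coordinates of <math>v</math> are the solution of <math>Bx=v</math>. A minimal sketch (the basis below is our own example):

```python
import numpy as np

# Basis (v_1, v_2) of R^2, chosen for illustration.
v1, v2 = np.array([1., 0.]), np.array([1., 1.])
B = np.column_stack([v1, v2])

def coordinates(v):
    """Return the coordinate vector (x_1, x_2) with v = x_1*v_1 + x_2*v_2."""
    return np.linalg.solve(B, v)

x = coordinates(np.array([3., 2.]))
assert np.allclose(x, [1., 2.])                      # (3,2) = 1*v_1 + 2*v_2
assert np.allclose(x[0] * v1 + x[1] * v2, [3., 2.])  # reassembling v from its coordinates
```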
We now want to investigate on which choices the construction of the coordinate map depends.
Mathe für Nicht-Freaks: Vorlage:Beispiel The coordinate mapping depends on the choice of the basis. If you have different bases, you get different mappings.
Mathe für Nicht-Freaks: Vorlage:Beispiel Even if we only change the numbering of the elements of a basis, we already get different coordinate mappings. Mathe für Nicht-Freaks: Vorlage:Beispiel
In order to speak of the coordinate mapping, we must also specify the order of the basis elements. A basis for which we also specify the order of its elements is called an ordered basis. Mathe für Nicht-Freaks: Vorlage:Definition With this notion we can simplify the notation of the coordinate mapping: if <math>B</math> is an ordered basis, we also include <math>B</math> in the notation of the coordinate mapping.
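The dependence on the order can be seen directly in a computation (basis and vector below are our own example): the same basis in two different orders yields coordinate vectors whose entries are permuted accordingly.

```python
import numpy as np

v1, v2 = np.array([1., 0.]), np.array([1., 1.])
v = np.array([3., 2.])

# Coordinates of v with respect to the ordered basis (v_1, v_2) ...
coords_12 = np.linalg.solve(np.column_stack([v1, v2]), v)
# ... and with respect to the ordered basis (v_2, v_1).
coords_21 = np.linalg.solve(np.column_stack([v2, v1]), v)

assert np.allclose(coords_12, [1., 2.])
assert np.allclose(coords_21, [2., 1.])   # same numbers, swapped order
```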
We have now talked about a class of isomorphisms from <math>V</math> to <math>K^n</math>. Are there any other isomorphisms from <math>V</math> to <math>K^n</math>? That is, are there isomorphisms that are not coordinate mappings? In fact, every isomorphism from <math>V</math> to <math>K^n</math> is a coordinate mapping with respect to a suitable basis.
Mathe für Nicht-Freaks: Vorlage:Satz
Examples of Vector Space Isomorphisms
Mathe für Nicht-Freaks: Vorlage:Beispiel
Mathe für Nicht-Freaks: Vorlage:Beispiel Mathe für Nicht-Freaks: Vorlage:Beispiel
Exercises
<section begin=aufgaben_isomorphismus /> Mathe für Nicht-Freaks: Vorlage:Aufgabe
Mathe für Nicht-Freaks: Vorlage:Aufgabe
Mathe für Nicht-Freaks: Vorlage:Aufgabe
Mathe für Nicht-Freaks: Vorlage:Aufgabe<section end=aufgaben_isomorphismus /> Mathe für Nicht-Freaks: Vorlage:Hinweis {{#invoke:Mathe für Nicht-Freaks/Seite|unten}}