Serlo: EN: Isomorphisms

imported>Sascha Lill 95

Current version as of 10 June 2022, 18:23

{{#invoke:Mathe für Nicht-Freaks/Seite|oben}}

Isomorphic Structures and Isomorphisms

Isomorphic Structures

We consider the vector space K[X]_{≤2} of polynomials of degree less than or equal to 2, and the vector space K³. Vectors in these spaces have a one-to-one correspondence, as we have already seen in the introduction article to vector spaces: {{#lst:Serlo: EN:_Introduction: Vector space|polynom_vektor}} We also found that addition and scalar multiplication work the same way in both vector spaces: {{#lst:Serlo: EN:_Introduction: Vector space|polynom-vektor_1}} {{#lst:Serlo: EN:_Introduction: Vector space|polynom-vektor_2}}

In general, vector spaces can be thought of as sets with some structure. In our example, we can match the sets one-to-one, and the structures (i.e., addition and scalar multiplication) can be matched as well. So both vector spaces "essentially carry the same information", although they formally comprise different objects. In such a case, we call the two vector spaces isomorphic (to each other). The bijection which identifies the two vector spaces is then called an isomorphism.
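The correspondence above can be sketched in Python. This is a minimal, hypothetical model (all names are made up for illustration): a polynomial a₀ + a₁X + a₂X² of degree at most 2 is represented by its coefficient list, a vector in K³ by a 3-tuple, and we check that the identification map is compatible with addition and scalar multiplication:

```python
# Minimal sketch: polynomials of degree <= 2 modeled as coefficient
# lists [a0, a1, a2], versus K^3 modeled as 3-tuples.

def poly_add(p, q):
    """Add two polynomials given by their coefficient lists."""
    return [a + b for a, b in zip(p, q)]

def poly_scale(lam, p):
    """Multiply a polynomial by the scalar lam."""
    return [lam * a for a in p]

def phi(p):
    """The identification map: a0 + a1*X + a2*X^2  ->  (a0, a1, a2)."""
    return tuple(p)

p = [1, 2, 0]   # 1 + 2X
q = [0, 3, 5]   # 3X + 5X^2

# phi preserves addition and scalar multiplication:
assert phi(poly_add(p, q)) == tuple(a + b for a, b in zip(phi(p), phi(q)))
assert phi(poly_scale(4, p)) == tuple(4 * a for a in phi(p))
```

Since `phi` is also bijective (every 3-tuple comes from exactly one coefficient list), it identifies the two spaces together with their structure.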

We now derive the mathematical definition of "two vector spaces V and W are isomorphic":

The identification of the underlying sets is given by a bijective mapping f: V → W. Preserving the structure means that addition and scalar multiplication are preserved when mapping back and forth with f and f⁻¹. But "preserving addition and scalar multiplication" for a mapping between vector spaces is nothing other than "being linear". So we want both f and f⁻¹ to be linear.

Mathe für Nicht-Freaks: Vorlage:Definition

Let us now return to our example from above. In this case, the identification map from the definition would look like this:

Vorlage:Einrücken

Isomorphism

We also want to give a name to the map f introduced above:

Mathe für Nicht-Freaks: Vorlage:Definition

Alternative Derivation Vorlage:Anker

Now let's look at the term "vector space" from a different point of view. We can also think of a vector space as a basis together with corresponding linear combinations of the basis. So we can call vector spaces "equal" if we can identify the bases 1 to 1 and the corresponding linear combinations are generated in the same way. In other words, we are looking for a mapping that preserves both bases and linear combinations. What property must the mapping have in order to generate the same linear combinations? The answer is almost in the name: The mapping must be linear.

Let us now turn to the question of what property a linear map needs in order to map bases to bases. A basis is nothing other than a linearly independent generating set. Thus, the map must preserve generating sets and linear independence. A linear map that preserves generating sets is called an epimorphism; this is the same as a surjective linear map. A linear map that preserves linear independence is called a monomorphism; this is the same as an injective linear map. So the map we are looking for is an epimorphism and a monomorphism at the same time, i.e., both surjective and injective. Overall we get a bijective linear map, which we again call an isomorphism. This gives us the alternative definition:

Mathe für Nicht-Freaks: Vorlage:Definition

Inverse Mappings of Linear Bijections are Linear

We have derived two descriptions of isomorphisms, and thus also two different definitions. The first one seems to require more than the second: in the first definition, an isomorphism f must additionally satisfy that f⁻¹ is linear. Does this give us two different mathematical objects, or does linearity of f already imply linearity of f⁻¹? According to our intuition, both definitions should define the same objects, so f being linear should imply f⁻¹ being linear. And indeed, this is the case:
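The key computation behind this can be sketched as follows (additivity only; compatibility with scalar multiplication works the same way): given w_1, w_2 ∈ W, write w_i = f(v_i) with v_i = f⁻¹(w_i). Then

```latex
\begin{align*}
f^{-1}(w_1 + w_2)
  &= f^{-1}\bigl(f(v_1) + f(v_2)\bigr) \\
  &= f^{-1}\bigl(f(v_1 + v_2)\bigr) && \text{(linearity of } f\text{)} \\
  &= v_1 + v_2 \\
  &= f^{-1}(w_1) + f^{-1}(w_2).
\end{align*}
```

Bijectivity of f is used twice: surjectivity guarantees the preimages v_i exist, and injectivity guarantees f⁻¹(f(v_1 + v_2)) = v_1 + v_2.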

Mathe für Nicht-Freaks: Vorlage:Satz

Classifying Isomorphic Structures

Bijections of Bases Generate an Isomorphism

In the alternative derivation, we used the intuition that an isomorphism is a linear map that "preserves bases". This means that bases are sent to bases and linear combinations are preserved. So, describing it a bit more formally, we considered the following:

We already know the following: If f: V → W is a linear map between two vector spaces and f is an isomorphism, then f maps bases of V to bases of W.

But we don't know yet whether a linear map that sends a basis to a basis is already an isomorphism. This statement indeed turns out to be true.

Vorlage:Anker

Mathe für Nicht-Freaks: Vorlage:Satz

Vorlage:Anker

Mathe für Nicht-Freaks: Vorlage:Satz

Vorlage:Anker

If we are given a bijection h between bases, then there is a nice description of the inverse of f: We know that f⁻¹ is characterized by the conditions f⁻¹ ∘ f = id_V and f ∘ f⁻¹ = id_W. Further, the principle of linear continuation tells us that we need to know f⁻¹ only on a basis of W to describe it completely. Now we have already chosen the basis B_W of W. That is, we are interested in f⁻¹(b_W) for b_W ∈ B_W. Because h is bijective, there is exactly one b_V ∈ B_V with h(b_V) = b_W. Therefore, we get f⁻¹(b_W) = f⁻¹(f(b_V)) = b_V from the above conditions. Now how can we describe this element b_V more precisely? b_V is the unique preimage of b_W under h, so b_V = h⁻¹(b_W). In other words, f⁻¹ is the linear map induced by h⁻¹ from W to V.

Classification of Finite Dimensional Vector Spaces

When are two finite-dimensional vector spaces isomorphic? If V and W are finite-dimensional vector spaces, then we have bases {b_1, …, b_n} of V and {c_1, …, c_m} of W. From the previous theorem we know that an isomorphism is uniquely characterized by a bijection between the bases. When do we find a bijection between these two sets? Exactly when they have the same size, i.e., n = m. Or in other words, if V and W have the same dimension:

Mathe für Nicht-Freaks: Vorlage:Satz

We have shown that all K-vector spaces of dimension n are isomorphic. In particular, all such vector spaces are isomorphic to the vector space Kⁿ. Because Kⁿ is a well-describable model for a vector space, let us examine the isomorphism constructed in the last theorem in more detail.

Let V be an n-dimensional K-vector space. We now follow the proof of the last theorem to understand the construction of the isomorphism. We use that bases of V and of Kⁿ have the same size. For the isomorphism, we construct a bijection between a basis of V and a basis of Kⁿ. The space Kⁿ has a kind of "standard" basis, given by the canonical basis {e_1, …, e_n}.

Following the proof of the last theorem, we see that we must choose a basis of V and a basis of Kⁿ. For Kⁿ we choose the standard basis E, and for V we choose some basis B. Next, we need a bijection h: E → B between the standard basis and the basis B. That is, we need to associate exactly one b ∈ B with each e_i. We can thus name the images of the e_i as b_i := h(e_i). Because h is bijective, we get B = {b_1, …, b_n}. In essence, we have used h to number the elements of B. Mathematically, numbering the elements of B is the same as giving a bijection from E to B, since we can simply map e_i to the i-th element of B.

The principle of linear continuation now provides us with an isomorphism f: Kⁿ → V. By linear continuation, this isomorphism sends the vector (x_1, …, x_n)^T ∈ Kⁿ to the element f((x_1, …, x_n)^T) = x_1 b_1 + … + x_n b_n.

Now what about the map that sends B to E, i.e., the inverse map f⁻¹ of f?

We have already computed above what the mapping f⁻¹ looks like in this case: f⁻¹ is just the mapping induced by h⁻¹ via the principle of linear continuation. That is, for basis vectors, we know that f⁻¹ maps b_i ∈ B to h⁻¹(b_i) = e_i. And where does it map a general vector v ∈ V? Here, we use the principle of linear continuation: We write v as a linear combination of our basis, v = λ_1 b_1 + … + λ_n b_n. By linearity, the mapping f⁻¹ now sends v to f⁻¹(v) = (λ_1, …, λ_n)^T. In particular, the λ_i describe where v is located with respect to the basis vectors b_i. This is just like GPS coordinates, which tell you your position with respect to certain anchor points (in that case, the prime meridian and the equator). Therefore, we can say that f⁻¹ sends each vector to its coordinates with respect to the basis B.
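As a concrete numerical sketch (a made-up example with V = ℝ² and a non-standard basis): computing the coordinates of a vector v with respect to a basis B amounts to solving a linear system whose coefficient matrix has the basis vectors as columns.

```python
import numpy as np

# Hypothetical example: V = R^2 with the (non-standard) basis B = (b1, b2).
b1 = np.array([1.0, 1.0])
b2 = np.array([1.0, -1.0])
M = np.column_stack([b1, b2])   # columns are the basis vectors

def coordinate_map(v):
    """k_B: send v to its coordinates (lambda_1, lambda_2), i.e. the unique
    solution of v = lambda_1 * b1 + lambda_2 * b2, found via M @ lam = v."""
    return np.linalg.solve(M, v)

v = np.array([3.0, 1.0])
lam = coordinate_map(v)          # [2., 1.], since v = 2*b1 + 1*b2

# Check that the coordinates really reassemble v:
assert np.allclose(lam[0] * b1 + lam[1] * b2, v)
```

Invertibility of M (i.e. linear independence of the basis vectors) is exactly what makes the solution unique.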

Mathe für Nicht-Freaks: Vorlage:Definition

We now want to investigate which choices the construction of the coordinate map depends on.

Mathe für Nicht-Freaks: Vorlage:Beispiel The coordinate mapping depends on the choice of the basis. If you have different bases, you get different mappings.

Mathe für Nicht-Freaks: Vorlage:Beispiel Even if we only change the numbering of the elements of a basis, we already get a different coordinate mapping. Mathe für Nicht-Freaks: Vorlage:Beispiel

In order to speak of the coordinate mapping, we must also specify the order of the basis elements. A basis where we also specify the order of the basis elements is called an ordered basis. Mathe für Nicht-Freaks: Vorlage:Definition With this notion we can simplify the notation of the coordinate mapping: if B = (b_1, …, b_n) is an ordered basis, we also denote the coordinate mapping k_{b_1, …, b_n} as k_B.

We have now talked about a class of isomorphisms from V to Kⁿ. Are there any other isomorphisms from V to Kⁿ? That is, are there isomorphisms that are not coordinate mappings? In fact, every isomorphism from V to Kⁿ is a coordinate mapping with respect to a suitable basis.
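The idea behind this statement can be sketched as follows: given an isomorphism f: V → Kⁿ, pull back the standard basis by setting b_i := f⁻¹(e_i). As the image of a basis under an isomorphism, B = (b_1, …, b_n) is an ordered basis of V, and for v = λ_1 b_1 + … + λ_n b_n we compute

```latex
\begin{align*}
f(v) = f\Bigl(\sum_{i=1}^n \lambda_i b_i\Bigr)
     = \sum_{i=1}^n \lambda_i f(b_i)
     = \sum_{i=1}^n \lambda_i e_i
     = (\lambda_1, \dots, \lambda_n)^T
     = k_B(v).
\end{align*}
```

So f agrees with the coordinate mapping k_B everywhere.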

Mathe für Nicht-Freaks: Vorlage:Satz

Examples of vector space isomorphisms

Mathe für Nicht-Freaks: Vorlage:Beispiel

Mathe für Nicht-Freaks: Vorlage:Beispiel Mathe für Nicht-Freaks: Vorlage:Beispiel


Exercises

<section begin=aufgaben_isomorphismus /> Mathe für Nicht-Freaks: Vorlage:Aufgabe


Mathe für Nicht-Freaks: Vorlage:Aufgabe

Mathe für Nicht-Freaks: Vorlage:Aufgabe

Mathe für Nicht-Freaks: Vorlage:Aufgabe<section end=aufgaben_isomorphismus /> Mathe für Nicht-Freaks: Vorlage:Hinweis {{#invoke:Mathe für Nicht-Freaks/Seite|unten}}