Serlo: EN: Kernel of a linear map
{{#invoke:Mathe für Nicht-Freaks/Seite|oben}} The kernel of a linear map intuitively contains the information that is "deleted" when applying the linear map. Further, the kernel can be used to characterize the injectivity of linear maps. It also plays a central role in solving systems of linear equations.
Introduction
We have learned about special mappings between vector spaces, called linear maps. These are structure-preserving; that is, they are compatible with the addition and scalar multiplication of a vector space. We can therefore think of a linear map <math>f\colon V\to W</math> as something that transports the vector space structure from <math>V</math> to <math>W</math>.
Introductory examples
We consider two accounts, with account balances <math>a</math> and <math>b</math>, respectively. We can describe this information with a vector <math>(a,b)\in\mathbb{R}^2</math>. The total account balance is the sum of the two account balances. We can calculate it using the map Vorlage:Einrücken This map is linear and therefore transports the vector space structure from <math>\mathbb{R}^2</math> to <math>\mathbb{R}</math>. In the process, information is lost: one no longer knows how the money is distributed among the accounts. For example, one can no longer distinguish two different distributions with the same sum, because both are mapped to the same total account balance. In particular, the map is not injective. However, we retain the information about how much money is in the accounts in total.
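The loss of information can be made concrete with a small Python sketch (the function name `total` is our own choice) of the summing map:

```python
def total(balances):
    """The linear map that sends a pair of account balances to their sum."""
    a, b = balances
    return a + b

# Two different distributions of the same amount of money become
# indistinguishable after applying the map: the map is not injective.
assert total((30, 70)) == total((50, 50)) == 100
```

Both pairs land on the same total, so the individual balances cannot be recovered from the image.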

Next, we consider the map Vorlage:Einrücken Visually, this corresponds to a counterclockwise rotation of <math>\mathbb{R}^2</math>. By undoing this rotation, one can recover the original vector from any rotated vector in <math>\mathbb{R}^2</math>. Formally speaking, this map is an isomorphism and no information is lost. In particular, the image of linearly independent vectors is again linearly independent (because an isomorphism is injective, see the article on monomorphisms), and the image of a generator of <math>\mathbb{R}^2</math> is again a generator of <math>\mathbb{R}^2</math> (because an isomorphism is surjective, see the article on epimorphisms).
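Since the exact angle is not essential for the argument, the following Python sketch (our own choice: a rotation by 90 degrees) rotates a vector and then undoes the rotation, illustrating that no information is lost:

```python
import math

def rotate(v, theta):
    """Counterclockwise rotation of the plane by the angle theta (in radians)."""
    x, y = v
    return (math.cos(theta) * x - math.sin(theta) * y,
            math.sin(theta) * x + math.cos(theta) * y)

v = (3.0, 4.0)
w = rotate(v, math.pi / 2)    # rotate counterclockwise by 90 degrees
u = rotate(w, -math.pi / 2)   # rotating back recovers the original vector
assert all(abs(ui - vi) < 1e-12 for ui, vi in zip(u, v))
```

Composing the rotation with its inverse gives back the original vector (up to floating-point rounding), which is exactly what it means for the map to be an isomorphism.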
Finally, we consider a rotation again, but then embed the rotated plane into <math>\mathbb{R}^3</math>: Vorlage:Einrücken Although this map is no longer bijective, no information is lost here when transporting the vector space structure of <math>\mathbb{R}^2</math> into <math>\mathbb{R}^3</math>: As in the previous example, different vectors in <math>\mathbb{R}^2</math> are mapped to different vectors in <math>\mathbb{R}^3</math> because of injectivity. Linear independence of vectors is also preserved. However, a generating system of <math>\mathbb{R}^2</math> is not mapped to a generator of <math>\mathbb{R}^3</math>. For example, the linear map sends the standard basis of <math>\mathbb{R}^2</math> to a set of only two vectors, which cannot be a generator of the three-dimensional space <math>\mathbb{R}^3</math>. The property of a set of vectors to be a generator thus depends on the ambient space. This is not the case with linear independence; it is an "intrinsic" property of sets of vectors.
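A minimal Python sketch of such an embedding (the concrete formula is our own choice: rotate by 90 degrees, then append a zero third coordinate) shows both effects at once: the images of the standard basis are still distinct, but they all lie in the plane with last coordinate zero and therefore cannot generate all of three-dimensional space:

```python
def embed_rotated(v):
    """Rotate (x, y) by 90 degrees counterclockwise, then embed into 3-space."""
    x, y = v
    return (-y, x, 0.0)

e1, e2 = (1, 0), (0, 1)
images = [embed_rotated(e1), embed_rotated(e2)]

# Every image has third coordinate 0, so no linear combination of the
# image vectors can reach, e.g., (0, 0, 1): they do not generate R^3.
assert all(w[2] == 0.0 for w in images)
# The map stays injective: distinct basis vectors have distinct images.
assert images[0] != images[1]
```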
Derivation Vorlage:Anker
We have seen various examples of linear maps that transport a <math>K</math>-vector space <math>V</math> into another <math>K</math>-vector space <math>W</math> while preserving the structure. In the process, varying amounts of "intrinsic" information from the original vector space (such as differences of vectors or linear independence) were lost. The last example suggests that injective maps preserve such intrinsic properties. On the other hand, we see: If <math>f\colon V\to W</math> is not injective, then there are vectors <math>v\neq w</math> with <math>f(v)=f(w)</math>. So in that case, <math>f</math> "eliminates" the difference of <math>v</math> and <math>w</math>. The difference <math>v-w</math> is again an element of <math>V</math>. Since <math>f</math> is linear, we can reformulate: Vorlage:Einrücken Intuitively, <math>f</math> is injective if and only if differences of distinct vectors are not eliminated under <math>f</math> (i.e., mapped to zero). Because <math>f</math> is structure-preserving, we have for all <math>v,w\in V</math> and <math>\lambda\in K</math> that <math>f(v-w)=0</math> implies Vorlage:Einrücken If the difference of <math>v</math> and <math>w</math> is eliminated under <math>f</math>, so is that of <math>\lambda v</math> and <math>\lambda w</math>. The same works for sums: if <math>f(v_1-w_1)=0</math> and <math>f(v_2-w_2)=0</math>, then also Vorlage:Einrücken So the difference of <math>v_1+v_2</math> and <math>w_1+w_2</math> is also eliminated. The differences eliminated by <math>f</math> are themselves vectors in <math>V</math>. These are sent by <math>f</math> to the zero element of <math>W</math>, and thus the eliminated vectors lie in the preimage <math>f^{-1}(\{0_W\})</math>. Conversely, any vector <math>u\in f^{-1}(\{0_W\})</math> can be written as a difference <math>u=u-0_V</math>; that is, the difference between <math>u</math> and the zero vector is eliminated by <math>f</math>. The preimage <math>f^{-1}(\{0_W\})</math> measures exactly which differences of vectors (how much "information") are lost in the transport from <math>V</math> to <math>W</math>. Our considerations show that <math>f^{-1}(\{0_W\})</math> is even a subspace of <math>V</math>. We give a name to this subspace: the kernel of <math>f</math>.
Definition
The kernel of a linear map <math>f\colon V\to W</math> intuitively measures how much "intrinsic" information about vectors from <math>V</math> (differences of vectors or linear independence) is lost when applying the map. Mathematically, the kernel is the preimage of the zero vector, <math>\ker(f)=f^{-1}(\{0_W\})</math>. Mathe für Nicht-Freaks: Vorlage:Definition
In the derivation we claimed that the kernel of a linear map from <math>V</math> to <math>W</math> is a subspace of <math>V</math>. We will now prove this in detail.
Mathe für Nicht-Freaks: Vorlage:Satz
Examples
We determine the kernel of the examples from the introduction.
A vector is mapped to the sum of its entries
We consider the map Vorlage:Einrücken The kernel of <math>f</math> consists of the vectors <math>(x,y)\in\mathbb{R}^2</math> with <math>x+y=0</math>, so <math>y=-x</math>. In other words, Vorlage:Einrücken Thus the kernel of <math>f</math> is a one-dimensional subspace of <math>\mathbb{R}^2</math>. More generally, for <math>n\in\mathbb{N}</math> we can consider the map Vorlage:Einrücken Again, by definition, a vector <math>(x_1,\dots,x_n)</math> lies in the kernel of <math>f</math> if and only if <math>x_1+\dots+x_n=0</math> holds. So we can freely choose <math>x_1,\dots,x_{n-1}</math> and then set <math>x_n=-(x_1+\dots+x_{n-1})</math>. Thus Vorlage:Einrücken Hence, the kernel of <math>f</math> is an <math>(n-1)</math>-dimensional subspace of <math>\mathbb{R}^n</math>. It is also called a hyperplane in <math>\mathbb{R}^n</math>.
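This description of the kernel can be checked directly in a short Python sketch (the function names are our own): choose the first <math>n-1</math> entries freely and let the last entry cancel their sum:

```python
def f(v):
    """The linear map that sums the entries of a vector."""
    return sum(v)

def kernel_vector(free_entries):
    """Build a kernel vector of f: the last entry cancels the free choices."""
    return list(free_entries) + [-sum(free_entries)]

v = kernel_vector([2, 5, -1])   # a vector in the kernel of f on R^4
assert f(v) == 0
```

Every choice of free entries yields exactly one kernel vector, which reflects that the kernel has dimension <math>n-1</math>.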
Rotation in <math>\mathbb{R}^2</math>
We consider the rotation Vorlage:Einrücken Suppose <math>(x,y)</math> lies in the kernel of <math>f</math>, i.e., it holds that Vorlage:Einrücken From this we obtain <math>x=y=0</math>. So only the zero vector lies in the kernel of <math>f</math>, and we have <math>\ker(f)=\{(0,0)\}</math>.
<math>\mathbb{R}^2</math> is rotated and embedded into <math>\mathbb{R}^3</math> Vorlage:Anker
Next we consider Vorlage:Einrücken As in the previous example, we determine the kernel by considering any vector <math>(x,y)</math> in the kernel of <math>f</math>. Thus it holds that Vorlage:Einrücken Again it follows that <math>x=y=0</math>, so that <math>\ker(f)=\{(0,0)\}</math> also holds for this map.
Derivatives of polynomials Vorlage:Anker
Finally, we consider a linear map that did not appear in the introduction: Vorlage:Einrücken which maps a real polynomial to its derivative. That is, a polynomial Vorlage:Einrücken with coefficients <math>a_0,\dots,a_n\in\mathbb{R}</math> is mapped to the polynomial Vorlage:Einrücken Graphically, we associate with a polynomial <math>p</math> the polynomial <math>p'</math> that indicates the slope of <math>p</math> at each point. From this information, we still learn what the shape of the polynomial is (just as if we were given a stencil). However, we no longer know where it is positioned along the <math>y</math>-axis, because the information about the constant part of the polynomial is lost when taking the derivative. Polynomials that differ only by a displacement along the <math>y</math>-axis can no longer be distinguished after differentiation. For example, two polynomials that differ only in their constant term, such as <math>x^2</math> and <math>x^2+1</math>, both have the derivative <math>2x</math>, so the map sends them to the same polynomial.
The kernel of the derivative map thus contains exactly the constant polynomials: Vorlage:Einrücken The inclusion "<math>\supseteq</math>" is clear, because the derivative of a constant polynomial is always the zero polynomial. For the converse inclusion "<math>\subseteq</math>", we consider any polynomial <math>p</math> in the kernel and show that it is constant. We can always write such a polynomial as <math>p=a_0+a_1x+\dots+a_nx^n</math> for some <math>n\in\mathbb{N}</math> and certain coefficients <math>a_0,\dots,a_n\in\mathbb{R}</math>. Because of <math>p'=0</math> it holds that Vorlage:Einrücken and by comparing coefficients, we obtain <math>a_1=\dots=a_n=0</math>. So <math>p=a_0</math> is constant. Vorlage:Todo
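Representing a polynomial by its coefficient list <math>[a_0, a_1, \dots, a_n]</math> (this representation is our own choice), the derivative map and its kernel can be sketched in Python as follows:

```python
def derivative(coeffs):
    """Derivative of a_0 + a_1 x + ... + a_n x^n, as a coefficient list."""
    return [k * coeffs[k] for k in range(1, len(coeffs))]

# x^2 + 1 and x^2 - 5 differ only in their constant term and therefore
# have the same derivative 2x: the derivative map is not injective.
p, q = [1, 0, 1], [-5, 0, 1]
assert derivative(p) == derivative(q) == [0, 2]

# A constant polynomial is sent to the zero polynomial: it lies in the kernel.
assert derivative([7]) == []
```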
Kernel and injectivity
In the derivation above, we saw that a linear map preserves all differences of vectors (i.e., no nonzero vector is eliminated) if and only if the kernel consists only of the zero vector. We also saw there that linearity implies: A linear map is injective if and only if no difference of distinct vectors is eliminated. So we have the following theorem:
<section begin="InjektivitätSatz" />Mathe für Nicht-Freaks: Vorlage:Satz <section end="InjektivitätSatz" />
The larger the kernel is, the more differences between vectors are "eliminated" and the more the mapping "fails to be injective". The kernel is thus a measure of the "non-injectivity" of a linear map.
Injective maps and subspaces
In the introductory examples we conjectured that injective linear maps preserve "intrinsic" properties of vector spaces. By this, we mean properties that do not depend on the ambient vector space, such as the linear independence of vectors or vectors being distinct. The property of being a generator can be lost under injective linear maps, as we have seen in the example of the twisted embedding of <math>\mathbb{R}^2</math> into <math>\mathbb{R}^3</math>: The map is injective, but the standard basis of <math>\mathbb{R}^2</math> is not mapped to a generator of <math>\mathbb{R}^3</math>.
What exactly does it mean that a property of a family of vectors does not depend on the ambient space <math>V</math>? Often, properties of vectors from <math>V</math> (for example, linear independence) depend on the vector space structure of <math>V</math>, that is, on addition and scalar multiplication. To keep this dependence as small as possible, we restrict our attention to the smallest subspace of <math>V</math> containing the family, that is, to its span. Now, we call a property of the family intrinsic if it depends only on this span but not on <math>V</math>.
Mathe für Nicht-Freaks: Vorlage:Beispiel
What do intrinsic properties of a family of vectors have to do with injectivity? Let <math>f\colon V\to W</math> be a linear map. Suppose <math>f</math> preserves intrinsic properties of vectors; that is, if a family has some intrinsic property, then its image under <math>f</math> also has this property. Then <math>f</math> also preserves the property of vectors being distinct, since this is an intrinsic property. That means, if <math>v,w\in V</math> are distinct, i.e., <math>v\neq w</math>, then their images under <math>f</math> are also distinct, i.e., <math>f(v)\neq f(w)</math>. So <math>f</math> is injective.
Conversely, if <math>f</math> is injective, then <math>V</math> is isomorphic to the subspace <math>\operatorname{im}(f)</math> of <math>W</math>: If we restrict the target space of <math>f</math> to its image, we obtain an injective and surjective linear map <math>f\colon V\to\operatorname{im}(f)</math>, that is, an isomorphism. In particular, for any family of vectors in <math>V</math>, its span is isomorphic to the span of its image in <math>W</math>. Thus, the latter has the same properties as the former, and hence <math>f</math> preserves intrinsic properties of families of vectors in <math>V</math>.
So we have seen that <math>f</math> is injective if and only if <math>f</math> preserves intrinsic properties of families of vectors in <math>V</math>.
Kernel and linear independence
In the previous section we have seen that injective linear maps are exactly those linear maps which preserve intrinsic properties of families of vectors in <math>V</math>. The linear independence of a family of vectors is such an intrinsic property, as it either holds for every choice of ambient space or for no choice of ambient space.
So, injective linear maps should preserve linear independence of vectors, i.e., the image of linearly independent vectors is again linearly independent. Conversely, a linear map cannot be injective if it does not preserve the linear independence of vectors, since the intrinsic information of "being linearly independent" is lost.
Overall, we get the following theorem, which has already been proved in the article on monomorphisms:
Mathe für Nicht-Freaks: Vorlage:Satz
In particular, for any injective linear map <math>f\colon V\to W</math>, the image <math>f(V)</math> is a <math>\dim(V)</math>-dimensional subspace of <math>W</math>. In the finite-dimensional case, there cannot exist an injective linear map from <math>V</math> to <math>W</math> if <math>\dim(V)>\dim(W)</math>. This has also already been shown in the article on monomorphisms.
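For maps from three-dimensional to two-dimensional space, this can be made concrete with a small Python sketch (the matrix is our own arbitrary example): the cross product of the two rows of a <math>2\times 3</math> matrix is perpendicular to both rows and therefore lies in the kernel; here it is nonzero, so the map cannot be injective:

```python
def cross(u, v):
    """Cross product of two vectors in R^3; perpendicular to both inputs."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

rows = ((1, 2, 3), (4, 5, 6))     # the two rows of a 2x3 matrix A
k = cross(*rows)                  # perpendicular to both rows, so A k = 0

# k is a nonzero kernel vector: the map x -> A x from R^3 to R^2
# eliminates it, hence the map is not injective.
assert k != (0, 0, 0)
assert all(sum(r * ki for r, ki in zip(row, k)) == 0 for row in rows)
```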
Kernel and linear systems Vorlage:Anker
The kernel of a linear map is an important concept in the study of systems of linear equations.
Let <math>K</math> be a field and let <math>m,n\in\mathbb{N}</math>. We consider a linear system of equations Vorlage:Einrücken with <math>n</math> variables and <math>m</math> rows. We have coefficients <math>a_{ij}\in K</math> and right-hand side entries <math>b_i\in K</math>, where <math>1\le i\le m</math> and <math>1\le j\le n</math>. We can also write this system of equations using matrix multiplication: Vorlage:Einrücken where <math>A=(a_{ij})\in K^{m\times n}</math>, <math>x=(x_1,\dots,x_n)^T\in K^n</math> and <math>b=(b_1,\dots,b_m)^T\in K^m</math>. We denote the set of solutions by Vorlage:Einrücken
Determining a solution to the linear system of equations for a given right-hand side <math>b</math> is the same as finding a preimage of <math>b</math> under the linear map Vorlage:Einrücken
Vorlage:Todo The system of equations has solutions if and only if the preimage of <math>b</math> is not empty. In this case, we may ask whether there are multiple solutions, that is, whether the solution is not unique. In other words, we are interested in how many preimages <math>b</math> has under <math>f</math>.
By definition of injectivity, every point <math>b\in K^m</math> has at most one element in its preimage if and only if <math>f</math> is injective. This means that the linear system of equations has at most one solution for each right-hand side <math>b</math>. Because <math>f</math> is linear, injectivity is equivalent to <math>\ker(f)=\{0\}</math>. So we can already state: Mathe für Nicht-Freaks: Vorlage:Satz Mathe für Nicht-Freaks: Vorlage:Hinweis
Even if <math>f</math> is not injective, i.e., <math>\ker(f)\neq\{0\}</math> holds, we can still say more about the set of solutions by exploiting the kernel: The difference of two vectors <math>x</math> and <math>y</math> that are mapped to the same vector lies in the kernel of <math>f</math>. Therefore, the preimage of some <math>b</math> under <math>f</math> can be written as Vorlage:Einrücken where <math>x_0</math> is any element of this preimage. This is shown by the following theorem: Mathe für Nicht-Freaks: Vorlage:Satz We have thus extended the statement of the theorem above. The larger the kernel of <math>f</math> is, that is, the "less injective" the map is, the "less unique" are the solutions of <math>Ax=b</math>, if any exist. The set of solutions of a linear system of equations is the kernel of the induced linear map, shifted by a particular solution <math>x_0</math>. Furthermore, Vorlage:Einrücken The set of solutions of the homogeneous system of equations (that is, with right-hand side zero) is exactly the kernel of <math>f</math>. Mathe für Nicht-Freaks: Vorlage:Hinweis
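The structure of the solution set can be illustrated with a small Python sketch (the matrix and vectors are our own example). The matrix below has rank 1, so its kernel is nontrivial, and shifting one particular solution by any kernel element yields another solution:

```python
def apply(A, x):
    """Matrix-vector product A x, with A given as a list of rows."""
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

A = [[1, 1],
     [2, 2]]            # rank 1, so ker(A) is nontrivial
b = [3, 6]
x0 = [3, 0]             # one particular solution of A x = b
k = [1, -1]             # spans the kernel: A k = 0

assert apply(A, x0) == b and apply(A, k) == [0, 0]
# Every x0 + t*k is again a solution: the solution set is the kernel
# shifted by the particular solution x0.
for t in (-2, 0, 5):
    x = [x0_i + t * k_i for x0_i, k_i in zip(x0, k)]
    assert apply(A, x) == b
```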
Exercises
<section begin=injektivität_und_dimension /> Mathe für Nicht-Freaks: Vorlage:Aufgabe<section end=injektivität_und_dimension /> <section begin=aufgabe_kern_bestimmen /> Mathe für Nicht-Freaks: Vorlage:Aufgabe
Mathe für Nicht-Freaks: Vorlage:Frage<section end=aufgabe_kern_bestimmen /> <section begin=kern_nilpotenter_endo /> Mathe für Nicht-Freaks: Vorlage:Aufgabe<section end=kern_nilpotenter_endo />
{{#invoke:Mathe für Nicht-Freaks/Seite|unten}}