Serlo: EN: Introduction: Matrices


{{#invoke:Mathe für Nicht-Freaks/Seite|oben}} In this article, we introduce matrices as an efficient representation of linear maps. A matrix (of a linear map f: K^n → K^m) is a rectangular arrangement of elements of K ("numbers") that specifies where the standard basis of K^n is mapped by f.

Derivation Vorlage:Anker

Let K be a field and f: K^n → K^m a linear map. We want to describe this map as efficiently as possible. From the article "vector space of a linear map" we know that the space of linear maps from K^n to K^m has dimension n·m, and that f is an element of this space. So we need n·m numbers to describe our linear map. We are looking for a way to write down these numbers in an organized way.

Let {e_1, …, e_n} be the standard basis of K^n. Then, by the principle of linear continuation, f is already completely determined by the vectors f(e_1), …, f(e_n) ∈ K^m: If x ∈ K^n is an arbitrary vector, we can write it as a linear combination x = x_1 e_1 + … + x_n e_n of the basis elements, and by linearity we know the value f(x) = x_1 f(e_1) + … + x_n f(e_n).

So we need the "data" f(e_1), …, f(e_n) to describe the linear map. These data are n vectors in K^m. So we can write them as Vorlage:Einrücken for certain "numbers" a_ij ∈ K. This notation keeps track of all necessary data of the linear map. But we can still make it more efficient: We just omit the "f(e_i) =" and agree on the convention that the i-th column describes the image of the i-th basis vector: Vorlage:Einrücken To save even more space, we can also combine the entries of these vectors into a single "table", still with the image of the i-th basis vector in the i-th column: Vorlage:Einrücken We call this "table in parentheses" a matrix. It is the matrix associated with the linear map f.
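To make this construction concrete, here is a minimal Python sketch that recovers the matrix of a linear map from the images of the standard basis vectors, placing f(e_i) in the i-th column. The map f below is an arbitrary, made-up example:

```python
# Sketch: recover the matrix of a linear map f: R^2 -> R^3 from the
# images of the standard basis vectors. The map f is a made-up example.
def f(x, y):
    # a linear map R^2 -> R^3, chosen arbitrarily for illustration
    return (2 * x + y, x - y, 3 * y)

# images of the standard basis vectors e_1 = (1, 0) and e_2 = (0, 1)
col1 = f(1, 0)   # first column of the matrix
col2 = f(0, 1)   # second column of the matrix

# the matrix as a list of rows: A[i][j] is the i-th entry of f(e_j)
A = [[col1[i], col2[i]] for i in range(3)]
print(A)  # [[2, 1], [1, -1], [0, 3]]
```

The columns of the resulting 3×2 matrix are exactly f(e_1) and f(e_2), matching the convention above.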

The matrix completely determines f, and it consists of n·m entries, which is consistent with our considerations above.

Definition

Mathe für Nicht-Freaks: Vorlage:Definition

Mathe für Nicht-Freaks: Vorlage:Beispiel

Mathe für Nicht-Freaks: Vorlage:Beispiel

Mathe für Nicht-Freaks: Vorlage:Beispiel

Matrix-Vector Multiplication Vorlage:Anker

Derivation

We have just seen how we can represent a linear map by a matrix. Suppose we are now given not the linear map itself, but only its associated matrix. What does the image of an arbitrary vector under this linear map look like?

First, for simplicity, let us consider the vector space ℝ^2 and a linear map f: ℝ^2 → ℝ^2 of which we know that the associated matrix is Vorlage:Einrücken That means, we have

Vorlage:Einrücken

We want to calculate the image of an arbitrary vector (x, y)^T ∈ ℝ^2 under the map f, using the entries of the matrix A.

To do so, we represent our vector as a linear combination of the standard basis vectors, i.e. Vorlage:Einrücken Now we can exploit the linearity of f and calculate:

Vorlage:Einrücken

By this calculation, we can describe the effect of applying the linear map f to a vector using only the matrix A. This calculation works for any vector and any 2×2 matrix. To simplify the notation, we define a "multiplication operation" for matrices and vectors:

Vorlage:Einrücken

We call this the "matrix-vector multiplication" and formally write it as a product. The generalization from a 2×2 to an n×n-matrix is given in the following exercise:

Mathe für Nicht-Freaks: Vorlage:Aufgabe The solution of this exercise provides us with a formula to calculate the image of a vector under the map, using the associated matrix. We now define Av via the formula found in the solution.

Definition

Mathe für Nicht-Freaks: Vorlage:Definition
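Assuming the usual component formula (Av)_i = Σ_j a_ij · v_j behind the definition, a naive implementation might look like this (the matrix entries are made up for illustration):

```python
def mat_vec(A, v):
    """Matrix-vector product: (A v)_i = sum_j A[i][j] * v[j].

    A is given as a list of rows; v is a flat list whose length
    matches the number of columns of A.
    """
    assert all(len(row) == len(v) for row in A), "sizes must match"
    return [sum(a_ij * v_j for a_ij, v_j in zip(row, v)) for row in A]

# a 2x2 example with made-up entries
A = [[1, 2],
     [3, 4]]
print(mat_vec(A, [1, 1]))  # [3, 7]
```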

From another point of view this means: If we consider the matrix A as a collection of column vectors Vorlage:Einrücken then the product Ax is a linear combination of the columns of A with the coefficients in x, namely Ax = x_1 a_1 + … + x_n a_n.
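This column viewpoint can be illustrated in a few lines of Python (entries are made up):

```python
# Sketch: A x as the linear combination x_1 a_1 + x_2 a_2 of the
# columns of A, with made-up entries.
A = [[1, 2],
     [3, 4]]
x = [5, 6]

# extract the columns: a_1 = (1, 3), a_2 = (2, 4)
cols = list(zip(*A))          # [(1, 3), (2, 4)]

# add up x_j * a_j entrywise
Ax = [sum(x[j] * col[i] for j, col in enumerate(cols))
      for i in range(len(A))]
print(Ax)  # [17, 39]
```

Here 5·(1, 3) + 6·(2, 4) = (17, 39), which agrees with the row-by-row computation of Ax.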

How can you best remember how applying a matrix to a vector works?

To apply a matrix to a vector, you need to compute "row times column".

You may perform a matrix-vector multiplication by using the rule "row times column": The first entry of the result is the first row of the matrix times the column vector. The second entry is the second row of the matrix times the column vector, etc. for larger matrices. For each "row times column" product, you multiply the related entries (first times first, second times second, etc.) and add the results.

It is important that the sizes of the matrix and of the vector match. If you have set everything up correctly so far, this is always the case, because a linear map f: K^n → K^m corresponds to an m×n matrix. You can apply this matrix to vectors of K^n, since the rows of the matrix and the vector both have length n.
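The size constraint and the "row times column" rule can be sketched together; `apply_matrix` is a hypothetical helper that rejects vectors of the wrong length:

```python
def apply_matrix(A, v):
    """Apply an m x n matrix (list of m rows of length n) to a vector v.

    Raises ValueError if the length of v does not equal the number of
    columns, i.e. if v is not in K^n.
    """
    n = len(A[0])
    if len(v) != n:
        raise ValueError(f"vector of length {len(v)} does not fit "
                         f"a matrix with {n} columns")
    # "row times column": the i-th result entry is the i-th row times v
    return [sum(a * b for a, b in zip(row, v)) for row in A]

# a 2 x 3 matrix maps vectors from K^3 to K^2
A = [[1, 0, 2],
     [0, 1, 1]]
print(apply_matrix(A, [1, 2, 3]))  # [7, 5]
```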

Reverse direction: The induced linear map Vorlage:Anker

We have seen that every linear map has an associated matrix: given a linear map f, we constructed a matrix A such that f(v) = Av. So at least some matrices define a linear map. But do all matrices define a linear map? And if yes, what does the corresponding map look like?

If a matrix A is derived from a linear map f, then we can recover f from A by defining it as the map v ↦ Av. More generally, we can apply this rule to any matrix A and obtain a corresponding linear map.

So let A be an m×n matrix. We consider the map K^n → K^m, v ↦ Av. This map is indeed linear:

Vorlage:Einrücken

That means, every matrix defines a linear map.
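The linearity can also be spot-checked numerically. The following sketch uses a made-up matrix, made-up vectors, and a naive product helper; it is an illustration, not a substitute for the proof above:

```python
# Numerical spot check (not a proof) that v -> A v is additive and
# homogeneous, for a sample 3 x 2 matrix (all values made up).
def mat_vec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

A = [[2, -1],
     [0, 3],
     [1, 1]]
v, w, c = [1, 2], [3, -1], 5

# additivity: A(v + w) = A v + A w
lhs_add = mat_vec(A, [vi + wi for vi, wi in zip(v, w)])
rhs_add = [p + q for p, q in zip(mat_vec(A, v), mat_vec(A, w))]
assert lhs_add == rhs_add

# homogeneity: A(c v) = c (A v)
lhs_hom = mat_vec(A, [c * vi for vi in v])
rhs_hom = [c * p for p in mat_vec(A, v)]
assert lhs_hom == rhs_hom
print("linearity checks passed")
```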

Mathe für Nicht-Freaks: Vorlage:Definition

Thus, we now know that for each linear map there is an associated matrix, and for each matrix there is an associated linear map. For a linear map f, we denote the associated matrix by M(f), and for a matrix A, the induced linear map by f_A. Our construction of the induced map is built exactly such that f = f_{M(f)}. This is quite intuitive: the linear map induced by the matrix associated with a linear map f is just f itself. We can now ask the reverse question: If we take the matrix associated with the linear map induced by some original matrix, do we get the original matrix back? In mathematical terms: Is A = M(f_A)? The following theorem answers this question in the affirmative:

Mathe für Nicht-Freaks: Vorlage:Satz We have thus shown that matrices and linear maps are in a "one-to-one correspondence". {{#invoke:Mathe für Nicht-Freaks/Seite|unten}}