Last time, we saw a nice property: the product of the elimination matrices, *L*, has the sign-inverted components of the elimination matrices. I think many of my friends would say: "so what?" But I find it convenient that the result of a matrix multiplication can be found without actually multiplying.

I would like to add a note first. In the last blog,

*L = (E_{32} E_{31} E_{21})^{-1}*

has the sign-inverted components of the *E*s. But this is not true for the product *(E_{32} E_{31} E_{21})* itself; it is only true once this product is inverted. Here, in *L^{-1}*, the component of *E_{31}* is not preserved. (In this example, *E_{31}*'s component merely has its sign inverted, but if you solve Problem 8 on p. 103 of Gilbert Strang's book, you will find there is more behind it.)
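A minimal NumPy sketch of this note, assuming made-up multipliers (-2, -1, -3; these are my own choice, not the example from the previous post): the inverse of the product has the sign-flipped multipliers in place, while the non-inverted product picks up a cross term.

```python
import numpy as np

# Hypothetical 3x3 elimination matrices (multipliers -2, -1, -3 are
# made up for illustration; they are not the post's example).
E21 = np.array([[1., 0., 0.], [-2., 1., 0.], [0., 0., 1.]])
E31 = np.array([[1., 0., 0.], [0., 1., 0.], [-1., 0., 1.]])
E32 = np.array([[1., 0., 0.], [0., 1., 0.], [0., -3., 1.]])

# L = (E32 E31 E21)^{-1}: its below-diagonal entries are exactly the
# sign-flipped multipliers 2, 1, 3 -- no real multiplication needed.
L = np.linalg.inv(E32 @ E31 @ E21)
print(L)  # rows: [1 0 0], [2 1 0], [1 3 1]

# The non-inverted product does NOT have this property: its (3,1)
# entry is -1 + (-3)(-2) = 5, because the E32 row operation also acts
# on the entry that E31 created.
P = E32 @ E31 @ E21
print(P[2, 0])  # 5.0, not -(-1)
```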

Let's come back to the property. Intuitively, it is based on the fact that multiplying a matrix from the left performs row operations. Let me explain this with a 2 by 2 example matrix. A 2 by 2 elimination matrix for component 21 (*E_{21}*) is the lower triangular matrix shown in Equation 6. Equation 7 emphasizes the row operation: *A* is written as a set of rows. If you think of *A* as simultaneous linear equations (two equations, *a_{r_1}, a_{r_2}*), then we can remove the first term of *a_{r_2}* by multiplying the first row (*a_{r_1}*) by *l_{21}* and adding it to the second row (*a_{r_2}*). The elimination matrix *E_{21}* performs exactly this. The inverse matrix of *E_{21}* is Equation 9. You can find it from the rule for the inverse of a 2 by 2 matrix, but you can also consider the operation's meaning. The inverse operation of this *E_{21}* is subtracting the first row multiplied by *l_{21}* (*= l_{21} a_{r_1}*) from the second row: *l_{21} a_{r_1} + a_{r_2} - l_{21} a_{r_1} = a_{r_2}*. You can get the inverse this way. This is a better understanding than using the 2 by 2 inverse rule, because the idea doesn't depend on the size of the matrix: for 3 by 3, 4 by 4, ..., n by n, it always works. Moreover, the general problem is n by n.
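The 2 by 2 operation and its sign-flip inverse can be checked numerically. A minimal sketch, assuming an arbitrary multiplier *l_{21}* = 0.5 and an arbitrary example matrix (both my own choices), following the post's convention that *E_{21}* adds *l_{21}* times row 1 to row 2:

```python
import numpy as np

l21 = 0.5  # arbitrary example multiplier (my choice, not from the post)

# E21 adds l21 * row1 to row2 (the post's sign convention).
E21 = np.array([[1.0, 0.0], [l21, 1.0]])
# Its inverse just flips the sign: subtract l21 * row1 from row2.
E21_inv = np.array([[1.0, 0.0], [-l21, 1.0]])

A = np.array([[2.0, 3.0], [4.0, 5.0]])  # rows a_r1, a_r2
rows_after = E21 @ A            # row2 becomes a_r2 + l21 * a_r1
undone = E21_inv @ rows_after   # l21*a_r1 + a_r2 - l21*a_r1 = a_r2

assert np.allclose(E21_inv @ E21, np.eye(2))  # truly the inverse
assert np.allclose(undone, A)                 # the operation is undone
```

The same sign-flip construction works unchanged for any n by n elimination matrix, which is the point of the paragraph above.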

According to this elimination property, the matrices *L*, *U* have the following structure: some of the components are always zero.

As you saw, *L* has non-zero components only in its lower triangular part, and *U* only in its upper triangular part. When *L*'s lower triangular part happens to contain additional zeros, or *U*'s upper triangular part does, we have the following interesting property (p. 96 in Gilbert Strang's book):

- When a row of *A* starts with zeros, so does that row of *L*.
- When a column of *A* starts with zeros, so does that column of *U*.

For example, suppose a column of *A* has three zeros from the top. Why does that column of *U* also start with zeros? The third row of *U* is *A*'s row 3 minus a linear combination of *A*'s rows 1 and 2. In that column, the components of rows 1, 2, and 3 are zero, so the linear combination is also zero (*l_{31} 0 + l_{32} 0 + 0 = 0*).
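Both statements can be seen numerically. Here is a minimal sketch, assuming a small made-up matrix and a plain LU factorization without row exchanges (the function `lu_no_pivot` and the matrix `A` are my own, not from the book):

```python
import numpy as np

def lu_no_pivot(A):
    """Plain LU factorization (no row exchanges): returns L, U with A = L U."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # the multiplier l_ik
            U[i, :] -= L[i, k] * U[k, :]     # eliminate below the pivot
    return L, U

# Row 3 of A starts with zeros, and column 3 of A starts with zeros
# (a hypothetical example matrix, not the one in the book):
A = np.array([[2.0, 1.0, 0.0, 1.0],
              [4.0, 3.0, 0.0, 2.0],
              [0.0, 0.0, 5.0, 1.0],
              [2.0, 2.0, 0.0, 4.0]])

L, U = lu_no_pivot(A)
# Row 3 of L starts with zeros, like row 3 of A.
print(L[2, :2])   # [0. 0.]
# Column 3 of U starts with zeros, like column 3 of A.
print(U[:2, 2])   # [0. 0.]
```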

In the end, all we did here was elimination. So why do we do LU decomposition instead of just elimination? As I understand it, there are two reasons. Next time, I would finally like to explain why we do LU decomposition.
