2012-12-25

Authors in a Markov matrix: Which author do people find most inspiring? (22)


How to compute the eigenvectors?

Here I don't explain the details of the algorithms that compute eigenvectors and eigenvalues, since free software is available to do that for us. In this article I use Octave's eig() function.

Let's compute the eigenvectors and eigenvalues of the matrix \(M\):
octave:34> [L V] = eig(M)
L =
   0.83205  -0.70711
   0.55470   0.70711
V =
Diagonal Matrix
   1.00000         0
         0   0.50000
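If you prefer Python to Octave, NumPy's numpy.linalg.eig computes the same decomposition. A minimal sketch follows; note that the matrix \(M\) itself is not repeated in this section, so the code reconstructs it from the decomposition printed above (computing \(L V L^{-1}\) from the Octave output gives \(M\) with rows (0.8, 0.3) and (0.2, 0.7)):

```python
import numpy as np

# M reconstructed from the Octave output above via M = L * V * inv(L);
# it has eigenvalues 1 and 0.5, and its eigenvalue-1 direction is (0.6, 0.4).
M = np.array([[0.8, 0.3],
              [0.2, 0.7]])

# np.linalg.eig returns (eigenvalues, matrix of column eigenvectors),
# the same information as Octave's [L V] = eig(M), just in swapped order.
w, L = np.linalg.eig(M)
print(w)  # eigenvalues 1 and 0.5 (the order is not guaranteed)
print(L)  # each column is an eigenvector, normalized to unit length
```

Note that eig() may return the eigenvalues in any order and may flip the sign of an eigenvector; both are still valid answers.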
By the way, I told you earlier that an eigenvalue seems equivalent to its matrix. This is correct, but one number cannot represent a matrix exactly; if it could, why would we need a matrix at all? A matrix usually has several eigenvalues and eigenvectors. Still, an \(n \times n\) matrix has \(n^2\) elements but only \(n\) eigenvalues, so the eigenvalues are the simpler description.

The eigenvalues of matrix \(M\) are 1 and 0.5, and the corresponding eigenvectors are (0.83205, 0.55470) and (-0.70711, 0.70711). Here we can ignore the eigenvalue 0.5, since its eigenvector tells us nothing about the convergence value; it is the eigenvector of eigenvalue 1 that tells us where the Markov matrix converges. If you are curious why, look up ``matrix diagonalization''. Now we can compute the case where the total number of inhabitants is 1000:
octave:38> x1 = L(:,1) / sum(L(:,1))
x1 =
   0.60000
   0.40000
octave:39> x1 * 1000
ans =
   600
   400
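Why may we ignore the eigenvalue 0.5? Diagonalization makes it visible. Writing \(M = L V L^{-1}\) with \(V = \mathrm{diag}(1, 0.5)\), repeated application gives

\[ M^n = L V^n L^{-1} = L \begin{pmatrix} 1 & 0 \\ 0 & 0.5^n \end{pmatrix} L^{-1} \longrightarrow L \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} L^{-1} \quad (n \to \infty), \]

so the component along the second eigenvector decays like \(0.5^n\), and only the eigenvalue-1 direction survives in the limit.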
This is exactly the same result we have already seen. The difference from the former method is that this one doesn't need to apply \(M\) many times. With repeated multiplication we don't even know in advance how many applications are needed, whereas the eigenanalysis gives us the answer directly. In this example the matrix was small, so the difference hardly mattered, especially since we used a computer instead of computing by hand. For a large matrix, however, the two methods differ greatly in efficiency.
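The difference between the two methods can be seen even on this tiny example. The sketch below (Python/NumPy, with \(M\) reconstructed from the decomposition above) compares them: repeated multiplication needs many applications of \(M\) just to approximate the answer that the eigenvector gives in one step.

```python
import numpy as np

# M reconstructed from the eigen-decomposition shown earlier
# (M = L * V * inv(L), with eigenvalues 1 and 0.5).
M = np.array([[0.8, 0.3],
              [0.2, 0.7]])

# Method 1: apply M over and over to some initial population split.
x = np.array([1000.0, 0.0])
for _ in range(50):
    x = M @ x

# Method 2: take the eigenvector of eigenvalue 1 and rescale it so the
# entries sum to the total population of 1000.
w, L = np.linalg.eig(M)
v = L[:, int(np.argmax(w))]
stationary = v / v.sum() * 1000

print(x)           # approaches [600, 400] after many applications
print(stationary)  # [600, 400] directly, no iteration needed
```

For a 2×2 matrix both run instantly, but for a large sparse matrix (like a web-scale link graph) the choice of method dominates the cost.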

In the next section, we will see how to generate the Markov matrix from the adjacency matrix. With such a Markov matrix we could compute the station staying probability, i.e., which station we will most probably end up at, just as in the station example. It may sound a bit odd, but this is exactly the same as the authors' network analysis. We will see how in the next articles. Part 1 of this article (the theory) is coming close to its finale.
