
(2) Max determinant problem

I would like to talk about why mathematicians are interested in the max determinant problem. This is just my personal theory, and I could not find an article that says this directly, so I warn you that I might be completely wrong.

The max determinant problem is mentioned in the context of partial differential equations. Are partial differential equations interesting? I can safely say yes. They include the heat and wave equations, which let us design buildings, computers, cars, ships, airplanes, and so on. There are so many applications in our world.

Hadamard is one of the mathematicians who contributed to the max determinant problem. One of his interests was partial differential equations. A basic partial differential equation is, for instance, the wave equation, which can be rewritten into an eigenvalue form.
(By the way, written that way, the operator d^2/dx^2 looks like it has an eigenvalue λ, as in Mu = -λu. This is a clue to the relationship between these equations and linear algebra. A sketch of both forms follows below.)
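A minimal sketch, assuming the example is the standard one-dimensional wave equation with wave speed c (the concrete form here is my assumption, not a quotation from Hadamard):

\[
\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}
\]

Separating variables as u(x, t) = v(x) e^{i\omega t}, the spatial part becomes an eigenvalue problem:

\[
\frac{d^2 v}{d x^2} = -\lambda v, \qquad \lambda = \frac{\omega^2}{c^2},
\]

which has the form M v = -\lambda v with M = d^2/dx^2.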

Fredholm wrote this kind of integral equation in a finite-sum form, which is a matrix form. His idea was to solve the equation by taking a limit, i.e., letting the dimension of the matrix go to infinity.
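As a hedged sketch of what this looks like (the kernel K, the interval [a, b], and the quadrature weights w_j are my notation, not necessarily Fredholm's), take a Fredholm integral equation of the second kind:

\[
u(x) + \int_a^b K(x, y)\, u(y)\, dy = f(x).
\]

Replacing the integral by a finite sum over points y_1, ..., y_n gives a system of linear equations,

\[
u(x_i) + \sum_{j=1}^{n} w_j K(x_i, y_j)\, u(y_j) = f(x_i), \qquad i = 1, \dots, n,
\]

that is, (I + K_n) u = f for an n-by-n matrix K_n. Letting n go to infinity is the limit mentioned above.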

If we can write an integral equation in this form, each finite-sum equation can be solved as a system of linear equations. At that time, people solved linear equations by Cramer's rule, which has a 1/det(A) form. As the size of the matrix A grows, whether the solution converges depends on the maximal (absolute) value of the determinant, in particular whether it stays below 1 or not. I imagine that this is why mathematicians became interested in the max determinant problem, though I could not find an article that states this motivation directly.
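For reference, Cramer's rule solves A x = b component by component, dividing by det(A); here A_i denotes A with its i-th column replaced by b:

\[
x_i = \frac{\det(A_i)}{\det(A)}.
\]

Each component of the solution is a ratio of determinants, so as the matrix grows one needs control over how large these determinants can become.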

(Note: I explained only a little about how an integral equation takes a matrix form. Because this gets quite detailed, I will add an appendix in case someone is interested.)

Although Fredholm did not use Cramer's rule directly, the proof of convergence needs the maximal value of the determinant (see "30 Lectures on the Eigenvalue Problem" by Shiga Kouji, p. 121, in Japanese).
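The bound in question is Hadamard's inequality, which is also the origin of the max determinant problem: for an n-by-n matrix A with entries |a_{ij}| <= 1,

\[
|\det(A)| \le \prod_{i=1}^{n} \Bigl( \sum_{j=1}^{n} |a_{ij}|^2 \Bigr)^{1/2} \le n^{n/2}.
\]

An estimate of this kind bounds the terms of Fredholm's determinant series, which, as I understand it, is what makes the convergence proof work.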

Hilbert removed the determinant from this problem and established an eigenvalue-based approach: Hilbert space. He climbed up one level in his point of view. I think the determinant is still an important subject, but eigenanalysis is even more interesting. This is also just my impression, but when Hilbert established Hilbert space, people's interest moved from the determinant to eigenvalues.
