2.8.2 Regression methods
Section 2.8.2's Equation (2.33) was a mystery to me at first. I asked a few specialists about it, but they told me it is not so important and that I should move on, since the good ones are still to come... Therefore this annotation might not be so helpful, but I like this straightforward idea.
The hint for understanding this equation is in the paper itself: ``Equation (2.33) is the standard minimum variance unbiased linear estimator of desired mean I, ....'' This means he used Gauss's least squares method. Most of Veach's paper is self-contained and easy to understand, but personally I would like to have one more equation --- Equation (1) --- here.
Equation (1) simply says that each final estimate F is equal to the sample mean \hat{I}. Veach probably thought this was too obvious to write down, but it surely helps dummies like me. We can derive Equation (2.33) from Equation (1), so let's try that.
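Since the equation itself is not reproduced here, the following is my best guess at Equation (1), based on the description above, writing the n individual estimates as a column vector F and X for the n-by-1 column of ones:

    X \hat{I} = F,    where X = (1, 1, ..., 1)^* and F = (F_1, F_2, ..., F_n)^*.    (1)

In other words, every component F_i should equal the single unknown \hat{I}.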
First of all, Equation (1) usually has no exact solution; if it did, the estimator could always recover the exact value of the sample mean. Writing the transpose of X as X^*, following Veach's notation, I apply Gauss's least squares method and obtain Equation (2). But I would say this is too naive: in his paper, Veach suggests a more sophisticated method that uses the variance-covariance matrix \hat{V} as a weight in the least squares.
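Again as a sketch of the missing equation: applying ordinary least squares to Equation (1), i.e. minimizing the squared residual ||F - X \hat{I}||^2, presumably gives Equation (2):

    \hat{I} = (X^* X)^{-1} X^* F.    (2)

With X a column of ones this is just the plain average of the F_i, which ignores how noisy each individual estimate is.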
Using \hat{V}, Equation (3) is the base equation. I suppose the weight is the inverse matrix \hat{V}^{-1}, since we want to minimize the variance. The rest is the standard weighted least squares derivation, and we obtain Equation (4). This Equation (4) is exactly the same as Equation (2.33). In this way we see that this equation is nothing mysterious, but a natural one.
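Spelled out (again my reconstruction, following the standard weighted/generalized least squares argument), the base equation (3) and its solution (4) would read:

    minimize over \hat{I}:  (F - X \hat{I})^* \hat{V}^{-1} (F - X \hat{I})    (3)

Setting the derivative with respect to \hat{I} to zero gives X^* \hat{V}^{-1} (F - X \hat{I}) = 0, hence

    \hat{I} = (X^* \hat{V}^{-1} X)^{-1} X^* \hat{V}^{-1} F,    (4)

which matches the form of Equation (2.33) as quoted above.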
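For readers who prefer numbers to algebra, here is a small numerical check. This is my own addition, not from Veach's paper; the 2-by-2 covariance matrix and the use of NumPy are assumptions made just for illustration. It compares the plain average, Equation (2), with the weighted combination of Equation (4) for two correlated unbiased estimators:

    import numpy as np

    rng = np.random.default_rng(0)
    I_true = 1.0                                  # the mean we want to estimate

    # Assumed covariance \hat{V} of two correlated, unbiased estimators F_1, F_2.
    V = np.array([[1.0, 0.3],
                  [0.3, 4.0]])
    L = np.linalg.cholesky(V)                     # to draw correlated noise
    X = np.ones(2)                                # the column of ones from Equation (1)
    Vinv = np.linalg.inv(V)

    def equation_4(F):
        # \hat{I} = (X^* V^{-1} X)^{-1} X^* V^{-1} F, for a single unknown.
        return (X @ Vinv @ F) / (X @ Vinv @ X)

    plain, weighted = [], []
    for _ in range(100_000):
        F = I_true + L @ rng.standard_normal(2)   # one pair of correlated estimates
        plain.append(F.mean())                    # Equation (2): ordinary average
        weighted.append(equation_4(F))            # Equation (4): weighted by \hat{V}^{-1}

    print("Equation (2): mean %.4f, variance %.4f" % (np.mean(plain), np.var(plain)))
    print("Equation (4): mean %.4f, variance %.4f" % (np.mean(weighted), np.var(weighted)))

Both estimates come out unbiased, but the Equation (4) estimate has the smaller variance, which is exactly the ``minimum variance'' property Veach mentions.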