p.88 Notation of Equation 3.6.3
In Equation 3.6.3 there is a d!, and it looks like an operator. It is used as d! cos theta d phi. However, I could not find a definition of it (I asked friends and searched the web).
p.122 particle tracing Equation 4.32
In this equation for alpha, there is a term that is mysterious (to me): $\frac{1}{q_{i+1}}$. $f$ is with respect to projected solid angle, and $p_{i+1}$ is an approximation of the BSDF, so those are no problem. But what is this $\frac{1}{q_{i+1}}$?
Figure 1 shows the alpha update. Sampling is done by the Russian roulette method, so whether a ray terminates is decided by a probability. Intuitively, when sampling continues, it is natural to respect the sampled result more, because a sample carries more information than no sample. Therefore the sampled result gets a weight of 1/(sampling probability).

Figure 1: Sampling weight $\frac{1}{q_{i+1}}$. (1) Sampling terminates with probability $p$; (2) the ray bounces with probability $(1-p)$. Because a sample value is better than nothing, case (2) is weighted by $\frac{1}{1-p}$, that is, $\frac{1}{q_{i+1}}$.
For example, if the ray termination probability is 0.5, the weight is 1/(1-0.5) = 2.0. If the termination probability is 1/3, the weight is 1/(1-1/3) = 3/2. The next two cases are not Russian roulette anymore, but: if the ray never terminates, the weight is 1/(1-0) = 1, which means using the sampled value as-is. If the ray always terminates, there is no bounce, so no weight is defined, since the weight only has meaning when the ray bounces.
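To make this concrete, here is a minimal sketch in Python (my own illustration, not code from the book; the function and variable names are made up) of how the survival weight enters a particle's weight update:

    import random

    def russian_roulette_step(alpha, termination_prob):
        """Apply one Russian roulette decision to a particle of weight alpha.

        termination_prob is p in the text: the ray survives with
        probability q = 1 - p, and a surviving particle's weight is
        multiplied by 1/q so that the estimator stays unbiased.
        """
        q = 1.0 - termination_prob
        if random.random() < termination_prob:
            return None      # ray terminated: it contributes nothing further
        return alpha / q     # ray survived: compensate with the 1/q weight

    # The examples from the text:
    #   p = 0.5 -> surviving weight factor 1/(1 - 0.5) = 2.0
    #   p = 1/3 -> surviving weight factor 1/(1 - 1/3) = 3/2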
So far I have called things ``intuitive'' or ``natural.'' I confess my intuition is not so good. The following is an overview of a proof of why this is OK.
The issue is whether this weight introduces bias; if it does, we are in trouble. I wrote about what unbiased means in another blog entry: an unbiased algorithm has zero expected error in its computed samples. The expectation of the Figure 1 estimator is (where the true answer is $Q$)

$E = p \cdot s_1 + (1-p) \cdot \alpha \cdot s_2 .$

But the sample value $s_1$ is zero because that ray is terminated, so $E = (1-p) \cdot \alpha \cdot s_2$. For $E$ to be $Q$ (unbiased means the error $E - Q = 0$), the sample value $s_2$ needs a weight $\alpha$ such that $(1-p)\,\alpha = 1$, since $s_2$ on its own has expectation $Q$.
Therefore we can compensate $s_2$ with $\alpha = \frac{1}{1-p}$, and this leads us to an unbiased estimator.
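As a sanity check, here is a small Monte Carlo experiment (again my own sketch, not from the book) that estimates a quantity with known true value $Q$ with and without the $\frac{1}{1-p}$ compensation; only the compensated estimator converges to $Q$, while the other is off by the factor $(1-p)$:

    import random

    def estimate(q_true, p, n, compensate):
        """Average n Russian-roulette trials of a quantity with mean q_true.

        Each trial terminates with probability p and then contributes 0;
        otherwise it contributes a sample, weighted by 1/(1-p) when
        compensate is True.
        """
        total = 0.0
        for _ in range(n):
            if random.random() < p:
                continue                   # terminated: s_1 = 0
            s = random.gauss(q_true, 0.1)  # a noisy sample with mean q_true
            total += s / (1.0 - p) if compensate else s
        return total / n

    random.seed(1)
    Q, p, n = 1.0, 0.5, 200_000
    print(estimate(Q, p, n, compensate=True))   # close to 1.0 (unbiased)
    print(estimate(Q, p, n, compensate=False))  # close to 0.5 (biased by 1-p)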
Acknowledgements
Thanks to Leonhard G., who told me the overview of the proof.