## 2010-06-30

### Some recommended books I recently read.

``The Kyogen of Errors まちがいの狂言'', 高橋康也 (Takahashi Yasunari), 理想社
What if The Comedy of Errors were performed as Kyogen? This is fantastic.

``The Deadline'', Tom DeMarco, translated by 伊豆原弓 (Izuhara Yumi)

This book is written as a novel, which makes the effects of management decisions much more vivid. It is fun, and it also makes me think.

I now feel I know Gauss and Humboldt as if they were people nearby. Of course this is an illusion, but I like it.

## 2010-06-23

### 2.5 Problems 34 and 45, Introduction to Linear Algebra by Gilbert Strang

Problems 34 and 45 of chapter 2, section 5 in Gilbert Strang's book, Introduction to Linear Algebra, 4th ed., are the following. I think Problem 45 is an extension of Problem 34.

Problem 45 asks to find the S after this. Then we could naturally extend this problem to the next one:

45′ Elimination for a 2 by 2 block matrix: find the inverse of a 2 by 2 block matrix.

I will try to solve this problem here.

First, I would like to demonstrate that Equation (1) is not correct for block matrices. (It is correct when the entries are scalars.)
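The two equations referenced in this post were images that do not survive here. Reconstructed from the surrounding text (the entries DB − BD and −CA + AC discussed below fix the form), they should read:

$$
\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1}
\overset{?}{=} \frac{1}{AD - BC}
\begin{pmatrix} D & -B \\ -C & A \end{pmatrix}
\qquad (1)
$$

$$
\begin{pmatrix} D & -B \\ -C & A \end{pmatrix}
\begin{pmatrix} A & B \\ C & D \end{pmatrix}
=
\begin{pmatrix} DA - BC & DB - BD \\ -CA + AC & -CB + AD \end{pmatrix}
\qquad (2)
$$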

There is a question of how 1/(AD−BC) is even defined, since A, B, C, and D are all matrices. Shouldn't it be 1/|AD−BC|? But this doesn't matter anyway, because:

Element (1,2) of Equation (2) is DB−BD ≠ 0, and element (2,1) is −CA+AC ≠ 0. This comes from the fact that matrix multiplication is not commutative. These entries should be zero for an inverse, but in general they are not; therefore Equation (1) is not correct for block matrices. Many relationships that hold for scalar matrices also hold for block matrices, but some of the scalar identities depend on the commutative law, and those do not carry over to the block case.
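As a quick numerical sanity check (my own sketch, not part of the original post), Ruby's standard `matrix` library can confirm that the off-diagonal entry DB − BD is nonzero for concretely chosen non-commuting blocks:

```ruby
# Sanity check: the scalar 2x2 inverse formula fails for matrix blocks.
# The block names B and D follow the post; the concrete values are
# arbitrary examples chosen so that the blocks do not commute.
require 'matrix'

b = Matrix[[0, 1], [1, 0]]
d = Matrix[[1, 1], [0, 2]]

# Element (1,2) of Equation (2) is DB - BD; it vanishes only if D and B commute.
db_bd = d * b - b * d
p db_bd            # => Matrix[[1, -1], [1, -1]], not the zero matrix
```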

Then, let's compute the inverse by the straightforward method: elimination.
Let's test it. T^{-1}T is:

TT^{-1} is the same. Fantastic! Of course there is no magic and it is just as expected, but this is fun.
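For reference, the result that block elimination produces (a standard identity, stated here under the assumption that A and the Schur complement S = D − CA⁻¹B are invertible) is:

$$
T^{-1} = \begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1}
= \begin{pmatrix}
A^{-1} + A^{-1} B S^{-1} C A^{-1} & -A^{-1} B S^{-1} \\
- S^{-1} C A^{-1} & S^{-1}
\end{pmatrix},
\qquad S = D - C A^{-1} B.
$$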

## 2010-06-20

### Paper cut picture: Do not stand at my grave and weep

Sometimes I make Kirie (paper-cut pictures). I have given most of them to friends. This is one of my recent Kirie. I thank Claudia for giving me permission to put a picture of my Kirie on my blog.

I translated the poem "Sen no Kaze ni natte" (originally the English poem "Do not stand at my grave and weep") from Japanese into German. Why did I translate it based on the non-original poem? Hmm, I thought I could sing it, since the Japanese version of the poem has a melody. But it is difficult, of course.

---
Tausend Winde (Stehe nicht vor meinem Grab und weine)
Englischer Autor: unbekannt (oder Mary Elizabeth Frye)
Japanische Übersetzung: 新井満 (Arai Man)
Deutsche Übersetzung: Hitoshi und Leo

Stehe nicht vor meinem Grab und weine.
Ich bin nicht dort, ich schlafe nicht.
Ich bin tausend Winde, die im Himmel wehen.

Ich bin der glänzende Diamant auf dem Schnee.
Ich bin der Sonnenstrahl auf reifem Getreide.
Am Morgen werde ich ein friedlicher Vogel
der Dich aufweckt, im kreisenden Flug.
In der Nacht werde ich die sanften Sterne.

Stehe nicht vor meinem Grab und weine.
Ich bin nicht dort, ich schlafe nicht.
Ich bin tausend Winde, die im Himmel wehen.
Ich bin überall, wo du bist.
---

## 2010-06-16

### World Cup 2010 Monte-Carlo Simulator (2)

Here is the source code of wc2010.rb.

```ruby
#! /usr/bin/ruby
#
# World Cup 2010 point simulator
# Copyright (C) 2010 Yamauchi Hitoshi
#------------------------------------------------------------

require "getoptlong"

WC2010      = "0.0.0"
MAX_MATCHES = 64

#------------------------------------------------------------
# class Wc2010
#------------------------------------------------------------
class Wc2010
  # constructor
  def initialize()
    # the last world cup's results, used as the probability distribution
    @sample = [
      4, 2,    0, 2,    1, 0,    0, 0,    2, 1,
      0, 1,    3, 1,    0, 1,    3, 1,    0, 3,
      2, 0,    2, 1,    0, 0,    1, 0,    4, 0,
      2, 2,    1, 0,    3, 0,    2, 0,    1, 0,
      6, 0,    2, 1,    0, 0,    2, 0,    0, 2,
      1, 1,    0, 0,    2, 0,    1, 1,    0, 2,
      0, 4,    3, 1,    0, 3,    1, 2,    2, 0,
      2, 2,    2, 1,    1, 1,    0, 0,    3, 2,
      0, 2,    2, 1,    1, 4,    2, 2,    0, 1,
      1, 0,    0, 2,    2, 0,    2, 0,    2, 1,
      1, 0,    1, 0,    1, 0,    0, 0,    3, 0,
      1, 3,    1, 1,    3, 0,    0, 0,    0, 1,
      0, 2,    0, 1,    3, 1,    1, 1]

    @max_point = @sample.max

    # sample bins: Array [0, 0, ..., 0], size == @max_point + 1
    @sample_bin = Array.new(@max_point + 1, 0)
    @accum_dist = Array.new(@max_point + 1, 0)

    # put the samples into the bins (histogram)
    for val in @sample
      @sample_bin[val] += 1
    end

    # accumulate the histogram (cumulative distribution)
    @accum_max = 0
    idx = 0
    for hist in @sample_bin
      @accum_max += hist
      @accum_dist[idx] = @accum_max
      idx += 1
    end

    # create half-open range objects, one per point value
    @range_array = []
    @range_array << Range.new(0, @accum_dist[0], true)
    idx = 0
    while idx < (@accum_dist.length - 1)
      @range_array << Range.new(@accum_dist[idx], @accum_dist[idx + 1], true)
      idx += 1
    end
  end

  # map a uniform random value to a point with the same distribution
  # as the last world cup
  # \param  _rd uniform random value
  # \return generated random variable
  def get_point(_rd)
    idx = 0
    for r in @range_array
      if r.include? _rd then
        return idx
      end
      idx += 1
    end
    raise RuntimeError.new(_rd.to_s + " is not in any range.\n")
  end

  # run the simulation
  def run()
    srand(42)                  # always the same seed, for reproducibility
    idx = 0
    while idx < MAX_MATCHES do
      rd1 = rand(@accum_max)
      rd2 = rand(@accum_max)
      pt1 = get_point(rd1)
      pt2 = get_point(rd2)
      print "[" + (idx + 1).to_s + "]: " + pt1.to_s + ' ' + pt2.to_s + "\n"
      idx += 1
    end
  end
end

#------------------------------
# command line option parsing
#------------------------------
args = GetoptLong.new()
args.set_options(['--version', '-V', GetoptLong::NO_ARGUMENT])

begin
  args.each_option do |name, arg|
    eval "$OPT_#{name.sub(/^--/, '').gsub(/-/, '_').upcase} = '#{arg}'"
  end
rescue
  exit(1)
end

#--- show version
if $OPT_VERSION
  $stderr.print(WC2010 + "\n")
  exit(1)
end

wc = Wc2010.new()
wc.run()

# --- end of wc2010.rb
```

### World Cup 2010 Monte-Carlo Simulator (1)

One of my friends suggested that we predict the World Cup points. However, I had no idea how, so I decided to write a Monte-Carlo simulator for World Cup point prediction. As a Sunday researcher, I am interested in Monte-Carlo simulation and have discussed the method with my colleagues, but I had never implemented a simulator that samples from a specific probability distribution. This seemed like a good opportunity to implement one.

My company produces programs that use a variant of Monte-Carlo simulation. It is good for some specific areas, like physical simulation, but can I use it for World Cup prediction? I doubt it. I also don't want to spend more than an hour implementing it. If you are a World Cup fan, simulating it probably makes no sense and is no fun, I presume. This method also cannot predict individual results anyway; it is like Hari Seldon's psychohistory in Asimov's Foundation. We could predict the average or the distribution of the points in this World Cup 2010 based on past World Cups, but we can hardly predict each individual result. Although the method has this limitation, and I have absolutely no idea about the teams, I think it is still better than my own guessing. I have heard this kind of method is also used to predict the stock market.

In mathematics, we can solve some problems within the range of their assumptions. Assumptions are important; the question is how well my assumptions hold. I set up the World Cup prediction problem with the following assumptions.

• Assumption 1. In each match, each team's points are independent.
• Assumption 2. The point distribution of the 2010 World Cup is the same as that of the last one in 2006.

First, assumption 1 is an outrageous assumption. It means the point prediction does not depend on the opposing team at all: for example, Japan against any team gets the same predicted points. Unreasonable. However, to be honest, I know nothing about the World Cup (only yesterday did I happen to learn that Japan plays in the World Cup this year). So I imagine that maybe most of the teams have similar skills; that is assumption 1. If this instinct is wrong, the results will tell me. If an assumption is wrong, any mathematics gives us garbage.

Second, I set assumption 2 because my friend's web page has only the last World Cup's results. It might be better to use data from as many past World Cups as possible, if the rules have not changed. Well, I am just lazy. This assumption might also be wrong.

I implemented a simulator based on these assumptions (wc2010.rb). The program generates points with a distribution similar to 2006's via Ruby's pseudo-random number generator. One problem is how to initialize the pseudo-random number generator; this is just luck. I need one number, called the seed. I could use my birthday, or the current time in seconds since January 1st, ... I just picked my friend's suggestion: 42.
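The sampling idea in wc2010.rb can be sketched compactly (my own condensed version; the bin counts are read off the WC2006 histogram below): draw a uniform integer below the total count, then find the first bin whose cumulative count exceeds it.

```ruby
# Inverse-CDF sampling from a histogram, condensed from wc2010.rb.
counts = [48, 36, 28, 11, 4, 0, 1]   # points 0..6, WC2006 totals
total  = counts.sum                  # 128 = 64 matches x 2 teams
cum    = counts.each_with_object([]) { |c, a| a << (a.last || 0) + c }

# Map a uniform integer in 0...total back to a point value.
def draw(cum, total)
  r = rand(total)
  cum.index { |c| r < c }
end

srand(42)                            # fixed seed, as in wc2010.rb
p Array.new(5) { draw(cum, total) }  # five simulated team scores
```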

The last World Cup's point distribution is as follows.

WC2006 result distribution

```
Points
0    :************************************************
1    :************************************
2    :****************************
3    :***********
4    :****
5    :
6    :*
```

My simulator's distribution

```
Points
0    :******************************************
1    :******************************************
2    :*******************************
3    :*******
4    :*****
5    :
6    :*
```

They are quite similar. In World Cup 2006 no team scored 5 points, so the prediction never produces a 5 either. If some team scores 5 points this time, there is no chance to predict it. There are also no scores above 6.

The following is the prediction result of the simulator. So far only one result has been predicted correctly. I think assumption 1 is not so good.

```
Estimate       Result
[1]: 2 1       1 1
[2]: 2 0       0 0
[3]: 2 1       2 0
[4]: 1 0       1 0     -> match ARG:NGA
[5]: 2 3       1 1
[6]: 1 2       0 1
[7]: 1 1       0 1
[8]: 2 3       4 0
[9]: 2 2       2 0
[10]: 0 0       1 0
[11]: 0 1       1 1
[12]: 0 2       1 1
[13]: 2 0       0 0
[14]: 0 0
[15]: 1 1
[16]: 0 0
[17]: 1 1
[18]: 0 4
[19]: 2 2
[20]: 1 2
[21]: 1 4
[22]: 0 6
[23]: 2 1
[24]: 1 0
[25]: 1 1
[26]: 0 1
[27]: 1 2
[28]: 1 3
[29]: 1 3
[30]: 0 2
[31]: 1 0
[32]: 0 1
[33]: 0 0
[34]: 0 2
[35]: 1 0
[36]: 3 3
[37]: 0 2
[38]: 1 0
[39]: 1 2
[40]: 2 1
[41]: 1 0
[42]: 4 0
[43]: 0 0
[44]: 1 1
[45]: 0 1
[46]: 2 0
[47]: 0 4
[48]: 0 1
[49]: 2 0
[50]: 1 2
[51]: 1 0
[52]: 2 0
[53]: 0 1
[54]: 1 2
[55]: 0 2
[56]: 0 0
[57]: 1 1
[58]: 2 4
[59]: 3 1
[60]: 0 2
[61]: 1 2
[62]: 1 2
[63]: 1 0
[64]: 0 2
```

## 2010-06-01

### A personal annotations of Veach's thesis (8)

Page 11: Light implementation

On page 11, there is a paragraph about light source implementation:

> Ideally, the performance of the light transport algorithm should depend only on what the scene represents, rather than the details of how it is represented. For example, consider a scene illuminated by a square area light source. If this light source is replaced with a 10 by 10 grid of point sources, then the visual results will be nearly identical. However, the performance of many light transport algorithms will be much worse in the second case.
Why are 10x10 point lights more expensive? As an amateur, I would think that if the area light is sampled at 20x20 points, then 10x10 points is cheaper. But this is just an example. It seems the author wants to say that if you tell the renderer what the scene is in a more direct way, the renderer can usually optimize the process. It is better to say that an area light is an area light, not that an area light is many point lights. (Thanks to D.S.)

Page 14: unbiased and consistent

Consistent: error -> 0 as the number of samples -> infinity
Unbiased:   E(error) = 0 for any number of samples

Consistent means the error becomes zero in the end: the solution converges with no error, so we can always get the correct answer if we have enough computation power. Note that this says nothing about the intermediate solutions; some intermediate solution may have a large error.

On the other hand, unbiased means the expectation of the error (the average of the error) is zero. In this case, even if the intermediate solutions individually have a lot of error, the average of the error is zero.
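In symbols (my own restatement; $I$ is the true value of the integral and $F_N$ the estimate after $N$ samples):

$$
\text{consistent:}\quad \lim_{N \to \infty} F_N = I \ \text{(with probability 1)},
\qquad
\text{unbiased:}\quad E[F_N - I] = 0 \ \text{for every } N.
$$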

Consistent is the larger concept, since there are cases that are consistent (= converging) but biased. (I think there is an assumption here that the function is integrable; that is not a bad assumption for the light transport problem anyway.) If the function is integrable and the integral exists, I think an unbiased estimator is also consistent.

By the way, I don't know what role the error's variance plays. Does it matter in this context? The error itself and its first moment matter, so could the second moment matter too? I should ask someone about this.
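To make the unbiased case concrete, here is a small sketch (my own example, not from the thesis): a Monte-Carlo estimate of the integral of x^2 over [0,1], whose exact value is 1/3. Each sample x^2, with x uniform, has expectation 1/3, so the mean of N samples is unbiased for every N, while its variance, and hence the typical error, shrinks as N grows:

```ruby
# Monte-Carlo estimate of the integral of x^2 over [0,1]; exact value 1/3.
# Each sample x^2 (x uniform in [0,1)) has expectation 1/3, so the mean of
# N samples is an unbiased estimator whose error shrinks as N grows.
def mc_estimate(n)
  sum = 0.0
  n.times { x = rand; sum += x * x }
  sum / n
end

srand(7)                                   # fixed seed for reproducibility
[10, 1_000, 100_000].each do |n|
  printf("N = %7d  estimate = %.5f\n", n, mc_estimate(n))
end
```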

### Skewed Commutative (?) Matrix (7)

Source code

You can find the source code of AB=-BA visualizer here.