

Showing posts from July, 2010

Computer graphics for all by Takeo Igarashi

This issue of Communications of the ACM has an article by Takeo Igarashi. http://doi.acm.org/10.1145/1785414.1785436 I liked this article because it gives an overview of his research history. If I may presume to comment on his research: it is full of fun demos, it is practical, and it develops solid base technologies. These are great qualities. I think people like the dream-filled side of his research. Yes, I like that too. But I especially like the technology-oriented papers from his team, which I feel get less attention from the community. One example is SmoothTeddy. That work does not seem to be as celebrated as Teddy, but I think SmoothTeddy makes a significant technical contribution to mesh optimization for interactive sketching systems. It is technical contributions like these that I like in his work.

A personal annotation of Veach's thesis (12), p. 71

2.8.2 Regression methods Section 2.8.2's Equation (2.33) was a mystery for me at first. I asked a few specialists about it, but they told me it is not so important and that I should move on, since the good parts are still to come... Therefore this annotation might not be so helpful, but I like this straightforward idea. The hint for understanding this equation is in the thesis itself: ``Equation (2.33) is the standard minimum variance unbiased linear estimator of the desired mean I, ....'' This means he used Gauss's least-squares method. Most of Veach's thesis is self-contained and easy to understand, but personally I would have liked one more equation --- Equation (1) --- here. It says that each final estimate F is equal to the sample mean \hat{I}. Veach probably thought this was too obvious to write down, but it surely helps dummies like me. We can derive Equation (2.33) from Equation (1), so let's try that. First of all, Equation (1) ...
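To make the least-squares reading concrete, here is a minimal sketch of the step I call Equation (1), written in my own notation rather than Veach's: if the samples X_1, ..., X_N all have the desired mean I, the least-squares estimate of I is exactly the sample mean \hat{I}.

    % My own sketch (not Veach's Equation (2.33)): choose the estimate I
    % that minimizes the sum of squared residuals (Gauss's least squares).
    \hat{I} = \arg\min_{I} \sum_{i=1}^{N} (X_i - I)^2
    % Setting the derivative with respect to I to zero:
    0 = -2 \sum_{i=1}^{N} (X_i - I)
    \quad\Longrightarrow\quad
    \hat{I} = \frac{1}{N} \sum_{i=1}^{N} X_i
    % i.e., Equation (1): the final estimate equals the sample mean.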

Real Professionals

Yesterday I read a modest article that nevertheless impressed me. A staff member of MAFF (Ministry of Agriculture, Forestry and Fisheries) spotted an irregularity in a single receipt and thereby protected many people's health. This is a real professional. http://www.asahi.com/national/update/0726/TKY201007260261.html If I can believe this article, it is magnificent. I thank the staff member at MAFF, and also the people who supported him or her, because sometimes, when somebody finds something, it never reaches the managers. (Some managers are corrupt, and later it becomes a big scandal.) This time it seems MAFF also bears part of the responsibility, but they did the right thing. I hope these actions are properly evaluated and rewarded. My impression from yesterday's articles is that it was not a usual day: the son of a drug king tries to come to terms with his father's past, a murder unsolved for 15 years, an American (living in Berlin) opposes a Hiroshima anti-nuclear ceremony in Potsdam, and so on. http://www.asahi.com/nat ...

A personal annotation of Veach's thesis (11), p. 67

2.7.2 Russian Roulette Russian roulette is one of the sampling techniques. It selects sample paths stochastically, and this includes stochastic ray termination. The light transport equation is an integral, and the equation has several terms whose magnitudes depend on the sample. If a term is quite small, or the material's reflectance property says a direction is not very probable, then sampling that term or direction has little effect, yet it still costs the same. But if we never sample such a term at all, the solution will be biased, because it is always possible that light comes from that direction. Figure 1. Russian roulette: (a) non-Russian-roulette (splitting) ray tracing, which samples many directions at a glossy surface; (b1) Russian roulette sampling, which samples only one direction depending on the probability; (b2) Russian roulette stochastic termination; (c) a traced path is also chosen and terminated with specific probabilities. Figure 1 (a) is an example of ...
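As an illustration of the stochastic termination in (b2) and (c), here is a toy sketch in C++ (my own illustration, not code from the thesis; traceRadiance and the 0.8 continuation probability are made-up placeholders): the path is continued only with probability p, and a surviving path's contribution is divided by p so the estimator stays unbiased.

    // Toy sketch of Russian-roulette path termination (my own example).
    #include <random>

    static std::mt19937 rng(1234);
    static std::uniform_real_distribution<double> u01(0.0, 1.0);

    double traceRadiance(int depth)
    {
        // Placeholder for the light arriving along the sampled direction.
        double incoming = 1.0 / (1.0 + depth);

        // Russian roulette: continue the path only with probability p.
        const double p = 0.8;            // continuation probability (assumed)
        if (u01(rng) >= p)
            return incoming;             // terminated: no recursion

        // The path survived: recurse and divide by p, so the expected
        // value of the estimator is unchanged (unbiased termination).
        return incoming + traceRadiance(depth + 1) / p;
    }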

A personal annotation of Veach's thesis (10), p. 60

Star discrepancy D_N^* I might say that the goal of the quasi-Monte Carlo method seems to contain a sort of contradiction: to minimize the irregularity of the distribution, yet not to use regular uniform samples, in order to avoid aliasing. To minimize the irregularity of a distribution, the straightforward answer is regularly distributed sampling. But this causes aliasing, which is exactly why one uses the Monte Carlo method in the first place (note: not quasi-Monte Carlo). A discrepancy is a quantitative measure of the irregularity of a distribution, and it tells us how much integration error occurs. The star discrepancy D_N^* is one such discrepancy. When we sample many boxes that always include the origin, it gives the maximum error in estimating the areas of these boxes. In the one-dimensional case the boxes are intervals, so we estimate, by sampling, the length of an interval that includes the origin. If we sample N points regularly and uniformly, we cannot measure a length below 1/(2N) correctly. This is the same ...
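For reference, the textbook definition of the star discrepancy in one dimension is the following (written from memory, so please check it against Veach's thesis or Niederreiter's book):

    % Star discrepancy of points x_1, ..., x_N in [0,1), 1D case:
    % compare the fraction of points falling in [0, t) with the length t.
    D_N^* = \sup_{0 \le t \le 1}
            \left| \frac{\#\{\, i : x_i \in [0, t) \,\}}{N} - t \right|
    % For the regular midpoint samples x_i = (2i - 1) / (2N), this
    % supremum equals 1/(2N), which matches the 1/(2N) mentioned above.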

Sierpinski tetrahedron

A while ago, I wrote a blog entry about a template error. I finally finished the program today: a Sierpinski tetrahedron generator. http://en.wikipedia.org/wiki/Sierpinski_triangle This time I wanted to learn OpenMesh ( http://www.openmesh.org ). I have my own mesh library, but usually I just want to solve a problem, so if there is a well-developed open source library, I think it is better to switch to it. Figure 1 shows how to get a consistent list of vertices with OpenMesh. Here, "consistent" means that I always want the vertices indexed as in Figure 1, top. For example, if O_1 and O_3 are exchanged, I have a problem creating a Sierpinski tetrahedron. Figure 1: tetrahedron configuration. Figure 2: creation rule 1. Figure 3: creation rule 2. Actually, I don't know the best way to do this; I depend on OpenMesh's face halfedge circulator. If anybody knows a better way, please give me a comment. In my method, first I need to guarantee that the input is a tetrahedron ...
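This is roughly what I mean by depending on the circulator. A minimal sketch (not my actual generator code, and the iterator API may differ slightly between OpenMesh versions):

    // Sketch: collect a face's vertex handles in the order OpenMesh's
    // face-vertex circulator visits them. The cyclic order is consistent
    // per face; which vertex comes first depends on the face's stored
    // halfedge. (Names follow recent OpenMesh; older versions test the
    // circulator itself instead of calling is_valid().)
    #include <vector>
    #include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>

    typedef OpenMesh::TriMesh_ArrayKernelT<> Mesh;

    std::vector<Mesh::VertexHandle>
    faceVertices(const Mesh& mesh, Mesh::FaceHandle fh)
    {
        std::vector<Mesh::VertexHandle> vhs;
        for (Mesh::ConstFaceVertexIter fv_it = mesh.cfv_iter(fh);
             fv_it.is_valid(); ++fv_it)
        {
            vhs.push_back(*fv_it);
        }
        return vhs;
    }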

A conference schedule table generator program

I have used a conference schedule table for a long time. A few months ago, a blog, http://www.realtimerendering.com/ , wrote about that schedule table, and there was some discussion of how to improve it. This table is optimized for looking up the next closest deadline, so when I got a rejection of my paper, I looked up this table. After the discussion, the author of the table generator made the program free software. You can download the source code from http://code.google.com/p/gen-conference-schedule-table/ under the new BSD license. I downloaded the program and ran the unit test script, which produces the following figure. An HTML image map file is also generated, and if you put it into an HTML file, the image becomes a clickable map (but not in this blog...). I have one more wish: that Eurographics's conference calendar had a figure like this. http://www.eg.org/EG/News/upcomingevents http://confcal.vrvis.at/

A personal annotation of Veach's thesis (9)

Page 50: Stratified I was not familiar with the word `stratified.' A Monte Carlo method usually uses a pseudorandom number sequence. A random number sequence should have the property of `unexpectedness,' which means that a sequence like 1, 1, 1, 1, 1, ..., 1 could also appear as part of a random number sequence. This issue is well explained in Knuth's book (D. E. Knuth, The Art of Computer Programming, Vol. 2, Random Numbers). The sampling pattern could look like the left of Figure 1. Figure 1. Stratified example. Left: 12 random samples (clumping problem); right: 12 samples, 2x2 stratified random. In Figure 1, left, some parts are not well sampled and other parts are oversampled -- the sampling pattern is biased. This is clumping, bad luck in a sense. Usually it is solved by increasing the number of samples, but that is computationally expensive, meaning it takes more time to compute. To solve this clumping problem without losing performance, the sampling region is ...
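Here is a toy sketch of the difference (my own code, not from the thesis): 12 purely random samples of the unit square versus 12 samples stratified into a 2x2 grid of cells, three per cell, as in Figure 1.

    // Toy sketch: random vs. 2x2 stratified sampling of the unit square.
    #include <cstdio>
    #include <random>

    int main()
    {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> u01(0.0, 1.0);

        // Purely random: clumping can leave some regions empty.
        for (int i = 0; i < 12; ++i)
            std::printf("random     %f %f\n", u01(rng), u01(rng));

        // 2x2 stratified: each cell (sx, sy) gets exactly 3 samples,
        // so no quarter of the domain is left unsampled.
        for (int sy = 0; sy < 2; ++sy)
            for (int sx = 0; sx < 2; ++sx)
                for (int i = 0; i < 3; ++i)
                    std::printf("stratified %f %f\n",
                                0.5 * (sx + u01(rng)),
                                0.5 * (sy + u01(rng)));
        return 0;
    }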

Fighting with a template error (as usual)

I would like to make a program with OpenMesh (http://www.openmesh.org). I tried some examples that worked a while ago, but now they fail with the following compile error (a template instantiation error).

    /usr/X11R6/bin/g++ -Wp,-MD,Ubuntu9.04/VisOMTriMeshDNode.dep -DHAVE_SSTREAM -DUSE_GMU_GERR -std=c++0x -Wall -Wnon-virtual-dtor -Woverloaded-virtual -DARCH_LINUX -DCOMP_GCC -I/usr/X11R6/include -I/usr/X11R6/include -I/usr/include/tcl8.4 -I/usr/include/qt4 -I/usr/include/qt4/QtCore -I/usr/include/qt4 -I/usr/include/qt4/QtGui -I/usr/include/qt4/QtNetwork -I/usr/include/qt4/QtOpenGL -I/opt/OpenMesh/src -I/home/project/shared_proj -fPIC -g -Wno-uninitialized -D_INCTEMP -DUSE_GMU_GERR -DGMU_DBG_STRIPPATH=\"ALL\" -o Ubuntu9.04/VisOMTriMeshDNode.o -c VisOMTriMeshDNode.cc
    /usr/include/c++/4.3/ext/new_allocator.h: In member function 'void __gnu_cxx::new_allocator<_Tp>::construct(_Tp*, _Args&& ...) [with _Args = long int, _Tp = OpenMesh::BaseProperty*]':
    /usr/ ...

Ryouma ga yuku by Shiba Ryoutarou

I finally finished reading "Ryouma ga yuku" by Shiba Ryoutarou. My problem was how to get the books. This time my friend's sister visited Germany, and I asked her to carry the last two volumes. I spent the whole Saturday reading them. It was so exciting that I couldn't stop. I don't much care for Shiba's other books, which is why I hadn't read this one until now. How stupid of me. I didn't understand Shiba's books when I was a high school student. Maybe I was not mature enough to understand them. 面白き,こともなき世を,面白く, 住みなすものは心なりけり. 高杉晋作 (Even if the world has no fun in it, it is our mind that can make it fun. Takasugi Shinsaku) (This is just one possible translation. I cannot translate it well.) 常日頃,闇に覆わる我が心, 光となるは,竜馬の言葉. 山内斉 (Every day my mind is easily covered by darkness and depression; Ryouma's words light it up. Yamauchi Hitoshi)

Visiting conferences

My hobby as a Sunday researcher is visiting conferences. Since many of my friends are now spread all over the world, it is easier to meet some of them by visiting a conference. A conference also gives me a lot of ideas and motivation. I visited Eurographics 2010, HPG 2010, and SGP 2010 this year. This is an expensive hobby for one person, since I paid for everything myself and needed to take vacation days. The pictures are snapshots of SGP 2010: the conference venue and the city hall in Lyon. There are several conferences in France this year: SMI, Curves and Surfaces, and SGP. I chose SGP because I knew in advance that some of my friends would be there. Whenever I visit a conference, one thought always comes to me: I am not cool. OK, I was never cool, but at least I used to try. I did not present any new ideas; I was just there and collected ideas. I doubt the meaning of my life, which is sad... Yet I could still do something, maybe. I should do something.

Unnecessary complexity

This is a picture of my friend's toilet (Nancy, France). There should only be the heater's inlet and outlet, the toilet's inlet and outlet, and a basin's inlet and outlet; the water inlets should be shared, and the outlets as well. So I don't understand how it became this complex. I have the impression that I can easily find such unnecessarily complex things in France... it is just my impression. Well, I also experienced a train system in England that was unnecessarily complex.

2.5 LU decomposition (3) Why do we do LU decomposition?

Finally I can conclude LU decomposition. If you are not interested in linear algebra, unfortunately this is not for you. In the last two blog entries we saw that LU decomposition is just elimination, except that we keep the elimination matrices in the form of their inverses. As I understand it, LU decomposition has two advantages. The first point is that L and U are triangular matrices, and a triangular matrix is the easiest kind of matrix to solve by back substitution. For example, if we want to solve the following triangular system, it is simple. I rewrite it as simultaneous equations: Equation 12 directly shows x = 5. Equation 13 is x + y = 2; we know x = 5, therefore y = -3. Equation 14 is x + 2y + z = 2, therefore 5 - 6 + z = 2, then z = 3. We don't need any elimination process. Of course, we already did elimination to get this triangular matrix, so this should be no surprise. It is that easy to solve a triangular system. The second point is that LU decomposition needs only the matrices ...
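The example system above can be solved in a few lines of code. Here is a toy sketch (my own code, using the lower triangular matrix and right-hand side from the equations above):

    // Toy sketch: solving the lower triangular system from the post
    //   x           = 5
    //   x +  y      = 2   ->  y = -3
    //   x + 2y + z  = 2   ->  z =  3
    // by simple substitution, row by row.
    #include <cstdio>

    int main()
    {
        const int n = 3;
        double L[3][3] = { {1, 0, 0},
                           {1, 1, 0},
                           {1, 2, 1} };
        double b[3] = {5, 2, 2};
        double x[3];

        // Each row introduces exactly one new unknown, so no elimination
        // is needed: subtract the already-known terms and divide.
        for (int i = 0; i < n; ++i) {
            double s = b[i];
            for (int j = 0; j < i; ++j)
                s -= L[i][j] * x[j];
            x[i] = s / L[i][i];
        }
        std::printf("x = %g, y = %g, z = %g\n", x[0], x[1], x[2]); // 5 -3 3
        return 0;
    }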

2.5 LU decomposition (2) Properties of L and U

This is a continuation of LU decomposition; if you have no interest in linear algebra, unfortunately this is not for you. Last time we saw the nice property that L, the result of multiplying the elimination matrices, simply carries the sign-inverted components of those elimination matrices. I think many of my friends would say: ``so what?'' I think it is convenient if the result of a matrix multiplication can be written down without actually performing the multiplication. But first, a note. In the last blog entry, L = (E_{32} E_{31} E_{21})^{-1}. L has the sign-inverted components of the E's, but this is not true for the product (E_{32} E_{31} E_{21}) itself; it only holds once the product is inverted. In L^{-1} = E_{32} E_{31} E_{21}, the components of the E's are not preserved in general. (In this particular example, E_{31}'s component happens only to change sign, but if you solve Problem 8 on p. 103 of Gilbert Strang's book, you will find there is more behind this.) Let's come back to the property. Intuitively, it is based on the way a matrix applied from the left operates on rows. Let me ...
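To make this concrete, here is the standard 3x3 case with generic multipliers l_{21}, l_{31}, l_{32} (my own sketch, not the concrete numbers from the post):

    E_{21} = \begin{pmatrix} 1 & 0 & 0 \\ -l_{21} & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},\quad
    E_{31} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -l_{31} & 0 & 1 \end{pmatrix},\quad
    E_{32} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -l_{32} & 1 \end{pmatrix}

    % The inverse of the product: the multipliers drop into place with
    % their signs flipped and nothing else changes.
    L = (E_{32} E_{31} E_{21})^{-1} = E_{21}^{-1} E_{31}^{-1} E_{32}^{-1}
      = \begin{pmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{pmatrix}

    % The product itself: the (3,1) entry picks up an extra l_{32} l_{21}
    % term, so E_{31}'s component is not preserved (it reduces to a pure
    % sign flip only when l_{21} or l_{32} is zero).
    L^{-1} = E_{32} E_{31} E_{21}
      = \begin{pmatrix} 1 & 0 & 0 \\ -l_{21} & 1 & 0 \\ l_{32} l_{21} - l_{31} & -l_{32} & 1 \end{pmatrix}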

Drei Gäste im Restaurant

Now my favorite quiz is in German (with help from Isabelle, thanks). In English it goes like this: Drei Gäste im Restaurant (Three guests in a restaurant). We (I, Isabelle, and my friend) were in a restaurant. We ordered a set menu. It cost 300 euros. The waiter collected the money from us and went back to the kitchen, where the chef said: "Today is the restaurant's anniversary, so the menu costs only 250 euros." The waiter came back to the table, but he thought: "You cannot divide 50 euros among three people." So he said: "Today is our anniversary, so you get 30 euros back." Each of us got 10 euros. We thanked the waiter and went home. The waiter felt a bit guilty, but he decided to forget about it. Then, however, he noticed something strange. "Each person paid 90 euros, so the sum is 270. I kept 20 euros of theirs. 270 plus 20 makes 290. But at the beginning I received 300 euros." The waiter wrote: each person paid 90 x three = 270 + I ...

What is LU decomposition, and why do I care about it? (1)

Gilbert Strang's Introduction to Linear Algebra, 2.5. There is a method called LU decomposition. As a Sunday researcher, I always start with a kind of stupid question. - What is LU decomposition? - Why should I care about it? LU decomposition is actually almost the elimination method, and the reason we do elimination is that we want to solve a system of linear equations. I think the most important thing is understanding the problem. Still, if we only understood the problem and could not get the answer, that would be a bit sad for me; I hope to have a solution as well. Elimination is one of the methods for solving such a system. It adds or subtracts one equation from another to remove some of the variables. In junior high school in Japan, I learned this as elimination for simultaneous equations (連立一次方程式の消去法). Let's see an elimination example using the following matrix A. We remove a_{21} and a_{31} of A by applying the elimination matrices E_{21} and E_{31}. For example, E_{21} subtracts row 1 from ...
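For a generic 3x3 matrix A, the first elimination step looks like this (my own sketch with a general multiplier l_{21} = a_{21}/a_{11}; the post's concrete numbers are not reproduced here):

    % E_{21} subtracts l_{21} times row 1 from row 2, which zeroes a_{21}.
    E_{21} = \begin{pmatrix} 1 & 0 & 0 \\ -l_{21} & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},
    \qquad
    E_{21} A = \begin{pmatrix}
      a_{11} & a_{12}                  & a_{13} \\
      0      & a_{22} - l_{21} a_{12}  & a_{23} - l_{21} a_{13} \\
      a_{31} & a_{32}                  & a_{33}
    \end{pmatrix}
    % E_{31} then does the same for a_{31}, using l_{31} = a_{31}/a_{11}.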