What is the problem? What is the transformation matrix of a normal?

We use transformation matrices every day when we move objects in a computer. Current state-of-the-art DCC (digital content creation) software usually represents objects with triangles or polygons, and each vertex of these triangles or polygons has its own coordinates. When we rotate or move an object, we apply a transformation matrix to each vertex. A vertex is usually a three-dimensional vector in computer graphics.
We can define a normal vector for each triangle. A normal vector indicates the direction in which a triangle face is oriented, and it is also a three-dimensional vector. In a 3D computer graphics system, normal vectors are important since we need them to compute how bright the surfaces are. Because an ordinary vector can be transformed by a matrix, it seems straightforward to use the same matrix to transform a normal vector. However, this fails. But why? This article is all about this ``why?''
Why does a usual transformation matrix fail on a normal vector?

According to , the explanation by Eric Haines is quite good. The book  has the same explanation, and I find it a great one. A similar explanation can also be found in . Figure 1 shows a similar explanation.
|Figure 1. Scaling on a normal breaks the normal.|
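The breakage shown in Figure 1 can also be checked numerically. The sketch below (plain Python, no external libraries; the vectors and the scale factors are made up for illustration) applies a non-uniform scale to a tangent vector lying in a surface and to the surface's normal, and shows that the naively transformed normal is no longer perpendicular to the surface, while a normal transformed by the inverse transpose of the matrix is:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scale(m_diag, v):
    # Apply a diagonal (pure scaling) matrix, stored as its diagonal, to a vector.
    return [d * x for d, x in zip(m_diag, v)]

# A tangent vector lying in the surface, and the surface normal.
tangent = [1.0, 1.0, 0.0]
normal  = [-1.0, 1.0, 0.0]
assert dot(tangent, normal) == 0.0   # perpendicular, as a normal must be

# Non-uniform scaling matrix: stretch x by 2, leave y and z alone.
S = [2.0, 1.0, 1.0]

t2 = scale(S, tangent)               # transformed tangent: [2, 1, 0]
n_naive = scale(S, normal)           # naively transformed normal: [-2, 1, 0]
print(dot(t2, n_naive))              # -3.0: no longer perpendicular!

# Transforming the normal by the inverse transpose of S instead keeps it
# perpendicular. For a diagonal matrix the inverse transpose is simply the
# reciprocal of each diagonal entry.
S_inv_T = [1.0 / d for d in S]
n_fixed = scale(S_inv_T, normal)     # [-0.5, 1, 0]
print(dot(t2, n_fixed))              # 0.0: perpendicularity restored
```

The example uses a diagonal matrix only to keep the arithmetic readable; the same failure occurs for any transformation with a non-uniform scaling component.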
Why can't we transform a normal vector in the same way as usual vectors?