What is the problem? What is the transformation matrix of a normal?
We use transformation matrices every day when we move objects on a computer. Current state-of-the-art DCC (digital content creation) software usually represents objects with triangles or polygons. Each vertex of these triangles or polygons has coordinates, and when we rotate or move an object, we apply a transformation matrix to each vertex. A vertex is usually a three dimensional vector in computer graphics.

We can also define a normal vector for each triangle. A normal vector indicates the direction in which the triangle face is oriented; it is also a three dimensional vector. In a 3D computer graphics system, normal vectors are important because we need them to compute how bright the surfaces are. Since a usual vector can be transformed by a matrix, it seems straightforward to use the same matrix to transform a normal vector. However, this fails. But why? This article is all about this ``why?''
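As a minimal sketch of these two ideas (transforming a vertex by a matrix, and computing a triangle's normal), here is a small numpy example; the vertex values and the rotation angle are made up for illustration:

```python
import numpy as np

# A triangle given by three 3D vertices (illustrative values).
v0 = np.array([0.0, 0.0, 0.0])
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])

# The triangle's normal: cross product of two edge vectors, normalized.
n = np.cross(v1 - v0, v2 - v0)
n /= np.linalg.norm(n)
print(n)  # [0. 0. 1.] -- the face points along +z

# Rotating the object = applying a matrix to each vertex.
# Here: a rotation about the z axis by 30 degrees.
theta = np.radians(30.0)
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c,  -s,  0.0],
              [s,   c,  0.0],
              [0.0, 0.0, 1.0]])

v0r, v1r, v2r = R @ v0, R @ v1, R @ v2
print(v1r)  # the rotated vertex
```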
Why does the usual transformation matrix fail on a normal vector?
According to [3], Eric Haines gives a good explanation of this. The book [6] has the same explanation, which I also find excellent, and a similar one can be found in [2]. Figure 1 illustrates it.

Figure 1. Scaling applied to a normal breaks the normal.
Now the wall is twice its size in the x direction, but if we apply the same scaling matrix to the normal, the result is no longer a normal vector of this wall. In the left figure of Figure 1, the normal vector is perpendicular to the wall; in the right figure, the transformed normal vector is no longer perpendicular to the wall. This is the problem.
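We can check Figure 1 numerically. A sketch in numpy, where the wall is modeled as an edge vector in the xy plane (any non-axis-aligned edge shows the effect) and the scaling matrix doubles the x coordinate:

```python
import numpy as np

# The "wall" of Figure 1: an edge running diagonally in the xy plane.
edge = np.array([1.0, 1.0, 0.0])

# Its normal is perpendicular to the edge.
n = np.array([-1.0, 1.0, 0.0])
n /= np.linalg.norm(n)
print(np.dot(edge, n))  # 0.0 -- perpendicular before scaling

# Non-uniform scale: double everything in the x direction.
M = np.diag([2.0, 1.0, 1.0])

# Transform the edge and, naively, the normal with the same matrix.
edge_t = M @ edge
n_t = M @ n
print(np.dot(edge_t, n_t))  # about -2.12, not 0 -- no longer perpendicular
```

The scaled edge is (2, 1, 0) and the scaled "normal" is proportional to (-2, 1, 0); their dot product is nonzero, so the transformed vector is not perpendicular to the transformed wall, exactly as the figure shows.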
Why can't we transform the normal vector the same way as usual vectors?