Notes on the Phong algorithm

As we said in the last two lectures, the Phong algorithm was developed by Bui Tuong Phong to approximate surface shading. The algorithm takes, as input, the following quantities:

  - the ambient color Argb;
  - for each light source i, its intensity Ii and unit direction vector Li;
  - the surface diffuse color Drgb, specular color Srgb, and specular power p;
  - the unit surface normal n.

Given the above, the Phong shading algorithm is:

Argb + Σi Ii ( Drgb max(0, n • Li) + Srgb max(0, Ri • E)^p )
where Ri is the mirror reflection of the light direction vector Li about the surface normal, and E is the unit direction vector to the camera.
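The formula above can be sketched in code roughly as follows. This is only an illustration, not the course's own implementation; all names here (`PhongSketch`, `phongShade`, and so on) are made up for the example.

```java
// Sketch of the Phong formula above (names are illustrative).
// Colors and vectors are double[3]; each light i has a unit direction L[i]
// and an rgb intensity I[i].
public class PhongSketch {

    static double dot(double[] a, double[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    // Ri = 2 (n • Li) n - Li, the mirror reflection of Li about n.
    static double[] reflect(double[] n, double[] L) {
        double d = 2 * dot(n, L);
        return new double[]{ d*n[0] - L[0], d*n[1] - L[1], d*n[2] - L[2] };
    }

    // A = ambient, D = diffuse, S = specular color; p = specular power;
    // n = unit surface normal; E = unit direction to the camera,
    // here assumed to be (0,0,1).
    static double[] phongShade(double[] A, double[] D, double[] S, double p,
                               double[] n, double[][] L, double[][] I) {
        double[] E = {0, 0, 1};
        double[] c = {A[0], A[1], A[2]};          // start with the ambient term
        for (int i = 0; i < L.length; i++) {
            double[] R = reflect(n, L[i]);
            double diff = Math.max(0, dot(n, L[i]));
            double spec = Math.pow(Math.max(0, dot(R, E)), p);
            for (int k = 0; k < 3; k++)           // sum over lights, per channel
                c[k] += I[i][k] * (D[k] * diff + S[k] * spec);
        }
        return c;
    }
}
```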

As we showed in class, Ri can be computed from Li and n by:

Ri = 2 (n • Li) n - Li

Since we are transforming the scene into a coordinate system in which the camera looks into negative z, we can assume that E = (0,0,1).
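As a small sanity check on the reflection formula, here is a sketch (the class and method names are just for the example): a light shining straight down the normal reflects straight back, while a light grazing along the surface reflects to the opposite side.

```java
// Sketch of Ri = 2 (n • Li) n - Li (illustrative names).
public class Reflect {
    static double dot(double[] a, double[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }
    static double[] reflect(double[] n, double[] L) {
        double d = 2 * dot(n, L);   // twice the projection of L onto n
        return new double[]{ d*n[0] - L[0], d*n[1] - L[1], d*n[2] - L[2] };
    }
}
```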

Transforming surface normals

In class we also went over how to transform surface normals. A surface normal is a special kind of linear equation; it describes how far any point p is from a surface, measured in the direction normal (ie: perpendicular) to that surface.

In particular, given a surface containing point s, the distance of any point p from that surface is n • (p - s), which is the same as (n • p) - (n • s), since all the operations are linear.
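For example, the distance computation above can be written directly from the formula (names here are illustrative):

```java
// Distance of point p from the plane through s with unit normal n:
// n • (p - s) = (n • p) - (n • s)
public class PlaneDistance {
    static double dot(double[] a, double[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }
    static double distance(double[] n, double[] s, double[] p) {
        return dot(n, p) - dot(n, s);
    }
}
```

For instance, with the plane z = 2 (normal (0,0,1) through point (0,0,2)), the point (1,1,5) is at distance 3.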

So when we transform our geometry, clearly we want to preserve the value of equations of the form n • p.

Our transformation matrix M will transform point p to (Mp). Therefore the transformation of n must be (n M⁻¹). We can prove this by using the associative rule:

(n M⁻¹) • (Mp) =
n (M⁻¹ M) p =
n • p

You can use the same matrix/vector multiply routine you are already using to transform vertices, if you just use the transpose of the inverse matrix, since:

n M⁻¹ = (M⁻¹)ᵀ n
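In code, this might look like the sketch below. It assumes you already have the inverse matrix in hand (e.g. from the provided inverter); the names `multiply`, `transpose`, and `transformNormal` are made up for the example. Since a normal is a direction rather than a position, its homogeneous coordinate is 0, so translations do not affect it.

```java
// Transforming a normal with the same 4x4 matrix/vector multiply used for
// vertices, applied to the transpose of the inverse matrix (sketch).
public class NormalTransform {
    static double[] multiply(double[][] m, double[] v) {  // 4x4 times 4-vector
        double[] r = new double[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                r[i] += m[i][j] * v[j];
        return r;
    }
    static double[][] transpose(double[][] m) {
        double[][] t = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                t[j][i] = m[i][j];
        return t;
    }
    // n is a direction, so its homogeneous coordinate is 0.
    static double[] transformNormal(double[][] minv, double[] n) {
        return multiply(transpose(minv), new double[]{ n[0], n[1], n[2], 0 });
    }
}
```

A quick check: if M scales x by 2, its inverse scales x by 1/2, so the normal (1,0,0) becomes (0.5,0,0) and the value n • p is preserved.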

For your convenience, I have placed code to compute the inverse of a 4×4 transformation matrix in class MatrixInverter.

Assignment due Wednesday March 26

For Wednesday March 26 I would like you to put all the pieces together, implementing the complete rendering pipeline for your animated shapes. For each animation frame:

  1. Clear the image and initialize all z-buffer values to a very large value (so that, at every pixel, the first triangle drawn will always be nearer than the initial depth).

  2. Transform all mesh vertices and normals;

  3. At each transformed vertex, do Phong shading, thereby replacing the surface normal n with a color (r,g,b);

  4. Apply perspective to each vertex;

  5. For each triangle (if you have four-sided polygons, split each of them into two triangles):

    1. Apply the viewport transform to the triangle's three vertices;

    2. Scan-convert the triangle, thereby interpolating (r,g,b,pz) from the three triangle vertices to each image pixel covered by the triangle.

      Remember to apply the z-buffer algorithm, so that each pixel retains the (r,g,b) of whatever triangle is nearest to the camera (ie: has the lowest pz value at that pixel).
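The z-buffer test at a single pixel can be sketched as follows (field and method names here are illustrative, not from the course code): each incoming fragment's pz is compared against the depth stored at that pixel, and the color is kept only if the fragment is nearer.

```java
// z-buffer test at a single pixel: keep the (r,g,b) of the nearest
// (lowest-pz) fragment seen so far (sketch; names are illustrative).
public class ZBufferSketch {
    double[][] zbuf;       // per-pixel depth, initialized to a very large value
    double[][][] image;    // per-pixel (r,g,b)

    ZBufferSketch(int w, int h) {
        zbuf = new double[h][w];
        image = new double[h][w][3];
        for (double[] row : zbuf)
            java.util.Arrays.fill(row, Double.MAX_VALUE);
    }

    // Called for every pixel covered during scan conversion.
    void setPixel(int x, int y, double r, double g, double b, double pz) {
        if (pz < zbuf[y][x]) {     // nearer than anything drawn here so far
            zbuf[y][x] = pz;
            image[y][x] = new double[]{ r, g, b };
        }
    }
}
```

Note that the order in which triangles arrive doesn't matter: a nearer fragment always overwrites, and a farther one is always rejected.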