Course notes for November 20


Introduction to particle systems:

This week we just scratched the surface of particle systems. Next week we will go into more detail about this rich topic. Meanwhile, here's a high level introduction to the subject.

Examples of uses of particle systems:

Particle systems are very flexible; they can be used to simulate many natural phenomena, including water, leaves, clouds/fog, snow, dust, and stars.

When they are "smeared out" so that they are rendered as trails, rather than as discrete particles, they can be used to render hair, fur, grass, and similar natural objects.

Basic mechanism:

Generally speaking, particles in a particle system are emitted from the surface of an "emitter" object. When a particle begins its life, it is given an initial trajectory, usually normal to the surface of the emitter object.

After that, the path of the particle can be influenced by various things, including gravity and other forces, and collisions with object surfaces.

Particles usually have a lifetime, after which they are removed from the system.

Also, a particle can itself be an emitter of other particles, spawning one or more other particles in the course of its lifetime. In this way, particles can be made to cascade, generating complex patterns such as flamelike shapes.

All of the qualities of a particle -- its lifetime, its velocity and mass, how many particles it spawns -- can be randomly chosen values within some range. By controlling the ranges from which these various properties are chosen, artists can control the look and feel of a particle system.
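
To make this concrete, here is a minimal CPU-side sketch in TypeScript of the particle life cycle described above. All of the names and numeric ranges are made up for illustration; a real system would expose many more artist-tunable parameters:

      type Vec3 = [number, number, number];

      interface Particle {
        position: Vec3;
        velocity: Vec3;
        age: number;      // seconds since emission
        lifetime: number; // seconds until removal, chosen randomly per particle
      }

      // A property like lifetime or speed is chosen randomly within an
      // artist-controlled range.
      function randomInRange(min: number, max: number): number {
        return min + Math.random() * (max - min);
      }

      const GRAVITY: Vec3 = [0, -9.8, 0];

      // Emit a particle from a point on the emitter's surface, with an
      // initial velocity along the surface normal at that point.
      function emit(surfacePoint: Vec3, surfaceNormal: Vec3): Particle {
        const speed = randomInRange(1, 3);
        return {
          position: [...surfacePoint] as Vec3,
          velocity: [
            surfaceNormal[0] * speed,
            surfaceNormal[1] * speed,
            surfaceNormal[2] * speed,
          ],
          age: 0,
          lifetime: randomInRange(1, 4),
        };
      }

      // Advance every particle one time step, then drop the ones whose
      // lifetime has expired.
      function update(particles: Particle[], dt: number): Particle[] {
        for (const p of particles) {
          for (let k = 0; k < 3; k++) {
            p.velocity[k] += GRAVITY[k] * dt;   // forces, here just gravity
            p.position[k] += p.velocity[k] * dt;
          }
          p.age += dt;
        }
        return particles.filter(p => p.age < p.lifetime);
      }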

History:

Particle systems were first developed by Bill Reeves at Lucasfilm (in the group that later became Pixar) in the early 1980s. Their first public use was for the Genesis Effect in Star Trek II: The Wrath of Khan (1982). Since then, they have become a mainstay of computer graphics films and games.

Rendering:

One nice thing about particle systems is that they are not that difficult to implement in vertex shaders. In addition to their behavior, their appearance can also be hardware accelerated. One common technique is to render each particle as a "billboard": a polygon that always faces the camera. This polygon is textured with a translucent image of a fuzzy spot. The effect is to make the particle look like a small gaseous sphere, at fairly low computational cost.
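
As a sketch of the geometry involved, the quad for one billboard can be built from the camera's right and up vectors. This is TypeScript; cameraRight and cameraUp are placeholders for whatever the renderer provides (typically the first two rows of the view matrix's rotation part):

      type Vec3 = [number, number, number];

      function billboardCorners(
        center: Vec3,
        halfSize: number,
        cameraRight: Vec3,
        cameraUp: Vec3
      ): Vec3[] {
        const corner = (sx: number, sy: number): Vec3 => [
          center[0] + halfSize * (sx * cameraRight[0] + sy * cameraUp[0]),
          center[1] + halfSize * (sx * cameraRight[1] + sy * cameraUp[1]),
          center[2] + halfSize * (sx * cameraRight[2] + sy * cameraUp[2]),
        ];
        // Four corners, counterclockwise; texture the quad with the fuzzy
        // spot image and render it with translucent blending.
        return [corner(-1, -1), corner(1, -1), corner(1, 1), corner(-1, 1)];
      }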


Linear blend skinning:

In class we discussed a cheap approximation for animating the soft skin of game characters, one that can be implemented very easily in vertex shaders.

In an animated character, the rigid bones of the character's articulating skeleton are generally covered in some sort of soft skin. A fairly accurate way to model this skin would be to think of each point on its surface (approximated by the vertices of a polygon mesh) as being influenced by the various rigid matrix transformations of nearby bones in the skeleton.

To do this properly, one would compute a composite transformation matrix that was influenced by all of those individual bone matrices. However, in practice this is a more expensive operation than can be accommodated in the real-time rendering budget of game engines.

So most games instead use a kind of cheat called linear blend skinning. The basic idea is to transform each vertex as though it were rigidly attached to each of the various nearby bones, which produces a different candidate position for each bone. These positions are then blended together in a weighted average to find the final position of the vertex.

To make this work, each vertex maintains a list of [bone,weight] pairs, where all of the respective weights sum to 1.0.
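
Here is a minimal sketch in TypeScript of that per-vertex computation. In a real game this runs in the vertex shader, with the bone matrices passed in as uniforms; all names here are hypothetical:

      type Vec3 = [number, number, number];
      type Mat4 = number[]; // 16 entries, column-major, as in WebGL

      // Apply a 4x4 matrix to a point (w = 1).
      function transformPoint(m: Mat4, p: Vec3): Vec3 {
        return [
          m[0] * p[0] + m[4] * p[1] + m[8]  * p[2] + m[12],
          m[1] * p[0] + m[5] * p[1] + m[9]  * p[2] + m[13],
          m[2] * p[0] + m[6] * p[1] + m[10] * p[2] + m[14],
        ];
      }

      // One [bone, weight] pair; a vertex's weights must sum to 1.0.
      interface BoneInfluence {
        bone: Mat4;
        weight: number;
      }

      // Transform the rest-pose vertex by each nearby bone's matrix, giving
      // a different position per bone, then blend those positions together.
      function skinVertex(rest: Vec3, influences: BoneInfluence[]): Vec3 {
        const out: Vec3 = [0, 0, 0];
        for (const { bone, weight } of influences) {
          const p = transformPoint(bone, rest);
          out[0] += weight * p[0];
          out[1] += weight * p[1];
          out[2] += weight * p[2];
        }
        return out;
      }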

This technique is very fast, and very easy to implement efficiently in hardware accelerated vertex shaders, but it has some practical deficiencies. For example, twisting between the two ends of a limb can cause the middle of the limb to appear to collapse. To handle cases like this, linear blend skinned skeletons are rigged with extra bones to mitigate the effects of such problems.


Marching cubes:

Marching Squares (2D case):


Given a function f(x,y) sampled at the pixels of an image, marching squares is a way to approximate the curve along which f(x,y) = 0.

For example, consider a function f(x,y) evaluated over the unit square, rendered at a very low resolution (a 10×10 grid of samples). Suppose we want to know the shape of the curve where this function has its roots (that is, where f(x,y) = 0).

Ideally we'd like to know this without having to evaluate the function at more samples.


Marching squares provides a way to get a sense of what a level-set curve of a function looks like, without taking more samples.

The key insight is that the curve can be approximated just by looking at those pixels bounded by corner points (i,j),(i+1,j),(i+1,j+1),(i,j+1) for which the signs of f at the four corners are not all the same. If the signs of f are different at two adjoining corner points of a pixel's square, that means the curve will cut the edge which connects those two corners.

One thing we need to figure out is where this transition happens along each such edge.

Given a value of A at corner a, and a value of B at adjoining corner b, we can compute the proportional distance t of the transition point along the edge [a,b] by observing, by similar triangles:

     t/A = (1-t)/-B

     -Bt = (1-t)A

     -Bt = A - tA

     (A-B)t = A

     t = A / (A-B)
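
In code, computing the transition point is a one-liner. For example (TypeScript):

      // Proportional distance along edge [a,b] at which f crosses zero,
      // given f's values A at a and B at b (assumed to have opposite signs).
      function transition(A: number, B: number): number {
        return A / (A - B);
      }

      // For example, A = 0.3 and B = -0.1 give t = 0.3 / 0.4 = 0.75, so the
      // crossing is three quarters of the way from a to b.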

Each corner can have two states: f<0 or f≥0, so in general there are sixteen cases. Consider, for example, the case where f at the top left corner (i,j) of a pixel is positive, but is negative at the other three corners of the pixel.

In this case, there is a transition point p along the top edge -- between (i,j) and (i+1,j), and another transition point q along the left edge -- between (i,j) and (i,j+1). Within this pixel, we can approximate the f(x,y)==0 curve by the line segment [p,q].

So for any pixel we need to do three things:

  1. Figure out which edges, if any, of the pixel contain transition points;
  2. Compute the locations of these points;
  3. Draw line segments between transition points, to approximate pieces of the curve.
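
Here is a sketch in TypeScript of those three steps for a single pixel. Helper names are hypothetical, and the rare four-crossing "saddle" case is paired up naively:

      type Point = [number, number];
      type Segment = [Point, Point];

      const transition = (A: number, B: number): number => A / (A - B);

      // Process the pixel whose corners are (i,j),(i+1,j),(i+1,j+1),(i,j+1),
      // returning 0, 1 or 2 line segments approximating the f = 0 curve.
      function marchSquare(
        f: (x: number, y: number) => number,
        i: number,
        j: number
      ): Segment[] {
        // The four corners, in order around the square.
        const corners: Point[] = [[i, j], [i + 1, j], [i + 1, j + 1], [i, j + 1]];
        const values = corners.map(([x, y]) => f(x, y));

        // Steps 1 and 2: find the edges whose endpoint values differ in
        // sign, and compute the transition point on each such edge.
        const crossings: Point[] = [];
        for (let e = 0; e < 4; e++) {
          const a = corners[e], b = corners[(e + 1) % 4];
          const A = values[e], B = values[(e + 1) % 4];
          if ((A >= 0) !== (B >= 0)) {
            const t = transition(A, B);
            crossings.push([a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])]);
          }
        }

        // Step 3: connect transition points with line segments. There are
        // 0, 2 or 4 crossings; 4 is the ambiguous "saddle" case, which this
        // sketch pairs up naively.
        const segments: Segment[] = [];
        for (let k = 0; k + 1 < crossings.length; k += 2) {
          segments.push([crossings[k], crossings[k + 1]]);
        }
        return segments;
      }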

Marching Cubes (3D case):

Marching cubes is the 3D equivalent of marching squares. Rather than approximate a closed curve where f(x,y)=0 via small straight edges inside square pixels, as in marching squares, the marching cubes algorithm approximates a closed surface where f(x,y,z)=0 via small triangles inside cubic voxels. The technical paper describing this algorithm, published by Lorensen and Cline in 1987, has been cited more often than any other paper in the field of computer graphics.

Each voxel cube has eight corners, which can be numbered as follows:

0 x=0 y=0 z=0
1 x=1 y=0 z=0
2 x=0 y=1 z=0
3 x=1 y=1 z=0
4 x=0 y=0 z=1
5 x=1 y=0 z=1
6 x=0 y=1 z=1
7 x=1 y=1 z=1

Because the value of f(x,y,z) at each of these eight corners can be either positive or negative, there are 2^8 = 256 cases to consider.

I have included a table to make things easier for you. The table has 256 entries, one for each of the 256 cases. Each entry contains between 0 and 4 triangles -- the triangles that the marching cubes algorithm will produce for a voxel of that type.

Each triangle is described by the three edges of the cube that contain its respective vertices, and each such edge is identified by the two cube corners that it connects.

For example, a particular vertex of a triangle in the table may be described by the number sequence 0,1, indicating that this vertex lies on edge [0,1] of the cube. This is the edge that connects the x=0 y=0 z=0 corner of the cube and the x=1 y=0 z=0 corner of the cube.
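
With that numbering, decoding a table entry into an actual vertex position is mostly bit arithmetic: bit 0 of a corner's index is its x coordinate, bit 1 is y, and bit 2 is z. A sketch in TypeScript (function names are hypothetical):

      type Vec3 = [number, number, number];

      // Corner index (0..7) to its unit-cube coordinates.
      function cornerPosition(corner: number): Vec3 {
        return [corner & 1, (corner >> 1) & 1, (corner >> 2) & 1];
      }

      // Position of a triangle vertex on cube edge [c0,c1], where t is the
      // transition fraction t = A / (A - B) computed from f's values at the
      // two corners, just as in the 2D case.
      function edgeVertex(c0: number, c1: number, t: number): Vec3 {
        const a = cornerPosition(c0);
        const b = cornerPosition(c1);
        return [
          a[0] + t * (b[0] - a[0]),
          a[1] + t * (b[1] - a[1]),
          a[2] + t * (b[2] - a[2]),
        ];
      }

      // Example: edgeVertex(0, 1, 0.75) is (0.75, 0, 0), three quarters of
      // the way along the edge from corner 0 to corner 1.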

Marching Tetrahedra (simpler to implement, less efficient):

To avoid the big table look-up of Marching Cubes, a technique I've used is to split up each voxel into six tetrahedra. Given the same corner numbering we used for Marching Cubes, we can partition the voxel cube by "turning on" the binary bits of the numbered corners in different orders, giving the six tetrahedra:

[0,1,3,7] , [0,1,5,7] , [0,2,3,7] , [0,2,6,7] , [0,4,5,7] , [0,4,6,7]

Since a tetrahedron has only four corners, there are only two non-trivial boundary cases: (1) one corner's sign differs from the other three, in which case the boundary is a single triangle, or (2) the corners split two against two, in which case the boundary is a four sided shape, which can be split into two triangles.
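
Here is a sketch in TypeScript of that per-tetrahedron case analysis. The names are hypothetical, and no attempt is made to keep triangle winding consistent:

      type Vec3 = [number, number, number];
      type Triangle = [Vec3, Vec3, Vec3];

      // Point on the segment [a,b] where f crosses zero, from values A and B.
      function crossing(a: Vec3, A: number, b: Vec3, B: number): Vec3 {
        const t = A / (A - B);
        return [
          a[0] + t * (b[0] - a[0]),
          a[1] + t * (b[1] - a[1]),
          a[2] + t * (b[2] - a[2]),
        ];
      }

      // pos[i] and val[i] are the positions and f values at the
      // tetrahedron's four corners. Returns 0, 1 or 2 triangles.
      function marchTetrahedron(pos: Vec3[], val: number[]): Triangle[] {
        const inside = [0, 1, 2, 3].filter(i => val[i] < 0);
        const outside = [0, 1, 2, 3].filter(i => val[i] >= 0);

        // Trivial case: all four corners on the same side, no boundary.
        if (inside.length === 0 || outside.length === 0) return [];

        if (inside.length === 1 || inside.length === 3) {
          // Case 1: one corner differs from the other three, giving a
          // single triangle.
          const one = inside.length === 1;
          const apex = one ? inside[0] : outside[0];
          const base = one ? outside : inside;
          const [p, q, r] = base.map(i =>
            crossing(pos[apex], val[apex], pos[i], val[i]));
          return [[p, q, r]];
        }

        // Case 2: two corners on each side, giving a four sided shape that
        // we split into two triangles.
        const [a, b] = inside;
        const [c, d] = outside;
        const p = crossing(pos[a], val[a], pos[c], val[c]);
        const q = crossing(pos[a], val[a], pos[d], val[d]);
        const r = crossing(pos[b], val[b], pos[d], val[d]);
        const s = crossing(pos[b], val[b], pos[c], val[c]);
        return [[p, q, r], [p, r, s]];
      }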

This algorithm is less efficient than Marching Cubes, because it generally produces more triangles for each boundary cube. However, it requires much less code, and therefore is easier to program, to debug, and to port to a vertex shader.


Fun with vertex shaders:

A vertex shader allows you to algorithmically displace each vertex of a triangle or triangle mesh any way you want. Since vertex shaders run in the GPU, they can be very fast (much faster than computations done on the CPU), and therefore it can be very advantageous to move modeling operations off the CPU and down to vertex shaders where possible. In commercial computer games, linear blend skinning and other procedural mesh animations (such as the one I showed for the fish) are often done in vertex shaders.

Start with a simple vertex shader:

In class we illustrated this with a vertex shader. A piece of that code is shown here. The code started out looking like this:

      vec3 vp = aVertexPosition;
      vec3 vn = aVertexNormal;
      gl_Position = uPMatrix * uMatrix * vec4(vp, 1.);
      vNormal = normalize((uNMatrix * vec4(vn, 0.)).xyz);

Then we displaced the surface, creating a ripple pattern by adding cosine wave functions:

      float amp = .05;
      float freq = 10.;
      vec3 vp = aVertexPosition;
      vec3 vn = aVertexNormal;
      float f = amp * cos(freq * vp.x);
      vp += vec3(f,0.,0.);
      gl_Position = uPMatrix * uMatrix * vec4(vp, 1.);
      vNormal = normalize((uNMatrix * vec4(vn, 0.)).xyz);

This still won't be shaded properly, because we have not modified the surface normal. In class we did this by explicitly computing the analytic derivative of our displacement function, and adding the resulting derivative to the surface normal:

      float amp = .05;
      float freq = 10.;
      vec3 vp = aVertexPosition;
      vec3 vn = aVertexNormal;
      float f = amp * cos(freq * vp.x);
      float df = amp * freq * -sin(freq * vp.x);
      vp += vec3(f,0.,0.);
      vn += vec3(df,0.,0.);
      gl_Position = uPMatrix * uMatrix * vec4(vp, 1.);
      vNormal = normalize((uNMatrix * vec4(vn, 0.)).xyz);

Computing the change in normal by finite differences:

In the above case, we were able to compute the derivative directly, because our displacement function was so simple. In general, it is often too difficult to explicitly compute the derivative. For this reason, people often use finite differences to compute an approximation to the function derivative.

This can be done by evaluating the displacement function four times: first f0 at vp, then f1, f2 and f3 at vp + (ε,0,0), vp + (0,ε,0) and vp + (0,0,ε), respectively, for some small distance ε.

The displacement to add to normal vector vn can then be approximated by the vector [ f1-f0 ,  f2-f0 ,  f3-f0 ] / ε.
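
Here is a CPU-side sketch in TypeScript of that computation (in a shader you would inline the same arithmetic in GLSL); displace and EPS stand in for your displacement function and your choice of ε:

      type Vec3 = [number, number, number];

      const EPS = 0.001;

      function perturbNormal(
        displace: (p: Vec3) => number,
        vp: Vec3,
        vn: Vec3
      ): Vec3 {
        // Evaluate the displacement four times: at vp, and at vp nudged by
        // EPS along x, y and z.
        const f0 = displace(vp);
        const f1 = displace([vp[0] + EPS, vp[1], vp[2]]);
        const f2 = displace([vp[0], vp[1] + EPS, vp[2]]);
        const f3 = displace([vp[0], vp[1], vp[2] + EPS]);

        // Add the approximate derivative vector to the normal, mirroring
        // what the analytic version above did with df, then renormalize.
        const n: Vec3 = [
          vn[0] + (f1 - f0) / EPS,
          vn[1] + (f2 - f0) / EPS,
          vn[2] + (f3 - f0) / EPS,
        ];
        const len = Math.hypot(n[0], n[1], n[2]);
        return [n[0] / len, n[1] / len, n[2] / len];
      }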


Homework, due November 27

As with last week's homework, feel free to pick and choose from among the above directions. The homework is due by class on Wednesday November 27.