Facial Animation Rig for Delgo
gfunk, updated 2006-05-30 22:05:03 UTC, 430,526 views, 25 ratings

I am Warren Grubb, Animation Director for Fathom Studios on the film Delgo. When we began R&D for the feature film Delgo, I was Technical Director, and I knew we would need a facial rig for our characters that was more powerful and flexible than standard multi-target blendshape rigs. The solution we came up with, to describe it very simply, uses NURBS curves as influence objects on a poly mesh that is bound to control joints using smooth skinning. We retained the ability to create blendshapes if necessary, but we gained, among other things, the ability for animators to create very subtle or extreme changes in expressiveness without having to send the character back to a modeler for new target shapes. In most cases, this process is simple and fast enough that you can rig a head in a single day.

Before we get to the rig itself, allow me to relate the idea that first led me down this road. I was creating blendshapes for another project and looking for a quicker way to sculpt the target heads from the base model. I was using wire deformers, lattices, and various other tools to push and pull the topology around to create the expressions and phonemes I wanted, when it dawned on me that if I had some sort of unified toolset for creating the targets, I could just drive that toolset "on the fly" to achieve those targets instead of deleting all the modeling history and creating blendshapes. About this time, I saw some facial animation work Caleb Owens was doing for motion capture; he utilized something similar to what I was doing, so I knew it was possible. I just had to create a manageable workflow for rigging and for animators, and make sure it was foolproof for production, or as foolproof as possible.

The Concept

The head of the character is bound to a few joints (that I will explain shortly) using a smoothSkin cluster, but the majority of the face is deformed with NURBS curves that are influence objects to the smoothSkin cluster.

If you haven't used NURBS curves as influence objects in smooth skinning, it can be a little confusing to conceptualize, but read through the tutorial and it should make sense. In this movie (influencecurve.avi), I have created a sphere, bound it to a single joint, and then added a NURBS curve as an influence. In the movie, you can see the influence the curve has on the surface: by deforming the curve, you deform the surface. It's sort of like a wire deformer, but since it is added to the skinCluster, it is one less layer of deformers to deal with (also, when we started on Delgo, wire deformers were a lot more limited).
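If it helps to see that demo as commands, here is a minimal MEL sketch of the same setup (all object names here are illustrative, not from the production rig):

```mel
// Build the demo scene: a sphere bound to a single joint.
polySphere -name "demoSphere" -radius 2;
select -clear;
joint -position 0 0 0 -name "demoJoint";
select -replace "demoJoint" "demoSphere";
skinCluster -toSelectedBones -name "demoSkinCluster";

// A NURBS curve that will act as the influence "muscle",
// placed just above the surface of the sphere.
curve -degree 3
    -point -2 1 2 -point -0.7 1.5 2 -point 0.7 1.5 2 -point 2 1 2
    -name "influenceCurve";

// Add the curve to the skinCluster as a geometry-based influence.
// It is added with zero weight; weights are then painted onto the
// verts you want the curve to drive. -useGeometry makes the skin
// follow the curve's shape, and useComponents lets CV-level edits
// of the curve deform the bound mesh.
skinCluster -edit -addInfluence "influenceCurve" -useGeometry
    -dropoffRate 4 -weight 0 "demoSkinCluster";
setAttr "demoSkinCluster.useComponents" 1;
```

After painting some weight from the joint over to influenceCurve, pulling the curve's CVs deforms the sphere, just as in the movie.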

The influence NURBS curves act as our muscles, so the placement of the curves should look pretty logical and intuitive.

We typically use only a few joints in the head: one that doesn't move at all and acts as a holder for any skin that is tight against the skull (such as the back of the head). A few more joints are used for the jaw rig (only one of those is a member of the skinCluster), and we used joints for the eyelids (you could use NURBS influence curves for those too, probably with better results, but we decided we didn't need the extra detail). The jaw rig has the joints you would expect for the base of the jaw and the chin, but there is one extra bone on each side of the mouth that is used to squash and stretch the corners of the mouth as the jaw opens and closes; this will make more sense when you get into later steps of the tutorial. Finally, some joints will affect the neck and shoulders, depending on how far down the head model extends. If this all sounds terribly confusing, just read on for now; the specific joints are explained in more detail later. The point I want to convey here is that there aren't many joints in the rig that are really influencing the head.
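As a rough illustration of how small that skeleton is (exact placement comes later in the tutorial, and all names here are placeholders), the joint layout amounts to something like this:

```mel
// Static anchor joint for skin that hugs the skull (back of the head).
select -clear;
joint -position 0 0 -1 -name "headAnchor";

// Jaw rig: a base and chin chain, plus one squash/stretch bone
// parented at the base for each corner of the mouth.
select -clear;
joint -position 0 -0.5 0 -name "jawBase";
joint -position 0 -1.5 1.5 -name "chin";
select -replace "jawBase";
joint -position 1 -1 1 -name "mouthCorner_R";
select -replace "jawBase";
joint -position -1 -1 1 -name "mouthCorner_L";
```

Eyelid and neck/shoulder joints would be added on top of this as the model requires; everything else on the face is handled by the faceCurves.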

The NURBS influence curves (let's call them faceCurves) are driven by two layers of controls: one layer is connected to attributes on centralized controls (with names like "smile", "frown", "blink", etc.), and another layer remains available for hand keyframing so the animator can always tweak the results.
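To make the two layers concrete, here is one possible wiring in MEL, a hedged sketch that assumes a control transform named faceControl and a faceCurve named mouthCurve (both hypothetical); the production rig's actual hookup is covered in later steps:

```mel
// Layer 1: a relative cluster on the corner CVs of a faceCurve,
// driven by a centralized attribute through driven keys.
select -replace "mouthCurve.cv[0:2]";
cluster -name "smileCluster" -relative;

// Layer 2: a second relative cluster on the same CVs; its handle has
// no incoming connections, so the animator can hand-keyframe tweaks
// on top of whatever the driven layer is doing.
select -replace "mouthCurve.cv[0:2]";
cluster -name "tweakCluster" -relative;

// The centralized control attribute that drives layer 1.
addAttr -longName "smile" -attributeType "double"
    -minValue 0 -maxValue 10 -keyable true "faceControl";
setDrivenKeyframe -currentDriver "faceControl.smile"
    -driverValue 0 -value 0 "smileClusterHandle.translateY";
setDrivenKeyframe -currentDriver "faceControl.smile"
    -driverValue 10 -value 0.5 "smileClusterHandle.translateY";
```

Because both clusters are relative and stacked on the same CVs, their deformations add together: the "smile" attribute moves the curve through the first cluster while the second stays free for tweaks.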