Monday, May 10, 2010

Final Post

My project has finally come to a close, and to recap, here's a quick video of what I implemented (it pretty much combines videos from some of the previous posts):

I also went back and updated posts in the blog to better highlight my progress throughout the semester. I found myself often saving drafts while waiting to upload a video, only to forget to publish the post, so I cleaned all that up.

Anyway, it's been fun!

Sunday, May 2, 2010

Cleaning Things Up

After the presentation, I spent some time cleaning up my code to make it easier to read and edit. I restructured it so that parts of the code can hopefully be useful even outside of overlays.

I also went over my implementation of methods from previous works to verify that it worked correctly. I actually found some mistakes in my implementation and fixed them.

I'm going to find some more upper body motions and generate more overlays before the project is finally due.

Wednesday, April 28, 2010

Generating and Sorting the Overlays

I've been generating overlays using my script throughout the project. Now that everything relevant is implemented (spatial alignment, time alignment, physics checks) I can generate the final set of overlays. The script will automatically sort the overlays based on whether or not they passed the physics checks.

I have to look at each individual overlay and determine whether or not it looks natural. This is very subjective and there are a lot of borderline cases where I'm not sure what to do...

The goal is to sort all the overlays into sets of Natural & Physically Correct, Unnatural & Physically Correct, Natural & Not Physically Correct, Unnatural & Not Physically Correct. Doing so will give a broad overview of how well the physics checks are working in terms of classifying an overlay as natural or not.
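As a minimal sketch of the bookkeeping (the names and data layout here are made up for illustration, not taken from my actual script), the four-way sort is just a tally over two boolean labels per overlay:

```python
from collections import Counter

def sort_overlays(overlays):
    # overlays: list of (name, passed_physics, looks_natural) tuples, where
    # passed_physics comes from the automated checks and looks_natural from
    # manually inspecting each overlay.
    buckets = Counter()
    for name, passed_physics, looks_natural in overlays:
        naturalness = "Natural" if looks_natural else "Unnatural"
        physics = "Physically Correct" if passed_physics else "Not Physically Correct"
        buckets[naturalness + " & " + physics] += 1
    return buckets

# Illustrative (made-up) labels:
overlays = [
    ("run_drink", True, True),
    ("walk_wave", True, False),
    ("jog_point", False, True),
    ("sprint_clap", False, False),
]
counts = sort_overlays(overlays)
```

Comparing the bucket sizes then gives the broad overview of how often physical correctness agrees with perceived naturalness.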

Based on the results, I think the physics checks do a solid job of filtering out the bad, unnatural overlays. However, some overlays fail the checks but look natural, and quite a lot of overlays pass the checks but look unnatural. So even from a rudimentary analysis, we can see that physical correctness is not enough to guarantee that an overlay looks natural.

Monday, April 26, 2010

Changing Speeds Based on Center of Mass

When demoing my project to Dr. Badler, he suggested correcting the motions when the character's center of mass changes significantly after creating an overlay. For example, if the character is running fast, he is naturally leaning forward. If you transplant an upper body motion onto this and use spatial alignment, the character will be leaning forward too much for the result to look natural or even possible. We can use our physics checks to determine the character's center of mass throughout a motion. By comparing how the center of mass position changes in the upper body motion before and after transplanting it onto the lower body motion, I can identify whether the character will be leaning too far forward or too far back. If the change is large enough that the character would need to slow down to keep the center of mass close to where it was before the transplant, I interpolate the lower body motion with a slower motion. Here is an example of what an overlay looks like before and after the speed is corrected based on the change in center of mass:

This is an admittedly rough implementation, more useful as a proof of concept. I think this would be especially useful in games when the player wants to perform an action but the character needs to change speeds automatically to perform it properly.
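To give a rough idea of the decision being made, here's a hypothetical Python sketch (the function names, axis convention, and tolerance are all invented for illustration, not the actual implementation). It compares the average forward offset of the center of mass relative to the root before and after the transplant, and produces a blend weight toward a slower lower-body motion when the lean changes too much:

```python
import numpy as np

def com_forward_offset(com_positions, root_positions):
    # Average offset of the center of mass ahead of the root along the travel
    # axis (assumed here to be x) - a rough proxy for forward lean.
    return float(np.mean(com_positions[:, 0] - root_positions[:, 0]))

def speed_blend_weight(offset_before, offset_after, tolerance=0.05):
    # If the transplant shifts the lean beyond the tolerance, return a weight
    # in [0, 1] for interpolating the lower body toward a slower motion.
    excess = abs(offset_after - offset_before) - tolerance
    if excess <= 0.0:
        return 0.0                       # lean barely changed: keep the speed
    return min(1.0, excess / tolerance)  # otherwise blend toward the slower clip
```

The weight would then drive the interpolation between the original lower-body motion and a slower clip of the same locomotion.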

Updated Note: For my presentation, I switched the spatial alignment so that the lower body motion is aligned to the upper body motion. However, I think the approach I took initially (aligning the upper body motion to the lower body motion) is better. As a result, the example I used in the presentation is the opposite of what is shown here, but I believe the approach here is the correct one.

Tuesday, April 20, 2010

Game Environment

I created a game environment in OGRE. This preliminary version does not contain much, but it serves as another tool to visualize motions. I used one of our skinned character models and exported it from Maya to OGRE. I linked the game code with the code for our framework so the character can easily perform any motion. Here is what it looks like:

Obviously there is a lot of room for improvement. I may work on adding more to the game environment or adding some level of user control for the character. When I first planned the project, I thought I would use a motion graph and allow the user to fully control the character. However, I figured I would not have enough time to include everything, so I decided just loading a motion would be sufficient.

Friday, April 16, 2010

Time Alignment

Another element in creating overlays is making sure both motions are in phase at the same time. For example, if you have two walking motions, you want both motions to be in the same phase of the walk cycle at each frame before doing a transplant.

Coincidentally, a lot of overlays already look as if they are in phase when doing a regular transplant. If we transplant the upper body of a character standing to the lower body of a character slowly walking, it does not look that bad. However, problems can occur when dealing with motions where the characters are moving at different paces.

There were two methods I thought about to implement time alignment. One is dynamic time warping, which is used in the "Splicing Upper-Body Actions with Locomotion" paper by Heck, Kovar, and Gleicher. The other is uniform time scaling. I decided to go with dynamic time warping because I could also use it to check for similarity between motions (with a metric proposed in "Enriching a Motion Collection by Transplanting Limbs" by Ikemoto and Forsyth). The similarity check was something I thought about using when automatically generating overlays (I'll probably post more on that later).

The dynamic time warping approach computes point clouds for the lower bodies of both motions and creates a distance matrix, using an approach from the "Motion Graphs" paper by Kovar and Gleicher. I found that our framework had a similar metric for computing distances between frames so I decided to use that.

First, I had to make sure that the first frames of both motions were in phase. I compared the first frame of the shorter motion with frames of the longer motion to determine the most similar frame, and cropped the longer motion so both initial frames were similar enough.

I then looked at the dynamic programming method in "Flexible Automatic Motion Blending with Registration Curves," also by Kovar and Gleicher (at this point it seems like they've written just about every relevant paper!). However, the paper didn't actually describe the algorithm, so I had to look through a lot of other papers (many of which contained different elements that I didn't need) to find the correct method for our purposes. When computing the matrices, I also had to ensure that too many frames didn't map to a single frame, which guarantees a reasonable slope in the time alignment curve. I set the limit so that at most 3 frames map to one. After computing the cost matrices, I computed the optimal path starting from the first frame and assigned each new frame based on the frame-to-frame mapping.
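For reference, here is a minimal sketch of the dynamic programming at the core of this, written in Python over a plain NumPy distance matrix (dist[i, j] being the distance between frame i of one motion and frame j of the other, e.g. the point-cloud metric). Note this sketch omits the 3-frames-to-1 slope limit described above, which needs extra bookkeeping in the recurrence, so it is the unconstrained form only:

```python
import numpy as np

def dtw_path(dist):
    # dist: (n, m) matrix of frame-to-frame distances between two motions.
    # Returns the optimal monotonic frame mapping as a list of (i, j) pairs.
    n, m = dist.shape
    cost = np.full((n, m), np.inf)
    cost[0, 0] = dist[0, 0]
    # Fill the accumulated cost matrix.
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best = np.inf
            if i > 0 and j > 0:
                best = min(best, cost[i - 1, j - 1])
            if i > 0:
                best = min(best, cost[i - 1, j])
            if j > 0:
                best = min(best, cost[i, j - 1])
            cost[i, j] = dist[i, j] + best
    # Backtrack from the last cell to recover the alignment path.
    i, j = n - 1, m - 1
    path = [(i, j)]
    while (i, j) != (0, 0):
        candidates = []
        if i > 0 and j > 0:
            candidates.append((cost[i - 1, j - 1], (i - 1, j - 1)))
        if i > 0:
            candidates.append((cost[i - 1, j], (i - 1, j)))
        if j > 0:
            candidates.append((cost[i, j - 1], (i, j - 1)))
        i, j = min(candidates)[1]
        path.append((i, j))
    path.reverse()
    return path
```

The resulting path is what gets turned into the time alignment curve: each frame of one motion is retimed to its mapped frame in the other.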

Here is an example of how this time alignment can improve an overlay:

Of course, time alignment is only useful when you have two motions that are moving similarly.

I may also look into implementing uniform time scaling, if time permits. (To reiterate, the method described above was dynamic time warping.)

Friday, April 9, 2010

Spatial Alignment Results

After a lot of debugging and testing, I believe I have spatial alignment implemented correctly. The difference between using spatial alignment and not using it is noticeable. In general, the spatial alignment method preserves details of the lower body motion from before the transplant. But is that necessarily a good thing? Here are some of the results:

We can choose to align the upper body motion with the lower body motion (as done in the video above) or vice versa. From my observations, neither choice improves all overlays; which is better depends on the overall postures in both motions and the particular goal. I'm assuming that in most cases we want the upper body motion to adapt to whatever the lower body is doing, so I'm leaving it as is for now. I'll have to look into possible ways to fix what is happening above.

I wanted to implement spatial alignment to solve the twitching problem caused by the lack of correlation between the upper and lower body. Although it doesn't solve the problem in all cases, here's an example where spatial alignment can alleviate it:

Notice that the alignment eliminates the unnecessary lean to the left in the original overlay. However, noise (in the form of very rapid twitching) is still prevalent in a lot of the overlays. Perhaps we'll have to look into methods for filtering out the noise to produce better results.

Spatial alignment does not correlate the lower body with the new upper body in all cases. We have a lot of upper body motions that involve a character doing an action while standing. Even with alignment, these upper body motions look awkward when transplanted to a character that's running.

I'm currently working on time aligning the motions before transplanting limbs, although that presents some issues as we may be combining motions where the character is standing, running, or doing some other type of motion. I'm also looking into research for quantifying the naturalness of a motion, which could be useful when generating overlays. Lastly, I'm working on implementing the game environment in OGRE to visualize the overlays on a skinned character.

Monday, April 5, 2010

Implementing Spatial Alignment

First, let me recap why spatial alignment is important: when we transplant the upper body from one motion to the lower body of another, it is likely there will be noticeable problems in the form of twitching unless both motions were similar enough. This is because the upper body is usually correlated with lower body movements, and transplanting destroys this correlation.

Earlier in my project, I found that a quick way to get rid of the twitching was to not transplant the joint above the root, the lowerback joint, which is responsible for rotation of the torso. Although excluding the lowerback joint eliminates the twitching, it sacrifices some rotation. A solution to this is proposed in "Splicing Upper-Body Actions with Locomotion" by Heck, Kovar, and Gleicher [2006].

Following their method, called spatial alignment, we should compute a rotation for the lowerback joint that best aligns the shoulders and spine of the motion we use for the upper body with the motion for the lower body. The orientation is found by computing point clouds and solving for the closed form solution in "Closed-form solution of absolute orientation using unit quaternions" [Horn 1987].

Although it sounds simple enough, I've encountered a number of issues while trying to implement it. The closed form solution needs to compute eigenvalues of a system, so I had to find the appropriate matrix library to do that. I then implemented the solution and had to run many unit tests to ensure that it was implemented correctly.

The method involves computing point clouds in both motions using the locations of the character's shoulders and spine. Before computing the orientation from the closed form solution using the point clouds as input, the paper advises translating and rotating the point clouds so that the lowerback joint coincides with the origin. However, this seemed to introduce more twitching/noise (assuming I implemented it correctly). I found that making the root coincide with the origin instead produced less noise, but I'm not entirely sure that's the right approach. To troubleshoot, I wrote some code to visualize the point clouds of the two motions in different colors, which definitely helped to find orientation problems quickly.

The implementation involves conversions between Euler angles, rotation matrices, and quaternions. What initially caused me problems was the order of the Euler angles, which was YZX for the root and ZYX for the rest of the joints.
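For anyone implementing this, here's a sketch of the closed-form solution itself in Python/NumPy (the function name is mine, and I'm assuming the point clouds have already been translated so the chosen reference joint sits at the origin). It builds Horn's 4x4 matrix from the cross-covariance of the two point clouds; the eigenvector of the largest eigenvalue is the unit quaternion of the best-aligning rotation:

```python
import numpy as np

def best_rotation(P, Q):
    # P, Q: (N, 3) corresponding point clouds (e.g. shoulder/spine positions),
    # already translated to the chosen origin. Returns the 3x3 rotation matrix
    # R minimizing sum ||R p_i - q_i||^2 (Horn 1987).
    S = P.T @ Q  # cross-covariance matrix
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz],
    ])
    eigvals, eigvecs = np.linalg.eigh(N)
    # Eigenvector of the largest eigenvalue is the optimal unit quaternion.
    w, x, y, z = eigvecs[:, np.argmax(eigvals)]
    # Convert the quaternion to a rotation matrix.
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w),     2 * (x * z + y * w)],
        [2 * (x * y + z * w),     1 - 2 * (x * x + z * z), 2 * (y * z - x * w)],
        [2 * (x * z - y * w),     2 * (y * z + x * w),     1 - 2 * (x * x + y * y)],
    ])
```

The sign of the eigenvector is arbitrary, but q and -q produce the same rotation matrix, so it doesn't matter here.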

Results and observations coming soon...

Monday, March 29, 2010

Creating a Game Environment and Shoulder Alignment

While we work on trying to fine tune the physics checks, I also have to think about other ways to improve the overlay motions...

My primary goals right now are:

  1. Create the game environment in OGRE

  2. Implement a method of shoulder alignment using point clouds to improve the overlays, which is described in "Splicing Upper-Body Actions with Locomotion" by Heck, Kovar, and Gleicher (I'm sure I mentioned this in an older post as I've been trying to find time to do this for a while).

I am hoping to get a preliminary version of the game up soon. Once that is done and assuming things go fairly smoothly, I'll be able to see how the overlay motions I generated look on another character model.

Monday, March 22, 2010

Generating Overlays using Physics Checks

I previously wrote a script to automatically generate overlays given the directories of motions desired for the lower body and upper body actions. The next step was implementing constraints to filter the results.

I integrated our physics checker (thanks to Aline) with the generation script to denote whether or not a created overlay is physically correct. Of course, the checker is not perfect, and even if it were, physically correct overlays are not guaranteed to look realistic or natural. For now, we're using it as one step in the right direction for generating better overlays.

With this script, I can generate a lot of different overlays. Currently, I'm examining which overlays I think look good (does it look like something a human would realistically do?) and why some others do not look as good.

Wednesday, March 17, 2010

Transplanting Individual Limbs

I improved the overlay creation tool to allow the user to transplant individual limbs. By default, all the limbs above the root are transplanted, but the user can select and de-select individual limbs. The interface is also easy to use, allowing you to toggle transplanting an entire arm at once. To demonstrate, here is a more recent video of the overlay creation process.

The user first loads the motion he/she wants to use for the lower body. Then the user selects to create an overlay and loads the motion for the upper body. Options include interpolating between the two motions before transplanting and selecting individual limbs as mentioned before.
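The per-frame transplant itself is simple; here's a hypothetical sketch (the joint names and the dict-based frame representation are illustrative, not our framework's actual data structures):

```python
# Illustrative upper-body joint list (roughly CMU-skeleton-style names).
UPPER_BODY_JOINTS = [
    "lowerback", "upperback", "thorax",
    "lclavicle", "lhumerus", "lradius", "lwrist",
    "rclavicle", "rhumerus", "rradius", "rwrist",
]

def transplant_frame(lower_frame, upper_frame, excluded=()):
    # Start from the lower-body motion's frame, then copy every selected
    # upper-body joint's rotation from the upper-body motion's frame.
    result = dict(lower_frame)
    for joint in UPPER_BODY_JOINTS:
        if joint not in excluded:
            result[joint] = upper_frame[joint]
    return result
```

De-selecting a limb in the GUI would just add its joints to the excluded set.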

Here is an example of the difference between an overlay where all the upper body limbs are transplanted and one where one of the arms is not transplanted:

In the example above, I am obviously using a running motion for the lower body and a drinking motion for the upper body. In the first overlay, all upper body limbs are transplanted from the drinking motion to the running motion. As a result, the right arm remains stiff because it wasn't moving in the original drinking motion. However, if we choose not to transplant this arm, we get the result shown in the second overlay, which I think looks a bit better.

Transplanting individual limbs can really help make better looking overlays. Of course, it also multiplies the number of possibilities we have to consider when thinking of ways to automatically generate overlays.

Thursday, March 4, 2010

A Script to Generate Overlays

I previously implemented the ability to generate multiple overlays through the GUI. I decided to also create a standalone script that generates the overlays and parses through multiple directories of lower and upper body motions. This should be quicker than relying on the motion viewer program.

Meanwhile, I've been studying previous works in closer detail. I will probably post a detailed review of related papers as the project continues.

Sunday, February 28, 2010

What's on the Agenda

I thought I'd do a quick post about what I'm working on now and planning for the next few weeks, as well as the remainder of the project.

  • Thorough review of related work - I reviewed previous work when I first proposed the project and have found other relevant work as I've gone on. I'm going to go through all the work again to determine the best direction as I get into the major part of the project.

  • Implement constraints - So far, I'm only generating overlays without constraints. The next step will be to implement constraints when automatically generating overlays, primarily physics checks for now.

  • Analyzing results - After the constraints are implemented, I will thoroughly analyze the resulting overlays to see what works and what can be improved.

  • Game environment - I'll create a game environment in OGRE to visualize the overlays through a game character.

I'm going to deviate from my original timeline a little bit. I don't think I'll spend nearly as much time on the game portion as I originally thought. Instead, the game environment will primarily be another means of viewing the overlays, and hopefully will make the results look more interesting. I'll be focusing much more of my time on analyzing the overlays and trying different methods to improve the results when automatically generating them.

Thursday, February 25, 2010

Automatically Generating Overlays

The key part of this project is generating a large number of overlays and analyzing them. I implemented a feature in the user interface where you can generate overlays by specifying one folder of lower body motions and one folder of upper body motions.

I thought this would be a useful way to generate overlays, especially once I implement constraints. This can easily be extended into a console application to generate a massive number of different combinations.

We can quickly inspect the overlays once they are generated. I structured the output so that every upper body motion has a different folder, and the files within it are named lowerbodymotion_upperbodymotion.amc with their respective names.
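The directory structure logic looks roughly like this (a Python sketch; the make_overlay callback is a made-up stand-in for the actual overlay-building code):

```python
import os

def generate_overlays(lower_dir, upper_dir, out_dir, make_overlay):
    # make_overlay(lower_path, upper_path, out_path) is assumed to build and
    # save a single overlay - hypothetical, standing in for the real code.
    lowers = [f for f in os.listdir(lower_dir) if f.endswith(".amc")]
    uppers = [f for f in os.listdir(upper_dir) if f.endswith(".amc")]
    for upper in uppers:
        # One output folder per upper body motion...
        folder = os.path.join(out_dir, os.path.splitext(upper)[0])
        os.makedirs(folder, exist_ok=True)
        for lower in lowers:
            # ...containing lowerbodymotion_upperbodymotion.amc files.
            name = (os.path.splitext(lower)[0] + "_"
                    + os.path.splitext(upper)[0] + ".amc")
            make_overlay(os.path.join(lower_dir, lower),
                         os.path.join(upper_dir, upper),
                         os.path.join(folder, name))
```

Every pair of directories then yields the full cross product of overlays, named and grouped for quick inspection.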

Wednesday, February 24, 2010

Fixing Twitches

We noticed that there was some twitching in some of the overlays. After observing and playing around with it a bit, I found that the problem lay in transplanting everything above the root. Update: Ignoring the lowerback joint and transplanting everything above it looks better, but loses some of the rotation of the motion.

I'm also keeping in mind other methods of alignment, namely spatial alignment mentioned in the paper "Splicing Upper-Body Actions with Locomotion" by Heck, Kovar, and Gleicher. This involves finding a local rotation of the pelvis that best aligns the shoulders and spine of the upper body motion with that of the lower body motion. The best rotation is computed using point clouds.

Update: Something like the method mentioned above seems like a good way to start.

Wednesday, February 17, 2010

Interpolated Overlays

Last week, I implemented the ability to create interpolated overlays as well as specify the interpolation percentage. Here is a demo of the results:

As you can see, the lower body motion is a character jogging, and the upper body motion is a character standing still and drinking from a cup. In the basic transplant, one of the arms barely moves while the character is jogging. This is very noticeably unnatural. In the interpolated transplant, the arms move somewhat believably but the drinking motion is no longer apparent, and the overall result looks strange.
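One standard way to implement the per-joint interpolation is quaternion slerp; here's a generic sketch (this is the textbook formula, not code from our framework, so take it as an assumption about how the blending could be done):

```python
import numpy as np

def slerp(q0, q1, t):
    # Spherical linear interpolation between two unit quaternions (w, x, y, z).
    # t is the interpolation percentage in [0, 1].
    q0 = np.asarray(q0, dtype=float)
    q1 = np.asarray(q1, dtype=float)
    dot = np.dot(q0, q1)
    if dot < 0.0:            # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)   # angle between the quaternions
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)
```

Applying this per joint with the chosen percentage blends the two poses before the transplant.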

Perhaps a better solution would be to only transplant certain limbs rather than the entire upper body? There are plenty of things to think about in terms of improving the results of these overlays.

For this week, I'll be implementing the ability to automatically generate overlays.

Monday, February 8, 2010

Finding More Motions

Since this project relies on a useful set of lower body motions and upper body motions, it is important to first find some good motions we can use.

I've been browsing through the CMU mocap database. All we have to do is downsample a relevant motion and we can use it in our own framework.

For now, I've found a couple motions of a character standing still and drinking something that will be very useful in creating examples. Throughout the project, I'll try to incorporate some new motions that we can use to augment our results.

Wednesday, February 3, 2010

Creating Basic Overlays

I've implemented a tool for creating basic overlays. You can load any two motions (AMC files) and the upper body limbs of the second motion will be transplanted to the lower body limbs of the first motion. The root of the body from the second motion is first aligned with the body from the first motion (in position and orientation) before the transplanting. These are just basic transplants for now so it is expected that motions will not look completely natural.
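The root alignment step can be sketched like this (hypothetical Python; I'm assuming y-up coordinates and reducing orientation to a yaw angle, which glosses over the full orientation handling in the tool):

```python
import numpy as np

def align_first_frame(upper_root_positions, upper_yaw0, lower_pos0, lower_yaw0):
    # upper_root_positions: (N, 3) root trajectory of the second motion.
    # Rotate about the vertical (y) axis and translate so that its first frame
    # matches the first motion's first-frame root position and facing.
    dyaw = lower_yaw0 - upper_yaw0
    c, s = np.cos(dyaw), np.sin(dyaw)
    yaw = np.array([[c,   0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s,  0.0, c]])  # rotation about the y-axis
    p0 = upper_root_positions[0]
    return (upper_root_positions - p0) @ yaw.T + lower_pos0
```

After this alignment, the upper body joint rotations are copied over frame by frame as described above.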

Below are two different examples:

As you can probably see in the overlays, idle motion from the second motion is naively copied over. Ignoring idle motion by using some type of cropping or time alignment would alleviate such problems.

Until Next Week:
I will continue to flesh out the tool to allow more options in creating the overlays. Once I finish this, I will try to find some more motions that we can use to test the overlays.

Monday, January 25, 2010

Research and Learning the Framework

Last week I created my proposal and began my research.

This week I will be continuing researching work related to overlays. Unfortunately, the term "overlay" is not universal and the method of interest could be published under different terminology. I will be doing a thorough search over the next few weeks to find more related works.

I will also be getting used to the code framework this week. I will be building the code and learning how to implement new functionality within it.

Project Abstract

Blending is a fundamental method of generating new motions for character animation in video games. One particular type of motion blending known as overlays is widely used in the video game industry but has not been significantly explored in research literature. Overlays allow you to combine the upper body of one character motion with the lower body of another character motion. For example, you can combine a motion of a character drinking a cup of water with a motion of a character walking to obtain a single composition of the two motions, a character drinking water while walking. As it is difficult to capture every combination of motion desired, overlays provide a powerful and efficient alternative.

The purpose of this project will be to explore methods in implementing overlays and how they may be used in video games. The initial step will involve creating an interface to transplant the upper body of one character motion to the lower body of another character motion to create a composite character motion, without considering the quality of the result. The next step will be automating the generation of these overlays by considering how physically correct the resulting motion would be as well as taking into account any other necessary constraints to create character motions that look human. The final part of the project will be using the generated overlay motions in a game environment to examine the results.