Proj #3: Tracking

“Use Nuke to create composited image sequences using combinations of live footage and computer-generated elements; with stabilization, tracking, and matchmoving.”

Final Videos:

Part 1: Stabilization

Part 2: Tracking

Part 3: MatchMoving

Part 1: Stabilization

Main Idea: Stabilize a given clip (shaky subway clip)

Tracking Data/Node Tree:

Final Image:

Stabilization: Final Image

Comments: I didn't really have a difficult time with this; it was fairly straightforward. It would have helped, though, to de-noise the video after transforming it to fit the frame size and cropping away the obvious edge movement.
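For the record, the math behind a translation-only stabilize is just the inverse of the tracked motion. A hypothetical standalone sketch (plain Python, not Nuke's scripting API) of what a Tracker node set to stabilize effectively computes:

```python
# Stabilization as the inverse of tracked motion: subtract each frame's
# tracked-point drift from a chosen reference frame. Hypothetical sketch,
# not Nuke's actual API.

def stabilize_offsets(track, ref_frame=0):
    """track: list of (x, y) tracked-point positions, one per frame.
    Returns per-frame (dx, dy) translations that cancel the drift."""
    rx, ry = track[ref_frame]
    return [(rx - x, ry - y) for (x, y) in track]

# Shaky clip: the tracked point wanders away from its frame-0 position.
track = [(100.0, 50.0), (103.0, 48.0), (98.0, 55.0)]
offsets = stabilize_offsets(track)

# Applying each offset moves the tracked point back to (100, 50):
for (x, y), (dx, dy) in zip(track, offsets):
    assert (x + dx, y + dy) == (100.0, 50.0)
```

In Nuke this is the transform the Tracker bakes out for you; a real stabilize would also derive rotation and scale from a second track, which is why the frame edges drift and need cropping.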

Part 2: Tracking

Main Idea: Live-action background with a moving object, plus a live-action/computer-generated foreground element matching that movement (ski lift clip)


Matte/Tracking Data/Node Tree:

Final Images:

Tracking Final Image - Clefairy

Tracking Final Image - Crow

Comments: I learned how to find better tracking points, that's for sure. It was also difficult to find tracking points that persisted through the whole video, so there was a lot of fudging involved. I also realized that manually keying tracking points in the problematic frames really helps Nuke's automated tracking.
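The foreground-matching itself boils down to adding the tracked motion to the element's position. A hypothetical sketch (plain Python, not Nuke's API) of a translate-only match-move, where `attach_offset` is an assumed parameter for where the element sits relative to the tracked point:

```python
# Pin a foreground element to a moving tracked point. Hypothetical
# illustration of what a Tracker node in match-move mode does for
# translation; `attach_offset` is a made-up name for this sketch.

def matchmove(track, attach_offset):
    """track: per-frame (x, y) tracked-point positions.
    Returns per-frame positions for the attached foreground element."""
    ox, oy = attach_offset
    return [(x + ox, y + oy) for (x, y) in track]
```

Example: `matchmove([(10.0, 10.0), (12.0, 11.0)], (5.0, 0.0))` keeps the element 5 pixels to the right of the track on every frame.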

I also rendered out passes to use for Clefairy, but considering how poor the quality of the original video was, and that she ended up tinier than I expected, it didn't seem necessary. I could have worked out the IBL a bit better in Maya; it wasn't as obvious with the crow, but with Clefairy the lighting was a little off.

Part 3: MatchMoving

Idea: Live-action background with a moving camera and a matching computer-generated foreground (I didn't choose a movie clip; I filmed my own to get a feel for how camera tracking and markers work)


Render Passes:

Camera Tracking – Tracking Data/Node Tree/In Maya:

Marker Removal – Tracking Data:

Final Node Tree:

MatchMoving Node Tree

Final Image:

MatchMoving Final Image


I'm pretty sure I was in way over my head on this one, which is the whole reason my project 3 is shamefully late.

1.) Motion blur/interlacing make it a PAIN to track. I couldn't do much about the motion blur except hand-track the parts where it was so bad I wasn't even sure where the point should go. The interlacing, however, I was able to fix with Nuke's DeInterlace node.
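Nuke's DeInterlace node is more sophisticated, but the simplest deinterlacing strategy, keeping one field and rebuilding the other's scanlines by averaging their neighbors, can be sketched as (hypothetical plain-Python version operating on a 2D list of pixel values):

```python
# Minimal line-averaging deinterlace: keep even scanlines, rebuild each
# interior odd scanline as the average of the lines above and below.
# Hypothetical sketch; Nuke's DeInterlace node does much more.

def deinterlace_average(frame):
    """frame: 2D list of scalar pixel values (rows are scanlines).
    Returns a new frame with interior odd rows interpolated."""
    out = [row[:] for row in frame]
    for y in range(1, len(frame) - 1, 2):
        out[y] = [(a + b) / 2 for a, b in zip(frame[y - 1], frame[y + 1])]
    return out  # edge rows are left as-is
```

This is why deinterlacing helps tracking: the combed odd/even fields come from different moments in time, and averaging them out gives the tracker one coherent image per frame.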

2.) Autodesk's MatchMover hates me. I've heard it's easy to use and does wonders, but I was never able to get it to work and end up with the correct focal length. I spent literally nights trying before I finally caved and figured out how to use Nuke's CameraTracker node instead. Surprisingly, it was a lot more accurate and a lot easier than MatchMover. Figuring out how to export the camera and the point cloud, on the other hand, was not so intuitive, but I found it in the attribute editor window. I understand that trying to brute-force MatchMover was a costly mistake, but I think it was also a valuable lesson, since I now understand MatchMover a bit better (though still not enough to get it to work).

3.) Once I finally had the camera/point cloud imported into Maya, I yet again went a bit overboard and wanted to learn how to use render passes/layers in Maya to help with compositing. This was an extremely costly decision: render times for the sentry were at least 4 hours, and I was constantly making mistakes. For example, I wanted to add reflections (the table's a bit glossy) using this DT Tutorial, but I couldn't get the reflection pass to work. I did get a reflection pass, but it didn't take the shadows into account, so when I merged it with the rest of the render passes it cut through the shadows. At that point I was already several days late, so I had to cut my losses on the glossy reflection.

4.) Shadows. The contact shadows aren't dark enough, and the ends of the shadows were too dark (prior to using a Grade node to match everything up). As with the reflection, I had to cut my losses since I was already late. I do think it's a simple fix, either by roto or by rendering another pass in Maya (which would have meant another long render).
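The shadow matching itself is just a remap of pixel values. In its simplest lift/gain form (a reduced, hypothetical version of the full formula Nuke's Grade node uses, which also has blackpoint/whitepoint and multiply controls):

```python
# Simplified lift/gain grade: remaps black (0.0) to `lift` and
# white (1.0) to `gain`, interpolating linearly in between.
# Reduced sketch of a Grade-style remap, not Nuke's exact formula.

def grade(value, lift=0.0, gain=1.0):
    """value: pixel value, typically in [0, 1]."""
    return value * (gain - lift) + lift
```

So darkening the too-light contact shadows is a matter of pulling `gain` down over the shadow matte, while lifting the over-dark shadow tails means raising `lift`.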

5.) Motion blur: the sentry wouldn't look right without it. Following the DT Tutorial linked above, I rendered a motion-vector pass from Maya to drive a bit of blur in the composite. There was also some depth-of-field blur based on the DepthRemapped pass from Maya (which isn't so obvious in this case).

6.) Removing markers was a pain. My markers were honestly too big to remove easily. I ended up tracking a point, then merging an offset copy of the original plate over it to hide the first marker; each subsequent marker was patched on top of the previous merge.
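That offset-merge trick amounts to copying a tracked, offset clean patch of the same plate over each marker. A hypothetical plain-Python version (in Nuke this was a Tracker driving a transform, merged over the plate):

```python
# Cover a marker by copying a nearby clean region of the same frame
# over it. Hypothetical sketch of the offset-merge marker removal
# described above; frames are 2D lists of scalar pixel values.

def patch_marker(frame, marker_xy, offset, size):
    """marker_xy: (x, y) top-left of the marker region.
    offset: (dx, dy) from the marker to a clean source region.
    size: (w, h) of the patch. Returns a new, patched frame."""
    x, y = marker_xy
    dx, dy = offset
    w, h = size
    out = [row[:] for row in frame]
    for j in range(h):
        for i in range(w):
            out[y + j][x + i] = frame[y + j + dy][x + i + dx]
    return out
```

Tracking the patch position per frame is what keeps the clean region glued over the marker as the camera moves; the stacking order (each patch merged over the previous result) matches the chain of merges in the node tree.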

I think that was everything. Overall, despite being late, I'm honestly glad I went this route. I learned a lot more than if I had just done a simple CornerPin. Kara's tutorial made much more sense after doing this, and I finally understand Nuke's CameraTracker node. I also got a decent introduction to render passes/render layers in Maya, and to placing an object based on the camera/point cloud.

I really wish I had higher-quality videos to work with. In retrospect, a movie clip would have been higher quality, but then I wouldn't have had the experience of removing markers, or of learning how badly interlacing and motion blur can mess up tracking.


The TF2 Level 1 Sentry, crow, and Clefairy models came from external sources. Clefairy is a property of Pokemon/Nintendo, while the TF2 Sentry is a property of Valve.