So, I didn't miss the "The Future of Teaching Computer Graphics for Students in Engineering, Science, and Mathematics" panel, but I wasn't exactly there on time either. It was both nice and disturbing to hear that other teachers encounter the same problems that I do. Hearing Pete Shirley complain that math is "bad" everywhere in the US was a little surprising, but I guess it shouldn't have been. I have to teach, re-teach, and re-re-teach a lot of math in my graphics sequence. I had thought this problem was localized to the Art Institute, but I guess not.

[Photo: the teaching graphics panel. Panelists from left to right: Dave Shreiner, Pete Shirley, Evan Hart, and Ed Angel.]

There were two other highlights of the panel. One was Ed Angel saying, "It will take you longer to get started with [OpenGL] 3.1, ... but they're actually going to have to do something with the math and they're going to be stronger for it." Yeah, I figured that one out too. :) That's 90% of the reason I stopped teaching fixed-function this year.
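To make that concrete: with the fixed-function pipeline gone from 3.1, there's no gluPerspective to hide behind, so students end up writing the projection math themselves. Here's a minimal sketch of what that looks like, building the same matrix gluPerspective used to supply, in the column-major layout OpenGL expects:

```cpp
#include <cmath>

// Build the perspective projection matrix that gluPerspective used to
// supply, stored column-major as OpenGL expects. fovy is in degrees.
void perspective(float fovy, float aspect, float zNear, float zFar,
                 float out[16])
{
    // cot(fovy / 2), with fovy converted from degrees to radians
    const float f = 1.0f / std::tan(fovy * 3.14159265f / 360.0f);

    for (int i = 0; i < 16; i++)
        out[i] = 0.0f;

    out[0]  = f / aspect;                             // x scale
    out[5]  = f;                                      // y scale
    out[10] = (zFar + zNear) / (zNear - zFar);        // z remap
    out[11] = -1.0f;                                  // w = -z_eye
    out[14] = (2.0f * zFar * zNear) / (zNear - zFar);
}
```

It's maybe a dozen lines once you load it with glUniformMatrix4fv, but to write it you have to actually understand what the frustum and the divide by w are doing. That's the "stronger for it" part.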

The other highlight was the discussion after the panel that Dave Shreiner and I had with Andries van Dam about teaching graphics. Seriously. That's like discussing teaching baseball with Yogi Berra!

[Photo: Andries van Dam]

The Capture and Display session in the afternoon was a lot more interesting than I thought it would be. Dense Stereo Event Capture for the James Bond Film "Quantum of Solace" was really, really cool. Basically, they used a few high-speed cameras to record the actors in a free-fall wind tunnel. The multiple views captured for each frame were then used to create really good per-frame models via a technique called "shrink wrapping." With the approximate models, they could view the scene from new angles and relight it. Too bad none of their work was actually used in the movie.
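I don't know the specifics of their shrink-wrap implementation, but the general idea behind that family of techniques is easy to describe: start with a mesh that completely encloses the subject (e.g., the visual hull from the camera silhouettes) and iteratively pull each vertex inward until it lands on the observed surface. A rough sketch of that idea, with a placeholder occupancy test standing in for the real capture data:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Placeholder occupancy test. In a real system this would check whether
// the point projects inside the subject's silhouette in every camera
// view; here it's just a unit sphere so the sketch is self-contained.
static bool isInsideSubject(const Vec3 &p)
{
    return p.x * p.x + p.y * p.y + p.z * p.z < 1.0f;
}

// Pull each vertex of an enclosing mesh inward along its inward-facing
// normal until it reaches the observed surface. This is the general
// shrink-wrap idea, not the paper's actual algorithm.
void shrinkWrap(std::vector<Vec3> &verts,
                const std::vector<Vec3> &inwardNormals,
                float stepSize, int maxSteps)
{
    for (std::size_t i = 0; i < verts.size(); i++) {
        Vec3 p = verts[i];
        const Vec3 &n = inwardNormals[i];

        // March inward until we cross into the subject (or give up).
        for (int s = 0; s < maxSteps && !isInsideSubject(p); s++) {
            p.x += n.x * stepSize;
            p.y += n.y * stepSize;
            p.z += n.z * stepSize;
        }
        verts[i] = p;
    }
}
```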

The big surprise of the session was ILM's Multitrack: A New Visual Tracking Framework for High-End VFX Production. They described a new technique for tracking elements in a video sequence with a moving camera. This is used in VFX, for example, to infer the camera position so that CG elements can be added. I think it might be time for an open-source package to do this. An add-on for Blender, perhaps? I like the idea of using this in a demo to combine video with real-time graphics. drool...
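To be clear, the sketch below is not ILM's Multitrack; it's just the generic building block (2D feature tracking) that any such system, open-source or otherwise, would be built on. With OpenCV it's only a few calls: detect corner features, then chain them frame to frame with pyramidal Lucas-Kanade optical flow. A camera solver would then take those 2D tracks and reconstruct the camera path from them. The input clip name is made up, obviously:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap("shot.mov");   // hypothetical input clip
    cv::Mat frame, gray, prevGray;
    std::vector<cv::Point2f> prevPts, nextPts;

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        if (prevPts.size() < 50) {
            // (Re)detect corner features worth tracking.
            cv::goodFeaturesToTrack(gray, prevPts, 200, 0.01, 10.0);
        } else {
            // Track features from the previous frame with pyramidal
            // Lucas-Kanade optical flow.
            std::vector<unsigned char> status;
            std::vector<float> err;
            cv::calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts,
                                     status, err);

            // Keep only the points that were successfully tracked.
            std::vector<cv::Point2f> kept;
            for (std::size_t i = 0; i < nextPts.size(); i++)
                if (status[i])
                    kept.push_back(nextPts[i]);
            prevPts = kept;
        }
        prevGray = gray.clone();
    }
    return 0;
}
```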

Estimating Specular Roughness From Polarized Second-Order Spherical Gradient Illumination was, aside from the crazy-long name, as interesting as I hoped it would be. The novel part of their work is that the captured data can be used to fit the parameters of arbitrary analytic BRDFs to real objects. I really want to build one of those geodesic lighting domes used to capture BRDF data. Their technique only requires one camera, so I don't think it would be that hard.
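The "parameters for arbitrary analytic BRDFs" part is less exotic than it might sound: once you've measured the shape of the specular lobe, fitting a model to it is just a small optimization problem. As a toy illustration (mine, not the paper's method), here's a brute-force fit of a Blinn-Phong exponent to a set of measured lobe samples:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// One measured sample of the specular lobe: the cosine of the angle
// between the half vector and the normal, and the observed intensity
// (normalized so the lobe peak is 1).
struct LobeSample { double cosTheta; double intensity; };

// Find the Blinn-Phong exponent n that best explains the samples,
// minimizing the squared error of cos(theta)^n against the measured
// intensity. Brute-force search over a log-spaced range of exponents.
double fitBlinnPhongExponent(const std::vector<LobeSample> &samples)
{
    double bestN = 1.0, bestErr = 1e30;
    for (double n = 1.0; n <= 2048.0; n *= 1.05) {
        double err = 0.0;
        for (const LobeSample &s : samples) {
            const double d = std::pow(s.cosTheta, n) - s.intensity;
            err += d * d;
        }
        if (err < bestErr) { bestErr = err; bestN = n; }
    }
    return bestN;
}

int main()
{
    // Fake "measurements" of a lobe with exponent ~100, for demonstration.
    std::vector<LobeSample> samples;
    for (double c = 0.90; c <= 1.0; c += 0.005)
        samples.push_back({c, std::pow(c, 100.0)});

    std::printf("fitted exponent: %f\n", fitBlinnPhongExponent(samples));
    return 0;
}
```

A real fit would use a proper optimizer and a more capable BRDF model, but the shape of the problem is the same.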

Unfortunately, I had to miss Creating Natural Variations because the OpenGL BoF was at the same time. Since the BoF was at kind of a bad time and SIGGRAPH is smaller this year, I wasn't expecting a good turnout. However, it was standing room only. I suspect the traditional free beer may have helped. Anyway, people seemed generally pleased with the progress that's been made. We had a lot of good questions and comments. One person even asked if we were considering pixel transfer shaders. This was in the old 3Dlabs OpenGL 2.0 proposal, but I haven't heard any mention of it since then. It seems almost unnecessary now that OpenCL is available. Still... might be worth considering.