For me, there is one recurring theme every year at SIGGRAPH: I need to take more math classes. Good grief! Anyway... Automatic Pre-Tessellation Culling was really interesting, but I was a bit disappointed that Tomas Akenine-Möller wasn't presenting. This is not to knock Jacob Munkberg, who was a fine presenter. Their idea is pretty straightforward, and it's similar to something I've been thinking about for some time. Basically, analyze the shader to partition it into the parts that calculate the position and the parts that calculate everything else. Run the part that calculates the position, perform culling, and conditionally run the rest.
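Here's a rough GLSL sketch of what I mean; the shader and the names are made up by me, not taken from the paper. The point is just that the position calculation only touches a subset of the shader's inputs, so it can be split out and run on its own:

    // Original vertex shader: position math and "everything else" mixed.
    uniform mat4 mvp;
    uniform mat3 normal_matrix;
    attribute vec4 position;
    attribute vec3 normal;
    varying vec3 eye_normal;

    void main()
    {
        gl_Position = mvp * position;         // position partition
        eye_normal = normal_matrix * normal;  // everything-else partition
    }

    // Extracted position-only variant: run this cheap shader first, cull
    // the resulting primitives against the view volume, and only run the
    // full shader for the primitives that survive.
    uniform mat4 mvp;
    attribute vec4 position;

    void main()
    {
        gl_Position = mvp * position;
    }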

The idea I had some time ago was to do similar analysis on shaders to pull low-frequency calculations up to higher levels. For example, detect calculations in the vertex shader that only change per primitive group (i.e., calculations that only depend on uniforms) and pull them out of the vertex shader.
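As a made-up example, a linear fog scale factor is often computed per-vertex even though it only depends on uniforms, so it has the same value for every vertex in the draw call:

    uniform mat4 mvp;
    uniform mat4 modelview;
    uniform float fog_start;
    uniform float fog_end;
    attribute vec4 position;
    varying float fog_factor;

    void main()
    {
        // Depends only on uniforms, so this is constant for the whole
        // primitive group.  An analysis pass could hoist it out of the
        // shader and let the driver evaluate it once per draw.
        float fog_scale = 1.0 / (fog_end - fog_start);

        gl_Position = mvp * position;

        float eye_z = -(modelview * position).z;
        fog_factor = clamp((fog_end - eye_z) * fog_scale, 0.0, 1.0);
    }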

Real-Time Hand-Tracking With a Color Glove was good research, but it has a ways to go. They basically store a big-ass database of hand images. At run-time, they search the database for an image matching the image from the camera. This is a gross simplification, but it's the right general idea. They're able to match really well, but it soaks two cores on a quad-core C2D system. Ouch. From some of the discussion after the presentation, I think there's a lot they can do to accelerate the database search. Maybe they'll have a follow-on next year.

Everyone loved Achieving Eye Contact in a One-to-Many 3D Video Teleconferencing System, but it was really hard for me to get excited about it. Yeah, it's really cool technology, but it's just too fragile. By fragile I mean that you have to have a bunch of special hardware set up in a very specific way. They had a demo in the e-tech, and that's probably the only instance of this thing that will ever exist. As a couple of people at the session commented, "it's just a head in a box", but "it's the best head in the best box".

So, I didn't get to see Tomas Akenine-Möller present today, but I did get to see Ken Perlin present! The UnMousePad is just plain kick-ass. It's a technology for making arbitrarily sized, multitouch, pressure-sensitive input devices. They had 24" (diagonal) versions in the e-tech display. I talked to the guys at the e-tech display, and they're going to have an SDK with Mac, Windows, and Linux support in a month or so. I think X.org needs to get one for who-t. I talked to the guy that has been working on the API, and I think I may have convinced him to go to XDC.

[Photo: Ken Perlin with the UnMousePad]

I took a late lunch because I decided to go to AMD's Next-Generation Graphics: The Hardware and the APIs talk. They were pitching a whole ton of Shader Model 5 stuff that they want to add to GLSL. A lot of the features are mostly useful for video decode. The neat thing is that I think i965 hardware can do a lot of it.

In the afternoon session, Inferred Lighting: Fast Dynamic Lighting and Shadows for Opaque and Translucent Objects was excellent. They came up with an algorithm that, basically, straddles deferred shading and forward shading. I did cringe when he explained that they handle alpha materials using screen-door transparency, but it's pretty much the only way to do what they wanted. It's a cool enough algorithm that I'm going to add it to VGP352 next year.
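For anyone that hasn't seen it, screen-door transparency is easy to sketch in a fragment shader: index an ordered-dither matrix with the fragment's window position and discard fragments whose alpha falls below the threshold. This is just the general technique, assuming a plain textured material; it's not the paper's actual code:

    uniform sampler2D diffuse_tex;
    varying vec2 texcoord;

    void main()
    {
        vec4 color = texture2D(diffuse_tex, texcoord);

        // 4x4 ordered-dither (Bayer) thresholds, scaled into (0, 1) so
        // alpha 0.0 always discards and alpha 1.0 never does.
        const mat4 bayer = mat4( 1.0,  9.0,  3.0, 11.0,
                                13.0,  5.0, 15.0,  7.0,
                                 4.0, 12.0,  2.0, 10.0,
                                16.0,  8.0, 14.0,  6.0) / 17.0;

        ivec2 p = ivec2(mod(gl_FragCoord.xy, 4.0));
        if (color.a < bayer[p.x][p.y])
            discard;  // a "closed" hole in the screen door

        gl_FragColor = vec4(color.rgb, 1.0);
    }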

I was surprisingly interested in the last paper, Heuristics for Continuity Editing of Cinematic Computer-Graphics Scenes. Their system does a really good job, and I'm looking forward to the paper. Judging from the clips they showed, and as they freely admit, their automated system isn't as good as an Academy Award-nominated film editor. lol. In any case, automatically generating instant replays was something I was thinking about way back when I was working on Dactyl Joust on the Jaguar.