This year I was finally smart enough to register at the conference the day before it started. Yay me. That made it so much easier the first day.

I wasn't expecting a lot from envyLight: An Interface for Editing Natural Illumination, but I was pleasantly surprised. There were a couple of surprises in their results. I keep meaning to implement some sort of parameter editing interface for my BRDF demos, and I would have done it quite differently before seeing this paper.

I understand the method in Interactive On-Surface Signal Deformation, but I don't really get how it would be implemented in a real system. I guess that's what I get for not reading the paper before attending the presentation. I would have read the paper if I had remembered to bring the DVD drive for my laptop. That's the one annoying thing about the X200s... no built-in optical drive. In any case, it seems like this approach could be used to great effect in a demo. I imagine a scene where, say, the shock wave from a heavy object hitting the ground causes shadows on the ground to get pushed away.

My first Avatar presentation, PantaRay: Fast Ray-Traced Occlusion Caching, was impressive. I guess rendering scenes with 10M to ONE BILLION DOLLARS... er... one billion polygons needs some new acceleration techniques. And a petabyte of "live data" needs some kick-ass I/O. There are so many different optimizations implemented here that I think they could have gotten multiple papers out of it. Seriously. Just the layers of optimizations in the spatial index calculation should have been sufficient. Showoffs! :)

On a whim, I went to the SIGGRAPH awards presentation. The cool thing there was a visualization app that shows, on a map of the world, where every technical paper ever published at SIGGRAPH originated. It will supposedly be posted on the web, so I'll link it here soon. The talk by Don "The Tornado" Marinelli really made it worth it, though. You can tell that he comes from a drama background. They also said that some of the content would be posted on YouTube, so his talk may get linked here later.

The word(s) of the day: deferred shading. Screen Space Classification for Efficient Deferred Shading was the first of the bunch. The basic technique is to read back the G-buffer to generate a batch of polygons to draw. The screen is divided into tiles, and the tiles are classified by attributes (e.g., in shadow, directly lit, etc.). On both PS3 and Xbox360 it cuts the frame time by about half. They talked a bit about optimizing the polygon submission. Since the polygons are tile aligned, it seems like some sort of quadtree could be generated and rendered using point sprites. I'll have to think on that a bit more.
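Here's a minimal CPU-side sketch of the tile classification step as I understood it. The attribute bits, the tile size handling, and all of the names are my own guesses for illustration; the paper builds the masks on the GPU from the real G-buffer.

```cpp
// Sketch of screen-tile classification: divide the screen into tiles, scan the
// G-buffer texels each tile covers, and OR their attributes into a per-tile
// bitmask.  Tiles with the same mask can then be drawn as one batch with a
// shader specialized for exactly that combination of attributes.
#include <cstdint>
#include <vector>

enum TileBits : uint32_t {
    TILE_IN_SHADOW  = 1u << 0,
    TILE_DIRECT_LIT = 1u << 1,
    TILE_SKY        = 1u << 2,
};

struct GBufferTexel {
    bool in_shadow;
    bool is_sky;
};

std::vector<uint32_t> classify_tiles(const std::vector<GBufferTexel> &gbuf,
                                     int width, int height, int tile_size)
{
    const int tiles_x = (width  + tile_size - 1) / tile_size;
    const int tiles_y = (height + tile_size - 1) / tile_size;
    std::vector<uint32_t> masks(tiles_x * tiles_y, 0u);

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            const GBufferTexel &t = gbuf[y * width + x];
            const uint32_t bits = t.is_sky
                ? uint32_t(TILE_SKY)
                : uint32_t(t.in_shadow ? TILE_IN_SHADOW : TILE_DIRECT_LIT);

            masks[(y / tile_size) * tiles_x + (x / tile_size)] |= bits;
        }
    }
    return masks;
}
```

Those masks are what would feed the tile-aligned polygon batches; the quadtree / point-sprite idea above would basically amount to merging neighboring tiles that end up with the same mask.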

"Boom. 60 is better." lol. How to Get From 30 to 60 Frames Per Second in Video Games for "Free" uses the Z-buffer and the velocity buffer from the current frame and the color buffer of the previous frame to generate an in-between frame. The author was inspired by MPEG motion vectors. Here, it's a big hack. It has a bunch of hacks on top of it to fix artifacts. The best part is the way dynamic objects are "erased" from the frame (this is only necessary with deferred shading). I like it.

Split-Second Motion Blur covered some additional work from Split Second. They render a per-pixel 2D motion vector ID. This ID is then used in a blur post-process. In addition, they update the texture sampling derivatives based on the motion vector. Cheap, easy, good results. I'll definitely add this to the motion blur techniques covered in VGP351 next time it comes around.
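For VGP351 purposes, here's a minimal sketch of the blur side of that: average a handful of color samples along each pixel's motion vector. The sample count, the clamping, and the CPU formulation are mine; the talk's version is a GPU post-process and also adjusts the texture sampling derivatives, which isn't shown here.

```cpp
// Sketch of motion blur driven by a per-pixel 2D motion vector: step from
// -v/2 to +v/2 through each pixel and average the color samples along the way.
#include <vector>

struct Vec2  { float x, y; };
struct Color { float r, g, b; };

static Color fetch(const std::vector<Color> &img, int w, int h, int x, int y)
{
    x = x < 0 ? 0 : (x >= w ? w - 1 : x);
    y = y < 0 ? 0 : (y >= h ? h - 1 : y);
    return img[y * w + x];
}

void motion_blur(const std::vector<Color> &color,
                 const std::vector<Vec2> &velocity,
                 std::vector<Color> &out, int w, int h, int num_samples)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            const Vec2 v = velocity[y * w + x];
            Color acc = {0.0f, 0.0f, 0.0f};

            for (int i = 0; i < num_samples; i++) {
                // t runs from roughly -0.5 to +0.5 across the samples.
                const float t = (i + 0.5f) / num_samples - 0.5f;
                const Color c = fetch(color, w, h,
                                      static_cast<int>(x + t * v.x),
                                      static_cast<int>(y + t * v.y));
                acc.r += c.r; acc.g += c.g; acc.b += c.b;
            }

            const Color result = {acc.r / num_samples,
                                  acc.g / num_samples,
                                  acc.b / num_samples};
            out[y * w + x] = result;
        }
    }
}
```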

Over the last couple of years, systems for real-time indirect lighting have been all the rage. Pretty much all of them have used deferred shading. A Deferred-Shading Pipeline for Real-Time Indirect Illumination is (shock!) no different. However, their algorithm has both good quality and good speed; the other algorithms that I have seen pick one or the other. They incorporated a clever way to include occlusion for bounces, but it costs a lot of memory. I guess that's the trade-off. The method for mipmapping the G-buffer was pretty clever: at each downscale they average the attributes from the largest object. It's sort of a bilateral filter in reverse (I've sketched how I picture it after the list below). It's still screen space, so it has some drawbacks:

  • losing indirect light from an object that goes off screen

  • no support for mirrors reflecting objects behind the camera
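Here's roughly how I picture the "average the attributes from the largest object" downscale. Using a per-texel object ID to decide which surface covers most of each 2x2 block is my assumption; the paper may well cluster by depth and normal instead.

```cpp
// Sketch of one G-buffer mipmap downscale: for each 2x2 block, keep only the
// texels belonging to whichever surface covers most of the block and average
// their attributes.  Assumes even width and height for simplicity.
#include <array>
#include <vector>

struct GBufferTexel {
    int   object_id;
    float depth;
    float normal[3];
};

std::vector<GBufferTexel> downscale_gbuffer(const std::vector<GBufferTexel> &src,
                                            int width, int height)
{
    std::vector<GBufferTexel> dst((width / 2) * (height / 2));

    for (int y = 0; y < height / 2; y++) {
        for (int x = 0; x < width / 2; x++) {
            const std::array<GBufferTexel, 4> block = {
                src[(2 * y)     * width + 2 * x],
                src[(2 * y)     * width + 2 * x + 1],
                src[(2 * y + 1) * width + 2 * x],
                src[(2 * y + 1) * width + 2 * x + 1],
            };

            // Find the object ID that appears most often in this block.
            int best_id = block[0].object_id;
            int best_count = 0;
            for (const auto &a : block) {
                int count = 0;
                for (const auto &b : block)
                    count += (a.object_id == b.object_id);
                if (count > best_count) {
                    best_count = count;
                    best_id = a.object_id;
                }
            }

            // Average only the texels that belong to that object.
            GBufferTexel out = {best_id, 0.0f, {0.0f, 0.0f, 0.0f}};
            for (const auto &t : block) {
                if (t.object_id != best_id)
                    continue;
                out.depth += t.depth / best_count;
                for (int i = 0; i < 3; i++)
                    out.normal[i] += t.normal[i] / best_count;
            }
            dst[y * (width / 2) + x] = out;
        }
    }
    return dst;
}
```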

Also, if I hear "current console hardware can't" one more time, I'll puke... or just roll my eyes again. I knew dynamic branching was bad on PS3, but I didn't realize it was so bad on Xbox360. sigh...

I wrapped up the day with Live Real-Time Demos. As Eric Haines notes, they were pretty cool. Thinking back to my days as a demo coder, it brings a tear to my eye to see demos on the big screen at SIGGRAPH.