CRAZY DAY OF RENDERING TALKS!!! AHHHHH!!!

I didn't think Practical Uses of a Ray Tracer for "Cloudy With a Chance of Meatballs" would be interesting to me, and, sure enough, the content wasn't. However, the presenters mentioned that Sony / Imageworks is open-sourcing a bunch of projects, including the shading language used in the movie. Strong work!

There was quite a bit of information in Multi-Layer, Dual-Resolution Screen-Space Ambient Occlusion that I plan to include in my lectures on AO later this summer. Their algorithm uses depth peeling, so I'll also have to cover that if I'm going to explain their approach. I might skip that bit. We'll see.
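In case I do end up covering it: the core of depth peeling is that each rendering pass keeps the nearest fragment strictly behind the depth peeled in the previous pass. Here's a CPU sketch of that per-pixel logic (my own standalone illustration, not the paper's GPU implementation):

```python
# Depth peeling, simulated on the CPU for a single pixel: each "pass" keeps
# the nearest fragment strictly behind the previously peeled depth.

def peel_layers(fragment_depths, num_layers):
    """Return up to num_layers depths, nearest first, as depth peeling would."""
    layers = []
    prev = float("-inf")
    for _ in range(num_layers):
        # Fragments strictly behind the layer peeled in the previous pass.
        behind = [d for d in fragment_depths if d > prev]
        if not behind:
            break
        nearest = min(behind)
        layers.append(nearest)
        prev = nearest
    return layers

# Three surfaces overlap this pixel; the duplicate depth collapses into one layer.
print(peel_layers([0.7, 0.3, 0.5, 0.3], 4))  # [0.3, 0.5, 0.7]
```

On the GPU this is done with an extra depth texture from the previous pass and a discard test in the fragment shader, one full scene render per layer.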

RACBVHs: Random-Accessible Compressed Bounding Volume Hierarchies was interesting, but it wasn't really what I was expecting. That's the downside of the two-sentence summary of the abstract in the advance program. The focus of this work is BVHs for huge data sets. That is, data sets so large that the full BVH may not fit in main memory. Their example had an 8GB BVH. Still, it was worth seeing. The part that really impressed me is that the performance hit on small models (i.e., ones that fit in memory without compression) was only ~3%.
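For anyone who hasn't run into the underlying data structure: a BVH is just a tree of nested bounding volumes over the scene's primitives. A minimal median-split build over 1D intervals looks like this (a hypothetical sketch of the plain data structure they compress, not the RACBVH scheme itself):

```python
# Minimal median-split BVH over 1D intervals (lo, hi). Each node stores the
# bounds enclosing everything beneath it, so a ray that misses a node's bounds
# can skip that whole subtree.

def build_bvh(boxes):
    """boxes: list of (lo, hi) intervals. Returns a nested (bounds, children) tree."""
    lo = min(b[0] for b in boxes)
    hi = max(b[1] for b in boxes)
    if len(boxes) == 1:
        return ((lo, hi), None)  # leaf: bounds, no children
    # Split the primitives at the median of their interval centers.
    boxes = sorted(boxes, key=lambda b: (b[0] + b[1]) / 2)
    mid = len(boxes) // 2
    return ((lo, hi), (build_bvh(boxes[:mid]), build_bvh(boxes[mid:])))

root = build_bvh([(0, 1), (2, 3), (4, 5), (6, 7)])
print(root[0])  # root bounds enclose everything: (0, 7)
```

The paper's contribution is making a tree like this randomly accessible while compressed, so traversal can decompress just the clusters of nodes it touches instead of paging in all 8GB.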

The Real Fast Rendering session was, as predicted, full of win. :)

I have to read the paper for Volumetric Shadow Mapping because there's some bit of the algorithm that I missed. I get the general idea, and I like it. Basically, you trace viewing rays through the (existing) shadow map to determine fogging. Cool stuff.
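Until I read the paper, here's the general idea as I understood it, sketched per viewing ray. All the names and the constant-scattering assumption are mine, not the paper's:

```python
# Rough sketch: march the viewing ray in steps, test each sample point's
# depth from the light against the (existing) shadow map, and accumulate
# in-scattered fog only at the lit samples.

def fog_along_ray(sample_depths_from_light, shadow_map_depth, step_scatter=0.1):
    """Fraction of in-scattered light accumulated along one viewing ray."""
    scattered = 0.0
    for d in sample_depths_from_light:
        lit = d <= shadow_map_depth  # sample lies in front of the occluder
        if lit:
            scattered += step_scatter  # assumed-constant scattering per step
    return min(scattered, 1.0)

# Ray with 5 samples; an occluder at depth 0.5 shadows the last two samples,
# so only the first three contribute fog.
print(fog_along_ray([0.1, 0.3, 0.45, 0.6, 0.8], 0.5))
```

Presumably the paper's actual contribution is doing this efficiently and with a proper scattering model; this is just the shape of the idea.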

BVH for Efficient Raytracing of Dynamic Metaballs on GPU earns points for running the presentation from a Linux laptop. There was a shocking lack of Linux at the conference. :(

I share Naty's skepticism of depth peeling methods. Bucket Depth Peeling shows some promise of improving things. The massive use of MRT (8 targets) and the abuse of the outputs (packing 4 8-bit values into each 32-bit output component) may limit the technique's applicability.

Normal Mapping With Low-Frequency Precomputed Visibility built on a bunch of algorithms that I'm completely unfamiliar with. I don't know anything about using spherical harmonics for lighting, for example, so I didn't get a lot out of this talk. Add that to my list of things to learn about before next SIGGRAPH.
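A first note-to-self on that list: real SH lighting boils down to projecting a low-frequency function (lighting or visibility) onto a small set of basis functions over the sphere, then reconstructing it with a dot product. The first two bands look like this (standard SH constants; the reconstruction helper is my own naming):

```python
import math

# First two bands (four functions) of the real spherical harmonic basis,
# evaluated at a unit direction (x, y, z).

def sh_basis_2band(x, y, z):
    c0 = 0.5 * math.sqrt(1.0 / math.pi)  # band 0 constant
    c1 = 0.5 * math.sqrt(3.0 / math.pi)  # band 1 constant
    return [c0, c1 * y, c1 * z, c1 * x]

def sh_eval(coeffs, direction):
    """Reconstruct a function value from its SH coefficients: a dot product."""
    basis = sh_basis_2band(*direction)
    return sum(c * b for c, b in zip(coeffs, basis))

print([round(v, 4) for v in sh_basis_2band(0.0, 0.0, 1.0)])  # [0.2821, 0.0, 0.4886, 0.0]
```

Papers like this one (and the GI paper below) store lighting or visibility as a handful of these coefficients per sample, which is why the methods are limited to low-frequency effects.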

After lunch, I listened to a couple of papers in the Rendering and Visibility session. An Efficient GPU-Based Approach for Interactive Global Illumination was really impressive. Still, they get less than 10 fps for some relatively simple scenes. It seems like they have a good grasp of the issues they still need to tackle. Maybe by the time they get that done, hardware will be fast enough to make this generally useful. (Note: They also use spherical harmonics, so mod+1 for me learning about spherical harmonics.)

Single Scattering in Refractive Media With Triangle Mesh Boundaries was pretty interesting. They've made some reasonable simplifying assumptions for caustics rendering, and they got some good results for ray tracing. It seems like something similar to displacement maps could be used to achieve this faster. Hmm... I'll add that to my list of things to (eventually) research.
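The refraction step at the mesh boundary is just Snell's law, which is worth writing down since it's what makes the single-scattering geometry tractable. A 2D sketch (my own illustration, not the paper's method):

```python
import math

# Snell's law: n1 * sin(theta_in) = n2 * sin(theta_out) for light crossing
# a boundary from refractive index n1 into n2.

def refract_angle(theta_in, n1, n2):
    """Refracted angle in radians, or None on total internal reflection."""
    s = (n1 / n2) * math.sin(theta_in)
    if abs(s) > 1.0:
        return None  # total internal reflection: no transmitted ray
    return math.asin(s)

# Air (n=1.0) into water (n=1.33): the ray bends toward the normal.
theta_out = refract_angle(math.radians(45.0), 1.0, 1.33)
print(round(math.degrees(theta_out), 1))  # 32.1
```

Each camera ray and light ray bends like this once at the boundary, and the paper's simplifications are about finding the surface point connecting them without a full light-transport search.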

Something else I noticed... every single real-time related paper presented results rendered on NVIDIA hardware, usually a GeForce GTX 280.

Nit of the day: If you're going to take pictures during a talk, turn off the fake shutter sound on your digital camera. And don't use the *#%$ing flash either.