I got to the Textures session a bit late because I went to Nvidia's "OpenGL 4.0 for 2010" session, which ran 20 minutes over. It was a good overview of OpenGL 4.0 / 4.1. Nvidia (Mark Kilgard, more precisely) continues to spread misinformation about the compatibility profile, and that's really frustrating.

I missed all of By-Example Synthesis of Architectural Textures and everything except the results of Synthesizing Structured Image Hybrids. The results in the latter were very cool. I especially liked the synthesized pirate flags.

Every year there's some piece of cool tech at SIGGRAPH that I want to implement. I think Vector Solid Textures is it this year. :) It's a very clever method of compactly storing volumetric textures as a sort of voxelized SVG. The complex step, of course, is generating the texture data. Any algorithm that utilizes sub-algorithms with names like "L-BFGS-B minimizer" and "teleportation" is bound to be frightening. lol.
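
Just to wrap my head around it, here's my own toy interpretation of the "voxelized SVG" idea in C. This is not the paper's actual representation (they do something much fancier to generate and store the data); it only illustrates why the approach stays sharp under magnification while staying tiny.

    typedef struct { float r, g, b; } color;

    typedef struct {
        color inside, outside;   /* the two regions that meet in this cell */
        float f[2][2][2];        /* implicit function sampled at the 8 corners */
    } vst_cell;

    /* Evaluate the texture at local coordinates (x, y, z) in [0,1]^3 within a
     * cell: trilinearly interpolate the implicit function and pick a region.
     * The region boundary stays sharp no matter how far you zoom in, which is
     * the whole point of a vector representation. */
    color vst_sample(const vst_cell *c, float x, float y, float z)
    {
        float f = 0.0f;

        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                for (int k = 0; k < 2; k++)
                    f += c->f[i][j][k]
                       * (i ? x : 1.0f - x)
                       * (j ? y : 1.0f - y)
                       * (k ? z : 1.0f - z);

        /* A smoothstep around the zero set would antialias the boundary; a
         * hard test keeps the sketch short. */
        return (f < 0.0f) ? c->inside : c->outside;
    }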

The Mesh Colors presentation started with an excellent overview of all the problems with 2D texture mapping. If he had stopped there, I would have called it a useful presentation. Of course, he didn't stop there. The idea behind mesh colors is that a set of colors is stored for each triangle (one per vertex, one per edge, and N per triangle) along a tessellation grid (basically). At run-time the UV value is used to weight the nearest colors. The frustrating bit of the presentation was the massive amount of hand waving (shown in the photo below) about the implementation. I suspect the details are in the paper. The Hair Farm shirt was a nice touch.
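
Here's a quick sketch of how I understood the lookup. This is my reading of the talk, not code from the paper; the triangular-grid layout and the fixed resolution R are assumptions I'm making for the sake of the example.

    #include <math.h>

    #define R 4   /* grid resolution: R + 1 samples along each edge (an assumption) */

    typedef struct { float r, g, b; } color;

    /* Colors for one face, stored as a triangular grid indexed by (i, j) with
     * i + j <= R.  The three corners coincide with the vertex colors and the
     * border samples with the edge colors, which is how neighboring faces end
     * up sharing data and staying seam-free. */
    static color face_color(const color grid[], int i, int j)
    {
        const int row_start = i * (R + 1) - i * (i - 1) / 2;
        return grid[row_start + j];
    }

    /* Blend the nearest stored samples at barycentric coordinate (u, v). */
    color mesh_color_sample(const color grid[], float u, float v)
    {
        float fu = u * R, fv = v * R;
        int i = (int)floorf(fu), j = (int)floorf(fv);
        float du = fu - i, dv = fv - j;
        color c;

        /* Landed exactly on the far edge of the face: step back one cell. */
        if (i + j > R - 1) {
            if (i > 0) { i--; du += 1.0f; } else { j--; dv += 1.0f; }
        }

        if (du + dv <= 1.0f) {      /* lower triangle of the grid cell */
            const color c00 = face_color(grid, i, j);
            const color c10 = face_color(grid, i + 1, j);
            const color c01 = face_color(grid, i, j + 1);
            c.r = (1 - du - dv) * c00.r + du * c10.r + dv * c01.r;
            c.g = (1 - du - dv) * c00.g + du * c10.g + dv * c01.g;
            c.b = (1 - du - dv) * c00.b + du * c10.b + dv * c01.b;
        } else {                    /* upper triangle of the grid cell */
            const color c11 = face_color(grid, i + 1, j + 1);
            const color c10 = face_color(grid, i + 1, j);
            const color c01 = face_color(grid, i, j + 1);
            c.r = (du + dv - 1) * c11.r + (1 - dv) * c10.r + (1 - du) * c01.r;
            c.g = (du + dv - 1) * c11.g + (1 - dv) * c10.g + (1 - du) * c01.g;
            c.b = (du + dv - 1) * c11.b + (1 - dv) * c10.b + (1 - du) * c01.b;
        }
        return c;
    }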

It also seems like there is a lot of area for future research. The filtered performance is pretty poor. I had a brief discussion with Cem about using hardware tessellation (to a level that matches the mesh colors pattern) to solve some of the filtering issues. It seems like it could work in some cases, but it presents problems of its own (i.e., potentially a lot more work for the rasterizer).

I may have to implement this algorithm too.

There were a bunch of cool things in the poster session, and I took the opportunity today to talk to some of the authors. There were several useful posters about soft-shadow algorithms, a couple about image enhancement, and a couple about ambient occlusion. I especially liked Curvature depended Real-Time Ambient Occlusion (sic). They use a preprocess to determine the concave curvature at each vertex. At run-time this curvature is interpolated, and the interpolated value is used to determine the AO. The interpolation here gives a better result vs. per-vertex AO in the same way that interpolating light parameters gives better results than interpolating per-vertex light calculations.
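
A tiny made-up example of why the order of operations matters; curvature_to_ao() here is a hypothetical mapping, not whatever the poster authors actually fit.

    #include <stdio.h>

    /* Hypothetical non-linear mapping from concave curvature to occlusion. */
    static float curvature_to_ao(float k)
    {
        const float ao = 1.0f - 0.5f * k * k;
        return (ao < 0.0f) ? 0.0f : ao;
    }

    int main(void)
    {
        const float k0 = 0.0f, k1 = 1.0f;   /* curvature at two vertices */
        const float t = 0.5f;               /* a point halfway between them */

        /* Compute AO at the vertices and interpolate the result ... */
        const float ao_interp = (1.0f - t) * curvature_to_ao(k0)
                              + t * curvature_to_ao(k1);

        /* ... versus interpolating the curvature and evaluating AO there. */
        const float interp_ao = curvature_to_ao((1.0f - t) * k0 + t * k1);

        printf("interpolated per-vertex AO:    %f\n", ao_interp); /* 0.750 */
        printf("AO of interpolated curvature:  %f\n", interp_ao); /* 0.875 */
        return 0;
    }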

My two favorite posters were GPU Ray Casting of Virtual Globes and, I kid you not, A Physical Rendering Model for Human Teeth. I plan to add both of these to my VGP curriculum. The tooth shader is useful because it shows a case of a lighting model developed for a specific type of surface.

The other poster that really caught my eye was the WebGLU poster. It sounds like Benjamin is doing a lot of the same things for WebGL that I'm doing for desktop GL. He and I talked, and there may be some collaboration in our future.

There are some things to really like about Open Shading Language. Being able to arbitrarily connect shaders (via matching inputs and outputs) in a DAG is something that I've been talking about doing in OpenGL for years. Pulling data out by specifying an arbitrary expression on shader outputs is also genius. In OSL the data gets dumped as an image, but I still see use for this in GL.
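
To show what I mean, here's the kind of thing I've been imagining for GL, with completely made-up names. This is not OSL's API; it's just the connect-outputs-to-inputs idea in miniature.

    #include <string.h>

    typedef struct shader_node shader_node;

    /* A shader parameter is either a constant or wired to the output of an
     * upstream node. */
    typedef struct {
        const char  *name;           /* e.g. "roughness" or "base_color"    */
        shader_node *source;         /* upstream node, or NULL for constant */
        const char  *source_output;  /* which output of the upstream node   */
        float        constant[3];
    } shader_param;

    struct shader_node {
        const char   *shader_name;   /* which compiled shader this node runs */
        shader_param *params;
        int           num_params;
    };

    /* Wire output 'output' of node 'src' into parameter 'param' of node 'dst'.
     * Returns 0 on success, -1 if no such parameter exists.  A real system
     * would also type-check the connection. */
    int connect(shader_node *src, const char *output,
                shader_node *dst, const char *param)
    {
        for (int i = 0; i < dst->num_params; i++) {
            if (strcmp(dst->params[i].name, param) == 0) {
                dst->params[i].source = src;
                dst->params[i].source_output = output;
                return 0;
            }
        }
        return -1;
    }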

The implementation of automatic differentiation is really, really smart. I suspect that something like this could even be implemented in our hardware shaders. This would allow higher quality derivatives in some cases. I guess I need to read the Piponi paper from the Journal of Graphics Tools.
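
For anyone who hasn't seen the trick, here it is boiled down to plain forward-mode dual numbers. OSL carries more than one partial derivative around, but the chain-rule bookkeeping is the same idea.

    #include <math.h>
    #include <stdio.h>

    /* A dual number carries a value and its derivative; every operation
     * applies the chain rule to both halves. */
    typedef struct { float val, dx; } dual;

    static dual dual_mul(dual a, dual b)
    {
        return (dual){ a.val * b.val, a.val * b.dx + a.dx * b.val };
    }

    static dual dual_sin(dual a)
    {
        return (dual){ sinf(a.val), cosf(a.val) * a.dx };
    }

    int main(void)
    {
        /* f(x) = x * sin(x), evaluated at x = 1 with dx/dx = 1 seeded in. */
        const dual x = { 1.0f, 1.0f };
        const dual f = dual_mul(x, dual_sin(x));

        /* Analytically, f'(x) = sin(x) + x*cos(x). */
        printf("f(1)  = %f\n", f.val);
        printf("f'(1) = %f (expected %f)\n", f.dx, sinf(1.0f) + cosf(1.0f));
        return 0;
    }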

And they now use LLVM.

Since this system has texture mapping support, it might make an interesting test bed for my thesis work. Hmm...

REYES Using DirectX 11 wasn't too interesting to me. During the Q-and-A period, someone (the previous presenter, in fact) asked what could be changed in either DirectX or the hardware to make this either easier or faster. The answer: being able to insert compute shaders directly into the pipeline (to eliminate the intermediate buffers, synchronization, etc.) would have helped a lot. This isn't the first time I've heard a request for this.

The WebGLot: High-Performance Visualization in the Browser (slides available) guys are doing some really cool stuff on top of WebGL. We'll need to use this app to test WebGL on our drivers!

The Gazing at Games: Using Eye Tracking to Control Virtual Characters course didn't feel like a course. It felt like a presentation of a survey paper on what people have done related to eye tracking in games. I got references to a bunch of papers that seem interesting, but that's about it. I do like the idea of "smart text." Text (dialog, etc.) is displayed until you look away from it. I had to leave early to go to the OpenGL BoF, so I probably missed the best parts. shrug

The highlight of the OpenGL BoF was, of course, the announcement of GLU3 as part of the OpenGL SDK. Getting a round of applause was cool. :)

One of the trivia questions was, "What ARB extension has not been shipped in any publicly available driver?" Answer? GL_ARB_shading_language_include. Dammit!!! Okay, we need to add that to our compiler todo list. Ugh. It's a cool feature, and John Kessenich, Jon Leech, and I spent a lot of time working on it. In all fairness, GLSL needs a much better module mechanism than it currently has. I have some ideas, but they'll have to wait until our new compiler actually ships at least GLSL 1.30.
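
For anyone who hasn't seen the extension, usage looks roughly like this. I'm writing it from memory, so check the spec before trusting the exact entry points; the virtual path and the GLSL snippet are just placeholders.

    #include <GL/glew.h>   /* any loader that exposes the ARB entry points */

    static GLuint build_fs_with_include(void)
    {
        /* Register a chunk of GLSL under a virtual path.  Negative lengths
         * mean the strings are NUL-terminated (if I'm remembering the spec
         * correctly). */
        const char *lib_path = "/lib/lighting.glsl";
        const char *lib_text =
            "vec3 lambert(vec3 n, vec3 l, vec3 albedo) {\n"
            "    return albedo * max(dot(n, l), 0.0);\n"
            "}\n";
        glNamedStringARB(GL_SHADER_INCLUDE_ARB, -1, lib_path, -1, lib_text);

        const char *fs_source =
            "#version 130\n"
            "#extension GL_ARB_shading_language_include : require\n"
            "#include \"/lib/lighting.glsl\"\n"
            "out vec4 color;\n"
            "void main() {\n"
            "    color = vec4(lambert(vec3(0.0, 0.0, 1.0),\n"
            "                         vec3(0.0, 0.0, 1.0),\n"
            "                         vec3(1.0)), 1.0);\n"
            "}\n";

        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fs, 1, &fs_source, NULL);

        /* Compile with a search path used to resolve #include directives. */
        const char * const search_paths[] = { "/lib" };
        glCompileShaderIncludeARB(fs, 1, search_paths, NULL);
        return fs;
    }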

compatibility profile

"Nvidia (Mark Kilgard, more precisely) continues to spread mis-information about the compatibility profile, and that's really frustrating."

Ian, can you elaborate on this?

Comment by jkolb Thu 12 Aug 2010 01:34:41 PM PDT
RE: compatibility profile

Sure. Mark has repeatedly said to never use the core (non-compatibility) profile. He has said that there is no reason for it to exist. He has said that it (paraphrasing) only removes really useful functionality that you really want to use. All of which is wrong. He then goes on to make derogatory comments about the motives of the other ARB members in creating the compatibility profile. I'd like to point out that deprecating and removing functionality from OpenGL was Nvidia's idea in the first place. But the rest of us, apparently, are only trying to hurt developers by actually doing it. Because, you know, people really want glBlendFunc and glBlendFuncSeparate.

The reality is quite the opposite of what Mark says. There is benefit to using the core profile. There are quite a few areas of the spec where we have, for example, intentionally not specified interactions with fixed-function. By not having fixed-function even available, you can be sure that you're not accidentally straying into undefined behavior territory. There's also the potential for improved performance. While there may be no performance benefit to using the core profile on Nvidia's implementation, that just means that Nvidia hasn't taken advantage of the optimizations afforded by removing 10 years of duplicated or broken interfaces. Admittedly, neither has Mesa.

The idea, and Mesa does this for OpenGL vs. OpenGL ES, is that the driver will load a different back-end depending on which profile is requested. There are so many state validation checks that just go away in a driver that doesn't have fixed-function, vertex arrays in client memory, or display lists.

While I'm on the topic of OpenGL ES, Mark seems to forget that the vast majority of the things removed in the core profile are also removed in OpenGL ES. Ever want to port to ES? Use the core profile.
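
For reference, getting a core profile context is just a matter of what you ask for at context creation. Roughly, with GLX (from memory; see GLX_ARB_create_context_profile for the details, and WGL has an equivalent):

    #include <GL/glx.h>

    static GLXContext create_core_context(Display *dpy, GLXFBConfig fbconfig)
    {
        PFNGLXCREATECONTEXTATTRIBSARBPROC glXCreateContextAttribsARB =
            (PFNGLXCREATECONTEXTATTRIBSARBPROC) glXGetProcAddress(
                (const GLubyte *) "glXCreateContextAttribsARB");

        const int attribs[] = {
            GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
            GLX_CONTEXT_MINOR_VERSION_ARB, 2,
            GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
            None
        };

        /* No error handling or extension-string check here; a real app needs
         * both, plus a fallback for drivers that don't do 3.2 yet. */
        return glXCreateContextAttribsARB(dpy, fbconfig, NULL, True, attribs);
    }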

Comment by IanRomanick Wed 18 Aug 2010 01:41:57 PM PDT