Add one more nit to pick with the iPhone: turning off the ringer volume also turns off the alarm clock volume. Fail. So, I got up late today and missed the whole first technical paper session.
I ended up going to the Nvidia tech talk about rendering realistic hair in real-time. That was actually quite interesting. Unfortunately a lot of the details of the algorithm presented depend on new DX11 shader stages. Still, pretty much all of it could be done with multiple passes on DX10 or OpenGL 3.0.
Since I was at the Nvidia sessions, I only caught the last paper, "Modeling Anisotropic Surface Reflectance With Example-Based Microfacet Synthesis," in this session. I was really impressed with the results they were able to achieve with a really simple BRDF measurement rig. Instead of the usual geodesic dome with a dozen or more cameras and scores of lights, they had a single camera with 20 or so lights on a moving track. You can't really tell from the picture, but on the big screen it looked like it was built from Legos.
Seriously. I could build that rig in my basement. The real magic is in the algorithm that extrapolates the "missing" data from the measured data. There are, of course, some types of materials where it fails, but for a lot of stuff it works really well.
The two shadow mapping papers, as predicted, were the most interesting to me, even though I missed the entire first paper. Specifically, "Resolution Matched Shadow Maps" was a great paper that was very well presented. I had avoided hierarchical shadow map techniques in VGP353 specifically because of their frame-to-frame variability, which effectively makes them useless for games. However, these guys have managed to eliminate that fault, speed up the algorithm, and make it easier to implement. Can it even be real?
In a way, I feel bad for the researchers from "Logarithmic Perspective Shadow Maps." Their algorithm is very useful in theory, but it relies on a rasterization primitive (logarithmic rasterization) that will likely never be implemented. Once hardware is fast enough to implement their technique generally, it will be fast enough to do things like "Resolution Matched Shadow Maps" at full speed. This will make their technique irrelevant.
I missed the first few minutes (this is my theme for today) of "Multiresolution Texture Synthesis," but I got a lot out of it. The technique uses "exemplar" images to roughly describe the synthesized texture at multiple levels of detail. These exemplars are connected in a graph that can contain cycles. This allows textures with infinite detail. There was an example that reminded me of some infinite zoomer demos on the Amiga, where an image of sand was used to create an infinite zoom. Very, very cool.
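As I understood it, the cycles are what make the infinite zoom possible: once the finest exemplar links back to a coarser one, you can keep descending levels of detail forever. Here's a toy Python sketch of the idea as I picture it (my own reconstruction, not the paper's data structure, and the exemplar names are made up):

```python
# Each node is an exemplar image; an edge means "when you zoom into this
# exemplar, synthesize the next level from that exemplar."  A cycle
# (sand -> grains -> sand) lets the zoom continue indefinitely.
exemplar_graph = {
    "sand":   ["grains"],
    "grains": ["sand"],     # cycle back to the coarse exemplar: infinite detail
}

def zoom_chain(start, levels):
    """Walk the exemplar graph, picking the first child at each level."""
    chain, node = [start], start
    for _ in range(levels):
        node = exemplar_graph[node][0]
        chain.append(node)
    return chain

print(zoom_chain("sand", 6))
# ['sand', 'grains', 'sand', 'grains', 'sand', 'grains', 'sand']
```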
"Inverse Texture Synthesis" is a very clever idea. Most texture synthesis algorithms take some sort of sample texture (some algorithms call it an exemplar or an epitome) and use that to generate a larger image. Usually the sample texture is cut, either by hand or algorithmically, from an original texture. This algorithm starts with the larger image, generates a novel sample texture, and uses that to generate a alrger texture. The sample texture is usually not directly cut from the original texture, but it is still representitive of the whole original. This is generated using a "control map" that describes different regions of the texture. One of the examples is a "ripeness" control map for a banana that corresponds to black versus yellow regions of the banana.
The really cool thing about this is that it allows an artist to create a novel control map for an object. A new texture that matches what the artist expects is then generated from the sample texture and the new control map.
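Here's a hedged sketch of how I picture the control-map-driven resynthesis working. This is my own toy version, not the paper's optimization (which is far more involved), and all of the names and parameters are made up: build a compact sample by keeping one representative patch per control value, then paint a new texture by looking up patches with the artist's new control map.

```python
import numpy as np

def build_sample(texture, control_map, patch=8, bins=16):
    """Toy 'inverse synthesis': keep one patch per quantized control value."""
    samples = {}
    h, w = control_map.shape
    for y in range(0, h - patch, patch):
        for x in range(0, w - patch, patch):
            key = int(control_map[y:y + patch, x:x + patch].mean() * bins)
            samples.setdefault(key, texture[y:y + patch, x:x + patch])
    return samples

def synthesize(samples, new_control_map, patch=8, bins=16):
    """Paint a new texture by picking the stored patch closest to each control value."""
    h, w = new_control_map.shape
    out = np.zeros((h, w) + next(iter(samples.values())).shape[2:])
    keys = np.array(sorted(samples))
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            want = int(new_control_map[y:y + patch, x:x + patch].mean() * bins)
            key = keys[np.abs(keys - want).argmin()]   # nearest available control value
            out[y:y + patch, x:x + patch] = samples[key]
    return out
```

In the banana example, the control map would be the ripeness field, so an artist could paint arbitrary ripe and unripe regions and get a plausible banana texture back.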
"Lapped Solid Textures" was also a nice advancement. It probably not as directly useful to me, though. I hope to see games use this technique to make objects "solid." The other interesting thing about this paper is that the author suggests a number of areas further research that would be within the reach of undergrads (wink, wink, nudge, nudge). One of those areas being to improve the user interface for editing the texture exemplars used in this approach. This differs from the 2D patched used by Hoppe, et al because they are volumetric.
The final paper of the session, "Anisotropic Noise," addresses one of the issues with noise that has bothered me for years. It is pretty much impossible to correctly filter noise that is generated from textures. Octave truncation works, but it always ends up with textures that are both under-sampled and over-sampled. FAIL + FAIL. Using some off-line processing to generate noise "tiles," correctly anisotropically filtered noise textures can be generated. The hard part is the off-line processing; the real-time cost is only trivially higher than octave truncation.
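To make the octave-truncation complaint concrete, here is a minimal Python sketch of my own (not code from the paper) that truncates procedural noise octaves against a pixel's filter width. The octave that straddles the Nyquist limit is exactly where the simultaneous under- and over-sampling comes from.

```python
import numpy as np

def lattice_hash(ix, iy):
    """Deterministic pseudo-random value in [0, 1) per integer lattice point."""
    h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return ((h ^ (h >> 16)) & 0xFFFF) / 65536.0

def value_noise(x, y):
    """Smoothly interpolated lattice noise (any 2D noise basis would do here)."""
    ix, iy = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - ix, y - iy
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    v00, v10 = lattice_hash(ix, iy), lattice_hash(ix + 1, iy)
    v01, v11 = lattice_hash(ix, iy + 1), lattice_hash(ix + 1, iy + 1)
    return (v00 * (1 - sx) + v10 * sx) * (1 - sy) + (v01 * (1 - sx) + v11 * sx) * sy

def truncated_fbm(x, y, filter_width, octaves=8):
    """Sum noise octaves, dropping any whose frequency exceeds the pixel's Nyquist limit.

    filter_width is the pixel footprint in texture space.  The last octave kept
    still partially aliases (over-sampling) while everything above it is simply
    missing (under-sampling) -- the double FAIL mentioned above.
    """
    total, amplitude, frequency = 0.0, 0.5, 1.0
    for _ in range(octaves):
        if frequency * filter_width > 0.5:   # Nyquist: this octave can't be represented.
            break
        total += amplitude * value_noise(x * frequency, y * frequency)
        amplitude *= 0.5
        frequency *= 2.0
    return total
```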
As an added bonus, this algorithm can fix "texture distortion." Imagine a terrain where the assigned texture coordinates are the X/Y values. Areas with steep slopes have more texels than flat areas, and this results in stretching. The same anisotropic technique can fix this problem as well. WIN + WIN.
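The stretch in that terrain example is easy to quantify: with planar X/Y texture coordinates, a height field with gradient ∇h stretches texels on a slope by a factor of sqrt(1 + |∇h|²). A tiny sketch of my own, just to illustrate the distortion being corrected:

```python
import numpy as np

def slope_stretch(heightfield, cell_size=1.0):
    """Per-texel stretch factor for planar X/Y texture coordinates on a height field.

    A texel on a slope covers sqrt(1 + |grad h|^2) times more surface area than
    a texel on flat ground, which is the stretching being compensated for.
    """
    dh_dy, dh_dx = np.gradient(heightfield, cell_size)
    return np.sqrt(1.0 + dh_dx ** 2 + dh_dy ** 2)
```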
The OpenGL BoF went really well, I think. Nobody showed up with torches or pitchforks. Of course, the free beer may have helped. The most useful part of it for me was the mingling period after all the presentations. I talked with quite a few people and, contrary to the /. reports, nobody was furious. Whew!
EDIT: Added links to most of the papers.