(Better never than late... or something...)
I had high expectations for Hiding Complexity, but I was largely disappointed. I only stayed for the first two talks. As far as I could tell, Occlusion Culling in Alan Wake described a combination of a bunch of known occlusion techniques. That combination seems to have one strong disadvantage: a lot of temporal instability, which can lead to frame-rate jitter. Even the part of the talk about optimizing shadow generation seemed like a subset of the optimizations from CC Shadow Volumes. I'll have to look at their I3D paper, Shadow Caster Culling for Efficient Shadow Mapping, to be sure.
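For flavor, here's a minimal sketch of the usual occlusion-query culling pattern that reuses last frame's results. This is my own reconstruction of the general idea, not Remedy's implementation, and `drawMesh` / `drawBoundingBox` are hypothetical helpers. Deciding visibility from a previous frame is exactly where the temporal instability comes from: an object judged hidden last frame pops in at least a frame late when the camera moves.

```cpp
#include <GL/gl.h>

struct Object {
    GLuint query;   // GL occlusion query object for this object
    bool visible;   // result carried over from a previous frame
    /* ... mesh data ... */
};

// Hypothetical helpers, defined elsewhere.
void drawMesh(const Object &);
void drawBoundingBox(const Object &);

void drawWithOcclusionCulling(Object *objs, int n)
{
    for (int i = 0; i < n; i++) {
        Object &o = objs[i];

        // Pick up last frame's query result if the GPU has finished it.
        GLuint available = 0;
        glGetQueryObjectuiv(o.query, GL_QUERY_RESULT_AVAILABLE, &available);
        if (available) {
            GLuint samples = 0;
            glGetQueryObjectuiv(o.query, GL_QUERY_RESULT, &samples);
            o.visible = (samples > 0);   // stale by at least one frame
        }

        // Re-issue the query: draw the real mesh if it was visible,
        // otherwise just its bounding box to test for reappearance.
        glBeginQuery(GL_SAMPLES_PASSED, o.query);
        if (o.visible)
            drawMesh(o);
        else
            drawBoundingBox(o);
        glEndQuery(GL_SAMPLES_PASSED);
    }
}
```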
Increasing Scene Complexity: Distributed Vectorized View Culling can be summarized as: we optimized a slow part of our code in all the obvious ways, and it got a lot faster. Shock.
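The "obvious ways" are still worth writing down. Here's a minimal sketch of vectorized view culling, my own SSE example assuming SoA sphere data and inward-facing plane normals, nothing from their code: test four bounding spheres against all six frustum planes at once.

```cpp
#include <xmmintrin.h>  // SSE

// Each plane is ax + by + cz + d = 0 with the normal pointing into
// the frustum, stored structure-of-arrays across the six planes.
struct Frustum { float a[6], b[6], c[6], d[6]; };

// Cull 4 bounding spheres at once. Sphere centers and radii are SoA:
// x[i], y[i], z[i], r[i]. Returns a 4-bit mask; bit i set means sphere
// i is completely outside the frustum (safe to cull).
int cull4(const Frustum &f,
          const float *x, const float *y, const float *z, const float *r)
{
    __m128 cx = _mm_loadu_ps(x), cy = _mm_loadu_ps(y);
    __m128 cz = _mm_loadu_ps(z), cr = _mm_loadu_ps(r);
    __m128 outside = _mm_setzero_ps();

    for (int p = 0; p < 6; p++) {
        // Signed distance from each of the 4 centers to plane p.
        __m128 dist = _mm_add_ps(
            _mm_add_ps(_mm_mul_ps(_mm_set1_ps(f.a[p]), cx),
                       _mm_mul_ps(_mm_set1_ps(f.b[p]), cy)),
            _mm_add_ps(_mm_mul_ps(_mm_set1_ps(f.c[p]), cz),
                       _mm_set1_ps(f.d[p])));

        // A sphere is outside if dist < -radius for any plane.
        __m128 negr = _mm_sub_ps(_mm_setzero_ps(), cr);
        outside = _mm_or_ps(outside, _mm_cmplt_ps(dist, negr));
    }
    return _mm_movemask_ps(outside);
}
```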
After that, I bailed and went to Compiler Techniques for Rendering (slides are also available).
I missed the first couple of talks, but I did arrive in time for AnySL. They have an interesting project: essentially an N:M compiler, with N shading languages in the front and M execution targets out the back. It seems like we should connect their OpenCL work with various open-source efforts that are underway, especially if their claim of beating Intel's (CPU) OpenCL compiler on almost all kernels holds up.
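The N:M structure is the interesting part: language front-ends and target back-ends meet at a common intermediate representation (LLVM IR, in their case), so the work is N + M components instead of N × M compilers. A skeletal sketch of that shape, assumed structure for illustration only, not AnySL's actual code:

```cpp
struct IR;  // common intermediate representation (LLVM IR for AnySL)

// One of these per source language: RenderMan SL, GLSL, plain C, ...
struct Frontend {
    virtual IR *compile(const char *source) = 0;
    virtual ~Frontend() {}
};

// One of these per execution target: scalar x86, SSE-packetized, GPU, ...
struct Backend {
    virtual void emit(const IR *ir) = 0;
    virtual ~Backend() {}
};

// Any front-end can be paired with any back-end: N + M pieces of code
// cover all N * M language/target combinations.
void build(Frontend &fe, Backend &be, const char *src)
{
    be.emit(fe.compile(src));
}
```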
Automatic Bounding of Shaders for Efficient Global Illumination was interesting, but I don't think it has anything directly applicable to our GLSL compiler work. However, it did give me some ideas to try for a real-time light integrator that I've been working on.
Compilation for GPU Accelerated Ray Tracing in OptiX looked mostly like their talk from SIGGRAPH last year. I don't recall any mention of the previous RTSL work, and that paper was an interesting read. There are a couple of built-in functions in that language that are useful; I've open-coded a couple of them in the past...
The final session of the conference was Real-Time Rendering Hardware.
Clipless Dual-Space Bounds for Faster Stochastic Rasterization and Decoupled Sampling for Graphics Pipelines were related pieces of work; each solved a different part of the automatic defocus and motion-blur problem. I like the idea of having the hardware assist with these in much the same way it assists with antialiasing. While it's trivial to expose MSAA to the developer (it's mostly transparent), it's not clear to me how to expose motion blur or, to a lesser degree, defocus blur. Given all the weird ways that people implement animation in real-time systems, how can the API directly expose a time dimension as a shader parameter?
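To make the question concrete, the only shape I can imagine for such an API is something like the following. This is purely hypothetical; every name here is invented. The application supplies transforms at shutter open and close, the rasterizer picks a stochastic t per sample, and the shader sees it as an input. The problem is the first struct: skinned or scripted animation doesn't reduce to two matrices.

```cpp
// Purely hypothetical API sketch; nothing like this exists today.
struct Matrix4 { float m[16]; };

// Per-draw motion data: transforms at both ends of the exposure.
// The hardware would interpolate between them and rasterize each
// sample at a stochastic time t in [0, 1).
struct MotionDraw {
    Matrix4 mvpShutterOpen;    // t = 0
    Matrix4 mvpShutterClose;   // t = 1
};

// On the shader side, t would have to appear as a built-in input,
// something like (hypothetical GLSL):
//
//     in float gl_SampleTime;   // stochastic t for this sample
//
// and that's where it breaks down: arbitrary animation systems can't
// express their motion as anything the rasterizer can interpolate.
```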
Spark: Modular, Composable Shaders for Graphics Hardware echoes a lot of concerns that I've had for a few years. That people have to machine-generate 10,000 shaders or use #define madness to specialize 10,000 variations of their shaders shows that we haven't given them a useful system. Even without that, it's pretty much impossible to have separation of concerns in a shader stack. The way that OSL allows shaders to be composed solves some of this; Spark takes a different approach.
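The 10,000-shader number isn't hyperbole; it's just combinatorics. A minimal sketch (with hypothetical feature flags) of how the machine-generation side usually works: the engine prepends a #define block per feature combination, each independent flag doubles the variant count, and 13 or 14 flags is all it takes.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical feature flags; each on/off combination is a separate
// compiled shader variant.
static const char *flags[] = {
    "USE_NORMAL_MAP", "USE_SHADOW_MAP", "USE_FOG", "USE_SKINNING",
};

int main()
{
    const unsigned n = sizeof(flags) / sizeof(flags[0]);
    std::vector<std::string> variants;

    // Generate the #define preamble for every combination: 2^n of them.
    for (unsigned mask = 0; mask < (1u << n); mask++) {
        std::string preamble;
        for (unsigned i = 0; i < n; i++)
            if (mask & (1u << i))
                preamble += std::string("#define ") + flags[i] + "\n";
        variants.push_back(preamble /* + uber-shader source */);
    }

    // 4 flags yield 16 variants; 13 yield 8192; 14 yield 16384.
    printf("%zu shader variants\n", variants.size());
    return 0;
}
```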
Physically Based Real-Time Lens Flare Rendering goes in the "I should try to implement that" bin. Bastards! It was nice to end the conference with pretty pictures.