Local IBL reflections
High resolution sky reflections
When you specify the emissive environment resolution for a cubemap, Enlighten outputs low resolution sky lighting to the cubemap. Alternatively, you can use the alpha value in the cubemap to render a high resolution sky for reflections.
To preserve the alpha value in the Enlighten cubemap, don't call BaseCubeMap::SetEmissiveEnvironment() when you add the Enlighten cubemap data to the Update Manager. With this change, Enlighten does not output the sky lighting to the cubemap when you call IUpdateManager::EnqueueUpdateEmissiveEnvironment().
Parallax and local reflections
Cube maps simply record the view of the world as seen from the cube map centre. By themselves they do not tell us anything about how far away the surfaces in that view are. This information is important if we are to place a reflection accurately in the world. If we provide no additional depth information and simply look up a colour with the reflection vector, we are effectively assuming that everything seen in the cube map is infinitely distant. If we translate an object in world space, the reflections we see on the object from the same angle should change when the surfaces seen in the reflection are nearby. To capture this, we need to provide some form of additional depth information.
If we were to compute a specular reflection through ray tracing, we would be tracing a ray from the world space location of the sample along the reflection vector to find the first surface seen along the ray. This is the process we will approximate. A simple and effective approach is to approximate the depth of all surfaces seen by the cube map with a box or similar primitive. The primitive should be a reasonable match for the actual geometry in the scene, but cheap enough to intersect a ray with. In the shader we perform the ray intersection between the reflection ray and the box to find a collision point, then use the vector from the sample location to the collision point as the lookup vector for the cube map. This places all reflections on the surface of the box, so where the box matches the actual scene geometry we will get an accurate reflection.
This technique has been documented previously under different names, but appears to be best known as "box projection". Notable references include:
- Sebastien Lagarde and Antoine Zanuttini in their SIGGRAPH 2012 talk, "Local Image Based Lighting with Parallax-correct Cube Map". This presents a longer description of the same topic and their results from using the technique in a console title. http://seblagarde.wordpress.com/2012/08/11/siggraph-2012-talk/
- Bartosz Czuba, describing his implementation of box projection in a GameDev.net post. http://www.gamedev.net/topic/568829-box-projected-cubemap-environment-mapping/
- Kevin Bjorke, in the GPU Gems article "Image Based Lighting". This uses a sphere as a depth primitive, which is cheaper but less likely to be a good approximation for depth. http://developer.download.nvidia.com/books/HTML/gpugems/gpugems_ch19.html
You can see the difference including depth makes in the example below.
[Image comparison: without depth | with depth using box projection]
Box projection
Our example implementation of box projection can be found in SpecularCubeMap.cg in the Resources folder of the GeoRender library. It is used by the PS_Radiosity_Body() function in Deferred.cg in the same library. Our implementation supports oriented boxes, and allows the cube map centre to be different to the centre of the box. This flexibility adds some instruction overhead but allows better approximations of the scene depth.
The projection itself is implemented by the CubeMapSpecular function:
// Compute the cube map value using box projection, all in cube map space.
float4 CubeMapSpecular(samplerCUBE cubeMap, float3 sampleCentre, float3 boxHalfExtents, float3 cubeMapCentre, float3 reflectionVec, float roughness, int numMips)
{
    // In brief, this does 6 ray-plane intersections.
    // Some of them won't collide properly (ray and plane are parallel) and generate infs,
    // but these are filtered out by the comparisons.
    const float3 sampleCentreToBoxMax = (+boxHalfExtents) - sampleCentre;
    const float3 sampleCentreToBoxMin = (-boxHalfExtents) - sampleCentre;
    const float3 rRecip = (1.0 / reflectionVec);
    const float3 rayBoxMax = sampleCentreToBoxMax * rRecip;
    const float3 rayBoxMin = sampleCentreToBoxMin * rRecip;
    // now back-face cull 3 of the planes
    const float3 rayBoxFront = (reflectionVec > 0.0) ? rayBoxMax : rayBoxMin;
    // and pick the closest intersection point
    const float rayBoxClosest = min(min(rayBoxFront.x, rayBoxFront.y), rayBoxFront.z);
    // reconstruct the intersection point in cube map space, and re-centre on the cube map origin
    const float3 collisionPoint = sampleCentre + reflectionVec * rayBoxClosest;
    const float3 vecFromBoxCentre = collisionPoint - cubeMapCentre;
    // return the cubemap lookup, potentially after decoding HDR format
    const float mip = ComputeMipFromRoughness( roughness, numMips );
    return DecodeIrradianceValue( texCUBElod( cubeMap, float4(vecFromBoxCentre, mip) ) );
}
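For illustration, a caller might look like the sketch below; the transform parameters and the wrapper function are assumptions for the purposes of the example rather than the Enlighten shader interface. Positions are rotated and translated into cube map space, while directions are only rotated (see the explanation that follows).

// Hypothetical caller (illustrative): transform world space inputs into cube map space
// before evaluating the projection.
float4 EvaluateBoxProjectedReflection(samplerCUBE cubeMap,
                                      float3 worldSamplePos, float3 worldReflectionVec,
                                      float3 boxCentreWorld, float3x3 worldToCubeRotation,
                                      float3 boxHalfExtents, float3 cubeMapCentre,
                                      float roughness, int numMips)
{
    // Positions are rotated and translated; directions are only rotated.
    const float3 cubeSpaceSamplePos = mul(worldToCubeRotation, worldSamplePos - boxCentreWorld);
    const float3 cubeSpaceReflection = mul(worldToCubeRotation, worldReflectionVec);
    return CubeMapSpecular(cubeMap, cubeSpaceSamplePos, boxHalfExtents,
                           cubeMapCentre, cubeSpaceReflection, roughness, numMips);
}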
This function accepts coordinates in 'cube map space' rather than world space. This is the space in which the box centre is the origin and the coordinate axes align with the box axes. This allows the box to be represented by a single half extents vector (the vector from the box centre to the maximum corner in cube map space). The sample position and reflection vector both need to be transformed into cube map space up front. The implementation can be derived by describing the 6 ray-plane intersections for the box and simplifying. Rays parallel to a plane will generate an inf during the reciprocal; these should be filtered out by the back-face culling and closest-intersection tests, but some devices have non-standard behaviour for inf. If this does prove to be problematic, the reciprocal can be adjusted to never divide by zero.
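One possible adjustment, as a sketch rather than the shipped implementation, is to give each component of the reflection vector a tiny minimum magnitude (preserving its sign) before taking the reciprocal. The lines below would replace the rRecip computation in CubeMapSpecular:

// Drop-in replacement for the rRecip line: never divide by exactly zero.
const float3 componentSigns = (reflectionVec >= 0.0) ? float3(1.0, 1.0, 1.0) : float3(-1.0, -1.0, -1.0);
const float3 rRecip = componentSigns / max(abs(reflectionVec), 0.000001);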
Blending
In practice we will need to combine the contributions from multiple cube maps. This could be done with a simple falloff, or more advanced blending. In our example implementation we use a bilateral-style weighting scheme. We bind the N closest cube maps to the shader, evaluate each of them and assign a weight for each. Each cube map contribution is scaled by the weight and the final sum is renormalised by the sum of the weights. To compute a weight, we take the squared distance from the sample point to the closest point inside the box, like so:
float CalcDistanceSqToClosestPointInHalfExtents(float3 p, float3 halfExtents)
{
    // return the squared distance from p to the closest point inside the AABB described by the half extents
    const float3 pInBox = min(halfExtents, max(-halfExtents, p));
    const float3 d = pInBox - p;
    return max(0, dot(d, d));
}

...

// compute the weight
float epsilon = 0.000001;
const float cubeWeight = 1.0 / (epsilon + CalcDistanceSqToClosestPointInHalfExtents(cubeSpaceSamplePos, g_CubeMapHalfExtents));
We do this calculation in cube map space, where the box is axis aligned, to keep it simple. This weighting approach ensures that when the sample point is inside a cube map the result is most influenced by that cube map, but still blends smoothly between cube maps. For further details, please see the implementation in SpecularCubeMap.cg.
Although our implementation binds cube maps to the shader per draw call, there is no dependence on the primitive and the blending could also be done in a full screen compute shader pass.
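Putting the pieces together for the simple case of two bound cube maps, the weighted blend might look like the sketch below. The parameter names are illustrative; the actual constant layout and blend are defined by the shaders in the GeoRender library.

// Illustrative weighted blend of two cube maps, with inputs already in each cube map's space.
float4 BlendTwoLocalCubeMaps(samplerCUBE cubeMap0, float3 cubeSpacePos0, float3 halfExtents0,
                             float3 cubeMapCentre0, float3 cubeSpaceReflection0, int numMips0,
                             samplerCUBE cubeMap1, float3 cubeSpacePos1, float3 halfExtents1,
                             float3 cubeMapCentre1, float3 cubeSpaceReflection1, int numMips1,
                             float roughness)
{
    const float epsilon = 0.000001;

    // Weight each cube map by the reciprocal of the squared distance to its box.
    const float w0 = 1.0 / (epsilon + CalcDistanceSqToClosestPointInHalfExtents(cubeSpacePos0, halfExtents0));
    const float w1 = 1.0 / (epsilon + CalcDistanceSqToClosestPointInHalfExtents(cubeSpacePos1, halfExtents1));

    const float4 c0 = CubeMapSpecular(cubeMap0, cubeSpacePos0, halfExtents0, cubeMapCentre0, cubeSpaceReflection0, roughness, numMips0);
    const float4 c1 = CubeMapSpecular(cubeMap1, cubeSpacePos1, halfExtents1, cubeMapCentre1, cubeSpaceReflection1, roughness, numMips1);

    // Scale each contribution by its weight and renormalise by the sum of the weights.
    return (w0 * c0 + w1 * c1) / (w0 + w1);
}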
PBR reflections
Cube maps are often used to provide specular image-based lighting (IBL) as part of a physically-based shading model. The most detailed mip of the cube map is normally pre-filtered according to the normal distribution function (NDF) used by the shading model, for a range of increasing material roughness values, and the results are stored in the mip chain of the cube map. Then, when applying specular IBL in the shader, the material roughness value for the point being shaded is used to sample the appropriate pre-filtered cube map mip.
GGX is the NDF most commonly used to pre-filter cube maps, but GGX pre-filtering is expensive and is normally performed offline or infrequently at runtime. The Enlighten cube map solver can optionally generate a full mip map chain using a very fast, simple down-sample operation. However, we have found that acceptable results can be achieved with shading models that expect GGX-filtered cube maps by using the following function to select an appropriate mip, given a material roughness value and the number of mips in the cube map:
float ComputeMipFromRoughness(float roughness, int numMips)
{
    return numMips + 2.0f * log2( roughness );
}
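For example, with 8 mips and a roughness of 0.5 this returns mip 8 + 2 × log2(0.5) = 6, while a roughness of 1.0 returns mip 8, which is outside the valid range of 0 to 7. As a usage sketch (the clamp and the wrapper are our addition, not part of the Enlighten shaders), the selected mip can be clamped before sampling:

// Illustrative usage: clamp the selected mip to the valid range before sampling.
float4 SamplePrefilteredCubeMap(samplerCUBE cubeMap, float3 lookupVec, float roughness, int numMips)
{
    const float mip = clamp(ComputeMipFromRoughness(roughness, numMips), 0.0, numMips - 1.0);
    return DecodeIrradianceValue(texCUBElod(cubeMap, float4(lookupVec, mip)));
}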
Alternatively, you could use the down-sampled cube map mip chain generated by the Enlighten cube map solver as the input to a GGX pre-filter performed on the GPU using filtered importance sampling.
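For reference, such a GPU pre-filter pass typically looks like the sketch below, which importance samples the GGX distribution around the output direction in the widely used split-sum style. The helper names and constants here are illustrative and are not part of the Enlighten libraries; in practice each output mip is rendered with an increasing roughness value, and sampling a coarser mip of the down-sampled chain as the source can reduce the number of samples needed for a stable result.

#define PI 3.14159265

// Van der Corput radical inverse, used to build a Hammersley low-discrepancy sequence.
float RadicalInverse_VdC(uint bits)
{
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return float(bits) * 2.3283064365386963e-10; // 1 / 2^32
}

float2 Hammersley(uint i, uint sampleCount)
{
    return float2(float(i) / float(sampleCount), RadicalInverse_VdC(i));
}

// Importance sample a half vector from the GGX distribution around the normal N.
float3 ImportanceSampleGGX(float2 xi, float roughness, float3 N)
{
    const float a = roughness * roughness;
    const float phi = 2.0 * PI * xi.x;
    const float cosTheta = sqrt((1.0 - xi.y) / (1.0 + (a * a - 1.0) * xi.y));
    const float sinTheta = sqrt(1.0 - cosTheta * cosTheta);
    const float3 h = float3(sinTheta * cos(phi), sinTheta * sin(phi), cosTheta);
    // Build an orthonormal basis around N and transform the sample into it.
    const float3 up = abs(N.z) < 0.999 ? float3(0.0, 0.0, 1.0) : float3(1.0, 0.0, 0.0);
    const float3 tangentX = normalize(cross(up, N));
    const float3 tangentY = cross(N, tangentX);
    return tangentX * h.x + tangentY * h.y + N * h.z;
}

// Pre-filter one output direction R for the given roughness, reading from the
// down-sampled source cube map.
float3 PrefilterGGX(samplerCUBE sourceCubeMap, float3 R, float roughness, uint sampleCount)
{
    // Standard approximation: assume the view and normal directions both equal R.
    const float3 N = R;
    const float3 V = R;
    float3 prefiltered = float3(0.0, 0.0, 0.0);
    float totalWeight = 0.0;
    for (uint i = 0u; i < sampleCount; ++i)
    {
        const float2 xi = Hammersley(i, sampleCount);
        const float3 H = ImportanceSampleGGX(xi, roughness, N);
        const float3 L = 2.0 * dot(V, H) * H - V;
        const float nDotL = saturate(dot(N, L));
        if (nDotL > 0.0)
        {
            prefiltered += texCUBElod(sourceCubeMap, float4(L, 0.0)).rgb * nDotL;
            totalWeight += nDotL;
        }
    }
    return prefiltered / max(totalWeight, 0.001);
}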