Light visibility data
When a light source casts shadows, Enlighten needs light visibility data to avoid unwanted bounce light in shadowed areas.
This visibility data contains one bit for each input sample point on the surface of a radiosity system. The bit for a given sample point indicates whether the light casts a shadow on that point.
We recommend computing this visibility data in advance for light sources that do not move. For light sources that move, we recommend sampling existing shadow maps or ray tracing on the GPU to produce this visibility data. An efficient way to compute the visibility data for a moving light is to use a GPU pixel or compute shader that reads from shadow maps.
Algorithm
- Extract sample positions from the Enlighten workspace
- Pack extracted positions into texture or compute buffer
- Execute compute or pixel shader
- Project samples into light space
- Perform depth comparison of samples against a shadow map
- Collate visibility into contiguous bit values
- Write results into the output buffer / render target
- Read contents of output buffer / render target into CPU memory
- Pass CPU memory pointers to the input lighting solver
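A minimal CPU-side sketch of this flow is shown below. PackSamplePositions, DispatchVisibilityShader, ReadbackToCpu, GpuBuffer and ShadowMapView are hypothetical placeholders for your engine's graphics API; only the Enlighten calls are part of the SDK.

// Hedged sketch of the per-light, per-system visibility update.
// The Gpu* helpers are hypothetical placeholders for engine-specific graphics code.
void UpdateLightVisibility(const Enlighten::InputWorkspace* inputWorkspace,
                           const ShadowMapView& shadowMap,   // hypothetical
                           uint32_t* visibilityOut)          // CPU-side result buffer
{
    // Steps 1-2: extract sample positions and pack them into a GPU buffer
    const Geo::s32 numSamples = Enlighten::GetNumberOfPointsInInputWorkspace(inputWorkspace);
    GpuBuffer positions = PackSamplePositions(inputWorkspace, numSamples);            // hypothetical

    // Steps 3-7: project into light space, depth-test against the shadow map,
    // and collate the per-sample results into 32-bit words in the output buffer
    GpuBuffer output = DispatchVisibilityShader(positions, shadowMap, numSamples);    // hypothetical

    // Step 8: read the packed bits back into CPU memory
    ReadbackToCpu(output, visibilityOut);                                             // hypothetical

    // Step 9: visibilityOut can now be passed to the input lighting solver
}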
Visibility format
The input lighting solver reads the visibility as a stream of bits, where each bit in the buffer represents the visibility of one sample point on the geometry for a given light. Each system requires a separate visibility buffer for each light. The size of the visibility buffer is determined by the number of sample positions in the system, using the following function:
Geo::s32 perLightSizeForSystem = CalcLightVisibilitySize(inputWorkspace, VisibilityFormat::PER_DUSTER_VISIBILITY);
Since each bit in the visibility buffer represents visibility for a point on the geometry, the order of the bits in the buffer needs to match the order of the positions returned by the Enlighten API.
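As a concrete illustration of the layout, the sketch below allocates one buffer for a single light and sets the bit for one sample point. Only CalcLightVisibilitySize is the documented Enlighten call; the byte and bit addressing (least significant bit first within each byte) is an assumption of this example, not a documented guarantee.

// One visibility buffer per (system, light) pair.
const Geo::s32 perLightSize = CalcLightVisibilitySize(inputWorkspace, VisibilityFormat::PER_DUSTER_VISIBILITY);
std::vector<uint8_t> lightVisibility(perLightSize, 0);

// Mark sample point 'sampleIndex' as visible to the light.
// Bit i corresponds to the i-th position returned by the Enlighten API;
// the bit-within-byte ordering shown here is illustrative only.
void SetSampleVisible(uint8_t* visibility, Geo::s32 sampleIndex)
{
    visibility[sampleIndex >> 3] |= static_cast<uint8_t>(1u << (sampleIndex & 7));
}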
Input sample positions
Input sample positions are placed on the surfaces that are included in the radiosity computation. The mesh that is drawn may differ slightly from the mesh included in the radiosity computation when:
- the mesh has multiple LOD levels, or
- the mesh has a simplified target mesh proxy
To ensure accurate shadowing when sampling your shadow map, make sure that the input sample positions are based on the same mesh that was drawn into the shadow map.
The simplest solution is to render shadows with the same LOD that was included in the radiosity computation, and call GetInputWorkspacePositionArray to obtain the input sample positions.
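For example, the positions might be fetched and uploaded as follows. This is a minimal sketch: it assumes GetInputWorkspacePositionArray returns one Geo::v128 position per input sample point (check the API reference for the exact signature in your SDK version), and UploadSamplePositions stands in for your engine's buffer upload.

// Fetch the base input sample positions for the system.
const Geo::s32 numSamples = Enlighten::GetNumberOfPointsInInputWorkspace(inputWorkspace);
const Geo::v128* samplePositions = GetInputWorkspacePositionArray(inputWorkspace);

// Pack them into a texture or structured buffer for the visibility shader.
UploadSamplePositions(samplePositions, numSamples); // hypothetical engine-side helper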
Projected sample positions
If you need to draw a different LOD into your shadow map than was included in the radiosity computation, use the projected sample positions. To generate these positions, add the includeProjectedPointData parameter when you export the Enlighten scene.
Call GetInputWorkspaceProjectedPointVersion to obtain the projected positions for a given instance and LOD.
The following function from GpuSpotlightVisibilityEngine.cpp is used in GeoRadiosity to extract projected sample points:
//--------------------------------------------------------------------------------------------------
void GEO_CALL PatchDusterPositionsForLodLevel(Geo::v128* dusterPositions, const Enlighten::InputWorkspace* inputWorkspace, Geo::s32 lodLevel)
{
    // get a list of all the instance ids in the system
    s32 numInstanceIds = 0;
    GetInputWorkspaceNumInstanceIds(inputWorkspace, &numInstanceIds);
    if (numInstanceIds > 0)
    {
        // check the max number of points per instance is sensible
        const s32 numDusterPoints = Enlighten::GetNumberOfPointsInInputWorkspace(inputWorkspace);
        {
            s32 maxPointsAnyInstance = 0;
            const bool gotMaxPoints = GetInputWorkspaceMaxProjectedPointsInAnyInstance(inputWorkspace, &maxPointsAnyInstance);
        }

        // array for instance ids
        GeoAutoPtr<s32, GeoDeleteArrayDestructor<s32> > instanceIds(GEO_NEW_ARRAY(s32, numInstanceIds));

        // arrays for patched positions and indices
        GeoAutoPtr<s32, GeoDeleteArrayDestructor<s32> > pointIndices(GEO_NEW_ARRAY(s32, numDusterPoints));
        GeoAutoPtr<v128, GeoDeleteArrayDestructor<v128> > pointPositions(GEO_NEW_ARRAY(v128, numDusterPoints));

        GetInputWorkspaceInstanceIds(inputWorkspace, instanceIds.GetPtr());

        for (s32 i = 0; i < numInstanceIds; ++i)
        {
            // dig out the points for this instance
            const s32 instanceId = instanceIds[i];
            s32 numProjectedPoints = 0;
            GetInputWorkspaceProjectedPointVersion(inputWorkspace, instanceId, lodLevel, pointIndices.GetPtr(), pointPositions.GetPtr(), &numProjectedPoints);

            // apply the patch (may be zero length if there is no data for this version)
            for (s32 p = 0; p < numProjectedPoints; ++p)
            {
                const s32 ii = pointIndices[p];
                dusterPositions[ii] = pointPositions[p];
            }
        }
    }
}
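The function above might be used as shown below. The std::vector copy of the base positions and the shadowMapLodLevel variable are illustrative, not part of the Enlighten API, and real code should take care with Geo::v128 alignment.

// Start from the base sample positions, then patch them to match the LOD that
// was actually drawn into the shadow map.
const Geo::s32 numSamples = Enlighten::GetNumberOfPointsInInputWorkspace(inputWorkspace);
const Geo::v128* basePositions = GetInputWorkspacePositionArray(inputWorkspace);

std::vector<Geo::v128> dusterPositions(basePositions, basePositions + numSamples);
PatchDusterPositionsForLodLevel(dusterPositions.data(), inputWorkspace, shadowMapLodLevel);

// dusterPositions now matches the shadow-map mesh and can be uploaded to the GPU.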
Shaders
For an example of a visibility pixel shader, see GpuSpotlightVisibility.cg in the Enlighten solution. This example shader is written for maximum compatibility and uses a render target with four 8-bit components, which may not be the most efficient format to render to on a given target platform.
In this example, each pixel calculates the visibility for 32 samples and reduces them to a single 32-bit value, which is written as the output of the pixel shader. The output texture used in our sample code is 32 pixels wide, with the height determined by the number of rows required to fit the entire visibility buffer.
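For example, the render target dimensions could be derived as follows (illustrative arithmetic, rounding the height up to whole rows):

// Each pixel packs 32 samples and the target is 32 pixels wide,
// so each row of the render target covers 32 * 32 = 1024 samples.
const Geo::s32 samplesPerPixel = 32;
const Geo::s32 targetWidth     = 32;
const Geo::s32 samplesPerRow   = samplesPerPixel * targetWidth;                     // 1024
const Geo::s32 targetHeight    = (numSamples + samplesPerRow - 1) / samplesPerRow;  // round up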
A compute shader implementation of GPU visibility would be more efficient: each thread could compute the visibility of a single sample, and the visibility results of 32 threads could then be combined into a single 32-bit value. These values would then be written to an output buffer or texture so that they can be fed as input to the input lighting solver.
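Logically, that 32-to-1 reduction amounts to the bit packing shown below, written here in C++ for clarity rather than as shader code; in a compute shader the same combination would typically use shared memory or a wave/subgroup reduction. The least-significant-bit-first ordering is an assumption of this sketch.

// Pack 32 per-sample visibility results into a single 32-bit word,
// with sample 0 in the least significant bit (ordering is illustrative).
uint32_t PackVisibilityWord(const bool* sampleVisible /* 32 entries */)
{
    uint32_t word = 0;
    for (int bit = 0; bit < 32; ++bit)
    {
        if (sampleVisible[bit])
            word |= (1u << bit);
    }
    return word;
}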