
Per pixel probe lighting


Per pixel probe lighting provides the highest possible effective indirect lighting resolution in areas close to the viewer, but automatically reduces the effective resolution in distant areas.

The technique uses a 3D virtual texture, implemented with an indirection clipmap texture which indexes into an atlas of tiles. The indirection texture and atlas are updated efficiently using compute shaders.

This technique provides accurate lighting across very large meshes. The runtime cost increases only with the number of probes updated, regardless of the number of objects that are lit by probes.

To use per pixel probe lighting:

  • The High Level Runtime must be configured to enable the Entire Probe Set Solver. 
  • Automatic probe placement must be used.
  • L1 SH probe output must be used.
  • Floating point probe output must be used.

Use the High Level Runtime update manager to update the virtual texture with the results of the radiosity computation. To render indirect lighting, sample the probe SH coefficients from the virtual texture and evaluate SH lighting in the direction of the pixel normal.

Initial setup

At the same point in your pipeline that you process the Enlighten runtime data, load the IPrecompProbeAtlasMaxima object to obtain the maximum runtime memory footprint of per pixel probe lighting data. 

For most implementations, we recommend using the precompute pipeline API both to run the precompute and to load the output objects.

// path: "precomp/ProbeOctree_[paramset name].pam"
const Enlighten::IPrecompProbeAtlasMaxima* atlasMaxima
	= Geo::LoadInterface<Enlighten::IPrecompProbeAtlasMaxima>(path);

PppiAtlasFootprint footprint = atlasMaxima->GetProbeAtlasMaxima();

To configure the 3D virtual texture, call MakePppiConfiguration with the PppiAtlasFootprint obtained earlier. When constructing the High Level Runtime update manager, assign this configuration to UpdateManagerProperties::m_PppiConfiguration.

	Enlighten::UpdateManagerProperties properties;
	properties.m_ProbeOutputFormat = Enlighten::PROBE_OUTPUT_FORMAT_FP16;
	properties.m_PppiConfiguration = Enlighten::MakePppiConfiguration(footprint);

Call GetPppiRequiredOutputTextures with the same configuration to find the size and format of the required output textures. 

	Enlighten::PppiOutputTextureRequirements requirements = GetPppiRequiredOutputTextures(properties.m_PppiConfiguration);

Create rendering resources of the required size and format for each of the output textures.
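
For example, with Direct3D 11 (which the Sample Runtime also targets) an output texture could be created as sketched below. This is a minimal sketch, assuming a 3D texture: the width, height, depth and format arguments are placeholders for the values reported by PppiOutputTextureRequirements, whose member names are not shown in this section.

// Minimal Direct3D 11 sketch: create one GPU-only 3D output texture that the
// Enlighten compute shaders can write (UAV) and pixel shaders can sample (SRV).
// The width/height/depth/format arguments are placeholders for the values
// reported by PppiOutputTextureRequirements.
#include <d3d11.h>

ID3D11Texture3D* CreatePppiOutputTexture(
	ID3D11Device* device,
	UINT width, UINT height, UINT depth, DXGI_FORMAT format)
{
	D3D11_TEXTURE3D_DESC desc = {};
	desc.Width = width;
	desc.Height = height;
	desc.Depth = depth;
	desc.MipLevels = 1;
	desc.Format = format;
	desc.Usage = D3D11_USAGE_DEFAULT; // GPU access only
	desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS;

	ID3D11Texture3D* texture = nullptr;
	device->CreateTexture3D(&desc, nullptr, &texture);
	return texture;
}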

Compute shader update

Configure the output textures to optimise for access only by the GPU.

To obtain the maximum required size of each input buffer for the compute shaders, call GetPppiMaximumClipmapUpdateRequirements and GetPppiMaximumAtlasUpdateRequirements with the PppiAtlasFootprint obtained earlier. Create these input buffers at load time.
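
For example, with Direct3D 11 each input buffer could be a CPU-writable buffer bound as a shader resource, as sketched below. The sizeInBytes argument is a placeholder for the maximum size reported by the requirements queries, and the bind and misc flags must match how the Enlighten compute shaders declare their inputs.

// Minimal Direct3D 11 sketch: create one CPU-writable input buffer of the
// maximum required size. sizeInBytes is a placeholder for the value reported by
// GetPppiMaximumClipmapUpdateRequirements or GetPppiMaximumAtlasUpdateRequirements;
// adjust the flags to match how the Enlighten compute shaders declare the buffer.
#include <d3d11.h>

ID3D11Buffer* CreatePppiInputBuffer(ID3D11Device* device, UINT sizeInBytes)
{
	D3D11_BUFFER_DESC desc = {};
	desc.ByteWidth = sizeInBytes;
	desc.Usage = D3D11_USAGE_DYNAMIC;            // written by the CPU each update
	desc.BindFlags = D3D11_BIND_SHADER_RESOURCE; // read by the compute shaders
	desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

	ID3D11Buffer* buffer = nullptr;
	device->CreateBuffer(&desc, nullptr, &buffer);
	return buffer;
}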

Implement IPppiComputeUpdateHandler::UpdateClipmap and IPppiComputeUpdateHandler::UpdateAtlas to upload data to the input buffers.

Before rendering, bind the input buffers and output textures and dispatch the compute shaders:

  1. Dispatch a GPU command to copy solved probe coefficients to the start of the probe coefficient buffer.
  2. Dispatch GeoRuntime/Resources/PppiInterpolate.hlsl to fill the rest of the probe coefficient buffer with interpolated probe coefficients.
  3. Dispatch GeoRuntime/Resources/PppiClipmapUpdate.hlsl and GeoRuntime/Resources/PppiAtlasUpdate.hlsl to update the virtual texture.
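
The sketch below shows the shape of this per-frame GPU work with Direct3D 11. The resource bindings, copy parameters and thread group counts are omitted or passed in as placeholders, because they depend on how your engine compiles and binds the Enlighten shaders.

// Sketch of the dispatch order with Direct3D 11. Shader resource and UAV
// bindings are omitted, and the thread group counts are placeholders supplied
// by the caller; both depend on your integration of the Enlighten shaders.
#include <d3d11.h>

void DispatchPppiUpdate(ID3D11DeviceContext* context,
	ID3D11ComputeShader* interpolateCs,    // PppiInterpolate.hlsl
	ID3D11ComputeShader* clipmapUpdateCs,  // PppiClipmapUpdate.hlsl
	ID3D11ComputeShader* atlasUpdateCs,    // PppiAtlasUpdate.hlsl
	UINT interpolateGroups, UINT clipmapGroups, UINT atlasGroups)
{
	// 1. Copy solved probe coefficients to the start of the probe coefficient
	//    buffer, for example with CopySubresourceRegion.

	// 2. Fill the rest of the probe coefficient buffer with interpolated
	//    probe coefficients.
	context->CSSetShader(interpolateCs, nullptr, 0);
	context->Dispatch(interpolateGroups, 1, 1);

	// 3. Update the indirection clipmap texture and the atlas of tiles.
	context->CSSetShader(clipmapUpdateCs, nullptr, 0);
	context->Dispatch(clipmapGroups, 1, 1);

	context->CSSetShader(atlasUpdateCs, nullptr, 0);
	context->Dispatch(atlasGroups, 1, 1);
}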

A PppiUpdateData object consists of multiple buffers which must be combined into a single input buffer. In most scenarios we recommend calling PppiUpdateData::CopyCombined to do this, but you can also access the buffers directly if you prefer.

Provide a pointer to your implementation of IPppiComputeUpdateHandler when calling IUpdateManager::Update.

The Sample Runtime application includes a simplified example implementation within PppiDx11TextureUpdateHandler.
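
For orientation, here is a much reduced sketch of such a handler. The parameter lists of UpdateClipmap, UpdateAtlas and PppiUpdateData::CopyCombined are assumptions in this sketch, as are the Map/Unmap helpers; refer to the Enlighten headers and PppiDx11TextureUpdateHandler for the real signatures.

// Illustrative sketch only: the method and CopyCombined parameters below are
// assumptions, not the real Enlighten signatures, and the Map/Unmap helpers
// stand in for your engine's buffer upload path.
class MyPppiUpdateHandler : public Enlighten::IPppiComputeUpdateHandler
{
public:
	// Called by IUpdateManager::Update when the indirection clipmap needs new data.
	void UpdateClipmap(const Enlighten::PppiUpdateData& updateData)
	{
		void* destination = MapClipmapInputBuffer();  // hypothetical engine helper
		updateData.CopyCombined(destination);         // combine the source buffers
		UnmapClipmapInputBuffer();
		m_ClipmapDirty = true; // dispatch PppiClipmapUpdate.hlsl before rendering
	}

	// Called by IUpdateManager::Update when atlas tiles need new data.
	void UpdateAtlas(const Enlighten::PppiUpdateData& updateData)
	{
		void* destination = MapAtlasInputBuffer();    // hypothetical engine helper
		updateData.CopyCombined(destination);
		UnmapAtlasInputBuffer();
		m_AtlasDirty = true;   // dispatch PppiAtlasUpdate.hlsl before rendering
	}

private:
	bool m_ClipmapDirty = false;
	bool m_AtlasDirty = false;
};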

If your engine implements dynamically updated vertex buffers, you may be able to reuse that implementation for the compute shader input buffers.

On platforms which allow it, we recommend using shared GPU memory and writing directly to the input buffers from the CPU. Your implementation is responsible for ensuring correct synchronisation, to prevent writes to a buffer that is still in use by the GPU.
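
One common way to arrange this is sketched below with a Direct3D 12 fence: keep a small ring of persistently mapped input buffers and only reuse a slot once the GPU has finished the frame that last read it. None of this is Enlighten API; adapt it to your platform's synchronisation primitives.

// Sketch: triple-buffered CPU writes guarded by a Direct3D 12 fence, so the
// CPU never overwrites an input buffer that the GPU is still reading.
// Buffer creation and error handling are omitted; this is not Enlighten API.
#include <windows.h>
#include <d3d12.h>
#include <cstdint>

constexpr uint32_t kFrameCount = 3;

struct PppiInputRing
{
	ID3D12Fence* m_Fence = nullptr;         // signalled with the frame number
	HANDLE       m_Event = nullptr;         // created with CreateEvent
	void*        m_CpuAddress[kFrameCount]; // persistently mapped upload buffers
	uint64_t     m_FrameIndex = 0;

	// Returns CPU-writable memory for this frame's Enlighten input data.
	void* BeginWrite()
	{
		if (m_FrameIndex >= kFrameCount)
		{
			// Wait until the GPU has finished the frame that last used this slot.
			const uint64_t mustComplete = m_FrameIndex - kFrameCount + 1;
			if (m_Fence->GetCompletedValue() < mustComplete)
			{
				m_Fence->SetEventOnCompletion(mustComplete, m_Event);
				WaitForSingleObject(m_Event, INFINITE);
			}
		}
		return m_CpuAddress[m_FrameIndex % kFrameCount];
	}

	// Call after submitting the command lists that read this frame's buffer.
	void EndFrame(ID3D12CommandQueue* queue)
	{
		++m_FrameIndex;
		queue->Signal(m_Fence, m_FrameIndex);
	}
};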

Each frame

Before drawing meshes, call IUpdateManager::Update. Specify the view origin to obtain full resolution indirect lighting close to the viewer.

As an optional optimisation, limit the LOD distance to trade lighting accuracy for faster updates.
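
For orientation only, the per-frame call might look roughly like the sketch below. How the view origin, LOD distance limit and update handler are passed to IUpdateManager::Update is an assumption here, so check the IUpdateManager header for the real signature.

// Sketch of the per-frame flow. The Update parameters shown here are
// assumptions, not the real IUpdateManager::Update signature.
void UpdateEnlightenPppi(Enlighten::IUpdateManager* updateManager,
	const Geo::v128& cameraPosition,
	MyPppiUpdateHandler* updateHandler)
{
	// Full resolution indirect lighting is produced close to the view origin.
	// A smaller LOD distance trades lighting accuracy for faster updates.
	updateManager->Update(/* viewOrigin  = */ cameraPosition,
	                      /* lodDistance = */ 200.0f,
	                      /* handler     = */ updateHandler);

	// ...then dispatch the Pppi compute shaders and draw meshes.
}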

If you call IUpdateManager::Update for different views on alternating frames, you may incur unnecessarily expensive indirection texture updates.

If the two views are very similar, provide the average view origin to IUpdateManager::Update. If the views are very different, create one instance of the update manager for each view.

Render indirect lighting

To sample the virtual texture in a pixel shader, use the output textures and the shader parameters returned by IUpdateManager::Update with the SamplePppiVirtualTexture function in GeoRuntime/Resources/PppiCommon.cg.

Excerpt: PppiCommon.cg
struct PppiSample
{
	float4 R;
	float4 G;
	float4 B;
	float InverseValidity;
};

PppiSample SamplePppiVirtualTexture(float3 worldPosition, float3 viewOrigin, float dither)
{
	...
}

The worldPosition argument is the world space position of the pixel. The viewOrigin argument is the position of the camera, used to determine the level of detail.

The dither argument is a per pixel "random" value between zero and one, used to dither between detail levels. For best results we recommend using a low discrepancy noise function such as Interleaved Gradient Noise.

The PppiSample members R, G and B contain the L1 SH coefficients for each colour channel. Use these to evaluate SH lighting in the direction of the pixel normal.

In areas close to invalid (culled) probes, the lighting result is darkened. To compensate for this, multiply the lighting result by InverseValidity.
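
For example, a pixel shader might combine the sample as sketched below. This is a sketch rather than code from PppiCommon.cg: it assumes the linear L1 terms are stored in .xyz and the constant term in .w, so verify the packing convention against PppiCommon.cg for your Enlighten version.

// Sketch: evaluate L1 SH lighting in the direction of the pixel normal.
// Assumes .xyz holds the linear SH terms and .w the constant term; verify
// this packing against PppiCommon.cg before use.
float3 EvaluatePppiLighting(PppiSample pppi, float3 normal)
{
	float3 lighting;
	lighting.r = max(0.0, pppi.R.w + dot(pppi.R.xyz, normal));
	lighting.g = max(0.0, pppi.G.w + dot(pppi.G.xyz, normal));
	lighting.b = max(0.0, pppi.B.w + dot(pppi.B.xyz, normal));

	// Compensate for darkening near invalid (culled) probes.
	return lighting * pppi.InverseValidity;
}

// Usage in a pixel shader:
// PppiSample pppi = SamplePppiVirtualTexture(worldPosition, viewOrigin, dither);
// float3 indirect = EvaluatePppiLighting(pppi, normalize(worldNormal)) * albedo;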