This is the documentation for Enlighten.

The low level precompute API is implemented by the EnlightenPrecomp2 library.

You may need to use the low level precompute API when you want to run the Enlighten precompute in a way that is not supported by the Enlighten Pipeline library. It is not necessary to use the low level precompute API in a typical integration of Enlighten.

Most precompute tasks are performed using the IPrecompute interface, which is created using the standalone CreatePrecompute() function. This interface then needs to be set up with license information (using SetLicense()) and some optional global precompute settings.


System assignment

Before the main precompute can begin, geometry needs to be assigned to systems, and the dependencies between systems must be defined. 

The low level precompute API provides a utility function GroupInstancesIntoSystems that can split a large collection of instances into a number of similarly sized systems, each system containing an appropriate number of radiosity pixels.
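To make the grouping idea concrete, the sketch below greedily bins instances into systems so that each system stays under a target radiosity pixel budget. This is a hypothetical illustration, not the implementation of GroupInstancesIntoSystems; the InstanceInfo type, the GroupByPixelBudget function, and the budget heuristic are all assumptions.

```cpp
#include <cassert>
#include <vector>

// Hypothetical per-instance input: an identifier plus the number of
// radiosity pixels the instance contributes.
struct InstanceInfo
{
    int m_Id;
    int m_PixelCount;
};

// Greedily fill systems up to a pixel budget; each returned vector holds the
// instance ids assigned to one system.
std::vector<std::vector<int>> GroupByPixelBudget(
    const std::vector<InstanceInfo>& instances, int pixelsPerSystem)
{
    std::vector<std::vector<int>> systems;
    std::vector<int> current;
    int currentPixels = 0;
    for (const InstanceInfo& instance : instances)
    {
        // Close the current system if adding this instance would exceed the budget.
        if (!current.empty() && currentPixels + instance.m_PixelCount > pixelsPerSystem)
        {
            systems.push_back(current);
            current.clear();
            currentPixels = 0;
        }
        current.push_back(instance.m_Id);
        currentPixels += instance.m_PixelCount;
    }
    if (!current.empty())
        systems.push_back(current);
    return systems;
}
```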

System dependencies

The low level precompute API function CalculateSystemDependencies computes the dependencies for a given system. The function checks the given system against a collection of packed system objects and classifies those systems into two categories: inner systems and outer systems:

  • Inner systems are all those systems from the systems list whose bounding boxes intersect the bounding box of the given system expanded by the expansionDistance parameter.
  • Outer systems are all the remaining systems from the systems array.
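The classification above can be sketched as a bounding box test. This is an illustrative sketch only: the Aabb type and helper functions below are assumptions, and the real function operates on packed system objects rather than raw boxes.

```cpp
#include <cassert>

// Hypothetical sketch of the inner/outer classification: a candidate system
// is "inner" if its bounding box intersects the given system's bounding box
// expanded by expansionDistance, otherwise it is "outer".
struct Aabb
{
    float m_Min[3];
    float m_Max[3];
};

// Grow a box by 'distance' in all directions.
Aabb Expand(const Aabb& box, float distance)
{
    Aabb result = box;
    for (int axis = 0; axis < 3; ++axis)
    {
        result.m_Min[axis] -= distance;
        result.m_Max[axis] += distance;
    }
    return result;
}

// Axis-aligned box overlap test.
bool Intersects(const Aabb& a, const Aabb& b)
{
    for (int axis = 0; axis < 3; ++axis)
    {
        if (a.m_Max[axis] < b.m_Min[axis] || b.m_Max[axis] < a.m_Min[axis])
            return false;
    }
    return true;
}

// Returns true if 'candidate' would be classified as an inner system of 'given'.
bool IsInnerSystem(const Aabb& given, const Aabb& candidate, float expansionDistance)
{
    return Intersects(Expand(given, expansionDistance), candidate);
}
```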

Next, visibility of the inner systems and outer systems is calculated. All systems are added to the IPrecompSystemDependencies object with the system's visibility and minimum distance data. This data can then be used to create a minimal dependency set. As an example, GeoRadiosity uses a slider to vary the visibility cutoff, illustrating the effect of this metric on system dependencies.

Visibility data is generated as follows. The geometry of the given system and all the inner systems is combined into a ray tracing world representation. The outer systems are represented only by their bounding boxes and are not part of the ray tracing world. Ray origins are generated uniformly on the surface of the system, with the density specified in the rayOriginsPerPixelArea parameter. Each ray origin is used to generate a set of rays which are then traced against the world.

The visibility of an inner system is calculated as the highest (among all ray origins) proportion of cast rays that hit that system.

Because systems from the outer systems array are not included in the ray tracing world, their visibility is calculated as the highest (among all ray origins) proportion of cast rays that (1) do not intersect anything in the ray tracing world and (2) would intersect the bounding box of the outer system if that bounding box were in the ray tracing world.
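The visibility metric can be sketched as follows. The OriginStats type and CalculateVisibility function are assumptions for illustration; the real precompute gathers these counts internally during ray tracing.

```cpp
#include <cassert>
#include <vector>

// Hypothetical per-origin counters: rays cast from this origin, and how many
// of them hit the system whose visibility we are measuring.
struct OriginStats
{
    int m_RaysCast;
    int m_RaysHit;
};

// Visibility is the highest hit proportion over all ray origins.
float CalculateVisibility(const std::vector<OriginStats>& origins)
{
    float visibility = 0.0f;
    for (const OriginStats& origin : origins)
    {
        if (origin.m_RaysCast > 0)
        {
            float ratio = static_cast<float>(origin.m_RaysHit)
                        / static_cast<float>(origin.m_RaysCast);
            if (ratio > visibility)
                visibility = ratio;
        }
    }
    return visibility;
}
```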

The outer systems do not contribute their geometry to the ray tracing world, so they do not need to be fully loaded; only their headers need to be loaded. This saves memory and improves ray tracing performance, because there is less geometry to trace against. The default value of expansionDistance is 0, which means that all systems are fully represented in the ray tracing world (the list of outer systems is empty).

Probe set dependencies

The low level precompute API function CalculateProbeSetSystemDependencies computes the dependencies of a given probe set. It works in the same way as the CalculateSystemDependencies function described above, except that:

  1. The bounding box of the probe set expanded by the expansionDistance is used to classify the candidate systems into inner systems and outer systems.
  2. Ray origins used for generating visibility rays are placed one per probe position.

Cube map dependencies

The low level precompute API function CalculateCubeMapSystemDependencies computes dependencies of a given cube map. It works in the same way as the CalculateSystemDependencies function described above, except that:

  1. The bounding box of the cube map used to classify the candidate systems into inner systems and outer systems is defined as a bounding box located at the centre of the cube map and with expansionDistance extents in all directions.
  2. Only one ray origin, at the centre of the cube map, is used for visibility checking.
  3. During visibility ray casting, all back face hits are ignored; the ray continues as if the back face hit didn't happen. This is to prevent cube maps placed within closed geometry from having no dependencies.

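The back-face rule in step 3 can be sketched as follows, assuming the ray tracer reports hits sorted by distance along the ray. The Hit type and function are illustrative assumptions, not part of the API.

```cpp
#include <cassert>
#include <vector>

// Hypothetical record of one intersection along a ray.
struct Hit
{
    float m_Distance;
    bool m_BackFace;
};

// Skip every back-face hit and continue the ray; return the distance of the
// first front-face hit, or -1.0f if the ray escapes the geometry entirely.
float TraceIgnoringBackFaces(const std::vector<Hit>& hitsAlongRay)
{
    for (const Hit& hit : hitsAlongRay)
    {
        if (!hit.m_BackFace)
            return hit.m_Distance;
        // Back-face hit: continue as if it didn't happen.
    }
    return -1.0f;
}
```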
Estimating system dependencies based on distance

There is an alternative API function that can be used to estimate system dependencies. The CalculateSystemDependenciesByDistance function calculates system dependencies among the input array of IPrecompInputSystem objects by using a distance threshold. In this method, System A depends on System B if the distance between the centre of System A's bounding box and System B's bounding box is less than or equal to maxDistanceInPixelSizeUnits multiplied by the pixelSize of System B. This function requires an array of all the IPrecompInputGeometry objects referenced in the list of input systems, in order to calculate the bounding boxes of the input systems.

This function does no ray casting, so it is quicker than the CalculateSystemDependencies function. However, the resulting dependencies are not as precise.
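The distance rule can be sketched as a point-to-box distance check. The Box type and function names below are assumptions; the real function works with IPrecompInputSystem and IPrecompInputGeometry objects.

```cpp
#include <cassert>
#include <cmath>
#include <algorithm>

// Hypothetical bounding box type.
struct Box
{
    float m_Min[3];
    float m_Max[3];
};

// Distance from a point to the nearest point of an axis-aligned box
// (zero if the point is inside the box).
float DistancePointToBox(const float point[3], const Box& box)
{
    float squared = 0.0f;
    for (int axis = 0; axis < 3; ++axis)
    {
        float clamped = std::max(box.m_Min[axis], std::min(point[axis], box.m_Max[axis]));
        float delta = point[axis] - clamped;
        squared += delta * delta;
    }
    return std::sqrt(squared);
}

// System A depends on System B if the distance from the centre of A's box to
// B's box is at most maxDistanceInPixelSizeUnits * pixelSize of B.
bool DependsOnByDistance(const Box& systemA, const Box& systemB,
                         float pixelSizeB, float maxDistanceInPixelSizeUnits)
{
    float centreA[3];
    for (int axis = 0; axis < 3; ++axis)
        centreA[axis] = 0.5f * (systemA.m_Min[axis] + systemA.m_Max[axis]);
    return DistancePointToBox(centreA, systemB) <= maxDistanceInPixelSizeUnits * pixelSizeB;
}
```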

Reporting of dependencies

The precompute attempts to identify superfluous inter-system dependencies during the Clustering stage and the LightTransport stage.

  • In the Clustering stage, dependencies that are likely to be superfluous are detected, and a warning is issued for each such dependency. Additionally, the list of candidate system dependencies is stored in the IPrecompSystemClustering interface. Note that the Clustering stage may report false positives, because the visibility testing used as the basis for the choice of dependencies is approximate. This is especially the case in scenes with large leaf clusters. It is therefore advised that this information be used with care, as acting on it may remove too many dependencies.
  • The LightTransport stage additionally reports system dependencies that are superfluous and will in fact not be used at runtime. A warning is issued for each such dependency, and the list of actual dependencies is stored in the ILightTransportOutput interface. Unlike the Clustering stage, the system dependency list produced by the LightTransport stage is accurate. For details of how to instruct the precompute to create the ILightTransportOutput interface, see Debugging the precompute.

The detected dependencies are a subset of the authored dependencies; therefore, the stages cannot suggest any dependencies that were explicitly removed during authoring. Also, this is not intended to remove the need to author dependencies: for massive scenes, the Preclustering and Clustering stages will be excessively slow without authored dependencies.

Configuring precompute parameters

Precompute parameters can be configured via the following objects:

Validation

The ValidateBuildParameters function checks the build parameters for internal consistency and for exceeding the limitations of the runtime. Current checks include:

  • Number of Clusters < 2^31 per system.
  • Number of Output Pixels < 256K per system.

The intent is to prevent a poor choice of parameters from causing a runaway precompute. Validation requires only the IPrecompInputSystem and IPrecompInputGeometry objects, so it can be done before any other time-consuming tasks.
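The two checks above can be sketched as follows. The SystemStats type and PassesRuntimeLimits function are assumptions used only to make the limits concrete; the real validation is performed by ValidateBuildParameters on the input objects.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical per-system statistics relevant to the runtime limits.
struct SystemStats
{
    std::int64_t m_NumClusters;
    std::int64_t m_NumOutputPixels;
};

// A system passes validation if it stays under both documented limits:
// fewer than 2^31 clusters and fewer than 256K output pixels.
bool PassesRuntimeLimits(const SystemStats& stats)
{
    const std::int64_t maxClusters = std::int64_t(1) << 31; // 2^31 per system
    const std::int64_t maxOutputPixels = 256 * 1024;        // 256K per system
    return stats.m_NumClusters < maxClusters
        && stats.m_NumOutputPixels < maxOutputPixels;
}
```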

System processing

The initial stage of the precompute is:

  1. Load your mesh data into instances of the Enlighten IPrecompInputGeometry class.
  2. Create an IPrecompInputSystem for each runtime system you want.
  3. For each placement of one of these geometries in that system, create a PrecompInputInstance and add it to the system.
  4. Pack each IPrecompInputGeometry exactly once.
  5. When all the geometry of a system is ready, pack the system.

Outputs from a previous stage are never modified by a later one, so you can often run these stages in parallel (for example, through distributed build systems such as IncrediBuild).

The diagram below shows the creation of one IPrecompPackedSystem. If you have multiple systems, each one needs to be packed. Systems may share the same IPrecompPackedGeometry objects if they use the same mesh at different locations/orientations.

Radiosity processing

  1. When you have decided what the system dependencies are, pass these systems into the IPrecompute::CreatePreClustering function and subsequently the IPrecompute::CreateClustering function. The output data is used for both radiosity and spherical harmonics generation.
  2. For radiosity processing, pass these systems (and their cluster data) into the IPrecompute::CreateLightTransport function. The output from this task is the light transport.
  3. The light transport is passed to IPrecompute::CompileRadiosity in order to generate per-platform runtime data, a RadSystemCore within the IPrecompSystemRadiosity class.

The diagram below shows a scene with two IPrecompPackedSystems; there may be more or fewer than this.

You should ensure that the collection of systems passed to pre-clustering and clustering is the same as that passed to light transport. Light will only travel between systems that were given to both the (pre-)clustering and light transport stages. If you are implementing your own build pipeline, the object that is created by CompressLightTransport may be stored as a result of the precompute, and the later stage of CompileRadiosity done only when you know what platforms are required.

Workspace creation

The next step is to create runtime workspaces from the precompute data.

These are all fast operations, as they are simply compressing data from the intermediate objects into runtime formats. The InputWorkspace and AlbedoWorkspace are relocatable objects, so you can stream them in and out of memory whenever you need to.

Functions are provided in EnlightenUtils to do the endian swapping.

ProbeSet processing

The (optional) probeset processing can be started as soon as the IPrecompSystemClustering is available (so it can be done in parallel with radiosity calculations). Take the same collection of clusters you had when generating the radiosity and pass them to the IPrecompute::CreateProbeSet function. The IPrecompOutputProbeSet is then passed on to IPrecompute::CompileProbeSet, which generates an IPrecompProbeSetRadiosity output. Finally, the IPrecompProbeSetRadiosity contains a RadDataBlock that you can pass to the runtime.

To use adaptive probe placement in an octree, an extra stage is required (shown dashed in the picture below). Use IPrecompute::CreateOutputProbeOctree to convert an IPrecompInputProbeOctree into an IPrecompOutputProbeOctree. The IPrecompInputProbeSet::SetOctreeProbePositions function can then be used to set up the input for the remainder of the probeset pipeline.

Cube map processing

Cube map processing follows the same pipeline structure as probe sets. Like the probe sets, the cube map processing can be started as soon as the IPrecompSystemClustering is available, and can be done in parallel with other precompute calculations. The correspondence between the interface objects and precompute functions for probe sets and cube maps is:

| Probe set item | Cube map item | Description |
| --- | --- | --- |
| IPrecompInputProbeSet | IPrecompInputCubeMap | Object describing the item we want to precompute |
| IPrecompOutputProbeSet | IPrecompOutputCubeMap | Object describing the result of the precompute, but not in a runtime format |
| IPrecompProbeSetRadiosity | IPrecompCubeMapCore | Wrapper object around solver-specific runtime data |
| | IPrecompDepthCubeMap | Wrapper object around depth data for the cube map (not in a runtime format) |
| CreateProbeSet(...) | CreateCubeMap(...) | Runs the precompute |
| CompileProbeSet(...) | CompileCubeMap(...) | Compiles the precompute output into a solver-specific version |
| | CompileDepthCubeMap(...) | Extracts the depth data from the precompute output for custom processing |

Message reporting and error handling

If you experience problems when using the low level precompute API, call the GeoAttachSystemLogger/GeoAttachLogger functions to receive error and warning messages.

You can also implement the IGeoProgressProxy class in order to process errors reported via ReportError(). These errors contain an error code, severity, textual message and a payload which contains extra information specific to that error. Here is a small example of a ReportError implementation:

void MyProgressProxy::ReportError(const PrecompError& error)
{
    if (error.m_Code == 2106) //we don't care about warning PE2106
    {
        return;
    }
    else if (error.m_Code == 2001) //extract specific information from PE2001 and highlight the problem system
    {
        Enlighten::Errors::PE2001* payload = static_cast<Enlighten::Errors::PE2001*>(error.m_Payload);
        GeoGuid systemGuid = payload->m_SystemGuid;
        HighlightSystemInError(systemGuid);
    }

    //print the error message to the console
    switch(error.m_Severity)
    {
    case ES_FATAL:
        printf("\nError %i: %s\n", error.m_Code, error.m_Message.ToUtf8().GetCString());
        break;
    case ES_WARNING:
        printf("\nWarning %i: %s\n", error.m_Code, error.m_Message.ToUtf8().GetCString());
        break;
    }
}

You can also use the provided TxtProgressBar implementation, which simply prints errors to the console. You should pass this object to the precompute function you are calling in order to receive error and progress information. Please see the header files in EnlightenPrecomp2/Errors for details of the errors which can be reported from the precompute using this method.

It is recommended that you use both GeoAttachSystemLogger/GeoAttachLogger and an implementation of IGeoProgressProxy to catch all possible errors from the precompute. For more information, see the API documentation on message reporting and error handling.

Debugging using GeoRadiosity

GeoRadiosity can load the files used as input for the high-level build system. This is useful for debugging, as GeoRadiosity uses its own renderer to show the scene, enabling you to eliminate runtime integration issues as the source of error. Furthermore, it provides debugging view modes that allow you to identify setup problems. When using the low-level API, no files need to be written to disk, and consequently GeoRadiosity cannot be used as-is.

To give you the same features as when using the high-level build system, GeoRadiosity contains an Extract from cache command. If you serialize all your inputs to the precompute and store them in one folder, GeoRadiosity can recreate the scene and create the input files for the high-level build system, and thus also for GeoRadiosity.

To help us more quickly reproduce and debug issues with your Enlighten scene, please provide these precompute input files when you contact Enlighten support.
