Known Limitations

Ambient Occlusion
There is currently no scene selection mechanism to exclude certain objects from the native ambient occlusion computation. In particular, the finalgather object and instance flags are not respected. If such exclusion is required, final gathering may be used to compute just the ambient occlusion information, even though it takes longer to finish because shaders must be called.
The native ambient occlusion cache cannot be stored in files on disk.
Assemblies
Assemblies may not update after incremental changes of their scene representation.
BRDF
There is currently no support for measured BRDF data as input to a mental ray BRDF. The BRDF/BSDF attached to a material is currently used only to estimate the lighting distribution for direct illumination and IBL, to improve performance and quality. It is not used in a similar way by any indirect illumination algorithm in mental ray; instead, it is point-sampled like a regular 'black-box' material shader.
Catmull-Clark Meshes
The current implementation does not support automatic splitting of large input faces.
The current implementation does not support variable creases. If variable creases are needed, the regular hierarchical subdivision surfaces should be used instead.
Detail Shadowmaps
Detail shadow maps cannot be used in segmented shadow mode. They do not support shadowmap merging.
Frame Buffer Files
For cached frame buffers, the temporary disk files are saved in 'tiled' .map format which is limited to a file size of 2GB. Therefore, a single frame buffer may not exceed this 2GB size limit. For example, the resolution of a square 8-bit RGBA frame buffer is limited to about 23,000 x 23,000 pixels.
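The 23,000 x 23,000 figure follows directly from the 2GB file size limit and the 4 bytes per pixel of an 8-bit RGBA buffer. The short Python sketch below (not part of mental ray) reproduces the arithmetic:

```python
import math

# Assumptions: 2 GB .map file size limit, 8-bit RGBA = 4 bytes per pixel.
FILE_SIZE_LIMIT = 2 * 1024**3           # 2 GB, in bytes
BYTES_PER_PIXEL = 4                     # 4 channels x 1 byte each

max_pixels = FILE_SIZE_LIMIT // BYTES_PER_PIXEL
max_side = math.isqrt(max_pixels)       # side of the largest square buffer

print(max_side)  # 23170 -> roughly the 23,000 x 23,000 limit quoted above
```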
Importons
When tracing importons, lens shaders are currently ignored.
Irradiance Particles
Irradiance Particles cannot be used in combination with globillum photons. If incompatible features are enabled, mental ray automatically adjusts the rendering options so that existing scenes can easily be rendered with the new algorithm. However, Irradiance Particles are compatible with caustic photons and final gathering.
iray Rendering Mode
The iray rendering mode is functional but may be limited on certain platforms. If the installed GPU hardware, graphics driver software, or CUDA software is not capable of running iray then mental ray will execute a CPU version of the algorithm instead, which delivers identical results but typically requires much more time to finish.
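The fallback behavior described above amounts to a generic try-GPU-then-CPU pattern. The Python sketch below illustrates that pattern only; the function names are hypothetical and are not the mental ray or iray API:

```python
# Hypothetical stand-ins for the GPU and CPU render paths described above.
def render_gpu(scene):
    # Simulate a machine without a capable GPU/driver/CUDA stack.
    raise RuntimeError("no capable GPU, driver, or CUDA stack")

def render_cpu(scene):
    # Delivers an identical result, but typically takes much longer.
    return f"rendered {scene} on CPU"

def render(scene):
    # Try the GPU path first; transparently fall back to the CPU version.
    try:
        return render_gpu(scene)
    except RuntimeError:
        return render_cpu(scene)

print(render("teapot"))  # rendered teapot on CPU
```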
Map Data
There is currently no support for visualization of map files in the native image tools.
Using map data in distributed network rendering is not yet fully supported. Material shaders are allowed to manipulate a map while it is being used for rendering, but this mode currently requires special attention in custom shaders to synchronize the map content properly. This is not necessary in the more common case of generating the map content before rendering, for example in a pre-process, a previous rendering, or an external application.
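A minimal sketch of the locking discipline such a custom shader would need, using a toy map guarded by a lock (hypothetical names; mental ray shaders are written in C/C++, this Python sketch only illustrates the synchronization idea):

```python
import threading

class SharedMap:
    """Toy stand-in for a map that is written and read during rendering."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def write(self, key, value):
        # Writers take the lock so readers never see a half-updated entry.
        with self._lock:
            self._data[key] = value

    def read(self, key, default=None):
        # Readers take the same lock to observe consistent map content.
        with self._lock:
            return self._data.get(key, default)

shared = SharedMap()
threads = [threading.Thread(target=shared.write, args=(i, i * i))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared.read(3))  # 9
```

When the map content is generated entirely before rendering, it is effectively read-only at render time and no such locking is needed, which is why the pre-process case requires no special attention.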
MetaSL Support
The current support for MetaSL features is not yet complete. Two back-ends are supported for software rendering on all platforms: the C/C++ back-end, which uses an external compiler, and the LLVM back-end, which does not depend on external tools and replaces the previously offered .NET back-end that ran on Windows only. Most MetaSL shaders should work in the current version, but certain advanced effects may show problems until more MetaSL features are fully functional.
Progressive Rendering
In this rendering mode, only the main color frame buffer is computed. Although the traditional shading model is supported, certain advanced features implemented in shaders may not work. In particular, shaders which perform oversampling are generally not well suited to this rendering mode, because sampling cannot be controlled and optimized by mental ray; progressive rendering performance may suffer noticeably in such cases.
The order in which samples from transparent objects are composited is undetermined; for example, no depth-sorted ordering is used.
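To see why an undetermined compositing order matters, note that the standard "over" operator is not commutative. The generic Python sketch below (not mental ray code) composites the same two premultiplied RGBA samples in both orders and gets different colors:

```python
def over(front, back):
    """Composite a premultiplied RGBA sample 'front' over 'back'."""
    fr, fg, fb, fa = front
    br, bg, bb, ba = back
    k = 1.0 - fa  # remaining transparency of the front sample
    return (fr + k * br, fg + k * bg, fb + k * bb, fa + k * ba)

red_glass  = (0.5, 0.0, 0.0, 0.5)   # 50% opaque red, premultiplied
blue_glass = (0.0, 0.0, 0.5, 0.5)   # 50% opaque blue, premultiplied

print(over(red_glass, blue_glass))  # (0.5, 0.0, 0.25, 0.75) - red in front
print(over(blue_glass, red_glass))  # (0.25, 0.0, 0.5, 0.75) - blue in front
```

The two orders yield visibly different colors, so without a depth-sorted ordering the result for overlapping transparent samples is not deterministic.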
Rasterizer
Visible area lights are not rendered by the rasterizer.
Stereo Rendering
The only algorithms affected by stereoscopic rendering are the "first hit" renderers: scanline, rasterizer, and eye-ray tracing. Other view-dependent algorithms, like final gathering, importons, or tessellation, operate as if a single camera were used; that is, they work from the "center eye". Similarly, shaders cannot determine which of the two eyes is being rendered. In particular, state->camera refers to the "center eye", not the eye actually being rendered.

Copyright © 1986, 2012 NVIDIA Corporation