Lightmap shaders can be attached to materials to sample the object that the material is attached to, and compute a map that contains information about the object. The most common application is sampling illumination (either full illumination or only the indirect part) and storing it in a writable texture that can be texture-mapped later during rendering. This makes rendering much faster, although the illumination contribution is now frozen into the object and cannot change with changing lighting conditions.

Lightmap shaders are called in two different modes:

- In vertex mode, `state->type` is `miRAY_LM_VERTEX`. mental ray calls the shader in this mode once for every triangle vertex. The shader is expected to collect data about the object in this mode.
- In mesh mode, `state->type` is `miRAY_LM_MESH`. After all vertex samples have been taken, the shader is called once in mesh mode. It can now use the collected information to generate the final output.

This section describes the lightmap shaders in the base shader
library that is included with mental ray. It puts direct, and
optionally also indirect, illumination into a
writable texture. It is split into two
shaders, **mib_lightmap_write** and
**mib_lightmap_sample**, to make it
easy to write a new sampling function without having to rewrite the
entire lightmap shader. The sample shader is attached to the main
shader as a shader parameter.

In vertex mode, the main shader could simply sample the illumination and store it in the texture, but this would result in a texture with only a few isolated dots. (This might be a good approach for generating vertex colors for a hardware game engine, though.) The shader could also sample and save the illumination during vertex mode and then paint triangles into the texture in mesh mode by interpolating these points, but this would result in a very coarse lightmap. Therefore, the standard shader in the base shader library that comes with mental ray, listed below, only collects point, normal, and texture coordinates in vertex mode, and paints triangles during mesh mode by sampling for every written pixel.

Here is the standard lightmap base shader:

```c
typedef struct mib_lightmap_write_result {
    miVector point;  /* point in space */
    miVector normal; /* vertex normal */
    miVector tex;    /* texture coordinates of vertex */
} mib_lightmap_write_result;

typedef struct mib_lightmap_write_param {
    miTag texture;   /* writable texture */
    miTag coord;     /* texture coordinate shader */
    miTag sample_sh; /* sampling shader */
} mib_lightmap_write_param;

DLLEXPORT miBoolean mib_lightmap_write(
    mib_lightmap_write_result *result,
    miState                   *state,
    mib_lightmap_write_param  *param,
    miRclm_mesh_render const  *arg)      /* argument */
{
    int                              i;
    mib_lightmap_write_result const *resdata;
    miImg_image                     *img;
    miTag                            tex_tag;
    miTag                            shader_tag;
    miTag                            coordshader_tag;
    void                            *handle;
    miBoolean                        success;

    switch (state->type) {
      case miRAY_LM_VERTEX:
        /* gathering vertex data */
        result->point  = state->point;
        result->normal = state->normal;
        mi_vector_normalize(&result->normal);
        coordshader_tag = *mi_eval_tag(&param->coord);
        /* need to call the shader to get access to the success value */
        success = mi_call_shader_x((miColor*)&result->tex,
                        miSHADER_TEXTURE, state, coordshader_tag, 0);
        if (!success)
            result->tex.x = -1;  /* mark this vertex as bad */
        break;

      case miRAY_LM_MESH:
        if (!arg)
            return(miFALSE);
        tex_tag    = *mi_eval_tag(&param->texture);
        shader_tag = *mi_eval_tag(&param->sample_sh);
        if (!tex_tag || !shader_tag)
            return(miFALSE);
        if (!(img = mi_lightmap_edit(&handle, tex_tag)))
            return(miFALSE);
        resdata = (mib_lightmap_write_result const *)arg->vertex_data;
        for (i=0; i < arg->no_triangles; i++) {
            mi_state_set_pri(state, arg->pri, arg->triangles[i].pri_idx);
            mib_lightmap_do_triangle(state, img,
                        &resdata[arg->triangles[i].a],
                        &resdata[arg->triangles[i].b],
                        &resdata[arg->triangles[i].c],
                        shader_tag);
        }
        mi_lightmap_edit_end(handle);
    }
    return(miTRUE);
}
```

The call to *mi_state_set_pri* in the mesh-mode
triangle loop shows new functionality.

Note the use of the texture access functions
*mi_lightmap_edit* and
*mi_lightmap_edit_end*, which give
the shader write access to the
writable texture. The texture must be
defined with the `writable` flag for this to work. It is not
sufficient to use standard access functions like
*mi_db_access* because they do not
permit write access, and because they would not cause the
finished texture to be written to disk.

In mesh mode, mental ray uses the fourth argument *arg* of
the lightmap shader to pass triangle information and the data
collected in vertex mode:

```c
typedef struct miRclm_mesh_render {
    struct miRc_intersection *pri;
    int                       no_triangles;
    miRclm_triangle const    *triangles;
    void const               *vertex_data;
} miRclm_mesh_render;

typedef struct miRclm_triangle {
    miInteger  a;
    miInteger  b;
    miInteger  c;
    miGeoIndex pri_idx;
} miRclm_triangle;
```

The *vertex_data* is an array of the data blocks stored by
the lightmap shader in vertex mode. The *triangles* array
contains *no_triangles* records of the type
`miRclm_triangle`. The shader will loop over these
triangles, scan-converting each into the writable texture. Each
triangle has three vertex indices that are indices into the
*vertex_data* array. For example, to find the data that the
lightmap shader stored when it was called for the first vertex
of triangle number 5, it uses this expression:

```c
typedef struct {...} Stored;

Stored *list   = (Stored *)arg->vertex_data;
Stored *vertex = &list[arg->triangles[5].a];
```

This assumes that the fourth shader argument is named
*arg*. The *pri* pointer identifies the object that is
being lightmapped. If the shader needs to cast rays or call other
mental ray functions, it should store this pointer in the state
using the function
*mi_state_set_pri*.
Previous versions of mental ray had to store the
pointer directly into `state->pri`, and also store
the *pri_idx* from the `miRclm_triangle` struct in
`state->pri_idx`.

The data collected in vertex mode is stored in
*vertex_data* by mental ray. For this to work, the main shader
must be declared correctly with all return variables, so that
mental ray knows how many bytes to allocate:

```
declare shader
    struct {
        vector "point",
        vector "normal",
        vector "tex"
    } "mib_lightmap_write" (
        color texture  "texture",   # output texture
        vector texture "coord",     # texture coords to use
        color texture  "input"      # evaluated texture
    )
    version 1
end declare
```

Here is the lightmap sample shader that generates colors for the main lightmap shader to store into the writable texture. It simply computes the light irradiance at the current intersection point. It relies on the point, normal, and tex data collected in vertex mode.

```c
typedef struct mib_lightmap_sample_param {
    miBoolean indirect;  /* do indirect illumination? */
    int       flip;      /* flip normals? */
    int       i_light;
    int       n_light;
    miTag     light[1];  /* lights to sample */
} mib_lightmap_sample_param;

DLLEXPORT miBoolean mib_lightmap_sample(
    miColor                   *result,
    miState                   *state,
    mib_lightmap_sample_param *param)
{
    int      i_light;
    int      n_light;
    miTag   *light;
    miColor  color, sum;
    int      l, m;
    int      flip;
    int      times;

    flip    = *mi_eval_integer(&param->flip);
    i_light = *mi_eval_integer(&param->i_light);
    n_light = *mi_eval_integer(&param->n_light);
    light   = mi_eval_tag(param->light) + i_light;
    times   = flip==2 ? 2 : 1;

    result->r = result->g = result->b = 0.0f;
    for (m=0; m<times; m++) {
        if (flip == 1 || m==1) {
            mi_vector_neg(&state->normal);
            mi_vector_neg(&state->normal_geom);
        }
        for (l=0; l < n_light; l++) {
            miVector dir;
            miScalar dot_nl;
            int      samples = 0;
            sum.r = sum.g = sum.b = 0.0f;
            while (mi_sample_light(&color, &dir, &dot_nl, state,
                                   light[l], &samples)) {
                sum.r += dot_nl * color.r;
                sum.g += dot_nl * color.g;
                sum.b += dot_nl * color.b;
            }
            if (samples) {
                result->r += sum.r / samples;
                result->g += sum.g / samples;
                result->b += sum.b / samples;
            }
        }
        /* indirect illumination */
        if (*mi_eval_boolean(&param->indirect)) {
            mi_compute_irradiance(&color, state);
            result->r += color.r;
            result->g += color.g;
            result->b += color.b;
        }
    }
    if (flip) {  /* restore the normals if they were flipped */
        mi_vector_neg(&state->normal);
        mi_vector_neg(&state->normal_geom);
    }
    result->a = 1.0f;
    return(miTRUE);
}
```

The main shader uses a static function to scan-convert triangles in mesh mode. The following function paints a single triangle into the writable texture, evaluating illumination for every painted pixel. The triangle extents are determined in pixel space. Next, a mapping from pixel space to barycentric coordinates is computed, and a loop over the pixels whose centers fall within the triangle performs the actual sampling. This is done using a scanline approach where a line pair is generated for the upper and lower part of the triangle. This ensures that inside/outside detection for adjacent triangles is handled identically, so no pixels are missed. For each such center an intersection is computed and a source shader is called to get a value, which is then stored in the writable texture.

```c
typedef struct Line2d {
    float s;  /* slope */
    float o;  /* offset */
} Line2d;

static void mib_lightmap_do_triangle(
    miState                         *state,
    miImg_image                     *img,
    mib_lightmap_write_result const *a,
    mib_lightmap_write_result const *b,
    mib_lightmap_write_result const *c,
    miTag                            shader_tag)
{
    miVector        pixa, pixb, pixc;
    miMatrix        tmp1, tmp2;
    miMatrix        pixel_to_bary;
    miVector        p;
    miVector        d1, d2;
    miVector const *pix_y[3], *tmp;
    Line2d          line[3];
    Line2d const   *left[2], *right[2];
    float           y_min, y_max;
    miBoolean       long_right;

    /* give up if any of the vertices was marked as not-to-use */
    if (a->tex.x < 0 || b->tex.x < 0 || c->tex.x < 0)
        return;

    /*
     * compute pixel coordinates from texture coordinates. They are offset
     * by half so that integer values land in the center of the pixels.
     */
    pixa.x = a->tex.x * img->width  - 0.5f;
    pixb.x = b->tex.x * img->width  - 0.5f;
    pixc.x = c->tex.x * img->width  - 0.5f;
    pixa.y = a->tex.y * img->height - 0.5f;
    pixb.y = b->tex.y * img->height - 0.5f;
    pixc.y = c->tex.y * img->height - 0.5f;

    pix_y[0] = &pixa;  /* sort vertices in y increasing order */
    pix_y[1] = &pixb;
    pix_y[2] = &pixc;
    if (pix_y[0]->y > pix_y[1]->y) {
        tmp = pix_y[0]; pix_y[0] = pix_y[1]; pix_y[1] = tmp;
    }
    if (pix_y[1]->y > pix_y[2]->y) {
        tmp = pix_y[1]; pix_y[1] = pix_y[2]; pix_y[2] = tmp;
    }
    if (pix_y[0]->y > pix_y[1]->y) {
        tmp = pix_y[0]; pix_y[0] = pix_y[1]; pix_y[1] = tmp;
    }
    if (pix_y[0]->y >= pix_y[2]->y)  /* avoid empty triangles */
        return;

    /* compute lines */
    line[0].s = (pix_y[1]->x - pix_y[0]->x) / (pix_y[1]->y - pix_y[0]->y);
    line[0].o = pix_y[0]->x - pix_y[0]->y * line[0].s;
    line[1].s = (pix_y[2]->x - pix_y[1]->x) / (pix_y[2]->y - pix_y[1]->y);
    line[1].o = pix_y[1]->x - pix_y[1]->y * line[1].s;
    line[2].s = (pix_y[2]->x - pix_y[0]->x) / (pix_y[2]->y - pix_y[0]->y);
    line[2].o = pix_y[0]->x - pix_y[0]->y * line[2].s;

    /* remove degenerate line */
    if (pix_y[1]->y == pix_y[0]->y) {
        line[0] = line[1];
        long_right = line[1].s > line[2].s;
    } else if (pix_y[2]->y == pix_y[1]->y) {
        line[1] = line[0];
        long_right = line[0].s < line[2].s;
    } else
        long_right = line[0].s < line[2].s;

    if (long_right) {  /* arrange the lines */
        left[0]  = &line[0];
        left[1]  = &line[1];
        right[0] = &line[2];
        right[1] = &line[2];
    } else {
        left[0]  = &line[2];
        left[1]  = &line[2];
        right[0] = &line[0];
        right[1] = &line[1];
    }

    /*
     * pixel to barycentric coordinate transform. This is a 2D homogeneous
     * problem (to allow for translation) so the third component is set to
     * 1 and we have a 3-by-3 matrix equation.
     */
    mi_matrix_ident(tmp1);
    tmp1[ 0] = pixa.x; tmp1[ 4] = pixb.x; tmp1[ 8] = pixc.x;
    tmp1[ 1] = pixa.y; tmp1[ 5] = pixb.y; tmp1[ 9] = pixc.y;
    tmp1[ 2] = 1.0f;   tmp1[ 6] = 1.0f;   tmp1[10] = 1.0f;
    mi_matrix_ident(tmp2);  /* corresponds to barycentric vectors */

    /* solve pix * pix_to_space = bary */
    if (!mi_matrix_solve(pixel_to_bary, tmp1, tmp2, 4))
        return;

    /* compute geometric normal of the triangle */
    mi_vector_sub(&d1, &b->point, &a->point);
    mi_vector_sub(&d2, &c->point, &a->point);
    mi_vector_prod(&state->normal_geom, &d1, &d2);
    mi_vector_normalize(&state->normal_geom);
    /* state->pri and state->pri_idx were already set up by the caller
     * with mi_state_set_pri */

    p.z = 1.0f;
    /* loop over the texture y range */
    y_min = ceil(pix_y[0]->y);
    if (y_min < 0)
        y_min = 0;
    y_max = floor(pix_y[2]->y);
    if (y_max >= img->height)
        y_max = img->height-1;
    for (p.y=y_min; p.y <= y_max; p.y++) {
        float left_x, right_x;
        int   i = p.y < pix_y[1]->y ? 0 : 1;

        /* loop over the texture x range */
        left_x = ceil(left[i]->o + p.y*left[i]->s);
        if (left_x < 0)
            left_x = 0;
        right_x = floor(right[i]->o + p.y*right[i]->s);
        if (right_x >= img->width)
            right_x = img->width-1;
        for (p.x=left_x; p.x <= right_x; p.x++) {
            miVector bary;
            miColor  color;

            mi_vector_transform(&bary, &p, pixel_to_bary);
            /* constrain barycentric coordinates to triangle */
            mib_lightmap_bary_fixup(&bary);
            /* pixel center is inside triangle */
            mib_lightmap_combine_vectors(&state->point, &a->point,
                            &b->point, &c->point, &bary);
            mib_lightmap_combine_vectors(&state->normal, &a->normal,
                            &b->normal, &c->normal, &bary);
            mi_vector_normalize(&state->normal);
            /* get the color to write */
            mi_call_shader_x(&color, miSHADER_MATERIAL, state,
                            shader_tag, 0);
            /* write to the image */
            mi_img_put_color(img, &color, (int)p.x, (int)p.y);
        }
    }
}

/*
 * combine vectors using weights
 */
static void mib_lightmap_combine_vectors(
    miVector       *res,
    miVector const *a,
    miVector const *b,
    miVector const *c,
    miVector const *bary)
{
    res->x = bary->x * a->x + bary->y * b->x + bary->z * c->x;
    res->y = bary->x * a->y + bary->y * b->y + bary->z * c->y;
    res->z = bary->x * a->z + bary->y * b->z + bary->z * c->z;
}

/*
 * Correct barycentric coordinates by projecting them to the
 * barycentric plane. The plane equation is (P-u)*n = 0, where
 * 'u' is e.g. (1 0 0) and 'n' is the plane normal (1 1 1).
 * We seek a scalar s so that
 *     (B-sn-u)*n = 0  =>  s = ((u-B)*n) / (n*n)
 * and then add s*n to B.
 *
 * We then clip the barycentric coordinates and, as a final touch,
 * compute z as a function of x and y since they are not independent.
 * This means that we can leave z out of the projection and
 * clipping phase.
 */
static void mib_lightmap_bary_fixup(
    miVector *bary)
{
    float s;

    s = (1.0f - bary->x - bary->y - bary->z)/3.0f;
    bary->x += s;
    bary->y += s;

    /* now clip coordinates */
    if (bary->x < 0.0f)
        bary->x = 0.0f;
    else if (bary->x > 1.0f)
        bary->x = 1.0f;
    if (bary->y < 0.0f)
        bary->y = 0.0f;
    else if (bary->y + bary->x > 1.0f)
        bary->y = 1.0f - bary->x;

    /* finally, compute the dependent z */
    bary->z = 1.0f - bary->x - bary->y;
}
```

If the
lightmap shader wishes to cast rays and apply proper jittering,
mental ray offers special jittering support. Suppose the shader
needs to sample the lightmap texture at raster coordinate *p*; it
can derive a jittered raster coordinate from *p* like this:

```c
double jitter[2];

state->raster_x = p.x;
state->raster_y = p.y;
if (mi_query(miQ_PIXEL_SAMPLE, state, 0, jitter) &&
    state->options->jitter) {
    p.x += jitter[0];
    p.y += jitter[1];
}
```

This code fragment initializes QMC sequences and provides a
jittered subpixel coordinate offset. It uses the
`miQ_PIXEL_SAMPLE` mode of
*mi_query*. Note that
the function should be called even if jittering is disabled. The
*mi_query* function in older versions
of mental ray will return false if `miQ_PIXEL_SAMPLE` (or the numeric
equivalent 143) is not supported.

The full version of the shader source code is publicly available.

Lightmapping used with finalgathering requires special attention. In practice there are two commonly used types of lightmap shaders: shaders which compute illumination on vertices and interpolate the per-vertex results in mesh mode, and shaders which compute per-pixel shading.

For the former case, mental ray computes a new finalgather point
if *mi_compute_irradiance* is called at a vertex. The
interpolation among finalgather points is disabled.

In the latter case, the mental ray kernel delegates control over
the order of sampling to the lightmap shader. For optimal results, it
is recommended to implement a two-pass approach similar to
camera image rendering with finalgathering. In the first pass the
lightmap shader may call the *mi_finalgather_store* function in
*miFG_STORE_COMPUTE* mode on some sparsely selected pixels
to force finalgather point computations. The second pass could be
the standard shader evaluation over the mesh.

Future support for on-demand lightmap generation may require
changes to the way that the shader finds the writable texture (or
writable user data) to write to; the tag will be stored in
`miRclm_mesh_render` instead of being passed as a shader
parameter. Shader writers should avoid writing more than one
texture per lightmap shader to remain compatible with future
versions of mental ray.

Copyright © 1986, 2013 NVIDIA Corporation