linear_mesh_lod - ryzom/ryzomcore GitHub Wiki


---
title: Linear Mesh LODing (2000)
description: Design document for the linear mesh LOD system — geomorphing between discrete LOD levels built from Progressive Meshes
published: true
date: 2026-03-15T00:00:00.000Z
tags: 
editor: markdown
dateCreated: 2026-03-15T00:00:00.000Z
---

Translated from "fr 2000 3d linear_mesh_loding" — an early Nevrax design document describing the Linear Mesh LODing approach that became the foundation of the MRM system used in Ryzom. {.is-info}

## Principle

This method is inspired by Linear MipMapping for textures, but applies the same principle to geometry.

Starting from a Progressive Mesh (PM) representation, multiple levels of detail are constructed and geomorphing is used to interpolate between these levels.

We denote by LODi the i-th level of detail, where LOD0 is the coarsest level and LODn-1 the finest (with n levels of detail in total).

Example: Consider a mesh with 3,000 faces. Its PM is constructed, and from it 10 levels of detail are derived with a linearly decreasing face count (3000, 2700, 2400, 2100, and so on). As described in Hoppe's paper, geomorphing can be performed because each vertex has an ancestor in a finer mesh: one must morph between its current position (at LODi) and the position of its ancestor (at LODi+1).

Each LODi therefore contains:

  • A "conventional" mesh with a face/vertex/wedge count equivalent to the finer level.
  • A list of vertices/normals/UVs to blend in order to reach the coarser level: LODi-1.

LODi can thus interpolate between the coarser level (LODi-1) and its own, finer geometry, although the number of faces actually displayed stays constant within a given LOD.

Depending on the distance, one can decide which LOD to display and interpolate that LOD accordingly. The method then resembles a classic fixed LOD system, except that there is interpolation between each LOD, which eliminates popping artifacts.
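As a sketch of the selection-plus-geomorph scheme described above (all names and the distance mapping are illustrative assumptions, not the actual Ryzom/NeL API):

```python
# Sketch: pick a LOD from the view distance, then geomorph its vertices.
# choose_lod and geomorph_vertex are hypothetical names for illustration.

def choose_lod(dist, max_dist, n_lods):
    """Map a distance to (lod_index, alpha).

    alpha = 0 shows the LOD's own geometry; alpha = 1 reaches the
    adjacent level's vertex positions (eliminating popping).
    """
    t = min(max(dist / max_dist, 0.0), 1.0)  # 0 = closest, 1 = farthest
    s = t * (n_lods - 1)                     # continuous LOD coordinate
    i = (n_lods - 1) - int(s)                # index: n-1 = finest, 0 = coarsest
    alpha = s - int(s)                       # blend factor toward the next level
    return i, alpha

def geomorph_vertex(pos, target_pos, alpha):
    """Blend a LODi vertex toward its target position in the adjacent LOD."""
    return tuple(p + alpha * (q - p) for p, q in zip(pos, target_pos))
```

Only the M blended vertices of the active LOD pay this per-frame interpolation cost; all other vertices are used as stored.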

## Advantages

  • Speed: Same as a classic LOD system, except that at each LOD there is an interpolation of M vertices/wedges. From experience:

    M = NFaces / NLods

    For example, to go from 3,000 faces to 2,700, there will be only 300 vertex/normal/UV blends. This is comparable to the face counts of older generation games like ODT, suggesting the processor can handle this task easily (especially since blending a vertex is not as costly as projecting it or clipping triangles).

  • Instancing is possible. This system is not incremental: one can jump from LOD0 to LOD9 without any loss of speed. Unlike a classic PM (such as the one in Messiah), there is no need to maintain the current mesh state for each instance. One can have 100 instances of the same mesh without any increase in memory usage.
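The per-LOD blend cost quoted above (M = NFaces / NLods) can be checked against the document's own 3,000-face example; this is plain arithmetic, not engine code:

```python
# Rough per-LOD blend count for a linearly decreasing face schedule.
def blends_per_lod(n_faces, n_lods):
    return n_faces // n_lods

# 3,000-face mesh with 10 LODs: stepping from 3,000 to 2,700 faces
# costs about 300 vertex/normal/UV blends per frame.
print(blends_per_lod(3000, 10))  # 300
```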

## Disadvantages

  • Memory hungry. Many LODs must be stored per mesh. For NLods=10, approximately 5.5 times more data is stored compared to a normal mesh (without LODs). This is due to the linear decrease in face count per LOD. An exponential decrease (3000, 1500, 750, ...) would reduce storage to only 2x the original, but the number of vertices to blend becomes problematic. With the 3,000 face example, LODn-1 would require blending 1,500 vertices. Furthermore (from experience), the result is much less visually appealing.

    An interesting intermediate solution is a squared decrease: (3000, 2430, 1920, 1470, 1080, 750, 480, 270, 120, 30). With 10 LODs, the memory multiplier is 3.85. The number of vertices to blend is larger when the mesh is close (470), but much smaller when far away (70), which is advantageous.

  • PM: no control over Progressive Mesh degradation. The artist cannot (a priori) control how the mesh degrades (as they could with a classic LOD system). They could potentially use a weight system on vertices/edges to give importance to certain parts of the mesh (which would degrade more slowly).

  • PM: computing the PM is difficult. It is either slow or mediocre. Each time an artist changes a mesh, its PM must be recomputed, which takes time. Should this computation be done on the server when the artist "unlocks" their mesh?

  • PM: how to handle skinning? A skin weight is just another parameter (like UVs, normals, ...), but this risks causing problems with OpenGL, which originally only accepts 2 matrices for vertex weighting.
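The memory multipliers quoted under "Memory hungry" can be reproduced from the three face schedules; the schedule formulas below are inferred from the figures given in the text:

```python
def multiplier(schedule, base_faces):
    """Total stored faces across all LODs relative to the base mesh."""
    return sum(schedule) / base_faces

linear      = [3000 - 300 * i for i in range(10)]     # 3000, 2700, ..., 300
quadratic   = [30 * k * k for k in range(10, 0, -1)]  # 3000, 2430, ..., 30
exponential = [3000 // 2 ** i for i in range(10)]     # 3000, 1500, 750, ...

print(multiplier(linear, 3000))       # 5.5
print(multiplier(quadratic, 3000))    # 3.85
print(multiplier(exponential, 3000))  # about 2
```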

## Open Problems

  • Possible solution to the memory problem: The PMPackedVertex structure. Exploit coherence between two LODs. Indeed, from one LOD to the next, many vertices/wedges/faces do not change. Rather than having 10 completely different meshes, the faces of different LODs could point to more or less the same vertices. This is fairly complex, but the advantage of reducing memory size also reduces disk size. One would not need to store the normal PM representation (i.e. the list of splits, etc.), but could directly store our representation.

    Test calculations for a mesh of 3,000 faces / 1,500 vertices, across 10 LODs. For 300 faces collapsed each time, there are 300 vertex blends (from experience) => 300 new vertices. This gives 1,500 + 3,000 vertices for the entire PM. The same would need to be done for wedges and faces. Each LOD in our model would just have a list of indices to the faces it uses (one could be cleverer but material sorting must not be forgotten) + the list of vertices/wedges to blend. Roughly speaking, this model only triples the size of a normal mesh.

    Correction: something must be done for geomorphs. If geomorphing is done on the global vertex array, it does not work because certain vertices must no longer be geomorphed when changing LOD. Consequently, additional vertices ("blendable" vertices) must be created at the end of the list. Thus this PM model triples the disk size but x3.2 the memory size (there are 300 vertex blends per LOD => 300 possible blendable vertices at each LOD). Per LOD, we then have the list of vertices to blend with just the indices: {uint Src, Dest, Result}.

    Additional advantage: the base fixed vertices (all fixed vertices) could be flagged as "static" for glVertexPointerEXT() (one might even use compiled_vertex_array). Only the "blended" vertices would not be optimized (which is expected). But careful consideration of glVertexPointerEXT() usage is needed, as there is a serious issue: UVs/normals must be per-vertex.
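A minimal sketch of the shared-vertex layout discussed above. PMPackedVertex is named in the text, but every field and helper here is a guess for illustration, not the actual NeL structure:

```python
from dataclasses import dataclass

@dataclass
class VertexBlend:
    """Per-LOD geomorph record, as in the text: {uint Src, Dest, Result}."""
    src: int     # index of the vertex at its fine position
    dest: int    # index of the target (coarse) position
    result: int  # index of the extra "blendable" slot written each frame

@dataclass
class PackedLod:
    face_indices: list  # this LOD's faces, pointing into the shared arrays
    blends: list        # VertexBlend records for this LOD

@dataclass
class PMPackedMesh:
    vertices: list      # shared array: base vertices + per-LOD additions
    lods: list          # one PackedLod per level

def geomorph(mesh, lod_index, alpha):
    """Write interpolated positions into the 'blendable' slots of one LOD.

    Base vertices stay untouched (so they could be flagged static for the
    driver); only the blendable tail of the array changes per frame.
    """
    for b in mesh.lods[lod_index].blends:
        src = mesh.vertices[b.src]
        dst = mesh.vertices[b.dest]
        mesh.vertices[b.result] = tuple(
            s + alpha * (d - s) for s, d in zip(src, dst))
```

Because each LOD stores only index lists plus its blend records, 100 instances can share one PMPackedMesh, matching the instancing advantage claimed earlier.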

  • Morphing/Blending. How to handle BlendShapes, for example? The deformations from BlendShapes must be propagated to the LODs. Important reminder: morphing can prevent the use of instancing in certain cases. To avoid excessive memory usage, morphed meshes should be kept separate (like tree stumps, or faces for BlendShapes). The alternative is to apply the morph every frame, which can be too CPU-intensive if the instance does not change between frames.

    • BlendShape case: Complex. For each BlendShape key, all LOD levels of the collapsed vertices can be stored. When the morph is performed, it must be applied to both LODi-1 and LODi, using the LODs of the given BlendShapes, then the geomorph is done between the morphed vertices on the two LODs. Note: BlendShapes may be used for facial animation, which would likely only be done for characters within 50 meters.

    • Pre-computation case: Some objects may be deformed "statically", like tree stumps (influenced by the ground) or faces. The morph would need to be propagated to lower LODs. This can be done by storing, for each vertex of LODi, how it is computed relative to the vertices of LODi+1. Of course some vertices will not have moved. But all LODs must be traversed from N to 0 to propagate the morph. Speed needs testing. One optimization would be to only compute the vertices of LODi that are affected by the modified vertices of LODi+1.

    • General case: If the PMPackedVertex structure is used, there may be no issue: just blend all vertices across all LODs (that is 3x normal) and that is all. The blending method depends on the type of blend applied. For BlendShapes, each BlendShape key stores the vertices at all levels (3x normal) and one can determine at build time which vertex is morphed by testing against the base BlendShape (just like a normal BlendShape). For the tree stump problem, each vertex across the different PM LODs can recalculate its influence vertex based on its position on the grid.

  • "Lego" meshes: It should be possible to manage meshes that interconnect (a torso and legs). The problem with PMs is then having a good junction between the two meshes. Solution: the edge collapse (ecol) computation on junction edges must be done on one mesh (e.g. the torso) and propagated to the others (e.g. the legs). If an ecol occurs on a junction edge between LOD4 and LOD5, the same ecol must occur on both the torso and legs between LOD4 and LOD5. Should vertices be shared between meshes, or should one trust the interpolators of both meshes to keep the vertices correctly snapped (=> no holes)? The second solution is appealing as it cleanly separates the two meshes and simplifies rendering (although computing "constrained ecol" on PMs remains problematic).

  • Wedge/vertex problem, or "sharing/discontinuity of UVs/normals/... at certain vertices". If managing through VBs or glDrawElements(), some information must be duplicated. How then to ensure that a modification to a vertex/UV/normal is applied to all wedges?

  • Triangle Strips/Fans. For each LOD and each material, strip/fan indices and residual triangles could be precomputed. This abandons the idea of compressing the face list (cf. the PMPackedVertex structure). Disk/memory size? Using uint16 indices, and setting aside the vertex/wedge duplication problem: 3,000 faces x 5.5 x 3 vertices x 2 bytes = 100 KB without strips/fans. Assuming strips/fans halve this => 50 KB of triangulation info per 3,000-face mesh. Compared to vertex size: 1,500 x 3.2 x 32 bytes (t2f/n3f/v3f) = 153 KB. Adding vertex blends: 300 x 10 x 6 bytes (uint16 Src, Dest, Result) = 18 KB. => PM size: 220 KB. Normal mesh size: 57 KB. Our PM therefore takes approximately 4x more memory than a normal mesh.

  • Display lists? Possible at each LOD only for non-interpolated faces. However, there would be many more stored in graphics card memory (since many LODs need to be stored).
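The size estimate in the Triangle Strips/Fans point can be reproduced from the figures given there; this is only arithmetic over the document's own numbers:

```python
KB = 1000  # the text rounds in decimal kilobytes

# Triangulation: 3,000 faces x 5.5 (linear LOD overhead) x 3 indices x 2 bytes,
# then halved by the assumed strip/fan gain.
tri_bytes = 3000 * 5.5 * 3 * 2          # ~99 KB raw
strip_bytes = tri_bytes / 2             # ~50 KB

# Vertices: 1,500 verts x 3.2 (packed-PM overhead) x 32 bytes (t2f/n3f/v3f).
vert_bytes = 1500 * 3.2 * 32            # ~153 KB

# Blend records: 300 blends x 10 LODs x 6 bytes (uint16 Src, Dest, Result).
blend_bytes = 300 * 10 * 6              # 18 KB

pm_total = strip_bytes + vert_bytes + blend_bytes     # ~220 KB
normal_total = 1500 * 32 + (3000 * 3 * 2) / 2         # ~57 KB with strips
print(round(pm_total / KB), round(normal_total / KB))  # 221 57
```

The ratio pm_total / normal_total comes out just under 4, matching the "approximately 4x" conclusion.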

## Examples

If using the linear approach, it is interesting to set:

detail = sqr(dist)
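A sketch of this mapping, assuming "detail" is a 0..1 coordinate into the linear face schedule (function names and the clamping are illustrative, not from the original document): a squared-distance detail value combined with a linear schedule makes the face count fall off quadratically with distance.

```python
def detail_from_distance(dist, max_dist):
    """'detail = sqr(dist)': a 0..1 coordinate, 1 = coarsest."""
    t = min(max(dist / max_dist, 0.0), 1.0)
    return t * t

def faces_for_distance(dist, max_dist, max_faces, min_faces):
    """Face count under a linear schedule driven by squared distance."""
    d = detail_from_distance(dist, max_dist)
    return round(max_faces - d * (max_faces - min_faces))

print(faces_for_distance(0, 100, 3000, 30))    # 3000
print(faces_for_distance(100, 100, 3000, 30))  # 30
```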