3D graphics eBook - Course Materials Repository
Level of detail

In computer graphics, accounting for level of detail involves decreasing the complexity of a 3D object's representation as it moves away from the viewer, or according to other metrics such as object importance, eye-space speed, or position. Level of detail techniques increase the efficiency of rendering by decreasing the workload on graphics pipeline stages, usually vertex transformations. The reduced visual quality of the model often goes unnoticed because of the small effect on object appearance when the object is distant or moving fast.

Although LOD is most often applied to geometry detail only, the basic concept can be generalized. Recently, LOD techniques have also included shader management to keep control of pixel complexity. A form of level of detail management has been applied to textures for years under the name of mipmapping, which also provides higher rendering quality.

It is commonplace to say that "an object has been LOD'd" when the object is simplified by the underlying LOD-ing algorithm.

Historical reference

The origin of all LOD algorithms for 3D computer graphics can be traced back to an article by James H. Clark in the October 1976 issue of Communications of the ACM. At the time, computers were monolithic and rare, and graphics was being driven by researchers. The hardware itself was completely different, both architecturally and performance-wise. As such, many differences can be observed with regard to today's algorithms, but also many common points.

The original algorithm presented a much more generic approach than what is discussed here. After introducing some available algorithms for geometry management, it states that the most fruitful gains came from "...structuring the environments being rendered", allowing faster transformations and clipping operations to be exploited.
The same environment structuring is then proposed as a way to control varying detail, thus avoiding unnecessary computations while still delivering adequate visual quality:

"For example, a dodecahedron looks like a sphere from a sufficiently large distance and thus can be used to model it so long as it is viewed from that or a greater distance. However, if it must ever be viewed more closely, it will look like a dodecahedron. One solution to this is simply to define it with the most detail that will ever be necessary. However, then it might have far more detail than is needed to represent it at large distances, and in a complex environment with many such objects, there would be too many polygons (or other geometric primitives) for the visible surface algorithms to efficiently handle."

The proposed algorithm envisions a tree data structure which encodes in its arcs both transformations and transitions to more detailed objects. In this way, each node encodes an object, and according to a fast heuristic the tree is descended to the leaves, which provide each object with more detail. When a leaf is reached, other methods can be used where higher detail is needed, such as Catmull's recursive subdivision.

"The significant point, however, is that in a complex environment, the amount of information presented about the various objects in the environment varies according to the fraction of the field of view occupied by those objects."

The paper then introduces clipping (not to be confused with culling, although often similar), various considerations on the graphical working set and its impact on performance, and interactions between the proposed algorithm and others to improve rendering speed. Interested readers are encouraged to check the references for further details on the topic.
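The field-of-view criterion quoted above can be sketched in a few lines. This is a minimal illustration, not the 1976 paper's algorithm: the bounding-sphere test, the 10% threshold, and all names are hypothetical choices.

```python
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetailNode:
    """One node of a Clark-style detail tree: a representation of an object,
    with an optional link to a more detailed representation of the same object."""
    mesh: str                              # stand-in for real geometry
    radius: float                          # bounding-sphere radius, world units
    refined: Optional["DetailNode"] = None

def view_fraction(radius: float, distance: float, fov: float) -> float:
    """Approximate fraction of the field of view covered by a bounding sphere."""
    if distance <= radius:
        return 1.0
    return min(1.0, 2.0 * math.asin(radius / distance) / fov)

def select_detail(node: DetailNode, distance: float, fov: float,
                  threshold: float = 0.1) -> DetailNode:
    """Descend toward finer representations while the object occupies a
    large enough fraction of the view to justify the extra polygons."""
    while node.refined and view_fraction(node.radius, distance, fov) > threshold:
        node = node.refined
    return node

# The dodecahedron-vs-sphere example from the quote above:
sphere = DetailNode("finely_tessellated_sphere", radius=1.0)
coarse = DetailNode("dodecahedron", radius=1.0, refined=sphere)
```

With a 60-degree field of view, an object of radius 1 viewed from 100 units away fills under 2% of the view and the dodecahedron suffices; viewed from 4 units away it fills almost half the view, so the descent reaches the refined mesh.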
Well known approaches

Although the algorithm introduced above covers a whole range of level of detail management techniques, real-world applications usually employ different methods according to the information being rendered. Because of the appearance of the considered objects, two main algorithm families are used.

The first is based on subdividing the space into a finite number of regions, each with a certain level of detail. The result is a discrete number of detail levels, hence the name Discrete LOD (DLOD). There is no way to support a smooth transition between LOD levels with this approach, although alpha blending or morphing can be used to avoid visual popping.

The second considers the polygon mesh being rendered as a function which must be evaluated while avoiding excessive errors, which are themselves a function of some heuristic (usually distance). The given "mesh" function is then continuously evaluated and an optimized version is produced according to a tradeoff between visual quality and performance. These kinds of algorithms are usually referred to as Continuous LOD (CLOD).

Details on Discrete LOD

The basic concept of discrete LOD (DLOD) is to provide various models to represent the same object. Obtaining those models requires an external algorithm, which is often non-trivial and the subject of many polygon reduction techniques. Successive LOD-ing algorithms simply assume those models are available.

DLOD algorithms are often used in performance-intensive applications with small data sets which can easily fit in memory. Although out-of-core algorithms could be used, the information granularity is not well suited to this kind of application. This kind of algorithm is usually easier to get working, providing both faster performance and lower CPU usage because of the few operations involved. DLOD methods are often used for "stand-alone" moving objects, possibly including complex animation methods.
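The discrete scheme amounts to a simple range lookup at render time. Below is a minimal sketch, assuming precomputed models and hand-picked distance thresholds; the model names, ranges, and cull distance are all hypothetical.

```python
from typing import Optional

# Hypothetical DLOD table: each entry pairs the upper bound of a distance
# range with the precomputed model to use inside that range.
LOD_RANGES = [
    (10.0,  "tank_lod0"),   # closest range: full-detail model
    (50.0,  "tank_lod1"),
    (200.0, "tank_lod2"),   # coarsest model
]
CULL_DISTANCE = 500.0       # beyond this the object is not rendered at all

def pick_model(distance: float) -> Optional[str]:
    """Return the model for the first range the distance falls into; keep the
    coarsest model up to the cull distance, and None (culled) past it."""
    if distance > CULL_DISTANCE:
        return None
    for max_distance, model in LOD_RANGES:
        if distance <= max_distance:
            return model
    return LOD_RANGES[-1][1]
```

The few comparisons per object are why DLOD costs so little CPU; the popping mentioned above happens exactly when the distance crosses one of the table's thresholds.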
[Figure: An example of various DLOD ranges. Darker areas are meant to be rendered with higher detail. An additional culling operation is run, discarding all the information outside the frustum (colored areas).]

A different approach is used for geomipmapping, a popular terrain rendering algorithm, because it applies to terrain meshes which are both graphically and topologically different from "object" meshes. Instead of computing an error and simplifying the mesh according to it, geomipmapping takes a fixed reduction method, evaluates the error introduced, and computes a distance at which the error is acceptable. Although straightforward, the algorithm provides decent performance.
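The "distance at which the error is acceptable" step can be sketched with a standard screen-space-error projection. The pixel tolerance and screen parameters below are illustrative assumptions, not values fixed by the algorithm.

```python
import math

def switch_distance(geometric_error: float, fov_y: float,
                    screen_height_px: float, pixel_tolerance: float = 2.0) -> float:
    """Distance beyond which a simplified terrain block may be used.

    A world-space simplification error `delta` viewed at distance `d`
    projects to roughly  delta / d * screen_height / (2 * tan(fov_y / 2))
    pixels.  Solving for the `d` at which that equals `pixel_tolerance`
    gives the distance where the fixed reduction becomes acceptable.
    """
    k = screen_height_px / (2.0 * math.tan(fov_y / 2.0))
    return geometric_error * k / pixel_tolerance
```

For example, with a 90-degree vertical FOV on a 1080-pixel-high screen the projection factor is 540, so a reduction level that introduces 0.5 world units of error becomes acceptable (within 2 pixels) at 135 units: `switch_distance(0.5, math.pi / 2, 1080.0)`. Coarser levels have larger errors and therefore larger switch distances.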