3D graphics eBook - Course Materials Repository
Non-uniform rational B-spline

External links
• About Nonuniform Rational B-Splines - NURBS (http://www.cs.wpi.edu/~matt/courses/cs563/talks/nurbs.html)
• An Interactive Introduction to Splines (http://ibiblio.org/e-notes/Splines/Intro.htm)
• http://www.cs.bris.ac.uk/Teaching/Resources/COMS30115/all.pdf
• http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/AV0405/DONAVANIK/bezier.html
• http://mathcs.holycross.edu/~croyden/csci343/notes.html (Lecture 33: Bézier Curves, Splines)
• http://www.cs.mtu.edu/~shene/COURSES/cs3621/NOTES/notes.html
• A free software package for handling NURBS curves, surfaces and volumes (http://octave.sourceforge.net/nurbs) in Octave and Matlab

Normal mapping
In 3D computer graphics, normal mapping, or "Dot3 bump mapping", is a technique for faking the lighting of bumps and dents. It adds detail without using more polygons. A normal map is usually an RGB image whose channels correspond to the X, Y, and Z coordinates of a surface normal taken from a more detailed version of the object. A common use of the technique is to greatly enhance the appearance and detail of a low-polygon model by generating a normal map from a high-polygon model.

History
[Figure: normal mapping used to re-detail simplified meshes.]
The idea of taking geometric details from a high-polygon model was introduced in "Fitting Smooth Surfaces to Dense Polygon Meshes" by Krishnamurthy and Levoy, Proc. SIGGRAPH 1996 [1], where this approach was used for creating displacement maps over NURBS. In 1998, two papers were presented with key ideas for transferring details with normal maps from high- to low-polygon meshes: "Appearance Preserving Simplification" by Cohen et al., SIGGRAPH 1998 [2], and "A general method for preserving attribute values on simplified meshes" by Cignoni et al., IEEE Visualization '98 [3]. The former introduced the idea of storing surface normals directly in a texture, rather than displacements, though it required the low-detail model to be generated by a particular constrained simplification algorithm. The latter presented a simpler approach that decouples the high- and low-polygon meshes and allows the recreation of any attribute of the high-detail model (color, texture coordinates, displacements, etc.) in a way that does not depend on how the low-detail model was created. This combination of storing normals in a texture with the more general creation process is still used by most currently available tools.
How it works
To calculate the Lambertian (diffuse) lighting of a surface, the unit vector from the shading point to the light source is dotted with the unit surface normal; the result is the intensity of the light on that surface. A polygonal model of a sphere can only approximate the true surface, but by texturing a three-channel bitmap across the model, much more detailed normal-vector information can be encoded. Each channel of the bitmap corresponds to a spatial dimension (X, Y, and Z). These dimensions are relative to a constant coordinate system for object-space normal maps, or to a smoothly varying coordinate system (based on the derivatives of position with respect to texture coordinates) in the case of tangent-space normal maps. Either way, this adds far more detail to the surface of a model, especially in conjunction with advanced lighting techniques.

Each normal component in [-1, 1] is remapped to a byte via (n/2 + 0.5) × 255, with the sign of the z component flipped so that a normal facing the viewer has a positive blue value. The normal {0, 0, -1} is therefore stored as {128, 128, 255}, which gives the sky-blue color typical of normal maps (blue encodes the z, or depth, component, while red and green encode the flat x and y components in the image plane). Likewise, {0.3, 0.4, -0.866} is stored as ({0.3, 0.4, 0.866}/2 + {0.5, 0.5, 0.5}) × 255 = {0.65, 0.7, 0.933} × 255 = {166, 179, 238}. The sign of z is flipped so that the stored normal can be compared directly against the view (camera) or light vector: a negative z in the original convention means the surface faces the camera rather than away from it, and when the light vector and the normal align, the surface is lit at maximum strength.

Calculating Tangent Space
In order to find the perturbation in the normal, the tangent space must be correctly calculated [4]. Most often the normal is perturbed in a fragment shader, after applying the model and view matrices.
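The remapping described above can be sketched in a few lines. This is an illustrative helper, not code from the text: the function names are invented, and the z-sign flip follows the convention stated above (a normal with negative z faces the viewer and encodes to a high blue value).

```python
import numpy as np

def encode_normal(n):
    """Map a unit normal from [-1, 1]^3 to an 8-bit RGB triple.

    The z (blue) sign is flipped so a viewer-facing normal {0, 0, -1}
    encodes to the familiar sky-blue {128, 128, 255}.
    """
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)               # ensure unit length
    n = n * np.array([1.0, 1.0, -1.0])      # flip z, per the text's convention
    return np.floor((n / 2.0 + 0.5) * 255 + 0.5).astype(int)  # round to byte

def decode_normal(rgb):
    """Invert the encoding: 8-bit RGB back to a unit normal."""
    n = np.asarray(rgb, dtype=float) / 255.0 * 2.0 - 1.0
    n = n * np.array([1.0, 1.0, -1.0])      # undo the z flip
    return n / np.linalg.norm(n)

def lambert(unit_normal, unit_to_light):
    """Diffuse intensity: dot product of unit normal and unit light vector."""
    return max(0.0, float(np.dot(unit_normal, unit_to_light)))
```

Running `encode_normal([0, 0, -1])` reproduces {128, 128, 255}, and `encode_normal([0.3, 0.4, -0.866])` reproduces {166, 179, 238}, the two worked values above.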
Typically the geometry provides a normal and a tangent. The tangent lies in the tangent plane and can be transformed by the linear part of the matrix alone (the upper-left 3×3). The normal, however, must be transformed by the inverse transpose of that matrix. Most applications want the cotangent to match the transformed geometry (and its associated UVs), so instead of enforcing that the cotangent be perpendicular to the tangent, it is generally preferable to transform the cotangent just like the tangent. Let t be the tangent, b the cotangent, n the normal, M3×3 the linear part of the model matrix, and V3×3 the linear part of the view matrix; then t and b transform by the linear part of the combined model-view matrix, while n transforms by its inverse transpose.

Normal mapping in video games
Interactive normal-map rendering was originally possible only on PixelFlow, a parallel rendering machine built at the University of North Carolina at Chapel Hill. It later became possible on high-end SGI workstations using multi-pass rendering and framebuffer operations [5], and on low-end PC hardware with some tricks using paletted textures. With the advent of shaders on personal computers and game consoles, however, normal mapping became widely used in proprietary commercial video games starting in late 2003, followed by open-source games in later years. Normal mapping's popularity for real-time rendering comes from its good ratio of visual quality to processing cost compared with other methods of producing similar effects. Much of this efficiency is made possible by distance-indexed detail scaling, a technique that selectively decreases the detail of a texture's normal map (cf. mipmapping), so that more distant surfaces require less complex lighting simulation. Basic normal mapping can be implemented on any hardware that supports palettized textures. The first game console with specialized normal mapping hardware was the Sega Dreamcast, though Microsoft's Xbox was the first console to use the effect widely in retail games.
Out of the sixth-generation consoles, only the PlayStation 2's GPU lacks built-in normal mapping support. Games for the Xbox 360 and the PlayStation 3 rely heavily on normal mapping.
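The tangent-space transformation described in "Calculating Tangent Space" can be sketched as follows. This is a minimal illustration under stated assumptions (column vectors, `transform_tbn` is an invented name): t and b transform by the combined linear part directly, while n needs the inverse transpose, which matters under non-uniform scaling.

```python
import numpy as np

def transform_tbn(t, b, n, model3, view3):
    """Transform a tangent frame by the linear (upper-left 3x3) parts
    of the model and view matrices.

    t and b lie in the tangent plane, so they transform with the
    combined linear part MV directly (b is treated just like t, as the
    text recommends). n must use the inverse transpose of MV to remain
    perpendicular to the surface under non-uniform scaling.
    """
    mv = view3 @ model3
    t2 = mv @ np.asarray(t, dtype=float)
    b2 = mv @ np.asarray(b, dtype=float)                  # same rule as tangent
    n2 = np.linalg.inv(mv).T @ np.asarray(n, dtype=float) # inverse transpose
    return t2, b2, n2
```

For a rotation-only model-view matrix the inverse transpose equals the matrix itself, so the distinction only shows up under scaling or shear: transforming the normal with `mv` directly would leave it non-perpendicular to the transformed tangent.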