3D graphics eBook - Course Materials Repository
Z-buffering

The invention of the z-buffer concept is most often attributed to Edwin Catmull, although Wolfgang Straßer described the same idea in his 1974 Ph.D. thesis.[1] On PC graphics cards of the era (1999–2005), z-buffer management consumed a significant share of the available memory bandwidth. Various methods have been employed to reduce this cost, such as lossless compression (spending compute to compress and decompress is cheaper than spending bandwidth) and ultra-fast hardware z-clear, which makes obsolete the "one frame positive, one frame negative" trick (skipping the inter-frame clear altogether by using signed numbers to cleverly compare depths).

Z-culling

In rendering, z-culling is early pixel elimination based on depth, a method that improves performance when rendering hidden surfaces is costly. It is a direct consequence of z-buffering, where the depth of each candidate pixel is compared to the depth of the existing geometry that might hide it. When using a z-buffer, a pixel can be culled (discarded) as soon as its depth is known, which makes it possible to skip the entire process of lighting and texturing a pixel that would not be visible anyway. Time-consuming pixel shaders are also generally not executed for culled pixels. This makes z-culling a good optimization candidate in situations where fillrate, lighting, texturing, or pixel shaders are the main bottleneck.

While z-buffering allows the geometry to be unsorted, sorting polygons by increasing depth (in effect a reverse painter's algorithm) allows each screen pixel to be rendered fewer times. This can increase performance in fillrate-limited scenes with large amounts of overdraw, but if it is not combined with z-buffering it suffers from severe problems, such as:

• polygons may occlude one another in a cycle (e.g. triangle A occludes B, B occludes C, and C occludes A), and
• there is no canonical "closest" point on a triangle (whether one sorts triangles by their centroid, closest point, or farthest point, one can always find two triangles A and B such that A is "closer" but in reality B should be drawn first).

As such, a reverse painter's algorithm cannot be used as an alternative to z-culling (without strenuous re-engineering), only as an optimization on top of it. For example, polygons might be kept sorted by x/y-location and z-depth to provide bounds, so that it can be quickly determined whether two polygons might have an occlusion interaction.

Algorithm

Given: a list of polygons {P1, P2, ..., Pn}
Output: a COLOR array recording the intensity of the visible polygon surface at each pixel.

(Note: z-depth and z-buffer(x, y) are positive.)

Initialize:
    z-buffer(x, y) = max depth
    COLOR(x, y) = background color

for (each polygon P in the polygon list) do {
    for (each pixel (x, y) that intersects P) do {
        calculate z-depth of P at (x, y)
        if (z-depth < z-buffer[x, y]) then {
            z-buffer[x, y] = z-depth
            COLOR(x, y) = intensity of P at (x, y)
        }
    }
}
display COLOR array

Mathematics

The range of depth values in camera space (see 3D projection) to be rendered is often defined between a near value n and a far value f of z. After a perspective transformation, the new value of z, or z', is defined by:

    z' = (f + n)/(f − n) + (1/z) · (−2·f·n/(f − n))

After an orthographic projection, the new value of z, or z', is defined by:

    z' = 2 · (z − n)/(f − n) − 1

where z is the old value of z in camera space, and is sometimes called w or w'.

The resulting values of z' are normalized between −1 and 1, where the n plane is at −1 and the f plane is at 1. Values outside this range correspond to points which are not in the viewing frustum and should not be rendered.

Fixed-point representation

Typically, these values are stored in the z-buffer of the hardware graphics accelerator in fixed-point format. First they are normalized to the more common range [0, 1] by substituting the conversion z' ← (z' + 1)/2 into the previous formula:

    z' = f/(f − n) + (1/z) · (−f·n/(f − n))

Second, the above formula is multiplied by S = 2^d − 1, where d is the depth of the z-buffer (usually 16, 24 or 32 bits), and the result is rounded to an integer:[1]

    z'_int = floor( S · ( f/(f − n) + (1/z) · (−f·n/(f − n)) ) )

This formula can be inverted and differentiated in order to calculate the z-buffer resolution (the 'granularity' mentioned earlier). The inverse of the above:

    z = −f·n / ( (z'/S) · (f − n) − f )

where S = 2^d − 1.

The z-buffer resolution in terms of camera space is the increment in z that results from the smallest change in the integer stored in the z-buffer, which is +1 or −1. This resolution can therefore be calculated from the derivative of z as a function of z':

    dz/dz' = ( f·n · (f − n)/S ) / ( (z'/S) · (f − n) − f )²

Expressing it back in camera-space terms, by substituting z' using the formula above (so that (z'/S)·(f − n) − f = −f·n/z):

    dz/dz' = z² · (f − n) / (S·f·n)

Since this resolution grows with the square of z, depth values are packed densely near the n plane and sparsely toward the f plane.
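As a concrete check of the depth formulas above, here is a small Python sketch (function and variable names are illustrative, not from any real graphics API) that maps a positive camera-space depth z to a d-bit integer, inverts the mapping, and evaluates the resolution formula:

```python
import math

# Sketch of the fixed-point depth mapping, assuming positive camera-space z,
# near/far plane distances n and f, and a d-bit buffer with scale S = 2^d - 1.

def z_to_buffer(z, n, f, d):
    s = (1 << d) - 1
    zp = f / (f - n) + (1.0 / z) * (-f * n / (f - n))   # z' normalized to [0, 1]
    return math.floor(s * zp)                           # stored integer

def buffer_to_z(i, n, f, d):
    s = (1 << d) - 1
    return -f * n / ((i / s) * (f - n) - f)             # inverse mapping

def resolution(z, n, f, d):
    # dz/dz' expressed in camera space: the depth step caused by a +/-1
    # change in the stored integer. Grows quadratically with z.
    s = (1 << d) - 1
    return z * z * (f - n) / (s * f * n)
```

With n = 1, f = 100 and a 16-bit buffer, z = n maps to integer 0, the mapping is monotone in z, and the resolution at z = 50 is far coarser than at z = 1, illustrating the non-linear distribution of precision.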
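The pseudocode in the Algorithm section can be sketched in Python. This is a minimal illustration, not a real rasterizer: it assumes each polygon is already given as a list of covered pixel samples (x, y, z, color) produced elsewhere, and all names are hypothetical:

```python
FAR = float("inf")        # "max depth" used to initialize the buffer
BACKGROUND = (0, 0, 0)    # background color

def render(polygons, width, height):
    # Initialize: z-buffer at max depth, COLOR at background color.
    zbuf = [[FAR] * width for _ in range(height)]
    color = [[BACKGROUND] * width for _ in range(height)]
    for poly in polygons:              # polygons may be in any order
        for (x, y, z, c) in poly:      # pixels that intersect the polygon
            if z < zbuf[y][x]:         # depth test: closer sample wins
                zbuf[y][x] = z         # update stored depth
                color[y][x] = c        # update visible color
    return color

# Two overlapping one-pixel "polygons": the nearer (z = 1) red sample
# hides the farther (z = 5) blue one at pixel (0, 0).
img = render([[(0, 0, 5, (0, 0, 255))], [(0, 0, 1, (255, 0, 0))]], 2, 2)
```

Note that the result is independent of the order in which the two polygons are submitted, which is the key property the depth test provides.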