3D graphics eBook - Course Materials Repository
3D Rendering<br />
PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information.<br />
PDF generated at: Tue, 11 Oct 2011 09:36:28 UTC
Contents<br />
Articles<br />
Preface 1<br />
3D rendering 1<br />
Concepts 5<br />
Alpha mapping 5<br />
Ambient occlusion 5<br />
Anisotropic filtering 8<br />
Back-face culling 11<br />
Beam tracing 12<br />
Bidirectional texture function 13<br />
Bilinear filtering 13<br />
Binary space partitioning 15<br />
Bounding interval hierarchy 20<br />
Bounding volume 23<br />
Bump mapping 26<br />
Catmull–Clark subdivision surface 28<br />
Conversion between quaternions and Euler angles 30<br />
Cube mapping 33<br />
Diffuse reflection 36<br />
Displacement mapping 39<br />
Doo–Sabin subdivision surface 41<br />
Edge loop 42<br />
Euler operator 43<br />
False radiosity 44<br />
Fragment 45<br />
Geometry pipelines 46<br />
Geometry processing 47<br />
Global illumination 47<br />
Gouraud shading 50<br />
Graphics pipeline 52<br />
Hidden line removal 54<br />
Hidden surface determination 55<br />
High dynamic range rendering 58
Image-based lighting 64<br />
Image plane 65<br />
Irregular Z-buffer 65<br />
Isosurface 66<br />
Lambert's cosine law 67<br />
Lambertian reflectance 70<br />
Level of detail 71<br />
Mipmap 74<br />
Newell's algorithm 76<br />
Non-uniform rational B-spline 77<br />
Normal mapping 85<br />
Oren–Nayar reflectance model 88<br />
Painter's algorithm 91<br />
Parallax mapping 93<br />
Particle system 94<br />
Path tracing 97<br />
Per-pixel lighting 101<br />
Phong reflection model 101<br />
Phong shading 105<br />
Photon mapping 106<br />
Photon tracing 109<br />
Polygon 111<br />
Potentially visible set 111<br />
Precomputed Radiance Transfer 114<br />
Procedural generation 115<br />
Procedural texture 121<br />
3D projection 124<br />
Quaternions and spatial rotation 127<br />
Radiosity 138<br />
Ray casting 145<br />
Ray tracing 147<br />
Reflection 154<br />
Reflection mapping 156<br />
Relief mapping 159<br />
Render Output unit 160<br />
Rendering 160<br />
Retained mode 170<br />
Scanline rendering 170
Schlick's approximation 173<br />
Screen Space Ambient Occlusion 173<br />
Self-shadowing 177<br />
Shadow mapping 177<br />
Shadow volume 183<br />
Silhouette edge 188<br />
Spectral rendering 189<br />
Specular highlight 190<br />
Specularity 193<br />
Sphere mapping 194<br />
Stencil buffer 195<br />
Stencil codes 196<br />
Subdivision surface 200<br />
Subsurface scattering 204<br />
Surface caching 206<br />
Surface normal 207<br />
Texel 210<br />
Texture atlas 211<br />
Texture filtering 212<br />
Texture mapping 214<br />
Texture synthesis 217<br />
Tiled rendering 222<br />
UV mapping 223<br />
UVW mapping 225<br />
Vertex 225<br />
Vertex Buffer Object 227<br />
Vertex normal 232<br />
Viewing frustum 233<br />
Virtual actor 234<br />
Volume rendering 237<br />
Volumetric lighting 243<br />
Voxel 244<br />
Z-buffering 248<br />
Z-fighting 252<br />
Appendix 254<br />
3D computer graphics software 254
References<br />
Article Sources and Contributors 261<br />
Image Sources, Licenses and Contributors 266<br />
Article Licenses<br />
License 269
Preface<br />
3D rendering<br />
3D rendering is the 3D computer graphics process of automatically converting 3D wire frame models into 2D<br />
images with 3D photorealistic effects on a computer.<br />
Rendering methods<br />
Rendering is the final process of creating the actual 2D image or animation from the prepared scene. This can be<br />
compared to taking a photo or filming the scene after the setup is finished in real life. Several different, and often<br />
specialized, rendering methods have been developed. These range from the distinctly non-realistic wireframe<br />
rendering through polygon-based rendering to more advanced techniques such as scanline rendering, ray tracing, and<br />
radiosity. Rendering may take from fractions of a second to days for a single image/frame. In general, different<br />
methods are better suited to either photo-realistic rendering or real-time rendering.<br />
Real-time<br />
Rendering for interactive media, such as games and<br />
simulations, is calculated and displayed in real time, at<br />
rates of approximately 20 to 120 frames per second. In<br />
real-time rendering, the goal is to show as much<br />
information as the eye can process in a fraction of a<br />
second, i.e., in one frame: in the case of a 30<br />
frame-per-second animation, a frame encompasses<br />
one 30th of a second. The primary goal is to achieve<br />
as high a degree of photorealism as possible at an<br />
acceptable minimum rendering speed (usually 24<br />
frames per second, roughly the minimum the human<br />
eye needs to perceive a convincing illusion of<br />
movement). In fact, exploitations can be applied to the<br />
way the eye 'perceives' the world; as a result, the<br />
final image presented is not necessarily that of the<br />
real world, but one close enough for the human eye to<br />
tolerate. Rendering software may simulate such visual<br />
effects as lens flares, depth of field or motion blur.<br />
An example of a ray-traced image that typically takes seconds or<br />
minutes to render.<br />
These are attempts to simulate visual phenomena resulting from the optical characteristics of cameras and of the<br />
human eye. These effects can lend an element of realism to a scene, even if the effect is merely a simulated artifact<br />
of a camera. This is the basic method employed in games, interactive worlds and VRML. The rapid increase in<br />
computer processing power has allowed a progressively higher degree of realism even for real-time rendering,<br />
including techniques such as HDR rendering. Real-time rendering is often polygonal and aided by the computer's<br />
GPU.<br />
Non real-time<br />
Animations for non-interactive media, such as feature<br />
films and video, are rendered much more slowly.<br />
Non-real time rendering enables the leveraging of<br />
limited processing power in order to obtain higher<br />
image quality. Rendering times for individual frames<br />
may vary from a few seconds to several days for<br />
complex scenes. Rendered frames are stored on a hard<br />
disk and can then be transferred to other media such as<br />
motion picture film or optical disk. These frames are<br />
then displayed sequentially at high frame rates,<br />
typically 24, 25, or 30 frames per second, to achieve<br />
the illusion of movement.<br />
When the goal is photo-realism, techniques such as ray<br />
tracing or radiosity are employed. This is the basic<br />
method employed in digital media and artistic works.<br />
Computer-generated image created by Gilles Tran.<br />
Techniques have been developed for the purpose of simulating<br />
other naturally-occurring effects, such as the interaction of light with various forms of matter. Examples of such<br />
techniques include particle systems (which can simulate rain, smoke, or fire), volumetric sampling (to simulate fog,<br />
dust and other spatial atmospheric effects), caustics (to simulate light focusing by uneven light-refracting surfaces,<br />
such as the light ripples seen on the bottom of a swimming pool), and subsurface scattering (to simulate light<br />
reflecting inside the volumes of solid objects such as human skin).<br />
The rendering process is computationally expensive, given the complex variety of physical processes being<br />
simulated. Computer processing power has increased rapidly over the years, allowing for a progressively higher<br />
degree of realistic rendering. Film studios that produce computer-generated animations typically make use of a<br />
render farm to generate images in a timely manner. However, falling hardware costs mean that it is entirely possible<br />
to create small amounts of 3D animation on a home computer system. The output of the renderer is often used as<br />
only one small part of a completed motion-picture scene. Many layers of material may be rendered separately and<br />
integrated into the final shot using compositing software.<br />
Reflection and shading models<br />
Models of reflection/scattering and shading are used to describe the appearance of a surface. Although these issues<br />
may seem like problems all on their own, they are studied almost exclusively within the context of rendering.<br />
Modern 3D computer graphics rely heavily on a simplified reflection model called the Phong reflection model (not to be<br />
confused with Phong shading). In refraction of light, an important concept is the refractive index. In most 3D<br />
programming implementations, the term for this value is "index of refraction," usually abbreviated "IOR." Shading<br />
can be broken down into two orthogonal issues, which are often studied independently:<br />
• Reflection/Scattering - How light interacts with the surface at a given point<br />
• Shading - How material properties vary across the surface
Reflection<br />
Reflection or scattering is the relationship<br />
between incoming and outgoing<br />
illumination at a given point. Descriptions<br />
of scattering are usually given in terms of a<br />
bidirectional scattering distribution function<br />
or BSDF. Popular reflection rendering<br />
techniques in 3D computer graphics include:<br />
• Flat shading: A technique that shades<br />
each polygon of an object based on the<br />
polygon's "normal" and the position and<br />
intensity of a light source.<br />
• Gouraud shading: Invented by H.<br />
Gouraud in 1971, a fast and<br />
resource-conscious vertex shading<br />
technique used to simulate smoothly<br />
shaded surfaces.<br />
The Utah teapot<br />
• Texture mapping: A technique for simulating a large amount of surface detail by mapping images (textures) onto<br />
polygons.<br />
• Phong shading: Invented by Bui Tuong Phong, used to simulate specular highlights and smooth shaded surfaces.<br />
• Bump mapping: Invented by Jim Blinn, a normal-perturbation technique used to simulate wrinkled surfaces.<br />
• Cel shading: A technique used to imitate the look of hand-drawn animation.<br />
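As a sketch of the difference between the first two techniques, the fragment below (Python, with illustrative normals and light direction chosen purely for the example) computes a Lambertian diffuse term once per face for flat shading, and once per vertex followed by interpolation for Gouraud shading:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(normal, light_dir):
    """Diffuse intensity: the cosine between normal and light, clamped to >= 0."""
    return max(0.0, dot(normal, light_dir))

light_dir = (0.0, 0.0, 1.0)               # light shining straight down the z axis

# Flat shading: one intensity for the whole polygon, from the face normal.
face_normal = (0.0, 0.0, 1.0)
flat_intensity = lambert(face_normal, light_dir)

# Gouraud shading: intensity at each vertex (using per-vertex normals),
# then linearly interpolated across the polygon's interior.
vertex_normals = [(0.0, 0.0, 1.0), (0.6, 0.0, 0.8), (0.0, 0.6, 0.8)]
vertex_intensities = [lambert(n, light_dir) for n in vertex_normals]

def interpolate(intensities, barycentric):
    """Barycentric interpolation of vertex intensities at an interior point."""
    return sum(i * w for i, w in zip(intensities, barycentric))

center = interpolate(vertex_intensities, (1 / 3, 1 / 3, 1 / 3))
```

Because the interpolated intensity varies smoothly across the polygon, Gouraud shading hides the faceting that flat shading would show on a curved surface.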
Shading<br />
Shading addresses how different types of scattering are distributed across the surface (i.e., which scattering function<br />
applies where). Descriptions of this kind are typically expressed with a program called a shader. (Note that there is<br />
some confusion since the word "shader" is sometimes used for programs that describe local geometric variation.) A<br />
simple example of shading is texture mapping, which uses an image to specify the diffuse color at each point on a<br />
surface, giving it more apparent detail.<br />
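A minimal sketch of texture mapping viewed as a shading function, assuming a texture stored as a small 2D array and nearest-texel lookup (real renderers add filtering):

```python
# A tiny 2x2 "texture": rows of RGB tuples.
texture = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (255, 255, 255)],
]

def sample_nearest(texture, u, v):
    """Map UV coordinates in [0, 1] to the nearest texel (no filtering)."""
    height = len(texture)
    width = len(texture[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# The diffuse color at a surface point with UV (0.9, 0.1) is read from the image.
diffuse = sample_nearest(texture, 0.9, 0.1)
```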
Transport<br />
Transport describes how illumination in a scene gets from one place to another. Visibility is a major component of<br />
light transport.<br />
Projection
The shaded three-dimensional objects<br />
must be flattened so that the display<br />
device - namely a monitor - can<br />
display them in only two dimensions;<br />
this process is called 3D projection.<br />
This is done using projection and, for<br />
most applications, perspective projection.<br />
The basic idea behind perspective<br />
projection is that objects that are<br />
further away are made smaller in<br />
relation to those that are closer to the<br />
eye.<br />
Perspective Projection<br />
Programs produce perspective by<br />
multiplying a dilation constant raised to the power of the negative of the distance from the observer. A dilation<br />
constant of one means that there is no perspective. High dilation constants can cause a "fish-eye" effect in which<br />
image distortion begins to occur. Orthographic projection is used mainly in CAD or CAM applications where<br />
scientific modeling requires precise measurements and preservation of the third dimension.<br />
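The most common formulation of perspective projection is a divide by depth; a sketch in Python, where `focal` is an assumed focal-length constant and the camera looks down the +z axis:

```python
def project_perspective(point, focal=1.0):
    """Project a 3D camera-space point onto the 2D image plane.

    Screen-space size falls off in inverse proportion to the point's
    distance (z) from the observer, so farther objects appear smaller.
    """
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera")
    return (focal * x / z, focal * y / z)

# Two points with the same x/y offsets: the farther one projects smaller.
near = project_perspective((1.0, 1.0, 2.0))   # -> (0.5, 0.5)
far = project_perspective((1.0, 1.0, 4.0))    # -> (0.25, 0.25)
```

An orthographic projection would instead simply drop the z coordinate, preserving sizes regardless of distance.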
External links<br />
• Art is the basis of Industrial Design [1]<br />
• A Critical History of Computer Graphics and Animation [2]<br />
• The ARTS: Episode 5 [3] An in depth interview with Legalize on the subject of the History of Computer Graphics.<br />
(Available in MP3 audio format)<br />
• CGSociety [4] The Computer Graphics Society<br />
• How Stuff Works - 3D Graphics [5]<br />
• History of Computer Graphics series of articles [6]<br />
• Open Inventor by VSG [7] 3D Graphics Toolkit for Applications Developers<br />
References<br />
[1] http://www.eapprentice.net/index.php?option=com_content&view=article&id=159:art-project&catid=64:model-making<br />
[2] http://accad.osu.edu/~waynec/history/lessons.html<br />
[3] http://www.acid.org/radio/index.html#ARTS-EP05<br />
[4] http://www.cgsociety.org/<br />
[5] http://computer.howstuffworks.com/3dgraphics.htm<br />
[6] http://hem.passagen.se/des/hocg/hocg_1960.htm<br />
[7] http://www.vsg3d.com/vsg_prod_openinventor.php
Concepts<br />
Alpha mapping<br />
Alpha mapping is a technique in 3D computer graphics where an image is mapped (assigned) to a 3D object, and<br />
designates certain areas of the object to be transparent or translucent. The transparency can vary in strength, based on<br />
the image texture, which can be greyscale, or the alpha channel of an RGBA image texture.<br />
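A sketch of how such a map value might be applied per pixel, assuming a greyscale alpha in [0, 1] and standard "over" blending of the surface color against the background:

```python
def apply_alpha(surface_rgb, background_rgb, alpha):
    """Blend surface over background; alpha 0 = fully transparent, 1 = opaque."""
    return tuple(
        alpha * s + (1.0 - alpha) * b
        for s, b in zip(surface_rgb, background_rgb)
    )

# A red surface over a blue background.
opaque = apply_alpha((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 1.0)   # pure surface color
half = apply_alpha((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.5)     # translucent mix
```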
Ambient occlusion<br />
Ambient occlusion is a shading method used in 3D computer graphics which helps add realism to local reflection<br />
models by taking into account attenuation of light due to occlusion. Ambient occlusion attempts to approximate the<br />
way light radiates in real life, especially off what are normally considered non-reflective surfaces.<br />
Unlike local methods like Phong shading, ambient occlusion is a global method, meaning the illumination at each<br />
point is a function of other geometry in the scene. However, it is a very crude approximation to full global<br />
illumination. The soft appearance achieved by ambient occlusion alone is similar to the way an object appears on an<br />
overcast day.<br />
Method of implementation<br />
Ambient occlusion is most often calculated by casting rays in every direction from the surface. Rays which reach the<br />
background or “sky” increase the brightness of the surface, whereas a ray which hits any other object contributes no<br />
illumination. As a result, points surrounded by a large amount of geometry are rendered dark, whereas points with<br />
little geometry on the visible hemisphere appear light.<br />
Ambient occlusion is related to accessibility shading, which determines appearance based on how easy it is for a<br />
surface to be touched by various elements (e.g., dirt, light, etc.). It has been popularized in production animation due<br />
to its relative simplicity and efficiency. In the industry, ambient occlusion is often referred to as "sky light".<br />
The ambient occlusion shading model has the nice property of offering a better perception of the 3d shape of the<br />
displayed objects. This was shown in a paper [1] where the authors report the results of perceptual experiments<br />
showing that depth discrimination under diffuse uniform sky lighting is superior to that predicted by a direct lighting<br />
model.<br />
ambient occlusion | diffuse only | combined ambient and diffuse<br />
The occlusion A(p) at a point p on a surface with normal n can be computed by integrating the visibility function<br />
over the hemisphere Ω with respect to projected solid angle:<br />
A(p) = (1/π) ∫_Ω V(p, ω) (n · ω) dω<br />
where V(p, ω) is the visibility function at p, defined to be zero if p is occluded in the direction ω and one otherwise,<br />
and dω is the infinitesimal solid angle step of the integration variable ω. A variety of techniques are used to<br />
approximate this integral in practice: perhaps the most straightforward way is to use the Monte Carlo method by<br />
casting rays from the point and testing for intersection with other scene geometry (i.e., ray casting). Another<br />
approach (more suited to hardware acceleration) is to render the view from p by rasterizing black geometry against<br />
a white background and taking the (cosine-weighted) average of rasterized fragments. This approach is an example<br />
of a "gathering" or "inside-out" approach, whereas other algorithms (such as depth-map ambient occlusion) employ<br />
"scattering" or "outside-in" techniques.<br />
In addition to the ambient occlusion value, a "bent normal" vector is often generated, which points in the average<br />
direction of unoccluded samples. The bent normal can be used to look up incident radiance from an environment<br />
map to approximate image-based lighting. However, there are some situations in which the direction of the bent<br />
normal is a misrepresentation of the dominant direction of illumination, e.g.,
In this example the bent normal N_b has an unfortunate direction, since it is pointing at an occluded surface.<br />
In this example, light may reach the point p only from the left or right sides, but the bent normal points to the<br />
average of those two sources, which is, unfortunately, directly toward the obstruction.<br />
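The bent normal itself is just the normalized average of the unoccluded sample directions, which is exactly what goes wrong in the situation described; a sketch with two illustrative open directions:

```python
import math

def bent_normal(unoccluded_dirs):
    """Normalized average of the unoccluded sample directions."""
    sx = sum(d[0] for d in unoccluded_dirs)
    sy = sum(d[1] for d in unoccluded_dirs)
    sz = sum(d[2] for d in unoccluded_dirs)
    length = math.sqrt(sx * sx + sy * sy + sz * sz)
    return (sx / length, sy / length, sz / length)

# Light reaches p only from two side directions, both tilted slightly upward.
left = (-0.8, 0.0, 0.6)
right = (0.8, 0.0, 0.6)
nb = bent_normal([left, right])
# The sideways components cancel, so nb points straight up: directly at the
# overhead obstruction rather than at either actual light direction.
```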
Awards<br />
In 2010, Hayden Landis, Ken McGaugh and Hilmar Koch were awarded a Scientific and Technical Academy Award<br />
for their work on ambient occlusion rendering. [2]<br />
References<br />
[1] Langer, M.S.; H. H. Buelthoff (2000). "Depth discrimination from shading under diffuse lighting". Perception 29 (6): 649–660.<br />
doi:10.1068/p3060. PMID 11040949.<br />
[2] Oscar 2010: Scientific and Technical Awards (http://www.altfg.com/blog/awards/oscar-2010-scientific-and-technical-awards-489/), Alt Film Guide, Jan 7, 2010<br />
External links<br />
• Depth Map based Ambient Occlusion (http://www.andrew-whitehurst.net/amb_occlude.html)<br />
• NVIDIA's accurate, real-time Ambient Occlusion Volumes (http://research.nvidia.com/publication/ambient-occlusion-volumes)<br />
• Assorted notes about ambient occlusion (http://www.cs.unc.edu/~coombe/research/ao/)<br />
• Ambient Occlusion Fields (http://www.tml.hut.fi/~janne/aofields/) — real-time ambient occlusion using cube maps<br />
• PantaRay ambient occlusion used in the movie Avatar (http://research.nvidia.com/publication/pantaray-fast-ray-traced-occlusion-caching-massive-scenes)<br />
• Fast Precomputed Ambient Occlusion for Proximity Shadows (http://hal.inria.fr/inria-00379385) real-time ambient occlusion using volume textures<br />
• Dynamic Ambient Occlusion and Indirect Lighting (http://download.nvidia.com/developer/GPU_Gems_2/GPU_Gems2_ch14.pdf) a real time self ambient occlusion method from Nvidia's GPU Gems 2 book<br />
• GPU Gems 3: Chapter 12. High-Quality Ambient Occlusion (http://http.developer.nvidia.com/GPUGems3/gpugems3_ch12.html)
• ShadeVis (http://vcg.sourceforge.net/index.php/ShadeVis) an open source tool for computing ambient occlusion<br />
• xNormal (http://www.xnormal.net) A free normal mapper/ambient occlusion baking application<br />
• 3dsMax Ambient Occlusion Map Baking (http://www.mrbluesummers.com/893/video-tutorials/baking-ambient-occlusion-in-3dsmax-monday-movie) Demo video about preparing ambient occlusion in 3dsMax<br />
Anisotropic filtering<br />
In 3D computer graphics, anisotropic<br />
filtering (abbreviated AF) is a method<br />
of enhancing the image quality of<br />
textures on surfaces that are at oblique<br />
viewing angles with respect to the<br />
camera where the projection of the<br />
texture (not the polygon or other<br />
primitive on which it is rendered)<br />
appears to be non-orthogonal (thus the<br />
origin of the word: "an" for not, "iso"<br />
for same, and "tropic" from tropism,<br />
relating to direction; anisotropic<br />
filtering does not filter the same in<br />
every direction).<br />
Like bilinear and trilinear filtering,<br />
anisotropic filtering eliminates aliasing<br />
effects, but improves on these other<br />
techniques by reducing blur and<br />
preserving detail at extreme viewing angles.<br />
An illustration of texture filtering methods showing a trilinear mipmapped texture on the<br />
left and the same texture enhanced with anisotropic texture filtering on the right.<br />
Anisotropic filtering is relatively intensive (primarily memory bandwidth and to some degree computationally,<br />
though the standard space-time tradeoff rules apply) and only became a standard feature of consumer-level graphics<br />
cards in the late 1990s. Anisotropic filtering is now common in modern <strong>graphics</strong> hardware (and video driver<br />
software) and is enabled either by users through driver settings or by <strong>graphics</strong> applications and video games through<br />
programming interfaces.
An improvement on isotropic MIP mapping<br />
Hereafter, it is assumed the reader is familiar with<br />
MIP mapping.<br />
If we were to explore a more approximate anisotropic<br />
algorithm, RIP mapping, as an extension from MIP<br />
mapping, we can understand how anisotropic filtering<br />
gains so much texture mapping quality. If we need to<br />
texture a horizontal plane which is at an oblique angle<br />
to the camera, traditional MIP map minification<br />
would give us insufficient horizontal resolution due to<br />
the reduction of image frequency in the vertical axis.<br />
This is because in MIP mapping each MIP level is<br />
isotropic, so a 256 × 256 texture is downsized to a 128<br />
× 128 image, then a 64 × 64 image and so on, so<br />
resolution halves on each axis simultaneously, so a<br />
MIP map texture probe to an image will always<br />
sample an image that is of equal frequency in each<br />
axis. Thus, when sampling to avoid aliasing on a<br />
high-frequency axis, the other texture axes will be<br />
similarly downsampled and therefore potentially<br />
blurred.<br />
An example of ripmap image storage: the principal image on the top<br />
left is accompanied by filtered, linearly transformed copies of reduced size.<br />
With RIP map anisotropic filtering, in addition to downsampling to 128 × 128, images are also sampled to 256 × 128<br />
and 32 × 128, etc. These anisotropically downsampled images can be probed when the texture-mapped image<br />
frequency is different for each texture axis, and therefore one axis need not blur due to the screen frequency of<br />
another axis while aliasing is still avoided. Unlike more general anisotropic filtering, the RIP mapping described for<br />
illustration has a limitation in that it only supports anisotropic probes that are axis-aligned in texture space, so<br />
diagonal anisotropy still presents a problem even though real-use cases of anisotropic texture commonly have such<br />
screenspace mappings.<br />
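The storage implied by a RIP map can be enumerated directly; the sketch below lists every level obtained by halving each axis independently (a MIP chain would keep only the isotropic diagonal of this table):

```python
def rip_levels(width, height):
    """All (w, h) level sizes obtained by halving each axis independently."""
    widths, w = [], width
    while w >= 1:
        widths.append(w)
        w //= 2
    heights, h = [], height
    while h >= 1:
        heights.append(h)
        h //= 2
    return [(w, h) for w in widths for h in heights]

levels = rip_levels(256, 256)
# A MIP chain keeps only the 9 isotropic levels (256x256 ... 1x1); the RIP map
# keeps all 81 combinations, costing roughly 4x the base image's memory versus
# roughly 1.33x for MIP mapping.
```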
In layman's terms, anisotropic filtering retains the "sharpness" of a texture normally lost by MIP map texture's<br />
attempts to avoid aliasing. Anisotropic filtering can therefore be said to maintain crisp texture detail at all viewing<br />
orientations while providing fast anti-aliased texture filtering.<br />
Degree of anisotropy supported<br />
Different degrees or ratios of anisotropic filtering can be applied during rendering and current hardware rendering<br />
implementations set an upper bound on this ratio. This degree refers to the maximum ratio of anisotropy supported<br />
by the filtering process. So, for example 4:1 (pronounced 4 to 1) anisotropic filtering will continue to sharpen more<br />
oblique textures beyond the range sharpened by 2:1.<br />
In practice what this means is that in highly oblique texturing situations a 4:1 filter will be twice as sharp as a 2:1<br />
filter (it will display frequencies double that of the 2:1 filter). However, most of the scene will not require the 4:1<br />
filter; only the more oblique and usually more distant pixels will require the sharper filtering. This means that as the<br />
degree of anisotropic filtering continues to double there are diminishing returns in terms of visible quality with fewer<br />
and fewer rendered pixels affected, and the results become less obvious to the viewer.<br />
When one compares the rendered results of an 8:1 anisotropically filtered scene to a 16:1 filtered scene, only a<br />
relatively few highly oblique pixels, mostly on more distant geometry, will display visibly sharper textures in the<br />
scene with the higher degree of anisotropic filtering, and the frequency information on these few 16:1 filtered pixels<br />
will only be double that of the 8:1 filter. The performance penalty also diminishes because fewer pixels require the<br />
data fetches of greater anisotropy.<br />
In the end it is the additional hardware complexity vs. these diminishing returns, which causes an upper bound to be<br />
set on the anisotropic quality in a hardware design. Applications and users are then free to adjust this trade-off<br />
through driver and software settings up to this threshold.<br />
Implementation<br />
True anisotropic filtering probes the texture anisotropically on the fly on a per-pixel basis for any orientation of<br />
anisotropy.<br />
In graphics hardware, typically when the texture is sampled anisotropically, several probes (texel samples) of the<br />
texture around the center point are taken, but on a sample pattern mapped according to the projected shape of the<br />
texture at that pixel.<br />
Each anisotropic filtering probe is often in itself a filtered MIP map sample, which adds more sampling to the<br />
process. Sixteen trilinear anisotropic samples might require 128 samples from the stored texture, as trilinear MIP<br />
map filtering needs to take four samples times two MIP levels and then anisotropic sampling (at 16-tap) needs to<br />
take sixteen of these trilinear filtered probes.<br />
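The counting in the paragraph above can be written out explicitly (a sketch of the arithmetic, not of real hardware):

```python
def texel_fetches(aniso_taps, bilinear_taps=4, mip_levels=2):
    """Texel reads for one anisotropically filtered pixel: each anisotropic
    probe is itself a trilinear sample, i.e. a bilinear (4-tap) sample on
    each of two adjacent MIP levels."""
    trilinear_cost = bilinear_taps * mip_levels   # 4 * 2 = 8 fetches per probe
    return aniso_taps * trilinear_cost

fetches = texel_fetches(16)      # 16 trilinear probes -> 128 texel fetches
bytes_per_pixel = fetches * 4    # at 4 bytes per texel: 512 bytes per pixel
```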
However, this level of filtering complexity is not required all the time. There are commonly available methods to<br />
reduce the amount of work the video rendering hardware must do.<br />
Performance and optimization<br />
The sample count required can make anisotropic filtering extremely bandwidth-intensive. Multiple textures are<br />
common; each texture sample could be four bytes or more, so each anisotropic pixel could require 512 bytes from<br />
texture memory, although texture compression is commonly used to reduce this.<br />
As a video display device can easily contain over a million pixels, and as the desired frame rate can be as high as<br />
30–60 frames per second (or more), the texture memory bandwidth can become very high very quickly. Ranges of<br />
hundreds of gigabytes per second of pipeline bandwidth for texture rendering operations are not unusual where<br />
anisotropic filtering operations are involved.<br />
Fortunately, several factors mitigate in favor of better performance:<br />
• The probes themselves share cached texture samples, both inter-pixel and intra-pixel.<br />
• Even with 16-tap anisotropic filtering, not all 16 taps are always needed because only distant highly oblique pixel<br />
fills tend to be highly anisotropic.<br />
• Highly anisotropic pixel fill tends to cover small regions of the screen (i.e., generally under 10%).<br />
• Texture magnification filters (as a general rule) require no anisotropic filtering.<br />
External links<br />
• The Naked Truth About Anisotropic Filtering [1]<br />
References<br />
[1] http://www.extremetech.com/computing/51994-the-naked-truth-about-anisotropic-filtering
Back-face culling<br />
In computer graphics, back-face culling determines whether a polygon of a graphical object is visible. It is a step in<br />
the graphical pipeline that tests whether the points in the polygon appear in clockwise or counter-clockwise order<br />
when projected onto the screen. If the user has specified that front-facing polygons have a clockwise winding, a<br />
polygon whose screen projection has a counter-clockwise winding has been rotated to face away from the camera<br />
and will not be drawn.<br />
The process makes rendering objects quicker and more efficient by reducing the number of polygons for the program<br />
to draw. For example, in a city street scene, there is generally no need to draw the polygons on the sides of the<br />
buildings facing away from the camera; they are completely occluded by the sides facing the camera.<br />
A related technique is clipping, which determines whether polygons are within the camera's field of view at all.<br />
Another similar technique is Z-culling, also known as occlusion culling, which attempts to skip the drawing of<br />
polygons which are covered from the viewpoint by other visible polygons.<br />
This technique only works with single-sided polygons, which are only visible from one side. Double-sided polygons<br />
are rendered from both sides, and thus have no back-face to cull.<br />
One method of implementing back-face culling is by discarding all polygons where the dot product of their surface<br />
normal and the camera-to-polygon vector is greater than or equal to zero.<br />
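That test can be sketched directly; the surface normal is assumed to point outward from the polygon's visible side:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_back_face(surface_normal, camera_pos, point_on_polygon):
    """Cull when the polygon's outward normal points away from the camera,
    i.e. when dot(normal, camera-to-polygon vector) >= 0."""
    camera_to_polygon = tuple(p - c for p, c in zip(point_on_polygon, camera_pos))
    return dot(surface_normal, camera_to_polygon) >= 0.0

camera = (0.0, 0.0, 0.0)
point = (0.0, 0.0, 5.0)                  # a point on the polygon
facing_camera = (0.0, 0.0, -1.0)         # normal points back toward the camera
facing_away = (0.0, 0.0, 1.0)            # normal points away: back face, culled
```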
Further reading<br />
• Geometry Culling in 3D Engines [1], by Pietari Laurila<br />
References<br />
[1] http://www.gamedev.net/reference/articles/article1212.asp
Beam tracing<br />
Beam tracing is an algorithm to simulate wave propagation. It was developed in the context of computer graphics to<br />
render 3D scenes, but it has also been used in other similar areas such as acoustics and electromagnetism<br />
simulations.<br />
Beam tracing is a derivative of the ray tracing algorithm that replaces rays, which have no thickness, with beams.<br />
Beams are shaped like unbounded pyramids, with (possibly complex) polygonal cross sections. Beam tracing was<br />
first proposed by Paul Heckbert and Pat Hanrahan [1].<br />
In beam tracing, a pyramidal beam is initially cast through the entire viewing frustum. This initial viewing beam is<br />
intersected with each polygon in the environment, typically from nearest to farthest. Each polygon that intersects<br />
with the beam must be visible, and is removed from the shape of the beam and added to a render queue. When a<br />
beam intersects with a reflective or refractive polygon, a new beam is created in a similar fashion to ray-tracing.<br />
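Reduced to a flat (2D) world where a beam's cross section is just an interval, the clipping loop described above can be sketched as follows (a toy illustration, not production geometry code):

```python
def subtract(intervals, lo, hi):
    """Remove the span [lo, hi] from a list of disjoint (a, b) intervals."""
    out = []
    for a, b in intervals:
        if hi <= a or lo >= b:            # no overlap: keep as-is
            out.append((a, b))
        else:                             # clip away the overlapping middle
            if a < lo:
                out.append((a, lo))
            if hi < b:
                out.append((hi, b))
    return out

def trace_beam(beam, polygons):
    """Process polygons nearest to farthest: each one's visible part is its
    overlap with what remains of the beam, which is then removed."""
    remaining = [beam]
    visible = {}
    for name, depth, lo, hi in sorted(polygons, key=lambda p: p[1]):
        parts = [(max(a, lo), min(b, hi)) for a, b in remaining
                 if max(a, lo) < min(b, hi)]
        if parts:
            visible[name] = parts         # would be added to the render queue
        remaining = subtract(remaining, lo, hi)
    return visible

# A near polygon partially occludes a farther one.
seen = trace_beam((0.0, 10.0), [("near", 1, 2.0, 6.0), ("far", 2, 4.0, 9.0)])
```

In the real 3D algorithm the intervals become polygonal cross sections and the subtraction becomes polygon clipping, which is where most of the geometric complexity lives.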
A variant of beam tracing casts a pyramidal beam through each pixel of the image plane. This is then split up into<br />
sub-beams based on its intersection with scene geometry. Reflection and transmission (refraction) rays are also<br />
replaced by beams. This sort of implementation is rarely used, as the geometric processes involved are much more<br />
complex and therefore expensive than simply casting more rays through the pixel.<br />
Beam tracing solves certain problems related to sampling and aliasing, which can plague conventional ray tracing<br />
approaches [2] . Since beam tracing effectively calculates the path of every possible ray within each beam [3] (which<br />
can be viewed as a dense bundle of adjacent rays), it is not as prone to under-sampling (missing rays) or<br />
over-sampling (wasted computational resources). The computational complexity associated with beams has made<br />
them unpopular for many visualization applications. In recent years, Monte Carlo algorithms like distributed ray<br />
tracing have become more popular for rendering calculations.<br />
A 'backwards' variant of beam tracing casts beams from the light source into the environment. Similar to backwards<br />
raytracing and photon mapping, backwards beam tracing may be used to efficiently model lighting effects such as<br />
caustics [4]. More recently, the backwards beam tracing technique has also been extended to handle glossy-to-diffuse<br />
material interactions (glossy backward beam tracing), such as light bounced off polished metal surfaces [5].<br />
Beam tracing has been successfully applied to the fields of acoustic modelling [6] and electromagnetic propagation<br />
modelling [7] . In both of these applications, beams are used as an efficient way to track deep reflections from a<br />
source to a receiver (or vice-versa). Beams can provide a convenient and compact way to represent visibility. Once a<br />
beam tree has been calculated, one can use it to readily account for moving transmitters or receivers.<br />
Beam tracing is related in concept to cone tracing.<br />
References<br />
[1] P. S. Heckbert and P. Hanrahan, "Beam tracing polygonal objects", Computer Graphics 18(3), 119-127 (1984).<br />
[2] A. Lehnert, "Systematic errors of the ray-tracing algorithm", Applied Acoustics 38, 207-221 (1993).<br />
[3] Steven Fortune, "Topological Beam Tracing", Symposium on Computational Geometry 1999: 59-68<br />
[4] M. Watt, "Light-water interaction using backwards beam tracing", in "Proceedings of the 17th annual conference on Computer <strong>graphics</strong> and<br />
interactive techniques(SIGGRAPH'90)",377-385(1990).<br />
[5] B. Duvenhage, K. Bouatouch, and D.G. Kourie, "Exploring the use of Glossy Light Volumes for Interactive Global Illumination", in<br />
"Proceedings of the 7th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa", 2010.<br />
[6] T. Funkhouser, I. Carlbom, G. Elko, G. Pingali, M. Sondhi, and J. West, "A beam tracing approach to acoustic modelling for interactive<br />
virtual environments", in Proceedings of the 25th annual conference on Computer <strong>graphics</strong> and interactive techniques (SIGGRAPH'98),<br />
21-32 (1998).<br />
[7] Steven Fortune, "A Beam-Tracing Algorithm for Prediction of Indoor Radio Propagation", in WACG 1996: 157-166
Bidirectional texture function<br />
Bidirectional texture function (BTF) [1] is a 7-dimensional function depending on planar texture coordinates (x,y)<br />
as well as on the view and illumination spherical angles. In practice, this function is obtained as a set of several<br />
thousand color images of a material sample taken at different camera and light positions.<br />
To cope with the massive and highly redundant BTF data, many compression methods have been proposed [1] [2].<br />
Its main application is photorealistic material rendering of objects in virtual reality systems.<br />
References<br />
[1] Jiří Filip; Michal Haindl (2009). "Bidirectional Texture Function Modeling: A State of the Art Survey"<br />
(http://www.computer.org/portal/web/csdl/doi/10.1109/TPAMI.2008.246). IEEE Transactions on Pattern Analysis and<br />
Machine Intelligence, vol. 31, no. 11, pp. 1921–1940.<br />
[2] Vlastimil Havran; Jiří Filip; Karol Myszkowski (2009). "Bidirectional Texture Function Compression based on<br />
Multi-Level Vector Quantization" (http://www3.interscience.wiley.com/journal/123233573/abstract). Computer<br />
Graphics Forum, vol. 29, no. 1, pp. 175–190.<br />
Bilinear filtering<br />
Bilinear filtering is a texture filtering method used to smooth textures<br />
when displayed larger or smaller than they actually are.<br />
Most of the time, when drawing a textured shape on the screen, the texture is not displayed exactly as it is<br />
stored, without any distortion. Because of this, most pixels will end up needing to use a point on the texture<br />
that lies 'between' texels. Here the texels are treated as points (as opposed to, say, squares) located in the<br />
middle (or on the upper-left corner, or anywhere else; it does not matter, as long as the choice is consistent)<br />
of their respective 'cells'. Bilinear filtering uses these points to perform bilinear interpolation between the<br />
four texels nearest to the point that the pixel represents (in the middle or upper left of the pixel, usually).<br />
[Figure: A zoomed small portion of a bitmap, using nearest-neighbor filtering (left), bilinear filtering<br />
(center), and bicubic filtering (right).]<br />
The formula<br />
In these equations, $u_k$ and $v_k$ are the texture coordinates and $y_k$ is the color value at point $k$. Values<br />
without a subscript refer to the pixel point; values with subscripts 0, 1, 2, and 3 refer to the texel points,<br />
starting at the top left and reading right then down, that immediately surround the pixel point. So $y_0$ is the<br />
color of the texel at texture coordinate $(u_0, v_0)$. These are linear interpolation equations. We would start<br />
with the bilinear equation, but since this is a special case with some elegant results, it is easier to start from<br />
linear interpolation:<br />

$$y_a = y_0 \frac{u_1 - u}{u_1 - u_0} + y_1 \frac{u - u_0}{u_1 - u_0}, \qquad
  y_b = y_2 \frac{u_3 - u}{u_3 - u_2} + y_3 \frac{u - u_2}{u_3 - u_2}, \qquad
  y   = y_a \frac{v_2 - v}{v_2 - v_0} + y_b \frac{v - v_0}{v_2 - v_0}$$

Assuming that the texture is a square bitmap with texel spacing $w$,<br />

$$u_1 - u_0 = u_3 - u_2 = v_2 - v_0 = w, \qquad u_0 = u_2, \quad u_1 = u_3, \quad v_0 = v_1, \quad v_2 = v_3$$

are all true. Further, define<br />

$$u_\mathrm{ratio} = \frac{u - u_0}{w}, \quad u_\mathrm{opposite} = 1 - u_\mathrm{ratio}, \quad
  v_\mathrm{ratio} = \frac{v - v_0}{w}, \quad v_\mathrm{opposite} = 1 - v_\mathrm{ratio}$$

With these we can simplify the interpolation equations:<br />

$$y_a = y_0\,u_\mathrm{opposite} + y_1\,u_\mathrm{ratio}, \qquad y_b = y_2\,u_\mathrm{opposite} + y_3\,u_\mathrm{ratio}$$

And combine them:<br />

$$y = (y_0\,u_\mathrm{opposite} + y_1\,u_\mathrm{ratio})\,v_\mathrm{opposite} + (y_2\,u_\mathrm{opposite} + y_3\,u_\mathrm{ratio})\,v_\mathrm{ratio}$$

Or, alternatively:<br />

$$y = y_0\,u_\mathrm{opposite}\,v_\mathrm{opposite} + y_1\,u_\mathrm{ratio}\,v_\mathrm{opposite} + y_2\,u_\mathrm{opposite}\,v_\mathrm{ratio} + y_3\,u_\mathrm{ratio}\,v_\mathrm{ratio}$$

which is rather convenient. However, if the image is merely scaled (and not rotated, sheared, put into perspective,<br />
or subjected to any other manipulation), it can be considerably faster to use the separate equations and store<br />
$y_b$ (and sometimes $y_a$, if we are increasing the scale) for use in subsequent rows.<br />
Sample code<br />
This code assumes that the texture is square (an extremely common occurrence), that no mipmapping comes into<br />
play, and that there is only one channel of data (not so common. Nearly all textures are in color so they have red,<br />
green, and blue channels, and many have an alpha transparency channel, so we must make three or four calculations<br />
of y, one for each channel).<br />
double getBilinearFilteredPixelColor(Texture tex, double u, double v)<br />
{<br />
   u *= tex.size;<br />
   v *= tex.size;<br />
   int x = floor(u);<br />
   int y = floor(v);<br />
   double u_ratio = u - x;<br />
   double v_ratio = v - y;<br />
   double u_opposite = 1 - u_ratio;<br />
   double v_opposite = 1 - v_ratio;<br />
   double result = (tex[x][y]   * u_opposite + tex[x+1][y]   * u_ratio) * v_opposite +<br />
                   (tex[x][y+1] * u_opposite + tex[x+1][y+1] * u_ratio) * v_ratio;<br />
   return result;<br />
}<br />
Limitations<br />
Bilinear filtering is rather accurate until the scaling of the texture gets below half or above double the original size of<br />
the texture - that is, if the texture was 256 pixels in each direction, scaling it to below 128 or above 512 pixels can<br />
make the texture look bad, because of missing pixels or too much smoothness. Often, mipmapping is used to provide<br />
a scaled-down version of the texture for better performance; however, the transition between two differently-sized<br />
mipmaps on a texture in perspective using bilinear filtering can be very abrupt. Trilinear filtering, though somewhat<br />
more complex, can make this transition smooth throughout.<br />
For a quick demonstration of how a texel can be missing from a filtered texture, here's a list of numbers representing<br />
the centers of boxes from an 8-texel-wide texture (in red and black), intermingled with the numbers from the centers<br />
of boxes from a 3-texel-wide down-sampled texture (in blue). The red numbers represent texels that would not be<br />
used in calculating the 3-texel texture at all.<br />
0.0625, 0.1667, 0.1875, 0.3125, 0.4375, 0.5000, 0.5625, 0.6875, 0.8125, 0.8333, 0.9375<br />
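The centers listed above follow directly from treating texels as points in the middle of their cells: the i-th of `size` texels sits at (i + 0.5) / size in texture space. A small sketch (the helper name is illustrative):

```cpp
#include <vector>

// Texture-space coordinates of the texel centers of a texture that is
// `size` texels wide along one axis: the i-th center is (i + 0.5) / size.
std::vector<double> texelCenters(int size) {
    std::vector<double> centers;
    for (int i = 0; i < size; ++i)
        centers.push_back((i + 0.5) / size);
    return centers;
}
```

For `size = 8` this yields 0.0625, 0.1875, ..., 0.9375, and for `size = 3` it yields 0.1667, 0.5, 0.8333 (rounded), matching the interleaved list above.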
Special cases<br />
Textures aren't infinite, in general, and sometimes one ends up with a pixel coordinate that lies outside the grid of<br />
texel coordinates. There are a few ways to handle this:<br />
• Wrap the texture, so that the last texel in a row also comes right before the first, and the last texel in a column also<br />
comes right above the first. This works best when the texture is being tiled.<br />
• Make the area outside the texture all one color. This may be of use for a texture designed to be laid over a solid<br />
background or to be transparent.<br />
• Repeat the edge texels out to infinity. This works best if the texture is not designed to be repeated.<br />
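The wrap and edge-repeat strategies above can be sketched as index-mapping functions. The function names are illustrative assumptions; real graphics APIs expose the same behaviors as texture addressing modes (e.g. OpenGL's GL_REPEAT and GL_CLAMP_TO_EDGE, with a border color mode covering the solid-color strategy).

```cpp
#include <algorithm>

// Wrap: tile the texture, so the last texel comes right before the first.
int wrapIndex(int i, int size) {
    int m = i % size;
    return m < 0 ? m + size : m;  // C++'s % can be negative; fix that up
}

// Clamp: repeat the edge texels out to infinity.
int clampIndex(int i, int size) {
    return std::min(std::max(i, 0), size - 1);
}
```

Either function is applied to `x + 1` and `y + 1` in the bilinear sample code before indexing the texture, so that the rightmost and bottommost pixels do not read outside the texel grid.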
Binary space partitioning<br />
In computer science, binary space partitioning (BSP) is a method for recursively subdividing a space into convex<br />
sets by hyperplanes. This subdivision gives rise to a representation of the scene by means of a tree data structure<br />
known as a BSP tree.<br />
Originally, this approach was proposed in <strong>3D</strong> computer <strong>graphics</strong> to increase the rendering efficiency by<br />
precomputing the BSP tree prior to low-level rendering operations. Some other applications include performing<br />
geometrical operations with shapes (constructive solid geometry) in CAD, collision detection in robotics and <strong>3D</strong><br />
computer games, and other computer applications that involve handling of complex spatial scenes.<br />
Overview<br />
In computer <strong>graphics</strong> it is desirable that the drawing of a scene be done both correctly and quickly. A simple way to<br />
draw a scene is the painter's algorithm: draw it from back to front painting over the background with each closer<br />
object. However, that approach is quite limited, since time is wasted drawing objects that will be overdrawn later,<br />
and not all objects will be drawn correctly.<br />
Z-buffering can ensure that scenes are drawn correctly and eliminates the ordering step of the painter's algorithm,<br />
but it is expensive in terms of memory use. BSP trees split up objects so that the painter's algorithm will draw<br />
them correctly without the need for a Z-buffer and eliminate the need to sort the objects, as a simple tree<br />
traversal will yield them in the correct order. They also serve as a basis for other algorithms, such as visibility<br />
lists, which attempt to reduce overdraw.<br />
The downside is the requirement for time-consuming pre-processing of the scene, which makes it difficult and<br />
inefficient to directly incorporate moving objects into a BSP tree. This is often overcome by using the BSP tree<br />
together with a Z-buffer, and using the Z-buffer to correctly merge movable objects such as doors and characters<br />
onto the background scene.<br />
BSP trees are often used by <strong>3D</strong> computer games, particularly first-person shooters and those with indoor<br />
environments. Probably the earliest game to use a BSP data structure was Doom (see Doom engine for an in-depth<br />
look at Doom's BSP implementation). Other uses include ray tracing and collision detection.<br />
Generation<br />
Binary space partitioning is a generic process of recursively dividing a scene into two until the partitioning satisfies<br />
one or more requirements. The specific method of division varies depending on its final purpose. For instance, in a<br />
BSP tree used for collision detection, the original object would be partitioned until each part becomes simple enough<br />
to be individually tested, and in rendering it is desirable that each part be convex so that the painter's algorithm can<br />
be used.<br />
The final number of objects will inevitably increase since lines or faces that cross the partitioning plane must be split<br />
into two, and it is also desirable that the final tree remains reasonably balanced. Therefore the algorithm for correctly<br />
and efficiently creating a good BSP tree is the most difficult part of an implementation. In <strong>3D</strong> space, planes are used<br />
to partition and split an object's faces; in 2D space lines split an object's segments.<br />
The following picture illustrates the process of partitioning an irregular polygon into a series of convex ones. Notice<br />
how each step produces polygons with fewer segments until arriving at G and F, which are convex and require no<br />
further partitioning. In this particular case, the partitioning line was picked between existing vertices of the polygon<br />
and intersected none of its segments. If the partitioning line intersects a segment, or face in a <strong>3D</strong> model, the<br />
offending segment(s) or face(s) have to be split into two at the line/plane because each resulting partition must be a<br />
full, independent object.<br />
1. A is the root of the tree and the entire polygon<br />
2. A is split into B and C<br />
3. B is split into D and E.<br />
4. D is split into F and G, which are convex and hence become leaves on the tree.<br />
Since the usefulness of a BSP tree depends upon how well it was generated, a good algorithm is essential. Most<br />
algorithms will test many possibilities for each partition until they find a good compromise. They might also keep<br />
backtracking information in memory, so that if a branch of the tree is found to be unsatisfactory, other alternative<br />
partitions may be tried. Thus producing a tree usually requires long computations.<br />
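The recursive scheme above can be sketched as follows. This is an illustrative sketch only: it partitions 2D points rather than polygons (points never straddle a splitting line, so the segment-splitting step described above is elided), and the termination criterion and splitting heuristic are deliberately trivial.

```cpp
#include <memory>
#include <utility>
#include <vector>

struct Point { double x, y; };

struct Node {
    int axis = 0;                // 0 = split on x, 1 = split on y
    double plane = 0.0;          // position of the splitting line
    std::vector<Point> points;   // filled only at leaves
    std::unique_ptr<Node> front, back;
};

std::unique_ptr<Node> build(std::vector<Point> pts, int depth = 0) {
    auto node = std::make_unique<Node>();
    if (pts.size() <= 1) {       // termination requirement satisfied
        node->points = std::move(pts);
        return node;
    }
    node->axis = depth % 2;      // trivial heuristic: alternate axes,
    double sum = 0.0;            // split at the mean coordinate
    for (const Point& p : pts) sum += (node->axis == 0 ? p.x : p.y);
    node->plane = sum / pts.size();

    std::vector<Point> frontPts, backPts;
    for (const Point& p : pts) {
        double c = (node->axis == 0 ? p.x : p.y);
        (c >= node->plane ? frontPts : backPts).push_back(p);
    }
    if (frontPts.empty() || backPts.empty()) {  // degenerate split: stop
        node->points = std::move(pts);
        return node;
    }
    node->front = build(std::move(frontPts), depth + 1);
    node->back  = build(std::move(backPts),  depth + 1);
    return node;
}
```

A production implementation would instead test many candidate planes per node (as described above) and split any primitives crossing the chosen plane.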
BSP trees are also used to represent natural images. Construction methods for BSP trees representing images were<br />
first introduced as efficient representations in which only a few hundred nodes can represent an image that normally
requires hundreds of thousands of pixels. Fast algorithms have also been developed to construct BSP trees of images<br />
using computer vision and signal processing algorithms. These algorithms, in conjunction with advanced entropy<br />
coding and signal approximation approaches, were used to develop image compression methods.<br />
Rendering a scene with visibility information from the BSP tree<br />
BSP trees are used to improve rendering performance in calculating visible triangles for the painter's algorithm for<br />
instance. The tree can be traversed in linear time from an arbitrary viewpoint.<br />
Since the painter's algorithm works by drawing polygons farthest from the eye first, the following code recurses to<br />
the bottom of the tree and draws the polygons. As the recursion unwinds, polygons closer to the eye are drawn over<br />
far polygons. Because the BSP tree already splits polygons into trivial pieces, the hardest part of the painter's<br />
algorithm is already solved: code for back-to-front tree traversal. [1]<br />
traverse_tree(bsp_tree* tree, point eye)<br />
{<br />
   if(tree->empty())<br />
      return;<br />
   location = tree->find_location(eye);<br />
   if(location > 0)      // if eye in front of location<br />
   {<br />
      traverse_tree(tree->back, eye);<br />
      display(tree->polygon_list);<br />
      traverse_tree(tree->front, eye);<br />
   }<br />
   else if(location < 0) // eye behind location<br />
   {<br />
      traverse_tree(tree->front, eye);<br />
      display(tree->polygon_list);<br />
      traverse_tree(tree->back, eye);<br />
   }<br />
   else                  // eye coincidental with partition hyperplane<br />
   {<br />
      traverse_tree(tree->front, eye);<br />
      traverse_tree(tree->back, eye);<br />
   }<br />
}<br />
Other space partitioning structures<br />
BSP trees divide a region of space into two subregions at each node. They are related to quadtrees and octrees, which<br />
divide each region into four or eight subregions, respectively.<br />
Relationship Table<br />
Name                     p   s<br />
Binary Space Partition   1   2<br />
Quadtree                 2   4<br />
Octree                   3   8<br />
where p is the number of dividing planes used, and s is the number of subregions formed.<br />
BSP trees can be used in spaces with any number of dimensions. Quadtrees and octrees are useful for subdividing 2-<br />
and 3-dimensional spaces, respectively. Another kind of tree that behaves somewhat like a quadtree or octree, but is<br />
useful in any number of dimensions, is the kd-tree.<br />
Timeline<br />
• 1969 Schumacker et al. published a report describing how carefully positioned planes in a virtual environment<br />
could be used to accelerate polygon ordering. The technique made use of depth coherence: a polygon on the far<br />
side of a plane cannot, in any way, obstruct a closer polygon. This was used in flight simulators made by GE as<br />
well as Evans and Sutherland. However, creation of the polygonal data organization was performed manually by the<br />
scene designer.<br />
• 1980 Fuchs et al. [FUCH80] extended Schumacker’s idea to the representation of <strong>3D</strong> objects in a virtual<br />
environment by using planes that lie coincident with polygons to recursively partition the <strong>3D</strong> space. This provided<br />
a fully automated and algorithmic generation of a hierarchical polygonal data structure known as a Binary Space<br />
Partitioning Tree (BSP Tree). The process took place as an off-line preprocessing step that was performed once<br />
per environment/object. At run-time, the view-dependent visibility ordering was generated by traversing the tree.<br />
• 1981 Naylor's Ph.D. thesis contained a full development of both BSP trees and a graph-theoretic approach using<br />
strongly connected components for pre-computing visibility, as well as the connection between the two methods. The<br />
use of BSP trees as a dimension-independent spatial search structure was emphasized, with applications to visible<br />
surface determination. The thesis also included the first empirical data demonstrating that the size of the tree<br />
and the number of new polygons were reasonable (using a model of the Space Shuttle).<br />
• 1983 Fuchs et al. describe a micro-code implementation of the BSP tree algorithm on an Ikonas frame buffer<br />
system. This was the first demonstration of real-time visible surface determination using BSP trees.<br />
• 1987 Thibault and Naylor described how arbitrary polyhedra may be represented using a BSP tree as opposed to<br />
the traditional b-rep (boundary representation). This provided a solid representation vs. a surface-based<br />
representation. Set operations on polyhedra were described using a tool, enabling Constructive Solid Geometry<br />
(CSG) in real-time. This was the forerunner of BSP level design using brushes, introduced in the Quake editor and<br />
picked up in the Unreal Editor.<br />
• 1990 Naylor, Amanatides, and Thibault provided an algorithm for merging two BSP trees to form a new BSP tree<br />
from the two original trees. This provides many benefits, including: combining moving objects represented by BSP<br />
trees with a static environment (also represented by a BSP tree), very efficient CSG operations on polyhedra,<br />
exact collision detection in O(log n * log n), and proper ordering of transparent surfaces contained in two<br />
interpenetrating objects (this has been used for an x-ray vision effect).<br />
• 1990 Teller and Séquin proposed the offline generation of potentially visible sets to accelerate visible surface<br />
determination in orthogonal 2D environments.<br />
• 1991 Gordon and Chen [CHEN91] described an efficient method of performing front-to-back rendering from a<br />
BSP tree, rather than the traditional back-to-front approach. They utilised a special data structure to record,<br />
efficiently, parts of the screen that have been drawn, and those yet to be rendered. This algorithm, together with<br />
the description of BSP Trees in the standard computer <strong>graphics</strong> textbook of the day (Foley, Van Dam, Feiner and<br />
Hughes) was used by John Carmack in the making of Doom.<br />
• 1992 Teller’s PhD thesis described the efficient generation of potentially visible sets as a pre-processing step<br />
for accelerating real-time visible surface determination in arbitrary <strong>3D</strong> polygonal environments. This was used in<br />
Quake and contributed significantly to that game's performance.<br />
• 1993 Naylor answers the question of what characterizes a good BSP tree. He used expected case models (rather<br />
than worst case analysis) to mathematically measure the expected cost of searching a tree and used this measure<br />
to build good BSP trees. Intuitively, the tree represents an object in a multi-resolution fashion (more exactly, as a<br />
tree of approximations). Parallels with Huffman codes and probabilistic binary search trees are drawn.<br />
• 1993 Hayder Radha's PhD thesis described (natural) image representation methods using BSP trees. This included<br />
the development of an optimal BSP-tree construction framework for any arbitrary input image. This framework is<br />
based on a new image transform, known as the Least-Square-Error (LSE) Partitioning Line (LPE) transform. Radha's<br />
thesis also developed an optimal rate-distortion (RD) image compression framework and image manipulation<br />
approaches using BSP trees.<br />
References<br />
• [FUCH80] H. Fuchs, Z. M. Kedem and B. F. Naylor. “On Visible Surface Generation by A Priori Tree<br />
Structures.” ACM Computer Graphics, pp 124–133. July 1980.<br />
• [THIBAULT87] W. Thibault and B. Naylor, "Set Operations on Polyhedra Using Binary Space Partitioning<br />
Trees", Computer Graphics (Siggraph '87), 21(4), 1987.<br />
• [NAYLOR90] B. Naylor, J. Amanatides, and W. Thibualt, "Merging BSP Trees Yields Polyhedral Set<br />
Operations", Computer Graphics (Siggraph '90), 24(3), 1990.<br />
• [NAYLOR93] B. Naylor, "Constructing Good Partitioning Trees", Graphics Interface (annual Canadian CG<br />
conference) May, 1993.<br />
• [CHEN91] S. Chen and D. Gordon. “Front-to-Back Display of BSP Trees.” [2] IEEE Computer Graphics &<br />
Algorithms, pp 79–85. September 1991.<br />
• [RADHA91] H. Radha, R. Leoonardi, M. Vetterli, and B. Naylor “Binary Space Partitioning Tree Representation<br />
of Images,” Journal of Visual Communications and Image Processing 1991, vol. 2(3).<br />
• [RADHA93] H. Radha, "Efficient Image Representation using Binary Space Partitioning Trees.", Ph.D. Thesis,<br />
Columbia University, 1993.<br />
• [RADHA96] H. Radha, M. Vetterli, and R. Leoonardi, “Image Compression Using Binary Space Partitioning<br />
Trees,” IEEE Transactions on Image Processing, vol. 5, No.12, December 1996, pp. 1610–1624.<br />
• [WINTER99] AN INVESTIGATION INTO REAL-TIME <strong>3D</strong> POLYGON RENDERING USING BSP TREES.<br />
Andrew Steven Winter. April 1999. available online<br />
• Mark de Berg, Marc van Kreveld, Mark Overmars, and Otfried Schwarzkopf (2000). Computational Geometry<br />
(2nd revised ed.). Springer-Verlag. ISBN 3-540-65620-0. Section 12: Binary Space Partitions, pp. 251–265.<br />
Describes a randomized painter's algorithm.<br />
• Christer Ericson (2005). Real-Time Collision Detection (The Morgan Kaufmann Series in Interactive 3-D<br />
Technology). Morgan Kaufmann, pp. 349–382. ISBN 1-55860-732-3<br />
[1] Binary Space Partition Trees in 3d worlds (http://web.cs.wpi.edu/~matt/courses/cs563/talks/bsp/document.html)<br />
[2] http://www.rothschild.haifa.ac.il/~gordon/ftb-bsp.pdf<br />
External links<br />
• BSP trees presentation (http://www.cs.wpi.edu/~matt/courses/cs563/talks/bsp/bsp.html)<br />
• Another BSP trees presentation (http://www.cc.gatech.edu/classes/AY2004/cs4451a_fall/bsp.pdf)<br />
• A Java applet which demonstrates the process of tree generation (http://symbolcraft.com/graphics/bsp/)<br />
• A Master's thesis about BSP generation (http://www.gamedev.net/reference/programming/features/bsptree/bsp.pdf)<br />
• BSP Trees: Theory and Implementation (http://www.devmaster.net/articles/bsp-trees/)<br />
• BSP in <strong>3D</strong> space (http://www.euclideanspace.com/threed/solidmodel/spatialdecomposition/bsp/index.htm)<br />
• A simple, illustrated introduction to using BSPs to create random room layouts (in this case for a<br />
dungeon-crawling game) (http://doryen.eptalys.net/articles/bsp-dungeon-generation/)<br />
Bounding interval hierarchy<br />
A bounding interval hierarchy (BIH) is a partitioning data structure similar to that of bounding volume hierarchies<br />
or kd-trees. Bounding interval hierarchies can be used in high performance (or real-time) ray tracing and may be<br />
especially useful for dynamic scenes.<br />
The BIH itself is, however, not new. It has appeared earlier under the name of SKD-Trees [1], presented by<br />
Ooi et al., and as BoxTrees [2], independently invented by Zachmann.<br />
Overview<br />
Bounding interval hierarchies (BIH) exhibit many of the properties of both bounding volume hierarchies (BVH) and<br />
kd-trees. Whereas the construction and storage of a BIH is comparable to that of a BVH, the traversal of a BIH<br />
resembles that of a kd-tree. Furthermore, BIHs are also binary trees, just like kd-trees (and in fact their<br />
superset, BSP trees). Finally, BIHs are axis-aligned, as are their ancestors. Although a more general<br />
non-axis-aligned implementation of the BIH should be possible (similar to the BSP tree, which uses unaligned<br />
planes), it would almost certainly be less desirable due to decreased numerical stability and increased complexity<br />
of ray traversal.<br />
The key feature of the BIH is the storage of 2 planes per node (as opposed to 1 for the kd tree and 6 for an axis<br />
aligned bounding box hierarchy), which allows for overlapping children (just like a BVH), but at the same time<br />
featuring an order on the children along one dimension/axis (as it is the case for kd trees).<br />
It is also possible to just use the BIH data structure for the construction phase but traverse the tree in a way a<br />
traditional axis aligned bounding box hierarchy does. This enables some simple speed up optimizations for large ray<br />
bundles [3] while keeping memory/cache usage low.<br />
Some general attributes of bounding interval hierarchies (and techniques related to BIH) as described by [4] are:<br />
• Very fast construction times<br />
• Low memory footprint<br />
• Simple and fast traversal<br />
• Very simple construction and traversal algorithms<br />
• High numerical precision during construction and traversal<br />
• Flatter tree structure (decreased tree depth) compared to kd-trees
Operations<br />
Construction<br />
To construct any space partitioning structure some form of heuristic is commonly used. For this the surface area<br />
heuristic, commonly used with many partitioning schemes, is a possible candidate. Another, more simplistic<br />
heuristic is the "global" heuristic described by [4] which only requires an axis-aligned bounding box, rather than the<br />
full set of primitives, making it much more suitable for a fast construction.<br />
The general construction scheme for a BIH:<br />
• calculate the scene bounding box<br />
• use a heuristic to choose one axis and a split plane candidate perpendicular to this axis<br />
• sort the objects to the left or right child (exclusively) depending on the bounding box of the object (note that<br />
objects intersecting the split plane may either be sorted by its overlap with the child volumes or any other<br />
heuristic)<br />
• calculate the maximum bounding value of all objects on the left and the minimum bounding value of those on the<br />
right for that axis (can be combined with previous step for some heuristics)<br />
• store these 2 values along with 2 bits encoding the split axis in a new node<br />
• continue with step 2 for the children<br />
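The sorting step above, restricted to a single axis, might be sketched as follows. The midpoint-based sorting rule and the names are illustrative assumptions: each object is reduced to its bounding interval [min, max] on the chosen axis, objects are sorted exclusively left or right of the split candidate, and the two clip values to be stored in the node are accumulated on the fly.

```cpp
#include <algorithm>
#include <limits>
#include <vector>

struct Interval { double min, max; };  // object bounds on the split axis

struct SplitResult {
    std::vector<Interval> left, right;
    double leftMax  = -std::numeric_limits<double>::infinity();
    double rightMin =  std::numeric_limits<double>::infinity();
};

SplitResult partitionObjects(const std::vector<Interval>& objs, double splitCandidate) {
    SplitResult r;
    for (const Interval& o : objs) {
        if (0.5 * (o.min + o.max) < splitCandidate) {  // sort exclusively left...
            r.left.push_back(o);
            r.leftMax = std::max(r.leftMax, o.max);    // ...growing the left clip plane
        } else {                                       // ...or exclusively right
            r.right.push_back(o);
            r.rightMin = std::min(r.rightMin, o.min);  // ...growing the right clip plane
        }
    }
    return r;  // leftMax and rightMin, plus the axis bits, go into the new node
}
```

Construction then recurses on `left` and `right` with new split candidates, exactly as in the scheme above.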
Potential heuristics for the split plane candidate search:<br />
• Classical: pick the longest axis and the middle of the node bounding box on that axis<br />
• Classical: pick the longest axis and a split plane through the median of the objects (though this results in a<br />
left-heavy tree, which is often unfortunate for ray tracing)<br />
• Global heuristic: pick the split plane based on a global criterion, in the form of a regular grid (avoids<br />
unnecessary splits and keeps node volumes as cubic as possible)<br />
• Surface area heuristic: calculate the surface area and the number of objects for both children, over the set of<br />
all possible split plane candidates, then choose the one with the lowest cost (claimed to be optimal, though the<br />
cost function poses unusual demands on proving the formula that cannot be fulfilled in real life; it is also an<br />
exceptionally slow heuristic to evaluate)<br />
Ray traversal<br />
The traversal phase closely resembles a kd-tree traversal: one has to distinguish 4 simple cases, where the ray<br />
• just intersects the left child<br />
• just intersects the right child<br />
• intersects both children<br />
• intersects neither child (the only case that is not possible in a kd-tree traversal)<br />
For the third case, depending on the sign (negative or positive) of the ray direction component (x, y or z) that<br />
matches the split axis of the current node, the traversal continues first with the left (positive direction) or<br />
the right (negative direction) child, and the other one is pushed onto a stack.<br />
Traversal continues until a leaf node is found. After intersecting the objects in the leaf, the next element is<br />
popped from the stack. If the stack is empty, the nearest intersection of all pierced leaves is returned.<br />
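The four cases can be sketched as a small classification function. This is a simplified sketch for a ray travelling in the positive direction along the node's split axis, with `tNear`/`tFar` bounding the active ray segment and `tLeft`/`tRight` the (hypothetically precomputed) ray parameters at the node's two clipping planes.

```cpp
enum class Visit { None, LeftOnly, RightOnly, Both };

Visit classify(double tNear, double tFar, double tLeft, double tRight) {
    bool hitLeft  = tNear <= tLeft;   // segment begins before the left clip plane
    bool hitRight = tFar  >= tRight;  // segment reaches past the right clip plane
    if (hitLeft && hitRight) return Visit::Both;  // near child first, far child on the stack
    if (hitLeft)             return Visit::LeftOnly;
    if (hitRight)            return Visit::RightOnly;
    return Visit::None;  // ray passes through the gap between the overlappable children
}
```

For a ray travelling in the negative direction the roles of the two planes swap, which yields the visiting order described above.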
It is also possible to add a 5th traversal case, though it requires a slightly more complicated construction<br />
phase. By swapping the meanings of the left and right plane of a node, it is possible to cut off empty space on<br />
both sides of a node. This requires an additional bit to be stored in the node so that this special case can be<br />
detected during traversal. Handling the case during the traversal phase is simple, as the ray<br />
• just intersects the only child of the current node, or<br />
• intersects nothing
Properties<br />
Numerical stability<br />
All operations during the hierarchy construction/sorting of the triangles are min/max operations and comparisons.<br />
Thus, no triangle clipping has to be done, as is the case with kd-trees, where clipping can become a problem for<br />
triangles that just slightly intersect a node. Even if the kd-tree implementation is carefully written, numerical<br />
errors can result in a non-detected intersection and thus rendering errors (holes in the geometry) due to the<br />
missed ray-object intersection.<br />
Extensions<br />
Instead of using two planes per node to separate geometry, it is also possible to use any number of planes to<br />
create an n-ary BIH, or to use multiple planes in a standard binary BIH (one and four planes per node were<br />
already proposed in [4] and then properly evaluated in [5]) to achieve better object separation.<br />
References<br />
Papers<br />
[1] Nam, Beomseok; Sussman, Alan. A comparative study of spatial indexing techniques for multidimensional<br />
scientific datasets (http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel5/9176/29111/01311209.pdf)<br />
[2] Zachmann, Gabriel. Minimal Hierarchical Collision Detection (http://zach.in.tu-clausthal.de/papers/vrst02.html)<br />
[3] Wald, Ingo; Boulos, Solomon; Shirley, Peter (2007). Ray Tracing Deformable Scenes using Dynamic Bounding<br />
Volume Hierarchies (http://www.sci.utah.edu/~wald/Publications/2007/BVH/download/togbvh.pdf)<br />
[4] Wächter, Carsten; Keller, Alexander (2006). Instant Ray Tracing: The Bounding Interval Hierarchy<br />
(http://ainc.de/Research/BIH.pdf)<br />
[5] Wächter, Carsten (2008). Quasi-Monte Carlo Light Transport Simulation by Efficient Ray Tracing<br />
(http://vts.uni-ulm.de/query/longview.meta.asp?document_id=6265)<br />
Forums<br />
• http://ompf.org/forum<br />
External links<br />
• BIH implementations: Javascript (http://github.com/imbcmdth/jsBIH).
Bounding volume<br />
For building code compliance, see Bounding.<br />
In computer graphics and computational geometry, a bounding<br />
volume for a set of objects is a closed volume that completely contains<br />
the union of the objects in the set. Bounding volumes are used to<br />
improve the efficiency of geometrical operations by using simple<br />
volumes to contain more complex objects. Normally, simpler volumes<br />
have simpler ways to test for overlap.<br />
A bounding volume for a set of objects is also a bounding volume for<br />
the single object consisting of their union, and the other way around.<br />
Therefore it is possible to confine the description to the case of a single<br />
object, which is assumed to be non-empty and bounded (finite).<br />
Uses of bounding volumes<br />
Bounding volumes are most often used to accelerate certain kinds of tests.<br />
A three dimensional model with its bounding box<br />
drawn in dashed lines.<br />
In ray tracing, bounding volumes are used in ray-intersection tests, and in many rendering algorithms, they are used<br />
for viewing frustum tests. If the ray or viewing frustum does not intersect the bounding volume, it cannot intersect<br />
the object contained in the volume. These intersection tests produce a list of objects that must be displayed. Here,<br />
displayed means rendered or rasterized.<br />
In collision detection, when two bounding volumes do not intersect, then the contained objects cannot collide, either.<br />
Testing against a bounding volume is typically much faster than testing against the object itself, because of the<br />
bounding volume's simpler geometry. This is because an 'object' is typically composed of polygons or data structures<br />
that are reduced to polygonal approximations. In either case, it is computationally wasteful to test each polygon<br />
against the view volume if the object is not visible. (Onscreen objects must be 'clipped' to the screen, regardless of<br />
whether their surfaces are actually visible.)<br />
To obtain bounding volumes of complex objects, a common approach is to break the objects/scene down using a scene<br />
graph or, more specifically, bounding volume hierarchies such as OBB trees. The basic idea is to organize<br />
a scene in a tree-like structure where the root comprises the whole scene and each leaf contains a smaller subpart.<br />
Common types of bounding volume<br />
The choice of the type of bounding volume for a given application is determined by a variety of factors: the<br />
computational cost of computing a bounding volume for an object, the cost of updating it in applications in which<br />
the objects can move or change shape or size, the cost of determining intersections, and the desired precision of the<br />
intersection test. The precision of the intersection test is related to the amount of space within the bounding volume<br />
not associated with the bounded object, called void space. Sophisticated bounding volumes generally allow for less<br />
void space but are more computationally expensive. It is common to use several types together, such as a<br />
cheap one for a quick but rough test in conjunction with a more precise but also more expensive type.<br />
The types treated here all give convex bounding volumes. If the object being bounded is known to be convex, this is<br />
not a restriction. If non-convex bounding volumes are required, an approach is to represent them as a union of a<br />
number of convex bounding volumes. Unfortunately, intersection tests quickly become more expensive as the<br />
bounding volumes become more sophisticated.<br />
A bounding sphere is a sphere containing the object. In 2-D graphics, this is a circle. Bounding spheres are<br />
represented by centre and radius. They are very quick to test for collision with each other: two spheres intersect<br />
when the distance between their centres does not exceed the sum of their radii. This makes bounding spheres<br />
appropriate for objects that can move in any number of dimensions.<br />
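The sphere–sphere test described above can be written directly; comparing squared distances avoids a square root. A minimal sketch (the function name is illustrative), which works in any number of dimensions:

```python
def spheres_intersect(c1, r1, c2, r2):
    """Two bounding spheres overlap iff the distance between their
    centres does not exceed the sum of their radii. Squared distances
    are compared to avoid the square root."""
    d2 = sum((a - b) ** 2 for a, b in zip(c1, c2))
    return d2 <= (r1 + r2) ** 2
```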
A bounding ellipsoid is an ellipsoid containing the object. Ellipsoids usually provide tighter fitting than a sphere.<br />
Intersections with ellipsoids are done by scaling the other object along the principal axes of the ellipsoid by an<br />
amount equal to the multiplicative inverse of the radii of the ellipsoid, thus reducing the problem to intersecting the<br />
scaled object with a unit sphere. Care should be taken to avoid problems if the applied scaling introduces skew.<br />
Skew can make the usage of ellipsoids impractical in certain cases, for example collision between two arbitrary<br />
ellipsoids.<br />
A bounding cylinder is a cylinder containing the object. In most applications the axis of the cylinder is aligned with<br />
the vertical direction of the scene. Cylinders are appropriate for 3-D objects that can only rotate about a vertical axis<br />
but not about other axes, and are otherwise constrained to move by translation only. Two vertical-axis-aligned<br />
cylinders intersect when, simultaneously, their projections on the vertical axis intersect – which are two line<br />
segments – as well their projections on the horizontal plane – two circular disks. Both are easy to test. In video<br />
games, bounding cylinders are often used as bounding volumes for people standing upright.<br />
A bounding capsule is a swept sphere (i.e. the volume that a sphere takes as it moves along a straight line segment)<br />
containing the object. Capsules can be represented by the radius of the swept sphere and the segment that the sphere<br />
is swept across. A capsule has traits similar to a cylinder, but is easier to use, because the intersection test is simpler. A<br />
capsule and another object intersect if the distance between the capsule's defining segment and some feature of the<br />
other object is smaller than the capsule's radius. For example, two capsules intersect if the distance between the<br />
capsules' segments is smaller than the sum of their radii. This holds for arbitrarily rotated capsules, which is why<br />
they're more appealing than cylinders in practice.<br />
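The capsule test described above reduces to a point-to-segment distance comparison. A hedged sketch, here testing a capsule against a sphere (the helper names are illustrative):

```python
def closest_point_on_segment(p, a, b):
    """Closest point to p on the segment from a to b (3-D tuples)."""
    ab = [bi - ai for ai, bi in zip(a, b)]
    ap = [pi - ai for ai, pi in zip(a, p)]
    denom = sum(c * c for c in ab)
    # clamp the projection parameter to the segment
    t = 0.0 if denom == 0.0 else max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / denom))
    return [ai + t * c for ai, c in zip(a, ab)]

def capsule_sphere_intersect(seg_a, seg_b, cap_r, centre, sph_r):
    """A capsule and a sphere intersect iff the distance from the sphere
    centre to the capsule's defining segment is at most cap_r + sph_r."""
    q = closest_point_on_segment(centre, seg_a, seg_b)
    d2 = sum((ci - qi) ** 2 for ci, qi in zip(centre, q))
    return d2 <= (cap_r + sph_r) ** 2
```

The capsule–capsule test works the same way, with the segment-to-segment distance in place of the point-to-segment distance.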
A bounding box is a cuboid, or in 2-D a rectangle, containing the object. In dynamical simulation, bounding boxes<br />
are preferred to other shapes of bounding volume such as bounding spheres or cylinders for objects that are roughly<br />
cuboid in shape when the intersection test needs to be fairly accurate. The benefit is obvious, for example, for objects<br />
that rest upon others, such as a car resting on the ground: a bounding sphere would show the car as possibly<br />
intersecting with the ground, which then would need to be rejected by a more expensive test of the actual model of<br />
the car; a bounding box immediately shows the car as not intersecting with the ground, saving the more expensive<br />
test.<br />
In many applications the bounding box is aligned with the axes of the co-ordinate system, and it is then known as an<br />
axis-aligned bounding box (AABB). To distinguish the general case from an AABB, an arbitrary bounding box is<br />
sometimes called an oriented bounding box (OBB). AABBs are much simpler to test for intersection than OBBs,<br />
but have the disadvantage that when the model is rotated they cannot be simply rotated with it, but need to be<br />
recomputed.<br />
A bounding slab is related to the AABB and is used to speed up ray tracing. [1]<br />
A minimum bounding rectangle or MBR – the least AABB in 2-D – is frequently used in the description of<br />
geographic (or "geospatial") data items, serving as a simplified proxy for a dataset's spatial extent (see geospatial<br />
metadata) for the purpose of data search (including spatial queries as applicable) and display. It is also a basic<br />
component of the R-tree method of spatial indexing.<br />
A discrete oriented polytope (DOP) generalizes the AABB. A DOP is a convex polytope containing the object (in<br />
2-D a polygon; in 3-D a polyhedron), constructed by taking a number of suitably oriented planes at infinity and<br />
moving them until they collide with the object. The DOP is then the convex polytope resulting from intersection of<br />
the half-spaces bounded by the planes. Popular choices for constructing DOPs in 3-D graphics include the<br />
axis-aligned bounding box, made from 6 axis-aligned planes, and the beveled bounding box, made from 10 planes (if<br />
beveled only on vertical edges, say), 18 planes (if beveled on all edges), or 26 planes (if beveled on all edges and corners). A<br />
DOP constructed from k planes is called a k-DOP; the actual number of faces can be less than k, since some can<br />
become degenerate, shrunk to an edge or a vertex.<br />
A convex hull is the smallest convex volume containing the object. If the object is the union of a finite set of points,<br />
its convex hull is a polytope.<br />
Basic intersection checks<br />
For some types of bounding volume (OBB and convex polyhedra), an effective check is that of the separating axis<br />
theorem. The idea here is that, if there exists an axis by which the objects do not overlap, then the objects do not<br />
intersect. Usually the axes checked are those of the basic axes for the volumes (the unit axes in the case of an AABB,<br />
or the 3 base axes from each OBB in the case of OBBs). Often, this is followed by also checking the cross-products<br />
of the previous axes (one axis from each object).<br />
In the case of an AABB, this test becomes a simple set of overlap tests in terms of the unit axes. For an AABB<br />
defined by its minimum and maximum corners M, N against one defined by O, P, they do not intersect if (M_x > P_x) or (O_x > N_x) or (M_y > P_y) or (O_y > N_y) or<br />
(M_z > P_z) or (O_z > N_z).<br />
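The per-axis comparisons above can be sketched in a few lines (M, N and O, P are the min/max corners of the two boxes; the function name is illustrative):

```python
def aabb_overlap(m, n, o, p):
    """Two AABBs overlap unless they are separated along some axis,
    i.e. unless M_i > P_i or O_i > N_i for some coordinate i."""
    return all(mi <= pi and oi <= ni for mi, ni, oi, pi in zip(m, n, o, p))
```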
An AABB can also be projected along an axis. For example, if it has edges of length L and is centered at C, and is<br />
being projected along the axis N:<br />
r = 0.5 L_x |N_x| + 0.5 L_y |N_y| + 0.5 L_z |N_z|, and b = C · N, or m = b − r and n = b + r,<br />
where m and n are the minimum and maximum extents.<br />
An OBB is similar in this respect, but is slightly more complicated. For an OBB with L and C as above, and with I, J,<br />
and K as the OBB's base axes, then:<br />
r = 0.5 L_x |N · I| + 0.5 L_y |N · J| + 0.5 L_z |N · K|<br />
For the ranges m,n and o,p it can be said that they do not intersect if m > p or o > n. Thus, by projecting the ranges of 2<br />
OBBs along the I, J, and K axes of each OBB, and checking for non-intersection, it is possible to detect<br />
non-intersection. By additionally checking along the cross products of these axes (I_0 × I_1, I_0 × J_1, ...) one can be more<br />
certain that intersection is impossible.<br />
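The projection of an OBB onto an axis, and the resulting interval test, can be sketched as follows. This is a sketch under stated assumptions: the code works with half-extents h_i = 0.5 L_i, the function names are illustrative, and `separated_along` checks a single candidate axis (a full separating-axis test iterates over all face axes and cross products):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def obb_interval(centre, half_extents, axes, n):
    """Project an OBB onto direction n: the interval [b - r, b + r]
    with b = C.N and r = sum of h_i * |N . axis_i|."""
    b = dot(centre, n)
    r = sum(h * abs(dot(n, ax)) for h, ax in zip(half_extents, axes))
    return b - r, b + r

def separated_along(n, obb1, obb2):
    """True if the two projected intervals are disjoint (m > p or o > n),
    i.e. n is a separating axis for the two OBBs."""
    m1, n1 = obb_interval(*obb1, n)
    m2, n2 = obb_interval(*obb2, n)
    return m1 > n2 or m2 > n1
```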
This concept of determining non-intersection via axis projection also extends to convex polyhedra; however,<br />
the normals of each polyhedral face are used instead of the base axes, and the extents are based on the<br />
minimum and maximum dot products of each vertex against the axes. Note that this description assumes the checks<br />
are being done in world space.<br />
References<br />
[1] POV-Ray Documentation (http://www.povray.org/documentation/view/3.6.1/323/)<br />
External links<br />
• Illustration of several DOPs for the same model, from epicgames.com (http://udn.epicgames.com/Two/rsrc/Two/CollisionTutorial/kdop_sizes.jpg)
Bump mapping<br />
Bump mapping is a technique in computer graphics for simulating bumps and wrinkles on the surface of an object. This is<br />
achieved by perturbing the surface normals of the object and using the perturbed normal during lighting calculations. The<br />
result is an apparently bumpy surface rather than a smooth surface, although the surface of the underlying object is not<br />
actually changed. Bump mapping was introduced by Blinn in 1978. [1]<br />
Normal mapping is the most common variation of bump mapping used. [2]<br />
Bump mapping basics<br />
Bump mapping is a technique in computer graphics to make a rendered surface look more realistic by simulating small<br />
displacements of the surface. However, unlike traditional displacement mapping, the surface geometry is not modified.<br />
Instead, only the surface normal is modified as if the surface had been displaced. The modified surface normal is then<br />
used for lighting calculations as usual, typically using the Phong reflection model or similar, giving the appearance of<br />
detail instead of a smooth surface.<br />
Bump mapping is much faster and consumes fewer resources for the same level of detail compared to<br />
displacement mapping because the geometry remains unchanged.<br />
A sphere without bump mapping (left). A bump map to be applied to the sphere (middle).<br />
The sphere with the bump map applied (right) appears to have a mottled surface<br />
resembling an orange. Bump maps achieve this effect by changing how an illuminated<br />
surface reacts to light without actually modifying the size or shape of the surface<br />
Bump mapping is limited in that it does not actually modify the shape of the underlying<br />
object. On the left, a mathematical function defining a bump map simulates a crumbling<br />
surface on a sphere, but the object's outline and shadow remain those of a perfect sphere.<br />
On the right, the same function is used to modify the surface of a sphere by generating an<br />
isosurface. This actually models a sphere with a bumpy surface with the result that both<br />
its outline and its shadow are rendered realistically.<br />
There are primarily two methods to perform bump mapping. The first uses a height map for simulating the surface<br />
displacement yielding the modified normal. This is the method invented by Blinn [1] and is usually what is referred to<br />
as bump mapping unless otherwise specified. The steps of this method are summarized as follows.<br />
Before a lighting calculation is performed for each visible point (or pixel) on the object's surface:<br />
1. Look up the height in the heightmap that corresponds to the position on the surface.<br />
2. Calculate the surface normal of the heightmap, typically using the finite difference method.<br />
3. Combine the surface normal from step two with the true ("geometric") surface normal so that the combined<br />
normal points in a new direction.
Bump mapping 27<br />
4. Calculate the interaction of the new "bumpy" surface with lights in the scene using, for example, the Phong<br />
reflection model.<br />
The result is a surface that appears to have real depth. The algorithm also ensures that the surface appearance<br />
changes as lights in the scene are moved around.<br />
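Steps one, two, and four of the height-map method can be sketched for the simplest case of a flat surface in tangent space. This is a simplification: on a real mesh the perturbed normal must still be combined with the geometric normal (step three), typically via a tangent frame, and `scale` is an illustrative bump-strength parameter:

```python
import math

def bumped_normal(height, x, y, scale=1.0):
    """Perturb the flat normal (0, 0, 1) using central differences on a
    2-D height map given as a list of rows (finite difference method)."""
    dhdx = (height[y][x + 1] - height[y][x - 1]) * 0.5
    dhdy = (height[y + 1][x] - height[y - 1][x]) * 0.5
    n = (-scale * dhdx, -scale * dhdy, 1.0)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

def diffuse(normal, light_dir):
    # Lambertian term of e.g. the Phong model: max(0, N . L)
    return max(0.0, sum(a * b for a, b in zip(normal, light_dir)))
```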
The other method is to specify a normal map which contains the modified normal for each point on the surface<br />
directly. Since the normal is specified directly instead of being derived from a height map, this method usually leads to<br />
more predictable results. This makes it easier for artists to work with, and it is therefore the most common method of bump<br />
mapping today. [2]<br />
There are also extensions that modify other surface features in addition to the normal, further increasing the sense of depth. Parallax<br />
mapping is one such extension.<br />
The primary limitation with bump mapping is that it perturbs only the surface normals without changing the<br />
underlying surface itself. [3] Silhouettes and shadows therefore remain unaffected, which is especially noticeable for<br />
larger simulated displacements. This limitation can be overcome by techniques such as displacement mapping,<br />
where bumps are actually applied to the surface, or by using an isosurface.<br />
Realtime bump mapping techniques<br />
Realtime 3D graphics programmers often use variations of the technique in order to simulate bump mapping at a<br />
lower computational cost.<br />
One typical way was to use fixed geometry, which allows one to use the heightmap surface normal almost directly.<br />
Combined with a precomputed lookup table for the lighting calculations, the method could be implemented with a<br />
very simple and fast loop, allowing for a full-screen effect. This method was a common visual effect when bump<br />
mapping was first introduced.<br />
References<br />
[1] Blinn, James F. "Simulation of Wrinkled Surfaces" (http://portal.acm.org/citation.cfm?id=507101), Computer Graphics, Vol. 12 (3),<br />
pp. 286-292, SIGGRAPH-ACM (August 1978)<br />
[2] Mikkelsen, Morten. Simulation of Wrinkled Surfaces Revisited (http://image.diku.dk/projects/media/morten.mikkelsen.08.pdf), 2008 (PDF)<br />
[3] Real-Time Bump Map Synthesis (http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/rtbumpmapHWWS01.pdf), Jan Kautz¹, Wolfgang<br />
Heidrich² and Hans-Peter Seidel¹ (¹Max-Planck-Institut für Informatik, ²University of British Columbia)<br />
External links<br />
• Bump shading for volume textures (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=291525), Max,<br />
N.L., Becker, B.G., Computer Graphics and Applications, IEEE, Jul 1994, Volume 14, Issue 4, pages 18-20,<br />
ISSN 0272-1716<br />
• Bump Mapping tutorial using CG and C++ (http://www.blacksmith-studios.dk/projects/downloads/bumpmapping_using_cg.php)<br />
• Simple creating vectors per pixel of a grayscale for a bump map to work and more (http://freespace.virgin.net/hugo.elias/graphics/x_polybm.htm)<br />
• Bump Mapping example (http://www.neilwallis.com/java/bump2.htm) (Java applet)
Catmull–Clark subdivision surface<br />
The Catmull–Clark algorithm is used in computer <strong>graphics</strong> to create<br />
smooth surfaces by subdivision surface modeling. It was devised by<br />
Edwin Catmull and Jim Clark in 1978 as a generalization of bi-cubic<br />
uniform B-spline surfaces to arbitrary topology. [1] In 2005, Edwin<br />
Catmull received an Academy Award for Technical Achievement<br />
together with Tony DeRose and Jos Stam for their invention and<br />
application of subdivision surfaces.<br />
Recursive evaluation<br />
Catmull–Clark surfaces are defined recursively, using the following<br />
refinement scheme: [1]<br />
Start with a mesh of an arbitrary polyhedron. All the vertices in this<br />
mesh shall be called original points.<br />
• For each face, add a face point<br />
• Set each face point to be the centroid of all original points for the<br />
respective face.<br />
• For each edge, add an edge point.<br />
First three steps of Catmull–Clark subdivision of<br />
a cube with subdivision surface below<br />
• Set each edge point to be the average of the two neighbouring face points and its two original endpoints.<br />
• For each face point, add an edge for every edge of the face, connecting the face point to each edge point for the<br />
face.<br />
• For each original point P, take the average F of all n face points for faces touching P, and take the average R of all<br />
n edge midpoints for edges touching P, where each edge midpoint is the average of its two endpoint vertices.<br />
Move each original point to the point<br />
(F + 2R + (n − 3)P) / n<br />
(This is the barycenter of P, R and F with respective weights (n − 3), 2 and 1. This arbitrary-looking formula was<br />
chosen by Catmull and Clark based on the aesthetic appearance of the resulting surfaces rather than on a<br />
mathematical derivation.)<br />
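The vertex-repositioning rule — the barycentric combination of P, R, and F with weights (n − 3), 2, and 1 described above — can be sketched as follows (the function name is illustrative; points are coordinate tuples):

```python
def moved_original_point(P, face_points, edge_midpoints):
    """New position of an original vertex P: (F + 2R + (n - 3)P) / n,
    where F is the average of the n adjacent face points and R is the
    average of the n incident edge midpoints."""
    n = len(face_points)
    avg = lambda pts: [sum(c) / len(pts) for c in zip(*pts)]
    F, R = avg(face_points), avg(edge_midpoints)
    return [(f + 2 * r + (n - 3) * p) / n for f, r, p in zip(F, R, P)]
```

For a corner vertex of a unit cube (n = 3), the weight on P vanishes and the vertex is pulled toward the interior, which is what rounds the cube off in the figure.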
The new mesh will consist only of quadrilaterals, which won't in general be planar. The new mesh will generally<br />
look smoother than the old mesh.<br />
Repeated subdivision results in smoother meshes. It can be shown that the limit surface obtained by this refinement<br />
process is at least C¹ at extraordinary vertices and C² everywhere else (when n indicates how many derivatives are<br />
continuous, we speak of Cⁿ continuity). After one iteration, the number of extraordinary points on the surface<br />
remains constant.
Exact evaluation<br />
The limit surface of Catmull–Clark subdivision surfaces can also be evaluated directly, without any recursive<br />
refinement. This can be accomplished by means of the technique of Jos Stam [2] . This method reformulates the<br />
recursive refinement process into a matrix exponential problem, which can be solved directly by means of matrix<br />
diagonalization.<br />
Software using Catmull–Clark subdivision surfaces<br />
• 3ds Max<br />
• 3D-Coat<br />
• AC3D<br />
• Anim8or<br />
• Blender<br />
• Carrara<br />
• CATIA (Imagine and Shape)<br />
• Cheetah3D<br />
• Cinema4D<br />
• DAZ Studio, 2.0<br />
• DeleD Pro<br />
• Gelato<br />
• Hexagon<br />
• Houdini<br />
• JPatch<br />
• K-3D<br />
• LightWave 3D, version 9<br />
• Maya<br />
• modo<br />
• Mudbox<br />
• Realsoft3D<br />
• Remo 3D<br />
• Shade<br />
• Silo<br />
• SketchUp (requires a plugin)<br />
• Softimage XSI<br />
• Strata 3D CX<br />
• Vue 9<br />
• Wings 3D<br />
• Zbrush<br />
• TopMod<br />
• TopoGun
References<br />
[1] E. Catmull and J. Clark: Recursively generated B-spline surfaces on arbitrary topological meshes, Computer-Aided Design 10(6):350-355<br />
(November 1978) (doi (http://dx.doi.org/10.1016/0010-4485(78)90110-0), pdf (http://www.idi.ntnu.no/~fredrior/files/Catmull-Clark 1978 Recursively generated surfaces.pdf))<br />
[2] Jos Stam, Exact Evaluation of Catmull–Clark Subdivision Surfaces at Arbitrary Parameter Values, Proceedings of SIGGRAPH '98. In<br />
Computer Graphics Proceedings, ACM SIGGRAPH, 1998, 395-404 (pdf (http://www.dgp.toronto.edu/people/stam/reality/Research/pdf/sig98.pdf), downloadable eigenstructures (http://www.dgp.toronto.edu/~stam/reality/Research/SubdivEval/index.html))<br />
Conversion between quaternions and Euler<br />
angles<br />
Spatial rotations in three dimensions can be parametrized using both Euler angles and unit quaternions. This article<br />
explains how to convert between the two representations. Actually, this simple use of "quaternions" was first<br />
presented by Euler, some seventy years before Hamilton, to solve the problem of magic squares. For this reason<br />
the dynamics community commonly refers to quaternions in this application as "Euler parameters".<br />
Definition<br />
A unit quaternion can be described as:<br />
q = q_0 + q_1 i + q_2 j + q_3 k, with q_0² + q_1² + q_2² + q_3² = 1.<br />
We can associate a quaternion with a rotation around an axis by the following expression:<br />
q_0 = cos(α/2), q_1 = sin(α/2) cos(β_x), q_2 = sin(α/2) cos(β_y), q_3 = sin(α/2) cos(β_z),<br />
where α is a simple rotation angle (the value in radians of the angle of rotation) and cos(β_x), cos(β_y) and cos(β_z) are<br />
the "direction cosines" locating the axis of rotation (Euler's Theorem).<br />
Rotation matrices<br />
Euler angles – The xyz (fixed) system is shown in blue, the XYZ (rotated) system<br />
is shown in red. The line of nodes, labelled N, is shown in green.<br />
The orthogonal matrix (post-multiplying a column vector) corresponding to a clockwise/left-handed rotation by the<br />
unit quaternion q = q_0 + q_1 i + q_2 j + q_3 k is given by the inhomogeneous expression<br />
[ 1 − 2(q_2² + q_3²)      2(q_1 q_2 + q_0 q_3)   2(q_1 q_3 − q_0 q_2) ]<br />
[ 2(q_1 q_2 − q_0 q_3)   1 − 2(q_1² + q_3²)      2(q_2 q_3 + q_0 q_1) ]<br />
[ 2(q_1 q_3 + q_0 q_2)   2(q_2 q_3 − q_0 q_1)   1 − 2(q_1² + q_2²) ]<br />
or equivalently, by the homogeneous expression<br />
[ q_0² + q_1² − q_2² − q_3²   2(q_1 q_2 + q_0 q_3)         2(q_1 q_3 − q_0 q_2) ]<br />
[ 2(q_1 q_2 − q_0 q_3)        q_0² − q_1² + q_2² − q_3²    2(q_2 q_3 + q_0 q_1) ]<br />
[ 2(q_1 q_3 + q_0 q_2)        2(q_2 q_3 − q_0 q_1)         q_0² − q_1² − q_2² + q_3² ]<br />
If q is not a unit quaternion then the homogeneous form is still a scalar multiple of a rotation<br />
matrix, while the inhomogeneous form is in general no longer an orthogonal matrix. This is why in numerical work<br />
the homogeneous form is to be preferred if distortion is to be avoided.<br />
The orthogonal matrix (post-multiplying a column vector) corresponding to a clockwise/left-handed rotation with<br />
Euler angles φ, θ, ψ, with x-y-z convention, is given by:<br />
[ cos θ cos ψ                           cos θ sin ψ                           −sin θ ]<br />
[ sin φ sin θ cos ψ − cos φ sin ψ    sin φ sin θ sin ψ + cos φ cos ψ    sin φ cos θ ]<br />
[ cos φ sin θ cos ψ + sin φ sin ψ    cos φ sin θ sin ψ − sin φ cos ψ    cos φ cos θ ]<br />
Conversion<br />
By combining the quaternion representations of the Euler rotations we get<br />
q_0 = cos(φ/2) cos(θ/2) cos(ψ/2) + sin(φ/2) sin(θ/2) sin(ψ/2)<br />
q_1 = sin(φ/2) cos(θ/2) cos(ψ/2) − cos(φ/2) sin(θ/2) sin(ψ/2)<br />
q_2 = cos(φ/2) sin(θ/2) cos(ψ/2) + sin(φ/2) cos(θ/2) sin(ψ/2)<br />
q_3 = cos(φ/2) cos(θ/2) sin(ψ/2) − sin(φ/2) sin(θ/2) cos(ψ/2)<br />
For Euler angles we get:<br />
φ = arctan( 2(q_0 q_1 + q_2 q_3) / (1 − 2(q_1² + q_2²)) )<br />
θ = arcsin( 2(q_0 q_2 − q_3 q_1) )<br />
ψ = arctan( 2(q_0 q_3 + q_1 q_2) / (1 − 2(q_2² + q_3²)) )<br />
arctan and arcsin have a result between −π/2 and π/2. With three rotations between −π/2 and π/2 you cannot obtain all<br />
possible orientations, so the arctan must be replaced by atan2 (applied to the numerator and denominator above) to generate all orientations.<br />
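Both directions of the conversion can be sketched directly in code, with atan2 in place of arctan so that all orientations are recovered (the function names are illustrative; φ, θ, ψ are roll, pitch, yaw in the x-y-z convention):

```python
import math

def euler_to_quaternion(phi, theta, psi):
    """Roll phi, pitch theta, yaw psi -> unit quaternion (q0, q1, q2, q3)."""
    cphi, sphi = math.cos(phi / 2), math.sin(phi / 2)
    cth, sth = math.cos(theta / 2), math.sin(theta / 2)
    cpsi, spsi = math.cos(psi / 2), math.sin(psi / 2)
    return (cphi * cth * cpsi + sphi * sth * spsi,
            sphi * cth * cpsi - cphi * sth * spsi,
            cphi * sth * cpsi + sphi * cth * spsi,
            cphi * cth * spsi - sphi * sth * cpsi)

def quaternion_to_euler(q):
    """Inverse conversion. atan2 recovers the full angle range, and the
    asin argument is clamped to guard against rounding just past +/-1."""
    q0, q1, q2, q3 = q
    phi = math.atan2(2 * (q0 * q1 + q2 * q3), 1 - 2 * (q1 * q1 + q2 * q2))
    theta = math.asin(max(-1.0, min(1.0, 2 * (q0 * q2 - q3 * q1))))
    psi = math.atan2(2 * (q0 * q3 + q1 * q2), 1 - 2 * (q2 * q2 + q3 * q3))
    return phi, theta, psi
```

The round trip reproduces the input angles for pitch strictly between −π/2 and π/2; at the poles the gimbal-lock singularity discussed below applies.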
Relationship with Tait–Bryan angles<br />
Similarly for Euler angles, we use the Tait–Bryan angles (in terms<br />
of flight dynamics):<br />
• Roll – φ: rotation about the X-axis<br />
• Pitch – θ: rotation about the Y-axis<br />
• Yaw – ψ: rotation about the Z-axis<br />
where the X-axis points forward, Y-axis to the right and Z-axis<br />
downward and in the example to follow the rotation occurs in the<br />
order yaw, pitch, roll (about body-fixed axes).<br />
Singularities<br />
One must be aware of singularities in the Euler angle<br />
parametrization when the pitch approaches ±90° (north/south<br />
pole). These cases must be handled specially. The common name<br />
for this situation is gimbal lock.<br />
Code to handle the singularities is derived on this site:<br />
www.euclideanspace.com [1]<br />
Tait–Bryan angles for an aircraft
External links<br />
• Q60. How do I convert Euler rotation angles to a quaternion? [2] and related questions at The Matrix and<br />
Quaternions FAQ<br />
References<br />
[1] http://www.euclideanspace.com/maths/geometry/rotations/conversions/quaternionToEuler/<br />
[2] http://www.j3d.org/matrix_faq/matrfaq_latest.html#Q60<br />
Cube mapping<br />
In computer graphics, cube mapping is a method of<br />
environment mapping that uses a six-sided cube as the<br />
map shape. The environment is projected onto the six<br />
faces of a cube and stored as six square textures, or<br />
unfolded into six regions of a single texture. The cube<br />
map is generated by first rendering the scene six times<br />
from a viewpoint, with the views defined by an<br />
orthogonal 90 degree view frustum representing each<br />
cube face. [1]<br />
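When the cube map is sampled, a direction vector is mapped to one of the six faces (by its largest-magnitude component) and to texture coordinates on that face. The following sketch illustrates the idea; the face naming and uv orientation are a simplified convention of this sketch, not the exact convention of any particular graphics API:

```python
def cubemap_lookup(d):
    """Map a direction vector d = (x, y, z) to (face, u, v), u and v
    in [0, 1]. The face is chosen by the largest-magnitude component."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face, major, u, v = ('+x' if x > 0 else '-x'), ax, (-z if x > 0 else z), y
    elif ay >= az:
        face, major, u, v = ('+y' if y > 0 else '-y'), ay, x, (-z if y > 0 else z)
    else:
        face, major, u, v = ('+z' if z > 0 else '-z'), az, (x if z > 0 else -x), y
    # project onto the unit cube face and remap from [-1, 1] to [0, 1]
    return face, 0.5 * (u / major + 1.0), 0.5 * (v / major + 1.0)
```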
In the majority of cases, cube mapping is preferred<br />
over the older method of sphere mapping because it<br />
eliminates many of the problems that are inherent in<br />
sphere mapping such as image distortion, viewpoint<br />
dependency, and computational efficiency. Also, cube<br />
mapping provides a much larger capacity to support<br />
real-time rendering of reflections relative to sphere<br />
mapping because the combination of inefficiency and<br />
viewpoint dependency severely limits the ability of<br />
sphere mapping to be applied when there is a<br />
consistently changing viewpoint.<br />
History<br />
The lower left image shows a scene with a viewpoint marked with a<br />
black dot. The upper image shows the net of the cube mapping as seen<br />
from that viewpoint, and the lower right image shows the cube<br />
superimposed on the original scene.<br />
Cube mapping was first proposed in 1986 by Ned Greene in his paper “Environment Mapping and Other<br />
Applications of World Projections” [2], ten years after environment mapping was first put forward by Jim Blinn and<br />
Martin Newell. However, hardware limitations on the ability to access six texture images simultaneously made it<br />
infeasible to implement cube mapping without further technological developments. This problem was remedied in<br />
1999 with the release of the Nvidia GeForce 256. Nvidia touted cube mapping in hardware as “a breakthrough image<br />
quality feature of GeForce 256 that ... will allow developers to create accurate, real-time reflections. Accelerated in<br />
hardware, cube environment mapping will free up the creativity of developers to use reflections and specular lighting<br />
effects to create interesting, immersive environments.” [3] Today, cube mapping is still used in a variety of graphical<br />
applications as a favored method of environment mapping.
Advantages<br />
Cube mapping is preferred over other methods of environment mapping because of its relative simplicity. Also, cube<br />
mapping produces results that are similar to those obtained by ray tracing, but is much more computationally<br />
efficient – the moderate reduction in quality is compensated for by large gains in efficiency.<br />
Predating cube mapping, sphere mapping has many inherent flaws that made it impractical for most applications.<br />
Sphere mapping is view dependent meaning that a different texture is necessary for each viewpoint. Therefore, in<br />
applications where the viewpoint is mobile, it would be necessary to dynamically generate a new sphere mapping for<br />
each new viewpoint (or, to pre-generate a mapping for every viewpoint). Also, a texture mapped onto a sphere's<br />
surface must be stretched and compressed, and warping and distortion (particularly along the edge of the sphere) are<br />
a direct consequence of this. Although these image flaws can be reduced using certain tricks and techniques like<br />
“pre-stretching”, this just adds another layer of complexity to sphere mapping.<br />
Paraboloid mapping provides some improvement on the limitations of sphere mapping, however it requires two<br />
rendering passes in addition to special image warping operations and more involved computation.<br />
Conversely, cube mapping requires only a single render pass, and due to its simple nature, is very easy for<br />
developers to comprehend and generate. Also, cube mapping uses the entire resolution of the texture image,<br />
compared to sphere and paraboloid mappings, which also allows it to use lower resolution images to achieve the<br />
same quality. Although handling the seams of the cube map is a problem, algorithms have been developed to handle<br />
seam behavior and result in a seamless reflection.<br />
Disadvantages<br />
If a new object or new lighting is introduced into the scene, or if some object that is reflected in it is moving or changing<br />
in some manner, then the reflection changes and the cube map must be re-rendered. When the<br />
cube map is affixed to an object that moves through the scene then the cube map must also be re-rendered from that<br />
new position.<br />
Applications<br />
Stable Specular Highlights<br />
Computer-aided design (CAD) programs use specular highlights as visual cues to convey a sense of surface<br />
curvature when rendering <strong>3D</strong> objects. However, many CAD programs exhibit problems in sampling specular<br />
highlights because the specular lighting computations are only performed at the vertices of the mesh used to<br />
represent the object, and interpolation is used to estimate lighting across the surface of the object. Problems occur<br />
when the mesh vertices are not dense enough, resulting in insufficient sampling of the specular lighting. This in turn<br />
produces highlights whose brightness varies with the distance from the mesh vertices, ultimately compromising the<br />
visual cues that indicate curvature. Unfortunately, this problem cannot be solved simply by creating a denser mesh,<br />
as this can greatly reduce the efficiency of object rendering.<br />
Cube maps provide a fairly straightforward and efficient solution to rendering stable specular highlights. Multiple<br />
specular highlights can be encoded into a cube map texture, which can then be accessed by interpolating across the<br />
surface's reflection vector to supply coordinates. Relative to computing lighting at individual vertices, this method<br />
provides cleaner results that more accurately represent curvature. Another advantage to this method is that it scales<br />
well, as additional specular highlights can be encoded into the texture at no increase in the cost of rendering.<br />
However, this approach is limited in that the light sources must be either distant or infinite lights, although<br />
fortunately this is usually the case in CAD programs. [4]
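The cube-map coordinates used above come from the per-pixel reflection vector, R = I - 2(N·I)N for an incident direction I and unit surface normal N. A minimal sketch:

```python
def reflect(incident, normal):
    """Reflect an incident direction I about a unit normal N:
    R = I - 2(N.I)N.  Both inputs are plain 3-tuples."""
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))
```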
Skyboxes<br />
Perhaps the most trivial application of cube mapping is to create pre-rendered panoramic sky images which are then<br />
rendered by the graphical engine as faces of a cube at practically infinite distance with the view point located in the<br />
center of the cube. The perspective projection of the cube faces done by the <strong>graphics</strong> engine undoes the effects of<br />
projecting the environment to create the cube map, so that the observer experiences an illusion of being surrounded<br />
by the scene which was used to generate the skybox. This technique has found widespread use in video games<br />
since it allows designers to add complex (albeit not explorable) environments to a game at almost no performance<br />
cost.<br />
Skylight Illumination<br />
Cube maps can be useful for modelling outdoor illumination accurately. Simply modelling sunlight as a single<br />
infinite light oversimplifies outdoor illumination and results in unrealistic lighting. Although plenty of light does<br />
come from the sun, the scattering of rays in the atmosphere causes the whole sky to act as a light source (often<br />
referred to as skylight illumination). By using a cube map, however, the diffuse contribution from skylight<br />
illumination can be captured. Unlike environment mapping, where the reflection vector is used, this method accesses<br />
the cube map based on the surface normal vector to provide a fast approximation of the diffuse illumination from the<br />
skylight. The one downside to this method is that computing cube maps that properly represent a skylight is very<br />
complex; a considerable amount of research has been done to model skylight illumination effectively, and one recent<br />
approach computes the spherical harmonic basis that best represents the low-frequency diffuse illumination from the<br />
cube map.<br />
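The normal-based lookup amounts to a cosine-weighted average of sky radiance over the hemisphere above the surface. A toy sketch of that computation, with `sky_radiance` standing in for a cube-map fetch (an assumed helper, not part of any particular API):

```python
def diffuse_irradiance(normal, sky_radiance, directions):
    """Approximate diffuse skylight at a surface with unit normal `normal`
    by a cosine-weighted average of sky radiance over sample directions.
    `sky_radiance(d)` stands in for a cube-map fetch along direction d."""
    total, weight = 0.0, 0.0
    for d in directions:
        cos_t = sum(n * di for n, di in zip(normal, d))
        if cos_t > 0.0:                 # only directions above the surface
            total += sky_radiance(d) * cos_t
            weight += cos_t
    return total / weight if weight else 0.0
```

In practice this sum is precomputed per normal direction into a small "irradiance" cube map, which is why the spherical-harmonic representation mentioned above is attractive.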
Dynamic Reflection<br />
Basic environment mapping uses a static cube map - although the object can be moved and distorted, the reflected<br />
environment stays consistent. However, a cube map texture can be consistently updated to represent a dynamically<br />
changing environment (for example, trees swaying in the wind). A simple yet costly way to generate dynamic<br />
reflections involves rebuilding the cube maps at runtime for every frame. Although this is far less efficient than static<br />
mapping because of the additional rendering steps, it can still be performed at interactive rates.<br />
Unfortunately, this technique does not scale well when multiple reflective objects are present. A unique dynamic<br />
environment map is usually required for each reflective object. Also, further complications are added if reflective<br />
objects can reflect each other - dynamic cube maps can be recursively generated approximating the effects normally<br />
generated using raytracing.<br />
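Rebuilding a cube map each frame means rendering the scene six times from the reflective object's center, once per face with a 90-degree field of view. A sketch of the bookkeeping; the (forward, up) pairs follow one common convention, and `render_face` is a placeholder for the engine's actual render call:

```python
# One common set of (forward, up) camera pairs for the six cube-map faces,
# rendered from the reflective object's center; up-vector conventions
# vary between APIs, so treat these as illustrative.
CUBE_FACE_CAMERAS = {
    "+x": ((1, 0, 0), (0, -1, 0)),
    "-x": ((-1, 0, 0), (0, -1, 0)),
    "+y": ((0, 1, 0), (0, 0, 1)),
    "-y": ((0, -1, 0), (0, 0, -1)),
    "+z": ((0, 0, 1), (0, -1, 0)),
    "-z": ((0, 0, -1), (0, -1, 0)),
}

def render_dynamic_cube_map(center, render_face):
    """Re-render all six faces for one frame.  `render_face(center,
    forward, up)` is a placeholder for the engine's render call
    (90-degree FOV, square viewport)."""
    return {name: render_face(center, fwd, up)
            for name, (fwd, up) in CUBE_FACE_CAMERAS.items()}
```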
Global Illumination<br />
An algorithm for global illumination computation at interactive rates using a cube-map data structure, was presented<br />
at ICCVG 2002.[5]<br />
Projection textures<br />
Another application which found widespread use in video games, projective texture mapping relies on cube maps to<br />
project images of an environment onto the surrounding scene; for example, a point light source is tied to a cube map<br />
which is a panoramic image shot from inside a lantern cage or a window frame through which the light is filtering.<br />
This enables a game designer to achieve realistic lighting without having to complicate the scene geometry or resort<br />
to expensive real-time shadow volume computations.
Related<br />
A large set of free cube maps for experimentation: http://www.humus.name/index.php?page=Textures<br />
Mark VandeWettering took M. C. Escher's famous self portrait Hand with Reflecting Sphere and reversed the<br />
mapping to obtain this [6] cube map.<br />
References<br />
[1] Fernando, R. & Kilgard, M. J. (2003). The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics (1st ed.).<br />
Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA. Chapter 7: Environment Mapping Techniques.<br />
[2] Greene, N. 1986. Environment mapping and other applications of world projections. IEEE Comput. Graph. Appl. 6, 11 (Nov. 1986), 21–29.<br />
(http://dx.doi.org/10.1109/MCG.1986.276658)<br />
[3] Nvidia, Jan 2000. Technical Brief: Perfect Reflections and Specular Lighting Effects With Cube Environment Mapping<br />
(http://developer.nvidia.com/object/Cube_Mapping_Paper.html)<br />
[4] Nvidia, May 2004. Cube Map OpenGL Tutorial (http://developer.nvidia.com/object/cube_map_ogl_tutorial.html)<br />
[5] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.95.946<br />
[6] http://brainwagon.org/2002/12/05/fun-with-environment-maps/<br />
Diffuse reflection<br />
Diffuse reflection is the reflection of light from a surface such that an<br />
incident ray is reflected at many angles rather than at just one angle as<br />
in the case of specular reflection. An illuminated ideal diffuse<br />
reflecting surface will have equal luminance from all directions in the<br />
hemisphere surrounding the surface (Lambertian reflectance).<br />
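Lambertian reflectance leads to a particularly simple shading rule: the reflected intensity depends only on the cosine of the angle between the surface normal and the light direction (Lambert's cosine law), not on the viewing direction. A minimal sketch:

```python
def lambert(albedo, normal, light_dir):
    """Lambertian diffuse term: albedo * max(0, N.L), with N and L unit
    vectors.  The result is independent of the viewing direction."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return albedo * max(0.0, n_dot_l)
```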
A surface built from a non-absorbing powder such as plaster, or from<br />
fibers such as paper, or from a polycrystalline material such as white<br />
marble, reflects light diffusely with great efficiency. Many common<br />
materials exhibit a mixture of specular and diffuse reflection.<br />
The visibility of objects is primarily caused by diffuse reflection of<br />
light: it is diffusely-scattered light that forms the image of the object in<br />
the observer's eye.<br />
Diffuse and specular reflection from a glossy<br />
surface [1]
Mechanism<br />
Diffuse reflection from solids is generally not due to<br />
surface roughness. A flat surface is indeed required to<br />
give specular reflection, but it does not prevent diffuse<br />
reflection. A piece of highly polished white marble<br />
remains white; no amount of polishing will turn it into<br />
a mirror. Polishing produces some specular reflection,<br />
but the remaining light continues to be diffusely<br />
reflected.<br />
The most general mechanism by which a surface gives<br />
diffuse reflection does not involve exactly the surface:<br />
most of the light is contributed by scattering centers<br />
beneath the surface, [2] [3] as illustrated in Figure 1 at<br />
right. If one were to imagine that the figure represents<br />
snow, and that the polygons are its (transparent) ice<br />
crystallites, an impinging ray is partially reflected (a<br />
few percent) by the first particle, enters it, is again<br />
reflected by the interface with the second particle,<br />
enters it, impinges on the third, and so on, generating<br />
a series of "primary" scattered rays in random<br />
directions, which, in turn, through the same<br />
mechanism, generate a large number of "secondary"<br />
scattered rays, which generate "tertiary" rays, and so on. [4]<br />
All these rays walk through the snow crystallites, which do<br />
not absorb light, until they arrive at the surface and exit<br />
in random directions. [5] The result is that all the light<br />
that was sent in is returned in all directions, so that<br />
snow is seen to be white, in spite of the fact that it is<br />
made of transparent objects (ice crystals).<br />
For simplicity, "reflections" are spoken of here, but<br />
more generally the interfaces between the small particles<br />
that constitute many materials are irregular on a scale<br />
comparable with the light wavelength, so diffuse light is<br />
generated at each interface rather than as a single<br />
reflected ray; the story, however, can be told the same way.<br />
Figure 1 - General mechanism of diffuse reflection by a solid surface<br />
(refraction phenomena not represented)<br />
Figure 2 - Diffuse reflection from an irregular surface<br />
This mechanism is very general, because almost all common materials are made of "small things" held together.<br />
Mineral materials are generally polycrystalline: one can describe them as made of a 3-D mosaic of small, irregularly<br />
shaped defective crystals. Organic materials are usually composed of fibers or cells, with their membranes and their<br />
complex internal structure. And each interface, inhomogeneity or imperfection can deviate, reflect or scatter light,<br />
reproducing the above mechanism.<br />
Few materials do not follow this mechanism: among them are metals, which do not allow light to enter; gases and liquids;<br />
glass and transparent plastics (which have a liquid-like amorphous microscopic structure); single crystals, such as some gems<br />
or a salt crystal; and some very special materials, such as the tissues which make up the cornea and the lens of an eye.<br />
These materials can reflect diffusely, however, if their surface is microscopically rough, as in frosted glass (Figure<br />
2), or, of course, if their homogeneous structure deteriorates, as in the eye lens.<br />
A surface may also exhibit both specular and diffuse reflection, as is the case, for example, of glossy paints as used<br />
in home painting, which give also a fraction of specular reflection, while matte paints give almost exclusively diffuse<br />
reflection.<br />
Specular vs. diffuse reflection<br />
Virtually all materials can give specular reflection, provided that their surface can be polished to eliminate<br />
irregularities comparable with the light wavelength (a fraction of a micrometer). A few materials, like liquids and glasses,<br />
lack the internal subdivisions which produce the subsurface scattering mechanism described above, so they can be clear<br />
and give only specular reflection (which is, however, weak), while, among common materials, only polished metals can<br />
reflect light specularly with great efficiency (the reflecting material of mirrors is usually aluminum or silver). All<br />
other common materials, even when perfectly polished, usually give no more than a few percent specular reflection,<br />
except in particular cases, such as grazing-angle reflection by a lake, the total internal reflection of a glass prism, or when<br />
structured in a purposely made complex configuration such as the silvery skin of many fish species.<br />
Diffuse reflection from white materials, instead, can be highly efficient in giving back all the light they receive, due<br />
to the summing up of the many subsurface reflections.<br />
Colored objects<br />
Up to now, white objects have been discussed, which do not absorb light. But the above scheme continues to be valid<br />
if the material is absorbent: in that case, diffused rays will lose some wavelengths during their walk in the material<br />
and will emerge colored.<br />
Moreover, diffusion substantially affects the color of objects, because it determines the average path of light in<br />
the material, and hence the extent to which the various wavelengths are absorbed. [6] Red ink looks black while it sits in<br />
its bottle. Its vivid color is only perceived when it is placed on a scattering material (e.g., paper), because<br />
light's path through the paper fibers (and through the ink) is only a fraction of a millimeter long. Light coming from<br />
the bottle, instead, has crossed centimeters of ink and has been heavily absorbed, even at its red wavelengths.<br />
And, when a colored object has both diffuse and specular reflection, usually only the diffuse component is colored.<br />
A cherry reflects diffusely red light, absorbs all other colors and has a specular reflection which is essentially white.<br />
This is quite general, because, except for metals, the reflectivity of most materials depends on their refraction index,<br />
which varies little with the wavelength (though it is this variation that causes the chromatic dispersion in a prism), so<br />
that all colors are reflected with nearly the same intensity. Reflections of other origins, however, may be colored:<br />
metallic reflections, such as those of gold or copper, and interferential reflections: iridescence, peacock feathers, butterfly<br />
wings, beetle elytra, or the antireflection coating of a lens.<br />
Importance for vision<br />
Looking at the surrounding environment, one sees that what lets the human eye form an image of almost all<br />
visible things is the diffuse reflection from their surfaces. The few exceptions include black objects, glass, liquids,<br />
polished or smooth metals, small specular reflections from glossy objects, and objects that themselves emit light: the<br />
Sun, lamps, and computer screens (which, however, emit diffuse light). Outdoors it is the same, with perhaps the<br />
exception of a transparent water stream or of the iridescent colors of a beetle, and with the addition of other types of<br />
scattering: blue (or, at sunset, variously colored) light from the sky molecules (Rayleigh scattering) and white light<br />
from the water droplets of clouds (Mie scattering).<br />
Light scattering from the surfaces of objects is by far the primary mechanism by which humans physically<br />
observe. [7] [8]
Interreflection<br />
Diffuse interreflection is a process whereby light reflected from an object strikes other objects in the surrounding<br />
area, illuminating them. Diffuse interreflection specifically describes light reflected from objects which are not shiny<br />
or specular. In real life terms what this means is that light is reflected off non-shiny surfaces such as the ground,<br />
walls, or fabric, to reach areas not directly in view of a light source. If the diffuse surface is colored, the reflected<br />
light is also colored, resulting in similar coloration of surrounding objects.<br />
In <strong>3D</strong> computer <strong>graphics</strong>, diffuse interreflection is an important component of global illumination. There are a<br />
number of ways to model diffuse interreflection when rendering a scene. Radiosity and photon mapping are two<br />
commonly used methods.<br />
References<br />
[1] Scott M. Juds (1988). Photoelectric sensors and controls: selection and application<br />
(http://books.google.com/?id=BkdBo1n_oO4C&pg=PA29). CRC Press. p. 29. ISBN 9780824778866.<br />
[2] P. Hanrahan and W. Krueger (1993), Reflection from layered surfaces due to subsurface scattering, in SIGGRAPH '93 Proceedings, J. T.<br />
Kajiya, Ed., vol. 27, pp. 165–174 (http://www.cs.berkeley.edu/~ravir/6998/papers/p165-hanrahan.pdf).<br />
[3] H. W. Jensen et al. (2001), A practical model for subsurface light transport, in Proceedings of ACM SIGGRAPH 2001, pp. 511–518<br />
(http://www.cs.berkeley.edu/~ravir/6998/papers/p511-jensen.pdf).<br />
[4] Only primary and secondary rays are represented in the figure.<br />
[5] Or, if the object is thin, it can exit from the opposite surface, giving diffuse transmitted light.<br />
[6] Paul Kubelka, Franz Munk (1931), Ein Beitrag zur Optik der Farbanstriche, Zeits. f. Techn. Physik, 12, 593–601; see The Kubelka-Munk<br />
Theory of Reflectance (http://web.eng.fiu.edu/~godavart/BME-Optics/Kubelka-Munk-Theory.pdf).<br />
[7] Kerker, M. (1969). The Scattering of Light. New York: Academic.<br />
[8] Mandelstam, L. I. (1926). "Light Scattering by Inhomogeneous Media". Zh. Russ. Fiz-Khim. Ova. 58: 381.<br />
Displacement mapping<br />
Displacement mapping is a computer <strong>graphics</strong> technique that,<br />
in contrast to bump mapping, normal mapping, and parallax mapping,<br />
uses a (procedural) texture or height map to displace the actual<br />
geometric positions of points on the textured surface, often along the<br />
local surface normal, according to the value the texture function<br />
evaluates to at each point on the surface. It gives<br />
surfaces a great sense of depth and detail, permitting in particular<br />
self-occlusion, self-shadowing and silhouettes; on the other hand, it is<br />
the most costly of this class of techniques owing to the large amount of<br />
additional geometry.<br />
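The core operation is the same in every variant: each surface point is moved along its (usually) normal direction by the sampled height, p' = p + s·h(u, v)·n. A minimal sketch of that step:

```python
def displace(position, normal, height, scale=1.0):
    """Displacement mapping core step: move a point along its unit normal
    by the sampled height value, p' = p + scale * h * n."""
    return tuple(p + scale * height * n for p, n in zip(position, normal))
```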
For years, displacement mapping was a peculiarity of high-end<br />
rendering systems like PhotoRealistic RenderMan, while realtime<br />
APIs, like OpenGL and DirectX, were only starting to use this feature.<br />
One of the reasons for this is that the original implementation of<br />
displacement mapping required an adaptive tessellation of the surface<br />
in order to obtain enough micropolygons whose size matched the size<br />
of a pixel on the screen.<br />
Displacement mapping
Meaning of the term in different contexts<br />
Displacement mapping includes the term mapping which refers to a texture map being used to modulate the<br />
displacement strength. The displacement direction is usually the local surface normal. Today, many renderers allow<br />
programmable shading which can create high quality (multidimensional) procedural textures and patterns at arbitrary<br />
high frequencies. The use of the term mapping becomes arguable then, as no texture map is involved anymore.<br />
Therefore, the broader term displacement is often used today to refer to a super concept that also includes<br />
displacement based on a texture map.<br />
Renderers using the REYES algorithm, or similar approaches based on micropolygons, have allowed displacement<br />
mapping at arbitrary high frequencies since they became available almost 20 years ago.<br />
The first commercially available renderer to implement a micropolygon displacement mapping approach through<br />
REYES was Pixar's PhotoRealistic RenderMan. Micropolygon renderers commonly tessellate geometry themselves<br />
at a granularity suitable for the image being rendered. That is: the modeling application delivers high-level primitives<br />
to the renderer. Examples include true NURBS- or subdivision surfaces. The renderer then tessellates this geometry<br />
into micropolygons at render time using view-based constraints derived from the image being rendered.<br />
Other renderers that require the modeling application to deliver objects pre-tessellated into arbitrary polygons or<br />
even triangles have defined the term displacement mapping as moving the vertices of these polygons. Often the<br />
displacement direction is also limited to the surface normal at the vertex. While conceptually similar, those polygons<br />
are usually a lot larger than micropolygons. The quality achieved from this approach is thus limited by the<br />
geometry's tessellation density a long time before the renderer gets access to it.<br />
This difference between displacement mapping in micropolygon renderers vs. displacement mapping in a<br />
non-tessellating (macro)polygon renderers can often lead to confusion in conversations between people whose<br />
exposure to each technology or implementation is limited. Even more so as, in recent years, many non-micropolygon<br />
renderers have added the ability to do displacement mapping of a quality similar to what a micropolygon renderer is<br />
able to deliver naturally. To distinguish this capability from the crude pre-tessellation-based displacement these renderers<br />
did before, the term sub-pixel displacement was introduced to describe this feature.<br />
Sub-pixel displacement commonly refers to finer re-tessellation of geometry that was already tessellated into<br />
polygons. This re-tessellation results in micropolygons, or often microtriangles. The vertices of these then get moved<br />
along their normals to achieve the displacement mapping.<br />
True micropolygon renderers have always been able to do what sub-pixel-displacement achieved only recently, but<br />
at a higher quality and in arbitrary displacement directions.<br />
Recent developments seem to indicate that some of the renderers that use sub-pixel displacement move towards<br />
supporting higher level geometry too. As the vendors of these renderers are likely to keep using the term sub-pixel<br />
displacement, this will probably lead to more obfuscation of what displacement mapping really stands for, in <strong>3D</strong><br />
computer <strong>graphics</strong>.<br />
In reference to Microsoft's proprietary High Level Shader Language, displacement mapping can be interpreted as a<br />
kind of "vertex-texture mapping" where the values of the texture map do not alter pixel colors (as is much more<br />
common), but instead change the position of vertices. Unlike bump, normal and parallax mapping, all of which can<br />
be said to "fake" the behavior of displacement mapping, in this way a genuinely rough surface can be produced from<br />
a texture. It has to be used in conjunction with adaptive tessellation techniques (that increases the number of<br />
rendered polygons according to current viewing settings) to produce highly detailed meshes.
Further reading<br />
• Blender Displacement Mapping [1]<br />
• Relief Texture Mapping [2] website<br />
• Real-Time Relief Mapping on Arbitrary Polygonal Surfaces [3] paper<br />
• Relief Mapping of Non-Height-Field Surface Details [4] paper<br />
• Steep Parallax Mapping [5] website<br />
• State of the art of displacement mapping on the gpu [6] paper<br />
References<br />
[1] http://mediawiki.blender.org/index.php/Manual/Displacement_Maps<br />
[2] http://www.inf.ufrgs.br/%7Eoliveira/RTM.html<br />
[3] http://www.inf.ufrgs.br/%7Eoliveira/pubs_files/Policarpo_Oliveira_Comba_RTRM_I3D_2005.pdf<br />
[4] http://www.inf.ufrgs.br/%7Eoliveira/pubs_files/Policarpo_Oliveira_RTM_multilayer_I3D2006.pdf<br />
[5] http://graphics.cs.brown.edu/games/SteepParallax/index.html<br />
[6] http://www.iit.bme.hu/~szirmay/egdisfinal3.pdf<br />
Doo–Sabin subdivision surface<br />
In computer <strong>graphics</strong>, Doo–Sabin subdivision surface is a type of<br />
subdivision surface based on a generalization of bi-quadratic uniform<br />
B-splines. It was developed in 1978 by Daniel Doo and Malcolm Sabin. [1] [2]<br />
This process generates one new face at each original vertex, n new<br />
faces along each original edge, and n × n new faces at each original<br />
face. A primary characteristic of the Doo–Sabin subdivision method is<br />
the creation of four faces around every vertex. A drawback is that the<br />
faces created at the vertices are not necessarily coplanar.<br />
Evaluation<br />
Doo–Sabin surfaces are defined recursively. Each refinement iteration replaces the current mesh with a smoother,<br />
more refined mesh, following the procedure described in [2] . After many iterations, the surface will gradually<br />
converge onto a smooth limit surface. The figure below shows the effect of two refinement iterations on a T-shaped<br />
quadrilateral mesh.<br />
Just as for Catmull–Clark surfaces, Doo–Sabin limit surfaces can also be evaluated directly without any recursive<br />
refinement, by means of the technique of Jos Stam [3] . The solution is, however, not as computationally efficient as<br />
for Catmull–Clark surfaces because the Doo–Sabin subdivision matrices are not in general diagonalizable.<br />
Simple Doo–Sabin subdivision surface. The figure shows the limit surface, as well as the control point wireframe mesh.<br />
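For a quadrilateral face, the refinement rule can be stated without matrices: the new point associated with each corner is the average of that corner, the midpoints of the two face edges meeting at it, and the face centroid, which reproduces the bi-quadratic B-spline weights 9/16, 3/16, 3/16, 1/16. A sketch under that quad-only assumption (general n-gons need Doo and Sabin's full weight formula):

```python
def doo_sabin_quad_points(quad):
    """New points for one quadrilateral face under Doo-Sabin refinement.

    For each corner, average the corner, the midpoints of its two face
    edges, and the face centroid; on quads this reproduces the
    bi-quadratic B-spline weights (9/16, 3/16, 3/16, 1/16).  General
    n-gons require Doo and Sabin's full weight formula instead."""
    n = len(quad)
    dim = len(quad[0])
    centroid = tuple(sum(v[k] for v in quad) / n for k in range(dim))
    new_pts = []
    for i, v in enumerate(quad):
        prev_mid = tuple((v[k] + quad[i - 1][k]) / 2 for k in range(dim))
        next_mid = tuple((v[k] + quad[(i + 1) % n][k]) / 2 for k in range(dim))
        new_pts.append(tuple((v[k] + prev_mid[k] + next_mid[k] + centroid[k]) / 4
                             for k in range(dim)))
    return new_pts
```

Applying this to every face shrinks each face toward its centroid; the gaps left along edges and around vertices become the new edge and vertex faces of the refined mesh.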
References<br />
[1] D. Doo: A subdivision algorithm for smoothing down irregularly shaped polyhedrons, Proceedings on Interactive Techniques in Computer<br />
Aided Design, pp. 157–165, 1978 (pdf (http://trac2.assembla.com/DooSabinSurfaces/export/12/trunk/docs/Doo 1978 Subdivision algorithm.pdf))<br />
[2] D. Doo and M. Sabin: Behaviour of recursive division surfaces near extraordinary points, Computer-Aided Design, 10 (6) 356–360 (1978)<br />
(doi (http://dx.doi.org/10.1016/0010-4485(78)90111-2), pdf (http://www.cs.caltech.edu/~cs175/cs175-02/resources/DS.pdf))<br />
[3] Jos Stam: Exact Evaluation of Catmull–Clark Subdivision Surfaces at Arbitrary Parameter Values, Proceedings of SIGGRAPH '98. In<br />
Computer Graphics Proceedings, ACM SIGGRAPH, 1998, 395–404 (pdf (http://www.dgp.toronto.edu/people/stam/reality/Research/pdf/sig98.pdf),<br />
downloadable eigenstructures (http://www.dgp.toronto.edu/~stam/reality/Research/SubdivEval/index.html))<br />
External links<br />
• Doo–Sabin surfaces (http://graphics.cs.ucdavis.edu/education/CAGDNotes/Doo-Sabin/Doo-Sabin.html)<br />
Edge loop<br />
An edge loop, in computer <strong>graphics</strong>, can loosely be defined as a set of connected edges across a surface. Usually the<br />
last edge meets again with the first edge, thus forming a loop. The set or string of edges can for example be the outer<br />
edges of a flat surface or the edges surrounding a 'hole' in a surface.<br />
In a stricter sense an edge loop is defined as a set of edges where the loop follows the middle edge at every 'four-way<br />
junction'. [1] The loop ends when it encounters another type of junction (a three- or five-way junction, for example). Take an<br />
edge on a mesh surface, for example: say that at one end it connects with three other edges, making a four-way<br />
junction. If you follow the middle 'road' each time, you will either end up with a completed loop or the edge loop<br />
will end at another type of junction.<br />
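The 'middle road' rule has a compact formulation: at a four-way junction, continue across the edge that lies two steps away in the cyclic ordering of edges around the vertex. A sketch, assuming the mesh is given as a map from each vertex to its neighbors in cyclic order (a hypothetical representation, not any particular package's):

```python
def walk_edge_loop(neighbors, start, nxt):
    """Walk an edge loop on a mesh given, for each vertex, its neighbor
    vertices in cyclic order (`neighbors[v]` is that list).  At every
    four-way junction the walk continues across the 'middle' edge (two
    steps around the fan); it stops at any junction that is not
    four-way, or when the loop closes."""
    loop = [start, nxt]
    prev, cur = start, nxt
    while True:
        ring = neighbors[cur]
        if len(ring) != 4:                  # not a four-way junction: stop
            return loop, False
        cont = ring[(ring.index(prev) + 2) % 4]   # the 'middle road'
        if cont == start:                   # back at the beginning: closed
            return loop, True
        loop.append(cont)
        prev, cur = cur, cont
```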
Edge loops are especially practical in organic models which need to be animated. In organic modeling edge loops<br />
play a vital role in proper deformation of the mesh. [2] A properly modeled mesh will take into careful consideration<br />
the placement and termination of these edge loops. Generally edge loops follow the structure and contour of the<br />
muscles that they mimic. For example, in modeling a human face edge loops should follow the orbicularis oculi<br />
muscle around the eyes and the orbicularis oris muscle around the mouth. The hope is that by mimicking the way the<br />
muscles are formed, the edge loops will also aid in the way the mesh deforms through contractions and expansions. An<br />
edge loop closely mimics how real muscles work and, if built correctly, will give you control over contour and<br />
silhouette in any position.<br />
An important part in developing proper edge loops is by understanding poles. [3] The E(5) Pole and the N(3) Pole are<br />
the two most important poles in developing both proper edge loops and a clean topology on your model. The E(5)<br />
Pole is derived from an extruded face. When this face is extruded, four 4-sided polygons are formed in addition to<br />
the original face. Each lower corner of these four polygons forms a five-way junction. Each one of these five-way<br />
junctions is an E-pole. An N(3) Pole is formed when 3 edges meet at one point creating a three-way junction. The<br />
N(3) Pole is important in that it redirects the direction of an edge loop.<br />
References<br />
[1] Edge Loop (http://wiki.cgsociety.org/index.php/Edge_Loop), CG Society<br />
[2] Modeling With Edge Loops (http://zoomy.net/2008/04/02/modeling-with-edge-loops/), Zoomy.net<br />
[3] "The pole" (http://www.subdivisionmodeling.com/forums/showthread.php?t=907), SubdivisionModeling.com<br />
External links<br />
• Edge Loop (http://wiki.cgsociety.org/index.php/Edge_Loop), CG Society
Euler operator<br />
In mathematics, Euler operators are a small set of functions to create polygon meshes. They are closed and<br />
sufficient on the set of meshes, and they are invertible.<br />
Purpose<br />
A "polygon mesh" can be thought of as a graph, with vertices, and with edges that connect these vertices. In addition<br />
to a graph, a mesh has also faces: Let the graph be drawn ("embedded") in a two-dimensional plane, in such a way<br />
that the edges do not cross (which is possible only if the graph is a planar graph). Then the contiguous 2D regions on<br />
either side of each edge are the faces of the mesh.<br />
The Euler operators are functions to manipulate meshes. They are very straightforward: Create a new vertex (in some<br />
face), connect vertices, split a face by inserting a diagonal, subdivide an edge by inserting a vertex. It is immediately<br />
clear that these operations are invertible.<br />
Further Euler operators exist to create higher-genus shapes, for instance to connect the ends of a bent tube to create a<br />
torus.<br />
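The bookkeeping behind these operators is the Euler–Poincaré formula: for a closed orientable mesh of genus g, V - E + F = 2 - 2g, and every Euler operator changes the counts in a way that respects it. For instance, splitting a face with a diagonal adds one edge and one face, leaving V - E + F unchanged. A small illustration:

```python
def euler_characteristic(v, e, f):
    """Euler-Poincare formula: V - E + F = 2 - 2g for a closed,
    orientable surface of genus g."""
    return v - e + f

# A cube: 8 vertices, 12 edges, 6 faces -> characteristic 2 (genus 0).
# Splitting one face with a diagonal (an Euler operator) gives
# 8 vertices, 13 edges, 7 faces -> still 2.
# A quad mesh of a torus (e.g. a wrapped 3x3 grid: 9 vertices,
# 18 edges, 9 faces) has characteristic 0 (genus 1).
```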
Properties<br />
Euler operators are topological operators: They modify only the incidence relationship, i.e., which face is bounded<br />
by which face, which vertex is connected to which other vertex, and so on. They are not concerned with the<br />
geometric properties: The length of an edge, the position of a vertex, and whether a face is curved or planar, are just<br />
geometric "attributes".<br />
Note: In topology, objects can arbitrarily deform. So a valid mesh can, e.g., collapse to a single point if all of its<br />
vertices happen to be at the same position in space.<br />
References<br />
• Sven Havemann, Generative Mesh Modeling [1] , PhD thesis, Braunschweig University, Germany, 2005.<br />
• Martti Mäntylä, An Introduction to Solid Modeling, Computer Science Press, Rockville MD, 1988. ISBN<br />
0-88175-108-1.<br />
References<br />
[1] http://www.eg.org/EG/DL/dissonline/doc/havemann.pdf
False radiosity<br />
False Radiosity is a <strong>3D</strong> computer <strong>graphics</strong> technique used to create texture mapping for objects that emulates patch<br />
interaction algorithms in radiosity rendering. Though practiced in some form since the late 1990s, the term was<br />
coined around 2002 by architect Andrew Hartness, then head of <strong>3D</strong> and real-time design at Ateliers Jean Nouvel.<br />
During the period of nascent commercial enthusiasm for radiosity-enhanced imagery, but prior to the<br />
democratization of powerful computational hardware, architects and graphic artists experimented with time-saving<br />
<strong>3D</strong> rendering techniques. By darkening areas of texture maps corresponding to corners, joints and recesses, and<br />
applying maps via self-illumination or diffuse mapping in a <strong>3D</strong> program, a radiosity-like effect of patch interaction<br />
could be created with a standard scan-line renderer. Successful emulation of radiosity required a theoretical<br />
understanding and graphic application of patch view factors, path tracing and global illumination algorithms. Texture<br />
maps were usually produced with image editing software, such as Adobe Photoshop. The advantages of this method are<br />
decreased rendering time and easily modifiable overall lighting strategies.<br />
Another common approach similar to false radiosity is the manual placement of standard omni-type lights with<br />
limited attenuation in places in the <strong>3D</strong> scene where the artist would expect radiosity reflections to occur. This<br />
method uses many lights and can require an advanced light-grouping system, depending on what assigned<br />
materials/objects are illuminated, how many surfaces require false radiosity treatment, and to what extent it is<br />
anticipated that lighting strategies be set up for frequent changes.<br />
References<br />
• Autodesk interview with Hartness about False Radiosity and real-time design [1]<br />
[1] http://usa.autodesk.com/adsk/servlet/item?siteID=123112&id=5549510&linkID=10371177
Fragment<br />
In computer <strong>graphics</strong>, a fragment is the data necessary to generate a single pixel's worth of a drawing primitive in<br />
the frame buffer.<br />
This data may include, but is not limited to:<br />
• raster position<br />
• depth<br />
• interpolated attributes (color, texture coordinates, etc.)<br />
• stencil<br />
• alpha<br />
• window ID<br />
As a scene is drawn, drawing primitives (the basic elements of graphics output, such as points, lines, circles, and text<br />
[1]) are rasterized into fragments which are textured and combined with the existing frame buffer. How a fragment is<br />
combined with the data already in the frame buffer depends on various settings. In a typical case, a fragment may be<br />
discarded if it is farther away than the pixel that is already at that location (according to the depth buffer). If it is<br />
nearer than the existing pixel, it may replace what is already there, or, if alpha blending is in use, the pixel's color<br />
may be replaced with a mixture of the fragment's color and the pixel's existing color, as in the case of drawing a<br />
translucent object.<br />
In general, a fragment can be thought of as the data needed to shade the pixel, plus the data needed to test whether<br />
the fragment survives to become a pixel (depth, alpha, stencil, scissor, window ID, etc.).<br />
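The merge step described above can be sketched as follows. This is an illustrative Python fragment (the record fields and the `resolve` helper are hypothetical names, not a real graphics API): a fragment carries its raster position, depth, and interpolated attributes, and the depth test plus alpha blending decide how it combines with the frame buffer.<br />

```python
# Illustrative sketch: a fragment as a small record, and the depth test that
# decides whether it survives to become a pixel (all names are hypothetical).
from dataclasses import dataclass

@dataclass
class Fragment:
    x: int                  # raster position
    y: int
    depth: float            # distance from the camera (smaller = nearer)
    color: tuple            # interpolated (r, g, b)
    alpha: float = 1.0

def resolve(framebuffer, depthbuffer, frag):
    """Merge a fragment into the frame buffer using a depth test and
    simple alpha blending (the "over" operator)."""
    key = (frag.x, frag.y)
    if frag.depth >= depthbuffer.get(key, float("inf")):
        return  # farther than the existing pixel: fragment is discarded
    old = framebuffer.get(key, (0.0, 0.0, 0.0))
    a = frag.alpha
    framebuffer[key] = tuple(a * c + (1 - a) * o for c, o in zip(frag.color, old))
    depthbuffer[key] = frag.depth

fb, db = {}, {}
resolve(fb, db, Fragment(1, 1, depth=5.0, color=(1.0, 0.0, 0.0)))
resolve(fb, db, Fragment(1, 1, depth=9.0, color=(0.0, 1.0, 0.0)))  # occluded
print(fb[(1, 1)])  # (1.0, 0.0, 0.0): the nearer red fragment survives
```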
References<br />
[1] The Drawing Primitives by Janne Saarela (http://baikalweb.jinr.ru/doc/cern_doc/asdoc/gks_html3/node28.html)
Geometry pipelines<br />
Geometric manipulation of modeling primitives, such as that performed by a geometry pipeline, is the first stage in<br />
computer <strong>graphics</strong> systems which perform image generation based on geometric models. While Geometry Pipelines<br />
were originally implemented in software, they have become highly amenable to hardware implementation,<br />
particularly since the advent of very-large-scale integration (VLSI) in the early 1980s. A device called the Geometry<br />
Engine developed by Jim Clark and Marc Hannah at Stanford University in about 1981 was the watershed for what<br />
has since become an increasingly commoditized function in contemporary image-synthetic raster display systems.[1][2]<br />
Geometric transformations are applied to the vertices of polygons, or other geometric objects used as modelling<br />
primitives, as part of the first stage in a classical geometry-based graphic image rendering pipeline. Geometric<br />
computations may also be applied to transform polygon or patch surface normals, and then to perform the lighting<br />
and shading computations used in their subsequent rendering.<br />
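As a minimal sketch of this first stage, the following illustrative Python example (with hypothetical helper names, not a real pipeline API) applies 4×4 transformation matrices to a homogeneous vertex:<br />

```python
# Illustrative sketch of the geometry pipeline's first stage: applying 4x4
# transformation matrices to a homogeneous model-space vertex.

def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-component vertex."""
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(4))

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def scale(s):
    return [[s, 0, 0, 0], [0, s, 0, 0], [0, 0, s, 0], [0, 0, 0, 1]]

# Transform a unit-cube corner: scale by 2, then translate by (1, 0, 0).
v = (1.0, 1.0, 1.0, 1.0)              # homogeneous model-space vertex
v = mat_vec(scale(2.0), v)            # -> (2.0, 2.0, 2.0, 1.0)
v = mat_vec(translate(1.0, 0.0, 0.0), v)
print(v)  # (3.0, 2.0, 2.0, 1.0)
```

Hardware geometry engines perform essentially this matrix–vector product, but for millions of vertices per second.<br />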
History<br />
Hardware implementations of the geometry pipeline were introduced in the early Evans & Sutherland Picture<br />
System, but perhaps received broader recognition when later applied in the wide range of graphics systems products<br />
introduced by Silicon Graphics (SGI). Initially the SGI geometry hardware performed simple model space to screen<br />
space viewing transformations with all the lighting and shading handled by a separate hardware implementation<br />
stage, but in later, much higher performance applications such as the RealityEngine, they began to be applied to<br />
perform part of the rendering support as well.<br />
More recently, perhaps dating from the late 1990s, the hardware support required to perform the manipulation and<br />
rendering of quite complex scenes has become accessible to the consumer market. Companies such as NVIDIA and<br />
ATI (now a part of AMD) are two current leading representatives of hardware vendors in this space. The GeForce<br />
line of graphics cards from NVIDIA was the first to implement hardware geometry processing in the consumer PC<br />
market, while earlier graphics accelerators by 3Dfx and others had to rely on the CPU to perform geometry<br />
processing.<br />
This subject matter is part of the technical foundation for modern computer <strong>graphics</strong>, and is a comprehensive topic<br />
taught at both the undergraduate and graduate levels as part of a computer science education.<br />
References<br />
[1] Clark, James (July 1980). "Special Feature A VLSI Geometry Processor For Graphics" (http://www.computer.org/portal/web/csdl/doi/10.1109/MC.1980.1653711). Computer: pp. 59–68.<br />
[2] Clark, James (July 1982). "The Geometry Engine: A VLSI Geometry System for Graphics" (http://accad.osu.edu/~waynec/history/PDFs/geometry-engine.pdf). Proceedings of the 9th annual conference on Computer graphics and interactive techniques. pp. 127–133.
Geometry processing<br />
Geometry processing, or mesh processing, is a fast-growing area of research that uses concepts from applied<br />
mathematics, computer science and engineering to design efficient algorithms for the acquisition, reconstruction,<br />
analysis, manipulation, simulation and transmission of complex <strong>3D</strong> models. Applications of geometry processing<br />
algorithms already cover a wide range of areas from multimedia, entertainment and classical computer-aided design,<br />
to biomedical computing, reverse engineering and scientific computing.<br />
External links<br />
• Siggraph 2001 Course on Digital Geometry Processing [1], by Peter Schroder and Wim Sweldens<br />
• Symposium on Geometry Processing [2]<br />
• Multi-Res Modeling Group [3] , Caltech<br />
• Mathematical Geometry Processing Group [4] , Free University of Berlin<br />
References<br />
[1] http://www.multires.caltech.edu/pubs/DGPCourse/<br />
[2] http://www.geometryprocessing.org/<br />
[3] http://www.multires.caltech.edu/<br />
[4] http://geom.mi.fu-berlin.de/index.html<br />
Global illumination<br />
Rendering without global illumination. Areas that lie outside of the ceiling lamp's direct light lack definition. For example, the lamp's housing<br />
appears completely uniform. Without the ambient light added into the render, it would appear uniformly black.
Rendering with global illumination. Light is reflected by surfaces, and colored light transfers from one surface to another. Notice how color from<br />
the red wall and green wall (not visible) reflects onto other surfaces in the scene. Also notable is the caustic projected onto the red wall from light<br />
passing through the glass sphere.<br />
Global illumination is a general name for a group of algorithms used in <strong>3D</strong> computer <strong>graphics</strong> that are meant to add<br />
more realistic lighting to <strong>3D</strong> scenes. Such algorithms take into account not only the light which comes directly from<br />
a light source (direct illumination), but also subsequent cases in which light rays from the same source are reflected<br />
by other surfaces in the scene, whether reflective or not (indirect illumination).<br />
Theoretically reflections, refractions, and shadows are all examples of global illumination, because when simulating<br />
them, one object affects the rendering of another object (as opposed to an object being affected only by a direct<br />
light). In practice, however, only the simulation of diffuse inter-reflection or caustics is called global illumination.<br />
Images rendered using global illumination algorithms often appear more photorealistic than images rendered using<br />
only direct illumination algorithms. However, such images are computationally more expensive and consequently<br />
much slower to generate. One common approach is to compute the global illumination of a scene and store that<br />
information with the geometry, i.e., radiosity. That stored data can then be used to generate images from different<br />
viewpoints for generating walkthroughs of a scene without having to go through expensive lighting calculations<br />
repeatedly.<br />
Radiosity, ray tracing, beam tracing, cone tracing, path tracing, Metropolis light transport, ambient occlusion, photon<br />
mapping, and image based lighting are examples of algorithms used in global illumination, some of which may be<br />
used together to yield results that are not fast, but accurate.<br />
These algorithms model diffuse inter-reflection which is a very important part of global illumination; however most<br />
of these (excluding radiosity) also model specular reflection, which makes them more accurate algorithms to solve<br />
the lighting equation and provide a more realistically illuminated scene.<br />
The algorithms used to calculate the distribution of light energy between surfaces of a scene are closely related to<br />
heat transfer simulations performed using finite-element methods in engineering design.<br />
In real-time <strong>3D</strong> <strong>graphics</strong>, the diffuse inter-reflection component of global illumination is sometimes approximated by<br />
an "ambient" term in the lighting equation, which is also called "ambient lighting" or "ambient color" in <strong>3D</strong> software<br />
packages. Though this method of approximation (also known as a "cheat" because it's not really a global illumination<br />
method) is easy to perform computationally, when used alone it does not provide an adequately realistic effect.<br />
Ambient lighting is known to "flatten" shadows in <strong>3D</strong> scenes, making the overall visual effect more bland. However,<br />
used properly, ambient lighting can be an efficient way to make up for a lack of processing power.
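The ambient approximation described above can be sketched in one line of lighting arithmetic. This is an illustrative Python example (the `shade` function and its coefficients are hypothetical, not from the original text): indirect light is replaced by a constant added to the direct Lambertian term.<br />

```python
# Illustrative sketch of the "ambient term" approximation: indirect light is
# replaced by a constant added to the direct (here, Lambertian diffuse) term.

def shade(normal, light_dir, diffuse=0.8, ambient=0.2):
    """Scalar lighting: ambient + diffuse * max(0, N.L)."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return ambient + diffuse * max(0.0, n_dot_l)

# A surface facing the light gets the full direct plus ambient contribution...
print(shade((0, 0, 1), (0, 0, 1)))   # 1.0
# ...while a surface facing away still receives the flat ambient constant,
# which is why ambient-only shadows look "flattened".
print(shade((0, 0, 1), (0, 0, -1)))  # 0.2
```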
Procedure<br />
In 3D programs, increasingly specialized algorithms are used to simulate global illumination effectively. These<br />
include, for example, path tracing, photon mapping and, under certain conditions, radiosity. All of these are<br />
methods that attempt to solve the rendering equation.<br />
The following approaches can be distinguished here:<br />
• Inversion: not applied in practice<br />
• Expansion: bi-directional approaches such as photon mapping + distributed ray tracing, bi-directional path tracing, and Metropolis light transport<br />
• Iteration: radiosity<br />
In light path notation, global illumination corresponds to paths of the type L(D|S)*E.<br />
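Because light path notation is itself a regular language, path classification can be illustrated with an ordinary regular expression. The following Python sketch is purely illustrative (the pattern name is hypothetical): a path is a light source L, any sequence of diffuse (D) or specular (S) bounces, then the eye E.<br />

```python
# Illustrative only: classifying light paths L(D|S)*E with a regular
# expression. L = light source, D = diffuse bounce, S = specular bounce,
# E = eye; global illumination covers every such path.
import re

GLOBAL_ILLUMINATION = re.compile(r"^L[DS]*E$")

print(bool(GLOBAL_ILLUMINATION.match("LDE")))    # True: one diffuse bounce
print(bool(GLOBAL_ILLUMINATION.match("LSDSE")))  # True: caustic-like path
print(bool(GLOBAL_ILLUMINATION.match("LE")))     # True: direct illumination
print(bool(GLOBAL_ILLUMINATION.match("DSE")))    # False: no light source
```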
Image-based lighting<br />
Another way to simulate real global illumination is the use of high dynamic range images (HDRIs), also known as<br />
environment maps, which surround the scene and illuminate it from all directions. This process is known as image-based lighting.<br />
External links<br />
• SSRT [1] – C++ source code for a Monte Carlo path tracer (supporting GI) - written with ease of understanding in<br />
mind.<br />
• Video demonstrating global illumination and the ambient color effect [2]<br />
• Real-time GI demos [3] – survey of practical real-time GI techniques as a list of executable demos<br />
• kuleuven [4] - This page contains the Global Illumination Compendium, an effort to bring together most of the<br />
useful formulas and equations for global illumination algorithms in computer <strong>graphics</strong>.<br />
• GI Tutorial [5] - Video tutorial on faking global illumination within <strong>3D</strong> Studio Max by Jason Donati<br />
References<br />
[1] http://www.nirenstein.com/e107/page.php?11<br />
[2] http://www.archive.org/details/MarcC_AoI-Global_Illumination<br />
[3] http://realtimeradiosity.com/demos<br />
[4] http://www.cs.kuleuven.be/~phil/GI/<br />
[5] http://www.youtube.com/watch?v=K5a-FqHz3o0
Gouraud shading<br />
Gouraud shading, named after Henri Gouraud, is an interpolation method used in computer graphics to produce<br />
continuous shading of surfaces represented by polygon meshes. In practice, Gouraud shading is most often used to<br />
achieve continuous lighting on triangle surfaces by computing the lighting at the corners of each triangle and linearly<br />
interpolating the resulting colours for each pixel covered by the triangle. Gouraud first published the technique in<br />
1971.[1][2][3]<br />
Description<br />
Gouraud-shaded triangle mesh using the Phong reflection model<br />
Gouraud shading works as follows: An estimate of the surface normal at each vertex in a polygonal 3D model is<br />
either specified for each vertex or found by averaging the surface normals of the polygons that meet at each vertex.<br />
Using these estimates, lighting computations based on a reflection model, e.g. the Phong reflection model, are then<br />
performed to produce colour intensities at the vertices. For each screen pixel that is covered by the polygonal mesh,<br />
colour intensities can then be interpolated from the colour values calculated at the vertices.<br />
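The two steps above can be sketched for a single triangle. This is an illustrative Python example (the function names and the Lambertian vertex lighting are assumptions for the sketch, not from the original text): lighting is evaluated only at the three vertices, then colours are interpolated per pixel using barycentric weights.<br />

```python
# A minimal sketch of Gouraud shading for one triangle: lighting is computed
# only at the vertices, then linearly interpolated for each covered pixel.

def lambert(normal, light_dir):
    """Diffuse vertex intensity: max(0, N.L)."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

def gouraud_pixel(vertex_intensities, barycentric):
    """Linearly interpolate the pre-computed vertex intensities."""
    return sum(i * w for i, w in zip(vertex_intensities, barycentric))

light = (0.0, 0.0, 1.0)
normals = [(0.0, 0.0, 1.0), (0.0, 1.0, 0.0), (1.0, 0.0, 0.0)]
intensities = [lambert(n, light) for n in normals]  # [1.0, 0.0, 0.0]

# A pixel at the triangle's centroid has equal barycentric weights.
print(gouraud_pixel(intensities, (1/3, 1/3, 1/3)))  # ~0.333
```

Note that the expensive `lambert` call runs three times per triangle, while the cheap interpolation runs once per pixel; this is exactly the trade-off against Phong shading discussed in the next section.<br />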
Comparison with other shading techniques<br />
Gouraud shading is considered superior to<br />
flat shading, which requires significantly<br />
less processing than Gouraud shading, but<br />
usually results in a faceted look.<br />
Comparison of flat shading and Gouraud shading.<br />
In comparison to Phong shading, Gouraud shading's strength and weakness lies in its interpolation. If a mesh covers<br />
more pixels in screen space than it has vertices, interpolating colour values from samples of expensive lighting<br />
calculations at vertices is less processor intensive than performing the lighting calculation for each pixel as in Phong<br />
shading. However, highly localized lighting effects (such as specular highlights, e.g. the glint of reflected light on the<br />
surface of an apple) will not be rendered correctly, and if a highlight lies in the middle of a polygon, but does not<br />
spread to the polygon's vertex, it will not be apparent in a Gouraud rendering; conversely, if a highlight occurs at the<br />
vertex of a polygon, it will be rendered correctly at this vertex (as this is where the lighting model is applied), but<br />
will be spread unnaturally across all neighboring polygons via the interpolation method. The problem is easily<br />
spotted in a rendering which ought to have a specular highlight moving smoothly across the surface of a model as it<br />
rotates. Gouraud shading will instead produce a highlight continuously fading in and out across neighboring portions<br />
of the model, peaking in intensity when the intended specular highlight passes over a vertex of the model. (This can<br />
be improved by increasing the density of vertices in the object, or alternatively an adaptive tessellation scheme might<br />
be used to increase the density only near the highlight.)<br />
Gouraud-shaded sphere - note the poor behaviour of the specular highlight.<br />
The same sphere rendered with a very high polygon count.<br />
References<br />
[1] Gouraud, Henri (1971). Computer Display of Curved Surfaces, Doctoral Thesis. University of Utah.<br />
[2] Gouraud, Henri (1971). "Continuous shading of curved surfaces". IEEE Transactions on Computers C-20 (6): 623–629.<br />
[3] Gouraud, Henri (1998). "Continuous shading of curved surfaces" (http://old.siggraph.org/publications/seminal-graphics.shtml). In Rosalee Wolfe (ed.). Seminal Graphics: Pioneering efforts that shaped the field. ACM Press. ISBN 1-58113-052-X.
Graphics pipeline<br />
In 3D computer graphics, the term graphics pipeline or rendering pipeline most commonly refers to the current<br />
state-of-the-art method of rasterization-based rendering as supported by commodity graphics hardware. The<br />
graphics pipeline typically accepts some representation of three-dimensional primitives as input and produces a<br />
2D raster image as output. OpenGL and Direct3D are two notable 3D graphics standards, both describing very<br />
similar graphics pipelines.<br />
Stages of the <strong>graphics</strong> pipeline<br />
Generations of graphic pipeline<br />
Graphics pipelines constantly evolve. This article describes the graphics pipeline as it can be found in OpenGL 3.2<br />
and Direct3D 9.<br />
Transformation<br />
This stage consumes data about polygons with vertices, edges and faces that constitute the whole scene. A matrix<br />
controls the linear transformations (scaling, rotation, translation, etc.) and viewing transformations (world and view<br />
space) that are to be applied to this data.<br />
Per-vertex lighting<br />
Geometry in the complete <strong>3D</strong> scene is lit according to the defined locations of light sources, reflectance, and other<br />
surface properties. Current hardware implementations of the <strong>graphics</strong> pipeline compute lighting only at the vertices<br />
of the polygons being rendered. The lighting values between vertices are then interpolated during rasterization.<br />
Per-fragment (i.e. per-pixel) lighting can be done on modern <strong>graphics</strong> hardware as a post-rasterization process by<br />
means of a shader program.<br />
Viewing transformation or normalizing transformation<br />
Objects are transformed from 3-D world space coordinates into a 3-D coordinate system based on the position and<br />
orientation of a virtual camera. This results in the original <strong>3D</strong> scene as seen from the camera’s point of view, defined<br />
in what is called eye space or camera space. The normalizing transformation is the mathematical inverse of the<br />
viewing transformation, and maps from an arbitrary user-specified coordinate system (u, v, w) to a canonical<br />
coordinate system (x, y, z).<br />
Primitives generation<br />
After the transformation, new primitives are generated from those primitives that were sent to the beginning of the<br />
<strong>graphics</strong> pipeline.<br />
Projection transformation<br />
In the case of a perspective projection, objects which are distant from the camera are made smaller. In an<br />
orthographic projection, objects retain their original size regardless of distance from the camera.<br />
In this stage of the <strong>graphics</strong> pipeline, geometry is transformed from the eye space of the rendering camera into a<br />
special <strong>3D</strong> coordinate space called "Homogeneous Clip Space", which is very convenient for clipping. Clip Space<br />
tends to range from [-1, 1] in X, Y, Z, although this can vary by graphics API (Direct3D or OpenGL). The Projection<br />
Transform is responsible for mapping the planes of the camera's viewing volume (or Frustum) to the planes of the<br />
box which makes up Clip Space.
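The mapping into clip space can be sketched with a symmetric perspective matrix followed by the perspective divide. The following Python example is an illustrative sketch assuming OpenGL-style conventions (clip space [-1, 1] on all axes, camera looking down -z); the helper names are hypothetical.<br />

```python
# Illustrative sketch of the projection transform: an OpenGL-style symmetric
# perspective matrix applied to an eye-space point, then the divide by w.
import math

def perspective(fov_y_deg, aspect, near, far):
    """Build a 4x4 perspective matrix mapping the frustum to clip space."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [[f / aspect, 0, 0, 0],
            [0, f, 0, 0],
            [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
            [0, 0, -1, 0]]

def project(m, v):
    """Apply the matrix and the perspective divide, yielding NDC."""
    x, y, z, w = (sum(m[i][j] * v[j] for j in range(4)) for i in range(4))
    return (x / w, y / w, z / w)

# A point straight ahead of the camera lands at the centre of clip space,
# with its depth mapped somewhere inside [-1, 1].
print(project(perspective(90.0, 1.0, 0.1, 100.0), (0.0, 0.0, -1.0, 1.0)))
```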
Clipping<br />
Geometric primitives that now fall outside of the viewing frustum will not be visible and are discarded at this stage.<br />
Clipping is not necessary to achieve a correct image output, but it accelerates the rendering process by eliminating<br />
the unneeded rasterization and post-processing on primitives that will not appear anyway.<br />
Viewport transformation<br />
The post-clip vertices are transformed once again to be in window space. In practice, this transform is very simple:<br />
applying a scale (multiplying by the width of the window) and a bias (adding to the offset from the screen origin). At<br />
this point, the vertices have coordinates which directly relate to pixels in a raster.<br />
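The scale-and-bias described above fits in a few lines. This illustrative Python sketch (the function name is hypothetical) maps normalized device coordinates in [-1, 1] to window-space pixels:<br />

```python
# Minimal sketch of the viewport transformation: a scale and a bias that map
# normalized device coordinates ([-1, 1]) to window-space pixel coordinates.

def viewport(ndc_x, ndc_y, width, height):
    """Map NDC to window coordinates (origin at the lower-left corner)."""
    x = (ndc_x + 1.0) * 0.5 * width   # scale by the window width, bias by +1
    y = (ndc_y + 1.0) * 0.5 * height
    return (x, y)

print(viewport(0.0, 0.0, 640, 480))    # (320.0, 240.0) - the window centre
print(viewport(-1.0, -1.0, 640, 480))  # (0.0, 0.0)     - the lower-left corner
```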
Scan conversion or rasterization<br />
Rasterization is the process by which the 2D image space representation of the scene is converted into raster format<br />
and the correct resulting pixel values are determined. From now on, operations will be carried out on each single<br />
pixel. This stage is rather complex, involving multiple steps often referred to as a group under the name of the pixel<br />
pipeline.<br />
Texturing, fragment shading<br />
At this stage of the pipeline individual fragments (or pre-pixels) are assigned a color based on values interpolated<br />
from the vertices during rasterization or from a texture in memory.<br />
Display<br />
The final colored pixels can then be displayed on a computer monitor or other display.<br />
The <strong>graphics</strong> pipeline in hardware<br />
The rendering pipeline is mapped onto current <strong>graphics</strong> acceleration hardware such that the input to the <strong>graphics</strong> card<br />
(GPU) is in the form of vertices. These vertices then undergo transformation and per-vertex lighting. At this point in<br />
modern GPU pipelines a custom vertex shader program can be used to manipulate the <strong>3D</strong> vertices prior to<br />
rasterization. Once transformed and lit, the vertices undergo clipping and rasterization resulting in fragments. A<br />
second custom shader program can then be run on each fragment before the final pixel values are output to the frame<br />
buffer for display.<br />
The <strong>graphics</strong> pipeline is well suited to the rendering process because it allows the GPU to function as a stream<br />
processor since all vertices and fragments can be thought of as independent. This allows all stages of the pipeline to<br />
be used simultaneously for different vertices or fragments as they work their way through the pipe. In addition to<br />
pipelining vertices and fragments, their independence allows <strong>graphics</strong> processors to use parallel processing units to<br />
process multiple vertices or fragments in a single stage of the pipeline at the same time.
References<br />
1. Graphics pipeline. (n.d.). Computer Desktop Encyclopedia. Retrieved December 13, 2005, from Answers.com: [1]<br />
2. Raster Graphics and Color [2] 2004 by Greg Humphreys at the University of Virginia<br />
[1] http://www.answers.com/topic/graphics-pipeline<br />
[2] http://www.cs.virginia.edu/~gfx/Courses/2004/Intro.Fall.04/handouts/01-raster.pdf<br />
External links<br />
• MIT OpenCourseWare Computer Graphics, Fall 2003 (http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-837-computer-graphics-fall-2003/)<br />
• ExtremeTech 3D Pipeline Tutorial (http://www.extremetech.com/computing/49076-extremetech-3d-pipeline-tutorial)<br />
• http://developer.nvidia.com/<br />
• http://www.atitech.com/developer/<br />
Hidden line removal<br />
Hidden line removal is an extension of wireframe model rendering<br />
where lines (or segments of lines) covered by surfaces are not drawn.<br />
This is not the same as hidden face removal, since hidden line removal depends on depth and occlusion while<br />
back-face removal depends only on surface normals.<br />
Algorithms<br />
A commonly used algorithm to implement it is Arthur Appel's<br />
algorithm. [1] This algorithm works by propagating the visibility from a<br />
segment with a known visibility to a segment whose visibility is yet to<br />
be determined. Certain pathological cases exist that can make this<br />
algorithm difficult to implement. Those cases are:<br />
1. Vertices on edges;<br />
2. Edges on vertices;<br />
3. Edges on edges.<br />
Line removal technique in action<br />
This algorithm is unstable because an error in visibility will be propagated to subsequent nodes (although there are<br />
ways to compensate for this problem). [2]
References<br />
[1] Appel, A., "The Notion of Quantitative Invisibility and the Machine Rendering of Solids", Proceedings ACM National Conference, Thompson Books, Washington, DC, 1967, pp. 387–393.<br />
[2] James Blinn, "Fractional Invisibility", IEEE Computer Graphics and Applications, Nov. 1988, pp. 77–84.<br />
External links<br />
• Patrick-Gilles Maillot's Thesis (http://www.chez.com/pmaillot) an extension of the Bresenham line drawing<br />
algorithm to perform 3D hidden lines removal; also published in MICAD '87 proceedings on CAD/CAM and<br />
Computer Graphics, page 591 - ISBN 2-86601-084-1.<br />
• Vector Hidden Line Removal (http://wheger.tripod.com/vhl/vhl.htm) An article by Walter Heger with a<br />
further description (of the pathological cases) and more citations.<br />
Hidden surface determination<br />
In <strong>3D</strong> computer <strong>graphics</strong>, hidden surface determination (also known as hidden surface removal (HSR),<br />
occlusion culling (OC) or visible surface determination (VSD)) is the process used to determine which surfaces<br />
and parts of surfaces are not visible from a certain viewpoint. A hidden surface determination algorithm is a solution<br />
to the visibility problem, which was one of the first major problems in the field of <strong>3D</strong> computer <strong>graphics</strong>. The<br />
process of hidden surface determination is sometimes called hiding, and such an algorithm is sometimes called a<br />
hider. The analogue for line rendering is hidden line removal. Hidden surface determination is necessary to render<br />
an image correctly, so that one cannot look through walls in virtual reality.<br />
There are many techniques for hidden surface determination. They are fundamentally an exercise in sorting, and<br />
usually vary in the order in which the sort is performed and how the problem is subdivided. Sorting large quantities<br />
of <strong>graphics</strong> primitives is usually done by divide and conquer.<br />
Hidden surface removal algorithms<br />
Considering the rendering pipeline, the projection, the clipping, and the rasterization steps are handled differently by<br />
the following algorithms:<br />
• Z-buffering During rasterization the depth/Z value of each pixel (or sample in the case of anti-aliasing, but<br />
without loss of generality the term pixel is used) is checked against an existing depth value. If the current pixel is<br />
behind the pixel in the Z-buffer, the pixel is rejected, otherwise it is shaded and its depth value replaces the one in<br />
the Z-buffer. Z-buffering supports dynamic scenes easily, and is currently implemented efficiently in <strong>graphics</strong><br />
hardware. This is the current standard. The cost of using Z-buffering is that it uses up to 4 bytes per pixel, and that<br />
the rasterization algorithm needs to check each rasterized sample against the z-buffer. The z-buffer can also suffer<br />
from artifacts due to precision errors (also known as z-fighting), although this is far less common now that<br />
commodity hardware supports 24-bit and higher precision buffers.<br />
• Coverage buffers (C-Buffer) and Surface buffer (S-Buffer): faster than z-buffers and commonly used in games in<br />
the Quake I era. Instead of storing the Z value per pixel, they store list of already displayed segments per line of<br />
the screen. New polygons are then cut against already displayed segments that would hide them. An S-Buffer can<br />
display unsorted polygons, while a C-Buffer requires polygons to be displayed from the nearest to the furthest.<br />
Because C-buffers have no overdraw, they make the rendering a bit faster. They were commonly used with BSP<br />
trees, which would provide the polygon sorting.<br />
• Sorted Active Edge List: used in Quake 1, this stored a list of the edges of already displayed polygons.<br />
Polygons are displayed from the nearest to the furthest. New polygons are clipped against already displayed<br />
polygons' edges, creating new polygons to display, and the additional edges are stored. It is much harder to<br />
implement than S/C/Z buffers, but it will scale much better with the increase in resolution.<br />
• Painter's algorithm sorts polygons by their barycenter and draws them back to front. This produces few artifacts<br />
when applied to scenes with polygons of similar size forming smooth meshes and backface culling turned on. The<br />
cost here is the sorting step and the fact that visual artifacts can occur.<br />
• Binary space partitioning (BSP) divides a scene along planes corresponding to polygon boundaries. The<br />
subdivision is constructed in such a way as to provide an unambiguous depth ordering from any point in the scene<br />
when the BSP tree is traversed. The disadvantage here is that the BSP tree is created with an expensive<br />
pre-process. This means that it is less suitable for scenes consisting of dynamic geometry. The advantage is that<br />
the data is pre-sorted and error free, ready for the previously mentioned algorithms. Note that the BSP is not a<br />
solution to HSR, only a help.<br />
• Ray tracing attempts to model the path of light rays to a viewpoint by tracing rays from the viewpoint into the<br />
scene. Although not a hidden surface removal algorithm as such, it implicitly solves the hidden surface removal<br />
problem by finding the nearest surface along each view-ray. Effectively this is equivalent to sorting all the<br />
geometry on a per pixel basis.<br />
• The Warnock algorithm divides the screen into smaller areas and sorts triangles within these. If there is ambiguity<br />
(i.e., polygons overlap in depth extent within these areas), then further subdivision occurs. At the limit,<br />
subdivision may occur down to the pixel level.<br />
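The Z-buffering entry above, which is the current hardware standard, can be sketched in a few lines. This is an illustrative Python fragment (the buffers and the `plot` helper are hypothetical; no real rasterizer is involved): each sample's depth is tested against the stored depth, nearer samples overwrite, and farther ones are rejected.<br />

```python
# Illustrative sketch of Z-buffering: per-pixel depth testing against a
# depth buffer initialized to "infinitely far away".

WIDTH, HEIGHT = 4, 4
zbuffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
framebuffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, depth, color):
    """Accept the sample only if it is nearer than what is already stored."""
    if depth < zbuffer[y][x]:
        zbuffer[y][x] = depth
        framebuffer[y][x] = color

plot(1, 1, depth=10.0, color=(0, 0, 255))  # far blue sample
plot(1, 1, depth=2.0, color=(255, 0, 0))   # near red sample wins
plot(1, 1, depth=5.0, color=(0, 255, 0))   # behind the red sample: rejected
print(framebuffer[1][1])  # (255, 0, 0)
```

Note that the samples arrive in arbitrary depth order, which is exactly why Z-buffering handles dynamic, unsorted scenes so easily.<br />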
Culling and VSD<br />
A related area to VSD is culling, which usually happens before VSD in a rendering pipeline. Primitives or batches of<br />
primitives can be rejected in their entirety, which usually reduces the load on a well-designed system.<br />
The advantage of culling early on the pipeline is that entire objects that are invisible do not have to be fetched,<br />
transformed, rasterized or shaded. Here are some types of culling algorithms:<br />
Viewing frustum culling<br />
The viewing frustum is a geometric representation of the volume visible to the virtual camera. Naturally, objects<br />
outside this volume will not be visible in the final image, so they are discarded. Often, objects lie on the boundary of<br />
the viewing frustum. These objects are cut into pieces along this boundary in a process called clipping, and the<br />
pieces that lie outside the frustum are discarded as there is no place to draw them.<br />
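A common concrete form of this test checks an object's bounding sphere against the frustum planes. The following Python sketch is illustrative only (the plane representation and function name are assumptions): planes are stored as (normal, d) with normals pointing into the frustum, and a sphere is discarded when it lies entirely behind any plane.<br />

```python
# Illustrative sketch of viewing-frustum culling: a bounding sphere is
# rejected if it lies entirely behind any frustum plane.

def cull_sphere(center, radius, planes):
    """Return True if the sphere can be discarded (fully outside)."""
    for normal, d in planes:
        distance = sum(n * c for n, c in zip(normal, center)) + d
        if distance < -radius:      # entirely behind this plane
            return True
    return False

# A single illustrative plane at z = -1 whose normal faces down the -z axis:
planes = [((0.0, 0.0, -1.0), -1.0)]
print(cull_sphere((0.0, 0.0, -5.0), 1.0, planes))  # False: inside, keep it
print(cull_sphere((0.0, 0.0, 5.0), 1.0, planes))   # True: behind, culled
```

A full frustum would use six such planes (near, far, left, right, top, bottom); the loop is unchanged.<br />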
Backface culling<br />
Since meshes are hollow shells, not solid objects, the back side of some faces, or polygons, in the mesh will never<br />
face the camera. Typically, there is no reason to draw such faces. This is responsible for the effect often seen in<br />
computer and video games in which, if the camera happens to be inside a mesh, rather than seeing the "inside"<br />
surfaces of the mesh, it mostly disappears. (Some game engines continue to render any forward-facing or<br />
double-sided polygons, resulting in stray shapes appearing without the rest of the penetrated mesh.)<br />
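The back-facing test itself is a cross product followed by a dot product. The sketch below assumes counter-clockwise winding for front faces, a common but not universal convention:<br />

```python
def is_backfacing(v0, v1, v2, eye):
    """Cull a triangle whose face normal points away from the eye.
    Winding is assumed counter-clockwise when viewed from the front."""
    e1 = [b - a for a, b in zip(v0, v1)]
    e2 = [b - a for a, b in zip(v0, v2)]
    # Face normal = e1 x e2
    n = [e1[1] * e2[2] - e1[2] * e2[1],
         e1[2] * e2[0] - e1[0] * e2[2],
         e1[0] * e2[1] - e1[1] * e2[0]]
    view = [a - b for a, b in zip(v0, eye)]  # vector from eye to triangle
    return sum(a * b for a, b in zip(n, view)) >= 0

# A CCW triangle in the z=0 plane, normal +z, eye on the +z side: front-facing.
print(is_backfacing((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 5)))   # False
print(is_backfacing((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, -5)))  # True
```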
Contribution culling<br />
Often, objects are so far away that they do not contribute significantly to the final image. These objects are thrown<br />
away if their screen projection is too small. See also: clipping plane.<br />
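A contribution-culling test can estimate the projected size of a bounding sphere under a pinhole model. The threshold and the focal-length-in-pixels parameter below are illustrative assumptions, not standard API values:<br />

```python
def contributes(radius_world, distance, focal_px, min_pixels=2.0):
    """Estimate the projected radius in pixels of a bounding sphere under
    a simple pinhole model, and discard objects that project too small.
    focal_px is the focal length expressed in pixels (an assumed camera
    parameter for this sketch)."""
    projected = focal_px * radius_world / distance
    return projected >= min_pixels

print(contributes(1.0, 10.0, 800.0))    # True  (projects to ~80 px)
print(contributes(1.0, 5000.0, 800.0))  # False (projects to ~0.16 px)
```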
Occlusion culling<br />
Objects that are entirely behind other opaque objects may be culled. This is a very popular mechanism to speed up<br />
the rendering of large scenes that have a moderate to high depth complexity. There are several types of occlusion<br />
culling approaches:
• Potentially visible set (PVS) rendering divides a scene into regions and pre-computes visibility for them. These<br />
visibility sets are then indexed at run-time to obtain high-quality visibility sets (accounting for complex occluder<br />
interactions) quickly.<br />
• Portal rendering divides a scene into cells/sectors (rooms) and portals (doors), and computes which sectors are<br />
visible by clipping them against portals.<br />
Hansong Zhang's dissertation "Effective Occlusion Culling for the Interactive Display of Arbitrary Models" [1]<br />
describes an occlusion culling approach.<br />
Divide and conquer<br />
A popular theme in the VSD literature is divide and conquer. The Warnock algorithm pioneered dividing the screen.<br />
Beam tracing is a ray-tracing approach that divides the visible volumes into beams. Various screen-space<br />
subdivision approaches reduce the number of primitives considered per region, e.g. tiling, or screen-space BSP<br />
clipping. Tiling may be used as a preprocess to other techniques. Z-buffer hardware may typically include a coarse<br />
'hi-Z' buffer against which primitives can be rejected early, without rasterization; this is a form of occlusion culling.<br />
Hierarchical data structures such as bounding volume hierarchies (BVHs), BSP trees, octrees and kd-trees are often<br />
used to subdivide the scene's space. This allows visibility determination to be performed hierarchically: if a node in<br />
the tree is invisible, then all of its child nodes are also invisible, and no further processing is necessary (the whole<br />
subtree can be rejected by the renderer). If a node is considered visible, then each of its children needs to be<br />
evaluated. This traversal is effectively a tree walk in which invisibility/occlusion stops the recursion, and reaching a<br />
leaf node terminates it.<br />
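The hierarchical tree walk can be sketched as a short recursion. The dict-based node layout and the 1-D interval "volumes" below are illustrative assumptions chosen to keep the sketch self-contained:<br />

```python
def visible_leaves(node, is_visible, out):
    """Walk a bounding-volume hierarchy: if a node's volume is invisible,
    its whole subtree is rejected without further tests; otherwise recurse
    until leaves are reached.  A node is a dict with a 'volume', optional
    'children', and (for leaves) an 'object' key."""
    if not is_visible(node["volume"]):
        return  # every descendant is also invisible
    if "children" in node:
        for child in node["children"]:
            visible_leaves(child, is_visible, out)
    else:
        out.append(node["object"])

# Toy 1-D "volumes" as (lo, hi) intervals; visible if overlapping [0, 10].
tree = {"volume": (0, 100), "children": [
    {"volume": (0, 10), "object": "A"},
    {"volume": (50, 100), "children": [
        {"volume": (50, 60), "object": "B"}]}]}
seen = []
visible_leaves(tree, lambda v: v[0] <= 10 and v[1] >= 0, seen)
print(seen)  # ['A'] -- the (50, 100) subtree was rejected in a single test
```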
References<br />
[1] http://www.cs.unc.edu/~zhangh/hom.html
High dynamic range rendering<br />
In 3D computer graphics, high dynamic range rendering (HDRR or HDR rendering), also known as high<br />
dynamic range lighting, is the rendering of computer graphics scenes using lighting calculations done over a<br />
larger dynamic range. This allows preservation of details that may be lost due to limiting contrast ratios. Video<br />
games, computer-generated movies and special effects benefit from this, as it creates more realistic scenes than<br />
simpler lighting models can.<br />
Graphics processor company Nvidia summarizes the motivation for HDRR in three points: [1] 1) bright things can<br />
be really bright, 2) dark things can be really dark, and 3) details can be seen in both.<br />
(Figure: A comparison of the standard fixed-aperture rendering (left) with the HDR rendering (right) in the video<br />
game Half-Life 2: Lost Coast.)<br />
History<br />
The use of high dynamic range imaging (HDRI) in computer graphics was introduced by Greg Ward in 1985 with<br />
his open-source Radiance rendering and lighting simulation software, which created the first file format to retain<br />
high-dynamic-range images. HDRI then languished for more than a decade, held back by limited computing power,<br />
storage, and capture methods; only recently has the technology to put HDRI into practical use been developed. [2] [3]<br />
In 1990, Nakamae et al. presented a lighting model for driving simulators that highlighted the need for<br />
high-dynamic-range processing in realistic simulations. [4]<br />
In 1995, Greg Spencer presented Physically-based glare effects for digital images at SIGGRAPH, providing a<br />
quantitative model for flare and blooming in the human eye. [5]<br />
In 1997 Paul Debevec presented Recovering high dynamic range radiance maps from photographs [6] at SIGGRAPH<br />
and the following year presented Rendering synthetic objects into real scenes. [7] These two papers laid the<br />
framework for creating HDR light probes of a location and then using this probe to light a rendered scene.<br />
HDRI and HDRL (high-dynamic-range image-based lighting) have, ever since, been used in many situations in 3D<br />
scenes in which inserting a 3D object into a real environment requires light-probe data to provide realistic lighting<br />
solutions.<br />
In gaming applications, Riven: The Sequel to Myst in 1997 used an HDRI post-processing shader directly based on<br />
Spencer's paper. [8] After E3 2003, Valve Software released a demo movie of their Source engine rendering a<br />
cityscape in a high dynamic range. [9] The term was not commonly used again until E3 2004, where it gained much<br />
more attention when Valve Software announced Half-Life 2: Lost Coast and Epic Games showcased Unreal Engine<br />
3, coupled with open-source engines such as OGRE 3D and open-source games like Nexuiz.<br />
Examples<br />
One of the primary advantages of HDR rendering is that details in a scene with a large contrast ratio are preserved.<br />
Without HDR, areas that are too dark are clipped to black and areas that are too bright are clipped to white. These<br />
are represented by the hardware as floating-point values of 0.0 and 1.0 for pure black and pure white, respectively.<br />
Another aspect of HDR rendering is the addition of perceptual cues which increase apparent brightness. HDR<br />
rendering also affects how light is preserved in optical phenomena such as reflections and refractions, as well as<br />
transparent materials such as glass. In LDR rendering, very bright light sources in a scene (such as the sun) are<br />
capped at 1.0. When this light is reflected the result must then be less than or equal to 1.0. However, in HDR<br />
rendering, very bright light sources can exceed the 1.0 brightness to simulate their actual values. This allows<br />
reflections off surfaces to maintain realistic brightness for bright light sources.<br />
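The difference between the LDR clamp and HDR's preserved intensities can be shown with a tiny numeric sketch; the 8.0 "sun" intensity and 50% reflectivity are made-up illustrative values:<br />

```python
def reflect_brightness(source, reflectivity, ldr=True):
    """In LDR rendering the source is clamped to 1.0 before the bounce,
    so a 50%-reflective surface can never return more than 0.5; in HDR
    the true intensity survives the multiplication."""
    if ldr:
        source = min(source, 1.0)
    return source * reflectivity

sun = 8.0  # an HDR intensity well above the displayable 1.0
print(reflect_brightness(sun, 0.5, ldr=True))   # 0.5 (washed out)
print(reflect_brightness(sun, 0.5, ldr=False))  # 4.0 (still very bright)
```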
Limitations and compensations<br />
Human eye<br />
The human eye can perceive scenes with a very high dynamic contrast ratio, around 1,000,000:1. Adaptation is<br />
achieved in part through adjustments of the iris and slow chemical changes, which take some time (e.g. the delay in<br />
being able to see when switching from bright lighting to pitch darkness). At any given time, the eye's static range is<br />
smaller, around 10,000:1. However, this is still generally higher than the static range achievable by most display<br />
technology.<br />
Output to displays<br />
Although many manufacturers claim very high numbers, plasma displays, LCD displays, and CRT displays can only<br />
deliver a fraction of the contrast ratio found in the real world, and these are usually measured under ideal conditions.<br />
The simultaneous contrast of real content under normal viewing conditions is significantly lower. [10]<br />
Some increase in dynamic range in LCD monitors can be achieved by automatically reducing the backlight for dark<br />
scenes (LG calls this Digital Fine Contrast; [11] Samsung quotes a "dynamic contrast ratio"), or by using an array of<br />
brighter and darker LED backlights (BrightSide Technologies – now part of Dolby [12] – and Samsung, in<br />
development [13] ).<br />
Light bloom<br />
Light blooming is the result of scattering in the human lens, which our brain interprets as a bright spot in a scene. For<br />
example, a bright light in the background will appear to bleed over onto objects in the foreground. This can be used<br />
to create an illusion to make the bright spot appear to be brighter than it really is. [5]<br />
Flare<br />
Flare is the diffraction of light in the human lens, resulting in "rays" of light emanating from small light sources, and<br />
can also result in some chromatic effects. It is most visible on point light sources because of their small visual<br />
angle. [5]<br />
More generally, HDR rendering systems have to map the full dynamic range of what the eye would see in the<br />
rendered situation onto the limited capabilities of the display device. This tone mapping is done relative to what the<br />
virtual scene camera sees, combined with several full-screen effects, e.g. to simulate dust in the air which is lit by<br />
direct sunlight in a dark cavern, or the scattering in the eye. Tone mapping and blooming shaders can be used<br />
together to help simulate these effects.<br />
Tone mapping<br />
Tone mapping, in the context of <strong>graphics</strong> rendering, is a technique used to map colors from high dynamic range (in<br />
which lighting calculations are performed) to a lower dynamic range that matches the capabilities of the desired<br />
display device. Typically, the mapping is non-linear – it preserves enough range for dark colors and gradually limits<br />
the dynamic range for bright colors. This technique often produces visually appealing images with good overall<br />
detail and contrast. Various tone mapping operators exist, ranging from simple real-time methods used in computer<br />
games to more sophisticated techniques that attempt to imitate the perceptual response of the human visual system.<br />
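One of the simplest such operators is the global Reinhard curve, shown below as a sketch. It is one possible operator among many, not the method any particular engine is implied to use:<br />

```python
def reinhard(luminance):
    """Simple global Reinhard tone-mapping operator: compresses [0, inf)
    into [0, 1), nearly linear for dark values and asymptotic for bright
    ones, so dark detail is preserved while highlights are limited."""
    return luminance / (1.0 + luminance)

for lum in (0.05, 1.0, 10.0, 1000.0):
    print(lum, "->", round(reinhard(lum), 4))
# 0.05 -> 0.0476, 1.0 -> 0.5, 10.0 -> 0.9091, 1000.0 -> 0.999
```

Note how a 20,000:1 spread of input luminances lands in the displayable [0, 1) range while keeping the dark values distinguishable.<br />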
Applications in computer entertainment<br />
HDRR is currently prevalent in games, primarily on PCs, Microsoft's Xbox 360, and Sony's PlayStation 3. It has<br />
also been simulated on the PlayStation 2, GameCube, Xbox and Amiga systems. Sproing Interactive Media has<br />
announced that its new Athena game engine for the Wii will support HDRR, adding the Wii to the list of systems<br />
that support it.<br />
In desktop publishing and gaming, color values are often processed several times over. As this includes<br />
multiplication and division (which can accumulate rounding errors), it is useful to have the extended accuracy and<br />
range of 16-bit integer or 16-bit floating-point formats. This is useful irrespective of the aforementioned limitations<br />
in some hardware.<br />
Development of HDRR through DirectX<br />
Complex shader effects began with the release of Shader Model 1.0 in DirectX 8, which illuminated 3D worlds<br />
with what is called standard lighting. Standard lighting, however, had two problems:<br />
1. Lighting precision was confined to 8-bit integers, which limited the contrast ratio to 256:1. In the HSV color<br />
model, the value (V), or brightness, of a color has a range of 0–255. This means the brightest white (a value of<br />
255) is only 255 levels brighter than the darkest shade above pure black (a value of 1).<br />
2. Lighting calculations were integer-based, which limited their accuracy, because the real world is not confined to<br />
whole numbers.<br />
Before HDRR was fully developed and implemented, games may have attempted to enhance the contrast of a scene<br />
by exaggerating the final render's contrast (as seen in Need For Speed: Underground 2's "Enhanced contrast" setting)<br />
or using some other color correction method (such as in certain scenes in Metal Gear Solid 3: Snake Eater).<br />
On December 24, 2002, Microsoft released a new version of DirectX. DirectX 9.0 introduced Shader Model 2.0,<br />
which offered one of the components necessary for rendering high-dynamic-range images: lighting precision was no<br />
longer limited to just 8 bits. Although 8 bits remained the minimum in applications, programmers could choose up<br />
to 24 bits for lighting precision. However, all calculations were still integer-based. One of the first graphics cards to<br />
support DirectX 9.0 natively was ATI's Radeon 9700, though the effect wasn't programmed into games for years<br />
afterwards. On August 23, 2003, Microsoft updated DirectX to DirectX 9.0b, which enabled the Pixel Shader 2.x<br />
(Extended) profile for ATI's Radeon X series and NVIDIA's GeForce FX series of graphics processing units.<br />
On August 9, 2004, Microsoft updated DirectX once more to DirectX 9.0c. This also exposed the Shader Model 3.0<br />
profile for high level shader language (HLSL). Shader Model 3.0's lighting precision has a minimum of 32 bits as<br />
opposed to 2.0's 8-bit minimum. Also all lighting-precision calculations are now floating-point based. NVIDIA states<br />
that contrast ratios using Shader Model 3.0 can be as high as 65535:1 using 32-bit lighting precision. At first, HDRR<br />
was only possible on video cards capable of Shader-Model-3.0 effects, but software developers soon added<br />
compatibility for Shader Model 2.0. As a side note, when referred to as Shader Model 3.0 HDR, HDRR is really<br />
done by FP16 blending. FP16 blending is not part of Shader Model 3.0, but is supported mostly by cards also<br />
capable of Shader Model 3.0 (exceptions include the GeForce 6200 series). FP16 blending can be used as a faster
way to render HDR in video games.<br />
Shader Model 4.0 is a feature of DirectX 10, which was released with Windows Vista. It allows for 128-bit HDR<br />
rendering, as opposed to the 64-bit HDR of Shader Model 3.0 (although 128-bit is theoretically possible under<br />
Shader Model 3.0).<br />
Shader Model 5.0 is a feature of DirectX 11 on Windows Vista and Windows 7. It allows 6:1 compression of HDR<br />
textures without noticeable loss, which was not possible with the HDR texture compression techniques of previous<br />
versions of DirectX.<br />
Development of HDRR through OpenGL<br />
It is possible to develop HDRR through GLSL shaders from OpenGL 1.4 onwards.<br />
GPUs that support HDRR<br />
This is a list of graphics processing units that can support HDRR. Because the minimum requirement for HDR<br />
rendering is Shader Model 2.0 (in this case, DirectX 9), any graphics card that supports Shader Model 2.0 can do<br />
HDR rendering. However, HDRR may greatly impact the performance of the software using it if the device is not<br />
sufficiently powerful.<br />
GPUs designed for games<br />
Shader Model 2 Compliant (includes versions 2.0, 2.0a and 2.0b)<br />
From ATI – R300 series: 9500, 9500 Pro, 9550, 9550 SE, 9600, 9600 SE, 9600 TX, 9600 AIW, 9600 Pro, 9600 XT, 9650,<br />
9700, 9700 AIW, 9700 Pro, 9800, 9800 SE, 9800 AIW, 9800 Pro, 9800 XT, X300, X300 SE, X550, X600 AIW, X600 Pro,<br />
X600 XT; R420 series: X700, X700 Pro, X700 XT, X800, X800 SE, X800 GT, X800 GTO, X800 Pro, X800 AIW, X800 XL,<br />
X800 XT, X800 XTPE, X850 Pro, X850 XT, X850 XTPE; Radeon RS690: X1200 Mobility<br />
From NVIDIA – GeForce FX (includes PCX versions): 5100, 5200, 5200 SE/XT, 5200 Ultra, 5300, 5500, 5600,<br />
5600 SE/XT, 5600 Ultra, 5700, 5700 VE, 5700 LE, 5700 Ultra, 5750, 5800, 5800 Ultra, 5900, 5900 ZT, 5900 SE/XT,<br />
5900 Ultra, 5950, 5950 Ultra<br />
From S3 Graphics – Delta Chrome: S4, S4 Pro, S8, S8 Nitro, F1, F1 Pole; Gamma Chrome: S18 Pro, S18 Ultra, S25, S27<br />
From SiS – Xabre: Xabre II<br />
From XGI – Volari: V3 XT, V5, V8, V8 Ultra, Duo V5 Ultra, Duo V8 Ultra, 8300, 8600, 8600 XT<br />
Shader Model 3.0 Compliant<br />
From ATI – R520 series: X1300 HyperMemory Edition, X1300, X1300 Pro, X1600 Pro, X1600 XT, X1650 Pro,<br />
X1650 XT, X1800 GTO, X1800 XL AIW, X1800 XL, X1800 XT, X1900 AIW, X1900 GT, X1900 XT, X1900 XTX,<br />
X1950 Pro, X1950 XT, X1950 XTX, Xenos (Xbox 360)<br />
From NVIDIA – GeForce 6: 6100, 6150, 6200 LE, 6200, 6200 TC, 6250, 6500, 6600, 6600 LE, 6600 DDR2, 6600 GT,<br />
6610 XL, 6700 XL, 6800, 6800 LE, 6800 XT, 6800 GS, 6800 GTO, 6800 GT, 6800 Ultra, 6800 Ultra Extreme;<br />
GeForce 7: 7300 LE, 7300 GS, 7300 GT, 7600 GS, 7600 GT, 7800 GS, 7800 GT, 7800 GTX, 7800 GTX 512MB,<br />
7900 GS, 7900 GT, 7950 GT, 7900 GTO, 7900 GTX, 7900 GX2, 7950 GX2, RSX (PlayStation 3)<br />
Shader Model 4.0/4.1* Compliant<br />
From ATI – R600 series: [14] HD 2900 XT, HD 2900 Pro, HD 2900 GT, HD 2600 XT, HD 2600 Pro, HD 2400 XT,<br />
HD 2400 Pro, HD 2350, HD 3870*, HD 3850*, HD 3650*, HD 3470*, HD 3450*, HD 3870 X2*; R700 series: [15]<br />
HD 4870 X2, HD 4890, HD 4870*, HD 4850*, HD 4670*, HD 4650*<br />
From NVIDIA – GeForce 8: [16] 8800 Ultra, 8800 GTX, 8800 GT, 8800 GTS, 8800 GTS 512MB, 8800 GS, 8600 GTS,<br />
8600 GT, 8600M GS, 8600M GT, 8500 GT, 8400 GS, 8300 GS, 8300 GT, 8300; GeForce 9 series: [17] 9800 GX2,<br />
9800 GTX (+), 9800 GT, 9600 GT, 9600 GSO, 9500 GT, 9400 GT, 9300 GT, 9300 GS, 9200 GT; GeForce 200 series: [18]<br />
GTX 295, GTX 285, GTX 280, GTX 275, GTX 260, GTS 250, GTS 240, GT 240*, GT 220*<br />
Shader Model 5.0 Compliant<br />
From ATI – R800 series: [19] HD 5750, HD 5770, HD 5850, HD 5870, HD 5870 X2, HD 5970*; R900 series: [20]<br />
HD 6990, HD 6970, HD 6950, HD 6870, HD 6850, HD 6770, HD 6750, HD 6670, HD 6570, HD 6450<br />
From NVIDIA – GeForce 400 series: [21] GTX 480, GTX 475, GTX 470, GTX 465, GTX 460; GeForce 500 series: [22]<br />
GTX 590, GTX 580, GTX 570, GTX 560 Ti, GTX 550 Ti<br />
GPUs designed for workstations<br />
Shader Model 2 Compliant (Includes versions 2.0, 2.0a and 2.0b)<br />
From ATI FireGL: Z1-128, T2-128, X1-128, X2-256, X2-256t, V3100, V3200, X3-256, V5000, V5100, V7100<br />
From NVIDIA Quadro FX: 330, 500, 600, 700, 1000, 1100, 1300, 2000, 3000<br />
Shader Model 3.0 Compliant<br />
From ATI FireGL: V7300, V7350<br />
From NVIDIA Quadro FX: 350, 540, 550, 560, 1400, 1500, 3400, 3450, 3500, 4000, 4400, 4500, 4500SDI, 4500 X2, 5500, 5500SDI<br />
From 3Dlabs Wildcat Realizm: 100, 200, 500, 800<br />
Video games and HDR rendering<br />
With the release of the seventh-generation video game consoles and the falling prices of capable graphics cards<br />
such as the GeForce 6 and 7 and Radeon X1000 series, HDR rendering started to become a standard feature in many<br />
games in late 2006. Options may exist to turn the feature on or off, as it is demanding for graphics cards to process.<br />
However, certain lighting styles may not benefit from HDR as much – for example, games containing predominantly<br />
dark scenery (or, likewise, predominantly bright scenery) – and thus such games may omit HDR in order to boost<br />
performance.<br />
Game engines that support HDR rendering<br />
• Unreal Engine 3 [23]<br />
• Source [24]<br />
• CryEngine, [25] CryEngine 2, [26] CryEngine 3<br />
• Dunia Engine<br />
• Gamebryo<br />
• Unity (game engine)<br />
• id Tech 5<br />
• Lithtech<br />
• Unigine [27]<br />
References<br />
[1] Simon Green and Cem Cebenoyan (2004). "High Dynamic Range Rendering (on the GeForce 6800)"<br />
(http://download.nvidia.com/developer/presentations/2004/6800_Leagues/6800_Leagues_HDR.pdf) (PDF). GeForce 6 Series.<br />
nVidia. p. 3.<br />
[2] Reinhard, Erik; Greg Ward, Sumanta Pattanaik, Paul Debevec (August 2005). High Dynamic Range Imaging: Acquisition,<br />
Display, and Image-Based Lighting. Westport, Connecticut: Morgan Kaufmann. ISBN 0125852630.<br />
[3] Greg Ward. "High Dynamic Range Imaging" (http://www.anyhere.com/gward/papers/cic01.pdf). Retrieved 18 August 2009.<br />
[4] Eihachiro Nakamae; Kazufumi Kaneda, Takashi Okamoto, Tomoyuki Nishita (1990). "A lighting model aiming at drive<br />
simulators" (http://doi.acm.org/10.1145/97879.97922). Siggraph: 395. doi:10.1145/97879.97922.<br />
[5] Greg Spencer; Peter Shirley, Kurt Zimmerman, Donald P. Greenberg (1995). "Physically-based glare effects for digital<br />
images" (http://doi.acm.org/10.1145/218380.218466). Siggraph: 325. doi:10.1145/218380.218466.<br />
[6] Paul E. Debevec and Jitendra Malik (1997). "Recovering high dynamic range radiance maps from photographs"<br />
(http://www.debevec.org/Research/HDR). Siggraph.
[7] Paul E. Debevec (1998). "Rendering synthetic objects into real scenes: bridging traditional and image-based graphics with<br />
global illumination and high dynamic range photography" (http://www.debevec.org/Research/IBL/). Siggraph.<br />
[8] Forcade, Tim (February 1998). "Unraveling Riven". Computer Graphics World.<br />
[9] Valve (2003). "Source DirectX 9.0 Effects Trailer" (http://www.fileplanet.com/130227/130000/fileinfo/<br />
Source-DirectX-9.0-Effects-Trailer) (exe (Bink movie)). FilePlanet.<br />
[10] http://www.hometheaterhifi.com/volume_13_2/feature-article-contrast-ratio-5-2006-part-1.html<br />
[11] http://www.lge.com/about/press_release/detail/PRO%7CNEWS%5EPRE%7CMENU_20075_PRE%7CMENU.jhtml<br />
[12] http://www.dolby.com/promo/hdr/technology.html<br />
[13] http://www.engadget.com/2007/02/01/samsungs-15-4-30-and-40-inch-led-backlit-lcds/<br />
[14] "ATI Radeon 2400 Series – GPU Specifications" (http://ati.amd.com/products/radeonhd2400/specs.html). Radeon series.<br />
Retrieved 2007-09-10.<br />
[15] "ATI Radeon HD 4800 Series – Overview" (http://ati.amd.com/products/radeonhd4800/index.html). Radeon series.<br />
Retrieved 2008-07-01.<br />
[16] "GeForce 8800 Technical Specifications" (http://www.nvidia.com/page/8800_tech_specs.html). GeForce 8 Series.<br />
Retrieved 2006-11-20.<br />
[17] "NVIDIA GeForce 9800 GX2" (http://www.nvidia.com/object/geforce_9800gx2.html). GeForce 9 Series. Retrieved 2008-07-01.<br />
[18] "GeForce GTX 285 Technical Specifications" (http://www.nvidia.com/object/product_geforce_gtx_285_us.html).<br />
GeForce 200 Series. Retrieved 2010-06-22.<br />
[19] "ATI Radeon HD 5000 Series – Overview" (http://www.amd.com/us/products/desktop/graphics/ati-radeon-hd-5000/Pages/<br />
ati-radeon-hd-5000.aspx). Radeon series. Retrieved 2011-03-29.<br />
[20] "AMD Radeon HD 6000 Series – Overview" (http://www.amd.com/us/products/desktop/graphics/amd-radeon-hd-6000/Pages/<br />
amd-radeon-hd-6000.aspx). Radeon series. Retrieved 2011-03-29.<br />
[21] "GeForce GTX 480 Technical Specifications" (http://www.nvidia.com/object/product_geforce_gtx_480_us.html).<br />
GeForce 400 Series. Retrieved 2010-06-22.<br />
[22] "GeForce GTX 580 Specifications" (http://www.nvidia.com/object/product-geforce-gtx-580-us.html). GeForce 500 Series.<br />
Retrieved 2011-03-29.<br />
[23] "Rendering – Features – Unreal Technology" (http://www.unrealengine.com/features/rendering/). Epic Games. 2006.<br />
Retrieved 2011-03-15.<br />
[24] "SOURCE – RENDERING SYSTEM" (http://source.valvesoftware.com/rendering.php). Valve Corporation. 2007.<br />
Retrieved 2011-03-15.<br />
[25] "FarCry 1.3: Crytek's Last Play Brings HDR and 3Dc for the First Time" (http://www.xbitlabs.com/articles/video/display/<br />
farcry13.html). X-bit Labs. 2004. Retrieved 2011-03-15.<br />
[26] "CryEngine 2 – Overview" (http://crytek.com/cryengine/cryengine2/overview). Crytek. 2011. Retrieved 2011-03-15.<br />
[27] "Unigine Engine – Unigine (advanced 3D engine for multi-platform games and virtual reality systems)"<br />
(http://unigine.com/products/unigine/). Unigine Corp. 2011. Retrieved 2011-03-15.<br />
External links<br />
• NVIDIA's HDRR technical summary (http://download.nvidia.com/developer/presentations/2004/6800_Leagues/<br />
6800_Leagues_HDR.pdf) (PDF)<br />
• A HDRR Implementation with OpenGL 2.0 (http://www.gsulinux.org/~plq)<br />
• OpenGL HDRR Implementation (http://www.smetz.fr/?page_id=83)<br />
• High Dynamic Range Rendering in OpenGL (http://transporter-game.googlecode.com/files/HDRRenderingInOpenGL.pdf) (PDF)<br />
• High Dynamic Range Imaging environments for Image Based Lighting (http://www.hdrsource.com/)<br />
• Microsoft's technical brief on SM3.0 in comparison with SM2.0 (http://www.microsoft.com/whdc/winhec/partners/<br />
shadermodel30_NVIDIA.mspx)<br />
• Tom's Hardware: New Graphics Card Features of 2006 (http://www.tomshardware.com/2006/01/13/<br />
new_3d_graphics_card_features_in_2006/)<br />
• List of GPUs compiled by Chris Hare (http://users.erols.com/chare/video.htm)<br />
• techPowerUp! GPU Database (http://www.techpowerup.com/gpudb/)<br />
• Understanding Contrast Ratios in Video Display Devices (http://www.hometheaterhifi.com/volume_13_2/<br />
feature-article-contrast-ratio-5-2006-part-1.html)<br />
• Requiem by TBL, featuring real-time HDR rendering in software (http://demoscene.tv/page.php?id=172&lang=uk&<br />
vsmaction=view_prod&id_prod=12561)
• List of video games supporting HDR (http://www.uvlist.net/groups/info/hdrlighting)<br />
• Examples of high dynamic range photography (http://www.hdr-photography.org/)<br />
• Examples of high dynamic range 360-degree panoramic photography (http://www.hdrsource.com/)<br />
Image-based lighting<br />
Image-based lighting (IBL) is a 3D rendering technique in which an image is mapped onto a dome or sphere<br />
surrounding the primary subject. The lighting characteristics of the surrounding surface are then taken into account<br />
when rendering the scene, using the modeling techniques of global illumination. This is in contrast to light sources<br />
such as a computer-simulated sun or light bulb, which are more localized.<br />
Image-based lighting generally uses high dynamic range imaging for greater realism, though this is not universal.<br />
Almost all modern rendering software offers some type of image-based lighting, though the exact terminology used<br />
in the system may vary.<br />
Image-based lighting is also starting to show up in video games as video game consoles and personal computers start<br />
to have the computational resources to render scenes in real time using this technique. This technique is used in<br />
Forza Motorsport 4, and by the Chameleon engine used in Need for Speed: Hot Pursuit.<br />
References<br />
• Tutorial [1]<br />
External links<br />
• Real-Time HDR Image-Based Lighting Demo [2]<br />
References<br />
[1] http://ict.usc.edu/publications/ibl-tutorial-cga2002.pdf<br />
[2] http://www.daionet.gr.jp/~masa/rthdribl/
Image plane<br />
In 3D computer graphics, the image plane is that plane in the world which is identified with the plane of the<br />
monitor. If one makes the analogy of taking a photograph to rendering a 3D image, the surface of the film is the<br />
image plane. In this case, the viewing transformation is a projection that maps the world onto the image plane. A<br />
rectangular region of this plane, called the viewing window or viewport, maps to the monitor. This establishes the<br />
mapping between pixels on the monitor and points (or rather, rays) in the 3D world.<br />
In optics, the image plane is the plane that contains the object's projected image, and lies beyond the back focal<br />
plane.<br />
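The projection-then-viewport mapping described above can be sketched for a single point. The conventions below (camera looking down +z, a [-1, 1] viewing window, focal length of 1) are illustrative assumptions, not a standard API:<br />

```python
def project_to_viewport(point, focal, width, height):
    """Project a camera-space point onto the image plane z = focal, then
    map the [-1, 1] viewing window to pixel coordinates.  Assumes the
    camera looks down +z and the point is in front of it."""
    x, y, z = point
    # Perspective divide: intersection of the view ray with the image plane.
    px = focal * x / z
    py = focal * y / z
    # Window-to-viewport: [-1, 1] -> [0, width] x [0, height] (y flipped).
    sx = (px + 1.0) * 0.5 * width
    sy = (1.0 - py) * 0.5 * height
    return sx, sy

# A point straight ahead of the camera lands at the center of the screen.
print(project_to_viewport((0.0, 0.0, 5.0), 1.0, 640, 480))  # (320.0, 240.0)
```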
Irregular Z-buffer<br />
The irregular Z-buffer is an algorithm designed to solve the visibility problem in real-time 3D computer graphics.<br />
It is related to the classical Z-buffer in that it maintains a depth value for each image sample and uses these to<br />
determine which geometric elements of a scene are visible. The key difference, however, between the classical<br />
Z-buffer and the irregular Z-buffer is that the latter allows arbitrary placement of image samples in the image plane,<br />
whereas the former requires samples to be arranged in a regular grid.<br />
These depth samples are explicitly stored in a two-dimensional spatial data structure. During rasterization, triangles<br />
are projected onto the image plane as usual, and the data structure is queried to determine which samples overlap<br />
each projected triangle. Finally, for each overlapping sample, the standard Z-compare and (conditional) frame buffer<br />
update are performed.<br />
Implementation<br />
The classical rasterization algorithm projects each polygon onto the image plane, and determines which sample<br />
points from a regularly spaced set lie inside the projected polygon. Since the locations of these samples (i.e. pixels)<br />
are implicit, this determination can be made by testing the edges against the implicit grid of sample points. If,<br />
however the locations of the sample points are irregularly spaced and cannot be computed from a formula, then this<br />
approach does not work. The irregular Z-buffer solves this problem by storing sample locations explicitly in a<br />
two-dimensional spatial data structure, and later querying this structure to determine which samples lie within a<br />
projected triangle. This latter step is referred to as "irregular rasterization".<br />
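Irregular rasterization can be sketched as follows: samples carry explicit (x, y) positions, each is tested against the projected triangle with edge functions, and the usual Z-compare follows. The flat sample list and constant triangle depth are simplifying assumptions for the sketch (a real implementation would use a spatial data structure and interpolated depth):<br />

```python
def irregular_rasterize(samples, triangle, tri_z):
    """samples: list of [x, y, depth, payload] with arbitrary (x, y)
    positions, stored explicitly rather than implied by a pixel grid.
    For each sample inside the projected triangle, perform the standard
    Z-compare and conditional update.  tri_z is the triangle's depth at
    a covered point (held constant here for simplicity)."""
    def edge(a, b, p):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    a, b, c = triangle
    for s in samples:
        p = (s[0], s[1])
        w0, w1, w2 = edge(a, b, p), edge(b, c, p), edge(c, a, p)
        inside = (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
                 (w0 <= 0 and w1 <= 0 and w2 <= 0)
        if inside and tri_z < s[2]:
            s[2] = tri_z   # standard Z-compare ...
            s[3] = "lit"   # ... and conditional buffer update

samples = [[0.2, 0.2, 1e9, None], [5.0, 5.0, 1e9, None]]
irregular_rasterize(samples, ((0, 0), (1, 0), (0, 1)), tri_z=3.0)
print(samples)  # only the sample inside the triangle is updated
```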
Although the particular data structure used may vary from implementation to implementation, the two studied<br />
approaches are the kd-tree, and a grid of linked lists. A balanced kd-tree implementation has the advantage that it<br />
guarantees O(log(N)) access. Its chief disadvantage is that parallel construction of the kd-tree may be difficult, and<br />
traversal requires expensive branch instructions. The grid of lists has the advantage that it can be implemented more<br />
effectively on GPU hardware, which is designed primarily for the classical Z-buffer.<br />
With the appearance of CUDA, the programmability of current graphics hardware has been drastically improved.<br />
The master's thesis "Fast Triangle Rasterization using irregular Z-buffer on CUDA" provides a complete description<br />
of an irregular-Z-buffer-based shadow mapping software implementation on CUDA. The rendering system runs<br />
entirely on GPUs and is capable of generating aliasing-free shadows at a throughput of dozens of millions of<br />
triangles per second.<br />
Applications<br />
The irregular Z-buffer can be used for any application which requires visibility calculations at arbitrary locations in<br />
the image plane. It has been shown to be particularly adept at shadow mapping, an image space algorithm for<br />
rendering hard shadows. In addition to shadow rendering, potential applications include adaptive anti-aliasing,<br />
jittered sampling, and environment mapping.<br />
External links<br />
• The Irregular Z-Buffer: Hardware Acceleration for Irregular Data Structures [1]<br />
• The Irregular Z-Buffer And Its Application to Shadow Mapping [2]<br />
• Alias-Free Shadow Maps [3]<br />
• Fast Triangle Rasterization using irregular Z-buffer on CUDA [4]<br />
References<br />
[1] http://www.tacc.utexas.edu/~cburns/papers/izb-tog.pdf<br />
[2] http://www.cs.utexas.edu/ftp/pub/techreports/tr04-09.pdf<br />
[3] http://www.tml.hut.fi/~timo/publications/aila2004egsr_paper.pdf<br />
[4] http://publications.lib.chalmers.se/records/fulltext/123790.pdf<br />
Isosurface<br />
An isosurface is a three-dimensional analog of an isoline. It is a<br />
surface that represents points of a constant value (e.g. pressure,<br />
temperature, velocity, density) within a volume of space; in other<br />
words, it is a level set of a continuous function whose domain is<br />
<strong>3D</strong>-space.<br />
Isosurfaces are normally displayed using computer <strong>graphics</strong>, and are<br />
used as data visualization methods in computational fluid dynamics<br />
(CFD), allowing engineers to study features of a fluid flow (gas or<br />
liquid) around objects, such as aircraft wings. An isosurface may<br />
represent an individual shock wave in supersonic flight, or several<br />
isosurfaces may be generated showing a sequence of pressure values in<br />
the air flowing around a wing. Isosurfaces tend to be a popular form of<br />
visualization for volume datasets since they can be rendered by a<br />
simple polygonal model, which can be drawn on the screen very<br />
quickly.<br />
Zirconocene with an isosurface showing areas of the molecule susceptible to electrophilic attack. Image courtesy of<br />
Accelrys (http://www.accelrys.com).<br />
In medical imaging, isosurfaces may be used to represent regions of a particular density in a three-dimensional CT<br />
scan, allowing the visualization of internal organs, bones, or other structures.<br />
Numerous other disciplines that are interested in three-dimensional data often use isosurfaces to obtain information<br />
about pharmacology, chemistry, geophysics and meteorology.<br />
A popular method of constructing an isosurface from a data volume is the marching cubes algorithm, and another,<br />
very similar method is the marching tetrahedrons algorithm.
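The first step of marching cubes can be sketched briefly. The following Python illustration (our own names and grid parameters, not code from any particular library) samples a scalar field on a regular grid, classifies the cells whose corner values straddle the isovalue — exactly the cells the algorithm would triangulate — and linearly interpolates where the isosurface crosses a cell edge.

```python
def sphere_field(x, y, z):
    """Scalar field: distance from the origin; the level set f = r is a sphere."""
    return (x * x + y * y + z * z) ** 0.5

def classify_cells(f, n, h, iso):
    """Return indices of grid cells whose 8 corner values straddle iso.

    These are the cells through which the isosurface passes, i.e. the
    ones marching cubes would go on to triangulate."""
    cells = []
    for i in range(n):
        for j in range(n):
            for k in range(n):
                corners = [f((i + di) * h, (j + dj) * h, (k + dk) * h)
                           for di in (0, 1) for dj in (0, 1) for dk in (0, 1)]
                if min(corners) <= iso <= max(corners):
                    cells.append((i, j, k))
    return cells

def edge_crossing(f0, f1, p0, p1, iso):
    """Linearly interpolate the point where the isovalue crosses a cell edge."""
    t = (iso - f0) / (f1 - f0)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

# Cells of an 8x8x8 grid (spacing 0.25) crossed by the unit sphere.
active = classify_cells(sphere_field, 8, 0.25, 1.0)
```

The per-cell triangulation itself is driven by a 256-entry lookup table indexed by which corners are inside; that table is omitted here.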
Examples of isosurfaces are 'Metaballs' or 'blobby objects' used in <strong>3D</strong><br />
visualisation. A more general way to construct an isosurface is to use<br />
the function representation and the HyperFun language.<br />
External links<br />
• Isosurface Polygonization [1]<br />
References<br />
[1] http://www2.imm.dtu.dk/~jab/gallery/polygonization.html<br />
Isosurface of vorticity trailed from a propeller blade.<br />
Lambert's cosine law<br />
In optics, Lambert's cosine law says that the radiant intensity observed from an ideal diffusely reflecting surface (a<br />
Lambertian surface) is directly proportional to the cosine of the angle θ between the observer's line of sight and the<br />
surface normal. The law is also known as the cosine emission law or Lambert's emission law. It is named after<br />
Johann Heinrich Lambert, from his Photometria, published in 1760.<br />
An important consequence of Lambert's cosine law is that when a Lambertian surface is viewed from any angle, it<br />
has the same apparent radiance. This means, for example, that to the human eye it has the same apparent brightness<br />
(or luminance). It has the same radiance because, although the emitted power from a given area element is reduced<br />
by the cosine of the emission angle, the apparent size (solid angle) of the observed area, as seen by a viewer, is<br />
decreased by a corresponding amount. Therefore, its radiance (power per unit solid angle per unit projected source<br />
area) is the same. For example, in the visible spectrum, the Sun is not a Lambertian radiator; its brightness is a<br />
maximum at the center of the solar disk, an example of limb darkening. A black body is a perfect Lambertian<br />
radiator.<br />
Lambertian scatterers<br />
When an area element is radiating as a result of being illuminated by an external source, the irradiance (energy or<br />
photons/time/area) landing on that area element will be proportional to the cosine of the angle between the<br />
illuminating source and the normal. A Lambertian scatterer will then scatter this light according to the same cosine<br />
law as a Lambertian emitter. This means that although the radiance of the surface depends on the angle from the<br />
normal to the illuminating source, it will not depend on the angle from the normal to the observer. For example, if<br />
the moon were a Lambertian scatterer, one would expect to see its scattered brightness appreciably diminish towards<br />
the terminator due to the increased angle at which sunlight hit the surface. The fact that it does not diminish<br />
illustrates that the moon is not a Lambertian scatterer, and in fact tends to scatter more light into the oblique angles<br />
than would a Lambertian scatterer.<br />
Details of equal brightness effect<br />
The situation for a Lambertian surface (emitting or scattering) is<br />
illustrated in Figures 1 and 2. For conceptual clarity we will think in<br />
terms of photons rather than energy or luminous energy. The wedges in<br />
the circle each represent an equal angle dΩ, and for a Lambertian<br />
surface, the number of photons per second emitted into each wedge is<br />
proportional to the area of the wedge.<br />
It can be seen that the length of each wedge is the product of the<br />
diameter of the circle and cos(θ). It can also be seen that the maximum<br />
rate of photon emission per unit solid angle is along the normal and<br />
diminishes to zero for θ = 90°. In mathematical terms, the radiance<br />
along the normal is I photons/(s·cm²·sr) and the number of photons per<br />
second emitted into the vertical wedge is I dΩ dA. The number of<br />
photons per second emitted into the wedge at angle θ is<br />
I cos(θ) dΩ dA.<br />
Figure 2 represents what an observer sees. The observer directly above<br />
the area element will be seeing the scene through an aperture of area<br />
dA₀ and the area element dA will subtend a (solid) angle of dΩ₀. We<br />
can assume without loss of generality that the aperture happens to<br />
subtend solid angle dΩ when "viewed" from the emitting area element.<br />
This normal observer will then be recording I dΩ dA photons per<br />
second and so will be measuring a radiance of<br />
I₀ = I dΩ dA / (dΩ₀ dA₀) photons/(s·cm²·sr).<br />
Figure 1: Emission rate (photons/s) in a normal and off-normal direction. The number of photons/sec directed into<br />
any wedge is proportional to the area of the wedge.<br />
Figure 2: Observed intensity (photons/(s·cm²·sr)) for a normal and off-normal observer; dA₀ is the area of the<br />
observing aperture and dΩ is the solid angle subtended by the aperture from the viewpoint of the emitting area<br />
element.<br />
The observer at angle θ to the normal will be seeing the scene through the same aperture of area dA₀ and the area<br />
element dA will subtend a (solid) angle of dΩ₀ cos(θ). This observer will be recording I cos(θ) dΩ dA photons per<br />
second, and so will be measuring a radiance of<br />
I₀ = I cos(θ) dΩ dA / (dΩ₀ cos(θ) dA₀) = I dΩ dA / (dΩ₀ dA₀) photons/(s·cm²·sr),<br />
which is the same as the normal observer.
Relating peak luminous intensity and luminous flux<br />
In general, the luminous intensity of a point on a surface varies by direction; for a Lambertian surface, that<br />
distribution is defined by the cosine law, with peak luminous intensity in the normal direction. Thus when the<br />
Lambertian assumption holds, we can calculate the total luminous flux, F_tot, from the peak luminous intensity,<br />
I_max, by integrating the cosine law:<br />
F_tot = ∫(φ=0 to 2π) ∫(θ=0 to π/2) I_max cos(θ) sin(θ) dθ dφ = π sr · I_max,<br />
where sin(θ) is the determinant of the Jacobian matrix for the unit sphere, and realizing that I_max is the luminous<br />
flux per steradian. [1] Similarly, the peak intensity will be 1/π of the total radiated luminous flux. For Lambertian<br />
surfaces, the same factor of π relates luminance to luminous emittance, radiant intensity to radiant flux, and<br />
radiance to radiant emittance. Radians and steradians are, of course, dimensionless and so "rad" and "sr" are included<br />
only for clarity.<br />
Example: A surface with a luminance of, say, 100 cd/m² (= 100 nits, a typical PC screen) seen from the front will (if<br />
it is a perfect Lambert emitter) emit a total luminous flux of 314 lm/m². If it is a 19" screen (area ≈ 0.1 m²), the total<br />
light emitted would thus be 31.4 lm.<br />
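The π factor can be checked numerically. The sketch below (a midpoint-rule integration written for illustration; the function name and step count are our own choices) integrates the cosine law over the hemisphere and reproduces the 314 lm/m² of the screen example.

```python
import math

def total_flux(i_max, n=2000):
    """Numerically integrate I_max * cos(theta) over the hemisphere:
    F = ∫(0..2π) ∫(0..π/2) I_max cos(θ) sin(θ) dθ dφ, where sin(θ) is
    the Jacobian determinant for the unit sphere."""
    dtheta = (math.pi / 2) / n
    acc = 0.0
    for s in range(n):
        theta = (s + 0.5) * dtheta          # midpoint rule in θ
        acc += i_max * math.cos(theta) * math.sin(theta) * dtheta
    return 2 * math.pi * acc                # φ integral contributes 2π

flux = total_flux(100.0)                    # peak luminance 100 cd/m²
```

The result converges to π · 100 ≈ 314.16 lm/m², matching the closed-form F_tot = π · I_max.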
Uses<br />
Lambert's cosine law in its reversed form (Lambertian reflection) implies that the apparent brightness of a<br />
Lambertian surface is proportional to the cosine of the angle between the surface normal and the direction of the<br />
incident light.<br />
This phenomenon is among others used when creating moldings, which are a means of applying light and dark<br />
shaded stripes to a structure or object without having to change the material or apply pigment. The contrast of dark<br />
and light areas gives definition to the object. Moldings are strips of material with various cross sections used to cover<br />
transitions between surfaces or for decoration.<br />
References<br />
[1] Incropera and DeWitt, Fundamentals of Heat and Mass Transfer, 5th ed., p.710.
Lambertian reflectance<br />
If a surface exhibits Lambertian reflectance, light falling on it is scattered such that the apparent brightness of the<br />
surface to an observer is the same regardless of the observer's angle of view. More technically, the surface luminance<br />
is isotropic. For example, unfinished wood exhibits roughly Lambertian reflectance, but wood finished with a glossy<br />
coat of polyurethane does not, since specular highlights may appear at different locations on the surface. Not all<br />
rough surfaces are perfect Lambertian reflectors, but this is often a good approximation when the characteristics of<br />
the surface are unknown. Lambertian reflectance is named after Johann Heinrich Lambert.<br />
Use in computer <strong>graphics</strong><br />
In computer <strong>graphics</strong>, Lambertian reflection is often used as a model for diffuse reflection. This technique causes all<br />
closed polygons (such as a triangle within a <strong>3D</strong> mesh) to reflect light equally in all directions when rendered. In<br />
effect, a point rotated around its normal vector will not change the way it reflects light. However, the point will<br />
change the way it reflects light if it is tilted away from its initial normal vector. [1] The reflection is calculated by<br />
taking the dot product of the surface's unit normal vector, N, and a normalized light-direction vector, L, pointing from<br />
the surface to the light source. This number is then multiplied by the color of the surface and the intensity of the light<br />
hitting the surface:<br />
I_D = (L · N) C I_L,<br />
where I_D is the intensity of the diffusely reflected light (surface brightness), C is the color and I_L is the intensity<br />
of the incoming light. Because<br />
L · N = |L| |N| cos(α) = cos(α),<br />
where α is the angle between the directions of the two vectors, the intensity will be the highest if the normal vector<br />
points in the same direction as the light vector (cos(0°) = 1, the surface will be perpendicular to the direction of<br />
the light), and the lowest if the normal vector is perpendicular to the light vector (cos(90°) = 0, the surface runs<br />
parallel with the direction of the light).<br />
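In code this amounts to a clamped dot product. The following minimal Python sketch uses our own function names; the clamp to zero for surfaces facing away from the light is a common implementation detail, not part of the formula above.

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambert(normal, light_dir, color, light_intensity):
    """Diffuse intensity I_D = (L · N) * C * I_L, with the cosine clamped
    so that surfaces facing away from the light receive nothing."""
    n = normalize(normal)
    l = normalize(light_dir)
    ndotl = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(c * light_intensity * ndotl for c in color)

# Light along the normal: full intensity.
head_on = lambert((0, 0, 1), (0, 0, 1), (1.0, 0.5, 0.25), 1.0)
# Light at 60° from the normal: cos(60°) = 0.5 of the intensity.
oblique = lambert((0, 0, 1),
                  (0, math.sin(math.pi / 3), math.cos(math.pi / 3)),
                  (1.0, 0.5, 0.25), 1.0)
```

Note that the result depends only on the light direction, not on the viewer's position — the defining property of Lambertian reflectance.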
Lambertian reflection from polished surfaces is typically accompanied by specular reflection (gloss), where the<br />
surface luminance is highest when the observer is situated at the perfect reflection direction (i.e. where the direction<br />
of the reflected light is a reflection of the direction of the incident light in the surface), and falls off sharply. This is<br />
simulated in computer <strong>graphics</strong> with various specular reflection models such as Phong and Cook–Torrance.<br />
Spectralon is a material which is designed to exhibit an almost perfect Lambertian reflectance, while Scotchlite is a<br />
material designed with the opposite intent of only reflecting light on one line of sight.<br />
Other waves<br />
While Lambertian reflectance usually refers to the reflection of light by an object, it can be used to refer to the<br />
reflection of any wave. For example, in ultrasound imaging, "rough" tissues are said to exhibit Lambertian<br />
reflectance.<br />
References<br />
[1] Angel, Edward (2003). Interactive Computer Graphics: A Top-Down Approach Using OpenGL<br />
(http://books.google.com/?id=Fsy_QgAACAAJ) (third ed.). Addison-Wesley. ISBN 978-0-321-31252-5.<br />
Level of detail<br />
In computer <strong>graphics</strong>, accounting for level of detail involves decreasing the complexity of a <strong>3D</strong> object representation<br />
as it moves away from the viewer, or according to other metrics such as object importance, eye-space speed or<br />
position. Level of detail techniques increase the efficiency of rendering by decreasing the workload on <strong>graphics</strong><br />
pipeline stages, usually vertex transformations. The reduced visual quality of the model is often unnoticed because of<br />
the small effect on object appearance when the object is distant or moving fast.<br />
Although most of the time LOD is applied to geometry detail only, the basic concept can be generalized. Recently,<br />
LOD techniques have also included shader management to keep control of pixel complexity. A form of level of detail<br />
management has been applied to textures for years, under the name of mipmapping, also providing higher rendering<br />
quality.<br />
It is commonplace to say that "an object has been LOD'd" when the object is simplified by the underlying LOD-ing<br />
algorithm.<br />
Historical reference<br />
The origin of all the LOD algorithms for <strong>3D</strong> computer <strong>graphics</strong> can be traced back to an article by James H.<br />
Clark in the October 1976 issue of Communications of the ACM. At the time, computers were monolithic and rare,<br />
and <strong>graphics</strong> was being driven by researchers. The hardware itself was completely different, both architecturally and<br />
performance-wise. As such, many differences could be observed with regard to today's algorithms but also many<br />
common points.<br />
The original algorithm presented a much more generic approach than the one discussed here. After introducing<br />
some available algorithms for geometry management, it states that the most fruitful gains came from "...structuring<br />
the environments being rendered", allowing faster transformation and clipping operations to be exploited.<br />
The same environment structuring is now proposed as a way to control varying detail thus avoiding unnecessary<br />
computations, yet delivering adequate visual quality:<br />
“<br />
For example, a dodecahedron looks like a sphere from a sufficiently large distance and thus can be used to model it so long as it is viewed<br />
from that or a greater distance. However, if it must ever be viewed more closely, it will look like a dodecahedron. One solution to this is<br />
simply to define it with the most detail that will ever be necessary. However, then it might have far more detail than is needed to represent it at<br />
large distances, and in a complex environment with many such objects, there would be too many polygons (or other geometric primitives) for<br />
the visible surface algorithms to efficiently handle. ”<br />
The proposed algorithm envisions a tree data structure which encodes in its arcs both transformations and transitions<br />
to more detailed objects. In this way, each node encodes an object and, according to a fast heuristic, the tree is<br />
descended to the leaves, which provide each object with more detail. When a leaf is reached, other methods can be<br />
used when higher detail is needed, such as Catmull's recursive subdivision.<br />
“<br />
The significant point, however, is that in a complex environment, the amount of information presented about the various objects in the<br />
environment varies according to the fraction of the field of view occupied by those objects. ”<br />
The paper then introduces clipping (not to be confused with culling (computer <strong>graphics</strong>), although often similar),<br />
various considerations on the graphical working set and its impact on performance, interactions between the<br />
proposed algorithm and others to improve rendering speed. Interested readers are encouraged to check the<br />
references for further details on the topic.
Level of detail 72<br />
Well known approaches<br />
Although the algorithm introduced above covers a whole range of level of detail management techniques, real-world<br />
applications usually employ different methods according to the information being rendered. Because of the<br />
appearance of the objects considered, two main algorithm families are used.<br />
The first is based on subdividing the space into a finite number of regions, each with a certain level of detail. The<br />
result is a discrete number of detail levels, hence the name Discrete LOD (DLOD). There is no way to support a<br />
smooth transition between LOD levels at this granularity, although alpha blending or morphing can be used to avoid<br />
visual popping.<br />
The second considers the polygon mesh being rendered as a function which must be evaluated while avoiding<br />
excessive errors, which are themselves a function of some heuristic (usually distance). The given "mesh" function is<br />
then continuously evaluated and an optimized version is produced according to a trade-off between visual quality and<br />
performance. These kinds of algorithms are usually referred to as Continuous LOD (CLOD).<br />
Details on Discrete LOD<br />
The basic concept of discrete LOD (DLOD) is to provide various<br />
models to represent the same object. Obtaining those models<br />
requires an external algorithm which is often non-trivial and the<br />
subject of many polygon reduction techniques. Successive<br />
LOD-ing algorithms will simply assume those models are<br />
available.<br />
DLOD algorithms are often used in performance-intensive<br />
applications with small data sets which can easily fit in memory.<br />
Although out of core algorithms could be used, the information<br />
granularity is not well suited to this kind of application. This kind<br />
of algorithm is usually easier to get working, providing both faster<br />
performance and lower CPU usage because of the few operations<br />
involved.<br />
DLOD methods are often used for "stand-alone" moving objects,<br />
possibly including complex animation methods. A different<br />
approach is used for geomipmapping, a popular terrain rendering<br />
algorithm, because it applies to terrain meshes which are both<br />
graphically and topologically different from "object" meshes. Instead of computing an error and simplifying the<br />
mesh accordingly, geomipmapping takes a fixed reduction method, evaluates the error introduced and computes a<br />
distance at which the error is acceptable.<br />
An example of various DLOD ranges. Darker areas are meant to be rendered with higher detail. An additional<br />
culling operation is run, discarding all the information outside the frustum (colored areas).<br />
Although straightforward, the algorithm provides decent performance.
A discrete LOD example<br />
As a simple example, consider the following sphere. A discrete LOD approach would cache a certain number of<br />
models to be used at different distances. Because the model can trivially be procedurally generated by its<br />
mathematical formulation, using a different amount of sample points distributed on the surface is sufficient to<br />
generate the various models required. This pass is not a LOD-ing algorithm.<br />
Visual impact comparisons and measurements (five sphere models):<br />
• ~5500 vertices: maximum detail, for close-ups.<br />
• ~2880 vertices<br />
• ~1580 vertices<br />
• ~670 vertices<br />
• 140 vertices: minimum detail, for very far objects.<br />
To simulate a realistic transform-bound scenario, we'll use an ad-hoc written application. We'll make sure we're not<br />
CPU bound by using simple algorithms and minimum fragment operations. Each frame, the program will compute<br />
each sphere's distance and choose a model from a pool according to this information. To easily show the concept, the<br />
distance at which each model is used is hard coded in the source. A more involved method would compute adequate<br />
models according to the usage distance chosen.<br />
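The selection loop described above can be sketched as follows. The vertex counts come from the table of sphere models; the distance thresholds and model names are hypothetical stand-ins for the hard-coded values the demo application would use.

```python
# Each entry: (use-from distance, model name, vertex count), sorted by distance.
# Thresholds and names are illustrative, as in the demo's hard-coded values.
LOD_POOL = [
    (0.0,   "sphere_5500", 5500),
    (10.0,  "sphere_2880", 2880),
    (25.0,  "sphere_1580", 1580),
    (60.0,  "sphere_670",   670),
    (150.0, "sphere_140",   140),
]

def select_lod(distance, pool=LOD_POOL):
    """Pick the last model whose distance threshold has been passed."""
    chosen = pool[0]
    for threshold, name, verts in pool:
        if distance >= threshold:
            chosen = (threshold, name, verts)
        else:
            break
    return chosen

close_model = select_lod(5.0)[1]    # maximum-detail model
far_model = select_lod(200.0)[1]    # minimum-detail model
```

Each frame, this lookup runs once per sphere; everything else (the models themselves) is precomputed, which is why DLOD costs the CPU so little.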
We use OpenGL for rendering because of its high efficiency in managing small batches, storing each model in a<br />
display list, thus avoiding communication overheads. Additional vertex load is given by applying two directional light<br />
sources ideally located infinitely far away.<br />
The following table compares the performance of LoD aware rendering and a full detail (brute force) method.<br />
Visual impact comparisons and measurements<br />
                              Brute force    DLOD       Comparison<br />
Render time                   27.27 ms       1.29 ms    21× reduction<br />
Scene vertices (thousands)    2328.48        109.44     21× reduction<br />
Hierarchical LOD<br />
Because hardware is geared towards large amounts of detail, rendering low-polygon objects may achieve<br />
sub-optimal performance. HLOD avoids the problem by grouping different objects together. This allows for higher<br />
efficiency as well as taking advantage of proximity considerations.<br />
References<br />
1. Communications of the ACM, October 1976, Volume 19, Number 10, pages 547-554: Hierarchical Geometric<br />
Models for Visible Surface Algorithms, by James H. Clark, University of California at Santa Cruz. A digitized scan<br />
is freely available at http://accad.osu.edu/~waynec/history/PDFs/clark-vis-surface.pdf.<br />
2. Catmull, E., A Subdivision Algorithm for Computer Display of Curved Surfaces. Tech. Rep. UTEC-CSc-74-133,<br />
University of Utah, Salt Lake City, Utah, Dec. 1974.<br />
3. de Boer, W.H., Fast Terrain Rendering using Geometrical Mipmapping, in flipCode featured articles, October<br />
2000. Available at http://www.flipcode.com/tutorials/tut_geomipmaps.shtml.<br />
4. Carl Erikson's paper at http://www.cs.unc.edu/Research/ProjectSummaries/hlods.pdf provides a quick yet<br />
effective overview of HLOD mechanisms. A more involved description follows in his thesis, at<br />
https://wwwx.cs.unc.edu/~geom/papers/documents/dissertations/erikson00.pdf.<br />
Mipmap<br />
In <strong>3D</strong> computer <strong>graphics</strong> texture filtering, MIP maps (also mipmaps) are pre-calculated, optimized collections of<br />
images that accompany a main texture, intended to increase rendering speed and reduce aliasing artifacts. They are<br />
widely used in <strong>3D</strong> computer games, flight simulators and other <strong>3D</strong> imaging systems. The technique is known as<br />
mipmapping. The letters "MIP" in the name are an acronym of the Latin phrase multum in parvo, meaning "much in<br />
a small space". Mipmaps require additional memory in exchange for these gains. They also form the basis of wavelet compression.<br />
Origin<br />
Mipmapping was invented by Lance Williams in 1983 and is described in his paper Pyramidal parametrics. From<br />
the abstract: "This paper advances a 'pyramidal parametric' prefiltering and sampling geometry which minimizes<br />
aliasing effects and assures continuity within and between target images." The "pyramid" can be imagined as the set<br />
of mipmaps stacked on top of each other.<br />
How it works<br />
Each bitmap image of the mipmap set is a version of the main texture,<br />
but at a certain reduced level of detail. Although the main texture<br />
would still be used when the view is sufficient to render it in full detail,<br />
the renderer will switch to a suitable mipmap image (or in fact,<br />
interpolate between the two nearest, if trilinear filtering is activated)<br />
when the texture is viewed from a distance or at a small size.<br />
Rendering speed increases since the number of texture pixels ("texels")<br />
being processed can be much lower than with simple textures. Artifacts<br />
are reduced since the mipmap images are effectively already<br />
anti-aliased, taking some of the burden off the real-time renderer.<br />
Scaling down and up is made more efficient with mipmaps as well.<br />
An example of mipmap image storage: the<br />
principal image on the left is accompanied by<br />
filtered copies of reduced size.
If the texture has a basic size of 256 by 256 pixels, then the associated mipmap set may contain a series of 8 images,<br />
each one-fourth the total area of the previous one: 128×128 pixels, 64×64, 32×32, 16×16, 8×8, 4×4, 2×2, 1×1 (a<br />
single pixel). If, for example, a scene is rendering this texture in a space of 40×40 pixels, then either a scaled up<br />
version of the 32×32 (without trilinear interpolation) or an interpolation of the 64×64 and the 32×32 mipmaps (with<br />
trilinear interpolation) would be used. The simplest way to generate these textures is by successive averaging;<br />
however, more sophisticated algorithms (perhaps based on signal processing and Fourier transforms) can also be<br />
used.<br />
The increase in storage space required for all of these mipmaps is a third of the original texture, because the sum of<br />
the areas 1/4 + 1/16 + 1/64 + 1/256 + · · · converges to 1/3. In the case of an RGB image with three channels stored<br />
as separate planes, the total mipmap can be visualized as fitting neatly into a square area twice as large as the<br />
dimensions of the original image on each side (four times the original area - one square for each channel, then<br />
increase that subtotal by a third). This is the inspiration for the tag "multum in parvo".<br />
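The "successive averaging" generator and the one-third overhead can be demonstrated in a few lines. This Python sketch (our own helper names; a real generator would use better filters, as the text notes) builds the full chain for a 256×256 texture and counts the extra texels.

```python
def downsample(img):
    """Halve a square image by averaging each 2x2 block of texels."""
    n = len(img)
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
              img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(n // 2)] for y in range(n // 2)]

def build_mipmaps(img):
    """Successively average down to 1x1, the simplest mipmap generator."""
    chain = [img]
    while len(chain[-1]) > 1:
        chain.append(downsample(chain[-1]))
    return chain

# A single-channel 256x256 test texture.
base = [[float((x + y) % 16) for x in range(256)] for y in range(256)]
chain = build_mipmaps(base)
levels = len(chain) - 1                      # 8 reduced levels, 128x128 .. 1x1
extra = sum(len(m) ** 2 for m in chain[1:])  # texels beyond the base image
overhead = extra / 256 ** 2                  # 1/4 + 1/16 + ... -> 1/3
```

The 1×1 level ends up holding the overall mean of the texture, since each level's texel is the exact average of its 2×2 block.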
In many instances, the filtering should not be uniform in each direction (it should be anisotropic, as opposed to<br />
isotropic), and a compromise resolution is used. If a higher resolution is used, the cache coherence goes down, and<br />
the aliasing is increased in one direction, but the image tends to be clearer. If a lower resolution is used, the cache<br />
coherence is improved, but the image is overly blurry, to the point where it becomes difficult to identify.<br />
To help with this problem, nonuniform mipmaps (also known as rip-maps) are sometimes used, although there is no<br />
direct support for this method on modern <strong>graphics</strong> hardware. With a 16×16 base texture map, the rip-map resolutions<br />
would be 16×8, 16×4, 16×2, 16×1, 8×16, 8×8, 8×4, 8×2, 8×1, 4×16, 4×8, 4×4, 4×2, 4×1, 2×16, 2×8, 2×4, 2×2, 2×1,<br />
1×16, 1×8, 1×4, 1×2 and 1×1. In the general case, for a 2^n × 2^n base texture map, the rip-map resolutions would be<br />
2^i × 2^j for i, j in 0, 1, 2, ..., n.<br />
A trade-off: anisotropic mip-mapping<br />
Mipmaps require 33% more memory than a single texture. To reduce the memory requirement, and simultaneously<br />
give more resolutions to work with, summed-area tables were conceived. However, this approach tends to exhibit<br />
poor cache behavior. Also, a summed-area table needs to have wider types to store the partial sums than the word<br />
size used to store the texture. For these reasons, there isn't any hardware that implements summed-area tables today.<br />
A compromise has been reached today, called anisotropic mip-mapping. In the case where an anisotropic filter is<br />
needed, a higher resolution mipmap is used, and several texels are averaged in one direction to get more filtering in<br />
that direction. This has a somewhat detrimental effect on the cache, but greatly improves image quality.
Newell's algorithm<br />
Newell's Algorithm is a <strong>3D</strong> computer <strong>graphics</strong> procedure for elimination of polygon cycles in the depth sorting<br />
required in hidden surface removal. It was proposed in 1972 by brothers Martin Newell and Dick Newell, and Tom<br />
Sancha, while all three were working at CADCentre.<br />
In the depth sorting phase of hidden surface removal, if two polygons have no overlapping extents (extreme<br />
minimum and maximum values) in the x, y, and z directions, then they can be easily sorted. If two polygons, Q and P,<br />
do have overlapping extents in the Z direction, then it is possible that cutting is necessary.<br />
In that case Newell's algorithm tests the following:<br />
1. Test for Z overlap; implied in the selection of the face Q from the sort list<br />
2. The extreme coordinate values in X of the two faces do not overlap (minimax test in X)<br />
3. The extreme coordinate values in Y of the two faces do not overlap (minimax test in Y)<br />
4. All vertices of P lie deeper than the plane of Q<br />
5. All vertices of Q lie closer to the viewpoint than the plane of P<br />
6. The rasterisations of P and Q do not overlap<br />
Note that the tests are given in order of increasing computational difficulty.<br />
Note also that the polygons must be planar.<br />
Cyclic polygons must be eliminated to correctly sort them by depth.<br />
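The cheaper tests can be sketched in Python as follows. This illustration assumes a viewer looking along −z (larger z is closer), computes the polygon plane with Newell's normal method, and covers only tests 2-5; tests 1 and 6 (the Z-overlap preselection and the rasterised overlap) are omitted, and the function names are our own.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def newell_plane(poly):
    """Plane (n, d) of a planar polygon, normal computed by Newell's method."""
    n = [0.0, 0.0, 0.0]
    m = len(poly)
    for i in range(m):
        (x0, y0, z0), (x1, y1, z1) = poly[i], poly[(i + 1) % m]
        n[0] += (y0 - y1) * (z0 + z1)
        n[1] += (z0 - z1) * (x0 + x1)
        n[2] += (x0 - x1) * (y0 + y1)
    return n, -dot(n, poly[0])

def extents_overlap(p, q, axis):
    """Minimax test: do the polygons' extents overlap along one axis?"""
    return (max(v[axis] for v in p) > min(v[axis] for v in q) and
            max(v[axis] for v in q) > min(v[axis] for v in p))

def order_is_safe(p, q, eps=1e-9):
    """True if P may be painted before Q without splitting (tests 2-5)."""
    if not extents_overlap(p, q, 0):          # test 2: X minimax
        return True
    if not extents_overlap(p, q, 1):          # test 3: Y minimax
        return True
    nq, dq = newell_plane(q)
    viewer = 1.0 if nq[2] > 0 else -1.0       # which side of Q faces the viewer
    if all(viewer * (dot(nq, v) + dq) <= eps for v in p):
        return True                            # test 4: P wholly behind Q's plane
    np_, dp = newell_plane(p)
    viewer = 1.0 if np_[2] > 0 else -1.0
    if all(viewer * (dot(np_, v) + dp) >= -eps for v in q):
        return True                            # test 5: Q wholly in front of P's plane
    return False

far = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]     # square at z = 0
near = [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]    # parallel square at z = 1
```

For the two parallel squares, painting the far one first passes test 4, while the reverse order fails all four tests, signalling that it would be drawn in the wrong order.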
If the tests are all false, then the polygons must be split. Splitting is accomplished by selecting one polygon and<br />
cutting it along the line of intersection with the other polygon. The above tests are again performed, and the<br />
algorithm continues until all polygons pass the above tests.<br />
References<br />
• Sutherland, Ivan E.; Sproull, Robert F.; Schumacker, Robert A. (1974), "A characterization of ten hidden-surface<br />
algorithms", Computing Surveys 6 (1): 1–55, doi:10.1145/356625.356626.<br />
• Newell, M. E.; Newell, R. G.; Sancha, T. L. (1972), "A new approach to the shaded picture problem", Proc. ACM<br />
National Conference, pp. 443–450.
Non-uniform rational B-spline<br />
Non-uniform rational basis spline<br />
(NURBS) is a mathematical model<br />
commonly used in computer <strong>graphics</strong><br />
for generating and representing curves<br />
and surfaces which offers great<br />
flexibility and precision for handling<br />
both analytic and freeform shapes.<br />
History<br />
Development of NURBS began in the<br />
1950s by engineers who were in need<br />
of a mathematically precise<br />
representation of freeform surfaces like<br />
those used for ship hulls, aerospace<br />
exterior surfaces, and car bodies,<br />
which could be exactly reproduced<br />
whenever technically needed. Prior<br />
representations of this kind of surface<br />
only existed as a single physical model<br />
created by a designer.<br />
The pioneers of this development were<br />
Pierre Bézier who worked as an<br />
engineer at Renault, and Paul de<br />
Casteljau who worked at Citroën, both in France.<br />
Three-dimensional NURBS surfaces can have complex, organic shapes. Control points influence the directions the<br />
surface takes. The outermost square below delineates the X/Y extents of the surface.<br />
A NURBS curve.<br />
Bézier worked nearly in parallel with de Casteljau, neither knowing about the work of the other. But because<br />
Bézier published the results of his work, the average computer graphics user today recognizes splines, which are represented with control points lying off the curve itself, as Bézier splines, while de Casteljau's name is only
known and used for the algorithms he developed to evaluate parametric surfaces. In the 1960s it became clear that<br />
non-uniform, rational B-splines are a generalization of Bézier splines, which can be regarded as uniform,<br />
non-rational B-splines.<br />
At first NURBS were only used in the proprietary CAD packages of car companies. Later they became part of<br />
standard computer graphics packages.
Real-time, interactive rendering of NURBS curves and surfaces was first made available on Silicon Graphics<br />
workstations in 1989. In 1993, the first interactive NURBS modeller for PCs, called NöRBS, was developed by CAS<br />
Berlin, a small startup company cooperating with the Technical University of Berlin. Today most professional<br />
computer graphics applications available for desktop use offer NURBS technology, which is most often realized by
integrating a NURBS engine from a specialized company.
Use<br />
NURBS are commonly used in computer-aided design (CAD), manufacturing (CAM), and engineering (CAE) and are part of numerous industry-wide standards, such as IGES, STEP, ACIS, and PHIGS. NURBS tools are also found in various 3D modeling and animation software packages, such as form•Z, Blender, 3ds Max, Maya, Rhino3D, Cinema 4D, Cobalt, Shark FX, and Solid Modeling Solutions. In addition, there are specialized NURBS modeling software packages such as Autodesk Alias Surface, solidThinking, and ICEM Surf.
They allow representation of geometrical shapes in a compact form. They can be efficiently handled by computer programs and yet allow for easy human interaction. NURBS surfaces are functions of two parameters mapping to a
surface in three-dimensional space. The shape of the surface is determined by control points.<br />
In general, editing NURBS curves and surfaces is highly intuitive and predictable. Control points are always either<br />
connected directly to the curve/surface, or act as if they were connected by a rubber band. Depending on the type of<br />
user interface, editing can be realized via an element’s control points, which are most obvious and common for<br />
Bézier curves, or via higher level tools such as spline modeling or hierarchical editing.<br />
A surface under construction, e.g. the hull of a motor yacht, is usually composed of several NURBS surfaces known<br />
as patches. These patches should be fitted together in such a way that the boundaries are invisible. This is<br />
mathematically expressed by the concept of geometric continuity.<br />
Higher-level tools exist which benefit from the ability of NURBS to create and establish geometric continuity of<br />
different levels:<br />
Positional continuity (G0)<br />
holds whenever the end positions of two curves or surfaces are coincidental. The curves or surfaces may still<br />
meet at an angle, giving rise to a sharp corner or edge and causing broken highlights.<br />
Tangential continuity (G1)<br />
requires the end vectors of the curves or surfaces to be parallel, ruling out sharp edges. Because highlights<br />
falling on a tangentially continuous edge are always continuous and thus look natural, this level of continuity<br />
can often be sufficient.<br />
Curvature continuity (G2)<br />
further requires the end vectors to be of the same length and rate of length change. Highlights falling on a<br />
curvature-continuous edge do not display any change, causing the two surfaces to appear as one. This can be<br />
visually recognized as “perfectly smooth”. This level of continuity is very useful in the creation of models that<br />
require many bi-cubic patches composing one continuous surface.<br />
Geometric continuity mainly refers to the shape of the resulting surface; since NURBS surfaces are functions, it is<br />
also possible to discuss the derivatives of the surface with respect to the parameters. This is known as parametric<br />
continuity. Parametric continuity of a given degree implies geometric continuity of that degree.<br />
First- and second-level parametric continuity (C0 and C1) are for practical purposes identical to positional and<br />
tangential (G0 and G1) continuity. Third-level parametric continuity (C2), however, differs from curvature<br />
continuity in that its parameterization is also continuous. In practice, C2 continuity is easier to achieve if uniform<br />
B-splines are used.
The definition of Cn continuity requires that the nth derivatives of the curve/surface, d^n C(u) / du^n, are equal at a joint. [1] Note that the (partial) derivatives of curves and surfaces are vectors that have a direction and a magnitude; both should be equal.
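The difference between geometric and parametric continuity can be made concrete with two quadratic Bézier segments whose end tangents are parallel but of different lengths: the joint is G1 but not C1. A small Python sketch (the control points are illustrative):

```python
def bezier2_deriv(p0, p1, p2, t):
    """First derivative of a quadratic Bezier curve at parameter t:
    B'(t) = 2((1 - t)(p1 - p0) + t(p2 - p1))."""
    return tuple(2 * ((1 - t) * (b - a) + t * (c - b))
                 for a, b, c in zip(p0, p1, p2))

# Segment A ends at (2, 0); segment B starts there (shared joint).
A = ((0, 0), (1, 1), (2, 0))
B = ((2, 0), (4, -2), (6, 0))

dA = bezier2_deriv(*A, 1.0)   # tangent at the end of A: (2, -2)
dB = bezier2_deriv(*B, 0.0)   # tangent at the start of B: (4, -4)

cross = dA[0] * dB[1] - dA[1] * dB[0]
assert cross == 0             # parallel end tangents: G1 continuity holds
assert dA != dB               # unequal derivative vectors: C1 fails
```

A surface joint that is G1 but not C1 still renders with continuous highlights; only the parameterization speed jumps across the seam.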
Highlights and reflections can reveal perfect smoothness, which is otherwise practically impossible to achieve
without NURBS surfaces that have at least G2 continuity. This same principle is used as one of the surface<br />
evaluation methods whereby a ray-traced or reflection-mapped image of a surface with white stripes reflecting on it<br />
will show even the smallest deviations on a surface or set of surfaces. This method is derived from car prototyping<br />
wherein surface quality is inspected by checking the quality of reflections of a neon-light ceiling on the car surface.<br />
This method is also known as "Zebra analysis".<br />
Technical specifications<br />
A NURBS curve is defined by its order, a set of weighted control points, and a knot vector. NURBS curves and<br />
surfaces are generalizations of both B-splines and Bézier curves and surfaces, the primary difference being the<br />
weighting of the control points which makes NURBS curves rational (non-rational B-splines are a special case of<br />
rational B-splines). Whereas Bézier curves evolve in only one parametric direction, usually called s or u, NURBS surfaces evolve in two parametric directions, called s and t or u and v.
By evaluating a NURBS curve at<br />
various values of the parameter, the<br />
curve can be represented in Cartesian<br />
two- or three-dimensional space.<br />
Likewise, by evaluating a NURBS<br />
surface at various values of the two<br />
parameters, the surface can be<br />
represented in Cartesian space.<br />
NURBS curves and surfaces are useful<br />
for a number of reasons:<br />
• They are invariant under affine [2] as<br />
well as perspective [3]<br />
transformations: operations like<br />
rotations and translations can be<br />
applied to NURBS curves and surfaces by applying them to their control points.<br />
• They offer one common mathematical form for both standard analytical shapes (e.g., conics) and free-form<br />
shapes.<br />
• They provide the flexibility to design a large variety of shapes.<br />
• They reduce the memory consumption when storing shapes (compared to simpler methods).<br />
• They can be evaluated reasonably quickly by numerically stable and accurate algorithms.<br />
In the next sections, NURBS is discussed in one dimension (curves); all of it can be generalized to two or even more dimensions.
Control points<br />
The control points determine the shape of the curve. Typically, each point of the curve is computed by taking a<br />
weighted sum of a number of control points. The weight of each point varies according to the governing parameter.<br />
For a curve of degree d, the weight of any control point is only nonzero in d+1 intervals of the parameter space.<br />
Within those intervals, the weight changes according to a polynomial function (basis functions) of degree d. At the<br />
boundaries of the intervals, the basis functions go smoothly to zero, the smoothness being determined by the degree
of the polynomial.<br />
As an example, the basis function of degree one is a triangle function. It rises from zero to one, then falls to zero<br />
again. While it rises, the basis function of the previous control point falls. In that way, the curve interpolates between<br />
the two points, and the resulting curve is a polygon, which is continuous, but not differentiable at the interval<br />
boundaries, or knots. Higher degree polynomials have correspondingly more continuous derivatives. Note that<br />
within the interval the polynomial nature of the basis functions and the linearity of the construction make the curve<br />
perfectly smooth, so it is only at the knots that discontinuity can arise.<br />
The fact that a single control point only influences those intervals where it is active is a highly desirable property,<br />
known as local support. In modeling, it allows the changing of one part of a surface while keeping other parts equal.<br />
Adding more control points allows better approximation to a given curve, although only a certain class of curves can<br />
be represented exactly with a finite number of control points. NURBS curves also feature a scalar weight for each<br />
control point. This allows for more control over the shape of the curve without unduly raising the number of control<br />
points. In particular, it adds conic sections like circles and ellipses to the set of curves that can be represented<br />
exactly. The term rational in NURBS refers to these weights.<br />
The control points can have any dimensionality. One-dimensional points just define a scalar function of the<br />
parameter. These are typically used in image processing programs to tune the brightness and color curves.<br />
Three-dimensional control points are used abundantly in 3D modeling, where they carry the everyday meaning of the word 'point': a location in 3D space. Multi-dimensional points might be used to control sets of time-driven
values, e.g. the different positional and rotational settings of a robot arm. NURBS surfaces are just an application of<br />
this. Each control 'point' is actually a full vector of control points, defining a curve. These curves share their degree<br />
and the number of control points, and span one dimension of the parameter space. By interpolating these control<br />
vectors over the other dimension of the parameter space, a continuous set of curves is obtained, defining the surface.<br />
The knot vector<br />
The knot vector is a sequence of parameter values that determines where and how the control points affect the<br />
NURBS curve. The number of knots is always equal to the number of control points plus the curve degree plus one (conventions that omit the two outermost knots, which never influence the curve, give control points plus degree minus one).
The knot vector divides the parametric space in the intervals mentioned before, usually referred to as knot spans.<br />
Each time the parameter value enters a new knot span, a new control point becomes active, while an old control<br />
point is discarded. It follows that the values in the knot vector should be in nondecreasing order, so (0, 0, 1, 2, 3, 3) is<br />
valid while (0, 0, 2, 1, 3, 3) is not.<br />
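The nondecreasing-order requirement is straightforward to check mechanically; a minimal Python sketch (the function name is illustrative):

```python
def is_valid_knot_vector(knots):
    """A knot vector is valid only if its values are in nondecreasing order."""
    return all(a <= b for a, b in zip(knots, knots[1:]))

assert is_valid_knot_vector([0, 0, 1, 2, 3, 3])       # valid, repeats allowed
assert not is_valid_knot_vector([0, 0, 2, 1, 3, 3])   # 2 > 1: invalid
```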
Consecutive knots can have the same value. This then defines a knot span of zero length, which implies that two<br />
control points are activated at the same time (and of course two control points become deactivated). This has impact<br />
on continuity of the resulting curve or its higher derivatives; for instance, it allows the creation of corners in an<br />
otherwise smooth NURBS curve. A number of coinciding knots is sometimes referred to as a knot with a certain<br />
multiplicity. Knots with multiplicity two or three are known as double or triple knots. The multiplicity of a knot is<br />
limited to the degree of the curve, since a higher multiplicity would split the curve into disjoint parts and would
leave control points unused. For first-degree NURBS, each knot is paired with a control point.<br />
The knot vector usually starts with a knot that has multiplicity equal to the order. This makes sense, since this<br />
activates the control points that have influence on the first knot span. Similarly, the knot vector usually ends with a<br />
knot of that multiplicity. Curves with such knot vectors start and end in a control point.<br />
The individual knot values are not meaningful by themselves; only the ratios of the difference between the knot<br />
values matter. Hence, the knot vectors (0, 0, 1, 2, 3, 3) and (0, 0, 2, 4, 6, 6) produce the same curve. The positions of<br />
the knot values influence the mapping of parameter space to curve space. Rendering a NURBS curve is usually
done by stepping with a fixed stride through the parameter range. By changing the knot span lengths, more sample<br />
points can be used in regions where the curvature is high. Another use is in situations where the parameter value has<br />
some physical significance, for instance if the parameter is time and the curve describes the motion of a robot arm.
The knot span lengths then translate into velocity and acceleration, which are essential to get right to prevent damage<br />
to the robot arm or its environment. This flexibility in the mapping is what the phrase non-uniform in NURBS refers
to.<br />
Necessary only for internal calculations, knots are usually not helpful to the users of modeling software. Therefore,<br />
many modeling applications do not make the knots editable or even visible. It's usually possible to establish<br />
reasonable knot vectors by looking at the variation in the control points. More recent versions of NURBS software<br />
(e.g., Autodesk Maya and Rhinoceros 3D) allow for interactive editing of knot positions, but this is significantly less
intuitive than the editing of control points.<br />
Order<br />
The order of a NURBS curve defines the number of nearby control points that influence any given point on the<br />
curve. The curve is represented mathematically by a polynomial of degree one less than the order of the curve.<br />
Hence, second-order curves (which are represented by linear polynomials) are called linear curves, third-order curves<br />
are called quadratic curves, and fourth-order curves are called cubic curves. The number of control points must be<br />
greater than or equal to the order of the curve.<br />
In practice, cubic curves are the ones most commonly used. Fifth- and sixth-order curves are sometimes useful,<br />
especially for obtaining continuous higher order derivatives, but curves of higher orders are practically never used<br />
because they lead to internal numerical problems and tend to require disproportionately large calculation times.<br />
Construction of the basis functions [4]<br />
The basis functions used in NURBS curves are usually denoted as N_{i,n}(u), in which i corresponds to the i-th control point and n corresponds with the degree of the basis function. The parameter dependence is frequently left out, so we can write N_{i,n}. The definition of these basis functions is recursive in n. The degree-0 functions N_{i,0} are piecewise constant functions: they are one on the corresponding knot span and zero everywhere else. Effectively, N_{i,n} is a linear interpolation of N_{i,n-1} and N_{i+1,n-1}. The latter two functions are non-zero for n knot spans, overlapping for n-1 knot spans. The function N_{i,n} is computed as

N_{i,n} = f_{i,n} N_{i,n-1} + g_{i+1,n} N_{i+1,n-1}
Figure (from bottom to top): linear basis functions N_{1,1} (blue) and N_{2,1} (green), their weight functions f and g, and the resulting quadratic basis function. The knots are 0, 1, 2 and 2.5.
f_{i,n} rises linearly from zero to one on the interval where N_{i,n-1} is non-zero, while g_{i+1,n} falls from one to zero on the interval where N_{i+1,n-1} is non-zero. As mentioned before, N_{i,1} is a triangular function, nonzero over two knot spans: rising from zero to one on the first, and falling to zero on the second knot span. Higher-order basis functions are non-zero over correspondingly more knot spans and have correspondingly higher degree. If u is the parameter and k_i is the i-th knot, we can write the functions f and g as

f_{i,n}(u) = (u - k_i) / (k_{i+n} - k_i)

and

g_{i,n}(u) = (k_{i+n} - u) / (k_{i+n} - k_i)
The functions f and g are positive when the corresponding lower-order basis functions are non-zero. By induction on n it follows that the basis functions are non-negative for all values of n and u. This makes the computation of
the basis functions numerically stable.<br />
Again by induction, it can be proved that the sum of the basis functions for a particular value of the parameter is<br />
unity. This is known as the partition of unity property of the basis functions.<br />
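The recursion above is the Cox–de Boor formula. A direct, unoptimized Python transcription (a sketch, using the usual convention that 0/0 = 0 in the f and g weight functions, and the full knot-vector convention with number of knots equal to control points plus degree plus one) verifies the non-negativity and partition-of-unity properties on a sample clamped knot vector:

```python
def basis(i, n, u, knots):
    """B-spline basis function N_{i,n}(u) via the Cox-de Boor recursion,
    with the convention that 0/0 = 0 for the f and g weight functions."""
    if n == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    f = g = 0.0
    if knots[i + n] != knots[i]:            # f_{i,n}: rises from 0 to 1
        f = (u - knots[i]) / (knots[i + n] - knots[i])
    if knots[i + 1 + n] != knots[i + 1]:    # g_{i+1,n}: falls from 1 to 0
        g = (knots[i + 1 + n] - u) / (knots[i + 1 + n] - knots[i + 1])
    return f * basis(i, n - 1, u, knots) + g * basis(i + 1, n - 1, u, knots)

knots = [0, 0, 0, 1, 2, 3, 3, 3]            # clamped knot vector, degree 2
degree = 2
count = len(knots) - degree - 1             # 5 basis functions
for u in (0.0, 0.5, 1.5, 2.9):
    values = [basis(i, degree, u, knots) for i in range(count)]
    assert all(v >= 0 for v in values)              # non-negativity
    assert abs(sum(values) - 1.0) < 1e-12           # partition of unity
```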
The figures show the linear and the quadratic basis functions for the<br />
knots {..., 0, 1, 2, 3, 4, 4.1, 5.1, 6.1, 7.1, ...}<br />
One knot span is considerably shorter than the others. On that knot<br />
span, the peak in the quadratic basis function is more distinct, reaching<br />
almost one. Conversely, the adjoining basis functions fall to zero more<br />
quickly. In the geometrical interpretation, this means that the curve<br />
approaches the corresponding control point closely. In case of a double<br />
knot, the length of the knot span becomes zero and the peak reaches<br />
one exactly. The basis function is no longer differentiable at that point.<br />
The curve will have a sharp corner if the neighbour control points are not collinear.<br />
General form of a NURBS curve<br />
Using the definitions of the basis functions N_{i,n} from the previous paragraph, a NURBS curve takes the following form [5]:

C(u) = Σ_{i=1}^{k} ( N_{i,n} w_i / Σ_{j=1}^{k} N_{j,n} w_j ) P_i

In this, k is the number of control points P_i, and w_i are the corresponding weights. The denominator is a normalizing factor that evaluates to one if all weights are one. This can be seen from the partition of unity property of the basis functions. It is customary to write this as

C(u) = Σ_{i=1}^{k} R_{i,n}(u) P_i

in which the functions

R_{i,n}(u) = N_{i,n}(u) w_i / Σ_{j=1}^{k} N_{j,n}(u) w_j

are known as the rational basis functions.
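A sketch of the rational basis functions in Python (the Cox–de Boor routine is restated so the example is self-contained; the knots and weights are arbitrary illustrative values). It checks that the rational basis still sums to one, and that it reduces to the plain B-spline basis when all weights are equal:

```python
def basis(i, n, u, knots):
    """B-spline basis N_{i,n}(u) (Cox-de Boor recursion, 0/0 = 0 convention)."""
    if n == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    f = g = 0.0
    if knots[i + n] != knots[i]:
        f = (u - knots[i]) / (knots[i + n] - knots[i])
    if knots[i + 1 + n] != knots[i + 1]:
        g = (knots[i + 1 + n] - u) / (knots[i + 1 + n] - knots[i + 1])
    return f * basis(i, n - 1, u, knots) + g * basis(i + 1, n - 1, u, knots)

def rational_basis(i, n, u, knots, weights):
    """Rational basis R_{i,n}(u) = N_{i,n} w_i / sum_j N_{j,n} w_j."""
    den = sum(basis(j, n, u, knots) * w for j, w in enumerate(weights))
    return basis(i, n, u, knots) * weights[i] / den

knots = [0, 0, 0, 1, 2, 3, 3, 3]
weights = [1.0, 0.5, 2.0, 1.0, 1.5]           # arbitrary positive weights
for u in (0.25, 1.5, 2.75):
    R = [rational_basis(i, 2, u, knots, weights) for i in range(5)]
    assert abs(sum(R) - 1.0) < 1e-12           # rational basis also sums to one
    unit = [rational_basis(i, 2, u, knots, [1.0] * 5) for i in range(5)]
    assert all(abs(r - basis(i, 2, u, knots)) < 1e-12
               for i, r in enumerate(unit))    # equal weights: reduces to N_{i,n}
```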
Manipulating NURBS objects<br />
A number of transformations can be applied to a NURBS object. For instance, if some curve is defined using a<br />
certain degree and N control points, the same curve can be expressed using the same degree and N+1 control points.<br />
In the process a number of control points change position and a knot is inserted in the knot vector. These<br />
manipulations are used extensively during interactive design. When adding a control point, the shape of the curve<br />
should stay the same, forming the starting point for further adjustments. A number of these operations are discussed<br />
below. [6]<br />
Knot insertion<br />
As the term suggests, knot insertion inserts a knot into the knot vector. If the degree of the curve is n, then n - 1 control points are replaced by n new ones. The shape of the curve stays the same.
A knot can be inserted multiple times, up to the maximum multiplicity of the knot. This is sometimes referred to as<br />
knot refinement and can be achieved by an algorithm that is more efficient than repeated knot insertion.<br />
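The single-insertion step can be sketched in Python using Boehm's knot-insertion algorithm (a standard formulation, not code from the text; the example data are illustrative). Evaluating the curve before and after the insertion confirms that the shape stays the same while one knot and one control point are added:

```python
def basis(i, n, u, knots):
    """B-spline basis N_{i,n}(u) (Cox-de Boor recursion, 0/0 = 0 convention)."""
    if n == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    f = g = 0.0
    if knots[i + n] != knots[i]:
        f = (u - knots[i]) / (knots[i + n] - knots[i])
    if knots[i + 1 + n] != knots[i + 1]:
        g = (knots[i + 1 + n] - u) / (knots[i + 1 + n] - knots[i + 1])
    return f * basis(i, n - 1, u, knots) + g * basis(i + 1, n - 1, u, knots)

def curve_point(u, knots, ctrl, degree):
    """Evaluate a 2D B-spline curve as a basis-weighted sum of control points."""
    return tuple(sum(basis(i, degree, u, knots) * p[d] for i, p in enumerate(ctrl))
                 for d in range(2))

def insert_knot(u, knots, ctrl, degree):
    """Boehm's algorithm: insert u once; degree-1 control points are replaced."""
    k = max(i for i in range(len(knots) - 1) if knots[i] <= u)   # knot span of u
    new_knots = knots[:k + 1] + [u] + knots[k + 1:]
    new_ctrl = []
    for i in range(len(ctrl) + 1):
        if i <= k - degree:
            new_ctrl.append(ctrl[i])                 # unaffected leading points
        elif i > k:
            new_ctrl.append(ctrl[i - 1])             # unaffected trailing points
        else:                                        # blended replacement points
            a = (u - knots[i]) / (knots[i + degree] - knots[i])
            new_ctrl.append(tuple((1 - a) * p + a * q
                                  for p, q in zip(ctrl[i - 1], ctrl[i])))
    return new_knots, new_ctrl

knots = [0, 0, 0, 1, 2, 3, 3, 3]                     # clamped, degree 2
ctrl = [(0, 0), (1, 2), (3, 3), (5, 1), (6, 0)]      # illustrative points
k2, c2 = insert_knot(1.5, knots, ctrl, 2)
assert len(k2) == len(knots) + 1 and len(c2) == len(ctrl) + 1
for u in (0.25, 1.5, 2.75):                          # shape is preserved
    p_old = curve_point(u, knots, ctrl, 2)
    p_new = curve_point(u, k2, c2, 2)
    assert all(abs(a - b) < 1e-12 for a, b in zip(p_old, p_new))
```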
Knot removal<br />
Knot removal is the reverse of knot insertion. Its purpose is to remove knots and the associated control points in<br />
order to get a more compact representation. Obviously, this is not always possible while retaining the exact shape of<br />
the curve. In practice, a tolerance in the accuracy is used to determine whether a knot can be removed. The process is<br />
used to clean up after an interactive session in which control points may have been added manually, or after<br />
importing a curve from a different representation, where a straightforward conversion process leads to redundant<br />
control points.<br />
Degree elevation<br />
A NURBS curve of a particular degree can always be represented by a NURBS curve of higher degree. This is<br />
frequently used when combining separate NURBS curves, e.g. when creating a NURBS surface interpolating<br />
between a set of NURBS curves or when unifying adjacent curves. In the process, the different curves should be<br />
brought to the same degree, usually the maximum degree of the set of curves. The process is known as degree<br />
elevation.<br />
Curvature<br />
The most important property in differential geometry is the curvature κ. It describes the local properties (edges, corners, etc.) and the relation between the first and second derivative, and thus the precise curve shape. Having determined the derivatives, it is easy to compute the curvature as κ = |r' × r''| / |r'|^3, or, approximated via the arc length, from the second derivative as κ = |r''(s_0)|. The direct computation of the curvature with these equations is the big advantage of parameterized curves over their polygonal representations.
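As a numerical illustration of the curvature formula κ = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2) (with finite differences standing in for the analytic derivatives), the sketch below recovers κ = 1/R for a circle of radius R = 2:

```python
import math

def curvature(curve, t, h=1e-4):
    """kappa(t) = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2), using central
    differences to approximate the derivatives of a parametric 2D curve."""
    x = lambda s: curve(s)[0]
    y = lambda s: curve(s)[1]
    d = lambda f: (f(t + h) - f(t - h)) / (2 * h)            # first derivative
    dd = lambda f: (f(t + h) - 2 * f(t) + f(t - h)) / h**2   # second derivative
    xp, yp, xpp, ypp = d(x), d(y), dd(x), dd(y)
    return abs(xp * ypp - yp * xpp) / (xp * xp + yp * yp) ** 1.5

circle = lambda t: (2.0 * math.cos(t), 2.0 * math.sin(t))    # radius R = 2
for t in (0.0, 1.0, 2.5):
    assert abs(curvature(circle, t) - 0.5) < 1e-4            # kappa = 1/R
```

For an exact NURBS curve the derivatives would be computed analytically from the basis functions; the finite-difference stand-in is only for demonstration.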
Example: a circle<br />
Non-rational splines or Bézier curves may approximate a circle, but they cannot represent it exactly. Rational splines<br />
can represent any conic section, including the circle, exactly. This representation is not unique, but one possibility<br />
appears below:<br />
 x    y    z   weight
 1    0    0   1
 1    1    0   √2/2
 0    1    0   1
−1    1    0   √2/2
−1    0    0   1
−1   −1    0   √2/2
 0   −1    0   1
 1   −1    0   √2/2
 1    0    0   1
The order is three, since a circle is a quadratic curve and the spline's order is one more than the degree of its piecewise polynomial segments. The knot vector is (0, 0, 0, π/2, π/2, π, π, 3π/2, 3π/2, 2π, 2π, 2π). The circle is composed of four quarter circles, tied together with double knots. Although double knots in a third-order
NURBS curve would normally result in loss of continuity in the first derivative, the control points are positioned in<br />
such a way that the first derivative is continuous. (In fact, the curve is infinitely differentiable everywhere, as it must<br />
be if it exactly represents a circle.)<br />
The curve represents a circle exactly, but it is not exactly parametrized in the circle's arc length. This means, for example, that the point at parameter t does not lie at (cos t, sin t) (except for the start, middle and end point of each quarter circle, since the representation is symmetrical). This is obvious: the x coordinate of the circle would otherwise provide an exact rational polynomial expression for cos t, which is impossible. The circle does make one full revolution as its parameter t goes from 0 to 2π, but this is only because the knot vector was arbitrarily chosen as multiples of π/2.
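The claim that this representation traces the unit circle can be verified numerically. The sketch below evaluates the rational curve with a standard Cox–de Boor basis-function routine (a sketch, not code from the text) and checks x² + y² = 1 at sample parameters:

```python
import math

def basis(i, n, u, knots):
    """B-spline basis N_{i,n}(u) (Cox-de Boor recursion, 0/0 = 0 convention)."""
    if n == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    f = g = 0.0
    if knots[i + n] != knots[i]:
        f = (u - knots[i]) / (knots[i + n] - knots[i])
    if knots[i + 1 + n] != knots[i + 1]:
        g = (knots[i + 1 + n] - u) / (knots[i + 1 + n] - knots[i + 1])
    return f * basis(i, n - 1, u, knots) + g * basis(i + 1, n - 1, u, knots)

w = math.sqrt(2) / 2
ctrl = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0),
        (-1, -1), (0, -1), (1, -1), (1, 0)]
weights = [1, w, 1, w, 1, w, 1, w, 1]
p = math.pi
knots = [0, 0, 0, p/2, p/2, p, p, 3*p/2, 3*p/2, 2*p, 2*p, 2*p]

def circle_point(u):
    """Rational (NURBS) evaluation: weighted basis sum over control points."""
    N = [basis(i, 2, u, knots) for i in range(len(ctrl))]
    den = sum(n * wi for n, wi in zip(N, weights))
    return tuple(sum(n * wi * c[d] for n, wi, c in zip(N, weights, ctrl)) / den
                 for d in range(2))

for u in (0.1, 1.0, 2.0, 3.5, 5.0, 6.2):
    x, y = circle_point(u)
    assert abs(x * x + y * y - 1.0) < 1e-9   # every sample lies on the unit circle
```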
References<br />
• Les Piegl & Wayne Tiller: The NURBS Book, Springer-Verlag 1995–1997 (2nd ed.). The main reference for<br />
Bézier, B-Spline and NURBS; chapters on mathematical representation and construction of curves and surfaces,<br />
interpolation, shape modification, programming concepts.<br />
• Dr. Thomas Sederberg, BYU NURBS, http://cagd.cs.byu.edu/~557/text/ch6.pdf
• Dr. Lyle Ramshaw. Blossoming: A connect-the-dots approach to splines, Research Report 19, Compaq Systems<br />
Research Center, Palo Alto, CA, June 1987<br />
• David F. Rogers: An Introduction to NURBS with Historical Perspective, Morgan Kaufmann Publishers 2001.<br />
Good elementary book for NURBS and related issues.
Notes<br />
[1] Foley, van Dam, Feiner & Hughes: Computer Graphics: Principles and Practice, section 11.2, Addison Wesley 1996 (2nd ed.).<br />
[2] David F. Rogers: An Introduction to NURBS with Historical Perspective, section 7.1<br />
[3] Demidov, Evgeny. "NonUniform Rational B-splines (NURBS) - Perspective projection" (http://www.ibiblio.org/e-notes/Splines/NURBS.htm). An Interactive Introduction to Splines. Ibiblio. Retrieved 2010-02-14.
[4] Les Piegl & Wayne Tiller: The NURBS Book, chapter 2, sec. 2<br />
[5] Les Piegl & Wayne Tiller: The NURBS Book, chapter 4, sec. 2<br />
[6] Les Piegl & Wayne Tiller: The NURBS Book, chapter 5<br />
External links<br />
• About Nonuniform Rational B-Splines - NURBS (http://www.cs.wpi.edu/~matt/courses/cs563/talks/nurbs.html)
• An Interactive Introduction to Splines (http://ibiblio.org/e-notes/Splines/Intro.htm)
• http://www.cs.bris.ac.uk/Teaching/Resources/COMS30115/all.pdf
• http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/AV0405/DONAVANIK/bezier.html
• http://mathcs.holycross.edu/~croyden/csci343/notes.html (Lecture 33: Bézier Curves, Splines)
• http://www.cs.mtu.edu/~shene/COURSES/cs3621/NOTES/notes.html
• A free software package for handling NURBS curves, surfaces and volumes (http://octave.sourceforge.net/nurbs) in Octave and Matlab
Normal mapping<br />
In 3D computer graphics, normal mapping, or "Dot3 bump mapping",
is a technique used for faking the lighting of bumps and dents. It is<br />
used to add details without using more polygons. A normal map is<br />
usually an RGB image that corresponds to the X, Y, and Z coordinates<br />
of a surface normal from a more detailed version of the object. A<br />
common use of this technique is to greatly enhance the appearance and<br />
details of a low polygon model by generating a normal map from a<br />
high polygon model.<br />
History<br />
Figure: Normal mapping used to re-detail simplified meshes.
The idea of taking geometric details from a high polygon model was introduced in "Fitting Smooth Surfaces to<br />
Dense Polygon Meshes" by Krishnamurthy and Levoy, Proc. SIGGRAPH 1996 [1], where this approach was used for creating displacement maps over NURBS. In 1998, two papers were presented with key ideas for transferring details
with normal maps from high to low polygon meshes: "Appearance Preserving Simplification", by Cohen et al.<br />
SIGGRAPH 1998 [2] , and "A general method for preserving attribute values on simplified meshes" by Cignoni et al.<br />
IEEE Visualization '98 [3] . The former introduced the idea of storing surface normals directly in a texture, rather than<br />
displacements, though it required the low-detail model to be generated by a particular constrained simplification<br />
algorithm. The latter presented a simpler approach that decouples the high and low polygonal mesh and allows the<br />
recreation of any attributes of the high-detail model (color, texture coordinates, displacements, etc.) in a way that is<br />
not dependent on how the low-detail model was created. The combination of storing normals in a texture, with the<br />
more general creation process is still used by most currently available tools.
How it works<br />
To calculate the Lambertian (diffuse) lighting of a surface, the unit vector from the shading point to the light source<br />
is dotted with the unit vector normal to that surface, and the result is the intensity of the light on that surface.<br />
Imagine a polygonal model of a sphere: such a mesh can only approximate the shape of the surface. By using a 3-channel
bitmap textured across the model, more detailed normal vector information can be encoded. Each channel in the<br />
bitmap corresponds to a spatial dimension (X, Y and Z). These spatial dimensions are relative to a constant<br />
coordinate system for object-space normal maps, or to a smoothly varying coordinate system (based on the<br />
derivatives of position with respect to texture coordinates) in the case of tangent-space normal maps. This adds much<br />
more detail to the surface of a model, especially in conjunction with advanced lighting techniques.<br />
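The diffuse computation described above can be sketched as follows (function and argument names are illustrative):

```python
import math

def lambert(normal, to_light):
    """Lambertian diffuse intensity: the dot product of the unit surface
    normal and the unit vector toward the light, clamped at zero."""
    def unit(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    n, l = unit(normal), unit(to_light)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

assert lambert((0, 0, 1), (0, 0, 5)) == 1.0     # light head-on: full intensity
assert lambert((0, 0, 1), (0, 0, -1)) == 0.0    # light behind the surface
# A normal perturbed by a normal map changes the shading of the same point:
assert abs(lambert((0, 0, 1), (1, 0, 1)) - math.sqrt(2) / 2) < 1e-12
```

Normal mapping works by feeding per-texel normals from the map into this same dot product instead of the single interpolated polygon normal.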
Since the normal is used in the dot product for the diffuse lighting computation, its components, which lie in [-1, 1], must be remapped to the [0, 255] range of an 8-bit image channel: channel = (component/2 + 0.5) × 255. The normal {0, 0, -1} is thus remapped to {128, 128, 255}, giving the characteristic sky-blue color of normal maps (the blue z channel encodes depth, while red and green encode the x and y screen-plane coordinates). Likewise, {0.3, 0.4, -0.866} is remapped to ({0.3, 0.4, 0.866}/2 + {0.5, 0.5, 0.5}) × 255 = {0.65, 0.7, 0.933} × 255 ≈ {166, 179, 238}. The sign of the z coordinate is flipped so that the stored normal matches the eye (viewpoint or camera) or light vector: a negative z in view space means the surface faces the camera, and when the light vector and the normal coincide the surface is lit at maximum strength.
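That remapping can be written as a tiny encoder (the function name and the round-half-up choice are illustrative assumptions):

```python
def encode_normal(n):
    """Map a unit normal (components in [-1, 1]) to 8-bit RGB values,
    flipping the sign of z per the convention described above."""
    flipped = (n[0], n[1], -n[2])
    # Scale [-1, 1] to [0, 255], rounding half up.
    return tuple(int((c / 2 + 0.5) * 255 + 0.5) for c in flipped)

assert encode_normal((0.0, 0.0, -1.0)) == (128, 128, 255)   # the "sky blue"
assert encode_normal((0.3, 0.4, -0.866)) == (166, 179, 238)
```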
Calculating Tangent Space<br />
In order to find the perturbation in the normal, the tangent space must be correctly calculated [4]. Most often the normal is perturbed in a fragment shader, after applying the model and view matrices. Typically the geometry provides a normal and a tangent. The tangent is part of the tangent plane and can be transformed simply with the linear part of the matrix (the upper 3×3). However, the normal needs to be transformed by the inverse transpose. Most applications will want the cotangent to match the transformed geometry (and the associated UVs), so instead of enforcing the cotangent to be perpendicular to the tangent, it is generally preferable to transform the cotangent just like the tangent. Let t be the tangent, b the cotangent, n the normal, M_3x3 the linear part of the model matrix, and V_3x3 the linear part of the view matrix. The transformed vectors are then

t' = V_3x3 M_3x3 t
b' = V_3x3 M_3x3 b
n' = ((V_3x3 M_3x3)^-1)^T n
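The need for the inverse transpose can be checked numerically: under a shear, transforming the normal with the plain linear part breaks its perpendicularity to the transformed tangent, while the inverse transpose preserves it. A pure-Python sketch (the shear matrix is an arbitrary example):

```python
def matvec(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def transpose(m):
    return [list(row) for row in zip(*m)]

def inverse3(m):
    """Inverse of a 3x3 matrix via the adjugate (cofactor) formula."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return [[(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
            [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
            [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

M = [[1, 1, 0],     # a shear of x by y: not angle-preserving
     [0, 1, 0],
     [0, 0, 1]]
t = (1, 0, 0)       # tangent
n = (0, 1, 0)       # normal, perpendicular to t

t2 = matvec(M, t)                           # tangent: transform with M directly
n_naive = matvec(M, n)                      # wrong: normal transformed with M
n_it = matvec(transpose(inverse3(M)), n)    # right: inverse transpose of M

assert dot(t2, n_naive) != 0                # perpendicularity broken
assert abs(dot(t2, n_it)) < 1e-12           # perpendicularity preserved
```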
Normal mapping in video games<br />
Interactive normal map rendering was originally only possible on PixelFlow, a parallel rendering machine built at the<br />
University of North Carolina at Chapel Hill. It was later possible to perform normal mapping on high-end SGI<br />
workstations using multi-pass rendering and framebuffer operations [5] or on low end PC hardware with some tricks<br />
using paletted textures. However, with the advent of shaders in personal computers and game consoles, normal<br />
mapping became widely used in proprietary commercial video games starting in late 2003, and followed by open<br />
source games in later years. Normal mapping's popularity for real-time rendering is due to its favorable ratio of visual quality to processing cost compared with other methods of producing similar effects. Much of this efficiency is made
possible by distance-indexed detail scaling, a technique which selectively decreases the detail of the normal map of a<br />
given texture (cf. mipmapping), meaning that more distant surfaces require less complex lighting simulation.<br />
Basic normal mapping can be implemented in any hardware that supports palettized textures. The first game console<br />
to have specialized normal mapping hardware was the Sega Dreamcast. However, Microsoft's Xbox was the first<br />
console to widely use the effect in retail games. Out of the sixth generation consoles, only the PlayStation 2's GPU<br />
lacks built-in normal mapping support. Games for the Xbox 360 and the PlayStation 3 rely heavily on normal
mapping and are beginning to implement parallax mapping. The Nintendo 3DS has been shown to support normal
mapping, as demonstrated by Resident Evil Revelations and Metal Gear Solid: Snake Eater.<br />
References<br />
[1] Krishnamurthy and Levoy, Fitting Smooth Surfaces to Dense Polygon Meshes (http://www-graphics.stanford.edu/papers/surfacefitting/), SIGGRAPH 1996
[2] Cohen et al., Appearance-Preserving Simplification (http://www.cs.unc.edu/~geom/APS/APS.pdf), SIGGRAPH 1998 (PDF)
[3] Cignoni et al., A general method for preserving attribute values on simplified meshes (http://vcg.isti.cnr.it/publications/papers/rocchini.pdf), IEEE Visualization 1998 (PDF)
[4] Mikkelsen, Simulation of Wrinkled Surfaces Revisited (http://image.diku.dk/projects/media/morten.mikkelsen.08.pdf), 2008 (PDF)
[5] Heidrich and Seidel, Realistic, Hardware-accelerated Shading and Lighting (http://www.cs.ubc.ca/~heidrich/Papers/Siggraph.99.pdf), SIGGRAPH 1999 (PDF)
External links
• Understanding Normal Maps (http://liman3d.com/tutorial_normalmaps.html)
• Introduction to Normal Mapping (http://www.game-artist.net/forums/vbarticles.php?do=article&articleid=16)
• Blender Normal Mapping (http://mediawiki.blender.org/index.php/Manual/Bump_and_Normal_Maps)
• Normal Mapping with paletted textures (http://vcg.isti.cnr.it/activities/geometryegraphics/bumpmapping.html) using old OpenGL extensions
• Normal Map Photography (http://zarria.net/nrmphoto/nrmphoto.html) Creating normal maps manually by layering digital photographs
• Normal Mapping Explained (http://www.3dkingdoms.com/tutorial.htm)
• xNormal (http://www.xnormal.net) A closed source, free normal mapper for Windows
Oren–Nayar reflectance model<br />
The Oren-Nayar reflectance model, developed by Michael Oren and Shree K. Nayar, is a reflectance model for<br />
diffuse reflection from rough surfaces. It has been shown to accurately predict the appearance of a wide range of<br />
natural surfaces, such as concrete, plaster, sand, etc.<br />
Introduction<br />
Reflectance is a physical property of a material that<br />
describes how it reflects incident light. The appearance<br />
of various materials is determined to a large extent by
their reflectance properties. Most reflectance models<br />
can be broadly classified into two categories: diffuse<br />
and specular. In computer vision and computer<br />
graphics, the diffuse component is often assumed to be
Lambertian. A surface that obeys Lambert's Law<br />
appears equally bright from all viewing directions. This<br />
model for diffuse reflection was proposed by Johann<br />
Heinrich Lambert in 1760 and has been perhaps the most widely used reflectance model in computer vision
and graphics. For a large number of real-world surfaces, such as concrete, plaster, sand, etc., however, the
Lambertian model is an inadequate approximation of the diffuse component. This is primarily because the
Lambertian model does not take the roughness of the surface into account.
[Figure: Comparison of a matte vase with a rendering based on the Lambertian model. Illumination is from the viewing direction.]
Rough surfaces can be modelled as a set of facets with different slopes, where each facet is a small planar patch.<br />
Since photo receptors of the retina and pixels in a camera are both finite-area detectors, substantial macroscopic<br />
(much larger than the wavelength of incident light) surface roughness is often projected onto a single detection<br />
element, which in turn produces an aggregate brightness value over many facets. Whereas Lambert’s law may hold<br />
well when observing a single planar facet, a collection of such facets with different orientations is guaranteed to<br />
violate Lambert’s law. The primary reason for this is that the foreshortened facet areas will change for different<br />
viewing directions, and thus the surface appearance will be view-dependent.
Analysis of this phenomenon has a long history and can<br />
be traced back almost a century. Past work has resulted<br />
in empirical models designed to fit experimental data as<br />
well as theoretical results derived from first principles.<br />
Much of this work was motivated by the<br />
non-Lambertian reflectance of the moon.<br />
The Oren-Nayar reflectance model, developed by<br />
Michael Oren and Shree K. Nayar in 1993,[1] predicts
reflectance from rough diffuse surfaces for the entire<br />
hemisphere of source and sensor directions. The model<br />
takes into account complex physical phenomena such<br />
as masking, shadowing and interreflections between<br />
points on the surface facets. It can be viewed as a<br />
generalization of Lambert’s law. Today, it is widely<br />
used in computer graphics and animation for rendering
rough surfaces. It also has important implications for<br />
human vision and computer vision problems, such as<br />
shape from shading, photometric stereo, etc.<br />
Formulation<br />
The surface roughness model used in the derivation of the Oren-Nayar model is the microfacet model, proposed by Torrance
and Sparrow,[2] which assumes the surface to be composed of long symmetric V-cavities. Each cavity consists of two
planar facets. The roughness of the surface is specified using a probability function for the distribution of facet slopes.
In particular, the Gaussian distribution is often used, and thus the variance of the Gaussian distribution, σ², is a
measure of the roughness of the surface, with σ ranging from 0 to 1.
In the Oren-Nayar reflectance model, each facet is assumed to be Lambertian in reflectance.
[Figure: Aggregation of the reflection from rough surfaces.]
[Figure: Diagram of surface reflection.]
As shown in the diagram, given the radiance E_0 of the incoming light, the radiance L_r of the
reflected light, according to the Oren-Nayar model, is

    L_r = (ρ/π) · cos θ_i · ( A + B · max[0, cos(φ_i − φ_r)] · sin α · tan β ) · E_0

where

    A = 1 − 0.5 · σ² / (σ² + 0.33),
    B = 0.45 · σ² / (σ² + 0.09),
    α = max(θ_i, θ_r),
    β = min(θ_i, θ_r),

and ρ is the albedo of the surface, and σ is the roughness of the surface (ranging from 0 to 1). In the case of
σ = 0 (i.e., all facets in the same plane), we have A = 1 and B = 0, and thus the Oren-Nayar model simplifies to the
Lambertian model:

    L_r = (ρ/π) · cos θ_i · E_0
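The formulas above translate directly into code. Below is a minimal Python sketch of the model; the function name and argument conventions are illustrative, not taken from any standard library:

```python
import math

def oren_nayar(rho, sigma, theta_i, theta_r, phi_i, phi_r, E0=1.0):
    """Radiance reflected by a rough diffuse surface (Oren-Nayar model).

    rho    -- albedo of the surface
    sigma  -- roughness of the surface, in [0, 1]
    theta_i, phi_i -- polar/azimuth angles of the incident direction
    theta_r, phi_r -- polar/azimuth angles of the viewing direction
    E0     -- radiance of the incoming light
    """
    s2 = sigma * sigma
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha = max(theta_i, theta_r)
    beta = min(theta_i, theta_r)
    return (rho / math.pi) * math.cos(theta_i) * (
        A + B * max(0.0, math.cos(phi_i - phi_r)) * math.sin(alpha) * math.tan(beta)
    ) * E0
```

For σ = 0 the bracketed term collapses to A = 1 and B = 0, so the function returns the Lambertian value (ρ/π) · cos θ_i · E0, as in the simplification above.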
Results<br />
Here is a real image of a matte vase illuminated from the viewing direction, along with versions rendered using the<br />
Lambertian and Oren-Nayar models. It shows that the Oren-Nayar model predicts the diffuse reflectance for rough<br />
surfaces more accurately than the Lambertian model.<br />
[Figure: Plot of the brightness of the rendered images, compared with the measurements on a cross section of the real vase.]
Here are rendered images of a sphere using the Oren-Nayar model, corresponding to different surface roughnesses (i.e. different values of σ).
Connection with other microfacet reflectance models
• Oren-Nayar model: rough opaque diffuse surfaces; each facet is Lambertian (diffuse)
• Torrance-Sparrow model: rough opaque specular surfaces (glossy surfaces); each facet is a mirror (specular)
• Microfacet model for refraction [3]: rough transparent surfaces; each facet is made of glass (transparent)
References
[1] M. Oren and S. K. Nayar, "Generalization of Lambert's Reflectance Model (http://www1.cs.columbia.edu/CAVE/publications/pdfs/Oren_SIGGRAPH94.pdf)". SIGGRAPH, pp. 239-246, Jul. 1994
[2] Torrance, K. E. and Sparrow, E. M., Theory for off-specular reflection from roughened surfaces. J. Opt. Soc. Am. 57, 9 (Sep. 1967), 1105-1114
[3] B. Walter, et al., "Microfacet Models for Refraction through Rough Surfaces (http://www.cs.cornell.edu/~srm/publications/EGSR07-btdf.html)". EGSR 2007.
External links
• The official project page for the Oren-Nayar model (http://www1.cs.columbia.edu/CAVE/projects/oren/) at Shree Nayar's CAVE research group webpage (http://www.cs.columbia.edu/CAVE/)
Painter's algorithm<br />
The painter's algorithm, also known as a priority fill, is one of the simplest solutions to the visibility problem in<br />
3D computer graphics. When projecting a 3D scene onto a 2D plane, it is necessary at some point to decide which
polygons are visible, and which are hidden.<br />
The name "painter's algorithm" refers to the technique employed by many painters of painting distant parts of a scene
before nearer parts, thereby covering some areas of the distant parts. The painter's algorithm sorts all the
polygons in a scene by their depth and then paints them in this order, farthest to closest. It will paint over the parts<br />
that are normally not visible — thus solving the visibility problem — at the cost of having painted invisible areas of<br />
distant objects.<br />
[Figure: The distant mountains are painted first, followed by the closer meadows; finally, the closest objects in this scene, the trees, are painted.]
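The sort-and-paint procedure described above can be sketched in a few lines of Python; the data layout and names are illustrative:

```python
def painters_algorithm(polygons, framebuffer):
    """Paint polygons back-to-front so nearer ones overwrite farther ones.

    polygons    -- list of (depth, pixels, color); larger depth = farther away
    framebuffer -- dict mapping (x, y) -> color, mutated in place
    """
    # Sort farthest-first; nearer polygons are painted later and overwrite.
    for depth, pixels, color in sorted(polygons, key=lambda p: p[0], reverse=True):
        for xy in pixels:
            framebuffer[xy] = color
    return framebuffer
```

Because nearer polygons are painted last, they overwrite farther ones wherever they overlap; hidden pixels are still painted, which is the inefficiency discussed later in this article.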
The algorithm can fail in some cases, including cyclic overlap or<br />
piercing polygons. In the case of cyclic overlap, as shown in the figure<br />
to the right, Polygons A, B, and C overlap each other in such a way<br />
that it is impossible to determine which polygon is above the others. In<br />
this case, the offending polygons must be cut to allow sorting. Newell's<br />
algorithm, proposed in 1972, provides a method for cutting such<br />
polygons. Numerous methods have also been proposed in the field of<br />
computational geometry.<br />
The case of piercing polygons arises when one polygon intersects<br />
another. As with cyclic overlap, this problem may be resolved by<br />
cutting the offending polygons.<br />
In basic implementations, the painter's algorithm can be inefficient. It<br />
forces the system to render each point on every polygon in the visible<br />
set, even if that polygon is occluded in the finished scene. This means<br />
that, for detailed scenes, the painter's algorithm can overly tax the<br />
computer hardware.<br />
[Figure: Overlapping polygons can cause the algorithm to fail.]
A reverse painter's algorithm is sometimes used, in which objects nearest to the viewer are painted first — with<br />
the rule that paint must never be applied to parts of the image that are already painted. In a computer graphics system,
this can be very efficient, since it is not necessary to calculate the colors (using lighting, texturing and such) for parts<br />
of the more distant scene that are hidden by nearby objects. However, the reverse algorithm suffers from many of the<br />
same problems as the standard version.<br />
These and other flaws with the algorithm led to the development of Z-buffer techniques, which can be viewed as a<br />
development of the painter's algorithm, by resolving depth conflicts on a pixel-by-pixel basis, reducing the need for a<br />
depth-based rendering order. Even in such systems, a variant of the painter's algorithm is sometimes employed. As<br />
Z-buffer implementations generally rely on fixed-precision depth-buffer registers implemented in hardware, there is<br />
scope for visibility problems due to rounding error. These are overlaps or gaps at joins between polygons. To avoid<br />
this, some graphics engine implementations "overrender", drawing the affected edges of both polygons in the order
given by the painter's algorithm. This means that some pixels are actually drawn twice (as in the full painter's algorithm)
but this happens on only small parts of the image and has a negligible performance effect.<br />
References
• Foley, James; van Dam, Andries; Feiner, Steven K.; Hughes, John F. (1990). Computer Graphics: Principles and Practice. Reading, MA, USA: Addison-Wesley. p. 1174. ISBN 0-201-12110-7.
Parallax mapping<br />
Parallax mapping (also called offset<br />
mapping or virtual displacement<br />
mapping) is an enhancement of the bump<br />
mapping or normal mapping techniques<br />
applied to textures in 3D rendering
applications such as video games. To the<br />
end user, this means that textures such as<br />
stone walls will have more apparent depth<br />
and thus greater realism with less of an<br />
influence on the performance of the<br />
simulation. Parallax mapping was<br />
introduced by Tomomichi Kaneko et al. [1] in<br />
2001.<br />
[Figure: Example of parallax mapping. The walls are textured with parallax maps. Screenshot taken from one of the base examples of the open source Irrlicht 3D engine.]
Parallax mapping is implemented by displacing the texture coordinates at a point on the rendered polygon by a<br />
function of the view angle in tangent space (the angle relative to the surface normal) and the value of the height map<br />
at that point. At steeper view-angles, the texture coordinates are displaced more, giving the illusion of depth due to<br />
parallax effects as the view changes.<br />
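The coordinate displacement described above amounts to a single offset along the tangent-space view direction, scaled by the height value. A minimal Python sketch follows; the scale parameter and names are illustrative:

```python
def parallax_offset(u, v, view_ts, height, scale=0.05):
    """Shift texture coordinates (u, v) by a view-dependent parallax offset.

    view_ts -- normalized view direction in tangent space (x, y, z), z > 0
    height  -- height-map value at (u, v), in [0, 1]
    scale   -- artist-tuned depth of the effect
    """
    vx, vy, vz = view_ts
    # At grazing angles vz is small, so the offset (and apparent depth) grows.
    du = height * scale * vx / vz
    dv = height * scale * vy / vz
    return u + du, v + dv
```

Viewed head-on (view direction along the surface normal) the offset vanishes, while oblique views shift the sampled texel, which is what produces the parallax illusion.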
Parallax mapping described by Kaneko is a single step process that does not account for occlusion. Subsequent<br />
enhancements have been made to the algorithm incorporating iterative approaches to allow for occlusion and<br />
accurate silhouette rendering. [2]<br />
Steep parallax mapping<br />
Steep parallax mapping is one name for the class of algorithms that trace rays against heightfields. The idea is to<br />
walk along a ray that has entered the heightfield's volume, finding the intersection point of the ray with the
heightfield. This closest intersection determines which part of the heightfield is truly visible. Relief mapping and parallax
occlusion mapping are other common names for these techniques.<br />
Interval mapping improves on the usual binary search done in relief mapping by creating a line between known<br />
inside and outside points and choosing the next sample point by intersecting this line with a ray, rather than using the<br />
midpoint as in a traditional binary search.<br />
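The ray walk described in this section can be sketched as a fixed-step march through a one-dimensional heightfield. The step count and names are illustrative; real implementations work in 2D texture space and often refine the hit with a binary or secant search, as interval mapping does:

```python
def steep_parallax(heightfield, u, view_ts, steps=32):
    """March a ray down through a heightfield and return the u coordinate
    where it first dips below the surface (the visible intersection).

    heightfield -- function u -> height in [0, 1] (1 = surface top)
    view_ts     -- tangent-space view direction (x, z), z > 0
    """
    vx, vz = view_ts
    layer = 1.0                 # current height layer, starting at the top
    d_layer = 1.0 / steps       # vertical step per iteration
    du = (vx / vz) / steps      # horizontal step per iteration
    # Step until the ray drops below the heightfield (or exits the volume).
    while layer > 0.0 and heightfield(u) < layer:
        u += du
        layer -= d_layer
    return u
```

A surface of maximal height is hit immediately (no offset), while a completely flat, low surface lets the ray march across the whole volume, giving the largest offset.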
References
[1] Kaneko, T., et al., 2001. Detailed Shape Representation with Parallax Mapping (http://vrsj.t.u-tokyo.ac.jp/ic-at/ICAT2003/papers/01205.pdf). In Proceedings of ICAT 2001, pp. 205-208.
[2] Tatarchuk, N., 2005. Practical Dynamic Parallax Occlusion Mapping (http://developer.amd.com/media/gpu_assets/Tatarchuk-ParallaxOcclusionMapping-Sketch-print.pdf). SIGGRAPH presentation
External links
• Comparison from the Irrlicht Engine: With Parallax mapping (http://www.irrlicht3d.org/images/parallaxmapping.jpg) vs. Without Parallax mapping (http://www.irrlicht3d.org/images/noparallaxmapping.jpg)
• Parallax mapping implementation in DirectX, forum topic (http://www.gamedev.net/community/forums/topic.asp?topic_id=387447)
• Parallax Mapped Bullet Holes (http://cowboyprogramming.com/2007/01/05/parallax-mapped-bullet-holes/) - Details the algorithm used for F.E.A.R. style bullet holes.
• Interval Mapping (http://graphics.cs.ucf.edu/IntervalMapping/)
• Parallax Mapping with Offset Limiting (http://jerome.jouvie.free.fr/OpenGl/Projects/Shaders.php)
• Steep Parallax Mapping (http://graphics.cs.brown.edu/games/SteepParallax/index.html)
Particle system<br />
The term particle system refers to a computer graphics technique to
simulate certain fuzzy phenomena, which are otherwise very hard to<br />
reproduce with conventional rendering techniques. Examples of such<br />
phenomena which are commonly replicated using particle systems<br />
include fire, explosions, smoke, moving water, sparks, falling leaves,<br />
clouds, fog, snow, dust, meteor tails, hair, fur, grass, or abstract visual<br />
effects like glowing trails, magic spells, etc.<br />
While in most cases particle systems are implemented in three<br />
dimensional graphics systems, two dimensional particle systems may
also be used under some circumstances.<br />
Typical implementation<br />
Typically a particle system's position and motion in 3D space are
controlled by what is referred to as an emitter. The emitter acts as the<br />
source of the particles, and its location in 3D space determines where
they are generated and whence they proceed. A regular 3D mesh
object, such as a cube or a plane, can be used as an emitter. The emitter<br />
has attached to it a set of particle behavior parameters. These<br />
parameters can include the spawning rate (how many particles are<br />
generated per unit of time), the particles' initial velocity vector (the<br />
direction they are emitted upon creation), particle lifetime (the length<br />
of time each individual particle exists before disappearing), particle<br />
color, and many more. It is common for all or most of these parameters<br />
to be "fuzzy" — instead of a precise numeric value, the artist specifies<br />
a central value and the degree of randomness allowable on either side<br />
of the center (i.e. the average particle's lifetime might be 50 frames<br />
±20%). When using a mesh object as an emitter, the initial velocity<br />
vector is often set to be normal to the individual face(s) of the object,<br />
making the particles appear to "spray" directly from each face.<br />
A typical particle system's update loop (which is performed for each<br />
frame of animation) can be separated into two distinct stages, the<br />
parameter update/simulation stage and the rendering stage.<br />
[Figure: A particle system used to simulate a fire, created in 3dengfx.]
[Figure: Ad-hoc particle system used to simulate a galaxy, created in 3dengfx.]
[Figure: A particle system used to simulate a bomb explosion, created in particleIllusion.]

Simulation stage
During the simulation stage, the number of new particles that must be created is calculated based on spawning rates<br />
and the interval between updates, and each of them is spawned in a specific position in 3D space based on the
emitter's position and the spawning area specified. Each of the particle's parameters (i.e. velocity, color, etc.) is<br />
initialized according to the emitter's parameters. At each update, all existing particles are checked to see if they have<br />
exceeded their lifetime, in which case they are removed from the simulation. Otherwise, the particles' position and<br />
other characteristics are advanced based on some sort of physical simulation, which can be as simple as translating<br />
their current position, or as complicated as performing physically accurate trajectory calculations which take into<br />
account external forces (gravity, friction, wind, etc.). It is common to perform some sort of collision detection<br />
between particles and specified 3D objects in the scene to make the particles bounce off of or otherwise interact with
obstacles in the environment. Collisions between particles are rarely used, as they are computationally expensive and<br />
not really useful for most simulations.<br />
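The spawn, age-out, and advance steps described above can be sketched as follows in Python; the class layout, parameter values, and "fuzzy" velocity ranges are illustrative:

```python
import random

class ParticleSystem:
    """Minimal emitter: spawns particles, ages them out, applies gravity."""

    def __init__(self, spawn_rate, lifetime, gravity=-9.8):
        self.spawn_rate = spawn_rate   # particles spawned per unit of time
        self.lifetime = lifetime       # seconds each particle lives
        self.gravity = gravity
        self.particles = []            # each particle: dict of pos, vel, age

    def update(self, dt):
        # Spawn new particles based on the rate and the elapsed interval.
        for _ in range(int(self.spawn_rate * dt)):
            self.particles.append({
                "pos": [0.0, 0.0, 0.0],
                # "Fuzzy" initial velocity: central value plus randomness.
                "vel": [random.uniform(-0.5, 0.5), 5.0, random.uniform(-0.5, 0.5)],
                "age": 0.0,
            })
        # Remove particles that have exceeded their lifetime.
        self.particles = [p for p in self.particles if p["age"] <= self.lifetime]
        # Advance survivors with simple Euler integration under gravity.
        for p in self.particles:
            p["vel"][1] += self.gravity * dt
            for i in range(3):
                p["pos"][i] += p["vel"][i] * dt
            p["age"] += dt
```

Calling `update` once per frame keeps the particle count in equilibrium: each frame spawns a batch while the oldest batch ages out.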
Rendering stage<br />
After the update is complete, each particle is rendered, usually in the form of a textured billboarded quad (i.e. a<br />
quadrilateral that is always facing the viewer). However, this is not necessary; a particle may be rendered as a single<br />
pixel in small resolution/limited processing power environments. Particles can be rendered as Metaballs in off-line<br />
rendering; isosurfaces computed from particle-metaballs make quite convincing liquids. Finally, 3D mesh objects
can "stand in" for the particles — a snowstorm might consist of a single 3D snowflake mesh being duplicated and
rotated to match the positions of thousands or millions of particles.<br />
Snowflakes versus hair<br />
Particle systems can be either animated or static; that is, the lifetime of each particle can either be distributed over<br />
time or rendered all at once. The consequence of this distinction is the difference between the appearance of "snow"<br />
and the appearance of "hair."<br />
The term "particle system" itself often brings to mind only the animated aspect, which is commonly used to create<br />
moving particulate simulations — sparks, rain, fire, etc. In these implementations, each frame of the animation<br />
contains each particle at a specific position in its life cycle, and each particle occupies a single point position in<br />
space.<br />
However, if the entire life cycle of each particle is rendered simultaneously, the result is static particles —
strands of material that show the particles' overall trajectory, rather than point particles. These strands can be used to<br />
simulate hair, fur, grass, and similar materials. The strands can be controlled with the same velocity vectors, force<br />
fields, spawning rates, and deflection parameters that animated particles obey. In addition, the rendered thickness of<br />
the strands can be controlled and in some implementations may be varied along the length of the strand. Different<br />
combinations of parameters can impart stiffness, limpness, heaviness, bristliness, or any number of other properties.<br />
The strands may also use texture mapping to vary the strands' color, length, or other properties across the emitter<br />
surface.
[Figure: A cube emitting 5000 animated particles, obeying a "gravitational" force in the negative Y direction.]
[Figure: The same cube emitter rendered using static particles, or strands.]

Artist-friendly particle system tools
Particle systems can be created and modified natively in many 3D modeling and rendering packages including
Cinema 4D, Lightwave, Houdini, Maya, XSI, 3D Studio Max and Blender. These editing programs allow artists to
have instant feedback on how a particle system will look with properties and constraints that they specify. There is<br />
also plug-in software available that provides enhanced particle effects; examples include AfterBurn and RealFlow<br />
(for liquids). Compositing software such as Combustion or specialized, particle-only software such as Particle Studio<br />
and particleIllusion can be used for the creation of particle systems for film and video.<br />
Developer-friendly particle system tools<br />
Particle systems code that can be included in game engines, digital content creation systems, and effects applications<br />
can be written from scratch or downloaded. One free implementation is The Particle Systems API [1] (Development<br />
ended in 2008). Another for the XNA framework is the Dynamic Particle System Framework [2] . Havok provides<br />
multiple particle system APIs. Their Havok FX API focuses especially on particle system effects. Ageia provides a<br />
particle system and other game physics API that is used in many games, including Unreal Engine 3 games. In<br />
February 2008, Ageia was bought by Nvidia.<br />
External links
• Particle Systems: A Technique for Modeling a Class of Fuzzy Objects [3] — William T. Reeves (ACM Transactions on Graphics, April 1983)
• The Particle Systems API [1] — David K. McAllister
• The ocean spray in your face. [4] — Jeff Lander (Game Developer, July 1998)
• Building an Advanced Particle System [5] — John van der Burg (Gamasutra, June 2000)
• Particle Engine Using Triangle Strips [6] — Jeff Molofee (NeHe)
• Designing an Extensible Particle System using C++ and Templates [7] — Kent Lai (GameDev)
• Repository of public 3D particle scripts in LSL Second Life format [8] — Ferd Frederix
References
[1] http://particlesystems.org/
[2] http://www.xnaparticles.com/
[3] http://portal.acm.org/citation.cfm?id=357320
[4] http://www.double.co.nz/dust/col0798.pdf
[5] http://www.gamasutra.com/view/feature/3157/building_an_advanced_particle_.php
[6] http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=19
[7] http://archive.gamedev.net/archive/reference/articles/article1982.html
[8] http://secondlife.mitsi.com/cgi/llscript.plx?Category=Particles
Path tracing<br />
Path tracing is a computer graphics
rendering technique that attempts to<br />
simulate the physical behaviour of<br />
light as closely as possible. It is a<br />
generalisation of conventional<br />
Whitted-style ray tracing, tracing rays<br />
from the virtual camera through<br />
several bounces on or through objects.<br />
The image quality provided by path<br />
tracing is usually superior to that of<br />
images produced using conventional<br />
rendering methods at the cost of much<br />
greater computation requirements.<br />
Path tracing naturally simulates many<br />
effects that have to be specifically<br />
added to other methods (conventional<br />
ray tracing or scanline rendering), such<br />
as soft shadows, depth of field, motion<br />
blur, caustics, ambient occlusion, and<br />
indirect lighting. Implementation of a<br />
renderer including these effects is<br />
correspondingly simpler.<br />
[Figure: A simple scene showing the soft phenomena simulated with path tracing.]
Due to its accuracy and unbiased nature, path tracing is used to generate reference images when testing the quality of<br />
other rendering algorithms. In order to get high quality images from path tracing, a large number of rays must be<br />
traced to avoid visible artifacts in the form of noise.
History<br />
Further information: Rendering, Chronology of important published ideas<br />
The rendering equation and its use in computer graphics was presented by James Kajiya in 1986.[1]
This presentation contained what was probably the first description of the path tracing algorithm. A decade later,
Lafortune suggested many refinements, including bidirectional path tracing.[2]
Metropolis light transport, a method of perturbing previously found paths in order to increase performance for<br />
difficult scenes, was introduced in 1997 by Eric Veach and Leonidas J. Guibas.<br />
More recently, CPUs and GPUs have become powerful enough to render images more quickly, causing more
widespread interest in path tracing algorithms. Tim Purcell first presented a global illumination algorithm running on
a GPU in 2002.[3] In February 2009, Austin Robison of Nvidia demonstrated the first commercial
implementation of a path tracer running on a GPU,[4] and other implementations have followed, such as
that of Vladimir Koylazov in August 2009.[5] This was aided by the maturing of GPGPU
programming toolkits such as CUDA and OpenCL and GPU ray tracing SDKs such as OptiX.
Description<br />
In the real world, many small amounts of light are emitted from light sources, and travel in straight lines (rays) from<br />
object to object, changing colour and intensity, until they are absorbed (possibly by an eye or camera). This process<br />
is simulated by path tracing, except that the paths are traced backwards, from the camera to the light. The<br />
inefficiency arises in the random nature of the bounces from many surfaces, as it is usually quite unlikely that a path<br />
will intersect a light. As a result, most traced paths do not contribute to the final image.<br />
This behaviour is described mathematically by the rendering equation, which is the equation that path tracing<br />
algorithms try to solve.<br />
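The random sampling that makes path tracing converge slowly can be illustrated on the simplest hemisphere integral appearing in the rendering equation: estimating the integral of cos θ over the hemisphere (whose exact value is π) by averaging uniformly sampled directions. This is a self-contained Python sketch, not part of any renderer:

```python
import math
import random

def estimate_cosine_integral(n_samples, seed=42):
    """Monte Carlo estimate of the hemisphere integral of cos(theta).

    The exact value is pi. Directions are sampled uniformly over the
    hemisphere (pdf = 1 / (2*pi) per unit solid angle), and for uniform
    hemisphere sampling cos(theta) is itself uniform on [0, 1], so each
    sample contributes cos(theta) / pdf = cos(theta) * 2*pi.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        cos_theta = rng.random()
        total += cos_theta * 2.0 * math.pi
    return total / n_samples
```

The estimate's error shrinks only as 1/sqrt(n_samples), which is exactly why a path-traced image needs thousands of samples per pixel before the noise becomes acceptable.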
Path tracing is not simply ray tracing with infinite recursion depth. In conventional ray tracing, lights are sampled<br />
directly when a diffuse surface is hit by a ray. In path tracing, a new ray is randomly generated within the<br />
hemisphere of the object and then traced until it hits a light — possibly never. This type of path can hit many diffuse<br />
surfaces before interacting with a light.<br />
A simple path tracing pseudocode might look something like this:<br />
  Color TracePath(Ray r, int depth) {
      if (depth == MaxDepth)
          return Black;  // Bounced enough times.

      r.FindNearestObject();
      if (r.hitSomething == false)
          return Black;  // Nothing was hit.

      Material m = r.thingHit->material;
      Color emittance = m.emittance;

      // Pick a random direction from here and keep going.
      Ray newRay;
      newRay.origin = r.pointWhereObjWasHit;
      newRay.direction = RandomUnitVectorInHemisphereOf(r.normalWhereObjWasHit);

      // Attenuate by the cosine-weighted BRDF of a diffuse surface.
      float cos_omega = DotProduct(newRay.direction, r.normalWhereObjWasHit);
      Color BRDF = m.reflectance * cos_omega;

      // Recursively trace the reflected ray and gather its contribution.
      Color reflected = TracePath(newRay, depth + 1);
      return emittance + (BRDF * reflected);
  }
In the above example, if every surface of a closed space emitted and reflected (0.5, 0.5, 0.5), then every pixel in the
image would be white.
Bidirectional path tracing<br />
In order to accelerate the convergence of images, bidirectional algorithms trace paths in both directions. In the<br />
forward direction, rays are traced from light sources until they are too faint to be seen or strike the camera. In the<br />
reverse direction (the usual one), rays are traced from the camera until they strike a light or too many bounces<br />
("depth") have occurred. This approach normally results in an image that converges much more quickly than using<br />
only one direction.<br />
Veach and Guibas give a more accurate description:[6]
These methods generate one subpath starting at a light source and another starting at the lens, then they<br />
consider all the paths obtained by joining every prefix of one subpath to every suffix of the other. This<br />
leads to a family of different importance sampling techniques for paths, which are then combined to<br />
minimize variance.<br />
Performance<br />
A path tracer continuously samples pixels of an image. The image starts to become recognisable after only a few<br />
samples per pixel, perhaps 100. However, for the image to "converge" and reduce noise to acceptable levels usually<br />
takes around 5000 samples for most images, and many more for pathological cases. This can take hours or days<br />
depending on scene complexity and hardware and software performance. Newer GPU implementations promise
1-10 million samples per second on modern hardware, producing acceptably noise-free images in
seconds or minutes. Noise is particularly a problem for animations, giving them a normally-unwanted "film-grain"<br />
quality of random speckling.<br />
Metropolis light transport obtains more important samples first, by slightly modifying previously-traced successful<br />
paths. This can result in a lower-noise image with fewer samples.<br />
Renderer performance is quite difficult to measure fairly. One approach is to measure "Samples per second", or the<br />
number of paths that can be traced and added to the image each second. This varies considerably between scenes and<br />
also depends on the "path depth", or how many times a ray is allowed to bounce before it is abandoned. It also<br />
depends heavily on the hardware used. Finally, one renderer may generate many low quality samples, while another<br />
may converge faster using fewer high-quality samples.
Scattering distribution functions<br />
The reflective properties (amount, direction and colour) of surfaces are<br />
modelled using BRDFs. The equivalent for transmitted light (light that<br />
goes through the object) are BTDFs. A path tracer can take full<br />
advantage of complex, carefully modelled or measured distribution<br />
functions, which control the appearance ("material", "texture" or
"shading" in computer graphics terms) of an object.
Notes<br />
1. Kajiya, J. T., The rendering equation (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.63.1402&rep=rep1&type=pdf),<br />
Proceedings of the 13th annual conference on Computer <strong>graphics</strong> and interactive techniques, ACM, 1986.<br />
2. Lafortune, E., Mathematical Models and Monte Carlo Algorithms for Physically Based Rendering<br />
(http://www.graphics.cornell.edu/~eric/thesis/index.html), PhD thesis, 1996.<br />
3. Purcell, T. J.; Buck, I.; Mark, W.; and Hanrahan, P., "Ray Tracing on Programmable Graphics Hardware", Proc.<br />
SIGGRAPH 2002, 703–712. See also Purcell, T., Ray tracing on a stream processor<br />
(http://graphics.stanford.edu/papers/tpurcell_thesis/), PhD thesis, 2004.<br />
4. Robison, Austin, "Interactive Ray Tracing on the GPU and NVIRT Overview"<br />
(http://realtimerendering.com/downloads/NVIRT-Overview.pdf), slide 37, I3D 2009.<br />
5. Vray demo (http://www.youtube.com/watch?v=eRoSFNRQETg); other examples include Octane Render,<br />
Arion, and LuxRender.<br />
6. Veach, E., and Guibas, L. J., Metropolis light transport (http://graphics.stanford.edu/papers/metro/metro.pdf).<br />
In SIGGRAPH '97 (August 1997), pp. 65–76.<br />
7. This "Introduction to Global Illumination" (http://www.thepolygoners.com/tutorials/GIIntro/GIIntro.htm)<br />
has some good example images, demonstrating the image noise, caustics and indirect lighting properties of<br />
images rendered with path tracing methods. It also discusses possible performance improvements in some detail.<br />
8. SmallPt (http://www.kevinbeason.com/smallpt/) is an educational path tracer by Kevin Beason. It uses 99<br />
lines of C++ (including scene description). This page has a good set of examples of noise resulting from this<br />
technique.
Per-pixel lighting<br />
In computer <strong>graphics</strong>, per-pixel lighting is commonly used to refer to a set of methods for computing illumination at<br />
each rendered pixel of an image. These generally produce more realistic images than vertex lighting, which only<br />
calculates illumination at each vertex of a <strong>3D</strong> model and then interpolates the resulting values to calculate the<br />
per-pixel color values.<br />
Per-pixel lighting is commonly combined with other computer <strong>graphics</strong> techniques to help improve render quality,<br />
including bump mapping, specularity, Phong shading, and shadow volumes.<br />
Real-time applications, such as computer games, which use modern <strong>graphics</strong> cards, will normally implement<br />
per-pixel lighting algorithms using pixel shaders. Per-pixel lighting is also performed on the CPU in many high-end<br />
commercial rendering applications which typically do not render at interactive framerates.<br />
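The difference between vertex lighting and per-pixel lighting can be sketched with a simple diffuse (Lambertian) term; the vectors below are illustrative, and a real implementation would run this in a pixel shader:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambert(normal, light_dir):
    # Diffuse intensity: max(0, N . L)
    return max(0.0, sum(a * b for a, b in zip(normal, light_dir)))

light = normalize((0.0, 1.0, 1.0))
n0 = normalize((-1.0, 0.0, 1.0))   # vertex normals at the two ends of an edge
n1 = normalize(( 1.0, 0.0, 1.0))
t = 0.5                            # midpoint of the edge

# Vertex lighting: shade at each vertex, then interpolate the resulting colors.
vertex_lit = (1 - t) * lambert(n0, light) + t * lambert(n1, light)

# Per-pixel lighting: interpolate the normal, renormalize, then shade.
mid_normal = normalize(tuple((1 - t) * a + t * b for a, b in zip(n0, n1)))
pixel_lit = lambert(mid_normal, light)
print(vertex_lit, pixel_lit)   # the per-pixel result is brighter at the midpoint
```

Interpolating colors flattens the lighting across the edge, while interpolating normals captures the brighter response where the surface faces the light.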
Phong reflection model<br />
The Phong reflection model (also called Phong illumination or Phong lighting) is an empirical model of the local<br />
illumination of points on a surface. In <strong>3D</strong> computer <strong>graphics</strong>, it is sometimes ambiguously referred to as Phong<br />
shading, in particular if the model is used in combination with the interpolation method of the same name and in the<br />
context of pixel shaders or other places where a lighting calculation can be referred to as “shading”.<br />
History<br />
The Phong reflection model was developed by Bui Tuong Phong at the University of Utah, who published it in his<br />
1973 Ph.D. dissertation. [1] [2] It was published in conjunction with a method for interpolating the calculation for each<br />
individual pixel that is rasterized from a polygonal surface model; the interpolation technique is known as Phong<br />
shading, even when it is used with a reflection model other than Phong's. Phong's methods were considered radical at<br />
the time of their introduction, but have evolved into a baseline shading method for many rendering applications.<br />
Phong's methods have proven popular due to their generally efficient use of computation time per rendered pixel.<br />
Description<br />
Phong reflection is an empirical model of local illumination. It describes the way a surface reflects light as a<br />
combination of the diffuse reflection of rough surfaces with the specular reflection of shiny surfaces. It is based on<br />
Bui Tuong Phong's informal observation that shiny surfaces have small intense specular highlights, while dull<br />
surfaces have large highlights that fall off more gradually. The model also includes an ambient term to account for<br />
the small amount of light that is scattered about the entire scene.
Visual illustration of the Phong equation: here the light is white, the ambient and diffuse colors are both blue, and the specular color is white,<br />
reflecting a small part of the light hitting the surface, but only in very narrow highlights. The intensity of the diffuse component varies with the<br />
direction of the surface, and the ambient component is uniform (independent of direction).<br />
For each light source in the scene, components i_s and i_d are defined as the intensities (often as RGB values) of the<br />
specular and diffuse components of the light sources, respectively. A single term i_a controls the ambient lighting; it<br />
is sometimes computed as a sum of contributions from all light sources.<br />
For each material in the scene, the following parameters are defined:<br />
k_s: specular reflection constant, the ratio of reflection of the specular term of incoming light<br />
k_d: diffuse reflection constant, the ratio of reflection of the diffuse term of incoming light (Lambertian<br />
reflectance)<br />
k_a: ambient reflection constant, the ratio of reflection of the ambient term present in all points in the scene<br />
rendered<br />
α: a shininess constant for this material, which is larger for surfaces that are smoother and more<br />
mirror-like. When this constant is large the specular highlight is small.<br />
Furthermore, "lights" is defined as the set of all light sources, L_m as the direction vector from the point on the<br />
surface toward each light source (m specifies the light source), N as the normal at this point on the surface, R_m<br />
as the direction that a perfectly reflected ray of light would take from this point on the surface, and V as the<br />
direction pointing towards the viewer (such as a virtual camera).<br />
Then the Phong reflection model provides an equation for computing the illumination of each surface point I_p:<br />
I_p = k_a i_a + Σ_{m ∈ lights} [ k_d (L̂_m · N̂) i_{m,d} + k_s (R̂_m · V̂)^α i_{m,s} ]<br />
where the direction vector R̂_m is calculated as the reflection of L̂_m on the surface characterized by the surface<br />
normal N̂ using:<br />
R̂_m = 2 (L̂_m · N̂) N̂ − L̂_m<br />
and the hats indicate that the vectors are normalized. The diffuse term is not affected by the viewer direction (V̂).<br />
The specular term is large only when the viewer direction (V̂) is aligned with the reflection direction R̂_m. Their<br />
alignment is measured by the α-th power of the cosine of the angle between them. The cosine of the angle between the<br />
normalized vectors R̂_m and V̂ is equal to their dot product. When α is large, in the case of a nearly mirror-like<br />
reflection, the specular highlight will be small, because any viewpoint not aligned with the reflection will have a<br />
cosine less than one which rapidly approaches zero when raised to a high power.<br />
Although the above formulation is the common way of presenting the Phong reflection model, each term should only<br />
be included if the term's dot product is positive. (Additionally, the specular term should only be included if the dot<br />
product of the diffuse term is positive.)
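A direct, single-light and single-channel transcription of the model might look like the following sketch; a real implementation would sum over all lights and evaluate each RGB channel separately:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(L, N):
    # Direction of a perfectly reflected ray: R = 2(L.N)N - L
    d = dot(L, N)
    return tuple(2 * d * n - l for n, l in zip(N, L))

def phong(normal, to_light, to_viewer, k_a, k_d, k_s, alpha, i_a, i_d, i_s):
    """Phong illumination at one surface point for a single light and channel."""
    N, L, V = normalize(normal), normalize(to_light), normalize(to_viewer)
    illumination = k_a * i_a
    ln = dot(L, N)
    if ln > 0:                       # include the diffuse term only if L.N > 0
        illumination += k_d * ln * i_d
        rv = dot(reflect(L, N), V)
        if rv > 0:                   # include the specular term only if R.V > 0
            illumination += k_s * (rv ** alpha) * i_s
    return illumination
```

Note that both positivity checks from the text appear as guards: the specular term is evaluated only when the diffuse dot product is itself positive.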
When the color is represented as RGB values, as often is the case in computer <strong>graphics</strong>, this equation is typically<br />
modeled separately for the R, G and B intensities, allowing different reflection constants k_s, k_d and k_a for the<br />
different color channels.<br />
Computationally more efficient alterations<br />
When implementing the Phong reflection model, there are a number of methods for approximating the model, rather<br />
than implementing the exact formulas, which can speed up the calculation; for example, the Blinn–Phong reflection<br />
model is a modification of the Phong reflection model, which is more efficient if the viewer and the light source are<br />
treated to be at infinity.<br />
Another approximation [3] also addresses the computation of the specular term, since the calculation of the power term<br />
may be computationally expensive. Considering that the specular term should be taken into account only if its dot<br />
product is positive, it can be approximated by realizing that<br />
(R̂ · V̂)^α ≈ (1 − βλ)^n for 1 − βλ > 0,<br />
for a sufficiently large, fixed integer n (typically 4 will be enough), where β = α/n is a<br />
real number (not necessarily an integer) and λ = 1 − R̂ · V̂. The value λ can be further approximated as<br />
λ ≈ (1/2)|R̂ − V̂|²; this squared distance between the vectors R̂ and V̂ is much less sensitive to<br />
normalization errors in those vectors than is Phong's dot-product-based λ.<br />
The value n can be chosen to be a fixed power of 2, n = 2^b, where b is a small integer; then the expression<br />
(1 − βλ)^n can be efficiently calculated by squaring (1 − βλ) b times. Here the shininess parameter is β,<br />
proportional to the original parameter α.<br />
This method substitutes a few multiplications for a variable exponentiation.<br />
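A sketch of this approximation next to the exact power, with n = 16 as an arbitrary choice of the fixed power of two:

```python
def spec_exact(r_dot_v, alpha):
    # Phong's specular factor: max(0, R.V)^alpha
    return max(0.0, r_dot_v) ** alpha

def spec_fast(r_dot_v, alpha, n=16):
    # Approximation: max(0, 1 - beta*lam)^n with lam = 1 - R.V and beta = alpha/n.
    lam = 1.0 - r_dot_v
    base = max(0.0, 1.0 - (alpha / n) * lam)
    while n > 1:          # n is a power of two: compute base**n by repeated squaring
        base *= base
        n //= 2
    return base

print(spec_exact(0.95, 8), spec_fast(0.95, 8))   # close near the highlight
```

Near the highlight (R·V close to 1) the two agree closely, and away from it both fall to zero, which is exactly where the approximation needs to be accurate.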
Inverse Phong reflection model<br />
The Phong reflection model in combination with Phong shading is an approximation of shading of objects in real<br />
life. This means that the Phong equation can relate the shading seen in a photograph with the surface normals of the<br />
visible object. Inverse refers to the wish to estimate the surface normals given a rendered image, natural or<br />
computer-made.<br />
The Phong reflection model contains many parameters, such as the surface diffuse reflection parameter k_d (albedo),<br />
which may vary within the object. Thus the normals of an object in a photograph can only be determined by<br />
introducing additional information such as the number of lights, the light directions and the reflection parameters.<br />
For example, suppose we have a cylindrical object, for instance a finger, and wish to calculate the normal<br />
N = [n_x, n_z] on a line on the object. We assume only one light, no specular reflection, and uniform known<br />
(approximated) reflection parameters. We can then simplify the Phong equation to:<br />
I_p(x) = C_a + C_d (L(x) · N(x))<br />
with a constant C_a equal to the ambient light and a constant C_d equal to the diffuse reflection. We can rewrite<br />
the equation to:<br />
(I_p(x) − C_a) / C_d = L(x) · N(x)<br />
which can be rewritten for a line through the cylindrical object as:<br />
(I_p(x) − C_a) / C_d = L_x n_x(x) + L_z n_z(x)<br />
For instance, if the light direction is 45 degrees above the object, L = [0.71, 0.71], we get two equations with two<br />
unknowns:<br />
(I_p(x) − C_a) / C_d = 0.71 n_x(x) + 0.71 n_z(x)<br />
1 = n_x(x)² + n_z(x)²<br />
Because of the powers of two in the equation there are two possible solutions for the normal direction. Thus some<br />
prior information of the geometry is needed to define the correct normal direction. The normals are directly related to<br />
angles of inclination of the line on the object surface. Thus the normals allow the calculation of the relative surface<br />
heights of the line on the object using a line integral, if we assume a continuous surface.<br />
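Under the assumptions above (one light at 45 degrees, known ambient and diffuse constants, unit-length normals), recovering the two candidate normals from a single observed intensity reduces to a quadratic; a minimal sketch:

```python
import math

def solve_normal(I_p, C_a, C_d, Lx=0.71, Lz=0.71):
    # Simplified Phong: (I_p - C_a) / C_d = Lx*n_x + Lz*n_z, with n_x^2 + n_z^2 = 1.
    c = (I_p - C_a) / C_d
    s = c / Lx                 # n_x + n_z (light at 45 degrees, so Lx == Lz)
    disc = 2.0 - s * s         # discriminant from the unit-length constraint
    if disc < 0:
        raise ValueError("intensity inconsistent with a unit normal")
    r = math.sqrt(disc)
    # Two possible solutions for the normal, as noted in the text:
    return ((s + r) / 2, (s - r) / 2), ((s - r) / 2, (s + r) / 2)
```

For an intensity produced by the normal [0, 1] this returns both [0, 1] and the mirrored [1, 0], illustrating why prior geometric information is needed to pick the correct one.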
If the object is not cylindrical, we have three unknown normal values N = [n_x, n_y, n_z]. Then the two equations<br />
still allow the normal to rotate around the view vector, thus additional constraints are needed from prior geometric<br />
information. For instance in face recognition those geometric constraints can be obtained using principal component<br />
analysis (PCA) on a database of depth-maps of faces, allowing only surface normals solutions which are found in a<br />
normal population. [4]<br />
Applications<br />
As already implied, the Phong reflection model is often used together with Phong shading to shade surfaces in <strong>3D</strong><br />
computer <strong>graphics</strong> software. Apart from this, it may also be used for other purposes. For example, it has been used to<br />
model the reflection of thermal radiation from the Pioneer probes in an attempt to explain the Pioneer anomaly. [5]<br />
References<br />
[1] B. T. Phong, "Illumination for computer generated pictures", Communications of the ACM 18 (1975), no. 6, 311–317.<br />
[2] University of Utah School of Computing, http://www.cs.utah.edu/school/history/#phong-ref<br />
[3] Lyon, Richard F. (August 2, 1993). "Phong Shading Reformulation for Hardware Renderer Simplification"<br />
(http://dicklyon.com/tech/Graphics/Phong_TR-Lyon.pdf). Retrieved 7 March 2011.<br />
[4] Boom, B. J.; Spreeuwers, L. J.; Veldhuis, R. N. J. (September 2009). "Model-Based Illumination Correction for Face Images in<br />
Uncontrolled Scenarios". Lecture Notes in Computer Science 5702 (2009): 33–40. doi:10.1007/978-3-642-03767-2.<br />
[5] F. Francisco, O. Bertolami, P. J. S. Gil, J. Páramos. "Modelling the reflective thermal contribution to the acceleration of the Pioneer<br />
spacecraft". arXiv:1103.5222.<br />
Phong shading<br />
Phong shading refers to an interpolation technique for surface shading in <strong>3D</strong> computer <strong>graphics</strong>. It is also called<br />
Phong interpolation [1] or normal-vector interpolation shading. [2] Specifically, it interpolates surface normals across<br />
rasterized polygons and computes pixel colors based on the interpolated normals and a reflection model. Phong<br />
shading may also refer to the specific combination of Phong interpolation and the Phong reflection model.<br />
History<br />
Phong shading and the Phong reflection model were developed by Bui Tuong Phong at the University of Utah, who<br />
published them in his 1973 Ph.D. dissertation. [3] [4] Phong's methods were considered radical at the time of their<br />
introduction, but have evolved into a baseline shading method for many rendering applications. Phong's methods<br />
have proven popular due to their generally efficient use of computation time per rendered pixel.<br />
Phong interpolation<br />
Phong shading improves upon Gouraud shading and provides a better approximation of the shading of a smooth<br />
surface. Phong shading assumes a smoothly varying surface normal vector. The Phong interpolation method works<br />
better than Gouraud shading when applied to a reflection model that has small specular highlights such as the Phong<br />
reflection model.<br />
Phong shading interpolation example<br />
The most serious problem with Gouraud shading occurs when specular highlights are found in the middle of a large<br />
polygon. Since these specular highlights are absent from the polygon's vertices and Gouraud shading interpolates<br />
based on the vertex colors, the specular highlight will be missing from the polygon's interior. This problem is fixed<br />
by Phong shading.<br />
Unlike Gouraud shading, which interpolates colors across polygons, in Phong shading a normal vector is linearly<br />
interpolated across the surface of the polygon from the polygon's vertex normals. The surface normal is interpolated<br />
and normalized at each pixel and then used in a reflection model, e.g. the Phong reflection model, to obtain the final<br />
pixel color. Phong shading is more computationally expensive than Gouraud shading since the reflection model must<br />
be computed at each pixel instead of at each vertex.<br />
In modern <strong>graphics</strong> hardware, variants of this algorithm are implemented using pixel or fragment shaders.<br />
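The missed-highlight problem and its fix can be sketched numerically: interpolating vertex specular colors (Gouraud) misses a highlight in the middle of an edge, while interpolating and renormalizing the normal (Phong) recovers it. The normals and shininess below are illustrative:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def specular(n, alpha=50):
    # Light and viewer both along +z; reflection R = 2(L.N)N - L
    L = V = (0.0, 0.0, 1.0)
    d = dot(L, n)
    R = tuple(2 * d * ni - li for ni, li in zip(n, L))
    return max(0.0, dot(R, V)) ** alpha

n0 = normalize((-0.5, 0.0, 1.0))   # vertex normals tilt away from the viewer
n1 = normalize(( 0.5, 0.0, 1.0))

# Gouraud: shade at the vertices, then interpolate the resulting colors.
gouraud_mid = 0.5 * specular(n0) + 0.5 * specular(n1)
# Phong: interpolate and renormalize the normal, then shade.
phong_mid = specular(normalize(tuple(0.5 * (a + b) for a, b in zip(n0, n1))))
print(gouraud_mid, phong_mid)   # Gouraud is nearly zero; Phong finds the highlight
```

The highlight lies exactly between the vertices, so it contributes nothing to the vertex colors Gouraud interpolates, but the interpolated normal points straight at the viewer and Phong shading reproduces it.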
Phong reflection model<br />
Phong shading may also refer to the specific combination of Phong interpolation and the Phong reflection model,<br />
which is an empirical model of local illumination. It describes the way a surface reflects light as a combination of the<br />
diffuse reflection of rough surfaces with the specular reflection of shiny surfaces. It is based on Bui Tuong Phong's<br />
informal observation that shiny surfaces have small intense specular highlights, while dull surfaces have large<br />
highlights that fall off more gradually. The reflection model also includes an ambient term to account for the small<br />
amount of light that is scattered about the entire scene.
Visual illustration of the Phong equation: here the light is white, the ambient and diffuse colors are both blue, and the specular color is white,<br />
reflecting a small part of the light hitting the surface, but only in very narrow highlights. The intensity of the diffuse component varies with the<br />
direction of the surface, and the ambient component is uniform (independent of direction).<br />
References<br />
[1] Watt, Alan H.; Watt, Mark (1992). Advanced Animation and Rendering Techniques: Theory and Practice. Addison-Wesley Professional.<br />
pp. 21–26. ISBN 978-0201544121.<br />
[2] Foley, James D.; van Dam, Andries; Feiner, Steven K.; Hughes, John F. (1996). Computer Graphics: Principles and Practice (2nd ed. in C).<br />
Addison-Wesley Publishing Company. pp. 738–739. ISBN 0-201-84840-6.<br />
[3] B. T. Phong, "Illumination for computer generated pictures", Communications of the ACM 18 (1975), no. 6, 311–317.<br />
[4] University of Utah School of Computing, http://www.cs.utah.edu/school/history/#phong-ref<br />
Photon mapping<br />
In computer <strong>graphics</strong>, photon mapping is a two-pass global illumination algorithm developed by Henrik Wann<br />
Jensen that solves the rendering equation. Rays from the light source and rays from the camera are traced<br />
independently until some termination criterion is met, then they are connected in a second step to produce a radiance<br />
value. It is used to realistically simulate the interaction of light with different objects. Specifically, it is capable of<br />
simulating the refraction of light through a transparent substance such as glass or water, diffuse interreflection<br />
between illuminated objects, the subsurface scattering of light in translucent materials, and some of the effects<br />
caused by particulate matter such as smoke or water vapor. It can also be extended to more accurate simulations of<br />
light such as spectral rendering.<br />
Unlike path tracing, bidirectional path tracing and Metropolis light transport, photon mapping is a "biased" rendering<br />
algorithm, which means that averaging many renders using this method does not converge to a correct solution to the<br />
rendering equation. However, since it is a consistent method, a correct solution can be achieved by increasing the<br />
number of photons.
Effects<br />
Caustics<br />
Light refracted or reflected causes patterns called caustics, usually<br />
visible as concentrated patches of light on nearby surfaces. For<br />
example, as light rays pass through a wine glass sitting on a table, they<br />
are refracted and patterns of light are visible on the table. Photon<br />
mapping can trace the paths of individual photons to model where<br />
these concentrated patches of light will appear.<br />
Diffuse interreflection<br />
Diffuse interreflection is apparent when light from one diffuse object is reflected onto another. Photon mapping is<br />
particularly adept at handling this effect because the algorithm reflects photons from one surface to another based on<br />
that surface's bidirectional reflectance distribution function (BRDF), and thus light from one object striking another<br />
is a natural result of the method. Diffuse interreflection was first modeled using radiosity solutions. Photon mapping<br />
differs, though, in that it separates the light transport from the nature of the geometry in the scene. Color bleed is an<br />
example of diffuse interreflection.<br />
A model of a wine glass ray traced with photon mapping to show caustics.<br />
Subsurface scattering<br />
Subsurface scattering is the effect evident when light enters a material and is scattered before being absorbed or<br />
reflected in a different direction. Subsurface scattering can accurately be modeled using photon mapping. This was<br />
the original way Jensen implemented it; however, the method becomes slow for highly scattering materials, and<br />
bidirectional surface scattering reflectance distribution functions (BSSRDFs) are more efficient in these situations.<br />
Usage<br />
Construction of the photon map (1st pass)<br />
With photon mapping, light packets called photons are sent out into the scene from the light sources. Whenever a<br />
photon intersects with a surface, the intersection point and incoming direction are stored in a cache called the photon<br />
map. Typically, two photon maps are created for a scene: one especially for caustics and a global one for other light.<br />
After a photon intersects a surface, the material gives a probability for it to be either reflected, absorbed, or<br />
transmitted/refracted. A Monte Carlo method called Russian roulette is used to choose one of these actions. If the photon is<br />
absorbed, no new direction is given, and tracing for that photon ends. If the photon reflects, the surface's<br />
bidirectional reflectance distribution function is used to determine a new direction. Finally, if the photon is<br />
transmitting, a different function for its direction is given depending upon the nature of the transmission.<br />
Once the photon map is constructed (or during construction), it is typically arranged in a manner that is optimal for<br />
the k-nearest neighbor algorithm, as photon look-up time depends on the spatial distribution of the photons. Jensen<br />
advocates the usage of kd-trees. The photon map is then stored on disk or in memory for later usage.
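A toy sketch of this first pass, with the scene intersection and BRDF sampling reduced to stubs so that only the Russian-roulette structure remains:

```python
import random
from collections import namedtuple

Hit = namedtuple("Hit", "point p_reflect p_transmit")

def intersect_scene(position, direction):
    # Stub: a diffuse surface that is always hit; a real tracer returns
    # None when the photon leaves the scene.
    return Hit(point=position, p_reflect=0.5, p_transmit=0.0)

def sample_brdf(rng):
    # Stub for BRDF-proportional direction sampling.
    return (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(0, 1))

def trace_photon(photon_map, rng, max_bounces=16):
    position, direction, power = (0.0, 0.0, 1.0), (0.0, 0.0, -1.0), 1.0
    for _ in range(max_bounces):
        hit = intersect_scene(position, direction)
        if hit is None:
            return                              # photon left the scene
        position = hit.point
        photon_map.append((position, direction, power))   # store the hit in the map
        xi = rng.random()                       # Russian roulette
        if xi < hit.p_reflect:
            direction = sample_brdf(rng)        # reflected: keep tracing
        elif xi < hit.p_reflect + hit.p_transmit:
            pass                                # transmitted (unused in this toy scene)
        else:
            return                              # absorbed: tracing for this photon ends

photon_map = []
rng = random.Random(1)
for _ in range(100):
    trace_photon(photon_map, rng)
```

Note that with Russian roulette the stored photon power is not attenuated at each bounce; the random termination itself accounts for the energy loss, so stored photons keep similar power.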
Rendering (2nd pass)<br />
In this step of the algorithm, the photon map created in the first pass is used to estimate the radiance of every pixel of<br />
the output image. For each pixel, the scene is ray traced until the closest surface of intersection is found.<br />
At this point, the rendering equation is used to calculate the surface radiance leaving the point of intersection in the<br />
direction of the ray that struck it. To facilitate efficiency, the equation is decomposed into four separate factors:<br />
direct illumination, specular reflection, caustics, and soft indirect illumination.<br />
For an accurate estimate of direct illumination, a ray is traced from the point of intersection to each light source. As<br />
long as a ray does not intersect another object, the light source is used to calculate the direct illumination. For an<br />
approximate estimate of indirect illumination, the photon map is used to calculate the radiance contribution.<br />
Specular reflection can be, in most cases, calculated using ray tracing procedures (as it handles reflections well).<br />
The contribution to the surface radiance from caustics is calculated using the caustics photon map directly. The<br />
number of photons in this map must be sufficiently large, as the map is the only source for caustics information in<br />
the scene.<br />
For soft indirect illumination, radiance is calculated using the photon map directly. This contribution, however, does<br />
not need to be as accurate as the caustics contribution and thus uses the global photon map.<br />
Calculating radiance using the photon map<br />
In order to calculate surface radiance at an intersection point, one of the cached photon maps is used. The steps are:<br />
1. Gather the N nearest photons using the nearest neighbor search function on the photon map.<br />
2. Let S be the sphere that contains these N photons.<br />
3. For each photon, divide the amount of flux (real photons) that the photon represents by the area of S and multiply<br />
by the BRDF applied to that photon.<br />
4. The sum of those results for each photon represents total surface radiance returned by the surface intersection in<br />
the direction of the ray that struck it.<br />
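The four steps can be sketched as follows; a linear scan stands in for the kd-tree lookup, and a constant Lambertian BRDF of 1/π is assumed:

```python
import math

def radiance_estimate(photon_map, x, N, brdf=lambda direction: 1.0 / math.pi):
    """Estimate reflected radiance at point x from the N nearest photons.
    Each photon is (position, incoming_direction, flux)."""
    def dist2(photon):
        return sum((a - b) ** 2 for a, b in zip(photon[0], x))
    nearest = sorted(photon_map, key=dist2)[:N]   # step 1: gather N nearest photons
    r2 = dist2(nearest[-1])                       # step 2: radius^2 of the sphere S
    # steps 3-4: sum flux * BRDF over the photons, divided by the gather area
    return sum(flux * brdf(d) for _, d, flux in nearest) / (math.pi * r2)
```

Sorting the whole map is O(n log n) per query, which is why Jensen advocates a kd-tree; the estimate itself is unchanged by the lookup structure.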
Optimizations<br />
• To avoid emitting unneeded photons, the initial direction of the outgoing photons is often constrained. Instead of<br />
simply sending out photons in random directions, they are sent in the direction of a known object that is a desired<br />
photon manipulator to either focus or diffuse the light. There are many other refinements that can be made to the<br />
algorithm: for example, choosing the number of photons to send, and where and in what pattern to send them. It<br />
would seem that emitting more photons in a specific direction would cause a higher density of photons to be<br />
stored in the photon map around the position where the photons hit, and thus measuring this density would give<br />
an inaccurate value for irradiance. This is true; however, the algorithm used to compute radiance does not depend<br />
on irradiance estimates.<br />
• For soft indirect illumination, if the surface is Lambertian, then a technique known as irradiance caching may be<br />
used to interpolate values from previous calculations.<br />
• To avoid unnecessary collision testing in direct illumination, shadow photons can be used. During the photon<br />
mapping process, when a photon strikes a surface, in addition to the usual operations performed, a shadow photon<br />
is emitted in the same direction the original photon came from that goes all the way through the object. The next<br />
object it collides with causes a shadow photon to be stored in the photon map. Then during the direct illumination<br />
calculation, instead of sending out a ray from the surface to the light that tests collisions with objects, the photon<br />
map is queried for shadow photons. If none are present, then the object has a clear line of sight to the light source<br />
and additional calculations can be avoided.<br />
• To optimize image quality, particularly of caustics, Jensen recommends use of a cone filter. Essentially, the filter<br />
gives weight to photons' contributions to radiance depending on how far they are from ray-surface intersections.
This can produce sharper images.<br />
• Image space photon mapping [1] achieves real-time performance by computing the first and last scattering using a<br />
GPU rasterizer.<br />
Variations<br />
• Although photon mapping was designed to work primarily with ray tracers, it can also be extended for use with<br />
scanline renderers.<br />
External links<br />
• Global Illumination using Photon Maps [2]<br />
• Realistic Image Synthesis Using Photon Mapping [3] ISBN 1-56881-147-0<br />
• Photon mapping introduction [4] from Worcester Polytechnic Institute<br />
• Bias in Rendering [5]<br />
• Siggraph Paper [6]<br />
References<br />
[1] http://research.nvidia.com/publication/hardware-accelerated-global-illumination-image-space-photon-mapping<br />
[2] http://graphics.ucsd.edu/~henrik/papers/photon_map/global_illumination_using_photon_maps_egwr96.pdf<br />
[3] http://graphics.ucsd.edu/~henrik/papers/book/<br />
[4] http://www.cs.wpi.edu/~emmanuel/courses/cs563/write_ups/zackw/photon_mapping/PhotonMapping.html<br />
[5] http://www.cgafaq.info/wiki/Bias_in_rendering<br />
[6] http://www.cs.princeton.edu/courses/archive/fall02/cs526/papers/course43sig02.pdf<br />
Photon tracing<br />
Photon tracing is a rendering method, similar to ray tracing and photon mapping, for creating highly realistic<br />
images.<br />
Rendering Method<br />
The method aims to simulate realistic photon behavior by using an adapted ray tracing method similar to photon<br />
mapping, by sending rays from the light source. However, unlike photon mapping, each ray keeps bouncing around<br />
until one of three things occurs:<br />
1. it is absorbed by any material.<br />
2. it leaves the rendering scene.<br />
3. it hits a special photo sensitive plane, similar to the film in cameras.
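The bounce-until-termination structure can be sketched with a toy one-dimensional "scene"; the event probabilities are arbitrary, and only the three termination cases mirror the list above:

```python
import random

class ToyScene:
    """Toy stand-in for ray-scene intersection: at each step a photon either
    escapes, reaches the sensor plane, is absorbed, or bounces again."""
    def __init__(self, rng):
        self.rng = rng
    def next_event(self, z):
        r = self.rng.random()
        if r < 0.1:
            return ("escape", z)
        if r < 0.4:
            return ("sensor", 0.0)
        if r < 0.6:
            return ("absorb", z)
        return ("bounce", z - 1.0)

def trace(scene, z=10.0, max_bounces=1000):
    # Keep bouncing until one of the three termination events occurs.
    for _ in range(max_bounces):
        event, z = scene.next_event(z)
        if event == "escape":
            return "left scene"        # 2. photon left the rendering scene
        if event == "sensor":
            return "sensed"            # 3. photon hit the photosensitive plane
        if event == "absorb":
            return "absorbed"          # 1. photon absorbed by a material
    return "absorbed"

rng = random.Random(7)
outcomes = [trace(ToyScene(rng)) for _ in range(1000)]
```

Only the "sensed" photons contribute to the image, which is the main reason the method is so expensive: most emitted photons end in the other two cases.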
Advantages and disadvantages<br />
This method has a number of advantages compared to other methods.<br />
• Global illumination and radiosity are automatic and nearly free.<br />
• Sub-surface scattering is simple and cheap.<br />
• True caustics are free.<br />
• There are no rendering artifacts if done right.<br />
• Fairly simple to code and implement using a regular ray tracer.<br />
• Simple to parallelize, even across multiple computers.<br />
Even though the image quality is superior, this method has one major drawback: render times. One of the first<br />
simulations, programmed in C by Richard Keene in 1991, took 100 Sun 1 computers operating at 1 MHz a month<br />
to render a single image. With modern computers it can take up to one day to compute a crude result for even the<br />
simplest scene.<br />
Shading methods<br />
Because the rendering method differs from both ray tracing and scan line rendering, photon tracing needs its own set<br />
of shaders.<br />
• Surface shader - dictates how the photon rays reflect or refract.<br />
• Absorption shader - tells the ray if the photon should be absorbed or not.<br />
• Emission shader - when called it emits a photon ray<br />
Renderers<br />
• [1] - A light simulation renderer similar to the experiment performed by Keene.<br />
Future<br />
With newer ray tracing hardware large rendering farms may be possible that can render images on a commercial<br />
level. Eventually even home computers will be able to render images using this method without any problem.<br />
External links<br />
• www.cpjava.net [1]<br />
References<br />
[1] http:/ / www. cpjava. net/ photonproj. html
Polygon<br />
Polygons are used in computer <strong>graphics</strong> to compose images that are three-dimensional in appearance. Usually (but<br />
not always) triangular, polygons arise when an object's surface is modeled, vertices are selected, and the object is<br />
rendered in a wire frame model. This is quicker to display than a shaded model; thus the polygons are a stage in<br />
computer animation. The polygon count refers to the number of polygons being rendered per frame.<br />
Competing methods for rendering polygons that avoid seams<br />
• Point<br />
  • Floating point<br />
  • Fixed point<br />
  • Polygon<br />
  • Because of rounding, every scanline has its own direction in space and may show its front or back side to the<br />
viewer.<br />
• Fraction (mathematics)<br />
  • Bresenham's line algorithm<br />
  • Polygons have to be split into triangles<br />
  • The whole triangle shows the same side to the viewer<br />
  • The point numbers from the transform and lighting stage have to be converted to fractions<br />
• Barycentric coordinates (mathematics)<br />
  • Used in ray tracing<br />
Potentially visible set<br />
Potentially Visible Sets are used to accelerate the rendering of <strong>3D</strong> environments. This is a form of occlusion culling,<br />
whereby a candidate set of potentially visible polygons is pre-computed, then indexed at run-time in order to<br />
quickly obtain an estimate of the visible geometry. The term PVS is sometimes used to refer to any occlusion culling<br />
algorithm (since in effect, this is what all occlusion algorithms compute), although in almost all the literature, it is<br />
used to refer specifically to occlusion culling algorithms that pre-compute visible sets and associate these sets with<br />
regions in space. In order to make this association, the camera view-space (the set of points from which the camera<br />
can render an image) is typically subdivided into (usually convex) regions and a PVS is computed for each region.<br />
Benefits vs. Cost<br />
The benefits of offloading visibility determination to a pre-process are:
• The application merely has to look up the pre-computed set for its current view position. This set may be further reduced via frustum culling. Computationally, this is far cheaper than computing occlusion-based visibility every frame.
• Within a frame, time is limited: only 1/60th of a second (assuming a 60 Hz frame rate) is available for visibility determination, rendering preparation (assuming graphics hardware), AI, physics, and whatever other application-specific code is required. In contrast, the offline pre-processing of a potentially visible set can take as long as required to compute accurate visibility.
The disadvantages are:<br />
• There are additional storage requirements for the PVS data.<br />
• Preprocessing times may be long or inconvenient.
• They cannot be used for completely dynamic scenes.
• The visible set for a region can in some cases be much larger than for a point.<br />
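The run-time side of the trade-off above can be sketched in a few lines. This is an illustrative example with an invented data layout (a grid of axis-aligned cells mapped to object ids), not a specific engine's API: the per-frame cost reduces to a cell lookup, after which the small candidate set can still be frustum-culled.

```python
# Illustrative sketch: run-time lookup of a precomputed PVS. The view space
# was subdivided offline into axis-aligned cells, each mapped to the ids of
# objects potentially visible from anywhere inside that cell.

CELL_SIZE = 10.0

# offline result: cell coordinates -> set of potentially visible object ids
pvs = {
    (0, 0): {"house", "tree"},
    (1, 0): {"tree", "bridge"},
}

def cell_of(x, y):
    """Map a camera position to the containing cell's grid coordinates."""
    return (int(x // CELL_SIZE), int(y // CELL_SIZE))

def visible_set(cam_x, cam_y):
    """Cheap per-frame lookup; the result may still be frustum-culled."""
    return pvs.get(cell_of(cam_x, cam_y), set())

print(sorted(visible_set(12.0, 3.0)))  # ['bridge', 'tree']
```

A camera anywhere inside cell (1, 0) gets the same candidate set, which is exactly why the set for a region can be larger than the set for a single point.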
Primary Problem<br />
The primary problem in PVS computation is then: given a set of polyhedral regions, compute for each region the set of polygons that can be visible from anywhere inside it.
There are various classifications of PVS algorithms with respect to the type of visibility set they compute. [1] [2]
Conservative algorithms<br />
These consistently overestimate visibility, so that no visible triangle may be omitted. The net result is that no image error is possible; however, visibility can be greatly overestimated, leading to inefficient rendering (due to the rendering of invisible geometry). The focus of research on conservative algorithms is maximizing occluder fusion in order to reduce this overestimation. The list of publications on this type of algorithm is extensive; good surveys on the topic include Cohen-Or et al. [2] and Durand. [3]
Aggressive algorithms<br />
These consistently underestimate visibility, so that no redundant (invisible) polygons exist in the PVS, although a polygon that is actually visible may be missed, leading to image errors. The focus of research on aggressive algorithms is reducing this potential error. [4] [5]
Approximate algorithms<br />
These can result in both redundancy and image error. [6]<br />
Exact algorithms<br />
These provide optimal visibility sets, with no image error and no redundancy. They are, however, complex to implement and typically run considerably slower than other PVS-based visibility algorithms. Teller computed exact visibility for a scene subdivided into cells and portals [7] (see also portal rendering).
The first general tractable 3D solutions were presented in 2002 by Nirenstein et al. [1] and Bittner. [8] Haumont et al. [9] improve significantly on the performance of these techniques. Bittner et al. [10] solve the problem for 2.5D urban scenes. Although not directly aimed at PVS computation, the work on the 3D Visibility Complex and 3D Visibility Skeleton by Durand [3] provides an excellent theoretical background on analytic visibility.
Visibility in 3D is inherently a four-dimensional problem. To tackle this, solutions are often formulated in Plücker coordinates, which effectively linearize the problem in a 5D projective space. Ultimately, these problems are solved with higher-dimensional constructive solid geometry.
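For reference, the Plücker linearization can be stated briefly. This is a sketch using one common convention, not taken from this article: the line through two points maps to a six-tuple of homogeneous coordinates, and line incidence becomes a bilinear condition.

```latex
% Line through points p and q in R^3 (one common convention):
\pi(p, q) \;=\; (d : m) \;=\; \bigl(\, q - p \;:\; p \times q \,\bigr) \;\in\; \mathbb{P}^5
% Two lines L_1 = (d_1 : m_1) and L_2 = (d_2 : m_2) are incident iff
d_1 \cdot m_2 \;+\; d_2 \cdot m_1 \;=\; 0
```

Because the incidence condition is linear in the coordinates of each line, sets of stabbing lines become intersections of half-spaces in this 5D projective space, which can then be manipulated with the constructive solid geometry operations mentioned above.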
Secondary Problems<br />
Some interesting secondary problems include:<br />
• Compute an optimal subdivision in order to maximize visibility culling. [7] [11] [12]
• Compress the visible set data in order to minimize storage overhead. [13]<br />
Implementation Variants<br />
• It is often undesirable or inefficient to compute visibility at the triangle level. Graphics hardware prefers objects to be static and to remain in video memory; it is therefore generally better to compute visibility per object and to subdivide any objects that are individually too large. This adds conservativity, but the benefit is better hardware utilization and compression (since visibility data is now per object rather than per triangle).
• Computing cell or sector visibility is also advantageous, since by determining visible regions of space, rather than<br />
visible objects, it is possible to not only cull out static objects in those regions, but dynamic objects as well.<br />
References<br />
[1] S. Nirenstein, E. Blake, and J. Gain. Exact from-region visibility culling (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.131.7204), in Proceedings of the 13th Workshop on Rendering, pages 191–202, Eurographics Association, June 2002.
[2] Daniel Cohen-Or, Yiorgos Chrysanthou, Cláudio T. Silva, and Frédo Durand. A survey of visibility for walkthrough applications (http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.148.4589), IEEE TVCG, 9(3):412–431, July–September 2003.
[3] Frédo Durand. 3D Visibility: Analytical Study and Applications (http://people.csail.mit.edu/fredo/THESE/), PhD thesis, Université Joseph Fourier, Grenoble, France, July 1999. Strongly related to exact visibility computations.
[4] Shaun Nirenstein and Edwin Blake. Hardware Accelerated Visibility Preprocessing using Adaptive Sampling (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.64.3231), Rendering Techniques 2004: Proceedings of the 15th Eurographics Symposium on Rendering, 207–216, Norrköping, Sweden, June 2004.
[5] Peter Wonka, Michael Wimmer, Kaichi Zhou, Stefan Maierhofer, Gerd Hesina, Alexander Reshetov. Guided Visibility Sampling (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.91.5278), ACM Transactions on Graphics, 25(3):494–502, 2006. Proceedings of SIGGRAPH 2006.
[6] Craig Gotsman, Oded Sudarsky, and Jeffrey A. Fayman. Optimized occlusion culling using five-dimensional subdivision (http://www.cs.technion.ac.il/~gotsman/AmendedPubl/OptimizedOcclusion/optimizedOcclusion.pdf), Computers & Graphics, 23(5):645–654, October 1999.
[7] Seth Teller. Visibility Computations in Densely Occluded Polyhedral Environments (http://www.eecs.berkeley.edu/Pubs/TechRpts/1992/CSD-92-708.pdf), PhD dissertation, Berkeley, 1992.
[8] Jiri Bittner. Hierarchical Techniques for Visibility Computations (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.2.9886), PhD dissertation, Department of Computer Science and Engineering, Czech Technical University in Prague. Submitted October 2002, defended March 2003.
[9] Denis Haumont, Otso Mäkinen and Shaun Nirenstein. A Low Dimensional Framework for Exact Polygon-to-Polygon Occlusion Queries (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.66.6371), Rendering Techniques 2005: Proceedings of the 16th Eurographics Symposium on Rendering, 211–222, Konstanz, Germany, June 2005.
[10] Jiri Bittner, Peter Wonka, Michael Wimmer. Fast Exact From-Region Visibility in Urban Scenes (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.71.3271), in Proceedings of the Eurographics Symposium on Rendering 2005, pages 223–230.
[11] D. Haumont, O. Debeir and F. Sillion. Volumetric Cell-and-Portal Generation (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.163.6834), Computer Graphics Forum, 22(3):303–312, September 2003.
[12] Oliver Mattausch, Jiri Bittner, Michael Wimmer. Adaptive Visibility-Driven View Cell Construction (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.67.6705), in Proceedings of the Eurographics Symposium on Rendering 2006.
[13] Michiel van de Panne and A. James Stewart. Effective Compression Techniques for Precomputed Visibility (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.116.8940), Eurographics Workshop on Rendering, pages 305–316, June 1999.
External links<br />
Cited authors' pages (including publications):
• Jiri Bittner (http://www.cgg.cvut.cz/~bittner/)
• Daniel Cohen-Or (http://www.math.tau.ac.il/~dcor/)
• Fredo Durand (http://people.csail.mit.edu/fredo/)
• Denis Haumont (http://www.ulb.ac.be/polytech/sln/team/dhaumont/dhaumont.html)
• Shaun Nirenstein (http://www.nirenstein.com)
• Seth Teller (http://people.csail.mit.edu/seth/)
• Peter Wonka (http://www.public.asu.edu/~pwonka/)
Other links:
• Selected publications on visibility (http://artis.imag.fr/~Xavier.Decoret/bib/visibility/)
Precomputed Radiance Transfer<br />
Precomputed Radiance Transfer (PRT) is a computer graphics technique for rendering a scene in real time with complex light interactions precomputed to save time. Radiosity methods can be used to determine the diffuse lighting of a scene; PRT, however, offers a way to change the lighting environment dynamically.
In essence, PRT computes the illumination of a point as a linear combination of incident irradiance. An efficient method, such as spherical harmonics, must be used to encode this data.
When spherical harmonics are used to approximate the light transport function, only low-frequency effects can be handled with a reasonable number of parameters. Ren Ng extended this work to handle higher-frequency shadows by replacing spherical harmonics with non-linear wavelets.
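The "linear combination" above is what makes PRT fast at run time. In the following sketch (names and data are invented for illustration), each vertex stores a precomputed transfer vector of spherical-harmonic coefficients; relighting then reduces to a dot product between that vector and the SH projection of the current lighting environment.

```python
import numpy as np

# Illustrative sketch of SH-based PRT relighting: the expensive transport
# integral is baked offline into a per-vertex "transfer" vector; at run time
# the distant lighting is projected into the same SH basis and shading is a
# dot product per color channel.

N_COEFFS = 9  # three SH bands (l = 0..2): enough for low-frequency lighting

def shade_vertex(transfer, light_coeffs):
    """Relight one vertex as the inner product of two coefficient vectors."""
    return np.dot(transfer, light_coeffs)

# toy data: transfer vector baked offline, light projected at run time
transfer = np.zeros(N_COEFFS)
transfer[0] = 1.0   # only the constant (ambient) band, for illustration
light = np.zeros(N_COEFFS)
light[0] = 0.5
print(shade_vertex(transfer, light))  # 0.5
```

Changing the lighting only changes `light_coeffs`, so the whole scene can be relit each frame without redoing any transport computation, which is exactly the dynamic-lighting advantage over plain radiosity described above.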
Teemu Mäki-Patola gives a clear introduction to the topic based on the work of Peter-Pike Sloan et al. [1] At<br />
SIGGRAPH 2005, a detailed course on PRT was given. [2]<br />
References<br />
[1] Teemu Mäki-Patola (2003-05-05). "Precomputed Radiance Transfer" (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.131.6778) (PDF). Helsinki University of Technology. Retrieved 2008-02-25.
[2] Jan Kautz, Peter-Pike Sloan, Jaakko Lehtinen. "Precomputed Radiance Transfer: Theory and Practice" (http://www.cs.ucl.ac.uk/staff/j.kautz/PRTCourse/). SIGGRAPH 2005 Courses. Retrieved 2009-02-25.
• Peter-Pike Sloan, Jan Kautz, and John Snyder. "Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments". ACM Transactions on Graphics, Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pp. 527–536. New York, NY: ACM Press, 2002. (http://www.mpi-inf.mpg.de/~jnkautz/projects/prt/prtSIG02.pdf)
• Ren Ng, Ravi Ramamoorthi, and Pat Hanrahan. 2003. "All-Frequency Shadows Using Non-Linear Wavelet Lighting Approximation". ACM Transactions on Graphics, 22(3):376–381. (http://graphics.stanford.edu/papers/allfreq/allfreq.press.pdf)
Procedural generation 115<br />
Procedural generation<br />
Procedural generation is a widely used term in media production; it refers to content generated algorithmically rather than manually. Often, this means creating content on the fly rather than prior to distribution. It is commonly associated with computer graphics applications and video game level design.
Overview<br />
The term procedural refers to the process that computes a particular function. Fractals, an example of procedural generation, [1] dramatically express this concept, and a whole body of mathematics, fractal geometry, has evolved around them. Commonplace procedural content includes textures and meshes. Sound is often procedurally generated as well, with applications in both speech synthesis and music. It has been used to create compositions in various genres of electronic music by artists such as Brian Eno, who popularized the term "generative music". [2]
While software developers have applied procedural generation techniques for years, few products have employed<br />
this approach extensively. Procedurally generated elements have appeared in earlier video games: The Elder Scrolls<br />
II: Daggerfall randomly generates terrain and NPCs, creating a world roughly twice the actual size of the British<br />
Isles. Soldier of Fortune from Raven Software uses simple routines to detail enemy models. Avalanche Studios<br />
employed procedural generation to create a large and varied group of tropical islands in great detail for Just Cause.<br />
The modern demoscene uses procedural generation to package a great deal of audiovisual content into relatively<br />
small programs. Farbrausch is a team famous for such achievements, although many similar techniques were already<br />
implemented by The Black Lotus in the 1990s.<br />
Contemporary application<br />
Procedurally generated content such as textures and landscapes may exhibit variation, but the generation of a<br />
particular item or landscape must be identical from frame to frame. Accordingly, the functions used must be<br />
referentially transparent, always returning the same result for the same point, so that they may be called in any order<br />
and their results freely cached as necessary. This is similar to lazy evaluation in functional programming languages.<br />
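A referentially transparent generator, as described above, can be as simple as hashing the coordinate together with a fixed seed. The following is an illustrative sketch (the function and seed are invented for the example), showing why such functions can be called in any order and cached freely:

```python
import hashlib

# Illustrative sketch: a referentially transparent "noise" function. The
# value at a coordinate depends only on the coordinate and a fixed seed, so
# a terrain chunk can be generated in any order, regenerated identically on
# every frame, and its results cached without affecting correctness.

def noise(x, y, seed=42):
    """Deterministic pseudorandom value in [0, 1) for an integer lattice point."""
    h = hashlib.sha256(f"{seed}:{x}:{y}".encode()).digest()
    return int.from_bytes(h[:8], "big") / 2**64

# Same inputs always yield the same output, regardless of call order:
assert noise(10, 20) == noise(10, 20)
```

Contrast this with a stateful generator such as a shared random-number stream, where the value produced for a point would depend on how many points were generated before it.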
Video games<br />
The earliest computer games were severely limited by memory constraints. This forced content, such as maps, to be<br />
generated algorithmically on the fly: there simply wasn't enough space to store a large amount of pre-made levels<br />
and artwork. Pseudorandom number generators were often used with predefined seed values in order to create very<br />
large game worlds that appeared premade. For example, The Sentinel supposedly had 10,000 different levels stored<br />
in only 48 or 64 kilobytes. An extreme case was Elite, which was originally planned to contain a total of 2 48<br />
(approximately 282 trillion) galaxies with 256 solar systems each. The publisher, however, was afraid that such a<br />
gigantic universe would cause disbelief in players, and eight of these galaxies were chosen for the final version. [3]<br />
Other notable early examples include the 1985 game Rescue on Fractalus that used fractals to procedurally create in<br />
real time the craggy mountains of an alien planet and River Raid, the 1982 Activision game that used a<br />
pseudorandom number sequence generated by a linear feedback shift register in order to generate a scrolling maze of<br />
obstacles.<br />
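A linear feedback shift register of the kind mentioned above is tiny to implement, which is exactly why it suited early hardware. The sketch below is illustrative (a standard 16-bit Fibonacci LFSR, not River Raid's actual routine): one seed reproduces the same long bit sequence every run, so the "map" never needs to be stored.

```python
# Illustrative sketch: a 16-bit Fibonacci LFSR (taps at bits 16, 14, 13, 11).
# Each step XORs a few bits of the state to form a feedback bit, shifts the
# state, and yields one pseudorandom bit. The whole sequence is determined
# by the seed, so content derived from it appears premade without being stored.

def lfsr16(seed):
    state = seed & 0xFFFF
    while True:
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        yield state & 1

gen = lfsr16(0xACE1)
row = [next(gen) for _ in range(16)]  # same seed -> same obstacle row, every run
```

The per-step cost is a handful of shifts and XORs, cheap enough even for an Atari 2600-class machine to regenerate scenery on the fly.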
Today, most games include thousands of times as much data, in terms of memory, as algorithmic mechanics. For example, all of the buildings in the large game worlds of the Grand Theft Auto games have been individually designed and placed by artists. In a typical modern video game, content such as textures and character and environment models is created by artists beforehand, then rendered in the game engine. As the technical capabilities of computers and video game consoles increase, the amount of work required of artists also greatly increases. First, high-end gaming PCs and current-generation game consoles like the Xbox 360 and PlayStation 3 are capable of rendering scenes containing many very detailed objects with high-resolution textures in high definition. This means that artists must invest a great deal more time in creating a single character, vehicle, building, or texture, since gamers tend to expect ever more detailed environments.
Furthermore, the number of unique objects displayed in a video game is increasing. In addition to highly detailed<br />
models, players expect a variety of models that appear substantially different from one another. In older games, a<br />
single character or object model might have been used over and over again throughout a game. With the increased<br />
visual fidelity of modern games, however, it is very jarring (and threatens the suspension of disbelief) to see many<br />
copies of a single object, while the real world contains far more variety. Again, artists would be required to complete<br />
exponentially more work in order to create many different varieties of a particular object. The need to hire larger art<br />
staffs is one of the reasons for the rapid increase in game development costs.<br />
Some initial approaches to procedural synthesis attempted to solve these problems by shifting the burden of content generation from artists to programmers, who can create code that automatically generates different meshes according to input parameters. Although this still happens in some cases, it has been recognized that applying a purely procedural model is often hard at best, requiring huge amounts of time to evolve into a functional, usable and realistic-looking method. Instead of writing a procedure that builds content completely procedurally, it has proven much cheaper and more effective to rely on artist-created content for some details. For example, SpeedTree is middleware used to generate a large variety of trees procedurally, yet its leaf textures can be fetched from regular files, often representing digitally acquired real foliage. Other effective methods of generating hybrid content are to procedurally merge different pre-made assets or to procedurally apply distortions to them.
Supposing, however, that a single algorithm could be envisioned to generate a realistic-looking tree, the algorithm could be called to generate random trees, filling a whole forest at runtime instead of storing all the vertices required by the various models. This would save storage space and reduce the burden on artists, while providing a richer experience. The same method would require far more processing power; since CPUs are constantly increasing in speed, however, that is becoming less of a hurdle.
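The hybrid approach can also be applied at the placement level: rather than storing every tree, store one seed and regenerate the instance transforms from it at load time. This is an illustrative sketch with invented names and parameter ranges, not any particular engine's scheme:

```python
import random

# Illustrative sketch: regenerate a "forest" of instance transforms for
# pre-made tree models from a single stored seed. Only the seed ships with
# the game; the layout is identical on every run.

def forest(seed, count, extent=1000.0):
    rng = random.Random(seed)  # a local generator: reproducible, order-safe
    return [
        {"x": rng.uniform(0.0, extent),
         "y": rng.uniform(0.0, extent),
         "scale": rng.uniform(0.5, 1.5),      # vary size per instance
         "rotation": rng.uniform(0.0, 360.0)} # vary orientation per instance
        for _ in range(count)
    ]

assert forest(1234, 100) == forest(1234, 100)  # same seed, same forest
```

Per-instance scale and rotation jitter hides the repetition of a small set of artist-made models, addressing the "many copies of a single object" problem described earlier.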
A different problem is that it is not easy to develop a good algorithm for a single tree, let alone for a variety of<br />
species (compare Sumac, Birch, Maple). An additional caveat is that assembling a realistic-looking forest could not<br />
be done by simply assembling trees because in the real world there are interactions between the various trees which<br />
can dramatically change their appearance and distribution.<br />
In 2004, a PC first-person shooter called .kkrieger was released that made heavy use of procedural synthesis: while quite short and very simple, the advanced video effects were packed into just 96 kilobytes. In contrast, many modern games are released on DVDs and often exceed 2 gigabytes in size, more than 20,000 times larger. Naked Sky's RoboBlitz used procedural generation to maximize content in a downloadable file of less than 50 MB for Xbox Live Arcade. Will Wright's Spore also makes use of procedural synthesis.
In 2008, Valve Software released Left 4 Dead, a first-person shooter based on the Source engine that utilized<br />
procedural generation as a major game mechanic. The game featured a built-in artificial intelligence structure,<br />
dubbed the "Director," which analyzed player statistics and game states on the fly to provide dynamic experiences on<br />
each and every playthrough. Based on different player variables, such as remaining health, ammo, and number of<br />
players, the A.I. Director could potentially create or remove enemies and items so that any given match maintained<br />
an exciting and breakneck pace. Left 4 Dead 2, released in November 2009, expanded on this concept, introducing<br />
even more advanced mechanics to the A.I. Director, such as the ability to generate new paths for players to follow<br />
according to their individual statuses.<br />
An indie game that makes extensive use of procedural generation is Minecraft. In the game the initial state of the<br />
world is mostly random (with guidelines in order to generate Earth-like terrain), and new areas are generated<br />
whenever the player moves towards the edges of the world. This has the benefit that every time a new game is made,<br />
the world is completely different and will need a different method to be successful, adding replay value.
Another indie game that relies heavily on procedural generation is Dwarf Fortress. In the game, the whole world is generated, complete with its history, notable people, and monsters.
Film<br />
As in video games, procedural generation is often used in film to rapidly create visually interesting and accurate<br />
spaces. This comes in a wide variety of applications.<br />
One application is known as an "imperfect factory," where artists can rapidly generate a large number of similar<br />
objects. This accounts for the fact that, in real life, no two objects are ever exactly alike. For instance, an artist could<br />
model a product for a grocery store shelf, and then create an imperfect factory that would generate a large number of<br />
similar objects to populate the shelf.<br />
Noise is extremely important to the procedural workflow in film; the most widely used variety is Perlin noise. Noise refers to an algorithm that generates a patterned sequence of pseudorandom numbers.
Cellular automata methods of procedural generation<br />
Simple programs which generate complex output are a typical method of procedural generation. Typically, one starts<br />
with some simple initial conditions like an array of numbers. Then one applies simple rules to the array, which<br />
determine the next step in the evolution of the output—that is, the next array of numbers. The rules can be fixed, or<br />
they can be changing in time. One can let the program run, and given the right update rules and initial conditions,<br />
one can obtain a non-repeating evolution starting from an initial array.<br />
Cellular automata are discrete computer models of evolutionary behavior which start from an array of<br />
differently-colored cells, then apply update rules to determine the colors of the next array of cells. Typically one<br />
starts with a two-color program, and a random finite array of black and white cells. Then one defines a<br />
neighborhood—for the simplest case, just the two neighboring cells of any one cell—and creates a so-called "update<br />
rule" to determine what the next cell in the evolution will be. Typical elementary update rules are of the form, "If the<br />
current cell is black and the cells to the right and left are white, then the next cell in the evolution shall be white."<br />
Such simple rules, even in the simplest case of two colors, can produce a complex evolution (see Rule 30 and Rule<br />
110).<br />
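Rule 30, mentioned above, fits in a few lines of code. This is a minimal illustrative implementation (with wrap-around neighborhoods for simplicity): the rule number's eight bits encode the update table, and each new cell is determined by its three-cell neighborhood in the previous row.

```python
# Minimal elementary cellular automaton (Rule 30). The 8 bits of RULE give
# the next cell for each of the 8 possible (left, center, right) patterns.

RULE = 30

def step(cells):
    """Compute the next row; neighborhoods wrap around at the edges."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 15
row[7] = 1  # initial condition: a single black cell in the middle
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Starting from one black cell, the evolution quickly becomes irregular, which is the "complex output from simple rules" property the text describes.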
For game developers, this means that, using finite initial conditions and extremely simple rules, one can generate complex behavior. That is, content can be generated nearly spontaneously from virtually nothing, which is the idea behind procedural generation. Evolving cellular automata is one way of generating a large amount of content from a small amount of input.
Software examples<br />
Middleware<br />
• Acropora [4] - a procedural <strong>3D</strong> modeling software utilizing voxels to create organic objects and terrain.<br />
• Art of Illusion – an open source and free <strong>3D</strong> modeler, has an internal node-based procedural texture editor.<br />
• CityEngine [5] – a procedural <strong>3D</strong> modeling software, specialized in city modeling.<br />
• CityScape - procedural generation of <strong>3D</strong> cities, including overpasses and tunnels, from GIS data.<br />
• Filter Forge – an Adobe Photoshop plugin for designing procedural textures using node-based editing.<br />
• Grome – popular terrain and outdoor scenes modeler for games and simulation software.<br />
• Houdini - a procedural <strong>3D</strong> animation package. A free version of the software is available.<br />
• Allegorithmic Substance – a middleware and authoring software designed to create and generate procedural<br />
textures in games (used in RoboBlitz).
• Softimage - a <strong>3D</strong> computer <strong>graphics</strong> application that allows node-based procedural creation and deformation of<br />
geometry.<br />
• SpeedTree – a middleware product for procedurally generating trees.<br />
• Terragen – landscape generation software. Terragen 2 permits procedural generation of an entire world.<br />
• Ürban PAD [6] – a software for creating and generating procedural urban landscape in games.<br />
• World Machine [7] – a node-based terrain-generation program with a plugin system for writing new, complex nodes. It exports to Terragen, among other formats, for rendering, and includes internal texture-generation tools. [8]
Space simulations with procedural worlds and universes<br />
• Elite (1984) - Everything about the universe (planet positions, names, politics and general descriptions) is generated procedurally; Ian Bell has released the algorithms in C as Text Elite [9]
• StarFlight (1986)<br />
• Exile (1988) - Game levels were created in a pseudorandom fashion, as areas important to gameplay were<br />
generated.<br />
• Frontier: Elite II (1993) - Much as the game Elite had a procedural universe, so did its sequel.<br />
• Frontier: First Encounters (1995)<br />
• Mankind (1998) - a MMORTS where everything about the galaxy, systems names, planets, maps and resources is<br />
generated procedurally from a simple tga image.<br />
• Noctis (2002)<br />
• Infinity: The Quest for Earth (In development, not yet released)<br />
• Vega Strike - An open source game very similar to Elite.<br />
Games with procedural levels<br />
Arcade games<br />
• The Sentinel (1986) - Used procedural generation to create 10,000 unique levels.<br />
• Darwinia (2005) - Has procedural landscapes that allowed for greatly reduced game development time.<br />
Racing games<br />
• Fuel (2009) - Generates an open world through procedural techniques [10]<br />
Role-playing games<br />
• Captive (1990) generates (theoretically up to 65,535) game levels procedurally [11]<br />
• Virtual Hydlide (1995)<br />
• The Elder Scrolls II: Daggerfall (1996)<br />
• Diablo (1998) and Diablo II (2000) both use procedural generation for level design.<br />
• Torchlight, a Diablo clone differing mostly by art style and feel.<br />
• Dwarf Fortress procedurally generates a large game world, including civilization structures, a large world history,<br />
interactive geography including erosion and magma flows, ecosystems which react with each other and the game<br />
world. The process of initially generating the world can take up to half an hour even on a modern PC, and is then<br />
stored in text files reaching over 100MB to be reloaded whenever the game is played.<br />
• Dark Cloud and Dark Cloud 2 both generate game levels procedurally.<br />
• Hellgate: London (2007)<br />
• The Disgaea series of games use procedural level generation for the "Item World".<br />
• Realm of the Mad God<br />
• Nearly all roguelikes use this technique.<br />
Strategy games
• Majesty:The Fantasy Kingdom Sim (2000) - Uses procedural generation for all levels and scenarios.<br />
• Seven Kingdoms (1997) - Uses procedural generation for levels.<br />
• Xconq - an open source strategy game and game engine.<br />
• Minecraft - The game world is procedurally generated as the player explores it, with the full size possible<br />
stretching out to be nearly eight times the surface area of the Earth before running into technical limits. [12]<br />
• Frozen Synapse - Single-player levels are mostly randomly generated, with bits and pieces that are constant in every generation. Multiplayer and skirmish maps are randomly generated, and skirmish mode allows the player to change the algorithm used.
• Atom Zombie Smasher - Levels are generated randomly.<br />
Games of as yet unknown genre
• Subversion (TBA) - Uses procedural generation to create cities on a given terrain.<br />
Third-person shooters<br />
• Just Cause (2006) - Game area is over 250,000 acres (1,000 km²), created procedurally
Almost entirely procedural games<br />
• .kkrieger (2004)<br />
• Synth video game (2009) - 100% procedural <strong>graphics</strong> and levels<br />
Games with miscellaneous procedural effects<br />
• ToeJam & Earl (1991) - The random levels were procedurally generated.<br />
• The Elder Scrolls III: Morrowind (2002) - Water effects are generated on the fly with procedural animation by the<br />
technique demonstrated in NVIDIA's "Water Interaction" demo. [13]<br />
• RoboBlitz (2006) for XBox360 live arcade and PC (Textures generated on the fly via ProFX)<br />
• Spore (2008)<br />
• Left 4 Dead (2008) - Certain events, item locations, and number of enemies are procedurally generated according<br />
to player statistics.<br />
• Borderlands (2009) - The weapons, items and some levels are procedurally generated based on individual players'<br />
current level.<br />
• Left 4 Dead 2 (2009) - Certain areas of maps are randomly generated and weather effects are dynamically altered<br />
based on current situation.<br />
• Star Trek Online (2010) - Star Trek Online procedurally generates new races, new objects, star systems and<br />
planets for exploration. The player can save the coordinates of a system they find, so that they can return or let<br />
other players find the system.<br />
• Terraria (2011) - Terraria procedurally generates a 2D landscape for the player to explore.<br />
• Invaders: Corruption (2010) - A free, procedurally generated arena-shooter
References<br />
[1] "How does one get started with procedural generation?" (http://stackoverflow.com/questions/155069/how-does-one-get-started-with-procedural-generation). Stack Overflow.
[2] Brian Eno (June 8, 1996). "A talk delivered in San Francisco, June 8, 1996" (http://www.inmotionmagazine.com/eno1.html). In Motion Magazine. Retrieved 2008-11-07.
[3] Francis Spufford (October 18, 2003). "Masters of their universe" (http://www.guardian.co.uk/weekend/story/0,3605,1064107,00.html). The Guardian.
[4] http://www.voxelogic.com
[5] http://www.procedural.com
[6] http://www.gamr7.com
[7] http://www.world-machine.com
[8] http://www.world-machine.com/
[9] Ian Bell's Text Elite Page (http://www.iancgbell.clara.net/elite/text/index.htm)
[10] Jim Rossignol (February 24, 2009). "Interview: Codies on FUEL" (http://www.rockpapershotgun.com/2009/02/24/interview-codies-on-fuel/). Rock, Paper, Shotgun. Retrieved 2010-03-06.
[11] http://captive.atari.org/Technical/MapGen/Introduction.php
[12] http://notch.tumblr.com/post/458869117/how-saving-and-loading-will-work-once-infinite-is-in
[13] "NVIDIA Water Interaction Demo" (http://http.download.nvidia.com/developer/SDK/Individual_Samples/3dgraphics_samples.html#WaterInteraction). NVIDIA. 2003. Retrieved 2007-10-08.
External links<br />
• The Future Of Content (http://www.gamasutra.com/php-bin/news_index.php?story=5570) - Will Wright keynote on Spore & procedural generation at the Game Developers Conference 2005 (registration required to view video).<br />
• Darwinia (http://www.darwinia.co.uk/) - development diary (http://www.darwinia.co.uk/extras/development.html) on the procedural generation of terrains and trees.<br />
• Filter Forge tutorial at The Photoshop Roadmap (http://www.photoshoproadmap.com/Photoshop-blog/2006/08/30/creating-a-wet-and-muddy-rocks-texture/)<br />
• Procedural Graphics - an introduction by in4k (http://in4k.untergrund.net/index.php?title=Procedural_Graphics_-_an_introduction)<br />
• Texturing & Modeling: A Procedural Approach (http://cobweb.ecn.purdue.edu/~ebertd/book2e.html)<br />
• Ken Perlin's Discussion of Perlin Noise (http://www.noisemachine.com/talk1/)<br />
• Weisstein, Eric W., "Elementary Cellular Automaton (http://mathworld.wolfram.com/ElementaryCellularAutomaton.html)" from MathWorld.<br />
• The HVox Engine: Procedural Volumetric Terrains on the Fly (2004) (http://www.gpstraces.com/sven/HVox/hvox.news.html)<br />
• Procedural Content Generation Wiki (http://pcg.wikidot.com/): a community dedicated to documenting, analyzing, and discussing all forms of procedural content generation.<br />
• Procedural Trees and Procedural Fire in a Virtual World (http://software.intel.com/en-us/articles/procedural-trees-and-procedural-fire-in-a-virtual-world/): a white paper on creating procedural trees and procedural fire using the Intel Smoke framework<br />
• A Real-Time Procedural Universe (http://www.gamasutra.com/view/feature/3098/a_realtime_procedural_universe_.php): a tutorial on generating procedural planets in real time
Procedural texture<br />
A procedural texture is a computer-generated image created using an algorithm intended to produce a realistic<br />
representation of natural elements such as wood, marble, granite, metal, and stone.<br />
Usually, the natural look of the rendered result is achieved with fractal noise and turbulence functions, which<br />
serve as a numerical representation of the "randomness" found in nature.<br />
Solid texturing<br />
Solid texturing is a process in which the texture-generating function is evaluated at each visible surface point of<br />
the model. Traditionally these functions use Perlin noise as their basis function, but some simple functions may use<br />
more trivial methods, such as a sum of sinusoids. Solid textures are an alternative to the traditional 2D texture<br />
images which are applied to the surfaces of a model. It is a difficult and tedious task to get multiple 2D textures<br />
to form a consistent visual appearance on a model without it looking obviously tiled; solid textures were created<br />
specifically to solve this problem.<br />
A procedural floor grate texture generated with the texture editor Genetica. [1]<br />
Instead of editing images to fit a model, a function is used to evaluate the colour of the point being textured. Points<br />
are evaluated based on their 3D position, not their 2D surface position. Consequently, solid textures are unaffected<br />
by distortions of the surface parameter space, such as you might see near the poles of a sphere. Continuity between<br />
the surface parameterizations of adjacent patches is likewise not a concern. Solid textures will remain consistent<br />
and have features of constant size regardless of distortions in the surface coordinate system. [2]<br />
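As an illustration (not part of the original article), a solid texture can be sketched in Python. Here a sum of sinusoids stands in for Perlin noise, and the function names (`solid_texture`, `shade_point`) are invented for the example; only the 3D position of a point determines its colour, which is what makes the pattern immune to surface-parameterization distortions:

```python
import math

def solid_texture(x, y, z):
    """Toy solid-texture basis: a sum of sinusoids evaluated at a 3D
    point (a simple stand-in for Perlin noise). Returns a value in [0, 1]."""
    v = (math.sin(x * 3.1) + math.sin(y * 5.7) + math.sin(z * 2.3)) / 3.0
    return 0.5 * (v + 1.0)  # remap [-1, 1] -> [0, 1]

def shade_point(x, y, z):
    """Blend two illustrative 'wood' colours by the texture value at the
    3D position. Because only (x, y, z) matters, the pattern is unaffected
    by how the surface is parameterized."""
    t = solid_texture(x, y, z)
    light, dark = (0.8, 0.6, 0.4), (0.4, 0.25, 0.1)
    return tuple(l * t + d * (1.0 - t) for l, d in zip(light, dark))
```

A mesh would call `shade_point` with each surface point's position in object or shader space, never with its (u, v) coordinates.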
Cellular texturing<br />
Cellular texturing differs from the majority of other procedural texture generating techniques in that it does not<br />
depend on noise functions as its basis, although noise is often used to complement the technique. Cellular textures<br />
are based on feature points which are scattered over a three-dimensional space. These points are then used to split<br />
the space into small, randomly tiled regions called cells. These cells often look like "lizard scales", "pebbles", or<br />
"flagstones". Even though these regions are discrete, the cellular basis function itself is continuous and can be<br />
evaluated anywhere in space. [3]<br />
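A minimal Worley-style sketch of this idea in Python (an illustration added here, not from the original article; the helper names are invented): the basis value at a point is its distance to the nearest feature point, which is continuous everywhere, while the induced nearest-point cells are the discrete regions described above.

```python
import math
import random

def make_feature_points(n=32, seed=1):
    """Scatter n feature points in the unit cube (hypothetical setup)."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random(), rng.random()) for _ in range(n)]

def cellular_basis(p, feature_points):
    """Worley-style cellular basis: distance from p to the nearest
    feature point. Continuous everywhere in space."""
    return min(math.dist(p, f) for f in feature_points)

def cell_index(p, feature_points):
    """Which cell (nearest feature point) p falls into; these indices
    partition space into the discrete, randomly tiled regions."""
    return min(range(len(feature_points)),
               key=lambda i: math.dist(p, feature_points[i]))
```

Colouring each point by `cell_index` gives the "flagstone" look; shading by `cellular_basis` gives smooth bumps around each feature point.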
Genetic textures<br />
Genetic texture generation is a highly experimental approach to generating textures. The process is highly<br />
automated; a human is involved only to steer the eventual outcome. The flow of control usually has a computer<br />
generate a set of texture candidates. From these, a user picks a selection. The computer then generates another set<br />
of textures by mutating and crossing over elements of the user-selected textures [4] . For more information on how<br />
this mutation and crossover method works, see Genetic algorithm. The process continues until a texture suitable to<br />
the user is generated. This is not a commonly used method of generating textures, as it is very difficult to control<br />
and direct the eventual outcome. Because of this, it is typically used only for experimentation or abstract textures.
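The generate/select/breed loop can be sketched in Python (an added illustration; the parameter vector and all function names are invented stand-ins for real texture candidates):

```python
import random

def random_params(rng):
    """A texture candidate, reduced here to a 3-element parameter vector
    (e.g. frequency, turbulence amount, colour mix) for illustration."""
    return [rng.uniform(0, 1) for _ in range(3)]

def mutate(params, rng, rate=0.1):
    """Randomly perturb each parameter, clamped to [0, 1]."""
    return [min(1.0, max(0.0, p + rng.uniform(-rate, rate))) for p in params]

def crossover(a, b, rng):
    """Mix two parents by choosing each element from one of them."""
    return [x if rng.random() < 0.5 else y for x, y in zip(a, b)]

def evolve(pick_favourites, generations=5, pop_size=8, seed=0):
    """Generate candidates, let the user pick, breed the picks.
    pick_favourites stands in for the human-in-the-loop selection."""
    rng = random.Random(seed)
    population = [random_params(rng) for _ in range(pop_size)]
    for _ in range(generations):
        chosen = pick_favourites(population)
        population = [
            mutate(crossover(rng.choice(chosen), rng.choice(chosen), rng), rng)
            for _ in range(pop_size)
        ]
    return population
```

In a real system `pick_favourites` would render each candidate and wait for the user's clicks, which is exactly why the outcome is hard to direct.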
Self-organizing textures<br />
Starting from simple white noise, self-organization processes lead to structured patterns that still retain an element<br />
of randomness. Reaction-diffusion systems are a good example of a process that generates such textures.<br />
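A minimal sketch of the idea in Python (an added illustration, not from the original article): one-dimensional Gray-Scott reaction-diffusion, seeded with white noise and stepped with explicit Euler integration. The parameter values are illustrative, not canonical.

```python
import random

def gray_scott_step(u, v, du=0.16, dv=0.08, f=0.035, k=0.065):
    """One explicit Euler step of a 1D Gray-Scott reaction-diffusion
    system with periodic boundaries. u is the fed chemical, v the one
    that catalyses its own production (u + 2v -> 3v)."""
    n = len(u)
    lap = lambda a, i: a[(i - 1) % n] - 2 * a[i] + a[(i + 1) % n]
    nu, nv = u[:], v[:]
    for i in range(n):
        uvv = u[i] * v[i] * v[i]
        nu[i] = u[i] + du * lap(u, i) - uvv + f * (1 - u[i])
        nv[i] = v[i] + dv * lap(v, i) + uvv - (f + k) * v[i]
    return nu, nv

def run(n=64, steps=200, seed=0):
    """Start from white noise in v and let the pattern self-organize."""
    rng = random.Random(seed)
    u = [1.0] * n
    v = [rng.random() * 0.2 for _ in range(n)]  # white-noise seed
    for _ in range(steps):
        u, v = gray_scott_step(u, v)
    return u, v
```

Run on a 2D grid with a colour map, the same update rule produces the spot and stripe textures the section alludes to.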
Example of a procedural marble texture<br />
(Taken from The RenderMan Companion, by Steve Upstill)<br />
/* Copyrighted Pixar 1988 */
/* From the RenderMan Companion p. 355 */
/* Listing 16.19 Blue marble surface shader */

/*
 * blue_marble(): a marble stone texture in shades of blue
 */
surface
blue_marble(
    float Ks = .4,
          Kd = .6,
          Ka = .1,
          roughness = .1,
          txtscale = 1;
    color specularcolor = 1)
{
    point PP;        /* scaled point in shader space */
    float csp;       /* color spline parameter */
    point Nf;        /* forward-facing normal */
    point V;         /* for specular() */
    float pixelsize, twice, scale, weight, turbulence;

    /* Obtain a forward-facing normal for lighting calculations. */
    Nf = faceforward( normalize(N), I);
    V = normalize(-I);

    /*
     * Compute "turbulence" a la [PERLIN85]. Turbulence is a sum of
     * "noise" components with a "fractal" 1/f power spectrum. It gives the
     * visual impression of turbulent fluid flow (for example, as in the
     * formation of blue_marble from molten color splines!). Use the
     * surface element area in texture space to control the number of
     * noise components so that the frequency content is appropriate
     * to the scale. This prevents aliasing of the texture.
     */
    PP = transform("shader", P) * txtscale;
    pixelsize = sqrt(area(PP));
    twice = 2 * pixelsize;
    turbulence = 0;
    for (scale = 1; scale > twice; scale /= 2)
        turbulence += scale * noise(PP/scale);

    /* Gradual fade out of highest-frequency component near limit */
    if (scale > pixelsize) {
        weight = (scale / pixelsize) - 1;
        weight = clamp(weight, 0, 1);
        turbulence += weight * scale * noise(PP/scale);
    }

    /*
     * Magnify the upper part of the turbulence range 0.75:1
     * to fill the range 0:1 and use it as the parameter of
     * a color spline through various shades of blue.
     */
    csp = clamp(4 * turbulence - 3, 0, 1);
    Ci = color spline(csp,
        color (0.25, 0.25, 0.35), /* pale blue */
        color (0.25, 0.25, 0.35), /* pale blue */
        color (0.20, 0.20, 0.30), /* medium blue */
        color (0.20, 0.20, 0.30), /* medium blue */
        color (0.20, 0.20, 0.30), /* medium blue */
        color (0.25, 0.25, 0.35), /* pale blue */
        color (0.25, 0.25, 0.35), /* pale blue */
        color (0.15, 0.15, 0.26), /* medium dark blue */
        color (0.15, 0.15, 0.26), /* medium dark blue */
        color (0.10, 0.10, 0.20), /* dark blue */
        color (0.10, 0.10, 0.20), /* dark blue */
        color (0.25, 0.25, 0.35), /* pale blue */
        color (0.10, 0.10, 0.20)  /* dark blue */
    );

    /* Multiply this color by the diffusely reflected light. */
    Ci *= Ka*ambient() + Kd*diffuse(Nf);

    /* Adjust for opacity. */
    Oi = Os;
    Ci = Ci * Oi;

    /* Add in specular highlights. */
    Ci += specularcolor * Ks * specular(Nf, V, roughness);
}
This article was taken from The Photoshop Roadmap [5] with written authorization
References<br />
[1] http://www.spiralgraphics.biz/gallery.htm<br />
[2] Ebert et al: Texturing and Modeling: A Procedural Approach, page 10. Morgan Kaufmann, 2003.<br />
[3] Ebert et al: Texturing and Modeling: A Procedural Approach, page 135. Morgan Kaufmann, 2003.<br />
[4] Ebert et al: Texturing and Modeling: A Procedural Approach, page 547. Morgan Kaufmann, 2003.<br />
[5] http://www.photoshoproadmap.com<br />
Some programs for creating procedural textures<br />
• Allegorithmic Substance<br />
• Filter Forge<br />
• Genetica (http://www.spiralgraphics.biz/genetica.htm)<br />
• DarkTree (http://www.darksim.com/html/dt25_description.html)<br />
• Context Free Art (http://www.contextfreeart.org/index.html)<br />
• TexRD (http://www.texrd.com) (based on reaction-diffusion: self-organizing textures)<br />
• Texture Garden (http://texturegarden.com)<br />
• Enhance Textures (http://www.shaders.co.uk)<br />
3D projection<br />
3D projection is any method of mapping three-dimensional points to a two-dimensional plane. As most current<br />
methods for displaying graphical data are based on planar two-dimensional media, the use of this type of projection<br />
is widespread, especially in computer graphics, engineering and drafting.<br />
Orthographic projection<br />
When the human eye looks at a scene, objects in the distance appear smaller than objects close by. Orthographic<br />
projection ignores this effect to allow the creation of to-scale drawings for construction and engineering.<br />
Orthographic projections are a small set of transforms often used to show profile, detail or precise measurements of a<br />
three dimensional object. Common names for orthographic projections include plane, cross-section, bird's-eye, and<br />
elevation.<br />
If the normal of the viewing plane (the camera direction) is parallel to one of the primary axes (the x, y, or z<br />
axis), the mathematical transformation is as follows. To project the 3D point $a_x$, $a_y$, $a_z$ onto the 2D point $b_x$, $b_y$<br />
using an orthographic projection parallel to the y axis (profile view), the following equations can be used:<br />
$b_x = s_x a_x + c_x, \qquad b_y = s_z a_z + c_z$<br />
where the vector s is an arbitrary scale factor, and c is an arbitrary offset. These constants are optional, and can be<br />
used to properly align the viewport. Using matrix multiplication, the equations become:<br />
$\begin{bmatrix} b_x \\ b_y \end{bmatrix} = \begin{bmatrix} s_x & 0 & 0 \\ 0 & 0 & s_z \end{bmatrix} \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix} + \begin{bmatrix} c_x \\ c_z \end{bmatrix}$.<br />
While orthographically projected images represent the three dimensional nature of the object projected, they do not<br />
represent the object as it would be recorded photographically or perceived by a viewer observing it directly. In<br />
particular, parallel lengths at all points in an orthographically projected image are of the same scale regardless of<br />
whether they are far away or near to the virtual viewer. As a result, lengths near to the viewer are not foreshortened<br />
as they would be in a perspective projection.
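A profile-view orthographic projection along the y axis can be sketched in a few lines of Python (an added illustration; the function name and default arguments are invented for the example):

```python
def project_orthographic(point, s=(1.0, 1.0), c=(0.0, 0.0)):
    """Orthographic projection parallel to the y axis (profile view):
    the y coordinate is simply dropped, then the result is scaled by s
    and offset by c to align the viewport."""
    ax, ay, az = point
    return (s[0] * ax + c[0], s[1] * az + c[1])
```

Note that moving a point along y never changes its projection, which is why no foreshortening occurs.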
Perspective projection<br />
When the human eye looks at a scene, objects in the distance appear smaller than objects close by - this is known as<br />
perspective. While orthographic projection ignores this effect to allow accurate measurements, perspective projection<br />
shows distant objects as smaller to provide additional realism.<br />
The perspective projection requires a more involved definition. A conceptual aid to understanding the mechanics of<br />
this projection is to treat the 2D projection as being viewed through a camera viewfinder. The camera's position,<br />
orientation, and field of view control the behavior of the projection transformation. The following variables are<br />
defined to describe this transformation:<br />
• $\mathbf{a}_{x,y,z}$ - the 3D position of a point A that is to be projected.<br />
• $\mathbf{c}_{x,y,z}$ - the 3D position of a point C representing the camera.<br />
• $\mathbf{\theta}_{x,y,z}$ - the orientation of the camera (represented, for instance, by Tait–Bryan angles).<br />
• $\mathbf{e}_{x,y,z}$ - the viewer's position relative to the display surface. [1]<br />
Which results in:<br />
• $\mathbf{b}_{x,y}$ - the 2D projection of $\mathbf{a}$.<br />
When $\mathbf{c}_{x,y,z} = \langle 0,0,0 \rangle$ and $\mathbf{\theta}_{x,y,z} = \langle 0,0,0 \rangle$, the 3D vector $\langle 1,2,0 \rangle$ is projected to the 2D vector $\langle 1,2 \rangle$.<br />
Otherwise, to compute $\mathbf{b}_{x,y}$ we first define a vector $\mathbf{d}_{x,y,z}$ as the position of point A with respect to a coordinate<br />
system defined by the camera, with origin in C and rotated by $\mathbf{\theta}$ with respect to the initial coordinate system. This is<br />
achieved by subtracting $\mathbf{c}$ from $\mathbf{a}$ and then applying a rotation by $-\mathbf{\theta}$ to the result. This transformation is often<br />
called a camera transform, and can be expressed as follows, expressing the rotation in terms of rotations about the<br />
x, y, and z axes (these calculations assume that the axes are ordered as a left-handed system of axes): [2] [3]<br />
$\begin{bmatrix} d_x \\ d_y \\ d_z \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_x & \sin\theta_x \\ 0 & -\sin\theta_x & \cos\theta_x \end{bmatrix} \begin{bmatrix} \cos\theta_y & 0 & -\sin\theta_y \\ 0 & 1 & 0 \\ \sin\theta_y & 0 & \cos\theta_y \end{bmatrix} \begin{bmatrix} \cos\theta_z & \sin\theta_z & 0 \\ -\sin\theta_z & \cos\theta_z & 0 \\ 0 & 0 & 1 \end{bmatrix} \left( \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix} - \begin{bmatrix} c_x \\ c_y \\ c_z \end{bmatrix} \right)$<br />
This representation corresponds to rotating by three Euler angles (more properly, Tait–Bryan angles), using the xyz<br />
convention, which can be interpreted either as "rotate about the extrinsic axes (axes of the scene) in the order z, y, x<br />
(reading right-to-left)" or "rotate about the intrinsic axes (axes of the camera) in the order x, y, z (reading<br />
left-to-right)". Note that if the camera is not rotated ($\mathbf{\theta}_{x,y,z} = \langle 0,0,0 \rangle$), then the matrices drop out (as<br />
identities), and this reduces to simply a shift: $\mathbf{d} = \mathbf{a} - \mathbf{c}$.<br />
This transformed point can then be projected onto the 2D plane using similar triangles (here, x/y is used as the projection<br />
plane; the literature may also use x/z): [4]<br />
$b_x = \frac{e_z}{d_z}\, d_x, \qquad b_y = \frac{e_z}{d_z}\, d_y$ (for a viewer centred on the display surface, $e_x = e_y = 0$).<br />
Or, in matrix form using homogeneous coordinates, the system<br />
$\begin{bmatrix} f_x \\ f_y \\ f_w \end{bmatrix} = \begin{bmatrix} e_z & 0 & 0 \\ 0 & e_z & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} d_x \\ d_y \\ d_z \end{bmatrix}$<br />
in conjunction with an argument using similar triangles, leads to division by the homogeneous coordinate, giving<br />
$b_x = f_x / f_w, \qquad b_y = f_y / f_w$.<br />
The distance of the viewer from the display surface, $e_z$, directly relates to the field of view:<br />
$\alpha = 2 \tan^{-1}(1/e_z)$ is the viewed angle. (Note: this assumes that you map the points (-1,-1) and (1,1) to the<br />
corners of your viewing surface.)<br />
The above equations can also be rewritten as:<br />
$b_x = \frac{d_x}{d_z}\, r_z \frac{s_x}{r_x}, \qquad b_y = \frac{d_y}{d_z}\, r_z \frac{s_y}{r_y}$<br />
In which $s_{x,y}$ is the display size, $r_{x,y}$ is the recording surface size (CCD or film), $r_z$ is the distance from the<br />
recording surface to the entrance pupil (camera center), and $d_z$ is the distance from the point to the entrance pupil.<br />
Subsequent clipping and scaling operations may be necessary to map the 2D plane onto any particular display media.<br />
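The whole pipeline - camera transform followed by the similar-triangles projection - can be sketched in Python (an added illustration; function names are invented, and the simpler centred-viewer projection is used):

```python
import math

def camera_transform(a, c, theta):
    """Camera transform: rotate the point a into the camera's frame.
    Subtract the camera position c, then apply the z, y and x axis
    rotations of theta = (tx, ty, tz) in sequence, matching the matrix
    product R_x R_y R_z (a - c)."""
    x, y, z = (ai - ci for ai, ci in zip(a, c))
    tx, ty, tz = theta
    x, y = (math.cos(tz) * x + math.sin(tz) * y,
            -math.sin(tz) * x + math.cos(tz) * y)   # rotate about z
    x, z = (math.cos(ty) * x - math.sin(ty) * z,
            math.sin(ty) * x + math.cos(ty) * z)    # rotate about y
    y, z = (math.cos(tx) * y + math.sin(tx) * z,
            -math.sin(tx) * y + math.cos(tx) * z)   # rotate about x
    return (x, y, z)

def project_perspective(d, e_z):
    """Similar-triangles projection onto the x/y plane for a viewer
    centred on the display surface at distance e_z."""
    return (e_z / d[2] * d[0], e_z / d[2] * d[1])
```

With no camera offset or rotation the transform is the identity, and doubling a point's depth halves its projected size, as perspective requires.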
Diagram<br />
To determine which screen x-coordinate corresponds to a point at $A_x, A_z$, multiply the point coordinates by<br />
$B_x = A_x \frac{B_z}{A_z}$<br />
where<br />
$B_x$ is the screen x coordinate,<br />
$A_x$ is the model x coordinate,<br />
$B_z$ is the focal length - the axial distance from the camera center to the image plane,<br />
$A_z$ is the subject distance.<br />
Because the camera is in 3D, the same works for the screen y-coordinate, substituting y for x in the above equation.<br />
References<br />
[1] Ingrid Carlbom, Joseph Paciorek (1978). "Planar Geometric Projections and Viewing Transformations" (http://www.cs.uns.edu.ar/cg/clasespdf/p465carlbom.pdf). ACM Computing Surveys 10 (4): 465–502. doi:10.1145/356744.356750.<br />
[2] Riley, K. F. (2006). Mathematical Methods for Physics and Engineering. Cambridge University Press. pp. 931, 942. doi:10.2277/0521679710. ISBN 0-521-67971-0.<br />
[3] Goldstein, Herbert (1980). Classical Mechanics (2nd ed.). Reading, Mass.: Addison-Wesley. pp. 146–148. ISBN 0-201-02918-9.<br />
[4] Sonka, M; Hlavac, V; Boyle, R (1995). Image Processing, Analysis & Machine Vision (2nd ed.). Chapman and Hall. p. 14. ISBN 0-412-45570-6.<br />
External links<br />
• A case study in camera projection (http://nccasymposium.bmth.ac.uk/2007/muhittin_bilginer/index.html)<br />
• Creating 3D Environments from Digital Photographs (http://nccasymposium.bmth.ac.uk/2009/McLaughlin_Chris/McLaughlin_C_WebBasedNotes.pdf)<br />
Further reading<br />
• Kenneth C. Finney (2004). 3D Game Programming All in One (http://books.google.com/?id=cknGqaHwPFkC&pg=PA93&dq="3D+projection"). Thomson Course. p. 93. ISBN 978-1-59200-136-1.<br />
Quaternions and spatial rotation<br />
Unit quaternions provide a convenient mathematical notation for representing orientations and rotations of objects in<br />
three dimensions. Compared to Euler angles they are simpler to compose and avoid the problem of gimbal lock.<br />
Compared to rotation matrices they are more numerically stable and may be more efficient. Quaternions have found<br />
their way into applications in computer graphics, computer vision, robotics, navigation, molecular dynamics and<br />
orbital mechanics of satellites. [1]<br />
When used to represent rotation, unit quaternions are also called versors, or rotation quaternions. When used to<br />
represent an orientation (rotation relative to a reference position), they are called orientation quaternions or<br />
attitude quaternions.
Quaternion rotation operations<br />
A very formal explanation of the properties used in this section is given by Altmann. [2]<br />
The hypersphere of rotations<br />
Visualizing the space of rotations<br />
Unit quaternions represent the mathematical space of rotations in three dimensions in a very straightforward way.<br />
The correspondence between rotations and quaternions can be understood by first visualizing the space of rotations<br />
itself.<br />
In order to visualize the space of rotations, it helps to<br />
consider a simpler case. Any rotation in three<br />
dimensions can be described by a rotation by some<br />
angle about some axis. Consider the special case in<br />
which the axis of rotation lies in the xy plane. We can<br />
then specify the axis of one of these rotations by a point<br />
on a circle, and we can use the radius of the circle to<br />
specify the angle of rotation. Similarly, a rotation<br />
whose axis of rotation lies in the "xy" plane can be<br />
described as a point on a sphere of fixed radius in three<br />
dimensions. Beginning at the north pole of a sphere in<br />
three dimensional space, we specify the point at the<br />
north pole to be the identity rotation (a zero angle<br />
rotation). Just as in the case of the identity rotation, no<br />
axis of rotation is defined, and the angle of rotation<br />
(zero) is irrelevant. A rotation having a very small<br />
rotation angle can be specified by a slice through the<br />
sphere parallel to the xy plane and very near the north<br />
pole. The circle defined by this slice will be very small, corresponding to the small angle of the rotation. As the<br />
rotation angles become larger, the slice moves in the negative z direction, and the circles become larger until the<br />
equator of the sphere is reached, which will correspond to a rotation angle of 180 degrees. Continuing southward,<br />
the radii of the circles now become smaller (corresponding to the absolute value of the angle of the rotation<br />
considered as a negative number). Finally, as the south pole is reached, the circles shrink once more to the identity<br />
rotation, which is also specified as the point at the south pole.<br />
Two rotations by different angles and different axes in the space of rotations. The length of the vector is related to<br />
the magnitude of the rotation.<br />
Notice that a number of characteristics of such rotations and their representations can be seen by this visualization.<br />
The space of rotations is continuous, each rotation has a neighborhood of rotations which are nearly the same, and<br />
this neighborhood becomes flat as the neighborhood shrinks. Also, each rotation is actually represented by two<br />
antipodal points on the sphere, which are at opposite ends of a line through the center of the sphere. This reflects the<br />
fact that each rotation can be represented as a rotation about some axis, or, equivalently, as a negative rotation about<br />
an axis pointing in the opposite direction (a so-called double cover). The "latitude" of a circle representing a<br />
particular rotation angle will be half of the angle represented by that rotation, since as the point is moved from the<br />
north to south pole, the latitude ranges from zero to 180 degrees, while the angle of rotation ranges from 0 to 360<br />
degrees. (The "longitude" of a point then represents a particular axis of rotation.) Note however that this set of<br />
rotations is not closed under composition. Two successive rotations with axes in the xy plane will not necessarily<br />
give a rotation whose axis lies in the xy plane, and thus cannot be represented as a point on the sphere. This will not<br />
be the case with a general rotation in 3-space, in which rotations do form a closed set under composition.
This visualization can be extended to a general rotation<br />
in 3 dimensional space. The identity rotation is a point,<br />
and a small angle of rotation about some axis can be<br />
represented as a point on a sphere with a small radius.<br />
As the angle of rotation grows, the sphere grows, until<br />
the angle of rotation reaches 180 degrees, at which<br />
point the sphere begins to shrink, becoming a point as<br />
the angle approaches 360 degrees (or zero degrees from<br />
the negative direction). This set of expanding and<br />
contracting spheres represents a hypersphere in four<br />
dimensional space (a 3-sphere). Just as in the simpler<br />
example above, each rotation represented as a point on<br />
the hypersphere is matched by its antipodal point on<br />
that hypersphere. The "latitude" on the hypersphere<br />
will be half of the corresponding angle of rotation, and<br />
the neighborhood of any point will become "flatter"<br />
(i.e. be represented by a 3-D Euclidean space of points)<br />
as the neighborhood shrinks. This behavior is matched<br />
by the set of unit quaternions: a general quaternion represents a point in a four-dimensional space, but<br />
constraining it to have unit magnitude yields a three-dimensional space equivalent to the surface of a hypersphere.<br />
The sphere of rotations for the rotations that have a "horizontal" axis (in the xy plane).<br />
The magnitude of the unit quaternion will be unity, corresponding to a hypersphere of unit radius. The vector part of<br />
a unit quaternion represents the radius of the 2-sphere corresponding to the axis of rotation, and its magnitude is the<br />
sine of half the angle of rotation. Each rotation is represented by two unit quaternions of opposite sign, and, as in<br />
the space of rotations in three dimensions, the quaternion product of two unit quaternions will yield a unit<br />
quaternion. Also, the space of unit quaternions is "flat" in any infinitesimal neighborhood of a given unit quaternion.<br />
Parameterizing the space of rotations<br />
We can parameterize the surface of a sphere with two coordinates, such as latitude and longitude. But latitude and<br />
longitude are ill-behaved (degenerate) at the north and south poles, though the poles are not intrinsically different<br />
from any other points on the sphere. At the poles (latitudes +90° and -90°), the longitude becomes meaningless.<br />
It can be shown that no two-parameter coordinate system can avoid such degeneracy. We can avoid such problems<br />
by embedding the sphere in three-dimensional space and parameterizing it with three Cartesian coordinates (here<br />
w, x, y), placing the north pole at (w,x,y) = (1,0,0), the south pole at (w,x,y) = (−1,0,0), and the equator at w = 0,<br />
$x^2 + y^2 = 1$. Points on the sphere satisfy the constraint $w^2 + x^2 + y^2 = 1$, so we still have just two degrees of freedom<br />
though there are three coordinates. A point (w,x,y) on the sphere represents a rotation in the ordinary space around<br />
the horizontal axis directed by the vector $(x, y, 0)$ by an angle $\alpha = 2\cos^{-1} w = 2\sin^{-1}\sqrt{x^2 + y^2}$.<br />
In the same way the hyperspherical space of <strong>3D</strong> rotations can be parameterized by three angles (Euler angles), but<br />
any such parameterization is degenerate at some points on the hypersphere, leading to the problem of gimbal lock.<br />
We can avoid this by using four Euclidean coordinates w, x, y, z, with $w^2 + x^2 + y^2 + z^2 = 1$. The point (w,x,y,z)<br />
represents a rotation around the axis directed by the vector $(x, y, z)$ by an angle $\alpha = 2\cos^{-1} w = 2\sin^{-1}\sqrt{x^2 + y^2 + z^2}$.
From the rotations to the quaternions<br />
Quaternions briefly<br />
The complex numbers can be defined by introducing an abstract symbol i which satisfies the usual rules of algebra<br />
and additionally the rule $i^2 = -1$. This is sufficient to reproduce all of the rules of complex number arithmetic: for<br />
example,<br />
$(a + b\,i)(c + d\,i) = (ac - bd) + (ad + bc)\,i$.<br />
In the same way the quaternions can be defined by introducing abstract symbols i, j, k which satisfy the rules $i^2 = j^2 = k^2 = ijk = -1$<br />
and the usual algebraic rules except the commutative law of multiplication (a familiar example of<br />
such a noncommutative multiplication is matrix multiplication). From this all of the rules of quaternion arithmetic<br />
follow: for example, one can show that<br />
$(a + b\,i + c\,j + d\,k)(e + f\,i + g\,j + h\,k) = (ae - bf - cg - dh) + (af + be + ch - dg)\,i + (ag - bh + ce + df)\,j + (ah + bg - cf + de)\,k$.<br />
The imaginary part $b\,i + c\,j + d\,k$ of a quaternion behaves like a vector $\vec{v} = (b, c, d)$ in three-dimensional vector<br />
space, and the real part a behaves like a scalar in $\mathbb{R}$. When quaternions are used in geometry, it is more convenient<br />
to define them as a scalar plus a vector:<br />
$q = a + \vec{v}$.<br />
Those who have studied vectors at school might find it strange to add a number to a vector, as they are objects of<br />
very different natures, or to multiply two vectors together, as this operation is usually undefined. However, if one<br />
remembers that it is a mere notation for the real and imaginary parts of a quaternion, it becomes more legitimate. In<br />
other words, the correct reasoning is the addition of two quaternions, one with zero vector/imaginary part, and<br />
another one with zero scalar/real part:<br />
$a + \vec{v} = (a + \vec{0}) + (0 + \vec{v})$.<br />
We can express quaternion multiplication in the modern language of vector cross and dot products (which were<br />
actually inspired by the quaternions in the first place). In place of the rules $i^2 = j^2 = k^2 = ijk = -1$ we have the<br />
quaternion multiplication rule:<br />
$(s_1 + \vec{v}_1)(s_2 + \vec{v}_2) = (s_1 s_2 - \vec{v}_1 \cdot \vec{v}_2) + (s_1 \vec{v}_2 + s_2 \vec{v}_1 + \vec{v}_1 \times \vec{v}_2)$<br />
where:<br />
• $(s_1 s_2 - \vec{v}_1 \cdot \vec{v}_2) + (s_1 \vec{v}_2 + s_2 \vec{v}_1 + \vec{v}_1 \times \vec{v}_2)$ is the resulting quaternion,<br />
• $\vec{v}_1 \times \vec{v}_2$ is the vector cross product (a vector),<br />
• $\vec{v}_1 \cdot \vec{v}_2$ is the vector scalar product (a scalar).<br />
Quaternion multiplication is noncommutative (because of the cross product, which anti-commutes), while<br />
scalar-scalar and scalar-vector multiplications commute. From these rules it follows immediately that (see details):<br />
$q q^* = q^* q = s^2 + \lVert \vec{v} \rVert^2 = \lVert q \rVert^2$, where $q^* = s - \vec{v}$ is the conjugate of $q = s + \vec{v}$.<br />
The (left and right) multiplicative inverse or reciprocal of a nonzero quaternion is given by the conjugate-to-norm<br />
ratio (see details):<br />
$q^{-1} = \dfrac{q^*}{\lVert q \rVert^2}$,<br />
as can be verified by direct calculation.<br />
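The scalar-plus-vector multiplication rule translates directly into code. The following Python sketch (an added illustration; function names are invented) implements the Hamilton product from the dot and cross products, plus the conjugate-to-norm reciprocal:

```python
def quat_mul(q1, q2):
    """Hamilton product in scalar + vector form:
    (s1, v1)(s2, v2) = (s1 s2 - v1.v2,  s1 v2 + s2 v1 + v1 x v2)."""
    s1, v1 = q1[0], q1[1:]
    s2, v2 = q2[0], q2[1:]
    dot = sum(a * b for a, b in zip(v1, v2))
    cross = (v1[1] * v2[2] - v1[2] * v2[1],
             v1[2] * v2[0] - v1[0] * v2[2],
             v1[0] * v2[1] - v1[1] * v2[0])
    return (s1 * s2 - dot,
            *(s1 * b + s2 * a + c for a, b, c in zip(v1, v2, cross)))

def quat_conj(q):
    """Conjugate: negate the vector (imaginary) part."""
    return (q[0], -q[1], -q[2], -q[3])

def quat_inv(q):
    """Reciprocal as the conjugate-to-norm ratio q* / ||q||^2."""
    n2 = sum(c * c for c in q)
    return tuple(c / n2 for c in quat_conj(q))
```

Representing i, j, k as basis tuples reproduces the defining rules i² = j² = k² = ijk = −1, and the noncommutativity (ij = k but ji = −k) comes entirely from the cross product term.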
Describing rotations with quaternions<br />
Let (w,x,y,z) be the coordinates of a rotation by an angle $\alpha$ around the axis $\vec{u}$ as previously described. Define the quaternion<br />
$q = w + x\,i + y\,j + z\,k = \cos\frac{\alpha}{2} + \vec{u}\sin\frac{\alpha}{2}$<br />
where $\vec{u}$ is a unit vector. Let also $\vec{v}$ be an ordinary vector in 3-dimensional space, considered as a quaternion with a<br />
real coordinate equal to zero. Then it can be shown (see next section) that the quaternion product<br />
$q\,\vec{v}\,q^{-1}$<br />
yields the vector $\vec{v}$ upon rotation of the original vector by an angle $\alpha$ around the axis $\vec{u}$. The rotation is<br />
clockwise if our line of sight points in the direction pointed by $\vec{u}$. This operation is known as conjugation by q.<br />
It follows that quaternion multiplication is composition of rotations, for if p and q are quaternions representing<br />
rotations, then rotation (conjugation) by pq is<br />
$pq\,\vec{v}\,(pq)^{-1} = pq\,\vec{v}\,q^{-1}p^{-1} = p(q\,\vec{v}\,q^{-1})p^{-1}$,<br />
which is the same as rotating (conjugating) by q and then by p.<br />
The quaternion inverse of a rotation is the opposite rotation, since $q^{-1}(q\,\vec{v}\,q^{-1})q = \vec{v}$. The square of a quaternion<br />
rotation is a rotation by twice the angle around the same axis. More generally $q^n$ is a rotation by n times the angle<br />
around the same axis as q. This can be extended to arbitrary real n, allowing for smooth interpolation between spatial<br />
orientations; see Slerp.<br />
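The conjugation formula above can be exercised in Python (an added, self-contained illustration; function names are invented). `axis_angle_quat` builds q = cos(α/2) + u·sin(α/2) and `rotate` applies q v q⁻¹:

```python
import math

def quat_mul(q, r):
    """Hamilton product in component form."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def axis_angle_quat(axis, angle):
    """q = cos(angle/2) + u sin(angle/2), normalizing the axis to a
    unit vector u."""
    n = math.sqrt(sum(a * a for a in axis))
    s = math.sin(angle / 2) / n
    return (math.cos(angle / 2), *(a * s for a in axis))

def rotate(v, q):
    """Conjugation q p q^-1 with p the pure quaternion (0, v); for a
    unit quaternion the inverse is just the conjugate."""
    qc = (q[0], -q[1], -q[2], -q[3])
    return quat_mul(quat_mul(q, (0.0, *v)), qc)[1:]
```

Rotating twice, first by q and then by p, agrees with a single rotation by the product pq, which is the composition property stated above.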
Proof of the quaternion rotation identity<br />
Let $\vec{u}$ be a unit vector (the rotation axis) and let $q = \cos\frac{\alpha}{2} + \vec{u}\sin\frac{\alpha}{2}$. Our goal is to show that<br />
$\vec{v}' = q\,\vec{v}\,q^{-1}$<br />
yields the vector $\vec{v}$ rotated by an angle $\alpha$ around the axis $\vec{u}$. Expanding out, we have<br />
$\vec{v}' = \vec{v}_{\|} + \vec{v}_{\perp}\cos\alpha + (\vec{u}\times\vec{v}_{\perp})\sin\alpha$<br />
where $\vec{v}_{\perp}$ and $\vec{v}_{\|}$ are the components of $\vec{v}$ perpendicular and parallel to $\vec{u}$ respectively. This is the formula of a<br />
rotation by $\alpha$ around the $\vec{u}$ axis.<br />
Example<br />
The conjugation operation: a rotation of 120° around the first diagonal permutes i, j, and k cyclically.<br />
Consider the rotation f around the axis $\vec{u} = i + j + k$, with a rotation angle of 120°, or 2π ⁄ 3 radians.<br />
The length of $\vec{u}$ is $\sqrt{3}$, the half angle is π ⁄ 3 (60°) with cosine ½ (cos 60° = 0.5) and sine $\sqrt{3}/2$ (sin 60° ≈ 0.866). We<br />
are therefore dealing with a conjugation by the unit quaternion<br />
$q = \cos 60° + \sin 60° \cdot \frac{i+j+k}{\sqrt{3}} = \frac{1}{2} + \frac{\sqrt{3}}{2} \cdot \frac{i+j+k}{\sqrt{3}} = \frac{1+i+j+k}{2}$.<br />
If f is the rotation function,<br />
$f(a\,i + b\,j + c\,k) = q\,(a\,i + b\,j + c\,k)\,q^{-1}$.<br />
It can be proved that the inverse of a unit quaternion is obtained simply by changing the sign of its imaginary<br />
components. As a consequence,<br />
$q^{-1} = \frac{1 - i - j - k}{2}$<br />
and<br />
$f(a\,i + b\,j + c\,k) = \frac{1+i+j+k}{2}\,(a\,i + b\,j + c\,k)\,\frac{1-i-j-k}{2}$.<br />
This can be simplified, using the ordinary rules for quaternion arithmetic, to<br />
$f(a\,i + b\,j + c\,k) = c\,i + a\,j + b\,k$.<br />
As expected, the rotation corresponds to keeping a cube held fixed at one point, and rotating it 120° about the long<br />
diagonal through the fixed point (observe how the three axes are permuted cyclically).<br />
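The 120° example can be checked numerically in Python (an added illustration; the helper names are invented). Conjugating each basis quaternion by q = (1 + i + j + k)/2 permutes i, j, k cyclically:

```python
def quat_mul(q, r):
    """Hamilton product in component form."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def conjugate_by(q, p):
    """f(p) = q p q^-1; for this unit quaternion the inverse is the
    conjugate (sign-flipped imaginary components)."""
    qc = (q[0], -q[1], -q[2], -q[3])
    return quat_mul(quat_mul(q, p), qc)

# The 120-degree rotation about the diagonal i + j + k.
q = (0.5, 0.5, 0.5, 0.5)          # (1 + i + j + k) / 2
i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
```

Two calls to `quat_mul` suffice, which is exactly the point made in the next section about doing this in a computer program.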
Quaternion arithmetic in practice<br />
Let's show how we reached the previous result. Expanding the expression of f in two stages and applying the<br />
multiplication rules gives<br />
$f(a\,i + b\,j + c\,k) = c\,i + a\,j + b\,k$,<br />
which is the expected result. As we can see, such computations are relatively long and tedious if done manually;<br />
however, in a computer program, this amounts to calling the quaternion multiplication routine twice.<br />
Explaining quaternions' properties with rotations<br />
Non-commutativity<br />
The multiplication of quaternions is non-commutative. Since the multiplication of unit quaternions corresponds to<br />
the composition of three dimensional rotations, this property can be made intuitive by showing that three<br />
dimensional rotations are not commutative in general.<br />
A simple exercise of applying two rotations to an asymmetrical object (e.g., a book) can explain it. First, rotate a<br />
book 90 degrees clockwise around the z axis. Next flip it 180 degrees around the x axis and memorize the result.<br />
Then restore the original orientation, so that the book title is again readable, and apply those rotations in opposite<br />
order. Compare the outcome to the earlier result. This shows that, in general, the composition of two different<br />
rotations around two distinct spatial axes will not commute.
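The book experiment can be replayed numerically. In the sketch below (helper names are this example's own; (w, x, y, z) ordering assumed), `qmul(a, b)` composes rotations so that `b` is applied first under the conjugation convention used above, and the two orders give visibly different quaternions:

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def axis_angle(ax, ay, az, angle):
    """Unit quaternion for a rotation of `angle` radians about a unit axis."""
    s = math.sin(angle / 2)
    return (math.cos(angle / 2), ax*s, ay*s, az*s)

qz = axis_angle(0, 0, 1, math.pi / 2)   # 90 degrees about the z axis
qx = axis_angle(1, 0, 0, math.pi)       # 180 degrees about the x axis

ab = qmul(qx, qz)   # rotate by qz first, then flip with qx
ba = qmul(qz, qx)   # the opposite order: a different rotation
```

Printing `ab` and `ba` shows they differ (their y components have opposite signs), mirroring the book-flipping exercise.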
Are quaternions handed?<br />
Note that quaternions, like the rotations or other linear transforms, are not "handed" (as in left-handed vs<br />
right-handed). Handedness of a coordinate system comes from the interpretation of the numbers in physical space.<br />
No matter what the handedness convention, rotating the X vector 90 degrees around the Z vector will yield the Y<br />
vector — the mathematics and numbers are the same.<br />
Alternatively, if the quaternion or direction cosine matrix is interpreted as a rotation from one frame to another, then<br />
it must be either left-handed or right-handed. For example, in the above example rotating the X vector 90 degrees<br />
around the Z vector yields the Y vector only if you use the right hand rule. If you use a left hand rule, the result<br />
would be along the negative Y vector. Transformations from quaternion to direction cosine matrix often do not<br />
specify whether the input quaternion should be left handed or right handed. It is possible to determine the<br />
handedness of the algorithm by constructing a simple quaternion from a vector and an angle and assuming right<br />
handedness to begin with. For example, [0.7071, 0, 0, 0.7071] has the axis of rotation along the z-axis, and a rotation<br />
angle of 90 degrees. Pass this quaternion into the quaternion to matrix algorithm. If the end result is as shown below<br />
and you wish to interpret the matrix as right-handed, then the algorithm is expecting a right-handed quaternion. If the<br />
end result is the transpose and you still want to interpret the result as a right-handed matrix, then you must feed the<br />
algorithm left-handed quaternions. To convert between left and right-handed quaternions simply negate the vector<br />
part of the quaternion.<br />
Quaternions and other representations of rotations<br />
Qualitative description of the advantages of quaternions<br />
The representation of a rotation as a quaternion (4 numbers) is more compact than the representation as an<br />
orthogonal matrix (9 numbers). Furthermore, for a given axis and angle, one can easily construct the corresponding<br />
quaternion, and conversely, for a given quaternion one can easily read off the axis and the angle. Both of these are<br />
much harder with matrices or Euler angles.<br />
In computer games and other applications, one is often interested in “smooth rotations”, meaning that the scene<br />
should slowly rotate and not in a single step. This can be accomplished by choosing a curve such as the spherical<br />
linear interpolation in the quaternions, with one endpoint being the identity transformation 1 (or some other initial<br />
rotation) and the other being the intended final rotation. This is more problematic with other representations of<br />
rotations.<br />
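A minimal spherical linear interpolation (slerp) can be sketched as follows; the function name and the (w, x, y, z) ordering are this example's conventions, and the short-arc sign flip and small-angle fallback are standard practical details rather than part of the article's text:

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    dot = sum(a*b for a, b in zip(q0, q1))
    if dot < 0.0:                      # take the shorter of the two arcs
        q1 = tuple(-c for c in q1)
        dot = -dot
    dot = min(dot, 1.0)
    theta = math.acos(dot)             # angle between the two quaternions
    if theta < 1e-6:                   # nearly identical: plain lerp suffices
        return tuple((1-t)*a + t*b for a, b in zip(q0, q1))
    s = math.sin(theta)
    w0 = math.sin((1-t)*theta) / s
    w1 = math.sin(t*theta) / s
    return tuple(w0*a + w1*b for a, b in zip(q0, q1))

identity = (1.0, 0.0, 0.0, 0.0)
qz90 = (math.cos(math.pi/4), 0.0, 0.0, math.sin(math.pi/4))  # 90 deg about z
half = slerp(identity, qz90, 0.5)      # a 45-degree rotation about z
```

Sampling `t` from 0 to 1 sweeps smoothly from the initial to the final rotation, which is exactly the "smooth rotation" use case described above.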
When composing several rotations on a computer, rounding errors necessarily accumulate. A quaternion that’s<br />
slightly off still represents a rotation after being normalised; a matrix that’s slightly off may not be orthogonal<br />
anymore and is harder to convert back to a proper orthogonal matrix.<br />
Quaternions also avoid a phenomenon called gimbal lock which can result when, for example in pitch/yaw/roll<br />
rotational systems, the pitch is rotated 90° up or down, so that yaw and roll then correspond to the same motion, and<br />
a degree of freedom of rotation is lost. In a gimbal-based aerospace inertial navigation system, for instance, this<br />
could have disastrous results if the aircraft is in a steep dive or ascent.
Conversion to and from the matrix representation<br />
From a quaternion to an orthogonal matrix<br />
The orthogonal matrix corresponding to a rotation by the unit quaternion (with |z| = 1) is<br />
given by<br />
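As a sketch of that matrix in code (assuming (w, x, y, z) component ordering and the column-vector convention discussed below; the function name is this example's own):

```python
import math

def quat_to_matrix(q):
    """3x3 rotation matrix for a unit quaternion q = (w, x, y, z),
    column-vector convention (v' = R v)."""
    w, x, y, z = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

# 90 degrees about z: the matrix maps the x axis onto the y axis
R = quat_to_matrix((math.cos(math.pi/4), 0.0, 0.0, math.sin(math.pi/4)))
```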
From an orthogonal matrix to a quaternion<br />
Finding the quaternion that corresponds to a rotation matrix can be numerically<br />
unstable if the trace (sum of the diagonal elements) of the rotation matrix is zero or very small. A robust method is to<br />
choose the diagonal element with the largest value. Let uvw be an even permutation of xyz (i.e. xyz, yzx or zxy).<br />
The value<br />
will be a real number because the expression under the square root is non-negative. If r is zero the matrix is the identity matrix, and the<br />
quaternion must be the identity quaternion (1, 0, 0, 0). Otherwise the quaternion can be calculated as follows: [3]<br />
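The branch-on-largest-diagonal procedure described above can be sketched as follows. The helper name, the (w, x, y, z) output ordering, and the column-vector matrix convention are assumptions of this example; the large-trace branch is the common fast path added for completeness:

```python
import math

def matrix_to_quat(R):
    """Unit quaternion (w, x, y, z) from rotation matrix R
    (column-vector convention), picking the numerically stable branch."""
    t = R[0][0] + R[1][1] + R[2][2]
    if t > 0:                                   # large trace: safe direct path
        r = math.sqrt(1 + t)
        s = 0.5 / r
        return (0.5*r,
                (R[2][1] - R[1][2]) * s,
                (R[0][2] - R[2][0]) * s,
                (R[1][0] - R[0][1]) * s)
    # otherwise let u be the largest diagonal element, (u, v, w) an even
    # permutation of (x, y, z), as in the text
    u = max(range(3), key=lambda i: R[i][i])
    v, wi = (u + 1) % 3, (u + 2) % 3
    r = math.sqrt(1 + R[u][u] - R[v][v] - R[wi][wi])
    s = 0.5 / r
    q = [0.0, 0.0, 0.0, 0.0]
    q[0] = (R[wi][v] - R[v][wi]) * s            # scalar part
    q[1 + u] = 0.5 * r
    q[1 + v] = (R[v][u] + R[u][v]) * s
    q[1 + wi] = (R[wi][u] + R[u][wi]) * s
    return tuple(q)
```

For example, the matrix of a 180° rotation about x (diagonal 1, −1, −1) recovers the quaternion (0, 1, 0, 0).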
Beware the vector convention: there are two conventions for rotation matrices: one assumes row vectors on the left;<br />
the other assumes column vectors on the right; the two conventions generate matrices that are the transpose of each<br />
other. The above matrix assumes column vectors on the right. In general, a matrix for vertex transformation is ambiguous<br />
unless the vector convention is also mentioned. Historically, the column-on-the-right convention comes from<br />
mathematics and classical mechanics, whereas row-vector-on-the-left comes from computer graphics, where<br />
typesetting row vectors was easier back in the early days.<br />
(Compare the equivalent general formula for a 3 × 3 rotation matrix in terms of the axis and the angle.)<br />
Fitting quaternions<br />
The above section described how to recover a quaternion q from a 3 × 3 rotation matrix Q. Suppose, however, that<br />
we have some matrix Q that is not a pure rotation — due to round-off errors, for example — and we wish to find the<br />
quaternion q that most accurately represents Q. In that case we construct a symmetric 4×4 matrix<br />
and find the eigenvector (x,y,z,w) corresponding to the largest eigenvalue (that value will be 1 if and only if Q is a<br />
pure rotation). The quaternion so obtained will correspond to the rotation closest to the original matrix Q [4]
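This fit can be sketched in pure Python. The symmetric matrix K below is one standard form of the Bar-Itzhack construction (for column-vector rotation matrices), with eigenvector ordered (x, y, z, w) as in the text; the function name is hypothetical, and power iteration on K + I stands in for a full symmetric eigensolver (the result is determined only up to sign):

```python
import math

def nearest_quaternion(Q):
    """Quaternion (x, y, z, w) of the rotation closest to a (possibly noisy)
    3x3 matrix Q, via the dominant eigenvector of a symmetric 4x4 matrix."""
    K = [[(Q[0][0]-Q[1][1]-Q[2][2])/3, (Q[1][0]+Q[0][1])/3,
          (Q[2][0]+Q[0][2])/3,         (Q[2][1]-Q[1][2])/3],
         [(Q[1][0]+Q[0][1])/3,         (Q[1][1]-Q[0][0]-Q[2][2])/3,
          (Q[2][1]+Q[1][2])/3,         (Q[0][2]-Q[2][0])/3],
         [(Q[2][0]+Q[0][2])/3,         (Q[2][1]+Q[1][2])/3,
          (Q[2][2]-Q[0][0]-Q[1][1])/3, (Q[1][0]-Q[0][1])/3],
         [(Q[2][1]-Q[1][2])/3,         (Q[0][2]-Q[2][0])/3,
          (Q[1][0]-Q[0][1])/3,         (Q[0][0]+Q[1][1]+Q[2][2])/3]]
    v = [1.0, 1.0, 1.0, 1.0]
    for _ in range(200):
        # shift by the identity so the target eigenvalue is strictly dominant
        v = [sum(K[i][j]*v[j] for j in range(4)) + v[i] for i in range(4)]
        n = math.sqrt(sum(c*c for c in v))
        v = [c / n for c in v]
    return tuple(v)
```

Feeding in an exact rotation matrix returns its quaternion; feeding in a slightly perturbed matrix returns the quaternion of the nearest rotation.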
Performance comparisons with other rotation methods<br />
This section discusses the performance implications of using quaternions versus other methods (axis/angle or<br />
rotation matrices) to perform rotations in 3D.<br />
Results<br />
Storage requirements<br />
Method Storage<br />
Rotation matrix 9<br />
Quaternion 4<br />
Angle/axis 3*<br />
* Note: angle-axis can be stored as 3 elements by multiplying the unit rotation axis by the rotation angle, forming the<br />
logarithm of the quaternion, at the cost of additional calculations.<br />
Used methods<br />
Performance comparison of rotation chaining operations<br />
Method # multiplies # add/subtracts total operations<br />
Rotation matrices 27 18 45<br />
Quaternions 16 12 28<br />
Performance comparison of vector rotating operations<br />
Method # multiplies # add/subtracts # sin/cos total operations<br />
Rotation matrix 9 6 0 15<br />
Quaternions 18 12 0 30<br />
Angle/axis 23 16 2 41<br />
There are three basic approaches to rotating a vector:<br />
1. Compute the matrix product of a 3×3 rotation matrix R and the original 3×1 column matrix representing the vector. This<br />
requires 3×(3 multiplications + 2 additions) = 9 multiplications and 6 additions, the most efficient method for<br />
rotating a vector.<br />
2. Using the quaternion–vector rotation formula derived above, the rotated vector can be<br />
evaluated directly via two quaternion products from the definition. However, the number of multiply/add<br />
operations can be minimised by expanding both quaternion products into vector operations. Further applying a number of<br />
vector identities yields an expression which requires only 18 multiplies and 12 additions<br />
to evaluate. As a second approach, the quaternion could first be converted to its equivalent angle/axis<br />
representation, then the angle/axis representation used to rotate the vector. However, this is both less efficient and<br />
less numerically stable when the quaternion nears the no-rotation point.<br />
3. Use the angle-axis formula to convert an angle/axis to a rotation matrix R, then multiply by the vector.<br />
Converting the angle/axis to R using common subexpression elimination costs 14 multiplies, 2 function calls (sin,<br />
cos), and 10 add/subtracts; from item 1, rotating using R adds an additional 9 multiplications and 6 additions for a<br />
total of 23 multiplies, 16 add/subtracts, and 2 function calls (sin, cos).
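One common 18-multiply/12-addition expansion of approach 2 uses two cross products; this sketch (names are this example's own, q = (w, x, y, z) with vector part u) implements v′ = v + w·t + u × t with t = 2(u × v):

```python
def rotate_fast(q, v):
    """Rotate v by unit quaternion q = (w, x, y, z) without forming full
    quaternion products: t = 2 (u x v), v' = v + w t + u x t."""
    w = q[0]
    u = q[1:]
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])
    t = tuple(2*c for c in cross(u, v))   # 6 multiplies + 3 scalings
    uxt = cross(u, t)                      # 6 more multiplies
    return tuple(v[i] + w*t[i] + uxt[i] for i in range(3))

# 120 degrees about (1,1,1): permutes the coordinate axes cyclically
v = rotate_fast((0.5, 0.5, 0.5, 0.5), (1.0, 0.0, 0.0))
```

Counting the operations above (9 + 3 + 6 multiplies; 3 + 3 + 6 additions) reproduces the 18/30-style totals in the table.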
Pairs of unit quaternions as rotations in 4D space<br />
A pair of unit quaternions z l and z r can represent any rotation in 4D space. Given a four-dimensional vector, and<br />
pretending that it is a quaternion, we can rotate the vector like this:<br />
It is straightforward to check that for each matrix M, M M T = I; that is, each matrix (and hence both matrices<br />
together) represents a rotation. Note that since , the two matrices must commute. Therefore,<br />
there are two commuting subgroups of the set of four-dimensional rotations. Arbitrary four-dimensional rotations<br />
have 6 degrees of freedom; each matrix represents 3 of those 6 degrees of freedom.<br />
Since an infinitesimal four-dimensional rotation can be represented by a pair of quaternions (as follows), all<br />
(non-infinitesimal) four-dimensional rotations can also be represented.<br />
References<br />
[1] Kuipers, Jack B. Quaternions and Rotation Sequences: A Primer with Applications to Orbits, Aerospace, and Virtual Reality. Princeton<br />
University Press, 1999.<br />
[2] Altmann, Simon L. Rotations, Quaternions, and Double Groups. Dover Publications, 1986 (see especially Ch. 12).<br />
[3] The Java 3D Community Site, Matrix FAQ, Q55 (http://www.j3d.org/matrix_faq/matrfaq_latest.html#Q55)<br />
[4] Bar-Itzhack, Itzhack Y. (2000), "New method for extracting the quaternion from a rotation matrix", AIAA Journal of Guidance, Control and<br />
Dynamics 23 (6): 1085–1087 (Engineering Note), doi:10.2514/2.4654, ISSN 0731-5090<br />
External links and resources<br />
• Shoemake, Ken. Quaternions (http://www.cs.caltech.edu/courses/cs171/quatut.pdf)<br />
• Simple Quaternion type and operations in more than twenty different languages (http://rosettacode.org/wiki/Simple_Quaternion_type_and_operations) on Rosetta Code<br />
• Hart, Francis, Kauffman. Quaternion demo (http://graphics.stanford.edu/courses/cs348c-95-fall/software/quatdemo/)<br />
• Dam, Koch, Lillholm. Quaternions, Interpolation and Animation (http://www.diku.dk/publikationer/tekniske.rapporter/1998/98-5.ps.gz)<br />
• Byung-Uk Lee. Unit Quaternion Representation of Rotation (http://home.ewha.ac.kr/~bulee/quaternion.pdf)<br />
• Ibanez, Luis. Quaternion Tutorial I (http://www.itk.org/CourseWare/Training/QuaternionsI.pdf)<br />
• Ibanez, Luis. Quaternion Tutorial II (http://www.itk.org/CourseWare/Training/QuaternionsII.pdf)<br />
• Vicci, Leandra. Quaternions and Rotations in 3-Space: The Algebra and its Geometric Interpretation (ftp://ftp.cs.unc.edu/pub/techreports/01-014.pdf)<br />
• Howell, Thomas and Lafon, Jean-Claude. The Complexity of the Quaternion Product, TR75-245, Cornell University, 1975 (http://world.std.com/~sweetser/quaternions/ps/cornellcstr75-245.pdf)<br />
• Berthold K. P. Horn. Some Notes on Unit Quaternions and Rotation (http://people.csail.mit.edu/bkph/articles/Quaternions.pdf)<br />
Radiosity<br />
Radiosity is a global illumination algorithm used in 3D computer graphics rendering. Radiosity is an application of the finite element<br />
method to solving the rendering equation for scenes with purely diffuse surfaces. Unlike Monte Carlo<br />
algorithms (such as path tracing), which handle all types of light paths, typical radiosity methods only account for<br />
paths which leave a light source and are reflected diffusely some number of times (possibly zero) before hitting the<br />
eye. Such paths are represented as "LD*E". Radiosity calculations are viewpoint independent, which increases<br />
the computations involved, but makes them useful for all viewpoints.<br />
Radiosity methods were first developed in about 1950 in the engineering field of heat transfer. They were later refined specifically for<br />
application to the problem of rendering computer graphics in 1984 by researchers at Cornell University. [1]<br />
[Figure: A simple scene (Cornell box) lit both with and without radiosity. Note that in the absence of radiosity, surfaces that are not lit directly (areas in shadow) lack visual detail and are completely dark. Also note that the colour of bounced light from radiosity reflects the colour of the surfaces it bounced off of.]<br />
Notable commercial radiosity engines are Enlighten by Geomerics (as seen in titles such as Battlefield 3, Need for<br />
Speed and others), Lightscape (now incorporated into the Autodesk 3D Studio Max internal render engine), form•Z<br />
RenderZone Plus (by AutoDesSys, Inc.), the built-in render engine in LightWave 3D, and EIAS (Electric Image<br />
Animation System).<br />
Radiosity 139<br />
[Figure: Screenshot of a scene rendered with RRV (a simple implementation of a radiosity renderer based on OpenGL), 79th iteration.]<br />
Visual characteristics<br />
The inclusion of radiosity calculations in the rendering process often lends an added element of realism to the<br />
finished scene, because of the way it mimics real-world phenomena. Consider a simple room scene.<br />
[Figure: Difference between standard direct illumination and radiosity.]<br />
The image on the left was rendered with a typical direct illumination renderer. There are three types of lighting in<br />
this scene which have been specifically chosen and placed by the artist in an attempt to create realistic<br />
lighting: spot lighting with shadows (placed outside the window to create the light shining on the floor), ambient<br />
lighting (without which any part of the room not lit directly by a light source would be totally dark), and<br />
omnidirectional lighting without shadows (to reduce the flatness of the ambient lighting).<br />
The image on the right was rendered using a radiosity algorithm. There is only one source of light: an image of the<br />
sky placed outside the window. The difference is marked. The room glows with light. Soft shadows are visible on<br />
the floor, and subtle lighting effects are noticeable around the room. Furthermore, the red color from the carpet has<br />
bled onto the grey walls, giving them a slightly warm appearance. None of these effects were specifically chosen or<br />
designed by the artist.
Overview of the radiosity algorithm<br />
The surfaces of the scene to be rendered are each divided up into one or more smaller surfaces (patches). A view<br />
factor is computed for each pair of patches. View factors (also known as form factors) are coefficients describing<br />
how well the patches can see each other. Patches that are far away from each other, or oriented at oblique angles<br />
relative to one another, will have smaller view factors. If other patches are in the way, the view factor will be<br />
reduced or zero, depending on whether the occlusion is partial or total.<br />
The view factors are used as coefficients in a linearized form of the rendering equation, which yields a linear system<br />
of equations. Solving this system yields the radiosity, or brightness, of each patch, taking into account diffuse<br />
interreflections and soft shadows.<br />
Progressive radiosity solves the system iteratively in such a way that after each iteration we have intermediate<br />
radiosity values for the patch. These intermediate values correspond to bounce levels. That is, after one iteration, we<br />
know how the scene looks after one light bounce, after two passes, two bounces, and so forth. Progressive radiosity<br />
is useful for getting an interactive preview of the scene. Also, the user can stop the iterations once the image looks<br />
good enough, rather than wait for the computation to numerically converge.<br />
Another common method for solving the radiosity equation is "shooting radiosity," which iteratively solves the<br />
radiosity equation by "shooting" light from the patch with the most error at each step. After the first pass, only<br />
those patches which are in direct line of sight of a light-emitting patch will be illuminated. After the second pass,<br />
more patches will become illuminated as the light begins to bounce around the scene. The scene continues to grow<br />
brighter and eventually reaches a steady state.<br />
[Figure: As the algorithm iterates, light can be seen to flow into the scene, as multiple bounces are computed. Individual patches are visible as squares on the walls and floor.]<br />
Mathematical formulation<br />
The basic radiosity method has its basis in the theory of thermal radiation, since radiosity relies on computing the<br />
amount of light energy transferred among surfaces. In order to simplify computations, the method assumes that all<br />
scattering is perfectly diffuse. Surfaces are typically discretized into quadrilateral or triangular elements over which a<br />
piecewise polynomial function is defined.<br />
After this breakdown, the amount of light energy transfer can be computed by using the known reflectivity of the<br />
reflecting patch, combined with the view factor of the two patches. This dimensionless quantity is computed from<br />
the geometric orientation of two patches, and can be thought of as the fraction of the total possible emitting area of<br />
the first patch which is covered by the second patch.<br />
More correctly, radiosity B is the energy per unit area leaving the patch surface per discrete time interval and is the<br />
combination of emitted and reflected energy:<br />
where:<br />
• B(x) dA is the total energy leaving a small area dA around a point x.<br />
• E(x) dA is the emitted energy.<br />
• ρ(x) is the reflectivity of the point, giving the reflected energy per unit area when multiplied by the incident energy per<br />
unit area (the total energy which arrives from other patches).<br />
• S denotes that the integration variable x' runs over all the surfaces in the scene<br />
• r is the distance between x and x'
• θ x and θ x' are the angles between the line joining x and x' and vectors normal to the surface at x and x'<br />
respectively.<br />
• Vis(x,x' ) is a visibility function, defined to be 1 if the two points x and x' are visible from each other, and 0 if they<br />
are not.<br />
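Assembled from the terms defined above, the equation takes the standard form:

```latex
B(x)\,\mathrm{d}A \;=\; E(x)\,\mathrm{d}A
  \;+\; \rho(x)\,\mathrm{d}A \int_{S} B(x')\,
        \frac{\cos\theta_x \,\cos\theta_{x'}}{\pi r^2}\,
        \mathrm{Vis}(x, x')\,\mathrm{d}A'
```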
If the surfaces are approximated by a finite number of planar patches,<br />
each of which is taken to have a constant radiosity B i and reflectivity<br />
ρ i , the above equation gives the discrete radiosity equation,<br />
where F ij is the geometrical view factor for the radiation leaving j and hitting patch i.<br />
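In standard notation, the discrete equation just described reads:

```latex
B_i \;=\; E_i \;+\; \rho_i \sum_{j=1}^{n} F_{ij}\, B_j
```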
[Figure: The geometrical form factor (or "projected solid angle") F ij can be obtained by projecting the element A j onto the surface of a unit hemisphere, and then projecting that in turn onto a unit circle around the point of interest in the plane of A i. The form factor is then equal to the proportion of the unit circle covered by this projection. Form factors obey the reciprocity relation A i F ij = A j F ji.]<br />
This equation can then be applied to each patch. The equation is monochromatic, so color radiosity rendering<br />
requires calculation for each of the required colors.<br />
Solution methods<br />
The equation can formally be solved as matrix equation, to give the vector solution:<br />
This gives the full "infinite bounce" solution for B directly. However the number of calculations to compute the<br />
matrix solution scales according to n³, where n is the number of patches. This becomes prohibitive for realistically<br />
large values of n.<br />
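With ρ understood as the diagonal matrix of reflectivities and F the matrix of form factors, the formal solution referred to above is:

```latex
B \;=\; (I - \rho F)^{-1} E
```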
Instead, the equation can more readily be solved iteratively, by repeatedly applying the single-bounce update formula<br />
above. Formally, this is a solution of the matrix equation by Jacobi iteration. Because the reflectivities ρ i are less<br />
than 1, this scheme converges quickly, typically requiring only a handful of iterations to produce a reasonable<br />
solution. Other standard iterative methods for matrix equation solutions can also be used, for example the<br />
Gauss–Seidel method, where updated values for each patch are used in the calculation as soon as they are computed,<br />
rather than all being updated synchronously at the end of each sweep. The solution can also be tweaked to iterate<br />
over each of the sending elements in turn in its main outermost loop for each update, rather than each of the<br />
receiving patches. This is known as the shooting variant of the algorithm, as opposed to the gathering variant. Using<br />
the view factor reciprocity, A i F ij = A j F ji , the update equation can also be re-written in terms of the view factor F ji<br />
seen by each sending patch A j :
This is sometimes known as the "power" formulation, since it is now the total transmitted power of each element that<br />
is being updated, rather than its radiosity.<br />
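The gathering-style Jacobi iteration described above can be sketched in a few lines. This is an illustrative toy, not a production solver: the function name is this example's own, and the form factors and reflectivities are assumed to have been computed already:

```python
def solve_radiosity(E, rho, F, iterations=50):
    """Jacobi iteration of B_i = E_i + rho_i * sum_j F_ij * B_j.
    E: emission per patch, rho: reflectivity per patch,
    F: row-indexed form factor matrix F[i][j]."""
    n = len(E)
    B = list(E)                        # zeroth bounce: emitted light only
    for _ in range(iterations):
        B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]        # each sweep adds one more bounce
    return B

# Toy scene: two patches, each sending half its outgoing light to the other.
# Patch 0 emits one unit; both reflect half of what they gather.
B = solve_radiosity(E=[1.0, 0.0], rho=[0.5, 0.5],
                    F=[[0.0, 0.5], [0.5, 0.0]])
```

Because the reflectivities are below 1, the sweep converges geometrically; intermediate values of `B` are exactly the bounce-level snapshots that progressive radiosity exposes as previews.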
The view factor F ij itself can be calculated in a number of ways. Early methods used a hemicube (an imaginary cube<br />
centered upon the first surface to which the second surface was projected, devised by Cohen and Greenberg in 1985).<br />
The surface of the hemicube was divided into pixel-like squares, for each of which a view factor can be readily<br />
calculated analytically. The full form factor could then be approximated by adding up the contribution from each of<br />
the pixel-like squares. The projection onto the hemicube, which could be adapted from standard methods for<br />
determining the visibility of polygons, also solved the problem of intervening patches partially obscuring those<br />
behind.<br />
However all this was quite computationally expensive, because ideally form factors must be derived for every<br />
possible pair of patches, leading to a quadratic increase in computation as the number of patches increased. This can<br />
be reduced somewhat by using a binary space partitioning tree to reduce the amount of time spent determining which<br />
patches are completely hidden from others in complex scenes; but even so, the time spent to determine the form<br />
factor still typically scales as n log n. Newer methods include adaptive integration. [2]<br />
Sampling approaches<br />
The form factors F ij themselves are not in fact explicitly needed in either of the update equations; neither to estimate<br />
the total intensity ∑ j F ij B j gathered from the whole view, nor to estimate how the power A j B j being radiated is<br />
distributed. Instead, these updates can be estimated by sampling methods, without ever having to calculate form<br />
factors explicitly. Since the mid 1990s such sampling approaches have been the methods most predominantly used<br />
for practical radiosity calculations.<br />
The gathered intensity can be estimated by generating a set of samples in the unit circle, lifting these onto the<br />
hemisphere, and then seeing what was the radiosity of the element that a ray incoming in that direction would have<br />
originated on. The estimate for the total gathered intensity is then just the average of the radiosities discovered by<br />
each ray. Similarly, in the power formulation, power can be distributed by generating a set of rays from the radiating<br />
element in the same way, and spreading the power to be distributed equally between each element a ray hits.<br />
This is essentially the same distribution that a path-tracing program would sample in tracing back one diffuse<br />
reflection step; or that a bidirectional ray tracing program would sample to achieve one forward diffuse reflection<br />
step when light source mapping forwards. The sampling approach therefore to some extent represents a convergence<br />
between the two techniques, the key difference remaining that the radiosity technique aims to build up a sufficiently<br />
accurate map of the radiance of all the surfaces in the scene, rather than just a representation of the current view.<br />
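The "lift a point from the unit circle onto the hemisphere" construction described above yields cosine-weighted directions, and can be sketched as follows (the function name is this example's own; the expected mean of cos θ under this distribution is 2/3, which makes a handy sanity check):

```python
import math
import random

def cosine_weighted_direction(rng):
    """Sample a direction on the hemisphere about +z by picking a uniform
    point in the unit circle and lifting it onto the hemisphere."""
    r = math.sqrt(rng.random())              # uniform over the unit disk
    phi = 2 * math.pi * rng.random()
    x, y = r * math.cos(phi), r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - x*x - y*y)) # lift onto the hemisphere
    return (x, y, z)

rng = random.Random(1)                       # seeded for reproducibility
dirs = [cosine_weighted_direction(rng) for _ in range(100000)]
mean_cos = sum(d[2] for d in dirs) / len(dirs)   # close to 2/3
```

Averaging the radiosities found along such rays estimates the gathered intensity without any explicit form factors, exactly as the text describes.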
Reducing computation time<br />
Although in its basic form radiosity is assumed to have a quadratic increase in computation time with added<br />
geometry (surfaces and patches), this need not be the case. The radiosity problem can be rephrased as a problem of<br />
rendering a texture mapped scene. In this case, the computation time increases only linearly with the number of<br />
patches (ignoring complex issues like cache use).<br />
Following the commercial enthusiasm for radiosity-enhanced imagery, but prior to the standardization of rapid<br />
radiosity calculation, many architects and graphic artists used a technique referred to loosely as false radiosity. By<br />
darkening areas of texture maps corresponding to corners, joints and recesses, and applying them via<br />
self-illumination or diffuse mapping, a radiosity-like effect of patch interaction could be created with a standard<br />
scanline renderer (cf. ambient occlusion).
Radiosity solutions may be displayed in realtime via lightmaps on current desktop computers with standard graphics<br />
acceleration hardware.<br />
Advantages<br />
One of the advantages of the Radiosity algorithm is that it is relatively<br />
simple to explain and implement. This makes it a useful algorithm for<br />
teaching students about global illumination algorithms. A typical direct<br />
illumination renderer already contains nearly all of the algorithms<br />
(perspective transformations, texture mapping, hidden surface<br />
removal) required to implement radiosity. A strong grasp of<br />
mathematics is not required to understand or implement this algorithm.<br />
Limitations<br />
[Figure: A modern render of the iconic Utah teapot. Radiosity was used for all diffuse illumination in this scene.]<br />
Typical radiosity methods only account for light paths of the form LD*E, i.e., paths which start at a light source and make multiple<br />
diffuse bounces before reaching the eye. Although there are several approaches to integrating other illumination<br />
effects such as specular [3] and glossy [4] reflections, radiosity-based methods are generally not used to solve the<br />
complete rendering equation.<br />
Basic radiosity also has trouble resolving sudden changes in visibility (e.g., hard-edged shadows) because coarse,<br />
regular discretization into piecewise constant elements corresponds to a low-pass box filter of the spatial domain.<br />
Discontinuity meshing [5] uses knowledge of visibility events to generate a more intelligent discretization.<br />
Confusion about terminology<br />
Radiosity was perhaps the first rendering algorithm in widespread use which accounted for diffuse indirect lighting.<br />
Earlier rendering algorithms, such as Whitted-style ray tracing were capable of computing effects such as reflections,<br />
refractions, and shadows, but despite being highly global phenomena, these effects were not commonly referred to as<br />
"global illumination." As a consequence, the term "global illumination" became confused with "diffuse<br />
interreflection," and "Radiosity" became confused with "global illumination" in popular parlance. However, the three<br />
are distinct concepts.<br />
The radiosity method in the current computer <strong>graphics</strong> context derives from (and is fundamentally the same as) the<br />
radiosity method in heat transfer. In this context radiosity is the total radiative flux (both reflected and re-radiated)<br />
leaving a surface, also sometimes known as radiant exitance. Calculation of Radiosity rather than surface<br />
temperatures is a key aspect of the radiosity method that permits linear matrix methods to be applied to the problem.
References<br />
[1] "Modeling the interaction of light between diffuse surfaces (http://www.cs.rpi.edu/~cutler/classes/advancedgraphics/S07/lectures/goral.pdf)", C. Goral, K. E. Torrance, D. P. Greenberg and B. Battaile, Computer Graphics, Vol. 18, No. 3.<br />
[2] G. Walton, Calculation of Obstructed View Factors by Adaptive Integration, NIST Report NISTIR-6925 (http://www.bfrl.nist.gov/IAQanalysis/docs/NISTIR-6925.pdf); see also http://view3d.sourceforge.net/<br />
[3] http://portal.acm.org/citation.cfm?id=37438&coll=portal&dl=ACM<br />
[4] http://www.cs.huji.ac.il/labs/cglab/papers/clustering/<br />
[5] http://www.cs.cmu.edu/~ph/discon.ps.gz<br />
External links<br />
• Radiosity Overview, from HyperGraph of SIGGRAPH (http://www.siggraph.org/education/materials/HyperGraph/radiosity/overview_1.htm) (provides full matrix radiosity algorithm and progressive radiosity algorithm)<br />
• Radiosity, by Hugo Elias (http://freespace.virgin.net/hugo.elias/radiosity/radiosity.htm) (also provides a general overview of lighting algorithms, along with programming examples)<br />
• Radiosity, by Allen Martin (http://web.cs.wpi.edu/~matt/courses/cs563/talks/radiosity.html) (a slightly more mathematical explanation of radiosity)<br />
• RADical, by Parag Chaudhuri (http://www.cse.iitd.ernet.in/~parag/projects/CG2/asign2/report/RADical.shtml) (an implementation of the shooting & sorting variant of the progressive radiosity algorithm with OpenGL acceleration, extending from GLUTRAD by Colbeck)<br />
• ROVER, by Tralvex Yeap (http://www.tralvex.com/pub/rover/abs-mnu.htm) (Radiosity Abstracts & Bibliography Library)<br />
• Radiosity Renderer and Visualizer (http://dudka.cz/rrv) (simple implementation of a radiosity renderer based on OpenGL)<br />
• Enlighten (http://www.geomerics.com) (licensed software code that provides realtime radiosity for computer game applications; developed by the UK company Geomerics)<br />
Ray casting 145<br />
Ray casting<br />
Ray casting is the use of ray–surface intersection tests to solve a variety of problems in computer graphics. It enables<br />
spatial selection of objects in a scene: a virtual beam extending from a device such as a baton or glove serves as a<br />
visual cue as it intersects objects in the environment. The term was first used in computer<br />
graphics in a 1982 paper by Scott Roth to describe a method for rendering CSG models. [1]<br />
Usage<br />
Ray casting can refer to:<br />
• the general problem of determining the first object intersected by a ray, [2]<br />
• a technique for hidden surface removal based on finding the first intersection of a ray cast from the eye through<br />
each pixel of an image,<br />
• a non-recursive variant of ray tracing that only casts primary rays, or<br />
• a direct volume rendering method, also called volume ray casting.<br />
Although "ray casting" and "ray tracing" were often used interchangeably in early computer <strong>graphics</strong> literature, [3]<br />
more recent usage tries to distinguish the two. [4] The distinction is merely that ray casting never recursively traces<br />
secondary rays, whereas ray tracing may.<br />
Concept<br />
Ray casting is not a synonym for ray tracing, but can be thought of as an abridged, and significantly faster, version of<br />
the ray tracing algorithm. Both are image order algorithms used in computer <strong>graphics</strong> to render three dimensional<br />
scenes to two dimensional screens by following rays of light from the eye of the observer to a light source. Ray<br />
casting does not compute the new direction a ray of light might take after intersecting a surface on its way from the<br />
eye to the source of light. This eliminates the possibility of accurately rendering reflections, refractions, or the<br />
natural falloff of shadows; however all of these elements can be faked to a degree, by creative use of texture maps or<br />
other methods. The high speed of calculation made ray casting a handy rendering method in early real-time <strong>3D</strong> video<br />
games.<br />
In nature, a light source emits a ray of light that travels, eventually, to a surface that interrupts its progress. One can<br />
think of this "ray" as a stream of photons travelling along the same path. At this point, any combination of three<br />
things might happen with this light ray: absorption, reflection, and refraction. The surface may reflect all or part of<br />
the light ray, in one or more directions. It might also absorb part of the light ray, resulting in a loss of intensity of the<br />
reflected and/or refracted light. If the surface has any transparent or translucent properties, it refracts a portion of the<br />
light beam into itself in a different direction while absorbing some (or all) of the spectrum (and possibly altering the<br />
color). Between absorption, reflection, and refraction, all of the incoming light must be accounted for, and no more.<br />
A surface cannot, for instance, reflect 66% of an incoming light ray, and refract 50%, since the two would add up to<br />
be 116%. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive, refractive,<br />
and reflective properties are again calculated based on the incoming rays. Some of these rays travel in such a way<br />
that they hit our eye, causing us to see the scene and so contribute to the final rendered image. Attempting to<br />
simulate this real-world process of tracing light rays using a computer can be considered extremely wasteful, as only<br />
a minuscule fraction of the rays in a scene would actually reach the eye.<br />
The first ray casting (versus ray tracing) algorithm used for rendering was presented by Arthur Appel in 1968. [5] The<br />
idea behind ray casting is to shoot rays from the eye, one per pixel, and find the closest object blocking the path of<br />
that ray - think of an image as a screen-door, with each square in the screen being a pixel. This is then the object the<br />
eye normally sees through that pixel. Using the material properties and the effect of the lights in the scene, this<br />
algorithm can determine the shading of this object. The simplifying assumption is made that if a surface faces a light,
the light will reach that surface and not be blocked or in shadow. The shading of the surface is computed using<br />
traditional <strong>3D</strong> computer <strong>graphics</strong> shading models. One important advantage ray casting offered over older scanline<br />
algorithms is its ability to easily deal with non-planar surfaces and solids, such as cones and spheres. If a<br />
mathematical surface can be intersected by a ray, it can be rendered using ray casting. Elaborate objects can be<br />
created by using solid modelling techniques and easily rendered.<br />
Ray casting for producing computer <strong>graphics</strong> was first used by scientists at Mathematical Applications Group, Inc.,<br />
(MAGI) of Elmsford, New York. [6]<br />
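The "screen-door" analogy above maps directly to code: each pixel defines one primary ray from the eye through the image plane. The sketch below is a hypothetical helper, not from the source; it generates such a ray for a simple pinhole camera at the origin looking down +z, and the field-of-view and pixel conventions are assumptions.

```python
import math

def primary_ray(px, py, width, height, fov_deg=60.0):
    """Unit eye-ray direction through pixel (px, py): the image plane is a
    'screen door' one unit in front of an eye at the origin, looking down +z."""
    aspect = width / height
    half = math.tan(math.radians(fov_deg) / 2.0)
    # map pixel centers to [-1, 1] on the image plane, then scale by fov
    x = (2.0 * (px + 0.5) / width - 1.0) * half * aspect
    y = (1.0 - 2.0 * (py + 0.5) / height) * half
    norm = math.sqrt(x * x + y * y + 1.0)
    return (x / norm, y / norm, 1.0 / norm)
```

A ray caster would call this once per pixel and hand the direction to an intersection routine.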
Ray casting in computer games<br />
Wolfenstein 3-D<br />
The world in Wolfenstein 3-D is built from a square based grid of uniform height walls meeting solid coloured floors<br />
and ceilings. In order to draw the world, a single ray is traced for every column of screen pixels and a vertical slice<br />
of wall texture is selected and scaled according to where in the world the ray hits a wall and how far it travels before<br />
doing so. [7]<br />
The purpose of the grid-based levels is twofold: ray-to-wall collisions can be found more quickly, since the potential<br />
hits become more predictable, and memory overhead is reduced. However, encoding wide-open areas takes extra<br />
space.<br />
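The grid marching described above can be sketched as a digital differential analyzer (DDA) that steps from one grid-line crossing to the next. This is a minimal illustration in the spirit of Wolfenstein 3-D's renderer, not its actual code; it assumes (dx, dy) is a unit vector and a non-integer starting position.

```python
def cast_grid_ray(grid, px, py, dx, dy):
    """March one ray through a square grid and return the distance travelled
    to the first wall cell (nonzero entry), or None if the ray leaves the map."""
    mx, my = int(px), int(py)            # current map cell
    inf = float('inf')
    ddx = abs(1.0 / dx) if dx else inf   # ray length between vertical grid lines
    ddy = abs(1.0 / dy) if dy else inf   # ray length between horizontal grid lines
    if dx > 0:
        step_x, sx = 1, (mx + 1 - px) * ddx
    else:
        step_x, sx = -1, (px - mx) * ddx
    if dy > 0:
        step_y, sy = 1, (my + 1 - py) * ddy
    else:
        step_y, sy = -1, (py - my) * ddy
    while True:
        if sx < sy:                      # next crossing is a vertical grid line
            dist, sx, mx = sx, sx + ddx, mx + step_x
        else:                            # next crossing is a horizontal grid line
            dist, sy, my = sy, sy + ddy, my + step_y
        if not (0 <= my < len(grid) and 0 <= mx < len(grid[0])):
            return None
        if grid[my][mx]:
            return dist
```

The returned distance would then set the height of the scaled wall slice for that screen column.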
Comanche series<br />
The so-called "Voxel Space" engine developed by NovaLogic for the Comanche games traces a ray through each<br />
column of screen pixels and tests each ray against points in a heightmap. Then it transforms each element of the<br />
heightmap into a column of pixels, determines which are visible (that is, have not been covered up by pixels that<br />
have been drawn in front), and draws them with the corresponding color from the texture map. [8]<br />
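A minimal sketch of that column-drawing idea follows, under assumed conventions (horizon at mid-screen, a simple 1/distance projection). The real Voxel Space engine's internals are not public, so the names and parameters here are illustrative.

```python
def render_column(heightmap, colormap, cam_x, cam_y, cam_h,
                  dx, dy, screen_h, scale=10.0, steps=200):
    """Walk outward along one ray, project each heightmap sample to a screen
    row, and draw only rows not already covered by a nearer sample
    (front-to-back occlusion)."""
    column = [None] * screen_h
    horizon = screen_h // 2
    lowest_open = screen_h               # rows above this are still uncovered
    for step in range(1, steps):
        x = int(cam_x + dx * step) % len(heightmap[0])
        y = int(cam_y + dy * step) % len(heightmap)
        # perspective: screen offset shrinks with distance (1/step)
        row = max(int(horizon + (cam_h - heightmap[y][x]) * scale / step), 0)
        if row < lowest_open:            # part of this sample is still visible
            for r in range(row, lowest_open):
                column[r] = colormap[y][x]
            lowest_open = row
    return column
```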
Computational geometry setting<br />
In computational geometry, the ray casting problem is also known as the ray shooting problem and may be stated<br />
as the following query problem. Given a set of objects in d-dimensional space, preprocess them into a data structure<br />
so that for each query ray the first object hit by the ray can be found quickly. The problem has been investigated for<br />
various settings: space dimension, types of objects, restrictions on query rays, etc. [9] One technique is to use a sparse<br />
voxel octree.<br />
References<br />
[1] Roth, Scott D. (February 1982), "Ray Casting for Modeling Solids", Computer Graphics and Image Processing 18 (2): 109–144,<br />
doi:10.1016/0146-664X(82)90169-1<br />
[2] Woop, Sven; Schmittler, Jörg; Slusallek, Philipp (2005), "RPU: A Programmable Ray Processing Unit for Realtime Ray Tracing", Siggraph<br />
2005 24 (3): 434, doi:10.1145/1073204.1073211<br />
[3] Foley, James D.; van Dam, Andries; Feiner, Steven K.; Hughes, John F. (1995), Computer Graphics: Principles and Practice,<br />
Addison-Wesley, p. 701, ISBN 0-201-84840-6<br />
[4] Boulos, Solomon (2005), Notes on efficient ray tracing, "ACM SIGGRAPH 2005 Courses on - SIGGRAPH '05", SIGGRAPH 2005 Courses:<br />
10, doi:10.1145/1198555.1198749<br />
[5] "Ray-tracing and other Rendering Approaches" (http://nccastaff.bournemouth.ac.uk/jmacey/CGF/slides/RayTracing4up.pdf) (PDF),<br />
lecture notes, MSc Computer Animation and Visual Effects, Jon Macey, University of Bournemouth<br />
[6] Goldstein, R. A., and R. Nagel. 3-D visual simulation. Simulation 16 (1), pp. 25–31, 1971.<br />
[7] Wolfenstein-style ray casting tutorial (http://www.permadi.com/tutorial/raycast/) by F. Permadi<br />
[8] Andre LaMothe. Black Art of 3D Game Programming. 1995, ISBN 1571690042, pp. 14, 398, 935–936, 941–943.<br />
[9] "Ray shooting, depth orders and hidden surface removal", by Mark de Berg, Springer-Verlag, 1993, ISBN 3-540-57020-9, 201 pp.
External links<br />
• Raycasting planes in WebGL with source code (http://adrianboeing.blogspot.com/2011/01/raycasting-two-planes-in-webgl.html)<br />
• Raycasting-Java-Applet by Peter Paulis (http://pixellove.eu/wega/)<br />
• Raycasting (http://leftech.com/raycaster.htm)<br />
Ray tracing<br />
In computer <strong>graphics</strong>, ray tracing is a technique for<br />
generating an image by tracing the path of light through<br />
pixels in an image plane and simulating the effects of<br />
its encounters with virtual objects. The technique is<br />
capable of producing a very high degree of visual<br />
realism, usually higher than that of typical scanline<br />
rendering methods, but at a greater computational cost.<br />
This makes ray tracing best suited for applications<br />
where the image can be rendered slowly ahead of time,<br />
such as in still images and film and television special<br />
effects, and more poorly suited for real-time<br />
applications like video games where speed is critical.<br />
Ray tracing is capable of simulating a wide variety of<br />
optical effects, such as reflection and refraction,<br />
scattering, and dispersion phenomena (such as<br />
chromatic aberration).<br />
Algorithm overview<br />
Optical ray tracing describes a method for<br />
producing visual images constructed in <strong>3D</strong><br />
computer <strong>graphics</strong> environments, with more<br />
photorealism than either ray casting or<br />
scanline rendering techniques. It works by<br />
tracing a path from an imaginary eye<br />
through each pixel in a virtual screen, and<br />
calculating the color of the object visible<br />
through it.<br />
Scenes in raytracing are described<br />
mathematically by a programmer or by a<br />
visual artist (typically using intermediary<br />
tools). Scenes may also incorporate data<br />
from images and models captured by means<br />
such as digital photography.<br />
[Figure: This recursive ray tracing of a sphere demonstrates the effects of shallow depth of field, area light sources and diffuse interreflection.]<br />
[Figure: The ray tracing algorithm builds an image by extending rays into a scene.]
Ray tracing 148<br />
Typically, each ray must be tested for intersection with some subset of all the objects in the scene. Once the nearest<br />
object has been identified, the algorithm will estimate the incoming light at the point of intersection, examine the<br />
material properties of the object, and combine this information to calculate the final color of the pixel. Certain<br />
illumination algorithms and reflective or translucent materials may require more rays to be re-cast into the scene.<br />
It may at first seem counterintuitive or "backwards" to send rays away from the camera, rather than into it (as actual<br />
light does in reality), but doing so is many orders of magnitude more efficient. Since the overwhelming majority of<br />
light rays from a given light source do not make it directly into the viewer's eye, a "forward" simulation could<br />
potentially waste a tremendous amount of computation on light paths that are never recorded. A computer simulation<br />
that starts by casting rays from the light source is called Photon mapping, and it takes much longer than a<br />
comparable ray trace.<br />
Therefore, the shortcut taken in raytracing is to presuppose that a given ray intersects the view frame. After either a<br />
maximum number of reflections or a ray traveling a certain distance without intersection, the ray ceases to travel and<br />
the pixel's value is updated. The light intensity of this pixel is computed using a number of algorithms, which may<br />
include the classic rendering algorithm and may also incorporate techniques such as radiosity.<br />
Detailed description of ray tracing computer algorithm and its genesis<br />
What happens in nature<br />
In nature, a light source emits a ray of light which travels, eventually, to a surface that interrupts its progress. One<br />
can think of this "ray" as a stream of photons traveling along the same path. In a perfect vacuum this ray will be a<br />
straight line (ignoring relativistic effects). In reality, any combination of four things might happen with this light ray:<br />
absorption, reflection, refraction and fluorescence. A surface may absorb part of the light ray, resulting in a loss of<br />
intensity of the reflected and/or refracted light. It might also reflect all or part of the light ray, in one or more<br />
directions. If the surface has any transparent or translucent properties, it refracts a portion of the light beam into itself<br />
in a different direction while absorbing some (or all) of the spectrum (and possibly altering the color). Less<br />
commonly, a surface may absorb some portion of the light and fluorescently re-emit the light at a longer wavelength<br />
colour in a random direction, though this is rare enough that it can be discounted from most rendering applications.<br />
Between absorption, reflection, refraction and fluorescence, all of the incoming light must be accounted for, and no<br />
more. A surface cannot, for instance, reflect 66% of an incoming light ray, and refract 50%, since the two would add<br />
up to be 116%. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive,<br />
refractive, reflective and fluorescent properties again affect the progress of the incoming rays. Some of these rays<br />
travel in such a way that they hit our eye, causing us to see the scene and so contribute to the final rendered image.<br />
Ray casting algorithm<br />
The first ray casting (versus ray tracing) algorithm used for rendering was presented by Arthur Appel [1] in 1968. The<br />
idea behind ray casting is to shoot rays from the eye, one per pixel, and find the closest object blocking the path of<br />
that ray – think of an image as a screen-door, with each square in the screen being a pixel. This is then the object the<br />
eye normally sees through that pixel. Using the material properties and the effect of the lights in the scene, this<br />
algorithm can determine the shading of this object. The simplifying assumption is made that if a surface faces a light,<br />
the light will reach that surface and not be blocked or in shadow. The shading of the surface is computed using<br />
traditional <strong>3D</strong> computer <strong>graphics</strong> shading models. One important advantage ray casting offered over older scanline<br />
algorithms is its ability to easily deal with non-planar surfaces and solids, such as cones and spheres. If a<br />
mathematical surface can be intersected by a ray, it can be rendered using ray casting. Elaborate objects can be<br />
created by using solid modeling techniques and easily rendered.
Ray tracing algorithm<br />
The next important research breakthrough<br />
came from Turner Whitted in 1979. [2]<br />
Previous algorithms cast rays from the eye<br />
into the scene until they hit an object, but<br />
the rays were traced no further. Whitted<br />
continued the process. When a ray hits a<br />
surface, it could generate up to three new<br />
types of rays: reflection, refraction, and<br />
shadow. [3] A reflected ray continues on in<br />
the mirror-reflection direction from a shiny<br />
surface. It is then intersected with objects in<br />
the scene; the closest object it intersects is<br />
what will be seen in the reflection.<br />
Refraction rays traveling through<br />
transparent material work similarly, with the<br />
addition that a refractive ray could be<br />
entering or exiting a material. To further<br />
avoid tracing all rays in a scene, a shadow<br />
ray is used to test if a surface is visible to a<br />
light. A ray hits a surface at some point. If<br />
the surface at this point faces a light, a ray<br />
(to the computer, a line segment) is traced<br />
between this intersection point and the light.<br />
If any opaque object is found in between the<br />
surface and the light, the surface is in<br />
shadow and so the light does not contribute<br />
to its shade. This new layer of ray<br />
calculation added more realism to ray traced<br />
images.<br />
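Whitted's three ray types fit a short recursive routine. The sketch below is a grayscale illustration, not a faithful reproduction of the 1979 algorithm: the `scene` interface (an `intersect` method returning hit distance, point, normal and material, plus a single directional light) is an assumption for illustration, and refraction rays are omitted for brevity.

```python
EPS = 1e-4

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(d, n):
    """Mirror direction d about unit normal n."""
    k = 2.0 * dot(d, n)
    return tuple(di - k * ni for di, ni in zip(d, n))

def offset(p, n):
    """Nudge the hit point off the surface so a new ray does not re-hit it."""
    return tuple(pi + EPS * ni for pi, ni in zip(p, n))

def trace(origin, direction, scene, depth=0, max_depth=5):
    """Whitted-style recursion: local shading gated by a shadow ray, plus a
    depth-limited reflection ray.
    scene.intersect(o, d) -> (t, point, normal, material) or None."""
    hit = scene.intersect(origin, direction)
    if hit is None:
        return scene.background
    _, p, n, mat = hit
    color = mat.ambient
    # shadow ray: the light contributes only if nothing blocks it
    if scene.intersect(offset(p, n), scene.light_dir) is None:
        color += mat.diffuse * max(0.0, dot(n, scene.light_dir))
    # reflection ray: recurse until the depth budget runs out
    if mat.reflectivity > 0.0 and depth < max_depth:
        color += mat.reflectivity * trace(offset(p, n), reflect(direction, n),
                                          scene, depth + 1, max_depth)
    return color
```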
Advantages over other rendering methods<br />
[Figure: Ray tracing can achieve a very high degree of visual realism.]<br />
[Figure: In addition to the high degree of realism, ray tracing can simulate the effects of a camera due to depth of field and aperture shape (in this case a hexagon).]<br />
Ray tracing's popularity stems from its basis in a realistic simulation of lighting over other rendering methods (such<br />
as scanline rendering or ray casting). Effects such as reflections and shadows, which are difficult to simulate using<br />
other algorithms, are a natural result of the ray tracing algorithm. Relatively simple to implement yet yielding<br />
impressive visual results, ray tracing often represents a first foray into <strong>graphics</strong> programming. The computational
independence of each ray makes ray tracing<br />
amenable to parallelization. [4]<br />
Disadvantages<br />
A serious disadvantage of ray tracing is<br />
performance. Scanline algorithms and other<br />
algorithms use data coherence to share<br />
computations between pixels, while ray<br />
tracing normally starts the process anew,<br />
treating each eye ray separately. However,<br />
this separation offers other advantages, such<br />
as the ability to shoot more rays as needed<br />
to perform anti-aliasing and improve image<br />
quality where needed. Although it does<br />
handle interreflection and optical effects<br />
such as refraction accurately, traditional ray<br />
tracing is also not necessarily photorealistic.<br />
True photorealism occurs when the<br />
rendering equation is closely approximated<br />
or fully implemented. Implementing the<br />
rendering equation gives true photorealism,<br />
as the equation describes every physical<br />
effect of light flow. However, this is usually<br />
infeasible given the computing resources<br />
required. The realism of all rendering<br />
methods, then, must be evaluated as an<br />
approximation to the equation, and in the<br />
case of ray tracing, it is not necessarily the<br />
most realistic. Other methods, including<br />
photon mapping, are based upon ray tracing<br />
for certain parts of the algorithm, yet give<br />
far better results.<br />
Reversed direction of traversal of scene by the rays<br />
[Figure: The number of reflections a “ray” can take and how it is affected each time it encounters a surface is all controlled via software settings during ray tracing. Here, each ray was allowed to reflect up to 16 times; multiple “reflections of reflections” can thus be seen. Created with Cobalt.]<br />
[Figure: The number of refractions a “ray” can take and how it is affected each time it encounters a surface is all controlled via software settings during ray tracing. Here, each ray was allowed to refract and reflect up to 9 times. Fresnel reflections were used. Also note the caustics. Created with Vray.]<br />
The process of shooting rays from the eye to the light source to render an image is sometimes called backwards ray<br />
tracing, since it is the opposite direction photons actually travel. However, there is confusion with this terminology.<br />
Early ray tracing was always done from the eye, and early researchers such as James Arvo used the term backwards<br />
ray tracing to mean shooting rays from the lights and gathering the results. Therefore it is clearer to distinguish<br />
eye-based versus light-based ray tracing.<br />
While the direct illumination is generally best sampled using eye-based ray tracing, certain indirect effects can<br />
benefit from rays generated from the lights. Caustics are bright patterns caused by the focusing of light off a wide<br />
reflective region onto a narrow area of (near-)diffuse surface. An algorithm that casts rays directly from lights onto<br />
reflective objects, tracing their paths to the eye, will better sample this phenomenon. This integration of eye-based<br />
and light-based rays is often expressed as bidirectional path tracing, in which paths are traced from both the eye and<br />
lights, and the paths subsequently joined by a connecting ray after some length. [5] [6]
Photon mapping is another method that uses both light-based and eye-based ray tracing; in an initial pass, energetic<br />
photons are traced along rays from the light source so as to compute an estimate of radiant flux as a function of<br />
3-dimensional space (the eponymous photon map itself). In a subsequent pass, rays are traced from the eye into the<br />
scene to determine the visible surfaces, and the photon map is used to estimate the illumination at the visible surface<br />
points. [7] [8] The advantage of photon mapping versus bidirectional path tracing is the ability to achieve significant<br />
reuse of photons, reducing computation, at the cost of statistical bias.<br />
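The two-pass structure can be illustrated by the second pass's density estimate: gather the k photons nearest the shading point and divide their summed power by the area of the disc that just contains them. This is a simplified scalar sketch with hypothetical names; real photon maps store photons in a kd-tree and carry per-wavelength power.

```python
import math

def radiance_estimate(photons, point, k=3):
    """Photon-map lookup: photons is a list of (position, power) pairs.
    Take the k photons nearest to `point` and divide their total power by
    the area of the smallest disc containing them (a density estimate)."""
    def dist2(ph):
        return sum((a - b) ** 2 for a, b in zip(ph[0], point))
    nearest = sorted(photons, key=dist2)[:k]
    r2 = dist2(nearest[-1])              # squared radius of the gather disc
    power = sum(ph[1] for ph in nearest)
    return power / (math.pi * r2)
```

Because nearby lookups reuse the same stored photons, the estimate is cheap but statistically biased, as noted above.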
An additional problem occurs when light must pass through a very narrow aperture to illuminate the scene (consider<br />
a darkened room, with a door slightly ajar leading to a brightly-lit room), or a scene in which most points do not<br />
have direct line-of-sight to any light source (such as with ceiling-directed light fixtures or torchieres). In such cases,<br />
only a very small subset of paths will transport energy; Metropolis light transport is a method which begins with a<br />
random search of the path space, and when energetic paths are found, reuses this information by exploring the<br />
nearby space of rays. [9]<br />
To the right is an image showing a simple example of a path of rays<br />
recursively generated from the camera (or eye) to the light source using<br />
the above algorithm. A diffuse surface reflects light in all directions.<br />
First, a ray is created at an eyepoint and traced through a pixel and into<br />
the scene, where it hits a diffuse surface. From that surface the<br />
algorithm recursively generates a reflection ray, which is traced<br />
through the scene, where it hits another diffuse surface. Finally,<br />
another reflection ray is generated and traced through the scene, where<br />
it hits the light source and is absorbed. The color of the pixel now<br />
depends on the colors of the first and second diffuse surface and the color of the light emitted from the light source.<br />
For example if the light source emitted white light and the two diffuse surfaces were blue, then the resulting color of<br />
the pixel is blue.<br />
In real time<br />
The first implementation of a "real-time" ray-tracer was credited at the 2005 SIGGRAPH computer <strong>graphics</strong><br />
conference as the REMRT/RT tools developed in 1986 by Mike Muuss for the BRL-CAD solid modeling system.<br />
Initially published in 1987 at USENIX, the BRL-CAD ray-tracer is the first known implementation of a parallel<br />
network distributed ray-tracing system that achieved several frames per second in rendering performance. [10] This<br />
performance was attained by means of the highly-optimized yet platform independent LIBRT ray-tracing engine in<br />
BRL-CAD and by using solid implicit CSG geometry on several shared memory parallel machines over a<br />
commodity network. BRL-CAD's ray-tracer, including REMRT/RT tools, continue to be available and developed<br />
today as Open source software. [11]<br />
Since then, there have been considerable efforts and research towards implementing ray tracing in real time speeds<br />
for a variety of purposes on stand-alone desktop configurations. These purposes include interactive <strong>3D</strong> <strong>graphics</strong><br />
applications such as demoscene productions, computer and video games, and image rendering. Some real-time<br />
software <strong>3D</strong> engines based on ray tracing have been developed by hobbyist demo programmers since the late<br />
1990s. [12]<br />
The OpenRT project includes a highly-optimized software core for ray tracing along with an OpenGL-like API in<br />
order to offer an alternative to the current rasterisation based approach for interactive <strong>3D</strong> <strong>graphics</strong>. Ray tracing<br />
hardware, such as the experimental Ray Processing Unit developed at the Saarland University, has been designed to<br />
accelerate some of the computationally intensive operations of ray tracing. On March 16, 2007, the University of<br />
Saarland revealed an implementation of a high-performance ray tracing engine that allowed computer games to be<br />
rendered via ray tracing without intensive resource usage. [13]
On June 12, 2008 Intel demonstrated a special version of Enemy Territory: Quake Wars, titled Quake Wars: Ray<br />
Traced, using ray tracing for rendering, running in basic HD (720p) resolution. ETQW operated at 14-29 frames per<br />
second. The demonstration ran on a 16-core (4 socket, 4 core) Xeon Tigerton system running at 2.93 GHz. [14]<br />
At SIGGRAPH 2009, Nvidia announced OptiX, an API for real-time ray tracing on Nvidia GPUs. The API exposes<br />
seven programmable entry points within the ray tracing pipeline, allowing for custom cameras, ray-primitive<br />
intersections, shaders, shadowing, etc. [15]<br />
Example<br />
As a demonstration of the principles involved in raytracing, let us consider how one would find the intersection<br />
between a ray and a sphere. In vector notation, the equation of a sphere with center $\mathbf{c}$ and radius $r$ is<br />
$\|\mathbf{x} - \mathbf{c}\|^2 = r^2.$<br />
Any point $\mathbf{x}$ on a ray starting from point $\mathbf{s}$ with direction $\mathbf{d}$ (here $\mathbf{d}$ is a unit vector) can be written as<br />
$\mathbf{x} = \mathbf{s} + t\,\mathbf{d},$<br />
where $t$ is the distance between $\mathbf{x}$ and $\mathbf{s}$. In our problem, we know $\mathbf{c}$, $r$, $\mathbf{s}$ (e.g. the position of a light source)<br />
and $\mathbf{d}$, and we need to find $t$. Therefore, we substitute for $\mathbf{x}$:<br />
$\|\mathbf{s} + t\,\mathbf{d} - \mathbf{c}\|^2 = r^2.$<br />
Let $\mathbf{v} = \mathbf{s} - \mathbf{c}$ for simplicity; then<br />
$\mathbf{v} \cdot \mathbf{v} + 2t\,(\mathbf{v} \cdot \mathbf{d}) + t^2\,(\mathbf{d} \cdot \mathbf{d}) = r^2.$<br />
Knowing that $\mathbf{d}$ is a unit vector allows us this minor simplification:<br />
$t^2 + 2t\,(\mathbf{v} \cdot \mathbf{d}) + (\mathbf{v} \cdot \mathbf{v} - r^2) = 0.$<br />
This quadratic equation has solutions<br />
$t = -(\mathbf{v} \cdot \mathbf{d}) \pm \sqrt{(\mathbf{v} \cdot \mathbf{d})^2 - (\mathbf{v} \cdot \mathbf{v} - r^2)}.$<br />
The two values of $t$ found by solving this equation are those for which $\mathbf{s} + t\,\mathbf{d}$ are the points where the ray<br />
intersects the sphere.<br />
Any value of $t$ which is negative does not lie on the ray, but rather on the opposite half-line (i.e. the one starting from $\mathbf{s}$<br />
with opposite direction).<br />
If the quantity under the square root (the discriminant) is negative, then the ray does not intersect the sphere.<br />
Let us suppose now that there is at least one positive solution, and let $t$ be the minimal one. In addition, let us suppose<br />
that the sphere is the nearest object in our scene intersecting our ray, and that it is made of a reflective material. We<br />
need to find in which direction the light ray is reflected. The laws of reflection state that the angle of reflection is<br />
equal and opposite to the angle of incidence between the incident ray and the normal to the sphere.<br />
The normal to the sphere is simply<br />
$\mathbf{n} = \dfrac{\mathbf{y} - \mathbf{c}}{\|\mathbf{y} - \mathbf{c}\|},$<br />
where $\mathbf{y} = \mathbf{s} + t\,\mathbf{d}$ is the intersection point found before. The reflection direction can be found by a reflection of $\mathbf{d}$<br />
with respect to $\mathbf{n}$, that is<br />
$\mathbf{r} = \mathbf{d} - 2(\mathbf{n} \cdot \mathbf{d})\,\mathbf{n}.$<br />
Thus the reflected ray has equation<br />
$\mathbf{x} = \mathbf{y} + u\,\mathbf{r}, \quad u > 0.$
Now we only need to compute the intersection of the latter ray with our field of view, to get the pixel which our<br />
reflected light ray will hit. Lastly, this pixel is set to an appropriate color, taking into account how the color of the<br />
original light source and the one of the sphere are combined by the reflection.<br />
This is merely the math behind the line–sphere intersection and the subsequent determination of the colour of the<br />
pixel being calculated. There is, of course, far more to the general process of raytracing, but this demonstrates an<br />
example of the algorithms used.<br />
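The derivation above translates almost line for line into code. The following sketch (illustrative names, assuming d is a unit vector) returns both roots of the quadratic and, given the chosen t, the hit point and mirror direction.

```python
import math

def intersect_sphere(s, d, c, r):
    """Roots t of ||s + t d - c||^2 = r^2, with d a unit vector.
    Returns (t1, t2) with t1 <= t2, or None when the discriminant is
    negative (the ray's line misses the sphere)."""
    v = tuple(si - ci for si, ci in zip(s, c))
    vd = sum(vi * di for vi, di in zip(v, d))
    disc = vd * vd - (sum(vi * vi for vi in v) - r * r)
    if disc < 0.0:
        return None
    root = math.sqrt(disc)
    return (-vd - root, -vd + root)

def reflected_ray(s, d, c, t):
    """Hit point y = s + t d, unit normal n = (y - c)/||y - c||, and
    mirror direction d - 2 (n . d) n."""
    y = tuple(si + t * di for si, di in zip(s, d))
    n = tuple(yi - ci for yi, ci in zip(y, c))
    nn = math.sqrt(sum(ni * ni for ni in n))
    n = tuple(ni / nn for ni in n)
    nd = sum(ni * di for ni, di in zip(n, d))
    return y, tuple(di - 2.0 * nd * ni for di, ni in zip(d, n))
```

Negative roots are discarded by the caller, exactly as the text discards points on the opposite half-line.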
References<br />
[1] Appel A. (1968) Some techniques for shading machine rendering of solids (http://graphics.stanford.edu/courses/Appel.pdf). AFIPS<br />
Conference Proc. 32, pp. 37–45<br />
[2] Whitted T. (1979) An improved illumination model for shaded display (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.156.1534).<br />
Proceedings of the 6th annual conference on Computer graphics and interactive techniques<br />
[3] Tomas Nikodym (June 2010). "Ray Tracing Algorithm For Interactive Applications" (https://dip.felk.cvut.cz/browse/pdfcache/nikodtom_2010bach.pdf).<br />
Czech Technical University, FEE.<br />
[4] A. Chalmers, T. Davis, and E. Reinhard. Practical Parallel Rendering, ISBN 1568811799. AK Peters, Ltd., 2002.<br />
[5] Eric P. Lafortune and Yves D. Willems (December 1993). "Bi-Directional Path Tracing" (http://www.graphics.cornell.edu/~eric/Portugal.html).<br />
Proceedings of Compugraphics '93: 145–153.<br />
[6] Péter Dornbach. "Implementation of bidirectional ray tracing algorithm" (http://www.cescg.org/CESCG98/PDornbach/index.html).<br />
Retrieved 2008-06-11.<br />
[7] Global Illumination using Photon Maps (http://graphics.ucsd.edu/~henrik/papers/photon_map/global_illumination_using_photon_maps_egwr96.pdf)<br />
[8] Photon Mapping - Zack Waters (http://web.cs.wpi.edu/~emmanuel/courses/cs563/write_ups/zackw/photon_mapping/PhotonMapping.html)<br />
[9] http://graphics.stanford.edu/papers/metro/metro.pdf<br />
[10] See Proceedings of 4th Computer Graphics Workshop, Cambridge, MA, USA, October 1987. Usenix Association, 1987. pp. 86–98.<br />
[11] "About BRL-CAD" (http://brlcad.org/d/about). Retrieved 2009-07-28.<br />
[12] Piero Foscari. "The Realtime Raytracing Realm" (http://www.acm.org/tog/resources/RTNews/demos/overview.htm). ACM<br />
Transactions on Graphics. Retrieved 2007-09-17.<br />
[13] Mark Ward (March 16, 2007). "Rays light up life-like graphics" (http://news.bbc.co.uk/1/hi/technology/6457951.stm). BBC News.<br />
Retrieved 2007-09-17.<br />
[14] Theo Valich (June 12, 2008). "Intel converts ET: Quake Wars to ray tracing" (http://www.tgdaily.com/html_tmp/content-view-37925-113.html).<br />
TG Daily. Retrieved 2008-06-16.<br />
[15] Nvidia (October 18, 2009). "Nvidia OptiX" (http://www.nvidia.com/object/optix.html). Nvidia. Retrieved 2009-11-06.<br />
External links<br />
• What is ray tracing? (http://www.codermind.com/articles/Raytracer-in-C++-Introduction-What-is-ray-tracing.html)<br />
• Ray Tracing and Gaming - Quake 4: Ray Traced Project (http://www.pcper.com/article.php?aid=334)<br />
• Ray tracing and Gaming - One Year Later (http://www.pcper.com/article.php?aid=506)<br />
• Interactive Ray Tracing: The replacement of rasterization? (http://www.few.vu.nl/~kielmann/theses/avdploeg.pdf)<br />
• A series of tutorials on implementing a raytracer using C++ (http://www.devmaster.net/articles/raytracing_series/part1.php)<br />
Videos<br />
• The Compleat Angler (1978) (http://www.youtube.com/watch?v=WV4qXzM641o)
Reflection 154<br />
Reflection<br />
Reflection in computer <strong>graphics</strong> is used to emulate reflective objects<br />
like mirrors and shiny surfaces.<br />
Reflection is accomplished in a ray-trace renderer by following a ray<br />
from the eye to the mirror, calculating where it bounces to, and<br />
continuing the process until no surface, or a non-reflective surface,<br />
is found. Reflection on a shiny surface like wood or tile can<br />
add to the photorealistic effects of a <strong>3D</strong> rendering.<br />
• Polished - A Polished Reflection is an undisturbed reflection, like a<br />
mirror or chrome.<br />
• Blurry - A Blurry Reflection means that tiny random bumps on the<br />
surface of the material cause the reflection to be blurry.<br />
• Metallic - A reflection is Metallic if the highlights and reflections<br />
retain the color of the reflective object.<br />
• Glossy - This term can be misused. Sometimes it is a setting which<br />
is the opposite of Blurry. (When "Glossiness" has a low value, the<br />
reflection is blurry.) However, some people use the term "Glossy<br />
Reflection" as a synonym for "Blurred Reflection." Glossy used in this context means that the reflection is<br />
actually blurred.<br />
[Figure: Ray traced model demonstrating specular reflection.]<br />
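One common way to obtain a blurry reflection in a ray tracer is to jitter the mirror direction by a random amount proportional to surface roughness and average many such rays. Below is a minimal single-sample sketch with an illustrative uniform perturbation; production renderers use better-shaped distributions.

```python
import math, random

def jittered_reflection(d, n, roughness, rng):
    """One blurry-reflection sample: perturb the mirror direction by a
    random offset scaled by `roughness`, then renormalize.
    roughness = 0 gives a polished (mirror) reflection."""
    nd = sum(ni * di for ni, di in zip(n, d))
    r = [di - 2.0 * nd * ni for di, ni in zip(d, n)]       # mirror direction
    r = [ri + roughness * rng.uniform(-1.0, 1.0) for ri in r]
    norm = math.sqrt(sum(ri * ri for ri in r))
    return tuple(ri / norm for ri in r)
```

Tracing and averaging, say, 64 such samples per hit approximates the "tiny random bumps" described above.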
Examples<br />
Polished or Mirror reflection<br />
Mirrors are usually almost 100% reflective.<br />
[Figure: Mirror on wall rendered with 100% reflection.]
Metallic Reflection<br />
Normal (non-metallic) objects reflect light and colors in the original color of the object being reflected.<br />
Metallic objects reflect lights and colors altered by the color of the metallic object itself.<br />
Blurry Reflection<br />
Many materials are imperfect<br />
reflectors, where the reflections are<br />
blurred to various degrees due to<br />
surface roughness that scatters the rays<br />
of the reflections.<br />
The large sphere on the left is blue with its reflection marked as metallic. The large sphere<br />
on the right is the same color but does not have the metallic property selected.<br />
The large sphere on the left has sharpness set to 100%. The sphere on the right has<br />
sharpness set to 50% which creates a blurry reflection.
Glossy Reflection<br />
A fully glossy reflection shows highlights from light sources, but does not show a clear reflection from objects.<br />
Reflection mapping<br />
In computer <strong>graphics</strong>, environment mapping, or reflection mapping,<br />
is an efficient Image-based lighting technique for approximating the<br />
appearance of a reflective surface by means of a precomputed texture<br />
image. The texture is used to store the image of the distant<br />
environment surrounding the rendered object.<br />
Several ways of storing the surrounding environment are employed.<br />
The first technique was sphere mapping, in which a single texture<br />
contains the image of the surroundings as reflected on a mirror ball. It<br />
has been almost entirely surpassed by cube mapping, in which the<br />
environment is projected onto the six faces of a cube and stored as six<br />
square textures or unfolded into six square regions of a single texture.<br />
The sphere on the left has normal, metallic reflection. The sphere on the right has the<br />
same parameters, except that the reflection is marked as "glossy".<br />
An example of reflection mapping.<br />
Other projections that have some superior mathematical or computational properties include the paraboloid<br />
mapping, the pyramid mapping, the octahedron mapping, and the HEALPix mapping.<br />
The reflection mapping approach is more efficient than the classical ray tracing approach of computing the exact<br />
reflection by tracing a ray and following its optical path. The reflection color used in the shading computation at a<br />
pixel is determined by calculating the reflection vector at the point on the object and mapping it to the texel in the<br />
environment map. This technique often produces results that are superficially similar to those generated by<br />
raytracing, but is less computationally expensive since the radiance value of the reflection comes from calculating<br />
the angles of incidence and reflection, followed by a texture lookup, rather than followed by tracing a ray against the<br />
scene geometry and computing the radiance of the ray, simplifying the GPU workload.<br />
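As a concrete sketch of this lookup, the reflection vector can be computed from the view direction and surface normal and then mapped to texture coordinates. An equirectangular (latitude-longitude) map layout is assumed here purely for illustration; the cube-map case is covered below:

```python
import math

def reflection_uv(view_dir, normal):
    """Reflect the unit view direction about the unit surface normal and
    map the result to (u, v) in an equirectangular environment texture."""
    d = sum(v * n for v, n in zip(view_dir, normal))
    rx, ry, rz = (v - 2.0 * d * n for v, n in zip(view_dir, normal))
    u = 0.5 + math.atan2(rz, rx) / (2.0 * math.pi)
    v = 0.5 - math.asin(max(-1.0, min(1.0, ry))) / math.pi
    return u, v   # texel coordinate at which to sample the map
```

Only a handful of arithmetic operations and one texture fetch replace the ray-versus-scene intersection a ray tracer would perform.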
However in most circumstances a mapped reflection is only an approximation of the real reflection. Environment<br />
mapping relies on two assumptions that are seldom satisfied:
Reflection mapping 157<br />
1) All radiance incident upon the object being shaded comes from an infinite distance. When this is the case, no parallax is seen in the reflection; when it is not, the reflection of nearby geometry appears in the wrong place on the reflected object.<br />
2) The object being shaded is convex, such that it contains no self-interreflections. When this is not the case the<br />
object does not appear in the reflection; only the environment does.<br />
Reflection mapping is also a traditional Image-based lighting technique for creating reflections of real-world<br />
backgrounds on synthetic objects.<br />
Environment mapping is generally the fastest method of rendering a reflective surface. To further increase the speed<br />
of rendering, the renderer may calculate the position of the reflected ray at each vertex. Then, the position is<br />
interpolated across polygons to which the vertex is attached. This eliminates the need for recalculating every pixel's<br />
reflection direction.<br />
If normal mapping is used, each polygon has many surface normals (the direction a given point on a polygon is facing),<br />
which can be used in tandem with an environment map to produce a more realistic reflection. In this case, the angle<br />
of reflection at a given point on a polygon will take the normal map into consideration. This technique is used to<br />
make an otherwise flat surface appear textured, for example corrugated metal, or brushed aluminium.<br />
Types of reflection mapping<br />
Sphere mapping<br />
Sphere mapping represents the sphere of incident illumination as though it were seen in the reflection of a reflective sphere through an orthographic camera. The texture image can be created by approximating this ideal setup, by using a fisheye lens, or by prerendering a scene with a spherical mapping.<br />
The spherical mapping suffers from limitations that detract from the realism of resulting renderings. Because<br />
spherical maps are stored as azimuthal projections of the environments they represent, an abrupt point of singularity<br />
(a “black hole” effect) is visible in the reflection on the object where texel colors at or near the edge of the map are<br />
distorted due to inadequate resolution to represent the points accurately. The spherical mapping also wastes pixels<br />
that are in the square but not in the sphere.<br />
The artifacts of the spherical mapping are so severe that it is effective only for viewpoints near that of the virtual<br />
orthographic camera.
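The OpenGL-style sphere-map coordinate computation makes the singularity concrete: the denominator below shrinks to zero as the reflection vector approaches (0, 0, -1), which is the "black hole" point described above. This is a sketch of the standard formula, not any particular renderer's code:

```python
import math

def sphere_map_uv(r):
    """Map a unit eye-space reflection vector r (camera looking down -z)
    to sphere-map texture coordinates. The denominator m vanishes as r
    approaches (0, 0, -1), producing the singularity discussed above."""
    rx, ry, rz = r
    m = 2.0 * math.sqrt(rx * rx + ry * ry + (rz + 1.0) ** 2)
    return rx / m + 0.5, ry / m + 0.5
```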
Cube mapping<br />
Cube mapping and other polyhedron mappings address the severe<br />
distortion of sphere maps. If cube maps are made and filtered correctly,<br />
they have no visible seams, and can be used independent of the<br />
viewpoint of the often-virtual camera acquiring the map. Cube and<br />
other polyhedron maps have since superseded sphere maps in most<br />
computer <strong>graphics</strong> applications, with the exception of acquiring<br />
image-based lighting.<br />
Generally, cube mapping uses the same skybox that is used in outdoor<br />
renderings. Cube mapped reflection is done by determining the vector<br />
that the object is being viewed at. This camera ray is reflected about<br />
the surface normal of where the camera vector intersects the object.<br />
This results in the reflected ray which is then passed to the cube map<br />
to get the texel which provides the radiance value used in the lighting<br />
calculation. This creates the effect that the object is reflective.<br />
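Passing the reflected ray to the cube map amounts to choosing a face by the ray's largest-magnitude component and projecting onto it. A sketch of one common convention (modeled on the OpenGL cube-map rules; face labels are illustrative):

```python
def cube_map_face(r):
    """Pick the cube-map face and (u, v) within it for a reflected
    direction r, using the major-axis rule of cube mapping."""
    rx, ry, rz = r
    ax, ay, az = abs(rx), abs(ry), abs(rz)
    if ax >= ay and ax >= az:   # +X or -X face dominates
        face, ma = ('+x' if rx > 0 else '-x'), ax
        sc, tc = (-rz if rx > 0 else rz), -ry
    elif ay >= az:              # +Y or -Y face dominates
        face, ma = ('+y' if ry > 0 else '-y'), ay
        sc, tc = rx, (rz if ry > 0 else -rz)
    else:                       # +Z or -Z face dominates
        face, ma = ('+z' if rz > 0 else '-z'), az
        sc, tc = (rx if rz > 0 else -rx), -ry
    u = 0.5 * (sc / ma + 1.0)
    v = 0.5 * (tc / ma + 1.0)
    return face, u, v
```

A ray pointing straight along an axis lands in the center of that axis's face, which is why correctly filtered cube maps show no seams.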
HEALPix mapping<br />
HEALPix environment mapping is similar to the other polyhedron<br />
mappings, but can be hierarchical, thus providing a unified framework<br />
for generating polyhedra that better approximate the sphere. This<br />
allows lower distortion at the cost of increased computation.[1]<br />
History<br />
Precursor work in texture mapping had been established by Edwin Catmull, with refinements for curved surfaces by James Blinn, in 1974.[2] Blinn went on to further refine his work, developing environment mapping by 1976.[3]<br />
Gene Miller experimented with spherical environment mapping in 1982 at MAGI Synthavision.<br />
Wolfgang Heidrich introduced Paraboloid Mapping in 1998.[4]<br />
Emil Praun introduced Octahedron Mapping in 2003.[5]<br />
Mauro Steigleder introduced Pyramid Mapping in 2005.[6]<br />
Tien-Tsin Wong, et al. introduced the existing HEALPix mapping for rendering in 2006.[1]<br />
A diagram depicting an apparent reflection being provided by cube mapped reflection. The map is actually projected onto the surface from the point of view of the observer. Highlights which in raytracing would be provided by tracing the ray and determining the angle made with the normal can be 'fudged', if they are manually painted into the texture field (or if they already appear there depending on how the texture map was obtained), from where they will be projected onto the mapped object along with the rest of the texture detail.<br />
Example of a three-dimensional model using cube mapped reflection
References<br />
[1] Tien-Tsin Wong, Liang Wan, Chi-Sing Leung, and Ping-Man Lam. "Real-time Environment Mapping with Equal Solid-Angle Spherical Quad-Map" (http://appsrv.cse.cuhk.edu.hk/~lwan/paper/sphquadmap/sphquadmap.htm), Shader X4: Lighting & Rendering, Charles River Media, 2006.<br />
[2] http://www.comphist.org/computing_history/new_page_6.htm<br />
[3] http://www.debevec.org/ReflectionMapping/<br />
[4] Heidrich, W., and H.-P. Seidel. "View-Independent Environment Maps." Eurographics Workshop on Graphics Hardware 1998, pp. 39–45.<br />
[5] Emil Praun and Hugues Hoppe. "Spherical parametrization and remeshing." ACM Transactions on Graphics, 22(3):340–349, 2003.<br />
[6] Mauro Steigleder. "Pencil Light Transport." A thesis presented to the University of Waterloo, 2005.<br />
External links<br />
• The Story of Reflection mapping (http://www.debevec.org/ReflectionMapping/) by Paul Debevec<br />
• NVIDIA's paper (http://developer.nvidia.com/attach/6595) about sphere & cube env. mapping<br />
Relief mapping<br />
In computer graphics, relief mapping is a texture mapping technique used to render the surface details of three-dimensional objects accurately and efficiently.[1] It can produce accurate depictions of self-occlusion, self-shadowing, and parallax.[2] It is a form of short-distance ray tracing performed in a pixel shader.<br />
References<br />
[1] "Real-time relief mapping on arbitrary polygonal surfaces" (http://www.inf.ufrgs.br/~comba/papers/2005/rtrm-i3d05.pdf). Proceedings of the 2005 Symposium on Interactive 3D Graphics and Games: 155–162. 2005.<br />
[2] "Relief Mapping of Non-Height-Field Surface Details" (http://www.inf.ufrgs.br/~oliveira/pubs_files/Policarpo_Oliveira_RTM_multilayer_I3D2006.pdf). Proceedings of the 2006 Symposium on Interactive 3D Graphics and Games. 2006. Retrieved 18 February 2011.<br />
External links<br />
• Manuel's Relief texture mapping (http://www.inf.ufrgs.br/~oliveira/RTM.html)
Render Output unit 160<br />
Render Output unit<br />
The Render Output Unit, often abbreviated as "ROP", and sometimes called (perhaps more properly) Raster<br />
Operations Pipeline, is one of the final steps in the rendering process of modern <strong>3D</strong> accelerator boards. The pixel<br />
pipelines take pixel and texel information and process it, via specific matrix and vector operations, into a final pixel<br />
or depth value. The ROPs perform the transactions between the relevant buffers in the local memory - this includes<br />
writing or reading values, as well as blending them together.<br />
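One such blending transaction is the classic "source-over" alpha blend, sketched here in plain Python as an illustration, not any particular GPU's ROP logic:

```python
def alpha_blend(src, dst):
    """'Source-over' blend of (r, g, b, a) colors with components in
    [0, 1], as a ROP might apply when writing a pixel over the value
    already in the framebuffer:
    out.rgb = src.rgb * src.a + dst.rgb * (1 - src.a)."""
    sa = src[3]
    rgb = tuple(s * sa + d * (1.0 - sa) for s, d in zip(src[:3], dst[:3]))
    return rgb + (1.0,)   # destination assumed to stay opaque
```

Blending a half-transparent red over black, for instance, yields half-intensity red, exactly the read-modify-write pattern the paragraph above describes.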
Historically the number of ROPs, texture units, and pixel shaders have been equal. However, as of 2004, several<br />
GPUs have decoupled these areas to allow optimum transistor allocation for application workload and available<br />
memory performance. As the trend continues, it is expected that <strong>graphics</strong> processors will continue to decouple the<br />
various parts of their architectures to enhance their adaptability to future graphics applications. This design also allows chip makers to build a modular line-up, where the top-end GPUs essentially use the same logic as the low-end products.<br />
Rendering<br />
Rendering is the process of generating an image from a model (or models in what<br />
collectively could be called a scene file), by means of computer programs. A scene<br />
file contains objects in a strictly defined language or data structure; it would contain<br />
geometry, viewpoint, texture, lighting, and shading information as a description of<br />
the virtual scene. The data contained in the scene file is then passed to a rendering<br />
program to be processed and output to a digital image or raster <strong>graphics</strong> image file.<br />
The term "rendering" may be by analogy with an "artist's rendering" of a scene.<br />
Though the technical details of rendering methods vary, the general challenges to<br />
overcome in producing a 2D image from a <strong>3D</strong> representation stored in a scene file<br />
are outlined as the <strong>graphics</strong> pipeline along a rendering device, such as a GPU. A<br />
GPU is a purpose-built device able to assist a CPU in performing complex rendering<br />
calculations. If a scene is to look relatively realistic and predictable under virtual<br />
lighting, the rendering software should solve the rendering equation. The rendering<br />
equation doesn't account for all lighting phenomena, but is a general lighting model<br />
for computer-generated imagery. 'Rendering' is also used to describe the process of<br />
calculating effects in a video editing file to produce final video output.<br />
Rendering is one of the major sub-topics of <strong>3D</strong> computer <strong>graphics</strong>, and in practice<br />
always connected to the others. In the <strong>graphics</strong> pipeline, it is the last major step,<br />
giving the final appearance to the models and animation. With the increasing<br />
sophistication of computer <strong>graphics</strong> since the 1970s, it has become a more distinct<br />
subject.<br />
A variety of rendering techniques applied to a single 3D scene
Rendering 161<br />
Rendering has uses in architecture, video games, simulators, movie or<br />
TV visual effects, and design visualization, each employing a different<br />
balance of features and techniques. As a product, a wide variety of<br />
renderers are available. Some are integrated into larger modeling and<br />
animation packages, some are stand-alone, some are free open-source<br />
projects. On the inside, a renderer is a carefully engineered program,<br />
based on a selective mixture of disciplines related to: light physics,<br />
visual perception, mathematics and software development.<br />
In the case of <strong>3D</strong> graphics, rendering may be done slowly, as in pre-rendering, or in real time. Pre-rendering is a computationally intensive process that is typically used for movie creation, while real-time rendering is often done for <strong>3D</strong> video games which rely on the use of graphics cards with <strong>3D</strong> hardware accelerators.<br />
An image created by using POV-Ray 3.6.<br />
Usage<br />
When the pre-image (a wireframe sketch usually) is complete, rendering is used, which adds in bitmap textures or<br />
procedural textures, lights, bump mapping and relative position to other objects. The result is a completed image the<br />
consumer or intended viewer sees.<br />
For movie animations, several images (frames) must be rendered, and stitched together in a program capable of<br />
making an animation of this sort. Most <strong>3D</strong> image editing programs can do this.<br />
Features<br />
A rendered image can be understood in terms of a number of visible<br />
features. Rendering research and development has been largely<br />
motivated by finding ways to simulate these efficiently. Some relate<br />
directly to particular algorithms and techniques, while others are<br />
produced together.<br />
• shading — how the color and brightness of a surface varies with<br />
lighting<br />
• texture-mapping — a method of applying detail to surfaces<br />
• bump-mapping — a method of simulating small-scale bumpiness on<br />
surfaces<br />
• fogging/participating medium — how light dims when passing<br />
through non-clear atmosphere or air<br />
• shadows — the effect of obstructing light<br />
• soft shadows — varying darkness caused by partially obscured light<br />
sources<br />
• reflection — mirror-like or highly glossy reflection<br />
• transparency (optics), transparency (graphic) or opacity — sharp<br />
transmission of light through solid objects<br />
• translucency — highly scattered transmission of light through solid<br />
objects<br />
• refraction — bending of light associated with transparency<br />
• diffraction — bending, spreading and interference of light passing by an object or aperture that disrupts the ray<br />
Image rendered with computer aided design.
• indirect illumination — surfaces illuminated by light reflected off other surfaces, rather than directly from a light<br />
source (also known as global illumination)<br />
• caustics (a form of indirect illumination) — reflection of light off a shiny object, or focusing of light through a<br />
transparent object, to produce bright highlights on another object<br />
• depth of field — objects appear blurry or out of focus when too far in front of or behind the object in focus<br />
• motion blur — objects appear blurry due to high-speed motion, or the motion of the camera<br />
• non-photorealistic rendering — rendering of scenes in an artistic style, intended to look like a painting or drawing<br />
Techniques<br />
Many rendering algorithms have been researched, and software used for rendering may employ a number of different<br />
techniques to obtain a final image.<br />
Tracing every particle of light in a scene is nearly always completely impractical and would take a stupendous<br />
amount of time. Even tracing a portion large enough to produce an image takes an inordinate amount of time if the<br />
sampling is not intelligently restricted.<br />
Therefore, four loose families of more-efficient light transport modelling techniques have emerged: rasterization, including scanline rendering, geometrically projects objects in the scene to an image plane, without advanced optical effects; ray casting considers the scene as observed from a specific point of view, calculating the observed image based only on geometry and very basic optical laws of reflection intensity, perhaps using Monte Carlo techniques to reduce artifacts; ray tracing is similar to ray casting, but employs more advanced optical simulation, and usually uses Monte Carlo techniques to obtain more realistic results at a speed that is often orders of magnitude slower. The fourth family, radiosity, is not usually implemented as a rendering technique; instead it calculates the passage of light as it leaves the light source and illuminates surfaces. These surfaces are usually rendered to the display using one of the other three techniques.<br />
Most advanced software combines two or more of the techniques to obtain good-enough results at reasonable cost.<br />
Another distinction is between image order algorithms, which iterate over pixels of the image plane, and object order<br />
algorithms, which iterate over objects in the scene. Generally object order is more efficient, as there are usually<br />
fewer objects in a scene than pixels.<br />
Scanline rendering and rasterisation<br />
A high-level representation of an image necessarily contains elements<br />
in a different domain from pixels. These elements are referred to as<br />
primitives. In a schematic drawing, for instance, line segments and<br />
curves might be primitives. In a graphical user interface, windows and<br />
buttons might be the primitives. In rendering of <strong>3D</strong> models, triangles<br />
and polygons in space might be primitives.<br />
If a pixel-by-pixel (image order) approach to rendering is impractical or too slow for some task, then a primitive-by-primitive (object order) approach to rendering may prove useful. Here, one loops through each of the primitives, determines which pixels in the image it affects, and modifies those pixels accordingly. This is called rasterization, and is the rendering method used by all current graphics cards.<br />
Rendering of the European Extremely Large Telescope.<br />
Rasterization is frequently faster than pixel-by-pixel rendering. First, large areas of the image may be empty of<br />
primitives; rasterization will ignore these areas, but pixel-by-pixel rendering must pass through them. Second,<br />
rasterization can improve cache coherency and reduce redundant work by taking advantage of the fact that the pixels<br />
occupied by a single primitive tend to be contiguous in the image. For these reasons, rasterization is usually the<br />
approach of choice when interactive rendering is required; however, the pixel-by-pixel approach can often produce
higher-quality images and is more versatile because it does not depend on as many assumptions about the image as<br />
rasterization.<br />
The older form of rasterization is characterized by rendering an entire face (primitive) as a single color.<br />
Alternatively, rasterization can be done in a more complicated manner by first rendering the vertices of a face and<br />
then rendering the pixels of that face as a blending of the vertex colors. This version of rasterization has overtaken<br />
the old method as it allows the <strong>graphics</strong> to flow without complicated textures (a rasterized image when used face by<br />
face tends to have a very block-like effect if not covered in complex textures; the faces are not smooth because there<br />
is no gradual color change from one primitive to the next). This newer method of rasterization utilizes the <strong>graphics</strong><br />
card's more taxing shading functions and still achieves better performance because the simpler textures stored in<br />
memory use less space. Sometimes designers will use one rasterization method on some faces and the other method<br />
on others based on the angle at which that face meets other joined faces, thus increasing speed and not hurting the<br />
overall effect.<br />
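The vertex-color-blending variant can be sketched as a small software rasterizer that interpolates the three vertex colors with barycentric weights. This is illustrative only; real graphics cards perform the equivalent in fixed-function hardware:

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area term: which side of edge (a -> b) point p lies on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(verts, colors, width, height):
    """Rasterize one 2D triangle, blending the three vertex colors with
    barycentric weights (the 'newer' method described above; flat
    shading would paint one color). Returns {(x, y): (r, g, b)}."""
    (x0, y0), (x1, y1), (x2, y2) = verts
    area = edge(x0, y0, x1, y1, x2, y2)
    if area == 0:
        return {}                        # degenerate triangle
    out = {}
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5    # sample at the pixel center
            w0 = edge(x1, y1, x2, y2, px, py) / area
            w1 = edge(x2, y2, x0, y0, px, py) / area
            w2 = edge(x0, y0, x1, y1, px, py) / area
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                out[(x, y)] = tuple(w0 * c0 + w1 * c1 + w2 * c2
                                    for c0, c1, c2 in zip(*colors))
    return out
```

Because each covered pixel gets a weighted mix of the vertex colors, adjacent primitives shade into one another instead of showing the block-like effect of single-color faces.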
Ray casting<br />
In ray casting the geometry which has been modeled is parsed pixel by pixel, line by line, from the point of view<br />
outward, as if casting rays out from the point of view. Where an object is intersected, the color value at the point may<br />
be evaluated using several methods. In the simplest, the color value of the object at the point of intersection becomes<br />
the value of that pixel. The color may be determined from a texture-map. A more sophisticated method is to modify<br />
the colour value by an illumination factor, but without calculating the relationship to a simulated light source. To<br />
reduce artifacts, a number of rays in slightly different directions may be averaged.<br />
Rough simulations of optical properties may be additionally employed: a simple calculation of the ray from the<br />
object to the point of view is made. Another calculation is made of the angle of incidence of light rays from the light<br />
source(s), and from these as well as the specified intensities of the light sources, the value of the pixel is calculated.<br />
Another simulation uses illumination plotted from a radiosity algorithm, or a combination of these two.<br />
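A minimal ray caster along these lines can intersect a list of spheres and dim each hit's color by a simple facing-ratio illumination factor, with no simulated light source (the scene representation here is illustrative, not a standard format):

```python
import math

def cast_rays(spheres, width, height):
    """Cast one ray per pixel from the eye at the origin through a
    z = -1 image plane; each sphere is (center, radius, color)."""
    image = []
    for j in range(height):
        row = []
        for i in range(width):
            dx = (i + 0.5) / width * 2.0 - 1.0
            dy = 1.0 - (j + 0.5) / height * 2.0
            norm = math.sqrt(dx * dx + dy * dy + 1.0)
            row.append(shade_nearest(spheres, (0.0, 0.0, 0.0),
                                     (dx / norm, dy / norm, -1.0 / norm)))
        image.append(row)
    return image

def shade_nearest(spheres, o, d):
    """Nearest ray-sphere hit; color dimmed by |normal . ray| only."""
    best = None
    for center, radius, color in spheres:
        oc = tuple(oi - ci for oi, ci in zip(o, center))
        b = 2.0 * sum(oci * di for oci, di in zip(oc, d))
        c = sum(oci * oci for oci in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            continue                     # ray misses this sphere
        t = (-b - math.sqrt(disc)) / 2.0
        if t > 1e-6 and (best is None or t < best[0]):
            hit = tuple(oi + t * di for oi, di in zip(o, d))
            n = tuple((h - ci) / radius for h, ci in zip(hit, center))
            facing = abs(sum(ni * di for ni, di in zip(n, d)))
            best = (t, tuple(facing * ci for ci in color))
    return best[1] if best else (0.0, 0.0, 0.0)
```

With no shadow or bounce rays, every frame costs one intersection pass per pixel, which is why the results look 'flat' but render quickly.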
Raycasting is primarily used for realtime simulations, such as those used in <strong>3D</strong> computer games and cartoon<br />
animations, where detail is not important, or where it is more efficient to manually fake the details in order to obtain<br />
better performance in the computational stage. This is usually the case when a large number of frames need to be<br />
animated. The resulting surfaces have a characteristic 'flat' appearance when no additional tricks are used, as if<br />
objects in the scene were all painted with matte finish.
Ray tracing<br />
Ray tracing aims to simulate the natural flow of light,<br />
interpreted as particles. Often, ray tracing methods are<br />
utilized to approximate the solution to the rendering<br />
equation by applying Monte Carlo methods to it. Some<br />
of the most used methods are Path Tracing,<br />
Bidirectional Path Tracing, or Metropolis light<br />
transport, but also semi realistic methods are in use,<br />
like Whitted Style Ray Tracing, or hybrids. While most<br />
implementations let light propagate on straight lines,<br />
applications exist to simulate relativistic spacetime<br />
effects. [1]<br />
In a final, production quality rendering of a ray traced<br />
work, multiple rays are generally shot for each pixel,<br />
and traced not just to the first object of intersection, but<br />
rather, through a number of sequential 'bounces', using<br />
the known laws of optics such as "angle of incidence<br />
equals angle of reflection" and more advanced laws that<br />
deal with refraction and surface roughness.<br />
Spiral Sphere and Julia, Detail, a computer-generated image created by visual artist Robert W. McGregor using only POV-Ray 3.6 and its built-in scene description language.<br />
Once the ray either encounters a light source, or more probably once a set limiting number of bounces has been evaluated, then the surface illumination at that final point is<br />
evaluated using techniques described above, and the changes along the way through the various bounces evaluated to<br />
estimate a value observed at the point of view. This is all repeated for each sample, for each pixel.<br />
In distribution ray tracing, at each point of intersection, multiple rays may be spawned. In path tracing, however,<br />
only a single ray or none is fired at each intersection, utilizing the statistical nature of Monte Carlo experiments.<br />
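The single-ray choice at each intersection can be sketched as a cosine-weighted diffuse bounce with Russian-roulette termination. This is an illustrative fragment of a path tracer under those assumptions, not a complete renderer:

```python
import math
import random

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def sample_bounce(normal, survival, rng):
    """At an intersection, either terminate the path (Russian roulette)
    or continue with a single cosine-weighted diffuse ray about
    `normal`; distribution ray tracing would spawn several rays here."""
    if rng.random() >= survival:
        return None                       # path terminates at this hit
    u1, u2 = rng.random(), rng.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    x, y, z = r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1)
    # build an orthonormal basis around the normal, rotate sample into it
    a = (1.0, 0.0, 0.0) if abs(normal[0]) < 0.9 else (0.0, 1.0, 0.0)
    t = normalize(cross(a, normal))
    b = cross(normal, t)
    return tuple(x * ti + y * bi + z * ni
                 for ti, bi, ni in zip(t, b, normal))
```

Averaging many such single-ray paths per pixel is what gives the Monte Carlo estimate its statistical character.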
As a brute-force method, ray tracing has been too slow to consider for real-time, and until recently too slow even to<br />
consider for short films of any degree of quality, although it has been used for special effects sequences, and in<br />
advertising, where a short portion of high quality (perhaps even photorealistic) footage is required.<br />
However, efforts at optimizing to reduce the number of calculations needed in portions of a work where detail is not<br />
high or does not depend on ray tracing features have led to a realistic possibility of wider use of ray tracing. There is<br />
now some hardware accelerated ray tracing equipment, at least in prototype phase, and some game demos which<br />
show use of real-time software or hardware ray tracing.<br />
Radiosity<br />
Radiosity is a method which attempts to simulate the way in which directly illuminated surfaces act as indirect light<br />
sources that illuminate other surfaces. This produces more realistic shading and seems to better capture the<br />
'ambience' of an indoor scene. A classic example is the way that shadows 'hug' the corners of rooms.<br />
The optical basis of the simulation is that some diffused light from a given point on a given surface is reflected in a<br />
large spectrum of directions and illuminates the area around it.<br />
The simulation technique may vary in complexity. Many renderings have a very rough estimate of radiosity, simply illuminating an entire scene very slightly with a factor known as ambiance. However, when advanced radiosity estimation is coupled with a high quality ray tracing algorithm, images may exhibit convincing realism, particularly for indoor scenes.
In advanced radiosity simulation, recursive, finite-element algorithms 'bounce' light back and forth between surfaces<br />
in the model, until some recursion limit is reached. The colouring of one surface in this way influences the colouring<br />
of a neighbouring surface, and vice versa. The resulting values of illumination throughout the model (sometimes<br />
including for empty spaces) are stored and used as additional inputs when performing calculations in a ray-casting or<br />
ray-tracing model.<br />
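The bounce loop can be sketched as a Jacobi-style iteration of B_i = E_i + rho_i * sum_j(F_ij * B_j) over patches, with the form factors F_ij assumed precomputed; a simplified scalar sketch:

```python
def radiosity(emission, reflectance, form_factors, bounces=50):
    """Iterative radiosity solve: repeatedly bounce light between
    patches, B_i = E_i + rho_i * sum_j(F[i][j] * B_j), until the
    bounce (recursion) limit mentioned above is reached."""
    b = list(emission)
    n = len(b)
    for _ in range(bounces):
        b = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b
```

For two facing patches, one emissive and one 50% reflective, the iteration converges after a single bounce, illustrating how each surface's color feeds back into its neighbours.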
Due to the iterative/recursive nature of the technique, complex objects are particularly slow to emulate. Prior to the<br />
standardization of rapid radiosity calculation, some graphic artists used a technique referred to loosely as false<br />
radiosity by darkening areas of texture maps corresponding to corners, joints and recesses, and applying them via<br />
self-illumination or diffuse mapping for scanline rendering. Even now, advanced radiosity calculations may be<br />
reserved for calculating the ambiance of the room, from the light reflecting off walls, floor and ceiling, without<br />
examining the contribution that complex objects make to the radiosity—or complex objects may be replaced in the<br />
radiosity calculation with simpler objects of similar size and texture.<br />
Radiosity calculations are viewpoint independent which increases the computations involved, but makes them useful<br />
for all viewpoints. If there is little rearrangement of radiosity objects in the scene, the same radiosity data may be<br />
reused for a number of frames, making radiosity an effective way to improve on the flatness of ray casting, without<br />
seriously impacting the overall rendering time-per-frame.<br />
Because of this, radiosity is a prime component of leading real-time rendering methods, and has been used from<br />
beginning-to-end to create a large number of well-known recent feature-length animated <strong>3D</strong>-cartoon films.<br />
Sampling and filtering<br />
One problem that any rendering system must deal with, no matter which approach it takes, is the sampling problem.<br />
Essentially, the rendering process tries to depict a continuous function from image space to colors by using a finite number of pixels. As a consequence of the Nyquist–Shannon sampling theorem, any spatial waveform that is to be displayed must span at least two pixels, so the displayable detail is proportional to the image resolution. In simpler terms, an image cannot display details, peaks or troughs in color or intensity, that are smaller than one pixel.<br />
If a naive rendering algorithm is used without any filtering, high frequencies in the image function will cause ugly<br />
aliasing to be present in the final image. Aliasing typically manifests itself as jaggies, or jagged edges on objects<br />
where the pixel grid is visible. In order to remove aliasing, all rendering algorithms (if they are to produce<br />
good-looking images) must use some kind of low-pass filter on the image function to remove high frequencies, a<br />
process called antialiasing.<br />
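The simplest such filter is a box filter: average a grid of sub-pixel samples per pixel. In this sketch, `shade` is a hypothetical function returning the image intensity at a continuous point:

```python
def supersample(shade, x, y, n=4):
    """Antialias pixel (x, y) by averaging an n x n grid of sub-pixel
    samples: a simple box low-pass filter over the image function.
    `shade(px, py)` is an assumed per-point shading callback."""
    total = 0.0
    for j in range(n):
        for i in range(n):
            total += shade(x + (i + 0.5) / n, y + (j + 0.5) / n)
    return total / (n * n)
```

A hard black/white edge crossing the middle of a pixel averages to mid-grey instead of snapping to one side, which is exactly how jaggies are softened.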
Optimization<br />
Optimizations used by an artist when a scene is being developed<br />
Due to the large number of calculations, a work in progress is usually only rendered in detail appropriate to the<br />
portion of the work being developed at a given time, so in the initial stages of modeling, wireframe and ray casting<br />
may be used, even where the target output is ray tracing with radiosity. It is also common to render only parts of the<br />
scene at high detail, and to remove objects that are not important to what is currently being developed.<br />
Common optimizations for real time rendering<br />
For real-time rendering, it is appropriate to simplify one or more common approximations and tune them to the exact parameters of the scenery in question, to get the most 'bang for the buck'.
Academic core<br />
The implementation of a realistic renderer always has some basic element of physical simulation or emulation —<br />
some computation which resembles or abstracts a real physical process.<br />
The term "physically based" indicates the use of physical models and approximations that are more general and<br />
widely accepted outside rendering. A particular set of related techniques have gradually become established in the<br />
rendering community.<br />
The basic concepts are moderately straightforward, but intractable to calculate; and a single elegant algorithm or<br />
approach has been elusive for more general purpose renderers. In order to meet demands of robustness, accuracy and<br />
practicality, an implementation will be a complex combination of different techniques.<br />
Rendering research is concerned with both the adaptation of scientific models and their efficient application.<br />
The rendering equation<br />
This is the key academic/theoretical concept in rendering. It serves as the most abstract formal expression of the<br />
non-perceptual aspect of rendering. All more complete algorithms can be seen as solutions to particular formulations<br />
of this equation.<br />
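As formulated by Kajiya (1986, cited in the chronology below), the equation can be written:

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i
```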
Meaning: at a particular position and direction, the outgoing light (L_o) is the sum of the emitted light (L_e) and the<br />
reflected light. The reflected light is itself the sum of the incoming light (L_i) from all directions, multiplied by the<br />
surface reflection and the cosine of the incoming angle. By connecting outward light to inward light via an<br />
interaction point, this equation stands for the whole 'light transport' (all the movement of light) in a scene.<br />
The bidirectional reflectance distribution function<br />
The bidirectional reflectance distribution function (BRDF) expresses a simple model of light interaction with a<br />
surface.<br />
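In its standard form, the BRDF is defined as the ratio of reflected radiance to incident irradiance:

```latex
f_r(\omega_i, \omega_o) = \frac{\mathrm{d}L_o(\omega_o)}{L_i(\omega_i)\, \cos\theta_i\, \mathrm{d}\omega_i}
```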
Light interaction is often approximated by the even simpler models: diffuse reflection and specular reflection,<br />
although both can be BRDFs.<br />
Geometric optics<br />
Rendering is practically exclusively concerned with the particle aspect of light physics — known as geometric<br />
optics. Treating light, at its basic level, as particles bouncing around is a simplification, but appropriate: the wave<br />
aspects of light are negligible in most scenes, and are significantly more difficult to simulate. Notable wave aspect<br />
phenomena include diffraction (as seen in the colours of CDs and DVDs) and polarisation (as seen in LCDs). Both<br />
types of effect, if needed, are simulated by appearance-oriented adjustment of the reflection model.<br />
Visual perception<br />
Though it receives less attention, an understanding of human visual perception is valuable to rendering. This is<br />
mainly because image displays and human perception have restricted ranges. A renderer can simulate an almost<br />
infinite range of light brightness and color, but current displays — movie screen, computer monitor, etc. — cannot<br />
handle so much, and something must be discarded or compressed. Human perception also has limits, and so does not<br />
need to be given large-range images to create realism. This can help solve the problem of fitting images into<br />
displays, and, furthermore, suggest what short-cuts could be used in the rendering simulation, since certain subtleties<br />
won't be noticeable. This related subject is tone mapping.
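As an illustration of this range compression, the simple Reinhard-style global tone mapping operator maps unbounded scene luminance into the displayable range [0, 1):

```python
def tone_map(luminance):
    # Reinhard-style global operator: L_d = L / (1 + L).
    # Compresses [0, infinity) into [0, 1); small values pass almost unchanged,
    # extreme highlights are compressed toward white.
    return luminance / (1.0 + luminance)
```

Display-range values are nearly preserved (e.g. a luminance of 0.1 maps to roughly 0.09), while a highlight thousands of times brighter still fits below 1.0.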
Mathematics used in rendering includes: linear algebra, calculus, numerical mathematics, signal processing, and<br />
Monte Carlo methods.<br />
Rendering for movies often takes place on a network of tightly connected computers known as a render farm.<br />
The current state of the art in 3D image description for movie creation is the mental ray scene description language<br />
designed at mental images and the RenderMan shading language designed at Pixar [2] (compare with simpler 3D<br />
file formats such as VRML, or with APIs such as OpenGL and DirectX tailored for 3D hardware accelerators).<br />
Other renderers (including proprietary ones) can be and are sometimes used, but most other renderers tend to lack<br />
one or more of the often-needed features such as good texture filtering, texture caching, programmable shaders,<br />
high-end geometry types like hair, subdivision or NURBS surfaces with tessellation on demand, geometry caching,<br />
ray tracing with geometry caching, high-quality shadow mapping, speed, or patent-free implementations. Other<br />
highly sought features these days may include IPR and hardware rendering/shading.<br />
Chronology of important published ideas<br />
• 1968 Ray casting (Appel, A. (1968). Some techniques for shading machine renderings of solids [3] . Proceedings<br />
of the Spring Joint Computer Conference 32, 37–49.)<br />
• 1970 Scanline rendering (Bouknight, W. J. (1970). A procedure for generation of three-dimensional half-tone<br />
computer graphics presentations. Communications of the ACM 13 (9), 527–536. doi:10.1145/362736.362739)<br />
• 1971 Gouraud shading (Gouraud, H. (1971). Continuous shading of curved surfaces [4] . IEEE Transactions on<br />
Computers 20 (6), 623–629.)<br />
• 1974 Texture mapping (Catmull, E. (1974). A subdivision algorithm for computer display of curved surfaces [5] .<br />
PhD thesis, University of Utah.)<br />
• 1974 Z-buffering (Catmull, E. (1974). A subdivision algorithm for computer display of curved surfaces [5] . PhD<br />
thesis)<br />
• 1975 Phong shading (Phong, B-T. (1975). Illumination for computer generated pictures [6] . Communications of<br />
the ACM 18 (6), 311–316.)<br />
• 1976 Environment mapping (Blinn, J.F., Newell, M.E. (1976). Texture and reflection in computer generated<br />
images [7] . Communications of the ACM 19, 542–546.)<br />
• 1977 Shadow volumes (Crow, F.C. (1977). Shadow algorithms for computer graphics [8] . Computer Graphics<br />
(Proceedings of SIGGRAPH 1977) 11 (2), 242–248.)<br />
• 1978 Shadow buffer (Williams, L. (1978). Casting curved shadows on curved surfaces [9] . Computer Graphics<br />
(Proceedings of SIGGRAPH 1978) 12 (3), 270–274.)<br />
• 1978 Bump mapping (Blinn, J.F. (1978). Simulation of wrinkled surfaces [10] . Computer Graphics (Proceedings<br />
of SIGGRAPH 1978) 12 (3), 286–292.)<br />
• 1980 BSP trees (Fuchs, H., Kedem, Z.M., Naylor, B.F. (1980). On visible surface generation by a priori tree<br />
structures [11] . Computer Graphics (Proceedings of SIGGRAPH 1980) 14 (3), 124–133.)<br />
• 1980 Ray tracing (Whitted, T. (1980). An improved illumination model for shaded display [12] . Communications<br />
of the ACM 23 (6), 343–349.)<br />
• 1981 Cook shader (Cook, R.L., Torrance, K.E. (1981). A reflectance model for computer graphics [13] .<br />
Computer Graphics (Proceedings of SIGGRAPH 1981) 15 (3), 307–316.)<br />
• 1983 MIP maps (Williams, L. (1983). Pyramidal parametrics [14] . Computer Graphics (Proceedings of<br />
SIGGRAPH 1983) 17 (3), 1–11.)<br />
• 1984 Octree ray tracing (Glassner, A.S. (1984). Space subdivision for fast ray tracing. IEEE Computer Graphics<br />
& Applications 4 (10), 15–22.)<br />
• 1984 Alpha compositing (Porter, T., Duff, T. (1984). Compositing digital images [15] . Computer Graphics<br />
(Proceedings of SIGGRAPH 1984) 18 (3), 253–259.)
• 1984 Distributed ray tracing (Cook, R.L., Porter, T., Carpenter, L. (1984). Distributed ray tracing [16] .<br />
Computer Graphics (Proceedings of SIGGRAPH 1984) 18 (3), 137–145.)<br />
• 1984 Radiosity (Goral, C., Torrance, K.E., Greenberg D.P., Battaile, B. (1984). Modeling the interaction of light<br />
between diffuse surfaces [17] . Computer Graphics (Proceedings of SIGGRAPH 1984) 18 (3), 213–222.)<br />
• 1985 Hemicube radiosity (Cohen, M.F., Greenberg, D.P. (1985). The hemi-cube: a radiosity solution for<br />
complex environments [18] . Computer Graphics (Proceedings of SIGGRAPH 1985) 19 (3), 31–40.<br />
doi:10.1145/325165.325171)<br />
• 1986 Light source tracing (Arvo, J. (1986). Backward ray tracing [19] . SIGGRAPH 1986 Developments in Ray<br />
Tracing course notes)<br />
• 1986 Rendering equation (Kajiya, J. (1986). The rendering equation [20] . Computer Graphics (Proceedings of<br />
SIGGRAPH 1986) 20 (4), 143–150.)<br />
• 1987 Reyes rendering (Cook, R.L., Carpenter, L., Catmull, E. (1987). The Reyes image rendering architecture<br />
[21] . Computer Graphics (Proceedings of SIGGRAPH 1987) 21 (4), 95–102.)<br />
• 1991 Hierarchical radiosity (Hanrahan, P., Salzman, D., Aupperle, L. (1991). A rapid hierarchical radiosity<br />
algorithm [22] . Computer Graphics (Proceedings of SIGGRAPH 1991) 25 (4), 197–206.)<br />
• 1993 Tone mapping (Tumblin, J., Rushmeier, H.E. (1993). Tone reproduction for realistic computer generated<br />
images [23] . IEEE Computer Graphics & Applications 13 (6), 42–48.)<br />
• 1993 Subsurface scattering (Hanrahan, P., Krueger, W. (1993). Reflection from layered surfaces due to<br />
subsurface scattering [24] . Computer Graphics (Proceedings of SIGGRAPH 1993) 27, 165–174.)<br />
• 1995 Photon mapping (Jensen, H.W., Christensen, N.J. (1995). Photon maps in bidirectional monte carlo ray<br />
tracing of complex objects [25] . Computers & Graphics 19 (2), 215–224.)<br />
• 1997 Metropolis light transport (Veach, E., Guibas, L. (1997). Metropolis light transport [26] . Computer<br />
Graphics (Proceedings of SIGGRAPH 1997) 16 65–76.)<br />
• 1997 Instant Radiosity (Keller, A. (1997). Instant Radiosity [27] . Computer Graphics (Proceedings of<br />
SIGGRAPH 1997) 24, 49–56.)<br />
• 2002 Precomputed Radiance Transfer (Sloan, P., Kautz, J., Snyder, J. (2002). Precomputed Radiance Transfer<br />
for Real-Time Rendering in Dynamic, Low Frequency Lighting Environments [28] . Computer Graphics<br />
(Proceedings of SIGGRAPH 2002) 29, 527–536.)<br />
Books and summaries<br />
• Pharr; Humphreys (2004). Physically Based Rendering. Morgan Kaufmann. ISBN 0-12-553180-X.<br />
• Shirley; Morley (2003). Realistic Ray Tracing (2nd ed.). AK Peters. ISBN 1-56881-198-5.<br />
• Dutre; Bala; Bekaert (2002). Advanced Global Illumination. AK Peters. ISBN 1-56881-177-2.<br />
• Akenine-Moller; Haines (2002). Real-time Rendering (2nd ed.). AK Peters. ISBN 1-56881-182-9.<br />
• Strothotte; Schlechtweg (2002). Non-Photorealistic Computer Graphics. Morgan Kaufmann. ISBN<br />
1-55860-787-0.<br />
• Gooch; Gooch (2001). Non-Photorealistic Rendering. AK Peters. ISBN 1-56881-133-0.<br />
• Jensen (2001). Realistic Image Synthesis Using Photon Mapping. AK Peters. ISBN 1-56881-147-0.<br />
• Blinn (1996). Jim Blinn's Corner: A Trip Down the Graphics Pipeline. Morgan Kaufmann. ISBN<br />
1-55860-387-5.<br />
• Glassner (1995). Principles Of Digital Image Synthesis. Morgan Kaufmann. ISBN 1-55860-276-3.<br />
• Cohen; Wallace (1993). Radiosity and Realistic Image Synthesis. AP Professional. ISBN 0-12-178270-0.<br />
• Foley; Van Dam; Feiner; Hughes (1990). Computer Graphics: Principles and Practice. Addison Wesley. ISBN<br />
0-201-12110-7.<br />
• Glassner (ed.) (1989). An Introduction To Ray Tracing. Academic Press. ISBN 0-12-286160-4.<br />
• Description of the 'Radiance' system [29]
External links<br />
• SIGGRAPH [30] The ACM's special interest group in graphics, the largest academic and professional<br />
association and conference.<br />
• http://www.cs.brown.edu/~tor/ A list of links to (recent) SIGGRAPH papers (and some others) on the web.<br />
References<br />
[1] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.56.830<br />
[2] http://portal.acm.org/citation.cfm?id=1185817&jmp=abstract&coll=GUIDE&dl=GUIDE<br />
[3] http://graphics.stanford.edu/courses/Appel.pdf<br />
[4] http://www.cs.uiowa.edu/~cwyman/classes/spring05-22C251/papers/ContinuousShadingOfCurvedSurfaces.pdf<br />
[5] http://www.pixartouchbook.com/storage/catmull_thesis.pdf<br />
[6] http://jesper.kalliope.org/blog/library/p311-phong.pdf<br />
[7] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.87.8903&rep=rep1&type=pdf<br />
[8] http://design.osu.edu/carlson/history/PDFs/crow-shadows.pdf<br />
[9] http://citeseer.ist.psu.edu/viewdoc/download?doi=10.1.1.134.8225&rep=rep1&type=pdf<br />
[10] http://research.microsoft.com/pubs/73939/p286-blinn.pdf<br />
[11] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.112.4406&rep=rep1&type=pdf<br />
[12] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.114.7629&rep=rep1&type=pdf<br />
[13] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.88.7796&rep=rep1&type=pdf<br />
[14] http://citeseer.ist.psu.edu/viewdoc/download?doi=10.1.1.163.6298&rep=rep1&type=pdf<br />
[15] http://keithp.com/~keithp/porterduff/p253-porter.pdf<br />
[16] http://www.cs.rutgers.edu/~nealen/teaching/cs428_fall09/readings/cook84.pdf<br />
[17] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.112.356&rep=rep1&type=pdf<br />
[18] http://www.arnetminer.org/dev.do?m=downloadpdf&url=http://arnetminer.org/pdf/PDFFiles2/--g---g-Index1255026826706/The%20hemi-cube%20%20a%20radiosity%20solution%20for%20complex%20environments1255058011060.pdf<br />
[19] http://citeseer.ist.psu.edu/viewdoc/download?doi=10.1.1.31.581&rep=rep1&type=pdf<br />
[20] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.63.1402&rep=rep1&type=pdf<br />
[21] http://graphics.pixar.com/library/Reyes/paper.pdf<br />
[22] http://citeseer.ist.psu.edu/viewdoc/download?doi=10.1.1.93.5694&rep=rep1&type=pdf<br />
[23] http://smartech.gatech.edu/bitstream/handle/1853/3686/92-31.pdf?sequence=1<br />
[24] http://citeseer.ist.psu.edu/viewdoc/download?doi=10.1.1.57.9761&rep=rep1&type=pdf<br />
[25] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.97.2724&rep=rep1&type=pdf<br />
[26] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.88.944&rep=rep1&type=pdf<br />
[27] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.15.240&rep=rep1&type=pdf<br />
[28] http://www.mpi-inf.mpg.de/~jnkautz/projects/prt/prtSIG02.pdf<br />
[29] http://radsite.lbl.gov/radiance/papers/sg94.1/<br />
[30] http://www.siggraph.org/
Retained mode<br />
In computing, retained mode rendering is a style for application programming interfaces of graphics libraries, in<br />
which the libraries retain a complete model of the objects to be rendered.<br />
Overview<br />
By using a "retained mode" approach, client calls do not directly cause actual rendering, but instead update an<br />
internal model (typically a list of objects) which is maintained within the library's data space. This allows the library<br />
to optimize when actual rendering takes place along with the processing of related objects.<br />
Some techniques to optimize rendering include:<br />
• managing double buffering<br />
• performing occlusion culling<br />
• only transferring data that has changed from one frame to the next from the application to the library<br />
Immediate mode is an alternative approach; the two styles can coexist in the same library and are not necessarily<br />
mutually exclusive in practice. For example, OpenGL has immediate mode functions that can use previously defined<br />
server-side objects (textures, vertex and index buffers, shaders, etc.) without resending unchanged data.<br />
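The retained-mode pattern described above can be sketched as follows (a hypothetical minimal library; all names are illustrative, not a real API):

```python
class RetainedScene:
    """Minimal retained-mode sketch: client calls update an internal model;
    actual rendering happens later, and only for objects that changed."""

    def __init__(self):
        self.objects = {}   # object id -> description (the retained model)
        self.dirty = set()  # ids changed since the last render

    def upsert(self, obj_id, description):
        # Client calls do not draw anything; they only update the model.
        self.objects[obj_id] = description
        self.dirty.add(obj_id)

    def render(self):
        # The library decides when drawing happens, and can limit work
        # to the objects that changed since the previous frame.
        drawn = sorted(self.dirty)
        self.dirty.clear()
        return drawn  # stand-in for issuing actual draw commands

scene = RetainedScene()
scene.upsert("tri", "triangle at origin")
scene.upsert("quad", "textured quad")
scene.render()                     # processes both objects
scene.upsert("tri", "triangle, moved")
scene.render()                     # only "tri" is re-processed
```

This is the essence of the optimization listed above: only data that has changed from one frame to the next crosses the application/library boundary.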
Scanline rendering<br />
Scanline rendering is an algorithm for visible surface determination, in 3D computer graphics, that works on a<br />
row-by-row basis rather than a polygon-by-polygon or pixel-by-pixel basis. All of the polygons to be rendered are<br />
first sorted by the top y coordinate at which they first appear, then each row or scan line of the image is computed<br />
using the intersection of a scan line with the polygons on the front of the sorted list, while the sorted list is updated to<br />
discard no-longer-visible polygons as the active scan line is advanced down the picture.<br />
The main advantage of this method is that sorting vertices along the normal of the scanning plane reduces the<br />
number of comparisons between edges. Another advantage is that it is not necessary to translate the coordinates of<br />
all vertices from the main memory into the working memory—only vertices defining edges that intersect the current<br />
scan line need to be in active memory, and each vertex is read in only once. The main memory is often very slow<br />
compared to the link between the central processing unit and cache memory, and thus avoiding re-accessing vertices<br />
in main memory can provide a substantial speedup.<br />
This kind of algorithm can be easily integrated with the Phong reflection model, the Z-buffer algorithm, and many<br />
other <strong>graphics</strong> techniques.<br />
Algorithm<br />
The usual method starts with edges of projected polygons inserted into buckets, one per scanline; the rasterizer<br />
maintains an active edge table (AET). Entries maintain sort links, X coordinates, gradients, and references to the<br />
polygons they bound. To rasterize the next scanline, the edges no longer relevant are removed, and new edges from<br />
the current scanline's Y-bucket are added, inserted sorted by X coordinate. The active edge table entries then have<br />
their X and other parameters incremented. Active edge table entries are maintained in an X-sorted list by bubble<br />
sort, exchanging entries when two edges cross. After updating the edges, the active edge table is traversed in X<br />
order to emit only the visible spans, maintaining a Z-sorted active span table, inserting and deleting surfaces when<br />
edges are crossed.
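The bucket and active-edge-table scheme above can be sketched for a single polygon (a simplified illustration: spans are filled with the even-odd rule, a full sort stands in for the incremental bubble sort, and no Z-sorted span table is kept since there is only one polygon):

```python
def scanline_fill(polygon, height):
    """Rasterize one polygon (list of (x, y) vertices). Returns, per
    scanline y, a list of filled [x_start, x_end) spans."""
    # Build per-scanline buckets of edges, keyed by each edge's top y.
    buckets = {}
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        if y0 == y1:
            continue  # horizontal edges never cross a scanline
        if y0 > y1:
            (x0, y0), (x1, y1) = (x1, y1), (x0, y0)
        slope = (x1 - x0) / (y1 - y0)  # x increment per scanline (the gradient)
        # Edge record: [current x, x increment, last scanline it spans]
        buckets.setdefault(y0, []).append([x0, slope, y1])

    active = []  # the active edge table
    spans = {}
    for y in range(height):
        active = [e for e in active if e[2] > y]  # drop no-longer-relevant edges
        active += buckets.get(y, [])              # take in this scanline's bucket
        active.sort(key=lambda e: e[0])           # keep the AET in X order
        xs = [e[0] for e in active]
        # Even-odd rule: pair up the sorted crossings into filled spans.
        spans[y] = [(xs[i], xs[i + 1]) for i in range(0, len(xs) - 1, 2)]
        for e in active:
            e[0] += e[1]                          # step x to the next scanline
    return spans
```

Note that each vertex is examined only once, when its edge enters a bucket, matching the memory-access advantage described earlier.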
Variants<br />
A hybrid between this and Z-buffering does away with the active edge table sorting, and instead rasterizes one<br />
scanline at a time into a Z-buffer, maintaining active polygon spans from one scanline to the next.<br />
In another variant, an ID buffer is rasterized in an intermediate step, allowing deferred shading of the resulting<br />
visible pixels.<br />
History<br />
The first publication of the scanline rendering technique was probably by Wylie, Romney, Evans, and Erdahl in<br />
1967. [1]<br />
Other early developments of the scanline rendering method were by Bouknight in 1969, [2] and Newell, Newell, and<br />
Sancha in 1972. [3] Much of the early work on these methods was done in Ivan Sutherland's <strong>graphics</strong> group at the<br />
University of Utah, and at the Evans & Sutherland company in Salt Lake City.<br />
Use in realtime rendering<br />
The early Evans & Sutherland ESIG line of image-generators (IGs) employed the technique in hardware 'on the fly',<br />
to generate images one raster-line at a time without a framebuffer, saving the need for then costly memory. Later<br />
variants used a hybrid approach.<br />
The Nintendo DS is the latest hardware to render 3D scenes in this manner, with the option of caching the rasterized<br />
images into VRAM.<br />
The sprite hardware prevalent in 1980s games machines can be considered a simple 2D form of scanline rendering.<br />
The technique was used in the first Quake engine for software rendering of environments (but moving objects were<br />
Z-buffered over the top). Static scenery used BSP-derived sorting for priority. It proved better than Z-buffer/painter's<br />
type algorithms at handling scenes of high depth complexity with costly pixel operations (i.e. perspective-correct<br />
texture mapping without hardware assist). This use preceded the widespread adoption of Z-buffer-based GPUs now<br />
common in PCs.<br />
Sony experimented with software scanline renderers on a second Cell processor during the development of the<br />
PlayStation 3, before settling on a conventional CPU/GPU arrangement.<br />
Similar techniques<br />
A similar principle is employed in tiled rendering (most famously the PowerVR 3D chip); that is, primitives are<br />
sorted into screen space, then rendered in fast on-chip memory, one tile at a time. The Dreamcast provided a mode<br />
for rasterizing one row of tiles at a time for direct raster scanout, saving the need for a complete framebuffer,<br />
somewhat in the spirit of hardware scanline rendering.<br />
Some software rasterizers use 'span buffering' (or 'coverage buffering'), in which a list of sorted, clipped spans is<br />
stored in scanline buckets. Primitives are successively added to this data structure, before rasterizing only the<br />
visible pixels in a final stage.
Comparison with Z-buffer algorithm<br />
The main advantage of scanline rendering over Z-buffering is that visible pixels are only ever processed once—a<br />
benefit for the case of high resolution or expensive shading computations.<br />
In modern Z-buffer systems, similar benefits can be gained through rough front-to-back sorting (approaching the<br />
'reverse painters algorithm'), early Z-reject (in conjunction with hierarchical Z), and less common deferred rendering<br />
techniques possible on programmable GPUs.<br />
Scanline techniques working on the raster have the drawback that overload is not handled gracefully.<br />
The technique is not considered to scale well as the number of primitives increases. This is because of the size of<br />
the intermediate data structures required during rendering, which can exceed the size of a Z-buffer for a complex<br />
scene.<br />
Consequently, in contemporary interactive <strong>graphics</strong> applications, the Z-buffer has become ubiquitous. The Z-buffer<br />
allows larger volumes of primitives to be traversed linearly, in parallel, in a manner friendly to modern hardware.<br />
Transformed coordinates, attribute gradients, etc., need never leave the <strong>graphics</strong> chip; only the visible pixels and<br />
depth values are stored.<br />
References<br />
[1] Wylie, C, Romney, G W, Evans, D C, and Erdahl, A, "Halftone Perspective Drawings by Computer," Proc. AFIPS FJCC 1967, Vol. 31, 49<br />
[2] Bouknight W.J, "An Improved Procedure for Generation of Half-tone Computer Graphics Representation," UI, Coordinated Science<br />
Laboratory, Sept 1969<br />
[3] Newell, M E, Newell R. G, and Sancha, T.L, "A New Approach to the Shaded Picture Problem," Proc ACM National Conf. 1972<br />
External links<br />
• University of Utah Graphics Group History (http:/ / www. cs. utah. edu/ about/ history/ )
Schlick's approximation<br />
In 3D computer graphics, Schlick's approximation is a formula for approximating the bidirectional reflectance<br />
distribution function (BRDF) of metallic surfaces. It was proposed by Christophe Schlick to approximate the<br />
contribution of the Fresnel term in the specular reflection of light from conducting surfaces.<br />
According to Schlick's model, the specular reflection coefficient R is given by<br />
R(θ) = R_0 + (1 - R_0)(1 - cos θ)^5<br />
where θ is half the angle between the incoming and outgoing light directions, and R_0 is the reflectance at normal<br />
incidence (i.e. the value of the Fresnel term when θ = 0).<br />
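A direct transcription of the formula, as a sketch (`cos_theta` is the cosine of the angle θ defined above; `r0` is the normal-incidence reflectance R_0):

```python
def schlick(cos_theta, r0):
    # Schlick's approximation: R(theta) = R0 + (1 - R0) * (1 - cos(theta))^5
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5
```

At normal incidence (`cos_theta = 1`) the reflectance is exactly R_0, and it rises toward 1 at grazing incidence (`cos_theta = 0`), mimicking the full Fresnel term at a fraction of its cost.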
References<br />
• Schlick, C. (1994). "An Inexpensive BRDF Model for Physically-based Rendering". Computer Graphics Forum<br />
13 (3): 233. doi:10.1111/1467-8659.1330233.<br />
Screen Space Ambient Occlusion<br />
Screen Space Ambient Occlusion (SSAO) is a rendering<br />
technique for efficiently approximating the well-known computer<br />
graphics ambient occlusion effect in real time. It was developed by<br />
Vladimir Kajalin while working at Crytek, and was first used in a<br />
video game in the 2007 PC game Crysis, made by Crytek.<br />
Implementation<br />
The algorithm is implemented as a pixel shader, analyzing the scene depth buffer, which is stored in a texture. For<br />
every pixel on the screen, the pixel shader samples the depth values around the current pixel and tries to compute<br />
the amount of occlusion from each of the sampled points. In its simplest implementation, the occlusion factor<br />
depends only on the depth difference between the sampled point and the current point.<br />
[Image: SSAO component of a typical game scene.]<br />
Without additional smart solutions, such a brute force method would require about 200 texture reads per pixel for<br />
good visual quality. This is not acceptable for real-time rendering on modern <strong>graphics</strong> hardware. In order to get high<br />
quality results with far fewer reads, sampling is performed using a randomly rotated kernel. The kernel orientation is<br />
repeated every N screen pixels in order to have only high-frequency noise in the final picture. This high-frequency<br />
noise is then largely removed by an N×N post-process blurring step that takes depth discontinuities into account<br />
(using methods such as comparing adjacent normals and depths). Such a solution allows a reduction in the number of<br />
depth samples per pixel to about 16 or less while maintaining a high quality result, and allows the use of SSAO in<br />
soft real-time applications like computer games.<br />
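The simplest occlusion estimate described above, in which occlusion depends only on depth differences (no rotated kernel, range check, or blur), can be sketched on the CPU as follows (a toy stand-in for the pixel shader; `strength` is an illustrative tuning parameter):

```python
def ssao_simple(depth, x, y, radius=1, strength=1.0):
    """Toy screen-space occlusion for one pixel of a depth buffer
    (a list of rows). Occlusion accumulates wherever a sampled
    neighbour is closer to the camera (smaller depth) than the
    current pixel."""
    h, w = len(depth), len(depth[0])
    center = depth[y][x]
    occlusion, samples = 0.0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            sx, sy = x + dx, y + dy
            if 0 <= sx < w and 0 <= sy < h:
                diff = center - depth[sy][sx]  # > 0: neighbour occludes us
                occlusion += max(0.0, min(1.0, diff * strength))
                samples += 1
    return occlusion / samples if samples else 0.0
```

A pixel at the bottom of a depth "step" receives a nonzero occlusion value (and would be darkened), while a pixel on a flat area receives zero, illustrating why the effect is purely local and view-dependent.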
Compared to other ambient occlusion solutions, SSAO has the following advantages:<br />
• Independent from scene complexity.<br />
• No data pre-processing needed, no loading time and no memory allocations in system memory.<br />
• Works with dynamic scenes.<br />
• Works in the same consistent way for every pixel on the screen.<br />
• No CPU usage – it can be executed completely on the GPU.
• May be easily integrated into any modern <strong>graphics</strong> pipeline.<br />
Of course, it has its disadvantages, as well:<br />
• Rather local and in many cases view-dependent, as it is dependent on adjacent texel depths which may be<br />
generated by any geometry whatsoever.<br />
• Hard to correctly smooth/blur out the noise without interfering with depth discontinuities, such as object edges<br />
(the occlusion should not "bleed" onto objects).<br />
Games using SSAO<br />
• Crysis (2007) (Windows) [1]<br />
• Gears of War 2 (2008) (Xbox 360) [2]<br />
• S.T.A.L.K.E.R.: Clear Sky (2008) (Windows) [3]<br />
• Crysis Warhead (2008) (Windows) [1]<br />
• Bionic Commando (2009) (Windows and Xbox 360 versions) [4]<br />
• Burnout: Paradise the Ultimate Box (2009) (Windows) [5]<br />
• Empire: Total War (2009) (Windows) [6]<br />
• Risen (2009) (Windows and Xbox 360 versions) [7]<br />
• BattleForge (2009) (Windows) [8]<br />
• Borderlands (2009) (Windows and Xbox 360 versions) [9]<br />
• F.E.A.R. 2: Project Origin (2009) (Windows) [10]<br />
• Fight Night Champion (2011) (PlayStation 3 and Xbox 360) [11]<br />
• Batman: Arkham Asylum (2009) (Windows and Xbox 360 versions) [12]<br />
• Uncharted 2: Among Thieves (2009) (PlayStation 3) [13]<br />
• Shattered Horizon (2009) (Windows) [14]<br />
• NecroVision (2009) (Windows)<br />
• S.T.A.L.K.E.R.: Call of Pripyat (2009) (Windows) [15]<br />
• Red Faction: Guerrilla (2009) (Windows) [16]<br />
• Napoleon: Total War (2010) (Windows) [17]<br />
• Star Trek Online (2010) (Windows)<br />
• Just Cause 2 (2010) (Windows) [18]<br />
• Metro 2033 (2010) (Windows and Xbox 360 versions) [19]<br />
• Dead to Rights: Retribution (2010) (PlayStation 3 and Xbox 360)<br />
• Alan Wake (2010) (Xbox 360) [20]<br />
• Toy Story 3: The Video Game (2010) (PlayStation 3 and Xbox 360) [21]<br />
• Eve Online (Nvidia GPUs only) [22]<br />
• Halo: Reach (2010) (Xbox 360) [23] [24]<br />
• Transformers: War for Cybertron (2010) (PlayStation 3 and Xbox 360) [25]<br />
• StarCraft II: Wings of Liberty (2010) (Windows) (after Patch 1.2.0 released 1/12/2011) [26]<br />
• City of Heroes (2010) (Windows) [27]<br />
• ArmA 2: Operation Arrowhead (2010) (Windows) [28]<br />
• The Settlers 7: Paths to a Kingdom (2010) (Windows) [29]<br />
• Mafia II (2010) (Windows and Xbox 360) [30] [31]<br />
• Amnesia: The Dark Descent (2010) (Windows) [32]<br />
• Arcania: A Gothic Tale (2010) (Windows) [33]<br />
• Assassin's Creed: Brotherhood (2010) (PlayStation 3 and Xbox 360) [34]<br />
• Battlefield: Bad Company 2 (2010) (Windows) (uses HBAO - improved form of SSAO) [35]<br />
• James Bond 007: Blood Stone (2010) (PlayStation 3, Xbox 360 and Windows) [36]
• Dragon Age II (2011) (Windows) [37]<br />
• Crysis 2 (2011) (Windows, Xbox 360 and PlayStation 3) [38]<br />
• IL-2 Sturmovik: Cliffs of Dover (2011) (Windows) [39]<br />
• The Witcher 2: Assassins of Kings (2011) (Windows) [40]<br />
• L.A. Noire (2011) (PlayStation 3 and Xbox 360) [41]<br />
• Infamous 2 (2011) (PlayStation 3) [42]<br />
• Deus Ex: Human Revolution (2011) (PlayStation 3, Xbox 360 and Windows) [43]<br />
• Dead Island (2011) (PlayStation 3, Xbox 360 and Windows) [44]<br />
• Battlefield 3 (2011) (PlayStation 3, Xbox 360 and Windows) [45]<br />
References<br />
[1] "CryENGINE® 2" (http://crytek.com/cryengine/cryengine2/overview). Crytek. Retrieved 2011-08-26.<br />
[2] "Gears of War Series | Showcase | Unreal Technology" (http://www.unrealengine.com/showcase/gears_of_war_series). Unrealengine.com. 2008-11-07. Retrieved 2011-08-26.<br />
[3] "STALKER: Clear Sky Tweak Guide" (http://www.tweakguides.com/ClearSky_6.html). TweakGuides.com. Retrieved 2011-08-26.<br />
[4] "Head2Head: Bionic Commando" (http://www.lensoftruth.com/head2head-bionic-commando/). Lens of Truth. 2009-05-29. Retrieved 2011-08-26.<br />
[5] "Benchmarks: SSAO Enabled : Burnout Paradise: The Ultimate Box, Performance Analysis" (http://www.tomshardware.com/reviews/burnout-paradise-performance,2289-7.html). Tomshardware.com. Retrieved 2011-08-26.<br />
[6] "Empire: Total War – No anti-aliasing in combination with SSAO on Radeon graphics cards – Empire Total War, anti-aliasing, SSAO, Radeon, Geforce" (http://www.pcgameshardware.com/aid,678577/Empire-Total-War-No-anti-aliasing-in-combination-with-SSAO-on-Radeon-graphics-cards/Practice/) (in German). PC Games Hardware. 2009-03-11. Retrieved 2011-08-26.<br />
[7] "Risen Tuning Tips: Activate Anti Aliasing, improve graphics and start the game faster – Risen, Tipps, Anti Aliasing, Graphics Enhancements" (http://www.pcgameshardware.com/aid,696728/Risen-Tuning-Tips-Activate-Anti-Aliasing-improve-graphics-and-start-the-game-faster/Practice/) (in German). PC Games Hardware. 2009-10-06. Retrieved 2011-08-26.<br />
[8] "AMD's Radeon HD 5850: The Other Shoe Drops" (http://www.anandtech.com/show/2848/2). AnandTech. Retrieved 2011-08-26.<br />
[9] "Head2Head: Borderlands Analysis" (http://www.lensoftruth.com/head2head-borderlands-analysis/). Lens of Truth. 2009-10-29. Retrieved 2011-08-26.<br />
[10] http://www.pcgameshardware.com/aid,675766/Fear-2-Project-Origin-GPU-and-CPU-benchmarks-plus-graphics-settings-compared/Reviews/<br />
[11] http://imagequalitymatters.blogspot.com/2011/03/tech-analysis-fight-night-champion-360_12.html<br />
[12] "Head2Head – Batman: Arkham Asylum" (http://www.lensoftruth.com/head2head-batman-arkham-asylum/). Lens of Truth. 2009-08-24. Retrieved 2011-08-26.<br />
[13] "Among Friends: How Naughty Dog Built Uncharted 2 – Page 3 | DigitalFoundry" (http://www.eurogamer.net/articles/among-friends-how-naughty-dog-built-uncharted-2?page=3). Eurogamer.net. 2010-03-20. Retrieved 2011-08-26.<br />
[14] http://mgnews.ru/read-news/otvety-glavnogo-dizajnera-shattered-horizon-na-vashi-voprosy<br />
[15] http://www.pcgameshardware.com/aid,699424/Stalker-Call-of-Pripyat-DirectX-11-vs-DirectX-10/Practice/<br />
[16] http://www.eurogamer.net/articles/digitalfoundry-red-faction-guerilla-pc-tech-comparison?page=2<br />
[17] http://www.pcgameshardware.com/aid,705532/Napoleon-Total-War-CPU-benchmarks-and-tuning-tips/Practice/<br />
[18] http://ve3d.ign.com/articles/features/53469/Just-Cause-2-PC-Interview<br />
[19] http://www.eurogamer.net/articles/metro-2033-4a-engine-impresses-blog-entry<br />
[20] "Alan Wake FAQ – Alan Wake Community Forums" (http://forum.alanwake.com/showthread.php?t=1216). Forum.alanwake.com. Retrieved 2011-08-26.<br />
[21] "Toy Story 3: The Video Game – Wikipedia, the free encyclopedia" (http://en.wikipedia.org/wiki/Toy_Story_3:_The_Video_Game). En.wikipedia.org. Retrieved 2011-08-26.<br />
[22] CCP. "EVE Insider | Patchnotes" (http://www.eveonline.com/updates/patchnotes.asp?patchlogID=230). EVE Online. Retrieved 2011-08-26.<br />
[23] "Bungie Weekly Update: 04.16.10 : 4/16/2010 3:38 PM PDT" (http://www.bungie.net/News/content.aspx?type=topnews&link=BWU_041610). Bungie.net. Retrieved 2011-08-26.<br />
[24] "Halo: Reach beta footage analysis – Page 1 | DigitalFoundry" (http://www.eurogamer.net/articles/digitalfoundry-haloreach-beta-analysis-blog-entry). Eurogamer.net. 2010-04-25. Retrieved 2011-08-26.<br />
[25] http://www.eurogamer.net/articles/digitalfoundry-xbox360-vs-ps3-round-27-face-off?page=2<br />
[26] Blizzard Entertainment (2011-08-19). "Patch 1.2.0 Now Live – StarCraft II" (http://us.battle.net/sc2/en/blog/2053470). Us.battle.net. Retrieved 2011-08-26.
Screen Space Ambient Occlusion 176<br />
[27] "Issue 17: Dark Mirror Patch Notes | City of Heroes® : The Worlds Most Popular Superpowered MMO" (http:/ / www. cityofheroes. com/<br />
news/ patch_notes/ issue_17_release_notes. html). Cityofheroes.com. . Retrieved 2011-08-26.<br />
[28] "Ask Bohemia (about Operation Arrowhead... or anything else you want to ask)! – Bohemia Interactive Community" (http:/ / community.<br />
bistudio. com/ wiki?title=Ask_Bohemia_(about_Operation_Arrowhead. . . _or_anything_else_you_want_to_ask)!&<br />
rcid=57637#Improvements_In_The_Original_ARMA_2_Game). Community.bistudio.com. 2010-05-06. . Retrieved 2011-08-26.<br />
[29] now to post a comment! (2010-03-21). "The Settlers 7: Paths to a Kingdom – Engine" (http:/ / www. youtube. com/<br />
watch?v=uDFqgLSAPzU). YouTube. . Retrieved 2011-08-26.<br />
[30] http:/ / imagequalitymatters. blogspot. com/ 2010/ 08/ tech-analysis-mafia-ii-demo-ps3-vs-360. html<br />
[31] http:/ / www. eurogamer. net/ articles/ digitalfoundry-mafia-ii-demo-showdown<br />
[32] http:/ / geekmontage. com/ texts/ game-fixes-amnesia-the-dark-descent-crashing-lag-black-screen-freezing-sound-fixes/<br />
[33] http:/ / www. bit-tech. net/ gaming/ pc/ 2010/ 10/ 25/ arcania-gothic-4-review/ 1<br />
[34] "Face-Off: Assassin's Creed: Brotherhood – Page 2 | DigitalFoundry" (http:/ / www. eurogamer. net/ articles/<br />
digitalfoundry-assassins-creed-brotherhood-face-off?page=2). Eurogamer.net. 2010-11-18. . Retrieved 2011-08-26.<br />
[35] http:/ / www. guru3d. com/ news/ battlefield-bad-company-2-directx-11-details-/<br />
[36] http:/ / www. lensoftruth. com/ head2head-blood-stone-007-hd-screenshot-comparison/<br />
[37] http:/ / www. techspot. com/ review/ 374-dragon-age-2-performance-test/<br />
[38] http:/ / crytek. com/ sites/ default/ files/ Crysis%202%20Key%20Rendering%20Features. pdf<br />
[39] http:/ / store. steampowered. com/ news/ 5321/ ?l=russian<br />
[40] http:/ / www. pcgamer. com/ 2011/ 05/ 25/ the-witcher-2-tweaks-guide/<br />
[41] "Face-Off: L.A. Noire – Page 1 | DigitalFoundry" (http:/ / www. eurogamer. net/ articles/ digitalfoundry-la-noire-face-off). Eurogamer.net.<br />
2011-05-23. . Retrieved 2011-08-26.<br />
[42] http:/ / imagequalitymatters. blogspot. com/ 2010/ 07/ tech-analsis-infamous-2-early-screens. html<br />
[43] http:/ / www. eurogamer. net/ articles/ deus-ex-human-revolution-face-off<br />
[44] http:/ / www. eurogamer. net/ articles/ digitalfoundry-dead-island-face-off<br />
[45] http:/ / publications. dice. se/ attachments/ BF3_NFS_WhiteBarreBrisebois_Siggraph2011. pdf<br />
External links
• Finding Next Gen – CryEngine 2 (http://delivery.acm.org/10.1145/1290000/1281671/p97-mittring.pdf?key1=1281671&key2=9942678811&coll=ACM&dl=ACM&CFID=15151515&CFTOKEN=6184618)
• Video showing SSAO in action (http://video.google.com/videoplay?docid=-2592720445119800709&hl=en)
• Image Enhancement by Unsharp Masking the Depth Buffer (http://graphics.uni-konstanz.de/publikationen/2006/unsharp_masking/Luft et al. -- Image Enhancement by Unsharp Masking the Depth Buffer.pdf)
• Hardware Accelerated Ambient Occlusion Techniques on GPUs (http://perumaal.googlepages.com/)
• Overview on Screen Space Ambient Occlusion Techniques (http://meshula.net/wordpress/?p=145)
• Real-Time Depth Buffer Based Ambient Occlusion (http://developer.download.nvidia.com/presentations/2008/GDC/GDC08_Ambient_Occlusion.pdf)
• Source code of the SSAO shader used in Crysis (http://www.pastebin.ca/953523)
• Approximating Dynamic Global Illumination in Image Space (http://www.mpi-inf.mpg.de/~ritschel/Papers/SSDO.pdf)
• Accumulative Screen Space Ambient Occlusion (http://www.gamedev.net/community/forums/topic.asp?topic_id=527170)
• NVIDIA has integrated SSAO into drivers (http://www.nzone.com/object/nzone_ambientocclusion_home.html)
• Several methods of SSAO are described in the ShaderX7 book (http://www.shaderx7.com/TOC.html)
• SSAO Shader (in Russian) (http://lwengine.net.ru/article/DirectX_10/ssao_directx10)
Self-shadowing
Self-shadowing is a computer graphics lighting effect used in 3D rendering applications such as computer animation and video games. Self-shadowing allows non-static objects in the environment, such as game characters and interactive objects (buckets, chairs, etc.), to cast shadows on themselves and each other. For example, without self-shadowing, if a character puts his or her right arm over the left, the right arm will not cast a shadow over the left arm. If that same character places a hand over a ball, that hand will not cast a shadow over the ball.
Shadow mapping
Shadow mapping or projective shadowing is a process by which shadows are added to 3D computer graphics. The concept was introduced by Lance Williams in 1978, in a paper entitled "Casting curved shadows on curved surfaces". Since then, it has been used both in pre-rendered and real-time scenes in many console and PC games.
Shadows are created by testing whether a pixel is visible from the light source, by comparing the pixel to a z-buffer or depth image of the light source's view, stored in the form of a texture.
Principle of a shadow and a shadow map
If you looked out from a source of light, all of the objects you could see would appear in light. Anything behind those objects, however, would be in shadow. This is the basic principle used to create a shadow map. The light's view is rendered, storing the depth of every surface it sees (the shadow map). Next, the regular scene is rendered, comparing the depth of every point drawn (as if it were being seen by the light, rather than the eye) to this depth map.
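This principle can be sketched in a few lines of Python. The sketch is illustrative only: a tiny grid stands in for the depth texture, an orthographic light looks down the z-axis, and the names (`build_shadow_map`, `in_shadow`, `RES`) are hypothetical.

```python
# Minimal sketch of the shadow-map principle: store, per texel, the depth
# of the surface nearest the light, then test scene points against it.

RES = 4  # shadow map resolution (RES x RES texels)

def texel(x, y):
    """Map light-space x, y in [0, 1) to integer texel coordinates."""
    return min(int(x * RES), RES - 1), min(int(y * RES), RES - 1)

def build_shadow_map(points):
    """First pass: keep the smallest (nearest-to-light) depth per texel."""
    depth_map = [[float("inf")] * RES for _ in range(RES)]
    for (x, y, z) in points:
        i, j = texel(x, y)
        depth_map[j][i] = min(depth_map[j][i], z)
    return depth_map

def in_shadow(depth_map, point, bias=1e-3):
    """Second pass: a point is shadowed if something nearer covers its texel."""
    x, y, z = point
    i, j = texel(x, y)
    return z > depth_map[j][i] + bias

# An occluder at depth 1.0 above a floor at depth 5.0, all in light space.
occluder = [(0.1, 0.1, 1.0)]
floor = [(0.1, 0.1, 5.0), (0.9, 0.9, 5.0)]
shadow_map = build_shadow_map(occluder + floor)

print(in_shadow(shadow_map, (0.1, 0.1, 5.0)))  # True: floor under the occluder
print(in_shadow(shadow_map, (0.9, 0.9, 5.0)))  # False: uncovered floor
```

Note the small `bias` term: without it, a surface compared against its own stored depth can fail the test due to precision limits, a problem discussed further below.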
(Figure: Doom 3's unified lighting and shadowing allows for self-shadowing via shadow volumes.)
(Figure: Scene with shadow mapping.)
(Figure: Scene with no shadows.)
This technique is less accurate than shadow volumes, but a shadow map can be a faster alternative, depending on how much fill time each technique requires in a particular application, and may therefore be more suitable for real-time applications. In addition, shadow maps do not require the use of an additional stencil buffer, and can be modified to produce shadows with a soft edge. Unlike shadow volumes, however, the accuracy of a shadow map is limited by its resolution.
Algorithm overview
Rendering a shadowed scene involves two major drawing steps. The first produces the shadow map itself, and the second applies it to the scene. Depending on the implementation (and the number of lights), this may require two or more drawing passes.
Creating the shadow map<br />
The first step renders the scene from the light's point of view. For a point light<br />
source, the view should be a perspective projection as wide as its desired<br />
angle of effect (it will be a sort of square spotlight). For directional light (e.g.,<br />
that from the Sun), an orthographic projection should be used.<br />
From this rendering, the depth buffer is extracted and saved. Because only the<br />
depth information is relevant, it is usual to avoid updating the color buffers<br />
and disable all lighting and texture calculations for this rendering, in order to<br />
save drawing time. This depth map is often stored as a texture in <strong>graphics</strong><br />
memory.<br />
This depth map must be updated any time there are changes to either the light<br />
or the objects in the scene, but can be reused in other situations, such as those<br />
where only the viewing camera moves. (If there are multiple lights, a separate<br />
depth map must be used for each light.)<br />
In many implementations it is practical to render only a subset of the objects<br />
in the scene to the shadow map in order to save some of the time it takes to<br />
redraw the map. Also, a depth offset which shifts the objects away from the<br />
light may be applied to the shadow map rendering in an attempt to resolve<br />
stitching problems where the depth map value is close to the depth of a<br />
surface being drawn (i.e., the shadow casting surface) in the next step.<br />
Scene rendered from the light view.<br />
Scene from the light view, depth map.<br />
Alternatively, culling front faces and only rendering the back of objects to the shadow map is sometimes used for a<br />
similar result.<br />
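The stitching problem and the depth-offset fix can be illustrated numerically. In this hypothetical sketch, a coarse quantization stands in for the limited precision of the stored depth, and the offset value 0.15 is arbitrary; real renderers use hardware polygon-offset or bias parameters instead.

```python
# Why a depth offset helps: the map stores a quantized depth, so a surface
# tested against its own stored value may appear to be behind it and
# falsely shadow itself ("stitching" / surface acne). Shifting casters
# away from the light during the map pass hides the discrepancy.

def quantize(z, step=0.1):
    """Simulate the limited precision of a stored depth value."""
    return round(z / step) * step

def shadowed(stored_depth, surface_depth):
    """Hard shadow test: shadowed if the surface is behind the stored depth."""
    return surface_depth > stored_depth

surface_z = 4.24
print(shadowed(quantize(surface_z), surface_z))         # True: false self-shadow
print(shadowed(quantize(surface_z + 0.15), surface_z))  # False: offset fixes it
```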
Shading the scene
The second step is to draw the scene from the usual camera viewpoint, applying the shadow map. This process has three major components: the first is to find the coordinates of the object as seen from the light; the second is the test that compares those coordinates against the depth map; and the third is that, once the test is done, the object must be drawn either in shadow or in light.
Light space coordinates
In order to test a point against the depth map, its position in the scene coordinates must be transformed into the equivalent position as seen by the light. This is accomplished by a matrix multiplication. The location of the object on the screen is determined by the usual coordinate transformation, but a second set of coordinates must be generated to locate the object in light space.
(Figure: Visualization of the depth map projected onto the scene.)
The matrix used to transform the world coordinates into the light's viewing coordinates is the same as the one used to render the shadow map in the first step (under OpenGL this is the product of the modelview and projection matrices). This will produce a set of homogeneous coordinates that need a perspective division (see 3D projection) to become normalized device coordinates, in which each component (x, y, or z) falls between −1 and 1 (if it is visible from the light view). Many implementations (such as OpenGL and Direct3D) require an additional scale and bias matrix multiplication to map those −1 to 1 values to 0 to 1, which are more usual coordinates for depth map (texture map) lookup. This scaling can be done before the perspective division, and is easily folded into the previous transformation calculation by multiplying that matrix with the following:

    [ 0.5   0     0    0.5 ]
    [ 0     0.5   0    0.5 ]
    [ 0     0     0.5  0.5 ]
    [ 0     0     0    1   ]
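The transform chain (light matrix multiply, perspective division, scale-and-bias) can be sketched in plain Python. This is illustrative only: an identity matrix stands in for the light's modelview-projection product, and the helper names (`mat_mul`, `transform`, `BIAS`) are hypothetical.

```python
# Fold the standard 0.5-scale, 0.5-bias matrix into the light transform
# with one extra multiply, mapping NDC in [-1, 1] to texture coordinates
# in [0, 1] after the perspective division.

def mat_mul(a, b):
    """4x4 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, v):
    """Apply a 4x4 matrix to a homogeneous vector, then divide by w."""
    out = [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]
    return [c / out[3] for c in out]

BIAS = [[0.5, 0.0, 0.0, 0.5],
        [0.0, 0.5, 0.0, 0.5],
        [0.0, 0.0, 0.5, 0.5],
        [0.0, 0.0, 0.0, 1.0]]

IDENTITY = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

# With an identity "light matrix", the NDC corners map to the texture corners.
light_to_tex = mat_mul(BIAS, IDENTITY)
print(transform(light_to_tex, [-1.0, -1.0, -1.0, 1.0]))  # [0.0, 0.0, 0.0, 1.0]
print(transform(light_to_tex, [1.0, 1.0, 1.0, 1.0]))     # [1.0, 1.0, 1.0, 1.0]
```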
If done with a shader, or another graphics hardware extension, this transformation is usually applied at the vertex level, and the generated value is interpolated between vertices and passed to the fragment level.
Depth map test
Once the light-space coordinates are found, the x and y values usually correspond to a location in the depth map texture, and the z value corresponds to its associated depth, which can now be tested against the depth map.
If the z value is greater than the value stored in the depth map at the appropriate (x, y) location, the object is considered to be behind an occluding object and should be marked as a failure, to be drawn in shadow by the drawing process. Otherwise it should be drawn lit.
If the (x, y) location falls outside the depth map, the programmer must decide whether the surface should be lit or shadowed by default (usually lit).
(Figure: Depth map test failures.)
In a shader implementation, this test would be done at the fragment level. Also, care needs to be taken when selecting the type of texture map storage to be used by the hardware: if interpolation cannot be done, the shadow will appear to have a sharp, jagged edge (an effect that can be reduced with greater shadow map resolution).
It is possible to modify the depth map test to produce shadows with a soft edge by using a range of values (based on the proximity to the edge of the shadow) rather than a simple pass or fail.
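The hard test and the softened variant described above can be contrasted in a small sketch. The transition-band width here is a hypothetical parameter; real soft-shadow filters (such as percentage closer filtering, covered later) work differently but produce a similar fractional result.

```python
# Hard pass/fail depth test vs. a softened test that returns a fractional
# light value (1.0 = fully lit, 0.0 = fully shadowed) across a depth band.

def hard_test(stored, z):
    """Binary test: behind the stored depth means shadowed."""
    return 0.0 if z > stored else 1.0

def soft_test(stored, z, band=1.0):
    """Fade from lit to shadowed over `band` units of depth."""
    t = (z - stored) / band
    return max(0.0, min(1.0, 1.0 - t))

stored = 2.0
print(hard_test(stored, 2.5), soft_test(stored, 2.5))  # 0.0 0.5  (half shadowed)
print(hard_test(stored, 1.5), soft_test(stored, 1.5))  # 1.0 1.0  (fully lit)
```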
The shadow mapping technique can also be modified to draw a texture onto the lit regions, simulating the effect of a projector. The picture captioned "visualization of the depth map projected onto the scene" above is an example of such a process.
Drawing the scene
Drawing the scene with shadows can be done in several different ways. If programmable shaders are available, the depth map test may be performed by a fragment shader which simply draws the object in shadow or lit depending on the result, drawing the scene in a single pass (after an initial earlier pass to generate the shadow map).
(Figure: Final scene, rendered with ambient shadows.)
If shaders are not available, the depth map test must usually be implemented by some hardware extension (such as GL_ARB_shadow [1]), which usually does not allow a choice between two lighting models (lit and shadowed) and necessitates more rendering passes:
1. Render the entire scene in shadow. For the most common lighting models (see Phong reflection model) this should technically be done using only the ambient component of the light, but this is usually adjusted to also include a dim diffuse light to prevent curved surfaces from appearing flat in shadow.
2. Enable the depth map test, and render the scene lit. Areas where the depth map test fails will not be overwritten and remain shadowed.
3. An additional pass may be used for each additional light, using additive blending to combine their effect with the lights already drawn. (Each of these passes requires an additional previous pass to generate the associated shadow map.)
The example pictures in this article used the OpenGL extension GL_ARB_shadow_ambient [2] to accomplish the shadow map process in two passes.
Shadow map real-time implementations
One of the key disadvantages of real-time shadow mapping is that the size and depth of the shadow map determine the quality of the final shadows. This is usually visible as aliasing or shadow continuity glitches. A simple way to overcome this limitation is to increase the shadow map size, but due to memory, computational or hardware constraints, this is not always possible. Commonly used techniques for real-time shadow mapping have been developed to circumvent this limitation. These include Cascaded Shadow Maps, [3] Trapezoidal Shadow Maps, [4] Light Space Perspective Shadow Maps, [5] and Parallel-Split Shadow Maps. [6]
Also notable is that the generated shadows, even if aliasing-free, have hard edges, which is not always desirable. In order to emulate real-world soft shadows, several solutions have been developed: doing several lookups on the shadow map, generating geometry meant to emulate the soft edge, or creating non-standard depth shadow maps. Notable examples of these are Percentage Closer Filtering, [7] Smoothies, [8] and Variance Shadow Maps. [9]
Shadow mapping techniques
Simple
• SSM "Simple"
Splitting
• PSSM "Parallel Split" http://http.developer.nvidia.com/GPUGems3/gpugems3_ch10.html [10]
• CSM "Cascaded" http://developer.download.nvidia.com/SDK/10.5/opengl/src/cascaded_shadow_maps/doc/cascaded_shadow_maps.pdf [11]
Warping
• LiSPSM "Light Space Perspective" http://www.cg.tuwien.ac.at/~scherzer/files/papers/LispSM_survey.pdf [12]
• TSM "Trapezoid" http://www.comp.nus.edu.sg/~tants/tsm.html [13]
• PSM "Perspective" http://www-sop.inria.fr/reves/Marc.Stamminger/psm/ [14]
Smoothing
• PCF "Percentage Closer Filtering" http://http.developer.nvidia.com/GPUGems/gpugems_ch11.html [15]
Filtering
• ESM "Exponential" http://www.thomasannen.com/pub/gi2008esm.pdf [16]
• CSM "Convolution" http://research.edm.uhasselt.be/~tmertens/slides/csm.ppt [17]
• VSM "Variance" http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.104.2569&rep=rep1&type=pdf [18]
• SAVSM "Summed Area Variance" http://http.developer.nvidia.com/GPUGems3/gpugems3_ch08.html [19]
Soft Shadows
• PCSS "Percentage Closer" http://developer.download.nvidia.com/shaderlibrary/docs/shadow_PCSS.pdf [20]
Assorted
• ASM "Adaptive" http://www.cs.cornell.edu/~kb/publications/ASM.pdf [21]
• AVSM "Adaptive Volumetric" http://visual-computing.intel-research.net/art/publications/avsm/ [22]
• CSSM "Camera Space" http://free-zg.t-com.hr/cssm/ [23]
• DASM "Deep Adaptive"
• DPSM "Dual Paraboloid" http://sites.google.com/site/osmanbrian2/dpsm.pdf [24]
• DSM "Deep" http://graphics.pixar.com/library/DeepShadows/paper.pdf [25]
• FSM "Forward" http://www.cs.unc.edu/~zhangh/technotes/shadow/shadow.ps [26]
• LPSM "Logarithmic" http://gamma.cs.unc.edu/LOGSM/ [27]
• MDSM "Multiple Depth" http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.59.3376&rep=rep1&type=pdf [28]
• RMSM "Resolution Matched" http://www.idav.ucdavis.edu/func/return_pdf?pub_id=919 [29]
• SDSM "Sample Distribution" http://visual-computing.intel-research.net/art/publications/sdsm/ [30]
• SPPSM "Separating Plane Perspective" http://jgt.akpeters.com/papers/Mikkelsen07/sep_math.pdf [31]
• SSSM "Shadow Silhouette" http://graphics.stanford.edu/papers/silmap/silmap.pdf [32]
Further reading
• Smooth Penumbra Transitions with Shadow Maps [33], Willem H. de Boer
• Forward shadow mapping [34] does the shadow test in eye-space rather than light-space to keep texture access more sequential.
• Shadow mapping techniques [35]: an overview of different shadow mapping techniques
References
[1] http://www.opengl.org/registry/specs/ARB/shadow.txt
[2] http://www.opengl.org/registry/specs/ARB/shadow_ambient.txt
[3] Cascaded Shadow Maps (http://developer.download.nvidia.com/SDK/10.5/opengl/src/cascaded_shadow_maps/doc/cascaded_shadow_maps.pdf), NVidia. Retrieved 2008-02-14.
[4] Tobias Martin, Tiow-Seng Tan. Anti-aliasing and Continuity with Trapezoidal Shadow Maps (http://www.comp.nus.edu.sg/~tants/tsm.html). Retrieved 2008-02-14.
[5] Michael Wimmer, Daniel Scherzer, Werner Purgathofer. Light Space Perspective Shadow Maps (http://www.cg.tuwien.ac.at/research/vr/lispsm/). Retrieved 2008-02-14.
[6] Fan Zhang, Hanqiu Sun, Oskari Nyman. Parallel-Split Shadow Maps on Programmable GPUs (http://appsrv.cse.cuhk.edu.hk/~fzhang/pssm_project/). Retrieved 2008-02-14.
[7] "Shadow Map Antialiasing" (http://http.developer.nvidia.com/GPUGems/gpugems_ch11.html). NVidia. Retrieved 2008-02-14.
[8] Eric Chan, Fredo Durand, Marco Corbetta. Rendering Fake Soft Shadows with Smoothies (http://people.csail.mit.edu/ericchan/papers/smoothie/). Retrieved 2008-02-14.
[9] William Donnelly, Andrew Lauritzen. "Variance Shadow Maps" (http://www.punkuser.net/vsm/). Retrieved 2008-02-14.
[10] http://http.developer.nvidia.com/GPUGems3/gpugems3_ch10.html
[11] http://developer.download.nvidia.com/SDK/10.5/opengl/src/cascaded_shadow_maps/doc/cascaded_shadow_maps.pdf
[12] http://www.cg.tuwien.ac.at/~scherzer/files/papers/LispSM_survey.pdf
[13] http://www.comp.nus.edu.sg/~tants/tsm.html
[14] http://www-sop.inria.fr/reves/Marc.Stamminger/psm/
[15] http://http.developer.nvidia.com/GPUGems/gpugems_ch11.html
[16] http://www.thomasannen.com/pub/gi2008esm.pdf
[17] http://research.edm.uhasselt.be/~tmertens/slides/csm.ppt
[18] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.104.2569&rep=rep1&type=pdf
[19] http://http.developer.nvidia.com/GPUGems3/gpugems3_ch08.html
[20] http://developer.download.nvidia.com/shaderlibrary/docs/shadow_PCSS.pdf
[21] http://www.cs.cornell.edu/~kb/publications/ASM.pdf
[22] http://visual-computing.intel-research.net/art/publications/avsm/
[23] http://free-zg.t-com.hr/cssm/
[24] http://sites.google.com/site/osmanbrian2/dpsm.pdf
[25] http://graphics.pixar.com/library/DeepShadows/paper.pdf
[26] http://www.cs.unc.edu/~zhangh/technotes/shadow/shadow.ps
[27] http://gamma.cs.unc.edu/LOGSM/
[28] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.59.3376&rep=rep1&type=pdf
[29] http://www.idav.ucdavis.edu/func/return_pdf?pub_id=919
[30] http://visual-computing.intel-research.net/art/publications/sdsm/
[31] http://jgt.akpeters.com/papers/Mikkelsen07/sep_math.pdf
[32] http://graphics.stanford.edu/papers/silmap/silmap.pdf
[33] http://www.whdeboer.com/papers/smooth_penumbra_trans.pdf
[34] http://www.cs.unc.edu/~zhangh/shadow.html
[35] http://www.gamerendering.com/category/shadows/shadow-mapping/
External links
• Hardware Shadow Mapping (http://developer.nvidia.com/attach/8456), nVidia
• Shadow Mapping with Today's OpenGL Hardware (http://developer.nvidia.com/attach/6769), nVidia
• Riemer's step-by-step tutorial implementing Shadow Mapping with HLSL and DirectX (http://www.riemers.net/Tutorials/DirectX/Csharp3/index.php)
• NVIDIA Real-time Shadow Algorithms and Techniques (http://developer.nvidia.com/object/doc_shadows.html)
• Shadow Mapping implementation using Java and OpenGL (http://www.embege.com/shadowmapping)
Shadow volume
Shadow volume is a technique used in 3D computer graphics to add shadows to a rendered scene. Shadow volumes were first proposed by Frank Crow in 1977 [1] as the geometry describing the 3D shape of the region occluded from a light source. A shadow volume divides the virtual world in two: areas that are in shadow and areas that are not.
(Figure: Example of Carmack's stencil shadowing in Doom 3.)
The stencil buffer implementation of shadow volumes is generally considered among the most practical general-purpose real-time shadowing techniques for use on modern 3D graphics hardware. It has been popularised by the video game Doom 3, and a particular variation of the technique used in this game has become known as Carmack's Reverse (see depth fail below).
Shadow volumes have become a popular tool for real-time shadowing, alongside the more venerable shadow mapping. The main advantage of shadow volumes is that they are accurate to the pixel (though many implementations have a minor self-shadowing problem along the silhouette edge; see construction below), whereas the accuracy of a shadow map depends on the texture memory allotted to it as well as the angle at which the shadows are cast (at some angles, the accuracy of a shadow map unavoidably suffers). However, the shadow volume technique requires the creation of shadow geometry, which can be CPU-intensive (depending on the implementation). The advantage of shadow mapping is that it is often faster, because shadow volume polygons are often very large in terms of screen space and require a lot of fill time (especially for convex objects), whereas shadow maps do not have this limitation.
Construction
In order to construct a shadow volume, project a ray from the light source through each vertex in the shadow-casting object to some point (generally at infinity). These projections will together form a volume; any point inside that volume is in shadow, and everything outside is lit by the light.
For a polygonal model, the volume is usually formed by classifying each face in the model as either facing toward the light source or facing away from it. The set of all edges that connect a toward-face to an away-face forms the silhouette with respect to the light source. The edges forming the silhouette are extruded away from the light to construct the faces of the shadow volume. This volume must extend over the range of the entire visible scene; often the dimensions of the shadow volume are extended to infinity to accomplish this (see optimization below). To form a closed volume, the front and back ends of this extrusion must be covered. These coverings are called "caps". Depending on the method used for the shadow volume, the front end may be covered by the object itself, and the rear end may sometimes be omitted (see depth pass below).
There is also a problem with the shadow where the faces along the silhouette edge are relatively shallow. In this case, the shadow an object casts on itself will be sharp, revealing its polygonal facets, whereas the usual lighting model will have a gradual change in the lighting along the facet. This leaves a rough shadow artifact near the silhouette edge which is difficult to correct. Increasing the polygonal density will minimize the problem, but not eliminate it. If the front of the shadow volume is capped, the entire shadow volume may be offset slightly away from the light to remove any shadow self-intersections within the offset distance of the silhouette edge (this solution is more commonly used in shadow mapping).
The basic steps for forming a shadow volume are:
1. Find all silhouette edges (edges which separate front-facing faces from back-facing faces).
2. Extend all silhouette edges in the direction away from the light source.
3. Add a front cap and/or back cap to each surface to form a closed volume (this may not be necessary, depending on the implementation used).
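Step 1, the silhouette search, can be sketched as follows. This is an illustrative sketch only: faces are given precomputed normals and centroids, and the names (`faces_light`, `silhouette_edges`) are hypothetical; a production implementation would work directly on an indexed mesh with adjacency information.

```python
# Classify each face as toward- or away-facing with respect to the light,
# then collect edges shared by one toward-face and one away-face:
# these form the silhouette to be extruded into the shadow volume.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def faces_light(normal, centroid, light_pos):
    """True if the face normal points toward the light."""
    return dot(normal, sub(light_pos, centroid)) > 0

def silhouette_edges(faces, light_pos):
    """faces: list of (vertex_indices, normal, centroid) triples."""
    edge_sides = {}
    for verts, normal, centroid in faces:
        lit = faces_light(normal, centroid, light_pos)
        n = len(verts)
        for i in range(n):
            edge = tuple(sorted((verts[i], verts[(i + 1) % n])))
            edge_sides.setdefault(edge, set()).add(lit)
    # Silhouette edges border both a toward-face and an away-face.
    return [e for e, sides in edge_sides.items() if sides == {True, False}]

# Two triangles sharing edge (1, 2): one faces +z, the other faces -z.
faces = [
    ((0, 1, 2), (0, 0, 1), (0.0, 0.0, 0.0)),   # toward a light at +z
    ((1, 3, 2), (0, 0, -1), (1.0, 0.0, 0.0)),  # away from it
]
print(silhouette_edges(faces, light_pos=(0.0, 0.0, 10.0)))  # [(1, 2)]
```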
(Figure: Illustration of shadow volumes. The image at left shows a scene shadowed using shadow volumes; at right, the shadow volumes are shown in wireframe. Note how the shadows form a large conical area pointing away from the light source, the bright white point.)
Stencil buffer implementations
After Crow, Tim Heidmann showed in 1991 how to use the stencil buffer to render shadows with shadow volumes quickly enough for use in real-time applications. There are three common variations of this technique, depth pass, depth fail, and exclusive-or, but all of them use the same process:
1. Render the scene as if it were completely in shadow.
2. For each light source:
   1. Using the depth information from that scene, construct a mask in the stencil buffer that has holes only where the visible surface is not in shadow.
   2. Render the scene again as if it were completely lit, using the stencil buffer to mask the shadowed areas. Use additive blending to add this render to the scene.
The difference between these three methods occurs in the generation of the mask in the second step. Some involve two passes, and some only one; some require less precision in the stencil buffer.
Shadow volumes tend to cover large portions of the visible scene, and as a result consume valuable rasterization time (fill time) on 3D graphics hardware. This problem is compounded by the complexity of the shadow-casting objects, as each object can cast its own shadow volume of any potential size onscreen. See optimization below for a discussion of techniques used to combat the fill time problem.
Depth pass
Heidmann proposed that if the front surfaces and back surfaces of the shadows were rendered in separate passes, the number of front faces and back faces in front of an object can be counted using the stencil buffer. If an object's surface is in shadow, there will be more front-facing shadow surfaces between it and the eye than back-facing shadow surfaces. If their numbers are equal, however, the surface of the object is not in shadow. The generation of the stencil mask works as follows:
1. Disable writes to the depth and color buffers.
2. Use back-face culling.
3. Set the stencil operation to increment on depth pass (only count shadows in front of the object).
4. Render the shadow volumes (because of culling, only their front faces are rendered).
5. Use front-face culling.
6. Set the stencil operation to decrement on depth pass.
7. Render the shadow volumes (only their back faces are rendered).
After this is accomplished, all lit surfaces will correspond to a 0 in the stencil buffer, where the numbers of front and back surfaces of all shadow volumes between the eye and that surface are equal.
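The counting rule can be simulated along a single view ray. This is a hypothetical sketch of the bookkeeping only, not of the GPU passes: front-facing shadow-volume surfaces that pass the depth test increment the counter, back-facing ones decrement it, and a nonzero result means the scene surface is shadowed.

```python
# Depth-pass stencil counting along one view ray.

def depth_pass_stencil(volume_faces, surface_depth):
    """volume_faces: list of (depth, is_front) shadow-volume crossings
    along the ray, with depth measured from the eye."""
    stencil = 0
    for depth, is_front in volume_faces:
        if depth < surface_depth:          # depth test passes
            stencil += 1 if is_front else -1
    return stencil

# A surface at depth 10 inside a volume spanning depths 4..12:
print(depth_pass_stencil([(4.0, True), (12.0, False)], 10.0))  # 1 -> shadowed
# A surface at depth 3, in front of the whole volume:
print(depth_pass_stencil([(4.0, True), (12.0, False)], 3.0))   # 0 -> lit
```

Note how removing the front face at depth 4.0 (as happens when the eye is inside the volume and the face is clipped away) would leave only the decrement, which is exactly the inverted-shadow problem described below.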
This approach has problems when the eye itself is inside a shadow volume (for example, when the light source<br />
moves behind an object). From this point of view, the eye sees the back face of this shadow volume before anything<br />
else, and this adds a −1 bias to the entire stencil buffer, effectively inverting the shadows. This can be remedied by<br />
adding a "cap" surface to the front of the shadow volume facing the eye, such as at the front clipping plane. There is<br />
another situation where the eye may be in the shadow of a volume cast by an object behind the camera, which also<br />
has to be capped somehow to prevent a similar problem. In most common implementations, because properly<br />
capping for depth-pass can be difficult to accomplish, the depth-fail method (see below) may be licensed for these<br />
special situations. Alternatively one can give the stencil buffer a +1 bias for every shadow volume the camera is<br />
inside, though doing the detection can be slow.<br />
There is another potential problem if the stencil buffer does not have enough bits to accommodate the number of<br />
shadows visible between the eye and the object surface, because stencil arithmetic saturates. (If the buffer instead<br />
wrapped around on overflow, the problem would be insignificant.)<br />
Depth pass testing is also known as z-pass testing, as the depth buffer is often referred to as the z-buffer.<br />
Depth fail<br />
Around the year 2000, several people discovered that Heidmann's method can be made to work for all camera<br />
positions by reversing the depth. Instead of counting the shadow surfaces in front of the object's surface, the surfaces<br />
behind it can be counted just as easily, with the same end result. This solves the problem of the eye being in shadow,<br />
since shadow volumes between the eye and the object are not counted, but introduces the condition that the rear end<br />
of the shadow volume must be capped, or shadows will end up missing where the volume points backward to<br />
infinity.<br />
1. Disable writes to the depth and color buffers.<br />
2. Use front-face culling.<br />
3. Set the stencil operation to increment on depth fail (only count shadows behind the object).<br />
4. Render the shadow volumes.<br />
5. Use back-face culling.<br />
6. Set the stencil operation to decrement on depth fail.<br />
7. Render the shadow volumes.
The depth fail method has the same considerations regarding the stencil buffer's precision as the depth pass method.<br />
Also, similar to depth pass, it is sometimes referred to as the z-fail method.<br />
William Bilodeau and Michael Songy discovered this technique in October 1998, and presented it at Creativity, a<br />
Creative Labs developer's conference, in 1999. [2] Sim Dietrich presented the technique at a Creative Labs<br />
developer's forum in 1999. [3] Bilodeau and Songy filed a US patent application for the technique the same year,<br />
US 6384822 [4] , entitled "Method for rendering shadows using a shadow volume and a stencil buffer", which issued<br />
in 2002. John Carmack of id Software independently discovered the algorithm in 2000 during the development of<br />
Doom 3. [5] Because he popularized the technique among the larger public, it is often known as Carmack's<br />
Reverse.<br />
Exclusive-or<br />
Either of the above types may be approximated with an exclusive-or variation, which does not deal properly with<br />
intersecting shadow volumes, but saves one rendering pass (if not fill time), and only requires a 1-bit stencil buffer.<br />
The following steps are for the depth pass version:<br />
1. Disable writes to the depth and color buffers.<br />
2. Set the stencil operation to XOR on depth pass (flip on any shadow surface).<br />
3. Render the shadow volumes.<br />
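The parity behavior, including the failure case with intersecting volumes, can be sketched as follows (illustrative code):<br />

```python
def xor_stencil(faces_in_front):
    # Single pass: flip the 1-bit stencil for every shadow-volume face
    # in front of the visible surface, regardless of its facing.
    bit = 0
    for _ in faces_in_front:
        bit ^= 1
    return bit  # 1 = in shadow, 0 = lit

assert xor_stencil(['front']) == 1           # inside one volume: shadowed
assert xor_stencil(['front', 'back']) == 0   # behind a closed volume: lit
# Inside the overlap of two intersecting volumes the ray crosses two
# front faces, so the XOR variant wrongly reports the surface as lit.
assert xor_stencil(['front', 'front']) == 0
```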
Optimization<br />
• One method of speeding up the shadow volume geometry calculations is to utilize existing parts of the rendering<br />
pipeline to do some of the calculation. For instance, by using homogeneous coordinates, the w-coordinate may be<br />
set to zero to extend a point to infinity. This should be accompanied by a viewing frustum that has a far clipping<br />
plane that extends to infinity in order to accommodate those points, accomplished by using a specialized<br />
projection matrix. This technique reduces the accuracy of the depth buffer slightly, but the difference is usually<br />
negligible. See the 2002 paper "Practical and Robust Stenciled Shadow Volumes for<br />
Hardware-Accelerated Rendering" [6] by C. Everitt and M. Kilgard for a detailed implementation.<br />
• Rasterization time of the shadow volumes can be reduced by using an in-hardware scissor test to limit the<br />
shadows to a specific onscreen rectangle.<br />
• NVIDIA has implemented a hardware capability called the depth bounds test [7] that is designed to remove parts<br />
of shadow volumes that do not affect the visible scene. (This has been available since the GeForce FX 5900<br />
model.) A discussion of this capability and its use with shadow volumes was presented at the Game Developers<br />
Conference in 2005. [8]<br />
• Since the depth-fail method only offers an advantage over depth-pass in the special case where the eye is within a<br />
shadow volume, it is preferable to check for this case, and use depth-pass wherever possible. This avoids both the<br />
unnecessary back-capping (and the associated rasterization) for cases where depth-fail is unnecessary, as well as<br />
the problem of appropriately front-capping for special cases of depth-pass.
References<br />
[1] Crow, Franklin C: "Shadow Algorithms for Computer Graphics", Computer Graphics (SIGGRAPH '77 Proceedings), vol. 11, no. 2, 242–248.<br />
[2] Yen, Hun (2002-12-03). "The Theory of Stencil Shadow Volumes" (http://www.gamedev.net/reference/articles/article1873.asp).<br />
GameDev.net. Retrieved 2010-09-12.<br />
[3] "Creative patents Carmack's reverse" (http://techreport.com/onearticle.x/7113). The Tech Report. 2004-07-29. Retrieved 2010-09-12.<br />
[4] http://v3.espacenet.com/textdoc?DB=EPODOC&IDX=US6384822<br />
[5] "Robust Shadow Volumes" (http://developer.nvidia.com/object/robust_shadow_volumes.html). Developer.nvidia.com. Retrieved<br />
2010-09-12.<br />
[6] http://developer.nvidia.com/attach/6831<br />
[7] http://www.opengl.org/registry/specs/EXT/depth_bounds_test.txt<br />
[8] http://www.terathon.com/gdc_lengyel.ppt<br />
External links<br />
• The Theory of Stencil Shadow Volumes (http://www.gamedev.net/reference/articles/article1873.asp)<br />
• The Mechanics of Robust Stencil Shadows (http://www.gamasutra.com/features/20021011/lengyel_01.htm)<br />
• An Introduction to Stencil Shadow Volumes (http://www.devmaster.net/articles/shadow_volumes)<br />
• Shadow Mapping and Shadow Volumes (http://www.devmaster.net/articles/shadow_techniques)<br />
• Stenciled Shadow Volumes in OpenGL (http://www.3ddrome.com/articles/shadowvolumes.php)<br />
• Volume shadow tutorial (http://www.gamedev.net/reference/articles/article2036.asp)<br />
• Fast shadow volumes (http://developer.nvidia.com/object/fast_shadow_volumes.html) at NVIDIA<br />
• Robust shadow volumes (http://developer.nvidia.com/object/robust_shadow_volumes.html) at NVIDIA<br />
• Advanced Stencil Shadow and Penumbral Wedge Rendering (http://www.terathon.com/gdc_lengyel.ppt)<br />
Regarding depth-fail patents<br />
• "Creative Pressures id Software With Patents" (http://games.slashdot.org/games/04/07/28/1529222.shtml).<br />
Slashdot. July 28, 2004. Retrieved 2006-05-16.<br />
• "Creative patents Carmack's reverse" (http://techreport.com/onearticle.x/7113). The Tech Report. July 29,<br />
2004. Retrieved 2006-05-16.<br />
• "Creative gives background to Doom III shadow story" (http://www.theinquirer.net/?article=17525). The<br />
Inquirer. July 29, 2004. Retrieved 2006-05-16.
Silhouette edge 188<br />
Silhouette edge<br />
In computer <strong>graphics</strong>, a silhouette edge on a <strong>3D</strong> body projected onto a 2D plane (display plane) is the collection of<br />
points whose outward surface normal is perpendicular to the view vector. Due to discontinuities in the surface<br />
normal, a silhouette edge is also an edge which separates a front facing face from a back facing face. Without loss of<br />
generality, this edge is usually chosen to be the closest one on a face, so that in parallel view this edge corresponds to<br />
the same one in a perspective view. Hence, if there is an edge between a front facing face and a side facing face, and<br />
another edge between a side facing face and a back facing face, the closer one is chosen. A simple example is looking<br />
at a cube in a direction where one face normal is collinear with the view vector.<br />
The first type of silhouette edge is sometimes troublesome to handle because it does not necessarily correspond to a<br />
physical edge in the CAD model. The reason that this can be an issue is that a programmer might corrupt the original<br />
model by introducing the new silhouette edge into the problem. Also, given that the edge strongly depends upon the<br />
orientation of the model and view vector, this can introduce numerical instabilities into the algorithm (such as when<br />
a trick like dilution of precision is considered).<br />
Computation<br />
To determine the silhouette edge of an object, we first have to know the plane equation of all faces,<br />
ax + by + cz + d = 0.<br />
Then, by examining the sign of the point–plane distance from the light source L = (l_x, l_y, l_z) to each face,<br />
dist = a·l_x + b·l_y + c·l_z + d,<br />
we can determine if the face is front- or back-facing.<br />
The silhouette edge(s) consist of all edges separating a front facing face from a back facing face.<br />
Similar Technique<br />
A convenient and practical implementation of front/back facing detection is to use the unit normal of the plane<br />
(which is commonly precomputed for lighting effects anyway), then simply apply the dot product of the light<br />
position with the plane's unit normal and add the D component of the plane equation (a scalar value):<br />
indicator = (N · L) + D<br />
Note: The homogeneous coordinates, w and d, are not always needed for this computation.<br />
After doing this calculation, you may notice that the indicator is actually the signed distance from the plane to the light<br />
position. This distance will be negative if the light is behind the face, and positive if it is in front of the face.<br />
This is also the technique used in the 2002 SIGGRAPH paper "Practical and Robust Stenciled Shadow Volumes for<br />
Hardware-Accelerated Rendering".
External links<br />
• http:/ / wheger. tripod. com/ vhl/ vhl. htm<br />
Spectral rendering<br />
In computer <strong>graphics</strong>, spectral rendering is a technique in which a scene's light transport is modeled with real<br />
wavelengths. This process is typically much slower than traditional rendering, which renders the scene in its red,<br />
green, and blue components and then overlays the images. Spectral rendering is often used in ray tracing or photon<br />
mapping to simulate the scene more accurately, often for comparison with an actual photograph to test the rendering<br />
algorithm (as in a Cornell box) or to simulate different portions of the electromagnetic spectrum for scientific work.<br />
The simulated images are not necessarily more realistic in appearance; however, when compared to a real image<br />
pixel for pixel, the result is often much closer.<br />
Spectral rendering can also simulate light sources and objects more effectively, as the light's emission spectrum can<br />
be used to release photons at a particular wavelength in proportion to the spectrum. Objects' spectral reflectance<br />
curves can similarly be used to reflect certain portions of the spectrum more accurately.<br />
As an example, certain properties of tomatoes make them appear differently under sunlight than under fluorescent<br />
light. Using the blackbody radiation equations to simulate sunlight or the emission spectrum of a fluorescent bulb in<br />
combination with the tomato's spectral reflectance curve, more accurate images of each scenario can be produced.<br />
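This per-wavelength weighting can be sketched with a toy example (the spectra below are invented sample values, not measured data):<br />

```python
# Toy spectra sampled at three wavelengths (nm); the values are invented
# for illustration, not measured data.
wavelengths = [450, 550, 650]            # blue, green, red samples
tomato_reflectance = [0.05, 0.10, 0.80]  # reflects mostly long wavelengths
daylight = [1.0, 1.0, 1.0]               # roughly flat illuminant
fluorescent = [0.8, 1.2, 0.4]            # spiky, weak in the red

def reflected_power(illuminant, reflectance):
    # Spectral rendering works per wavelength: an object can only reflect
    # the light actually present at each wavelength.
    return [i * r for i, r in zip(illuminant, reflectance)]

under_sun = reflected_power(daylight, tomato_reflectance)
under_tube = reflected_power(fluorescent, tomato_reflectance)
# The dominant red sample is strongly attenuated under the fluorescent
# source, so the tomato renders duller there.
assert under_sun[2] > under_tube[2]
```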
External links<br />
Cornell Box photo comparison (http://www.graphics.cornell.edu/online/box/compare.html)
Specular highlight 190<br />
Specular highlight<br />
A specular highlight is the bright spot of light that appears on shiny<br />
objects when illuminated (for example, see image at right). Specular<br />
highlights are important in <strong>3D</strong> computer <strong>graphics</strong>, as they provide a<br />
strong visual cue for the shape of an object and its location with respect<br />
to light sources in the scene.<br />
Microfacets<br />
The term specular means that light is perfectly reflected in a<br />
mirror-like way from the light source to the viewer. Specular reflection<br />
is visible only where the surface normal is oriented precisely halfway<br />
between the direction of incoming light and the direction of the viewer;<br />
this is called the half-angle direction because it bisects (divides into<br />
halves) the angle between the incoming light and the viewer. Thus, a specularly reflecting surface would show a<br />
specular highlight as the perfectly sharp reflected image of a light source. However, many shiny objects show<br />
blurred specular highlights.<br />
Specular highlights on a pair of spheres.<br />
This can be explained by the existence of microfacets. We assume that surfaces that are not perfectly smooth are<br />
composed of many very tiny facets, each of which is a perfect specular reflector. These microfacets have normals<br />
that are distributed about the normal of the approximating smooth surface. The degree to which microfacet normals<br />
differ from the smooth surface normal is determined by the roughness of the surface.<br />
The reason for blurred specular highlights is now clear. At points on the object where the smooth normal is close to<br />
the half-angle direction, many of the microfacets point in the half-angle direction and so the specular highlight is<br />
bright. As one moves away from the center of the highlight, the smooth normal and the half-angle direction get<br />
farther apart; the number of microfacets oriented in the half-angle direction falls, and so the intensity of the highlight<br />
falls off to zero.<br />
The specular highlight often reflects the color of the light source, not the color of the reflecting object. This is<br />
because many materials have a thin layer of clear material above the surface of the pigmented material. For example,<br />
plastic is made up of tiny beads of color suspended in a clear polymer, and human skin often has a thin layer of oil or<br />
sweat above the pigmented cells. Such materials will show specular highlights in which all parts of the color<br />
spectrum are reflected equally. On metallic materials such as gold the color of the specular highlight will reflect the<br />
color of the material.<br />
Models of microfacets<br />
A number of different models exist to predict the distribution of microfacets. Most assume that the microfacet<br />
normals are distributed evenly around the normal; these models are called isotropic. If microfacets are distributed<br />
with a preference for a certain direction along the surface, the distribution is anisotropic.<br />
NOTE: In most equations, when it says (A · B) it means max(0, A · B).<br />
Phong distribution<br />
In the Phong reflection model, the intensity of the specular highlight is calculated as:<br />
k_spec = (R · V)^n<br />
where R is the mirror reflection of the light vector off the surface, and V is the viewpoint vector.<br />
In the Blinn–Phong shading model, the intensity of a specular highlight is calculated as:<br />
k_spec = (N · H)^n<br />
where N is the smooth surface normal and H is the half-angle direction (the direction vector midway between L, the<br />
vector to the light, and V, the viewpoint vector).<br />
The number n is called the Phong exponent, and is a user-chosen value that controls the apparent smoothness of the<br />
surface. These equations imply that the distribution of microfacet normals is an approximately Gaussian distribution<br />
(for large n), or approximately a Pearson type II distribution, of the corresponding angle. [1] While this is a useful<br />
heuristic and produces believable results, it is not a physically based model.<br />
Another similar formula, only calculated differently:<br />
k_spec = (R · E)^n<br />
where R is the eye reflection vector, E is the eye vector (view vector), and N is the surface normal vector. All vectors<br />
are normalized (|R| = |E| = |N| = 1). L is the light vector. For example,<br />
R = 2(N · L)N − L<br />
An approximate formula is this:<br />
k_spec ≈ (N · H)^n<br />
where, if the vector H is normalized,<br />
H = (L + E) / |L + E|<br />
then (N · H) approximates (R · E) near the center of the highlight.<br />
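The Phong and Blinn–Phong highlight terms above can be evaluated directly; the sketch below uses plain tuples for vectors (helper names are illustrative):<br />

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(l, n):
    # Mirror the light vector about the normal: R = 2(N . L)N - L.
    k = 2.0 * dot(n, l)
    return tuple(k * nc - lc for nc, lc in zip(n, l))

def phong(n, l, v, exponent):
    # k_spec = (R . V)^n, clamped to zero.
    return max(0.0, dot(reflect(l, n), v)) ** exponent

def blinn_phong(n, l, v, exponent):
    # k_spec = (N . H)^n with H midway between L and V.
    h = normalize(tuple(a + b for a, b in zip(l, v)))
    return max(0.0, dot(n, h)) ** exponent

N = (0.0, 0.0, 1.0)
L = normalize((0.0, 1.0, 1.0))
V = normalize((0.0, -1.0, 1.0))  # V is the mirror direction of L about N
assert abs(phong(N, L, V, 32) - 1.0) < 1e-9        # peak of the highlight
assert abs(blinn_phong(N, L, V, 32) - 1.0) < 1e-9  # H coincides with N here
```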
Gaussian distribution<br />
A slightly better model of microfacet distribution can be created using a Gaussian distribution. The usual function<br />
calculates specular highlight intensity as:<br />
k_spec = e^(−(α / m)^2)<br />
where α is the angle between N and H, and m is a constant between 0 and 1 that controls the apparent smoothness of<br />
the surface. [2]<br />
Beckmann distribution<br />
A physically based model of microfacet distribution is the Beckmann distribution: [3]<br />
k_spec = e^(−tan^2(α) / m^2) / (π m^2 cos^4(α))<br />
where m is the rms slope of the surface microfacets (the roughness of the material) and α is the angle between N and<br />
H. [4] Compared to the empirical models above, this function "gives the absolute magnitude of the reflectance<br />
without introducing arbitrary constants; the disadvantage is that it requires more computation". [5] However, this<br />
model can be simplified since tan^2(α) / m^2 = (1 − cos^2(α)) / (m^2 cos^2(α)). Also note that the product of cos(α)<br />
and a surface distribution function is normalized over the half-sphere, which is obeyed by this function.<br />
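A direct evaluation of the Beckmann distribution, with α expressed through cos(α) as in the simplification above:<br />

```python
import math

def beckmann(cos_alpha, m):
    # k_spec = e^(-tan^2(a)/m^2) / (pi * m^2 * cos^4(a)),
    # with tan^2(a) rewritten via cos(a) as (1 - cos^2 a) / cos^2 a.
    tan2 = (1.0 - cos_alpha ** 2) / cos_alpha ** 2
    return math.exp(-tan2 / (m * m)) / (math.pi * m * m * cos_alpha ** 4)

# At the center of the highlight (alpha = 0), a smoother surface
# (smaller rms slope m) produces a higher, narrower peak.
assert beckmann(1.0, 0.2) > beckmann(1.0, 0.6)
```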
Heidrich–Seidel anisotropic distribution<br />
The Heidrich–Seidel distribution is a simple anisotropic distribution, based on the Phong model. It can be used to<br />
model surfaces that have small parallel grooves or fibers, such as brushed metal, satin, and hair. The specular<br />
highlight intensity for this distribution is:<br />
k_spec = (sin(L, T) sin(V, T) − (L · T)(V · T))^n<br />
where sin(A, B) denotes the sine of the angle between the vectors A and B, n is the anisotropic exponent, V is the<br />
viewing direction, L is the direction of incoming light, and T is the direction parallel to the grooves or fibers at this<br />
point on the surface. If you have a unit vector D which specifies the global direction of the anisotropic distribution,<br />
you can compute the vector T at a given point by the following:<br />
T = (D − (D · N)N) / |D − (D · N)N|<br />
where N is the unit normal vector at that point on the surface. You can also easily compute the cosine of the angle<br />
between two of the vectors by using a property of the dot product and the sine of the angle by using the trigonometric<br />
identities.<br />
This anisotropic distribution should be used in conjunction with a non-anisotropic distribution like a Phong<br />
distribution to produce the correct specular highlight.<br />
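The projection of the global direction D onto the tangent plane can be sketched as follows (vector helpers are illustrative):<br />

```python
import math

def project_tangent(d, n):
    # T = normalize(D - (D . N) N): project the global groove direction D
    # into the tangent plane of the surface at this point.
    dn = sum(a * b for a, b in zip(d, n))
    t = tuple(dc - dn * nc for dc, nc in zip(d, n))
    length = math.sqrt(sum(c * c for c in t))
    return tuple(c / length for c in t)

N = (0.0, 0.0, 1.0)
D = (1.0, 0.0, 1.0)            # global anisotropy direction, not tangent
T = project_tangent(D, N)
assert abs(sum(a * b for a, b in zip(T, N))) < 1e-9  # T lies in the tangent plane
assert abs(T[0] - 1.0) < 1e-9                        # and points along the groove
```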
Ward anisotropic distribution<br />
The Ward anisotropic distribution [6] uses two user-controllable parameters α_x and α_y to control the anisotropy. If<br />
the two parameters are equal, then an isotropic highlight results. The specular term in the distribution is:<br />
k_spec = (1 / √((N · L)(N · R))) · ((N · L) / (4π α_x α_y)) · exp(−2((H · X / α_x)^2 + (H · Y / α_y)^2) / (1 + H · N))<br />
The specular term is zero if N · L < 0 or N · R < 0. All vectors are unit vectors. The vector R is the mirror reflection of<br />
the light vector off the surface, L is the direction from the surface point to the light, H is the half-angle direction, N is<br />
the surface normal, and X and Y are two orthogonal vectors in the normal plane which specify the anisotropic<br />
directions.<br />
Cook–Torrance model<br />
The Cook–Torrance model [5] uses a specular term of the form<br />
.<br />
Here D is the Beckmann distribution factor as above and F is the Fresnel term,<br />
.<br />
For performance reasons in real-time <strong>3D</strong> <strong>graphics</strong> Schlick's approximation is often used to approximate Fresnel term.<br />
G is the geometric attenuation term, describing selfshadowing due to the microfacets, and is of the form<br />
In these formulas E is the vector to the camera or eye, H is the half-angle vector, L is the vector to the light source<br />
and N is the normal vector, and α is the angle between H and N.<br />
.
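The Fresnel and geometric attenuation factors can be sketched as follows (Schlick's approximation, with a base reflectance F0 chosen purely for illustration):<br />

```python
def schlick_fresnel(f0, cos_eh):
    # Schlick's approximation to the Fresnel term; f0 is the reflectance
    # at normal incidence (a material parameter, illustrative value below).
    return f0 + (1.0 - f0) * (1.0 - cos_eh) ** 5

def geometric_attenuation(nh, ne, nl, eh):
    # G = min(1, 2(H.N)(E.N)/(E.H), 2(H.N)(L.N)/(E.H)) describes
    # self-shadowing by the microfacets.
    return min(1.0, 2.0 * nh * ne / eh, 2.0 * nh * nl / eh)

# Reflectance rises toward 1 at glancing angles (E.H -> 0) ...
assert abs(schlick_fresnel(0.04, 0.0) - 1.0) < 1e-12
# ... and falls back to f0 at normal incidence.
assert abs(schlick_fresnel(0.04, 1.0) - 0.04) < 1e-12
# With all vectors aligned there is no self-shadowing.
assert geometric_attenuation(1.0, 1.0, 1.0, 1.0) == 1.0
```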
Using multiple distributions<br />
If desired, different distributions (usually, using the same distribution function with different values of m or n) can be<br />
combined using a weighted average. This is useful for modelling, for example, surfaces that have small smooth and<br />
rough patches rather than uniform roughness.<br />
References<br />
[1] Richard Lyon, "Phong Shading Reformulation for Hardware Renderer Simplification", Apple Technical Report #43, Apple Computer, Inc.<br />
1993. PDF (http://dicklyon.com/tech/Graphics/Phong_TR-Lyon.pdf)<br />
[2] Glassner, Andrew S. (ed). An Introduction to Ray Tracing. San Diego: Academic Press Ltd, 1989. p. 148.<br />
[3] Petr Beckmann, André Spizzichino, The Scattering of Electromagnetic Waves from Rough Surfaces, Pergamon Press, 1963, 503 pp.<br />
(Republished by Artech House, 1987, ISBN 9780890062388).<br />
[4] Foley et al. Computer Graphics: Principles and Practice. Menlo Park: Addison-Wesley, 1997. p. 764.<br />
[5] R. Cook and K. Torrance. "A reflectance model for computer <strong>graphics</strong>". Computer Graphics (SIGGRAPH '81 Proceedings), Vol. 15, No. 3,<br />
July 1981, pp. 301–316.<br />
[6] http://radsite.lbl.gov/radiance/papers/<br />
Specularity<br />
Specularity is the visual appearance of specular reflections. In<br />
computer <strong>graphics</strong>, it refers to the quantity used in <strong>3D</strong> rendering which<br />
represents the amount of specular reflectivity a surface has. It is a key<br />
component in determining the brightness of specular highlights, along<br />
with shininess, which determines the size of the highlights.<br />
It is frequently used in real-time computer <strong>graphics</strong> where the<br />
mirror-like specular reflection of light from other surfaces is often<br />
ignored (due to the more intensive computations required to calculate<br />
this), and the specular reflection of light direct from point light sources<br />
is modelled as specular highlights.<br />
Specular highlights on a pair of spheres.
Sphere mapping 194<br />
Sphere mapping<br />
In computer <strong>graphics</strong>, sphere mapping (or spherical environment mapping) is a type of reflection mapping that<br />
approximates reflective surfaces by considering the environment to be an infinitely far-away spherical wall. This<br />
environment is stored as a texture depicting what a mirrored sphere would look like if it were placed into the<br />
environment, using an orthographic projection (as opposed to one with perspective). This texture contains reflective<br />
data for the entire environment, except for the spot directly behind the sphere. (For one example of such an object,<br />
see Escher's drawing Hand with Reflecting Sphere.)<br />
To use this data, the surface normal of the object, view direction from the object to the camera, and/or reflected<br />
direction from the object to the environment is used to calculate a texture coordinate to look up in the<br />
aforementioned texture map. The result appears like the environment is reflected in the surface of the object that is<br />
being rendered.<br />
Usage example<br />
In the simplest case for generating texture coordinates, suppose:<br />
• The map has been created as above, looking at the sphere along the z-axis.<br />
• The texture coordinate of the center of the map is (0,0), and the sphere's image has radius 1.<br />
• We are rendering an image in exactly the same situation as the sphere, but the sphere has been replaced with a<br />
reflective object.<br />
• The image being created is orthographic, or the viewer is infinitely far away, so that the view direction does not<br />
change as one moves across the image.<br />
At texture coordinate (t_x, t_y), note that the depicted location on the sphere is (t_x, t_y, z) (where z is<br />
√(1 − t_x^2 − t_y^2)), and the normal at that location is also (t_x, t_y, z). However, we are given the reverse task (a<br />
normal for which we need to produce a texture map coordinate). So the texture coordinate corresponding to normal<br />
(n_x, n_y, n_z) is (n_x, n_y).<br />
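The round trip between a normal and its sphere-map coordinate, under the assumptions listed above, can be sketched as:<br />

```python
import math

def spheremap_coord(normal):
    # For an orthographic sphere map centered at (0,0) with radius 1,
    # the texture coordinate for a normal (nx, ny, nz), nz >= 0, is
    # simply its x and y components.
    nx, ny, nz = normal
    return (nx, ny)

def spheremap_normal(tx, ty):
    # Inverse lookup: the depicted point on the unit sphere, which is
    # also the surface normal at that point.
    tz = math.sqrt(1.0 - tx * tx - ty * ty)
    return (tx, ty, tz)

n = spheremap_normal(0.6, 0.0)
assert abs(n[2] - 0.8) < 1e-9           # z = sqrt(1 - 0.36) = 0.8
assert spheremap_coord(n) == (0.6, 0.0)  # round trip recovers the coordinate
```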
Stencil buffer 195<br />
Stencil buffer<br />
A stencil buffer is an extra buffer, in addition to the<br />
color buffer (pixel buffer) and depth buffer<br />
(z-buffering) found on modern computer <strong>graphics</strong><br />
hardware. The buffer is per pixel, and works on integer<br />
values, usually with a depth of one byte per pixel. The<br />
depth buffer and stencil buffer often share the same<br />
area in the RAM of the <strong>graphics</strong> hardware.<br />
In the simplest case, the stencil buffer is used to limit<br />
the area of rendering (stenciling). More advanced usage<br />
of the stencil buffer makes use of the strong connection<br />
between the depth buffer and the stencil buffer in the<br />
rendering pipeline. For example, stencil values can be<br />
automatically increased/decreased for every pixel that<br />
fails or passes the depth test.<br />
The simple combination of depth test and stencil modifiers makes a vast number of effects possible (such as<br />
shadows, outline drawing or highlighting of intersections between complex primitives), though they often require<br />
several rendering passes and, therefore, can put a heavy load on the <strong>graphics</strong> hardware.<br />
In this program the stencil buffer is filled with 1s wherever a white stripe is drawn and 0s elsewhere. Two versions<br />
of each oval, square, or triangle are then drawn. A black colored shape is drawn where the stencil buffer is 1, and a<br />
white shape is drawn where the buffer is 0.<br />
The most typical application is still to add shadows to <strong>3D</strong> applications. It is also used for planar reflections.<br />
Other rendering techniques, such as portal rendering, use the stencil buffer in other ways; for example, it can be used<br />
to find the area of the screen obscured by a portal and re-render those pixels correctly.<br />
The stencil buffer and its modifiers can be accessed in computer <strong>graphics</strong> APIs like OpenGL and Direct<strong>3D</strong>.
Stencil codes 196<br />
Stencil codes<br />
Stencil codes are a class of iterative kernels [1] which update array<br />
elements according to some fixed pattern, called stencil [2] . They are<br />
most commonly found in the codes of computer simulations, e.g. for<br />
computational fluid dynamics in the context of scientific and<br />
engineering applications. Other notable examples include solving<br />
partial differential equations [1] , the Jacobi kernel, the Gauss–Seidel<br />
method [2] , image processing [1] and cellular automata. [3] The regular<br />
structure of the arrays sets stencil codes apart from other modeling<br />
methods such as the finite element method. Most finite difference<br />
codes which operate on regular grids can be formulated as stencil<br />
codes.<br />
Definition<br />
Stencil codes perform a sequence of sweeps (called timesteps) through<br />
a given array. [2] Generally this is a 2- or 3-dimensional regular grid. [3]<br />
The elements of the arrays are often referred to as cells. In each timestep, the stencil code updates all array<br />
elements. [2] Using neighboring array elements in a fixed pattern (called the stencil), each cell's new value is<br />
computed. In most cases boundary values are left unchanged, but in some cases (e.g. LBM codes) those need to be<br />
adjusted during the course of the computation as well. Since the stencil is the same for each element, the pattern of<br />
data accesses is repeated. [4]<br />
The shape of a 6-point <strong>3D</strong> von Neumann style stencil.<br />
More formally, we may define stencil codes as a 5-tuple (I, S, S_0, s, T) with the following meaning: [3]<br />
• I is the index set. It defines the topology of the array.<br />
• S is the (not necessarily finite) set of states, one of which each cell may take on at any given timestep.<br />
• S_0 : I → S defines the initial state of the system at time 0.<br />
• s = (x_1, …, x_l) is the stencil itself and describes the actual shape of the neighborhood. (There are l elements in the<br />
stencil.)<br />
• T : S^l → S is the transition function which is used to determine a cell's new state, depending on its neighbors.<br />
Since I is a k-dimensional integer interval, the array will always have the topology of a finite regular grid. The array<br />
is also called simulation space and individual cells are identified by their index c ∈ I. The stencil is an ordered set<br />
of l relative coordinates. We can now obtain for each cell c the tuple of its neighbors' indices I_c:<br />
I_c = (c + x_1, …, c + x_l)<br />
Their states are given by mapping the tuple I_c to the corresponding tuple N_c(t) of states:<br />
N_c(t) = (S_t(c + x_1), …, S_t(c + x_l))<br />
This is all we need to define the system's state for the following time steps S_{t+1} : I → S with t ∈ ℕ:<br />
S_{t+1}(c) = T(N_c(t))<br />
Note that S_{t+1} is defined on I and not just on the interior of I, since the boundary conditions need to be set, too.<br />
Sometimes the elements of I_c may be defined by a vector addition modulo the simulation space's dimensions to<br />
realize toroidal topologies:<br />
I_c = ((c + x_1) mod n, …, (c + x_l) mod n)<br />
This may be useful for implementing periodic boundary conditions, which simplifies certain physical models.<br />
Example: 2D Jacobi Iteration<br />
To illustrate the formal definition, we'll have a look at how a two-dimensional Jacobi iteration can be defined. The<br />
update function computes the arithmetic mean of a cell's four neighbors. In this case we set off with an initial<br />
solution of 0. The left and right boundaries are fixed at 1, while the upper and lower boundaries are set to 0. After a<br />
sufficient number of iterations, the system converges to a saddle shape.<br />
Data dependencies of a selected cell in the 2D array.<br />
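A minimal implementation of this Jacobi stencil code (grid size and iteration count chosen for illustration):<br />

```python
def jacobi_step(grid):
    # One timestep: sweep the grid, replacing each interior cell by the
    # arithmetic mean of its four von Neumann neighbors. Boundary cells
    # are left unchanged, as in the example above.
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j] +
                                grid[i][j - 1] + grid[i][j + 1])
    return new

# 4x4 grid: left/right columns fixed at 1, top/bottom rows fixed at 0.
grid = [[0.0] * 4 for _ in range(4)]
for i in (1, 2):
    grid[i][0] = grid[i][3] = 1.0
for _ in range(100):
    grid = jacobi_step(grid)
# On this tiny grid every interior cell converges to 0.5.
assert abs(grid[1][1] - 0.5) < 1e-6
```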
Stencils<br />
The shape of the neighborhood used during the updates depends on the application itself. The most common stencils<br />
are the 2D or <strong>3D</strong> versions of the Von Neumann neighborhood and Moore neighborhood. The example above uses a<br />
2D von Neumann stencil while LBM codes generally use its <strong>3D</strong> variant. Conway's Game of Life uses the 2D Moore<br />
neighborhood. That said, other stencils such as a 25-point stencil for seismic wave propagation [5] can be found, too.<br />
9-point 2D stencil<br />
5-point 2D stencil<br />
6-point <strong>3D</strong> stencil<br />
25-point <strong>3D</strong> stencil
Implementation Issues<br />
Many simulation codes may be formulated naturally as stencil codes. Since computing time and memory<br />
consumption grow linearly with the number of array elements, parallel implementations of stencil codes are of<br />
paramount importance to research. [6] This is challenging since the computations are tightly coupled (because of the<br />
cell updates depending on neighboring cells) and most stencil codes are memory bound (i.e. the ratio of memory<br />
accesses and calculations is high). [7] Virtually all current parallel architectures have been explored for executing<br />
stencil codes efficiently [8] ; at the moment GPGPUs have proven to be most efficient. [9]<br />
Libraries<br />
Due to both the importance of stencil codes to computer simulations and their high computational requirements,<br />
there are a number of efforts which aim at creating reusable libraries to support scientists in implementing new<br />
stencil codes. The libraries are mostly concerned with the parallelization, but may also tackle other challenges, such<br />
as IO, steering and checkpointing. They may be classified by their API.<br />
Patch-Based Libraries<br />
This is a traditional design. The library manages a set of n-dimensional scalar arrays, which the user code may access<br />
to perform updates. The library handles the synchronization of the boundaries (dubbed ghost zone or halo). The<br />
advantage of this interface is that the user code may loop over the arrays, which makes it easy to integrate legacy<br />
codes [10]. The disadvantage is that the library cannot handle cache blocking (as this has to be done within the<br />
loops [11] ) or wrapping of the code for accelerators (e.g. via CUDA or OpenCL). Notable implementations include<br />
Cactus [12] , a physics problem solving environment, and waLBerla [13] .<br />
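The patch-based interface can be illustrated with a minimal sketch, in which the "library" only provides the padded array and the user code owns the sweep over it. The one-cell ghost zone here simply holds fixed boundary values; in a real patch-based library it would be synchronized across processes.

```python
# Sketch of a patch-based 5-point Jacobi stencil (heat diffusion).
# The grid includes a one-cell ghost zone on each side; here the halo
# just stores boundary values instead of being exchanged between ranks.

def jacobi_step(grid):
    """One update sweep over the interior cells; returns a new grid."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j] +
                                grid[i][j - 1] + grid[i][j + 1])
    return new

# A 5x5 patch: hot left boundary (1.0), everything else cold (0.0).
grid = [[1.0 if j == 0 else 0.0 for j in range(5)] for i in range(5)]
for _ in range(100):
    grid = jacobi_step(grid)
```

Note that the double loop belongs to the user code, which is exactly why a library with this interface cannot transparently reorder the traversal for cache blocking.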
Cell-Based Libraries<br />
These libraries move the interface to updating single simulation cells: only the current cell and its neighbors are exposed to the user code, e.g. via getter/setter methods. The advantage of this approach is that the library can tightly control which cells are updated in which order, which is useful not only for implementing cache blocking, [9] but also for running the same code on multi-cores and GPUs. [14] This approach requires the user to recompile his source code together with the library; otherwise a function call for every cell update would be required, which would seriously impair performance. Avoiding this overhead is only feasible with techniques such as class templates or metaprogramming, which is also the reason why this design is only found in newer libraries. Examples are Physis [15] and LibGeoDecomp. [16]<br />
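The inversion of control described above can be sketched as follows (a toy driver, not the API of Physis or LibGeoDecomp): the user supplies only a per-cell update rule, and the driver owns the traversal, which is what lets real implementations choose an order suited to cache blocking or GPUs.

```python
# Hypothetical cell-based interface: the update rule sees only the
# current cell and its von Neumann neighbors; the driver below decides
# the traversal order on the user's behalf.

def average_rule(center, north, south, west, east):
    return 0.25 * (north + south + west + east)

def apply_stencil(grid, rule):
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(1, n - 1):          # the driver owns this loop,
        for j in range(1, n - 1):      # not the user code
            new[i][j] = rule(grid[i][j],
                             grid[i - 1][j], grid[i + 1][j],
                             grid[i][j - 1], grid[i][j + 1])
    return new

grid = [[0.0] * 4 for _ in range(4)]
grid[0] = [4.0] * 4                    # hot top boundary
result = apply_stencil(grid, average_rule)
```

In Python the per-cell call costs exactly the overhead the article mentions; the C++ libraries avoid it by inlining the rule through templates at compile time.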
References<br />
[1] Roth, Gerald et al. (1997) Proceedings of SC'97: High Performance Networking and Computing. Compiling Stencils in High Performance Fortran. (http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.1505)<br />
[2] Sloot, Peter M.A. et al. (May 28, 2002) Computational Science - ICCS 2002: International Conference, Amsterdam, The Netherlands, April 21-24, 2002. Proceedings, Part I. (http://books.google.com/books?id=qVcLw1UAFUsC&pg=PA843&dq=stencil+array&sig=g3gYXncOThX56TUBfHE7hnlSxJg#PPA843,M1) Page 843. Publisher: Springer. ISBN 3540435913.<br />
[3] Fey, Dietmar et al. (2010) Grid-Computing: Eine Basistechnologie für Computational Science (http://books.google.com/books?id=RJRZJHVyQ4EC&pg=PA51&dq=fey+grid&hl=de&ei=uGk8TtDAAo_zsgbEoZGpBQ&sa=X&oi=book_result&ct=result&resnum=1&ved=0CCoQ6AEwAA#v=onepage&q&f=true). Page 439. Publisher: Springer. ISBN 3540797467.<br />
[4] Yang, Laurence T.; Guo, Minyi. (August 12, 2005) High-Performance Computing: Paradigm and Infrastructure. (http://books.google.com/books?id=qA4DbnFB2XcC&pg=PA221&dq=Stencil+codes&as_brr=3&sig=H8wdKyABXT5P7kUh4lQGZ9C5zDk) Page 221. Publisher: Wiley-Interscience. ISBN 047165471X.<br />
[5] Micikevicius, Paulius et al. (2009) 3D finite difference computation on GPUs using CUDA (http://portal.acm.org/citation.cfm?id=1513905). Proceedings of the 2nd Workshop on General Purpose Processing on Graphics Processing Units. ISBN 978-1-60558-517-8.<br />
[6] Datta, Kaushik (2009) Auto-tuning Stencil Codes for Cache-Based Multicore Platforms (http://www.cs.berkeley.edu/~kdatta/pubs/EECS-2009-177.pdf), Ph.D. Thesis.<br />
[7] Wellein, G. et al. (2009) Efficient temporal blocking for stencil computations by multicore-aware wavefront parallelization (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5254211), 33rd Annual IEEE International Computer Software and Applications Conference, COMPSAC 2009.<br />
[8] Datta, Kaushik et al. (2008) Stencil computation optimization and auto-tuning on state-of-the-art multicore architectures (http://portal.acm.org/citation.cfm?id=1413375), SC '08: Proceedings of the 2008 ACM/IEEE Conference on Supercomputing.<br />
[9] Schäfer, Andreas and Fey, Dietmar (2011) High Performance Stencil Code Algorithms for GPGPUs (http://www.sciencedirect.com/science/article/pii/S1877050911002791), Proceedings of the International Conference on Computational Science, ICCS 2011.<br />
[10] S. Donath, J. Götz, C. Feichtinger, K. Iglberger and U. Rüde (2010) waLBerla: Optimization for Itanium-based Systems with Thousands of Processors (http://www.springerlink.com/content/p2583237l2187374/), High Performance Computing in Science and Engineering, Garching/Munich 2009.<br />
[11] Nguyen, Anthony et al. (2010) 3.5-D Blocking Optimization for Stencil Computations on Modern CPUs and GPUs (http://dl.acm.org/citation.cfm?id=1884658), SC '10: Proceedings of the 2010 ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis.<br />
[12] http://cactuscode.org/<br />
[13] http://www10.informatik.uni-erlangen.de/Research/Projects/walberla/description.shtml<br />
[14] Naoya Maruyama, Tatsuo Nomura, Kento Sato, and Satoshi Matsuoka (2011) Physis: An Implicitly Parallel Programming Model for Stencil Computations on Large-Scale GPU-Accelerated Supercomputers, SC '11: Proceedings of the 2011 ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis.<br />
[15] https://github.com/naoyam/physis<br />
[16] http://www.libgeodecomp.org<br />
External links<br />
• Physis (https://github.com/naoyam/physis)<br />
• LibGeoDecomp (http://www.libgeodecomp.org)<br />
Subdivision surface<br />
A subdivision surface, in the field of <strong>3D</strong> computer <strong>graphics</strong>, is a method of representing a smooth surface via the<br />
specification of a coarser piecewise linear polygon mesh. The smooth surface can be calculated from the coarse mesh<br />
as the limit of a recursive process of subdividing each polygonal face into smaller faces that better approximate the<br />
smooth surface.
Overview<br />
Subdivision surfaces are defined recursively. The process starts<br />
with a given polygonal mesh. A refinement scheme is then applied to<br />
this mesh. This process takes that mesh and subdivides it, creating new<br />
vertices and new faces. The positions of the new vertices in the mesh<br />
are computed based on the positions of nearby old vertices. In some<br />
refinement schemes, the positions of old vertices might also be altered<br />
(possibly based on the positions of new vertices).<br />
This process produces a denser mesh than the original one, containing<br />
more polygonal faces. This resulting mesh can be passed through the<br />
same refinement scheme again and so on.<br />
The limit subdivision surface is the surface produced from this process<br />
being iteratively applied infinitely many times. In practice, however, the refinement is only applied a limited number of times. The<br />
limit surface can also be calculated directly for most subdivision<br />
surfaces using the technique of Jos Stam, [1] which eliminates the need<br />
for recursive refinement.<br />
Refinement schemes<br />
First three steps of Catmull–Clark subdivision of<br />
a cube with subdivision surface below<br />
Subdivision surface refinement schemes can be broadly classified into two categories: interpolating and<br />
approximating. Interpolating schemes are required to match the original position of vertices in the original mesh.<br />
Approximating schemes are not; they can and will adjust these positions as needed. In general, approximating<br />
schemes have greater smoothness, but editing applications that allow users to set exact surface constraints require an<br />
optimization step. This is analogous to spline surfaces and curves, where Bézier splines are required to interpolate<br />
certain control points, while B-splines are not.<br />
Subdivision surface schemes can also be divided by the type of polygon they operate on: some function on quadrilaterals (quads), while others operate on triangles.<br />
Approximating schemes<br />
Approximating means that the limit surfaces approximate the initial meshes, and that after subdivision the newly generated control points are not on the limit surfaces. Examples of approximating subdivision schemes are:<br />
• Catmull–Clark - Catmull and Clark (1978) generalized bi-cubic uniform B-splines to produce their subdivision scheme. For arbitrary initial meshes, this scheme generates limit surfaces that are C² continuous everywhere except at extraordinary vertices, where they are C¹ continuous (Peters and Reif 1998).<br />
• Doo–Sabin - The second subdivision scheme was developed by Doo and Sabin (1978), who successfully extended Chaikin's corner-cutting method for curves to surfaces. They used the analytical expression of the bi-quadratic uniform B-spline surface to generate their subdivision procedure, which produces C¹ limit surfaces for initial meshes with arbitrary topology.<br />
• Loop, Triangles - Loop (1987) proposed his subdivision scheme based on a quartic box spline over six direction vectors, providing a rule that generates C² continuous limit surfaces everywhere except at extraordinary vertices, where they are C¹ continuous.<br />
• Mid-Edge subdivision scheme - The mid-edge subdivision scheme was proposed independently by Peters and Reif (1997) and by Habib and Warren (1999). The former used the mid-point of each edge to build the new mesh. The latter used a four-directional box spline to build the scheme. This scheme generates C¹ continuous limit surfaces on initial meshes with arbitrary topology.<br />
• √3 subdivision scheme - This scheme was developed by Kobbelt (2000) and offers several interesting features: it handles arbitrary triangular meshes, it is C² continuous everywhere except at extraordinary vertices, where it is C¹ continuous, and it offers natural adaptive refinement when required. It has two notable characteristics: it is a dual scheme for triangle meshes, and it has a slower refinement rate than primal schemes.<br />
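The corner-cutting idea that Doo and Sabin generalized to surfaces is easiest to see in its curve form. Below is a minimal sketch of Chaikin's scheme: each edge is replaced by two points at 1/4 and 3/4 along it, so the polygon's corners are cut off and the control points drift toward (but never land on) the limit curve, which is what "approximating" means.

```python
# Chaikin's corner-cutting for a closed polygon: the curve analogue of
# an approximating subdivision scheme. The limit curve is a quadratic
# uniform B-spline.

def chaikin(points, closed=True):
    new = []
    n = len(points)
    edges = n if closed else n - 1
    for k in range(edges):
        (x0, y0), (x1, y1) = points[k], points[(k + 1) % n]
        new.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
        new.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
    return new

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
refined = chaikin(chaikin(square))     # two refinement steps
```

After two steps the square's sharp corners are gone and none of the original vertices survive, illustrating that approximating schemes do not keep the input points on the surface.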
Interpolating schemes<br />
After subdivision, the control points of the original mesh and the newly generated control points are interpolated on the limit surface. The earliest work was the butterfly scheme by Dyn, Levin and Gregory (1990), who extended the four-point interpolatory subdivision scheme for curves to a subdivision scheme for surfaces. Zorin, Schröder and Sweldens (1996) noticed that the butterfly scheme cannot generate smooth surfaces for irregular triangle meshes and thus modified this scheme. Kobbelt (1996) further generalized the four-point interpolatory subdivision scheme for curves to the tensor product subdivision scheme for surfaces.<br />
• Butterfly, Triangles - named after the scheme's shape<br />
• Midedge, Quads<br />
• Kobbelt, Quads - a variational subdivision method that tries to overcome uniform subdivision drawbacks<br />
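The four-point scheme mentioned above, which the butterfly scheme extends to surfaces, can be sketched in its curve form: old vertices are kept unchanged (this is the interpolating property), and each new point between p1 and p2 is computed from the four surrounding points with weights -1/16, 9/16, 9/16, -1/16.

```python
# Curve analogue of interpolating subdivision: the four-point scheme
# of Dyn, Levin and Gregory, applied to a closed polygon.

def four_point(points):
    n = len(points)
    new = []
    for k in range(n):                     # closed polygon
        p0, p1 = points[(k - 1) % n], points[k]
        p2, p3 = points[(k + 1) % n], points[(k + 2) % n]
        new.append(p1)                     # old vertex survives unchanged
        new.append(tuple(9 / 16 * (a + b) - 1 / 16 * (c + d)
                         for a, b, c, d in zip(p1, p2, p0, p3)))
    return new

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
refined = four_point(square)
```

Unlike Chaikin's scheme, every original corner of the square remains in the refined polygon; the new points bulge slightly outward, which is characteristic of interpolating schemes.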
Editing a subdivision surface<br />
Subdivision surfaces can be naturally edited at different levels of subdivision. Starting with basic shapes, one can use binary operators to create the correct topology, then edit the coarse mesh to create the basic shape, then edit the offsets for the next subdivision step, and repeat this at finer and finer levels. One can always see how the edits affect the limit surface via GPU evaluation of the surface.<br />
A surface designer may also start with a scanned-in object or one created from a NURBS surface. The same basic<br />
optimization algorithms are used to create a coarse base mesh with the correct topology and then add details at each<br />
level so that the object may be edited at different levels. These types of surfaces may be difficult to work with<br />
because the base mesh does not have control points in the locations that a human designer would place them. With a<br />
scanned object this surface is easier to work with than a raw triangle mesh, but a NURBS object probably had well<br />
laid out control points which behave less intuitively after the conversion than before.<br />
Key developments<br />
• 1978: Subdivision surfaces were discovered simultaneously by Edwin Catmull and Jim Clark (see Catmull–Clark subdivision surface). In the same year, Daniel Doo and Malcolm Sabin published a paper building on this work (see Doo–Sabin subdivision surface).<br />
• 1995: Ulrich Reif solved subdivision surface behaviour near extraordinary vertices. [2]<br />
• 1998: Jos Stam contributed a method for exact evaluation for Catmull–Clark and Loop subdivision surfaces under<br />
arbitrary parameter values. [1]
References<br />
[1] Jos Stam, Exact Evaluation of Catmull–Clark Subdivision Surfaces at Arbitrary Parameter Values, Proceedings of SIGGRAPH'98. In Computer Graphics Proceedings, ACM SIGGRAPH, 1998, 395–404 (pdf (http://www.dgp.toronto.edu/people/stam/reality/Research/pdf/sig98.pdf), downloadable eigenstructures (http://www.dgp.toronto.edu/~stam/reality/Research/SubdivEval/index.html))<br />
[2] Ulrich Reif. 1995. A unified approach to subdivision algorithms near extraordinary vertices. Computer Aided Geometric Design 12(2), 153–174.<br />
• J. Peters and U. Reif: The simplest subdivision scheme for smoothing polyhedra, ACM Transactions on Graphics 16(4) (October 1997), pp. 420–431, doi (http://doi.acm.org/10.1145/263834.263851).<br />
• A. Habib and J. Warren: Edge and vertex insertion for a class of C¹ subdivision surfaces, Computer Aided Geometric Design 16(4) (May 1999), pp. 223–247, doi (http://dx.doi.org/10.1016/S0167-8396(98)00045-4).<br />
• L. Kobbelt: √3-subdivision, 27th annual conference on Computer graphics and interactive techniques, doi (http://doi.acm.org/10.1145/344779.344835).<br />
External links<br />
• Resources about Subdivisions (http://www.subdivision.org)<br />
• Geri's Game (http://www.pixar.com/shorts/gg/theater/index.html): Oscar-winning animation by Pixar, completed in 1997, that introduced subdivision surfaces (along with cloth simulation)<br />
• Subdivision for Modeling and Animation (http://www.multires.caltech.edu/pubs/sig99notes.pdf) tutorial, SIGGRAPH 1999 course notes<br />
• Subdivision for Modeling and Animation (http://www.mrl.nyu.edu/dzorin/sig00course/) tutorial, SIGGRAPH 2000 course notes<br />
• Subdivision of Surface and Volumetric Meshes (http://www.hakenberg.de/subdivision/ultimate_consumer.htm), software to perform subdivision using the most popular schemes<br />
• Surface Subdivision Methods in CGAL, the Computational Geometry Algorithms Library (http://www.cgal.org/Pkg/SurfaceSubdivisionMethods3)<br />
Subsurface scattering<br />
Subsurface scattering (or SSS) is a<br />
mechanism of light transport in which<br />
light penetrates the surface of a<br />
translucent object, is scattered by<br />
interacting with the material, and exits<br />
the surface at a different point. The<br />
light will generally penetrate the<br />
surface and be reflected a number of<br />
times at irregular angles inside the<br />
material, before passing back out of the<br />
material at an angle other than the<br />
angle it would have if it had been<br />
reflected directly off the surface.<br />
Subsurface scattering is important in<br />
<strong>3D</strong> computer <strong>graphics</strong>, being necessary<br />
for the realistic rendering of materials<br />
such as marble, skin, and milk.<br />
Rendering Techniques<br />
Most materials used in real-time<br />
computer <strong>graphics</strong> today only account<br />
for the interaction of light at the<br />
surface of an object. In reality, many<br />
materials are slightly translucent: light enters the surface, is absorbed, scattered and re-emitted, potentially at a different point. Skin is a good case in point: only about 6% of reflectance is direct; 94% is from subsurface scattering. [1] An inherent property of semitransparent materials is absorption: the further through the material light travels, the greater the proportion absorbed. To simulate this effect, a measure of the distance the light has traveled through the material must be obtained.<br />
Direct surface scattering (left), plus subsurface scattering (middle), create the final image on the right.<br />
Example of subsurface scattering made in Blender.<br />
Depth Map based SSS<br />
One method of estimating this distance is to use depth maps, [2] in a manner similar to shadow mapping. The scene is rendered from the light's point of view into a depth map, so that the distance to the nearest surface is stored. The depth map is then projected onto the scene using standard projective texture mapping, and the scene is re-rendered. In this pass, when shading a given point, the distance from the light at the point where the ray entered the surface can be obtained by a simple texture lookup. By subtracting this value from the distance at the point where the ray exited the object, we can obtain an estimate of the distance the light has traveled through the object.<br />
Depth estimation using depth maps<br />
The measure of distance obtained by this method can be used in several ways. One such way is to use it to index<br />
directly into an artist created 1D texture that falls off exponentially with distance. This approach, combined with
other more traditional lighting models, allows the creation of different materials such as marble, jade and wax.<br />
Potentially, problems can arise if models are not convex, but depth peeling [3] can be used to avoid the issue.<br />
Similarly, depth peeling can be used to account for varying densities beneath the surface, such as bone or muscle, to<br />
give a more accurate scattering model.<br />
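The distance estimate and exponential falloff described above can be sketched as follows. The extinction coefficient `sigma_t` is a hypothetical artist-tuned parameter (in practice it would be painted into the 1D falloff texture), and the depth values stand in for the depth-map lookup and the shaded point's distance from the light.

```python
# Sketch of the depth-map thickness estimate with exponential absorption.
# depth_entry: distance from the light to the point the ray entered the
# surface (read from the light's depth map); depth_exit: distance from
# the light to the point being shaded.

import math

def thickness(depth_entry, depth_exit):
    """Estimated distance the light traveled inside the object."""
    return max(depth_exit - depth_entry, 0.0)

def transmittance(distance, sigma_t=8.0):
    """Exponential falloff; sigma_t is a made-up extinction coefficient
    an artist would tune (or encode in a 1D lookup texture)."""
    return math.exp(-sigma_t * distance)

d = thickness(depth_entry=2.0, depth_exit=2.3)   # light crossed ~0.3 units
weight = transmittance(d)                         # scales the lit color
```

The resulting weight would then modulate the transmitted light color before being combined with the traditional surface lighting terms.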
As can be seen in the image of the wax head to the right, light isn't diffused when passing through the object using this technique; features on the back are clearly visible. One solution to this is to take multiple samples at different points on the surface of the depth map. Alternatively, a different approach to approximation can be used, known as texture-space diffusion.<br />
Texture Space Diffusion<br />
As noted at the start of the section, one of the more obvious effects of subsurface scattering is a general blurring of<br />
the diffuse lighting. Rather than arbitrarily modifying the diffuse function, diffusion can be more accurately modeled<br />
by simulating it in texture space. This technique was pioneered in rendering faces in The Matrix Reloaded, [4] but has<br />
recently fallen into the realm of real-time techniques.<br />
The method unwraps the mesh of an object using a vertex shader, first calculating the lighting based on the original vertex coordinates. The vertices are then remapped using the UV texture coordinates as the screen position of the vertex, suitably transformed from the [0, 1] range of texture coordinates to the [-1, 1] range of normalized device coordinates. By lighting the unwrapped mesh in this manner, we obtain a 2D image representing the lighting on the<br />
object, which can then be processed and reapplied to the model as a light map. To simulate diffusion, the light map<br />
texture can simply be blurred. Rendering the lighting to a lower-resolution texture in itself provides a certain amount<br />
of blurring. The amount of blurring required to accurately model subsurface scattering in skin is still under active<br />
research, but performing only a single blur poorly models the true effects. [5] To emulate the wavelength dependent<br />
nature of diffusion, the samples used during the (Gaussian) blur can be weighted by channel. This is somewhat of an<br />
artistic process. For human skin, the broadest scattering is in red, then green, and blue has very little scattering.<br />
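The per-channel weighting described above can be sketched with a one-dimensional blur over a light-map scanline. The Gaussian widths below are made-up values, chosen only to illustrate the stated ordering (broadest scattering in red, narrowest in blue).

```python
# Sketch of wavelength-dependent texture-space diffusion: each channel
# of the light map is blurred with a different Gaussian width.

import math

def gaussian_kernel(sigma, radius=3):
    w = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(w)
    return [x / s for x in w]                  # normalized weights

def blur_row(row, kernel):
    r = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for k, wk in enumerate(kernel):
            j = min(max(i + k - r, 0), len(row) - 1)   # clamp at edges
            acc += wk * row[j]
        out.append(acc)
    return out

# One scanline of a light map with a single bright texel per channel.
red   = blur_row([0, 0, 0, 1.0, 0, 0, 0], gaussian_kernel(2.0))
green = blur_row([0, 0, 0, 1.0, 0, 0, 0], gaussian_kernel(1.0))
blue  = blur_row([0, 0, 0, 1.0, 0, 0, 0], gaussian_kernel(0.5))
```

Because the red kernel is widest, the red channel's peak is attenuated the most and its light spreads the furthest, matching the behavior of light in skin.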
A major benefit of this method is its independence of screen resolution; shading is performed only once per texel in<br />
the texture map, rather than for every pixel on the object. An obvious requirement is thus that the object have a good<br />
UV mapping, in that each point on the texture must map to only one point of the object. Additionally, the use of<br />
texture space diffusion causes implicit soft shadows, alleviating one of the more unrealistic aspects of standard<br />
shadow mapping.<br />
References<br />
[1] Krishnaswamy, A.; Baranoski, G.V.G. (2004). "A Biophysically-based Spectral Model of Light Interaction with Human Skin" (http://eg04.inrialpes.fr/Programme/Papers/PDF/paper1189.pdf). Computer Graphics Forum (Blackwell Publishing) 23 (3): 331. doi:10.1111/j.1467-8659.2004.00764.x.<br />
[2] Green, Simon (2004). "Real-time Approximations to Subsurface Scattering". GPU Gems (Addison-Wesley Professional): 263–278.<br />
[3] Nagy, Z.; Klein, R. (2003). "Depth-Peeling for Texture-based Volume Rendering" (http://cg.cs.uni-bonn.de/docs/publications/2003/nagy-2003-depth.pdf). 11th Pacific Conference on Computer Graphics and Applications: 429.<br />
[4] Borshukov, G.; Lewis, J. P. (2005). "Realistic human face rendering for "The Matrix Reloaded"" (http://www.virtualcinematography.org/publications/acrobat/Face-s2003.pdf). Computer Graphics (ACM Press).<br />
[5] d'Eon, E. (2007). "Advanced Skin Rendering" (http://developer.download.nvidia.com/presentations/2007/gdc/Advanced_Skin.pdf). GDC 2007.<br />
External links<br />
• Henrik Wann Jensen's subsurface scattering website (http://graphics.ucsd.edu/~henrik/images/subsurf.html)<br />
• An academic paper by Jensen on modeling subsurface scattering (http://graphics.ucsd.edu/~henrik/papers/bssrdf/)<br />
• Maya Tutorial - Subsurface Scattering: Using the Misss_Fast_Simple_Maya shader (http://www.highend3d.com/maya/tutorials/rendering_lighting/shaders/135.html)<br />
• 3d Studio Max Tutorial - The definitive guide to using subsurface scattering in 3dsMax (http://www.mrbluesummers.com/3510/3d-tutorials/3dsmax-mental-ray-sub-surface-scattering-guide/)<br />
Surface caching<br />
Surface caching is a computer <strong>graphics</strong> technique pioneered by John Carmack, first used in the computer game<br />
Quake to apply lightmaps to level geometry. Carmack's technique was to combine lighting information with surface<br />
textures in texture-space when primitives became visible (at the appropriate mipmap level), exploiting temporal<br />
coherence for those calculations. As hardware capable of blended multi-texture rendering (and later pixel shaders)<br />
became more commonplace, the technique became less common, being replaced with screenspace combination of<br />
lightmaps in rendering hardware.<br />
Surface caching contributed greatly to the visual quality of Quake's software-rasterized 3D engine on Pentium microprocessors, which lacked dedicated graphics instructions.<br />
Surface caching could be considered a precursor to the more recent megatexture technique, in which lighting, surface decals and other procedural texture effects are combined for rich visuals devoid of unnatural repeating artefacts.<br />
External links<br />
• Quake's Lighting Model: Surface Caching [1] - an in-depth explanation by Michael Abrash<br />
References<br />
[1] http://www.bluesnews.com/abrash/chap68.shtml<br />
Surface normal<br />
A surface normal, or simply normal, to a flat surface is a vector that<br />
is perpendicular to that surface. A normal to a non-flat surface at a<br />
point P on the surface is a vector perpendicular to the tangent plane to<br />
that surface at P. The word "normal" is also used as an adjective: a line<br />
normal to a plane, the normal component of a force, the normal<br />
vector, etc. The concept of normality generalizes to orthogonality.<br />
In the two-dimensional case, a normal line perpendicularly intersects<br />
the tangent line to a curve at a given point.<br />
The normal is often used in computer <strong>graphics</strong> to determine a surface's<br />
orientation toward a light source for flat shading, or the orientation of<br />
each of the corners (vertices) to mimic a curved surface with Phong<br />
shading.<br />
Calculating a surface normal<br />
A polygon and two of its normal vectors<br />
A normal to a surface at a point is the same as a normal to the tangent plane to that surface at that point.<br />
For a convex polygon (such as a triangle), a surface normal can be calculated as the vector cross product of two (non-parallel) edges of the polygon.<br />
For a plane given by the equation ax + by + cz + d = 0, the vector (a, b, c) is a normal.<br />
For a plane given by the parametric equation r = a + sb + tc, i.e., a is a point on the plane and b and c are (non-parallel) vectors lying on the plane, the normal to the plane is a vector normal to both b and c, which can be found as the cross product n = b × c.<br />
For a hyperplane in n+1 dimensions, given by the parametric equation r = a0 + α1a1 + ... + αnan, where a0 is a point on the hyperplane and ai for i = 1, ..., n are non-parallel vectors lying on the hyperplane, a normal to the hyperplane is any vector in the null space of the matrix A whose rows are the vectors ai. That is, any vector orthogonal to all in-plane vectors is by definition a surface normal.<br />
If a (possibly non-flat) surface S is parameterized by a system of curvilinear coordinates x(s, t), with s and t real variables, then a normal is given by the cross product of the partial derivatives: n = ∂x/∂s × ∂x/∂t.<br />
If a surface S is given implicitly as the set of points (x, y, z) satisfying F(x, y, z) = 0, then a normal at a point (x, y, z) on the surface is given by the gradient n = ∇F(x, y, z), since the gradient at any point is perpendicular to the level set, and S (the surface) is a level set of F.<br />
For a surface S given explicitly as a function of the independent variables x, y (e.g., z = f(x, y)), its normal can be found in at least two equivalent ways. The first one is obtaining its implicit form F(x, y, z) = z − f(x, y) = 0, from which the normal follows readily as the gradient n = ∇F(x, y, z) = (−∂f/∂x, −∂f/∂y, 1). (Notice that the implicit form could be defined alternatively as F(x, y, z) = f(x, y) − z; these two forms correspond to the interpretation of the surface being oriented upwards or downwards, respectively, as a consequence of the difference in the sign of the partial derivative ∂F/∂z.) The second way of obtaining the normal follows directly from the gradient of the explicit form, ∇f = (∂f/∂x, ∂f/∂y); by inspection, n = k̂ − ∇f, where k̂ is the upward unit vector.<br />
If a surface does not have a tangent plane at a point, it does not have a normal at that point either. For example, a<br />
cone does not have a normal at its tip nor does it have a normal along the edge of its base. However, the normal to<br />
the cone is defined almost everywhere. In general, it is possible to define a normal almost everywhere for a surface<br />
that is Lipschitz continuous.<br />
Hypersurfaces in n-dimensional space<br />
The definition of a normal to a surface in three-dimensional space can be extended to n-dimensional hypersurfaces in an (n+1)-dimensional space. A hypersurface may be locally defined implicitly as the set of points (x1, ..., xn+1) satisfying an equation F(x1, ..., xn+1) = 0, where F is a given scalar function. If F is continuously differentiable, then the hypersurface obtained is a differentiable manifold, and its hypersurface normal can be obtained from the gradient of F, in the case it is not null, by the formula n = ∇F / ‖∇F‖.<br />
Uniqueness of the normal<br />
A normal to a surface does not have a<br />
unique direction; the vector pointing in the<br />
opposite direction of a surface normal is<br />
also a surface normal. For a surface which is<br />
the topological boundary of a set in three<br />
dimensions, one can distinguish between the<br />
inward-pointing normal and the outward-pointing normal, which can help define the normal in a unique way. For an oriented surface, the surface normal is usually determined by the right-hand rule. If the normal is constructed as the cross product of tangent vectors (as described in the text above), it is a pseudovector.<br />
A vector field of normals to a surface<br />
Transforming normals<br />
When applying a transform to a surface, it is sometimes convenient to derive normals for the resulting surface from the original normals. Let M be the upper 3×3 part of the transformation (the translation part of a transformation does not apply to normal or tangent vectors), let t be a vector on the tangent plane, and let n be the original normal, so that n · t = 0. The transformed tangent is t′ = Mt, and we want a transformed normal n′ with n′ · t′ = 0. Choosing n′ = (M⁻¹)ᵀn works, since n′ᵀt′ = nᵀM⁻¹Mt = nᵀt = 0. So use the inverse transpose of the linear transformation (the upper 3×3 matrix) when transforming surface normals.<br />
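The inverse-transpose rule can be demonstrated with a small sketch. A shear matrix is used because it makes the failure of the naive approach (transforming the normal by the matrix itself) easy to see.

```python
# Sketch of normal transformation: inverse transpose vs. naive approach,
# using a shear that maps x' = x + y.

def mat_vec(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def inverse(m):
    # 3x3 inverse via the adjugate; fine for a well-conditioned matrix.
    c = [[m[(i+1) % 3][(j+1) % 3] * m[(i+2) % 3][(j+2) % 3] -
          m[(i+1) % 3][(j+2) % 3] * m[(i+2) % 3][(j+1) % 3]
          for j in range(3)] for i in range(3)]
    det = sum(m[0][j] * c[0][j] for j in range(3))
    return [[c[j][i] / det for j in range(3)] for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

shear = [[1.0, 1.0, 0.0],     # x' = x + y
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0]]

t = (1.0, 0.0, 0.0)           # tangent of the plane y = 0
n = (0.0, 1.0, 0.0)           # its normal

t2 = mat_vec(shear, t)                          # transformed tangent
n2 = mat_vec(transpose(inverse(shear)), n)      # inverse-transpose rule
```

Here the plane y = 0 maps to itself under the shear, and the inverse transpose correctly leaves the normal at (0, 1, 0), whereas applying the shear to the normal directly yields (1, 1, 0), which is no longer perpendicular to the surface.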
Uses<br />
• Surface normals are essential in defining surface integrals of vector fields.<br />
• Surface normals are commonly used in <strong>3D</strong> computer <strong>graphics</strong> for lighting calculations; see Lambert's cosine law.<br />
• Surface normals are often adjusted in <strong>3D</strong> computer <strong>graphics</strong> by normal mapping.<br />
• Render layers containing surface normal information may be used in Digital compositing to change the apparent<br />
lighting of rendered elements.
Normal in geometric optics<br />
The normal is an imaginary line perpendicular to the surface [1] of an optical medium. The word normal is used here in the mathematical sense, meaning perpendicular. In reflection of light, the angle of incidence is the angle between the normal and the incident ray, and the angle of reflection is the angle between the normal and the reflected ray. Likewise, the normal force on a body resting on a surface is perpendicular to that surface.<br />
References<br />
[1] "The Law of Reflection" (http://www.glenbrook.k12.il.us/gbssci/phys/Class/refln/u13l1c.html). The Physics Classroom Tutorial. Retrieved 2008-03-31.<br />
External links<br />
• An explanation of normal vectors (http://msdn.microsoft.com/en-us/library/bb324491(VS.85).aspx) from Microsoft's MSDN<br />
• Clear pseudocode for calculating a surface normal (http://www.opengl.org/wiki/Calculating_a_Surface_Normal) from either a triangle or polygon.<br />
Texel<br />
A texel, or texture element (also texture pixel), is the fundamental unit<br />
of texture space, [1] used in computer <strong>graphics</strong>. Textures are represented<br />
by arrays of texels, just as pictures are represented by arrays of pixels.<br />
Texels can also be described by image regions that are obtained<br />
through a simple procedure such as thresholding. Voronoi tessellation<br />
can be used to define their spatial relationships. This means that a<br />
division is made at the half-way point between the centroid of each<br />
texel and the centroids of every surrounding texel for the entire texture.<br />
The result is that each texel centroid will have a Voronoi polygon<br />
surrounding it. This polygon region consists of all points that are closer<br />
to its texel centroid than any other centroid. [2]<br />
Rendering With Texels<br />
When texturing a <strong>3D</strong> surface (a process known as texture mapping) the<br />
renderer maps texels to appropriate pixels in the output picture. On<br />
modern computers, this operation is accomplished on the <strong>graphics</strong><br />
processing unit.<br />
The texturing process starts with a location in space. The location can be in world space, but typically it is in model space, so that the texture moves with the model. A projector function is applied to the location to change it from a three-element vector to a two-element vector (uv) with values ranging from zero to one. [3]<br />
Voronoi polygons for a group of texels.<br />
Two different projector functions.<br />
These values are multiplied by the resolution of the texture to obtain the location of the texel. When a texel is<br />
requested that is not on an integer position, texture filtering is applied.<br />
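As a concrete sketch of this lookup (function names are illustrative, not from any particular API), the uv pair is scaled by the texture resolution and truncated to a texel index; a real renderer would apply texture filtering rather than simply truncating:

```python
def uv_to_texel(u, v, width, height):
    # Scale normalized (u, v) in [0, 1] by the texture resolution.
    # Non-integer results would normally be resolved by texture
    # filtering; here we truncate to the containing texel.
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return x, y

print(uv_to_texel(0.5, 0.25, 256, 256))  # -> (128, 64)
```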
Clamping & Wrapping<br />
When a texel is requested that is outside of the texture, one of two techniques is used: clamping or wrapping.<br />
Clamping limits the texel to the texture size, moving it to the nearest edge if it falls outside the texture.<br />
Wrapping moves the texel in increments of the texture's size to bring it back into the texture. Wrapping causes a<br />
texture to be repeated; clamping causes it to be in one spot only.<br />
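These two techniques can be sketched as follows (a minimal illustration; graphics APIs expose them as texture addressing modes rather than as user code):

```python
def clamp(i, size):
    # Clamp: move an out-of-range index to the nearest edge texel.
    return max(0, min(i, size - 1))

def wrap(i, size):
    # Wrap: shift by whole multiples of the texture size,
    # which repeats the texture.
    return i % size

print(clamp(300, 256), wrap(300, 256))  # -> 255 44
```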
References<br />
[1] Andrew Glassner, An Introduction to Ray Tracing, San Francisco: Morgan–Kaufmann, 1989<br />
[2] Linda G. Shapiro and George C. Stockman, Computer Vision, Upper Saddle River: Prentice–Hall, 2001<br />
[3] Tomas Akenine-Moller, Eric Haines, and Naty Hoffman, Real-Time Rendering, Wellesley: A K Peters, 2008<br />
Texture atlas<br />
In realtime computer <strong>graphics</strong>, a texture atlas is a large image, or "atlas" which contains many smaller sub-images,<br />
each of which is a texture for some part of a <strong>3D</strong> object. The sub-textures can be rendered by modifying the texture<br />
coordinates of the object's uvmap on the atlas, essentially telling it which part of the image its texture is in. In an<br />
application where many small textures are used frequently, it is often more efficient to store the textures in a texture<br />
atlas, which is treated as a unit by the graphics hardware. In particular, because binding one large texture once<br />
causes fewer rendering state changes than binding many smaller textures as they are drawn, rendering can be faster.<br />
For example, a tile-based game would benefit greatly in performance from a texture atlas.<br />
Atlases can consist of uniformly-sized sub-textures, or they can consist of textures of varying sizes (usually restricted<br />
to powers of two). In the latter case, the program must usually arrange the textures in an efficient manner before<br />
sending the textures to hardware. Manual arrangement of texture atlases is possible, and sometimes preferable, but<br />
can be tedious. If using mipmaps, care must be taken to arrange the textures in such a manner as to avoid sub-images<br />
being "polluted" by their neighbours.<br />
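The coordinate remapping described above can be sketched as follows; the sub-texture rectangle and atlas dimensions are hypothetical parameters:

```python
def atlas_uv(u, v, sub_x, sub_y, sub_w, sub_h, atlas_w, atlas_h):
    # Remap a sub-texture's local (u, v) in [0, 1] into the atlas's
    # normalized coordinate space, given the sub-texture's pixel
    # rectangle inside the atlas.
    return ((sub_x + u * sub_w) / atlas_w,
            (sub_y + v * sub_h) / atlas_h)

# A 64x64 tile located at (64, 0) inside a 256x256 atlas:
# atlas_uv(0.5, 0.5, 64, 0, 64, 64, 256, 256) -> (0.375, 0.125)
```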
External links<br />
• Texture Atlas Whitepaper [1] - A whitepaper by NVIDIA which explains the technique.<br />
• Texture Atlas Tools [2] - Tools to create texture atlases semi-manually.<br />
• Practical Texture Atlases [3] - A guide on using a texture atlas (and the pros and cons).<br />
References<br />
[1] http://download.nvidia.com/developer/NVTextureSuite/Atlas_Tools/Texture_Atlas_Whitepaper.pdf<br />
[2] http://developer.nvidia.com/content/texture-atlas-tools<br />
[3] http://www.gamasutra.com/features/20060126/ivanov_01.shtml
Texture filtering<br />
In computer <strong>graphics</strong>, texture filtering or texture smoothing is the method used to determine the texture color for a<br />
texture mapped pixel, using the colors of nearby texels (pixels of the texture). Mathematically, texture filtering is a<br />
type of anti-aliasing, but it filters out high frequencies from the texture fill whereas other AA techniques generally<br />
focus on visual edges. Put simply, it allows a texture to be applied to surfaces of many different shapes, sizes and<br />
angles while minimizing blurriness, shimmering and blocking.<br />
There are many methods of texture filtering, which make different trade-offs between computational complexity and<br />
image quality.<br />
The need for filtering<br />
During the texture mapping process, a 'texture lookup' takes place to find out where on the texture each pixel center<br />
falls. Since the textured surface may be at an arbitrary distance and orientation relative to the viewer, one pixel does<br />
not usually correspond directly to one texel. Some form of filtering has to be applied to determine the best color for<br />
the pixel. Insufficient or incorrect filtering will show up in the image as artifacts (errors in the image), such as<br />
'blockiness', jaggies, or shimmering.<br />
There can be different types of correspondence between a pixel and the texel/texels it represents on the screen. These<br />
depend on the position of the textured surface relative to the viewer, and different forms of filtering are needed in<br />
each case. Given a square texture mapped on to a square surface in the world, at some viewing distance the size of<br />
one screen pixel is exactly the same as one texel. Closer than that, the texels are larger than screen pixels, and need<br />
to be scaled up appropriately - a process known as texture magnification. Farther away, each texel is smaller than a<br />
pixel, and so one pixel covers multiple texels. In this case an appropriate color has to be picked based on the covered<br />
texels, via texture minification. Graphics APIs such as OpenGL allow the programmer to set different choices for<br />
minification and magnification filters.<br />
Note that even in the case where the pixels and texels are exactly the same size, one pixel will not necessarily match<br />
up exactly to one texel - it may be misaligned, and cover parts of up to four neighboring texels. Hence some form of<br />
filtering is still required.<br />
Mipmapping<br />
Mipmapping is a standard technique used to save some of the filtering work needed during texture minification.<br />
During texture magnification, the number of texels that need to be looked up for any pixel is always four or fewer;<br />
during minification, however, as the textured polygon moves farther away potentially the entire texture might fall<br />
into a single pixel. This would necessitate reading all of its texels and combining their values to correctly determine<br />
the pixel color, a prohibitively expensive operation. Mipmapping avoids this by prefiltering the texture and storing it<br />
in smaller sizes down to a single pixel. As the textured surface moves farther away, the texture being applied<br />
switches to the prefiltered smaller size. Different sizes of the mipmap are referred to as 'levels', with Level 0 being<br />
the largest size (used closest to the viewer), and increasing levels used at increasing distances.
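A minimal sketch of building such a pyramid, assuming a square power-of-two grayscale texture and a simple 2×2 box filter (real prefilters may be more elaborate):

```python
def build_mipmaps(level0):
    # level0: square 2D list of grayscale values, power-of-two size.
    levels = [level0]
    size = len(level0)
    while size > 1:
        prev, size = levels[-1], size // 2
        # Each texel of the new level averages the 2x2 block it
        # covers in the previous level (a box filter).
        levels.append([[(prev[2*y][2*x] + prev[2*y][2*x+1] +
                         prev[2*y+1][2*x] + prev[2*y+1][2*x+1]) / 4
                        for x in range(size)] for y in range(size)])
    return levels
```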
Filtering methods<br />
This section lists the most common texture filtering methods, in increasing order of computational cost and image<br />
quality.<br />
Nearest-neighbor interpolation<br />
Nearest-neighbor interpolation is the fastest and crudest filtering method — it simply uses the color of the texel<br />
closest to the pixel center for the pixel color. While fast, this results in a large number of artifacts - texture<br />
'blockiness' during magnification, and aliasing and shimmering during minification.<br />
Nearest-neighbor with mipmapping<br />
This method still uses nearest neighbor interpolation, but adds mipmapping — first the nearest mipmap level is<br />
chosen according to distance, then the nearest texel center is sampled to get the pixel color. This reduces the aliasing<br />
and shimmering significantly, but does not help with blockiness.<br />
Bilinear filtering<br />
Bilinear filtering is the next step up. In this method the four nearest texels to the pixel center are sampled (at the<br />
closest mipmap level), and their colors are combined by weighted average according to distance. This removes the<br />
'blockiness' seen during magnification, as there is now a smooth gradient of color change from one texel to the next,<br />
instead of an abrupt jump as the pixel center crosses the texel boundary. Bilinear filtering is almost invariably used<br />
with mipmapping; though it can be used without, it would then suffer the same aliasing and shimmering problems<br />
during minification as nearest-neighbor filtering.<br />
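A minimal sketch of the weighted average at a single mip level, assuming a grayscale texture and texel centers at integer coordinates (a common simplifying convention):

```python
def bilinear_sample(tex, u, v):
    # tex: 2D list indexed [y][x]; (u, v) in texel space.
    h, w = len(tex), len(tex[0])
    x0, y0 = int(u), int(v)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = u - x0, v - y0
    # Weighted average of the four nearest texels.
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

Sampling exactly between four texels, e.g. `bilinear_sample([[0, 10], [20, 30]], 0.5, 0.5)`, yields their plain average, 15.0.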
Trilinear filtering<br />
Trilinear filtering is a remedy to a common artifact seen in mipmapped bilinearly filtered images: an abrupt and very<br />
noticeable change in quality at boundaries where the renderer switches from one mipmap level to the next. Trilinear<br />
filtering solves this by doing a texture lookup and bilinear filtering on the two closest mipmap levels (one higher and<br />
one lower quality), and then linearly interpolating the results. This results in a smooth degradation of texture quality<br />
as distance from the viewer increases, rather than a series of sudden drops. Of course, closer than Level 0 there is<br />
only one mipmap level available, and the algorithm reverts to bilinear filtering.<br />
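As an illustrative sketch (not any particular hardware's implementation), trilinear filtering is a linear blend of bilinear samples taken from the two nearest mip levels; the mapping of normalized (u, v) into each level's texel space and the fractional level-of-detail input are simplifying assumptions here:

```python
def trilinear_sample(mips, u, v, lod):
    # mips: list of square 2D mip levels (level 0 largest);
    # lod: fractional level of detail.
    def bilinear(tex, x, y):
        h, w = len(tex), len(tex[0])
        x0, y0 = int(x), int(y)
        x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
        fx, fy = x - x0, y - y0
        top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
        bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
        return top * (1 - fy) + bot * fy

    lo = min(int(lod), len(mips) - 1)
    hi = min(lo + 1, len(mips) - 1)
    f = lod - int(lod)

    def sample(level):
        # Scale normalized (u, v) into this level's texel space.
        tex = mips[level]
        return bilinear(tex, u * (len(tex[0]) - 1), v * (len(tex) - 1))

    # Linearly interpolate between the two bilinear results.
    return sample(lo) * (1 - f) + sample(hi) * f
```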
Anisotropic filtering<br />
Anisotropic filtering is the highest quality filtering available in current consumer <strong>3D</strong> <strong>graphics</strong> cards. Simpler,<br />
"isotropic" techniques use only square mipmaps which are then interpolated using bi– or trilinear filtering. (Isotropic<br />
means same in all directions, and hence is used to describe a system in which all the maps are squares rather than<br />
rectangles or other quadrilaterals.)<br />
When a surface is at a high angle relative to the camera, the fill area for a texture will not be approximately square.<br />
Consider the common case of a floor in a game: the fill area is far wider than it is tall. In this case, none of the square<br />
maps are a good fit. The result is blurriness and/or shimmering, depending on how the fit is chosen. Anisotropic<br />
filtering corrects this by sampling the texture as a non-square shape. Some implementations simply use rectangles<br />
instead of squares, which are a much better fit than the original square and offer a good approximation.<br />
However, going back to the example of the floor, the fill area is not just compressed vertically, there are also more<br />
pixels across the near edge than the far edge. Consequently, more advanced implementations will use trapezoidal<br />
maps for an even better approximation (at the expense of greater processing).<br />
In either rectangular or trapezoidal implementations, the filtering produces a map, which is then bi– or trilinearly<br />
filtered, using the same filtering algorithms used to filter the square maps of traditional mipmapping.
Texture mapping<br />
Texture mapping is a method for adding detail, surface texture (a<br />
bitmap or raster image), or color to a computer-generated graphic or<br />
<strong>3D</strong> model. Its application to <strong>3D</strong> <strong>graphics</strong> was pioneered by Dr Edwin<br />
Catmull in his Ph.D. thesis of 1974.<br />
Texture mapping<br />
A texture map is applied (mapped) to the<br />
surface of a shape or polygon. [1] This process is<br />
akin to applying patterned paper to a plain white<br />
box. Every vertex in a polygon is assigned a<br />
texture coordinate (which in the 2d case is also<br />
known as a UV coordinate) either via explicit<br />
assignment or by procedural definition. Image<br />
sampling locations are then interpolated across<br />
the face of a polygon to produce a visual result<br />
that seems to have more richness than could<br />
otherwise be achieved with a limited number of<br />
polygons.<br />
[Figures: 1 = 3D model without textures; 2 = 3D model with textures. Examples of multitexturing; 1: Untextured<br />
sphere, 2: Texture and bump maps, 3: Texture map only, 4: Opacity and texture maps.]<br />
Multitexturing is the use of more than one texture at a time on a polygon. [2] For instance, a light map texture may be used to light a surface as an<br />
alternative to recalculating that lighting every time the surface is rendered. Another multitexture technique is bump<br />
mapping, which allows a texture to directly control the facing direction of a surface for the purposes of its lighting<br />
calculations; it can give a very good appearance of a complex surface, such as tree bark or rough concrete, that takes<br />
on lighting detail in addition to the usual detailed coloring. Bump mapping has become popular in recent video<br />
games as <strong>graphics</strong> hardware has become powerful enough to accommodate it in real-time.<br />
The way the resulting pixels on the screen are calculated from the texels (texture pixels) is governed by texture<br />
filtering. The fastest method is to use the nearest-neighbour interpolation, but bilinear interpolation or trilinear<br />
interpolation between mipmaps are two commonly used alternatives which reduce aliasing or jaggies. In the event of<br />
a texture coordinate being outside the texture, it is either clamped or wrapped.
Perspective correctness<br />
Texture coordinates are specified at<br />
each vertex of a given triangle, and<br />
these coordinates are interpolated<br />
using an extended Bresenham's line<br />
algorithm. If these texture coordinates<br />
are linearly interpolated across the<br />
screen, the result is affine texture<br />
mapping. This is a fast calculation, but<br />
there can be a noticeable discontinuity<br />
between adjacent triangles when these<br />
triangles are at an angle to the plane of<br />
the screen (see figure at right – textures (the checker boxes) appear bent).<br />
Because affine texture mapping does not take into account the depth information about a<br />
polygon's vertices, where the polygon is not perpendicular to the viewer it produces a<br />
noticeable defect.<br />
Perspective correct texturing accounts for the vertices' positions in <strong>3D</strong> space, rather than simply interpolating a 2D<br />
triangle. This achieves the correct visual effect, but it is slower to calculate. Instead of interpolating the texture<br />
coordinates directly, the coordinates are divided by their depth (relative to the viewer), and the reciprocal of the<br />
depth value is also interpolated and used to recover the perspective-correct coordinate. This correction makes it so<br />
that in parts of the polygon that are closer to the viewer the difference from pixel to pixel between texture<br />
coordinates is smaller (stretching the texture wider), and in parts that are farther away this difference is larger<br />
(compressing the texture).<br />
Affine texture mapping directly interpolates a texture coordinate u_α between two endpoints u_0 and u_1:<br />
u_α = (1 − α) u_0 + α u_1, where 0 ≤ α ≤ 1.<br />
Perspective correct mapping interpolates after dividing by depth z, then uses its interpolated reciprocal to recover the<br />
correct coordinate:<br />
u_α = ((1 − α) u_0/z_0 + α u_1/z_1) / ((1 − α)/z_0 + α/z_1)<br />
All modern <strong>3D</strong> <strong>graphics</strong> hardware implements perspective correct texturing.<br />
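The two interpolation schemes can be compared directly in a short sketch:

```python
def affine_interp(u0, u1, a):
    # Linear interpolation in screen space (ignores depth).
    return (1 - a) * u0 + a * u1

def perspective_interp(u0, z0, u1, z1, a):
    # Interpolate u/z and 1/z linearly across the primitive,
    # then divide to recover the perspective-correct u.
    num = (1 - a) * u0 / z0 + a * u1 / z1
    den = (1 - a) / z0 + a / z1
    return num / den
```

For u0 = 0, u1 = 1, z0 = 1, z1 = 4 and a = 0.5, affine interpolation gives 0.5 while perspective-correct interpolation gives 0.2: the half of the primitive nearer the viewer covers a larger range of pixels per unit of texture, stretching the texture there.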
Classic texture mappers generally did only simple mapping with at<br />
most one lighting effect, and the perspective correctness was about 16<br />
times more expensive. To achieve two goals - faster arithmetic results,<br />
and keeping the arithmetic mill busy at all times - every triangle is<br />
further subdivided into groups of about 16 pixels. For perspective<br />
texture mapping without hardware support, a triangle is broken down<br />
into smaller triangles for rendering, which improves details in<br />
non-architectural applications. Software renderers generally preferred screen subdivision because it has less<br />
overhead. Additionally, they try to do linear interpolation along a line of pixels to simplify the set-up (compared to<br />
2D affine interpolation) and thus again reduce the overhead (also, affine texture-mapping does not fit into the low<br />
number of registers of the x86 CPU; the 68000 or any RISC is much better suited).<br />
[Figure: Doom renders vertical spans (walls) with affine texture mapping.]<br />
For instance, Doom restricted the world to vertical walls and horizontal floors/ceilings. This meant the walls would be a
constant distance along a vertical line and the floors/ceilings<br />
would be a constant distance along a horizontal line. A fast affine<br />
mapping could be used along those lines because it would be<br />
correct. A different approach was taken for Quake, which would<br />
calculate perspective correct coordinates only once every 16 pixels<br />
of a scanline and linearly interpolate between them, effectively<br />
running at the speed of linear interpolation because the perspective<br />
correct calculation runs in parallel on the co-processor. [3] The<br />
polygons are rendered independently, hence it may be possible to<br />
switch between spans and columns or diagonal directions<br />
depending on the orientation of the polygon normal to achieve a<br />
more constant z, but the effort seems not to be worth it.<br />
Another technique was subdividing the polygons into smaller<br />
polygons, like triangles in 3d-space or squares in screen space, and<br />
using an affine mapping on them. The distortion of affine mapping becomes much less noticeable on smaller<br />
polygons.<br />
[Figure: Screen space subdivision techniques. Top left: Quake-like, top right: bilinear, bottom left: const-z.]<br />
Yet another technique was approximating the perspective with a<br />
faster calculation, such as a polynomial. Still another technique uses 1/z value of the last two drawn pixels to linearly<br />
extrapolate the next value. The division is then done starting from those values so that only a small remainder has to<br />
be divided, [4] but the amount of bookkeeping makes this method too slow on most systems. Finally, some<br />
programmers extended the constant distance trick used for Doom by finding the line of constant distance for<br />
arbitrary polygons and rendering along it.<br />
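The Quake-style approach of computing the correct coordinate only at intervals can be sketched as follows (a hypothetical simplification; the real renderer overlapped the per-interval divisions with other work on the FPU):

```python
def quake_span(u0, z0, u1, z1, width, step=16):
    # Compute a perspective-correct u only at every `step` pixels of a
    # scanline and interpolate linearly in between.
    def correct(a):
        num = (1 - a) * u0 / z0 + a * u1 / z1
        den = (1 - a) / z0 + a / z1
        return num / den

    span = []
    x = 0
    while x < width:
        x2 = min(x + step, width)
        ua, ub = correct(x / width), correct(x2 / width)
        n = x2 - x
        # Cheap linear interpolation between the two correct samples.
        span.extend(ua + (ub - ua) * i / n for i in range(n))
        x = x2
    return span
```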
Resolution<br />
The resolution of a texture map is usually given as a width in pixels, assuming the map is square. For example, a 1K<br />
texture has a resolution of 1024 x 1024, or 1,048,576 pixels.<br />
Graphics cards cannot render texture maps larger than a hardware-dependent maximum size, which may also be<br />
limited by the amount of available RAM.<br />
References<br />
[1] Jon Radoff, Anatomy of an MMORPG, http://radoff.com/blog/2008/08/22/anatomy-of-an-mmorpg/<br />
[2] Blythe, David. Advanced Graphics Programming Techniques Using OpenGL (http://www.opengl.org/resources/code/samples/sig99/<br />
advanced99/notes/notes.html). Siggraph 1999. (see: Multitexture (http://www.opengl.org/resources/code/samples/sig99/advanced99/<br />
notes/node60.html))<br />
[3] Abrash, Michael. Michael Abrash's Graphics Programming Black Book Special Edition. The Coriolis Group, Scottsdale, Arizona, 1997. ISBN<br />
1-57610-174-6 (PDF (http://www.gamedev.net/reference/articles/article1698.asp)) (Chapter 70, pg. 1282)<br />
[4] US 5739818 (http://v3.espacenet.com/textdoc?DB=EPODOC&IDX=US5739818), Spackman, John Neil, "Apparatus and method for<br />
performing perspectively correct interpolation in computer graphics", issued 1998-04-14
External links<br />
• Perspective Corrected Texture Mapping (http://www.gamedev.net/reference/articles/article331.asp) at<br />
GameDev.net<br />
• Introduction into texture mapping using C and SDL (http://www.happy-werner.de/howtos/isw/parts/3d/<br />
chapter_2/chapter_2_texture_mapping.pdf)<br />
• Programming a textured terrain (http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series4/<br />
Textured_terrain.php) using XNA/DirectX, from www.riemers.net<br />
• Perspective correct texturing (http://www.gamers.org/dEngine/quake/papers/checker_texmap.html)<br />
• Time Texturing (http://www.fawzma.com/time-texturing-texture-mapping-with-bezier-lines/) Texture<br />
mapping with Bézier lines<br />
• Polynomial Texture Mapping (http://www.hp.com/idealab/us/en/relight.html) Interactive Relighting for<br />
Photos<br />
• 3 Métodos de interpolación a partir de puntos (in Spanish) (http://www.um.es/geograf/sigmur/temariohtml/<br />
node43_ct.html) Methods that can be used to interpolate a texture knowing the texture coordinates at the vertices<br />
of a polygon<br />
Texture synthesis<br />
Texture synthesis is the process of algorithmically constructing a large digital image from a small digital sample<br />
image by taking advantage of its structural content. It is an object of research in computer graphics and is used in<br />
many fields, among them digital image editing, 3D computer graphics and the post-production of films.<br />
Texture synthesis can be used to fill in holes in images (as in inpainting), create large non-repetitive background<br />
images and expand small pictures. See "SIGGRAPH 2007 course on Example-based Texture Synthesis" [1] for more<br />
details.<br />
Textures<br />
"Texture" is an ambiguous word and in the context of texture synthesis<br />
may have one of the following meanings:<br />
1. In common speech, "texture" is used as a synonym for "surface structure". Texture has been described by five<br />
different properties in the psychology of perception: coarseness, contrast, directionality, line-likeness and<br />
roughness (Tamura).<br />
2. In <strong>3D</strong> computer <strong>graphics</strong>, a texture is a digital image applied to the<br />
surface of a three-dimensional model by texture mapping to give the<br />
model a more realistic appearance. Often, the image is a photograph<br />
of a "real" texture, such as wood grain.<br />
Maple burl, an example of a texture.<br />
3. In image processing, every digital image composed of repeated elements is called a "texture." For example, see<br />
the images below.<br />
Texture can be arranged along a spectrum going from stochastic to regular:<br />
• Stochastic textures. Texture images of stochastic textures look like noise: colour dots that are randomly scattered<br />
over the image, barely specified by the attributes minimum and maximum brightness and average colour. Many<br />
textures look like stochastic textures when viewed from a distance. An example of a stochastic texture is<br />
roughcast.
• Structured textures. These textures look like somewhat regular patterns. An example of a structured texture is a<br />
stonewall or a floor tiled with paving stones.<br />
These extremes are connected by a smooth transition, as visualized in the figure below from "Near-regular Texture<br />
Analysis and Manipulation." Yanxi Liu, Wen-Chieh Lin, and James Hays. SIGGRAPH 2004 [2]<br />
Goal<br />
Texture synthesis algorithms are intended to create an output image that meets the following requirements:<br />
• The output should have the size given by the user.<br />
• The output should be as similar as possible to the sample.<br />
• The output should not have visible artifacts such as seams, blocks and misfitting edges.<br />
• The output should not be repetitive, i.e. the same structures should not appear in multiple places in the output image.<br />
Like most algorithms, texture synthesis should be efficient in computation time and in memory use.<br />
Methods<br />
The following methods and algorithms have been researched or developed for texture synthesis:<br />
Tiling<br />
The simplest way to generate a large image from a sample image is to tile it. This means multiple copies of the<br />
sample are simply copied and pasted side by side. The result is rarely satisfactory. Except in rare cases, there will be<br />
visible seams between the tiles and the image will be highly repetitive.<br />
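Tiling reduces to wrapping pixel indices by the sample size:

```python
def tile(sample, out_w, out_h):
    # Repeat the sample side by side; seams appear wherever the
    # sample's opposite edges do not match.
    h, w = len(sample), len(sample[0])
    return [[sample[y % h][x % w] for x in range(out_w)]
            for y in range(out_h)]
```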
Stochastic texture synthesis<br />
Stochastic texture synthesis methods produce an image by randomly choosing colour values for each pixel, only<br />
influenced by basic parameters like minimum brightness, average colour or maximum contrast. These algorithms<br />
perform well with stochastic textures only, otherwise they produce completely unsatisfactory results as they ignore<br />
any kind of structure within the sample image.
Single purpose structured texture synthesis<br />
Algorithms of this family use a fixed procedure to create an output image, i.e. they are limited to a single kind of<br />
structured texture. Thus, these algorithms can only be applied to structured textures, and only to textures with a<br />
very similar structure. For example, a single-purpose algorithm could produce high-quality texture images of<br />
stonewalls; yet, it is very unlikely that the algorithm will produce any viable output if given a sample image that<br />
shows pebbles.<br />
Chaos mosaic<br />
This method, proposed by the Microsoft group for internet <strong>graphics</strong>, is a refined version of tiling and performs the<br />
following three steps:<br />
1. The output image is filled completely by tiling. The result is a repetitive image with visible seams.<br />
2. Randomly selected parts of random size of the sample are copied and pasted randomly onto the output image.<br />
The result is a rather non-repetitive image with visible seams.<br />
3. The output image is filtered to smooth edges.<br />
The result is an acceptable texture image, which is not too repetitive and does not contain too many artifacts. Still,<br />
this method is unsatisfactory because the smoothing in step 3 makes the output image look blurred.<br />
Pixel-based texture synthesis<br />
These methods, such as "Texture synthesis via a noncausal nonparametric multiscale Markov random field." Paget<br />
and Longstaff, IEEE Trans. on Image Processing, 1998 [3] , "Texture Synthesis by Non-parametric Sampling." Efros<br />
and Leung, ICCV, 1999 [4] , "Fast Texture Synthesis using Tree-structured Vector Quantization" Wei and Levoy<br />
SIGGRAPH 2000 [5] and "Image Analogies" Hertzmann et al. SIGGRAPH 2001. [6] are some of the simplest and<br />
most successful general texture synthesis algorithms. They typically synthesize a texture in scan-line order by<br />
finding and copying pixels with the most similar local neighborhood as the synthetic texture. These methods are very<br />
useful for image completion. They can be constrained, as in image analogies, to perform many interesting tasks.<br />
They are typically accelerated with some form of Approximate Nearest Neighbor method since the exhaustive search<br />
for the best pixel is somewhat slow. The synthesis can also be performed in multiresolution, such as "Texture<br />
synthesis via a noncausal nonparametric multiscale Markov random field." Paget and Longstaff, IEEE Trans. on<br />
Image Processing, 1998 [3] .<br />
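A heavily simplified sketch in the spirit of these methods, using a causal neighborhood (pixels already synthesized to the left and above), an exhaustive search, and a toroidal sample; real implementations use larger windowed neighborhoods, multiple resolutions, and approximate nearest-neighbor acceleration:

```python
import random

def synthesize(sample, out_w, out_h, seed=0):
    # Scan-line synthesis: each output pixel copies the sample pixel
    # whose causal neighborhood best matches the output built so far.
    rng = random.Random(seed)
    h, w = len(sample), len(sample[0])
    out = [[None] * out_w for _ in range(out_h)]
    offsets = [(-1, 0), (0, -1), (-1, -1), (-1, 1)]  # causal neighbors
    for y in range(out_h):
        for x in range(out_w):
            best, best_cost = [], None
            for sy in range(h):
                for sx in range(w):
                    # Sum of squared differences over known neighbors.
                    cost = 0
                    for dy, dx in offsets:
                        oy, ox = y + dy, x + dx
                        if (0 <= oy < out_h and 0 <= ox < out_w
                                and out[oy][ox] is not None):
                            sv = sample[(sy + dy) % h][(sx + dx) % w]
                            cost += (out[oy][ox] - sv) ** 2
                    if best_cost is None or cost < best_cost:
                        best, best_cost = [(sy, sx)], cost
                    elif cost == best_cost:
                        best.append((sy, sx))
            sy, sx = rng.choice(best)  # break ties randomly
            out[y][x] = sample[sy][sx]
    return out
```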
Patch-based texture synthesis<br />
Patch-based texture synthesis creates a new texture by copying and stitching together textures at various offsets,<br />
similar to the use of the clone tool to manually synthesize a texture. "Image Quilting." Efros and Freeman.<br />
SIGGRAPH 2001 [7] and "Graphcut Textures: Image and Video Synthesis Using Graph Cuts." Kwatra et al.<br />
SIGGRAPH 2003 [8] are the best known patch-based texture synthesis algorithms. These algorithms tend to be more<br />
effective and faster than pixel-based texture synthesis methods.
Pattern-based texture modeling<br />
In pattern-based modeling [9] a training image consisting of stationary textures is provided. The algorithm performs<br />
stochastic modeling, similar to patch-based texture synthesis, to reproduce the same spatial behavior.<br />
The method works by constructing a pattern database. It then uses multi-dimensional scaling and kernel methods<br />
to cluster the patterns into similar groups. During the simulation, it finds the cluster most similar to the pattern at<br />
hand and then randomly selects a pattern from that cluster to paste onto the output grid. It continues this process<br />
until all the cells have been visited.<br />
Chemistry based<br />
Realistic textures can be generated by simulating complex chemical reactions within fluids, namely<br />
reaction–diffusion systems. It is believed that these systems show behavior qualitatively equivalent to real<br />
processes (morphogenesis) found in nature, such as animal markings (shells, fish, wild cats...).<br />
Implementations<br />
Some texture synthesis implementations exist as plug-ins for the free image editor Gimp:<br />
• Texturize [10]<br />
• Resynthesizer [11]<br />
A pixel-based texture synthesis implementation:<br />
• Parallel Controllable Texture Synthesis [12]<br />
Patch-based texture synthesis using Graphcut:<br />
• KUVA: Graphcut textures [13]
Literature<br />
Several of the earliest and most referenced papers in this field include:<br />
• Popat [14] in 1993 - "Novel cluster-based probability model for texture synthesis, classification, and compression".<br />
• Heeger-Bergen [15] in 1995 - "Pyramid based texture analysis/synthesis".<br />
• Paget-Longstaff [16] in 1998 - "Texture synthesis via a noncausal nonparametric multiscale Markov random field"<br />
• Efros-Leung [17] in 1999 - "Texture Synthesis by Non-parametric Sampling".<br />
• Wei-Levoy [5] in 2000 - "Fast Texture Synthesis using Tree-structured Vector Quantization"<br />
although there was also earlier work on the subject, such as<br />
• Gagalowicz and Song De Ma in 1986, "Model driven synthesis of natural textures for 3-D scenes",<br />
• Lewis in 1984, "Texture synthesis for digital painting".<br />
(The latter algorithm has some similarities to the Chaos Mosaic approach).<br />
The non-parametric sampling approach of Efros-Leung is the first approach that can easily synthesize most types of<br />
texture, and it has inspired hundreds of follow-on papers in computer graphics. Since then, the field of<br />
texture synthesis has rapidly expanded with the introduction of <strong>3D</strong> <strong>graphics</strong> accelerator cards for personal<br />
computers. It turns out, however, that Scott Draves first published the patch-based version of this technique along<br />
with GPL code in 1993 according to Efros [18] .<br />
References<br />
[1] http://www.cs.unc.edu/~kwatra/SIG07_TextureSynthesis/index.htm<br />
[2] http://graphics.cs.cmu.edu/projects/nrt/<br />
[3] http://www.texturesynthesis.com/nonparaMRF.htm<br />
[4] http://graphics.cs.cmu.edu/people/efros/research/EfrosLeung.html<br />
[5] http://graphics.stanford.edu/papers/texture-synthesis-sig00/<br />
[6] http://mrl.nyu.edu/projects/image-analogies/<br />
[7] http://graphics.cs.cmu.edu/people/efros/research/quilting.html<br />
[8] http://www-static.cc.gatech.edu/gvu/perception/projects/graphcuttextures/<br />
[9] Honarkhah, M and Caers, J, 2010, Stochastic Simulation of Patterns Using Distance-Based Pattern Modeling (http://dx.doi.org/10.1007/<br />
s11004-010-9276-7), Mathematical Geosciences, 42: 487-517<br />
[10] http://gimp-texturize.sourceforge.net/<br />
[11] http://www.logarithmic.net/pfh/resynthesizer<br />
[12] http://www-sop.inria.fr/members/Sylvain.Lefebvre/_wiki_/pmwiki.php?n=Main.TSynEx<br />
[13] http://www.131002.net/data/code/kuva/<br />
[14] http://xenia.media.mit.edu/~popat/personal/<br />
[15] http://www.cns.nyu.edu/heegerlab/index.php?page=publications&id=heeger-siggraph95<br />
[16] http://www.texturesynthesis.com/papers/Paget_IP_1998.pdf<br />
[17] http://graphics.cs.cmu.edu/people/efros/research/NPS/efros-iccv99.pdf<br />
[18] http://graphics.cs.cmu.edu/people/efros/research/synthesis.html<br />
External links<br />
• texture synthesis (http://graphics.cs.cmu.edu/people/efros/research/synthesis.html)<br />
• texture synthesis (http://www.cs.utah.edu/~michael/ts/)<br />
• texture movie synthesis (http://www.cs.huji.ac.il/labs/cglab/papers/texsyn/)<br />
• Texture2005 (http://www.macs.hw.ac.uk/texture2005/)<br />
• Near-Regular Texture Synthesis (http://graphics.cs.cmu.edu/projects/nrt/)<br />
• The Texture Lab (http://www.macs.hw.ac.uk/texturelab/)<br />
• Nonparametric Texture Synthesis (http://www.texturesynthesis.com/texture.htm)<br />
• Examples of reaction-diffusion textures (http://www.texrd.com/gallerie/gallerie.html)<br />
• Implementation of Efros & Leung's algorithm with examples (http://rubinsteyn.com/comp_photo/texture/)<br />
• Micro-texture synthesis by phase randomization, with code and online demonstration (http://www.ipol.im/pub/algo/ggm_random_phase_texture_synthesis/)<br />
Tiled rendering<br />
Tiled rendering is the process of subdividing (or tiling) a computer graphics image by a regular grid in image space to exploit local spatial coherence in the scene and/or to facilitate the use of limited hardware rendering resources later in the graphics pipeline. Tiling is also used to create a nonlinear framebuffer layout, so that pixels adjacent in the image are also adjacent in memory. [1] [2]<br />
Major examples of this are:<br />
• PowerVR rendering architecture: the rasterizer operated on one 32×32 tile at a time, rasterizing polygons across multiple pixels in parallel. On early PC versions, tiling was performed in the display driver running on the CPU. In the Dreamcast console, tiling was performed by dedicated hardware. This facilitated deferred rendering: only the visible pixels were texture-mapped, saving shading calculations and texture bandwidth.<br />
• Xbox 360: the GPU contains an embedded 10 MiB framebuffer; this is not sufficient to hold the raster for an<br />
entire 1280×720 image with 4× anti-aliasing, so a tiling solution is superimposed.<br />
• Implementations of Reyes rendering often divide the image into "tile buckets".<br />
• Pixel Planes 5 architecture (1991) [3]<br />
• Microsoft Talisman (1996)<br />
• Dreamcast (1998)<br />
• PSVita [4]<br />
• Intel Larrabee GPU (canceled)<br />
References<br />
[1] Deucher, Alex (2008-05-16). "How Video Cards Work" (http://www.x.org/wiki/Development/Documentation/HowVideoCardsWork). X.Org Foundation. Retrieved 2010-05-27.<br />
[2] Bridgman, John (2009-05-19). "How the X (aka 2D) driver affects 3D performance" (http://jbridgman.livejournal.com/718.html). LiveJournal. Retrieved 2010-05-27.<br />
[3] Mahaney, Jim (1998-06-22). "History" (http://www.cs.unc.edu/~pxfl/history.html). Pixel-Planes. University of North Carolina at Chapel Hill. Retrieved 2008-08-04.<br />
[4] mestour, mestour (2011-07-21). "Develop 2011: PS Vita is the most developer friendly hardware Sony has ever made" (http://3dsforums.com/lounge-2/develop-2011-ps-vita-most-developer-friendly-hardware-sony-has-ever-made-19841/). 3dsforums. Retrieved 2011-07-21.<br />
UV mapping<br />
UV mapping is the 3D modeling process of making a 2D image representation of a 3D model.<br />
This process projects a texture map onto a 3D object. The letters "U" and "V" are used to describe the 2D mesh [1] because "X", "Y" and "Z" are already used to describe the 3D object in model space.<br />
UV texturing permits polygons that make up a 3D object to be painted with color from an image. The image is called a UV texture map, [2] but it is just an ordinary image. The UV mapping process involves assigning pixels in the image to surface mappings on the polygon, usually done "programmatically" by copying a triangle-shaped piece of the image map and pasting it onto a triangle on the object. [3] UV is an alternative to XY: it maps into a texture space rather than into the geometric space of the object, but the rendering computation uses the UV texture coordinates to determine how to paint the three-dimensional surface.<br />
In the example to the right, a sphere is given a checkered texture, first without and then with UV mapping. Without UV mapping, the checkers tile XYZ space and the texture is carved out of the sphere. With UV mapping, the checkers tile UV space and points on the sphere map to this space according to their latitude and longitude.<br />
[Figures: the application of a texture in UV space and its effect in 3D; a checkered sphere, without and with UV mapping (3D checkered or 2D checkered); a representation of the UV mapping of a cube, whose flattened net may then be textured to texture the cube.]<br />
When a model is created as a polygon mesh using a <strong>3D</strong> modeler, UV coordinates can be generated for each vertex in<br />
the mesh. One way is for the <strong>3D</strong> modeler to unfold the triangle mesh at the seams, automatically laying out the<br />
triangles on a flat page. If the mesh is a UV sphere, for example, the modeler might transform it into an<br />
equirectangular projection. Once the model is unwrapped, the artist can paint a texture on each triangle individually,<br />
using the unwrapped mesh as a template. When the scene is rendered, each triangle will map to the appropriate<br />
texture from the "decal sheet".
A UV map can either be generated automatically by the software application, made manually by the artist, or some<br />
combination of both. Often a UV map will be generated, and then the artist will adjust and optimize it to minimize<br />
seams and overlaps. If the model is symmetric, the artist might overlap opposite triangles to allow painting both<br />
sides simultaneously.<br />
UV coordinates are applied per face, [3] not per vertex. This means a shared vertex can have different UV coordinates<br />
in each of its triangles, so adjacent triangles can be cut apart and positioned on different areas of the texture map.<br />
The UV Mapping process at its simplest requires three steps: unwrapping the mesh, creating the texture, and<br />
applying the texture. [2]<br />
Finding UV on a sphere<br />
For a point on a unit sphere centred at the origin, UV coordinates can be derived from the point's position vector: the angle around the polar axis (the longitude) gives u, and the angle from the equatorial plane (the latitude) gives v.<br />
References<br />
[1] when using quaternions (which is standard), "W" is also used; cf. UVW mapping<br />
[2] Mullen, T (2009). Mastering Blender. 1st ed. Indianapolis, Indiana: Wiley Publishing, Inc.<br />
[3] Murdock, K.L. (2008). 3ds Max 2009 Bible. 1st ed. Indianapolis, Indiana: Wiley Publishing, Inc.<br />
External links<br />
• LSCM Mapping image (http://de.wikibooks.org/wiki/Bild:Blender3D_LSCM.png) with Blender<br />
• Blender UV Mapping Tutorial (http://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro/UV_Map_Basics) with Blender
UVW mapping<br />
UVW mapping is a mathematical technique for coordinate mapping. In computer graphics, it is most commonly a map suitable for wrapping a 2D image (a texture) onto a three-dimensional object of a given topology.<br />
"UVW", like the standard Cartesian coordinate system, has three dimensions; the third dimension allows texture<br />
maps to wrap in complex ways onto irregular surfaces. Each point in a UVW map corresponds to a point on the<br />
surface of the object. The graphic designer or programmer generates the specific mathematical function to<br />
implement the map, so that points on the texture are assigned to (XYZ) points on the target surface. Generally<br />
speaking, the more orderly the unwrapped polygons are, the easier it is for the texture artist to paint features onto the<br />
texture. Once the texture is finished, all that has to be done is to wrap the UVW map back onto the object, projecting<br />
the texture in a way that is far more flexible and advanced, preventing graphic artifacts that accompany more<br />
simplistic texture mappings such as planar projection. For this reason, UVW mapping is commonly used to texture<br />
map non-platonic solids, non-geometric primitives, and other irregularly-shaped objects, such as characters and<br />
furniture.<br />
External links<br />
• UVW Mapping Tutorial [1]<br />
References<br />
[1] http://oman3d.com/tutorials/3ds/texture_stealth/<br />
Vertex<br />
In geometry, a vertex (plural vertices) is a special kind of point that describes the corners or intersections of<br />
geometric shapes.<br />
Definitions<br />
Of an angle<br />
The vertex of an angle is the point where two rays begin or meet,<br />
where two line segments join or meet, where two lines intersect<br />
(cross), or any appropriate combination of rays, segments and lines that<br />
result in two straight "sides" meeting at one place.<br />
Of a polytope<br />
A vertex is a corner point of a polygon, polyhedron, or other higher-dimensional polytope, formed by the intersection of edges, faces or facets of the object.<br />
In a polygon, a vertex is called "convex" if the internal angle of the polygon, that is, the angle formed by the two edges at the vertex, with the polygon inside the angle, is less than π radians; otherwise, it is called "concave" or "reflex". More generally, a vertex of a polyhedron or polytope is convex if the intersection of the polyhedron or polytope with a sufficiently small sphere centered at the vertex is convex, and concave otherwise.<br />
[Figure: A vertex of an angle is the endpoint where two line segments or lines come together.]<br />
Polytope vertices are related to vertices of graphs, in that the 1-skeleton of a polytope is a graph, the vertices of<br />
which correspond to the vertices of the polytope, and in that a graph can be viewed as a 1-dimensional simplicial<br />
complex the vertices of which are the graph's vertices. However, in graph theory, vertices may have fewer than two<br />
incident edges, which is usually not allowed for geometric vertices. There is also a connection between geometric<br />
vertices and the vertices of a curve, its points of extreme curvature: in some sense the vertices of a polygon are<br />
points of infinite curvature, and if a polygon is approximated by a smooth curve there will be a point of extreme<br />
curvature near each polygon vertex. However, a smooth curve approximation to a polygon will also have additional<br />
vertices, at the points where its curvature is minimal.<br />
Of a plane tiling<br />
A vertex of a plane tiling or tessellation is a point where three or more tiles meet; generally, but not always, the tiles<br />
of a tessellation are polygons and the vertices of the tessellation are also vertices of its tiles. More generally, a<br />
tessellation can be viewed as a kind of topological cell complex, as can the faces of a polyhedron or polytope; the<br />
vertices of other kinds of complexes such as simplicial complexes are its zero-dimensional faces.<br />
Principal vertex<br />
A polygon vertex x_i of a simple polygon P is a principal polygon vertex if the diagonal [x_(i-1), x_(i+1)] intersects the boundary of P only at x_(i-1) and x_(i+1). There are two types of principal vertices: ears and mouths.<br />
Ears<br />
A principal vertex x_i of a simple polygon P is called an ear if the diagonal [x_(i-1), x_(i+1)] that bridges x_i lies entirely in P. (See also convex polygon.)<br />
Mouths<br />
A principal vertex x_i of a simple polygon P is called a mouth if the diagonal [x_(i-1), x_(i+1)] lies outside the boundary of P. (See also concave polygon.)<br />
Vertices in computer <strong>graphics</strong><br />
In computer <strong>graphics</strong>, objects are often represented as triangulated polyhedra in which the object vertices are<br />
associated not only with three spatial coordinates but also with other graphical information necessary to render the<br />
object correctly, such as colors, reflectance properties, textures, and surface normals; these properties are used in<br />
rendering by a vertex shader, part of the vertex pipeline.<br />
External links<br />
• Weisstein, Eric W., "Polygon Vertex [1]" from MathWorld.<br />
• Weisstein, Eric W., "Polyhedron Vertex [2]" from MathWorld.<br />
• Weisstein, Eric W., "Principal Vertex [3]" from MathWorld.<br />
References<br />
[1] http://mathworld.wolfram.com/PolygonVertex.html<br />
[2] http://mathworld.wolfram.com/PolyhedronVertex.html<br />
[3] http://mathworld.wolfram.com/PrincipalVertex.html
Vertex Buffer Object<br />
A Vertex Buffer Object (VBO) is an OpenGL extension that provides methods for uploading data (vertex, normal<br />
vector, color, etc.) to the video device for non-immediate-mode rendering. VBOs offer substantial performance gains<br />
over immediate mode rendering primarily because the data resides in the video device memory rather than the<br />
system memory and so it can be rendered directly by the video device.<br />
The Vertex Buffer Object specification has been standardized by the OpenGL Architecture Review Board [1] as of<br />
OpenGL Version 1.5. Similar functionality was available before the standardization of VBOs via the Nvidia-created<br />
extension "Vertex Array Range" [2] or ATI's "Vertex Array Object" [3] extension.<br />
Basic VBO functions<br />
The following functions form the core of VBO access and manipulation:<br />
In OpenGL 2.1 [4] :<br />
GenBuffersARB(sizei n, uint *buffers)<br />
Generates a new VBO and returns its ID number as an unsigned integer. Id 0 is reserved.<br />
BindBufferARB(enum target, uint buffer)<br />
Use a previously created buffer as the active VBO.<br />
BufferDataARB(enum target, sizeiptrARB size, const void *data, enum usage)<br />
Upload data to the active VBO.<br />
DeleteBuffersARB(sizei n, const uint *buffers)<br />
Deletes the specified number of VBOs named in the supplied array of VBO ids.<br />
In OpenGL 3.x [5] and OpenGL 4.x [6] :<br />
GenBuffers(sizei n, uint *buffers)<br />
Generates a new VBO and returns its ID number as an unsigned integer. Id 0 is reserved.<br />
BindBuffer(enum target, uint buffer)<br />
Use a previously created buffer as the active VBO.<br />
BufferData(enum target, sizeiptr size, const void *data, enum usage)<br />
Upload data to the active VBO.<br />
DeleteBuffers(sizei n, const uint *buffers)<br />
Deletes the specified number of VBOs named in the supplied array of VBO ids.<br />
Example usage in C, using OpenGL 2.1<br />
//Initialise VBO - do only once, at start of program<br />
//Create a variable to hold the VBO identifier<br />
GLuint triangleVBO;<br />
//Vertices of a triangle (counter-clockwise winding)<br />
float data[] = {1.0, 0.0, 1.0, 0.0, 0.0, -1.0, -1.0, 0.0, 1.0};<br />
//Create a new VBO and use the variable id to store the VBO id<br />
glGenBuffers(1, &triangleVBO);<br />
//Make the new VBO active<br />
glBindBuffer(GL_ARRAY_BUFFER, triangleVBO);<br />
//Upload vertex data to the video device<br />
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);<br />
//Draw triangle from VBO - do each time the window, view point or data changes<br />
//Establish its 3 coordinates per vertex with zero stride in this array; necessary here<br />
glVertexPointer(3, GL_FLOAT, 0, NULL);<br />
//Make the new VBO active. Repeat here in case it changed since initialisation<br />
glBindBuffer(GL_ARRAY_BUFFER, triangleVBO);<br />
//Establish that the array contains vertices (not normals, colours, texture coords etc.)<br />
glEnableClientState(GL_VERTEX_ARRAY);<br />
//Actually draw the triangle, giving the number of vertices provided<br />
glDrawArrays(GL_TRIANGLES, 0, sizeof(data) / sizeof(float) / 3);<br />
//Force display to be drawn now<br />
glFlush();<br />
Example usage in C, using OpenGL 3.x and OpenGL 4.x<br />
Function which can read any text or binary file into a char buffer:<br />
/* Function will read a text or binary file into an allocated char buffer */<br />
char* filetobuf(char *file)<br />
{<br />
    FILE *fptr;<br />
    long length;<br />
    char *buf;<br />
    fptr = fopen(file, "rb"); /* Open file for reading */<br />
    if (!fptr) /* Return NULL on failure */<br />
        return NULL;<br />
    fseek(fptr, 0, SEEK_END); /* Seek to the end of the file */<br />
    length = ftell(fptr); /* Find out how many bytes into the file we are */<br />
    buf = malloc(length + 1); /* Allocate a buffer for the entire length of the file and a null terminator */<br />
    fseek(fptr, 0, SEEK_SET); /* Go back to the beginning of the file */<br />
    fread(buf, length, 1, fptr); /* Read the contents of the file into the buffer */<br />
    fclose(fptr); /* Close the file */<br />
    buf[length] = 0; /* Null terminator */<br />
    return buf; /* Return the buffer */<br />
}<br />
Vertex Shader:<br />
/*----------------- "exampleVertexShader.vert" -----------------*/<br />
#version 150 // Specify which version of GLSL we are using.<br />
// in_Position was bound to attribute index 0 ("shaderAtribute")<br />
in vec3 in_Position;<br />
void main(void)<br />
{<br />
    gl_Position = vec4(in_Position.x, in_Position.y, in_Position.z, 1.0);<br />
}<br />
/*--------------------------------------------------------------*/<br />
Fragment Shader:<br />
/*---------------- "exampleFragmentShader.frag" ----------------*/<br />
#version 150 // Specify which version of GLSL we are using.<br />
precision highp float; // Video card drivers require this line to function properly<br />
out vec4 fragColor;<br />
void main(void)<br />
{<br />
    fragColor = vec4(1.0, 1.0, 1.0, 1.0); // Set the colour of each fragment to WHITE<br />
}<br />
/*--------------------------------------------------------------*/<br />
Main OpenGL Program:<br />
/*--------------------- Main OpenGL Program ---------------------*/<br />
/* Create a variable to hold the VBO identifier */<br />
GLuint triangleVBO;<br />
/* This is a handle to the shader program */
GLuint shaderProgram;<br />
/* These pointers will receive the contents of our shader source code<br />
files */<br />
GLchar *vertexSource, *fragmentSource;<br />
/* These are handles used to reference the shaders */<br />
GLuint vertexShader, fragmentShader;<br />
const unsigned int shaderAtribute = 0;<br />
const unsigned int NUM_OF_VERTICES_IN_DATA = 3;<br />
/* Vertices of a triangle (counter-clockwise winding) */<br />
float data[3][3] = {<br />
    { 0.0, 1.0, 0.0 },<br />
    { -1.0, -1.0, 0.0 },<br />
    { 1.0, -1.0, 0.0 }<br />
};<br />
/*---------------------- Initialise VBO - (Note: do only once, at start of program) ----------------------*/<br />
/* Create a new VBO and use the variable "triangleVBO" to store the VBO id */<br />
glGenBuffers(1, &triangleVBO);<br />
/* Make the new VBO active */<br />
glBindBuffer(GL_ARRAY_BUFFER, triangleVBO);<br />
/* Upload vertex data to the video device */<br />
glBufferData(GL_ARRAY_BUFFER, NUM_OF_VERTICES_IN_DATA * 3 * sizeof(float), data, GL_STATIC_DRAW);<br />
/* Specify that our coordinate data is going into attribute index 0 (shaderAtribute), and contains three floats per vertex */<br />
glVertexAttribPointer(shaderAtribute, 3, GL_FLOAT, GL_FALSE, 0, 0);<br />
/* Enable attribute index 0 (shaderAtribute) as being used */<br />
glEnableVertexAttribArray(shaderAtribute);<br />
/* Make the new VBO active. */<br />
glBindBuffer(GL_ARRAY_BUFFER, triangleVBO);<br />
/*-------------------------------------------------------------------------------------------------------*/<br />
/*--------------------- Load Vertex and Fragment shaders from files and compile them --------------------*/<br />
/* Read our shaders into the appropriate buffers */<br />
vertexSource = filetobuf("exampleVertexShader.vert");<br />
fragmentSource = filetobuf("exampleFragmentShader.frag");<br />
/* Assign our handles a "name" to new shader objects */<br />
vertexShader = glCreateShader(GL_VERTEX_SHADER);<br />
fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);<br />
/* Associate the source code buffers with each handle */<br />
glShaderSource(vertexShader, 1, (const GLchar**)&vertexSource, 0);<br />
glShaderSource(fragmentShader, 1, (const GLchar**)&fragmentSource, 0);<br />
/* Compile our shader objects */<br />
glCompileShader(vertexShader);<br />
glCompileShader(fragmentShader);<br />
/*-------------------------------------------------------------------------------------------------------*/<br />
/*-------------------- Create shader program, attach shaders to it and then link it ---------------------*/<br />
/* Assign our program handle a "name" */<br />
shaderProgram = glCreateProgram();<br />
/* Attach our shaders to our program */<br />
glAttachShader(shaderProgram, vertexShader);<br />
glAttachShader(shaderProgram, fragmentShader);<br />
/* Bind attribute index 0 (shaderAtribute) to in_Position */<br />
/* "in_Position" will represent the "data" array's contents in the vertex shader */<br />
glBindAttribLocation(shaderProgram, shaderAtribute, "in_Position");<br />
/* Link shader program*/<br />
glLinkProgram(shaderProgram);<br />
/*-------------------------------------------------------------------------------------------------------*/<br />
/* Set shader program as being actively used */<br />
glUseProgram(shaderProgram);<br />
/* Set background colour to BLACK */<br />
glClearColor(0.0, 0.0, 0.0, 1.0);<br />
/* Clear background with BLACK colour */<br />
glClear(GL_COLOR_BUFFER_BIT);<br />
/* Actually draw the triangle by invoking glDrawArrays, telling it that our data forms triangles and that we want to draw vertices 0 to 3 */<br />
glDrawArrays(GL_TRIANGLES, 0, 3);<br />
/*---------------------------------------------------------------*/<br />
References<br />
[1] http://www.opengl.org/about/arb/<br />
[2] "GL_NV_vertex_array_range Whitepaper" (http://developer.nvidia.com/object/Using_GL_NV_fence.html).<br />
[3] "ATI_vertex_array_object" (http://oss.sgi.com/projects/ogl-sample/registry/ATI/vertex_array_object.txt).<br />
[4] "OpenGL 2.1 function reference" (http://www.opengl.org/sdk/docs/man/xhtml/).<br />
[5] "OpenGL 3.3 function reference" (http://www.opengl.org/sdk/docs/man3/).<br />
[6] "OpenGL 4.1 function reference" (http://www.opengl.org/sdk/docs/man4/).<br />
External links<br />
• Vertex Buffer Object Whitepaper (http://www.opengl.org/registry/specs/ARB/vertex_buffer_object.txt)<br />
Vertex normal<br />
In the geometry of computer <strong>graphics</strong>, a vertex normal at a vertex of a polyhedron is the normalized average of the<br />
surface normals of the faces that contain that vertex. The average can be weighted, for example by the area of each face,<br />
or it can be unweighted. Vertex normals are used in Gouraud shading, Phong shading and other lighting models. This<br />
produces much smoother results than flat shading; however, without some modifications, it cannot produce a sharp<br />
edge.
Viewing frustum<br />
In <strong>3D</strong> computer <strong>graphics</strong>, the viewing frustum or view frustum is the<br />
region of space in the modeled world that may appear on the screen; it<br />
is the field of view of the notional camera. The exact shape of this<br />
region varies depending on what kind of camera lens is being<br />
simulated, but typically it is a frustum of a rectangular pyramid (hence<br />
the name). The planes that cut the frustum perpendicular to the viewing<br />
direction are called the near plane and the far plane. Objects closer to<br />
the camera than the near plane or beyond the far plane are not drawn.<br />
Often, the far plane is placed infinitely far away from the camera so all<br />
objects within the frustum are drawn regardless of their distance from<br />
the camera.<br />
Viewing frustum culling or view frustum culling is the process of removing objects that lie completely outside the viewing frustum from the rendering process. Rendering these objects would be a waste of time since they are not directly visible. To make culling fast, it is usually done using bounding volumes surrounding the objects rather than the objects themselves.<br />
[Figure: A view frustum.]<br />
Definitions<br />
VPN: the view-plane normal – a normal to the view plane.<br />
VUV: the view-up vector – the vector on the view plane that indicates the upward direction.<br />
VRP: the viewing reference point – a point located on the view plane, and the origin of the VRC.<br />
PRP: the projection reference point – the point from which the image is projected; for parallel projection, the PRP is at infinity.<br />
VRC: the viewing-reference coordinate system.<br />
In OpenGL, the viewing frustum is commonly set with the gluPerspective() [1] (or glFrustum() [2]) utility function.<br />
The geometry is defined by a field of view angle (in the 'y' direction), as well as an aspect ratio. Further, a set of<br />
z-planes define the near and far bounds of the frustum.<br />
References<br />
[1] OpenGL SDK Documentation: gluPerspective() (http://www.opengl.org/sdk/docs/man/xhtml/gluPerspective.xml) -- Accessed 9 Nov 2010<br />
[2] OpenGL Programming Guide (Red Book) (http://fly.cc.fer.hr/~unreal/theredbook/appendixg.html) -- Accessed 4 Jan 2011
Virtual actor<br />
A virtual human or digital clone is the creation or re-creation of a human being in image and voice using<br />
computer-generated imagery and sound. The process of creating such a virtual human on film, substituting for an<br />
existing actor, is known, after a 1992 book, as Schwarzeneggerization, and in general virtual humans employed in<br />
movies are known as synthespians, virtual actors, vactors, cyberstars, or "silicentric" actors. There are several<br />
legal ramifications for the digital cloning of human actors, relating to copyright and personality rights. People who<br />
have already been digitally cloned as simulations include Bill Clinton, Marilyn Monroe, Fred Astaire, Ed Sullivan,<br />
Elvis Presley, Anna Marie Goddard, and George Burns. Ironically, data sets of Arnold Schwarzenegger for the creation of a virtual Arnold (head, at least) have already been made. [1] [2]<br />
The name Schwarzeneggerization comes from the 1992 book Et Tu, Babe by Mark Leyner. In one scene, on pages<br />
50–51, a character asks the shop assistant at a video store to have Arnold Schwarzenegger digitally substituted for<br />
existing actors into various works, including (amongst others) Rain Man (to replace both Tom Cruise and Dustin<br />
Hoffman), My Fair Lady (to replace Rex Harrison), Amadeus (to replace F. Murray Abraham), The Diary of Anne<br />
Frank (as Anne Frank), Gandhi (to replace Ben Kingsley), and It's a Wonderful Life (to replace James Stewart).<br />
Schwarzeneggerization is the name that Leyner gives to this process. Only 10 years later, Schwarzeneggerization<br />
was close to being reality. [1]<br />
By 2002, Schwarzenegger, Jim Carrey, Kate Mulgrew, Michelle Pfeiffer, Denzel Washington, Gillian Anderson, and<br />
David Duchovny had all had their heads laser scanned to create digital computer models thereof. [1]<br />
Early history<br />
Early computer-generated animated faces include the 1985 film Tony de Peltrie and the music video for Mick<br />
Jagger's song "Hard Woman" (from She's the Boss). The first actual human beings to be digitally duplicated were<br />
Marilyn Monroe and Humphrey Bogart in a March 1987 film created by Daniel Thalmann and Nadia<br />
Magnenat-Thalmann for the 100th anniversary of the Engineering Society of Canada. The film was created by six<br />
people over a year, and had Monroe and Bogart meeting in a café in Montreal. The characters were rendered in three<br />
dimensions, and were capable of speaking, showing emotion, and shaking hands. [3]<br />
In 1987, the Kleiser-Walczak Construction Company began its Synthespian ("synthetic thespian") Project, with the<br />
aim of creating "life-like figures based on the digital animation of clay models". [2]<br />
In 1988, Tin Toy was the first entirely computer-generated movie to win an Academy Award (Best Animated Short<br />
Film). In the same year, Mike the Talking Head, an animated head whose facial expression and head posture were<br />
controlled in real time by a puppeteer using a custom-built controller, was developed by Silicon Graphics, and<br />
performed live at SIGGRAPH. In 1989, The Abyss, directed by James Cameron, included a computer-generated face placed onto a watery pseudopod. [3] [4]<br />
In 1991, Terminator 2, also directed by Cameron, confident in the abilities of computer-generated effects from his<br />
experience with The Abyss, included a mixture of synthetic actors with live animation, including computer models of<br />
Robert Patrick's face. The Abyss contained just one scene with photo-realistic computer graphics; Terminator 2 contained over forty shots throughout the film. [3] [4] [5]<br />
In 1997, Industrial Light and Magic worked on creating a virtual actor that was a composite of the bodily parts of<br />
several real actors. [2]<br />
By the 21st century, virtual actors had become a reality. The face of Brandon Lee, who had died partway through the<br />
shooting of The Crow in 1994, had been digitally superimposed over the top of a body-double in order to complete<br />
those parts of the movie that had yet to be filmed. By 2001, three-dimensional computer-generated realistic humans<br />
had been used in Final Fantasy: The Spirits Within, and by 2004, a synthetic Laurence Olivier co-starred in Sky Captain and the World of Tomorrow. [6] [7]
Legal issues<br />
Critics such as Stuart Klawans in the New York Times expressed worry about the loss of "the very thing that art was<br />
supposedly preserving: our point of contact with the irreplaceable, finite person". More problematic, however, are<br />
issues of copyright and personality rights. An actor has little legal control over a digital clone of him/herself and<br />
must resort to database protection laws in order to exercise what control he/she has. (The proposed U.S. Database<br />
and Collections of Information Misappropriation Act would strengthen such laws.) An actor does not own the<br />
copyright on his/her digital clone unless he/she was the actual creator of that clone. Robert Patrick, for example, would have little legal control over the liquid metal cyborg digital clone of himself created for Terminator 2. [6] [8]<br />
The use of a digital clone in the performance of the cloned person's primary profession is an economic difficulty, as<br />
it may cause the actor to act in fewer roles, or be at a disadvantage in contract negotiations, since the clone could be<br />
used by the producers of the movie to substitute for the actor in the role. It is also a career difficulty, since a clone<br />
could be used in roles that the actor himself/herself would, conscious of the effect that such roles might have on<br />
his/her career, never accept. Bad identifications of an actor's image with a role harm careers, and actors, conscious of<br />
this, pick and choose what roles they play. (Bela Lugosi and Margaret Hamilton became typecast with their roles as<br />
Count Dracula and the Wicked Witch of the West, whereas Anthony Hopkins and Dustin Hoffman have played a<br />
diverse range of parts.) A digital clone could be used to play the parts of (for examples) an axe murderer or a<br />
prostitute, which would affect the actor's public image, and in turn affect what future casting opportunities were<br />
given to the actor. Both Tom Waits and Bette Midler have won actions for damages against people who employed<br />
their images in advertisements that they had refused to take part in themselves. [9]<br />
In the US, the use of a digital clone in advertisements, as opposed to the performance of a person's primary<br />
profession, is covered by section 43(a) of the Lanham Act, which subjects commercial speech to requirements of<br />
accuracy and truthfulness, and which makes deliberate confusion unlawful. The use of a celebrity's image would be<br />
an implied endorsement. The New York District Court held that an advertisement employing a Woody Allen<br />
impersonator would violate the Act unless it contained a disclaimer stating that Allen did not endorse the product. [9]<br />
Other concerns include posthumous use of digital clones. Barbara Creed states that "Arnold's famous threat, 'I'll be<br />
back', may take on a new meaning". Even before Brandon Lee was digitally reanimated, the California Senate drew<br />
up the Astaire Bill, in response to lobbying from Fred Astaire's widow and the Screen Actors Guild, who were<br />
seeking to restrict the use of digital clones of Astaire. Movie studios opposed the legislation, and as of 2002 it had<br />
yet to be finalized and enacted. Several companies, including Virtual Celebrity Productions, have in the meantime<br />
purchased the rights to create and use digital clones of various dead celebrities, such as Marlene Dietrich [10] and<br />
Vincent Price. [2]<br />
In fiction<br />
• S1m0ne, a 2002 science fiction drama film written, produced and directed by Andrew Niccol, starring Al Pacino.<br />
In business<br />
A virtual actor can also be a person who performs a role in real time when logged into a virtual world or collaborative online environment: one who represents, via an avatar, a character in a simulation or training event, or who behaves as if acting a part through the use of an avatar.<br />
Vactor Studio LLC is a New York-based company, but its "Vactors" (virtual actors) are located all across the US and<br />
Canada. The Vactors log into virtual world applications from their homes or offices to participate in exercises<br />
covering an extensive range of markets including: Medical, Military, First Responder, Corporate, Government,<br />
Entertainment, and Retail. Through their own computers, they become doctors, soldiers, EMTs, customer service<br />
reps, victims for Mass Casualty Response training, or whatever the demonstration requires. Since 2005, Vactor<br />
Studio’s role-players have delivered thousands of hours of professional virtual world demonstrations, training
exercises, and event management services.<br />
References<br />
[1] Brooks Landon (2002). "Synthespians, Virtual Humans, and Hypermedia". In Veronica Hollinger and Joan Gordon. Edging Into the Future:<br />
Science Fiction and Contemporary Cultural Transformation. University of Pennsylvania Press. pp. 57–59. ISBN 0812218043.<br />
[2] Barbara Creed (2002). "The Cyberstar". In Graeme Turner. The Film Cultures Reader. Routledge. ISBN 0415252814.<br />
[3] Nadia Magnenat-Thalmann and Daniel Thalmann (2004). Handbook of Virtual Humans. John Wiley and Sons. pp. 6–7. ISBN 0470023163.<br />
[4] Paul Martin Lester (2005). Visual Communication: Images With Messages. Thomson Wadsworth. pp. 353. ISBN 0534637205.<br />
[5] Andrew Darley (2000). "The Waning of Narrative". Visual Digital Culture: Surface Play and Spectacle in New Media Genres. Routledge.<br />
pp. 109. ISBN 0415165547.<br />
[6] Ralf Remshardt (2006). "The actor as intermedialist: remediation, appropriation, adaptation". In Freda Chapple and Chiel Kattenbelt.<br />
Intermediality in Theatre and Performance. Rodopi. pp. 52–53. ISBN 9042016299.<br />
[7] Simon Danaher (2004). Digital 3D Design. Thomson Course Technology. pp. 38. ISBN 1592003915.<br />
[8] Laikwan Pang (2006). "Expressions, originality, and fixation". Cultural Control And Globalization in Asia: Copyright, Piracy, and Cinema.<br />
Routledge. pp. 20. ISBN 0415352010.<br />
[9] Michael A. Einhorn (2004). "Publicity rights and consumer rights". Media, Technology, and Copyright: Integrating Law and Economics.<br />
Edward Elgar Publishing. pp. 121, 125. ISBN 1843766574.<br />
[10] Los Angeles Times / Digital Elite Inc. (http://articles.latimes.com/1999/aug/09/business/fi-64043)<br />
Further reading<br />
• Michael D. Scott and James N. Talbott (1997). "Titles and Characters". Scott on Multimedia Law. Aspen<br />
Publishers Online. ISBN 1567063330. — a detailed discussion of the law, as it stood in 1997, relating to virtual<br />
humans and the rights held over them by real humans<br />
• Richard Raysman (2002). "Trademark Law". Emerging Technologies and the Law: Forms and Analysis. Law<br />
Journal Press. pp. 6–15. ISBN 1588521079. — how trademark law affects digital clones of celebrities who have<br />
trademarked their personæ<br />
External links<br />
• Vactor Studio (http://www.vactorstudio.com/)
Volume rendering<br />
In scientific visualization and computer graphics, volume rendering is a set of techniques used to display a 2D projection of a 3D discretely sampled data set.<br />
A typical 3D data set is a group of 2D slice images acquired by a CT, MRI, or MicroCT scanner. Usually these are acquired in a regular pattern (e.g., one slice every millimeter) and usually have a regular number of image pixels in a regular pattern. This is an example of a regular volumetric grid, with each volume element, or voxel, represented by a single value that is obtained by sampling the immediate area surrounding the voxel.<br />
To render a 2D projection of the 3D data set, one first needs to define a camera in space relative to the volume. Also, one needs to define the opacity and color of every voxel. This is usually defined using an RGBA (for red, green, blue, alpha) transfer function that defines the RGBA value for every possible voxel value.<br />
For example, a volume may be viewed by extracting<br />
isosurfaces (surfaces of equal values) from the volume<br />
and rendering them as polygonal meshes or by<br />
rendering the volume directly as a block of data. The<br />
marching cubes algorithm is a common technique for<br />
extracting an isosurface from volume data. Direct<br />
volume rendering is a computationally intensive task<br />
that may be performed in several ways.<br />
Direct volume rendering<br />
A direct volume renderer [1][2] requires every sample value to be mapped to an opacity and a color. This is done with a "transfer function", which can be a simple ramp, a piecewise linear function, or an arbitrary table. Once converted to an RGBA (for red, green, blue, alpha) value, the composed RGBA result is projected onto the corresponding pixel of the frame buffer. The way this is done depends on the rendering technique.<br />
A volume rendered cadaver head using view-aligned texture mapping and diffuse reflection<br />
Volume rendered CT scan of a forearm with different color schemes for muscle, fat, bone, and blood<br />
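Such a transfer function can be sketched in a few lines of Python as a piecewise-linear table lookup; the breakpoints and RGBA entries below are invented for illustration and not taken from any particular system:

```python
import bisect

# Illustrative piecewise-linear transfer function: a scalar voxel value in
# [0, 255] is mapped to (R, G, B, A). Breakpoints are made-up example values.
BREAKPOINTS = [0.0, 80.0, 120.0, 255.0]
RGBA_TABLE = [
    (0.0, 0.0, 0.0, 0.0),   # empty space: fully transparent
    (0.8, 0.2, 0.2, 0.1),   # soft tissue: faint red
    (0.9, 0.9, 0.7, 0.4),   # denser material
    (1.0, 1.0, 1.0, 0.95),  # bone-like: nearly opaque white
]

def transfer(value):
    """Linearly interpolate the RGBA table at a scalar voxel value."""
    if value <= BREAKPOINTS[0]:
        return RGBA_TABLE[0]
    if value >= BREAKPOINTS[-1]:
        return RGBA_TABLE[-1]
    i = bisect.bisect_right(BREAKPOINTS, value)   # first breakpoint > value
    t = (value - BREAKPOINTS[i - 1]) / (BREAKPOINTS[i] - BREAKPOINTS[i - 1])
    lo, hi = RGBA_TABLE[i - 1], RGBA_TABLE[i]
    return tuple(a + t * (b - a) for a, b in zip(lo, hi))
```

An arbitrary-table transfer function would simply index the table directly instead of interpolating between breakpoints.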
A combination of these techniques is possible. For instance, a shear warp implementation could use texturing<br />
hardware to draw the aligned slices in the off-screen buffer.
Volume ray casting<br />
The technique of volume ray casting can be derived directly from the rendering equation. It provides results of very high quality and is usually considered to give the best image quality. Volume ray casting is classified as an image-based volume rendering technique, as the computation emanates from the output image, not the input volume data as is the case with object-based techniques. In this technique, a ray is generated for each desired image pixel. Using a simple camera model, the ray starts at the center of projection of the camera (usually the eye point) and passes through the image pixel on the imaginary image plane floating in between the camera and the volume to be rendered. The ray is clipped by the boundaries of the volume in order to save time. Then the ray is sampled at regular or adaptive intervals throughout the volume. The data is interpolated at each sample point, the transfer function is applied to form an RGBA sample, the sample is composited onto the accumulated RGBA of the ray, and the process is repeated until the ray exits the volume. The RGBA color is converted to an RGB color and deposited in the corresponding image pixel. The process is repeated for every pixel on the screen to form the completed image.<br />
Crocodile mummy provided by the Phoebe A. Hearst Museum of Anthropology, UC Berkeley. CT data was acquired by Dr. Rebecca Fahrig, Department of Radiology, Stanford University, using a Siemens SOMATOM Definition, Siemens Healthcare. The image was rendered by Fovia's High Definition Volume Rendering® engine.
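The per-ray loop just described (sample, classify, composite, repeat) can be sketched as follows; `sample` and `transfer` are stand-ins for the interpolation and transfer-function stages, which a real renderer would implement over the actual voxel grid:

```python
def cast_ray(sample, transfer, t_near, t_far, step):
    """Front-to-back compositing along one ray.

    `sample(t)` returns the interpolated scalar value at parameter t along
    the ray, and `transfer(v)` classifies it as an (r, g, b, a) tuple.
    """
    r = g = b = a = 0.0
    t = t_near
    while t <= t_far:
        sr, sg, sb, sa = transfer(sample(t))
        w = (1.0 - a) * sa          # weight by the ray's remaining transparency
        r += w * sr
        g += w * sg
        b += w * sb
        a += w
        t += step
    return (r, g, b, a)             # RGBA, later converted to RGB for the pixel
```

A production ray caster would also clip `t_near`/`t_far` against the volume bounds and stop once the ray is nearly opaque (see early ray termination below).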
Splatting<br />
This is a technique which trades quality for speed. Here, every volume element is splatted, as Lee Westover put it, like a snowball, onto the viewing surface in back-to-front order. These splats are rendered as disks whose properties (color and transparency) vary diametrically in a normal (Gaussian) manner. Flat disks, and disks with other kinds of property distribution, are also used depending on the application. [3][4]<br />
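A minimal sketch of compositing a single splat, assuming a Gaussian footprint and a back-to-front "over" blend; the kernel radius and sigma here are illustrative choices, not values from Westover's algorithm:

```python
import math

def splat(image, cx, cy, rgba, radius=3, sigma=1.0):
    """Composite one voxel's Gaussian footprint onto `image` (back to front).

    `image` is a 2D list of [r, g, b] pixels; `rgba` is the voxel's classified
    color and opacity. Parameter defaults are illustrative only.
    """
    r, g, b, a = rgba
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = cx + dx, cy + dy
            if 0 <= y < len(image) and 0 <= x < len(image[0]):
                # opacity falls off with distance from the splat centre
                w = a * math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
                px = image[y][x]
                px[0] = w * r + (1.0 - w) * px[0]   # "over" blend
                px[1] = w * g + (1.0 - w) * px[1]
                px[2] = w * b + (1.0 - w) * px[2]
```

Splatting every voxel in back-to-front order with such a footprint approximates the compositing integral that ray casting evaluates per ray.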
Shear warp<br />
The shear warp approach to volume rendering was developed by Cameron and Undrill, and popularized by Philippe Lacroute and Marc Levoy. [5] In this technique, the viewing transformation is factored such that the nearest face of the volume becomes axis-aligned with an off-screen image buffer with a fixed scale of voxels to pixels. The volume is then rendered into this buffer using the far more favorable memory alignment and fixed scaling and blending factors. Once all slices of the volume have been rendered, the buffer is then warped into the desired orientation and scaled in the displayed image.<br />
This technique is relatively fast in software at the cost<br />
of less accurate sampling and potentially worse image<br />
quality compared to ray casting. There is memory<br />
overhead for storing multiple copies of the volume, for<br />
the ability to have near axis aligned volumes. This<br />
overhead can be mitigated using run length encoding.<br />
Example of a mouse skull (CT) rendering using the shear warp algorithm<br />
Texture mapping<br />
Many 3D graphics systems use texture mapping to apply images, or textures, to geometric objects. Commodity PC graphics cards are fast at texturing and can efficiently render slices of a 3D volume, with real-time interaction capabilities. Workstation GPUs are even faster, and are the basis for much of the production volume visualization used in medical imaging, oil and gas, and other markets (2007). In earlier years, dedicated 3D texture mapping systems were used on graphics systems such as the Silicon Graphics InfiniteReality, the HP Visualize FX graphics accelerator, and others. This technique was first described by Bill Hibbard and Dave Santek. [6]<br />
These slices can either be aligned with the volume and rendered at an angle to the viewer, or aligned with the viewing plane and sampled from unaligned slices through the volume. Graphics hardware support for 3D textures is needed for the second technique.<br />
Volume aligned texturing produces images of reasonable quality, though there is often a noticeable transition when<br />
the volume is rotated.
Maximum intensity projection<br />
As opposed to direct volume rendering, which requires<br />
every sample value to be mapped to opacity and a color,<br />
maximum intensity projection picks out and projects only<br />
the voxels with maximum intensity that fall in the way of<br />
parallel rays traced from the viewpoint to the plane of<br />
projection.<br />
This technique is computationally fast, but the 2D results<br />
do not provide a good sense of depth of the original data.<br />
To improve the sense of 3D, animations are usually rendered of several MIP frames in which the viewpoint is slightly changed from one frame to the next, creating the illusion of rotation. This helps the viewer perceive the relative 3D positions of the object components. However, since two MIP renderings from opposite viewpoints are symmetrical images, the viewer cannot distinguish left from right, front from back, or whether the object is rotating clockwise or counterclockwise, even though these make a significant difference for the volume being rendered.<br />
MIP imaging was invented for use in nuclear medicine by Jerold Wallis, MD, in 1988, and subsequently published in IEEE Transactions on Medical Imaging. [7][8][9]<br />
CT visualized by a maximum intensity projection of a mouse<br />
A straightforward improvement to MIP is the local maximum intensity projection (LMIP). In this technique, one takes not the global maximum value along the ray, but the first local maximum value that is above a certain threshold. Because the ray can, in general, be terminated earlier, this technique is faster, and it also gives somewhat better results, as it approximates occlusion. [10]<br />
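The difference between the two projections can be sketched for a single ray of samples; the threshold handling below is a plausible reading of the LMIP idea, not the published algorithm verbatim:

```python
def mip(samples):
    """Classic maximum intensity projection: global maximum along the ray."""
    return max(samples)

def lmip(samples, threshold):
    """Local MIP sketch: first local maximum above `threshold`.

    Walk the ray front to back; once a sample exceeds the threshold, keep
    going while values still rise and stop at the first peak.
    """
    for i, s in enumerate(samples):
        if s >= threshold:
            while i + 1 < len(samples) and samples[i + 1] > samples[i]:
                i += 1
            return samples[i]        # early termination: rest of ray skipped
    return max(samples)              # nothing above threshold: fall back to MIP
```

For a ray whose samples are `[1, 5, 9, 4, 20, 2]`, MIP reports the global maximum 20, while LMIP with threshold 4 stops at the first local peak, 9, approximating occlusion of the brighter structure behind it.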
Hardware-accelerated volume rendering<br />
Due to the extremely parallel nature of direct volume rendering, special purpose volume rendering hardware was a<br />
rich research topic before GPU volume rendering became fast enough. The most widely cited technology was VolumePro, [11] which used high memory bandwidth and brute force to render using the ray casting algorithm.<br />
A recently exploited technique to accelerate traditional volume rendering algorithms such as ray casting is the use of modern graphics cards. Starting with the programmable pixel shaders, people recognized the power of parallel operations on multiple pixels and began to perform general-purpose computing on graphics processing units<br />
(GPGPU). The pixel shaders are able to read and write randomly from video memory and perform some basic<br />
mathematical and logical calculations. These SIMD processors were used to perform general calculations such as<br />
rendering polygons and signal processing. In recent GPU generations, the pixel shaders now are able to function as<br />
MIMD processors (now able to independently branch) utilizing up to 1 GB of texture memory with floating point<br />
formats. With such power, virtually any algorithm with steps that can be performed in parallel, such as volume ray<br />
casting or tomographic reconstruction, can be performed with tremendous acceleration. The programmable pixel<br />
shaders can be used to simulate variations in the characteristics of lighting, shadow, reflection, emissive color and so<br />
forth. Such simulations can be written using high level shading languages.
Optimization techniques<br />
The primary goal of optimization is to skip as much of the volume as possible. A typical medical data set can be 1 GB in size. Rendering that at 30 frames per second requires an extremely fast memory bus. Skipping voxels means that less information needs to be processed.<br />
Empty space skipping<br />
Often, a volume rendering system will have a mechanism for identifying regions of the volume containing no visible material. This information can be used to avoid rendering these transparent regions. [12]<br />
Early ray termination<br />
This is a technique used when the volume is rendered in front-to-back order. For a ray through a pixel, once sufficiently dense material has been encountered, further samples make no significant contribution to the pixel and may be neglected.<br />
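In a front-to-back compositing loop, early ray termination is a single test against an opacity threshold; a sketch, with the 0.99 cutoff being an illustrative choice:

```python
def composite_front_to_back(samples, opacity_threshold=0.99):
    """Accumulate classified (r, g, b, a) samples until nearly opaque.

    Returns the accumulated RGBA and how many samples were actually used.
    """
    acc_r = acc_g = acc_b = acc_a = 0.0
    used = len(samples)
    for i, (r, g, b, a) in enumerate(samples):
        w = (1.0 - acc_a) * a           # remaining transparency times opacity
        acc_r += w * r
        acc_g += w * g
        acc_b += w * b
        acc_a += w
        if acc_a >= opacity_threshold:  # early ray termination
            used = i + 1
            break
    return (acc_r, acc_g, acc_b, acc_a), used
```

With moderately opaque samples, most of the ray is skipped: the remaining samples would have changed the pixel by less than the chosen threshold allows.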
Octree and BSP space subdivision<br />
The use of hierarchical structures such as octrees and BSP trees can be very helpful for both compression of volume data and speed optimization of the volumetric ray casting process.<br />
Volume segmentation<br />
By sectioning out large portions of the volume that one considers uninteresting before rendering, the amount of<br />
calculations that have to be made by ray casting or texture blending can be significantly reduced. This reduction can<br />
be as much as from O(n) to O(log n) for n sequentially indexed voxels. Volume segmentation also has significant<br />
performance benefits for other ray tracing algorithms.<br />
Multiple and adaptive resolution representation<br />
By representing less interesting regions of the volume in a coarser resolution, the data input overhead can be<br />
reduced. On closer observation, the data in these regions can be populated either by reading from memory or disk, or<br />
by interpolation. The coarser-resolution volume is resampled to a smaller size in the same way a 2D mipmap image is created from the original. These smaller volumes are also used by themselves while rotating the volume to a new orientation.<br />
Pre-integrated volume rendering<br />
Pre-integrated volume rendering [13] [14] is a method that can reduce sampling artifacts by pre-computing much of the<br />
required data. It is especially useful in hardware-accelerated applications [15] [16] because it improves quality without<br />
a large performance impact. Unlike most other optimizations, this does not skip voxels. Rather it reduces the number<br />
of samples needed to accurately display a region of voxels. The idea is to render the intervals between the samples<br />
instead of the samples themselves. This technique captures rapidly changing material, for example the transition from muscle to bone, with much less computation.
Image-based meshing<br />
Image-based meshing is the automated process of creating computer models from <strong>3D</strong> image data (such as MRI, CT,<br />
Industrial CT or microtomography) for computational analysis and design, e.g. CAD, CFD, and FEA.<br />
Temporal reuse of voxels<br />
For a complete display view, only one voxel per pixel (the front one) is required to be shown (although more can be used for smoothing the image). If animation is needed, the front voxels to be shown can be cached, and their location relative to the camera can be recalculated as it moves. Where display voxels become too far apart to cover all the pixels, new front voxels can be found by ray casting or similar; where two voxels fall in one pixel, the front one can be kept.<br />
References<br />
[1] Marc Levoy, "Display of Surfaces from Volume Data", IEEE CG&A, May 1988. Archive of paper (http://graphics.stanford.edu/papers/volume-cga88/)<br />
[2] Drebin, R.A., Carpenter, L., Hanrahan, P., "Volume Rendering", Computer Graphics, SIGGRAPH 88. DOI citation link (http://portal.acm.org/citation.cfm?doid=378456.378484)<br />
[3] Westover, Lee Alan (July 1991). "SPLATTING: A Parallel, Feed-Forward Volume Rendering Algorithm" (ftp://ftp.cs.unc.edu/pub/publications/techreports/91-029.pdf) (PDF). Retrieved 5 August 2011.<br />
[4] Huang, Jian (Spring 2002). "Splatting" (http://web.eecs.utk.edu/~huangj/CS594S02/splatting.ppt) (PPT). Retrieved 5 August 2011.<br />
[5] "Fast Volume Rendering Using a Shear-Warp Factorization of the Viewing Transformation" (http://graphics.stanford.edu/papers/shear/)<br />
[6] Hibbard W., Santek D., "Interactivity is the key" (http://www.ssec.wisc.edu/~billh/p39-hibbard.pdf), Chapel Hill Workshop on Volume Visualization, University of North Carolina, Chapel Hill, 1989, pp. 39–43.<br />
[7] Wallis JW, Miller TR, Lerner CA, Kleerup EC (1989). "Three-dimensional display in nuclear medicine". IEEE Trans Med Imaging 8 (4): 297–303. doi:10.1109/42.41482. PMID 18230529.<br />
[8] Wallis JW, Miller TR (1 August 1990). "Volume rendering in three-dimensional display of SPECT images" (http://jnm.snmjournals.org/cgi/pmidlookup?view=long&pmid=2384811). J. Nucl. Med. 31 (8): 1421–8. PMID 2384811.<br />
[9] Wallis JW, Miller TR (March 1991). "Three-dimensional display in nuclear medicine and radiology". J Nucl Med. 32 (3): 534–46. PMID 2005466.<br />
[10] "LMIP: Local Maximum Intensity Projection: Comparison of Visualization Methods Using Abdominal CT Angiography" (http://www.image.med.osaka-u.ac.jp/member/yoshi/lmip_index.html)<br />
[11] Pfister H., Hardenbergh J., Knittel J., Lauer H., Seiler L.: The VolumePro real-time ray-casting system. In Proceedings of SIGGRAPH 99. DOI (http://doi.acm.org/10.1145/311535.311563)<br />
[12] Sherbondy A., Houston M., Napel S.: Fast volume segmentation with simultaneous visualization using programmable graphics hardware. In Proceedings of IEEE Visualization (2003), pp. 171–176.<br />
[13] Max N., Hanrahan P., Crawfis R.: Area and volume coherence for efficient visualization of 3D scalar functions. In Computer Graphics (San Diego Workshop on Volume Visualization, 1990), vol. 24, pp. 27–33.<br />
[14] Stein C., Becker B., Max N.: Sorting and hardware assisted rendering for volume visualization. In Symposium on Volume Visualization (1994), pp. 83–90.<br />
[15] Engel K., Kraus M., Ertl T.: High-quality pre-integrated volume rendering using hardware-accelerated pixel shading. In Proceedings of Eurographics/SIGGRAPH Workshop on Graphics Hardware (2001), pp. 9–16.<br />
[16] Lum E., Wilson B., Ma K.: High-Quality Lighting and Efficient Pre-Integration for Volume Rendering. In Eurographics/IEEE Symposium on Visualization 2004.<br />
Bibliography<br />
1. Barthold Lichtenbelt, Randy Crane, Shaz Naqvi, Introduction to Volume Rendering (Hewlett-Packard<br />
Professional Books), Hewlett-Packard Company 1998.<br />
2. Peng H., Ruan Z., Long F., Simpson J.H., Myers E.W.: V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets. Nature Biotechnology, 2010 (doi:10.1038/nbt.1612). Volume rendering of large high-dimensional image data (http://www.nature.com/nbt/journal/vaop/ncurrent/full/nbt.1612.html).
External links<br />
• The Visualization Toolkit – VTK (http://www.vtk.org) is a free open-source toolkit which implements several CPU and GPU volume rendering methods in C++ using OpenGL, and can be used from Python, Tcl, and Java wrappers.<br />
• Linderdaum Engine (http://www.linderdaum.com) is a free open-source rendering engine with GPU ray-casting capabilities.<br />
• Open Inventor by VSG (http://www.vsg3d.com/vsg_prod_openinventor.php) is a commercial 3D graphics toolkit for developing scientific and industrial applications.<br />
• Avizo is a general-purpose commercial software application for scientific and industrial data visualization and analysis.<br />
Volumetric lighting<br />
Volumetric lighting is a technique used in 3D computer graphics to add lighting effects to a rendered scene. It allows the viewer to see beams of light shining through the environment; seeing sunbeams streaming through an open window is an example of volumetric lighting, also known as crepuscular rays. The term seems to have been introduced from cinematography and is now widely applied to 3D modelling and rendering, especially in the field of 3D gaming.<br />
Forest scene from Big Buck Bunny, showing light rays through the canopy.<br />
In volumetric lighting, the light cone emitted by a light source is modeled as a transparent object and considered as a<br />
container of a "volume": as a result, light has the capability to give the effect of passing through an actual three<br />
dimensional medium (such as fog, dust, smoke, or steam) that is inside its volume, just like in the real world.<br />
How volumetric lighting works<br />
Volumetric lighting requires two components: a light-space shadow map and a depth buffer. Starting at the near clip plane of the camera, the whole scene is traced and sampling values are accumulated into a buffer. For each sample, it is determined whether the sample is lit by the light source being processed, using the shadow map as a comparison. Only lit samples affect the final pixel color.<br />
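The sampling loop can be sketched as follows, with `in_shadow` standing in for the light-space shadow-map comparison and the scattering constant chosen arbitrarily for illustration:

```python
def light_shaft(ray_points, in_shadow, scattering=0.05):
    """Accumulate in-scattered light along a view ray (illustrative sketch).

    `ray_points` are sample positions from the near clip plane into the
    scene; `in_shadow(p)` is a stand-in for the shadow-map depth comparison.
    """
    light = 0.0
    for p in ray_points:
        if not in_shadow(p):       # only lit samples contribute
            light += scattering
    return min(light, 1.0)         # clamp the accumulated contribution
```

The returned value would then be added to the pixel's shaded color; a real implementation would also attenuate each sample by the participating medium's density.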
This basic technique works, but requires more optimization to function in real time. One way to optimize volumetric lighting effects is to render the lighting volume at a much coarser resolution than the one the graphics context is using. This creates some aliasing artifacts, but these are easily touched up with a blur. A stencil buffer can also be used, as in the shadow volume technique.<br />
Another technique can be used to provide usually satisfying, if inaccurate, volumetric lighting effects. The algorithm functions by blurring luminous objects away from the center of the main light source. Generally, the transparency is progressively reduced with each blur step, especially in more luminous scenes. Note that this requires an on-screen source of light. [1]
References<br />
[1] NeHe Volumetric Lighting (http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=36)<br />
External links<br />
• Volumetric lighting tutorial at Art Head Start (http://www.art-head-start.com/tutorial-volumetric.html)<br />
• 3D graphics terms dictionary at Tweak3D.net (http://www.tweak3d.net/3ddictionary/)<br />
Voxel<br />
A voxel (volumetric pixel or, more precisely, volumetric picture element) is a volume element, representing a value on a regular grid in three-dimensional space. This is analogous to a pixel, which represents 2D image data in a bitmap (sometimes referred to as a pixmap). As with pixels in a bitmap, voxels themselves do not<br />
typically have their position (their coordinates) explicitly encoded<br />
along with their values. Instead, the position of a voxel is inferred<br />
based upon its position relative to other voxels (i.e., its position in the<br />
data structure that makes up a single volumetric image). In contrast to<br />
pixels and voxels, points and polygons are often explicitly represented<br />
by the coordinates of their vertices. A direct consequence of this<br />
difference is that polygons are able to efficiently represent simple <strong>3D</strong><br />
structures with lots of empty or homogeneously filled space, while<br />
voxels are good at representing regularly sampled spaces that are<br />
non-homogeneously filled.<br />
A series of voxels in a stack with a single voxel<br />
highlighted<br />
Voxels are frequently used in the visualization and analysis of medical and scientific data. Some volumetric displays<br />
use voxels to describe their resolution. For example, a display might be able to show 512×512×512 voxels.
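Because a voxel's coordinates are inferred rather than stored, recovering a world-space position needs only the grid index plus per-grid metadata; a minimal sketch, where the spacing and origin defaults are illustrative:

```python
def voxel_center(index, spacing=(1.0, 1.0, 1.0), origin=(0.0, 0.0, 0.0)):
    """Infer a voxel's world-space centre from its (i, j, k) grid index.

    Nothing is stored per voxel besides its value; spacing and origin are
    properties of the whole grid, not of individual voxels.
    """
    return tuple(o + (i + 0.5) * s for i, s, o in zip(index, spacing, origin))
```

This is why a voxel grid needs no per-element coordinate storage, at the cost of only being able to represent regularly sampled space.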
Voxel data<br />
A (smoothed) rendering of a data set of voxels for a macromolecule<br />
A voxel represents a sub-volume box with a constant scalar/vector value, equal to the scalar/vector value of the corresponding grid point of the original discrete representation of the volumetric data. The boundaries of a voxel lie exactly midway between neighboring grid points. Voxel data sets have a limited resolution, as precise data is only available at the center of each cell. Under the assumption that the voxel data samples a suitably band-limited signal, accurate reconstructions of data points in between the sampled voxels can be attained by low-pass filtering the data set. Visually acceptable approximations to this low-pass filter can be attained by polynomial interpolation, such as trilinear or tricubic interpolation.<br />
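Trilinear interpolation between the eight voxel values surrounding a query point can be sketched as three rounds of linear interpolation:

```python
def trilinear(c, tx, ty, tz):
    """Trilinearly interpolate the 2x2x2 corner values c[i][j][k] at
    fractional position (tx, ty, tz) inside the cell, each in [0, 1]."""
    def lerp(a, b, t):
        return a + t * (b - a)
    # interpolate the four edges along x ...
    c00 = lerp(c[0][0][0], c[1][0][0], tx)
    c10 = lerp(c[0][1][0], c[1][1][0], tx)
    c01 = lerp(c[0][0][1], c[1][0][1], tx)
    c11 = lerp(c[0][1][1], c[1][1][1], tx)
    # ... then the two resulting values along y ...
    c0 = lerp(c00, c10, ty)
    c1 = lerp(c01, c11, ty)
    # ... and finally along z
    return lerp(c0, c1, tz)
```

Tricubic interpolation follows the same pattern with a cubic kernel over a 4×4×4 neighborhood, giving a smoother but more expensive reconstruction.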
The value of a voxel may represent various properties. In CT scans, the values are Hounsfield units, giving the opacity of material to X-rays. [1]:29 Different types of value are acquired from MRI or ultrasound. Voxels can contain multiple scalar values, essentially vector data; in the case of ultrasound scans with B-mode and Doppler data, density and volumetric flow rate are captured as separate channels of data relating to the same voxel positions.<br />
Other values may be useful for immediate 3D rendering, such as a surface normal vector and color.<br />
Uses<br />
Common uses of voxels include volumetric imaging in medicine and representation of terrain in games and<br />
simulations. Voxel terrain is used instead of a heightmap because of its ability to represent overhangs, caves, arches,<br />
and other <strong>3D</strong> terrain features. These concave features cannot be represented in a heightmap due to only the top 'layer'<br />
of data being represented, leaving everything below it filled (the volume that would otherwise be the inside of the<br />
caves, or the underside of arches or overhangs).<br />
Visualization<br />
A volume containing voxels can be visualized either by direct volume rendering or by the extraction of polygon<br />
iso-surfaces which follow the contours of given threshold values. The marching cubes algorithm is often used for<br />
iso-surface extraction, however other methods exist as well.<br />
Computer gaming<br />
• C4 Engine is a game engine that uses voxels for in game terrain and has a voxel editor for its built in level editor.<br />
C4 Engine uses a LOD system with its voxel terrain that was developed by the game engine's creator. All games<br />
using the current or newer versions of the engine have the ability to use voxels.<br />
• Upcoming Miner Wars 2081 uses their own Voxel Rage engine to let the user deform the terrain of asteroids<br />
allowing tunnels to be formed.<br />
• Many NovaLogic games have used voxel-based rendering technology, including the Delta Force, Armored Fist<br />
and Comanche series.<br />
• Westwood Studios' Command & Conquer: Tiberian Sun and Command & Conquer: Red Alert 2 use voxels to<br />
render most vehicles.<br />
• Westwood Studios' Blade Runner video game used voxels to render characters and artifacts.
• Outcast, a game made by Belgian developer Appeal, sports outdoor landscapes that are rendered by a voxel<br />
engine. [2]<br />
• The videogame Amok for the Sega Saturn makes use of voxels in its scenarios.<br />
• The computer game Vangers uses voxels for its two-level terrain system. [3]<br />
• The computer game Thunder Brigade was based entirely on a voxel renderer which, according to developer BlueMoon Interactive,<br />
made video cards redundant and offered increasing detail with proximity rather than decreasing detail.<br />
• Master of Orion III uses voxel <strong>graphics</strong> to render space battles and solar systems. Battles displaying 1000 ships at<br />
a time were rendered slowly on computers without hardware <strong>graphics</strong> acceleration.<br />
• Sid Meier's Alpha Centauri uses voxel models to render units.<br />
• Build engine first-person shooter games Shadow Warrior and Blood use voxels instead of sprites as an option for<br />
many of the item pickups and scenery. Duke Nukem <strong>3D</strong> has an optional fan-made voxel model pack, which<br />
contains the models from the high-resolution pack converted to voxels.<br />
• Crysis uses a combination of heightmaps and voxels for its terrain system.<br />
• Worms 4: Mayhem uses a "poxel" (polygon and voxel) engine to simulate land deformation similar to the older<br />
2D Worms games.<br />
• The multiplayer role-playing game Hexplore uses a voxel engine that allows the player to rotate the isometrically<br />
rendered playfield.<br />
• Voxelstein <strong>3D</strong> also uses voxels for fully destructible environments. [4]<br />
• The upcoming computer game Voxatron, produced by Lexaloffle, will be composed and generated fully using<br />
voxels. [5] [6]<br />
• Ace of Spades uses Ken Silverman's Voxlap engine.<br />
• The game Blockade Runner uses a Voxel engine.<br />
Voxel editors<br />
While scientific volume visualization doesn't require modifying the actual voxel data, voxel editors can be used to<br />
create art (especially <strong>3D</strong> pixel art) and models for voxel-based games. Some editors focus on a single approach<br />
to voxel editing, while others mix several approaches. Some common approaches are:<br />
• Slice based – The volume is sliced in one or more axes and the user can edit each image individually using 2D<br />
raster editor tools. These generally store color information in voxels.<br />
• Sculpture – Similar to sculpting in vector-based <strong>3D</strong> modeling, but with no topology constraints. These usually store density<br />
information in voxels and lack color information.<br />
• Building blocks – The user can add and remove blocks just like a construction set toy.<br />
• Minecraft is a game built around a voxel art editor, where the player builds voxel art in a monster-infested voxel<br />
world.<br />
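The slice-based approach above can be sketched as applying an ordinary 2D raster operation to one plane of a 3D array; the function name and sizes here are illustrative:

```python
N = 8
# volume[x][y][z] holds a color index per voxel; 0 means empty.
volume = [[[0] * N for _ in range(N)] for _ in range(N)]

def paint_rect_on_slice(volume, z, x0, y0, x1, y1, color):
    """A 2D raster-editor operation (filled rectangle) applied to the
    single z-slice of the volume that the user is currently editing."""
    for x in range(x0, x1):
        for y in range(y0, y1):
            volume[x][y][z] = color

# Edit slice z = 3 exactly as a 2D image; the rest of the volume is untouched.
paint_rect_on_slice(volume, z=3, x0=2, y0=2, x1=6, y1=6, color=5)
```

Any 2D tool (flood fill, line, brush) generalizes the same way, one slice at a time.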
Voxel editors for games<br />
Many game developers use in-house editors that are not released to the public, but a few games have publicly<br />
available editors, some of them created by players.<br />
• Slice based fan-made Voxel Section Editor III for Command & Conquer: Tiberian Sun and Command &<br />
Conquer: Red Alert 2. [7]<br />
• SLAB6 and VoxEd are sculpture based voxel editors used by Voxlap engine games, [8] [9] including Voxelstein <strong>3D</strong><br />
and Ace of Spades.<br />
• The official Sandbox 2 editor for CryEngine 2 games (including Crysis) has support for sculpting voxel based<br />
terrain. [10]
General purpose voxel editors<br />
There are a few voxel editors available that are not tied to specific games or engines. They can be used as<br />
alternatives or complements to traditional <strong>3D</strong> vector modeling.<br />
Extensions<br />
A generalization of a voxel is the doxel, or dynamic voxel. This is used in the case of a 4D dataset, for example, an<br />
image sequence that represents <strong>3D</strong> space together with another dimension such as time. In this way, an image could<br />
contain 100×100×100×100 doxels, which could be seen as a series of 100 frames of a 100×100×100 volume image<br />
(the equivalent for a <strong>3D</strong> image would be showing a 2D cross section of the image in each frame). Although storage<br />
and manipulation of such data requires large amounts of memory, it allows the representation and analysis of<br />
spacetime systems.<br />
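A doxel dataset is naturally indexed as a 4D array, one volume per time step; a tiny sketch using a 4×4×4×4 stand-in for the 100×100×100×100 example:

```python
# A doxel (dynamic voxel) dataset: a time sequence of volumes,
# indexed [t][x][y][z]. At the article's illustrative size 100^4 this
# is 10^8 samples, which is why memory is the limiting factor.
T = X = Y = Z = 4   # tiny stand-in sizes

doxels = [[[[t + x + y + z for z in range(Z)]
            for y in range(Y)] for x in range(X)] for t in range(T)]

def frame(t):
    """One 3D volume image of the sequence — the analogue of one 2D
    frame of an ordinary video."""
    return doxels[t]

total = T * X * Y * Z   # total number of doxels stored
```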
References<br />
[1] Novelline, Robert. Squire's Fundamentals of Radiology. Harvard University Press. 5th edition. 1997. ISBN 0674833392.<br />
[2] "OUTCAST - Technology: Paradise" (http://web.archive.org/web/20100615185127/http://www.outcast-thegame.com/tech/paradise.htm). outcast-thegame.com. Archived from the original (http://www.outcast-thegame.com/tech/paradise.htm) on 2010-06-15. Retrieved 2009-12-20.<br />
[3] "VANGERS" (http://www.kdlab.com/vangers/eng/features.html). kdlab.com. Retrieved 2009-12-20.<br />
[4] http://voxelstein3d.sourceforge.net/<br />
[5] Ars Technica. "We
Z-buffering<br />
In computer <strong>graphics</strong>, z-buffering is the management of image depth<br />
coordinates in three-dimensional (3-D) <strong>graphics</strong>, usually done in<br />
hardware, sometimes in software. It is one solution to the visibility<br />
problem, which is the problem of deciding which elements of a<br />
rendered scene are visible, and which are hidden. The painter's<br />
algorithm is another common solution which, though less efficient, can<br />
also handle non-opaque scene elements. Z-buffering is also known as<br />
depth buffering.<br />
When an object is rendered by a <strong>3D</strong> <strong>graphics</strong> card, the depth of a<br />
generated pixel (z coordinate) is stored in a buffer (the z-buffer or<br />
depth buffer). This buffer is usually arranged as a two-dimensional<br />
array (x-y) with one element for each screen pixel. If another object of<br />
the scene must be rendered in the same pixel, the <strong>graphics</strong> card<br />
compares the two depths and chooses the one closer to the observer.<br />
The chosen depth is then saved to the z-buffer, replacing the old one.<br />
In the end, the z-buffer will allow the <strong>graphics</strong> card to correctly<br />
reproduce the usual depth perception: a close object hides a farther one. This is called z-culling.<br />
(Figure: Z-buffer data)<br />
The granularity of a z-buffer has a great influence on the scene quality: a 16-bit z-buffer can result in artifacts (called<br />
"z-fighting") when two objects are very close to each other. A 24-bit or 32-bit z-buffer behaves much better,<br />
although the problem cannot be entirely eliminated without additional algorithms. An 8-bit z-buffer is almost never<br />
used since it has too little precision.<br />
Uses<br />
Z-buffer data in the area of video editing permits one to combine 2D video elements in <strong>3D</strong> space, enabling virtual<br />
sets, "ghostly passing through wall" effects, and complex effects like mapping of video onto surfaces. An application<br />
for Maya, called IPR, permits post-rendering texturing on objects, utilizing multiple buffers such as<br />
z-buffers, alpha, object ID, UV coordinates and any other data deemed useful to the post-production process, saving time<br />
otherwise wasted on re-rendering the video.<br />
Z-buffer data obtained from rendering a surface from a light's POV permits the creation of shadows in a scanline<br />
renderer, by projecting the z-buffer data onto the ground and affected surfaces below the object. This is the same<br />
process used in non-raytracing modes by the free and open-source <strong>3D</strong> application Blender.<br />
Developments<br />
Even with small enough granularity, quality problems may arise when precision in the z-buffer's distance values is<br />
not spread evenly over distance. Nearer values are much more precise (and hence can display closer objects better)<br />
than values which are farther away. Generally, this is desirable, but sometimes it will cause artifacts to appear as<br />
objects become more distant. A variation on z-buffering which results in more evenly distributed precision is called<br />
w-buffering (see below).<br />
At the start of a new scene, the z-buffer must be cleared to a defined value, usually 1.0, because this value is the<br />
upper limit (on a scale of 0 to 1) of depth, meaning that no object is present at this point through the viewing<br />
frustum.
The invention of the z-buffer concept is most often attributed to Edwin Catmull, although Wolfgang Straßer also<br />
described this idea in his 1974 Ph.D. thesis (see Note 1).<br />
On recent PC <strong>graphics</strong> cards (1999–2005), z-buffer management uses a significant chunk of the available memory<br />
bandwidth. Various methods have been employed to reduce the performance cost of z-buffering, such as lossless<br />
compression (computer resources to compress/decompress are cheaper than bandwidth) and ultra fast hardware<br />
z-clear that makes obsolete the "one frame positive, one frame negative" trick (skipping inter-frame clear altogether<br />
using signed numbers to cleverly check depths).<br />
Z-culling<br />
In rendering, z-culling is early pixel elimination based on depth, a method that provides an increase in performance<br />
when rendering of hidden surfaces is costly. It is a direct consequence of z-buffering, where the depth of each pixel<br />
candidate is compared to the depth of existing geometry behind which it might be hidden.<br />
When using a z-buffer, a pixel can be culled (discarded) as soon as its depth is known, which makes it possible to<br />
skip the entire process of lighting and texturing a pixel that would not be visible anyway. Also, time-consuming<br />
pixel shaders will generally not be executed for the culled pixels. This makes z-culling a good optimization<br />
candidate in situations where fillrate, lighting, texturing or pixel shaders are the main bottlenecks.<br />
While z-buffering allows the geometry to be unsorted, sorting polygons by increasing depth (thus using a reverse<br />
painter's algorithm) allows each screen pixel to be rendered fewer times. This can increase performance in<br />
fillrate-limited scenes with large amounts of overdraw, but if not combined with z-buffering it suffers from severe<br />
problems such as:<br />
• polygons might occlude one another in a cycle (e.g. : triangle A occludes B, B occludes C, C occludes A), and<br />
• there is no canonical "closest" point on a triangle (e.g.: no matter whether one sorts triangles by their centroid or<br />
closest point or furthest point, one can always find two triangles A and B such that A is "closer" but in reality B<br />
should be drawn first).<br />
As such, a reverse painter's algorithm cannot be used as an alternative to Z-culling (without strenuous<br />
re-engineering), except as an optimization to Z-culling. For example, an optimization might be to keep polygons<br />
sorted according to x/y-location and z-depth to provide bounds, in an effort to quickly determine if two polygons<br />
might possibly have an occlusion interaction.<br />
Algorithm<br />
Given: A list of polygons {P1, P2, ..., Pn}<br />
Output: A COLOR array, which displays the intensity of the visible polygon surfaces.<br />
Initialize (note: z-depth and z-buffer(x,y) are positive, increasing away from the viewer):<br />
    z-buffer(x,y) = max depth<br />
    COLOR(x,y) = background color<br />
Begin:<br />
    for (each polygon P in the polygon list) do {<br />
        for (each pixel (x,y) that intersects P) do {<br />
            Calculate z-depth of P at (x,y)<br />
            if (z-depth &lt; z-buffer[x,y]) then {<br />
                z-buffer[x,y] = z-depth<br />
                COLOR(x,y) = Intensity of P at (x,y)<br />
            }<br />
        }<br />
    }<br />
display COLOR array.<br />
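The algorithm translates almost line for line into runnable code. A minimal Python sketch, rasterizing two overlapping axis-aligned rectangles at constant depth (real renderers interpolate z per pixel; buffer size and colors are illustrative):

```python
W, H = 8, 8
MAX_DEPTH = float('inf')
BACKGROUND = 0

# At the start of the scene, the z-buffer is cleared to the maximum
# depth and the color buffer to the background color.
zbuf = [[MAX_DEPTH] * W for _ in range(H)]
color = [[BACKGROUND] * W for _ in range(H)]

def draw_rect(x0, y0, x1, y1, depth, col):
    """Rasterize a constant-depth rectangle with the depth test."""
    for y in range(y0, y1):
        for x in range(x0, x1):
            if depth < zbuf[y][x]:   # closer than what is stored?
                zbuf[y][x] = depth   # then it wins the z test
                color[y][x] = col    # and its color is kept
            # otherwise the pixel is z-culled: no shading is done

# Submission order does not matter: the nearer surface still wins.
draw_rect(0, 0, 6, 6, depth=5.0, col=1)   # far rectangle, drawn first
draw_rect(3, 3, 8, 8, depth=2.0, col=2)   # near rectangle, overlaps it
```

In the overlap region the near rectangle's color survives even though it was drawn second, which is exactly the property that lets z-buffered geometry be submitted unsorted.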
Mathematics<br />
The range of depth values in camera space (see <strong>3D</strong> projection) to be rendered is often defined between a near<br />
value $n$ and a far value $f$. After a perspective transformation, the new value of $z$, or $z'$, is defined by:<br />
$z' = \frac{f+n}{f-n} + \frac{1}{z}\left(\frac{-2fn}{f-n}\right)$<br />
After an orthographic projection, the new value of $z$, or $z'$, is defined by:<br />
$z' = 2 \cdot \frac{z-n}{f-n} - 1$<br />
where $z$ is the old value of $z$ in camera space, and is sometimes called $w$ or $w'$.<br />
The resulting values of $z'$ are normalized between the values of $-1$ and $1$, where the near plane is at $-1$ and the far<br />
plane is at $1$. Values outside of this range correspond to points which are not in the viewing frustum, and<br />
shouldn't be rendered.<br />
Fixed-point representation<br />
Typically, these values are stored in the z-buffer of the hardware <strong>graphics</strong> accelerator in fixed point format. First they<br />
are normalized to a more common range, which is $[0,1]$, by substituting the appropriate conversion $z'_{2} = \frac{z'_{1}+1}{2}$<br />
into the previous formula:<br />
$z' = \frac{f}{f-n} + \frac{1}{z}\left(\frac{-fn}{f-n}\right)$<br />
Second, the above formula is multiplied by $S = 2^{d}-1$, where $d$ is the depth of the z-buffer (usually 16, 24 or 32<br />
bits), and the result is rounded to an integer: [1]<br />
$z' = f(z) = \left\lfloor \left(2^{d}-1\right) \cdot \left( \frac{f}{f-n} + \frac{1}{z}\left(\frac{-fn}{f-n}\right) \right) \right\rfloor$<br />
This formula can be inverted and differentiated in order to calculate the z-buffer resolution (the 'granularity' mentioned<br />
earlier). The inverse of the above $f(z)$:<br />
$z = \frac{-fn}{\frac{z'}{S}(f-n) - f}$<br />
where $S = 2^{d}-1$.<br />
The z-buffer resolution in terms of camera space would be the incremental value resulting from the smallest change<br />
in the integer stored in the z-buffer, which is $+1$ or $-1$. Therefore this resolution can be calculated from the derivative<br />
of $z$ as a function of $z'$:<br />
$\frac{dz}{dz'} = \frac{fn(f-n)}{S \cdot \left(\frac{z'}{S}(f-n) - f\right)^{2}}$<br />
Expressing it back in camera space terms, by substituting $z'$ by the above $f(z)$, so that $\frac{z'}{S}(f-n) - f = \frac{-fn}{z}$:<br />
$\frac{dz}{dz'} = \frac{z^{2}(f-n)}{S \cdot fn} \approx \frac{z^{2}}{S \cdot n} \quad (f \gg n)$<br />
This shows that the values of $z'$ are grouped much more densely near the near plane, and much more sparsely<br />
farther away, resulting in better precision closer to the camera. The smaller the ratio $n/f$ is, the less precision<br />
there is far away; having the near plane set too closely is a common cause of undesirable rendering artifacts in<br />
more distant objects. [2]<br />
To implement a z-buffer, the values of $z'$ are linearly interpolated across screen space between the vertices of the<br />
current polygon, and these intermediate values are generally stored in the z-buffer in fixed point format.<br />
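The quantization and resolution behaviour discussed above can be checked numerically. A sketch with illustrative near/far/bit-depth values (the helper names are mine, not from any API):

```python
import math

def zbuffer_value(z, n, f, d):
    """Quantize camera-space depth z to a d-bit fixed-point z-buffer
    integer: normalize to [0, 1], scale by S = 2^d - 1, round down."""
    s = (1 << d) - 1
    z01 = f / (f - n) - (f * n) / (z * (f - n))
    return math.floor(s * z01)

def camera_z(v, n, f, d):
    """Inverse mapping: camera-space depth recovered from a stored integer."""
    s = (1 << d) - 1
    return -f * n / ((v / s) * (f - n) - f)

n, f, d = 0.1, 1000.0, 16   # illustrative near plane, far plane, bit depth

def step(z):
    """Camera-space distance covered by one z-buffer increment at depth z
    (the 'resolution', roughly z^2 / (S*n))."""
    v = zbuffer_value(z, n, f, d)
    return camera_z(v + 1, n, f, d) - camera_z(v, n, f, d)
```

With these values, one buffer step near the camera spans a fraction of a millimetre of depth, while a step near the far plane spans tens of units — the non-uniform precision the text describes.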
W-buffer<br />
To implement a w-buffer, the old values of $z$ in camera space, or $w$, are stored in the buffer, generally in floating<br />
point format. However, these values cannot be linearly interpolated across screen space from the vertices; they<br />
usually have to be inverted, interpolated, and then inverted again. The resulting values of $w$, as opposed to $z'$, are<br />
spaced evenly between near and far. There are implementations of the w-buffer that avoid the inversions<br />
altogether.<br />
Whether a z-buffer or w-buffer results in a better image depends on the application.<br />
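The "invert, interpolate, invert" step mentioned above is ordinary perspective-correct interpolation: $1/z$, not $z$, varies linearly across screen space. A one-edge sketch with illustrative endpoint depths:

```python
# Camera-space depths at the two endpoints of an edge.
z0, z1 = 2.0, 10.0
t = 0.5   # halfway across the edge *in screen space*

# Naive linear interpolation of z across screen space is wrong under
# perspective projection:
z_linear = z0 + t * (z1 - z0)   # gives 6.0

# Perspective-correct: 1/z interpolates linearly in screen space,
# so invert, interpolate, then invert again.
inv = (1.0 / z0) + t * (1.0 / z1 - 1.0 / z0)
z_correct = 1.0 / inv           # gives 10/3 ≈ 3.33, not 6.0
```

The screen-space midpoint of the edge is therefore much closer to the near endpoint in camera space, which is why w-buffers must pay for the two inversions per pixel (or avoid them with alternative formulations).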
References<br />
[1] The OpenGL Organization. "Open GL / FAQ 12 - The Depth buffer" (http://www.opengl.org/resources/faq/technical/depthbuffer.htm). Retrieved 2010-11-01.<br />
[2] Grégory Massal. "Depth buffer - the gritty details" (http://www.codermind.com/articles/Depth-buffer-tutorial.html). Retrieved 2008-08-03.<br />
External links<br />
• Learning to Love your Z-buffer (http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html)<br />
• Alpha-blending and the Z-buffer (http://www.sjbaker.org/steve/omniv/alpha_sorting.html)<br />
Notes<br />
Note 1: see W.K. Giloi, J.L. Encarnação, W. Straßer. "The Giloi’s School of Computer Graphics". Computer<br />
Graphics 35 4:12–16.
Z-fighting<br />
Z-fighting is a phenomenon in <strong>3D</strong> rendering that occurs when two or<br />
more primitives have similar values in the z-buffer. It is particularly<br />
prevalent with coplanar polygons, where two faces occupy essentially<br />
the same space, with neither in front. Affected pixels are rendered with<br />
fragments from one polygon or the other arbitrarily, in a manner<br />
determined by the precision of the z-buffer. It can also vary as the<br />
scene or camera is changed, causing one polygon to "win" the z test,<br />
then another, and so on. The overall effect is a flickering, noisy<br />
rasterization of two polygons which "fight" to color the screen pixels.<br />
This problem is usually caused by limited sub-pixel precision and<br />
floating point and fixed point round-off errors.<br />
(Figure: the effect seen on two coplanar polygons)<br />
Z-fighting can be reduced through the use of a higher resolution depth buffer, by z-buffering in some scenarios, or by<br />
simply moving the polygons further apart. Z-fighting which cannot be entirely eliminated in this manner is often<br />
resolved by the use of a stencil buffer, or by applying a post transformation screen space z-buffer offset to one<br />
polygon which does not affect the projected shape on screen, but does affect the z-buffer value to eliminate the<br />
overlap during pixel interpolation and comparison. Where z-fighting is caused by different transformation paths in<br />
hardware for the same geometry (for example in a multi-pass rendering scheme) it can sometimes be resolved by<br />
requesting that the hardware uses invariant vertex transformation.<br />
The more z-buffer precision one uses, the less likely it is that z-fighting will be encountered. But for coplanar<br />
polygons, the problem is inevitable unless corrective action is taken.<br />
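The dependence on depth-buffer precision can be checked numerically: under the standard perspective depth mapping (near/far values below are illustrative), two surfaces one unit apart at mid-range quantize to the same 16-bit depth value — and so fight — but to distinct 24-bit values:

```python
import math

def depth_bits(z, n, f, d):
    """Quantize camera-space depth z to a d-bit z-buffer integer using
    the standard perspective mapping (z' normalized to [0, 1])."""
    s = (1 << d) - 1
    z01 = f / (f - n) - (f * n) / (z * (f - n))
    return math.floor(s * z01)

n, f = 0.1, 1000.0
z_a, z_b = 500.0, 501.0   # two nearly coplanar surfaces, 1 unit apart

# Same stored depth at 16 bits -> the z test is decided by round-off
# and the surfaces flicker; distinct at 24 bits -> no fighting here.
fight_16 = depth_bits(z_a, n, f, 16) == depth_bits(z_b, n, f, 16)
fight_24 = depth_bits(z_a, n, f, 24) == depth_bits(z_b, n, f, 24)
```

For truly coplanar polygons even 24 or 32 bits cannot help, since the two depths are identical by construction; that case needs the geometric or offset-based fixes described above.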
As the distance between the near and far clip planes increases, and in particular when the near plane is selected close to the eye,<br />
the likelihood that you will encounter z-fighting between primitives grows. With large virtual<br />
environments there is inevitably an inherent conflict between the need to resolve visibility in the distance and in the<br />
foreground; for example, in a space flight simulator, if you draw a distant galaxy to scale, you will not have the<br />
precision to resolve visibility on any cockpit geometry in the foreground (although even a numerical representation<br />
would present problems prior to z-buffered rendering). To mitigate these problems, z-buffer precision is weighted<br />
towards the near clip plane, but this is not the case with all visibility schemes, and it is insufficient to eliminate all<br />
z-fighting issues.<br />
(Figure: demonstration of z-fighting with multiple colors and textures over a grey background)
Appendix<br />
<strong>3D</strong> computer <strong>graphics</strong> software<br />
<strong>3D</strong> computer <strong>graphics</strong> software refers to programs used to create <strong>3D</strong> computer-generated imagery. This article<br />
covers only some of the software used.<br />
Uses<br />
<strong>3D</strong> modelers are used in a wide variety of industries. The medical industry uses them to create detailed models of<br />
organs. The movie industry uses them to create and manipulate characters and objects for animated and real-life<br />
motion pictures. The video game industry uses them to create assets for video games. The science sector uses them<br />
to create highly detailed models of chemical compounds. The architecture industry uses them to create models of<br />
proposed buildings and landscapes. The engineering community uses them to design new devices, vehicles and<br />
structures, as well as for a host of other purposes. There are typically many stages in the "pipeline" that studios and<br />
manufacturers use to create <strong>3D</strong> objects for film, games, and production of hard goods and structures.<br />
Features<br />
Many <strong>3D</strong> modelers are designed to model various real-world entities, from plants to automobiles to people. Some are<br />
specially designed to model certain objects, such as chemical compounds or internal organs.<br />
<strong>3D</strong> modelers allow users to create and alter models via their <strong>3D</strong> mesh. Users can add, subtract, stretch and otherwise<br />
change the mesh as they desire. Models can be viewed from a variety of angles, usually simultaneously. Models can<br />
be rotated and the view can be zoomed in and out.<br />
<strong>3D</strong> modelers can export their models to files, which can then be imported into other applications as long as the<br />
metadata is compatible. Many modelers allow importers and exporters to be plugged-in, so they can read and write<br />
data in the native formats of other applications.<br />
Most <strong>3D</strong> modelers contain a number of related features, such as ray tracers and other rendering alternatives and<br />
texture mapping facilities. Some also contain features that support or allow animation of models. Some may be able<br />
to generate full-motion video of a series of rendered scenes (i.e. animation).<br />
Commercial packages<br />
A basic comparison including release date/version information can be found on the Comparison of <strong>3D</strong> computer<br />
<strong>graphics</strong> software page. A comprehensive comparison of significant <strong>3D</strong> packages can be found at CG Society Wiki<br />
[1] and the TDT<strong>3D</strong> <strong>3D</strong> applications 2007 comparisons table. [2]<br />
• 3ds Max (Autodesk), originally called <strong>3D</strong> Studio MAX, is a comprehensive and versatile <strong>3D</strong> application used in<br />
film, television, video games and architecture for Windows and Apple Macintosh. It can be extended and<br />
customized through its SDK or scripting using MAXScript. It can use third party rendering options such as Brazil<br />
R/S, finalRender and V-Ray.<br />
• AC<strong>3D</strong> (Inivis) is a <strong>3D</strong> modeling application that began in the '90s on the Amiga platform. It is used in a number of<br />
industries; MathWorks actively recommends it in many of their aerospace-related articles [3] due to its price and<br />
compatibility. AC<strong>3D</strong> does not feature its own renderer, but can generate output files for both RenderMan and<br />
POV-Ray, among others.<br />
• Aladdin4D (DiscreetFX), first created for the Amiga, was originally developed by Adspec Programming. After<br />
acquisition by DiscreetFX, it is multi-platform for Mac OS X, Amiga OS 4.1, MorphOS, Linux, AROS and<br />
Windows.<br />
• Animation:Master from HASH, Inc is a modeling and animation package that focuses on ease of use. It is a<br />
spline-based modeler. Its strength lies in character animation.<br />
• Bryce (DAZ Productions) is most famous for landscapes and creating 'painterly' renderings, as well as its unique<br />
user interface.<br />
• Carrara (DAZ Productions) is a fully featured <strong>3D</strong> toolset for modeling, texturing, scene rendering and animation.<br />
• Cinema 4D (MAXON) is a light (Prime) to full-featured (Studio) <strong>3D</strong> package, depending on the version used.<br />
Although used in film, usually for 2.5D work, Cinema 4D's largest user base is in the television motion <strong>graphics</strong> and<br />
design/visualisation arenas. Originally developed for the Amiga, it is also available for Mac OS X and Windows.<br />
• CityEngine (Procedural Inc) is a <strong>3D</strong> modeling application specialized in the generation of three dimensional<br />
urban environments. With its procedural modeling approach, CityEngine enables the efficient creation of detailed<br />
large-scale <strong>3D</strong> city models. It is available for Mac OS X, Windows and Linux.<br />
• Cobalt is a parametric-based Computer-aided design (CAD) and <strong>3D</strong> modeling software for both the Macintosh<br />
and Microsoft Windows. It integrates wireframe, freeform surfacing, feature-based solid modeling and<br />
photo-realistic rendering (see Ray tracing), and animation.<br />
• Electric Image Animation System (EIAS<strong>3D</strong>) is a <strong>3D</strong> animation and rendering package available on both Mac<br />
OS X and Windows. Mostly known for its rendering quality and rendering speed it does not include a built-in<br />
modeler. The popular film Pirates of the Caribbean [4] and the television series Lost [5] used the software.<br />
• form•Z (AutoDesSys, Inc.) is a general purpose solid/surface <strong>3D</strong> modeler. Its primary use is for modeling, but it<br />
also features photo realistic rendering and object-centric animation support. form•Z is used in architecture,<br />
interior design, illustration, product design, and set design. It supports plug-ins and scripts, has import/export<br />
capabilities and was first released in 1991. It is currently available for both Mac OS X and Windows.<br />
• Grome is a professional outdoor scene modeler (terrain, water, vegetation) for games and other <strong>3D</strong> real-time<br />
applications.<br />
• Houdini (Side Effects Software) is used for visual effects and character animation. It was used in Disney's feature<br />
film The Wild. [6] Houdini uses a non-standard interface that it refers to as a "NODE system". It has a hybrid<br />
micropolygon-raytracer renderer, Mantra, but it also has built-in support for commercial renderers like Pixar's<br />
RenderMan and mental ray.<br />
• Inventor (Autodesk) The Autodesk Inventor is for <strong>3D</strong> mechanical design, product simulation, tooling creation,<br />
and design communication.<br />
• LightWave <strong>3D</strong> (NewTek), first developed for the Amiga, was originally bundled as part of the Video Toaster<br />
package and entered the market as a low cost way for TV production companies to create quality CGI for their<br />
programming. It first gained public attention with its use in the TV series Babylon 5 [7] and is used in several<br />
contemporary TV series. [8] [9] [10] Lightwave is also used in a variety of modern film productions. [11] [12] It is<br />
available for both Windows and Mac OS X.<br />
• MASSIVE is a <strong>3D</strong> animation system for generating crowd-related visual effects, targeted for use in film and<br />
television. Originally developed for controlling the large-scale CGI battles in The Lord of the Rings, [13] Massive<br />
has become an industry standard for digital crowd control in high-end animation and has been used on several<br />
other big-budget films. It is available for various Unix and Linux platforms as well as Windows.<br />
• Maya (Autodesk) is currently used in the film, television, and gaming industry. Maya has developed over the<br />
years into an application platform in and of itself through extendability via its MEL programming language. It is<br />
available for Windows, Linux and Mac OS X.
• Modo (Luxology) is a subdivision modeling, texturing and rendering tool with support for camera motion and<br />
morphs/blendshapes, and is now used in the television industry. It is available for both Windows and Mac OS X.<br />
• Mudbox is a high resolution brush-based <strong>3D</strong> sculpting program that claims to be the first of its type. The<br />
software was acquired by Autodesk in 2007, and has a current rival in its field known as ZBrush (see below).<br />
• Mycosm is a virtual world development engine that uses the open source Python<br />
programming language and currently runs on Windows using the DirectX engine. The software was released in<br />
2011, and allows photo-realistic simulations to be created that feature physics, atmospherics, terrain sculpting, CG<br />
foliage, astronomically correct sun and stars, fluid dynamics and many other features. Mycosm is<br />
created by Simmersion Holdings Pty. in Canberra, the capital of Australia.<br />
• NX (Siemens PLM Software) is an integrated suite of software for computer-aided mechanical design<br />
(CAD), computer-aided manufacturing (CAM), and computer-aided engineering (CAE), formed by<br />
combining the former Uni<strong>graphics</strong> and SDRC I-deas software product lines. [14] NX is currently available for the<br />
following operating systems: Windows XP and Vista, Apple Mac OS X, [15] and Novell SUSE Linux. [16]<br />
• Poser (Smith Micro) is a <strong>3D</strong> rendering and animation program optimized for models that depict<br />
the human figure in three-dimensional form, and specialized for adjusting features of preexisting character<br />
models via varying parameters, as well as for posing and rendering models and characters. It includes some<br />
specialized tools for walk cycle creation, cloth and hair.<br />
• RealFlow simulates and renders particle systems of rigid bodies and fluids.<br />
• Realsoft<strong>3D</strong> (formerly Real<strong>3D</strong>) is full-featured <strong>3D</strong> modeling, animation, simulation and rendering software available for<br />
Windows, Linux, Mac OS X and Irix.<br />
• Rhinoceros <strong>3D</strong> is a commercial modeling tool which has excellent support for freeform NURBS editing.<br />
• Shade <strong>3D</strong> is a commercial modeling/rendering/animation tool from Japan with import/export format support for<br />
Adobe, Social Worlds, and Quicktime among others.<br />
• Silo (Nevercenter) is a subdivision-surface modeler available for Mac OS X and Windows. Silo does not include<br />
a renderer. Silo is the bundled modeler for the Electric Image Animation System suite.<br />
• SketchUp Pro (Google) is a <strong>3D</strong> modeling package that features a sketch-based modeling approach. It has a pro<br />
version which supports 2D and <strong>3D</strong> model export functions among other features. A free version is integrated with<br />
Google Earth and limits export to Google's "<strong>3D</strong> Warehouse", where users can share their content.<br />
• Softimage (Autodesk) Softimage (formerly Softimage|XSI) is a <strong>3D</strong> modeling and animation package that<br />
integrates with mental ray rendering. It is feature-similar to Maya and 3ds Max and is used in the production of<br />
professional films, commercials, video games, and other media.<br />
• Solid Edge ( Siemens PLM Software) is a commercial application for design, drafting, analysis, and simulation of<br />
products, systems, machines and tools. All versions include feature-based parametric modeling, assembly<br />
modeling, drafting, sheetmetal, weldment, freeform surface design, and data management. [17]<br />
Application-programming interfaces enable scripting in Visual Basic and C programming.<br />
• solidThinking (solidThinking) is a <strong>3D</strong> solid/surface modeling and rendering suite which features a construction<br />
tree method of development. The tree is the "history" of the model construction process and allows real-time<br />
updates when modifications are made to points, curves, parameters or entire objects.<br />
• SolidWorks (SolidWorks Corporation) is an application used for the design, detailing and validation of products,<br />
systems, machines and toolings. All versions include modeling, assemblies, drawing, sheetmetal, weldment, and<br />
freeform surfacing functionality. It also has support for scripting in Visual Basic and C.<br />
• Spore (Maxis) is a game that revolutionized the gaming industry by allowing users to design their own fully<br />
functioning creatures with a very rudimentary, easy-to-use interface. The game includes a COLLADA exporter,<br />
so models can be downloaded and imported into any other <strong>3D</strong> software listed here that supports the COLLADA
format. Models can also be directly imported into game development software such as Unity (game engine).<br />
• Swift <strong>3D</strong> (Electric Rain) is a relatively inexpensive <strong>3D</strong> design, modeling, and animation application targeted to<br />
entry-level <strong>3D</strong> users and Adobe Flash designers. Swift <strong>3D</strong> supports vector and raster-based <strong>3D</strong> animations for<br />
Adobe Flash and Microsoft Silverlight XAML.<br />
• Vue (E-on Software) is a tool for creating, animating and rendering natural <strong>3D</strong> environments. It was most<br />
recently used to create the background jungle environments in the 2nd and 3rd Pirates of the Caribbean films. [18]<br />
• ZBrush (Pixologic) is a digital sculpting tool that combines <strong>3D</strong>/2.5D modeling, texturing and painting, and is<br />
available for Mac OS X and Windows. It is used to create normal maps that make low-resolution models look<br />
more detailed.<br />
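Normal mapping, which tools like ZBrush generate maps for, works by storing a perturbed surface normal in a texture's RGB channels; at shading time the decoded normal stands in for the geometric one. A minimal sketch of the decode-and-light step (plain Python with illustrative texel values, not ZBrush's actual pipeline):

```python
import math

def decode_normal(r, g, b):
    """Map 8-bit RGB texel values to a unit normal in [-1, 1]^3."""
    n = [c / 255.0 * 2.0 - 1.0 for c in (r, g, b)]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]

def lambert(normal, light_dir):
    """Diffuse intensity for a unit normal and unit light direction."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

# The conventional "flat" texel (128, 128, 255) decodes to roughly
# (0, 0, 1); other texels tilt the normal, darkening the surface and
# creating the illusion of detail the low-poly mesh does not have.
flat = decode_normal(128, 128, 255)
tilted = decode_normal(200, 128, 200)
light = [0.0, 0.0, 1.0]
print(lambert(flat, light))    # close to 1.0
print(lambert(tilted, light))  # darker: the surface appears tilted
```

This is why normal maps look predominantly light blue: most texels encode a normal near (0, 0, 1).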
Free packages<br />
• <strong>3D</strong> Canvas (now called <strong>3D</strong> Crafter) is a <strong>3D</strong> modeling and animation tool available in a freeware version, as well<br />
as paid versions (<strong>3D</strong> Canvas Plus and <strong>3D</strong> Canvas Pro).<br />
• Anim8or is a proprietary freeware <strong>3D</strong> rendering and animation package.<br />
• Art of Illusion is a free software package developed under the GPL.<br />
• AutoQ<strong>3D</strong> Community is not a professional CAD program; it is aimed at beginners who want to create quick<br />
<strong>3D</strong> designs. It is a free software package developed under the GPL.<br />
• Blender (Blender Foundation) is a free, open source, <strong>3D</strong> studio for animation, modeling, rendering, and texturing<br />
offering a feature set comparable to commercial <strong>3D</strong> animation suites. It is developed under the GPL and is<br />
available on all major platforms including Windows, OS X, Linux, BSD, Solaris and IRIX.<br />
• Cheetah<strong>3D</strong> is proprietary freeware for Apple Macintosh computers primarily aimed at amateur <strong>3D</strong> artists, with<br />
some medium- and high-end features.<br />
• DAZ Studio is a free <strong>3D</strong> rendering tool set for adjusting parameters of preexisting models, posing and rendering<br />
them in full <strong>3D</strong> scene environments. It imports objects created in Poser and is similar to that program, but with<br />
fewer features.<br />
• DX Studio is a complete integrated development environment for creating interactive <strong>3D</strong> <strong>graphics</strong>. The system<br />
comprises both a real-time <strong>3D</strong> engine and a suite of editing tools, and is promoted as the first product to offer a<br />
complete range of tools in a single IDE.<br />
• Evolver is a portal for <strong>3D</strong> computer characters, incorporating a human (humanoid) builder and a cloner that<br />
works from a photograph.<br />
• FaceGen is a source of human face models for other programs. Users are able to generate face models either<br />
randomly or from input photographs.<br />
• Geist<strong>3D</strong> is a free software program for real-time modeling and rendering three-dimensional <strong>graphics</strong> and<br />
animations.<br />
• GMax<br />
• GPure is software for preparing scenes/meshes from digital mock-ups for multiple uses.<br />
• K-<strong>3D</strong> is a GNU modeling, animation, and rendering system available on Linux and Win32. It makes use of<br />
RenderMan-compliant render engines. It features scene graph procedural modelling similar to that found in<br />
Houdini.<br />
• MakeHuman is a GPL program that generates <strong>3D</strong> parametric humanoids.<br />
• MeshLab is a free Windows, Linux and Mac OS X application for visualizing, simplifying, processing and<br />
converting large three-dimensional meshes to or from a variety of <strong>3D</strong> file formats.<br />
• NaroCAD is a fully fledged and extensible <strong>3D</strong> parametric modeling CAD application based on OpenCascade.<br />
The project's goal is parametric solid modeling comparable to well-known commercial solutions.<br />
• OpenFX is a modeling and animation studio, distributed under the GPL.
• Seamless3d is NURBS-based modelling and animation software with much of the focus on creating avatars<br />
optimized for real-time animation. It is free, open source under the MIT license.<br />
• trueSpace (Caligari Corporation) is a <strong>3D</strong> program available for Windows, although Caligari first got its start on<br />
the Amiga platform. trueSpace features modeling, animation, <strong>3D</strong>-painting, and rendering<br />
capabilities. In 2009, Microsoft purchased trueSpace and it is now available completely free of charge.<br />
• Wings <strong>3D</strong> is a BSD-licensed subdivision modeler.<br />
Renderers<br />
• <strong>3D</strong>elight is a proprietary RenderMan-compliant renderer.<br />
• Aqsis is a free and open-source rendering suite compliant with the RenderMan standard.<br />
• Brazil is a rendering engine for 3ds Max, Rhino and VIZ.<br />
• FinalRender is a photorealistic renderer for Maya and 3ds Max developed by Cebas, a German company.<br />
• FPrime for Lightwave adds a very fast preview and can in many cases be used for final rendering.<br />
• Gelato is a hardware-accelerated, non-real-time renderer created by <strong>graphics</strong> card manufacturer NVIDIA.<br />
• Indigo Renderer is an unbiased photorealistic renderer that uses XML for scene description. Exporters are<br />
available for Blender, Maya (Mti), form•Z, Cinema4D, Rhino, and 3ds Max.<br />
• Kerkythea is a freeware rendering system that supports raytracing. Currently, it can be integrated with 3ds Max,<br />
Blender, SketchUp, and Silo (generally any software that can export files in OBJ and 3DS formats). Kerkythea is a<br />
standalone renderer, using physically accurate materials and lighting.<br />
• LuxRender is an unbiased open source rendering engine featuring Metropolis light transport.<br />
• Maxwell Render is a multi-platform renderer which forgoes conventional raytracing, global-illumination and<br />
radiosity approximations in favor of simulating light with a virtual electromagnetic spectrum, resulting in very<br />
authentic-looking renders. It was the first unbiased renderer to market.<br />
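"Unbiased" here has a precise Monte Carlo meaning: the estimator's expected value equals the true answer, so error shows up as noise that averages away with more samples rather than as systematic artifacts. A toy illustration of the principle (plain Python, estimating a known integral rather than rendering anything):

```python
import random

def mc_estimate(f, n, seed=0):
    """Unbiased Monte Carlo estimate of the integral of f over [0, 1]."""
    rng = random.Random(seed)
    # Each uniform sample yields an unbiased single-sample estimate;
    # averaging n of them stays unbiased while shrinking the variance.
    return sum(f(rng.random()) for _ in range(n)) / n

f = lambda x: 3.0 * x * x        # integral over [0, 1] is exactly 1.0
for n in (10, 1000, 100000):
    print(n, mc_estimate(f, n))  # noisy, but converges toward 1.0
```

A biased renderer, by contrast, may converge faster but toward a subtly incorrect image; an unbiased one trades speed for the guarantee that only noise, never systematic error, remains.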
• mental ray is another popular renderer, bundled by default with most of the high-end packages. It is now owned<br />
by NVIDIA.<br />
• Octane Render is an unbiased GPU-accelerated renderer based on Nvidia CUDA.<br />
• Pixar's PhotoRealistic RenderMan is a renderer, used in many studios. Animation packages such as 3ds Max and<br />
Maya can pipeline to RenderMan to do all the rendering.<br />
• Pixie is an open source photorealistic renderer.<br />
• POV-Ray (or The Persistence of Vision Raytracer) is a freeware (with source) ray tracer written for multiple<br />
platforms.<br />
• Sunflow is an open source, photo-realistic renderer written in Java.<br />
• Turtle (Illuminate Labs) is an alternative renderer for Maya that specializes in fast radiosity and automatic<br />
surface baking, which further speeds up its renders.<br />
• VRay is promoted for use in architectural visualization, in conjunction with 3ds Max and 3ds VIZ. It is also<br />
commonly used with Maya.<br />
• YafRay is a raytracer/renderer distributed under the LGPL. This project is no longer being actively developed.<br />
• YafaRay is YafRay's successor, a raytracer/renderer distributed under the LGPL.<br />
Related to <strong>3D</strong> software<br />
• Swift<strong>3D</strong> is the marquee tool for producing vector-based <strong>3D</strong> content for Flash. It also comes in plug-in form for<br />
transforming models in LightWave or 3ds Max into Flash animations.<br />
• Match moving software is commonly used to match live video with computer-generated video, keeping the two in<br />
sync as the camera moves.<br />
• After producing video, studios then edit or composite it using programs such as Adobe Premiere or Apple<br />
Final Cut at the low end, or Autodesk Combustion, Digital Fusion, or Apple Shake at the high end.<br />
• MetaCreations Detailer and Painter <strong>3D</strong> are discontinued software applications specifically for painting texture<br />
maps on <strong>3D</strong> models.<br />
• Simplygon is a commercial mesh-processing package for remeshing general input meshes into real-time<br />
renderable meshes.<br />
• Pixar Typestry is an abandonware <strong>3D</strong> software program released in the 1990s by Pixar for Apple Macintosh and<br />
DOS-based PCs. It rendered and animated text in <strong>3D</strong> in various fonts based on the user's input.<br />
Discontinued, historic packages<br />
• Alias Animator and PowerAnimator were high-end <strong>3D</strong> packages in the 1990s, running on Silicon Graphics (SGI)<br />
workstations. Alias took code from PowerAnimator, TDI Explore and Wavefront to build Maya. Alias|Wavefront<br />
was later sold by SGI to Autodesk. SGI had originally purchased both Alias and Wavefront in 1995 as a response<br />
to Microsoft’s acquisition and Windows NT port of the then-popular Softimage <strong>3D</strong> package. Microsoft sold<br />
Softimage in 1998 to Avid Technology, from which it was acquired in 2008 by Autodesk as well.<br />
• CrystalGraphics Topas was a DOS- and Windows-based <strong>3D</strong> package between 1986 and the late 1990s.<br />
• Intelligent Light was a high-end <strong>3D</strong> package in the 1980s, running on Apollo/Domain and HP 9000 workstations.<br />
• Internet Space Builder, together with other tools like VRMLpad and the Cortona viewer, was a full VRML<br />
editing system published by Parallel Graphics in the late 1990s. Today only a reduced version of Cortona is available.<br />
• MacroMind Three-D was a mid-end <strong>3D</strong> package running on the Mac in the early 1990s.<br />
• MacroMind Swivel <strong>3D</strong> Professional was a mid-end <strong>3D</strong> package running on the Mac in the early 1990s.<br />
• Symbolics S-Render was an industry-leading <strong>3D</strong> package by Symbolics in the 1980s.<br />
• TDI (Thomson Digital Images) Explore was a French, high-end <strong>3D</strong> package in the late 1980s and early 1990s,<br />
running on Silicon Graphics (SGI) workstations, which later was acquired by Wavefront before it evolved into<br />
Maya.<br />
• Wavefront Advanced Visualizer was a high-end <strong>3D</strong> package between the late 1980s and mid-1990s, running on<br />
Silicon Graphics (SGI) workstations. Wavefront first acquired TDI in 1993, before Wavefront itself was acquired<br />
in 1995 along with Alias by SGI to form Alias|Wavefront.<br />
References<br />
[1] http://wiki.cgsociety.org/index.php/Comparison_of_3d_tools<br />
[2] http://www.tdt3d.be/articles_viewer.php?art_id=99<br />
[3] "About Aerospace Coordinate Systems" (http://www.mathworks.com/access/helpdesk/help/toolbox/aeroblks/index.html?/access/helpdesk/help/toolbox/aeroblks/f3-22568.html). Retrieved 2007-11-23.<br />
[4] "Electric Image Animation Software (EIAS) v8.0 UB Port Is Shipping" (http://www.eias3d.com/). Retrieved 2009-05-06.<br />
[5] "EIAS Production List" (http://www.eias3d.com/about/eias3d/). Retrieved 2009-05-06.<br />
[6] "C.O.R.E. Goes to The Wild" (http://www.fxguide.com/modules.php?name=press&rop=showcontent&id=385). Retrieved 2007-11-23.<br />
[7] "Desktop Hollywood F/X" (http://www.byte.com/art/9507/sec8/art2.htm). Retrieved 2007-11-23.<br />
[8] "So Say We All: The Visual Effects of "Battlestar Galactica"" (http://www.uemedia.net/CPC/vfxpro/printer_13948.shtml). Retrieved 2007-11-23.<br />
[9] "CSI: Dallas" (http://web.archive.org/web/20110716201558/http://www.cgw.com/ME2/dirmod.asp?sid=&nm=&type=Publishing&mod=Publications::Article&mid=8F3A7027421841978F18BE895F87F791&tier=4&id=48932D1DDB0F4F6B9BEA350A47CDFBE0). Archived from the original (http://www.cgw.com/ME2/dirmod.asp?sid=&nm=&type=Publishing&mod=Publications::Article&mid=8F3A7027421841978F18BE895F87F791&tier=4&id=48932D1DDB0F4F6B9BEA350A47CDFBE0) on July 16, 2011. Retrieved 2007-11-23.<br />
[10] "Lightwave projects list" (http://www.newtek.com/lightwave/projects.php). Retrieved 2009-07-07.<br />
[11] "Epic effects for 300" (http://www.digitalartsonline.co.uk/features/index.cfm?featureid=1590). Retrieved 2007-11-23.<br />
[12] "Lightwave used on Iron Man" (http://newteknews.blogspot.com/2008/08/lightwave-used-on-iron-man-bobblehead.html). 2008-08-08. Retrieved 2009-07-07.<br />
[13] "Lord of the Rings terror: It was just a software bug" (http://www.news.com/8301-10784_3-9809929-7.html). Retrieved 2007-11-23.<br />
[14] Cohn, David (2004-09-16). "NX 3 – The Culmination of a 3-year Migration" (http://www.newslettersonline.com/user/user.fas/s=63/fp=3/tp=47?T=open_article,847643&P=article). CADCAMNet (Cyon Research). Retrieved 2009-07-01.<br />
[15] "Siemens PLM Software Announces Availability of NX for Mac OS X" (http://www.plm.automation.siemens.com/en_us/about_us/newsroom/press/press_release.cfm?Component=82370&ComponentTemplate=822). Siemens PLM Software. 2009-06-11. Retrieved 2009-07-01.<br />
[16] "UGS Ships NX 4 and Delivers Industry’s First Complete Digital Product Development Solution on Linux" (http://www.plm.automation.siemens.com/en_us/about_us/newsroom/press/press_release.cfm?Component=25399&ComponentTemplate=822). 2009-04-04. Retrieved 2009-06-20.<br />
[17] "Solid Edge" (http://www.plm.automation.siemens.com/en_us/products/velocity/solidedge/index.shtml). Siemens PLM Software. 2009. Retrieved 2009-07-01.<br />
[18] "Vue Helps ILM Create Environments for 'Pirates Of The Caribbean: Dead Man’s Chest' VFX" (http://web.archive.org/web/20080318085442/http://www.pluginz.com/news/4535). Archived from the original (http://www.pluginz.com/news/4535) on 2008-03-18. Retrieved 2007-11-23.<br />
External links<br />
• <strong>3D</strong> Tools table (http://wiki.cgsociety.org/index.php/Comparison_of_3d_tools) from the CGSociety wiki<br />
• Comparison of 10 most popular modeling software (http://tideart.com/?id=4e26f595) from TideArt<br />
Article Sources and Contributors<br />
<strong>3D</strong> rendering Source: http://en.wikipedia.org/w/index.php?oldid=452110364 Contributors: -Majestic-, 3d rendering, AGiorgio08, ALoopingIcon, Al.locke, Alanbly, Azunda, Burpelson AFB,<br />
Calmer Waters, Chaim Leib, Chasingsol, Chowbok, CommonsDelinker, Dicklyon, Doug Bell, Drakesens, Dsajga, Eekerz, Favonian, Groupthink, Grstain, Imsosleepy123, Jeff G., Jmlk17, Julius<br />
Tuomisto, Kerotan, Kri, M-le-mot-dit, Marek69, Mdd, Michael Hardy, MrOllie, NJR ZA, Nixeagle, Oicumayberight, Pchackal, Philip Trueman, Piano non troppo, QuantumEngineer,<br />
Res2216firestar, Rilak, SantiagoCeballos, SiobhanHansa, Skhedkar, SkyWalker, Sp, Ted1712, TheBendster, TheRealFennShysa, Verbist, 66 ,123אמסיסה anonymous edits<br />
Alpha mapping Source: http://en.wikipedia.org/w/index.php?oldid=447802455 Contributors: Chaser3275, Eekerz, Ironholds, Phynicen, Squids and Chips, Sumwellrange, TiagoTiago<br />
Ambient occlusion Source: http://en.wikipedia.org/w/index.php?oldid=421288636 Contributors: ALoopingIcon, Bovineone, CitizenB, Eekerz, Falkreon, Frecklefoot, Gaius Cornelius,<br />
George100, Grafen, JohnnyMrNinja, Jotapeh, Knacker ITA, Kri, Miker@sundialservices.com, Mr.BlueSummers, Mrtheplague, Prolog, Quibik, RJHall, Rehno Lindeque, Simeon, SimonP, The<br />
Anome, 51 anonymous edits<br />
Anisotropic filtering Source: http://en.wikipedia.org/w/index.php?oldid=454365309 Contributors: Angela, Berkut, Berland, Blotwell, Bookandcoffee, Bubuka, CommonsDelinker, Dast,<br />
DavidPyke, Dorbie, Eekerz, Eyreland, Fredrik, FrenchIsAwesome, Fryed-peach, Furrykef, Gang65, Gazpacho, GeorgeMoney, Hcs, Hebrides, Holek, Illuminatiscott, Iridescent, Joelholdsworth,<br />
Karlhendrikse, Karol Langner, Knacker ITA, Kri, L888Y5, MattGiuca, Michael Snow, Neckelmann, Ni.cero, Room101, Rory096, Shandris, Skarebo, Spaceman85, Valarauka, Velvetron,<br />
Versatranitsonlywaytofly, Wayne Hardman, WhosAsking, WikiBone, Yacoby, 57 anonymous edits<br />
Back-face culling Source: http://en.wikipedia.org/w/index.php?oldid=439691083 Contributors: Andreas Kaufmann, BuzzerMan, Canderson7, Charles Matthews, Deepomega, Eric119,<br />
Gazpacho, LrdChaos, Mahyar d zig, Mdd4696, RJHall, Radagast83, Rainwarrior, Simeon, Snigbrook, Syrthiss, Tempshill, The last username left was taken, Tolkien fan, Uker, W3bbo,<br />
Xavexgoem, Yworo, ﻲﻧﺎﻣ, 12 anonymous edits<br />
Beam tracing Source: http://en.wikipedia.org/w/index.php?oldid=404067424 Contributors: Altenmann, Bduvenhage, CesarB, Hetar, Kibibu, M-le-mot-dit, Porges, RJFJR, RJHall, Reedbeta,<br />
Samuel A S, Srleffler, 7 anonymous edits<br />
Bidirectional texture function Source: http://en.wikipedia.org/w/index.php?oldid=420828000 Contributors: Andreas Kaufmann, Changorino, Dp, Guanaco, Ivokabel, Keefaas, Marasmusine,<br />
RichiH, SimonP, SirGrant, 11 anonymous edits<br />
Bilinear filtering Source: http://en.wikipedia.org/w/index.php?oldid=423887965 Contributors: -Majestic-, AxelBoldt, Berland, ChrisHodgesUK, Dcoetzee, Djanvk, Eekerz, Furrykef,<br />
Grendelkhan, HarrisX, Karlhendrikse, Lotje, Lunaverse, MarylandArtLover, Michael Hardy, MinorEdits, Mmj, Msikma, Neckelmann, NorkNork, Peter M Gerdes, Poor Yorick, Rgaddipa,<br />
Satchmo, Scepia, Shureg, Skulvar, Sparsefarce, Sterrys, Thegeneralguy, Valarauka, Vorn, Xaosflux, 27 anonymous edits<br />
Binary space partitioning Source: http://en.wikipedia.org/w/index.php?oldid=451611818 Contributors: Abdull, Altenmann, Amanaplanacanalpanama, Amritchowdhury, Angela, AquaGeneral,<br />
Ariadie, B4hand, Brucenaylor, Brutaldeluxe, Bryan Derksen, Cbraga, Cgbuff, Chan siuman, Charles Matthews, CyberSkull, Cybercobra, DanielPharos, Dcoetzee, Dysprosia, Fredrik, Frencheigh,<br />
GregorB, Headbomb, Immibis, Jafet, Jamesontai, Jkwchui, Kdau, Kelvie, KnightRider, Kri, LOL, LogiNevermore, M-le-mot-dit, Mdob, Michael Hardy, Mild Bill Hiccup, Noxin911,<br />
NuclearFriend, Obiwhonn, Oleg Alexandrov, Operator link, Palmin, Percivall, Prikipedia, QuasarTE, RPHv, Reedbeta, Spodi, Stephan Leeds, Tabletop, Tarquin, The Anome, Twri, Wiki alf,<br />
WikiLaurent, WiseWoman, Wmahan, Wonghang, Yar Kramer, Zetawoof, 57 anonymous edits<br />
Bounding interval hierarchy Source: http://en.wikipedia.org/w/index.php?oldid=437317999 Contributors: Altenmann, Imbcmdth, Michael Hardy, Oleg Alexandrov, Rehno Lindeque,<br />
Snoopy67, Srleffler, Welsh, 23 anonymous edits<br />
Bounding volume Source: http://en.wikipedia.org/w/index.php?oldid=447542832 Contributors: Aeris-chan, Ahering@cogeco.ca, Altenmann, Chris the speller, DavidCary, Flamurai, Frank<br />
Shearar, Gdr, Gene Nygaard, Interiot, Iridescent, Jafet, Jaredwf, Lambiam, LokiClock, M-le-mot-dit, Michael Hardy, Oleg Alexandrov, Oli Filth, Operativem, Orderud, RJHall, Reedbeta, Ryk,<br />
Sixpence, Smokris, Sterrys, T-tus, Tony1212, Tosha, WikHead, 38 anonymous edits<br />
Bump mapping Source: http://en.wikipedia.org/w/index.php?oldid=454893301 Contributors: ALoopingIcon, Adem4ik, Al Fecund, ArCePi, Audrius u, Baggend, Baldhur, BluesD, BobtheVila,<br />
Branko, Brion VIBBER, Chris the speller, CyberSkull, Damian Yerrick, Dhatfield, Dionyziz, Dormant25, Eekerz, Engwar, Frecklefoot, GDallimore, GoldDragon, GreatGatsby, GregorB,<br />
Greudin, Guyinblack25, H, Haakon, Hadal, Halloko, Hamiltondaniel, Honette, Imroy, IrekReklama, Jats, Kenchikuben, Kimiko, KnightRider, Komap, Loisel, Lord Crc, M-le-mot-dit, Madoka,<br />
Martin Kraus, Masem, Mephiston999, Michael Hardy, Mrwojo, Novusspero, Oyd11, Quentar, RJHall, Rainwarrior, Reedbeta, Roger Roger, Sam Hocevar, Scepia, Sdornan, ShashClp,<br />
SkyWalker, Snoyes, SpaceFlight89, SpunkyBob, Sterrys, Svick, Tarinth, Th1rt3en, ThomasTenCate, Ussphilips, Versatranitsonlywaytofly, Viznut, WaysToEscape, Werdna, Xavexgoem,<br />
Xezbeth, Yaninass2, 58 anonymous edits<br />
Catmull–Clark subdivision surface Source: http://en.wikipedia.org/w/index.php?oldid=445158604 Contributors: Ahelps, Aquilosion, Ati3414, Austin512, Bebestbe, Berland, Chase me ladies,<br />
I'm the Cavalry, Chikako, Cristiprefac, Cyp, David Eppstein, Elmindreda, Empoor, Furrykef, Giftlite, Gorbay, Guffed, Ianp5a, J.delanoy, Juhame, Karmacodex, Kinkybb, Krackpipe, Kubajzz,<br />
Lomacar, Michael Hardy, My head itches, Mysid, Mystaker1, Oleg Alexandrov, Orderud, Pablodiazgutierrez, Rasmus Faber, Sigmundv, Skybum, Tomruen, 55 anonymous edits<br />
Conversion between quaternions and Euler angles Source: http://en.wikipedia.org/w/index.php?oldid=447506883 Contributors: Anakin101, BlindWanderer, Charles Matthews, EdJohnston,<br />
Eiserlohpp, Fmalan, Gaius Cornelius, Guentherwagner, Hyacinth, Icairns, Icalanise, Incnis Mrsi, Jcuadros, Jemebius, JohnBlackburne, Juansempere, Linas, Lionelbrits, Marcofantoni84,<br />
Mjb4567, Niac2, Oleg Alexandrov, Orderud, PAR, Patrick, RJHall, Radagast83, Stamcose, Steve Lovelace, ThomasV, TobyNorris, Waldir, Woohookitty, ZeroOne, 30 anonymous edits<br />
Cube mapping Source: http://en.wikipedia.org/w/index.php?oldid=453052896 Contributors: Barticus88, Bryan Seecrets, Eekerz, Foobarnix, JamesBWatson, Jknappett, MarylandArtLover,<br />
MaxDZ8, Mo ainm, Paolo.dL, SharkD, Shashank Shekhar, Smjg, Smyth, SteveBaker, TopherTG, Versatranitsonlywaytofly, Zigger, 9 anonymous edits<br />
Diffuse reflection Source: http://en.wikipedia.org/w/index.php?oldid=445442350 Contributors: Adoniscik, Apyule, Bluemoose, Casablanca2000in, Deor, Dhatfield, Dicklyon, Eekerz,<br />
Falcon8765, Flamurai, Francs2000, GianniG46, Giftlite, Grebaldar, Jeff Dahl, JeffBobFrank, JohnOwens, Jojhutton, Lesnail, Linnormlord, Logger9, Marcosaedro, <strong>Materials</strong>cientist, Mbz1,<br />
Owen214, Patrick, RJHall, Rajah, Rjstott, Scriberius, Shonenknifefan1, Srleffler, Superblooper, Waldir, Wikijens, 39 anonymous edits<br />
Displacement mapping Source: http://en.wikipedia.org/w/index.php?oldid=446967604 Contributors: ALoopingIcon, Askewchan, CapitalR, Charles Matthews, Cmprince, Digitalntburn,<br />
Dmharvey, Eekerz, Elf, Engwar, Firefox13, Furrykef, George100, Ian Pitchford, Jacoplane, Jdtruax, Jhaiduce, JonH, Jordash, Kusunose, Mackseem, Markhoney, Moritz Moeller, NeoRicen,<br />
Novusspero, Peter bertok, PianoSpleen, Pulsar, Puzzl, RJHall, Redquark, Robchurch, SpunkyBob, Sterrys, T-tus, Tom W.M., Tommstein, Toxic1024, Twinxor, 55 anonymous edits<br />
Doo–Sabin subdivision surface Source: http://en.wikipedia.org/w/index.php?oldid=410345821 Contributors: Berland, Cuvette, Deodar, Hagerman, Jitse Niesen, Michael Hardy, Orderud,<br />
Tomruen, 6 anonymous edits<br />
Edge loop Source: http://en.wikipedia.org/w/index.php?oldid=429715012 Contributors: Albrechtphilly, Balloonguy, Costela, Fages, Fox144112, Furrykef, G9germai, George100, Grafen,<br />
Gurch, Guy BlueSummers, J04n, Marasmusine, ProveIt, Scott5114, Skapur, Zundark, 11 anonymous edits<br />
Euler operator Source: http://en.wikipedia.org/w/index.php?oldid=365493811 Contributors: BradBeattie, Dina, Elkman, Havemann, Jitse Niesen, Marylandwizard, Michael Hardy, Tompw, 2<br />
anonymous edits<br />
False radiosity Source: http://en.wikipedia.org/w/index.php?oldid=389701950 Contributors: Fratrep, Kostmo, Visionguru, 3 anonymous edits<br />
Fragment Source: http://en.wikipedia.org/w/index.php?oldid=433092161 Contributors: Abtract, Adam majewski, BenFrantzDale, Marasmusine, Sfingram, Sigma 7, Thelennonorth, Wknight94,<br />
1 anonymous edits<br />
Geometry pipelines Source: http://en.wikipedia.org/w/index.php?oldid=416796453 Contributors: Bumm13, Cybercobra, Eda eng, GL1zdA, Hazardous Matt, Jesse Viviano, JoJan, Joyous!,<br />
Jpbowen, R'n'B, Rilak, Robertvan1, Shaundakulbara, Stephenb, 12 anonymous edits<br />
Geometry processing Source: http://en.wikipedia.org/w/index.php?oldid=428916742 Contributors: ALoopingIcon, Alanbly, Betamod, Dsajga, EpsilonSquare, Frecklefoot, Happyrabbit, JMK,<br />
Jeff3000, JennyRad, Jeodesic, Lantonov, Michael Hardy, PJY, Poobslag, RJHall, Siddhant, Sterrys, 10 anonymous edits<br />
Global illumination Source: http://en.wikipedia.org/w/index.php?oldid=437668388 Contributors: Aenar, Andreas Kaufmann, Arru, Beland, Boijunk, Cappie2000, Chris Ssk, Conversion script,<br />
CoolingGibbon, Dhatfield, Dormant25, Elektron, Elena the Quiet, Fractal3, Frap, Graphicsguy, H2oski2liv, Heron, Hhanke, Imroy, JYolkowski, Jontintinjordan, Jose Ramos, Jsnow, Kansik, Kri,
Kriplozoik, Levork, MartinPackerIBM, Maruchan, Mysid, N2f, Nihiltres, Nohat, Oldmanriver42, Paranoid, Peter bertok, Petereriksson, Pietaster, Pjrich, Pokipsy76, Pongley, Proteal, Pulle,<br />
RJHall, Reedbeta, Shaboomshaboom, Skorp, Smelialichu, Smiley325, Th1rt3en, The machine512, Themunkee, Travistlo, UKURL, Wazery, Welsh, 71 anonymous edits<br />
Gouraud shading Source: http://en.wikipedia.org/w/index.php?oldid=448262278 Contributors: Akhram, Asiananimal, Bautze, Blueshade, Brion VIBBER, Chasingsol, Crater Creator, Csl77,<br />
DMacks, Da Joe, Davehodgson333, David Eppstein, Dhatfield, Dicklyon, Eekerz, Furrykef, Gargaj, Giftlite, Hairy Dude, Jamelan, Jaxl, Jon186, Jpbowen, Karada, Kocio, Kostmo, Kri, MP,<br />
Mandra Oleka, Martin Kraus, Michael Hardy, Mrwojo, N4nojohn, Olivier, Pne, Poccil, Qz, RJHall, Rainwarrior, RoyalFool, Russl5445, Scepia, SchuminWeb, Sct72, Shyland, SiegeLord,<br />
Solon.KR, The Anome, Thenickdude, Yzmo, Z10x, Zom-B, Zundark, 43 anonymous edits<br />
Graphics pipeline Source: http://en.wikipedia.org/w/index.php?oldid=450317739 Contributors: Arnero, Badduri, Bakkster Man, Banazir, BenFrantzDale, CesarB, ChopMonkey, EricR,<br />
Fernvale, Flamurai, Fmshot, Frap, Gogo Dodo, Guptan, Harryboyles, Hellisp, Hlovdal, Hymek, Jamesrnorwood, MIT Trekkie, Mackseem, Marvin Monroe, MaxDZ8, Piotrus, Posix memalign,<br />
Remag Kee, Reyk, Ricky81682, Rilak, Salam32, Seasage, Sfingram, Stilgar, TutterMouse, TuukkaH, Woohookitty, Yan Kuligin, Yousou, 56 anonymous edits<br />
Hidden line removal Source: http://en.wikipedia.org/w/index.php?oldid=428096478 Contributors: Andreas Kaufmann, Bobber0001, CesarB, Chrisjohnson, Grutness, Koozedine, Kylemcinnes,<br />
MrMambo, Oleg Alexandrov, Qz, RJHall, Resurgent insurgent, Shenme, Thumperward, Wheger, 15 anonymous edits<br />
Hidden surface determination Source: http://en.wikipedia.org/w/index.php?oldid=435940442 Contributors: Altenmann, Alvis, Arnero, B4hand, Bill william compton, CanisRufus, Cbraga,<br />
Christian Lassure, CoJaBo, Connelly, David Levy, Dougher, Everyking, Flamurai, Fredrik, Grafen, Graphicsguy, J04n, Jleedev, Jmorkel, Jonomillin, Kostmo, LOL, LPGhatguy, LokiClock,<br />
Marasmusine, MattGiuca, Michael Hardy, Nahum Reduta, Philip Trueman, RJHall, Radagast83, Remag Kee, Robofish, Shenme, Spectralist, Ssd, Tfpsly, Thiseye, Toussaint, Vendettax, Walter<br />
bz, Wknight94, Wmahan, Wolfkeeper, 51 anonymous edits<br />
High dynamic range rendering Source: http://en.wikipedia.org/w/index.php?oldid=453223576 Contributors: -Majestic-, 25, Abdull, Ahruman, Allister MacLeod, Anakin101, Appraiser, Art<br />
LaPella, Axem Titanium, Ayavaron, BIS Ondrej, Baddog121390, Betacommand, Betauser, Bongomatic, Calidarien, Cambrant, CesarB, Christopherlin, Ck lostsword, Coldpower27,<br />
CommonsDelinker, Credema, Crummy, CyberSkull, Cynthia Sue Larson, DH85868993, DabMachine, Darkuranium, Darxus, David Eppstein, Djayjp, Dmmaus, Drat, Drawn Some, Dreish,<br />
Eekerz, Ehn, Elmindreda, Entirety, Eptin, Evanreyes, Eyrian, FA010S, Falcon9x5, Frap, Gamer007, Gracefool, Hdu hh, Hibana, Holek, Imroy, Infinity Wasted, Intgr, J.delanoy, Jack Daniels<br />
BBQ Sauce, Jason Quinn, Jengelh, JigPu, Johannes re, JojoMojo, Jyuudaime, Kaotika, Karam.Anthony.K, Karlhendrikse, Katana314, Kelly Martin, King Bob324, Kocur, Korpal28, Kotofei,<br />
Krawczyk, Kungfujoe, Legionaire45, Marcika, Martyx, MattGiuca, Mboverload, Mdd4696, Mika1h, Mikmac1, Mindmatrix, Morio, Mortense, Museerouge, NCurse, Nastyman9,<br />
NoSoftwarePatents, NulNul, Oni Ookami Alfador, PHDrillSergeant, PhilMorton, Pkaulf, Pmanderson, Pmsyyz, Qutezuce, RG2, Redvers, Rich Farmbrough, Rjwilmsi, Robert K S, RoyBoy, Rror,<br />
Sam Hocevar, Shademe, Sikon, Simeon, Simetrical, Siotha, SkyWalker, Slavik262, Slicing, Snkcube, Srittau, Starfox, Starkiller88, Suruena, TJRC, Taw, ThaddeusB, The Negotiator, ThefirstM,<br />
Thequickbrownfoxjumpsoveralazydog, Thewebb, Tiddly Tom, Tijfo098, Tomlee2010, Tony1, Unico master 15, Unmitigated Success, Vendettax, Vladimirovich, Wester547, XMog, Xabora,<br />
XanthoNub, Xanzzibar, XenoL-Type, Xompanthy, ZS, Zr40, Zvar, 369 anonymous edits<br />
Image-based lighting Source: http://en.wikipedia.org/w/index.php?oldid=447236720 Contributors: Beland, Blakegripling ph, Bobo192, Chaoticbob, Dreamdra, Eekerz, Justinc, Michael Hardy,<br />
Pearle, Qutezuce, Rainjam, Rogerb67, Rror, TokyoJunkie, Wuz, 11 anonymous edits<br />
Image plane Source: http://en.wikipedia.org/w/index.php?oldid=425830302 Contributors: BenFrantzDale, CesarB, Michael C Price, RJHall, Reedbeta, TheParanoidOne, 1 anonymous edits<br />
Irregular Z-buffer Source: http://en.wikipedia.org/w/index.php?oldid=422512395 Contributors: Chris the speller, DabMachine, DavidHOzAu, Diego Moya, Fooberman, Karam.Anthony.K,<br />
Shaericell, SlipperyHippo, ThinkingInBinary, 8 anonymous edits<br />
Isosurface Source: http://en.wikipedia.org/w/index.php?oldid=444804726 Contributors: Banus, CALR, Charles Matthews, Dergrosse, George100, Khalid hassani, Kri, Michael Hardy, Onna,<br />
Ospalh, RJHall, RedWolf, Rudolf.hellmuth, Sam Hocevar, StoatBringer, Taw, The demiurge, Thurth, 7 anonymous edits<br />
Lambert's cosine law Source: http://en.wikipedia.org/w/index.php?oldid=448757874 Contributors: AvicAWB, AxelBoldt, BenFrantzDale, Charles Matthews, Choster, Css, Dbenbenn, Deuar,<br />
Escientist, Gene Nygaard, GianniG46, Helicopter34234, Hugh Hudson, Inductiveload, Jcaruth123, Jebus989, Kri, Linas, Marcosaedro, Michael Hardy, Mpfiz, Oleg Alexandrov, OptoDave,<br />
Owen, PAR, Papa November, Pflatau, Q Science, RJHall, Radagast83, Ramjar, Robobix, Scolobb, Seth Ilys, Srleffler, The wub, ThePI, Tomruen, Tøpholm, 23 anonymous edits<br />
Lambertian reflectance Source: http://en.wikipedia.org/w/index.php?oldid=451592543 Contributors: Adoniscik, Bautze, BenFrantzDale, DMG413, Deuar, Eekerz, Fefeheart, GianniG46,<br />
Girolamo Savonarola, Jtsiomb, KYN, Kri, Littlecruiser, Marc omorain, Martin Kraus, PAR, Pedrose, Pflatau, Radagast83, Sanddune777, Seabhcan, Shadowsill, Srleffler, Venkat.vasanthi,<br />
Xavexgoem, Δ, 18 anonymous edits<br />
Level of detail Source: http://en.wikipedia.org/w/index.php?oldid=452394335 Contributors: ABF, ALoopingIcon, Adzinok, Ben467, Bjdehut, Bluemoose, Bobber0001, Chris Chittleborough,<br />
ChuckNorrisPwnedYou, David Levy, Deepomega, Drat, Furrykef, GreatWhiteNortherner, IWantMonobookSkin, Jtalledo, MaxDZ8, Megapixie, Pinkadelica, Rjwilmsi, Runtime, Sterrys,<br />
ToolmakerSteve, TowerDragon, Wknight94, ZS, 36 anonymous edits<br />
Mipmap Source: http://en.wikipedia.org/w/index.php?oldid=454768520 Contributors: Alksub, Andreas Kaufmann, Andrewpmk, Anss123, Arnero, Barticus88, Bongomatic, Bookandcoffee,<br />
Brocklebjorn, Dshneeb, Eekerz, Exe, Eyreland, Goodone121, Grendelkhan, Hooperbloob, Hotlorp, Jamelan, Kerrick Staley, Knacker ITA, Knight666, Kri, Kricke, MIT Trekkie,<br />
MarylandArtLover, Mat-C, Mdockrey, Michael Hardy, Mikachu42, Moonbug2, Nbarth, Norro, RJHall, Scfencer, Sixunhuit, Spoon!, TRS-80, Tarquin, Theoh, Tribaal, VMS Mosaic, Valarauka,<br />
Xmnemonic, 44 anonymous edits<br />
Newell's algorithm Source: http://en.wikipedia.org/w/index.php?oldid=374593285 Contributors: Andreas Kaufmann, Charles Matthews, David Eppstein, Farley13, KnightRider, Komap, 6<br />
anonymous edits<br />
Non-uniform rational B-spline Source: http://en.wikipedia.org/w/index.php?oldid=452257716 Contributors: *drew, ALoopingIcon, Ahellwig, Alan Parmenter, Alanbly, Alansohn, AlphaPyro,<br />
Andreas Kaufmann, Angela, Ati3414, BMF81, Barracoon, BenFrantzDale, Berland, Buddelkiste, C0nanPayne, Cgbuff, Cgs, Commander Keane, Crahul, DMahalko, Dallben, Developer,<br />
Dhatfield, Dmmd123, Doradus, DoriSmith, Ensign beedrill, Eric Demers, Ettrig, FF2010, Fredrik, Freeformer, Furrykef, Gargoyle888, Gea, Graue, Greg L, Happyrabbit, Hasanisawi, Hazir,<br />
HugoJacques1, HuyS3, Ian Pitchford, Ihope127, Iltseng, J04n, JFPresti, Jusdafax, Karlhendrikse, Khunglongcon, Lzur, Malarame, Mardson, Matthijs, Mauritsmaartendejong, Maury Markowitz,<br />
Michael Hardy, NPowerSoftware, Nedaim, Neostarbuck, Nichalp, Nick, Nick Pisarro, Jr., Nintend06, Oleg Alexandrov, Orborde, Orderud, Oxymoron83, Parametric66, Peter M Gerdes, Pgimeno,<br />
Puchiko, Purwar, Qutezuce, Radical Mallard, Rasmus Faber, Rconan, Regenwolke, Rfc1394, Ronz, Roundaboutyes, Sedimin, Skrapion, SlowJEEP, SmilingRob, Speck-Made, Stefano.anzellotti,<br />
Stewartadcock, Strangnet, Taejo, Tamfang, The Anome, Tsa1093, Uwe rossbacher, VitruV07, Vladsinger, Whaa?, WulfTheSaxon, Xcoil, Xmnemonic, Yahastu, Yousou, ZeroOne, Zootalures,<br />
ﻲﻧﺎﻣ, 192 anonymous edits<br />
Normal mapping Source: http://en.wikipedia.org/w/index.php?oldid=453628819 Contributors: ACSE, ALoopingIcon, Ahoerstemeier, AlistairMcMillan, Andrewpmk, Ar-wiki, Bronchial,<br />
Bryan Seecrets, Cmsjustin, CobbSalad, CryptoDerk, Deepomega, Digitalntburn, Dionyziz, Dysprosia, EconomicsGuy, Eekerz, EmmetCaulfield, Engwar, Everyking, Frecklefoot, Fredrik,<br />
Furrykef, Green meklar, Gregb, Haakon, Heliocentric, Incady, Irrevenant, Jamelan, Jason One, Jean-Frédéric, Jon914, JonathanHudgins, K1Bond007, Liman3D, Lord Crc, MarkPNeyer,<br />
Maximus Rex, Nahum Reduta, Olanom, Pak21, Paolo.dL, Prime Blue, R'n'B, RJHall, ReconTanto, Redquark, Rich Farmbrough, SJP, Salam32, Scott5114, Sdornan, SkyWalker, Sorry--Really,<br />
Sterrys, SuperMidget, T-tus, TDogg310, Talcos, The Anome, The Hokkaido Crow, TheHappyFriar, Tommstein, TwelveBaud, VBrent, Versatranitsonlywaytofly, Wikster E, Xavexgoem,<br />
Xmnemonic, Yaninass2, 143 anonymous edits<br />
Oren–Nayar reflectance model Source: http://en.wikipedia.org/w/index.php?oldid=435049058 Contributors: Arch dude, Artaxiad, Bautze, CodedAperture, Compvis, Dhatfield, Dicklyon,<br />
Divya99, Eekerz, GianniG46, JeffBobFrank, Jwgu, Martin Kraus, Meekohi, ProyZ, R'n'B, Srleffler, Woohookitty, Yoshi503, Zoroastrama100, 21 anonymous edits<br />
Painter's algorithm Source: http://en.wikipedia.org/w/index.php?oldid=449571165 Contributors: 16@r, Andreas Kaufmann, BlastOButter42, Bryan Derksen, Cgbuff, EoGuy, Fabiob,<br />
Farley13, Feezo, Finell, Finlay McWalter, Fredrik, Hhanke, Jaberwocky6669, Jmabel, JohnBlackburne, KnightRider, Komap, Norm, Ordoon, PRMerkley, Phyte, RJHall, RadRafe, Rainwarrior,<br />
Rasmus Faber, Reedbeta, Rufous, Shai-kun, Shanes, Sreifa01, Sterrys, SteveBaker, Sverdrup, WISo, Whatsthatcomingoverthehill, Zapyon, 24 anonymous edits<br />
Parallax mapping Source: http://en.wikipedia.org/w/index.php?oldid=444344695 Contributors: ALoopingIcon, Aorwind, Bryan Seecrets, CadeFr, Charles Matthews, Cmprince, CyberSkull,<br />
Eekerz, Fama Clamosa, Fancypants09, Fractal3, Gustavocarra, Imroy, J5689, Jdcooper, Jitse Niesen, JonH, Kenchikuben, Lemonv1, MaxDZ8, Mjharrison, Novusspero, Oleg Alexandrov,<br />
Peter.Hozak, Qutezuce, RJHall, Rainwarrior, Rich Farmbrough, Scepia, Seth.illgard, SkyWalker, SpunkyBob, Sterrys, Strangerunbidden, TKD, Thepcnerd, Tommstein, Vacapuer, Xavexgoem,<br />
XenoL-Type, 43 anonymous edits<br />
Particle system Source: http://en.wikipedia.org/w/index.php?oldid=452585168 Contributors: Ashlux, Baron305, Bjørn, CanisRufus, Charles Matthews, Chris the speller, Darthuggla,<br />
Deadlydog, Deodar, Eekerz, Ferdzee, Fractal3, Furrykef, Gamer3D, Gracefool, Halixi72, Jpbowen, Jtsiomb, Kibibu, Krizas, MarSch, MrOllie, Mrwojo, Onebyone, Philip Trueman, Rror,<br />
Sameboat, Schmiteye, ScottDavis, SethTisue, Shanedidona, Sideris, Sterrys, SteveBaker, The Merciful, Thesalus, Zzuuzz, 64 anonymous edits
Article Sources and Contributors 263<br />
Path tracing Source: http://en.wikipedia.org/w/index.php?oldid=451780747 Contributors: BaiLong, DennyColt, Elektron, Icairns, Iceglow, Incnis Mrsi, Jonon, Keepscases, M-le-mot-dit,<br />
Markluffel, Mmernex, Mrwojo, NeD80, Psior, RJHall, Srleffler, Steve Quinn, 38 anonymous edits<br />
Per-pixel lighting Source: http://en.wikipedia.org/w/index.php?oldid=451564208 Contributors: Alphonze, Altenmann, David Wahler, Eekerz, Jheriko, Mishal153, 3 anonymous edits<br />
Phong reflection model Source: http://en.wikipedia.org/w/index.php?oldid=448531441 Contributors: Aparecki, Bdean42, Bignoter, Connelly, Csl77, Dawnseekker2000, Dicklyon, EmreDuran,<br />
Gargaj, Headbomb, Jengelh, Kri, Martin Kraus, Michael Hardy, Nixdorf, RJHall, Rainwarrior, Srleffler, The Anome, Theseus314, TimBentley, Wfaulk, 26 anonymous edits<br />
Phong shading Source: http://en.wikipedia.org/w/index.php?oldid=448869326 Contributors: ALoopingIcon, Abhorsen327, Alexsh, Alvin Seville, Andreas Kaufmann, Asiananimal, Auntof6,<br />
Bautze, Bignoter, BluesD, CALR, ChristosIET, Connelly, Csl77, Dhatfield, Dicklyon, Djexplo, Eekerz, Everyking, Eyreland, Gamkiller, Gargaj, GianniG46, Giftlite, Gogodidi, Hairy Dude,<br />
Heavyrain2408, Hymek, Instantaneous, Jamelan, Jaymzcd, Jedi2155, Karada, Kleister32, Kotasik, Kri, Litherum, Loisel, Martin Kraus, Martin451, Mdebets, Michael Hardy, N2e, Pinethicket,<br />
Preator1, RJHall, Rainwarrior, Rjwilmsi, Sigfpe, Sin-man, Sorcerer86pt, Spoon!, Srleffler, StaticGull, T-tus, Thddo, Tschis, TwoOneTwo, WikHead, Wrightbus, Xavexgoem, Z10x, Zundark, 60<br />
anonymous edits<br />
Photon mapping Source: http://en.wikipedia.org/w/index.php?oldid=447907524 Contributors: Arabani, Arnero, Astronautics, Brlcad, CentrallyPlannedEconomy, Chas zzz brown,<br />
CheesyPuffs144, Colorgas, Curps, Ewulp, Exvion, Fastily, Flamurai, Fnielsen, Fuzzypeg, GDallimore, J04n, Jimmi Hugh, Kri, LeCire, MichaelGensheimer, Nilmerg, Owen, Oyd11, Patrick,<br />
Phrood, RJHall, Rkeene0517, Strattonbrazil, T-tus, Tesi1700, Thesalus, Tobias Bergemann, Wapcaplet, XDanielx, Xcelerate, 39 anonymous edits<br />
Photon tracing Source: http://en.wikipedia.org/w/index.php?oldid=427523148 Contributors: AR3006, Cyb3rdemon, Danski14, Fuzzypeg, M-le-mot-dit, MessiahAndrw, Pjrich, Rilak, Srleffler,<br />
Ylem, Zeoverlord, 4 anonymous edits<br />
Polygon Source: http://en.wikipedia.org/w/index.php?oldid=441919702 Contributors: Arnero, BlazeHedgehog, CALR, David Levy, Iceman444k, J04n, Jagged 85, Mardus, Michael Hardy,<br />
Navstar, Orderud, Pietaster, RJHall, Reedbeta, SimonP, 3 anonymous edits<br />
Potentially visible set Source: http://en.wikipedia.org/w/index.php?oldid=443920985 Contributors: Dlegland, Graphicsguy, Gwking, Kri, Lordmetroid, NeD80, Weevil, Ybungalobill, 5<br />
anonymous edits<br />
Precomputed Radiance Transfer Source: http://en.wikipedia.org/w/index.php?oldid=446578325 Contributors: Abstracte, Colonies Chris, Deodar, Fanra, Imroy, Red Act, SteveBaker,<br />
WhiteMouseGary, 7 anonymous edits<br />
Procedural generation Source: http://en.wikipedia.org/w/index.php?oldid=454833676 Contributors: -OOPSIE-, 2over0, ALoopingIcon, Amnesiasoft, Anetode, Arnoox, Ashley Y, Axem<br />
Titanium, Bkell, Blacklemon67, CRGreathouse, Caerbannog, Cambrant, Carl67lp, Chaos5023, ChopMonkey, Chris TC01, Cjc13, Computer5t, CyberSkull, D.brodale, DabMachine, Dadomusic,<br />
Damian Yerrick, Denis C., DeylenK, DirectXMan, Disavian, Dismas, Distantbody, Dkastner, Doctor Computer, Eekerz, Eoseth, EverGreg, Exe, FatPope, Feydey, Finlay McWalter, Fippy<br />
Darkpaw, Fratrep, Fredrik, Furrykef, Fusible, Geh, GingaNinja, Gjamesnvda, GregorB, HappyVR, Hervegirod, Iain marcuson, Ihope127, Inthoforo, IronMaidenRocks, JAF1970, Jacj, Jacoplane,<br />
Jerc1, Jessmartin, Jontintinjordan, Kbolino, Keio, KenAdamsNSA, Khazar, Kungpao, KyleDantarin, Lapinmies, LeftClicker, Len Raymond, Licu, Lightmouse, Longhan2009, Lozzaaa, Lupin,<br />
MadScientistVX, Mallow40, Marasmusine, Martarius, MaxDZ8, Moskvax, Mujtaba1998, Nils, Nuggetboy, Oliverkroll, One-dimensional Tangent, Pace212, Penguin, Philwelch, PhycoFalcon,<br />
Poss, Praetor alpha, Quicksilvre, Quuxplusone, RCX, Rayofash, Retro junkie, Richlv, Rjwilmsi, Robin S, Rogerd, Ryuukuro, Saxifrage, Schmiddtchen, SharkD, Shashank Shekhar, Shinyary2,<br />
Simeon, Slippyd, Spiderboy, Spoonboy42, Stevegallery, Svea Kollavainen, Taral, Terminator484, The former 134.250.72.176, ThomasHarte, Thunderbrand, Tlogmer, TravisMunson1993,<br />
Trevc63, Tstexture, Valaggar, Virek, Virt, Viznut, Whqitsm, Wickethewok, Xobxela, Ysangkok, Zemoxian, Zvar, 195 anonymous edits<br />
Procedural texture Source: http://en.wikipedia.org/w/index.php?oldid=441859984 Contributors: Altenmann, Besieged, CapitalR, Cargoking, D6, Dhatfield, Eflouret, Foolscreen, Gadfium,<br />
Geh, Gurch, Jacoplane, Joeybuddy96, Ken md, MaxDZ8, MoogleDan, Nezbie, PaulBoxley, Petalochilus, RhinosoRoss, Spark, Thparkth, TimBentley, Viznut, Volfy, Wikedit, Wragge, Zundark,<br />
22 anonymous edits<br />
3D projection Source: http://en.wikipedia.org/w/index.php?oldid=453177111 Contributors: AManWithNoPlan, Akilaa, Akulo, Alfio, Allefant, Altenmann, Angela, Aniboy2000, Baudway,<br />
BenFrantzDale, Berland, Bloodshedder, Bobbygao, BrainFRZ, Bunyk, Charles Matthews, Cholling, Chris the speller, Ckatz, Cpl Syx, Ctachme, Cyp, Datadelay, Davidhorman, Deom, Dhatfield,<br />
Dondegroovily, Flamurai, Froth, Furrykef, Gamer Eek, Giftlite, Heymid, Jaredwf, Jovianconflict, Kevmitch, Lincher, Luckyherb, Marco Polo, MathsIsFun, Mdd, Michael Hardy, Michaelbarreto,<br />
Miym, Mrwojo, Nbarth, Oleg Alexandrov, Omegatron, Paolo.dL, Patrick, Pearle, PhilKnight, Pickypickywiki, Plowboylifestyle, PsychoAlienDog, Que, R'n'B, RJHall, Rabiee, Raven in Orbit,<br />
Remi0o, Rjwilmsi, RossA, Sandeman684, Schneelocke, Seet82, SharkD, Sietse Snel, Skytiger2, Speshall, Stephan Leeds, Stestagg, Tamfang, TimBentley, Tristanreid, Tyler, Unigfjkl, Van<br />
helsing, Zanaq, 99 anonymous edits<br />
Quaternions and spatial rotation Source: http://en.wikipedia.org/w/index.php?oldid=454392609 Contributors: Albmont, ArnoldReinhold, AxelBoldt, Ben pcc, BenFrantzDale, BenRG,<br />
Bjones410, Bmju, Brews ohare, Bulee, CALR, Catskul, Ceyockey, Charles Matthews, CheesyPuffs144, Cyp, Daniel Brockman, Daniel.villegas, Darkbane, David Eppstein, Denevans, Depakote,<br />
Dionyziz, Dl2000, Ebelular, Edward, Endomorphic, Enosch, Eugene-elgato, Fgnievinski, Fish-Face, Fropuff, Fyrael, Gaius Cornelius, GangofOne, Genedial, Giftlite, Gutza, HenryHRich,<br />
Hyacinth, Ig0r, Incnis Mrsi, J04n, Janek Kozicki, Jemebius, Jermcb, Jheald, Jitse Niesen, JohnBlackburne, JohnPritchard, Josh Triplett, Joydeep.biswas, KSmrq, Kborer, Kordas, Lambiam,<br />
LeandraVicci, Lemontea, Light current, Linas, Lkesteloot, Looxix, Lotu, Lourakis, LuisIbanez, ManoaChild, Markus Kuhn, MathsPoetry, Michael C Price, Michael Hardy, Mike Stramba, Mild<br />
Bill Hiccup, Mtschoen, Oleg Alexandrov, Orderud, PAR, Paddy3118, Paolo.dL, Patrick, Patrick Gill, Patsuloi, Ploncomi, Pt, Qz, RJHall, Rainwarrior, Randallbsmith, Reddi, Rgdboer, Robinh,<br />
RzR, Samuel Huang, Sebsch, Short Circuit, Sigmundur, SlavMFM, Soler97, TLKeller, Tamfang, Terry Bollinger, Timo Honkasalo, Tkuvho, TobyNorris, User A1, WVhybrid, WaysToEscape,<br />
Yoderj, Zhw, Zundark, 183 anonymous edits<br />
Radiosity Source: http://en.wikipedia.org/w/index.php?oldid=454048537 Contributors: 63.224.100.xxx, ALoopingIcon, Angela, Bevo, CambridgeBayWeather, Cappie2000, Chrisjldoran,<br />
Cjmccormack, Conversion script, CoolKoon, Cspiel, Dhatfield, DrFluxus, Furrykef, GDallimore, Inquam, Jdpipe, Jheald, JzG, Klparrot, Kostmo, Kshipley, Livajo, Lucio Di Madaura, Luna<br />
Santin, M0llusk, Melligem, Michael Hardy, Mintleaf, Oliphaunt, Osmaker, Philnolan3d, PseudoSudo, Qutezuce, Qz, RJHall, Reedbeta, Reinyday, Rocketmagnet, Ryulong, Sallymander,<br />
SeanAhern, Siker, Sintaku, Snorbaard, Soumyasch, Splintercellguy, Ssppbub, Thue, Tomalak geretkal, Tomruen, Trevorgoodchild, Uriyan, Vision3001, Visionguru, VitruV07, Waldir,<br />
Wapcaplet, Wernermarius, Wile E. Heresiarch, Yrithinnd, Yrodro, Σ, 70 anonymous edits<br />
Ray casting Source: http://en.wikipedia.org/w/index.php?oldid=439289047 Contributors: *Kat*, AnAj, Angela, Anticipation of a New Lover's Arrival, The, Astronautics, Barticus88, Brazucs,<br />
Cgbuff, D, Damian Yerrick, DaveGorman, Djanvk, DocumentN, Dogaroon, Eddynumbers, Eigenlambda, Ext9, Exvion, Finlay McWalter, Firsfron, Garde, Gargaj, Geekrecon, GeorgeLouis,<br />
HarisM, Hetar, Iamhove, Iridescent, J04n, JamesBurns, Jlittlet, Jodi.a.schneider, Kayamon, Kcdot, Korodzik, Kris Schnee, LOL, Lozzaaa, Mikhajist, Modster, NeD80, Ortzinator, Pinbucket,<br />
RJHall, Ravn, Reedbeta, Rich Farmbrough, RzR, TheBilly, ThomasHarte, TimBentley, Tjansen, Verne Equinox, WmRowan, Wolfkeeper, Yksyksyks, 44 anonymous edits<br />
Ray tracing Source: http://en.wikipedia.org/w/index.php?oldid=454311993 Contributors: 0x394a74, Abmac, Al Hart, Alanbly, Altenmann, Andreas Kaufmann, Anetode, Anonymous the<br />
Editor, Anteru, Arnero, ArnoldReinhold, Arthena, Bdoserror, Benindigo, Blueshade, Brion VIBBER, Brlcad, C0nanPayne, Cadience, Caesar, Camtomlee, Carrionluggage, Cdecoro, Chellmuth,<br />
Chrislk02, Claygate, Coastline, CobbSalad, ColinSSX, Colinmaharaj, Conversion script, Cowpip, Cozdas, Cybercobra, D V S, Darathin, Davepape, Davidhorman, Delicious carbuncle,<br />
Deltabeignet, Deon, Devendermishra, Dhatfield, Dhilvert, Diannaa, Dicklyon, Diligent Terrier, Domsau2, DrBob, Ed g2s, Elizium23, Erich666, Etimbo, FatalError, Femto, Fgnievinski,<br />
ForrestVoight, Fountains of Bryn Mawr, Fph, Furrykef, GDallimore, GGGregory, Geekrecon, Giftlite, Gioto, Gjlebbink, Gmaxwell, Goodone121, Graphicsguy, Greg L, Gregwhitfield,<br />
H2oski2liv, Henrikb4, Hertz1888, Hetar, Hugh2414, Imroy, Ingolfson, Ixfd64, Japsu, Jawed, Jdh30, Jeancolasp, Jesin, Jim.belk, Jj137, Jleedev, Jodi.a.schneider, Joke137, JonesMI, Jpkoester1,<br />
Juhame, Jumping cheese, K.brewster, Kolibri, Kri, Ku7485, KungfuJoe1110, Lasneyx, Lclacer, Levork, Luke490, Lumrs, Lupo, Martarius, Mattbrundage, Michael Hardy, Mikiemike, Mimigu,<br />
Minghong, MoritzMoeller, Mosquitopsu, Mun206, Nerd65536, Niky cz, NimoTh, Nneonneo, Nohat, O18, OnionKnight, Osmaker, Paolo.dL, Patrick, Penubag, Pflatau, Phresnel, Phrood,<br />
Pinbucket, Pjvpjv, Powerslide, Priceman86, Purpy Pupple, Qef, R.cabus, RJHall, Randomblue, Ravn, Rcronk, Reedbeta, Regenspaziergang, Requestion, Rich Farmbrough, RubyQ, Rusty432,<br />
Ryan Postlethwaite, Ryan Roos, Samjameshall, Samuelalang, Sebastian.mach, SebastianHelm, Shen, Simeon, Sir Lothar, Skadge, SkyWalker, Slady, Soler97, Solphusion, Soumyasch, Spiff,<br />
Srleffler, Stannered, Stevertigo, TakingUpSpace, Tamfang, Taral, The Anome, The machine512, TheRealFennShysa, Themunkee, Thumperward, Timo Honkasalo, Timrb, ToastieIL, Tom<br />
Morris, Tom-, Toxygen, Tuomari, Ubardak, Ummit, Uvainio, VBGFscJUn3, Vendettax, Versatranitsonlywaytofly, Vette92, VitruV07, Viznut, Wapcaplet, Washboardplayer, Whosasking,<br />
WikiWriteyWeb, Wrayal, Yonaa, Zeno333, Zfr, 말틴, 271 anonymous edits<br />
Reflection Source: http://en.wikipedia.org/w/index.php?oldid=401889672 Contributors: Al Hart, Dbolton, Dhatfield, Epbr123, Hom sepanta, Jeodesic, M-le-mot-dit, PowerSerj, Remag Kee,<br />
Rich Farmbrough, Siddhant, Simeon, Srleffler, 5 anonymous edits<br />
Reflection mapping Source: http://en.wikipedia.org/w/index.php?oldid=427713716 Contributors: ALoopingIcon, Abdull, Anaxial, Bryan Seecrets, C+C, Davidhorman, Fckckark, Freeformer,<br />
Gaius Cornelius, GrammarHammer 32, IronGargoyle, J04n, Jogloran, M-le-mot-dit, MaxDZ8, Paddles, Paolo.dL, Qutezuce, Redquark, Shashank Shekhar, Skorp, Smjg, Srleffler, Sterrys,<br />
SteveBaker, Tkgd2007, TokyoJunkie, Vossanova, Wizard191, Woohookitty, Yworo, 36 anonymous edits<br />
Relief mapping Source: http://en.wikipedia.org/w/index.php?oldid=448502438 Contributors: ALoopingIcon, D6, Dionyziz, Editsalot, Eep², JonH, Korg, M-le-mot-dit, PeterRander,<br />
PianoSpleen, Qwyrxian, R'n'B, Scottc1988, Searchme, Simeon, Sirus20x6, Starkiller88, Vitorpamplona, 15 anonymous edits
Render Output unit Source: http://en.wikipedia.org/w/index.php?oldid=423436253 Contributors: Accord, Arch dude, Exp HP, Fernvale, Imzjustplayin, MaxDZ8, Paolo.dL, Qutezuce,<br />
Shandris, Swaaye, Trevyn, TrinitronX, 11 anonymous edits<br />
Rendering Source: http://en.wikipedia.org/w/index.php?oldid=455002867 Contributors: 16@r, ALoopingIcon, AVM, Aaronh, Adailide, Ahy1, Al Hart, Alanbly, Altenmann, Alvin Seville,<br />
AnnaFrance, Asephei, AxelBoldt, Azunda, Ben Ben, Benbread, Benchaz, Bendman, Bjorke, Blainster, Boing! said Zebedee, Bpescod, Bryan Derksen, Cgbuff, Chalst, Charles Matthews, Chris<br />
the speller, CliffC, Cmdrjameson, Conversion script, Corti, Crahul, Das-g, Dave Law, David C, Dedeche, Deli nk, Dhatfield, Dhilvert, Doradus, Doubleyouyou, Downwards, Dpc01, Dutch15,<br />
Dzhim, Ed g2s, Edcolins, Eekerz, Eflouret, Egarduno, Erudecorp, FleetCommand, Fm2006, Frango com Nata, Fredrik, Fuhghettaboutit, Funnylemon, Gamer3D, Gary King, GeorgeBills,<br />
GeorgeLouis, Germancorredorp, Gku, Gordmoo, Gothmog.es, Graham87, Graue, Gökhan, HarisM, Harshavsn, Howcheng, Hu, Hxa7241, Imroy, Indon, Interiot, Jaraalbe, Jeweldesign, Jheald,<br />
Jimmi Hugh, Jmencisom, Joyous!, Kayamon, Kennedy311, Kimse, Kri, LaughingMan, Ledow, Levork, Lindosland, Lkinkade, M-le-mot-dit, Maian, Mani1, Martarius, Mav, MaxRipper,<br />
Maximilian Schönherr, Mdd, Melaen, Michael Hardy, MichaelMcGuffin, Minghong, Mkweise, Mmernex, Nbarth, New Age Retro Hippie, Oicumayberight, Onopearls, Paladinwannabe2, Patrick,<br />
Phresnel, Phrood, Pinbucket, Piquan, Pit, Pixelbox, Pongley, Poweroid, Ppe42, RJHall, Ravedave, Reedbeta, Rich Farmbrough, Rilak, Ronz, Sam Hocevar, Seasage, Shawnc, SiobhanHansa,<br />
Slady, Spitfire8520, Sterrys, Sverdrup, Tesi1700, The Anome, TheProject, Tiggerjay, Tomruen, Urocyon, Veinor, Vervadr, Wapcaplet, Wik, William Burroughs, Wmahan, Wolfkeeper, Xugo,<br />
219 anonymous edits<br />
Retained mode Source: http://en.wikipedia.org/w/index.php?oldid=426548699 Contributors: BAxelrod, Bovineone, Chris Chittleborough, Damian Yerrick, Klassobanieras, Peter L, Simeon,<br />
SteveBaker, Uranographer, 8 anonymous edits<br />
Scanline rendering Source: http://en.wikipedia.org/w/index.php?oldid=451605926 Contributors: Aitias, Andreas Kaufmann, CQJ, Dicklyon, Edward, Gerard Hill, Gioto, Harryboyles,<br />
Hooperbloob, Lordmetroid, Moroder, Nixdorf, Phoz, Pinky deamon, RJHall, Rilak, Rjwilmsi, Samwisefoxburr, Sterrys, Taemyr, Thatotherperson, Thejoshwolfe, Timo Honkasalo, Valarauka,<br />
Walter bz, Wapcaplet, Weimont, Wesley, Wiki Raja, Xinjinbei, 33 anonymous edits<br />
Schlick's approximation Source: http://en.wikipedia.org/w/index.php?oldid=450196648 Contributors: Alhead, AlphaPyro, Anticipation of a New Lover's Arrival, The, AySz88, Svick, 2<br />
anonymous edits<br />
Screen Space Ambient Occlusion Source: http://en.wikipedia.org/w/index.php?oldid=453836663 Contributors: 3d engineer, ALoopingIcon, Aceleo, AndyTheGrump, Bombe, Buxley Hall,<br />
Chris the speller, Closedmouth, CommonsDelinker, Cre-ker, Dcuny, Dontstopwalking, Ethryx, Frap, Fuhghettaboutit, Gerweck, IRWeta, InvertedSaint, Jackattack51, Leadwerks, LogiNevermore,<br />
Lokator, Luke831, Malcolmxl5, ManiaChris, Manolo w, NimbusTLD, Pyronite, Retep998, SammichNinja, Sdornan, Sethi Xzon, Silverbyte, Stimpy77, Strata8, Tylerp9p, UncleZeiv, Vlad3D,<br />
152 anonymous edits<br />
Self-shadowing Source: http://en.wikipedia.org/w/index.php?oldid=405912489 Contributors: Amalas, Drat, Eekerz, Invertzoo, Jean-Frédéric, Jeff3000, Llorenzi, Midkay, Roxis, Shawnc,<br />
Vendettax, Woohookitty, XenoL-Type<br />
Shadow mapping Source: http://en.wikipedia.org/w/index.php?oldid=445459982 Contributors: 7, Antialiasing, Aresio, Ashwin, Dominicos, Dormant25, Eekerz, Fresheneesz, GDallimore,<br />
Icehose, Klassobanieras, Kostmo, M-le-mot-dit, Mattijsvandelden, Midnightzulu, Mrwojo, Orderud, Pearle, Praetor alpha, Rainwarrior, ShashClp, Starfox, Sterrys, Tommstein, 42 anonymous<br />
edits<br />
Shadow volume Source: http://en.wikipedia.org/w/index.php?oldid=445458469 Contributors: Abstracte, AlistairMcMillan, Ayavaron, Chealer, Closedmouth, Cma, Damian Yerrick, Darklilac,<br />
Eekerz, Eric Lengyel, Fractal3, Frecklefoot, Fresheneesz, GDallimore, Gamer Eek, J.delanoy, Jaxad0127, Jtsiomb, Jwir3, Klassobanieras, LOL, LiDaobing, Lkinkade, Mark kilgard, Mboverload,<br />
MoraSique, Mrwojo, Orderud, PigFlu Oink, Praetor alpha, Rainwarrior, Rivo, Rjwilmsi, Slicing, Snoyes, Starfox, Staz69uk, Steve Leach, Swatoa, Technobadger, TheDaFox, Tommstein, Zolv,<br />
Zorexx, Михајло Анђелковић, 52 anonymous edits<br />
Silhouette edge Source: http://en.wikipedia.org/w/index.php?oldid=419806917 Contributors: BenFrantzDale, David Levy, Gaius Cornelius, Orderud, Quibik, RJHall, Rjwilmsi, Wheger, 14<br />
anonymous edits<br />
Spectral rendering Source: http://en.wikipedia.org/w/index.php?oldid=367532592 Contributors: 1ForTheMoney, Brighterorange, Shentino, Srleffler, Xcelerate<br />
Specular highlight Source: http://en.wikipedia.org/w/index.php?oldid=451972496 Contributors: Altenmann, Bautze, BenFrantzDale, Cgbuff, Connelly, Dhatfield, Dicklyon, ERobson, Eekerz,<br />
Ettrig, Jakarr, JeffBobFrank, Jwhiteaker, KKelvinThompson, Kri, Michael Hardy, Mmikkelsen, Niello1, Plowboylifestyle, RJHall, Reedbeta, Ti chris, Tommy2010, Versatranitsonlywaytofly,<br />
Wizard191, 31 anonymous edits<br />
Specularity Source: http://en.wikipedia.org/w/index.php?oldid=453622448 Contributors: Barticus88, Dori, Frap, Hetar, JDspeeder1, Jh559, M-le-mot-dit, Megan1967, Neonstarlight,<br />
Nintend06, Oliver Lineham, Utrecht gakusei, Volfy, 1 anonymous edit<br />
Sphere mapping Source: http://en.wikipedia.org/w/index.php?oldid=403586902 Contributors: AySz88, BenFrantzDale, Digulla, Eekerz, Jaho, Paolo.dL, Smjg, SteveBaker, Tim1357, 1<br />
anonymous edit<br />
Stencil buffer Source: http://en.wikipedia.org/w/index.php?oldid=445458852 Contributors: BluesD, Claynoik, Cyc, Ddawson, Eep², Furrykef, Guitpicker07, Kitedriver, Levj, MrKIA11,<br />
Mrwojo, O.mangold, Orderud, Rainwarrior, Zvar, Михајло Анђелковић, 16 anonymous edits<br />
Stencil codes Source: http://en.wikipedia.org/w/index.php?oldid=451998334 Contributors: Bebestbe, ChrisHodgesUK, Gentryx, Jncraton, Reyk, 2 anonymous edits<br />
Subdivision surface Source: http://en.wikipedia.org/w/index.php?oldid=422274395 Contributors: Ablewisuk, Abmac, Andreas Fabri, Ati3414, Banus, Berland, BoredTerry, Boubek, Brock256,<br />
Bubbleshooting, CapitalR, Charles Matthews, Crucificator, Deodar, Feureau, Flamurai, Furrykef, Giftlite, Husond, Korval, Lauciusa, Levork, Lomacar, MIT Trekkie, Moritz Moeller,<br />
MoritzMoeller, Mysid, Norden83, Orderud, Qutezuce, RJHall, Radioflux, Rasmus Faber, Romainbehar, Tabletop, The-Wretched, WorldRuler99, 43 anonymous edits<br />
Subsurface scattering Source: http://en.wikipedia.org/w/index.php?oldid=438543284 Contributors: ALoopingIcon, Azekeal, BenFrantzDale, Dominicos, Fama Clamosa, Frap, Kri, Meekohi,<br />
Mic ma, NRG753, Piotrek Chwała, Quadell, RJHall, Reedbeta, Robertvan1, Rufous, T-tus, Tinctorius, Xezbeth, 21 anonymous edits<br />
Surface caching Source: http://en.wikipedia.org/w/index.php?oldid=447942641 Contributors: Amalas, AnteaterZot, AvicAWB, Fredrik, Hephaestos, KirbyMeister, LOL, Markb, Mika1h,<br />
Miyagawa, Orderud, Schneelocke, Thunderbrand, Tregoweth, 14 anonymous edits<br />
Surface normal Source: http://en.wikipedia.org/w/index.php?oldid=446089587 Contributors: 16@r, 4C, Aboalbiss, Abrech, Aquishix, Arcfrk, BenFrantzDale, Daniele.tampieri, Dori,<br />
Dysprosia, Editsalot, Elembis, Epolk, Fixentries, Frecklefoot, Furrykef, Gene Nygaard, Giftlite, Hakeem.gadi, Herbee, Ilya Voyager, JasonAD, JohnBlackburne, JonathanHudgins, Jorge Stolfi,<br />
Joseph Myers, KSmrq, Kostmo, Kushal one, LOL, Lunch, Madmath789, Michael Hardy, ObscureAuthor, Oleg Alexandrov, Olegalexandrov, Paolo.dL, Patrick, Paulheath, Pooven, Quanda,<br />
R'n'B, RJHall, RevenDS, Serpent's Choice, Skytopia, Smessing, Squash, Sterrys, Subhash15, Takomat, Zvika, 42 anonymous edits<br />
Texel Source: http://en.wikipedia.org/w/index.php?oldid=445299767 Contributors: -Majestic-, Altenmann, Beno1000, BorisFromStockdale, Dicklyon, Flammifer, Furrykef, Gamer3D, Jamelan,<br />
Jynus, Kmk35, MIT Trekkie, Marasmusine, MementoVivere, Neckelmann, Neg, Nlu, ONjA, RainbowCrane, Rilak, Sterrys, Thilo, Zbbentley, Михајло Анђелковић, 21 anonymous edits<br />
Texture atlas Source: http://en.wikipedia.org/w/index.php?oldid=435068103 Contributors: Abdull, Eekerz, Fram, Gosox5555, Mattg82, Melfar, MisterPhyrePhox, Remag Kee, Spodi, Tardis,<br />
10 anonymous edits<br />
Texture filtering Source: http://en.wikipedia.org/w/index.php?oldid=437388837 Contributors: Alanius, Arnero, Banano03, Benx009, BobtheVila, Brighterorange, CoJaBo, Dawnseeker2000,<br />
Eekerz, Flamurai, GeorgeOne, Gerweck, Hooperbloob, Jagged 85, Jusdafax, Michael Hardy, Mild Bill Hiccup, Obsidian Soul, RJHall, Remag Kee, Rich Farmbrough, Shvelven, Srleffler, Tavla,<br />
Tolkien fan, Valarauka, Wilstrup, Xompanthy, 23 anonymous edits<br />
Texture mapping Source: http://en.wikipedia.org/w/index.php?oldid=452815880 Contributors: 16@r, ALoopingIcon, Abmac, Al Fecund, Alfio, Annicedda, Anyeverybody, Arjayay, Arnero,<br />
Art LaPella, AstrixZero, AzaToth, Barticus88, Besieged, Biasoli, BluesD, Blueshade, Canadacow, Cclothier, Chadloder, Collabi, CrazyTerabyte, DanielPharos, Davepape, Dhatfield, Djanvk,<br />
Donaldrap, Dwilches, Eekerz, Eep², Elf, Fawzma, Furrykef, GDallimore, Gamer3D, Gbaor, Gerbrant, Giftlite, Goododa, GrahamAsher, Helianthi, Heppe, Imroy, Isnow, JIP, Jagged 85, Jesse<br />
Viviano, Jfmantis, JonH, Kate, KnowledgeOfSelf, Kri, Kusmabite, LOL, Luckyz, M.J. Moore-McGonigal PhD, P.Eng, MIT Trekkie, ML, Mackseem, Martin Kozák, MarylandArtLover, Mav,<br />
MaxDZ8, Michael Hardy, Michael.Pohoreski, Mietchen, Neelix, Novusspero, Obsidian Soul, Oicumayberight, Ouzari, Palefire, Plasticup, Pvdl, Qutezuce, RJHall, Rainwarrior, Rich Farmbrough,<br />
Ronz, SchuminWeb, Sengkang, Simon Fenney, Simon the Dragon, SiobhanHansa, Solipsist, SpunkyBob, Srleffler, Stephen, T-tus, Tarinth, TheAMmollusc, Tompsci, Twas Now, Vaulttech,<br />
Vitorpamplona, Viznut, Wayne Hardman, Willsmith, Ynhockey, Zom-B, Zzuuzz, 112 anonymous edits
Texture synthesis Source: http://en.wikipedia.org/w/index.php?oldid=437027933 Contributors: Akinoame, Barticus88, Borsi112, Cmdrjameson, CommonsDelinker, CrimsonTexture,<br />
Davidhorman, Dhatfield, Disavian, Drlanman, Ennetws, Instantaneous, Jhhays, Kellen`, Kukini, Ljay2two, LucDecker, Mehrdadh, Michael Hardy, Nbarth, Nezbie, Nilx, Rich Farmbrough,<br />
Rpaget, Simeon, Spark, Spot, Straker, TerriersFan, That Guy, From That Show!, TheAMmollusc, Thetawave, Tom Paine, 34 anonymous edits<br />
Tiled rendering Source: http://en.wikipedia.org/w/index.php?oldid=442640072 Contributors: 1ForTheMoney, CosineKitty, Eekerz, Imroy, Kinema, Milan Keršláger, Otolemur crassicaudatus,<br />
Remag Kee, TJ Spyke, The Anome, Walter bz, Woohookitty, 16 anonymous edits<br />
UV mapping Source: http://en.wikipedia.org/w/index.php?oldid=454856801 Contributors: Bk314159, Diego Moya, DotShell, Eduard pintilie, Eekerz, Ennetws, Ep22, Fractal3, Jleedev, Kieff,<br />
Lupinewulf, Mrwojo, Phatsphere, Radical Mallard, Radioflux, Raymond Grier, Rich Farmbrough, Richard7770, Romeu, Simeon, Yworo, Zephyris, Δ, 30 anonymous edits<br />
UVW mapping Source: http://en.wikipedia.org/w/index.php?oldid=406105823 Contributors: Ajstov, Eekerz, Kenchikuben, Kuru, Mackseem, Nimur, Reach Out to the Truth, Romeu, 3<br />
anonymous edits<br />
Vertex Source: http://en.wikipedia.org/w/index.php?oldid=454604793 Contributors: ABF, Aitias, Americanhero, Anyeverybody, Ataleh, Butterscotch, CMBJ, Coopkev2, Crisis, Cronholm144,<br />
David Eppstein, DeadEyeArrow, Discospinster, DoubleBlue, Duoduoduo, Escape Orbit, Fixentries, Fly by Night, Funandtrvl, Giftlite, Hvn0413, Icairns, JForget, Knowz, Leuko, M.Virdee,<br />
MarsRover, Martin von Gagern, Mecanismo, Mendaliv, Mhaitham.shammaa, Mikayla102295, Miym, NatureA16, Orange Suede Sofa, Panscient, Petrb, Pumpmeup, SGBailey, SchfiftyThree,<br />
Shinli256, Shyland, SimpleParadox, StaticGull, Steelpillow, Synchronism, TheWeakWilled, Tomruen, WaysToEscape, William Avery, WissensDürster, ﻲﻧﺎﻣ, 119 anonymous edits<br />
Vertex Buffer Object Source: http://en.wikipedia.org/w/index.php?oldid=444428407 Contributors: Allenc28, Frecklefoot, GoingBatty, Jgottula, Omgchead, Robertbowerman, Whitepaw, 23<br />
anonymous edits<br />
Vertex normal Source: http://en.wikipedia.org/w/index.php?oldid=399999576 Contributors: David Eppstein, Eekerz, MagiMaster, Manop, Michael Hardy, Reyk, 1 anonymous edit<br />
Viewing frustum Source: http://en.wikipedia.org/w/index.php?oldid=447539396 Contributors: Archelon, AvicAWB, Craig Pemberton, Crossmr, Cyp, DavidCary, Dbchristensen, Dpv, Eep²,<br />
Flamurai, Gdr, Hymek, M-le-mot-dit, MusicScience, Nimur, Poccil, RJHall, Reedbeta, Robth, Shashank Shekhar, Welsh, 14 anonymous edits<br />
Virtual actor Source: http://en.wikipedia.org/w/index.php?oldid=448512924 Contributors: ASU, Aqwis, Bensin, Chowbok, Deacon of Pndapetzim, Donfbreed, DragonflySixtyseven,<br />
ErkDemon, FernoKlump, Fu Kung Master, Hughdbrown, Joseph A. Spadaro, Lenticel, Martijn Hoekstra, Mikola-Lysenko, NYKevin, Neelix, Otto4711, Piski125, Retired username, Sammy1000,<br />
Tavix, Uncle G, Vassyana, Woohookitty, 17 anonymous edits<br />
Volume rendering Source: http://en.wikipedia.org/w/index.php?oldid=453127260 Contributors: Andrewmu, Anilknyn, Art LaPella, Bcgrossmann, Berland, Bodysurfinyon, Breuwi, Butros,<br />
CallipygianSchoolGirl, Cameron.walsh, Chalkie666, Charles Matthews, Chowbok, Chroniker, Craig Pemberton, Crlab, Ctachme, DGG, Damian Yerrick, Davepape, Decora, Dhatfield, Dsajga,<br />
Eduardo07, Egallois, Exocom, GL1zdA, Greystar92, Hu12, Iab0rt4lulz, Iweber2003, JHKrueger, Julesd, Kostmo, Kri, Lackas, Lambiam, Levin, Locador, Male1979, Martarius, Mdd, Mugab,<br />
Nbarth, Nippashish, Pathak.ab, Pearle, Praetor alpha, PretentiousSnot, RJHall, Rich Farmbrough, Rilak, Rkikinis, Sam Hocevar, Sjappelodorus, Sjschen, Squids and Chips, Stefanbanev, Sterrys,<br />
Theroadislong, Thetawave, TimBentley, Tobo, Tom1.xeon, Uncle Dick, Welsh, Whibbard, Wilhelm Bauer, Wolfkeeper, Ömer Cengiz Çelebi, 116 anonymous edits<br />
Volumetric lighting Source: http://en.wikipedia.org/w/index.php?oldid=421846868 Contributors: Amalas, Berserker79, Edoe2, Fusion7, IgWannA, KlappCK, Lumoy, Tylerp9p,<br />
VoluntarySlave, Wwwwolf, Xanzzibar, 20 anonymous edits<br />
Voxel Source: http://en.wikipedia.org/w/index.php?oldid=454946061 Contributors: Accounting4Taste, Alansohn, Alfio, Andreba, Andrewmu, Ariesdraco, Aursani, BenFrantzDale, Bendykst,<br />
Biasedeyes, Bigdavesmith, BlindWanderer, Bojilov, Borek, Bornemix, Calliopejen1, Carpet, Centrx, Chris the speller, CommonsDelinker, Craig Pemberton, Cristan, Ctachme, Damian Yerrick,<br />
Dawidl, DefenceForce, Diego Moya, Dragon1394, Erik Zachte, Everyking, Flarn2006, Fredrik, Fubar Obfusco, Furrykef, Gordmoo, Gousst, GregorB, Hairy Dude, Haya shiloh, Hendricks266,<br />
Hplusplus, INCSlayer, Jaboja, Jamelan, Jarble, Jedlinlau, Jedrzej s, John Nevard, Karl-Henner, KasugaHuang, Kbdank71, Kelson, Kuroboushi, Lambiam, LeeHunter, MGlosenger, Maestrosync,<br />
Marasmusine, Mindmatrix, Miterdale, Mlindstr, MrOllie, Mwtoews, My Core Competency is Competency, Null Nihils, OllieFury, Omegatron, PaterMcFly, Pearle, Petr Kopač, Pleasantville,<br />
Pythagoras1, RJHall, Rajatojha, Retodon8, Ronz, Sallison, Saltvik, Satchmo, Schizobullet, SharkD, Shentino, Simeon, Softy, Soyweiser, SpeedyGonsales, Stampsm, Stefanbanev, Stephen<br />
Morley, Stormwatch, Suruena, The Anome, Thumperward, Thunderklaus, Tinclon, Tncomp, Tomtheeditor, Touchaddict, VictorAnyakin, Victordiaz, Vossman, Waldir, Wernher,<br />
WhiteHatLurker, Wlievens, Wyatt Riot, Wyrmmage, X00022027, Xanzzibar, Xezbeth, ZeiP, ZeroOne, Михајло Анђелковић, הרפ, 161 anonymous edits<br />
Z-buffering Source: http://en.wikipedia.org/w/index.php?oldid=451724203 Contributors: Abmac, Alexcat, Alfakim, Alfio, Amillar, Antical, Archelon, Arnero, AySz88, Bcwhite,<br />
BenFrantzDale, Bohumir Zamecnik, Bookandcoffee, Chadloder, CodeCaster, Cutler, David Eppstein, DavidHOzAu, Delt01, Drfrogsplat, Feraudyh, Fredrik, Furrykef, Fuzzbox, GeorgeBills,<br />
Harutsedo2, Jmorkel, John of Reading, Kaszeta, Komap, Kotasik, Landon1980, Laoo Y, LogiNevermore, Mav, Mild Bill Hiccup, Moroder, Mronge, Msikma, Nowwatch, PenguiN42, RJHall,<br />
Rainwarrior, Salam32, Solkoll, Sterrys, T-tus, Tobias Bergemann, ToohrVyk, TuukkaH, Wik, Wikibofh, Zeus, Zoicon5, Zotel, ²¹², 65 anonymous edits<br />
Z-fighting Source: http://en.wikipedia.org/w/index.php?oldid=445460217 Contributors: AxelBoldt, AySz88, CesarB, Chentianran, CompuHacker, Furrykef, Gamer Eek, Jeepday, Mhoskins,<br />
Mrwojo, Otac0n, Qz, RJHall, Rainwarrior, Rbrwr, Reedbeta, The Rambling Man, Михајло Анђелковић, 18 anonymous edits<br />
3D computer graphics software Source: http://en.wikipedia.org/w/index.php?oldid=453709925 Contributors: -Midorihana-, 16@r, 790, 99neurons, ALoopingIcon, Adrian 1001, Agentbla, Al<br />
Hart, Alanbly, AlexTheMartian, Alibaba327, Antientropic, Archizero, Arneoog, AryconVyper, Bagatelle, BananaFiend, BcRIPster, Beetstra, Bertmg, Bigbluefish, Blackbox77, Bobsterling1975,<br />
Book2, Bovineone, Brenont, Bsmweb3d, Bwildasi, Byronknoll, CALR, CairoTasogare, CallipygianSchoolGirl, Candyer, Carioca, Chowbok, Chris Borg, Chris TC01, Chris the speller,<br />
Chrisminter, Chromecat, Cjrcl, Cremepuff222, Cyon Steve, Davester78, Dekisugi, Dicklyon, Dlee3d, Dodger, Dr. Woo, DriveDenali, Dryo, Dto, Dynaflow, EEPROM Eagle, ERobson, ESkog,<br />
Edward, Eiskis, Elf, Elfguy, Enigma100cwu, EpsilonSquare, ErkDemon, Euchiasmus, Extremophile, Fiftyquid, Firsfron, Frecklefoot, Fu Kung Master, GTBacchus, Gal911, Genius101,<br />
Goncalopp, Greg L, GustavTheMushroom, Herorev, Holdendesign, HoserHead, Hyad, Iamsouthpaw, IanManka, Im.thatoneguy, Intgr, Inthoforo, Iphonefans2009, Iridescent, JLaTondre,<br />
Jameshfisher, JayDez, Jdm64, Jdtyler, Jncraton, JohnCD, Jreynaga, Jstier, Jtanadi, Juhame, KVDP, Kev Boy, Koffeinoverdos, Kotakotakota, Kubigula, Lambda, Lantrix, Laurent Cancé,<br />
Lerdthenerd, LetterRip, Licu, Lightworkdesign, Litherlandsand, Lolbill58, Longhair, M.J. Moore-McGonigal PhD, P.Eng, Malcolmxl5, Mandarax, Marcelswiss, Markhobley, Martarius,<br />
Materialscientist, Mayalld, Michael Devore, Michael b strickland, Mike Gale, Millahnna, MrOllie, NeD80, NeoKron, Nev1, Nick Drake, Nixeagle, Nopnopzero, Nutiketaiel, Oicumayberight,<br />
Optigon.wings, Orderud, Ouzari, Papercyborg, Parametric66, Parscale, Paul Stansifer, Pepelyankov, Phiso1, Plan, Quincy2010, Radagast83, Raffaele Megabyte, Ramu50, Rapasaurus, Raven in<br />
Orbit, Relux2007, Requestion, Rich Farmbrough, Ronz, Ryan Postlethwaite, Samtroup, Scotttsweeney, Serioussamp, ShaunMacPherson, Skhedkar, Skinnydow, SkyWalker, Skybum, Smalljim,<br />
Snarius, Sparklyindigopink, Speck-Made, Stib, Strattonbrazil, Sugarsmax, Tbsmith, Team FS3D, TheRealFennShysa, Thecrusader 440, Thymefromti, Tommato, Tritos, Truthdowser, Uncle Dick,<br />
VRsim, Vdf22, Victordiaz, VitruV07, Waldir, WallaceJackson, Wcgteach, Weetoddid, Welsh, WereSpielChequers, Woohookitty, Wsultzbach, Xx3nvyxx, Yellowweasel, ZanQdo, Zarius,<br />
Zundark, Δ, 364 anonymous edits
Image Sources, Licenses and Contributors<br />
Image:Raytraced image jawray.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Raytraced_image_jawray.jpg License: Attribution Contributors: User Jawed on en.wikipedia<br />
Image:Glasses 800 edit.png Source: http://en.wikipedia.org/w/index.php?title=File:Glasses_800_edit.png License: Public Domain Contributors: Gilles Tran<br />
Image:utah teapot.png Source: http://en.wikipedia.org/w/index.php?title=File:Utah_teapot.png License: Public domain Contributors: -<br />
Image:Perspective Projection Principle.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Perspective_Projection_Principle.jpg License: GNU Free Documentation License<br />
Contributors: -<br />
Image:Aocclude insectambient.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Aocclude_insectambient.jpg License: GNU Free Documentation License Contributors: Original<br />
uploader was Mrtheplague at en.wikipedia<br />
Image:Aocclude insectdiffuse.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Aocclude_insectdiffuse.jpg License: GNU Free Documentation License Contributors: Original<br />
uploader was Mrtheplague at en.wikipedia<br />
Image:Aocclude insectcombined.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Aocclude_insectcombined.jpg License: GNU Free Documentation License Contributors: Original<br />
uploader was Mrtheplague at en.wikipedia<br />
Image:Aocclude bentnormal.png Source: http://en.wikipedia.org/w/index.php?title=File:Aocclude_bentnormal.png License: Creative Commons Attribution-ShareAlike 3.0 Unported<br />
Contributors: Original uploader was Mrtheplague at en.wikipedia<br />
Image:Anisotropic compare.png Source: http://en.wikipedia.org/w/index.php?title=File:Anisotropic_compare.png License: unknown Contributors: -<br />
File:MipMap Example STS101 Anisotropic.png Source: http://en.wikipedia.org/w/index.php?title=File:MipMap_Example_STS101_Anisotropic.png License: GNU Free Documentation<br />
License Contributors: MipMap_Example_STS101.jpg: en:User:Mulad, based on a NASA image derivative work: Kri (talk)<br />
Image:Image-resample-sample.png Source: http://en.wikipedia.org/w/index.php?title=File:Image-resample-sample.png License: Public Domain Contributors: en:user:mmj<br />
Image:Binary space partition.svg Source: http://en.wikipedia.org/w/index.php?title=File:Binary_space_partition.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors:<br />
Jkwchui<br />
Image:BoundingBox.jpg Source: http://en.wikipedia.org/w/index.php?title=File:BoundingBox.jpg License: Creative Commons Attribution 2.0 Contributors: -<br />
File:Bump-map-demo-full.png Source: http://en.wikipedia.org/w/index.php?title=File:Bump-map-demo-full.png License: GNU Free Documentation License Contributors:<br />
Bump-map-demo-smooth.png, Orange-bumpmap.png and Bump-map-demo-bumpy.png: Original uploader was Brion VIBBER at en.wikipedia Later version(s) were uploaded by McLoaf at<br />
en.wikipedia. derivative work: GDallimore (talk)<br />
File:Bump map vs isosurface2.png Source: http://en.wikipedia.org/w/index.php?title=File:Bump_map_vs_isosurface2.png License: Public Domain Contributors: User:GDallimore<br />
Image:Catmull-Clark subdivision of a cube.svg Source: http://en.wikipedia.org/w/index.php?title=File:Catmull-Clark_subdivision_of_a_cube.svg License: GNU Free Documentation License<br />
Contributors: -<br />
Image:Eulerangles.svg Source: http://en.wikipedia.org/w/index.php?title=File:Eulerangles.svg License: Creative Commons Attribution 3.0 Contributors: Lionel Brits<br />
Image:plane.svg Source: http://en.wikipedia.org/w/index.php?title=File:Plane.svg License: Creative Commons Attribution 3.0 Contributors: Original uploader was Juansempere at<br />
en.wikipedia.<br />
File:Panorama cube map.png Source: http://en.wikipedia.org/w/index.php?title=File:Panorama_cube_map.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: SharkD<br />
File:Lambert2.gif Source: http://en.wikipedia.org/w/index.php?title=File:Lambert2.gif License: Creative Commons Attribution-Sharealike 3.0 Contributors: GianniG46<br />
Image:Diffuse reflection.gif Source: http://en.wikipedia.org/w/index.php?title=File:Diffuse_reflection.gif License: Creative Commons Attribution-Sharealike 3.0 Contributors: GianniG46<br />
File:Diffuse reflection.PNG Source: http://en.wikipedia.org/w/index.php?title=File:Diffuse_reflection.PNG License: GNU Free Documentation License Contributors: Original uploader was<br />
Theresa knott at en.wikipedia<br />
Image:Displacement.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Displacement.jpg License: Creative Commons Attribution 2.0 Contributors: Original uploader was T-tus at<br />
en.wikipedia<br />
Image:DooSabin mesh.png Source: http://en.wikipedia.org/w/index.php?title=File:DooSabin_mesh.png License: Public Domain Contributors: Fredrik Orderud<br />
Image:DooSabin subdivision.png Source: http://en.wikipedia.org/w/index.php?title=File:DooSabin_subdivision.png License: Public Domain Contributors: Fredrik Orderud<br />
Image:Local illumination.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Local_illumination.JPG License: Public Domain Contributors: -<br />
Image:Global illumination.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Global_illumination.JPG License: Public Domain Contributors: user:Gtanski<br />
File:Gouraudshading00.png Source: http://en.wikipedia.org/w/index.php?title=File:Gouraudshading00.png License: Public Domain Contributors: Maarten Everts<br />
File:D3D Shading Modes.png Source: http://en.wikipedia.org/w/index.php?title=File:D3D_Shading_Modes.png License: Public Domain Contributors: Lukáš Buričin<br />
Image:Gouraud_low.gif Source: http://en.wikipedia.org/w/index.php?title=File:Gouraud_low.gif License: Creative Commons Attribution 3.0 Contributors: Gouraud low anim.gif: User:Jalo<br />
derivative work: Kri (talk) Attribution to: Zom-B<br />
Image:Gouraud_high.gif Source: http://en.wikipedia.org/w/index.php?title=File:Gouraud_high.gif License: Creative Commons Attribution 2.0 Contributors: -<br />
file:Obj lineremoval.png Source: http://en.wikipedia.org/w/index.php?title=File:Obj_lineremoval.png License: GNU Free Documentation License Contributors: -<br />
Image:Lost Coast HDR comparison.png Source: http://en.wikipedia.org/w/index.php?title=File:Lost_Coast_HDR_comparison.png License: unknown Contributors: -<br />
Image:Isosurface on molecule.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Isosurface_on_molecule.jpg License: unknown Contributors: -<br />
Image:Prop iso.pdf Source: http://en.wikipedia.org/w/index.php?title=File:Prop_iso.pdf License: Creative Commons Attribution-Sharealike 3.0 Contributors: Citizenthom<br />
Image:Lambert Cosine Law 1.svg Source: http://en.wikipedia.org/w/index.php?title=File:Lambert_Cosine_Law_1.svg License: Public Domain Contributors: Inductiveload<br />
Image:Lambert Cosine Law 2.svg Source: http://en.wikipedia.org/w/index.php?title=File:Lambert_Cosine_Law_2.svg License: Public Domain Contributors: Inductiveload<br />
Image:DiscreteLodAndCullExampleRanges.MaxDZ8.svg Source: http://en.wikipedia.org/w/index.php?title=File:DiscreteLodAndCullExampleRanges.MaxDZ8.svg License: Public Domain<br />
Contributors: MaxDZ8<br />
Image:WireSphereMaxTass.MaxDZ8.jpg Source: http://en.wikipedia.org/w/index.php?title=File:WireSphereMaxTass.MaxDZ8.jpg License: Public Domain Contributors: MaxDZ8<br />
Image:WireSphereHiTass.MaxDZ8.jpg Source: http://en.wikipedia.org/w/index.php?title=File:WireSphereHiTass.MaxDZ8.jpg License: Public Domain Contributors: MaxDZ8<br />
Image:WireSphereStdTass.MaxDZ8.jpg Source: http://en.wikipedia.org/w/index.php?title=File:WireSphereStdTass.MaxDZ8.jpg License: Public Domain Contributors: MaxDZ8<br />
Image:WireSphereLowTass.MaxDZ8.jpg Source: http://en.wikipedia.org/w/index.php?title=File:WireSphereLowTass.MaxDZ8.jpg License: Public Domain Contributors: MaxDZ8<br />
Image:WireSphereMinTass.MaxDZ8.jpg Source: http://en.wikipedia.org/w/index.php?title=File:WireSphereMinTass.MaxDZ8.jpg License: Public Domain Contributors: MaxDZ8<br />
Image:SpheresBruteForce.MaxDZ8.jpg Source: http://en.wikipedia.org/w/index.php?title=File:SpheresBruteForce.MaxDZ8.jpg License: Public Domain Contributors: MaxDZ8<br />
Image:SpheresLodded.MaxDZ8.jpg Source: http://en.wikipedia.org/w/index.php?title=File:SpheresLodded.MaxDZ8.jpg License: Public Domain Contributors: MaxDZ8<br />
Image:DifferenceImageBruteLod.MaxDZ8.png Source: http://en.wikipedia.org/w/index.php?title=File:DifferenceImageBruteLod.MaxDZ8.png License: Public Domain Contributors:<br />
MaxDZ8<br />
Image:MipMap Example STS101.jpg Source: http://en.wikipedia.org/w/index.php?title=File:MipMap_Example_STS101.jpg License: GNU Free Documentation License Contributors:<br />
en:User:Mulad, based on a NASA image<br />
Image:Painters_problem.png Source: http://en.wikipedia.org/w/index.php?title=File:Painters_problem.png License: GNU Free Documentation License Contributors: -<br />
Image:NURBS 3-D surface.gif Source: http://en.wikipedia.org/w/index.php?title=File:NURBS_3-D_surface.gif License: Creative Commons Attribution-Sharealike 3.0 Contributors: Greg A L<br />
Image:NURBstatic.svg Source: http://en.wikipedia.org/w/index.php?title=File:NURBstatic.svg License: GNU Free Documentation License Contributors: Original uploader was<br />
WulfTheSaxon at en.wikipedia.org<br />
Image:motoryacht design i.png Source: http://en.wikipedia.org/w/index.php?title=File:Motoryacht_design_i.png License: GNU Free Documentation License Contributors: Original uploader<br />
was Freeformer at en.wikipedia Later version(s) were uploaded by McLoaf at en.wikipedia.<br />
Image:Surface modelling.svg Source: http://en.wikipedia.org/w/index.php?title=File:Surface_modelling.svg License: GNU Free Documentation License Contributors: Surface1.jpg: Maksim<br />
derivative work: Vladsinger (talk)
Image:nurbsbasisconstruct.png Source: http://en.wikipedia.org/w/index.php?title=File:Nurbsbasisconstruct.png License: GNU Free Documentation License Contributors: -<br />
Image:nurbsbasislin2.png Source: http://en.wikipedia.org/w/index.php?title=File:Nurbsbasislin2.png License: GNU Free Documentation License Contributors: -<br />
Image:nurbsbasisquad2.png Source: http://en.wikipedia.org/w/index.php?title=File:Nurbsbasisquad2.png License: GNU Free Documentation License Contributors: -<br />
Image:Normal map example.png Source: http://en.wikipedia.org/w/index.php?title=File:Normal_map_example.png License: Creative Commons Attribution-ShareAlike 1.0 Generic<br />
Contributors: -<br />
Image:Oren-nayar-vase1.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Oren-nayar-vase1.jpg License: GNU General Public License Contributors: M.Oren and S.Nayar. Original<br />
uploader was Jwgu at en.wikipedia<br />
Image:Oren-nayar-surface.png Source: http://en.wikipedia.org/w/index.php?title=File:Oren-nayar-surface.png License: Public Domain Contributors: -<br />
Image:Oren-nayar-reflection.png Source: http://en.wikipedia.org/w/index.php?title=File:Oren-nayar-reflection.png License: Public Domain Contributors: -<br />
Image:Oren-nayar-vase2.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Oren-nayar-vase2.jpg License: GNU General Public License Contributors: M. Oren and S. Nayar. Original<br />
uploader was Jwgu at en.wikipedia<br />
Image:Oren-nayar-vase3.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Oren-nayar-vase3.jpg License: GNU General Public License Contributors: M. Oren and S. Nayar. Original<br />
uploader was Jwgu at en.wikipedia<br />
Image:Oren-nayar-sphere.png Source: http://en.wikipedia.org/w/index.php?title=File:Oren-nayar-sphere.png License: Public Domain Contributors: -<br />
Image:Painter's algorithm.svg Source: http://en.wikipedia.org/w/index.php?title=File:Painter's_algorithm.svg License: GNU Free Documentation License Contributors: Zapyon<br />
Image:Painters problem.svg Source: http://en.wikipedia.org/w/index.php?title=File:Painters_problem.svg License: Public Domain Contributors: Wojciech Muła<br />
Image:ParallaxMapping.jpg Source: http://en.wikipedia.org/w/index.php?title=File:ParallaxMapping.jpg License: Fair use Contributors: -<br />
Image:particle sys fire.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Particle_sys_fire.jpg License: Public Domain Contributors: Jtsiomb<br />
Image:particle sys galaxy.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Particle_sys_galaxy.jpg License: Public Domain Contributors: User Jtsiomb on en.wikipedia<br />
Image:Pi-explosion.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Pi-explosion.jpg License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors: Sameboat<br />
Image:Particle Emitter.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Particle_Emitter.jpg License: GNU Free Documentation License Contributors: -<br />
Image:Strand Emitter.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Strand_Emitter.jpg License: GNU Free Documentation License Contributors: -<br />
Image:Pathtrace3.png Source: http://en.wikipedia.org/w/index.php?title=File:Pathtrace3.png License: Public Domain Contributors: John Carter<br />
Image:Bidirectional scattering distribution function.svg Source: http://en.wikipedia.org/w/index.php?title=File:Bidirectional_scattering_distribution_function.svg License: Public Domain<br />
Contributors: Twisp<br />
Image:Phong components version 4.png Source: http://en.wikipedia.org/w/index.php?title=File:Phong_components_version_4.png License: Creative Commons Attribution-ShareAlike 3.0<br />
Unported Contributors: User:Rainwarrior<br />
Image:Phong-shading-sample.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Phong-shading-sample.jpg License: Public Domain Contributors: -<br />
File:Glas-1000-enery.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Glas-1000-enery.jpg License: Creative Commons Attribution-Sharealike 2.5 Contributors: Tobias R Metoc<br />
Image:Procedural Texture.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Procedural_Texture.jpg License: GNU Free Documentation License Contributors: -<br />
Image:Perspective Transform Diagram.png Source: http://en.wikipedia.org/w/index.php?title=File:Perspective_Transform_Diagram.png License: Public Domain Contributors: -<br />
Image:Space of rotations.png Source: http://en.wikipedia.org/w/index.php?title=File:Space_of_rotations.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: -<br />
Image:Hypersphere of rotations.png Source: http://en.wikipedia.org/w/index.php?title=File:Hypersphere_of_rotations.png License: Creative Commons Attribution-Sharealike 3.0<br />
Contributors: -<br />
Image:Diagonal rotation.png Source: http://en.wikipedia.org/w/index.php?title=File:Diagonal_rotation.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: -<br />
Image:Cornell Box With and Without Radiosity Enabled.gif Source: http://en.wikipedia.org/w/index.php?title=File:Cornell_Box_With_and_Without_Radiosity_Enabled.gif License:<br />
Creative Commons Attribution-Sharealike 3.0 Contributors: User:PJohnston<br />
Image:Radiosity - RRV, step 79.png Source: http://en.wikipedia.org/w/index.php?title=File:Radiosity_-_RRV,_step_79.png License: Creative Commons Attribution-Sharealike 3.0<br />
Contributors: -<br />
Image:Radiosity Comparison.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Radiosity_Comparison.jpg License: GNU Free Documentation License Contributors: Hugo Elias<br />
(myself)<br />
Image:Radiosity Progress.png Source: http://en.wikipedia.org/w/index.php?title=File:Radiosity_Progress.png License: GNU Free Documentation License Contributors: Hugo Elias (myself)<br />
File:Nusselt analog.svg Source: http://en.wikipedia.org/w/index.php?title=File:Nusselt_analog.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Jheald<br />
Image:Utah teapot simple 2.png Source: http://en.wikipedia.org/w/index.php?title=File:Utah_teapot_simple_2.png License: Creative Commons Attribution-Sharealike 3.0 Contributors:<br />
Dhatfield<br />
File:Recursive raytrace of a sphere.png Source: http://en.wikipedia.org/w/index.php?title=File:Recursive_raytrace_of_a_sphere.png License: Creative Commons Attribution-Share Alike<br />
Contributors: Tim Babb<br />
File:Ray trace diagram.svg Source: http://en.wikipedia.org/w/index.php?title=File:Ray_trace_diagram.svg License: GNU Free Documentation License Contributors: Henrik<br />
File:Glasses 800 edit.png Source: http://en.wikipedia.org/w/index.php?title=File:Glasses_800_edit.png License: Public Domain Contributors: Gilles Tran<br />
File:BallsRender.png Source: http://en.wikipedia.org/w/index.php?title=File:BallsRender.png License: Creative Commons Attribution 3.0 Contributors: Mimigu (talk)<br />
File:Ray-traced steel balls.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Ray-traced_steel_balls.jpg License: GNU Free Documentation License Contributors: Original uploader<br />
was Greg L at en.wikipedia (Original text : Greg L)<br />
File:Glass ochem.png Source: http://en.wikipedia.org/w/index.php?title=File:Glass_ochem.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: Purpy Pupple<br />
File:PathOfRays.svg Source: http://en.wikipedia.org/w/index.php?title=File:PathOfRays.svg License: Creative Commons Attribution-Sharealike 2.5 Contributors: Traced by User:Stannered,<br />
original by en:user:Kolibri<br />
Image:Refl sample.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Refl_sample.jpg License: Public Domain Contributors: -<br />
Image:Mirror2.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Mirror2.jpg License: Public Domain Contributors: Al Hart<br />
Image:Metallic balls.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Metallic_balls.jpg License: Public Domain Contributors: AlHart<br />
Image:Blurry reflection.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Blurry_reflection.jpg License: Public Domain Contributors: AlHart<br />
Image:Glossy-spheres.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Glossy-spheres.jpg License: Public Domain Contributors: AlHart<br />
Image:Spoon fi.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Spoon_fi.jpg License: GNU Free Documentation License Contributors: User Freeformer on en.wikipedia<br />
Image:cube mapped reflection example.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Cube_mapped_reflection_example.jpg License: GNU Free Documentation License<br />
Contributors: User TopherTG on en.wikipedia<br />
Image:Cube mapped reflection example 2.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Cube_mapped_reflection_example_2.JPG License: Public Domain Contributors: User<br />
Gamer3D on en.wikipedia<br />
File:Render Types.png Source: http://en.wikipedia.org/w/index.php?title=File:Render_Types.png License: Creative Commons Attribution-Sharealike 3.0,2.5,2.0,1.0 Contributors: Maximilian<br />
Schönherr<br />
Image:Cg-jewelry-design.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Cg-jewelry-design.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors:<br />
http://www.alldzine.com<br />
File:Latest Rendering of the E-ELT.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Latest_Rendering_of_the_E-ELT.jpg License: Creative Commons Attribution 3.0<br />
Contributors: Swinburne Astronomy Productions/ESO<br />
Image:SpiralSphereAndJuliaDetail1.jpg Source: http://en.wikipedia.org/w/index.php?title=File:SpiralSphereAndJuliaDetail1.jpg License: Creative Commons Attribution 3.0 Contributors:<br />
Robert W. McGregor Original uploader was Azunda at en.wikipedia<br />
Image:Screen space ambient occlusion.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Screen_space_ambient_occlusion.jpg License: Public domain Contributors: Vlad3D at<br />
en.wikipedia<br />
Image:Doom3Marine.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Doom3Marine.jpg License: unknown Contributors: -
Image:7fin.png Source: http://en.wikipedia.org/w/index.php?title=File:7fin.png License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors: Original uploader was Praetor<br />
alpha at en.wikipedia<br />
Image:3noshadow.png Source: http://en.wikipedia.org/w/index.php?title=File:3noshadow.png License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors: Original<br />
uploader was Praetor alpha at en.wikipedia<br />
Image:1light.png Source: http://en.wikipedia.org/w/index.php?title=File:1light.png License: GNU Free Documentation License Contributors: Original uploader was Praetor alpha at<br />
en.wikipedia. Later version(s) were uploaded by Solarcaine at en.wikipedia.<br />
Image:2shadowmap.png Source: http://en.wikipedia.org/w/index.php?title=File:2shadowmap.png License: GNU Free Documentation License Contributors: User Praetor alpha on<br />
en.wikipedia<br />
Image:4overmap.png Source: http://en.wikipedia.org/w/index.php?title=File:4overmap.png License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors: Original uploader<br />
was Praetor alpha at en.wikipedia<br />
Image:5failed.png Source: http://en.wikipedia.org/w/index.php?title=File:5failed.png License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors: Original uploader was<br />
Praetor alpha at en.wikipedia<br />
Image:Doom3shadows.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Doom3shadows.jpg License: unknown Contributors: -<br />
Image:Shadow volume illustration.png Source: http://en.wikipedia.org/w/index.php?title=File:Shadow_volume_illustration.png License: GNU Free Documentation License Contributors:<br />
User:Rainwarrior<br />
Image:Specular highlight.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Specular_highlight.jpg License: GNU Free Documentation License Contributors: Original uploader was<br />
Reedbeta at en.wikipedia<br />
Image:Stencilb&w.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Stencilb&w.JPG License: GNU Free Documentation License Contributors: -<br />
File:3D_von_Neumann_Stencil_Model.svg Source: http://en.wikipedia.org/w/index.php?title=File:3D_von_Neumann_Stencil_Model.svg License: Creative Commons Attribution 3.0<br />
Contributors: Gentryx<br />
File:2D_von_Neumann_Stencil.svg Source: http://en.wikipedia.org/w/index.php?title=File:2D_von_Neumann_Stencil.svg License: Creative Commons Attribution 3.0 Contributors: Gentryx<br />
Image:2D_Jacobi_t_0000.png Source: http://en.wikipedia.org/w/index.php?title=File:2D_Jacobi_t_0000.png License: Creative Commons Attribution 3.0 Contributors: Gentryx<br />
Image:2D_Jacobi_t_0200.png Source: http://en.wikipedia.org/w/index.php?title=File:2D_Jacobi_t_0200.png License: Creative Commons Attribution 3.0 Contributors: Gentryx<br />
Image:2D_Jacobi_t_0400.png Source: http://en.wikipedia.org/w/index.php?title=File:2D_Jacobi_t_0400.png License: Creative Commons Attribution 3.0 Contributors: Gentryx<br />
Image:2D_Jacobi_t_0600.png Source: http://en.wikipedia.org/w/index.php?title=File:2D_Jacobi_t_0600.png License: Creative Commons Attribution 3.0 Contributors: Gentryx<br />
Image:2D_Jacobi_t_0800.png Source: http://en.wikipedia.org/w/index.php?title=File:2D_Jacobi_t_0800.png License: Creative Commons Attribution 3.0 Contributors: Gentryx<br />
Image:2D_Jacobi_t_1000.png Source: http://en.wikipedia.org/w/index.php?title=File:2D_Jacobi_t_1000.png License: Creative Commons Attribution 3.0 Contributors: Gentryx<br />
Image:Moore_d.gif Source: http://en.wikipedia.org/w/index.php?title=File:Moore_d.gif License: Public Domain Contributors: Bob<br />
Image:Vierer-Nachbarschaft.png Source: http://en.wikipedia.org/w/index.php?title=File:Vierer-Nachbarschaft.png License: Public Domain Contributors: -<br />
Image:3D_von_Neumann_Stencil_Model.svg Source: http://en.wikipedia.org/w/index.php?title=File:3D_von_Neumann_Stencil_Model.svg License: Creative Commons Attribution 3.0<br />
Contributors: Gentryx<br />
Image:3D_Earth_Sciences_Stencil_Model.svg Source: http://en.wikipedia.org/w/index.php?title=File:3D_Earth_Sciences_Stencil_Model.svg License: Creative Commons Attribution 3.0<br />
Contributors: Gentryx<br />
Image:ShellOpticalDescattering.png Source: http://en.wikipedia.org/w/index.php?title=File:ShellOpticalDescattering.png License: Creative Commons Attribution-Sharealike 3.0<br />
Contributors: Meekohi<br />
Image:Subsurface scattering.png Source: http://en.wikipedia.org/w/index.php?title=File:Subsurface_scattering.png License: Creative Commons Attribution-Sharealike 3.0 Contributors:<br />
Piotrek Chwała<br />
Image:Sub-surface scattering depth map.svg Source: http://en.wikipedia.org/w/index.php?title=File:Sub-surface_scattering_depth_map.svg License: Public Domain Contributors: Tinctorius<br />
Image:Normal vectors2.svg Source: http://en.wikipedia.org/w/index.php?title=File:Normal_vectors2.svg License: Public Domain Contributors: -<br />
Image:Surface normal illustration.png Source: http://en.wikipedia.org/w/index.php?title=File:Surface_normal_illustration.png License: Public Domain Contributors: Oleg Alexandrov<br />
Image:Surface normal.png Source: http://en.wikipedia.org/w/index.php?title=File:Surface_normal.png License: Public Domain Contributors: Original uploader was Oleg Alexandrov at<br />
en.wikipedia<br />
Image:Reflection angles.svg Source: http://en.wikipedia.org/w/index.php?title=File:Reflection_angles.svg License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors: -<br />
Image:VoronoiPolygons.jpg Source: http://en.wikipedia.org/w/index.php?title=File:VoronoiPolygons.jpg License: Creative Commons Zero Contributors: Kmk35<br />
Image:ProjectorFunc1.png Source: http://en.wikipedia.org/w/index.php?title=File:ProjectorFunc1.png License: Creative Commons Zero Contributors: Kmk35<br />
Image:Texturedm1a2.png Source: http://en.wikipedia.org/w/index.php?title=File:Texturedm1a2.png License: GNU Free Documentation License Contributors: Anynobody<br />
Image:Bumpandopacity.png Source: http://en.wikipedia.org/w/index.php?title=File:Bumpandopacity.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: Anynobody<br />
Image:Perspective correct texture mapping.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Perspective_correct_texture_mapping.jpg License: Public Domain Contributors:<br />
Rainwarrior<br />
Image:Doom ingame 1.png Source: http://en.wikipedia.org/w/index.php?title=File:Doom_ingame_1.png License: Fair use Contributors: -<br />
Image:Texturemapping subdivision.svg Source: http://en.wikipedia.org/w/index.php?title=File:Texturemapping_subdivision.svg License: Public Domain Contributors: Arnero<br />
Image:Ahorn-Maser Holz.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Ahorn-Maser_Holz.JPG License: GNU Free Documentation License Contributors: -<br />
Image:Texture spectrum.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Texture_spectrum.jpg License: Public Domain Contributors: Jhhays<br />
Image:Imagequilting.gif Source: http://en.wikipedia.org/w/index.php?title=File:Imagequilting.gif License: GNU Free Documentation License Contributors: Douglas Lanman (uploaded by Drlanman, en:wikipedia, Original page)<br />
Image:UVMapping.png Source: http://en.wikipedia.org/w/index.php?title=File:UVMapping.png License: Creative Commons Attribution-ShareAlike 3.0 Contributors: Tschmits<br />
Image:UV mapping checkered sphere.png Source: http://en.wikipedia.org/w/index.php?title=File:UV_mapping_checkered_sphere.png License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors: Jleedev<br />
Image:Cube Representative UV Unwrapping.png Source: http://en.wikipedia.org/w/index.php?title=File:Cube_Representative_UV_Unwrapping.png License: Creative Commons Attribution-ShareAlike 3.0 Contributors: - Zephyris Talk. Original uploader was Zephyris at en.wikipedia<br />
File:Two rays and one vertex.png Source: http://en.wikipedia.org/w/index.php?title=File:Two_rays_and_one_vertex.png License: Creative Commons Attribution-ShareAlike 3.0 Contributors: CMBJ<br />
Image:ViewFrustum 01.png Source: http://en.wikipedia.org/w/index.php?title=File:ViewFrustum_01.png License: Public Domain Contributors: -<br />
Image:CTSkullImage.png Source: http://en.wikipedia.org/w/index.php?title=File:CTSkullImage.png License: Public Domain Contributors: Original uploader was Sjschen at en.wikipedia<br />
Image:CTWristImage.png Source: http://en.wikipedia.org/w/index.php?title=File:CTWristImage.png License: Public Domain Contributors: http://en.wikipedia.org/wiki/User:Sjschen<br />
Image:Croc.5.3.10.a gb1.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Croc.5.3.10.a_gb1.jpg License: Copyrighted free use Contributors: stefanbanev<br />
Image:volRenderShearWarp.gif Source: http://en.wikipedia.org/w/index.php?title=File:VolRenderShearWarp.gif License: Creative Commons Attribution-ShareAlike 3.0, 2.5, 2.0, 1.0 Contributors: Original uploader was Lackas at en.wikipedia<br />
Image:MIP-mouse.gif Source: http://en.wikipedia.org/w/index.php?title=File:MIP-mouse.gif License: Public Domain Contributors: Original uploader was Lackas at en.wikipedia<br />
Image:Big Buck Bunny - forest.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Big_Buck_Bunny_-_forest.jpg License: unknown Contributors: Blender Foundation / Project Peach<br />
Image:voxels.svg Source: http://en.wikipedia.org/w/index.php?title=File:Voxels.svg License: Creative Commons Attribution-Sharealike 2.5 Contributors: -<br />
Image:Ribo-Voxels.png Source: http://en.wikipedia.org/w/index.php?title=File:Ribo-Voxels.png License: Creative Commons Attribution-Sharealike 2.5 Contributors: -<br />
Image:Z buffer.svg Source: http://en.wikipedia.org/w/index.php?title=File:Z_buffer.svg License: Creative Commons Attribution-ShareAlike 3.0 Contributors: -Zeus-<br />
Image:Z-fighting.png Source: http://en.wikipedia.org/w/index.php?title=File:Z-fighting.png License: Public Domain Contributors: Original uploader was Mhoskins at en.wikipedia<br />
Image:ZfightingCB.png Source: http://en.wikipedia.org/w/index.php?title=File:ZfightingCB.png License: Public Domain Contributors: CompuHacker (talk)<br />
License<br />
Creative Commons Attribution-Share Alike 3.0 Unported<br />
http://creativecommons.org/licenses/by-sa/3.0/