
Graphics pipeline

In 3D computer graphics, the terms graphics pipeline and rendering pipeline most commonly refer to the current state-of-the-art method of rasterization-based rendering as supported by commodity graphics hardware. The graphics pipeline typically accepts some representation of three-dimensional primitives as input and produces a 2D raster image as output. OpenGL and Direct3D are two notable 3D graphics standards, both describing very similar graphics pipelines.

Stages of the graphics pipeline

Generations of the graphics pipeline

Graphics pipelines constantly evolve. This article describes the graphics pipeline as it can be found in OpenGL 3.2 and Direct3D 9.

Transformation

This stage consumes data about the polygons with vertices, edges and faces that constitute the whole scene. A matrix controls the linear transformations (scaling, rotation, translation, etc.) and viewing transformations (world and view space) that are to be applied to this data.
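As a minimal sketch of how such a transform might be composed, the following NumPy code builds a model matrix from scale, rotation and translation factors and applies it to a vertex in homogeneous coordinates. The function names and the particular transform order are illustrative, not part of either API.

```python
import numpy as np

def translation(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def scaling(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

def rotation_z(angle_rad):
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    m = np.eye(4)
    m[0, 0], m[0, 1] = c, -s
    m[1, 0], m[1, 1] = s, c
    return m

# Compose a model matrix: scale first, then rotate, then translate
# (column-vector convention, so the composition reads right to left).
model = translation(1.0, 0.0, -5.0) @ rotation_z(np.pi / 4) @ scaling(2.0, 2.0, 2.0)

vertex = np.array([1.0, 0.0, 0.0, 1.0])  # homogeneous object-space position
world_pos = model @ vertex               # position in world space
```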

Per-vertex lighting

Geometry in the complete 3D scene is lit according to the defined locations of light sources, reflectance, and other surface properties. Current hardware implementations of the graphics pipeline compute lighting only at the vertices of the polygons being rendered. The lighting values between vertices are then interpolated during rasterization. Per-fragment (i.e. per-pixel) lighting can be done on modern graphics hardware as a post-rasterization process by means of a shader program.
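The sketch below illustrates this two-step scheme, assuming a single directional light and simple Lambertian (diffuse) reflection: lighting is evaluated once per vertex, then interpolated across the triangle with barycentric weights, as rasterization would do. All names and values are illustrative.

```python
import numpy as np

def lambert(normal, light_dir, light_color, albedo):
    # Diffuse term: surface color scaled by the cosine of the angle
    # between the (normalized) normal and light direction, clamped at 0.
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * light_color * max(np.dot(n, l), 0.0)

light_dir   = np.array([0.0, 1.0, 1.0])
light_color = np.array([1.0, 1.0, 1.0])
albedo      = np.array([0.8, 0.2, 0.2])

# Step 1: lighting computed only at the three vertices of a triangle.
vertex_normals = [np.array([0.0, 0.0, 1.0]),
                  np.array([0.0, 1.0, 1.0]),
                  np.array([1.0, 0.0, 1.0])]
vertex_colors = [lambert(n, light_dir, light_color, albedo) for n in vertex_normals]

# Step 2: values between vertices interpolated during rasterization.
def interpolate(bary, colors):
    # bary = barycentric weights (w0, w1, w2), summing to 1
    return sum(w * c for w, c in zip(bary, colors))

center_color = interpolate((1/3, 1/3, 1/3), vertex_colors)
```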

Viewing transformation or normalizing transformation

Objects are transformed from 3D world-space coordinates into a 3D coordinate system based on the position and orientation of a virtual camera. This results in the original 3D scene as seen from the camera's point of view, defined in what is called eye space or camera space. The normalizing transformation is the mathematical inverse of the viewing transformation, and maps from an arbitrary user-specified coordinate system (u, v, w) to a canonical coordinate system (x, y, z).
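One common way to build the viewing transformation is a "look-at" matrix. The sketch below assumes the right-handed, camera-looks-down-negative-z convention used by OpenGL; the function name and parameters are illustrative.

```python
import numpy as np

def look_at(eye, target, up):
    f = target - eye
    f = f / np.linalg.norm(f)          # forward axis
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)          # right axis
    u = np.cross(s, f)                 # corrected up axis
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye  # rotate, then move the eye to the origin
    return view

eye    = np.array([0.0, 2.0, 5.0])
target = np.array([0.0, 0.0, 0.0])
up     = np.array([0.0, 1.0, 0.0])
V = look_at(eye, target, up)           # world space -> eye space
```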

Primitives generation

After the transformation, new primitives are generated from those primitives that were sent to the beginning of the graphics pipeline.
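In OpenGL 3.2 this kind of amplification can be performed by a geometry shader. As a purely illustrative CPU-side sketch of the idea, the following splits each input triangle into four smaller ones at its edge midpoints:

```python
import numpy as np

def subdivide(tri):
    # One triangle in, four triangles out: split at the edge midpoints.
    a, b, c = tri
    ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

tri = (np.array([0.0, 0.0, 0.0]),
       np.array([1.0, 0.0, 0.0]),
       np.array([0.0, 1.0, 0.0]))
new_primitives = subdivide(tri)  # len(new_primitives) == 4
```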

Projection transformation

In the case of a perspective projection, objects that are distant from the camera are made smaller (foreshortened). In an orthographic projection, objects retain their original size regardless of distance from the camera.

In this stage of the graphics pipeline, geometry is transformed from the eye space of the rendering camera into a special 3D coordinate space called "homogeneous clip space", which is very convenient for clipping. Clip space tends to range from [-1, 1] in X, Y and Z, although this can vary by graphics API (Direct3D, for example, uses [0, 1] for Z). The projection transform is responsible for mapping the planes of the camera's viewing volume (or frustum) to the planes of the box which makes up clip space.
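The sketch below builds an OpenGL-style perspective projection matrix and applies it to an eye-space point; after the perspective divide, visible geometry lands in [-1, 1] on every axis, matching the clip-space range described above. The function name and parameters are illustrative.

```python
import numpy as np

def perspective(fov_y_rad, aspect, near, far):
    # Maps the view frustum to homogeneous clip space (OpenGL convention).
    f = 1.0 / np.tan(fov_y_rad / 2)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = (2 * far * near) / (near - far)
    m[3, 2] = -1.0                        # puts -z_eye into w for the divide
    return m

P = perspective(np.radians(60), 16 / 9, 0.1, 100.0)

eye_pos  = np.array([0.0, 0.0, -10.0, 1.0])  # a point 10 units in front of the camera
clip_pos = P @ eye_pos                       # homogeneous clip-space coordinates
ndc = clip_pos[:3] / clip_pos[3]             # perspective divide -> each axis in [-1, 1]
```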
