
Volume rendering

In scientific visualization and computer graphics, volume rendering is a set of techniques used to display a 2D projection of a 3D discretely sampled data set.

A typical 3D data set is a group of 2D slice images acquired by a CT, MRI, or MicroCT scanner. Usually these are acquired in a regular pattern (e.g., one slice every millimeter) and usually have a regular number of image pixels in a regular pattern. This is an example of a regular volumetric grid, with each volume element, or voxel, represented by a single value obtained by sampling the immediate area surrounding the voxel.
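A regular volumetric grid of this kind can be sketched as a stack of 2D slices, each voxel holding one sampled value. The function and field names below are hypothetical, chosen only for illustration:

```python
# Hypothetical sketch: a stack of 2D slices forms a regular volumetric
# grid; indexing it by (slice, row, column) addresses a single voxel.
def make_volume(slices):
    """slices: list of 2D lists (rows x cols), one per scan slice."""
    depth, rows, cols = len(slices), len(slices[0]), len(slices[0][0])
    return {"data": slices, "shape": (depth, rows, cols)}

def voxel(volume, z, y, x):
    """Return the single sampled value stored at voxel (z, y, x)."""
    return volume["data"][z][y][x]

# A tiny 2x2x2 volume: two slices of 2x2 pixels each.
vol = make_volume([[[0, 1], [2, 3]],
                   [[4, 5], [6, 7]]])
print(voxel(vol, 1, 0, 1))  # 5
```

In practice such volumes are stored as contiguous 3D arrays, but the addressing scheme is the same.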

To render a 2D projection of the 3D data set, one first needs to define a camera in space relative to the volume. One also needs to define the opacity and color of every voxel. This is usually done with an RGBA (red, green, blue, alpha) transfer function that defines the RGBA value for every possible voxel value.
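A transfer function of this kind can be defined from a handful of control points and interpolated in between. This is a minimal sketch with hypothetical names, assuming voxel values and channels are normalized to [0, 1]:

```python
def transfer_function(v, table):
    """Piecewise-linear RGBA transfer function.
    table: sorted list of (voxel_value, (r, g, b, a)) control points."""
    if v <= table[0][0]:
        return table[0][1]
    for (v0, c0), (v1, c1) in zip(table, table[1:]):
        if v <= v1:
            t = (v - v0) / (v1 - v0)          # position between the two points
            return tuple(a + t * (b - a) for a, b in zip(c0, c1))
    return table[-1][1]

# Example: low densities map to transparent black, high to opaque white.
tf = [(0.0, (0, 0, 0, 0)), (1.0, (1, 1, 1, 1))]
print(transfer_function(0.5, tf))  # (0.5, 0.5, 0.5, 0.5)
```

Real renderers often precompute this mapping into a lookup table so every sample costs one array access.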

For example, a volume may be viewed by extracting isosurfaces (surfaces of equal values) from the volume and rendering them as polygonal meshes, or by rendering the volume directly as a block of data. The marching cubes algorithm is a common technique for extracting an isosurface from volume data. Direct volume rendering is a computationally intensive task that may be performed in several ways.
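The first step of marching cubes is to classify each grid cell by whether its eight corner values straddle the isovalue; only those cells can contain a piece of the isosurface. The sketch below shows just that classification step (the per-cell triangle generation that follows is omitted), with hypothetical names:

```python
def crossing_cells(volume, iso):
    """Return (z, y, x) indices of grid cells whose eight corner values
    straddle the isovalue -- the cell-classification step of marching
    cubes. Triangle generation per cell is omitted in this sketch."""
    depth, rows, cols = len(volume), len(volume[0]), len(volume[0][0])
    cells = []
    for z in range(depth - 1):
        for y in range(rows - 1):
            for x in range(cols - 1):
                corners = [volume[z + dz][y + dy][x + dx]
                           for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)]
                if min(corners) < iso <= max(corners):
                    cells.append((z, y, x))
    return cells

# A 2x2x2 volume is a single cell; its corner values straddle iso = 3.5.
vol = [[[0, 1], [2, 3]],
       [[4, 5], [6, 7]]]
print(crossing_cells(vol, 3.5))  # [(0, 0, 0)]
```

The full algorithm then looks each crossing cell up in a 256-entry case table to emit triangles along the interpolated surface.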

Direct volume rendering

A direct volume renderer [1] [2] requires every sample value to be mapped to an opacity and a color. This is done with a "transfer function", which can be a simple ramp, a piecewise linear function, or an arbitrary table. Once converted to an RGBA (red, green, blue, alpha) value, the composed RGBA result is projected onto the corresponding pixel of the frame buffer. The way this is done depends on the rendering technique.

[Figure: A volume-rendered cadaver head using view-aligned texture mapping and diffuse reflection]

[Figure: Volume-rendered CT scan of a forearm with different color schemes for muscle, fat, bone, and blood]
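In a ray-casting renderer, for instance, the mapped samples along each viewing ray are combined front to back with the "over" operator before the result is written to the pixel. This is a minimal sketch of that compositing step, with hypothetical names (`transfer` stands for any value-to-RGBA transfer function):

```python
def composite_ray(samples, transfer):
    """Front-to-back 'over' compositing of voxel samples along one ray.
    samples: voxel values in front-to-back order.
    transfer: callable mapping a voxel value to an (r, g, b, a) tuple."""
    color = [0.0, 0.0, 0.0]
    alpha = 0.0
    for v in samples:
        r, g, b, a = transfer(v)
        weight = (1.0 - alpha) * a      # contribution not yet occluded
        for i, c in enumerate((r, g, b)):
            color[i] += weight * c
        alpha += weight
        if alpha >= 0.999:              # early ray termination
            break
    return (color[0], color[1], color[2], alpha)
```

Accumulated opacity lets the loop stop early once the ray is effectively opaque, a standard optimization in direct volume rendering.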

A combination of these techniques is possible. For instance, a shear warp implementation could use texturing hardware to draw the aligned slices in the off-screen buffer.
