
Real-Time Rendering of Dynamic Displacement Maps

Tu The Hien∗ and Low Kok Lim†

Department of Computer Science
School of Computing,
National University of Singapore

ABSTRACT

Displacement mapping is a well-known approach in computer graphics for adding realistic visual detail to simple 3D geometric models. Until recently, displacement mapping had been practical only for offline rendering. However, with the advent of modern, inexpensive and powerful graphics processors (GPUs), it has become feasible to render displacement maps in real time. In this project, we reviewed the latest developments in displacement mapping algorithms as well as the newest advances in GPU programming. Based on the experience gained from re-implementing some of these novel methods, the advantages and disadvantages of each method, in terms of performance and quality, are compared and presented. We also look into some interesting possible applications of displacement mapping.

1. INTRODUCTION

1.1. The Problem

In general, displacement mapping techniques render 3D surface detail given a base surface and its height map. There is a trade-off between the quality and the performance of each of the techniques we discuss in the later chapters. Most of the techniques use an offline preprocessing step to compute distance information before rendering the displaced surface. The largest barrier is that this preprocessing takes too long: it is very hard to dynamically update the texture map and still render it well in real time.

1.2. Background

In this section, we briefly discuss the history and background of the problem. A detailed literature survey is presented in section 2.

Object representation is usually defined at 3 levels: macrostructure, mesostructure and microstructure. Macrostructure is the geometric model of the object. Mesostructure refers to small geometric detail that is still visible to the human eye. Microstructure is the microfacet surface detail of the object, which is indistinguishable to the human eye. Displacement mapping is a technique that adds mesostructure detail to the macrostructure of an object, creating the effect that the actual geometric positions of points on the textured surface are displaced, thus enhancing the visual quality of the object representation (Szirmay-Kalos & Umenhoffer, 2008).

Displacement mapping is done by first taking a sample point on the surface and then displacing it in the normal direction by a distance obtained from the height map. The displacement can be applied by shifting the position of a vertex (per-vertex displacement mapping) or of a point inside the surface (per-pixel displacement mapping). The new generations of graphics cards allow programmers to program the vertex shader and the pixel shader of the GPU. In this report, we review implementations of both the vertex shader and the pixel shader approach and analyse their advantages and disadvantages.

∗Student
†Supervisor


1.2.1. Per-vertex displacement mapping

Displacement mapping can be implemented in the vertex shader by moving each vertex along the normal direction of the mesh. On older graphics cards (supporting up to shader model 3.0), it is impossible to change the topology of the mesh (add new vertices), so only the original vertices are perturbed.

Each vertex p is displaced in the normal direction N by a distance h read from the height map, scaled by a user-defined factor s:

p' = p + s · h(u, v) · N

which corresponds to the following GLSL vertex shader fragment.

// get h by looking up the height map
float h = texture2D( displacementMap, gl_MultiTexCoord0.xy ).r;
// calculate the new vertex position
vec4 newVertexPos = vec4( gl_Normal * h * scale, 0.0 ) + gl_Vertex;
// return the transformed vertex position
gl_Position = gl_ModelViewProjectionMatrix * newVertexPos;

The advantage of per-vertex displacement mapping is that it actually changes the geometry and can therefore automatically resolve silhouettes. However, it has some serious drawbacks:

• The vertex shader has less processing power than the pixel shader and has limited access to textures.

• The vertex shader executes once for each vertex whether it is visible or not, wasting GPU resources.

1.2.2. Per-pixel displacement mapping

The vertex shader transforms only the macrostructure geometry; the height map is then used to adjust the texture colour. Since we do not change the geometry, the visibility problem needs to be solved in the fragment shader program by a ray-tracing algorithm. Starting at the top of the height field, we cast rays into the height field to obtain the texture coordinates of the visible point.

The vertex shader processes the macrostructure geometry and passes to the fragment shader an input with texture coordinates (u, v). This processed point has coordinates (u, v, 0) in tangent space. The fragment shader program finds the point of the height field that is really seen along the ray connecting the pixel centre and the processed point (u, v, 0). The direction of this ray is defined by the tangent-space view vector V. The visible point is on the height field and thus has tangent-space coordinates (u0, v0, h(u0, v0)) for some unknown (u0, v0). This visible point is also on the ray, so we need to solve the equation

(u0, v0, h(u0, v0)) = (u, v, 0) + t · V

for the unknown parameters u0, v0 and t.

Per-pixel displacement mapping approaches can be further categorised into safe and unsafe methods, depending on whether they are guaranteed to find the correct first intersection with the height field. Since unsafe methods are much faster than safe methods, it makes sense to combine the two approaches: in combined methods, an unsafe iterative method first aggressively leaps through the empty space, then a safe method computes the accurate intersection (Szirmay-Kalos & Umenhoffer, 2008).
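To make this concrete, a minimal combined search in GLSL might look as follows. This is only a sketch under our own assumptions, not code from any cited paper: the names heightMap, numLinearSteps and numBinarySteps are ours, the height map is assumed to store depth below the surface top in [0, 1] (relief-map convention), and dir is assumed to be the tangent-space ray scaled so that dir.z = 1 spans the full depth.

uniform sampler2D heightMap;     // assumed: depth stored in the red channel
const int numLinearSteps = 32;   // unsafe phase (assumed step counts)
const int numBinarySteps = 6;    // safe refinement phase

// uv: entry texture coordinates at the top of the height field
vec3 intersectHeightField(vec2 uv, vec3 dir)
{
    vec3 p = vec3(uv, 0.0);
    vec3 delta = dir / float(numLinearSteps);
    // unsafe linear search: leap until we first dip below the surface
    for (int i = 0; i < numLinearSteps; i++) {
        if (p.z >= texture2D(heightMap, p.xy).r) break;
        p += delta;
    }
    // refinement: binary search around the last interval
    for (int i = 0; i < numBinarySteps; i++) {
        delta *= 0.5;
        if (p.z >= texture2D(heightMap, p.xy).r) p -= delta;
        else p += delta;
    }
    return p; // approximate intersection in tangent space
}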


2. RELATED WORK

In this section, we present some previous research results that are related to our work.

2.1. Simulation of wrinkled surfaces

Introduced by Blinn in 1978, this paper laid the very first stone for all subsequent displacement mapping techniques. The method is commonly known as bump mapping and can be seen as a simplified version of displacement mapping. Although computer-generated images had achieved some degree of realism at the time, they were not truly realistic: the surfaces still looked artificial because of their unrealistic smoothness. The paper presents a new method of using a texturing function to perturb the surface normal before using it in the intensity calculations (Blinn, 1978).

The key idea is that by changing the normal of the surface, usually by looking it up in a normal texture, the viewer is tricked into believing that the surface geometry itself changes. The result is much better than plain texture mapping. However, the technique is unable to simulate self-occlusion, self-shadowing and viewing parallax. Although it has many flaws and cannot compete with the more realistic displacement mapping methods that came later, this paper had an enormous influence on all subsequent research in displacement mapping: how to add realistic visual detail to a surface without actually adding any geometry to it.

2.2. Detailed shape representation with parallax mapping

More than 20 years after Blinn's first paper, a group of graphics researchers in Tokyo, Japan invented a technique that greatly improves the visual effect of bump mapping, called parallax mapping. The method adds more apparent depth and enhances the visual appearance of bump mapping. It takes into account not only the normal of the texture for shading but also displaces the texture coordinate based on the viewing angle and the height of the current texel. Given the original texture coordinate (u, v), the new texture coordinate (u', v') is calculated from the tangent-space view vector V and the height value h(u, v) read from the normal texture. We simplify the ray equation to

(u', v', h(u, v)) = (u, v, 0) + t · V

and the solution is

(u', v') = (u, v) + h(u, v) · (Vx / Vz, Vy / Vz)

(Tomomichi Kaneko & Tachi, 2001)
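The offset computation maps directly to a few lines of GLSL. The following is only a sketch under our own naming assumptions (heightMap, viewTS and heightScale are not names from the paper):

uniform sampler2D heightMap;

// uv: original texture coordinate; viewTS: normalized tangent-space view vector
vec2 parallaxOffset(vec2 uv, vec3 viewTS, float heightScale)
{
    float h = texture2D(heightMap, uv).r * heightScale;
    // shift the coordinate along the view direction; the division by
    // viewTS.z is what misbehaves at grazing angles, as discussed below
    return uv + h * (viewTS.xy / viewTS.z);
}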

The greater the angle between the view direction and the displaced texture-mapped surface, the more distortion occurs. This creates an illusion of depth due to the parallax effect when the view changes, and thus produces more realistic bumps at almost the same speed as bump mapping. Although parallax mapping is a great improvement over bump mapping, it still inherits the same drawbacks, such as being unable to simulate self-occlusion and self-shadowing. In particular, when the viewing angle becomes more grazing (Vz approaches 0), the offset value tends towards infinity and creates random artifacts. Subsequent algorithms have been proposed to improve on it, such as parallax mapping with offset limiting, iterative parallax mapping, steep parallax mapping and relief parallax mapping (Szirmay-Kalos & Umenhoffer, 2008).


2.3. Relief Texture Mapping

Relief texture mapping is an alternative to parallax mapping with much higher accuracy. The key idea is simple: a relief texture is an extended texture with an orthogonal displacement per texel. When the viewing direction changes, the texture is warped, based on the viewing direction and the height map values, using two simple 1D transforms before being mapped onto the surface to create the viewing parallax. The warping function can be simply described as:

What coordinates (ut, vt) should the source pixels (us, vs) have so that a view of such a flat distorted image on the source image plane from the target COP would be identical to a 3-D image warp of the source image onto the target image plane? (Oliveira, 2000)

2.4. Per-pixel displacement mapping with distance functions

Published by William Donnelly in Nvidia's famous GPU Gems series (GPU Gems 2), this paper presented a new approach for adding geometric detail to surfaces while retaining real-time performance. The key idea is to treat displacement mapping as a ray-tracing problem. In conventional displacement mapping, given a piece of geometry, we find which pixel in the image it maps to; in this algorithm the problem is inverted: given a pixel in the image, we find its corresponding geometry. Imagining the surface as an axis-aligned box, we start at the top of the surface and compute the coordinates at which the viewing ray intersects the displaced surface. We define the distance map of the surface, which gives the shortest distance from any point in the surface texture space to the surface, and store it in a 3D texture. The intersection point is then advanced along the viewing direction by a distance d obtained by looking up the distance map. Each step brings the point closer to the surface, and it finally converges to the real intersection point given enough steps (a GLSL sketch of this march appears in section 3.1.3). The introduced algorithm has many advantages over previous work:

• It takes advantage of the greater processing power of the pixel shader over the vertex shader on new-generation graphics cards.

• It can resolve finely detailed surfaces without aliasing or leaving any gaps in the geometry while keeping the same number of tracing steps.

• It can resolve self-occlusion and self-shadowing easily.

Through experiments, the algorithm performs very well on detailed surfaces while still rendering in real time. However, it still has some issues:

• Although the algorithm is guaranteed to find the exact intersection point eventually, the number of tracing steps in the pixel shader is usually fixed for performance. Given a high-frequency surface, the algorithm may need more depth levels in the 3D texture map and more tracing steps to converge, which decreases performance heavily.

• Calculating the distance map takes a lot of time and is thus undesirable for dynamic real-time rendering. Storing 3D textures is memory-consuming, especially when the depth is increased.

(William, 2005)


2.5. Cone step mapping: An iterative ray-height field intersection algorithm

Inspired by the sphere-tracing displacement mapping algorithm, this paper introduced a new approach to the ray-height field intersection problem with several improvements over sphere tracing. It also uses preprocessed data to advance the viewing ray towards the real intersection with the surface. For a given texel, we calculate its conservative bounding cone: the tip of the cone is on the surface and the side of the cone touches the surface (Jonathan, 2006). The key idea is similar to the "sphere leaping" above; in this case, the point is advanced to the intersection of the viewing ray and the cone (a single step of this leap is sketched after the list below). Cone step mapping can be seen as an improved version of the "sphere step" mapping used in per-pixel displacement mapping with distance maps, and it inherits its advantages as well as its drawbacks:

• It resolves highly detailed surfaces and self-occlusion very well.

• It can render in real time with preprocessed data. The preprocessed data is stored in a 2D relief-map texture, hence requiring less memory than "sphere step" mapping, but it takes longer to compute.

• It suffers on high-frequency detailed surfaces: using a fixed number of cone steps leads to inaccuracy, but calculating the exact intersection point hurts rendering time.
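To make the leap concrete, one conservative cone step in GLSL might look like the sketch below. The names are our own assumptions: a coneMap texture packing the surface depth in .r and the cone ratio in .g, a ray point p, a direction dir scaled so that dir.z = 1 spans the full depth, and rayRatio = length(dir.xy). The full loop structure appears in section 3.2.3.

uniform sampler2D coneMap;  // assumed: .r = surface depth, .g = cone ratio

// one conservative cone step: advance p to where the viewing ray meets
// the cone anchored at the current texel (repeated until height reaches 0)
vec3 coneStep(vec3 p, vec3 dir, float rayRatio)
{
    vec2  t      = texture2D(coneMap, p.xy).rg;
    float height = max(t.r - p.z, 0.0);             // empty depth below the ray
    float d      = t.g * height / (rayRatio + t.g); // step size to the cone boundary
    return p + dir * d;
}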

2.6. Relaxed cone step mapping for relief mapping

This paper is a continuation of the cone step mapping work, introduced in the newest of Nvidia's GPU Gems series. With the conservative cone approach, the ray tends to stop before the actual intersection, which may introduce several kinds of artifacts. The new algorithm combines the strengths of cone step mapping and binary search using a "relaxed cone". Unlike the conservative cone, the relaxed cone allows the ray to pierce the surface at most once (Policarbo & Oliveira, 2007).

This produces much wider cones and faster convergence. The key idea is to first use an aggressive space-leaping approach with the relaxed cone; once the ray is inside the surface, we can safely apply a binary search to find the actual intersection point. The performance increase is noticeable: with the same number of cone steps, it usually produces better results than other algorithms using a similar approach, such as cone step mapping and parallax mapping. However, preprocessing the texture data takes much longer than in other algorithms. Although relaxed cone stepping produces very realistic images, the preprocessing step makes it almost impossible to dynamically update relief textures and render them in real time.

2.7. Accurate per-pixel displacement mapping using a pyramid structure

This paper presents a new per-pixel displacement mapping method, called pyramidal displacement mapping, that provides high accuracy as well as real-time performance. The idea arises from the observation that we can safely skip empty regions by moving the ray according to the maximum height value of a region. Like cone step mapping and sphere tracing, it uses a preprocessed texture to advance the viewing ray and find the intersection with the displaced surface. In this algorithm, the authors use a mipmap pyramid structure to store the bounding height-field information, and the ray advances hierarchically through this pyramid along the view direction. A pyramidal displacement map is a quadtree image pyramid: each leaf at the lowest mipmap level contains the difference between the maximum surface height and the current height at that texel (Oh, Ki, & Lee, 2006). The highest level of the mipmap contains the global minimum difference between the surface height and any texel.
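A heavily simplified GLSL sketch of the hierarchical advance follows. All names here are our own assumptions, the pyramid is assumed to store the safe depth difference described above (with coarser levels holding the minimum over their footprint), we omit the clamping of each leap to the current cell boundary that the full algorithm performs, and we assume a GLSL version in which textureLod is available in the fragment shader.

uniform sampler2D pyramid;  // assumed: per-texel safe depth difference, min-filtered mipmap
const int maxLevel = 8;     // assumed number of mipmap levels

// uv: entry point at the top of the box; dir: ray with dir.z = 1 over the full depth
vec3 pyramidTrace(vec2 uv, vec3 dir)
{
    vec3 p = vec3(uv, 0.0);
    int level = maxLevel;
    for (int i = 0; i < 64 && level >= 0; i++) {
        // safe empty interval below p at the current resolution
        float d = textureLod(pyramid, p.xy, float(level)).r - p.z;
        if (d > 0.0)
            p += dir * d;  // leap across the empty region
        else
            level--;       // no safe leap here: refine to a finer level
    }
    return p;
}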

In the experiments, the algorithm performs very fast and accurately. There is no direct comparison between relaxed cone mapping and pyramidal displacement mapping, but the image quality and rendering times are more or less the same. The new method also performs very well on strongly spatially-varying height-field textures, which relief mapping and parallax occlusion mapping are unable to render correctly. However, the biggest achievement of this method is its preprocessing step. Other per-pixel displacement mapping methods, although they can achieve very good frame rates at render time, rely heavily on a preprocessing step that usually takes so much time to compute that dynamic displacement mapping becomes very hard. Relaxed cone step mapping takes O(n⁴) time and sphere tracing O(depth · n²) time to preprocess the data. Pyramidal mapping only requires three simple nested loops, level × N × N iterations in total, to compute the pyramidal map, and the algorithm can be implemented on both the CPU and the GPU. In the experiments, this step takes approximately 1.8 ms on the CPU for a 256×256 texture. Another advantage of pyramidal displacement mapping is that it does not need a 3D texture to store the preprocessed data; an N × N × 2 texture suffices. Although the texture size is doubled compared to other methods, the many other advantages are enough to compensate for this. The new technique makes it possible to render displacement mapping dynamically in real time (Oh et al., 2006).

3. IMPLEMENTATION

3.1. Per-pixel displacement mapping with distance maps

The algorithm is implemented with Visual C++ 2008 and GLSL. I followed the algorithm described in chapter 8 of GPU Gems 2 (William, 2005).

3.1.1. Preprocessing the 3D texture map

Computing a distance transform is a well-studied problem. In our implementation, we used Danielsson's algorithm (Danielsson, 1980), which runs in O(k · n) time, where n is the number of pixels in the map and k is the depth level. The idea is to create a 3D map in which each texel stores a 3D displacement vector to the nearest point on the surface. The algorithm then performs a small number of sequential sweeps over the 3D domain, updating each pixel's displacement vector based on the neighbouring displacement vectors. Once the displacements have been calculated, the distance for each pixel is computed as the magnitude of its displacement.

When the distance transform algorithm is finished, we have computed the distance from each pixel in the distance map to the closest point on the surface, measured in pixels. To make these distances lie in the range [0, 1], we divide each distance by the depth of the 3D texture in pixels.

3.1.2. Vertex shader

In the vertex shader, we simply transform the light vector and the eye vector into tangent space. The binormal and tangent vectors are computed beforehand and passed to the vertex shader. The calculation of the tangent-space matrix was discussed in the previous section.
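A minimal version of this vertex shader in GLSL, assuming (our naming) that the tangent and binormal arrive as vertex attributes and that the eye and light positions are supplied in object space:

attribute vec3 tangent;     // assumed per-vertex attributes
attribute vec3 binormal;
uniform vec3 eyePosObj;     // assumed: eye and light positions in object space
uniform vec3 lightPosObj;
varying vec3 eyeVecTS;      // tangent-space vectors consumed by the fragment shader
varying vec3 lightVecTS;

void main()
{
    vec3 eyeDir   = eyePosObj   - gl_Vertex.xyz;
    vec3 lightDir = lightPosObj - gl_Vertex.xyz;
    // project onto the tangent frame (rows: tangent, binormal, normal)
    eyeVecTS   = vec3(dot(eyeDir, tangent),
                      dot(eyeDir, binormal),
                      dot(eyeDir, gl_Normal));
    lightVecTS = vec3(dot(lightDir, tangent),
                      dot(lightDir, binormal),
                      dot(lightDir, gl_Normal));
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}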


3.1.3. Fragment shader

In the fragment shader, we start from the top of the height map. Using the distances read from the distance map, we iteratively trace a fixed number of steps until we reach the surface.
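A minimal sketch of this march, assuming (our naming) that the preprocessed distance map is bound as a 3D texture and that the ray direction has been normalized in texture space:

uniform sampler3D distanceMap; // assumed: normalized distance to the nearest surface point
const int numSteps = 16;       // fixed number of tracing steps, as described above

// p: entry point at the top of the box; dir: normalized tangent-space ray direction
vec3 distanceMarch(vec3 p, vec3 dir)
{
    for (int i = 0; i < numSteps; i++) {
        // the stored distance is a safe leap: no surface lies within this radius
        float d = texture3D(distanceMap, p).r;
        p += d * dir;
    }
    return p; // converges towards the first ray-surface intersection
}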

3.2. Relaxed cone step mapping

3.2.1. Preprocessing the relief texture map

We use an O(n²) search per texel (O(n⁴) in total, as noted in section 2.7) to calculate the relaxed cone map offline. The idea is: for each source texel ti, trace a ray through each destination texel tj, such that this ray starts at (ti.texCoord.s, ti.texCoord.t, 0.0) and points to (tj.texCoord.s, tj.texCoord.t, tj.depth). For each ray, compute the intersection with the height field and use this intersection point to compute the cone ratio coneRatio(i, j). The cone ratio, coneRatio = w/h, is illustrated in figure 3.1. After processing all the source texels, we obtain the relaxed cone map.

3.2.2. Vertex shader

Similar to per-pixel displacement mapping with distance maps: we simply transform the view vector and the light vector into tangent space.

3.2.3. Fragment shader

The key idea is to first use an aggressive space-leaping approach with the relaxed cone. Once the ray is inside the surface, we can safely apply a binary search to find the actual intersection point. Consider one cone step: let d be the depth the ray advances, m the corresponding horizontal offset in texture space, and rayRatio the horizontal distance the ray travels per unit of depth. From the ray we have

m = d × rayRatio

We also have, from the cone stored at the current texel,

m = g × coneRatio = ((currentTexelDepth − rayCurrentDepth) − d) × coneRatio

Solving the two equations, we get

d = (currentTexelDepth − rayCurrentDepth) × coneRatio / (rayRatio + coneRatio)

Finally, the intersection point is advanced to

I = rayCurrentPosition + d × rayDirection
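A minimal GLSL sketch of the full trace, under our own naming assumptions (coneMap packs the surface depth in .r and the relaxed cone ratio in .g; the step counts are assumptions):

uniform sampler2D coneMap;     // assumed: .r = texel depth, .g = relaxed cone ratio
const int numConeSteps   = 15;
const int numBinarySteps = 6;

// p: entry point (u, v, 0); dir: tangent-space view ray pointing into the surface
vec3 relaxedConeTrace(vec3 p, vec3 dir)
{
    dir /= dir.z;                    // one unit of dir now advances one unit of depth
    float rayRatio = length(dir.xy); // horizontal motion per unit of depth
    vec3 pos = p;
    // space leaping with the step size d derived above
    for (int i = 0; i < numConeSteps; i++) {
        vec2 t = texture2D(coneMap, pos.xy).rg;
        float height = max(t.r - pos.z, 0.0);           // currentTexelDepth - rayCurrentDepth
        pos += dir * (t.g * height / (rayRatio + t.g)); // d from the formula above
    }
    // the relaxed cone lets the ray pierce the surface at most once,
    // so a binary search over the travelled span is safe
    dir *= pos.z * 0.5;
    pos = p + dir;
    for (int i = 0; i < numBinarySteps; i++) {
        dir *= 0.5;
        if (pos.z < texture2D(coneMap, pos.xy).r) pos += dir;
        else pos -= dir;
    }
    return pos;
}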

4. CONCLUSION

Given the limited time of a one-semester UROP project, we did not have enough time to both understand and implement all of the displacement mapping techniques. More importantly, throughout this project I have learned and fully understood the problem and the various approaches to solving it. In addition, having had zero GPU programming background when I first started the project, after implementing two state-of-the-art algorithms from the famous GPU Gems book series I have learned to use both the GLSL and Cg programming languages, as well as shader development software such as ATI's RenderMonkey and Nvidia's FX Composer. Since GPU programming is very low-level and one cannot print out and test the output as one normally does in a high-level programming language like Java or C++, it was also important that I gained experience with shader debugging and optimization tools such as gDEBugger, the GLSL Devil shader debugger and the GPU ShaderAnalyzer.


4.1. Future work

Real-time displacement mapping is a very interesting area which has attracted many graphics researchers all over the world. New techniques or improvements over existing algorithms are published every year. Together with recent achievements in graphics hardware, it is becoming more and more feasible to render displacement mapping in real time. Although the area is becoming crowded, there is still room for improvement, since nobody has yet achieved an algorithm that renders displacement mapping both fast and accurately.

Displacement mapping also opens up many interesting applications in image-based rendering, such as terrain rendering. For example, we could take a terrain image and terrain heights from Google Earth and then use displacement mapping to render a 3D view of the earth. Maybe in the future, when graphics hardware is strong enough and an efficient algorithm has been achieved, we could have a fully 3D view of the earth's terrain.

5. REFERENCES

[1] Blinn, J. F. (1978). Simulation of wrinkled surfaces. Computer Graphics (Proceedings of SIGGRAPH 78), 12(3), August 1978, 286–292.

[2] Danielsson, P.-E. (1980). Euclidean distance mapping. Computer Graphics and Image Processing, 14, 227–248.

[3] Jonathan, D. (2006). Cone step mapping: An iterative ray-heightfield intersection algorithm.

[4] Oh, K., Ki, H., & Lee, C.-H. (2006). Pyramidal displacement mapping: A GPU based artifacts-free ray tracing through an image pyramid. In VRST '06: Proceedings of the ACM symposium on Virtual reality software and technology (pp. 75–82).

[5] Oliveira, M. M. (2000). Relief texture mapping. PhD thesis, University of North Carolina.

[6] Policarbo, F., & Oliveira, M. M. (2007). GPU Gems, Vol. 3, Chap. 18, pp. 409–428. Addison Wesley.

[7] Szirmay-Kalos, L., & Umenhoffer, T. (2008). Displacement mapping on the GPU - State of the Art. Computer Graphics Forum, 27(1).

[8] Tomomichi Kaneko, Toshiyuki Takahei, M. I. N. K. Y. Y. T. M., & Tachi, S. (2001). Detailed shape representation with parallax mapping. In Proceedings of ICAT (pp. 205–208).

[9] William, D. (2005). GPU Gems, Vol. 2, Chap. 8, pp. 123–136. Addison Wesley.
