Real-Time GPU Silhouette Refinement using adaptively blended ...
involving only nine basis functions. Since they form a partition of unity, we can obtain one of them from the remaining eight. Therefore, it suffices to store the values of eight basis functions, and we need only two texture lookups for evaluation per point. Note that if we choose the center coefficient as in (4) we need three texture lookups for retrieving the basis functions, but the remainder of the shader is essentially the same.

Due to the linearity of the sampling operator, we may express (11) for a vertex p of P_M with s(p) = m + α as

\[
v = S_{s(p)}[F](p) = \sum_{i,j,k} b_{ijk}\, S_{s(p)}[\hat{B}_{ijk}](p)
  = \sum_{i,j,k} b_{ijk} \Bigl( (1-\alpha)\, S_m[\hat{B}_{ijk}](p) + \alpha\, S_{m+1}[\hat{B}_{ijk}](p) \Bigr). \tag{14}
\]

Thus, for every vertex p of P_M, we pre-evaluate S_m[\hat{B}^3_{300}](p), ..., S_m[\hat{B}^3_{021}](p) for every refinement level m = 1, ..., M and store this in an M × 2 block in the texture. We organize the texture such that four basis functions are located next to the four corresponding basis functions of the adjacent refinement level. This layout optimizes the spatial coherency of texture accesses, since two adjacent refinement levels are always accessed when a vertex is calculated. Also, if vertex shaders on future graphics hardware support filtered texture lookups, we could increase performance by carrying out the linear interpolation between refinement levels by sampling between texel centers.

Since the values of our basis functions are always in the interval [0, 1], we can trade precision for performance and pack two basis functions into one channel of data, letting one basis function have the integer part while the other has the fractional part of a channel. This reduces the precision to about 12 bits, but increases the speed of the algorithm by 20% without adding visual artifacts.

6.5 Normal and displacement mapping

Our algorithm can be adjusted to accommodate most regular rendering techniques.
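The packing scheme and the blend in (14) can be sketched on the CPU side as follows. This is a minimal Python illustration, not the actual shader: `pack`/`unpack` show how two values in [0, 1] share one channel via the integer and fractional parts at roughly 12-bit precision, and `blend` evaluates the linear interpolation between two adjacent refinement levels; all function names are ours, chosen for the sketch.

```python
import math

SCALE = 1 << 12  # 4096 quantization steps, i.e. about 12 bits per value

def pack(a, b):
    """Pack two values in [0, 1] into one channel: a quantized into the
    integer part, b quantized into the fractional part."""
    qa = round(a * (SCALE - 1))
    qb = round(b * (SCALE - 1)) / SCALE
    return qa + qb

def unpack(packed):
    """Recover the two quantized values from a packed channel."""
    qa = math.floor(packed)
    qb = packed - qa
    return qa / (SCALE - 1), qb * SCALE / (SCALE - 1)

def blend(basis_m, basis_m1, coeffs, alpha):
    """Evaluate (14): sum of coefficients times the linear blend
    (1 - alpha) * S_m + alpha * S_{m+1} of pre-evaluated basis samples
    at two adjacent refinement levels."""
    return sum(c * ((1 - alpha) * bm + alpha * bm1)
               for c, bm, bm1 in zip(coeffs, basis_m, basis_m1))
```

In the real shader the packed values live in texture channels and the unpacking runs per vertex; the sketch only demonstrates that quantizing to 12 bits keeps the round-trip error below one part in roughly four thousand.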
Pure fragment-level techniques can be applied directly, but vertex-level techniques may need some adjustment.

An example of a pure fragment-level technique is normal mapping. The idea is to store a dense sampling of the object's normal field in a texture, and in the fragment shader use the normal from this texture instead of the interpolated normal for lighting calculations. The result of using normal mapping on a coarse mesh is depicted in the left of Figure 8.

Normal mapping only modulates the lighting calculations; it does not alter the geometry. Thus, silhouettes are still piecewise linear. In addition, the flat geometry is distinctly visible at grazing angles, which is the case for the sea surface in Figure 8.

The displacement mapping technique attacks this problem by perturbing the vertices of a mesh. The drawback is that displacement mapping requires the geometry in problem areas to be densely tessellated. The brute-force strategy of tessellating the whole mesh increases the complexity significantly and is best suited for off-line rendering. However, a ray-tracing-like approach using GPUs has been demonstrated by Donnelly [6].

We can use our strategy to establish the problem areas of the current frame and use our variable level-of-detail refinement strategy to tessellate these areas. First, we augment the silhouetteness test, tagging for refinement those edges that are large in the current projection and part of planar regions at grazing angles. Then we incorporate displacement mapping in the vertex shader of Section 6.4. However, care must be taken to avoid cracks and maintain a watertight surface.

For a point p at integer refinement level s, we find the triangle T = [p_i, p_j, p_k] of P_s that contains p. We then find the displacement vectors at p_i, p_j, and p_k.
The displacement vector at p_i is found by first doing a texture lookup in the displacement map using the texture coordinates at p_i, and then multiplying this displacement with the interpolated shading normal at p_i. In the same fashion we find the displacement vectors at p_j and p_k. The three displacement vectors are then combined using the barycentric weights of p with respect to T, resulting in a displacement vector at p. If s is not an integer, we interpolate the displacement vectors of two adjacent levels similarly to (9).

The result of this approach is depicted to the right in Figure 8, where the cliff ridges are appropriately jagged and the water surface is displaced according to the waves.
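The combination of the three corner displacements can be sketched as follows. This is a NumPy illustration with hypothetical inputs, not the actual vertex shader: `heights` stands in for the scalar values already fetched from the displacement map at each corner's texture coordinates, and `normals` for the interpolated shading normals at the corners.

```python
import numpy as np

def displacement(weights, heights, normals):
    """Combine the per-corner displacement vectors (height times shading
    normal) using the barycentric weights of p with respect to T."""
    return sum(w * h * n for w, h, n in zip(weights, heights, normals))

def displacement_blended(weights, heights, normals, d_next, alpha):
    """For a non-integer level s = m + alpha, blend the displacement at
    level m with the displacement d_next at level m + 1 linearly,
    analogously to the interpolation in (9)."""
    d = displacement(weights, heights, normals)
    return (1 - alpha) * d + alpha * d_next
```

For a point at the triangle's barycenter over a flat patch with unit normals and constant height, the combined displacement is simply the height along the shared normal, which is the sanity check one would expect from a barycentric average.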
