
We are pleased to provide this sample of the 3rd Dimension newsletter from Veritas et Visus.

We encourage you to consider an annual subscription.

• For individuals, an annual subscription (10 issues) is only $47.99. Order information is available at http://www.veritasetvisus.com/order.htm.

• For corporations, an annual site license subscription is $299.99. The site license enables unlimited distribution within your company, including on an intranet. Order information is available at http://www.veritasetvisus.com/order_site_license.htm.

• A discount is available to subscribers who order all five of our newsletters. Our five newsletters cover the following topics:

ο 3D
ο Touch
ο High Resolution
ο Flexible Displays
ο Display Standards

The goal of this newsletter is to bring subscribers the most comprehensive review of recent news about the emerging markets and technologies related to 3D displays. This newsletter combines news summaries, feature articles, tutorials, opinion & commentary columns, summaries of recent technology papers, interviews, and event information in a straightforward, essentially ad-free format. The 3rd Dimension enables you to easily and affordably stay on top of the myriad activities in this exciting market.

We look forward to adding you to our rapidly growing list of subscribers!

Best regards,

Mark Fihn
Publisher & Editor-in-Chief
Veritas et Visus
http://www.veritasetvisus.com


3rd Dimension

Veritas et Visus 3rd Dimension September 2007
Veritas et Visus September 2007 Vol 2 No 10

DAZ 3D, p39 MED, p72 Beowulf, p22 AMD, p43

Letter from the publisher: A different perspective… by Mark Fihn 2
News from around the world 12
S3D Basics+ Conference, August 29-29, 2007, Berlin, Germany 42
Society for Information Display 2007 Symposium, May 20-25, Long Beach, California 45
International Workshop on 3D Information Technology, May 15, 2007, Seoul, Korea 49
3DTV CON 2007, May 7-9, Kos Island, Greece 52
Stereoscopic Displays and Applications 2007 Conference, January 29-31, San Jose 60
Interview with Greg Truman from ForthDD 69
Interview with Ian Underwood from MED 72
Expert Commentary:
• Shoveling Data by Adrian Travis 76
• 3D camera for medicine and more by Matthew Brennesholtz 78
• 3D isn’t so easy by Chris Chinnock 79
• Selling to the market for 3D LCD displays… by Jim Howard 82
• 3D maps – a matter of perspective? by Alan Jones 85
• PC vs. Console – Has the mark been missed? by Neil Schneider 89
• 3D DLP HDTVs – all is revealed! by Andrew Woods 91
• The Last Word: How things get invented by Lenny Lipton 96
Calendar of events 98

The 3rd Dimension is focused on bringing news and commentary about developments and trends related to the use of 3D displays and supportive components and software. The 3rd Dimension is published electronically 10 times annually by Veritas et Visus, 3305 Chelsea Place, Temple, Texas, USA, 76502. Phone: +1 254 791 0603. http://www.veritasetvisus.com

Publisher & Editor-in-Chief: Mark Fihn, mark@veritasetvisus.com
Managing Editor: Phillip Hill, phill@veritasetvisus.com
Associate Editor: Geoff Walker, geoff@veritasetvisus.com
Contributors: Matt Brennesholtz, Chris Chinnock, Jim Howard, Alan Jones, Lenny Lipton, Neil Schneider, Adrian Travis, Andrew Woods

Subscription rate: US$47.99 annually. Single issues are available for US$7.99 each. Hard copy subscriptions are available upon request, at a rate based on location and mailing method. Copyright 2007 by Veritas et Visus. All rights reserved. Veritas et Visus disclaims any proprietary interest in the marks or names of others.

http://www.veritasetvisus.com 1



A different perspective…
by Mark Fihn

Not long ago, one of our subscribers asked me why I include topics in this newsletter that are not directly related to 3D displays. He suggested that my coverage of 3D photography, 3D binoculars, 3D pointing devices, 3D scanners, stereo-lithography, lenticular printing, 3D art, etc., was a distraction from the topic of displays.

Fortunately, this gentleman seems to be in the minority, because most of the feedback I get about these non-display topics is quite positive. In any case, if you don’t care for the non-display coverage, we’re confident that you can scroll through it quickly and find those topics that are most interesting to you.

The reason we cover non-display-related topics in this newsletter is that most of them are directly related to displays, and even those things that are not electronic in nature provide us with some amazing clues about stereo vision and three-dimensional imaging.

Readers of our sister newsletter, High Resolution, will be aware of my fascination with optical illusions. How is it that we “see” things that really aren’t there? Of course, this is exactly what stereographic 3D displays do – they recreate an illusion, as we are not actually seeing three dimensions on the surface of the 2D display. Perhaps this is one of the reasons I find 3D so fascinating – it’s little more than a trick we play on our brains. But as with all good optical illusions, the resulting image is really quite convincing.

In addition to optical illusions, I find myself fascinated with artwork that considers three dimensions. Sculpture is an obvious three-dimensional exercise, but even more than sculpture, I am fascinated by the imaginative forms of art that consider the third dimension from a different perspective. As such, in what will be a rather lengthy introduction to the newsletter, I’m going to share several creations in 3D (or the appearance of 3D) that intrigue me. Although not directly related to displays, only those of you straitjacketed by the practical and mundane will fail to find the next pages interesting, or at least a little fun. Enjoy!

Jen Stark’s artwork explores the three-dimensional use of paper

Artist Jen Stark introduced an interesting collection that explores how two-dimensional objects like paper can be transformed into stunning three-dimensional creations. The basic premise is simple, involving nothing more than hand-cutting stacks of construction paper. http://www.jenstark.com


Robert Lang shows off incredible origami

The Japanese art of paper-folding known as origami is a well-known process that takes two-dimensional sheets of paper to create three-dimensional objects. An American, Robert Lang, has been an avid student of origami for over thirty years and is recognized as one of the world’s leading masters of the art, with over 400 designs catalogued and diagrammed. He is noted for designs of great detail and realism, and includes in his repertoire some of the most complex origami designs ever created. His work combines aspects of the Western school of mathematical origami design with the Eastern emphasis upon line and form to yield models that are distinctive, elegant, and challenging to fold. http://www.langorigami.com

On the left is Robert Lang’s “Allosaurus Skeleton”, fashioned from 16 uncut squares of paper; in the center is “Tree Frog”, made from a single uncut square of paper; and on the right is “The Sentinel”, crafted from 2 uncut sheets of paper.

The Punch Bunch features 3D floral punch art

My wife runs a little business called The Punch Bunch in which she wholesales craft punches to scrapbook and craft stores. One of the things that she has worked hard to popularize is an amazing form of art in which specialty paper punches are used to cut out shapes that are then molded and colored to create some amazing three-dimensional floral bouquets. The business started almost 10 years ago out of our home (the first warehouse was some extra space in the master bathroom). She now ships all over the world and offers perhaps the largest collection of punches in the world. A substantial portion of what she sells is used by crafters to make floral arrangements that are often difficult to distinguish from real flowers. http://www.thepunchbunch.com

Floral arrangements crafted from paper punch-outs. The image on the left is by Australian punch artist Leone Em; the two images on the right are by Seattle-based artist Susan Tierney-Cockburn.


Guido Daniele shows off more Handimals

In issue #18 of the 3rd Dimension, we showed off some intriguing hand paintings by Italian artist Guido Daniele, whose “Handimals” serve as an excellent lesson in creating 2D images on a 3D form. Daniele’s meticulous images use the human hand as an easel, but the image is lost if the perspective of the viewer is altered or if the hand is moved. 3D displays do the reverse – they recreate a 3D image on a 2D form – but similar problems of perspective persist. http://www.guidodaniele.com

Note that body art has become quite popular, particularly when it comes to painting “clothing” on the bodies of attractive young female models. While some of this body art is truly creative, it rarely relies so much on the shape of the body parts and the perspective of the viewer to create the three-dimensional effect. In addition to his popular Handimals, Daniele’s website features numerous examples of full-body art, most of which has been created for commercial purposes. The image to the right, although not a full body-art image, is an example of Daniele’s commercial art using his animal motif in relation to a Jaguar.


Julian Beever’s sidewalk chalk paintings continue to astound

In past editions of the 3rd Dimension, we’ve shown images of Julian Beever’s amazing chalk paintings (see issues #12 and #13), which so clearly show us the importance of perspective. The bottom pair of images serves to identify just how important the viewer’s position is to a successful rendering of a 3D image on a 2D surface. In all of these images, the lines in the sidewalk serve to remind us that these really are 2D paintings. http://users.skynet.be/J.Beever


John Pugh’s amazing trompe-l’œil artistry

Trompe-l’œil is an art technique that uses extremely realistic imagery to create the optical illusion that the depicted objects really exist, instead of being mere two-dimensional paintings. The name is derived from the French for “trick the eye”. One of the current-day masters of the technique is John Pugh, whose stunning creations are so lifelike they have caused traffic accidents. In his image “Art Imitating Life Imitating Art Imitating Life” (shown below), which is featured at a café in San Jose, California, a customer complained he had received “the silent treatment” when he tried to introduce himself to the woman reading a book. http://www.illusion-art.com

John Pugh’s “Art Imitating Life Imitating Art Imitating Life”. The lower left image is an early concept layout; the lower right shows Pugh painting the statue.


Eric Grohe paints 3D murals

Eric Grohe is another fascinating artist who uses the trompe-l’œil style to create amazing murals that transform 2D spaces into stunning 3D paintings. Grohe does most of the artwork by himself and researches, paints, and designs each project from scratch. These paintings, all on flat surfaces, completely change the perception of an otherwise empty space and serve to create a truly novel way to attract interest and attention. http://www.ericgrohemurals.com

Grohe painted the above mural on the side of a shopping mall in Niagara, New York. The upper images are before-and-after photos; the lower image shows detail while also providing a hint about how the image appears from differing perspectives.

On the left is the side of a store in Massillon, Ohio, previously nothing more than a brick wall. The front of the building was later painted to match the architectural details of the mural on the side of the building. On the right is a mural on a wall at the Mount Carmel College of Nursing in Columbus, Ohio.


Two of four murals painted by Grohe at the Washington State Corrections Center for Women in Gig Harbor, Washington.

RollAd competition showcases 3D illusions on sides of trucks

German ad agency RollAd rents out advertising space on the sides of trucks and for the past three years has sponsored the Rolling Advertising Awards competition. The ads are printed on interchangeable canvas covers which are placed over the container portions of the trucks. The winners of the competition have their mock-up designs actually implemented and showcased at the annual awards ceremony (http://www.rhino-award.com). The website shows many very clever examples. The 2005 competition winner was the Pepsi design shown lower left. Interestingly, the lower right image is taken from a different perspective, where the illusion is not as effective.


Devorah Sperber uses chenille stems to show examples of perspective

Devorah Sperber showcases on her website a couple of amazing pieces of art made from chenille stems that both center on Holbein’s art and highlight the importance of perspective when viewing art. In her words:

“While many contemporary artists utilize digital technology to create high-tech works, I strive to ‘dumb-down’ technology by utilizing mundane materials and low-tech, labor-intensive assembly processes. I place equal emphasis on the whole recognizable image and how the individual parts function as abstract elements, selecting materials based on aesthetic and functional characteristics as well as for their capacity for a compelling and often contrasting relationship with the subject matter.” http://www.devorahsperber.com

In this piece, called “After Holbein”, Sperber arranged chenille stems in such a way that Holbein’s work becomes visible when the image is reflected in a polished steel cylinder.

In this piece, a skull becomes obvious only when viewed from an extreme viewing angle.


Inakadate rice farming…

Each year since 1993, farmers in the town of Inakadate in Aomori prefecture have created works of crop art by growing a little purple- and yellow-leafed kodaimai rice along with their local green-leafed tsugaru-roman variety. The images start to appear in the spring and are visible until harvest time in September. While it’s arguable whether these are 3D images, the notion of perspective is a big consideration, as depicted in the close-up image at the bottom right. http://www.am.askanet.ne.jp/~tugaru/z-inakadate.htm

The top left image is the 2007 crop art creation by the farmers of Inakadate, Japan. On the top right is the 2006 image, and the lower left image is from 2005. The lower right image is a close-up of the different rice plantings, identifying the importance of perspective when viewing any image.

Stan Herd crop artistry

Stan Herd is an American crop artist known for creating advertisements that are strategically placed to coincide with airline flight paths. Pictured here are a couple of his more artistic and not-so-commercial efforts. On the left is his “Sunflower Field” and on the right is a “Portrait of Saginaw Grant”. http://www.stanherdart.com


Heather Jansch creates driftwood sculptures

Heather Jansch’s driftwood sculptures don’t give us any insights into 2D-to-3D transformation or into perspective, but they are such unique and beautiful creations that I couldn’t resist including them here anyway. Her specialty is assembling life-size works from scrap driftwood, particularly of horses. Jansch lives and works in the West Country of England. She is holding “Open Studio” 2007 from September 8th to 23rd, showing off work-in-process, new life-size works in driftwood and in oak, a woodland walk, a children’s “treasure trail”, and some featured guest artists. http://www.jansch.freeserve.co.uk


3D news from around the world
compiled by Mark Fihn and Phillip Hill

Acacia Research to soon release 3D Content Creation 2007 report

Market researchers Acacia Research announced the impending release of “3D Content Creation 2007”, which will examine the market for 3D modeling and animation tools in the film and video, video game, advertising, visualization, and other industries. In addition to software shipment and revenue forecasts, this market study will provide details on revenues and budgets of the industries that use the tools and a look at spending on 3D content creation within those industries. It will discuss the major trends in the industry, including consolidation, new market opportunities, the explosion of specialized and lightweight tools, and much more. The report is available at an individual rate of $1,995.00 and at a site license rate of $2,992.50. http://www.acaciarg.com

ITRI forms 3D display alliance with panel makers

Taiwan’s Industrial Technology Research Institute (ITRI) recently formed the 3D Interaction & Display Alliance with Taiwan LCD panel makers, including AU Optronics (AUO), Chi Mei Optoelectronics (CMO), Chunghwa Picture Tubes (CPT) and HannStar Display, and digital TV content suppliers as well as several system makers. ITRI suggested that the 3D display market will grow from $300 million in 2007 to over $2 billion in 2010. Taiwan panel makers such as CMO and CPT have already developed 3D LCD panels, with CMO set to volume-produce 22-inch 3D panels in the third quarter, while CPT’s 20.1-inch wide-screen panel has gained attention from first-tier display vendors, according to sources. http://www.itri.org.tw

Jon Peddie Research reports about Q2’07 graphics market

JPR released its Q2’07 quarterly MarketWatch report on PC graphics shipments. Traditionally, the second quarter is slow for the computer industry. Nevertheless, Q2’07 saw nVidia make significant gains, while AMD and Intel saw more typical results for the time period. VIA saw a slight rise, SiS slipped more, and Matrox dropped too. Total shipments for the quarter were 81.3 million units, up 3% over last quarter. Compared to the same quarter last year, shipments were up 10.6%. On the desktop, nVidia was the clear winner, claiming 35.0% against Intel’s 31.3%, while AMD had a modest gain to 18.8%. In the mobile market, Intel held its dominant position and grew slightly to 51.5%, with nVidia number two at 27% and AMD at 21%. Mobile chips continued their growth to claim 31.2% of the market with 24.5 million units. http://www.jonpeddie.com

Vendor    Q2’07 (M units)   Market share   Year ago (M units)   Market share   Growth
AMD           15.86             19.5%           19.67               26.7%       -19.4%
Intel         30.59             37.5%           29.68               40.4%         3.1%
nVidia        26.48             32.6%           14.48               19.7%        82.9%
Matrox         0.13              0.2%            0.10                0.1%        30.0%
SiS            2.00              2.5%            3.33                4.5%       -39.9%
VIA/S3         6.26              7.7%            6.27                8.5%        -0.2%
TOTAL         81.32            100.0%           73.53              100.0%        10.6%
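As a quick sanity check, the share and growth columns in the JPR table can be recomputed from the unit shipment figures alone. The sketch below (in Python, purely illustrative; the figures are in millions of units, copied from the table) reproduces the published percentages:

```python
# Recompute market share and year-over-year growth from the
# JPR unit shipment figures (values in millions of units).
q2_07 = {"AMD": 15.86, "Intel": 30.59, "nVidia": 26.48,
         "Matrox": 0.13, "SiS": 2.00, "VIA/S3": 6.26}
q2_06 = {"AMD": 19.67, "Intel": 29.68, "nVidia": 14.48,
         "Matrox": 0.10, "SiS": 3.33, "VIA/S3": 6.27}

total_07 = sum(q2_07.values())   # 81.32, i.e. the "81.3 million units" in the text
total_06 = sum(q2_06.values())   # 73.53

for vendor, units in q2_07.items():
    share = 100 * units / total_07                      # share of current quarter
    growth = 100 * (units - q2_06[vendor]) / q2_06[vendor]  # vs. year-ago quarter
    print(f"{vendor:8s} {share:5.1f}%  {growth:+6.1f}%")
# nVidia, for example, works out to a 32.6% share and +82.9% growth,
# matching the table.
```

Note that the quarter-over-quarter figure in the text (up 3%) cannot be checked this way, since the table only gives the year-ago quarter.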


Ozaktas and Onural edit new book about “Three-Dimensional Television”

The scope of a new book entitled “Three-Dimensional Television: Capture, Transmission, Display” reflects the diverse needs of this emerging market. Different chapters deal with different stages of an end-to-end 3DTV system, such as capture, representation, coding, transmission, and display. In addition to stereographic 3D solutions, both autostereoscopic techniques and holographic approaches are also covered. Some chapters discuss current research trends in 3DTV technology, while others address underlying topics. In addition to questions about technology, the book also addresses some of the consumer, social, and gender issues related to 3DTV. The 800-page book is expected to be available in early December. In hardcover, the book is priced at $269.00/€199.95/£154.00. http://www.springer.com

Philips introduces WOWzone 132-inch 3D display wall

In late August at IFA in Berlin, Philips introduced the 3D WOWzone, a large 132-inch multi-screen 3D wall designed to grab people’s attention with stunning 3D multimedia presentations. Philips claims that the out-of-screen 3D effects fascinate viewers and hold their attention longer than standard 2D images, thereby making 3D a valuable marketing tool. No glasses are needed to view the Philips 3D WOWzone, and it gives marketeers an element of surprise that leaves their target audience with an entertaining 3D multimedia experience. The Philips WOWzone multi-screen 3D wall consists of nine 42-inch Philips 3D displays in a 3x3 display set-up. A fully automated dual-mode feature allows the user to display 3D content as well as 2D high-definition content. Philips WOWzone is a complete end-to-end solution including 3D displays, mounting rig, media streamer computers, control software, and dedicated 3D content creation tools. The WOWzone is available today on a project basis and will be commercially available from Q1 2008 onwards.

Philips and eventIS demonstrate 3D video-on-demand feasibility

In early September, Philips and eventIS announced that they had successfully completed testing of 3D video-on-demand (VoD) using an eventIS metadata system and Philips 3D displays. This proves that the new 3D video format, based on “2D-plus-depth”, can be integrated into existing media distribution and management systems such as video-on-demand via cable, satellite, Internet, or terrestrial broadcasting. Earlier this year, Deutsche Telekom and Philips demonstrated interactive 3D applications like movies, home shopping, and online games. Now eventIS takes this a step further by demonstrating that 3D VoD capabilities can easily be implemented in its metadata media management system. According to the company, VoD will play an important role in the early distribution of high-quality 3D movies to the consumer. In the demo, eventIS makes use of a library that consists of 3D animated, stereoscopic, and 2D-to-3D converted videos. http://www.philips.com/newscenter
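The “2D-plus-depth” format pairs an ordinary 2D frame with a per-pixel depth map, from which a display synthesizes the second view. The source does not detail Philips’ renderer, so the following is only a minimal sketch of the general idea: naive forward warping of a grayscale frame, ignoring the occlusion handling and hole filling a real renderer needs. The depth convention (0 = far, 255 = near) and the max_disparity parameter are assumptions for illustration.

```python
import numpy as np

def render_stereo(frame, depth, max_disparity=8):
    """Synthesize a left/right view pair from a 2D frame plus depth map.

    frame: 2D grayscale array (H, W); depth: uint8 array (H, W), with an
    assumed convention of 0 = far, 255 = near. Nearer pixels receive larger
    horizontal shifts, in opposite directions for the two eyes.
    """
    h, w = frame.shape
    left = np.zeros_like(frame)    # unfilled positions remain 0 ("holes")
    right = np.zeros_like(frame)
    disp = (depth.astype(float) / 255.0 * max_disparity).round().astype(int)
    for y in range(h):
        for x in range(w):
            d = disp[y, x]
            xl, xr = x + d // 2, x - d // 2   # split the shift between eyes
            if 0 <= xl < w:
                left[y, xl] = frame[y, x]
            if 0 <= xr < w:
                right[y, xr] = frame[y, x]
    return left, right
```

With a flat depth map the two views are identical to the input frame; regions marked nearer are shifted apart, which the viewer’s visual system fuses into depth.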

On the left is the Philips 3D WOWzone 132-inch 3D display wall; the image on the right depicts 3D video-on-demand




iZ3D ships 22.0-inch 3D gaming monitor

In late August, San Diego-based iZ3D announced it is selling its iZ3D 22.0-inch widescreen 3D gaming monitor for $999 – and the company is specifically targeting the gaming market. iZ3D says the system works using custom software drivers, and the user must wear passive polarized glasses. iZ3D is a newly formed partnership between 3D imaging developer Neurok Optics and Taiwan’s Chi Mei Optoelectronics. The monitor itself offers a 1680x1050 pixel format, 5 ms response time, 170° viewing angle, 600:1 contrast, and dual DVI/VGA inputs designed to connect to a dual-output video card. The display ships with stereo drivers that are compatible with either the nVidia GeForce 8 series or ATI’s FireGL V3600 workstation graphics cards. The monitor incorporates two LCDs and can also be used for standard 2D computing tasks. http://iz3d.com

Fraunhofer Research demonstrates autostereoscopic Free2C display

Fraunhofer Research showed off its Free2C 3D Display at IFA in late August, claiming it to be “currently the most advanced development in autostereoscopic (glasses-free 3D) display technology”. Free2C is based on a special head-tracking lenticular-screen 3D display principle, allowing free head movements in three dimensions at a high level of image quality (the current resolution is 1600x1200 pixels). The particular design of the lens plate ensures that the stereoscopic images are almost perfectly separated (no ghosting). The Free2C-Desktop Display is perfectly suited for virtual prototyping, archaeology and oceanography, minimally invasive surgery, and lifelike simulations. The researchers claim that the viewer can be freely positioned without degradation to resolution, brightness, or color reproduction, all with “extremely low crosstalk”. http://www.hhi.fraunhofer.de

SD&A 2007 “Discussion Forum” now on-line

At the 2007 SD&A event in San Jose, a panel discussion was conducted on the topic, “3D in the Home: How Close are We?” The discussion was moderated by Lenny Lipton (far left) from REAL D and included, from left to right, Brett Bryars from the USDC, Art Berman from Insight Media, Mark Fihn from Veritas et Visus, and Steven Smith from VREX. Transcripts of the forum are now online: http://www.stereoscopic.org/2007/forum.html




Hitachi shows off new stereoscopic vision display technology

In early August, Hitachi announced its development of a new “small-sized stereoscopic vision display technology”. Measuring 7.9 x 7.9 x 3.9 inches and weighing 2.2 pounds, the device uses an array of mirrors and projects a “synthetic image”. The device is reportedly similar in design to Hitachi’s larger “Transpost”. Hitachi hopes to implement the technology in locales such as schools, exhibitions, and museums. http://www.hitachi.co.jp

NTT develops tangible 3D technology

NTT Comware has developed “Tangible-3D”, a next-generation communication interface that captures motion in real time and reproduces the physical feel of three-dimensional video. The technology is an improved version of the glasses-free tangible 3D system that NTT Comware originally developed in 2005. A pair of cameras captures and processes data about an object; software processing of the captured images then allows tactile impressions to be transmitted back and forth between multiple users in real time. The system displays 3D images on its display without requiring special glasses and simultaneously translates those images into tactile impressions that the user can feel with a dedicated tangible interaction device. In this way it reproduces the physical feel of three-dimensional video at a remote location, letting viewers literally reach out and touch the person or object on the screen.

For instance, real-time motion capture of 3D images plus a tactile impression provides a virtual handshaking experience. To enable this experience, a pair of cameras captures the image of one user’s hand. The image is processed into a 3D image, and the tactile-impression data is extracted. The data is then transmitted to the receiving end in real time. The captured image of the hand is displayed on the glasses-free 3D display; at the same time, the tactile information for touching a hand is relayed to the recipient’s hand through the tactile device, so the recipient actually feels the on-screen image as it moves – as in shaking hands.

While the demonstration Tangible-3D system works only in one direction on a one-on-one basis for now, NTT Comware is developing a two-way system that allows tactile impressions to be transmitted back and forth between multiple users. The company is also working to improve the 3D screen for multiple-angle viewing; it currently appears three-dimensional only from a particular viewing angle. The technology will also allow museum visitors to handle 3D images of exhibit items such as fossils. If the technology is applied to a remote classroom for making ceramics, for example, students can obtain perceptible information about a work, such as its real shape, while the teacher shows the 3D image on the screen. The technology also enables interactive communication for video conferences. http://www.nttcom.co.jp




Novint Technologies brings out game titles for Falcon

Novint Technologies announced a diverse lineup of upcoming titles for the Novint Falcon game controller. The company is adding its patented 3D touch technology to a variety of existing titles, including Feelin’ It: Arctic Stud Poker Run. Novint is also creating original titles designed specifically for 3D touch and will begin releasing all titles later this year. The Novint Falcon is a first-of-a-kind game controller that lets people feel weight, shape, texture, dimension, dynamics, and force feedback when playing enabled games. New releases will range in price from $4.95 to $29.95 and will be available for download through Novint’s N VeNT player. http://www.novint.com

Mova and Gentle Giant Studios show first moving 3D sculpture of live performance

Performance capture studio Mova and Gentle Giant Studios unveiled at SIGGRAPH a 3D Zoetrope that uses persistence of vision to bring to life a series of 3D models of an actor’s face captured live by Mova’s Contour Reality Capture System. This 3D Zoetrope is the first to show a live-action, natural 3D surface in motion. The resulting effect is a physical sculpture of a speaking human face that comes to life with perfect motion, faithful to the original actor’s performance down to a fraction of a millimeter. The Zoetrope displayed at SIGGRAPH consisted of thirty 3D models of a face in motion. The models spin on a wheel, and a strobe light illuminates each as it passes by a viewing window, much as still frames projected intermittently are perceived as a moving image. To the viewer, it looks like one 3D face in continuous motion. Mova used the Contour Reality Capture System to capture the live performance of an actor using an array of cameras with shutters synchronized to lights flashing over 90 times per second, beyond the threshold of human perception. The glow from phosphorescent (“glow in the dark”) makeup sponged onto the actor is captured by the camera array. Triangulation and frame-by-frame tracking of the 3D geometry is then used to produce over 100,000 polygons to create a 3D face, to an accuracy of a fraction of a millimeter. Gentle Giant Studios used the captured 3D surface geometry and formed 30 individual models with the help of a 3D stereolithography printer, which creates the models using a plastic resin. http://www.mova.com

Mova’s 3D Zoetrope uses 30 3D models that result in a very lifelike full-motion facial representation

Image Metrics performance capture system to model Richard Burton

Image Metrics announced at the SIGGRAPH tradeshow in San Diego that its proprietary performance capture solution will provide the modeling and animation for a photo-realistic 11-foot 3D hologram of the late Richard Burton, for the Live on Stage! production of the multi-award-winning, 15-million-selling album, Jeff Wayne’s Musical Version of The War of The Worlds. Image Metrics’ technology analyzes the motion data captured in any video recording of an actor. It removes the slow process of animation by hand required by other motion capture programs and eliminates the need for expensive motion capture camera systems. Image Metrics is completing a total of 23 minutes of photo-real facial animation perfectly synchronized to the original audio recording of the star. Image Metrics contributed 72 shots developed by a team of five artists. http://www.image-metrics.com




DAVID-Laserscanner brings out freeware for 3D laser scanning

DAVID-Laserscanner is freeware software for 3D laser range scanning, to be used with a PC, a camera (e.g. a webcam), a background corner, and a laser that projects a line onto the object to be scanned. The concept of DAVID was developed by the computer scientists Dr. Simon Winkelbach, Sven Molkenstruck, and Prof. F. M. Wahl at the Institute for Robotics and Process Control, Technical University of Braunschweig, Germany, and was published as a paper at the German Association for Pattern Recognition. The object to be scanned has to be placed in front of a known background geometry (e.g. in the corner of a room, or in front of two planes meeting at an exact angle of 90°) with the camera pointed towards the object. The laser is held freely in the hand, “brushing” the laser line over the object, while the computer automatically calculates 3D coordinates of the scanned object surface. To obtain a complete 360-degree model of the 3D object, the company has developed DAVID-Shapefusion, which automatically “puzzles” together the laser scans made from different sides. http://www.david-laserscanner.com/
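The known background geometry is what makes this kind of freehand scanning possible: where the laser line falls on the two calibrated planes, it reveals the pose of the laser plane, and each lit pixel on the object can then be triangulated by intersecting its camera ray with that plane. A minimal sketch of the final intersection step (pinhole camera at the origin; the intrinsics and plane values are illustrative, not DAVID’s):

```python
import numpy as np

def pixel_ray(u, v, fx, fy, cx, cy):
    """Unit viewing ray through pixel (u, v) for a pinhole camera at the origin."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

def intersect_laser_plane(ray_dir, n, d):
    """Intersect a ray from the camera center with the laser plane n . x = d."""
    denom = n @ ray_dir
    if abs(denom) < 1e-9:
        return None                  # ray (nearly) parallel to the laser plane
    return (d / denom) * ray_dir     # 3D point on the object surface

# Illustrative values: a 640x480 camera and a laser plane 2 units away.
ray = pixel_ray(320, 240, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
point = intersect_laser_plane(ray, n=np.array([0.0, 0.0, 1.0]), d=2.0)
```

Sweeping the laser over the object repeats this intersection for every lit pixel in every frame, accumulating a point cloud of the surface.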

Breuckmann launches 3D digital scanning technique

Breuckmann has launched smartSCAN, developed especially for high-performance digitization in engineering, education, art, and cultural heritage – everywhere the emphasis is on creating reliable and accurate data. Thanks to its new, compact design, the smartSCAN is easy to handle. SmartSCAN is available as a mono or stereo system: the setup can be configured with either one or two color cameras and one projection unit, allowing the system to cover this wide range of applications. http://www.breuckmann.com

Fraunhofer develops projector tiled wall system

Developing a cluster-based VR application is normally an extremely complex process involving distributed rendering, device management, and state synchronization. Fraunhofer of Germany has dramatically simplified this process by using X3D as the VR/AR application description language while hiding all the low-level distribution mechanisms. It provides a new cluster application deployment solution that allows X3D content to run on computer clusters without changes. The application developer can build high-level interactive applications using the full Immersive profile of the ISO standard, including PointingSensors and Scripting. The system has been demonstrated and deployed on the 48-projector, 18-million-pixel tiled display called HEyeWall. The HEyeWall offers unmatched visual resolution compared to standard projection systems, enabling the visualization of brilliant pictures and stereoscopic 3D models. The market-ready display system was developed by researchers at Fraunhofer IGD. Until now, the attentive eye could see single pixels, blurred shapes, and colors when standing close to a projection wall; the HEyeWall allows examination of the projected image from any position. Various fields of application follow from this: from efficient product development to the simulation of heavy traffic flow, the visualization of highly structured 3D area and city models, and the specific planning of rescue operations. A free beta version of the cluster-client/server solution is available from http://www.instantreality.org/home/.




NRL scientists viewing STEREO images using EVL’s ImmersaDesk4 technology<br />

Solar physicists at <strong>the</strong> Naval Research Labora<strong>to</strong>ry (NRL) <strong>are</strong> viewing solar disturbances whose depth and violent<br />

nature <strong>are</strong> now clearly visible in <strong>the</strong> first true stereoscopic images ever captured <strong>of</strong> <strong>the</strong> Sun. These views from <strong>the</strong><br />

STEREO program <strong>are</strong> providing scientists with unprecedented insight in<strong>to</strong> solar physics and <strong>the</strong> violent solar<br />

wea<strong>the</strong>r events that can bombard Earth’s magne<strong>to</strong>sphere with particles and affect systems ranging from wea<strong>the</strong>r <strong>to</strong><br />

our electrical grids. NRL scientists <strong>are</strong> viewing <strong>the</strong> high-resolution stereo pairs on an ImmersaDesk4 (I-Desk4)<br />

display system specifically commissioned and installed at <strong>the</strong> labora<strong>to</strong>ry last<br />

summer in anticipation <strong>of</strong> <strong>the</strong> release <strong>of</strong> <strong>the</strong> data. The I-Desk4, invented at <strong>the</strong><br />

University <strong>of</strong> Illinois at Chicago’s (UIC) Electronic Visualization Labora<strong>to</strong>ry<br />

(EVL), is a tracked, 4-million-pixel display system driven by a 64-bit graphics<br />

workstation. Its compact workstation design is comprised <strong>of</strong> two 30-inch<br />

Apple LCD moni<strong>to</strong>rs mounted with quarter-wave plates and bisected by a<br />

half-silvered mirror enabling circular polarization. Multiple users can view <strong>the</strong><br />

head-tracked 3D scene using lightweight polarized glasses. The Solar Physics<br />

Branch at NRL developed <strong>the</strong> SECCHI (Sun Earth Connection Coronal and<br />

Heliospheric Investigation) suite <strong>of</strong> telescopes for <strong>the</strong> spacecraft. The highresolution<br />

sensor suite includes coronagraphs, wide-angle cameras and an<br />

Extreme Ultraviol<strong>et</strong> Imager. The sensors generate 10 synchronized video<br />

feeds, each up <strong>to</strong> 2K by 2K pixels. In summer 2006, EVL student Cole<br />

Krumbholz worked with NRL solar physicist Dr. Angelos Vourlidas <strong>to</strong> help<br />

establish a solar imagery display environment at NRL. Krumbholz helped<br />

build two EVL-developed display systems capable <strong>of</strong> viewing and managing<br />

files on <strong>the</strong> scale <strong>of</strong> thousands <strong>of</strong> pixels per squ<strong>are</strong> inch: a nine-panel tiled<br />

LCD wall ideal for viewing high-resolution 2D imagery, and an I-Desk4 for<br />

Dr. Angelos Vourlidas <strong>of</strong> <strong>the</strong><br />

NRL's Solar Physics Branch<br />

views stereoscopic solar images<br />

taken with <strong>the</strong> EUV telescopes <strong>of</strong><br />

NRL's SECCHI instrument suite<br />

viewing high-resolution 3D imagery. The tiled wall is capable <strong>of</strong><br />

synchronously displaying multiple high-resolution video streams. NRL<br />

scientists can also composite <strong>the</strong> sensor data in<strong>to</strong> a single video <strong>to</strong> conduct a<br />

multi-spectral analysis, and view multiple days <strong>of</strong> video. Krumbholz<br />

implemented a distributed video-rendering <strong>to</strong>ol with interactive features such<br />

as pan, zoom and crop. http://www.evl.uic.edu<br />

Anteryon develops new display screen optics for Zecotek 2D-3D system

Anteryon of the Netherlands announced the launch of a new display screen optics product developed for the Zecotek 2D-3D display system. Zecotek is a Vancouver, Canada-based company with facilities in Vancouver and Singapore, where the Anteryon display screen will be assembled into the Zecotek display system. The initial application focus for this Zecotek product lies in the field of biomedical imaging. http://www.anteryon.com

Ramboll launches free-floating video at Copenhagen Airport

Ramboll of Denmark launched the Cheoptics360 XL at Copenhagen Airport, where it is on display until October 4. Cheoptics360 XL displays free-floating 3D video, opening up a whole new universe of possibilities to those seeking innovative and persuasive methods to present their products. Cheoptics360 XL is suitable as a stand-alone installation to be viewed from all angles, and it can also be integrated into all kinds of buildings, structures, or environments. Presentations can be viewed on Cheoptics displays ranging in size from 1.5 meters wide up to 10 meters wide, allowing displays of both small and large objects. http://www.3dscreen.ramboll.dk




University of Tokyo researchers develop TWISTER

A research team from the University of Tokyo has developed a rotating panoramic display that immerses viewers in a 3D video environment. The Telexistence Wide-angle Immersive STEReoscope, or TWISTER, is the world’s first full-color 360-degree 3D display that does not require viewers to wear special glasses. The researchers have spent over 10 years researching and developing the device. Inside the 4-foot by 6.5-foot cylindrical display are 50,000 LEDs arranged in columns. As the display rotates around the observer’s head at a speed of 1.6 revolutions per second, these specially arranged LED columns show a slightly different image to each of the observer’s eyes, thus creating the illusion of a 3D image. In other words, TWISTER tricks the eye by exploiting “binocular parallax”. For now, TWISTER is capable of serving up pre-recorded 3D video from a computer, allowing viewers to experience things like virtual amusement park rides or close-up views of molecular models. However, the researchers are working to develop TWISTER’s 3D videophone capabilities by equipping it with a camera system that can capture real-time three-dimensional images of the person inside, which can then be sent to another TWISTER via fiber optics. In this way, two people separated by physical distance will be able to step into their TWISTERs to enjoy real-time 3D virtual interaction. http://www.star.t.u-tokyo.ac.jp/projects/TWISTER/

MIT researchers develop 3D microscope that generates video images

MIT researchers designed a microscope for generating three-dimensional movies of live cells. The microscope, which works like a cellular CT scanner, will let scientists watch how cells behave in real time at a greater level of detail. This new device overcomes a trade-off between resolution and live action that has hindered researchers’ ability to examine cells and could lead to new methods for screening drugs. Cells can’t be examined under a traditional microscope because they don’t absorb very much visible light. So the MIT microscope relies on another optical property of cells: how they refract light. As light passes through a cell, its direction and wavelength shift. Different parts of the cell refract light in different ways, so the MIT microscope can show the parts in all their detail. The microscope creates three-dimensional images by combining many pictures of a cell taken from several different angles. It currently takes only a tenth of a second to generate each three-dimensional image, fast enough to watch cells respond in real time. This processing technique, called tomography, is also used for medical imaging in CT scans, which combine X-ray images taken from many different angles to create three-dimensional images of the body. http://web.mit.edu/newsoffice/2007/cells-0812.html

This image <strong>of</strong> a live, one millim<strong>et</strong>er-long worm taken with a new 3D microscope clearly shows internal structures including<br />

<strong>the</strong> digestive system. The worm’s mouth is at <strong>the</strong> left and <strong>the</strong> thick red band is <strong>the</strong> worm’s pharynx.<br />
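The tomographic principle the article describes can be sketched with a toy unfiltered back-projection: project a synthetic object at many angles, then smear each 1-D projection back across the image plane and sum. The phantom, angle count, and nearest-neighbor binning below are all choices made here for illustration; real CT and cell tomography use filtered back-projection or iterative reconstruction.

```python
import numpy as np

# Toy unfiltered back-projection: many 1-D projections at different angles
# are smeared back across the plane and summed, recovering a blurred image
# whose brightest point sits at the object.
N = 64
phantom = np.zeros((N, N))
phantom[28:36, 40:48] = 1.0              # a small bright square "organelle"

ys, xs = np.mgrid[0:N, 0:N]
xc, yc = xs - N / 2, ys - N / 2          # centered pixel coordinates

angles = np.linspace(0, np.pi, 60, endpoint=False)
recon = np.zeros_like(phantom)
for theta in angles:
    # signed distance of each pixel from the detector axis at this angle
    t = xc * np.cos(theta) + yc * np.sin(theta)
    bins = np.clip(np.round(t + N / 2).astype(int), 0, N - 1)
    # forward-project: sum phantom values falling into each detector bin
    proj = np.bincount(bins.ravel(), weights=phantom.ravel(), minlength=N)
    recon += proj[bins]                  # smear the projection back

recon /= len(angles)
peak = np.unravel_index(np.argmax(recon), recon.shape)
print(peak)                              # lands inside the bright square
```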

http://www.veritasetvisus.com

Veritas et Visus 3rd Dimension September 2007

DNP and Sony commence production of hologram technology to counter fake merchandise

Dai Nippon Printing and Sony PCL announced in July the start of made-to-order production of a new Lippmann hologram for security uses, which is capable of storing dynamic picture images, including animation and live-action created with stereogram technology. The newly developed hologram has the capacity to store in excess of 100 image frames on a single hologram, and because it is extremely difficult to counterfeit, is effective in helping to discriminate between genuine and counterfeit goods via uses including certification seals on genuine products.

Live-action film as viewed via the newly developed hologram. By changing the viewing angle, the images appear to continually change.

Unlike existing mainstream embossed holograms, which record images in physical relief on the surface of the material, the newly developed hologram is a Lippmann hologram, which stores images by recording laser-generated interference patterns in photo-sensitive layers. Lippmann holograms are extremely difficult to counterfeit: the photo-sensitive materials used are difficult to obtain, the holograms are capable of producing unique image expressions not possible with other hologram formats, and they require specialized manufacturing technology. DNP and Sony PCL have made it even more difficult to illicitly reproduce the holograms by providing them with the capacity to record in excess of 100 image frames on a single hologram via the unique application of line-order recording technology, which has made it possible to record dynamic images, including flying logos and animation. Each parallax image displayed on the LCD is horizontally compressed into a vertical slit by a cylindrical lens. The beam of the slit and the reference beam form an interference pattern, which is then recorded on the photo-sensitive material on the glass substrate. The hundreds of vertical slits, placed sequentially side-by-side, form the larger hologram. The new hologram has undergone approximately 18 months of field tests, and DNP and Sony PCL have moved into full-scale operations after successfully confirming the effectiveness of the new hologram as an anti-counterfeiting measure. http://www.dnp.co.jp/international/holo/index.html

New Carl Zeiss stereo microscope claims greatest FOV, zoom, resolution

Carl Zeiss MicroImaging Inc. introduced the SteREO Discovery.V20 stereo microscope, which claims the industry’s largest field of view (23 mm at 10x), highest zoom range (20:1), and greatest resolution, all combined in one stereo microscope to allow visualization of large samples and their fine details without changing objectives or eyepieces. The new tool also promises a substantially greater depth of field than other stereo microscopes, allowing users to view and measure well-resolved object details with greater ease and accuracy. Stepper-motor control enables continuous increases in magnification with precise zoom levels to create a well-defined, high-contrast image throughout the zoom range. The System Control Panel (SyCoP) puts all major microscope functions at the user’s fingertips, allowing fast changeover between zoom, focus, and illumination functions, and displays total magnification, object field, resolution, depth of field, and Z position. Carl Zeiss has designed the system to meet the ergonomic demands of users working for hours at a time. The SteREO Discovery.V20 is fully integrated into Zeiss’ modular SteREO Discovery system and compatible with all SteREO components. The microscope can be combined with the AxioCam digital camera and the AxioVision image analysis and evaluation software for a powerful, complete image recording and analysis system. http://www.zeiss.com.au


Thomas Jefferson University Hospital software creates 3D view of the brain

Researchers at Thomas Jefferson University Hospital in Philadelphia have developed software that integrates data from multiple imaging technologies to create an interactive 3D map of the brain. The enhanced visualization gives neurosurgeons a much clearer picture of the spatial relationship of a patient’s brain structures than is possible with any single imaging method. In doing so, it could serve as an advanced guide for surgical procedures, such as brain-tumor removal and epilepsy surgery. The new imaging software collates data from different types of brain-imaging methods, including conventional magnetic resonance imaging (MRI), functional MRI (fMRI), and diffusion-tensor imaging (DTI). The MRI gives details on the anatomy, fMRI provides information on the activated areas of the brain, and DTI provides images of the network of nerve fibers connecting different brain areas. The fusion of these different images produces a 3D display that surgeons can manipulate: they can navigate through the images at different orientations, virtually slice the brain in different sections, and zoom in on specific sections. With the new software, surgeons are able to see the depth of the fibers going inside the tumor, shown as dashed lines, and the proximity of those on the outside, shown as solid lines. The lines are color-coded by depth, ranging from dark red (the deepest) to dark blue (the shallowest); the scale on the left side of the accompanying images reflects this coding. http://www.jeffersonhospital.org
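The depth color coding described above can be sketched as a simple linear colormap. The endpoint RGB values and the linear interpolation are illustrative guesses, not the software’s actual palette:

```python
# Illustrative depth-to-color mapping: deepest fibers -> dark red,
# shallowest -> dark blue, as the article describes. The specific RGB
# endpoints and linear blend are assumptions made for this sketch.
def depth_to_rgb(depth, d_min, d_max):
    """Map a depth value to an RGB tuple: d_max -> dark red, d_min -> dark blue."""
    t = (depth - d_min) / (d_max - d_min)   # 0 = shallowest, 1 = deepest
    dark_blue = (0.0, 0.0, 0.5)
    dark_red = (0.5, 0.0, 0.0)
    return tuple((1 - t) * b + t * r for b, r in zip(dark_blue, dark_red))

print(depth_to_rgb(0.0, 0.0, 10.0))   # shallowest -> (0.0, 0.0, 0.5)
print(depth_to_rgb(10.0, 0.0, 10.0))  # deepest -> (0.5, 0.0, 0.0)
```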

AIST improves 3D projector

In 1926, Kenjiro Takayanagi, known as the “father of Japanese television,” transmitted the image of a katakana character (イ) to a TV receiver built with a cathode ray tube, signaling the birth of the world’s first all-electronic television. In early August, in a symbolic gesture over 80 years later, researchers from Japan’s National Institute of Advanced Industrial Science and Technology (AIST), Burton Inc., and Hamamatsu Photonics K.K. displayed the same katakana character using a 3D projector that generates moving images in mid-air. The 3D projector, which was first unveiled in February 2006 but has seen some recent improvements, uses focused laser beams to create flashpoint “pixels” in mid-air. The pixels are generated as the focused lasers heat the oxygen and nitrogen molecules floating in the air, causing them to spark in a phenomenon known as plasma emission. By rapidly moving these flashpoints in a controlled fashion, the projector creates a three-dimensional image that appears to float in empty space. The projector’s recent upgrades include an improved 3D scanning system that boosts laser accuracy, as well as a system of high-intensity solid-state femtosecond lasers recently developed by Hamamatsu Photonics. The new lasers, which unleash 100-billion-watt (0.1-terawatt peak output) pulses of light lasting 100 femtoseconds each, improve image smoothness and boost the resolution to 1,000 pixels per second. In addition, image brightness and contrast can be controlled by regulating the number of pulses fired at each point in space. http://www.aist.go.jp
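As a sanity check on the laser figures quoted above, peak power times pulse duration gives the energy delivered per pulse:

```python
# Quick check of the laser figures in the article:
# energy per pulse = peak power x pulse duration.
peak_power_w = 0.1e12        # 0.1 terawatt peak output, as stated
pulse_duration_s = 100e-15   # 100 femtoseconds, as stated

energy_per_pulse_j = peak_power_w * pulse_duration_s
print(f"{energy_per_pulse_j * 1e3:.0f} mJ per pulse")  # 10 mJ
```

A 0.1 TW peak thus corresponds to a modest 10 mJ per pulse; it is the extreme compression in time, not the total energy, that produces the plasma flashpoints.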


“Harry Potter and the Order of the Phoenix”: IMAX 3D shatters box office records

IMAX Corporation and Warner Bros. Pictures announced that “Harry Potter and the Order of the Phoenix” shattered virtually every opening box office record at IMAX theatres during its debut, contributing $7.3 million of the $140 million that the film grossed at the domestic box office from July 11 through July 15. The picture also broke the record for IMAX’s largest single-day worldwide total at $1.9 million and posted a domestic opening per-screen average of $80,500. “Harry Potter and the Order of the Phoenix” opened on 91 domestic IMAX screens and 35 international IMAX screens, making it the largest opening in IMAX’s 40-year history, with a record-smashing worldwide estimated total of $9.4 million. The film’s overall worldwide debut total was an estimated $333 million. Through its 7th week, the film earned more than $24 million on 91 IMAX screens domestically and more than $11 million on 52 IMAX screens internationally. The worldwide IMAX total is now more than $35 million with an impressive per-screen average of $243,000, making it the highest-grossing live-action Hollywood IMAX release. http://www.imax.com

“Beowulf” to be available in Dolby 3D Digital Cinema…

Dolby Laboratories announced in early August that Paramount Pictures’ “Beowulf”, scheduled for release on November 16, will be made available to select exhibitors who have installed Dolby 3D Digital Cinema technology by the film’s release date. Dolby claims that its 3D Digital Cinema provides exhibitors and distributors an efficient and cost-effective 3D solution. The ability to utilize a standard white screen gives exhibitors a cost advantage, as no special “silver screen” is required. The ease of shifting the Dolby 3D Digital Cinema system from 3D to 2D and back, as well as moving the 3D film between auditoriums of different sizes, retains the flexibility exhibitors have come to expect. Dolby 3D Digital Cinema uses a unique color filter technology that provides very realistic color reproduction with extremely sharp images, delivering a great 3D experience to every seat in the house. http://www.dolby.com

…also in REAL D and IMAX 3D

In addition to the Dolby 3D Digital Cinema screens, “Beowulf” will also be presented on both REAL D and IMAX 3D platforms, which cater to audiences willing to pay a premium price for a premium, or multi-dimensional, viewing experience. “Beowulf” is a digitally enhanced live-action film using the same motion-capture technology seen in “The Polar Express”. Until now, IMAX and Paramount hadn’t released a film together since IMAX began remastering commercial films into the large format in 2002. In total, it’s expected that “Beowulf” will show in 3D on well over 1000 screens worldwide at its release.


“Monsters vs. Aliens” in 3D to hit theatres on May 15, 2009

DreamWorks Animation’s “Monsters vs. Aliens” is slated for domestic release May 15, 2009, a week earlier than previously announced. “Monsters vs. Aliens”, now confirmed as the official title, will be the first DreamWorks Animation film produced in stereoscopic 3D. It is described as a reinvention of the classic 1950s monster movie into an irreverent modern-day action comedy. Directed by Conrad Vernon and Rob Letterman, the film is in production and will be distributed domestically by Paramount Pictures. May 2009 is shaping up to be a crowded month for 3D releases. James Cameron’s 3D stereoscopic film “Avatar” is slated for May 22, which was the planned release date for “Monsters vs. Aliens”. With two anticipated stereoscopic films set to debut during the frame, the digital-cinema community is watching this release window. Real D has advised that it is on track to have 4,000 3D-ready digital-cinema screens installed in the US by May 2009, though that number might increase. Jeffrey Katzenberg, head of DreamWorks, has suggested that 6,000 screens need to be available for “Monsters vs. Aliens” to be the success the studio is hoping for. http://www.dreamworksanimation.com

“Sea Monsters: A Prehistoric Adventure” to open in IMAX 3D

National Geographic’s new giant-screen film “Sea Monsters: A Prehistoric Adventure” premieres worldwide in IMAX and other specialty theatres on October 5th. The movie brings to life the extraordinary marine reptiles of the dinosaur age on the world’s biggest screens in both 3D and 2D. The film, narrated by Tony Award-winning actor Liev Schreiber and with an original score by longtime musical collaborators Richard Evans, David Rhodes and Peter Gabriel, takes audiences on a journey into the relatively unexplored world of the “other dinosaurs”, those reptiles that lived beneath the water. Funded in part through a grant from the National Science Foundation, the film delivers to the giant screen the fascinating science behind what we know, and a vision of history’s grandest ocean creatures. http://www.imax.com

The film follows a family of Dolichorhynchops, also known informally as “Dollies”, as they traverse ancient waters populated with saber-toothed fish, prehistoric sharks and giant squid. On their journey the Dollies encounter other extraordinary sea creatures: lizard-like reptiles called Platecarpus that swallowed their prey whole like snakes; Styxosaurus with necks nearly 20 feet long and paddle-like fins as large as an adult human; and at the top of the food chain, the monstrous Tylosaurus, a predator with no enemies.


3D Entertainment completes photography on “Dolphins and Whales 3-D: Tribes of the Ocean”

3D Entertainment Ltd. announced the successful completion of principal photography on its upcoming feature, “Dolphins and Whales 3-D: Tribes of the Ocean”. This breathtaking new documentary will make its US debut on IMAX 3-D screens in February 2008 before expanding into Europe, and will be released in collaboration with the United Nations Environment Program and its North American office, RONA, based in Washington D.C. “Dolphins and Whales 3-D: Tribes of the Ocean” is currently in post-production and will be completed by late November. Principal photography began in June 2004 in Polynesia, and an extensive three years were required to capture the necessary footage. Filming consisted of no fewer than 12 expeditions and 600 hours underwater at some of the remotest locations on Earth, including off the Pacific Ocean atolls of Moorea and Rurutu, Vava'u Island of the Kingdom of Tonga, Pico Island in the Azores archipelago and the Bay of Islands in New Zealand. Following “Ocean Wonderland” (2003) and “Sharks 3-D” (2005), “Dolphins and Whales 3-D: Tribes of the Ocean” marks the final chapter in a unique trilogy of ocean-themed documentaries that have proven immensely popular with audiences, grossing a combined $52.5 million at the box office. http://www.3defilms.com

Lightspeed Design and DeepSea Ventures announce completion of “DIVE!”

Lightspeed Design and DeepSea Ventures announce the completion of their digital 3D stereoscopic film, “DIVE! Manned Submersibles and The New Explorers”. Utilizing deep-ocean manned submersibles in the Pacific Ocean off the coast of Washington State, principal 3D photography for “DIVE!” was realized in late 2006 by stereoscopic filmmaker Lightspeed Design of Bellevue, Washington. In order to fit into the small, three-person submersible, Lightspeed custom-engineered an opto-mechanical dual camera rig for two Panasonic HVX-200 high-definition cameras. The advanced rig provides precise control of camera offsets, which are determined by 3D algorithms and Lightspeed’s proprietary live HD video streaming software. During the voyage two lost shipwrecks were discovered 1000 feet below the surface. Both were fishing vessels, one of Japanese origin and the other most likely American. The ships were located by DeepSea Ventures (DSV), a deep-ocean exploration company based in Spokane, Washington. The research vessel Valero IV - Seattle, and submersible experts Nuytco Research Ltd. of Vancouver, BC, supported the mission. “DIVE!” is a 22-minute high-definition 3D film that combines computer graphics and live action to literally take the audience along for the ride, as a unique expedition of “Citizen Explorers” voyages in submarines to the bottom of the ocean. “DIVE!” opened June 21 at MOSI in Tampa, Florida. http://www.lightspeeddesign.com
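The article notes that the rig’s camera offsets are set by 3D algorithms. One basic relation such algorithms rest on is that, for parallel cameras, on-screen disparity grows with focal length and camera separation and shrinks with subject distance. The numbers below are purely hypothetical and not from Lightspeed’s system:

```python
# Minimal stereo-geometry sketch: for two parallel cameras with focal
# length f and interaxial separation (baseline) B, a point at distance Z
# lands with sensor-plane disparity d = f * B / Z. Values are hypothetical.
def disparity_mm(focal_mm, baseline_mm, depth_mm):
    return focal_mm * baseline_mm / depth_mm

# Hypothetical setup: 35 mm focal length, 65 mm baseline, subject 2 m away.
print(f"{disparity_mm(35, 65, 2000):.2f} mm")  # 1.14 mm on the sensor
```

A rig controller would invert this relation, shrinking the baseline for near subjects to keep disparity within a comfortable viewing range.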

Kinepolis chooses Dolby for cinema conversion

The Kinepolis Group has selected the new Dolby 3D Digital Cinema technology to outfit 17 screens throughout Europe. Kinepolis recently opened its 23rd cinema multiplex, in Ostend, Belgium, and installed the first Dolby 3D system in Europe. The Belgium-based exhibitor plans to convert one screen per complex using the Dolby 3D system. http://www.dolby.com


Eclipse 3D Systems combines monochrome and color to produce 3D

Eclipse 3D Systems announced a new patent-pending technology for displaying 3D movies in theaters and homes. The Eclipse 3D technology promises to be less expensive and brighter than polarized projection, which some theaters have used to show 3D movies. The new technology is applicable to digital projectors and flat-panel displays, opening the possibility of distributing high-quality 3D through most of the major movie distribution channels, including movie theaters, DVD sales and rental, and digital TV. The Eclipse 3D technology combines a monochrome image with a full-color image to produce full-color 3D. The 3D images can be viewed with Eclipse colored filter glasses. The images can be projected on any white screen or surface. Since a silver screen is not needed, the Eclipse 3D format is less expensive and more portable than the polarized format. Due to the properties of the human visual system, the monochrome image is perceived with a brightness gain of about four times while not contributing significantly to color vision. This process is similar to night vision, although the full-color image is perceived with normal brightness and color. Color perception comes almost entirely from the full-color image. The gain in brightness for the monochrome image means that little brightness is used in adding 3D to a display. As such, Eclipse 3D images are about 4x brighter than polarized alternatives. http://www.eclipse-3d.com

One of the most surprising aspects of the Eclipse 3D format is that full color perception can be obtained from only one eye. This 3D pair contains a red monochrome image and a full-color image, in which case the observed color in the 3D image is full-color. For even better color, put a red filter from a pair of red/cyan glasses over your left eye.
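The monochrome-plus-color composition can be sketched as follows. The Rec. 601 luminance weights and the grayscale (rather than red) rendering of the monochrome view are assumptions made here, since Eclipse’s actual encoding is proprietary:

```python
import numpy as np

# Sketch of an Eclipse-style stereo pair as the article describes it:
# one eye's view is reduced to a monochrome (luminance) image, the other
# keeps full color. Luminance weights are the standard Rec. 601 ones;
# the real Eclipse encoding is proprietary and not reproduced here.
def eclipse_pair(left_rgb, right_rgb):
    """left_rgb, right_rgb: float arrays of shape (H, W, 3) in [0, 1]."""
    luma = left_rgb @ np.array([0.299, 0.587, 0.114])   # (H, W) luminance
    mono_left = np.stack([luma] * 3, axis=-1)           # gray view for one eye
    return mono_left, right_rgb                          # other eye keeps color

left = np.random.rand(4, 4, 3)
right = np.random.rand(4, 4, 3)
mono, color = eclipse_pair(left, right)
# The monochrome view has equal R, G, B everywhere.
print(np.allclose(mono[..., 0], mono[..., 1]))  # True
```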

Pace and Quantel team on 3D post system

Vince Pace, who co-developed the Fusion 3D camera system with director James Cameron, has been working closely with manufacturer Quantel on the design of a 3D stereoscopic postproduction system. Quantel and Pace have presented private technology demonstrations of the developing system. Quantel’s Mark Horton estimated that there were about 100 visitors to Pace’s Burbank office, including directors, visual effects supervisors, postproduction execs and representatives from most of the major studios. Horton said the feedback was encouraging and that as a result, Quantel intends to release the toolset as a product. A shipping date has not been determined, but Horton said that it would be a new version release of Quantel’s Pablo digital intermediate/color grading system. Current Pablo customers would have the option to upgrade. The goal is to increase speed, reduce cost and add creative flexibility in 3D stereoscopic filmmaking. Quantel said the technology is being developed to enable creative post decisions to be made and viewed in 3D in real time. http://www.quantel.com

3ality Digital uses SCRATCH software in U2 film

“U2 3D”, the 3D feature film, was one of the highlights at the recent Cannes Film Festival, and 3ality Digital and ASSIMILATE teamed to bring the same experience to the IBC 07 audience. A three-song segment of “U2 3D”, hosted by Steve Schklair, founder and CEO of 3ality, was featured in IBC’s Big Screen Programme venue on September 9. 3ality’s stereoscopic 3D technology, coupled with ASSIMILATE’s SCRATCH real-time 3D data workflow and DI tool suite (from conform to finish), allows lead singer Bono to reach out toward the 3D camera and appear to be stepping into the theater. “U2 3D” is scheduled for release to theaters this year. http://www.3alityDigital.com

http://www.veritasetvisus.com


Veritas et Visus 3rd Dimension, September 2007

Kerner debuts 3D mobile cinemas

Kerner Mobile Technologies announced the debut of the first in its new line of Kerner 3D Mobile Cinemas at the California Speedway in greater Los Angeles on Labor Day weekend. Kerner Mobile's 30-foot 3D movie screen is set inside a 10,000 sq. ft. tented theater made by Tentnology. The "Opportunity, California FanZone" provided car racing fans with everything from music concerts to shopping. In the near future, Kerner says, its Mobile 3D Cinemas will surpass movie theaters through the development of special lighting effects, fog and surround sound: 3D sights, smells and a light breeze on the face, an immersive 3D experience. http://www.kernermobile.com

nWave releases first "true" 3D feature

nWave, based in Brussels and Los Angeles, has released "Fly Me To The Moon", loosely based on the Apollo 11 moon landing, with Buzz Aldrin appearing as himself, and featuring the crucial intervention of three flies, hence the play on words with the song title. It is the first true 3D feature film to be released, according to nWave CEO Ben Stassen. It will go to around 700 digital 3D cinemas and over 200 IMAX theaters. According to Stassen, the first 3D full-length film was the Robert Zemeckis film "Monster House". But he stresses that "Monster House" was not originally created in 3D; instead, it utilized software applied after filming in 2D. Disney's "Meet The Robinsons" was the second 3D release, which also used software applied after the fact, he says.

"Recent advances in computer technology make it possible to convert 2D films to 3D. However, while converted films like 'Chicken Little' and 'Monster House' will be crucial to spurring the development of digital 3D theaters, to fully utilize the potential of 3D cinema, you must design and produce a film differently than you would a 2D film," Stassen says. "It's a different medium. It involves more than just adding depth and perspective to a 2D image. There's a very strong physical component to authentic 3D."

He points out that there are very encouraging signs that Hollywood is starting to pay attention to the 3D revival spreading worldwide through the giant-screen theater network. He pinpoints the importance of "The Polar Express", which benefited from a great 3D IMAX version, generating over $40 million of the film's $283 million worldwide gross on only 64 screens. nWave Pictures is known as one of the most prolific producers of 3D films in the world. Founded in 1994 by Ben Stassen and Brussels-based D&D Media Group, nWave Pictures quickly established itself as the world's leading producer and distributor of ride films for the motion simulator market. The company's current library of titles makes up an estimated 60-70% of all ride-simulation films being shown worldwide. Core to the nWave operation is the idea that a computer graphics workstation is a mini Hollywood on a desktop: it can create a whole movie and, with high-speed Internet, even distribute it. Stassen points out that he uses only off-the-shelf software: Maya, Lightwave, and more recently Pixar's RenderMan. The difference is that he does not need hundreds of animation artists; only about 50 people are working on a production at any one time. http://www.flymetothemoonthemovie.com

As an interesting sidenote, the movie's website includes some anaglyph images to help showcase the characters; a viewing warning has been inserted next to an icon showing red/green glasses.


ANDXOR and The Light Millennium put forward human rights proposal to the UN

ANDXOR Corporation and The Light Millennium have submitted a joint proposal to the Department of Public Information/NGO Section & Planning Committee of the United Nations to create a stereoscopic movie on human rights. The two companies are offering to produce 40 minutes of ortho-stereoscopic footage to "allow a true three dimensional vision and an incredible immersive participation of the viewer". They will create a movie regarding human rights, "the basic rights and freedoms to which all humans are entitled (from liberty to children abuse, from freedom of expression to education) and in particular could include Darfur and Karabakh/Azerbaijan". The companies say that the footage will be very useful to UN campaigns in terms of viewers' support and awareness. The footage would be filmed with special stereoscopic cameras in digital full HD. The movie would be played at the UN/DPI-NGO 61st Annual Conference and afterwards in all the new stereo-ready theaters. The companies will also create a Blu-ray disc, distributed together with stereoscopic glasses, that can also be played on a standard television. http://www.andxor.com http://www.lightmillennium.org

Reallusion and DAZ 3D partner on real-time filmmaking 3D content

Reallusion, a software developer providing Hollywood-like 3D moviemaking tools for PCs and embedded devices, and DAZ 3D, a developer of 3D software and digital content creation tools, announced a strategic partnership to bring real-time filmmaking and 3D content to the masses. Thanks to this partnership, users will be able to import content created in DAZ Studio, or purchased from DAZ 3D's library of professional content, into iClone, Reallusion's real-time filmmaking engine, using Reallusion's recently released 3DXchange object conversion tool. The result will be a truly open filmmaking platform that will empower aspiring filmmakers of all stripes to, in the words of Reallusion's theme for SIGGRAPH 2007, "Go Real-Time" with "Movies, Models and Motion". Reallusion's 3DXchange supports most 3DS and OBJ files. It also loads existing props, accessories and 3D scenes from current iClone content, so users can customize an object's position, orientation, size, specularity, shadow and other attribute settings. Props, accessories and scenes can also be collected into massive libraries for both long- and short-form iClone film productions. DAZ Studio is a free software application that allows users to easily create digital art: users can load in people, animals, vehicles, buildings, props, and accessories to create digital scenes. http://www.reallusion.com

Belgian cinema chain opens with Barco projectors

Barco announced that its latest range of 2K digital cinema projectors has been installed in Kinepolis's newest cinema multiplex at Ostend, Belgium. Opening exactly one year after Kinepolis Brugge, the new multiplex is the 23rd in Europe for Belgium's number one cinema chain and houses eight state-of-the-art cinemas with a total of 1,755 seats. Kinepolis Oostende has been fitted with Barco's latest range of digital cinema projectors, the DP-3000 and DP-1500, which were launched at ShoWest in March this year. The DP-3000 is Barco's new flagship, and the brightest "large venue" digital cinema projector in the industry. Using Texas Instruments' 1.2-inch DLP Cinema chip, the DP-3000 is designed for screens up to 30 m (98 ft) wide and has a 2000:1 contrast ratio, new lenses, a new optical design and high-efficiency 6.5 kW lamps. The DP-1500 is Barco's new mid- and small-venue projector, designed for screens up to 15 m (49 ft) wide. It incorporates Texas Instruments' new 0.98-inch DLP Cinema chip, which offers the same pixel resolution (2048x1080) as its larger 1.2-inch counterpart, but whose smaller size offers significant advantages. http://www.barco.com


3D – lost in translation

We couldn't resist including this screen capture from a Korean website devoted to stereo imaging. Their online poll doesn't translate very well using Google's translator function. http://www.3dnshop.com

Hang Zhou World now selling 120 tri-lens stereo cameras

Hang Zhou 3D World Photographic Equipment Co., Ltd introduced their 120 tri-lens manual reflex stereo camera, the first one developed and made in China. Specifications include anti-reflection-coated glass optics with seven elements in six groups, f/2.8 maximum aperture, 80 mm focal length, and a lens separation of 63.5 mm; light metering via two SPDs (silicon photodiodes), with aperture and shutter speeds matched according to the LED display. The focusing screen consists of a split-image microprism surrounded by a Fresnel screen, with three LEDs indicating five exposure graduations. The camera uses one roll of 120 reversal film for a pair of 58 x 56 mm stereo images, six pairs per roll. The company employs about 100 people focused on the development of devices that promote stereo imaging. http://www.3dworld.cn
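Given the published lens separation (63.5 mm) and focal length (80 mm), the on-film parallax for a subject at a given distance can be estimated with the standard stereo-base approximation p ≈ f·B/d. A minimal sketch; the formula is textbook stereo photography, not a manufacturer specification, and the function name is ours:

```python
def on_film_parallax_mm(focal_mm: float, base_mm: float, distance_mm: float) -> float:
    """Approximate on-film parallax p = f * B / d for a subject far beyond the lenses."""
    return focal_mm * base_mm / distance_mm

# Using the camera's published specs: 80 mm focal length, 63.5 mm lens separation.
# A subject 5 m away lands about 1 mm apart on the two film frames.
p = on_film_parallax_mm(80.0, 63.5, 5000.0)
```

At typical portrait distances this disparity is a comfortable fraction of the 58 mm frame width, which is consistent with the fixed 63.5 mm base chosen for the camera.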

New IBM mainframe platform developed to support virtual worlds

The International Herald Tribune revealed that IBM is launching a new mainframe platform specifically designed for next-generation virtual worlds and 3D virtual environments. In concert with Brazilian game developer Hoplon, IBM will use the PlayStation 3's high-powered Cell processor to create a mainframe architecture that will provide the security, scalability and speed currently lacking in 3D environments, a lack that is one of the factors keeping them from becoming widely adopted.

USGS posts 3D photos of national parks

The US Geological Survey (USGS) has posted hundreds of highly detailed anaglyphic 3D photographs of national parks on the Web. http://3dparks.wr.usgs.gov/index.html

Anaglyph images from Arches National Park and Saguaro National Monument, released by USGS


StereoEye features huge collection of 3D photographic images

A Japanese stereo society has posted a large number of images, in numerous formats, to their website. Beware: visiting this website could consume a couple of hours… http://www.stereoeye.jp

The StereoEye website showcases hundreds of 3D photos in several formats, including these anaglyph images of the Tokyo Tower and a fireworks display in Tokyo Harbor.

3D image of the Moon captured by photographer from the Earth

It's intuitive to think that getting a stereo image of the Moon from Earth is not possible, but all it requires is two pictures taken from slightly different angles, which takes only a little patience. In this case, photographer Laurent Laveder used two pictures taken months apart, one in November 2006 and one in January 2007. He relied on the Moon's continuous libration (or wobble) as it orbits to produce two shifted images of a full moon, resulting in a compelling stereo view. http://www.pixheaven.net

The image on the left is an anaglyph of the Moon; the right image is a stereo pair intended for cross-eyed viewing
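A red-cyan anaglyph like the one described is simple to compose from any registered left/right pair: take the red channel from the left-eye image and the green/blue channels from the right. A minimal NumPy sketch, assuming 8-bit RGB arrays of identical shape (the array layout and function name are our assumptions, not details from the article):

```python
import numpy as np

def make_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Compose a red-cyan anaglyph from two registered RGB images (H x W x 3, uint8).

    The red channel comes from the left-eye view; green and blue from the right.
    """
    if left.shape != right.shape:
        raise ValueError("left and right images must be registered to the same size")
    out = right.copy()          # keep green and blue from the right image
    out[..., 0] = left[..., 0]  # replace red with the left image's red channel
    return out
```

Viewed through red/cyan glasses, each eye then sees only its own channel(s), recreating the parallax between the two exposures.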


NASA's STEREO reveals solar prominences

In late August, STEREO observed a gathering of solar prominences in profile as they twisted, stretched and floated just above the solar surface. Over about two and a half days (August 16-18, 2007), the prominences were seen in extreme ultraviolet light by the Ahead spacecraft. Prominences are clouds of cooler gases, controlled by powerful magnetic forces, that extend above the Sun's surface. In a video created by NASA, the careful observer can sometimes see the gases arcing out from one point and sliding above the surface to another. In the most interesting sequence, near the end of the clip, the upper prominence seems to arch away into space. Such sequences serve to show the dynamic nature of the Sun. STEREO (Solar TErrestrial RElations Observatory) is a two-year mission, launched in October 2006, that provides a unique view of the Sun-Earth system. The two nearly identical observatories, one ahead of Earth's orbit and the other behind, trace the flow of energy and matter from Sun to Earth. The image to the left is in anaglyph form. http://www.nasa.gov/mission_pages/stereo/main/index.html

Globe4D shows off four-dimensional globe

Globe4D is an interactive, four-dimensional globe. It is a projection of the Earth's surface onto a physical sphere that shows the historical movement of the continents as its main feature, but it is also capable of displaying all kinds of other geographical data, such as climate change, plant growth, radiation, rainfall, forest fires, seasons, airplane routes, and more. The user can interact with the globe in two ways: by rotating the sphere itself, and by turning a ring around the sphere. Rotating the sphere rotates the projected image along with the input movement, while turning the ring controls time as the globe's fourth dimension. Of course, Globe4D is not limited to the Earth alone: the Moon, the Sun, Mars and any other spherical object can be projected as well. Users can even go to the middle of the Earth by zooming in on the crust and peeling the Earth like an onion. http://www.globe4d.com

The Globe4D lets viewers see video images on the movable sphere, while time and other functions are controlled by turning the ring around the sphere.


Microsoft and NASA create several new Photosynths

Several new Photosynths were generated through a collaboration between NASA and Microsoft's Live Labs. They show different aspects of the Shuttle's lifecycle related to the Orbiter, Endeavour, Launch Pad, and Vehicle Assembly Building. The Photosynth process weaves hundreds of images together and allows viewers to pan and zoom among the images in a three-dimensional arrangement. http://labs.live.com

The image on the left shows the Photosynth from a distance. Users can zoom in on images to reveal high-resolution shots, such as the close-up of Endeavour on the right.

Google Earth adds Sky

In late August, Google announced the launch of Sky, a new feature that enables users of Google Earth to view the sky as seen from planet Earth. With Sky, users can now float through the skies via Google Earth. This easy-to-use tool enables all Earth users to view and navigate through 100 million individual stars and 200 million galaxies. High-resolution imagery and informative overlays create a unique playground for visualizing and learning about space. To access Sky, users need only click "Switch to Sky" from the "View" drop-down menu in Google Earth, or click the Sky button on the Google Earth toolbar. The interface and navigation are similar to those of standard Google Earth steering, including dragging, zooming, search, "My Places", and layer selection. As part of the new feature, Google is introducing seven informative layers that illustrate various celestial bodies and events, including Constellations, Backyard Astronomy, Hubble Space Telescope Imagery, Moon, Planets, Users Guide to the Galaxies, and Life of a Star. The announcement follows last month's inclusion of the NASA layer group in Google Earth, showcasing NASA's Earth exploration. The group has three main components: Astronaut Photography of Earth, Satellite Imagery, and Earth City Lights. Astronaut Photography of Earth showcases photographs of the Earth as seen from space from the early 1960s on, while Satellite Imagery highlights Earth images taken by NASA satellites over the years and Earth City Lights traces well-lit cities across the globe. The feature will be available on all Google Earth domains, in 13 languages. To access Sky in Google Earth, users need to download the newest version of Google Earth, available at http://earth.google.com.


Google Earth introduces flight simulator

In the new Google Earth 4.2 beta, there is a flight simulator mode that provides a fascinating 3D experience. To access it, hit the keyboard shortcut CTRL-ALT-A to get a dialog that lets you choose from two types of aircraft, an F-16 or an SR-22, and from one of several airports. When you're ready, select "Start Flight". You'll find controls for flaps, landing gear, trim, and more. The SR-22 is easier to fly for beginners. You get a head-up display (HUD) just like in a fighter jet, and the indicators tell you which direction you are moving, rate of climb, altitude, and other useful information that most flight simulator aficionados will understand. Some useful tips for using the new simulator are available at http://www.gearthblog.com.

Virtual Earth 3D adds new cities and greater detail

Digital Urban recently added a tutorial for creating very high-resolution cityscape panoramas with Virtual Earth. Several new cities have recently been launched in 3D, and along with the tutorial, the Digital Urban site includes videos, some obscure tips, and some great insights into the building of a realistic virtual world, such as the subtle tweaks made in modeling low, dense cities like Toulouse, shown below. http://www.digitalurban.blogspot.com

Numerous new VE3D images were recently added, including those of Montreal and Toulouse

Niagara Falls – impressive in Virtual Earth

Good digital elevation models, super-high-resolution aerial imagery and 3D modeling combine to create virtual worlds of amazing realism. The first image below is a static Bird's Eye view, and the second is a snapshot of the same part of the Horseshoe Falls at Niagara in interactive 3D. http://www.microsoft.com/virtualearth/


Georgia Institute <strong>of</strong> Technology and Micros<strong>of</strong>t Research develop 4D Cities<br />

Computer scientists from <strong>the</strong> Georgia Institute <strong>of</strong> Technology and Micros<strong>of</strong>t Research have developed 4D Cities, a<br />

s<strong>of</strong>tw<strong>are</strong> package that shows <strong>the</strong> evolution <strong>of</strong> a city over time, creating a virtual his<strong>to</strong>rical <strong>to</strong>ur. The s<strong>of</strong>tw<strong>are</strong> can<br />

au<strong>to</strong>matically sort a collection <strong>of</strong> his<strong>to</strong>rical city snapshots in<strong>to</strong> date order. It <strong>the</strong>n constructs an animated 3D model<br />

that shows how <strong>the</strong> city has changed over <strong>the</strong> years. The idea is <strong>to</strong> give architects, his<strong>to</strong>rians, <strong>to</strong>wn planners,<br />

environmentalists and <strong>the</strong> curious a new way <strong>to</strong> look at cities, says Frank Dellaert at <strong>the</strong> Georgia Institute <strong>of</strong><br />

Technology in Atlanta, who built <strong>the</strong> system with his<br />

colleague Grant Schindler and Sing Bing Kang <strong>of</strong><br />

Micros<strong>of</strong>t’s research lab in Redmond, Washing<strong>to</strong>n.<br />

To create a model <strong>of</strong> Atlanta, <strong>the</strong> researchers scanned<br />

in numerous his<strong>to</strong>rical pho<strong>to</strong>s <strong>of</strong> <strong>the</strong> city that had been<br />

snapped from similar vantage points. The s<strong>of</strong>tw<strong>are</strong> is<br />

designed <strong>to</strong> identify <strong>the</strong> 3D structures within <strong>the</strong><br />

image and break them down into a series of points. It then compares the view in each one to work out why some of these points are visible in some of the images but not others. Was the building simply out of shot? Or was the view of one building blocked by another? The software continually rearranges the order of the images taken from each vantage point until the visibility patterns of all the buildings are consistent. The result is that the images appear in time order, allowing the researchers to construct and animate a 3D graphic of the city through which users can travel backwards or forwards in time. The researchers plan to extend the system to create models of other cities, and to improve the software's ability to recognize whether different photos are showing exactly the same scene. This can be difficult, as some cityscapes change so profoundly. Here is how they introduce the project on the 4D Cities home page: "The research described here aims at building time-varying 3D models that can serve to pull together large collections of images pertaining to the appearance, evolution, and events surrounding one place or artifact over time, as exemplified by the 4D Cities project: the completely automatic construction of a 4D database showing the evolution over time of a single city." (www.cc.gatech.edu/~phlosoft)
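The ordering idea described above can be sketched in a few lines: treat each tracked point (say, a building facade) as a row of visibility bits, one per image, and search for the image ordering in which every point is visible over one contiguous stretch of time (built once, later demolished). This toy brute-force version only illustrates the principle and is not the researchers' actual algorithm; the point tracks and image sets are invented for the example.

```python
from itertools import permutations

def runs(bits):
    """Count contiguous runs of 1s in a visibility sequence."""
    return sum(1 for i, b in enumerate(bits) if b and (i == 0 or not bits[i - 1]))

def inconsistency(order, visibility):
    """Total excess runs: each point (building) should be visible in
    exactly one contiguous interval of the ordered image sequence."""
    return sum(max(0, runs([vis[i] for i in order]) - 1) for vis in visibility)

def order_images(visibility, n_images):
    """Brute-force the image ordering that makes every point's
    visibility pattern consistent (toy-scale stand-in for the
    4D Cities optimization)."""
    return min(permutations(range(n_images)),
               key=lambda order: inconsistency(order, visibility))

# Two buildings photographed in four shuffled images: building A
# appears only in images 0 and 2, building B only in images 1 and 3.
vis = [[1, 0, 1, 0],   # building A
       [0, 1, 0, 1]]   # building B
best = order_images(vis, 4)
```

In the recovered ordering, A's images are grouped before B's (or vice versa), so both visibility patterns become single contiguous intervals, which is what the software's "consistent visibility" criterion demands.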

AMRADNET takes over MedView

American Radiologist Network (AMRADNET) has acquired ViewTec's medical division. AMRADNET purchased ViewTec's medical business along with its MedView core product and technology. Through this acquisition, existing MedView users worldwide will now be serviced by AMRADNET. MedView, a DICOM-compatible, high-performance software product for digital medical imaging, was developed at the University of Zurich in cooperation with specialists from leading hospitals in Switzerland. http://www.viewtec.ch

GAF to distribute Intermap 3D data throughout Europe

Intermap Technologies Corp and GAF AG, an international geo-information technology company located in Munich, Germany, have signed an agreement allowing GAF to immediately begin distributing Intermap's high-resolution 3D digital elevation data and geometric images throughout Germany and the rest of Europe. GAF, a private-sector enterprise, is part of the Telespazio group of companies. The company was founded in 1985 and offers a broad range of geospatial applications, including geodata procurement, image processing, software development, and consulting services. http://www.Intermap.com

http://www.veritasetvisus.com 33

Veritas et Visus 3rd Dimension September 2007

Immersive Media continues expansion into commercial media

Immersive Media Corp. announced the first-ever use of 360-degree video for experiential marketing. In collaboration with adidas and TAOW Productions, IMC captured the premier sports event of this summer: David Beckham's first game with the Los Angeles Galaxy. This immersive video was launched by adidas on their website. Soccer fans can experience the 360-degree, full-motion video and look around in every direction as if they were behind the scenes. With regard to the city collection program, IMC is continuing to expand its GeoImmersive database, with additional cities being added in North America and Europe. The European expansion includes cities in England, Germany, France, Spain and Italy. The GeoImmersive imagery is being licensed to commercial and public organizations for promotional, planning and asset management purposes. To preview the GeoImmersive imagery, visit http://demos.immersivemedia.com/onlinecities

Above are images from a drive down the famed 6th Street in Austin, Texas. The images were captured at a pause in the video and show three different images from the 360° view available from the on-line demonstration.

ComputaMaps releases 3D urban models

ComputaMaps recently released 3D urban video models of several cities, including Toronto, Dubai, Baltimore, Washington DC, Hong Kong (Aberdeen), and Durban, South Africa. The company manufactures multi-resolution 3D databases ranging in detail from the entire globe down to photo-realistic urban environments. These data can be deployed in various applications such as interactive entertainment and broadcast weather and news graphics software. Animations of the cities, derived from QuickBird satellite imagery, are available at: http://www.computamaps.com/3d-visualization/3d-visualization.html

These images are captured from video animations of ComputaMaps flybys in Baltimore and Hong Kong


RabbitHoles acquires XYZ Imaging

After recently acquiring the hologram company XYZ Imaging, RabbitHoles announced that it is seeking partnerships with "boundary-breaking artists to create limited edition RabbitHole 3D Motion Art and to be pioneers of this new contemporary art medium". The company advertises that "for 3D artists, working in RabbitHoles is the first and only way for you to showcase in 3D on gallery walls and in the homes and work-places of visionary collectors. For 2D artists, RabbitHoles offers an invitation to experiment with what's next." RabbitHole 3D Motion Art is a reflective technology reliant on precisely placed halogen light to expose its full-color 3D artwork. This radical new art form springs from patented digital technology that instructs red, green and blue pulse lasers to expose a specially formulated film 300 times finer than ISO 300. The company is currently focusing on gaining exposure in selective, high-profile artistic contexts that will yield highly collectible limited-edition series and affirm the medium as revolutionary contemporary art. http://www.rabbitholes.com

Virtual Images Unlimited acquires Kodak lenticular technology and equipment

Virtual Images Unlimited, a division of IGH Solutions from Minnesota, announced that it has acquired the large-format lenticular manufacturing assets of Dynamic Images. The equipment purchased, which utilizes high-resolution photographic techniques to produce large-format lenticular in single-panel sizes up to 4 feet by 8 feet, was originally developed by Kodak. The technology produces 3D and animated effects. The former Dynamic Images purchased the technology from Kodak in 2001. The items purchased will allow VIU to continue selling movie standees, posters, and bus shelters into the entertainment industry and other key markets. VIU will locate the equipment at its parent company's facility in Minnesota. http://www.viu.com

3D Systems brings out a hard plastic for rapid prototyping

3D Systems Corporation, a provider of 3D modeling, rapid prototyping and manufacturing solutions, announced Accura Xtreme Plastic, a new material for stereolithography systems. This addition to the company's family of Accura materials facilitates the efficient design, development and manufacturing of products by enabling production of early prototypes with improved durability and functionality. Accura Xtreme Plastic is now available for beta testing by qualified customers. An extremely tough and versatile material, Accura Xtreme Plastic is designed for functional assemblies that demand durability. Due to its high elongation and moderate modulus, it is ideally suited for many rapid prototyping and rapid manufacturing applications. Its properties closely mimic those of molded ABS and polypropylene, which are major production plastics. Accura Xtreme Plastic also features lower viscosity and higher processing speeds than other materials in the marketplace, resulting in easy operation, fast part creation, and quick cleaning and finishing with less waste. http://www.3dsystems.com

HumanEyes Technologies demonstrates lenticular printing with UV printers

HumanEyes Technologies demonstrated its 3D and lenticular production on the newest UV flatbed inkjet printers at its partners' booths throughout Graph Expo, held in Chicago September 9-12. HumanEyes software allows printers to take advantage of the newest technology to produce high-quality lenticular and 3D effects on digital presses from HP, Gandinnovations, Fujifilm Graphic Systems and Océ. The latest hardware developments are making the production of lenticular easier, faster and of higher quality. New processes and materials have also considerably reduced the cost of specialty print production. The next generation of UV-curable-ink flatbed presses offers very low drop volume, resulting in high resolution closely comparable to that of photo-quality desktop inkjets and high-end plotters. Also, impressive geometric accuracy of drop placement ensures highly accurate printing. These features produce very high quality lenticular printing. http://www.humaneyes.com


National Graphics partners with Sports Image International

Sports Image International announced the launch of a new line of three-dimensional sports lenticular images featuring licensed classic images from Major League Baseball and the National Hockey League. By combining the latest technology in photo enhancement and National Graphics' lenticular imaging with memorable sports moments, SII has created a new class of collectible that is unique to the sports memorabilia market. These collectibles are now available online at http://www.sportsimageintl.com, where sports fans can experience and purchase the three-dimensional lenticular images. They are also available at both Yankee and Shea Stadium gift shops. The images are produced by National Graphics, a pioneer in lenticular imaging, which claims to provide the highest quality lithographic lenticular products in the world. http://www.extremevision.com

3D Center of Art and Photography to exhibit French graffiti art

The 3D Center of Art and Photography of Portland, Oregon will exhibit "Urban Spaces" from September 13 through October 28. "Urban Spaces" is an exhibition of 11 stereoscopic images from the series "Kunstfabrik" (2000-2007) by Ekkehart Rautenstrauch of Nantes, France. Not far from the center of Nantes stands an old deserted foundry, out of service since the 1980s. Since 2000 Rautenstrauch has been documenting the transformation of the space by a host of taggers, graffiti artists and others wishing to leave their mark on the space. As the artist describes, "I quite steadily followed the pictural transformation, the continuous fadedness and every new expression testifying to an alive and constant creation. In my own artistic work the freedom of gesture, the rhythm of bodies, the writing extended into space, have always been essential elements." Rautenstrauch's works are presented in specially made folding viewers called "folioscopes" (designed by Sylvain Arnoux) which hang on the wall at eye level, allowing the viewer to view the stereoscopic images. http://www.3dcenter.us

University of Weimar develops unsynchronized 4D barcodes

Researchers from the University of Weimar recently developed a novel technique for optical data transfer between public displays and mobile devices based on unsynchronized 4D barcodes. In a project entitled PhoneGuide, the researchers assumed that no direct (electromagnetic or other) connection between two devices can exist. Time-multiplexed 2D color barcodes are displayed on screens and recorded with camera-equipped mobile phones, allowing information to be transmitted optically between the two devices. The approach maximizes the data throughput and the robustness of the barcode recognition, even though no immediate synchronization exists. Although the transfer rate is much smaller than can be achieved with electromagnetic techniques (e.g., Bluetooth or WiFi), they envision applying such a technique wherever no direct connection is available. 4D barcodes can, for instance, be integrated into public web pages, movie sequences, advertisement presentations or information displays, and they encode and transmit more information than is possible with single 2D or 3D barcodes. http://www.uni-weimar.de/medien/ar/research.php
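A minimal sketch of the receiver side makes the unsynchronized transfer concrete: because the display and the camera share no clock, the display simply loops its frame sequence, and the phone keeps decoding frames until every chunk has been seen at least once, ignoring repeats. The chunk-index scheme and byte payloads below are invented for illustration; they stand in for decoded 2D color-barcode contents and are not the Weimar project's actual encoding.

```python
def reassemble(captured_frames, total):
    """captured_frames: iterable of (index, payload) pairs as decoded
    from the camera, in arrival order, possibly with repeats and gaps
    (the display loops, the camera samples at its own rate).
    Returns the full message once all `total` chunks are seen, else None."""
    chunks = {}
    for index, payload in captured_frames:
        chunks.setdefault(index, payload)  # duplicate captures are ignored
        if len(chunks) == total:
            # every chunk seen at least once: reassemble in index order
            return b"".join(chunks[i] for i in range(total))
    return None  # keep recording; the looping display will fill the gaps

# Camera joined mid-loop, so chunks arrive out of order and repeated.
captures = [(1, b"lo "), (2, b"wor"), (0, b"hel"),
            (1, b"lo "), (3, b"ld"), (0, b"hel")]
message = reassemble(captures, 4)
```

Embedding the index in each frame is what removes the need for synchronization: the receiver never has to know where the loop started, only which chunks it has already collected.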


RTT updates software for high-speed development and visualization

RTT unveiled the new versions of its core products, RTT DeltaGen 7.0 and RTT Portal 3.0. With a range of new functions, both software solutions not only deliver a maximum degree of visualization but also provide several possibilities for process acceleration and workflow efficiency. The RTT DeltaGen 7.0 software suite enables extremely realistic, professional 3D real-time visualization. One of the highlights is the freshly designed graphical user interface. The new color system includes smart icons and facilitates the creation of 3D models and scenes. Another key element of the recent version is the novel assembly-handling functionality and the direct connection to RTT Portal libraries. The latter feature enables users to directly access object and material libraries, such as a wheel-rim database for cars, from RTT DeltaGen in RTT Portal 3.0. Assembly handling allows 3D scenes that have been divided into assemblies to be loaded into RTT DeltaGen 7.0, linked to 3D models, or unloaded. Single work steps can subsequently be accomplished simultaneously and independently by various users. http://www.rtt.ag

nVidia demonstrates high-speed renderer

nVidia demonstrated its next-generation, near-real-time, high-quality rendering product, with performance improvements capable of re-lighting 60 frames of a complex scene in 60 seconds. Along with a new release of the nVidia Gelato GPU-accelerated software renderer and the announcement of the nVidia Quadro Plex Visual Computing System (VCS) Model S4 1U graphics server, this technology demonstration shows nVidia's expertise in the field of high-quality rendering. High-quality frames, like those used in film and other applications where visual quality is paramount, have been slow to be integrated into an interactive workflow because they take too long to render. At the SIGGRAPH 2007 conference, nVidia previewed the new technology that will be part of its next-generation renderer, harnessing the full power of the nVidia GPU to bring a truly interactive workflow to relighting high-quality scenes in about a second. It can also be used for high-speed final renders of broadcast-quality frames. Running on the latest nVidia GPU architecture, this technology can achieve rendering performance improvements of more than 100 times that of traditional software rendering solutions, the company says. By using the GPU to enhance CPU-based rendering performance, professional-quality interactive final-frame rendering and interactive relighting are now possible, accelerating production workflows, improving review and approval cycles, and reducing overall production schedules. http://www.nVidia.com

e frontier launches Poser Pro and teams up with N-Sided

e frontier, Inc. announced Poser Pro, a high-end addition to its Poser product line. Geared toward a multitude of production environments in both the 2D and 3D realms, Poser Pro offers the features and functionality of Poser 7 plus professional-level application integration, a 64-bit render engine, and network rendering support. Poser Pro now supports the COLLADA exchange format for content production, pre-visualization, gaming and film production, and offers the ability to fully host Poser scenes in professional applications such as Maxon's CINEMA 4D, Autodesk's 3ds Max and Maya, and Newtek's Lightwave. Other features include increased support for Adobe Photoshop CS3 Extended (via COLLADA) and export of HDR imagery. In addition, N-Sided will provide "QUIDAM for Poser", based on its QUIDAM character creation software, which will be bundled exclusively with Poser Pro. QUIDAM features the ability to import and export Poser character files, bringing Poser content and animations into professionals' workflows. http://www.e-frontier.com/go/poserpro

TI incorporates DDD software in 3D HDTV

DDD announced that Texas Instruments demonstrated high-definition 3D video using DDD's TriDef 3D Experience software in conjunction with TI's DLP 3D HDTV at the IFA consumer electronics conference and trade show in Berlin between August 31 and September 5. TI recently announced the world's first 3D DLP HDTV, based on TI's all-digital DLP imaging device used in the latest HDTVs. The 3D DLP HDTV uses active 3D glasses to bring games and movies to life, jumping off the high-definition screen into the viewer's home theater. The 3D-enabled feature will be offered by DLP HDTV manufacturers including Samsung and Mitsubishi. The TriDef 3D Experience is the latest consumer 3D content solution from DDD, enabling a full range of popular entertainment from PC games to the latest high-definition 3D movies. http://www.DDD.com


Samsung incorporates DDD software into latest mobile phone

DDD Group, the 3D software and content company, announced that Samsung Electronics has launched a 3D mobile telephone in Korea incorporating the DDD Mobile software library under license from DDD. The Samsung SCH-B710 3D handset is already available in selected SK Telecom retail stores in South Korea. The SCH-B710 is a CDMA handset capable of receiving both the satellite (S-DMB) and the terrestrial (T-DMB) mobile television channels that are presently available in South Korea. Included in the handset is a 3D LCD display that can be switched between normal 2D display mode and "glasses-free" stereo 3D mode. Using DDD's solution, standard 2D mobile TV channels can be automatically converted to stereo 3D as they are received by the handset. The license agreement with Samsung follows the completion of the £500,000 development agreement that was announced in mid-2005. DDD has also granted Samsung exclusive rights to the real-time 2D-to-3D conversion feature of DDD Mobile for use on mobile telephones made for sale in the Korean market until June 2009. http://www.DDD.com

DDD launches TriDef 3D Experience

DDD has brought out the TriDef 3D Experience, a comprehensive package of software supporting a wide range of stereoscopic 3D display systems, including Samsung's DLP 3D HDTVs. It can play a wide range of 2D and 3D movies and photos, including open-format files (.avi, .mpg, .jpg, etc.); explore Google Earth in 3D; play 3D games; enable third-party applications to work on 3D displays; and play current 2D DVDs in 3D. The TriDef 3D Experience includes DDD's 2D-to-3D conversion software, enabling existing 2D photos, movies and DVDs to be enjoyed in dynamic 3D. A free version is available at http://www.tridef.com/download/latest.html.

Barco and Medicsight team up on colon imaging software

Barco and Medicsight, a developer of computer-aided detection (CAD) technologies, have signed a partnership agreement to incorporate Medicsight's "ColonCAD" image analysis software tools within Barco's "Voxar 3D ColonMetrix" virtual colonography application. By integrating Medicsight's CAD function, Barco further expands the functionality of its ColonMetrix software solution, enabling faster and more efficient recognition of suspect lesions during virtual colonography. Medicsight's ColonCAD is an image analysis software tool designed to be used with CT colonography (virtual colonoscopy) scans. It has been specifically designed to support the detection and segmentation of abnormalities within the colon that may potentially be adenomatous polyps. ColonCAD can be seamlessly integrated within the advanced 3D visualization and PACS platforms of industry-leading imaging equipment partners. Barco's Voxar 3D ColonMetrix is a complete virtual colonoscopy workflow and reporting solution that allows radiologists to interpret a CT colonography study and generate a report, typically within 10 minutes. http://www.medicsight.com

Intuitive Surgical selects Christie for non-invasive surgery

Christie was selected by Intuitive Surgical, a pioneer in surgical robotics, to help display high-definition 3D video images generated by the company's da Vinci surgical system, which is designed to enable surgeons to perform complex surgery using a minimally invasive approach. In demonstrations at trade shows and professional conferences around the country, Intuitive Surgical successfully harnessed the power of a pair of Christie DS+5K 3-chip DLP digital projectors to render highly accurate 3D images of surgical procedures in passive stereo, as seen by surgeons operating the da Vinci surgical system. According to Intuitive Surgical, prior to the da Vinci system, only highly skilled surgeons could routinely attempt complex minimally invasive surgery. The Christie DS+5K projector offers 6,500 ANSI lumens, native 1400x1050 resolution, and a 1600-2000:1 contrast ratio. It features 3-chip DLP technology and the ability to display standard and high-definition video. http://www.christiedigital.com


US hospital first to use Viking's 3Di visualization system in pediatric urological surgery

Viking Systems, a designer and manufacturer of laparoscopic vision systems for use in minimally invasive surgical (MIS) procedures, announced that Dr. Rama Jayanthi and his surgical team at the Columbus Children's Hospital in Ohio successfully performed the first intravesical minimally invasive ureteral reimplantation using Viking Systems' 3Di Vision System. Dr. Jayanthi used a 5 mm 3D laparoscope that enabled him to see inside the bladder and perform this complex procedure. "This procedure is especially delicate and requires surgical precision and close attention to detail," said Dr. Jayanthi. "The 3D high definition view we had during the procedure was incredibly precise and we look forward to working with this technology more and more. The 3D view certainly makes fine suturing easier and more accurate." The 3Di Vision System manufactured by Viking Systems delivers a magnified, high-resolution 3D image that allows the surgeon to visualize depth in the underlying anatomical structures and tissue during complex MIS. The 3D images are viewed by the surgeon and surgical team via Viking's Personal Head Display (PHD), which places the 3D image directly before the surgeon's eyes, providing a high-definition immersive view of the surgical field. The Viking 3Di Vision System also delivers an information management solution known as Infomatix, which provides immediate, picture-in-picture access to additional surgical information through voice activation. This critical information can be provided simultaneously with the surgical image on the surgeon's PHD. http://www.vikingsystems.com

DAZ 3D announces Carrara Version 6 – “The Next Dimension in 3D Art”

DAZ 3D recently announced the upcoming release of the latest version of its popular 3D software, Carrara. The new version will allow users to choose from a large array of tools while exploring new dimensions in 3D creation. Carrara 6 provides 3D figure posing and animation, modeling, environment creation, and rendering tools within a single application. Its extensive support for DAZ 3D content includes handling of morph targets, the conversion of Surface Materials and complete Rigging, and Enhanced Remote Control, which lets users operate multiple translation and transform dials simultaneously. Notable upgrades include Non-linear Animation, giving users the ability to create animation clips that can be reused and combined on multiple animation tracks; Dynamic Hair, which allows artists to style, cut, brush, and drape hair; Displacement Modeling, where the user can paint detail onto a model using free-form brush tools; and Symmetrical Modeling, which allows content creators to edit both sides of a symmetrical object at the same time using a variety of editing tools. Carrara 6 was released for sale in late August at $249 for the standard edition, while Carrara 6 Pro has an MSRP of $549. http://www.DAZ3D.com

New 3D format approved by Ecma International

On June 28, 2007, at its General Assembly meeting in Prien am Chiemsee, Germany, the new 4th Edition of the Universal 3D (U3D) File Format (ECMA-363) was approved. In the new edition, the overall consistency of the format has been improved, and a free-form curve and surface specification, including the specification of NURBS, has been added. In addition, the non-normative reference source code, available at SourceForge.net, has been updated accordingly. “The Universal 3D (U3D) File Format Standard (ECMA-363) is a unique 3D visualization format, being an open standard and having an unsurpassed installed 3D reader base due to the massive deployment of Adobe Reader,” said Lutz Kettner, Director of Geometry Product Development at mental images GmbH and Co-Editor of Ecma TC43. “3D visualization is finally becoming available to everyone. The U3D File Format specification and standardization is an ongoing process in which features such as mesh compression, hierarchical surface descriptions, and generalized shading will be addressed in the near future to satisfy even the most demanding visualization needs.” http://www.ecma-international.org

http://www.veritasetvisus.com 39


Veritas et Visus 3rd Dimension September 2007

Dassault Systèmes and Seemage announce strategic partnership

Dassault Systèmes and Seemage announced their intention to become strategic partners. The partnership will leverage the companies’ respective strengths to grow their presence in the 3D product documentation market, and will provide a seamless link between product documentation and PLM product-related data. For companies, this eliminates disparities between product-related IP and any required product documentation, such as animations, graphics, and illustrations for training, maintenance manuals, and service procedures. Working together, the companies will permit the exploitation of 3D as a universal medium. Seemage users can exploit 3D data from any 3D CAD or enterprise system and create content from it for any desired output, in formats including Microsoft Office documents, PDF, and HTML. Seemage’s XML-based architecture integrates seamlessly with enterprise systems. http://www.seemage.com

NaturalMotion tackles football video games with “Backbreaker”

NaturalMotion, the company behind the euphoria animation technology featured in “Grand Theft Auto IV” and “Star Wars: The Force Unleashed”, announced Backbreaker, an American football game developed exclusively for next-generation consoles. The title is slated for a 2008 release. “Backbreaker is the first football game with truly interactive tackles. By utilizing our motion synthesis engine euphoria, players will never make the same tackle twice, giving them an intensely unique experience every time they play the game,” said NaturalMotion CEO Torsten Reil. http://www.backbreakergame.com

Sony and mental images join up on visualization workflows

Sony and mental images announced a joint project that will allow the Academy Award-winning mental ray high-end rendering software to operate with Sony’s new prototype Cell Computing Board in a range of visualization workflows that feature Cell Broadband Engine (Cell/B.E.) technology. The Cell/B.E. is a high-performance microprocessor jointly developed by Sony Corporation, Sony Computer Entertainment Inc., Toshiba Corporation, and IBM Corporation. According to the companies, the technology’s innovative architecture is particularly well-suited to highly parallelized, compute-intensive tasks. The Cell Computing Board, developed by Sony Corporation’s B2B Solutions Business Group, incorporates the high-performance Cell/B.E. microprocessor and RSX graphics processor to deliver high computational performance capable of handling large amounts of data at high speed, while also achieving reductions in size and energy consumption. An essential element of the project will be support for mental images’ new universal MetaSL shading language on the Cell Computing Board platform. A large library of essential shaders will be provided. In addition, MetaSL shaders can easily be created with “mental mill”, the graphical shader creation and development technology from mental images. The companies expect to demonstrate their results in the second half of 2008. http://www.mentalimages.com

Autodesk takes over Skymatter

Autodesk announced that it has signed a definitive agreement to acquire substantially all the assets of Skymatter Limited, the developer of Mudbox 3D modeling software. The acquisition will augment Autodesk’s offering for the film, television, and game market segments, while providing additional growth opportunities in other design disciplines. Skymatter is a privately held New Zealand-based company. Its Mudbox software offers a new paradigm of 3D brush-based modeling, allowing users to sculpt organic shapes in 3D space with brush-like tools. Appealing to both traditional sculptors and digital artists, Mudbox provides a simple and fast toolset for creative modeling, prototyping, and detailing. 3D assets created in Mudbox are often imported into Autodesk 3ds Max and Autodesk Maya software for texturing, rigging, animation, and final rendering. http://www.autodesk.com/mudbox


Luxology launches on-line hub for 3D community

Luxology announced Luxology TV, a new online hub that allows the 3D community to exchange and view high-resolution video clips on Luxology’s website. Luxology TV enables anyone to enhance their 3D learning experience by searching, selecting, and immediately watching videos on a variety of subjects such as modeling, rendering, painting, and sculpting. Luxology TV is structured to grow quickly into a repository of training and presentation material on modo and other topics pertaining to 3D content creation. The majority of videos are free; commercial professional training materials from Luxology and third-party vendors will also be available for purchase. Luxology TV is now live at http://www.luxology.com/training/.

Lockheed Martin acquires 3Dsolve

Lockheed Martin Corporation announced it has acquired 3Dsolve, Inc., a privately held company that creates simulation-based learning solutions for government, military, and corporate applications. The company’s software tools assist clients with collaborative training utilizing interactive 3D graphics. 3Dsolve’s core competencies include multimedia, software engineering, digital artwork, instructional design, and project management for use in state-of-the-art simulation learning solutions. http://www.lockheedmartin.com

Dan Lejerskar presents his vision of the future of 3D

Real life and digital simulation will merge by 2011, producing a mixed-reality environment that will change the way consumers communicate, interact, and conduct commerce, according to futurist Dan Lejerskar, chairman of EON Reality, the interactive 3D software provider. “What once was imagined soon will be experienced,” Lejerskar explained. “The technology convergence of virtual reality, artificial intelligence, Web and search, and digital content means that people can experience more in their daily lives by blurring the distinction between their physical existence and digital reality.” As evidence of this trend, he points to the realization of commercially viable applications for 3D interactive virtual reality technology, as well as the position of industry thought leaders championing the advancement of such experiences. Heavyweights Google and Microsoft are pushing this trend toward the manifestation of the 3D Internet, while computer and video game developers are whetting consumers’ appetites for 3D experiences with new technologies, such as Nintendo’s Wii. Hollywood studios and amusement parks also are incorporating 3D interactive virtual reality elements into their offerings. “We’re witnessing the creation of an environment in which visualization companies, industry, academia, and the public sector can meet and exchange knowledge, experiences, and ideas,” Lejerskar said. “Within three to four years, we’ll see radical changes in how we shop, learn, and communicate with business associates, friends, and family. Consumers crave user-generated experiences that combine virtual reality technology with physical location-based events to produce totally immersive 3D interactive experiences.” http://www.eonreality.com

EON Reality brings out Visualizer for idiot-proof 3D content creation

EON Reality unveiled its EON Visualizer at the SIGGRAPH Technology Conference. EON Visualizer is a 3D interactive authoring tool that allows non-technical business users to generate 3D worlds for Web, print, video, and real-time formats. An off-the-shelf tool, EON Visualizer makes it easy for anyone with a computer and Internet access to create realistic, interactive 3D content. According to Gartner Research, by 2011, 1.6 billion out of a total 2 billion Internet users will actively participate in virtual worlds; today, however, the knowledge necessary to create these worlds is limited to only the most technically sophisticated. EON Visualizer is founded on EON Reality’s new kernel, Dali, which improves the scalability and flexibility of the functionality. Even users without programming knowledge of 3D software can use EON Visualizer to create 3D content, the company says, for role-playing games, social network communities, business marketing and sales presentations, and education and training. Users with programming skills will be able to add advanced features to EON Visualizer. Key features include an intuitive interface with drag-and-drop tools; a Web-based 3D object library with 12,000 3D objects and products, more than 12 showrooms, and 360-degree landscape settings and image backdrops for creating augmented realities; and visual (non-text) search for objects and components through EON I-Search functionality (a Google-supported search engine), which allows users to search additional EON Reality-supported content available on the Internet. EON Visualizer will ship in November 2007. http://www.EONReality.com


S3D-Basics+ Conference
August 28-29, 2007, Berlin, Germany

Phillip Hill reports on presentations from Blue Frames Media, Advanced Micro Devices, BrainLAB, Infitec, and Spatial View

Florian Maier of Blue Frames Media of Germany presented on “3D recording devices for two-channel or multi-channel applications”. The talk covered research, one-camera 3D recording devices, and multi-camera 3D recording devices. The motivation for the company is the new 3D wave, driven by digital possibilities and huge demand for 3D content. But Maier said that there was a lack of efficient and exact recording devices and a lack of knowledge about 3D recording parameters. The aims of the research work are to analyze the best 3D parameters for a given set-up, to develop a PC program, and to develop photographic recording devices. The company has carried out deep studies of 3D basics: physiological limits, physiological problems (3D sickness), and existing recording and display techniques. The program for the calculation of 3D parameters gives the best interaxial distance between cameras to avoid 3D sickness and the best adaptation to different display techniques.
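The model behind Maier’s program is not described in the talk, but the basic geometry of the interaxial calculation can be sketched. For a parallel two-camera rig with focal length f, a point at distance Z produces an on-sensor disparity of f·b/Z, so the near-to-far disparity range fixes the largest comfortable stereo base b. A minimal sketch, with all parameter values hypothetical and no claim to match Blue Frames Media’s actual software:

```python
def max_interaxial(f_mm, z_near_mm, z_far_mm, max_disparity_mm):
    """Largest stereo base b (mm) for a parallel rig whose near-to-far
    on-sensor disparity range, f*b*(1/z_near - 1/z_far), stays within
    the allowed disparity budget."""
    if z_far_mm <= z_near_mm:
        raise ValueError("z_far_mm must exceed z_near_mm")
    # Disparity spread produced per millimetre of stereo base:
    disparity_per_mm_of_base = f_mm * (1.0 / z_near_mm - 1.0 / z_far_mm)
    return max_disparity_mm / disparity_per_mm_of_base
```

With a 35 mm lens, a scene spanning 2 m to 10 m, and a 1.2 mm disparity budget, this yields an interaxial of roughly 86 mm; a real tool like Maier’s would additionally adapt the budget to the target display technique and viewer comfort limits.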

The 3D photographic recording devices that the company has developed fall into two categories: a one-camera system for static objects and a multi-camera system for dynamic objects. The one-camera recording system gives exact and reproducible results, is very efficient due to automation, and is designed for special purposes such as macro photography or long-time exposure. A small version will be available soon. The multi-camera system minimizes interaxial distance and gives a larger field of depth. Close-ups of dynamic objects become feasible with no loss of picture quality, and normal digital cameras (for both photos and video) can be used.

>>>>>>>>>>>>>>>>>>>>



It targets professional users of CAD, digital content creation, medical imaging, and visual simulation applications. It maximizes graphics throughput by dynamically allocating resources as needed, and automatically configures hardware and software for optimal performance with certified applications. Its 2 GB of graphics memory enables real-time interaction with larger datasets and more complex models and scenes, and it supports multiple 3D accelerators in a single system for up to quad display output. Finally, it delivers hardware acceleration of DirectX 10 and OpenGL 2.1 without impacting CPU performance.

>>>>>>>>>>>>>>>>>>>>>>



Society for Information Display 2007 Symposium
May 20-25, Long Beach, California

In this second report from the principal event of the year, Phillip Hill covers presentations from Samsung SDI, Communications Research Centre Canada, Philips Research Laboratories, and SeeReal Technologies

28.3: Dense Disparity Map Calculation from Color Stereo Images using Edge Information
Ja Seung Ku, Hui Nam, Chan Young Park, Beom Shik Kim, Yeon-Gon Mo, Hyoung Wook Jang, Hye-Dong Kim, and Ho Kyoon Chung
Samsung SDI, Korea

Samsung has developed a stereo-corresponding algorithm that uses edge information. The conventional stereo-corresponding algorithm, SAD (Sum of Absolute Difference), performs well in textured regions but produces false matches in non-textured regions. To reduce this false matching, a new cost term, defined as SED (Sum of Edge Difference) and based on global optimization, is added to the SAD cost function. The authors evaluated the algorithm against the Middlebury benchmark database. The experimental results show that the algorithm successfully produces piecewise-smooth disparity maps while reducing false matching in non-textured regions. Moreover, the algorithm reaches its best quality faster than SAD alone.
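The paper’s exact cost function and optimization are not reproduced here, but the core idea of augmenting a window-based SAD cost with an edge-difference term can be sketched as follows; the window size, weight `lam`, and winner-take-all selection are illustrative assumptions, not Samsung’s implementation:

```python
import numpy as np

def matching_cost(left, right, edges_l, edges_r, x, y, d, win=2, lam=0.5):
    """Cost of assigning disparity d to pixel (y, x): window SAD on the
    image intensities plus a weighted SED term on the edge maps."""
    l_img = left[y - win:y + win + 1, x - win:x + win + 1]
    r_img = right[y - win:y + win + 1, x - d - win:x - d + win + 1]
    sad = np.abs(l_img - r_img).sum()        # Sum of Absolute Differences
    l_edge = edges_l[y - win:y + win + 1, x - win:x + win + 1]
    r_edge = edges_r[y - win:y + win + 1, x - d - win:x - d + win + 1]
    sed = np.abs(l_edge - r_edge).sum()      # Sum of Edge Differences
    return sad + lam * sed

def wta_disparity(left, right, edges_l, edges_r, x, y, d_max, **kw):
    """Winner-take-all: the disparity in [0, d_max] with minimal cost."""
    return min(range(d_max + 1),
               key=lambda d: matching_cost(left, right, edges_l, edges_r,
                                           x, y, d, **kw))
```

The edge term penalizes candidate matches whose local edge structure differs, which is exactly where plain SAD fails: in flat, non-textured regions the intensity windows are nearly identical for many disparities, while nearby edges still discriminate.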

32.1: Invited Paper: Human Stereoscopic Vision: Research Applications for 3D-TV
Wa James Tam
Communications Research Centre Canada, Ottawa, Canada

The Communications Research Centre (CRC) Canada has been conducting research on 3D-TV and related stereoscopic technologies since 1995. Three areas of CRC’s research on human stereoscopic vision and its application to 3D-TV are highlighted. First, the author presents work on the use of inter-ocular masking to reduce bandwidth requirements without sacrificing high image quality. Secondly, he presents experimental results that show the effect of stereoscopic objects in motion on visual comfort. Thirdly, he presents studies illustrating how the tendency of the human visuo-cognitive system to correct or fill in missing visual information can be used to generate effective stereoscopic images from sparse depth maps.

Figure 1: An example of a surrogate depth map is shown at the bottom right. The original source image and its typical depth map are shown at the top and on the bottom left, respectively.


In the search for ways to generate depth maps, the researcher investigated the possibility of creating depth maps from the pictorial depth information contained in standard 2D images, such as blur information arising from the limited depth of field of a camera lens. In this example, blur information is useful if one assumes that blurred objects are farther away than sharp objects; a depth map can therefore be created from the blur information contained in the original 2D images. Along the way, CRC discovered that depth maps do not have to contain dense information to be effective. CRC found that depth maps containing sparse depth information, that is, depth information concentrated mainly at edges and object boundaries in the original 2D images, are sufficient to yield an enhanced sensation of depth compared to a corresponding monoscopic reference. CRC named these maps “surrogate depth maps”. An example is shown in Figure 1 (previous page).
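CRC’s actual procedure is not detailed here, but the core idea of a surrogate depth map, depth values concentrated at edges of the 2D image with flat regions left empty, can be illustrated with a toy sketch; the gradient operator and threshold are assumptions for illustration, not CRC’s method:

```python
import numpy as np

def surrogate_depth_map(img, thresh=0.1):
    """Toy surrogate depth map: keep normalized gradient magnitude at
    edge locations only; flat regions get zero depth."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)                 # edge strength per pixel
    peak = mag.max()
    if peak == 0.0:                        # featureless image: no depth cues
        return np.zeros_like(mag)
    return np.where(mag > thresh * peak, mag / peak, 0.0)
```

A DIBR renderer would then warp the 2D image according to this sparse map, relying, as the study argues, on the visual system to fill in depth over the empty interior regions.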

In conclusion, the studies show that the approach combining surrogate depth maps and DIBR (depth-image-based rendering) can be used to generate rendered stereoscopic images with good perceived depth. The effectiveness of surrogate depth maps can be explained if it is assumed that the human visual system combines the depth information available at the boundary regions with pictorial depth cues to compensate for the missing or erroneous areas and arrive at an overall perception of depth of a visual scene. The results of the studies also provide useful indications of the minimum depth information required to produce an enhanced sensation of depth in a stereo image, i.e., depth at object boundaries. This minimum depth information can be used as a backup method when no other depth information is available to a 3D-TV broadcast system.

32.2: Effect of Crosstalk in Multi-View Autostereoscopic 3D Displays on Perceived Image Quality
Ronald Kaptein and Ingrid Heynderickx
Philips Research Laboratories, Eindhoven, The Netherlands

The effect of crosstalk in multi-view autostereoscopic 3D displays on perceived image quality was assessed in two experiments. The first experiment shows that preference decreases with increasing crosstalk, but not as strongly as expected. The second experiment shows that the crosstalk visibility threshold is higher than found in earlier studies.

Gaining insight into the ambivalent effects of crosstalk is essential for improving the quality of multi-view autostereoscopic 3D displays. This study therefore investigated the visibility of crosstalk in still images, and its effect on image quality (preference), taking into account the properties of multi-view autostereoscopic 3D displays. To do this, it was necessary to vary the amount of crosstalk over a considerable range, which was not feasible on a multi-view lenticular 3D display because of its fixed lens. However, since binocular image distortion was found to be the average of the monocular image distortions of the two eyes, depth was not a necessary feature, which meant that the researchers could investigate the perceptual effects of crosstalk by simulating it on a 2D panel. To include the typical pixel structure of a multi-view lenticular 3D display, they used a high-resolution panel and simulated a single 3D pixel using multiple pixels of the high-resolution display. In the present study, only crosstalk and the pixel structure were taken into account. Two experiments were performed: the first assessed the preference for different crosstalk levels, investigating the trade-off between the visibility of crosstalk (i.e., blurring and ghosting) and the visibility of pixel-structure artifacts; the second determined the visibility threshold of crosstalk.

Figure 1: (A) shows the LCD panel and the position of the lenses; oblique lines indicate lens edges. (B) shows a real 3D pixel structure, (C) the simulation.


The 3D TV taken as a reference was a Philips lenticular 3D TV; the high-resolution 2D display used for the simulations was a 22.2-inch IBM T221 display, with a resolution of 3840x2400 pixels and a pixel pitch of 0.1245 mm. The researchers wanted to simulate the 3D pixel structure using as few 2D pixels as possible, while preserving the essential characteristics (slope and aspect ratio of the sides). The solution can be seen in Figure 1C (previous page). The width of this structure is 9 pixels, i.e. 1.12 mm. However, a real 3D pixel has a width and height of about 1.45 mm. To compensate for this, the simulated image should be viewed from a slightly different distance, namely a factor of 0.77 closer compared to the 3D TV (i.e. 2.3 m). How crosstalk manifests itself at the pixel level can be derived from Figure 1A.
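The 0.77 factor follows directly from matching the angular size of the simulated and real 3D pixels; the numbers quoted in the text can be checked in a few lines:

```python
# Reproduce the viewing-distance correction quoted in the article.
pixel_pitch_2d = 0.1245          # mm, IBM T221
sim_width_px = 9                 # 2D pixels per simulated 3D pixel
sim_width_mm = sim_width_px * pixel_pitch_2d   # 1.1205 mm (~1.12 mm)

real_3d_pixel_mm = 1.45          # width/height of a real 3D pixel
scale = sim_width_mm / real_3d_pixel_mm        # ~0.77

design_distance_m = 3.0          # design viewing distance of the 3D TV
sim_distance_m = design_distance_m * scale     # ~2.3 m

print(round(sim_width_mm, 2), round(scale, 2), round(sim_distance_m, 1))
```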

The results suggest that crosstalk is less visible in multi-view autostereoscopic 3D displays than expected. They also suggest that pixel-structure artifacts in lenticular-based 3D displays, although visible, play a minor role in determining image quality compared to crosstalk, at least at the viewing distance for which the 3D display was designed (3 m). These results are of importance for the optimal design of multi-view autostereoscopic 3D displays: they give a first indication of the decrease in image quality to be expected when crosstalk is increased. Increasing crosstalk can help to obtain more uniform display intensity and smoother view transitions during head movements. All in all, the results can help in finding a better balance between the various factors that play a role, Philips says.

32.3: A New Approach to Electro-Holography for TV and Projection Displays
A. Schwerdtner, N. Leister, and R. Häussler
SeeReal Technologies, Dresden, Germany

Among 3D displays, only electro-holographic displays are in principle capable of completely matching natural viewing. SeeReal's new approach to electro-holography facilitates large object reconstructions with a moderate resolution of the spatial light modulator. The researchers verified the approach with a standard 20-inch LCD as the spatial light modulator. A key factor limiting universal application of stereoscopic displays is vision problems due to the inherent mismatch between eye focusing and convergence. Only holographic displays are in principle capable of completely matching natural viewing. The most severe problem in creating large-size video holograms is the so-called space-bandwidth product, which is directly related to the number of display pixels. This is the reason why electro-holographic displays have been restricted to displaying very small scenes with very small viewing angles and low image quality. The object of the project was therefore to develop a new approach to electro-holography that enables the observer(s) to see large holographic reconstructions of 3D objects from electro-holographic displays having moderate pixel resolution.

Figure 1, a schematic drawing of the holographic display, illustrates the concept. The light source LS illuminates the spatial light modulator SLM and is imaged by the lens F into the observer plane OP. The hologram is encoded in the spatial light modulator. The observer window OW is located at or close to the observer eye OE. The size of the OW is limited to one diffraction order of the hologram. The observer sees a holographically reconstructed three-dimensional object 3D-S in a reconstruction frustum RF that is defined by the observer window and the hologram. An overlap of higher diffraction orders in the observer window is avoided by encoding the holographic information of each single point P of the object 3D-S in an associated limited area A1 in the hologram. The correct size and position of A1 are obtained by projecting the OW through the point P onto the light modulator SLM, as indicated by the lines from the OW through P to the area A1. Light emanating from higher diffraction orders of the reconstructed point P will not reach the OW and is therefore not visible. A 3D object comprising many object points results in overlapping associated areas with holographic information that are superimposed to


the total hologram. The researchers summarize by saying that the approach significantly reduces the requirements on optical components and software compared to conventional electro-holographic displays. They achieve this by generating the visible object information only at positions where it is actually needed, i.e. at the eye positions. Large holographic object reconstructions are possible because the pixel pitch of the spatial light modulator does not limit the size of the reconstructed object. The fundamental idea in the concept is to give highest priority to reconstruction of the wave field at the observer's eyes, not of the three-dimensional object itself. They say that they have been working to extend the approach to projection displays. Again, there are one or several observer windows through which one or several observers see a holographically reconstructed object. The hologram is encoded on a small spatial light modulator and the holographic reconstruction is optically enlarged by magnification optics.
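The projection of the observer window through a point P onto the SLM is plain similar-triangle geometry. A 1D sketch under simplifying assumptions (the function name is hypothetical, the geometry is reduced to one axis, and P is taken to lie between the SLM and the observer plane):

```python
# Sketch of how the sub-hologram region A1 is found: project the observer
# window (width w_ow, centred at x_ow, in the observer plane at distance D
# from the SLM) through the object point P back onto the SLM plane (z = 0).
def sub_hologram_extent(w_ow, x_ow, D, x_p, z_p):
    """Return (left, right) edges of A1 on the SLM; z_p is P's distance
    from the SLM, with 0 < z_p < D."""
    def through(x_edge):
        # Ray from window edge (x_edge, D) through P (x_p, z_p), hit z = 0:
        return x_p + (x_edge - x_p) * (-z_p) / (D - z_p)
    a = through(x_ow - w_ow / 2)
    b = through(x_ow + w_ow / 2)
    return (min(a, b), max(a, b))

# A point halfway between SLM and observer maps the window 1:1 onto the SLM:
left, right = sub_hologram_extent(w_ow=10.0, x_ow=0.0, D=500.0, x_p=0.0, z_p=250.0)
print(right - left)  # 10.0
```

Note how A1 shrinks as P approaches the SLM: points near the hologram plane need only a tiny encoded area, which is what keeps the computational load moderate.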


International Workshop on 3D Information Technology
May 15, 2007, Seoul, Korea
by Andrew Woods

The 3D Display Research Centre (3DRC), based at Kwangwoon University (Seoul, South Korea), recently organized the fourth in its series of international workshops. The workshop was held at the compound of Cheong Wa Dae (literally "the Blue House", the office of the South Korean president) and featured presentations from eight international invited speakers and four local speakers, plus a poster session.

The presentations were a mixture of a summary of 3D research from each presenter's home country and a summary of the presenter's own research.

The first presentation was by Professor George Barbastathis (MIT, USA), whose paper was titled "3D Optics". His presentation focused on holographic imaging and the process of capturing 3D images and datasets using holographic methods. The presentation of Andrew Woods (Curtin University, Australia) was titled "R&D Activities on 3D Information Technologies in Australia" and summarized the work of a number of stereoscopic R&D organizations in Australia (including DDD, iVEC, and Jumbo Vision), plus his own work on underwater stereoscopic video cameras and the compatibility of consumer displays with stereoscopic methods. Professor Fushou Jin's (Jilin University, China) presentation, titled "3D display activities in China", discussed the stereoscopic human factors work of Fang et al. (Zhejiang Univ., 2004), autostereoscopic video transforms and novel autostereoscopic backlights by Zou et al. (Hefei Univ. of Tech., 2004 and 2005), head-mounted displays by Sun et al. (National Key Lab of Applied Optics, 2005), and volumetric displays by Lin et al. (Zhejiang Univ., 2005), as well as his own work on integral 3D imaging.

3DIT 2007 invited speakers and organizers: (left to right) Sang-Hyun Kim (student, Waseda Univ., Japan), Takashi Kawai (Waseda Univ., Japan), unknown (Presidential Security Service), student (3DRC), Jin Fushou (Jilin Univ., China), student (3DRC), Vladimir Petrov (Saratov State Univ., Russia), Hiroshi Yoshikawa (Nihon Univ., Japan), Eun-Soo Kim (3DRC), Zsuzsa Dobranyi (Holografika, Hungary), Dae-Jun Joo (Presidential Security Service), Tibor Balogh (Holografika, Hungary), George Barbastathis (MIT, USA), student (3DRC), Jack Yamamoto (3D Consortium, Japan), Andrew Woods (Curtin Univ., Australia), Nam-Young Kim (3DRC).

The second session contained two papers. Professor Vladimir Petrov (Saratov State University, Russia) discussed "Recent R&D activities on 3D Information Systems in Russia", which included coverage of his own work on classification of stereoscopic methods, formats and technologies, optical correction of depth plane curvature, and electronically controlled optical holograms, plus a description of the autostereoscopic "SmartON" display by Putilin et al. (FIAN, Moscow), volumetric displays by Shipitsyn (Moscow, Russia) and Golobov et al. (LETI, Russia), a stack of holograms by Golobov et al. (LETI, Russia), a stack of light-scattering shutters by Kimpanets et al. (Lebedev Physical Institute, Russia), a waveguide holographic display by Putilin et al. (FIAN, Moscow), and others. The presentation by Tibor Balogh (Holografika, Hungary) was titled "HoloVizio, The Light Field Display System". Various aspects of the HoloVizio autostereoscopic display system were described, including fundamentals, principles of operation, implementations, hardware and software systems, and applications.

The third session of the day included three papers. Professor Takashi Kawai's (Waseda University, Japan) paper on "Recent R&D Activities on 3D Information Technologies in Japan" discussed his own work on stereoscopic ergonomic evaluation, display hardware and stereoscopic video software development, content creation, time-series analysis of stereoscopic video, and scalable conversion of stereoscopic content for different screen sizes. He also summarized industry activities, including the Digital Content Association of Japan (DCAJ), the Ultra Realistic Communications Forum (URCF), and 3D Fair 2006 (November 2006, Akihabara, Japan). Jack Yamamoto (3D Consortium, Japan) provided a "3D Market Trend Overview" and also summarized the recent activities of the 3D Consortium. The presentation of Professor Hiroshi Yoshikawa (Nihon University, Japan), titled "Recent activities on 3-D imaging and display in Japan", discussed government- and industry-supported activities along with a summary of his university's research in optical holograms, digital holography, a fringe printing system, fast computer-generated holograms, and holo-video.

The final formal session of the day included four papers. Professor Eun-Soo Kim (3DRC, Kwangwoon University, Korea) presented a paper titled "3D R&D Activities in 3DRC", which provided a brief overview of commercial 3D R&D activities in Korea (including Samsung, LG, Pavonine, Zalman, Innertech, Sevendata, and the KDC Group), an introduction to the 3DRC, and a summary of the 3D display prototypes and R&D activities of the 3DRC. Dr. Jinwoong Kim from ETRI (Electronics and Telecommunications Research Institute, Daejeon, Korea) presented "R&D Activities on 3D Broadcasting Systems in ETRI". As well as providing an overview of 3D in the broadcasting industry, detail was provided on ETRI's activities in 3D DMB (Digital Multimedia Broadcasting) to handheld devices and in multi-view 3DTV systems. Dr. Sung-Kyu Kim from KIST (Korean Institute of Science and Technology, Seoul, Korea) presented "R&D Activities on 3D Displays in KIST". His presentation discussed KIST's work on multi-focus 3D display systems (using either a laser-scanned DMD-modulated display or a multi-light-source (LED) method) and camera-related issues for autostereoscopic mobile displays. Dr. Jae-Moon Jo (Samsung Electronics, Korea) presented "Status of 3D Display Development". His paper was divided into three parts: technology and market trends, 3D display technologies, and Samsung's technology. The latter part of his presentation discussed Samsung's 3D DLP HDTVs, Samsung's 3D LCD DMB phone (SCH-B710), a Samsung autostereoscopic 2D/3D monitor using a time-sequential LCD, and the Samsung SDI autostereoscopic OLED 2D/3D demo.

Selected papers in the poster session included:

• Effective generation of digital holograms of 3-D objects with a novel look-up table method
• Three-dimensional reconstruction using II technique of captured images by holographic method
• Efficient generation of CGH for frames of video images
• Holographic 3D display of images captured by II technique
• Enhanced IVR-based computational construction method in three-dimensional integral imaging with non-uniform lens array
• Three-dimensional image correlator using computationally reconstructed integral images
• Using quantum optics in 3D display
• Extraction of rat hippocampus using stereoscopic microscope system
• Efficient 3D reconstruction method using stereo matching robust to noise
• A compact rectification algorithm for trinocular stereo images
• The effect of saccadic eye movements on motion sensitivity in 3D depth

A proceedings volume was published by the 3DRC containing all of the slides of the speakers and poster authors.
http://www.3drc.org/


3DTV CON 2007
May 7-9, Kos Island, Greece

In this second report on the IEEE conference on the capture, transmission and display of 3D video, Phillip Hill covers presentations from Monash University, Middle East Technical University/STM Savunma Teknolojileri Muhendislik ve Ticaret, ATR/University of Tsukuba, Tampere University of Technology, Momentum, Yonsei University, the University of Rome, the University of Oulu, and two from Tel-Aviv University.

Large scale 3D environmental modeling for stereoscopic walk-through visualization
Nghia Ho, Ray Jarvis
Intelligent Robotics Research Centre, Monash University, Australia

The availability of high-resolution, long-range laser range finders with color image registration facilities opens up the possibility of large-scale, accurate and dense 3D environment modeling. This paper addresses the problem of integrating and analyzing multiple scans collected over extended regions for stereoscopic walk-through visualization. Large-scale 3D environment modeling has gained popularity due to the availability of high-resolution, long-range 3D laser scanners. These devices can capture a dense point cloud of the environment, collecting on the order of tens or hundreds of millions of points. Laser scanners provide very rich data but, on the other hand, present a technical challenge in processing such large volumes of data, which can easily grow to a couple of gigabytes. Two tasks essential for large-scale environment modeling are scan registration and visualization. Scans need to be taken at various locations and registered together to build a complete model. The ability to visualize the model via a stereoscopic display is very attractive because the data is well suited to a walk-through application such as a virtual tour. The paper reports a campus modeling project undertaken at Monash University.

Figure 1 (left): Buildings and vegetation rendered entirely by planes and texture. Figure 2 (right): Bird's-eye view of the campus model.


The researchers used the Riegl LMS-Z420i laser range scanner to scan the Monash University campus. The scanner is capable of scanning 360 degrees horizontally and 80 degrees vertically. The average sampling rate for a high-resolution scan is about 8000 points per second. Color information is obtained separately via a Nikon D100 mounted on the scanner. Approximately 30 scans were taken around the Monash campus over a couple of weeks, with each scan fixed to capture 5-6 million points. One problem the researchers faced was people walking about during the scanning, which introduced unwanted noise that appears as a thin line of points. To alleviate this problem they took two scans at the same location; the two scans were then compared side by side, and the points furthest from the scanner were taken as the true range.
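The side-by-side comparison can be sketched in a few lines. This is a minimal illustration under the assumption that both scans are sampled on the same angular grid (the function name is hypothetical; the paper's actual implementation may differ):

```python
import numpy as np

# A person walking through one scan shows up as a spuriously short range in
# that scan only, so keeping the larger of the two ranges per direction
# rejects the transient obstruction and keeps the static background.
def fuse_ranges(scan_a, scan_b):
    """scan_a, scan_b: range arrays sampled on the same angular grid."""
    return np.maximum(scan_a, scan_b)

wall_a = np.array([10.0, 10.0, 10.0, 10.0])
wall_b = np.array([10.0, 2.5, 10.0, 10.0])   # a passer-by at 2.5 m in scan B
print(fuse_ranges(wall_a, wall_b))            # [10. 10. 10. 10.]
```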

Figure 2 (previous page) shows a bird's-eye view of a section of the campus model. Some of the tree leaves come out blue rather than green, a result of incorrect registration with the sky color. One possible explanation is that the leaves are being blown by the wind, which causes a misregistration between the laser range data and the color images. Point clouds obtained from a laser range scanner are dense at distances near the scanner but sparse further away. This greatly affects the visual quality and is noticeable in some parts of the scenes where there is inadequate sampling. It can be improved by performing a longer scan and collecting more points, which also has the extra benefit of collecting less noise from people moving across the scans.

Shape from unstructured light
Anner Kushnir and Nahum Kiryati
Tel-Aviv University, Israel

A structured light method for depth reconstruction using unstructured, essentially arbitrary projection patterns is presented. Unlike previous methods, the suggested approach allows the user to select the projection patterns from a given slide show or from movie frames, or to simply project noise, thus extending the range of possible applications. The system includes a projector and a single camera. Two progressive algorithms were developed for obtaining projector-camera correspondence, with each additional projection pattern improving the reliability of the final result. The method was experimentally demonstrated using two projection pattern sets: a vacation photo album (similar to frames extracted from a video sequence) and a set of random noise patterns.
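The core idea behind projector-camera correspondence from arbitrary patterns can be illustrated with a toy example: each projector column has a temporal "signature" (its intensity across the N projected patterns), and a camera pixel is matched to the column whose signature correlates best with its observed intensity sequence. This is a simplified sketch of the general principle, not the paper's actual PPC or DP algorithm; all names and dimensions below are illustrative:

```python
import numpy as np

# Match one camera pixel to a projector column via normalized correlation
# of temporal intensity sequences across N arbitrary projection patterns.
def match_column(observed, patterns):
    """observed: (N,) intensities at one camera pixel over the N patterns.
    patterns: (N, cols) intensity of each projector column in each pattern."""
    obs = (observed - observed.mean()) / (observed.std() + 1e-12)
    sig = (patterns - patterns.mean(axis=0)) / (patterns.std(axis=0) + 1e-12)
    scores = sig.T @ obs                 # correlation with every column
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
patterns = rng.random((50, 64))          # 50 noise patterns, 64 columns
true_col = 17
observed = 0.8 * patterns[:, true_col] + 0.05 * rng.random(50)  # camera view
print(match_column(observed, patterns))  # 17
```

Each additional pattern lengthens the signatures and so sharpens the correlation peak, which is why the progressive algorithms improve with every projected frame.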

Figure 1: Depth map and textured reconstruction of a mannequin head: (a,b) 50 patterns, DP method; (c,d) 50 patterns, PPC method; (e,f) 10 patterns, DP method; (g,h) 100 patterns, PPC method.


Figure 1 (previous page) presents the reconstruction results obtained using the vacation photo album pattern set, in two formats: a depth map and a visualization of the 3D surface with its texture. Results are shown for the two correspondence-establishment methods considered, Pixel-to-Pixel Correspondence (PPC) and Dynamic Programming (DP), using several pattern-set sizes. The DP method performs well even when the number of projection patterns used for reconstruction is small. The PPC method yields a reasonable result when 50 or more patterns are used, and is superior to the DP method in terms of accuracy and robustness when 100 or more patterns are used.

Effects of color-multiplex stereoscopic view on memory and navigation
Yalın Baştanlar, Hacer Karacan, Middle East Technical University, Ankara, Turkey
Deniz Cantürk, STM Savunma Teknolojileri Muhendislik ve Ticaret, Ankara, Turkey

In this work, the effects of stereoscopic view on participants' object recognition and navigation performance are examined in an indoor desktop virtual reality environment: a two-floor virtual museum with different floor plans and 3D object models inside. This environment is used in two different experimental settings: 1) color-multiplex stereoscopic 3D viewing provided by colored eye-wear; 2) regular 2D viewing. After the experiment, participants filled in a questionnaire that inquired into their feeling of presence, their tendency to be immersed, and their performance on object recognition and navigation in the environment. Two groups (3D and 2D) of five participants each, with equal immersion tendency, were formed according to the answers in the "tendency" part, and the remaining answers were evaluated to examine the effects of stereoscopic view. Contrary to expectations, the results show no significant difference between the 3D and 2D groups in either feeling of presence or object recognition/navigation performance.

A museum consisting of two floors was created. Several 3D object models, such as cars, airplanes, animals, and buildings, were placed in the museum; a few 2D pictures were also put on the walls in order to ease navigation. The two floors have different floor plans and wall textures. A screen shot is shown in Figure 1. The results did not indicate a significant difference between the two groups. Although increasing the number of participants and preparing different kinds of navigation and recognition questions might change the result, the current result may be explained by two observations. First, both groups were able to examine the objects and the environment closely from different points of view; although 3D group participants sometimes spent a little more time on the objects to test stereoscopic viewing, this did not appear to help them recognize the objects better. Second, despite the stereoscopic viewing, 3D group participants did not feel much more involved than the 2D participants. In fact, the control mechanism and speed of the test environment were not realistic enough, and this may have bored participants, causing them to lose attention while observing the objects.

Depth map quantization – how much is sufficient?
Ianir Ideses, Leonid Yaroslavsky, Itai Amit, Barak Fishbain
Department of Interdisciplinary Studies, Tel-Aviv University, Israel

Figure 1 (previous article): A screenshot from the first floor of the virtual museum environment

With the recent advances in visualization devices, synthesizing 3D content requires either a stereo pair or an image and a depth map. Computing depth maps for images is a highly computationally intensive and time-consuming process. In this paper, the researchers describe the results of an experimental evaluation of depth map data redundancy in stereoscopic images. In experiments with computer-generated images, several observers visually determined the number of quantization levels required for comfortable stereoscopic vision unaffected by quantization. The experiments show that the number of depth quantization levels can be as low as a couple of tens. This may have profound implications for the process of depth map estimation and 3D synthesis, the researchers say.

In each experiment, the viewer had to indicate which image had more quantization levels; if the choice was correct, the program increased the quantization levels until the viewer could no longer distinguish between the images. If the choice was incorrect, the program reduced the quantization levels until the viewer could again distinguish between the images. Each experiment consists of several tens of rounds, continuing until the quantization levels converge. To increase the reliability of the selection, the viewer was prompted to confirm his answer in each round. A screenshot of the program is shown below.<br />
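The adaptive procedure can be sketched as a simple one-up/one-down staircase. The following is a minimal illustration with an idealized simulated observer; the starting point, step size, and convergence rule are assumptions, not the authors' parameters:

```python
def staircase(true_threshold, start_levels=4, step=2, rounds=60):
    """Toy one-up/one-down staircase: raise the number of quantization
    levels after a correct answer (harder task), lower it after an
    incorrect one, and estimate the threshold from the final rounds."""
    levels = start_levels
    history = []
    for _ in range(rounds):
        # Idealized observer: distinguishes the two images only while
        # the level count is below the discrimination threshold.
        correct = levels < true_threshold
        history.append(levels)
        if correct:
            levels += step                   # more levels: harder to tell apart
        else:
            levels = max(2, levels - step)   # fewer levels: easier to tell apart
    return sum(history[-10:]) / 10           # average of the final rounds

# An observer who stops discriminating at about 20 levels converges there
estimate = staircase(true_threshold=20)
```

In a real run, the simulated observer is replaced by the viewer's confirmed answer for each round.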

The results of these tests show that a relatively low number of about 20 depth map quantization levels is sufficient for 3D synthesis. This number was acquired for shapes with high height gradients and is lower for other shapes. The obtained results can be utilized in different applications, especially in iterative algorithms of depth map computation and in the process of generating artificial stereo pairs from an image and a depth map.<br />
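What such coarse quantization looks like can be sketched as follows; uniform quantization over [0, 1] is an assumption, as the summary does not specify the quantizer used:

```python
import numpy as np

def quantize_depth(depth, levels=20):
    """Uniformly quantize a depth map with values in [0, 1] down to a
    fixed number of levels, as in the redundancy experiments above."""
    depth = np.clip(depth, 0.0, 1.0)
    q = np.floor(depth * levels) / levels          # snap to level boundaries
    return np.minimum(q, (levels - 1) / levels)    # keep depth 1.0 in the top bin

# A smooth ramp collapses to 20 distinct depth planes
ramp = np.linspace(0.0, 1.0, 256).reshape(1, -1)
quantized = quantize_depth(ramp, levels=20)
```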

An example of the images that were shown to the viewer: the viewer had to indicate which image has a smoother (quantized with more quantization levels) depth map (viewed with anaglyph glasses, blue filter for the right eye).<br />
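Composing such anaglyph test images is straightforward; this sketch assumes 8-bit RGB arrays and follows the blue-filter-over-the-right-eye convention mentioned above:

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Red channel from the left view; green and blue from the right
    view, so a blue filter over the right eye passes the right image."""
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]     # red   <- left eye's image
    out[..., 1] = right_rgb[..., 1]    # green <- right eye's image
    out[..., 2] = right_rgb[..., 2]    # blue  <- right eye's image
    return out

# A pure-red left frame and a pure-blue right frame compose to magenta
left = np.zeros((4, 4, 3), dtype=np.uint8); left[..., 0] = 255
right = np.zeros((4, 4, 3), dtype=np.uint8); right[..., 2] = 255
anaglyph = make_anaglyph(left, right)
```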

Virtual camera control system for cinematographic 3D video rendering<br />

Hansung Kim, Ryuuki Sakamoto, Tomoji Toriyama, and Kiyoshi Kogure<br />

Knowledge Science Lab, ATR, Kyoto, Japan<br />

Itaru Kitahara, Department of Intelligent Interaction Technologies, University of Tsukuba, Japan<br />

The researchers propose a virtual camera control system that creates attractive videos from 3D models generated with a virtualized reality system. The proposed camera control system helps the user generate final videos from the 3D model by referring to the grammar of film language. Many kinds of camera shots and principal camera actions are stored in the system as expertise. Therefore, even non-experts can easily convert the 3D model to<br />


attractive movies that look as if they were edited by expert film producers, with the help of the system's expertise. The user can update the system by creating a new set of camera shots and storing it in the shots' knowledge database.<br />

In one application, the system generates footage by piecing together all of the generated videos. Figure 1(a) shows example<br />

footage of a cinematographic video with camera controls using two annotations to 3D regions and five annotations to time codes. Varied shots with different angles and framing are set for the 3D video to capture a man shadowboxing dynamically. Figure 1(b) shows other footage that uses the same shots, with a region annotation added to the man's foot. Although these pieces of footage are made from videos of the same scenes, the impressions they give are rather different.<br />

The goal of this study is to develop a virtual camera control system for creating attractive videos from 3D models. The proposed system helps users apply expert knowledge to generate desirable and interesting film footage by using a sequence of shots taken with a virtual camera. As future work, the researchers plan to devise a method that uses sensors to automatically determine annotation information.<br />

Figure 1: Outcome of shadow-boxing scene<br />
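A shots' knowledge database of the kind described could be organized along these lines; the shot names, parameters, and values here are purely illustrative assumptions, not the system's actual schema:

```python
from dataclasses import dataclass

@dataclass
class CameraShot:
    """One reusable entry in the shots' knowledge database."""
    name: str          # film-grammar shot, e.g. "close-up"
    distance: float    # camera distance from the annotated 3D region
    elevation: float   # degrees above the subject
    duration: float    # seconds the shot is held

# Users extend the system by storing new shot sets in the database
SHOT_DB = {
    "close-up":  CameraShot("close-up", 1.0, 10.0, 2.0),
    "long-shot": CameraShot("long-shot", 8.0, 15.0, 4.0),
}

def plan_sequence(shot_names, db=SHOT_DB):
    """Resolve a film-grammar shot list into concrete camera settings,
    skipping shots the database does not know."""
    return [db[n] for n in shot_names if n in db]

shots = plan_sequence(["close-up", "long-shot", "crane"])
```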

Mid-air display for physical exercise and gaming<br />

Ismo Rakkolainen, Tampere University <strong>of</strong> Technology, Finland<br />

Tanju Erdem, Bora Utku, Çiğdem Eroğlu Erdem, Mehmet Özkan, Momentum, Turkey<br />

The researchers presented some possibilities and experiments with the “immaterial” walk-through FogScreen for gaming and physical exercise. They used real-time 3D graphics and interactivity to create visually and physically compelling games with the immaterial screens. An immaterial projection screen has many advantages for physical exercise, games, and other activities. It is visually intriguing and can also be made two-sided, so that the opposing gamers on each side see both their side of the screen and each other through it, and can even walk through it. The immaterial nature of the screen also helps with maintenance, as the screen is unbreakable and always stays clean. Initial results show that audiences stayed with the game for extended periods of time.<br />


The FogScreen is currently available in a 2-meter-wide size and in a modular 1-meter-wide size, which enables several units to be linked seamlessly together. The FogScreen device is rigged above the heads of the players so that they can freely walk through the screen, which forms under the device. The continuous flow recovers the flat screen plane automatically and immediately when penetrated. The resolution is not quite as high as with traditional screens, but it works well for most applications like games. The presented concept could also be used in physical rehabilitation, edutainment in science museums, and many kinds of sports games such as boxing, karate, or other martial arts.<br />

Comparison of phoneme and viseme based acoustic units for speech driven realistic lip animation<br />

Elif Bozkurt, Çigdem Eroglu Erdem, Engin Erzin, Tanju Erdem, Mehmet Özkan<br />

Momentum, Turkey<br />

Natural-looking lip animation, synchronized with incoming speech, is essential for realistic character animation. In this work, the Momentum company evaluates the performance of phone and viseme based acoustic units, with and without context information, for generating realistic lip synchronization using HMM based recognition systems. They conclude via objective evaluations that utilization of viseme based units with context information outperforms the other methods. Humans are very sensitive to the slightest glitch in the animation of the human face. Therefore, it is necessary to achieve realistic lip animation that is synchronous with a given speech utterance. There are methods in the literature for achieving lip synchronization based on audio-visual systems that correlate video frames with acoustic features of speech. A major drawback of such systems is the scarce supply of audiovisual data for training. Other methods use text-to-speech synthesis, which utilizes a phonetic context to generate both speech and the corresponding lip animation. However, current speech synthesis systems sound slightly robotic, and adding natural intonation requires more research.<br />

Figure 1: The 3D graphical user interface used for viewing the lip synchronization results<br />

If the lip synchronization is generated using speech uttered by a real person, the animation will be perceived to be more<br />

natural. In such systems, a phonetic sequence can be estimated directly from the input speech signal using speech recognition techniques. This paper focuses on the limited problem of automatically generating phonetic sequences from prerecorded speech for lip animation. The generated phonetic sequence is then mapped to a viseme sequence before animating the lips of a 3D head model, which is built from photographs of a person. Note that a viseme is the lip posture corresponding to a phoneme, i.e., a visual phoneme.<br />

In this work, the researchers experimentally compare four different acoustic units within HMM structures for generating the viseme sequence to be used for synchronized lip animation. These acoustic units are phone, tri-phone, viseme, and tri-viseme based units. The lip animation method is based on 16 distinct viseme classes. After the generation of the 3D head model, a graphic artist defines the mouth shapes for the 16 visemes using a graphical user interface. The results of the lip synchronization can be viewed in the interface shown in Figure 1.<br />
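The phoneme-to-viseme mapping step can be sketched as a simple lookup followed by merging of consecutive duplicates. The table below is a hypothetical partial mapping; the paper's actual 16-class mapping is not reproduced here:

```python
# Illustrative (hypothetical) phoneme-to-viseme table, not the 16-class
# mapping used by the authors.
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "aa": "open", "ae": "open",
    "iy": "spread", "ih": "spread",
    "uw": "rounded", "w": "rounded",
    "sil": "neutral",
}

def phonemes_to_visemes(phonemes):
    """Collapse a recognized phoneme sequence into the viseme sequence
    that drives the lip animation, merging consecutive duplicates."""
    visemes = []
    for ph in phonemes:
        v = PHONEME_TO_VISEME.get(ph, "neutral")
        if not visemes or visemes[-1] != v:
            visemes.append(v)
    return visemes

visemes = phonemes_to_visemes(["sil", "b", "aa", "m", "p", "iy"])
```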


Stereoscopic video generation method using motion analysis<br />

Donghyun Kim, Dongbo Min, and Kwanghoon Sohn<br />

Yonsei University, Seoul, Korea<br />

Stereoscopic video generation methods can produce stereoscopic content from conventional video filmed with monoscopic cameras. The researchers propose a stereoscopic video generation method using motion-to-disparity conversion that considers a multi-user condition and the characteristics of the display device. The field of view and the maximum and minimum disparity values are calculated through an initialization process in order to support various types of 3D displays. After motion estimation, they propose three cues to decide the scale factor of motion-to-disparity conversion: magnitude of motion, camera movement, and scene complexity. Subjective evaluation was performed by comparing videos captured from a stereoscopic camera with videos generated from one view of the stereoscopic video. To evaluate the proposed algorithm, several sequences were used: two stereoscopic sequences and two multi-view sequences. The test platform was a 17-inch polarized stereoscopic display device offering a resolution of 1280x512 in stereoscopic mode. Figure 1 shows the results of stereoscopic conversion for the four test sequences. The shapes of the objects are represented well enough to give a sense of depth to the moving objects. Note that in the second sequence, with the large flowerpot container captured by a panning camera, the result shows reversed depth between background and foreground, which confirms that camera movement recognition is essential. Errors may occur when the original images are roughly segmented or when illumination variation occurs.<br />

Figure 1: Results of stereoscopic conversion<br />
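The conversion pipeline can be sketched as scaling motion into a clamped disparity range and then shifting pixels to synthesize the second view. The scale factor, disparity limits, and hole handling below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def motion_to_disparity(motion_x, scale, d_min=-10.0, d_max=25.0):
    """Scale horizontal motion (pixels/frame) into disparity, clamped to
    the display's comfortable range found during initialization. The
    scale would come from cues such as motion magnitude, camera movement
    and scene complexity."""
    return np.clip(scale * motion_x, d_min, d_max)

def render_right_view(left, disparity):
    """Naive depth-image-based rendering: shift each pixel of the left
    view horizontally by its disparity (holes are left as zeros)."""
    h, w = left.shape[:2]
    right = np.zeros_like(left)
    xs = np.arange(w)
    for y in range(h):
        tx = np.clip(xs + disparity[y].astype(int), 0, w - 1)
        right[y, tx] = left[y, xs]
    return right
```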

A non-invasive approach for driving virtual talking heads from real facial movements<br />

Gabriele Fanelli, Marco Fratarcangeli<br />

University <strong>of</strong> Rome, Italy<br />

In this paper, the University of Rome researchers describe a system to accurately control the facial animation of synthetic virtual heads from the movements of a real person. Such movements are tracked using “active appearance<br />


models” from videos acquired using a cheap webcam. The tracked motion is then encoded using the widely used MPEG-4 Facial and Body Animation (FBA) standard. Each animation frame is thus expressed by a compact subset of “Facial Animation Parameters” (FAPs) defined by the standard. For each FAP, the researchers precompute the corresponding facial configuration of the virtual head to animate through an accurate anatomical simulation. By linearly interpolating, frame by frame, the facial configurations corresponding to the FAPs, they obtain the animation of the virtual head in an easy and straightforward way.<br />

The paper addresses the problem of realistically animating a virtual talking head at an interactive rate by resynthesizing facial movements tracked from a real person using cheap and non-invasive equipment, namely a standard webcam (Figure 1). Using appropriately trained active appearance models, the system is able to track the facial movements of a real person from a video stream and then parameterize such movements in the scripting language defined by the MPEG-4 FBA standard. Each of these parameters corresponds to a key pose of a virtual face, known as a Morph Target, a concept widely known in the computer graphics community. These initial key poses are automatically precomputed through an accurate anatomical model of the face, composed of the underlying bony structure (the upper skull and the jaw), the muscle map, and the soft skin tissue. The morph targets are blended together through a linear interpolation weighted by each parameter's magnitude, achieving a wide range of facial configurations.<br />
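The blending step can be sketched as a standard linear morph-target combination; the FAP name and weight below are illustrative assumptions:

```python
import numpy as np

def blend_morph_targets(neutral, targets, weights):
    """Blend precomputed key poses (one per FAP) onto the neutral mesh:
    vertices = neutral + sum_i w_i * (target_i - neutral), where w_i is
    the normalized magnitude of parameter i for the current frame."""
    vertices = neutral.astype(float).copy()
    for name, target in targets.items():
        w = weights.get(name, 0.0)
        vertices += w * (target - neutral)
    return vertices

# A half-strength "jaw_open" pose moves vertices halfway to the key pose
neutral = np.zeros((2, 3))
targets = {"jaw_open": np.ones((2, 3))}
blended = blend_morph_targets(neutral, targets, {"jaw_open": 0.5})
```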

Future developments will focus on extending the system to support 3D rigid transformations of the real head (i.e., translations and out-of-plane rotations) and iris movements. Fields of possible application include the entertainment industry and human-computer interaction software, where cartoon-like characters could reproduce the expressions of real actors without the aid of expensive and invasive devices, as well as visual communication systems, where video conferences could be established even over very low bandwidth links.<br />

Figure 1: From each input video frame (left), the facial movements are tracked (center), and then used to control a virtual talking head<br />

Stereoscopic viewing of digital holograms of real-world objects<br />

Taina M. Lehtimäki and Thomas J. Naughton<br />

University of Oulu, Finland<br />

The researchers have studied the use of conventional stereoscopic displays for viewing digital holograms of real-world 3D objects captured using phase-shift interferometry. Although digital propagation of holograms can be performed efficiently, only one depth plane of the scene is in focus in each reconstruction, and reconstruction at every depth to create an extended-focus image is a time-consuming process. They investigated the human visual system's ability to perceive 3D objects in the presence of blurring when different depth reconstructions are presented to each eye. Their digital holograms are sufficiently large that sub-regions can be digitally propagated to generate the necessary stereo disparity, and they also encode sufficient depth information to produce parallax. They found that their approach allows 3D perception of objects encoded in digital holograms with significantly reduced reconstruction computation time compared to extended-focus image creation.<br />
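Digital propagation of a hologram to a chosen plane is typically done with an angular-spectrum (or Fresnel) transform; the sketch below assumes a square complex field and uses illustrative optical parameters. Propagating two sub-regions of the hologram in this way would yield the two perspectives for the stereo pair:

```python
import numpy as np

def propagate(field, dist, wavelength, dx):
    """Angular-spectrum propagation of a complex hologram field by a
    distance `dist`, with pixel pitch `dx` (a standard sketch, not the
    authors' implementation)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    # Evanescent components (arg < 0) are clamped rather than amplified
    H = np.exp(2j * np.pi * dist * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Hypothetical usage: reconstruct two sub-apertures at the object depth
# left  = np.abs(propagate(hologram[:256, :256], z, 633e-9, 10e-6))
# right = np.abs(propagate(hologram[:256, 256:512], z, 633e-9, 10e-6))
```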


Stereoscopic Displays and Applications 2007 Conference<br />

January 16-18, 2007, San Jose, California<br />

In this third installment, Mark Fihn summarizes presentations made by Ocuity, NEC, Dynamic Digital Depth, Philips Research (x2), Boston University, Eindhoven University of Technology, LG.Philips LCD, Hitachi, Ltd., and the Tokyo University of Agriculture and Technology. The full papers are available in the 2007 SD&A conference proceedings at http://www.stereoscopic.org/proc<br />

Autostereoscopic display technology for mobile 3DTV applications<br />

Jonathan Harrold and Graham J. Woodgate, Ocuity Limited, Oxford<br />

This presentation discussed the advent of 3DTV products based on cell phone platforms with switchable 2D/3D autostereoscopic displays. Compared to conventional cell phones, TV phones need to operate for extended periods of time with the display running at full brightness, so the efficiency of the 3D optical system is key. The desire for increased viewing freedom to provide greater viewing comfort can be met by increasing the number of views presented. A four-view lenticular display will have a brightness five times greater than the equivalent parallax barrier display; therefore, lenticular displays are very strong candidates for cell phone 3DTV. Specifically, Ocuity discussed the selection of Polarization Activated Microlens architectures for LCD, OLED, and reflective display applications, providing advantages associated with high pixel density, device ruggedness, and display brightness. Ocuity described a new manufacturing breakthrough that enables switchable microlenses to be fabricated using a simple coating process, which is also readily scalable to large TV panels.<br />

Photos of Polarization Activated Microlens structures for some simulated configurations for 2.2-inch 320x240 and 640x470 panels. Image from Ocuity<br />

Ocuity has demonstrated that Polarization Activated Microlens technology is a strong candidate to meet the stringent demands of 3D mobile TV, especially when combined with recent advances in an LC coating technology which does not require sealing or vacuum processing.<br />


A Prototype 3D Mobile Phone Equipped with a Next Generation Autostereoscopic Display<br />

Julien Flack, Dynamic Digital Depth (DDD) Research, Bentley, Western Australia<br />

Jonathan Harrold and Graham J. Woodgate, Ocuity Limited, Oxford<br />

According to the authors, the most challenging technical issues in commercializing a 3D phone are a stereoscopic display technology suitable for mobile applications and a means of driving the display within the limited capabilities of a mobile handset. This paper describes a prototype 3D mobile phone developed on a commercially available mobile hardware platform that was retrofitted with a Polarization Activated Microlens array that is 2D/3D switchable and provides class-leading low crosstalk levels as well as brightness characteristics and viewing zones suitable for operation without compromising battery running time. DDD and Ocuity collaborated to produce this next generation autostereoscopic display, which is deployed on a 2.2-inch TFT-LCD at 320x240 pixels. They also describe how a range of stereoscopic software solutions was developed on the phone's existing application processor without the need for custom hardware. The objective in developing a prototype 3D mobile phone was to demonstrate the effectiveness of integrating an advanced autostereoscopic display into a Smartphone, supported by a range of 3D content demonstrations, in order to stimulate the development of the next generation of stereoscopic 3D mobiles. Through the integration of efficient conversion and rendering software with the handset's main application processor, it was possible to play back 24 frames/sec 320x240 video content rendered in real time using optimized depth based rendering techniques.<br />

This means that provision of content is no longer an issue for 3D handsets. The phone's primary purpose was to provide a benchmark for handset manufacturers and telecoms carriers to assess the commercial viability of a stereoscopic 3D phone using technologies that are available and ready for mass production as of 2006.<br />

As part of the presentation, Flack gave an overview of depth based rendering: virtual left and right eye images are rendered from a depth map and the original 2D image at 320x240 pixels. Image from DDD<br />

Multiple Footprint Stereo Algorithms for 3D Display Content Generation<br />

Faysal Boughorbel, Philips Research Europe, Eindhoven<br />

This research focuses on the conversion of stereoscopic video material into an image + depth format suitable for rendering on Philips' multiview auto-stereoscopic displays. The recent interest shown by the movie industry in 3D has significantly increased the availability of stereo material; in this context, the conversion from stereo to the input formats of 3D displays becomes an important task. The presentation discussed a stereo algorithm that uses multiple footprints, generating several depth candidates for each image pixel. The proposed algorithm is based on a surface filtering method that simultaneously employs the available depth estimates in a small local neighborhood while ensuring correct depth discontinuities through the inclusion of image constraints. The resulting high-quality, image-aligned depth maps proved an excellent match with Philips' 3D displays. The researchers showed that using a robust surface estimation approach built on top of basic window-based matching techniques leads to impressive results. Several efficient implementations are being pursued towards embedding the presented algorithm in future commercial sets.<br />
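The window-based matching that the method builds on can be sketched as a sum-of-absolute-differences search over candidate disparities for a single footprint; the paper's actual algorithm aggregates several footprints and filters the resulting depth candidates. The window size and disparity range here are assumptions:

```python
import numpy as np

def block_match_disparity(left, right, x, y, window=5, max_disp=16):
    """Pick, for one pixel of the left view, the disparity whose sum of
    absolute differences (SAD) over a square footprint is smallest."""
    h = window // 2
    patch = left[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        if x - d - h < 0:          # candidate window would leave the image
            break
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(float)
        cost = float(np.abs(patch - cand).sum())
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```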


A 470x235 ppi LCD for high-resolution 2D and 3D autostereoscopic display<br />

Nobuaki Takanashi and Shin-ichi Uehara, System Devices Research Laboratories, NEC Corp., Sagamihara<br />

Hideki Asada, NEC LCD Technologies, Ltd., Kawasaki<br />

The NEC researchers suggested that 3D display developers face many challenges, particularly with regard <strong>to</strong><br />

au<strong>to</strong>stereoscopic 3D and 3D/2D convertibility. They suggested a solution that utilizes a novel pixel arrangement,<br />

called Horizontally Double-Density Pixels (HDDP). In <strong>this</strong> structure, two pictures (one for <strong>the</strong> left and one for <strong>the</strong><br />

right eye) on two adjacent pixels form one squ<strong>are</strong> 3D<br />

pixel. This doubles <strong>the</strong> 3D resolution, making it as high<br />

as <strong>the</strong> 2D display and shows 3D images anywhere in<br />

2D images with <strong>the</strong> same resolution. NEC’s pro<strong>to</strong>type<br />

polysilicon TFT LCD is lenticular lens-based, at 2.5inches<br />

diagonal inches, and at 320x2 (RL) x 480x3<br />

(RGB) resolution. As a 3D display, <strong>the</strong> horizontal and<br />

vertical resolutions <strong>are</strong> equal (235 ppi each). NEC<br />

verified <strong>the</strong> efficacy <strong>of</strong> <strong>the</strong> display with a broad user<br />

survey, which demonstrated a high acceptance <strong>of</strong> and<br />

interest in <strong>this</strong> mobile 3D display. The researchers<br />

reported that <strong>the</strong> display enables 3D images <strong>to</strong> be<br />

displayed anywhere and 2D characters can be made <strong>to</strong><br />

appear at different depths with perfect legibility. No<br />

HDDP Arrangement: right and left-eye pixels combine <strong>to</strong><br />

form a squ<strong>are</strong>.<br />

Image from NEC<br />

switching <strong>of</strong> 2D/3D modes is necessary, and a thin and<br />

uncomplicated structure and high brightness makes <strong>the</strong><br />

design especially suitable for mobile terminals.<br />
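The quoted numbers can be cross-checked with a little arithmetic. The sketch below infers the panel geometry from the figures in the summary; the 2.5-inch diagonal is the article's rounded value, so only approximate agreement is expected:

```python
import math

# HDDP panel: 320 left/right pixel pairs horizontally, 480 rows vertically
h_pixels = 320 * 2          # horizontally double-density pixels (L+R interleaved)
v_pixels = 480
ppi_h, ppi_v = 470, 235     # quoted pixel densities

width_in = h_pixels / ppi_h     # ~1.36 inches
height_in = v_pixels / ppi_v    # ~2.04 inches
diagonal = math.hypot(width_in, height_in)

print(round(diagonal, 2))   # ~2.45, consistent with the quoted 2.5-inch diagonal

# In 3D mode, adjacent L/R pixels merge into one square pixel,
# halving the horizontal density: 320x480 3D pixels at 235 ppi both ways.
ppi_3d = ppi_h // 2
```

The check confirms that 470 ppi horizontally and 235 ppi vertically, halved into square L/R pairs, reproduce both the 2.5-inch diagonal and the equal 235 ppi 3D densities.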

Compression of still multiview images for 3D auto-multiscopic spatially-multiplexed displays

Ryan Lau, Serdar Ince and Janusz Konrad, Boston University, Boston

Auto-multiscopic displays are becoming a viable alternative to 3D displays with glasses. However, since these displays require multiple views, the needed transmission bit rate as well as the storage space are of concern. This paper describes results of research at Boston University on the compression of still multiview images for display on lenticular or parallax-barrier screens. Instead of using full-resolution views, the researchers applied compression to band-limited and down-sampled views in the so-called "N-tile format" proposed by StereoGraphics. Using lower-resolution images is acceptable since multiplexing at the receiver involves down-sampling from full view resolution anyway. They studied three standard compression techniques: JPEG, JPEG-2000 and H.264. While both JPEG standards work with still images and can be applied directly to an N-tile image, H.264, a video compression standard, requires the N images of the N-tile format to be treated as a short video sequence. They presented numerous experimental results indicating that the H.264 approach achieves significantly better performance than the other approaches studied. The researchers examined all three compression standards on 9-tile images and, based on their studies, proposed a "mirrored N-tile format" in which individual tiles are transposed so as to assure maximum continuity in the N-tile image and thus improve compression performance. Results of the testing indicate that compressing 9-tile images gives better results than compressing multiplexed images, and that H.264 applied to 9-tile images gives the best performance.

Close-up of a rectangular region from a multiplexed image before compression (left) and JPEG-compressed at a compression ratio of 40:1 (right). Note the numerous artifacts in the compressed image; displayed on a SynthaGram SG222 screen, these artifacts result in objectionable texture and depth distortions. Image from Boston University
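As a rough illustration of the N-tile idea (not the authors' code; the tile geometry and view sizes below are made up), the N down-sampled views can either be packed into one mosaic for a still-image codec or stacked as a short "video" for H.264:

```python
import numpy as np

def make_ntile(views):
    # Pack N equally sized views into a square mosaic (N assumed a perfect
    # square), which a still-image codec (JPEG, JPEG-2000) can compress directly.
    n = int(round(len(views) ** 0.5))
    assert n * n == len(views), "N must be a perfect square"
    rows = [np.hstack(views[i * n:(i + 1) * n]) for i in range(n)]
    return np.vstack(rows)

# nine hypothetical 160x120 down-sampled views (constant images for brevity)
views = [np.full((120, 160), v, dtype=np.uint8) for v in range(9)]

tile = make_ntile(views)    # 360x480 mosaic -> feed to a still-image encoder
frames = np.stack(views)    # 9-frame sequence -> feed to a video encoder (H.264)
print(tile.shape, frames.shape)
```

The "mirrored" variant would additionally flip alternate tiles so that edges meet continuously across tile boundaries, which helps block-based coders.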

Predictive Coding of Depth Images across Multiple Views

Yannick Morvan, Dirk Farin and Peter H. N. de With, Eindhoven University of Technology, Eindhoven

A 3D video stream is typically obtained from a set of synchronized cameras simultaneously capturing the same scene (multiview video). This technology enables applications such as free-viewpoint video, which allows the viewer to select a preferred viewpoint, or 3DTV, where the depth of the scene can be perceived using a special display. Because the user-selected view does not always correspond to a camera position, it may be necessary to synthesize a virtual camera view. To synthesize such a virtual view, the researchers from Eindhoven adopted a depth image-based rendering technique that employs one depth map for each camera. Consequently, remote rendering of the 3D video requires a compression technique for both texture and depth data. This paper presents a predictive coding algorithm for the compression of depth images across multiple views. The presented algorithm provides:

• improved coding efficiency for depth images over block-based motion-compensation encoders (H.264)
• random access to different views for fast rendering

The proposed depth-prediction technique works by synthesizing/computing the depth of 3D points based on the reference depth image. The attractiveness of the depth-prediction algorithm is that the prediction of depth data avoids an independent transmission of depth for each view, while simplifying view interpolation by synthesizing depth images for arbitrary viewpoints. The researchers presented experimental results for several multiview depth sequences showing a quality improvement of up to 1.8 dB compared to H.264 compression. The presented technique therefore demonstrates that predictive coding of depth images can provide a substantial compression improvement for multiple depth images while providing random access to individual frames for real-time rendering.
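A minimal sketch of the depth-prediction idea, simplified to rectified cameras with a purely horizontal baseline (the actual algorithm uses full camera calibration): each depth value in the reference map implies a disparity, which says where that 3D point lands in the neighboring view, so a predicted depth map can be warped from the reference and only the residual needs encoding.

```python
import numpy as np

def predict_depth(ref_depth, baseline, focal):
    # Warp a reference depth map into a neighboring rectified view:
    # disparity d = focal * baseline / depth, shifting each sample by -d.
    h, w = ref_depth.shape
    pred = np.zeros_like(ref_depth)
    for y in range(h):
        for x in range(w):
            z = ref_depth[y, x]
            if z <= 0:
                continue  # hole / unknown depth
            d = int(round(focal * baseline / z))
            if 0 <= x - d < w:
                # on collisions, keep the nearest surface (smallest depth)
                if pred[y, x - d] == 0 or z < pred[y, x - d]:
                    pred[y, x - d] = z
    return pred

ref = np.full((4, 8), 100.0)                       # plane at depth 100
pred = predict_depth(ref, baseline=10, focal=50)   # disparity = 5 pixels
# An encoder would now transmit only (true_depth - pred) for this view.
```

Pixels shifted outside the frame leave holes (zeros) in the prediction, which is one reason the residual, not the prediction alone, is what gets coded.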

Application of Pi-cells in Time-Multiplexed Stereoscopic and Autostereoscopic LCDs

Sergey Shestak and Daesik Kim, Samsung Electronics, Suwon

The Samsung researchers investigated Pi-cell based polarization switches with regard to their applications in both glasses-type and autostereoscopic LCD 3D displays. (Pi-cells are nematic liquid crystal optical modulators capable of electrically controllable birefringence.) They found that the Pi-cell should be divided into a number of individually addressable segments, capable of switching synchronously with line-by-line image updates, in order to reduce time-mismatch crosstalk. They discovered that the displayed stereoscopic image has unequal brightness and crosstalk in the right and left channels. This asymmetry of the stereoscopic image parameters is probably caused by the asymmetry of rise/fall time inherent in Pi-cells. Finally, they proposed an improved driving method capable of making the crosstalk and brightness symmetrical. Further, they demonstrated that a response time acceleration (RTA) technique, developed for the reduction of motion blur, is capable of canceling the dynamic crosstalk caused by the slow response of LCD pixels.

The research revealed that if an LCD monitor is used in a stereoscopic system with conventionally driven shutter glasses, severe crosstalk (mixed left and right images) is seen across almost the entire screen. Even simple systems using passive polarizing glasses demonstrate the same high crosstalk. The crosstalk appears even if the monitor has very short response times.

The researchers further studied the application of Pi-cell based polarization switches in two different stereoscopic systems employing LCD image panels. Both stereoscopic systems are capable of displaying low-crosstalk stereoscopic images at frame rates of 60 and 75 Hz. The main problems of the studied systems are the time-mismatch crosstalk caused by the low number of individually switchable segments in the Pi-cell, the dynamic crosstalk caused by the slow switching of liquid crystal cells, and the asymmetry of crosstalk and image brightness caused by the asymmetry in Pi-cell switching time. The researchers also found that the response time acceleration (RTA) technique, developed for the reduction of motion blur, is capable of canceling the dynamic crosstalk caused by the slow response of LCD pixels. Experimentally, they discovered that the displayed stereoscopic image has unequal brightness and crosstalk in the right and left channels, probably caused by the asymmetry of rise/fall time inherent in Pi-cells. They proposed to compensate for the brightness and crosstalk asymmetry in the left and right images by adjusting the duty cycle of the control signal. They further concluded that it is not adequate to simply extend the supported vertical frequency of LCD monitors to 100-120 Hz: to provide low-crosstalk, flicker-free stereo, the overdrive levels should also be optimized to cancel crosstalk at the higher operational frequency.

The images on the left show a severe crosstalk problem inherent in autostereoscopic displays. After compensating for the crosstalk using RTA, the images on the right are markedly improved. The images also show a noticeable difference in brightness between the left and right images, probably due to the asymmetry of switching a Pi-cell.

Switchable lenticular based 2D/3D displays

Dick K.G. de Boer, Martin G.H. Hiddink, Maarten Sluijter, Oscar H. Willemsen and Siebe T. de Zwart, Philips Research Europe, Eindhoven

The use of an LCD equipped with lenticular lenses is an attractive route to achieving an autostereoscopic multi-view 3D display without losing brightness. However, such a display suffers from low spatial resolution, since the pixels are divided over the various views. To overcome this problem, Philips developed switchable displays using LC-filled switchable lenticulars. In this way it is possible to have a high-brightness 3D display capable of showing the native 2D resolution of the underlying LCD. Moreover, for applications in which it is advantageous to display 3D and 2D on the same screen, they made a prototype having a matrix electrode structure.

A drawback of multi-view systems is that, since a number of pixels are used to generate the views, there will be a loss of resolution that is particularly clear when two-dimensional (2D) content is displayed. The left picture shows the image of a normal display; the picture on the right shows the same image with a lenticular placed in front of the display. Resolution loss is a common drawback of both lenticular and barrier technology.

To compensate for the loss of spatial resolution, there is a desire for a concept that can switch from 3D mode to 2D mode. This can be achieved by using switchable barriers, or birefringent lenticulars that are either switchable or polarization activated. The paper discusses the principles of lenticular-based 3D displays, as well as problems related to their usage. Philips has taken the approach of slanting the lenses at a small angle to the vertical axis of the display, to distribute the resolution loss over two directions. The resulting resolution loss in each direction is then equal to a factor of the square root of the number of views.
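The slanted-lens trade-off is easy to quantify. In the sketch below the panel resolution and view count are made-up examples, not Philips figures:

```python
import math

n_views = 9            # hypothetical number of views
native = (1024, 768)   # hypothetical native 2D panel resolution

# Slanting spreads the total factor-of-n_views loss over both axes,
# so each axis loses only sqrt(n_views):
loss_per_axis = math.sqrt(n_views)            # 3.0 for 9 views
res_3d = (round(native[0] / loss_per_axis),
          round(native[1] / loss_per_axis))
print(res_3d)   # roughly a third of the native resolution in each direction
```

With vertical (unslanted) lenses, the same nine views would instead cost the full factor of nine horizontally, which is why the slanted layout looks so much more balanced.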

Multi-view autostereoscopic display of 36 views using an ultra-high resolution LCD

Byungjoo Lee, Hyungki Hong, Juun Park, HyungJu Park, Hyunho Shin, InJae Jung, LG.Philips LCD, Anyang

LG.Philips reported on their development of an autostereoscopic multi-view display with 36 views using a 15.1-inch ultra-high resolution LCD. The resolution of the LCD used for the experiment is QUXGA (3200x2400). The RGB subpixels are aligned as vertical lines, and the size of each subpixel is 0.032 mm by 0.096 mm. Parallax barriers are slanted at an angle of tan⁻¹(1/6) ≈ 9.46 degrees and placed in front of the LCD panel to generate viewing zones. The barrier pattern repeats approximately every six pixels, so the number of pixels decreases by a factor of six along both the horizontal and the vertical directions. The nominal 3D resolution becomes (3200/6) x (2400/6) = 533 x 400.
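These quoted values check out numerically:

```python
import math

slant = math.degrees(math.atan(1 / 6))   # barrier slant angle in degrees
res_3d = (3200 // 6, 2400 // 6)          # factor-of-6 loss per axis

print(round(slant, 2), res_3d)   # 9.46 (533, 400)
```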

In the slanted-barrier configuration, the angular luminance profiles of the viewing zones overlap each other. In a 2-view 3D system, crosstalk between the left-eye and right-eye zones deteriorates 3D image quality. For multi-view 3D, however, crosstalk between adjacent zones does not always have negative effects. The LG.Philips researchers varied the barrier conditions so that the horizontal angles between zones differed, and compared the resulting 3D image quality. For each barrier condition of different horizontal angles between viewing zones, they found an acceptable range of 3D object depth and camera displacement between zones. The researchers concluded that the smaller the viewing interval, the narrower the 3D viewing width becomes, but the better the perceptual resolution and naturalness of the 3D image. An optimized design is therefore necessary, depending on the target performance.

The image on the left has a less narrow 3D viewing width compared to that of the image on the right, resulting in a better perceptual resolution of the 3D image. But there is a trade-off between 3D viewing width and 3D perceptual resolution, as the image on the right provides a higher level of depth. Image from LG.Philips LCD

Autostereoscopic Display with 60 Ray Directions using LCD with Optimized Color Filter Layout

Takafumi Koike, Michio Oikawa, Kei Utsugi, Miho Kobayashi, and Masami Yamasaki, Hitachi, Ltd., Kawasaki

Researchers at Hitachi reported on their development of a mobile-size integral videography (IV) display that reproduces 60 ray directions. IV is an autostereoscopic video technique based on integral photography (IP). The IV display consists of a 2D display and a microlens array. The maximal spatial frequency (MSF) and the number of rays appear to be the most important factors in producing realistic autostereoscopic images. The lens pitch usually determines the MSF of an IV display, while the lens pitch and the pixel density of the 2D display together determine the number of rays it reproduces; there is thus a trade-off between lens pitch and pixel density. The shape of an elemental image determines the shape of the viewing area. Based on these relationships, Hitachi developed an IV display consisting of a 5-inch 900-ppi LCD and a microlens array. The IV display has 60 ray directions, with 4 vertical rays and a maximum of 18 horizontal rays. They optimized the color filter on the LCD to reproduce 60 rays, resulting in a resolution of 256x192 pixels and a viewing angle of 30 degrees. These parameters are sufficient for mobile game use.


The design method optimizes the parameters of the IV display and consists of three parts: increasing the 2D image frequency, increasing the number of rays without decreasing the image frequency, and optimizing the viewing area. The proposed color filter layout increases the number of rays without decreasing the image frequency. They claimed: "The prototype is suitable for displaying realistic autostereoscopic images."

Development of SVGA resolution 128-directional display

Kengo Kikuta and Yasuhiro Takaki, Tokyo University of Agriculture and Technology, Tokyo

Researchers from the Tokyo University of Agriculture and Technology reported on their development of a 128-directional display with 800x600 pixels, called a high-density directional (HDD) display. They had previously constructed 64-directional, 72-directional, and 128-directional displays in order to explore natural 3D display conditions that solve the visual fatigue caused by the accommodation-vergence conflict, and learned that the spatial resolution was too low for comfortable viewing. The newly developed display consists of 128 small projectors, each at 800x600 pixels and each with a separate LCoS device; the field-sequential technique is used to display color images. All 128 projectors are aligned in a modified 2D arrangement; i.e., the projectors are aligned two-dimensionally with their horizontal positions made different from one another. All images are displayed in different horizontal directions with a horizontal angle pitch of 0.28º. The horizontal viewing angle is 35.7º, and the screen size is 12.8 inches. The display is controlled by a PC cluster consisting of 16 PCs. In order to correct image distortion caused by the aberration of the imaging systems, the images displayed on the LCoS devices are pre-distorted by reference to correction tables.

Photographs of 3D images generated by the 128-directional display, captured from different horizontal viewpoints. Image from Tokyo University of Agriculture and Technology

Layout of color filters and lenses for the prototype IV display. Image from Hitachi

The researchers noted that the image intensity was low, because the light power of the illumination LED was not sufficiently high, and non-uniform intensity was observed in the 3D images. The 3D image intensity will be increased by employing higher-power LEDs, and crosstalk will be reduced by adjusting the width of the apertures in the projector units. Interactive 3D image manipulation programs had been developed for the previous HDD display systems. The research was supported by the "Strategic Information and Communications R&D Promotion Program" from the Ministry of Internal Affairs and Communications, Japan.
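The pre-distortion step can be sketched as a per-pixel table lookup. This is an illustration only; the real correction tables would come from measuring each projector's aberration, and the identity table below is a placeholder:

```python
import numpy as np

def predistort(image, map_x, map_y):
    # Resample the image through a correction table: output pixel (y, x)
    # takes its value from source position (map_y[y, x], map_x[y, x]),
    # here with nearest-neighbor rounding for simplicity.
    h, w = image.shape[:2]
    xs = np.clip(np.rint(map_x).astype(int), 0, w - 1)
    ys = np.clip(np.rint(map_y).astype(int), 0, h - 1)
    return image[ys, xs]

img = np.arange(12, dtype=np.uint8).reshape(3, 4)
mx, my = np.meshgrid(np.arange(4), np.arange(3))   # identity table: no warp
warped = predistort(img, mx, my)                   # unchanged for identity
```

In practice each of the 128 projectors would get its own (map_x, map_y) pair, inverted from its measured lens distortion, so that the projected image lands undistorted on the screen.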



Stereoscopic Displays and Applications XIX

Sponsored by IS&T

Collaborate with industry leaders, researchers and developers in these fields:

Autostereoscopic Displays
Stereoscopic Cinema
3D TV and Video
Applications of Stereoscopy
Volumetric Displays
Stereoscopic Imaging
Integral 3D Imaging
2D to 3D Conversion
Human Factors
Stereoscopic Image Quality

and much more, at the principal venue in the world to see stereoscopic displays!

Technical Presentations • Keynote Address • Discussion Forum • Demonstration Session • 3D Theater • Poster Session • Educational Short Course

Conference Chairs:
Andrew J. Woods, Curtin Univ. of Technology (Australia)
Nicolas S. Holliman, Univ. of Durham (United Kingdom)
John O. Merritt, The Merritt Group

Stereoscopic Displays and Applications Conference and Demonstration: 28–30 January 2008
Stereoscopic Displays and Applications Short Course: 27 January 2008
See the Advance Program in November 2007.

IS&T/SPIE 20th Annual Symposium
27–31 January 2008
San Jose Marriott and San Jose Convention Center
San Jose, California USA
electronicimaging.org
www.stereoscopic.org


Stereoscopic Displays and Applications XIX
28–30 January 2008
San Jose McEnery Convention Center, San Jose, California, USA

2008 Highlights

Keynote Address
Stereoscopic and Volumetric 3-D Displays Based on DLP® Technology
Dr. Larry Hornbeck, Texas Instruments

Texas Instruments' DLP® technology enables both stereoscopic and volumetric 3-D imaging for a variety of markets, including entertainment, medical imaging and scientific visualization. For the first time in history, stereoscopic 3-D entertainment is commercially viable and being implemented on a large scale. DLP Cinema® projectors, equipped with enhanced stereoscopic functions, support a variety of 3-D digital cinema implementations. Today, approximately 20 percent of the more than 5,000 DLP Cinema systems currently installed take advantage of this 3-D functionality. In the consumer HDTV market, DLP technology now enables 3-D display modes in DLP HDTVs, with more than 16 models entering the market in 2007. Innovators in the display industry are using DLP technology to advance displays from 2-D image planes to 3-D volumetric space. Interactive, volumetric DLP displays provide the real-time 3-D information needed to perform complicated tasks, such as targeting cancer tumors in medical radiation therapy. This informative talk is designed to further the understanding of the role of DLP technology in the 3-D world. Topics include an introduction to DLP technology; the status of DLP technology in the 3-D home entertainment and theatrical markets; the primary attributes of DLP technology that uniquely enable single-projector solutions for stereoscopic 3-D entertainment and volumetric imaging applications; how systems designers are leveraging these attributes to optimize for key application-specific requirements; and some thoughts on the future of stereoscopic 3-D entertainment.

Technical Presentations
Hear presentations from Sony Pictures Imageworks, REAL D, Disney, In-Three, SeeReal, NEC, JVC, Actuality Systems, Hitachi, Philips Research, Nokia, Toshiba, Namco Bandai Games, and many more.

3D Theatre
A two-hour showcase of stereoscopic video and stereoscopic cinema work from around the world, shown on the conference's polarized stereoscopic projection systems.

Demonstration Session
See with your own two eyes a wide collection of different stereoscopic display systems. There were over 30 stereoscopic displays on show at SD&A 2007; imagine how long it would take to see that many stereoscopic displays if it weren't for this one session!

Discussion Forum
Hear industry leaders discuss a topic of interest to the whole stereoscopic imaging community.

All that and more at Stereoscopic Displays and Applications 2008.



Interview with Greg Truman from ForthDD<br />

Greg Truman is managing direc<strong>to</strong>r <strong>of</strong> Forth Dimension Displays. He has served in that<br />

position since <strong>the</strong> formation <strong>of</strong> CRLO Displays Ltd. in September 2004, and <strong>of</strong> its<br />

predecessor, CRL Op<strong>to</strong>, and led <strong>the</strong> successful fund raising that formed Forth Dimension<br />

Displays. He has also participated in <strong>the</strong> formation <strong>of</strong> new displays companies Opsys and<br />

AccuScene. Prior roles have included corporate development manager <strong>of</strong> Scipher plc, where<br />

he was part <strong>of</strong> <strong>the</strong> core team working on VC fund-raising (£5 million) and, subsequently, <strong>the</strong><br />

IPO <strong>of</strong> <strong>the</strong> Company (raising £30 million) in February 2000. Earlier, Greg Truman held roles<br />

in sales, mark<strong>et</strong>ing, R&D project management and integrated circuit design within Thorn<br />

EMI, GEC and in a joint venture in Malaysia. Greg Truman has a BSc in Computer Science<br />

from <strong>the</strong> University <strong>of</strong> Hertfordshire.<br />

Please give us some background information about Forth Dimension Displays. Forth Dimension Displays develops, manufactures and supplies the world's most advanced microdisplays using a proprietary, fast-switching liquid crystal technology. The company, previously named CRLO Displays Ltd, was formed in September 2004, funded by an "A" series round from Amadeus Capital Partners and Doughty Hanson Technology Ventures. The company is located in Dalgety Bay, Scotland, across the River Forth from Edinburgh, with offices in California. In 2006, 82% of ForthDD's rapidly growing revenues were from products shipped to international (non-UK) customers, mostly to the US, Germany, and Japan. ForthDD's proprietary, high-speed liquid crystal display and driver technology has major advantages in performance and cost. A portfolio of more than seventy patents protects ForthDD's Time Domain Imaging (TDI) technology.

What advantages do your ferroelectric devices have over competitive devices? The biggest advantage is that the technology is all digital. It processes images in the time domain (TDI) on a single chip, without RGB subpixels, separate RGB beams and optics, and without tilting mirrors. This combination allows both amplitude- and phase-modulated imaging. It provides high native resolution and full 24-bit color for showing high-speed motion. The very fast switching characteristics of the ferroelectric LCD material (100 times faster than nematic LC) offer benefits in a number of applications. The most relevant of these to Forth Dimension Displays is the ability to produce high-performance, color-sequential displays, where it has major advantages in performance and cost. The technology is well matched to the new LED and laser diode light sources. In addition, there are cost advantages: the single chip has no moving parts, so it is built using standard CMOS wafer processes. The absence of separate RGB light paths enables customers to use simpler, lower-cost optics in their system integration.

On the left is a cross-section of a liquid crystal-based microdisplay in operation. On the right is one of ForthDD's microdisplay solutions. The company is focused on producing high-performance displays for near-to-eye applications such as Head Mounted Displays (HMDs), which are often used to simulate scenarios that may be too dangerous or expensive to replicate in the real world. ForthDD is the world's leading supplier of microdisplays into high-end immersive training and simulation HMDs.

http://www.veritasetvisus.com 69



You recently made some sizable staff reductions as a result of a strategic decision to shift the focus of your business. Tell us more. We decided that the prospects of success in the rear-projection TV market were being determined more by the price reductions in LCD TVs than by the ability of Forth Dimension Displays to meet the product specifications. Price decreases in LCD TVs have been far greater than any analyst forecast, and this made it very difficult to compete with a "high performance, value" RPTV product proposition. Forth Dimension Displays already had an established reputation as the leading supplier of premium, high native resolution microdisplays in training and simulation systems for military and aerospace customers. The company's business is expanding with products to customers in areas such as:

• Confocal microscopy and image injection for medical diagnostic and surgical systems
• Digital printing and imaging systems
• High-resolution industrial metrology and process systems
• Advanced 3D and holographic imaging systems

So the decision was made to drop the RPTV market and focus on those markets where the prospects were better.

Given your decision to withdraw from the rear projection TV market, can you share your thoughts about the future of RPTVs? A quick review of the news and forecasts from the RPTV market since we withdrew shows that the pressure from LCD TVs has continued to drive forecasts down and cause problems for those companies continuing to focus on that market. It is going to be very difficult for RPTVs to compete in anything other than the largest sizes (55-inch+) and emergent areas (e.g. 3D TV). Without some radical breakthrough, there seems little future for RPTV in the mainstream 36-42-inch diagonal TV market.

Please share your opinions about the new class of "pico-projector" products. The pico-projection business has the prospect of being a large market in terms of unit volumes; the challenge will be achieving profitable manufacture of microdisplays/microdisplay chipsets at the low prices they will be sold at.

So you're now focusing all of your efforts on high-resolution near-to-eye devices. How big do you see this market? It is very difficult to know, as there is little good market data and it depends largely on whether you perceive that high-resolution near-to-eye (NTE) devices will ever penetrate the consumer market in high volumes.

You are a fabless company, but still have semiconductor integration capabilities. Please tell us how your supply chain works. Actually, we are not really "fabless" but "partially fabless"; we receive silicon wafers manufactured on our behalf by a silicon foundry (the fabless bit) but do all subsequent processing (coating, laminate assembly, cell filling, mounting, etc.) within our own Dalgety Bay manufacturing facility. This gives us a lot more flexibility and control versus trying to use a totally fabless approach and is one of our core strengths.

Although ForthDD does not produce its own silicon wafers, its facility in Dalgety Bay, Scotland does all the processing (coating, laminate assembly, cell filling, mounting, etc.), providing advantages related to quality and scheduling.




What is your current production capacity? Currently around 20,000 microdisplays per annum, but we can increase capacity in Dalgety Bay to over 100,000 per annum should the market demand be there.

In terms of improving performance, is there one area in which you are focusing your development efforts? The technology already performs extremely well in our key applications, so we are focused on making small improvements across the board (while trying not to introduce negative side effects) and reducing cost of ownership to allow our customers to expand their markets.

Your current solutions are at 1280x1024 pixels. Do you see a need to move to higher resolutions? Yes, we expect to move from the current 1.3M-pixel displays to 2M pixels and beyond.

What are the pitfalls in moving to higher resolutions? Is it more than just a larger die size? The key challenges include the larger die size (or reduced pixel size) and the high data rates required. A high refresh rate (120Hz), 2M-pixel display requires around 10 Gbits/second to be delivered to the display.
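As a rough sanity check on that figure (an illustrative calculation, not from the interview), the raw pixel data rate for a 2M-pixel, 24-bit, 120Hz display can be computed directly; blanking intervals and protocol overhead push the required link rate above the raw figure, toward the quoted ~10 Gbit/s:

```python
# Illustrative back-of-the-envelope calculation (not from the interview):
# raw video bandwidth for a 2-megapixel, 24-bit, 120 Hz microdisplay.
pixels = 2_000_000       # ~2M pixels (beyond the current 1280x1024 = 1.3M)
bits_per_pixel = 24      # full 24-bit color
refresh_hz = 120         # high refresh rate quoted in the interview

raw_gbps = pixels * bits_per_pixel * refresh_hz / 1e9
print(f"raw pixel data rate: {raw_gbps:.2f} Gbit/s")  # 5.76 Gbit/s

# A typical link budget adds blanking/protocol overhead (assumed ~25% here,
# purely for illustration), moving the interface rate toward ~10 Gbit/s.
overhead = 1.25
print(f"with assumed overhead: {raw_gbps * overhead:.2f} Gbit/s")
```

The exact overhead depends on the interface; the point is simply that multi-gigabit links are unavoidable at this resolution and refresh rate.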

What are the most promising applications for high-resolution near-to-eye devices? Forth Dimension Displays is the clear global market leader supplying high-resolution microdisplays for near-to-eye (NTE) devices in the training and simulation market and, right now, this is the best market for us.

Do you see 3D as a big opportunity for Forth Dimension? It already is. We supply a lot of our systems for use in binocular, stereoscopic head mounted displays.

Tell us one of your favorite customer satisfaction stories. I would prefer not to put words in our customers' mouths, and suggest you contact Marc Foglia of NVis. We contacted Mr. Foglia, who provided these insights about ForthDD:

"ForthDD has been our supplier for microdisplays since our company was founded in 2002, enabling NVIS to build an entire product line of high-resolution head-mounted and hand-held displays. While most microdisplay suppliers turn away low-volume manufacturers, ForthDD (then CRL Opto) welcomed the opportunity to work with us. Over time, great suppliers start to feel more like partners, and ForthDD always treated NVIS as a partner. They made it clear to us that our success was an important part of their business. This was evident in their responsiveness to our requests for technical information, documentation, and at times, demanding delivery schedules. As a small manufacturer, our ability to support our customers is often tied to our suppliers' support for us, and in this capacity our relationship with ForthDD has been vital to our success. We see a bright future together with ForthDD as both our businesses grow." http://www.nvisinc.com

The NVis nVisor ST uses ForthDD's high-resolution ferroelectric liquid crystal on silicon. The illumination scheme includes an RGB LED mounted on the top face of a polarizing beam splitter prism. The microdisplay is illuminated by the light reflected off the polarizing beam splitter surface. Color is generated by the LED using an advanced color-sequential algorithm that rapidly switches between red, green, and blue light, which is synchronized with the pixels on the LCoS device to generate a 24-bit color image.
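The color-sequential scheme described above can be sketched in simplified form; this illustrates the general field-sequential color technique only, and the function names and timing values are assumptions, not ForthDD's actual design:

```python
# Simplified sketch of field-sequential color on a reflective microdisplay.
# All names and timing values are illustrative assumptions, not ForthDD's design.

FIELD_ORDER = ["red", "green", "blue"]

def show_frame(rgb_frame, frame_rate_hz=60):
    """Split one 24-bit frame into three sequential single-color fields.

    rgb_frame maps "red"/"green"/"blue" to that color's 8-bit image plane.
    In hardware, each field loads its plane onto the fast-switching panel
    and lights only the matching LED; the eye fuses the fields into color.
    """
    field_time_s = 1.0 / (frame_rate_hz * len(FIELD_ORDER))
    return [(color, rgb_frame[color], field_time_s) for color in FIELD_ORDER]

frame = {"red": "R-plane", "green": "G-plane", "blue": "B-plane"}
for color, plane, t in show_frame(frame):
    print(f"LED={color:5s} plane={plane} duration={t * 1000:.2f} ms")
```

The scheme works precisely because the panel switches far faster than nematic LC: each color field must be loaded and displayed within a fraction of the frame time.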

Given your earlier financial troubles, when do you expect to reach profitability? We have not had any financial issues since the formation of CRLO Displays (later Forth Dimension Displays) in September 2004. We have always had positive cash in the bank and have very supportive investors/owners. We expect to achieve break-even in late 2007 and move into sustained profitability in 2008.

Please describe what you think Forth Dimension will look like three years from now. I would expect that we will have grown substantially, be consistently profitable and cash generative, and have a value that justifies our investors' belief and investment in us.




Interview with Ian Underwood from MED

Ian Underwood is CTO and a co-founder of MED as well as co-inventor of its P-OLED microdisplay technology. Prior to 1999 he was at The University of Edinburgh, where he carried out pioneering research and development in the field of liquid crystal microdisplays between 1983 and 1999. He is a Fulbright Fellow (1991), Photonics Spectra Circle of Excellence designer (1994), British Telecom Fellow (1997), Ben Sturgeon Award winner (1999), Ernst & Young Entrepreneur of the Year (2003), Fellow of the Royal Society of Edinburgh (2004), and Gannochy Medal winner (2004). He is recognized worldwide as an authority on microdisplay technology, systems and applications. In 2005, Ian was named Professor of Electronic Displays at The University of Edinburgh. In addition to his full-time post at MED, he sits on the Council of the Scottish Optoelectronics Association and the Steering Committee of ADRIA (Europe's Network in Advanced Displays). He is co-author of a recently released book entitled Introduction to Microdisplays.

Please give us some background information about MED. MicroEmissive Displays (MED) is a leader in polymer organic light emitting diode (P-OLED) microdisplay technology. The company was founded in 1999 and has developed a unique emissive microdisplay technology by using a P-OLED layer on a CMOS substrate. In late 2004, MED floated on the Alternative Investment Market of the London Stock Exchange (AIM) following a fourth successful funding round, which raised £15.7M. Funding has been used for proof of principle, technology development and establishing pre-production facilities in Edinburgh, culminating in the first product release and commercial shipments of MED's "eyescreen" microdisplays in December 2005. MED has been awarded ISO 9001:2000 registration for the research, design, development and marketing of digital microdisplay solutions and is working towards full accreditation in 2007. MED is headquartered at the Scottish Microelectronics Centre, Edinburgh, Scotland, and its manufacturing site is in Dresden, Germany. The company employs 62 people and also has sales representatives and applications support located in Asia, the USA and Europe.

Do you regard yourselves primarily as a display company or as a semiconductor company that happens to be making displays? MED is a displays company whose displays happen to use a CMOS active matrix backplane. So, like all microdisplay companies, our manufacturing and cost base is very semiconductor-like.

Please provide an overview about your technology. MED's eyescreen products are the world's only polymer organic light emitting diode (P-OLED) microdisplays. The full-color eyescreen combines superb TV-quality moving video images that are free from flicker with ultra-low power consumption, enabling greatly extended battery life for the consumer. This enhancement in battery usage time made possible by the eyescreen will play a vital role in the widespread adoption of portable headsets for personal TV and video viewing in the consumer marketplace. The design of the eyescreen, with its integrated driver ICs and its digital interface, offers product design engineers a robust design-in solution for smaller, lighter-weight, stylish products of the future, all for a size comparable with the pupil of the human eye.

You are currently very close to offering a complete "display on a chip" in a CMOS process. What remains to achieve this goal, and what advantages are derived from offering a complete solution? Display-System-on-Chip (DSoC) means that the microdisplay component is the only high-value or active component required. MED's eyescreen microdisplays offer

MED builds its display devices on a CMOS active matrix device. This photo of a wafer shows how more than a hundred devices can be manufactured on a single wafer.




emissive operation, which is equivalent to having an "integrated" backlight. The use of a CMOS backplane allows the functionality of the display driver IC to be integrated. The display has a high level of integrated configurability, such as brightness control, image orientation, frame rate, switching between digital data formats, down-scaling of the incoming data stream, etc.

More generally, what are the primary advantages of OLED microdisplays as compared to LC microdisplays? The primary advantages are:

• Lower power, equating to longer battery life
• Higher contrast, equating to a more vivid image
• Higher pixel fill factor, equating to higher perceived image quality

In 2004, your display was listed in the Guinness Book of World Records as the world's smallest color TV screen. Is this still true? Do you have plans to make even smaller displays? MED's original display was the ME3201 (320x240 monochrome). The backplane of the ME3201 was used to create a color microdisplay, the ME1602 (160x120 color), by applying color filters over a 2x2 array of monochrome pixels to create a single color pixel. The ME1602 made it into the Guinness Book of World Records in 2004 and 2005. But Guinness has more records than they are able to put into the book each year, so MED has not appeared in the book since 2005.
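The 2x2 color-filter mapping described above halves the resolution in each dimension; a small sketch (illustrative only, not MED's actual driver logic) shows how the monochrome backplane's pixel grid maps to color pixels:

```python
# Illustrative sketch: a 2x2 color-filter mosaic over a monochrome backplane
# turns a 320x240 mono pixel grid into a 160x120 full-color grid.
# The mapping below is a generic mosaic illustration, not MED's design.

MONO_W, MONO_H = 320, 240            # ME3201 monochrome backplane

# Each color pixel consumes a 2x2 block of monochrome subpixels.
COLOR_W, COLOR_H = MONO_W // 2, MONO_H // 2   # ME1602 color resolution

def color_pixel_to_mono_block(cx, cy):
    """Return the four mono-pixel coordinates behind color pixel (cx, cy)."""
    x0, y0 = 2 * cx, 2 * cy
    return [(x0, y0), (x0 + 1, y0), (x0, y0 + 1), (x0 + 1, y0 + 1)]

print(COLOR_W, COLOR_H)               # 160 120
print(color_pixel_to_mono_block(0, 0))
```

This is the standard trade-off of spatial color filtering: full color without a faster panel, at the cost of a quarter of the pixel count.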

As a CDT licensee, does CDT's polymer OLED development work in large-area displays translate without problem to your microdisplays? CDT is a developer and licensor of generic IP in polymer OLED technology. MED and CDT have worked very closely together to ensure that MED achieves the best possible implementation of that IP in its field of application.

OLEDs generally face problems related to barrier layers to protect from moisture and oxygen. Do you face these same problems, or is it actually much simpler to adequately protect a microdisplay versus a larger display? All OLEDs must be encapsulated in order to ensure reliable performance by protecting the OLED layer from the detrimental effects of atmospheric oxygen and moisture. MED has developed an encapsulation strategy that is appropriate for, and compatible with, P-OLED microdisplays.

In terms of definition, you currently show off 320x240 pixels. Since this is less than even standard TV, are you under any pressure to increase the resolution? 320xRGBx240 (QVGA color) is a typical definition for low-cost, low-power consumer video glasses. Viewing TV or video content from a personal DV player or iPod, users are normally satisfied with that. Even those who would prefer, say, VGA may not readily accept the additional cost, bulk and power consumption.

What size do you typically achieve with regard to the "virtual" image? Does magnifying to larger sizes diminish the image quality? In other words, is there some "sweet spot" related to device size and virtual image size? The virtual image is best described in terms of "Field of View" (FoV), the angle subtended at the eye by the diagonal of the image. The norm is to make the FoV as large as possible without the individual pixels becoming resolvable. (If individual pixels can be resolved, this reduces the perceived quality of the image.) The appropriate FoV depends on a number of factors relating to the display, the system and the application; these include display definition and pixel fill factor. For the eyescreen ME3204 the appropriate FoV is typically around 20 degrees.
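To make the trade-off concrete, a quick calculation (illustrative, not from the interview) relates diagonal FoV and pixel count to the angle each pixel subtends at the eye; magnification spreads the same pixels over a larger angle, which is why FoV cannot grow without limit. Whether a given figure is acceptable depends on fill factor, optics and content, as the answer notes:

```python
import math

# Illustrative calculation: angular size of one pixel when a QVGA (320x240)
# microdisplay is magnified to a given diagonal field of view.
def arcmin_per_pixel(fov_deg, width_px, height_px):
    diag_px = math.hypot(width_px, height_px)   # 320x240 -> 400 px diagonal
    return fov_deg * 60.0 / diag_px             # arcminutes per pixel

# At a 20-degree diagonal FoV, each QVGA pixel subtends 1200/400 = 3 arcmin.
print(f"{arcmin_per_pixel(20, 320, 240):.1f} arcmin/pixel")
```

Doubling the FoV doubles the angular pixel size, so higher magnification quickly demands higher display definition.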

MED's tiny eyescreen microdisplay, with a 6 mm (0.24-inch) diagonal pixel array, can be combined with magnifying optics to produce a large virtual image that appears to the eye to be equivalent in dimensions to the picture on a TV screen or computer display.




Your devices are entirely digital, with no analog interface. Tell us about what this means in terms of cost/performance. The future is digital. MED has implemented two interface possibilities into the eyescreen ME3204: CCIR 656 and serial RGB. An all-digital signal path maintains flexibility, reduces power and maintains the best possible image quality. In the case of an application where the data source is analog, e.g. composite video, a low-cost/low-power video pixel decoder can be used to convert incoming data to CCIR 656.

Your eyescreen devices have recently been showcased in systems that utilize Qualcomm's MDDI protocol. Why is this important, and do other mobile standards (such as the MIPI standard) provide similar results? MDDI (Mobile Display Digital Interface) allows an all-digital signal path and has provision for driving an external display. This realizes all of the benefits of eyescreen from a cell phone. The MDDI/eyescreen demo runs from the cell phone battery; it does not require an external battery box.

Tell us about the markets you intend to address. MED is aiming specifically at consumer markets with existing or potential high volume. Our first target is video glasses, and we also plan to target electronic viewfinders for applications including digital cameras, video cameras and night vision systems.

Mobile TV is still something of a question mark. Please give us your thoughts about this market. 3G took off in Korea and Japan, migrating to Europe, then the USA and onwards. Similarly, mobile TV is now taking off in Korea.

If mobile TV takes off, what will entice users to consider near-to-eye devices that incorporate MED microdisplays? Considerations such as:

• Enhanced viewing experience in any environment (e.g. bright sunlight or fluorescent light); the NTE device can be configured to block ambient light
• Larger image than that available from a cell phone, iPod or other pocketable device
• Privacy (no one can look over your shoulder)
• Consideration for others (what you are watching does not disturb the person sitting next to you in a plane or train)

Tell us about your work related to developing 3D video glasses. MED worked with the EPICentre at the University of Abertay and Thales Optics (now Qioptiq) to develop a stereoscopic 3D headset using eyescreen microdisplays. That project, called EZ-Display, was sponsored by the UK Department of Trade and Industry and finished in 2006.

One of the historical issues associated with near-to-eye devices is related to nausea. Add 3D to the equation, and it seems like you'll need to add "sanitary bags" to your bill of materials. What sorts of things can you do to minimize these inner-ear problems? MED is not a developer and manufacturer of video glasses. Optimization of the end product to provide a comfortable viewing experience rests with the system manufacturer.

If someone already wears glasses, do they need to wear prescription video glasses? MED is not a developer and manufacturer of video glasses. Some video glasses can be worn over prescription spectacles and some cannot; some have focus adjustment and some do not; some could incorporate custom prescription lenses and some cannot.

Your new manufacturing center in Dresden, Germany, is a big step forward. When do you expect to have commercial products ready to ship from the facility? On 24th July 2007, MED announced that it had made its first production shipment from its Dresden manufacturing facility on schedule.

Why did you choose <strong>to</strong> build in Dresden ra<strong>the</strong>r than in Edinburgh or elsewhere? Dresden was an ideal<br />

selection because it has <strong>the</strong> Fraunh<strong>of</strong>er IPMS at its heart and is at <strong>the</strong> very forefront <strong>of</strong> electronic innovation. <strong>We</strong><br />

<strong>are</strong> very proud <strong>to</strong> be part <strong>of</strong> Silicon Saxony and <strong>are</strong> looking forward <strong>to</strong> sharing our success in such a vibrant<br />

technological and cultural center.<br />

http://www.veritasetvisus.com 74


Veritas et Visus 3rd Dimension September 2007

Twenty Interviews

Volume 2 just released!

Interviews from Veritas et Visus newsletters – Volume 2

+ 21st Century 3D, Jason Goodman, Founder and CEO
+ Add-Vision, Matt Wilkinson, President and CEO
+ Alienware, Darek Kaminski, Product Manager
+ CDT, David Fyfe, Founder and CTO
+ DisplayMasters, David Rodley, Academic Coordinator
+ HDMI Licensing, Les Chard, President
+ JazzMutant, Guillaume Largillier, CEO
+ Lumicure, Ifor Samuel, Founder and CTO
+ Luxtera, Eileen Robarge, Director of Marketing
+ QFT, Merv Rose, Founder and CTO
+ RPO, Ian Maxwell, Founder and Executive Director
+ SMART Technologies, David Martin, Executive Chairman
+ Sony, Kevin Kuroiwa, Product Planning Manager
+ STRIKE Technologies, David Tulbert, Founder
+ TelAztec, Jim Nole, Vice President – Business Development
+ TYZX, Ron Buck, President and CEO
+ UniPixel Display, Reed Killion, President
+ xRez, Greg Downing, Co-founder
+ Zebra Imaging, Mark Lucente, Program Manager
+ Zoomify, David Urbanic, Founder, President, and CEO

78 pages, only $12.99
http://www.veritasetvisus.com


Shovelling data by Adrian Travis

Educated at Cambridge University, Adrian Travis completed his BA in Engineering in 1984, followed by a PhD in fiber optic wave guides in 1987. With his extensive optics experience, he is now an internationally recognized authority on flat panel displays. He is a Fellow of Clare College and lectures at Cambridge University Engineering Department. He is the inventor of the "Wedge" and is now working to commercialize the product through the Cambridge spinout company Cambridge Flat Projection Displays, Ltd. (CamFPD). Adrian is also a co-founder and the chief scientist for Deep Light, a company that is planning to commercialize a high-resolution 3D display system.

If we are serious about getting true 3D, then we need displays which modulate light in both azimuth and elevation, and this implies enormous data rates. Suppose we want 100 views in azimuth and elevation; then each pixel has to be, in effect, a miniature video projector with 100 by 100 pixels, so our data rates will be 10,000 times those of a conventional flat panel display.

This produces a dilemma: users want displays to be big, but big displays struggle to handle such high data rates because RC time constants tend to keep row addressing times at between two and four microseconds. Silicon microdisplays such as TI's DMD and the ferroelectric LCOS devices from Displaytech and Forth Technologies easily manage binary frame rates in the range of 2-5 kHz, but then they are too small.

The obvious solution is to use projection, but instead of one microdisplay we will need 100 microdisplays running at 5 kHz if we are to increase data rates by a factor of 10,000, and that sets out the starting conditions for the 3D designer – how now are we to connect all these devices?

RISC (Reduced Instruction Set) processors brought important advances in computing when processor architects realised that they did better by making their devices simple in order to increase clock speed, and perhaps the same might help with microdisplays. After all, both micro-mirrors and ferroelectric liquid crystals can switch at 40 kHz or so when addressed at several volts, and the frame rate tends instead to be limited by the number of lines times the line-address time. Suppose that we make a "Reduced Microdisplay" by lowering the number of lines from 500 to 100; then the frame rate might get up towards 30 kHz, which equals the line rate of a conventional 2D display with 500 lines. The significance of this is that we could display 3D images line by line, and this might greatly simplify the optics.
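The arithmetic in the last few paragraphs can be checked with a short sketch. The 0.4 µs line-address time and the 60 Hz refresh below are my own assumed round numbers, chosen to match the 2-5 kHz frame rates quoted above for 500-line silicon microdisplays; the function name is likewise mine.

```python
# Back-of-envelope sketch of the data-rate and frame-rate arithmetic above.
# Assumption: frame rate is limited purely by (number of lines) x (line-address time).

def frame_rate_hz(num_lines: int, line_address_time_s: float) -> float:
    """Frame rate when line addressing is the bottleneck."""
    return 1.0 / (num_lines * line_address_time_s)

# 100 views in azimuth x 100 in elevation -> 10,000x the data rate
data_rate_factor = 100 * 100
print(f"data rate factor: {data_rate_factor}")  # 10000

line_time = 0.4e-6  # assumed 0.4 us per line

full = frame_rate_hz(500, line_time)     # conventional 500-line microdisplay
reduced = frame_rate_hz(100, line_time)  # "Reduced Microdisplay" with 100 lines
print(f"500 lines: {full / 1e3:.0f} kHz")     # 5 kHz
print(f"100 lines: {reduced / 1e3:.0f} kHz")  # 25 kHz, heading towards 30 kHz

# For comparison, the line rate of a conventional 2D display:
# 500 lines refreshed at 60 Hz
print(f"2D line rate: {500 * 60 / 1e3:.0f} kHz")  # 30 kHz
```

Shortening the device from 500 lines to 100 thus brings its frame rate up to roughly the line rate of an ordinary 2D panel, which is what makes line-by-line 3D plausible.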

Our ideal 3D display should have a wide field of view, and 100 views should suffice to span 100°, but lenses with that angular range tend to be bulky and expensive. A ball lens, for example, can collimate light from any angle because of its spherical symmetry, but a ball lens the size of a television would be unthinkable. However, if we are displaying images line by line, then our optical system need be only one pixel thick, so the ball lens can be replaced by a disc, as shown in the accompanying figure.

Stephen Benton used spinning prisms to synthesize his famous holographic video line by line and is reported to have been eager to get away from moving parts. But line-scanning displays have been demonstrated by several independent teams as a strategy for simplifying the display of 2D images.


A particularly successful concept seems to have been that in which the emission from a line projector was passed into a slab light-guide, then piezo-electrics were used to push gratings into the evanescent field, one line at a time. The gratings eject the light from the light-guide and need only move a few microns to switch on and off, which sounds easy until one looks through a right-angled prism and tries to push an object in and out of the evanescent field at the hypotenuse. Except in dust-free conditions, most solid objects cannot get close enough, while fluids such as water work only too well and are reluctant to release. This brings me to the reason why I wrote this article: a few days ago, I tried a pencil rubber and that works fine, even in my filthy laboratory.


3D camera for medicine and more

Matthew Brennesholtz is a senior analyst at Insight Media. He has worked in the display field since 1978 in manufacturing, engineering, design, development, research, application and marketing positions at GTE/Sylvania, Philips and General Electric. In this time he has worked on direct-view and projection CRTs, oil film light valves and systems, TFT LCD microdisplays and systems, DMD systems, and LCoS imagers and systems. He has concentrated on the optics of the display itself, the display system and display instrumentation. He has also done extensive work on the human factors of displays and the relationship between the broadcast video signal and the image of that signal as displayed. More details on this promising market segment and many other conclusions are included in Insight Media's "3D Technology and Markets: A Study of All Aspects of Electronic 3D Systems, Applications and Markets". This article is reprinted with permission from Insight Media's Display Daily on August 8, 2007.

by Matt Brennesholtz

On August 8, Avi Yaron and Joe Rollero of Visionsense visited Insight Media headquarters in Norwalk, Connecticut to demonstrate their 3D camera technology. This micro-miniature camera technology can be expected to appear in medical equipment, primarily for minimally invasive surgery (MIS), in mid-2008.

3D MIS has been well received by doctors, at least in principle. According to Yaron, the Intuitive Surgical system has been especially well received. Unfortunately this system is very large and very expensive, even by the standards of medical equipment, which has limited its sales and penetration into the minimally invasive surgery market. Currently, minimally invasive surgery represents only about 15% of all surgery. One of the limits on MIS is the difficulty of doing surgery with only 2D images.

The basic Visionsense technology uses a single sensor, which can be a CMOS or CCD imager. Their "Punto" detector, shown in the photo with a camera containing the detector, has a diagonal of 3.3 mm. The sensor has a microlens and a color filter array applied to it in a manner much like an autostereoscopic LCD panel. In an autostereoscopic display, this produces two or more "sweet spots" for the pupils of the eyes to receive different images. In the camera configuration, the system essentially runs backwards, and the two sweet spots become the interpupillary distance for the 3D camera. This single-sensor approach can produce very small cameras. The relatively large physical size of other 3D camera offerings with two sensors for medical applications has limited their use in MIS. Visionsense has five granted patents on the technology plus numerous other patents pending, according to Yaron.

This arrangement provides an interpupillary distance of about 1/2 to 2/3 the imager diagonal. This provides stereo images out to about 20 to 30 times the interpupillary distance, or in the case of the Punto chip, out to an inch or two. While this is enough for many MIS applications, when it is not enough there are two approaches to increasing the stereo range. First, you can use a larger image sensor, and Visionsense is working on a high-resolution sensor 6.8 mm in diameter. If that doesn't provide a large enough interpupillary distance for an application, prisms can be used in the pupil plane to separate the two pupils as necessary.
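The stereo-range arithmetic above can be sketched numerically. The ratios (interpupillary distance of 1/2 to 2/3 the imager diagonal, stereo out to 20-30 times that distance) are from the article; the helper function and the unit conversion are my own.

```python
# Rough sketch of the single-sensor stereo-range arithmetic described above.

def stereo_range_mm(diagonal_mm: float, ipd_ratio: float, range_multiple: float) -> float:
    """Usable stereo depth for a single-sensor 3D camera."""
    ipd = diagonal_mm * ipd_ratio  # effective interpupillary distance on the sensor
    return ipd * range_multiple    # depth over which stereo imaging is effective

# Punto sensor: 3.3 mm diagonal
near = stereo_range_mm(3.3, 0.5, 20)      # conservative end of both ranges
far = stereo_range_mm(3.3, 2 / 3, 30)     # optimistic end of both ranges
print(f"{near:.0f}-{far:.0f} mm ({near / 25.4:.1f}-{far / 25.4:.1f} in)")
# 33-66 mm, i.e. roughly "an inch or two", matching the article
```

The same function shows why the planned 6.8 mm sensor roughly doubles the range: the interpupillary distance scales with the diagonal.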

Visionsense is developing the camera sensor, the camera module including optics and electronics, and support software. They demonstrated for me both pre-recorded 3D video of actual medical procedures and live 3D video coming from a sample camera containing the Punto sensor. Yaron emphasized that Visionsense was display-technology neutral and that the final display for any medical instrument using Visionsense technology would be chosen by the Visionsense customers, not by Visionsense itself. Their technology has been used as an image source with MacNaughton, Planar and Philips 3D displays and could be used with any other 3D display technology as well. They have also used single- and dual-projector installations for demonstrations at medical trade shows, for example.

One interesting software tool Visionsense has developed is called Image Fusion. This tool takes an image from a 3D source such as an MRI or CAT scan and warps it to overlay the live 3D image from the camera. The system shows the fused image on the surgeon's 3D monitor. This would allow the surgeon to see, for example, how far away his tools are from the spinal cord while doing disc surgery, even if the cord is not yet visible in the camera image.

After leaving Insight Media, Yaron and Rollero were heading up to Boston for several scheduled meetings. While Yaron would not disclose any customers or potential customers at this point, he said it was necessary to visit both potential Visionsense customers in the medical equipment business and potential end users in hospitals. Medical equipment manufacturers are unlikely to commit to designing and building a piece of equipment using new technology until the concept and the design have been blessed by doctors. Therefore, it's necessary for Visionsense to visit end users as well as medical equipment manufacturers.

No medical equipment using Visionsense cameras is currently on sale. Yaron expects this to change in mid-2008, however. At that point he expects FDA-approved medical equipment containing Visionsense technology to appear on the market. While Visionsense is currently focused on medical systems, Yaron eventually expects there will be non-medical applications for the technology as well.

>>>>>>>>>>>>>>>>>>>>



as Hollywood rushed to embrace 3D, only to see it abandoned very quickly due to shortcomings in the technology and the resultant image quality.

Today's 3D cinema technology is vastly superior and image quality is quite good. But creating a good 3D movie takes a lot of effort and special skills to add just the right "Dimensionalization" on a scene-by-scene basis. There are skilled people to do this, but there are also more than a dozen studios gearing up to do 3D movies. Is there enough talent and training available to ensure these projects produce high-quality content? A bad 3D implementation of a first-run movie can have a disproportionate impact on the impression it creates with the viewing public about 3D.

Another area of concern is content conversion from 2D to 3D. Games and other computer-generated graphics content with a 3D database behind them could fare well in conversion to stereoscopic 3D. But so far, gaming content developers have merely converted 2D games to 3D games. What is needed are 3D games that take advantage of the ability to hide or reveal objects that may not be visible in a 2D version. This will create a real reason to own a 3D game.

Converting 2D still images to S-3D is not too hard, but converting 2D video is tough. Doing it offline, where professionals can adjust and tweak, is the preferred, and expensive, approach, but there is a huge pull to do it automatically in real time. Most of the real-time demos of 2D-to-3D conversion I have seen have not been very compelling. In fact, some are not good at all. We need to be careful in how fast we roll out these solutions so as not to create bad impressions of 3D that will take years to reverse.

And let's not forget the hardware implementation of 3D display systems. Even at trade shows dedicated to 3D, I have seen demos that are of poor quality or even set up incorrectly. If the people who are trying to sell 3D can't configure it properly or create compelling demos, that's a problem. Case in point: autostereoscopic 3D displays are ones that do not require glasses to see the 3D effect. To do this, the technology requires that you trade off image resolution in order to enable multiple viewing zones across a fairly wide field of view. One clear lesson with such displays is to limit content to low-resolution images such as icons or larger graphic elements.
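The resolution-for-views trade-off described above can be illustrated with a toy calculation. The panel size and view count below are my own example numbers, not figures from any particular demo, and the function name is mine.

```python
# Toy illustration of the autostereoscopic trade-off: the panel's native
# pixels are divided among the viewing zones, so each view sees only a
# fraction of the full resolution.

def per_view_resolution(native_h: int, native_v: int, n_views: int):
    """Approximate per-view resolution when views are multiplexed horizontally."""
    return native_h // n_views, native_v

# Example: a 1920x1080 panel split into 8 horizontal viewing zones
h, v = per_view_resolution(1920, 1080, 8)
print(f"{h}x{v} per view")  # 240x1080 per view
```

With only a few hundred horizontal pixels left per view, it is easy to see why SD video on such a display looks soft, and why icons and large graphic elements fare better.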

At S-3D, I saw one demo that was showing SD-resolution video on an autostereoscopic display. As expected, the video was so compromised it looked like it was out of focus. Also, the viewing zones were so narrow that it was difficult to find and keep the image in full stereo.

Other demos showed large rainbow patterns and similar difficulties in visually acquiring the image. Another common mistake is reversing the left and right images when coupled to the polarization-filtering glasses. This creates a stereo image, but it looks funny and will create eye strain. How can manufacturers ensure this doesn't happen? There are no standards or methods that I know of.

And 3D needs to recreate, as much as possible, the way we see the world. You cannot see stereo pairs when looking at objects beyond 50 feet or so, so don't try to add dimension to these long-distance shots – it looks wrong. And, when moving your head laterally around a stereo display, objects should not maintain the same orientation as you move. That's not how it works in the real world. Finally, making objects jump out at you may work in a theme-park 3D experience, but not if you want to use the 3D display for extended periods.

The bottom line: while we have to solve the technology part, we can't sell the technology to the consumer. It's about the application. Let's stop being obsessed with the technology and focus on making the applications for the technology work. Once it is easy to use and offers a clear benefit over 2D, 3D will be adopted. But let's not be too anxious to roll out 3D systems either. Bad implementations create a poor impression and a backlash that could take years, maybe decades, to reverse.


Tuesday, September 18, 2007

Session 1: 3D Public Displays
8:15 AM – Rob de Vogel, Sr. Director Business Creation, Philips 3D Solutions: Leveraging 3D Digital Signage into 3D Entertainment
8:40 AM – Jeremy Tear, Consultant/Partner, GGT Consultants Ltd.: Does 3D Advertising Increase Brand Retention?
9:05 AM – Keith Fredericks, CTO, Newsight: 3D Digital Signage Solutions
Panel Discussion – Moderator: Chris Yewdall, CEO, DDD

Session 2: Stereoscopic 3D for Gamers
10:10 AM – Neil Schneider, President & CEO, Meant To Be Seen (MTBS): Taking Stereoscopic 3D Gaming to the Next Level
10:35 AM – Tarek El Dokor, CTO, Edge 3 Technologies: New Ways in Game-Human Interface
11:00 AM – Richard Marks, Manager of Special Projects, Sony Computer Entertainment of America: TBA
11:25 AM – Panel Discussion – Moderator: Arthur Berman, Analyst, Insight Media

Session 3: 3D Digital Cinema
1:00 PM – Matthew Brennesholtz, Sr. Analyst, Insight Media: Prospects for 3D Digital Cinema
1:25 PM – Lenny Lipton, CTO, RealD: Next Steps in the 3D Cinema Revolution
1:50 PM – Dave Seigle, President/CEO, InThree, Inc.: Trade-Offs in 2D to 3D Conversion
2:15 PM – John Carey, Vice President of Marketing, Dolby Laboratories: Stereoscopic Technology Options for 3D Digital Cinema
3:00 PM – Aaron Parry, Executive Producer, Paramount Pictures: Challenges to 3D Filmmaking
3:25 PM – Jim Mainard, Head of Production Development, Dreamworks Animation: Authoring in Stereo: Rewriting the Rules of Visual Story Telling
Panel Discussion – Moderator: Chris Chinnock, President, Insight Media

Session 4: Novel 3D Technologies
4:10 PM – Thomas Ruge, Representative of the Americas, Holografika: Light Field Reconstruction Approach to 3D Displaying
4:35 PM – Alex Corbett, Sr. Engineer, Light Blue Optics: Steerable Holographic Backlight for 3D Autostereo Displays

Exhibits Open to Conference Attendees only 5:00-8:00; Networking Session 5:45-8:00


Wednesday, September 19, 2007

Session 5: 3D TV
8:15 AM – Arthur Berman, Analyst, Insight Media: 3D in the Home
8:40 AM – Chris Yewdall, CEO, DDD: 3D TV - Crossing the Chasm
9:05 AM – David Naranjo, Director, Product Development, Mitsubishi Digital Electronics America: Educating Retailers & Consumers on 3D TVs
9:30 AM – Nicholas Routhier, President, Sensio: Joining Forces: Is a 3DTV Consortium Needed?
Panel Discussion – Moderator: John Merritt, CTO, Merritt Group

Session 6: 3D Visualization
10:35 AM – Paul Singh, Albany Medical College: Minimally Invasive Surgery
11:00 AM – Ronald Enstrom, President & CEO, The Colfax Group: Geospatial Analysis
11:25 AM – James Oliver, Director VRAC, Iowa State University: Fully Immersive Ultra-High Resolution Virtual Reality
11:50 AM – Johnny Lawson, Director of Visualization, Louisiana Immersive Tech Enterprise: Bringing Immersive Visualization into the Light: The Challenges and Opportunities Creating the LITE Center
Panel Discussion – Moderator: André Floyd, Marketing Manager, Sony Electronics

Exhibits Open to Public 10:00 to 3:00 PM
Special Screening at Dolby (Buses leave for Dolby at 3:15 and 3:30 PM)

http://www.3dbizex.com

>>>>>>>>>>>>>>>>>>>>



• I recently had <strong>the</strong> opportunity <strong>to</strong> visit a lab where I saw a demonstration <strong>of</strong> 3D LCD displays, 3D LCD<br />

TV and a 3D LCD gaming console. The images were as<strong>to</strong>nishing. The images did appear <strong>to</strong> jump out<br />

in<strong>to</strong> mid-air and it made for <strong>the</strong> most exciting video game realism I had ever seen.<br />

• I know <strong>the</strong> direc<strong>to</strong>r <strong>of</strong> a major medical facility here in Sou<strong>the</strong>rn California who expressed definite<br />

interest in 3D LCD displays for <strong>the</strong>ir medical imaging applications. He said he thought <strong>this</strong> new state-<strong>of</strong><strong>the</strong>-art<br />

technology is important for 3D and 4D ultrasound and o<strong>the</strong>r applications.<br />

• I know computer products distribu<strong>to</strong>rs who <strong>are</strong> looking for <strong>the</strong> most advanced display systems available<br />

and who also represent <strong>the</strong> niche known as “specialty displays”. For <strong>the</strong>m, 3D display technology is a<br />

no-brainer. It is not a matter <strong>of</strong> if, but when, <strong>the</strong>y can launch new 3D products.<br />

• In my travels and research I had <strong>the</strong> pleasure <strong>of</strong> me<strong>et</strong>ing one <strong>of</strong> <strong>the</strong> VPs <strong>of</strong> a major multi-billion dollar<br />

aerospace company. He considers advanced 3D LCD and widescreen panoramic displays <strong>to</strong> be<br />

important for <strong>the</strong>ir government, military and aerospace applications.<br />

• The gaming mark<strong>et</strong> itself has now grown so fast that <strong>the</strong> computer industry is now taking it seriously,<br />

whereas previously it was regarded mostly as just a small mark<strong>et</strong> for youth. It has grown <strong>to</strong> be an<br />

established major mark<strong>et</strong> for consumers <strong>of</strong> all ages. 3D is considered <strong>to</strong> be an essential part <strong>of</strong> <strong>this</strong><br />

growing mark<strong>et</strong>.<br />

3D display technology is the next big thing: 3D is the next logical step in the technology cycle. Imagine the excitement of a truly immersive experience in which you take your family to the theater and watch an action-adventure movie with 3D images along with the great Dolby sound systems we have. Now you feel like you are taking part in the action because the images are more real. Now imagine having one of these systems in your home – as part of your home theater or home PC system. Soon you will be able to do just that.

Advice to OEMs planning on 3D technology: Now it is just a matter of time before OEMs take their technology and form tactical plans to execute and build market momentum. It looks like the next CES show this January will be more exciting than ever.

Make sure you do the basics well. The basics are fundamental product-fulfillment skills that deliver the product to the customer quickly and in good working order. They are:

• A quality product line. The supplier should follow the products and trends and offer a range of quality products to meet the OEM customer’s particular needs.

• Responsiveness to inquiries and good communication. Nobody likes calling, leaving a message and then waiting a day or two for a call back, especially if they are thinking of spending money with you.

• Easy ordering. Your OEM customers don’t want to undertake an extensive treasure hunt to find the products they need. Also, how easy are you to do business with? A stack of required paperwork can be a disincentive; OEMs prefer companies that are easy to work with, with minimal paperwork and no costly delays.

• On-time order delivery. Slow product delivery can slowly usher an OEM or a supplier out of business. On-time delivery is taken for granted as a no-brainer, yet it is one of the basics many competitors fail at and leaders succeed at. OEM customers appreciate it and show their appreciation with more orders – along with fast, friendly, value-added service.

Make sure you have the differentiators OEMs need. This is where the intangibles come in strongly. OEM differentiators are the things you do to make sure you present the best solution for the OEM’s problem or need. Being good at these requires solution experience, technical knowledge, an understanding of the OEM’s specific needs, and a strong relationship with the OEM – not just an understanding of the needs of a similar customer. OEMs look for suppliers that differentiate themselves in terms of business philosophy, experience and knowledge in their area of need, and post-sales practices and support.

http://www.veritasetvisus.com 83


Veritas et Visus 3rd Dimension September 2007


3D maps – a matter of perspective?

by Alan Jones

In 1492, Columbus first set off to the west in an attempt to get to Asia. For centuries before and after there was a debate about whether the Earth was round or flat. Columbus’ feats succeeded in adding fuel to the debate because he did not circumnavigate the world and left open the possibility that the world was flat, if a little bigger. The Flat Earth Society continued the debate until the late 20th century, when photographs taken from space provided a rather uncomfortable truth for them: the world was indeed round.

But is it? I have been playing with Microsoft’s Virtual Earth and Google Earth. Both are relatively straightforward to use. I found the Google Earth interface a little easier to navigate, while the Virtual Earth interface was what one could describe as more lay-person friendly. From an image point of view both are rich in content, although Google Earth provides more supporting information, which can become obtrusive if not turned off.

Spending what time I have with each of them has prompted some thoughts and questions, which might generate some comments from you, our readers. Take the Statue of Liberty as an example. A 2D image looks pretty convincing that the Statue is indeed a 3D object. But is all we are seeing correct? The left-hand image is the “bird’s eye” view option – a photograph taken with perspective. The center image is the map data, which gives the illusion that the Statue is not there, although there is a shadow! If we rotate the image, as in the right image, then the Statue comes into view – but where has it come from?

With the high profile that New York enjoys it is not really a surprise that a great deal of effort has gone into maximizing the representation of the city. Not least of these efforts is the 3D modeling of the New York skyline, including the Statue of Liberty. On the left is the Virtual Earth image, which clearly shows the statue, while on the right is a similar Google Earth image – but oh, where is the Statue?!


If we go somewhere less glamorous, then what do we see? Blackpool was famous in the 20th century as a place for “Sea, Sun and Fun”. It is positioned on the north-west coast of the UK and was a favorite summer haunt for the mill-workers of the Lancastrian cotton mills. In the heyday of the mills, technology was defined by the Spinning Jenny and the Water Frame – much “beloved” by the Luddites! As a tourist attraction, Blackpool has a Tower which is modeled on the Eiffel Tower in Paris. I have chosen this place because I suspect that there would be little incentive to focus 3D imagery resources on such a small town.

In the image on the left, the Tower in Blackpool can just be made out. Rotate to the horizon and the Tower has disappeared – but not the shadow!

A favorite place for both our publisher and me is Stonehenge. Trying the same test as before, in the Virtual Earth images we lose those magnificent Preseli bluestones when we rotate to the horizon.

Virtual Earth views of Stonehenge are shown in these two images; note that the tilted view loses the sense of depth.


The Google Earth team tried to preserve the depth image of the stones with a 3D overlay, but unfortunately they are about 30 feet to the east of where they should be.

Google Earth images are shown above, where height is maintained, but not the correct position.

Finally, let’s take a look at the highest place on earth, Mount Everest – depicted in Virtual Earth in the top two images and in Google Earth in the bottom two images:


I guess there is so much depth involved in a photograph of Mount Everest that we cannot fail to be impressed by the imposing images.

Summary: From my short time looking at these 3D maps, the conclusion is that we do not yet have the rich 3D data source in the “real” world that we have in the virtual worlds created inside computers by games, medical imaging, modeling, etc. However, we should be confident that this need will be satisfied over time as the capability to capture depth information in increasing detail is deployed in volume. What is not so clear is whether we shall get the compelling 3D displays that will allow us to exploit that rich information source as and when it becomes available.

Postscript: Regular readers will know that I take pride in our old thatched cottage (built in 1766 – about the time the Spinning Jenny was being invented!). The outside has been used many times in this and other sources, but what does it look like in Virtual Earth and Google Earth?

But when we do a 3D rotation, there is no depth. Maybe I live on a Flat Earth after all!

Alan Jones retired from IBM in 2002 after 35 years of displays development, marketing and product management. He was a frequent speaker at iSuppli and DisplaySearch FPD conferences.


PC vs. Console – Has the mark been missed?

Neil Schneider is the president & CEO of Meant to be Seen (http://www.mtbs3D.com). He runs the first and only stereoscopic 3D certification and advocacy group. MTBS is non-proprietary, and they test and certify video games for S-3D compatibility. They also continually take innovative steps to move the S-3D industry forward through education, community development, and member-driven advocacy.

by Neil Schneider

I’d like to point you all to an article I read online recently that has me feeling very conflicted. It is an interview summary with Mr. Roy Taylor, nVidia’s VP of Content Relations (http://www.tgdaily.com/index2.php?option=com_content&do_pdf=1&id=33143). There are three things happening here that don’t make a lot of sense to me:

1. He is claiming that players have switched to consoles over PCs for gaming.

2. He strongly believes that the PC innovation that will drive PC gaming market share is high-resolution screens.

3. He thinks it is acceptable for a good game to require a $20,000 (yes, twenty THOUSAND dollar) PC.

I found this article troubling for a number of reasons. First, and most importantly, it doesn’t make any business sense to me.

Yes, console games are very successful, and they will continue to be very successful. Back in my day, computers and consoles lived happily ever after with ColecoVision, Atari, Commodore, Apple, Amiga, and so on. In fact, it wasn’t the consoles catching up to computers, it was computers catching up to – and surpassing – the consoles! So, as any fine wine connoisseur knows, there will always be a market for those who like to buy their white in a box and their red in a bottle. The PC market is the bottle market, and Mr. Taylor is at least correct in recognizing that.

Now, he is very much correct that to continue to reap the benefits of superior game-developer attention, the PC market has to differentiate itself from the console market. Unfortunately, his mindset is based on the myth that a console can only plug in to the living-room HDTV. I’m sorry, Roy, but monitors are not PC-ONLY equipment, and if nVidia’s marketing strategy is to say “Hey, we are after games that display on LCD panels”, they are in for a shock!

In fact, I think this is a very dangerous strategy because it gambles the PC market’s success on a relatively boring piece of equipment – the flat 2D monitor. It amazes me that the biggest idea the industry can think of is a higher resolution. It just doesn’t strike me as a major breakthrough worth paying top dollar for now that HDTV is commonplace.

The article hints at having PC games with extra levels and more artistic quality, but where is the innovation? Who cares? Here’s the real problem: since when is $20,000 for a PC acceptable? Sure, if you want an Octo-SLI setup with a CPU farm rendering your video game in your garage while your wife is threatening to run you over as you chant “serenity now, serenity now”, I guess that’s an option.

Suppose the hardware manufacturers do manage to sell a modest number of these $20K machines. Can you think of a single game developer who would think to develop and market to such a small, boring marketplace?

Let’s face facts: the PC gaming market is the industry’s dirty little secret. While the average consumer may be impressed by the words “Dual Core” or “Intel Inside”, it’s the video games that give customers the annual excuse to upgrade their computer and feed the industry’s families. The PC dollar value has to be something that everyday consumers can swallow, while still offering a competitive advantage over the console counterpart.


The good news is the solution is right under nVidia’s nose. For those of you reading this newsletter, it’s no surprise that stereoscopic 3D (S-3D) is the thrilling technology used in 3D movie theaters like IMAX 3D, RealD, and Dolby Labs. Everyone is jumping on board, including DreamWorks Animation, James Cameron, George Lucas, and more. When millions of moviegoers see Star Wars in re-mastered S-3D, they won’t need to be educated on what true 3D gaming is.

With the exception of speed, nVidia’s only competitive advantage right now is their stereoscopic 3D support for video games. While we are very excited to see that nVidia is continuing to develop these drivers, it’s time for nVidia to put more public focus and private money into them. Their stereoscopic 3D development team is going to be the lifeblood of that company sooner rather than later, and they should have every resource needed to be successful. It’s not just about nVidia: iZ3D has developed proprietary drivers that work on both nVidia and AMD/ATI graphics cards, and they further support post-processing effects in 3D like never before seen.

If you spent $5,000 on a computer (which is high) and your neighbor spent $400 on his console, how are you going to wow him when he visits your house? Give him a pair of 3D glasses, and he won’t be mowing the lawn for months. THAT’S what the PC industry needs right now, and THAT’S what game developers want to hear. None of this rubbish about high-resolution monitors that no one cares about.

Like oil riches being drained from the ground, the PC market understands that time is ticking for the next defendable business breakthrough in gaming; but unlike the world’s energy crisis, the solution is right in front of our eyes and in movie theaters across the country.


3D DLP HDTVs – all is revealed!

Andrew Woods is a research engineer at Curtin University’s Centre for Marine Science & Technology in Perth, Australia. He has nearly 20 years of experience in the design, application, and evaluation of stereoscopic video equipment for industrial and entertainment applications. He is also co-chair of the annual Stereoscopic Displays and Applications conference – the world’s largest and longest-running technical stereoscopic imaging conference.

by Andrew Woods

Samsung released their range of “3D Ready” DLP HDTVs in April, Mitsubishi released theirs in June, Texas Instruments publicly revealed the 3D format that these HDTVs accept in late August, i-O Display Systems released wireless 3D glasses suitable for these 3D HDTVs early this month, and PC software which supports the format of these new 3D HDTVs is also becoming available. All the pieces are slotting together for high-quality stereoscopic 3D viewing, and the marketing push is beginning.

(Photo: Mark Coddington. Samsung “3D Ready” HDTV, from http://product.samsung.com/dlp3d)

Firstly, a short technology recap: these “3D Ready” HDTVs are capable of displaying 120Hz time-sequential stereoscopic 3D images and video, providing high-resolution, flicker-free 3D viewing. The viewer wears a pair of liquid crystal shutter (LCS) 3D glasses that switch in synchronization with the sequence of left and right perspective images displayed on the screen (at 120 images per second). All of the “3D Ready” displays are based on DLP technology from Texas Instruments, offer either 1080p or 720p resolution, and all are rear-projection TVs (of a new, extremely slim design).

Equipment setup is quite simple – plug a pair of VESA 3-pin compatible LCS 3D glasses into the “3D Sync” connector on the rear of the 3D HDTV. Connect the DVI output of an appropriate PC (running suitable 3D-capable software) to “HDMI input 3” on the display (using a DVI-to-HDMI cable). Switch the display to “HDMI 3” (by pressing the source button), and then enable 3D mode by pressing the “3D” button on the remote control. For a full list of the “3D Ready” HDTV models available from Samsung and Mitsubishi, see http://www.3dmovielist.com/3dhdtvs.html.

There have been a number of 3D formats around for some time (e.g. row-interleaved, column-interleaved, over-under, side-by-side, time-sequential), but the 3D format that these 3D HDTVs accept is something different – a checkerboard pattern. That is, the native-resolution image sent to the display consists of an alternating checkerboard pattern of pixels from the left perspective and right perspective images (as indicated by the letters “L” and “R” in Figure 1). The display internally converts this checkerboard pattern into a 120Hz time-sequential image that is viewed using LCS 3D glasses.

(Figure 1: 3D DLP checkerboard pattern for a 1920x1080 native resolution display)
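The checkerboard assembly just described is easy to express in code. The sketch below is an illustration of the Figure 1 layout only, not TI's or any vendor's actual implementation; the function name and the convention that left-eye pixels sit where row + column is even are my own assumptions:

```python
import numpy as np

def to_checkerboard(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Interleave two same-shaped views into a DLP 3D checkerboard frame.

    Positions where (row + col) is even take the left-eye pixel and the
    rest take the right-eye pixel, mirroring the alternating "L"/"R"
    pattern of Figure 1. Accepts H x W (grayscale) or H x W x C (color).
    """
    if left.shape != right.shape:
        raise ValueError("left and right views must have the same shape")
    rows, cols = np.indices(left.shape[:2])
    left_mask = (rows + cols) % 2 == 0
    if left.ndim == 3:                       # broadcast mask over channels
        left_mask = left_mask[..., None]
    return np.where(left_mask, left, right)
```

Built this way, a 1920x1080 checkerboard frame carries exactly half of each view's pixels – 1,036,800 samples per eye.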

The reason for using the checkerboard pattern relates to the way these displays work. They use a process TI calls “SmoothPicture” or “wobulation” to achieve a full-resolution image from a half-resolution DMD (Digital Micromirror Device) panel. Each 60Hz full frame is displayed as two sub-frames (at 120 sub-frames per second). This has similarities to the way interlacing works, but is also quite different in many ways. With reference to Figure 1, in the first sub-frame all the “L” designated pixels are displayed, and in the second sub-frame all the “R” designated pixels are displayed – at 120 sub-frames per second. The position of the image is wobbled between sub-frames so they are slightly spatially offset. This “wobulation” process was already being used in last year’s range of DLP TVs, just for 2D display – as mentioned earlier, to achieve a full-resolution image from a half-resolution panel. The DMD is perfect for this since it can switch between states extremely fast – it has no phosphor persistence, it has an ultra-fast pixel response time, and it can generate a black period. Someone at TI obviously realized this feature set had possibilities for 3D display. Two discrete images, shown at 120Hz, with a black period – perfect for 3D! And this 3D function can be added at almost zero additional cost (over what was already being done for 2D display in last year’s models). A match made in heaven!
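The display-side sub-frame decomposition can be modeled the same way. This is a sketch of the behavior described above, not the DMD controller's real addressing: on the actual hardware the other eye's mirror sites simply are not addressed in that sub-frame, whereas here they are zeroed for clarity, and the small spatial wobble between sub-frames is omitted:

```python
import numpy as np

def split_subframes(frame: np.ndarray):
    """Split one checkerboard frame (H x W) into the two sub-frames
    flashed sequentially at 120 sub-frames per second: the first keeps
    the (row + col)-even sites ("L" pixels), the second keeps the odd
    sites ("R" pixels)."""
    rows, cols = np.indices(frame.shape)
    left_mask = (rows + cols) % 2 == 0
    sub1 = np.where(left_mask, frame, 0)     # left-eye sub-frame
    sub2 = np.where(left_mask, 0, frame)     # right-eye sub-frame
    return sub1, sub2
```

The LCS glasses open the matching eye for each sub-frame, so each eye sees its half of the sites refreshed 60 times per second.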

TI recently published a paper (“Introducing DLP 3-D TV” by David Hutchison, available at http://www.dlp.com/3d), which summarizes the 3D image format that these new displays accept. In that document they say that the checkerboard format “preserves the horizontal and vertical resolution of the left and right views providing the viewer with the highest quality image possible with the available bandwidth”. This is not entirely true – the total resolution per eye is half that of the full native display resolution. The “pixel” layout of each sub-frame is shown in Figure 2. Each diamond represents a DMD mirror – notice that they are rotated 45° relative to the normal orientation for DMD mirrors. Also notice that the center of each of these diamond mirrors corresponds with the pixels for one eye in the checkerboard pattern (shown in light gray in Figure 2). Now, due to the use of the checkerboard pattern, it isn’t entirely straightforward how to describe the reduced resolution. It is not half vertical resolution and it is not half horizontal resolution. It is a bit of both. Perhaps it is 1/√2 in each direction. I’m sure someone will work this out eventually.

(Figure 2: The diamond pixel layout, which produces each eye view – shown overlaid on top of the native input pixel layout, in light gray. The TI DLP 3D white paper also shows an example of combining left and right perspective images into a DLP 3D checkerboard image.)
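The 1/√2 guess checks out with a little arithmetic: each eye receives exactly half of the native sites, and if that factor of 2 is shared equally between the horizontal and vertical axes, each axis scales by 1/√2. A quick worked example for a 1080p panel:

```python
import math

native_w, native_h = 1920, 1080
pixels_per_eye = native_w * native_h // 2    # checkerboard: half the sites
# Sharing the factor of 2 equally between axes gives 1/sqrt(2) per axis:
eq_w = native_w / math.sqrt(2)               # about 1358 equivalent columns
eq_h = native_h / math.sqrt(2)               # about 764 equivalent rows
# Sanity check: the equivalent grid holds the same pixel count per eye.
assert pixels_per_eye == 1_036_800
assert abs(eq_w * eq_h - pixels_per_eye) < 1e-6
```

So, per eye, a 1920x1080 checkerboard behaves roughly like a 1358x764 grid – consistent with the author’s “a bit of both” description.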

In TI’s white paper <strong>the</strong>y have also <strong>provide</strong>d examples <strong>of</strong> how<br />

<strong>to</strong> generate <strong>the</strong> checkerboard pattern using Adobe Pho<strong>to</strong>shop<br />

(for 3D still images) and AVIsynth (for 3D video).<br />

However, <strong>the</strong> average user is unlikely <strong>to</strong> use ei<strong>the</strong>r <strong>of</strong> <strong>the</strong>se<br />

techniques with <strong>the</strong>ir 3D HDTV (except for DIY users) and<br />

it is highly unlikely that images or video will be distributed<br />

natively in <strong>the</strong> checkerboard pattern (it simply doesn’t cope<br />

with compression well). It is most likely that consumers will<br />

use hardw<strong>are</strong> or s<strong>of</strong>tw<strong>are</strong> that reformats 3D images on-<strong>the</strong>fly<br />

in<strong>to</strong> <strong>the</strong> checkerboard format. Three pieces <strong>of</strong> s<strong>of</strong>tw<strong>are</strong><br />

which currently support the DLP 3D checkerboard pattern internally are Peter Wimmer’s Stereoscopic Player (http://www.3dtv.at), DDD’s TriDef 3D Experience (http://www.tridef.com), and Lightspeed Design’s DepthQ Stereoscopic Media Server (http://www.depthq.com) – and I’m sure there will be more soon! These programs accept 3D video (and stills) in a range of different conventional 3D image formats (e.g. over-under, side-by-side, or field-sequential) and reformat to the DLP 3D checkerboard pattern in real-time.
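As a rough sketch of the reformatting step such players perform (an illustrative toy, not any vendor’s actual code; it assumes a side-by-side input already scaled so each half matches the output size):

```python
# Hypothetical sketch of side-by-side -> DLP 3D checkerboard reformatting:
# pixels where (x + y) is even come from the left view, odd from the right.
def side_by_side_to_checkerboard(frame):
    """frame: list of rows; each row holds the left-view pixels
    followed by the right-view pixels (equal widths)."""
    height = len(frame)
    width = len(frame[0]) // 2
    left = [row[:width] for row in frame]
    right = [row[width:] for row in frame]
    return [
        [left[y][x] if (x + y) % 2 == 0 else right[y][x]
         for x in range(width)]
        for y in range(height)
    ]

# Tiny 2x4 demo frame: 'L' pixels on the left half, 'R' on the right.
demo = [["L", "L", "R", "R"],
        ["L", "L", "R", "R"]]
print(side_by_side_to_checkerboard(demo))  # [['L', 'R'], ['R', 'L']]
```

Real players do the same interleave per color channel on full-resolution video, in real time.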

With regard to 3D glasses, i-O Display has recently announced an LCS 3D glasses pack specifically tailored for the Samsung 3D DLP HDTVs (http://www.i-glassesstore.com/dlp3dsystems.html). Some of the packs also include DDD’s new TriDef 3D Experience software. It is possible to use other LCS 3D glasses with these displays – anything with a VESA 3-pin “3D Sync” connector should work. The venerable CrystalEyes and NuVision LCS glasses can be made to work with these displays. There are also RF wireless LCS 3D glasses from Jumbo Vision International: http://www.jumbovision.com.au. For those who are interested in knowing more about the VESA 3-pin “3D Sync” connector, more information is available here: http://www.stereoscopic.org/standard/connect.html.

The availability of 3D content to show on these displays is going to be the next question. There are over 30 field-sequential 3D DVDs available right now (see this list: http://www.3dmovielist.com/3ddvds.html). However, the resolution of these DVDs is far below what these new 3D HDTVs are capable of. People will be yearning for high-definition 3D video content to show on these displays. Hopefully some of the newer digital 3D cinema releases, or even some of the older 3D movies, will eventually be released in a 3D HD home format – perhaps on 3D HD-DVD or 3D Blu-ray? For those wanting to show off their new 3D HDTV now with some 3D HD video content, the “Dzignlight Stereoscopic Demo Reel 2007” is particularly good (available from: http://www.dzignlight.com/stereo.html).

The largest source of 3D content in the short term will likely be stereoscopic games. No doubt nVidia is preparing to add support for the DLP 3D checkerboard pattern to their “3D Stereo driver”, but DDD has beaten them to market – the TriDef 3D Experience software mentioned earlier also allows a select number of consumer games to be played in stereoscopic 3D on the Samsung 3D HDTVs.

It will be interesting to see how the availability of 3D content evolves over the coming months. Stay tuned! Hopefully these displays will sell well – buy one yourself today!

>>>>>>>>>>>>>>>>>>>>


Society of Motion Picture and Television Engineers
Pre-conference Symposium
STEREOSCOPIC PRODUCTION
Tuesday October 23rd, 2007

Stereoscopic 3D display technology has experienced dramatic advancements within the past few years. Installations of 3D systems for digital cinema are fast approaching 1000 screens worldwide. In addition, 3D displays are also being proposed for consumer home theater, videogames and point-of-sale advertising. If stereoscopic displays begin appearing in consumer homes for gaming and home theater, will 3D television follow?

This symposium will provide the broadcast, production, or cinema engineer with a roadmap for exploring the stereoscopic production landscape, from acquisition to the latest projection systems. Leading industry experts will explain the core technologies, applications and challenges of managing a 3D production pipeline. Stereoscopic projection will be liberally used to illustrate the speakers’ presentations.

Symposium Committee
Pete Ludé, Sony Electronics, Inc., Editorial VP, SMPTE
Lenny Lipton, CTO, Real D, Conference Chairman
Tom Scott, OnStream Media, Program Director
Chris Chinnock, Insight Media, Featured Speaker

8:00 AM Continental Breakfast

Opening Session

8:30 AM
Bob Kisor, President, SMPTE: Welcome to the Conference

8:40 AM
Peter Ludé, Editorial VP, SMPTE: Conference Overview

8:50 AM
Lenny Lipton, CTO, Real D: The Stereoscopic Cinema Reborn
3D films have been around from the invention of motion pictures, and have enjoyed passing waves of popularity. Is the recent activity in Hollywood just another fad or is there something fundamentally different this time?

9:10 AM
Chris Chinnock, President, Insight Media: Emerging 3D Display Technologies
This talk will enumerate the principal means of producing 3D images, for both projection and direct-view displays, and explain the operating principles of each. The advantages and disadvantages of each approach will be explored and recently-announced products and systems for consumer and cinema use will be explained.

Session 1A: Content Creation, Live Action
There are many challenges to real-world stereoscopic cinematography, both in terms of compositional considerations and camera design. The speakers are experts in both areas of this nascent art as applied to the stereoscopic digital cinema. 3D cameras require exquisite precision and coordination to produce quality images, and lately electronic correction has been part of the solution, as has rectification during post. Bleeding-edge concepts will be presented here.

9:40 AM
Chris Ward, President, Lightspeed Design Group: An Advanced Beam-splitter Rig Using Meta-data
The beam-splitter rig has been adapted to a hi-def meta-data based system for maintaining image rectification during cinematography and through post-production. This talk will discuss the advances of such an approach to left/right image coordination.

10:00 AM
Jason Goodman, President, 21st Century 3D: A New Concept in High Definition Camera Design
A compact hi-def camera-recorder is being designed to allow for image capture of uncompressed data. The camera has uses for both industrial and feature production.

10:20 AM
Vince Pace, President, Pace Technology: Live Action 3D Cinematography
The technology and methods for capturing, transmitting and projecting D-Cinema quality stereoscopic sporting events will be described.

10:40 AM Break

Session 1B: Content Creation, Synthesis and Computer Generation
Computer generated imaging and conversion from planar to 3D have the promise of creating perfect stereoscopic images, unencumbered by the limitations of real-world cinematography. This is an art that can create perfectly controlled, beautiful stereo images. But it is an art that, for the moment, must deal with movies that were conceived as 2D projects. Our speakers include the 3D directors of several recent theatrical releases and have advanced the understanding of the creative elements of stereoscopic composition. Conversion from planar is also finding its place in the sun as techniques advance.

11:00 AM
David C. Seigle, President, In-Three: Dimensionalizing 2D Movies
This talk will provide a behind-the-scenes look at converting planar movies into 3D movies and dispel some of the myths about the process. It may well be that 2D as source material produces superior results to conventionally acquired two-view material.


11:20 AM
Rob Engle, Digital Effects Supervisor, Sony Pictures Imageworks: Creating Stereoscopic Movies from 3D Assets
CG animation and motion-captured movies have an intrinsic three-dimensional database, making production of a stereoscopic version of such material theoretically possible. But theory and practice depart. What are the practical considerations of making a 3D movie out of these assets?

11:40 AM
Phil McNally, Stereoscopic Supervisor, DreamWorks Animation: Stereoscopic Compositional Concerns from Inception to Projection
Movies are usually not conceived of from the get-go as being stereoscopic, and this leads to compromises when the stereo director handles the assets. What can be done to change this situation? An educational program has been instituted at DreamWorks Animation to teach everyone from writers to layout artists how to maximize the effectiveness of the 3D medium.

12:00 PM Panel Discussion Lenny Lipton, moderator

12:20 PM Lunch

Session 2: The Stereoscopic Production Pipeline
The stereoscopic production pipeline is being created at this moment. Visionary studio head Jeffrey Katzenberg has mandated that all future DreamWorks Animation films will be in 3D. Disney has already taken the plunge, as has Sony Pictures Imageworks. It is probable that future productions will use everything: CG animation, live action, and synthesis. But where and what are the tools for the post-production processes? That story will be told in this session.

1:30 PM
Mark Horton, Strategic Marketing Manager, Quantel: A Solution for Stereoscopic Post-Production
Stereo acquisition and distribution are being widely discussed – but what issues must be addressed in post-production and what are the possibilities? This tutorial focuses on the current workflows being used for stereoscopic post production and then explores alternative methods. Practical examples are shown of multiple techniques, covering the most common issues faced and tools for handling them.

1:50 PM
Jim Mainard, Head of Prod. Dev., DreamWorks Animation: Pipeline – There Isn’t One for Stereo…
From Editorial to Post and most everything in between we find ourselves creating a new pipeline and revising our toolset to author in stereo. These hazards and how to deal with them will be identified and explored.

2:10 PM
Buzz Hays, Senior VFX Producer, Sony Pictures Imageworks: Stereoscopic Production Pipeline for VFX, Live Action, and Animation
A discussion of the Imageworks pipeline evolution from CG Animation to Live-Action and everything in-between. The talk will include highlights of tools and processes with examples from SPI-produced films.

2:30 PM
Steve Schklair, Founder and CEO, 3ality Digital Systems: Electronic Rectification Applied to 3D Camera Design
Shooting with the two camera heads that make up a single stereoscopic rig can be a daunting task in terms of producing images that are properly coordinated to a fine tolerance. This talk will concentrate not only on the means to achieve such coordination but also on the post-production tools that are required to follow through to create a quality stereoscopic image.

2:50 PM
Chuck Comisky, 3D Visual Effects Specialist, Lightstorm Productions: Motion Capture as a Stereoscopic Source on a $190M Budget
James Cameron’s new feature, Avatar, will be 60% motion capture, 20% stereoscopic cinematography, and 20% green-screen. How can all of these processes be coordinated to produce a unified look? This presentation will discuss that challenge.

3:10 PM Panel Discussion Buzz Hays, moderator

3:30 PM Break

Session 3: Stereoscopic Exhibition
A unique added value of digital cinema is the ability to project 3D images with a quality level not possible with 35mm projection. In this session we’ll review the different add-on technologies available to convert 2D digital projection systems into 3D projection systems.

4:00 PM
Michael Karagosian, President, MKPE Consulting LLC: The 3D Cinema
A high level review of the various stereoscopic digital cinema systems.

4:20 PM
Dave Schnuelle, Senior Director Image Technology, Dolby Laboratories: Dolby Stereoscopic Digital Projection
A tutorial on the Dolby stereoscopic projection system.

4:40 PM
Matt Cowan, CSO, Real D: Real D Stereoscopic Digital Projection
A tutorial on the Real D stereoscopic projection system.

5:00 PM Panel Discussion Michael Karagosian, moderator

5:30 PM Closing Remarks Chris Chinnock



Last Word: How things get invented
by Lenny Lipton

Lenny Lipton currently serves as chief technology officer at Real D. He founded StereoGraphics Corporation in 1980, and created the electronic stereoscopic display industry. He is the most prolific inventor in the field and has been granted 25 patents in the area of stereoscopic displays. He is a member of the Society for Information Display and the Society of Photo-Optical Instrumentation Engineers, and he was the chairman of the Society of Motion Picture and Television Engineers working group which established standards for the projection of stereoscopic theatrical films.

The history of motion pictures is an interesting one, and I am learning more about it in the context of my present work inventing stereoscopic motion picture systems, and in connection with the work I am doing with studios and filmmakers. I am taking working with filmmakers seriously because the quality of the Real D system is judged by the content projected on our screens. I was recently appointed as the co-chair (Peter Andersen is the other co-chair) of the sub-committee of the ASC Technology Committee tasked to help figure out workflow, production pipeline, and stereoscopic cinematographic issues. These subjects are tentative and need to be developed, and we’re all learning together.

The stereoscopic cinema, in its present incarnation, as manufactured by Real D, is entirely dependent upon digital and computer technology. Digital projection allows for a single projector, while other stereoscopic systems use two projectors. Two projectors work well in IMAX theaters, based on my observations. I cannot say the same for theme parks, whether they use film or digital technology, because there are occasions when the projected image is out of adjustment.

Replacing multiple machines with a single machine – i.e. a projector – is the way to go, especially in today’s projection booths, because typically there is no projectionist in the booth at the time the film is being projected. There is a technician who will assemble the film reels and make sure everything is going to project well, but then it is somebody else – maybe the kid at the candy counter – who actually works the projector and makes adjustments. (Interestingly, the kid at the candy counter may be well qualified to work the servers and projectors because of his or her PC experience.)

The product that I invented, the projection ZScreen, has been used for years for the projection of CAD and similar images for industrial applications. Real D turned the ZScreen into a product that had to work even better for theatrical motion picture applications. It turns out that the film industry has very high standards when it comes to image quality. This is easy to understand, because the industry lives or dies by image quality.

The stereoscopic cinema has had a long gestation. To date, this is the longest gestation of any technology advance in the history of the cinema. For example, within about three decades of the invention of the cinema, sound was added. There were numerous efforts to make sound a part of the cinema and make it a bona fide product. In the three-year period from about 1927 to 1930, rapid advances were made both in sound technology and in aesthetics. If you take a look at movies that were made in 1927, and then you see movies that were made in 1930 or 1931, there’s a gigantic difference. Movies made in the early 1930s look a lot like, and sound like, modern movies. There was a tremendous advance in the technology and in filmmaker know-how in a short period of time.

It is the creative professionals who will perfect the stereoscopic medium. That’s exactly what they did every time a new technology came along, whether it was sound, color, wide-screen, or computer-generated images. In fact, those are the major additions to the cinema, and they all took decades to become an ongoing part of the cinema. Ads for movies never say, “This is a sound movie,” or “This is a color movie,” or “This movie is in the widescreen (or ’scope) aspect ratio.” It’s assumed. It’s a rare movie that is in black-and-white. It’s an even rarer movie that is


silent. And nobody is going back to shooting 4:3 Edison aspect-ratio movies. (Curiously, that’s more or less the aspect ratio used by IMAX for their cinema of immersion.)

An attempt was made in the early 1980s to use a single projector with the above-and-below format – essentially two Techniscope frames that could be projected through mirrors or prisms or split lenses, optically superimposed on the screen, and polarized. The audience used polarizing glasses to view the images in 3D. I was the chairman of the SMPTE working group that established the standards for the above-and-below format. But as soon as the standards were established, the above-and-below format was more or less abandoned. A few films like “Comin’ At Ya!” or “Jaws 3-D”, and one I worked on, “Rottweiler: Dogs of Hell”, were projected above-and-below, an approach that was technically inadequate. For one thing it was hard to adjust properly and set up the projector to achieve even illumination. I know; I set up a few, and it was tough to do a good job because of the design of the lamp housings and the projectors.

Curiously it was the above-and-below format that led me to the first flicker-free stereoscopic field-sequential computer and television systems. I noticed that the above-and-below format was applicable to video, because that which is juxtaposed spatially can, with the injection of a synchronization pulse between the two frames, become juxtaposed temporally when played back on a CRT monitor; so the first StereoGraphics systems used the above-and-below format.

The above-and-below video format, which is applicable to video or computer graphics, results in a field-sequential image that can be viewed using shuttering or related polarizing selection techniques. I designed the first flicker-free field-sequential system in 1980. It used early electro-optics that were clunky, but the flicker-free principle was established. Using 60Hz video, for example, with the above-and-below format, one achieved a 120Hz result, that is to say, 60 fields per second per eye. The field-sequential system is what is used for the Real D projection system.
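The spatial-to-temporal trick Lipton describes can be sketched in a few lines (a hypothetical illustration, not StereoGraphics code): each above-and-below frame is split into two fields that play back-to-back, so 60 frames per second become 120 fields per second, 60 per eye.

```python
# Sketch of the above-and-below trick: each stored frame carries the
# left view in its top half and the right view below it; playback
# emits the halves as two sequential fields, doubling the field rate.
def above_and_below_to_fields(frames):
    fields = []
    for frame in frames:          # frame: list of scanlines
        half = len(frame) // 2
        fields.append(("left", frame[:half]))   # top half -> left-eye field
        fields.append(("right", frame[half:]))  # bottom half -> right-eye field
    return fields

# 60 input frames/s become 120 fields/s: 60 per eye, hence no flicker.
frames = [["L0", "L1", "R0", "R1"]] * 60
fields = above_and_below_to_fields(frames)
print(len(fields))  # 120
```

The synchronization pulse mentioned in the article is what tells the display hardware where one field ends and the next begins; the shutters then alternate in step with the fields.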

The electro-optics are different. There’s the ZScreen modulator used in the optical path in front of the projection lens, and audience members wear polarizing eyewear. (The combination of ZScreen and polarizing eyewear actually forms a shutter. You can classify the system as either shuttering for selection or polarization, but in fact a proper classification is that it uses both polarization and shuttering.) But the principle is the same as that used for the early stereo systems I developed. The right eye sees the right image while the left sees nothing and vice versa, ad infinitum, or as long as the machine is turned on.

The issue I had to solve in 1980 was this: how to make an innately 60Hz device work twice as fast. And the above-and-below format did just that. We had to modify the monitors to run fast, but for a CRT monitor it wasn’t that hard. There are two parts to stereoscopic systems’ issues: the selection device design and content creation. Today we are faced with the same design issue I was faced with in 1980. In addition, content creation has always been a major issue, and that’s why I am working with the film industry to work out compositional and workflow issues.

[Photo: Engineer Jim Stewart (left) and I working on the first electronic stereoscopic field-sequential system that produced flicker-free images, circa 1980. We used two black and white NTSC TV cameras as shown, and combined the signals to play on a Conrac monitor which, without modification, could run at 120 Hz. The images were half height, but we proved the principle. Stewart is wearing a pair of welder’s goggles in which we mounted PLZT (lead lanthanum zirconate titanate) electro-optical shutters we got from Motorola. The shutters had been designed for flash-blindness goggles for pilots who dropped atomic bombs. I kid you not.]

http://www.veritas<strong>et</strong>visus.com 97


<strong>Veritas</strong> <strong>et</strong> <strong>Visus</strong> <strong>3rd</strong> Dimension September 2007<br />

Display Industry Calendar

A much more complete version of this calendar is located at: http://www.veritasetvisus.com/industry_calendar.htm. Please notify mark@veritasetvisus.com to have your future events included in the listing.

September 2007

September 8-12 GITEX 2007 Dubai, UAE
September 9-12 PLASA '07 London, England
September 10-11 Europe Workshop on Manufacturing LEDs for Lighting and Displays Berlin, Germany
September 10-11 Printed Electronics Asia Tokyo, Japan
September 11 Workshop on Dynamic 3D Imaging Heidelberg, Germany
September 12-14 Semicon Taiwan 2007 Taipei, Taiwan
September 13 Printing Manufacturing for Reel-to-Reel Processes Kettering, England
September 14-16 Taitronics India 2007 Chennai, India
September 16-20 Organic Materials and Devices for Displays and Energy Conversion San Francisco, California
September 17-20 EuroDisplay Moscow, Russia
September 18-19 3D Workshop San Francisco, California
September 18-19 Global Biometrics Summit Brussels, Belgium
September 18-19 RFID Europe Cambridge, England
September 21 FPD Components & Materials Seminar Tokyo, Japan
September 24-26 Organic Electronics Conference Frankfurt, Germany

October 2007

October 1-4  European Conference on Organic Electronics & Related Phenomena  Varenna, Italy
October 1-5  International Topical Meeting on Optics of Liquid Crystals  Puebla, Mexico
October 2-3  3D Insiders' Summit  Boulder, Colorado
October 2-3  Mobile Displays 2007  San Diego, California
October 2-6  CEATEC Japan 2007  Tokyo, Japan
October 2-7  CeBIT Bilisim EurAsia  Istanbul, Turkey


October 3-4  Displays Technology South  Reading, England
October 7-10  AIMCAL Fall Technical Conference  Scottsdale, Arizona
October 8-9  Printed RFID US  Chicago, Illinois
October 9-11  SEMICON Europa 2007  Stuttgart, Germany
October 9-13  Taipei Int'l Electronics Autumn Show  Taipei, Taiwan
October 9-13  Korea Electronics Show  Seoul, Korea
October 10  Novel Light Sources  Bletchley Park, England
October 10-11  International Symposium on Environmental Standards for Electronic Products  Ottawa, Ontario
October 10-11  HDTV Conference 2007  Los Angeles, California
October 10-12  IEEE Tabletop Workshop  Newport, Rhode Island
October 10-13  CeBIT Asia  Shanghai, China
October 11-12  Vehicles and Photons 2007  Dearborn, Michigan
October 13-16  Hong Kong Electronics Fair Autumn  Hong Kong, China
October 13-16  ElectronicAsia 2007  Hong Kong, China
October 15-18  Showeast  Orlando, Florida
October 15-19  CEA Technology & Standards Forum  San Diego, California
October 16  Enabling Technologies with Atomic Layer Deposition  Daresbury, England
October 17-18  Photonex 2007  Stoneleigh Park, England
October 17-19  Printable Electronics & Displays Conference & Exhibition  San Francisco, California
October 17-20  SMAU 2007  Milan, Italy
October 18  Displaybank FPD Conference Taiwan  Taipei, Taiwan
October 22-25  CTIA Wireless IT & Entertainment  San Francisco, California
October 23  Stereoscopic Production  Brooklyn, New York
October 23-25  SATIS 2007  Paris, France
October 23-25  Display Applications Conference  San Francisco, California
October 24-26  Worship Facilities Conference & Expo  Atlanta, Georgia


October 24-26  LEDs 2007  San Diego, California
October 24-26  FPD International  Yokohama, Japan
October 24-27  SMPTE Technical Conference & Exhibition  Brooklyn, New York
October 29-30  Plastic Electronics  Frankfurt, Germany
October 29 - November 1  Digital Hollywood Fall  Los Angeles, California

November 2007

November 1-2  Digital Living Room  San Francisco, California
November 5-7  OLEDs World Summit  La Jolla, California
November 5-6  Challenges in Organic Electronics  Manchester, England
November 5-9  Color Imaging Conference 2007  Albuquerque, New Mexico
November 6-8  Crystal Valley Conference  Cheonan, Korea
November 6-9  EHX Fall 2007  Long Beach, California
November 6-11  SIMO 2007  Madrid, Spain
November 7-8  High Def Expo  Burbank, California
November 8  Taiwan TV Supply Chain Conference  Taipei, Taiwan
November 8-10  Viscom  Milan, Italy
November 8-11  Color Expo 2007  Seoul, Korea
November 9  2007 FPD Market Analysis & 2008 Market Outlook  Seoul, Korea
November 11-15  Photonics Asia 2007  Beijing, China
November 12-15  Printed Electronics USA  San Francisco, California
November 14-15  Nano 2007  Boston, Massachusetts
November 14-15  DisplayForum  Prague, Czech Republic
November 15-16  Future of Television Forum  New York, New York
November 19-20  International Conference on Enactive Interfaces  Grenoble, France
November 25-30  RSNA 2007  Chicago, Illinois
November 29  Displaybank Japan Conference  Tokyo, Japan
