Vision in Action Autumn 2019


LIGHTING THE WAY TO 3D

The four main 3D imaging techniques, stereo vision, laser line triangulation, structured light and Time of Flight (ToF), use light in very different ways.

Stereo vision

Stereo vision replicates human vision by viewing a scene from two different positions using two cameras. Special matching algorithms compare the two images, search for corresponding points and visualise all point displacements in a disparity map. Since the viewing angles and separation of the cameras are known, triangulation can be used to calculate the coordinates of each pixel in the image and create a 3D point cloud. In passive stereo applications, the images are acquired using ambient light, but the quality of the resultant point cloud depends directly on both the lighting and the object surface textures. Some applications therefore require active stereo vision, where a random pattern of dots is projected onto the object surfaces to give a high-contrast texture and much better quality results.
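The triangulation step can be sketched as follows: for a rectified stereo pair with focal length f (in pixels) and camera separation (baseline) B, the depth of a point follows Z = f·B/d from its disparity d. The function and figures below are illustrative, not from the article:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth from the disparity map of a rectified stereo pair.

    Z = f * B / d: the larger the displacement (disparity) between
    the two views, the closer the point is to the cameras.
    """
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full(d.shape, np.inf)   # zero disparity = no usable match
    depth[d > 0] = focal_px * baseline_m / d[d > 0]
    return depth

# Toy disparity map: with f = 700 px and B = 0.1 m, 10 px of disparity -> 7 m
disparities = np.array([[10.0, 20.0],
                        [0.0, 35.0]])
depth = depth_from_disparity(disparities, focal_px=700, baseline_m=0.1)
```

This also shows why texture matters: wherever the matching algorithms find no correspondence, the disparity (and hence the depth) is undefined.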

Laser line triangulation

Here, a laser line is projected onto the surface of the object, which moves beneath it. The line profile is distorted by the object, and this distortion is imaged by one or more cameras located at fixed distances and angles to the light source. As the object moves under the line, the system builds up a series of profiles which represent the topology of the object. Again using triangulation methods, these profiles can be used to calculate the 3D point cloud.
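The profile-stacking step can be written as a short sketch (names and scales below are assumed for illustration, not taken from the article): each scan contributes one row of heights, and the known travel of the object between scans supplies the missing coordinate.

```python
import numpy as np

def profiles_to_point_cloud(profiles, x_scale_mm, y_step_mm):
    """Stack a sequence of laser-line height profiles into a 3D point cloud.

    profiles: 2D array, one row per scan; profiles[i][j] is the height (mm)
    measured at lateral position j during scan i. x_scale_mm is the lateral
    spacing between columns, y_step_mm the object travel between scans.
    """
    profiles = np.asarray(profiles, dtype=float)
    n_scans, n_cols = profiles.shape
    x = np.tile(np.arange(n_cols) * x_scale_mm, n_scans)
    y = np.repeat(np.arange(n_scans) * y_step_mm, n_cols)
    z = profiles.ravel()
    return np.column_stack([x, y, z])   # (N, 3) array of XYZ points

# Two scans of a two-column profile -> four 3D points
cloud = profiles_to_point_cloud([[0.0, 1.0], [0.5, 1.5]],
                                x_scale_mm=0.2, y_step_mm=1.0)
```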

3D DATA PROCESSING

Processing 3D data and creating 3D images is computationally intensive, and the evolution of high-performance PCs was one of the key driving forces in making 3D imaging an affordable mainstream machine vision technique. Initially, camera data was transferred to a PC, where the 3D point clouds were calculated and the various 3D measurements made. However, the application of 3D imaging to measure finer details on faster industrial processes has required higher-resolution cameras running at higher frame rates, generating larger data volumes that must be transmitted to the PC. The network bandwidth for data transmission must be optimised to avoid time delays or data loss, and the processing power of the PC hardware must keep pace with the increased processing needs so as not to restrict the overall system.

Structured light

Structured light, normally in the form of parallel lines, is projected onto the object. The distortion in the lines caused by the shape of the object is captured by the camera and, since the distance and angle between the projected light and the camera are known, is used to calculate depth, structure and detail. This technique may be used with two cameras or a single camera and is independent of the object’s texture.

Time of flight

In ToF systems, the time taken for light emitted from the system to return to the sensor after reflection from each point of the object is measured. The direct ToF method (also known as pulse modulation) measures the return time of short pulses; this is longer the further away the imaged point is from the sensor. The continuous wave modulation method uses a continuous signal and calculates the phase shift between the emitted and returning light waves, which is proportional to the distance to the object (for a modulation frequency f_mod and measured phase shift Δφ, d = c·Δφ/(4π·f_mod), the 4π accounting for the round trip).

www.ukiva.org

3D Techniques (courtesy ClearView Imaging)

Moving data processing

The key to overcoming these bandwidth and processing bottlenecks is to move the 3D data processing away from the PC CPU. Today’s FPGA and multicore embedded processor architectures make it possible to do these calculations at much faster speeds. However, there may still be concerns about data transfer speeds to the PC, so camera manufacturers are increasingly providing fast, direct memory access between image acquisition and processing on a dedicated FPGA processor, mounted either in the camera itself or in a separate module linked to the cameras, before transfer to a PC for further processing. The transfer of 3D result data instead of high-resolution 2D raw data significantly reduces the network load.

Moving processing from PC to camera (Courtesy IDS Imaging Development Systems)
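To put the saving from transferring 3D result data rather than raw 2D frames into rough numbers (the resolution, bit depth and frame rate below are illustrative assumptions, not figures from the article):

```python
def data_rate_mb_s(bytes_per_item, items_per_frame, fps):
    """Sustained transfer rate in megabytes per second."""
    return bytes_per_item * items_per_frame * fps / 1e6

# Raw 2D stream: 2048 x 1088 pixels, 1 byte per pixel, 200 frames/s
raw_2d = data_rate_mb_s(1, 2048 * 1088, 200)     # ~446 MB/s

# 3D result stream: one 2048-point height profile (4-byte values) per frame
result_3d = data_rate_mb_s(4, 2048, 200)         # ~1.6 MB/s
```

Even with a generous result format, sending profiles instead of full frames cuts the link load by more than two orders of magnitude in this sketch, which is why in-camera or in-module processing eases the network problem.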

Getting results

For PC processing, the major machine vision software packages offer an extensive range of 3D tools, beginning with the construction of 3D depth maps or point clouds from any of the 3D imaging techniques, and system calibration. Other functionality typically includes 3D registration, 3D object processing, surface-based 3D matching and 3D surface inspection. These packages can handle multi-camera applications, and smart cameras can also be linked for extended imaging. Smart 3D cameras combine on-board processing with a comprehensive range of measurement tools or application-specific packages, which means that they are completely self-contained and do not need any ancillary PC processing.
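As one concrete instance of the 3D registration tools mentioned above, the sketch below aligns two corresponded point clouds with the Kabsch/SVD method, the core step of ICP-style registration; all data is synthetic and the function name is illustrative:

```python
import numpy as np

def rigid_align(source, target):
    """Best-fit rotation R and translation t mapping source onto target,
    given known point correspondences (Kabsch/SVD method)."""
    src = np.asarray(source, dtype=float)
    tgt = np.asarray(target, dtype=float)
    src_c, tgt_c = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - src_c).T @ (tgt - tgt_c)           # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Recover a known 90-degree rotation about Z plus a translation
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
pts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
moved = pts @ R_true.T + np.array([2.0, 0.0, 0.0])
R, t = rigid_align(pts, moved)
```

Full registration pipelines add the correspondence search (nearest neighbours, iterated), but the alignment step each iteration performs is exactly this.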
