OPTRO 2012 Paper - Draft 6 12_12_11

Figure 1. Captured images comparing similar scenes for a conventional system focused at 20 m, a wavefront coding system using the Wiener algorithm and a wavefront coding system using the CLS algorithm (panels: man at 30 m; man at 125 m)

Removal of a focus mechanism is not the only potential benefit which wavefront coding can provide in infrared systems; the technique can be used to mitigate the detrimental effects of any defocus-like aberration. Therefore, the technique can potentially be used for athermalisation [6] or to accommodate broad spectral bands, whilst avoiding the use of less desirable materials. Sensitivity to some lens manufacturing tolerances (e.g. surface radii, lens thickness and separation) can be reduced, and time-consuming focus setting may be avoided during final assembly; these factors can have a significant impact on product cost. For some applications, wavefront coding can enable a single-lens solution to provide acceptable imagery over an extended field-of-view [7].

4. MULTI-APERTURE IMAGING

Multi-aperture imagers employ multiple parallel shorter-focal-length imaging lenses in place of a conventional single longer-focal-length lens to produce a set of low-resolution images that are computationally fused into a single high-resolution image [8]. The technique relies on aliasing to encode high-spatial-frequency information, and on disparity between the recorded images, to enable super-resolution algorithms to reconstruct a high-resolution image. By employing an array of shorter-focal-length imagers it is possible to significantly reduce the length of the imaging system without sacrificing resolution.

4.1. Optical design

The aim is to produce a smaller and lighter multi-aperture imaging system whilst providing resolution performance similar to that of a larger conventional single-aperture lens.
The baseline conventional design is a two-element Petzval lens for the long-wave infrared (LWIR) with a focal length of 114 mm, operating at F/1.6 over a full field-of-view of 8°. The overall length of this system is around 125 mm, with a diameter of 71 mm.

The equivalent LWIR multi-aperture imaging system consists of a 3x3 array of imaging channels, as shown in Figure 2. Each channel provides a field-of-view of 8°, common to all channels, and is composed of two germanium elements, resulting in an F/1.6 optical system with a nominal focal length of 38 mm.

For simplicity, we assume a square sensor with 640x640 pixels and a 25 µm pixel pitch. The Nyquist frequency is 20 cycles/mm. The system is 45 mm long, a factor of almost three shorter than the conventional baseline lens, and of a similar width of 72 mm.
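As a quick sanity check, the quoted Nyquist frequency follows directly from the 25 µm pixel pitch; a one-line sketch (variable names are ours):

```python
# Nyquist frequency of a sampled detector: 1 / (2 * pixel pitch).
pixel_pitch_mm = 0.025  # 25 um pixel pitch, as stated in the text

nyquist_cyc_per_mm = 1.0 / (2.0 * pixel_pitch_mm)
print(nyquist_cyc_per_mm)  # 20.0 cycles/mm, matching the stated value
```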


After optimisation of the wavefront error over the 8 to 12 µm wavelength range, the performance of the imaging channels is virtually diffraction limited. The system has been designed to operate at infinite conjugates. As an illustration, the MTFs for the imaging channels situated at the centre and corners of the array are displayed in Figures 3 and 4 respectively.

Thus, given the input scene shown in Figure 5, this system produces nine undersampled images with diffraction-limited performance. A simulation of the full sensor frame acquired by the system is displayed in Figure 6. Note that the acquisition of non-redundant information across the channels is enabled by the geometric distortions that vary from channel to channel. To prevent overlap between adjacent images and cross-talk, the system will contain a series of field stops and baffles that limit the field of view of each channel and block any light that crosses from one channel into another.

Figure 2. Side and front view of the multi-aperture system

Figure 3. Central channel MTF for various FOVs up to the Nyquist frequency

Figure 4. Corner channel MTF for various FOVs up to the Nyquist frequency

Figure 5. High-resolution scene (original image courtesy of Sierra Pacific Innovations Corp, www.x20.org)

Figure 6. Full frame acquired with the multi-aperture system (detected frame has been rotated for display)
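The combination of diffraction-limited channels and undersampling can be checked numerically with the textbook incoherent MTF cutoff 1/(λ·F#); the 10 µm mid-band wavelength used here is our assumption, not a value from the paper:

```python
# Illustrative check: a diffraction-limited F/1.6 channel at a mid-band
# wavelength of 10 um has an incoherent cutoff of 1/(lambda * F#), well
# above the 20 cycles/mm Nyquist frequency of the 25 um pitch sensor,
# so each channel is undersampled and aliases high spatial frequencies.
wavelength_mm = 0.010   # 10 um, middle of the 8-12 um band (our assumption)
f_number = 1.6
nyquist_cyc_per_mm = 20.0

cutoff_cyc_per_mm = 1.0 / (wavelength_mm * f_number)
print(cutoff_cyc_per_mm)                        # 62.5 cycles/mm
print(cutoff_cyc_per_mm > nyquist_cyc_per_mm)   # True -> undersampled
```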


4.2. Image Processing

Fundamentally, the reconstruction process fuses the set of low-resolution undersampled images to produce a single high-resolution image. Super-resolution reconstruction is a well-known problem and has been extensively studied in the literature; for a technical overview see [9].

The super-resolution model describes the relationship between the N acquired low-resolution images, denoted by y_k, and the high-resolution scene x, and can be formulated in matrix notation as [10, 11]:

y_k = D_k H_k W_k x + e_k,  for 1 ≤ k ≤ N,  (1)

where W_k is a matrix representing the distortion or warping operator between x and the k-th image y_k, H_k is the blur matrix representing the space-variant point-spread function across the sensor, and D_k is the decimation matrix representing the sampling operator that reduces the resolution of the acquired images. The vector e_k stands for the additive noise in the k-th image y_k. In the system presented here, the size of the nine images is 213x213 pixels; these are subsequently fused to produce a 639x639 high-resolution image.

By grouping the matrices in Eq. (1) into one matrix F, with

F = [F_1; F_2; ...; F_N] = [D_1 H_1 W_1; D_2 H_2 W_2; ...; D_N H_N W_N],  (2)

the imaging model can be expressed as:

y = F x + e,  (3)

which shows that the super-resolution problem can be treated as a classic inverse problem in which an estimate of x can be recovered from prior knowledge of the linear operator F (i.e. the image distortion, shift-variant PSFs and the sampling characteristics of the sensor). The matrix F can be obtained from data calculated in a conventional ray-tracing software tool, such as Zemax, or through a calibration process once the imaging system is constructed.

Many reconstruction algorithms assuming different noise models have been employed in the literature for the solution of this inverse problem [9-12].
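Equation (1) can be illustrated with a minimal NumPy sketch in which the warp W_k is an integer pixel shift, the blur H_k a 3x3 mean filter and D_k a 3x decimation (noise omitted); the function name and operator choices are ours, not the paper's calibrated, space-variant operators:

```python
import numpy as np

def forward_model(x, shift_px, factor=3):
    """Apply W_k (warp), H_k (blur), then D_k (decimation) to scene x.

    A simplified, shift-invariant stand-in for Eq. (1): the warp is an
    integer pixel shift and the blur a 3x3 mean filter; the real system
    uses per-channel geometric distortions and space-variant PSFs.
    """
    warped = np.roll(x, shift_px, axis=(0, 1))           # W_k: warp
    # H_k: 3x3 mean filter built from shifted copies of the warped image
    blurred = sum(np.roll(warped, (i, j), axis=(0, 1))
                  for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
    return blurred[::factor, ::factor]                   # D_k: decimation

x = np.random.rand(639, 639)              # high-resolution scene
y1 = forward_model(x, shift_px=(0, 1))
print(y1.shape)  # (213, 213), matching the sub-image size in the text
```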
In this work, the noise in the acquired images is modelled as Poisson noise and the reconstruction algorithm is based on the maximum-likelihood approach [13]. Future work will explore the use of other reconstruction algorithms assuming different types of noise, such as Gaussian noise.

The reconstruction algorithm is an iterative method in which x_n denotes the current estimate and x_{n+1} the new estimate, as defined by:

x_{n+1} = x_n · F^T ( y / (F x_n) ),  (4)

where the division is element-wise; thus each new estimate x_{n+1} is an improvement over the old estimate x_n (in the maximum-likelihood sense).

Figure 7. (a) Detail of the low-resolution image corresponding to the central channel; (b) detail of the reconstructed high-resolution image; (c) detail of the scene. Images acquired with SNR = 100.
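A small self-contained NumPy sketch of the multiplicative update in Eq. (4); the toy dimensions and random F are our illustration, with the columns of F normalised to sum to one so that the plain update of Eq. (4) coincides exactly with the Shepp-Vardi ML-EM iteration [13]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: F is a small non-negative system matrix whose columns sum
# to one, so Eq. (4) is exactly the Shepp-Vardi ML-EM update.
F = rng.random((60, 30))
F /= F.sum(axis=0)                     # normalise columns of F
x_true = rng.random(30) + 0.1          # strictly positive "scene"
y = F @ x_true                         # noiseless measurements

x = np.ones(30)                        # positive initial estimate
for _ in range(500):
    x = x * (F.T @ (y / (F @ x)))      # Eq. (4): x_{n+1} = x_n F^T(y / F x_n)

# The data misfit shrinks as the Poisson likelihood increases monotonically.
print(float(np.linalg.norm(F @ x - y)))
```

Note that the update is multiplicative, so a positive initial estimate stays positive throughout, which suits photon-count data.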


The infrared scene consisting of a docked boat and a person in Figure 5 is imaged by the multi-aperture system with a mean signal-to-noise ratio (SNR) of 100. The 213×213 pixel sub-image from the central channel is shown in Figure 7(a); note the presence of aliasing artefacts. The high-resolution image iteratively restored using all low-resolution sub-images is shown in Figure 7(b). Clearly, the reconstruction algorithm has been able to recover a spatial resolution similar to that of the original high-resolution scene; see Figure 7(c).

5. MULTI-SCALE IMAGING

In the original concept proposed by Brady et al. [14], multi-scale imaging is a design approach that aims to break the relationships between geometric aberrations, lens complexity and aperture size which govern the design of traditional optical systems [15]. In multi-scale imaging, the field-of-view is increased by concatenating additional lens arrays that correct aberrations locally and produce overlapping partial images, which are subsequently processed to create a single image with wide field coverage [16, 17].

The work presented here is inspired by the original concept of multi-scale imaging. However, in contrast to the original approach, our design contains no intermediate image planes and, as a result, is required to accommodate more severe overlaps in the partial images.

5.1. Optical design

The LWIR system consists of a rotationally symmetric meniscus front element and a rear element containing an array of 13 lenslets, as shown in Figure 8. All elements are made from germanium. The full field-of-view is 72°. Unlike other multi-scale designs that use multiple sensors distributed in a spherical arrangement [17], our system employs only one sensor. The detector is assumed to have 640x640 pixels with a 25 µm pixel pitch. In this design, each lenslet in the second optical element corrects aberrations in a small localised region of the field-of-view.
Thus, for a given field point, the wavefront may be partitioned into several segments, resulting in multiple separated PSFs in the focal plane.

Figure 8. Layout of the multi-scale imaging system. The red surface corresponds to the sensor.

The imaging performance has been optimised with constraints which penalise abrupt surface transitions between lenslet boundaries. The MTFs for the three lenslets replicated across the lenslet array are shown in Figure 9. An illustration of the type of image acquired with this system is shown in Figure 10; the image is simulated with Poisson noise with a mean SNR of 100.

The development of this multi-scale solution started with a conventional two-element baseline lens, of similar length and overall diameter, designed to provide a total field-of-view of 12° for the same sensor. The goal was to increase the field coverage without increasing lens complexity or the size and mass of the optical system. With conventional lenses, the total field coverage could not be pushed beyond 20° whilst maintaining the above constraints.
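One common way to simulate Poisson noise at a target mean SNR is to scale the image so that its mean photon count equals SNR²; this is our reading of "mean SNR of 100", since the paper does not spell out its scaling convention:

```python
import numpy as np

rng = np.random.default_rng(1)

def add_poisson_noise(img, target_snr=100.0):
    """Simulate photon noise at a target mean SNR.

    For Poisson statistics SNR = sqrt(mean counts), so the image is scaled
    to a mean of target_snr**2 counts before sampling. This is one
    plausible reading of the paper's "mean SNR of 100"; the exact
    convention used by the authors is not stated.
    """
    scale = (target_snr ** 2) / img.mean()   # mean counts = SNR^2
    noisy_counts = rng.poisson(img * scale)  # photon-count realisation
    return noisy_counts / scale              # back to the original units

img = rng.random((64, 64))                   # stand-in normalised scene
noisy = add_poisson_noise(img, target_snr=100.0)
```

At SNR = 100 the fluctuations at a mean-level pixel are about 1% of its value, which is why the noise is barely visible in the simulated frames.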


5.2. Image Processing

The imaging process in the multi-scale system can be described by the linear forward model:

y = F x + e,  (5)

where y is the detected image, x is the scene, e is the noise and F is the linear space-variant blur matrix representing the PSF for every point on the image plane. The matrix F also contains information on the geometric transformation between the object and image spaces. The acquired image y is restored using the same iterative algorithm described in Eq. (4). A simulation of the restored image is shown in Figure 11. The images have been acquired with a mean SNR of 100 and Poisson noise statistics.

Figure 9. MTFs up to the Nyquist frequency for the three lenslets and various FOVs.

Figure 10. Detected multi-scale image of the docked boat and person shown in Figure 5 (mean SNR = 100).

Figure 11. Restored multi-scale image (SNR = 100) with a field-of-view of 72°.

6. CONCLUSIONS

We have shown that computational imaging techniques can be used to break some of the fundamental rules which determine the limitations on size, weight and cost of conventional optical systems. Specific examples of uncooled objective lenses for operation in the LWIR spectral region have been used to highlight the potential benefits.

Wavefront coding can be used to significantly extend the depth-of-focus beyond the limit governed by the F-number and operating wavelength of a conventional lens. The image processing can be implemented to run in real-time and removes the need for a focus mechanism. However, trade-offs between the increased depth-of-focus, imaging artefacts and noise amplification must be carefully managed, requiring specialised optimisation strategies to be developed.

We have designed a multiple-aperture system which is a factor of two-and-a-half shorter than the focal length of a conventional lens providing equivalent resolution. It uses the same number of optical elements as the conventional baseline design and is contained within a similar diameter.

A new interpretation of the multi-scale optics concept has been presented. We have designed a two-element system with a total field coverage at least a factor of three greater than could be achieved with a conventional lens of the same level of complexity and overall size.

The reader is left to ponder the question: is computational imaging simply a new tool in the optical engineer's toolbox, or does it represent a significant disruptive technology?

7. REFERENCES

[1] Bigwood, C. and Wood, A. (2011), Two-element lenses for military applications, Opt. Eng., Vol. 50, No. 12, 121705.

[2] Dowski, E. R. and Cathey, W. T. (1995), Extended depth of field through wave-front coding, Appl. Opt., Vol. 34, No. 11, pp 1859-1866.

[3] Vettenburg, T., Wood, A., Bustin, N. and Harvey, A. (2010), Fidelity comparison of phase masks for hybrid imaging, Proc. SPIE, Vol. 7652, 76520U.

[4] Vettenburg, T., Bustin, N. and Harvey, A. (2010), Fidelity optimisation for aberration-tolerant hybrid imaging systems, Opt. Express, Vol. 18, No. 9, pp 9220-9228.

[5] Hasler, I., Bustin, N. and Price, D. (2010), Thermal imaging system using optimised wavefront coding operating in real-time, Proc. SPIE, Vol. 7834, 783402.

[6] Muyo, G. and Harvey, A. (2004), Wavefront coding for athermalisation of infrared imaging systems, Proc. SPIE, Vol. 5612, pp 227-235.

[7] Muyo, G., Singh, A., Andersson, M., Huckridge, D., Wood, A. and Harvey, A. (2009), Infrared imaging with a wavefront-coded singlet lens, Opt. Express, Vol. 17, No. 23, pp 21118-21123.

[8] Tanida, J., Kumagai, T., Yamada, K., Miyatake, S., Ishida, K., Morimoto, T., Kondou, N., Miyazaki, D. and Ichioka, Y. (2001), Thin observation module by bound optics (TOMBO): concept and experimental verification, Appl. Opt., Vol. 40, No. 11, pp 1806-1813.

[9] Park, S. C., Park, M. K. and Kang, M. G. (2003), Super-resolution image reconstruction: a technical overview, IEEE Signal Processing Magazine, Vol. 20, No. 3, pp 21-36.

[10] Elad, M. and Feuer, A. (1997), Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images, IEEE Transactions on Image Processing, Vol. 6, No. 12, pp 1646-1658.

[11] Farsiu, S., Robinson, M. D., Elad, M. and Milanfar, P. (2004), Fast and robust multi-frame super-resolution, IEEE Transactions on Image Processing, Vol. 13, No. 10, pp 1327-1344.

[12] Shankar, M., Willett, R., Pitsianis, N., Schulz, T., Gibbons, R., Te Kolste, R., Carriere, J., Chen, C., Prather, D. and Brady, D. (2008), Thin infrared imaging systems through multi-channel sampling, Appl. Opt., Vol. 47, No. 10, pp B1-B10.

[13] Shepp, L. A. and Vardi, Y. (1982), Maximum likelihood reconstruction for emission tomography, IEEE Transactions on Medical Imaging, Vol. 1, No. 2, pp 113-122.

[14] Brady, D. J. and Hagen, N. (2009), Multi-scale lens design, Opt. Express, Vol. 17, No. 13, pp 10659-10674.

[15] Lohmann, A. W. (1989), Scaling laws for lens systems, Appl. Opt., Vol. 28, No. 23, pp 4996-4998.

[16] Marks, D. L., Tremblay, E. J., Ford, J. E. and Brady, D. J. (2011), Microcamera aperture scale in monocentric gigapixel cameras, Appl. Opt., Vol. 50, No. 30, pp 5824-5833.

[17] Son, H., Marks, D. L., Tremblay, E. J., Ford, J., Hahn, J., Stack, R., Johnson, A., McLaughlin, P., Shaw, J., Kim, J. and Brady, D. J. (2011), A multi-scale, wide-field, gigapixel camera, Imaging Systems and Applications: OSA Technical Digest, JTuE2.
