WISHED: Wavefront imaging sensor with high resolution and depth ranging

Yicheng Wu*, Fengqiang Li*, Florian Willomitzer, Ashok Veeraraghavan, Oliver Cossairt

* These two authors contributed equally.
Y. Wu and A. Veeraraghavan are with the Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005 USA.
F. Li and O. Cossairt are with the Department of Computer Science, Northwestern University, Evanston, IL 60208 USA.
F. Willomitzer is with the Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL 60208 USA.
Corresponding authors: A. Veeraraghavan (email: vashok@rice.edu) and O. Cossairt (email: ollie@eecs.northwestern.edu).

Abstract—Phase-retrieval based wavefront sensors have been shown to reconstruct the complex field from an object with high spatial resolution. Although the reconstructed complex field encodes the depth information of the object, it is impractical to use such a sensor for depth imaging of macroscopic objects, since the unambiguous depth range is limited by the optical wavelength. To improve the depth range and handle depth discontinuities, we propose a novel three-dimensional sensor that leverages wavelength diversity and wavefront sensing. Complex fields at two optical wavelengths are recorded, and a synthetic wavelength can be generated by correlating those wavefronts. The proposed system achieves high lateral and depth resolutions. Our experimental prototype shows an unambiguous range more than 1,000× larger than the optical wavelengths, while the depth precision is up to 9µm for smooth objects and up to 69µm for rough objects. We experimentally demonstrate 3D reconstructions for transparent, translucent, and opaque objects with smooth and rough surfaces.

Index Terms—3D imaging, Wavelength diversity, Wavefront sensing, Phase retrieval

1 INTRODUCTION

Optical fields from an object contain information about both albedo and depth. However, intensity-based CCD/CMOS sensors can only record the amplitude of a complex optical field.

In order to reconstruct a complex-valued optical field that contains both amplitude and phase information, wavefront sensors can be used. The Shack-Hartmann wavefront sensor (SHWFS) [1] is the most common wavefront sensor, and it relies on a lenslet array. This reduces the spatial resolution of the recovered wavefront to the number of lenses in the array, typically a few tens of thousands. Recently, a number of high-resolution wavefront sensors (HR-WFS) have been introduced that rely on optical modulation and computational reconstruction. Wang et al. [2] and Horisaki [3] recover megapixel-resolution wavefronts using coded apertures. Wu et al. [4] propose a wavefront imaging sensor (WISH) with several megapixels of resolution by using a programmable spatial light modulator (SLM).

HR-WFS can recover high lateral spatial resolution, but depth information is encoded in phase relative to optical wavelengths, producing an unambiguous depth range in the hundreds of nanometers (one optical wavelength). Phase unwrapping algorithms may help alleviate this issue, but they typically fail for objects with discontinuities. The problem of phase wrapping gets even more severe for optically rough surfaces, since roughness produces a speckle pattern with a random phase distribution (see Fig. 1f). The speckle problem manifests for any WFS technique, so that none can be used as a general-purpose depth sensor.

The goal of this work is to develop a wavefront sensor capable of measuring the depth of objects with large surface variations (orders of magnitude larger than the optical wavelength) and objects with rough surfaces. Our approach is inspired by interferometry. Optical interferometry is a wavefront sensing technique that uses a reference beam to record the complex field; it suffers from a similar limitation on the unambiguous depth range. To circumvent this limitation, multi-wavelength interferometry has been proposed [5], [6], [7]. For example, two wavelengths (λ1, λ2) are used to record a complex field, and a complex field at the synthetic wavelength Λ = λ1·λ2/|λ1 − λ2| can then be calculated. Since the synthetic wavelength is much larger than the optical wavelengths, multi-wavelength interferometry can provide several orders of magnitude improvement in the unambiguous depth range.

In this paper, we leverage two ideas. First, by exploiting computational wavefront sensors (WISH [4] in this work), we can achieve high-lateral-resolution wavefront sensing. Second, by exploiting wavelength diversity, we can achieve a large unambiguous depth range in the recovery.

We introduce a wavefront imaging sensor with high resolution and depth ranging (WISHED), which allows us to achieve tens-of-micron lateral resolution and an unambiguous range more than 1,000× larger than the optical wavelengths. Our WISHED prototype utilizes a tunable laser to provide wavelength diversity and a programmable synthetic wavelength. To reconstruct the depth information, first a wavefront from the object is reconstructed for each wavelength. Then, the phase difference between these two measured wavefronts is calculated, and the depth is computed relative to the resulting synthetic wavelength. We summarize our contributions as follows:

- We leverage wavelength diversity with wavefront sensing to propose WISHED as a depth sensor.
- We build a prototype and demonstrate three-dimensional (3D) sensing with high lateral resolution (up to tens of microns), high depth resolution (up to 9µm), and a large unambiguous range (up to 1.2cm).
- We experimentally demonstrate that WISHED can be used for transparent, translucent, and opaque objects with smooth and rough surfaces.
- The WISHED prototype uses a tunable laser, which provides flexibility in the selection of unambiguous ranges and depth precisions. By combining a multi-wavelength approach and phase unwrapping, we can also achieve a higher dynamic range (the ratio between imaging range and depth resolution), as shown in Fig. 6.
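To make the range amplification concrete, the synthetic wavelength can be computed directly; a minimal sketch using the wavelength pair from the simulation in Sec. 4.1 (the variable names are illustrative):

```python
# Synthetic wavelength: Lambda = lam1 * lam2 / |lam1 - lam2|.
# Wavelength pair taken from the simulation in Sec. 4.1.
lam1, lam2 = 854.985e-9, 855.015e-9        # optical wavelengths [m]
Lam = lam1 * lam2 / abs(lam1 - lam2)       # synthetic wavelength [m]
print(f"Lambda = {Lam * 1e3:.1f} mm")      # -> Lambda = 24.4 mm
```

A 0.03nm separation between the two optical wavelengths thus stretches the unambiguous range from under a micron to tens of millimeters.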

Fig. 1. Overview of the proposed WISHED system. (a) The proposed WISHED setup consists of a tunable laser and a wavefront sensor. (b) Compared with WISH, the proposed WISHED has a much larger unambiguous imaging range with high depth resolution. Our system works for both optically smooth and rough objects. (c) Experimental setup with two layers of glass slides with letter masks placed in front. The optical path difference between the two layers is about 0.5mm. (d) Experimental setup for recovering a tilted wavefront incident on a diffuser (Thorlabs 220 Grit). (e,f) Recovered phase maps from WISH. Since the phase is highly wrapped and contains discontinuities (e) and speckle (f), the correct optical path difference cannot be recovered. In contrast, the phase maps (g,h) from WISHED show little (h) or no (g) phase wrapping. (i,j) Unwrapped optical path difference and optical depth, which demonstrate that WISHED is capable of achieving sub-mm depth resolutions.

2 RELATED WORK

In this section, we review optical interferometry and phase retrieval algorithms.

2.1 Optical interferometry

In optical interferometry, the detector compares the phase delay in the optical field between sample and reference arms to measure the surface variations of an object. For example, the widely used white light interferometry (WLI) can provide very high depth resolution; optical coherence tomography is one example of WLI [8]. Focal plane sensors can be used to record a WLI interferogram [9], or a single pixel can be used together with mechanical scanning to record the whole object [10]. Detection can be broadly separated into homodyne and heterodyne techniques. In homodyne detection, the carrier frequencies in the two arms are the same. In heterodyne interferometry, the carrier frequencies in the two arms are different, which helps with the phase estimation [11]. Single-shot heterodyne interferometry has been proposed with a polarized camera [12]. Kadambi et al. [13] also use heterodyne interferometry to build a GHz time-of-flight imager with micron depth resolution.

Although interferometry with a single wavelength provides extremely high depth resolution, it cannot measure objects with rough surfaces, since speckle destroys the depth measurement [14]. To measure optically rough objects, multiple phase-wrapped measurements can be made sequentially using different optical wavelengths, and then phase unwrapping algorithms can be used to recover the depth of the object [15]. Alternatively, Dändliker et al. [7] propose superheterodyne interferometry (SH) to measure objects with two closely spaced wavelengths simultaneously. Li et al. [16] further demonstrate that SH can be used to measure depth for objects with optically rough surfaces, and demonstrate the use of tunable lasers to provide a trade-off between range and resolution.
The implementation of SH with a focal plane sensor has also been proposed to remove the need for mechanical scanning [17].

Distinctions between multi-wavelength interferometry and WISHED: Both methods can provide high-resolution wavefront sensing. The main difference is that multi-wavelength interferometry needs a reference arm to coherently interfere with the sample beam, which results in several limitations. First, the camera essentially records the phase delay caused by the optical path difference (OPD) between the sample and reference beams. Since these two arms are physically separated, even micrometer-level object movement (e.g., due to vibrations) may introduce significant phase delays between the two arms, which totally destroys the measurement. Second, to generate a high-contrast interferogram on the sensor plane, the powers of the sample and reference beams must be matched, so a careful calibration of power matching between the object and reference beams needs to be performed. Third, the phase of the reference must be calibrated so that its effect can be factored out of the recovered wavefront.

Moreover, most current SH systems are implemented with single-pixel detectors or low-resolution lock-in sensors, while WISHED can use megapixel CCD/CMOS sensors. This means that the spatial resolution of WISHED is much higher.

2.2 Phase retrieval

Since optical frequencies (e.g., 400THz) are much higher than the frame rate of a focal plane detector or the sampling frequency of a single-pixel detector, it is generally only possible to record the amplitude of an optical field, but not the phase. As mentioned above, interferometry can be used to recover the phase directly with the help of a reference beam. Alternatively, non-linear phase retrieval algorithms can be used to estimate the phase (or the complex field) from intensity-only measurements. In general, this reconstruction is an ill-posed problem, and it is difficult to guarantee uniqueness of the reconstructed result.

The most popular phase retrieval algorithm was introduced by Gerchberg and Saxton (GS) [18]; it iteratively imposes sensor-plane and object-plane constraints. Although it is not guaranteed to converge to the true solution, the GS algorithm works well in practice provided sufficiently strong constraints. Researchers have proposed a number of techniques to improve the convergence of the original GS algorithm. Several methods focus on increasing the number of uncorrelated measurements (stronger constraints), including adding measurements at different propagation planes [19], [20] or with different phase modulations [4], [21]. More sophisticated phase retrieval algorithms have also been introduced using new ideas such as convex relaxation [22], [23], approximate message passing [24], [25], and deep learning [26]. Non-linear phase retrieval algorithms have been a key component of many imaging techniques, such as wavefront sensing [2], [4], [21], [27], ptychography [28], [29], and Fourier ptychography [30], [31].

3 PROPOSED METHOD

The key idea of our approach is to borrow the wavelength diversity that is often used in interferometry and combine it with high-resolution computational wavefront sensing. First, we explain a phase retrieval algorithm to reconstruct the wavefront. Then, we introduce a method for depth estimation with a synthetic wavelength that is several orders of magnitude larger than the optical wavelength.

3.1 Wavefront recovery using phase retrieval

We follow the idea proposed in WISH [4] and design a wavefront sensor with an SLM and a CMOS sensor to record the optical field. N uncorrelated random phase patterns Φ^i_SLM (i = 1···N) are displayed on the SLM to modify the incident wavefront. The corresponding intensity I^i (i = 1···N) is captured by the sensor at distance z. The relation between the unknown field u_sensor and the measurements I^i can be represented as

    I^i = | P_z( Φ^i_SLM · P_−z(u_sensor) ) |²    (1)

where P_z is the propagation operator [32], which calculates the field after propagation over a distance z. In our system, we first recover the field right before the SLM, u_SLM = P_−z(u_sensor); u_sensor can then be calculated by numerical propagation. To estimate u_SLM, we rewrite Eq. 1 in optimization form:

    û_SLM = argmin_{u_SLM} Σ_i ‖ √(I^i) − | P_z( Φ^i_SLM · u_SLM ) | ‖²    (2)
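Eq. 1 and Algorithm 1 below both rely on the numerical propagation operator P_z. A minimal angular-spectrum sketch of such an operator (the function and variable names are ours, and the grid is assumed square; this is one common implementation choice, not necessarily the paper's):

```python
import numpy as np

def propagate(u, z, wavelength, pitch):
    """Angular-spectrum propagation of a sampled field u over distance z.

    u: square 2D complex array sampled with spacing `pitch` [m].
    z may be negative (back-propagation). Returns the propagated field.
    """
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)                  # spatial frequencies [1/m]
    fsq = fx[None, :] ** 2 + fx[:, None] ** 2
    arg = 1.0 / wavelength ** 2 - fsq
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # axial wavenumber
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)  # drop evanescent terms
    return np.fft.ifft2(np.fft.fft2(u) * H)
```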
To solve this phase retrieval problem, we apply the GS algorithm [18]. The GS algorithm gives accurate results when the phase modulation is random [33]; here, this requirement is satisfied by projecting random SLM patterns. More measurements lead to better convergence, especially for rough objects. Empirically, we find that 16 SLM measurements give us robust reconstruction results in most cases. Our algorithm does not enforce any prior on the estimated field, which allows it to achieve good results for both optically smooth and rough surfaces.

This iterative algorithm is shown in Algorithm 1. For the initialization, û_SLM is the average of the fields back-propagated from the sensor plane with the captured amplitudes √(I^i) and zero (or random) phases. During each iteration, û_SLM is modulated by Φ^i_SLM and then propagated to the sensor. On the sensor plane, for each y^i, the amplitude is replaced by the corresponding measurement √(I^i). Next, these fields are propagated back to the SLM plane, multiplied by the inverse of the SLM modulation, and averaged. The estimate converges within several iterations. After estimating the optical field on the SLM plane, we can calculate the optical field on the sensor plane with a forward propagation.

Algorithm 1: Wavefront recovery algorithm
1: initialize the field: û_SLM = (1/N) Σ_i P_−z(√(I^i))
2: while not converged (e.g., Σ_i ‖ |y^i| − √(I^i) ‖ is above a tolerance) do
3:   propagate to the sensor plane: y^i = P_z( Φ^i_SLM · û_SLM )
4:   constrain the amplitude: y^i_c = √(I^i) · y^i / |y^i|
5:   propagate to the SLM plane: û^i_SLM = (Φ^i_SLM)^−1 · P_−z(y^i_c)
6:   average the field: û_SLM = (1/N) Σ_i û^i_SLM
7: output the field on the sensor: û_sensor = P_z(û_SLM)
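Algorithm 1 maps directly to code. A minimal sketch of the iteration, reusing propagate() from the sketch above; slm_phases (the N displayed patterns) and meas (the N captured intensity images) are assumed inputs, and a fixed iteration count stands in for the convergence test of step 2:

```python
import numpy as np
# Assumes propagate() from the angular-spectrum sketch in this section.

def recover_u_slm(slm_phases, meas, z, wavelength, pitch, n_iter=50):
    amps = [np.sqrt(I) for I in meas]
    # step 1: average of back-propagated measured amplitudes (zero phase)
    u = np.mean([propagate(a, -z, wavelength, pitch) for a in amps], axis=0)
    for _ in range(n_iter):                      # step 2 (fixed iterations)
        est = []
        for phi, a in zip(slm_phases, amps):
            mod = np.exp(1j * phi)               # SLM modulation Phi^i
            y = propagate(mod * u, z, wavelength, pitch)        # step 3
            y_c = a * np.exp(1j * np.angle(y))   # step 4: amplitude constraint
            est.append(np.conj(mod) *
                       propagate(y_c, -z, wavelength, pitch))   # step 5
        u = np.mean(est, axis=0)                 # step 6: average the field
    return propagate(u, z, wavelength, pitch)    # step 7: u_sensor
```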

3.2 Depth estimation with multiple wavelengths

The optical field O_λi(x, y) recorded on the sensor plane encodes the surface variation d(x, y) and the albedo a_λi(x, y) of the object as

    O_λi(x, y) = a_λi(x, y) · exp{ i (2π/λi) ( d(x, y) + δd(x, y) ) }    (3)

where δd(x, y) is the roughness at point (x, y) on the object. For an optically smooth object (δd(x, y) = 0), we can estimate the phase with only one wavelength. However, for macroscopic objects (d ≫ λi) with discontinuities, it is challenging to unwrap the phase and convert it into depth. For rough surfaces (δd(x, y) ≠ 0), the roughness introduces a speckle pattern with a random phase distribution, which breaks the depth measurement.

To decode the depth of objects with discontinuities or rough surfaces, we combine the optical fields at two close wavelengths. The phase difference, and therefore the depth, can be calculated by pointwise multiplication of the field at λ1 with the conjugate field at λ2:

    O_λ1 ⊙ (O_λ2)* = ( a_λ1 exp{ i 2π d/λ1 } ) ⊙ ( a_λ2 exp{ −i 2π d/λ2 } )    (4)

    d = ∠( O_λ1 ⊙ (O_λ2)* ) · (1/2π) · λ1·λ2/(λ1 − λ2)    (5)

where λ1 and λ2 are the two wavelengths used to estimate the depth or phase of the object, ⊙ represents pointwise multiplication, (·)* represents the complex conjugate, and ∠(·) denotes the phase of a complex value. The synthetic wavelength is defined as Λ = λ1·λ2/|λ1 − λ2|.

The micro surface roughness δd(x, y) is far smaller than the macro surface height d(x, y). Therefore, the WISHED reconstruction represents the macro surface height of the object.

During the experiments, the two wavelengths are chosen very close to each other (about 0.2nm apart) to produce a large synthetic wavelength Λ (about 5mm), which helps measure surfaces with large height variations.
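A minimal sketch of Eq. 4 and Eq. 5, assuming O1 and O2 are the complex fields recovered at λ1 and λ2 (array and function names are ours):

```python
import numpy as np

def depth_from_two_wavelengths(O1, O2, lam1, lam2):
    """Depth per Eq. 5 from fields recovered at two close wavelengths.

    O1, O2: 2D complex fields O_lambda1, O_lambda2.
    Returns depth in the same units as the wavelengths, wrapped modulo
    the synthetic wavelength Lambda = lam1 * lam2 / |lam1 - lam2|.
    """
    phase_diff = np.angle(O1 * np.conj(O2))   # pointwise product, Eq. 4
    Lam = lam1 * lam2 / abs(lam1 - lam2)      # synthetic wavelength
    return phase_diff / (2 * np.pi) * Lam     # Eq. 5
```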
3.3 Phase unwrapping

To achieve both high depth resolution and a large imaging range, we can utilize two measurements: one with a small synthetic wavelength and the other with a large synthetic wavelength. The small synthetic wavelength provides high depth resolution, but its measurement can be wrapped. We can use a depth measurement with a large synthetic wavelength and no phase wrapping as a guide to unwrap the measurement with the smaller synthetic wavelength [34]:

    Φ2_uw(Λ2) = Φw(Λ2) + 2π · round( ( M · Φ1_uw(Λ1) − Φw(Λ2) ) / 2π )    (6)

where M equals Λ1/Λ2, Φ1_uw is the phase measurement at the large synthetic wavelength Λ1 without wrapping, Φw is the wrapped phase measurement at the small synthetic wavelength Λ2, and Φ2_uw is the unwrapped phase to be estimated. Once we estimate Φ2_uw, we can convert it into depth with Eq. 5.

For simple objects such as a planar surface, we can also directly use a fast two-dimensional phase-unwrapping algorithm that adds integer multiples of 2π at the phase jump regions [35] to unwrap phase measurements with small synthetic wavelengths.
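Eq. 6 is a one-liner in practice; a minimal sketch (array names are ours):

```python
import numpy as np

def unwrap_with_guide(phi_uw_L1, phi_w_L2, Lam1, Lam2):
    """Eq. 6: unwrap the fine phase using the coarse measurement as a guide.

    phi_uw_L1: unwrapped phase at the large synthetic wavelength Lam1.
    phi_w_L2:  wrapped phase at the small synthetic wavelength Lam2.
    """
    M = Lam1 / Lam2
    k = np.round((M * phi_uw_L1 - phi_w_L2) / (2 * np.pi))  # fringe order
    return phi_w_L2 + 2 * np.pi * k
```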

4 SIMULATION

In this section, we show simulation results for the proposed WISHED. Without loss of generality, we demonstrate simulations estimating the depth of an opaque object with rough surfaces in reflective mode.

Fig. 2. Simulation setup. The laser beam with a tunable wavelength is first scattered by the object and then collected by the lens. A wavefront sensor containing an SLM and a CMOS imager is used to record the complex field.

4.1 Simulation settings

The setup for our simulation is shown in Fig. 2. We pick the Stanford Bunny [36] as our object. To model the rough surface, we add a height variation following a Gaussian distribution on top of the height map of the bunny. The final height map is h = h_bunny + N(0, 1µm). The maximum height difference for the bunny is 10mm.

The two wavelengths used here are λa = 854.985nm and λb = 855.015nm, so the synthetic wavelength is Λ = 24.4mm. The bunny is illuminated by the laser beam at each wavelength sequentially. The scattered light is collected by a 15-mm-diameter lens with an 85.7mm focal length at distance z1 = 600mm. The light is focused at z2 = 100mm away from the lens. Since the sensor is on the conjugate plane of the object plane, u_sensor can be directly calculated based on Fourier optics theory [32]:

    u_sensor = Q[ (z1 + z2)/z2² ] V[ z1/z2 ] u_obj    (7)

where Q represents multiplication by a quadratic-phase exponential and V represents scaling by a constant. Based on this equation, once we recover u_sensor, u_obj can be calculated accordingly. Then the depth map can be recovered based on Eq. 5.

4.2 Depth estimation

In the simulation, we generate 16 random SLM patterns and calculate the corresponding intensity images on the sensor. To mimic noise in a real experiment, we add Gaussian noise with σ = 0.2 to the measurements. Fig. 3 shows several simulated sensor measurements for the two wavelengths.

Fig. 3. Simulated sensor measurements with different SLM patterns and optical wavelengths. For each SLM pattern, the speckle patterns are close for the two similar wavelengths. For different SLM patterns, the speckle patterns are totally distinct, which ensures that the measurements are uncorrelated.

Based on the phase retrieval algorithm in Sec. 3.1, the reconstructed field on the sensor is shown in Fig. 4(a). Because of the roughness of the bunny, the amplitude contains speckle patterns and the phase looks random. Based on Eq. 5, we recover the height map of the object. Due to the slight difference in Airy disk diameters between the two wavelengths, the recovered depth map contains artifacts near the boundaries of the speckle pattern. To reduce such artifacts, a Gaussian kernel with the size of the averaged Airy disk is applied to smooth the depth map. The result is shown in Fig. 4(b), which is quite close to the ground truth (RMSE = 85µm for Λ = 24.4mm). Note that the surface roughness (1µm) is much smaller than the RMSE (85µm); therefore, the recovered macroscopic height map is not affected by the surface roughness.

Fig. 4. Simulation reconstruction. (a) Recovered object fields for two wavelengths. (b) Comparison between ground truth and WISHED-estimated depth using a synthetic wavelength of 24.4mm (RMSE = 85µm).
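For concreteness, the simulated captures in Sec. 4.2 follow the forward model of Eq. 1; a minimal sketch, reusing propagate() from Sec. 3.1 (the stand-in object field and the grid geometry below are illustrative assumptions, not the paper's exact simulation parameters):

```python
import numpy as np
# Assumes propagate() from the angular-spectrum sketch in Sec. 3.1.
rng = np.random.default_rng(0)
n, pitch, z, lam = 512, 6.4e-6, 25e-3, 854.985e-9  # illustrative geometry
u = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))  # stand-in rough field
slm_phases = [rng.uniform(0, 2 * np.pi, (n, n)) for _ in range(16)]
meas = []
for phi in slm_phases:
    y = propagate(np.exp(1j * phi) * u, z, lam, pitch)
    I = np.abs(y) ** 2 + rng.normal(0.0, 0.2, (n, n))   # additive noise
    meas.append(np.clip(I, 0.0, None))  # keep sqrt(I) real in Algorithm 1
```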
5 EXPERIMENTAL RESULTS

In this section, we report experimental measurements using our prototype WISHED sensor for transparent, translucent, and opaque objects. The optical configuration is identical to the one used for the simulations, as illustrated in Fig. 2. A tunable laser (Toptica DLC pro850) with a center wavelength of 850nm is used as the light source. We image the object at multiple wavelengths by tuning the frequency of the laser emission. The laser emission is collimated and then illuminates the object in either transmissive or reflective mode. A linear polarizer is used to match the polarization of the SLM (HOLOEYE LETO, 1920×1080 resolution, 6.4µm pitch), since the SLM is only sensitive to a specific polarization direction. The focusing lens (Thorlabs AC508-075-B-ML) has a focal length of 75mm and is placed about 50cm away from the object. A 25.4-mm beam splitter is inserted between the SLM and the sensor to guide the field into the sensor, since the SLM operates in reflective mode. The distance between the SLM and the sensor is 25mm. The sensor is a 10-bit Basler Ace camera (acA4024-29um) equipped with a Sony IMX-226 CMOS sensor (1.85µm pixel pitch, 4024×3036 resolution).

5.1 Transmissive based 3D imaging

To verify the optimal performance and depth resolution of the prototype, we first image different transmissive objects with optically smooth and rough surfaces.

5.1.1 Imaging an optically smooth planar object

As shown in Fig. 5(a), the beam is collimated and then illuminates the object: a letter mask with a glass plate stack in front, as shown in Fig. 5(b). There are zero, one, two, and three glass plates on top of the letters 'I', 'C', 'C', and 'P', respectively. The surfaces of the glass plates are optically smooth, and each glass plate introduces an optical path difference of 0.5mm. 16 random SLM patterns are used to modulate the optical field.

Since the surfaces of the glass plates are smooth, no speckle pattern is observed, as shown in Fig. 5(c). We image the object with optical wavelengths of 854.31nm, 854.43nm, 854.71nm, and 855.73nm, which leads to six different synthetic wavelengths. Here, we show three examples in Fig. 5, corresponding to synthetic wavelengths of 6.44mm, 1.84mm, and 0.52mm. The phase and depth values are estimated with the method described in Sec. 3.2.

Fig. 5. Experiment on optically smooth objects in transmissive mode. (a) Setup illustration. (b) The object: a letter mask with a glass stack in front. (c) Amplitude measurement. (d) WISH reconstruction: phase map at one optical wavelength. (e-g) WISHED reconstruction: phase maps at different synthetic wavelengths. (h-j) OPD at different synthetic wavelengths. A smaller synthetic wavelength provides higher depth precision.

As we can see, the WISH measurement with one wavelength has severe phase wrapping, as shown in Fig. 5(d). It is very challenging to recover the overall phase of the glass stack if we do not know the discontinuities ahead of time.

On the other hand, WISHED with the larger synthetic wavelengths produces an unambiguous depth range significantly greater than the optical wavelengths, and we can clearly separate the four letters, which have different phases, as shown in Fig. 5(e,f). Since the total optical depth of the glass stack is larger than the synthetic wavelength of 0.52mm, we observe phase wrapping in Fig. 5(g). We use the phase unwrapping algorithm of Eq. 6 with the measurement at the synthetic wavelength of 1.84mm as a guide. The unwrapped phase of Fig. 5(g) is then converted into depth, as shown in Fig. 5(j).

The smaller synthetic wavelength provides better depth resolution, as shown in Fig. 5(j), compared to the larger synthetic wavelengths shown in Fig. 5(h,i). The same phenomenon can be observed in an optical depth profile along a line cut (marked as a dashed line in Fig. 5(j)) through the glass plate stack, as shown in Fig. 6. We further quantify the root mean square error (RMSE) for the optical depth profile shown in Fig. 6. We define the RMSE as the standard deviation away from the assumed step heights. The RMSEs for the synthetic wavelengths of 6.44mm, 1.84mm, and 0.52mm are 130µm, 56µm, and 9µm, which demonstrates the very high depth precision of the prototype. The measurements at different synthetic wavelengths also align with our intuitive expectation that a smaller wavelength provides finer depth resolution given the same signal-to-noise ratio (SNR) in the sensor. Since our prototype is built with a tunable source, it provides a trade-off for different applications requiring different imaging ranges and depth resolutions.

Fig. 6. OPD profile along one line on the plate stack as shown in Fig. 5(h-j). The x-axis marks the different pixels along the line, and the y-axis is the OPD. Note: each glass plate introduces an OPD of 0.5mm. The RMSEs of this OPD profile for the synthetic wavelengths of 6.44mm, 1.84mm, and 0.52mm are 130µm, 56µm, and 9µm.

5.1.2 Imaging an optically rough object

We then increase the complexity of the experiment by adding a diffuser behind the glass stack to emulate an optically rough surface, as shown in Fig. 7(a). If we look at the images recorded by the sensor without SLM modulation, the speckle pattern is clearly observed, as shown in Fig. 7(b,c). Two different wavelengths of 854.31nm and 854.48nm are used, which produces a synthetic wavelength of 4.3mm. The speckle patterns at the two wavelengths are slightly different.

Fig. 7. Experiment on optically rough objects in transmissive mode. (a) Setup illustration. (b,c) Amplitude measurements (no SLM modulation) at two close wavelengths, 854.22nm (b) and 854.38nm (c). These two speckle patterns are similar since their wavelengths are close. (d) Phase map from WISH reconstruction. The 3D information is totally lost due to the speckle pattern. (e) Phase map from WISHED reconstruction. The synthetic wavelength is 4.3mm. (f) Estimated OPD, where the glass layers are clearly visualized.

Fig. 8. OPD profile along one line on the plate stack as shown in Fig. 7(f). Note: each glass plate introduces an OPD of 0.5mm. The RMSE in optical depth for the measurement is 69µm.
Although the speckle pattern is present, we can still reconstruct the optical depth, which we can convert into a true physical depth map given knowledge of the index of refraction of the transparent object (in this case n = 1.5), as shown in Fig. 7. Each glass plate introduces an optical path difference of 0.5mm. The glass plates are clearly separated according to their thickness. A line profile is also plotted across these different glass plates, as shown in Fig. 8, demonstrating that we can still achieve a high depth resolution despite the presence of speckle. We quantify the RMSE for the optical depth along this line profile as 69µm.
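For transmissive objects, the measured OPD relates to physical thickness through the refractive index: for a plate in air, OPD = (n − 1)·t. A small worked sketch using the values stated above:

```python
# OPD introduced by a glass plate in air: OPD = (n - 1) * t.
n_glass = 1.5            # refractive index stated in the text
opd = 0.5e-3             # OPD per plate [m], from the text
t = opd / (n_glass - 1)  # physical plate thickness [m]
print(f"t = {t * 1e3:.1f} mm per plate")  # -> t = 1.0 mm
```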

In Fig. 6 and Fig. 8, we did not obtain a ground-truth measurement of the glass thickness (or OPD); rather, we assume a 'ground truth' following the manufacturing specifications: each glass plate introduces an OPD of 0.5mm. Therefore, in our assumed ground truth, the steps for the glass plates in Fig. 6 are 0mm, 0.5mm, 1mm, and 1.5mm; in Fig. 8, they are 0mm, 0.5mm, 1mm, 1.5mm, and 2mm. Our assumed ground truth may have small errors due to manufacturing tolerances.

... due to the refraction, and a diffuser (Thorlabs 220 Grit) is used to scatter light toward the detector. There are speckles in the sensed image, as shown in Fig. 9(b). Wavelengths of 854.31nm and 854.88nm are used, which corresponds to a synthetic wavelength of 1.29mm. The phase at the synthetic wavelength is shown in Fig. 9(c), and a phase unwrapping algorithm [11] is used to unwrap the phase and convert it to optical path difference, as shown in Fig. 9(d). A line profile across the optical depth map is shown in Fig. 9(e), which demonstrates a 61µm depth resolution.

Fig. 9. Experiment on a transparent object with complex geometry. (a) Experimental setup for imaging the glass prism. A diffuser is used for scattering the light.

5.2 Reflective based 3D imaging

Fig. 10. Experiment schematic for reflective 3D objects. The object is illuminated by the tunable laser, and the scattered light is collected by a focusing lens onto our wavefront sensor.
