Light Field Camera Design For Integral View Photography

Adobe Technical Report

Todor Georgiev and Chintan Intwala
Adobe Systems Incorporated, 345 Park Ave, San Jose, CA 95110
tgeorgie@adobe.com

Figure 1: Integral view of a seagull.

Abstract

This paper introduces the matrix formalism of optics as a useful approach to the area of "light fields". It is capable of reproducing old results in Integral Photography, as well as generating new ones. Furthermore, we point out the equivalence between radiance density in optical phase space and the light field. We also show that linear transforms in matrix optics are applicable to light field rendering, and we extend them to affine transforms, which are of special importance to designing integral view cameras. Our main goal is to provide solutions to the problem of capturing the 4D light field with a 2D image sensor. From this perspective we present a unified affine optics view of all existing integral / light field cameras. Using this framework, different camera designs can be produced. Three new cameras are proposed.

Table of Contents

Abstract
1. Introduction
1.1 Radiance and Phase Space
1.2 Structure of this paper
2. Linear and Affine Optics
2.1. Ray transfer matrices
2.2. Affine optics: Shifts and Prisms
3. Light field conservation
4. Building blocks of our optical system
4.1. "Camera"
4.2. "Eyepiece"
4.3. Combining eyepieces
5. The art of light field camera design
5.1. Integral view photography
5.2. Camera designs
6. Results from our light field cameras
Conclusion
References

1. Introduction

Linear (Gaussian) optics can be defined as the use of matrix methods from linear algebra in geometrical optics. Fundamentally, this area was developed (without the matrix notation) back in the 19th century by great minds like Gauss and Hamilton. Matrix methods became popular in optics during the 1950s, and are widely used today [1], [2]. In those old methods we recognize our new friend, the Light Field.

We show that a slight extension of the above ideas to what we call affine optics, and then a transfer into the area of computer graphics, produces new and very useful practical results. Applications of the theory to designing "integral" or "light field" cameras are demonstrated.

1.1 Radiance and Phase Space

The radiance density function (or "light field", as it is often called) describes all light rays in space, each ray defined by 4 coordinates [3]. We use a slightly modified version of the popular "2-plane parameterization", which describes each ray by its intersection point with a predefined plane and the two angles / directions of intersection. Thus, a ray will be represented by space coordinates $q_1, q_2$ and direction coordinates $p_1, p_2$, which together span the phase space of optics (see Figure 2). In other words, at a given transversal plane in our optical system, a ray is defined by a 4D vector, $(q_1, q_2, p_1, p_2)$, which we will call the light field vector.

Figure 2: A ray intersecting a plane perpendicular to the optical axis. Directions (angles) of intersection are defined as derivatives of $q_1$ and $q_2$ with respect to $t$.

These coordinates are very similar to the traditional $(s, t, u, v)$ coordinates used in the light field literature. Only, in our formalism a certain analogy with Hamiltonian mechanics is made explicit. Our variables $q$ and $p$ play the same role as the coordinate and momentum in Hamiltonian mechanics. In more detail, it can be shown that all admissible transformations of the light field preserve the so-called symplectic form

$$\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},$$

same as in the case of canonical transforms in mechanics [4]. In other words, the phase space of mechanics and "light field space" have the same symplectic structure. For the light field one can derive the volume conservation law (Liouville's theorem) and other invariants of mechanics [4]. This observation is new to the area of light fields. Transformations of the light field in an optical system play a role analogous to canonical transforms in mechanics.

1.2 Structure of this paper

Section 2 shows that: (1) a thin lens transforms the light field linearly, by the appropriate ray transfer matrix; (2) light traveling a certain distance in space is also described by a linear transformation (a shear), as first pointed out in a paper [5] by Durand et al.; (3) shifting a lens from the optical axis, or inserting a prism, is described by the same affine transform. This extends linear optics into what we call affine optics. These transformations will be central to future "light field" image processing, which is coming to replace traditional image processing.

Section 3 shows that the transformation of the light field in any optical device, like a telescope or microscope, has to preserve the integral of light field density. Any such transformation can be constructed as a product of only the above two types of matrices, and this is the most general linear transform for the light field.

Section 4 defines a set of optical devices based on the above three transforms. Those optical devices do everything possible in affine optics, and they will be used as building blocks for our integral view cameras. The idea is that since those building blocks are the most general, everything that is possible in optics could be done using only those simple blocks.

Section 5 describes the main goal of Integral View Photography, and introduces several camera designs from the perspective of our theory. Three of those designs are new. Section 6 shows some of our results.
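Before moving on, the correspondence mentioned in section 1.1 between the traditional two-plane $(s, t, u, v)$ coordinates and the position-angle coordinates $(q_1, q_2, p_1, p_2)$ is easy to make concrete. The following Python sketch is ours, not from the paper: it assumes the two parameterization planes are parallel and separated by a distance d, and it uses the paraxial convention where a direction is represented by the slope of the ray.

```python
import numpy as np

def two_plane_to_phase_space(s, t, u, v, d=1.0):
    """Convert a two-plane ray parameterization (s, t, u, v) to
    position-angle coordinates (q1, q2, p1, p2).

    Assumptions (ours, for illustration): (s, t) is the intersection
    with the first plane, (u, v) with a parallel plane a distance d
    away, and directions are represented by slopes (paraxial regime).
    """
    q1, q2 = s, t
    p1, p2 = (u - s) / d, (v - t) / d  # slope of the ray between the planes
    return np.array([q1, q2, p1, p2])

# Example: a ray hitting the first plane at (0.5, 0.0) and the second
# plane (d = 1) at (0.7, 0.0) has slope 0.2 in the q1 direction.
ray = two_plane_to_phase_space(0.5, 0.0, 0.7, 0.0)
print(ray)  # [0.5  0.   0.2  0. ]
```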

2. Linear and Affine Optics

This section introduces the simplest basic transforms of the light field. They may be viewed as the geometric primitives of image processing in light space, similar to rotate and resize in traditional imaging in the plane.

2.1. Ray transfer matrices

(1) Light field transformation by a lens. Just before the lens the light field vector is $(q, p)$. Just after the lens the light field vector is $(q', p')$. The lens doesn't shift the ray, so $q' = q$. Also, the transform is linear. The most general matrix representing this type of transform is

$$\begin{pmatrix} q' \\ p' \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ a & 1 \end{pmatrix} \begin{pmatrix} q \\ p \end{pmatrix}. \qquad (1)$$

Note: As a matter of notation, $a = -\frac{1}{f}$, where $f$ is called the focal length of the lens. A positive focal length produces a negative increment to the angle; see Figure 3.

Figure 3: A lens transform of the light field.

(2) Light field before and after traveling a distance $T$ ($T$ stands for "travel", as in [5], which first introduced this "shear" transform of the light field traveling in space). The linear transform is

$$\begin{pmatrix} q' \\ p' \end{pmatrix} = \begin{pmatrix} 1 & T \\ 0 & 1 \end{pmatrix} \begin{pmatrix} q \\ p \end{pmatrix}, \qquad (2)$$

where the bottom left matrix element $0$ specifies that there is no change in the angle $p$ when a light ray travels through space. Also, a positive angle $p$ produces a positive change in $q$, proportional to the distance traveled $T$.

Figure 4: Space transfer of light.

2.2. Affine optics: Shifts and Prisms

In this paper we need to slightly extend traditional linear optics into what we call affine optics. This is done by using additive elements in the optical system, together with the above matrices.

Our motivation is that all known light field cameras and related systems have some sort of lens array, where individual lenses are shifted from the main optical axis. This includes Integral Photography [6], the Hartmann-Shack sensor [7], Adelson's plenoptic camera [8], 3D TV systems [9], [10], the light field related work [3], and the camera of Ng [11]. We were not able to find our current theoretical approach anywhere in the literature.

One such "additive" element is the prism. By definition it tilts each ray by adding a fixed angle of deviation $\alpha$. Expressed in terms of the light field vector, the prism transform is

$$\begin{pmatrix} q' \\ p' \end{pmatrix} = \begin{pmatrix} q \\ p \end{pmatrix} + \begin{pmatrix} 0 \\ \alpha \end{pmatrix}. \qquad (3)$$

One interesting observation is that the same transform, in combination with lens refraction, can be achieved by simply shifting the lens from the optical axis. If the shift is $s$, formula (1) for lens refraction is modified as follows: convert to lens-centered coordinates by subtracting $s$, apply the linear lens transform, then convert back to the original coordinates by adding $s$:

$$\begin{pmatrix} q' \\ p' \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -\frac{1}{f} & 1 \end{pmatrix} \left[ \begin{pmatrix} q \\ p \end{pmatrix} - \begin{pmatrix} s \\ 0 \end{pmatrix} \right] + \begin{pmatrix} s \\ 0 \end{pmatrix}, \qquad (4)$$

which is simply

$$\begin{pmatrix} q' \\ p' \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -\frac{1}{f} & 1 \end{pmatrix} \begin{pmatrix} q \\ p \end{pmatrix} + \begin{pmatrix} 0 \\ \frac{s}{f} \end{pmatrix}. \qquad (5)$$

Final result: shifted lens = lens + prism. This idea will be used later in section 5.2.

Figure 5 illustrates the above result by showing how one can build a prism with variable angle of deviation $\alpha = \frac{s}{f}$ from two lenses of focal length $f$ and $-f$, shifted by a variable distance $s$ from one another.

Figure 5: A variable angle prism.
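These three primitives are simple enough to verify numerically. The sketch below is our own illustration, not code from the paper: it models each element as an affine map $(M, b)$ acting on the 2D light field vector, and checks that a lens shifted by $s$ acts exactly like a centered lens followed by a prism with $\alpha = s/f$, as in equation (5).

```python
import numpy as np

# An affine optical element is a pair (M, b): x -> M @ x + b,
# acting on the 2D light field vector x = (q, p).
def lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]]), np.zeros(2)

def travel(T):
    return np.array([[1.0, T], [0.0, 1.0]]), np.zeros(2)

def prism(alpha):
    return np.eye(2), np.array([0.0, alpha])

def shifted_lens(f, s):
    # Equation (4): recenter on the lens, refract, shift back.
    M, _ = lens(f)
    shift = np.array([s, 0.0])
    return M, shift - M @ shift   # b = s*e_q - M @ (s*e_q) = (0, s/f)

def apply(element, x):
    M, b = element
    return M @ x + b

f, s = 50.0, 2.0
x = np.array([1.5, 0.01])                       # an arbitrary test ray
left = apply(shifted_lens(f, s), x)             # lens shifted off-axis
right = apply(prism(s / f), apply(lens(f), x))  # centered lens + prism
assert np.allclose(left, right)                 # equation (5) holds
print(left, right)
```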

3. Light field conservation

The light field (radiance) density is constant along each ray. The integral of this density over any volume in 4D phase space (light field space) is preserved during the transformations in any optical device. This is a general fact that follows from the physics of refraction, and it has a nice formal representation in symplectic geometry (see [12]).

In our 2D representation of the light field this fact is equivalent to area conservation in $(q, p)$-space, which is shown next. Consider two rays, $(q_1, p_1)$ and $(q_2, p_2)$. After the transform in an optical system, the rays will be different. The signed area between those rays in light space (the space of rays) is defined by their cross product. In our matrix formalism the cross product expression for the area is

$$q_1 p_2 - q_2 p_1 = \begin{pmatrix} q_1 & p_1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} q_2 \\ p_2 \end{pmatrix}. \qquad (6)$$

After transformation in the optical device represented by matrix $M$, the area between the new rays will be

$$\begin{pmatrix} q_1 & p_1 \end{pmatrix} M^T \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} M \begin{pmatrix} q_2 \\ p_2 \end{pmatrix}, \qquad (7)$$

where $M^T$ is the transpose of $M$. The condition for expressions (6) and (7) to be equal for any pair of rays is

$$M^T \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} M = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}. \qquad (8)$$

This is the condition for area conservation. In the general case, a similar expression describes 4D volume conservation for the light field. The reader can check that (1) the matrix of a lens and (2) the matrix of a light ray traveling a distance $T$, both discussed above, satisfy this condition. Further, any optical system has to satisfy it, as a product of such transforms. It can be shown [12] that any linear transform that satisfies (8) can be written as a product of matrices of type (1) and (2).

The last step of this section is to make use of the fact that, since the light field density for each ray before and after the transform is the same, the sum of all those densities times the infinitesimal area for each pair of rays must be constant. In other words, the integral of the light field over a given area (volume) in light space is conserved during transforms in any optical device.
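The check suggested above is one line of linear algebra. Here is a minimal numpy sketch of ours (not from the paper) verifying condition (8) for the lens and travel matrices, and for their product:

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # the symplectic form of eq. (8)

def is_area_preserving(M):
    # Condition (8): M^T J M = J
    return np.allclose(M.T @ J @ M, J)

f, T = 50.0, 120.0
lens = np.array([[1.0, 0.0], [-1.0 / f, 1.0]])
travel = np.array([[1.0, T], [0.0, 1.0]])

print(is_area_preserving(lens))           # True
print(is_area_preserving(travel))         # True
print(is_area_preserving(travel @ lens))  # True: products also satisfy (8)
```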

4. Building blocks of our optical system

We are looking for simple building blocks for optical systems (light field cameras) that are as general as possible. In other words, they should be easy to understand in terms of the mathematical transformations that they perform, and at the same time they should be general enough that they do not exclude useful optical transforms.

According to the previous section, in the space of affine optical transforms everything can be achieved as products of the matrices of equations (1), (2) and prisms. However, those are not simple enough. That's why we define other building blocks as follows.

4.1. "Camera"

This is not the conventional camera, but is closely related to it by adding a field lens. With this lens the camera transform becomes simple:

$$M = \begin{pmatrix} m & 0 \\ 0 & \frac{1}{m} \end{pmatrix}. \qquad (9)$$

First, light travels a distance $a$ from the object to the objective lens. This is described by a transfer matrix $M_a$. Then it is refracted by the objective lens of focal length $f$, represented by transfer matrix $M_f$. In the end it travels to the image plane a distance $b$, represented by $M_b$. The full transform, found by multiplication of those three matrices, is

$$M_b M_f M_a = \begin{pmatrix} 1 - \frac{b}{f} & a + b - \frac{ab}{f} \\ -\frac{1}{f} & 1 - \frac{a}{f} \end{pmatrix}. \qquad (10)$$

The condition for focusing on the image plane is that the top right element of this matrix is $0$, which is equivalent to the familiar lens equation

$$\frac{1}{a} + \frac{1}{b} = \frac{1}{f}. \qquad (11)$$

Using (11), our camera transfer matrix can be converted into a simpler form:

$$\begin{pmatrix} -\frac{b}{a} & 0 \\ -\frac{1}{f} & -\frac{a}{b} \end{pmatrix}. \qquad (12)$$

We also make the bottom left element $0$ by inserting a so-called "field lens" (of focal length $F = \frac{bf}{a}$) just before the image plane:

$$\begin{pmatrix} 1 & 0 \\ -\frac{1}{F} & 1 \end{pmatrix} \begin{pmatrix} -\frac{b}{a} & 0 \\ -\frac{1}{f} & -\frac{a}{b} \end{pmatrix} = \begin{pmatrix} -\frac{b}{a} & 0 \\ 0 & -\frac{a}{b} \end{pmatrix}. \qquad (13)$$

This matrix is diagonal, which is the simple final form we wanted to achieve. It obviously satisfies our area conservation condition, which the reader can easily verify. The parameter $m = -\frac{b}{a}$ is called "magnification" and is a negative number. (Cameras produce inverted images.)

4.2. "Eyepiece"

This element has been used as an eyepiece (ocular) in optics; that's why we give it the name. It is made up of two space translations and a lens. First, light rays travel a distance $f$, then they are refracted by a lens of focal length $f$, and in the end they travel a distance $f$. The result is

$$\begin{pmatrix} 1 & f \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\frac{1}{f} & 1 \end{pmatrix} \begin{pmatrix} 1 & f \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & f \\ -\frac{1}{f} & 0 \end{pmatrix}. \qquad (14)$$

This is an "inverse diagonal" matrix, which satisfies area conservation. It will be used in section 5.2 for switching between $q$ and $p$ in a light field camera.

4.3. Combining eyepieces

Inversion: Two eyepieces together produce a "camera" with magnification $-1$:

$$\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}. \qquad (15)$$

An eyepiece before and after a variable space $T$, followed by inversion, produces a lens of variable focal length $F = \frac{f^2}{T}$:

$$\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} 0 & f \\ -\frac{1}{f} & 0 \end{pmatrix} \begin{pmatrix} 1 & T \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & f \\ -\frac{1}{f} & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -\frac{T}{f^2} & 1 \end{pmatrix}. \qquad (16)$$

By symmetry, the same combination of eyepieces with a lens produces a space translation $T = \frac{f^2}{F}$ without using up real space! Devices corresponding to the above matrices (9), (14), (15), (16), together with shifts and prisms, are the elements that can be used as "building blocks" for our light field cameras.

Those operators are also useful as primitives for future optical image processing in software. They are the building blocks of the main transforms, corresponding to geometric transforms like Resize and Rotate in current image processing.
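As a sanity check of the "camera" and "eyepiece" blocks, here is a short sketch of ours (not from the paper) composing the transfer matrices with concrete, illustrative numbers: with the field lens inserted, the camera matrix comes out diagonal with $m = -b/a$ as in (13), and the eyepiece comes out antidiagonal as in (14).

```python
import numpy as np

def lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def travel(T):
    return np.array([[1.0, T], [0.0, 1.0]])

# "Camera" (section 4.1): object distance a, focal length f,
# image distance b from the lens equation 1/a + 1/b = 1/f.
a, f = 300.0, 50.0
b = 1.0 / (1.0 / f - 1.0 / a)
F = b * f / a                                  # field lens focal length, eq. (13)
camera = lens(F) @ travel(b) @ lens(f) @ travel(a)
print(np.round(camera, 6))   # diagonal: [[-b/a, 0], [0, -a/b]], m = -b/a

# "Eyepiece" (section 4.2): travel f, lens f, travel f.
eyepiece = travel(f) @ lens(f) @ travel(f)
print(np.round(eyepiece, 6))  # antidiagonal: [[0, f], [-1/f, 0]]
```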

5. The art of light field camera design

5.1. Integral view photography

We define Integral View Photography as a generalization from several related areas of research. These include Integral Photography [6], [9] and related work, Adelson's "plenoptic" camera [8], a number of 3D TV systems ([10] and others), and the "light field" camera of Ng et al. [11].

In our approach we see conventional cameras as "integration devices", which integrate the optical field over all points on the aperture into the final image. This is already an integral view camera. It achieves effects like refocusing onto different planes and changing depth of field, commonly used by photographers. The idea of Integral View Photography is to capture some representation of that same optical field and be able to integrate it afterwards, in software. In this way the captured "light field", "plenoptic" or "holographic" image potentially contains the full optical information, and much greater flexibility can be achieved:

(1) Integration is done in software, and not mechanically.

(2) Instead of fixing all parameters in real time, the photographer can relax while taking the picture and defer focusing, and integration in general, to post-processing in the dark room. Currently only color and lightness are done as post-processing (in Aperture and Lightroom).

(3) Different methods of integrating the views can be applied, or mixed together, to achieve much more than what is possible with a conventional camera. Examples include focusing on a surface, "all in focus", and others.

(4) More power is gained in image processing because we now have access to the full 3D information about the scene. Difficult tasks like refocusing become amazingly easy. We expect tasks like deblurring, object extraction, painting on 3D surfaces, relighting, and many others to become much easier, too.

5.2. Camera designs

We are given the 4D light field (radiance density function), and we want to sample it into a discrete representation with a 2D image sensor. The approach taken is to represent this 4D density as a 2D array of images. Different perspectives on the problem are possible, but for the current paper we choose to discuss it in the following framework. Traditional integral photography uses an array of cameras focused on the same plane, so that each point on that plane is imaged as one pixel in each camera. These pixels represent different rays passing at different angles through that same point. In this way the angular dimensions are sampled. Of course, the image itself samples the space dimensions, so we have a 2D array of 2D arrays.

Figure 6: An array of cameras used in integral photography for capturing the light field.

The idea of compact light field camera design is to put all the optics and electronics into one single device. We want to make different parts of the main camera lens active separately, in the sense that their input is registered independently (but on the same sensor!), as if coming from different cameras. This makes the design compact and cheap to manufacture.

First design: Consider formula (5). With this in mind, the setup of Figure 6, in which each lens is shifted from the optical axis, would be equivalent to adding prisms to a single main lens; see Figure 7. This optical device would be cheaper to manufacture because it is made up of one lens and multiple prisms, instead of multiple lenses. Also, it is more convenient for the photographer to use the common controls of one single lens, while effectively working with a big array of lenses.

Figure 7: Array of prisms design.

We believe this design is new. Based on what we call affine optics (formula (5)), it can be considered a reformulation of the traditional "multiple cameras" design of integral photography.
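To make this design concrete: by equation (5), a sub-aperture of the main lens centered at shift $s$ behaves like the centered lens followed by a prism of deviation $\alpha = s/f$. A short sketch of ours computing the required prism angles for such an array (the array geometry and numbers are illustrative assumptions, not from the paper):

```python
import numpy as np

# First design (Figure 7): one main lens of focal length f, with an
# array of prisms covering the aperture. By eq. (5), the prism placed
# over the sub-aperture centered at shift s must have deviation
# alpha = s / f to mimic a whole lens shifted by s.
f = 50.0                  # main lens focal length (illustrative)
n = 5                     # 1D cross-section of the array (illustrative)
pitch = 8.0               # sub-aperture spacing, same units as f

centers = (np.arange(n) - (n - 1) / 2) * pitch   # sub-aperture centers
for s in centers:
    alpha = s / f         # required prism angle of deviation, eq. (5)
    print(f"sub-aperture at s = {s:+6.1f}  ->  alpha = {alpha:+.3f} rad")
```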

In traditional cameras all rays from a far away point are focused into the same single point on the sensor. This is represented in Figure 7, where all rays coming from the lens are focused into one point. We want to split rays coming from different areas of the main lens. This is equivalent to a simple change of angle, so it can be done with prisms of different angles of deviation placed next to the main lens, at the aperture.

Second design: This approach was invented by Adelson and Wang [8], and recently used by Ng et al. [11]. We would like to propose an interesting interpretation of their design. It is a traditional camera, where each pixel is replaced by an eyepiece (E), with an antidiagonal matrix of type (14), and a sensor (CCD matrix) behind it. The role of the eyepiece is to switch between coordinate and momentum (position and direction) in optical phase space (the light field). As a result, different directions of rays at a given eyepiece are recorded as different pixels on the sensor of that eyepiece. Rays coming from each area of the main lens go into different pixels at a given eyepiece. See Figure 8, where we have dropped the field lens for clarity (but it should be there, at the focal plane of the main camera lens, for the theory to be exact). In other words, this is the optical device "camera" of section 4.1, followed by an array of eyepieces (section 4.2).

Figure 8: Array of eyepieces generating multiple views.

Figure 9 shows two sets of rays and their path in the simplified version of the system, without the field lens.

Figure 9: More detail about Figure 8.

Our next step will be to generalize designs (1) and (2) by building cameras equivalent to them in the optical sense. Using formula (5), we can replace the array of prisms with an array of lenses; see Figure 10. We get the same shift up in angle as with prisms. The total inverse focal length will be the sum of the inverse focal lengths of the main lens and the individual lenses.

Figure 10: Lenses instead of prisms in Figure 7.

A very interesting approach would be to make the array of lenses or prisms external to the camera. With positive lenses we get an array of real images, which are captured by a camera focused on them; see Figure 11.

Figure 11: Multiple lenses creating real images.
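The role of the eyepiece in the second design can be read off directly from the matrices: the "camera" block (13) maps the object plane to the eyepiece plane, and the antidiagonal eyepiece matrix (14) then exchanges $q$ and $p$, so a ray's direction becomes a position on the sensor. A small sketch of ours illustrating this (the numbers are assumptions; the field lens is included so the camera matrix is exactly diagonal):

```python
import numpy as np

def lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def travel(T):
    return np.array([[1.0, T], [0.0, 1.0]])

# Main camera with field lens (section 4.1), illustrative numbers.
a, f_main = 1000.0, 50.0
b = 1.0 / (1.0 / f_main - 1.0 / a)          # image distance, eq. (11)
camera = lens(b * f_main / a) @ travel(b) @ lens(f_main) @ travel(a)

f_e = 2.0                                    # eyepiece focal length
eyepiece = travel(f_e) @ lens(f_e) @ travel(f_e)   # eq. (14), antidiagonal

# Two rays leaving the same object point q = 0 in different directions,
# i.e., passing through different areas of the main lens aperture:
for p in (0.01, -0.02):
    at_eyepiece = camera @ np.array([0.0, p])  # same q, different p
    on_sensor = eyepiece @ at_eyepiece         # q and p swapped (up to scale)
    print(p, "->", on_sensor[0])               # distinct sensor positions
```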

With negative lenses we get virtual images on the other side of the main lens, and the camera has to be focused on those virtual images.
