Refraction Geometrical Optics - SPIE


Fundamental Optics - Refraction

Geometrical optics uses rays to represent light propagation through an optical system. According to Fermat's principle, rays will choose a path that is stationary with respect to variations in path length, typically a time minimum. The quickest path may result in a change of direction at the interface between two refractive media separated by a refracting surface.

The change of direction is described by Snell's law,

    n′ sin I′ = n sin I        (1)

n and n′ are the refractive indices of the first and second refractive media. The refractive index describes the speed of light in vacuum relative to the medium. I is the angle of incidence, and I′ is the angle of refraction. Both are measured relative to the surface normal. For rotationally symmetric systems, the optical axis (OA) is the axis of symmetry taken along the z direction.

Image formation requires a spherical refracting surface so that an incident ray travelling parallel to the OA can be bent towards the OA. However, spherical surfaces give rise to aberrations and associated image defects. A conceptually simple example is spherical aberration, where rays originating from a unique object point do not converge to form a unique image point at the OA.
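
As a quick numerical check of equation (1), the short Python sketch below (the function name and example values are mine, not from the Field Guide) solves Snell's law for the refraction angle and flags total internal reflection:

```python
import math

def refract_angle(n1, n2, incidence_deg):
    """Return the refraction angle in degrees from Snell's law: n1 sin I = n2 sin I'."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1.0:
        return None  # total internal reflection: no refracted ray exists
    return math.degrees(math.asin(s))

# Air-to-glass example: n = 1.0, n' = 1.5, I = 30 deg gives I' of roughly 19.5 deg.
print(refract_angle(1.0, 1.5, 30.0))
```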

Fundamental Optics - Gaussian Optics

Gaussian optics treats each surface as free of aberrations by performing a linear extension of the paraxial region to arbitrary heights above the OA.

[Figure: refraction at a spherical surface of radius R and centre of curvature C, showing ray height y, angles i and i′, slopes u and u′, and conjugate distances s and s′ in media of index n and n′.]

Since tan u = u holds exactly in the paraxial region, each refracting surface is projected onto a flat tangent plane normal to the OA, and the surface power Φs is retained. The angles i and i′ are now equivalent to u and u′, where

    u = tan u = y/s ;    u′ = tan u′ = y/s′

u and u′ are interpreted as ray slopes instead of angles. Substituting u and u′ into the Gaussian conjugate equation for a spherical surface yields a new form of Snell's law,

    n′u′ = nu − yΦs

Paraxial imaging is now valid at arbitrary heights h, h′.

[Figure: object plane OP and image plane IP with ray heights h and h′, ray slopes u and u′, and conjugate distances s and s′ in media of index n and n′.]

OP is the object plane, and IP is the image plane.
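
The paraxial refraction equation can be exercised directly. A minimal sketch, assuming a single surface of power Φs = 0.01 mm⁻¹ and illustrative indices (the values and names are mine, not from the Field Guide):

```python
def refract_paraxial(n, n_prime, u, y, phi):
    """Paraxial refraction: n' u' = n u - y * phi, where phi is the surface power."""
    return (n * u - y * phi) / n_prime

# A ray parallel to the OA (u = 0) at height y = 1 mm meets a surface of power
# phi = 0.01 mm^-1 with image-space index n' = 1.5; it crosses the OA at
# n'/phi = 150 mm, the rear focal distance of the surface.
y, phi, n, n_prime = 1.0, 0.01, 1.0, 1.5
u_prime = refract_paraxial(n, n_prime, 0.0, y, phi)
print(u_prime, -y / u_prime)   # slope after refraction, distance to the axis crossing
```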

Fundamental Optics - Aberrations

When rays are traced using exact trigonometric equations based on Snell's law (p. 1), aberrations arise from the higher-order terms in the expansion of the sine function:

    sin u = u − u³/3! + u⁵/5! − u⁷/7! + ···

Primary (Seidel) aberrations arise from the third-order term:

• Spherical aberration (SA) is caused by variation of focus with ray height in the aperture (p. 26). A converging element has under-corrected SA, so rays come to a focus closer to the lens as the ray height increases.

• Coma is caused by variation of magnification with ray height in the aperture, so that rays from a given object point will be brought to a focus at different heights on the IP. The image of a point is spread into a non-symmetric shape that resembles a comet. Coma is absent at the OA but increases with radial distance outwards.

• Astigmatism arises because rays from an off-axis object point are presented with a tilted rather than symmetric aperture. A point is imaged as two small perpendicular lines. Astigmatism improves as the aperture is reduced.

• Petzval field curvature describes the fact that the IP corresponding to a flat OP is itself not naturally flat. Positive and negative elements introduce inward and outward curvature, respectively. Field curvature increases with radial distance outwards.

• Distortion describes the variation of magnification with radial height on the IP. Positive or pincushion distortion will pull out the corners of a rectangular image crop to a greater extent than the sides, whereas barrel distortion will push them inwards. Distortion does not introduce blur and is unaffected by aperture.

• Chromatic aberration and lateral color appear in polychromatic light due to variation of focus with wavelength λ and off-axis variation of magnification m with λ, respectively.

Aberrations affect image quality and can be measured as deviations from the ideal Gaussian imaging properties. Modern compound lenses are well corrected for primary aberrations.
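
To see how quickly the neglected terms in the expansion above grow, the sketch below (the angles are chosen arbitrarily for illustration) compares the exact sine with the paraxial term u and with the sum of the first two terms:

```python
import math

def sin_third_order(u):
    """Paraxial term plus the third-order (Seidel) term of the sine expansion."""
    return u - u**3 / math.factorial(3)

# Columns: angle in degrees, exact sin u, paraxial approximation u, third-order sum.
for deg in (5, 15, 30):
    u = math.radians(deg)
    print(deg, math.sin(u), u, sin_third_order(u))
```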

Focusing - Autofocus

Phase-detect autofocus (PDAF) systems used in digital SLR cameras have evolved from the 1985 Minolta Maxxum design. The reflex mirror has a zone of partial transmission, and the transmitted light is directed by a secondary mirror down to an autofocus (AF) module located at an optically equivalent SP (p. 14). The module contains microlenses that direct the light onto a CCD strip.

Consider light passing through the lens from a small region on the OP indicated by an AF point on the focusing screen. When the OP is in focus at the SP, the light distribution arriving from equivalent portions of two halves of the XP will produce an identical optical image and signal along each half of the CCD strip.

[Figure: light from two halves of the XP passes through the equivalent SP and the microlenses onto the CCD strip, producing the AF signal.]

When the OP is not in sharp focus (i.e., out of focus), these images will shift either toward or away from each other, and this shift indicates the direction and amount by which the lens needs to be adjusted by its AF motor.

A horizontal CCD strip is suited for analyzing signals with scene detail present in the horizontal direction, such as the change of contrast provided by a vertical edge. A cross-type AF point utilizes both a horizontal and a vertical CCD strip so that scene detail in both directions can be utilized to achieve focus.
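
The shift between the two signals can be illustrated with a toy cross-correlation estimate. This is only a sketch of the principle, not the algorithm used in any AF module; the signal shapes and names are assumptions:

```python
import numpy as np

def pdaf_shift(left, right):
    """Estimate the relative shift (in samples) between two 1D AF signals
    by locating the peak of their cross-correlation."""
    left = left - left.mean()
    right = right - right.mean()
    corr = np.correlate(left, right, mode="full")
    return int(np.argmax(corr)) - (len(right) - 1)

# Two copies of the same edge profile, one displaced by 4 samples,
# standing in for the signals from the two halves of the XP.
x = np.arange(64, dtype=float)
a = 1.0 / (1.0 + np.exp(-(x - 36.0)))   # edge displaced by +4 samples
b = 1.0 / (1.0 + np.exp(-(x - 32.0)))
print(pdaf_shift(a, b))                  # reports a shift of 4 samples
```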

Exposure - f-Number

[Figure: object plane OP, entrance pupil EP of diameter D, principal planes H and H′, exit pupil XP, and sensor plane SP for a lens focused at infinity (s′ = f′).]

When focus is set at infinity, s′ = f′. The ray slope is then

    u′ = −D/(2f′)

u′ can be substituted into the formula for the RA (p. 29). The refractive indices can be removed using n′/n = f′/f. The illuminance E at the axial position on the SP becomes

    E = (π/4) L T (1/N)² ,    where N = f/D

The f-number N depends on the front effective focal length f and the EP diameter D.

The f-number is the reciprocal of the RA when focus is set at infinity. It is usually marked on lens barrels using the symbols f/N or 1:N, where N is the f-number.

Beyond Gaussian optics, the f-number can be written

    N = n/(2 NA′) ,    where NA′ = n′ sin U′

NA′ is the image-space numerical aperture when s → ∞, and U′ is the real image-space marginal ray angle.

Provided the lens is aplanatic (free from SA and coma), then sin U′ = u′ when s → ∞, according to Abbe's sine condition, so the Gaussian expression N = f/D is exact for an aplanatic lens. However, the sine function restricts the lowest achievable value in air (n = n′ = 1) to N₀ = 0.5.
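
A minimal numerical sketch of the two Gaussian relations above, N = f/D and E = (π/4) L T / N², with illustrative values that are not from the Field Guide:

```python
import math

def f_number(f, D):
    """Gaussian f-number N = f / D (front effective focal length over EP diameter)."""
    return f / D

def axial_illuminance(L, T, N):
    """Axial SP illuminance for focus at infinity: E = (pi/4) * L * T / N**2."""
    return math.pi / 4.0 * L * T / N**2

# A 50 mm lens with a 25 mm entrance pupil gives N = 2; stopping down to N = 4
# reduces the axial illuminance by a factor of 4, i.e. two stops.
N = f_number(50.0, 25.0)
ratio = axial_illuminance(L=1000.0, T=0.9, N=2.0) / axial_illuminance(L=1000.0, T=0.9, N=4.0)
print(N, ratio)
```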

Raw Data - Color Filter Arrays

The HVS is sensitive to wavelengths between 380–780 nm. The eye contains three types of cone cells with photon absorption properties described by a set of eye cone response functions l̄(λ), m̄(λ), and s̄(λ). These different responses lead to the visual sensation of color (p. 53).

[Figure: Bayer and Fuji X-Trans CFA block patterns.] A larger number of green filters are used since the HVS is more sensitive to λ in the green region of the visible spectrum.

A camera requires an analogous set of response functions to detect color. In consumer cameras, a color filter array (CFA) is fitted above the imaging sensor.

The Bayer CFA uses a 2 × 2 block pattern of red, green, and blue filters that form three types of mosaic. The filters have different spectral transmission properties described by a set of spectral transmission functions T_CFA,i(λ), where i is the mosaic label. The overall camera response is determined largely by the product of T_CFA,i(λ) and the charge collection efficiency η(λ) (pp. 37, 40).

The Fuji X-Trans CFA uses a 6 × 6 block pattern. It requires greater processing power and is more expensive but can give improved image quality.

• An infrared-blocking filter is combined with the CFA to limit the response outside of the visible spectrum.

• The spectral passband of the camera describes the range of wavelengths over which it responds.

• In order to record color correctly, a linear transformation should in principle exist between the camera response and eye cone response functions.

• After raw data capture, only the digital value of one color component is known at each photosite. Color demosaicing estimates the missing values so that all color components are known at every photosite (see the sketch below).
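
The sketch below builds a toy RGGB Bayer mosaic and bilinearly interpolates the green channel, purely to illustrate what demosaicing has to estimate; real raw converters use far more sophisticated algorithms, and all names here are mine:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGGB Bayer mosaic from a full-color image of shape (H, W, 3)."""
    h, w, _ = rgb.shape
    raw = np.zeros((h, w))
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red photosites
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green photosites (red rows)
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green photosites (blue rows)
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue photosites
    return raw

def demosaic_green(raw):
    """Bilinear estimate of the green channel at every photosite (edges wrap)."""
    h, w = raw.shape
    g_mask = np.zeros((h, w), dtype=bool)
    g_mask[0::2, 1::2] = True
    g_mask[1::2, 0::2] = True
    green = np.where(g_mask, raw, 0.0)
    total = np.zeros_like(green)
    count = np.zeros_like(green)
    # Average the nearest green neighbours at red and blue photosites.
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        total += np.roll(np.roll(green, dy, axis=0), dx, axis=1)
        count += np.roll(np.roll(g_mask, dy, axis=0), dx, axis=1)
    estimate = total / np.maximum(count, 1)
    return np.where(g_mask, raw, estimate)

rgb = np.random.rand(8, 8, 3)     # stand-in for a full-color scene
raw = bayer_mosaic(rgb)
green_full = demosaic_green(raw)  # green values now estimated at every photosite
```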

Color - Color Theory

Color vision is a perceived physiological sensation produced by electromagnetic waves with wavelengths in the visible region, which ranges from approximately 380–780 nm.

[Figure: visible spectrum with a wavelength scale from 400 nm to 750 nm.]

Color can be described by its luminance and chromaticity. Chromaticity can be subdivided into hue and saturation components. Pure spectrum colors (rainbow colors) are fully saturated. Their hues can be divided into six main regions. Each region contains many hues, and the transitions between regions are smooth.

Colors that are not pure spectrum colors have been diluted with white light and are not fully saturated. Pink is obtained by mixing a red hue with white. Greyscale or achromatic colors are fully desaturated.

Light is generally polychromatic, i.e., a mixture of various wavelengths described by its spectral power distribution (SPD) P(λ). This can be any function of power at each λ, for example, spectral radiance (p. 39). Metamers are different SPDs that appear to be the same color under the same viewing conditions. The principle of metamerism is fundamental to color theory.

Color - CIE RGB Color Space

In 1931, before the eye cone response functions were known, the CIE (Commission internationale de l'éclairage) analyzed results from color-matching experiments that used a set of three real monochromatic primaries:

    λR = 700 nm;    λG = 546.1 nm;    λB = 435.8 nm

Using Grassmann's laws, human observers were asked to visually match the color of a monochromatic light source at each wavelength λ by mixing amounts of the primaries. From the results, the CIE defined a set of color-matching functions (CMFs) denoted r̄(λ), ḡ(λ), b̄(λ).

[Figure: CMF curves r̄(λ), ḡ(λ), b̄(λ) plotted from 380 nm to 780 nm, with values ranging from about −0.1 to 0.4.]

Since the primaries are necessarily real, negative values occur. This physically corresponds to matches that required a primary to be mixed with the source light instead of the other primaries. The CMFs are normalized such that the area under each curve is the same; adding a unit of each primary matches the color of a unit of a reference white that was chosen to be CIE illuminant E. This hypothetical illuminant has constant power at all wavelengths.

Adding equal photometric or radiometric amounts of the primaries does not yield the reference white; the actual luminance and radiance ratios between a unit of each primary are 1 : 4.5907 : 0.0601 and 72.0966 : 1.3791 : 1.

CIE RGB tristimulus values can be defined, and each R, G, B triple defines a color in the CIE RGB color space. This is again a reference color space. The CMFs and l̄(λ), m̄(λ), s̄(λ) are linearly related (p. 53). A different set of primaries would give a different set of CMFs obtainable via a linear transformation.
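
Tristimulus values follow by integrating the SPD against the CMFs, e.g. R = ∫ P(λ) r̄(λ) dλ. A minimal sketch, assuming the caller supplies tabulated CMF arrays; the placeholder curve used in the usage example is not real colorimetric data:

```python
import numpy as np

def tristimulus(wavelengths, spd, rbar, gbar, bbar):
    """CIE RGB tristimulus values as integrals of the SPD against the CMFs."""
    R = np.trapz(spd * rbar, wavelengths)
    G = np.trapz(spd * gbar, wavelengths)
    B = np.trapz(spd * bbar, wavelengths)
    return R, G, B

# Toy usage with a flat SPD (illuminant E) and a placeholder CMF shape.
# With genuine area-normalized CMFs, a flat SPD gives R = G = B (the reference white).
wl = np.arange(380.0, 781.0, 5.0)
flat = np.ones_like(wl)
toy_cmf = np.exp(-0.5 * ((wl - 550.0) / 50.0) ** 2)   # placeholder, not a real CMF
print(tristimulus(wl, flat, toy_cmf, toy_cmf, toy_cmf))
```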

Color - White Balance: Matrix Algebra

The raw to sRGB transformation (p. 61) for scene illumination with a D65 white point can be rewritten:

    [R_L, G_L, B_L]ᵀ = M_R D_D65 [R, G, B]ᵀ ,    where M_R D_D65 = M⁻¹_sRGB C_D65

M_R is a color rotation matrix (each row sums to 1):

    M_R = M⁻¹_sRGB C_D65 D⁻¹_D65

D_D65 is a diagonal white balance matrix of raw WB multipliers for D65 scene illumination:

    D_D65 = diag(1/R_WP, 1/G_WP, 1/B_WP)_D65

1/G_WP = 1 provided C has been appropriately normalized (p. 59). Typically, 1/R_WP > 1 and 1/B_WP > 1.

For scene illumination with a different white point, D_D65 can be replaced by a matrix suitable for the adopted white:

    [R_L, G_L, B_L]ᵀ = M_R D_AW [R, G, B]ᵀ ,    where D_AW = diag(1/R_WP, 1/G_WP, 1/B_WP)_AW

Better accuracy can be achieved using M_R derived from a characterization performed with an illuminant CCT closely matching that of the AW. Camera manufacturers typically use a set of rotation matrices, each optimized for use with an associated WB preset. Two presets from the Olympus E-M1 raw metadata are tabulated below (divide the multipliers by 256) [the corresponding M_R entries are not legible in this transcription].

    CCT       Scene       Multipliers
    3000 K    Tungsten    296, 256, 760
    6000 K    Cloudy      544, 256, 396
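
A minimal sketch of the matrix pipeline above, using the tabulated Tungsten multipliers and an identity placeholder for M_R (the real rotation matrix entries are camera specific and are not reproduced here):

```python
import numpy as np

# Illustrative values only: WB multipliers from the Tungsten preset (divided by 256);
# M_R is a placeholder identity matrix, not the camera's rotation matrix.
wb = np.diag(np.array([296.0, 256.0, 760.0]) / 256.0)
M_R = np.eye(3)

def raw_to_linear_rgb(raw_rgb, M_R, wb):
    """Apply the diagonal WB matrix, then the rotation matrix: RGB_L = M_R D raw."""
    return M_R @ wb @ raw_rgb

# A raw neutral patch recorded under the preset illuminant has channel values in the
# ratio (R_WP, G_WP, B_WP); after white balancing it maps to equal channel values.
neutral_raw = np.array([256.0 / 296.0, 1.0, 256.0 / 760.0])
print(raw_to_linear_rgb(neutral_raw, M_R, wb))   # approximately [1, 1, 1]
```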

Digital Images - Image Display Resolution

The pixel count of a digital image is the total number of pixels. It is often expressed as n = n_h × n_v, where n_h and n_v are the horizontal and vertical pixel counts. The image display resolution is the number of displayed image pixels per unit distance, e.g., pixels per inch (ppi). It is a property of the display device/medium:

1) For a monitor/screen, the image display resolution is defined by the screen resolution, which typically has a value such as 72 ppi, 96 ppi, etc.

2) For a hard-copy print, the image display resolution can be chosen by the user; 300 ppi is considered enough for high-quality prints viewed at D_v.

Note that image display resolution is independent of the printer resolution, which is the number of ink dots per unit distance used by a printer, e.g., dots per inch (dpi). For a given printer technology, a larger dpi can yield a better-quality print. The image display size is defined as

    image display size = pixel count / image display resolution

The print size is the image display size for a print.

Example: The image display size for a digital image viewed on a 72-ppi monitor will be 300/72 ≈ 4.17 times larger than that of a 300-ppi print of the same image.

Images cannot be "saved" at a specific ppi since image display resolution is a property of the display, but a tag can be added to the image metadata indicating a desired ppi for the printer software. This tag does not affect the image pixel count, and the value can be overridden.

In order to match a desired image display size for a given image display resolution (or vice versa), the pixel count needs to be changed through image resampling. Printer software will automatically perform the required resampling when printing an image. Notably, this does not change the pixel count of the stored image file. Image editing software can also be used to resample an image. In contrast to printing, the resampled image needs to be resaved, which will alter its pixel count.
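
The display-size relation is easy to verify numerically; the pixel counts below are illustrative rather than taken from the Field Guide:

```python
def display_size_inches(pixel_count_1d, ppi):
    """Image display size along one dimension: pixels divided by pixels per inch."""
    return pixel_count_1d / ppi

# A 6000-pixel-wide image is 20 inches wide at 300 ppi but about 83.3 inches wide
# on a 72-ppi monitor, i.e. 300/72, roughly 4.17 times larger.
print(display_size_inches(6000, 300), display_size_inches(6000, 72))
```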

Standard Exposure Strategy - Exposure Value

The reflected and incident-light metering equations (pp. 75, 82) can be rewritten as the APEX equation (additive system of photographic exposure), designed to simplify manual calculations:

    Ev = Av + Tv = Bv + Sv = Iv + Sv

• Ev is the exposure value (not to be confused with the illuminance E_v).
• Av = log₂ N² is the aperture value.
• Tv = −log₂ t is the time value.
• Bv = log₂(⟨L⟩/(0.3 K)) is the brightness value.
• Iv = log₂(E/(0.3 C)) is the incident-light value.
• Sv = log₂(S/3.125) is the speed value.

These are associated with specific N, t, ⟨L⟩, S. For example,

    N     0.5   0.7   1    1.4   2     2.8   4     5.6    etc.
    Av    -2    -1    0    1     2     3     4     5

    t     4     2     1    1/2   1/4   1/8   1/16  1/32   etc.
    Tv    -2    -1    0    1     2     3     4     5

Bv and Iv depend upon the choice of reflected-light and incident-light meter calibration constants K and C.

Suitable exposure settings for a typical scene are provided by any combination of N, t, S that yields the recommended Ev. A difference of 1 Ev defines a photographic stop (pp. 34, 45). An f-stop specifically relates to a change of Av.

Exposure compensation (EC, p. 81) can be included in the APEX equation by modifying the brightness value:

    Bv → Bv − EC

Positive EC compensates when Bv is too high by reducing Ev and therefore increasing ⟨H⟩. Negative EC compensates when Bv is too low by raising Ev and therefore decreasing ⟨H⟩.

The APEX equation is valid provided the metering is based on average photometry. It is not valid when using in-camera matrix/pattern metering modes (p. 82).
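
A minimal sketch of the APEX bookkeeping for Av and Tv; note that nominal markings such as f/2.8 and 1/125 s are rounded, so the computed values differ slightly from the whole-number table entries:

```python
import math

def exposure_value(N, t):
    """Ev = Av + Tv = log2(N^2) - log2(t)."""
    return math.log2(N**2) - math.log2(t)

def shutter_time(Ev, N):
    """Solve Ev = Av + Tv for the shutter time t at a given f-number N."""
    return N**2 / 2**Ev

# f/2.8 at 1/125 s: nominal values give Ev = 3 + 7 = 10 (exactly 9.94 here).
Ev = exposure_value(2.8, 1 / 125)
print(Ev)
# The same Ev at f/5.6 needs a shutter time four times longer, about 1/30 s.
print(shutter_time(Ev, 5.6))
```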

Practical Exposure Strategy - High Dynamic Range: Example

Due to back lighting, negative EC was required to preserve the highlights in the sun. This rendered the shadows much too dark. The shutter speed was t = 1/2500 s.

A frame taken at +2 Ev using a shutter speed t = 1/640 s reveals more shadow detail but clips the highlights.

A frame taken at +4 Ev using a shutter speed t = 1/160 s completely clips the highlights in order to reveal the full shadow detail.

Merging the three raw files into a linear HDR image and then applying a local TMO yields an output image with both highlight and shadow detail visible.

Practical Exposure Strategy - Polarizing Filters: Practice

The utility of the polarizing filter is that the ratio between unpolarized light and any partially or completely plane polarized light entering the lens can be altered. Dielectric surfaces from which partially or completely plane polarized light emerges include glass, wood, leaves, paint, and water. Rotating the filter plane of transmission to eliminate plane polarized light emerging from glass or water can eliminate the observed reflections. Eliminating reflections from leaves can reveal their true color.

Light arriving from clouds will in general be unpolarized due to repeated diffuse reflections (p. 99) and will be unaffected by a polarizing filter. Light arriving from a blue sky will in general be partially plane polarized due to Rayleigh scattering from air particles (pp. 99, 101). Therefore, rotating the filter to reduce the partially plane polarized light will darken the blue sky relative to any sources of unpolarized light. The darkening effect is most noticeable along the vertical strip of sky that forms a right angle with the sun and the observer, since maximum polarization occurs at a 90° scattering angle from the sun (or other light source).

• A plane or linear polarizing filter should not be used with digital SLR cameras as it will interfere with the autofocus and metering systems. These utilize a beamsplitter that functions using polarization.

• A circular polarizing filter (CPL) can be used with digital SLR cameras in place of a linear polarizing filter. A CPL functions as an ordinary linear polarizer but modifies the transmitted (plane polarized) light so that E rotates as a function of time and traces out a helical path. This prevents the transmitted light from entering the beamsplitter in a plane polarized state.

A polarizing filter should be removed from the lens in low-light conditions since an ideal polarizing filter only transmits 50% of all unpolarized light, equivalent to use of a 1-stop neutral density filter (p. 88).

Lighting - Side Lighting: Example

[Photograph: Luoping, Yunnan, China.] Side lighting reveals the depth and 3D form.

[Photograph: Llyn Gwynant, Snowdonia, Wales.] Side lighting reveals the surface texture.

Lighting - Sync Speed

The shutter speed t is typically set much slower than the flash duration itself when using flash. Shutter speeds quicker than the sync speed should not be used, otherwise image artifacts can arise due to conflict between the flash and the shutter method of operation.

Mechanical focal plane shutter: Since very quick t are obtained only when the second shutter curtain starts to close before the first curtain fully opens (p. 35), the sync speed is the quickest t at which the first curtain can fully open before the second curtain needs to start closing, on the order of 1/250 s. This ensures that the flash can be fired when both curtains are fully open. Quicker t would shield part of the scene from the flash and cause a dark band to appear in the image.

Mechanical leaf shutter: Since this is positioned next to the lens aperture, the sync speed is limited only by the quickest possible t, typically on the order of 1/2000 s.

Electronic global shutter and CCD sensor: The sync speed is limited only by the quickest electronic shutter speed available or by the flash duration.

Electronic rolling shutter and CMOS sensor: Although quicker shutter speeds t are available compared to a mechanical shutter, exposing all sensor rows to the flash requires limiting the sync speed to the total frame readout time due to the rolling nature of the shutter (p. 35). This is typically too slow to be useful.

Electronic global shutter and CMOS sensor: Available on scientific cameras, the sync speed is limited only by the quickest electronic t or by the flash duration.

If a t faster than the sync speed is required with a focal plane shutter, e.g., when a low N is needed for shallow DoF, the high-speed sync (HSS) mode fires the flash continuously at low power throughout the entire duration t in order to eliminate shielding artifacts. In this case, the effective flash duration and t are the same. Nevertheless, "high speed" refers to the shutter speed t and not the effective flash duration; high-speed photography requires a single flash of very short duration, such as 1/32000 s.

Image Quality: Resolution - Camera System MTF

The camera system MTF is the product of all the individual component MTFs (p. 110):

    |H(μx, μy)| = |H₁(μx, μy)| |H₂(μx, μy)| ··· |Hₙ(μx, μy)|

Although a huge variety of component MTFs contribute to the camera system MTF, a basic system can be defined using the diffraction, detector aperture, and optical low-pass filter (OLPF) components.

The camera system MTF depends greatly upon the diffraction MTF since this varies with lens f-number N.

Example (see p. 115): d_x = 3.8 μm, p_x = 4.0 μm; μ_c,det = 263 cycles/mm, μ_x,Nyq = 125 cycles/mm.

• N = 22: Here the diffraction MTF dominates. The OLPF is not required at this f-number (also see p. 118).

[Plot: component and system MTF curves against spatial frequency from 0 to 600 cycles/mm at N = 22; the detector aperture curve is labelled.]

• N = 1.4: Here, the OLPF MTF dominates. The detector aperture MTF would dominate without the OLPF, in which case aliasing would occur (see pp. 117, 118).

[Plot: component and system MTF curves against spatial frequency from 0 to 600 cycles/mm at N = 1.4; the MTF axis runs from 0 to 1.]
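
A minimal sketch of the product rule for a two-component system, using the standard circular-aperture diffraction MTF and a sinc detector-aperture MTF; λ = 550 nm is assumed, the OLPF is omitted, and the detector width matches the example above:

```python
import numpy as np

def mtf_diffraction(nu, wavelength_mm, N):
    """Diffraction MTF of an ideal circular aperture; cutoff nu_c = 1/(lambda * N)."""
    nu_c = 1.0 / (wavelength_mm * N)
    x = np.clip(nu / nu_c, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x**2))

def mtf_detector(nu, d_mm):
    """Detector aperture MTF: |sinc(d * nu)|, with its first zero at 1/d."""
    return np.abs(np.sinc(d_mm * nu))

nu = np.linspace(0.0, 600.0, 601)    # spatial frequency in cycles/mm
lam = 550e-6                         # 550 nm expressed in mm
system = mtf_diffraction(nu, lam, N=22) * mtf_detector(nu, d_mm=3.8e-3)
print(system[::100])                 # combined MTF sampled every 100 cycles/mm
```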
