
Resolving the Vergence-Accommodation Conflict in Head-Mounted Displays
A review of problem assessments, potential solutions, and evaluation methods

Gregory Kramida and Amitabh Varshney

Abstract—The vergence-accommodation conflict remains a major problem in head-mounted displays for virtual and augmented reality (VR and AR). In this review, we discuss why this problem is pivotal for nearby tasks in VR and AR, present a comprehensive classification of potential solutions, along with the advantages and shortfalls of each category, and briefly describe various methods that can be used to better evaluate the solutions.

Index Terms—Vergence-Accommodation Conflict, Head-Mounted Displays

1 INTRODUCTION

The vergence-accommodation conflict (henceforth referred to as VAC), also known as accommodation-convergence mismatch, is a well-known problem in the realm of head- (or helmet-) mounted displays (HMDs), also referred to as head-worn displays (HWDs) [1], and stereoscopic displays in general: it forces the viewer's brain to unnaturally adapt to conflicting cues, and it increases the fusion time of binocular imagery while decreasing fusion accuracy [2]. This contributes to (sometimes severe) visual fatigue (asthenopia), especially during prolonged use [3], [4], [5], which, for some people, can even cause serious side-effects long after cessation of using the device [6].

The current work is a checkpoint of the current state of the VAC problem as it relates to HMDs for augmented reality (AR) and virtual reality (VR), and a comprehensive listing and discussion of potential solutions. With this review, we intend to provide a solid informational foundation on how to address VAC for any researcher working on or with HMDs, whether they are working on new solutions to the problem specifically, or designing a prototype for a related application.

In the remainder of this section we present a review of publications assessing the nature of the VAC problem and discussing its severity and importance within different contexts.
In the following Section 2, comprising the bulk of this review, we discuss the various display designs that attempt to solve the problem, addressing the advantages and shortfalls of each, as well as related technology which could potentially help with this issue in future designs. In Section 3, we describe the various methods and metrics that have been proposed or applied to evaluate the effectiveness of existing solutions. Finally, in Section 4, we identify potential areas within the solution space that have yet to be explored or can be improved.

(G. Kramida and A. Varshney are with the Department of Computer Science, University of Maryland, College Park, MD, 20740. E-mail: gkramida, varshney@umiacs.umd.edu)

1.1 The Accommodation-Vergence Conflict

The human visual system employs multiple depth stimuli, a more complete classification of which can be found in a survey by Reichelt et al. [5]. The same survey finds that oculomotor cues of consistent vergence and accommodation, which are, in turn, related to retinal cues of blur and disparity, are critical to a comfortable 3D viewing experience. Retinal blur is the actual visual cue driving the oculomotor response of accommodation, i.e. the adjustment of the eye's lens to focus at the desired depth, thus minimizing the blur. Likewise, retinal disparity is the visual cue that drives vergence. However, there is also a dual and parallel feedback loop between vergence and accommodation, and thus each becomes a secondary cue influencing the other [4], [5], [7]. In fact, Suryakumar et al. [8] measure both vergence and accommodation at the same time during the viewing of stereoscopic imagery, and establish that the accommodative response driven by disparity and the resultant vergence is the same as the monocular response driven by retinal blur. In a recent review of the topic [6], Bando et al.
summarize some of the literature about this feedback mechanism within the human visual cortex.

In traditional stereoscopic HMD designs, the virtual image is focused at a fixed depth away from the eyes, while the depth of the virtual objects, and hence the binocular disparity, varies with the content [9], [10], which results in conflicting information within the vergence-accommodation feedback loops. Fig. 1 demonstrates the basic geometry of this conflict.

Figure 1. (a) Conceptual representation of accommodation within the same eye. Light rays from far-away objects are spread at a smaller angle, i.e. are closer to parallel, and therefore require little refraction to be focused on the retina. Light rays from close-up objects fan out at a much greater angle, and therefore require more refraction. The lens of the human eye can change its degree of curvature, and, therefore, its refractive power, resulting in a change in focal distance. (b) Conceptual representation of the VAC. The virtual display plane, or focal plane, is located at a fixed distance. The virtual objects can be located either in front of it or, if it is not at infinity, behind it. Thus the disparity cue drives the eyes to verge at one distance, while the light rays coming from the virtual plane produce retinal blur that drives the eyes to accommodate to another distance, giving rise to the conflict between these depth cues.

The problem is not as acute in certain domains, such as 3D TV or cinema viewing, as it is in HMDs, so long as the content and displays both fit certain constraints. Lambooij et al. [4] develop a framework of constraints for such applications, the most notable of which in this context being that retinal disparity has to fall within a 1° safety zone of the focal cues. This indeed can be achieved in 3D cinematography, where virtual objects are usually located at a great depth and stereo parameters can be adjusted for each frame prior to viewing.
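The geometry of Fig. 1 can be quantified: vergence and accommodation demands are both naturally expressed in diopters (1 D = 1/m), and the magnitude of the conflict is simply their difference. The following sketch is illustrative only — the 2 m focal plane and the sample object depths are assumed values, not figures from this review:

```python
def demand_diopters(distance_m: float) -> float:
    """Convert a viewing distance in meters to an oculomotor demand in diopters (1 D = 1/m)."""
    return 1.0 / distance_m

def vac_magnitude_d(focal_plane_m: float, object_m: float) -> float:
    """Mismatch between the accommodation demand (fixed focal plane) and the
    vergence demand (virtual object depth), in diopters."""
    return abs(demand_diopters(object_m) - demand_diopters(focal_plane_m))

# Hypothetical HMD with its focal plane fixed at 2 m (0.5 D):
for obj_m in (0.3, 0.5, 2.0, 10.0):
    print(f"object at {obj_m:4.1f} m -> conflict of {vac_magnitude_d(2.0, obj_m):.2f} D")
```

Content at the focal plane produces zero conflict and distant content stays below about half a diopter, but a near-point object at 30 cm yields a conflict of nearly 3 D — which is why distant cinema-style content is comparatively safe while near-field AR and VR tasks are not.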
Precise methodologies have been developed on how to tailor the stereo content to achieve this [11], [12], [13], [14]. However, these constraints have to be violated within the context of VR gaming [9], [10], [15] and of AR applications [16], where content is dynamic and interactive, and nearby objects have to be shown for a multitude of near-point tasks — for instance, assembly, maintenance, driving, or even simply walking and looking around in a room.

We proceed to outline a hierarchical taxonomy of HMD displays for AR and VR.

2 SOLUTIONS

Although the VAC problem remains generally unsolved in modern-day commercial HMDs, researchers have theorized about and built potential prototype solutions since the early 1990s. Since the convergence cue in properly-configured stereo displays mostly corresponds¹ to natural-world viewing, but the accommodation cue does not, the vast majority of the effort on resolving VAC gears towards adjusting the retinal blur cue to the virtual depth of the content.

1. But not entirely, due to the offset between virtual camera and pupil, as we discuss later.

2.1 See-through Methods

HMDs for VR are typically opaque, since they only aim to provide an immersive visual of the virtual environment (VE)². For AR, the displays fall into two general categories: optical see-through (OST) and video see-through (VST). Optical see-through systems let through, or optically propagate, light rays from the real world and use beamsplitters to combine them with virtual imagery. Video see-through displays capture video of the real world and digitally combine it with virtual imagery before re-displaying it to the user. Refer to Table 1 for a comparison of these two methods. Most of the solutions we describe are applicable to both opaque and see-through HMDs, although not all can be as easily integrated into OST as they may be into VST displays.

2. Although it has been suggested to optionally display a minified video feed of the outside world to prevent the user from running into real obstacles while exploring VEs.

Table 1. Comparison of optical see-through (OST) and video see-through (VST) HMDs, based on [1], [17], and [18]

OST advantages:
- provide good peripheral vision and low distortion of the world
- impose no lag on world imagery
- imaging point may remain at the pupil
- no resolution loss for world imagery

OST drawbacks:
- difficult to make virtual objects occlude the world
- involve complex optical paths: design and fabrication are complex
- latency between seeing the world and registration for / rendering of virtual content
- mismatch between focal distance of real and virtual objects is more apparent

VST advantages:
- easy to make virtual objects occlude world objects in the video feed
- basic designs have fewer optical elements and are cheap and easy to manufacture
- no latency between world imagery and virtual objects

VST drawbacks:
- prone to distort world imagery, especially in the periphery
- impose lag on all content due to video capture, processing, and rendering
- displacement between cameras and pupils contributes to misjudgement of depth and disorientation
- resolution and/or FOV loss for world imagery

2.2 3D Display Principles

Independently from the see-through method, HMDs can be distinguished based on where they fall on the "extent of presence" axis of the taxonomy for mixed reality displays developed by Milgram and Kishino [19]. HMDs span the range including monoscopic, stereoscopic, and multiscopic displays. We leave out monoscopic heads-up displays from further discussion, since these cannot be used for VR or AR in the classical sense (as they cannot facilitate immersive 3D [20]) and are irrelevant to the VAC problem. Stereoscopic displays capitalize on rendering a pair of images, one for each eye, with a disparity between the two views to facilitate stereo parallax. Multiscopic displays show multiple viewing angles of the 3D scene to each eye. These circumvent VAC in HMDs by rebuilding the entire light field, but introduce other problems.

We base our design classification on these two categories, as they tend to address VAC in fundamentally different ways, and further subdivide both branches by the underlying hardware operating principle. Designs from both categories can also be classified based on whether the views are time-multiplexed or space-multiplexed. Please refer to Fig. 2 for the full hierarchical representation of our classification.
We proceed to describe the designs in each hardware category in more detail.

2.3 Stereoscopic Displays

Each stereoscopic display method can be described as either multifocal or varifocal, although in certain cases the two techniques can be combined. Varifocal designs involve adjustable optics which are able to modify the focal depth of the entire view. Multifocal designs, on the other hand, split the view for each eye into regions based on the depth of the objects within, and display each region at a separate, fixed focal depth.

Many earlier varifocal display prototypes were built as proofs-of-concept, and could display only simplistic images, often just simple line patterns or wireframe primitives. These either forced the focus information to correspond to the vergence at a single object, or provided some manual input capability for the user to manipulate the X and Y coordinates of the focal point, which in turn would tell the system which object to bring into focus.

Figure 2. Classification tree of various display designs relevant to HMDs that have shown potential in resolving VAC: stereoscopic designs subdivide into varifocal designs (sliding optics; deformable membrane mirrors; liquid lenses; liquid crystal lenses — time-multiplexed) and multifocal focal-plane stacks (birefringent lenses — multiplexed in both space and time; freeform waveguide stacks — space-multiplexed); multiscopic designs comprise multiview retinal displays (time-multiplexed) and pinlight displays (space-multiplexed).

Just prior to the turn of the century, multifocal designs with physical display stacks were conceived, which to the present day feature solely space-multiplexed focal planes with concurrent output, with the exception of [21], which is multiplexed in both space and time. The central idea in those is to display simultaneously on multiple planes at progressively greater focal depths, thus emulating a volume rather than a single image, naturally circumventing VAC. Afterwards, there was an effort to improve and adapt varifocal designs as well to display different images at fixed depth planes in a time-multiplexed fashion.
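The volume-emulation idea behind such multifocal stacks is usually paired with depth-weighted blending: each pixel's intensity is split between the two focal planes bracketing its virtual depth (the linear "depth filtering" attributed to Akeley et al. in Section 2.3.4). A minimal sketch, assuming linear weighting in diopters; the function name and plane depths are illustrative, not taken from any cited prototype:

```python
def blend_weights(depth_d: float, near_plane_d: float, far_plane_d: float):
    """Linear depth-weighted blending: split a pixel's intensity between the
    two focal planes bracketing its virtual depth (all values in diopters).
    Returns (weight_near, weight_far), which sum to 1."""
    hi, lo = max(near_plane_d, far_plane_d), min(near_plane_d, far_plane_d)
    d = min(max(depth_d, lo), hi)      # clamp into the bracketed range
    w_near = (d - lo) / (hi - lo)      # weight of the closer (higher-diopter) plane
    return w_near, 1.0 - w_near

# Virtual object at 1.5 D, between planes at 2.0 D (50 cm) and 1.0 D (1 m):
w_near, w_far = blend_weights(1.5, 2.0, 1.0)
```

An object halfway between the planes in diopters receives half its intensity on each, while an object lying exactly on a plane is drawn entirely on that plane, so accommodation is driven smoothly across the inter-plane gap.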

Figure 3. Optics of a simple magnifier. We use this notation to maintain consistency and to avoid conflicts with other notations. Subscripts e, i, l, and s represent "eye", "(virtual) image", "lens", and "screen" respectively, so terms such as d_il explicitly denote "distance from image to lens", and w_l denotes "width of lens". The only exception is w_e, which stands for the width of the eye box. f represents the focal length of the lens; t represents either the thickness of the display stack (in the case of display stack designs) or the range of motion of the relay lens (in the case of sliding optics designs); M represents the magnification factor from the screen or relay lens to the virtual image.

Both multifocal and varifocal designs can be described by the optics of a simple magnifier. We establish the notation framework for this in Fig. 3. There exists a common classification which splits optical designs of HMDs into pupil-forming and non-pupil-forming [1], [22], [23]. The simple magnifier embodies all non-pupil-forming designs. The only relevant difference of pupil-forming displays is that they magnify an intermediary image (or a series of such) before relaying, or projecting, it to the final exit pupil. The primary benefit of the simple magnifier is that it requires the fewest optical elements, and therefore is relatively light, easy to design, and cheap to manufacture. The primary benefit of the more complex projection systems is that the optical pathway can be wrapped around the head, increasing the optical path length and providing better correction of optical aberrations.
For the purposes of this article, it suffices to say that the principles for dealing with VAC as described using the simple magnifier schematics can be applied just as easily to pupil-forming designs by simply replacing the screen with the previous element in the optical path.

2.3.1 Sliding Optics

The first experimentally-implemented solution of a varifocal display with mechanically-adjustable focus was that of Shiwa et al. [24]. In their design, a CRT displayed stereoscopic images for both eyes in two separate sections. Relay lenses were placed in the optical paths between the exit lenses and the corresponding sections of the screen. The relay lenses had the ability to slide back and forth along the optical path, driven by a stepper motor. The authors observe that if the viewer's eyes are located at the focal distance (d_el = f), then, when the relay lenses are moved, the angular FOV (α) remains constant, while the focal distance to the virtual image (d_ei) changes³. The authors suggest that gaze detection should be integrated to determine the exact point, and therefore depth, of the objects being viewed. However, for the purposes of the experiment, their implementation assumed a focal point at the center of the screen and provided manual (mouse/key) controls to move it. Based on the depth of the virtual object at this point, the relay lens for each eye would move along the optical axis, focusing the image at a different distance.

3. See Appendix A for the mathematical justification.
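The behavior such sliding-optics designs exploit follows from the thin-lens relation for the simple magnifier of Fig. 3: with the screen (or relayed image) at distance d_sl inside the focal length f, the virtual image lands at d_il = f·d_sl/(f − d_sl), so millimeters of mechanical travel sweep the image across meters of depth. A sketch with assumed numbers (not parameters of the actual prototype):

```python
def image_distance_mm(f_mm: float, d_sl_mm: float) -> float:
    """Thin-lens magnifier: 1/d_sl = 1/f + 1/d_il  =>  d_il = f*d_sl/(f - d_sl).
    Requires the screen strictly inside the focal length (d_sl < f),
    so that the image is virtual."""
    if not 0 < d_sl_mm < f_mm:
        raise ValueError("screen must lie strictly inside the focal length")
    return f_mm * d_sl_mm / (f_mm - d_sl_mm)

# Sliding a screen under an assumed 30 mm lens: ~5 mm of travel moves the
# virtual image from 0.15 m out to several meters.
for d_sl in (25.0, 27.0, 29.0, 29.7):
    d_il = image_distance_mm(30.0, d_sl)
    print(f"screen at {d_sl:4.1f} mm -> image at {d_il / 1000:5.2f} m ({1000 / d_il:.2f} D)")
```

Note how the image distance diverges as d_sl approaches f — the same nonlinearity that lets a small stepper-motor range cover the whole accommodation range, and that makes the constant-FOV condition d_el = f convenient.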

The authors rely on the specifications of the optometer developed in [25]⁴ to determine the required speed of relay lens movement. They measure that the mechanism takes less than 0.3 seconds to change from 20 cm to 10 m focal plane distance (5 and 0.1 diopters respectively), which they conclude is fast enough to keep up with eye accommodation. A recent study on accommodation responses for various age groups and lighting conditions confirms this to be true [26]⁵: the youngest age group in the brightest setting showed an average peak velocity of only 1.878 ± 0.625 diopters/sec.

Yanagisawa et al. also construct and analyze a 3D display of similar design with an adjustable relay lens [27]. Both systems have an angular FOV below 50°. Shibata et al. [28], rather than changing the position of a relay lens, change the axial position of the actual display in relation to a static mono-focal exit lens according to the same principle, varying focal depth from 30 cm to 2 m (3.0 to 0.5 D).

All of the above-mentioned displays with mechanically-adjustable focus are large and cumbersome bench systems, and thus would require significant effort to be scaled down for use in portable HMDs. Since the optics for HMDs cannot physically span the entire near range where the VAC is significantly acute, downsizing such designs would require additional magnification of the virtual image.

2.3.2 Deformable Mirrors in Virtual Retinal Displays

First proposed in [29], a virtual retinal display (VRD) projects a low-power laser beam directly into the pupil, forming the image on the back of the retina directly rather than on an external device, which makes it radically different from other display technologies. In [30], McQuaide et al. at the Human Interface Technology Laboratory (HITLab)⁶ use a VRD in conjunction with micro-electromechanical system (MEMS) deformable mirrors.
The VRD scans the laser light onto the deformable mirror, which reflects it through a series of pivoting mirrors directly into the pupil in an x-y raster pattern. In this way, the VRD directly forms the image on the retina. The MEMS mirror is a thin circular membrane of silicon nitride, coated with aluminum and suspended over an electrode. The surface of the mirror changes its convergence depending on how much voltage is applied to the electrode, which directly modifies the focus of the laser beam, altering the accommodation required to view the displayed objects without blur. The authors achieve a continuous range of focal planes from 33 cm to infinity (3.0 to 0.0 D), which is later improved to 7 cm to infinity in [31]. The experiments feature a monocular table-top proof-of-concept system which projected very basic images (two lines), but showed that observers' accommodative responses coherently matched changes in the focal stimulus demands controlled by the mirror.

4. This optometer detected accommodation to within 0.25 diopters (1 D = 1/m) at a rate of 4.7 Hz.
5. The subjects re-focused from a target 4 m away to a target 70 cm away.
6. www.hitl.washington.edu

While deformable mirrors can be used in a varifocal fashion, focusing on the one depth being observed, they can also be flipped fast enough between two focal planes to create the illusion of a contiguous 3D volume. Research on VRDs with deformable membrane mirrors is continued in [32], where Schowengerdt et al.
of HITLab synchronize the membrane curvature changes with per-frame swapping between two different images, thus displaying the images at different depths simultaneously and simulating the light field. The prototype's depth range spans contiguously from 6.25 cm to infinity.

The claimed advantage of the VRD designs is that they can potentially be made less bulky, since they do not require an actual image-forming display. However, there still needs to be a reflective surface spanning a large area in front of the observer's eyes in order to project an image with a large angular FOV.

2.3.3 Liquid and Electroactive Lens Displays

The first to use a liquid lens for dynamically switching the perceived focus in a display were Suyama et al. [33]. The lens could be adjusted to any optical power between -1.2 and 1.5 diopters at a frame rate of 60 Hz. Another, static lens was placed between the exit pupil and the varifocal lens in order to keep the FOV of the output image constant. The prototype featured a single 2D display, providing only movement parallax without binocular disparity. In the experiment, simple 3D primitives were shown, whose focal depth was controlled.

Years later, Liu and Hua build their own proof-of-concept monocular liquid-lens varifocal prototype [34]. The liquid lens they used could change from -5 to 20 diopters within 74 ms (7 Hz), but they also test the speed of several alternative lenses, with response times down to 9 ms (56 Hz), which approaches the 60 Hz frequency. The optics of the whole system are set up to vary accommodation from 0 to 8 diopters (infinity to 12.5 cm).
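Lens response times bound the achievable volumetric refresh rate of any time-multiplexed design, since one full cycle must visit every focal state. A crude budget, assuming the cycle time is simply n_states × t_response (real prototypes also lose time to display refresh and lens settling, which is why Liu and Hua report roughly 37.5 Hz rather than this ideal bound):

```python
def volume_rate_hz(lens_response_s: float, n_states: int) -> float:
    """Upper bound on the volumetric refresh rate when each of n_states
    focal states costs one lens response time per cycle (settling ignored)."""
    return 1.0 / (n_states * lens_response_s)

# A 9 ms lens serving two focal planes: at most ~55.6 Hz per volume.
two_plane_hz = volume_rate_hz(0.009, 2)

# Four focal states at 60 Hz switching (~16.7 ms per state): ~15 Hz per volume
# by this crude bound; Love et al. [21], with a fuller accounting, arrive at 12.5 Hz.
four_plane_hz = volume_rate_hz(1 / 60, 4)
```

The quadratic squeeze is the point: doubling the number of focal states halves the volume rate, which is why faster lenses and sparse plane counts are both pursued.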
They continue their research in [35], where they integrate the 9 ms liquid lens and have it oscillate between two different focal planes, thus emulating a light field at about 37.5 Hz.

One problem with the liquid lens that Liu and Hua identify is that, during the settling time of the liquid lens when its driving signal is switched, there are longitudinal shifts of the focal planes, which yield minor image blur and less accurate depth representations. They hypothesize this problem can be mitigated by a liquid lens with a yet faster response time. Subsequently, in [36], Liu et al. integrate their liquid lens mechanism into an HMD. The prototype's

FOV spans only about 16° horizontally. They test it on ten subjects and determine their error rate in a basic depth estimation task, as well as measuring the actual accommodation response with a near-infrared autorefractor, concluding that their approach yields a better accommodation cue than static optics.

A critique of the liquid lenses by Love et al. [21] is that a switchable-focal-plane display requires a minimum of four states, not two, and, given a liquid lens frequency of 60 Hz (which the lenses used by Liu et al. do not yet achieve but target for future research), the display could yield a maximum refresh rate of only 12.5 Hz, and hence would produce flicker and motion artifacts.

However, there exist other methods to adjust the optical power of a lens besides actually changing its geometry, as in the liquid lens. One possible alternative is the liquid crystal electroactive lens, as also suggested by Liu and Hua in both [34] and [36]. Such lenses consist of a layer of liquid crystal sandwiched between two (often planar) glass substrates [37]. The two substrates are coated with (transparent) indium tin oxide on the sides parallel to the optical axis, and with aluminum film on the other sides; these two materials act as electrodes. The liquid crystal itself consists of thin rod-like molecules. When a fixed voltage is applied to the side aluminum electrodes, the molecules align homogeneously parallel to the substrates. When additional voltage is applied to the indium tin oxide electrodes, the molecules assume a different homogeneous angle closer to perpendicular, which varies with the voltage. Hence, modulating the voltage on the indium tin oxide changes the refractive index and therefore the optical power.

Ye et al. demonstrate a liquid crystal lens with controllable power between 0.8 and 10.7 D (from about 9 cm to 1.25 m) [37]. Li et al.
develop and implement a glasses-thin prototype of adjustable eyewear for use by far-sighted people (presbyopes), whose optical power varies dynamically between 1.0 and 2.0 D [38]. A slew of research has been done on prototypes of bench autostereoscopic displays using liquid crystal lens arrays to control the focal depth of individual pixels or image regions [39], [40], [41], [42], [43], [44], [45], [46], [47]. Yet we are not aware of any work that integrates liquid crystal lenses into HMDs in practice or develops any theoretical framework for doing so.

2.3.4 Focal Plane Stacks

The concept of spatially-multiplexed multifocal designs originates from a study by Rolland et al. [48], who explore the feasibility of stacking multiple display planes, each focused at its own depth, and rendering different images to them simultaneously. The original idea is, at each plane, to leave transparent those pixels that correspond to a different depth layer, while rendering only those objects that correspond to it. The viewers would then be able to naturally converge on and accommodate to the correct depth, wherever they look. The authors develop a mathematical model that stipulates at what intervals to place the focal planes (dioptric spacing), as well as requirements for the total number of planes and the pixel density of the displays. They determine that a minimum of 14 planes is required to achieve a focal range between 2 diopters and infinity, with interplanar spacing of 1/7 diopters. They also suggest that if a fixed positive lens is positioned in front of the focal planes, the physical thickness of the display can be greatly reduced. Their framework is analogous to Fig. 3, so the thickness of the resulting display stack can be expressed as:

t = f − d_sl = f² / (f + d_il) = f² / (f + d_ei − d_el)    (1)

In the above equation, d_ei is the shortest depth the viewer should be able to accommodate to, while d_sl is the offset from the lens to the first screen in the stack, which displays virtual objects at that depth. d_sl can
dsl canbe expressed as:dsl 1f1f dil1 f d dilil(2)Based on these equations7 , for a 30 mm focal length,25 cm closest viewing distance, and 25 mm eye relief,dsl would be approx. 26.5 mm and the stack thicknesst would be approx. 3.5 mm, resulting in an overallminimum display thickness of about 3 cm. Authorsproceed to derive the resolution requirements forsuch displays and conclude they can be built usingcontemporary technology.However, as [36] points out, a practical applicationof this method is still challenging, since no displaymaterial known to date has enough transmittance toallow light to pass through such a thick stack ofscreens. Akeley et al. are the first to address thischallenge in [49]. They design and build a prototypewith only three focal planes per eye, all projectedto via beamsplitters and mirrors from 6 viewportsrendered on a single high-resolution LCD monitor.Their main contribution is a depth filtering algorithm,which they use to vary pixel intensity linearly withthe difference between their virtual depth and thedepth of the actual plane on which they are shown,thus emulating the light field in between the viewingplanes. Although this prototype shows that sparsedisplay stacks can be effectively used, it is still a large,immobile table-top machine, and needs to be scaleddown to be used as an HMD.After developing the liquid lens HMD, Liu andHua switch gears and also come up with an elaborate theoretical framework for using specifically sparsedisplay stacks in HMDs in [50], coining term depthfused 3D displays (DFD) for any display with depth7. See appendix B for detailed derivations of these equations,which not given in the original source

blending. There are two major points their framework addresses: (1) the dioptric spacing between adjacent focal planes, now different from Akeley's model in that it is based on depth-of-field rather than stereoacuity, and (2) the depth-weighted blending function to render a continuous volume (also referred to as depth filtering). They develop their own depth blending model, different from the one described by Akeley in [49].

In a later work, Ravikumar et al. analyze both of these blending models, and find that Akeley's linear model is theoretically better based on contrast and effectiveness in correctly driving accommodation cues [51]. MacKenzie et al. [52] experimentally establish requirements for plane separation in display stacks with depth blending: an 8/9 D separation, yielding a minimum of five planes for a depth range from 28 cm to infinity (32/9 to 0 D). They also note that contrast (and, therefore, sharpness) is attenuated due to one or more planes between the eye and the target being defocused, an effect present even at 8/9 D and more drastic at larger plane separations. This is an inherent flaw in all multifocal systems, which causes rendered objects to always appear different from, and less sharp than, naturally-observed objects.

2.3.5 Birefringent Lenses

Love et al. [21] build a display similar to that of Akeley et al., but their design is time-multiplexed using light polarization. They use two birefringent lenses made of calcite, interspersed with polarization switches. They take advantage of the fact that, while calcite is highly transparent, birefringent lenses have two different indices of refraction: one for light polarized along one crystalline axis, and another for light polarized along the orthogonal axis. Thus, for light with different polarization, the lenses have different optical power, and focus to planes at different distances. This way, the setup features only two lenses, but projects to four different depths.
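The stack arithmetic quoted above can be checked numerically: Equations (1) and (2) reproduce Rolland et al.'s 26.5 mm screen offset and 3.5 mm stack thickness, and the plane counts follow from dividing the dioptric working range by the interplanar spacing. Note that reconciling Rolland et al.'s 14 planes with MacKenzie et al.'s five requires assuming slightly different placement conventions (bin-centered vs. endpoint) — an interpretation on our part, not a claim from either source:

```python
def screen_offset_mm(f_mm: float, d_il_mm: float) -> float:
    """Eq. (2): d_sl = f * d_il / (f + d_il)."""
    return f_mm * d_il_mm / (f_mm + d_il_mm)

def stack_thickness_mm(f_mm: float, d_ei_mm: float, d_el_mm: float) -> float:
    """Eq. (1): t = f - d_sl = f^2 / (f + d_ei - d_el)."""
    return f_mm ** 2 / (f_mm + d_ei_mm - d_el_mm)

# Rolland et al.'s example: f = 30 mm, nearest image 25 cm from the eye,
# 25 mm eye relief (so d_il = 225 mm).
d_sl = screen_offset_mm(30.0, 250.0 - 25.0)       # ~26.5 mm
t = stack_thickness_mm(30.0, 250.0, 25.0)         # ~3.5 mm

# Plane counts: dioptric working range / interplanar spacing.
rolland_planes = round(2.0 / (1 / 7))             # 14 planes over 0-2 D, bin-centered
mackenzie_planes = round((32 / 9) / (8 / 9)) + 1  # 5 planes over 0-32/9 D, at endpoints
```

As a consistency check, f − d_sl = 30 − 26.47 ≈ 3.53 mm, matching Eq. (1) directly, and d_sl + t sums to the roughly 3 cm overall thickness cited above.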
They use the shutter technique for switching between volumetric slices of images with different polarization, and achieve a frame rate of 45 Hz using two CRT monitors, one for each eye. The design demonstrates superior transmittance between focal planes. However, the prototype is still not small enough to be used as an HMD.

2.3.6 Freeform Waveguides

Our eyes do not come retrofitted within threaded circular nests. If that were the case, designing a light-weight, super-compact, wide-FOV HMD with conic optics would be trivial. Hence, although stacked display designs with conic optics can be said to have "evolved" into freeform optics displays, as Rolland and Thompson semi-humorously note in [53], the advent of automatic fabrication of surfaces under computer numerical control (CNC) in the past two decades constitutes a revolution in HMD designs and other optics applications. Indeed, freeform optics provide HMD researchers with much greater freedom and flexibility than they had with conventional rotationally-symmetric surfaces. We first provide some historical context about freeform optics in HMDs as the precursor of the resulting VAC solutions.

We suppose that the time and place at which freeform optics started to heavily affect HMD designs, especially optical see-through HMDs, would be just after the beginning of this century, at the Optical Diagnostics and Applications Laboratory (the O.D.A. Lab) of the University of Rochester, headed by the aforementioned Rolland. In [54], Cakmakci et al. target an eyeglasses form factor and a see-through operational principle as the primary display in wear

