Feature Article

Digital Route Panoramas

Jiang Yu Zheng
Indiana University–Purdue University, Indianapolis

Route panorama is a new image medium for digitally archiving and visualizing scenes along a route. It's suitable for registration, transmission, and visualization of route scenes. This article explores route panorama projection, resolution, and shape distortion. It also discusses how to improve quality, achieve real-time transmission, and display long route panoramas on the Internet.

In ancient western art, compositions showed the perspective of a scene's projection toward a 2D view plane. In contrast, an ancient oriental painting technique exposed scenes of interest at different locations in a large field of view, which let viewers explore different segments by moving their focus. A typical and famous example is a long painting scroll named A Cathay City (see Figure 1), painted 900 years ago. It recorded the prosperity of the capital city of ancient China in the Song Dynasty by drawing scenes and events along a route from a suburb to the inner city.

Figure 1. A Cathay City painted on an 11 m scroll in the Song Dynasty 900 years ago. (From the collection of the National Palace Museum, Taipei, Taiwan, China.)

The invention of paper in ancient China allowed most paintings to be created on foldable and scrollable papers instead of canvases. One benefit of this scrollable style is that a path-oriented projection can display more detailed visual information in an extended visual field than a single-focus perspective projection at one end of a street or a bird's-eye view from the top.

Today we can realize a composition approach similar to A Cathay City by creating a new image medium, the Route Panorama, a program that contains an entire scene sequence along a route. We generate long route panoramas by using digital image-processing techniques and render route views continuously on the Internet. Now we can capture, register, and display route panoramas for many miles taken from vehicles, trains, or ships. We can use this approach for many practical purposes, including profiles of cities for visitors or introductory indexes of local hometowns. In an extended manner, we could probably even use it as part of something like an enhanced version of Mapquest for people navigating through large cities like Los Angeles or Beijing. Figure 2 shows an example of a route panorama from a one-hour video in Venice, a place where people often get lost.

Figure 2. Segment of route panorama recording canal scenes in Venice.

Route scenes on the Internet

There are many things to consider when creating a quality route panorama. To begin with, we create a route panorama by scanning scenes continuously with a virtual slit in the camera frame to form image memories. We call it a virtual slit because it isn't an actual lens cover with a slit in it; rather, for each camera image in the video sequence, we're copying a vertical pixel line at a fixed position.

We paste these consecutive slit views (or image memories) together to form a long, seamless 2D image belt. We can then transmit the 2D image belt via the Internet, enabling end users to easily scroll back and forth along a route. The process of capturing a route panorama is as simple as recording a video on a moving vehicle. We can create the route panorama in real time with a portable PC inside the vehicle, or by replaying a recorded video taken during the vehicle's movement and inputting it into a computer later. Nevertheless, the generated image belt with its consecutive slit views pieced together has much less data than a continuous video sequence.

My colleagues and I first invented the route panorama 10 years ago for mobile robot navigation.[1-3] We called it a generalized panoramic view because we discovered it while creating the first digital panoramic view. In this early study, we mounted a camera on a moving vehicle and it constantly captured slit views orthogonal to the moving direction. The route panorama is a special case of a more general representation called a dynamic projection image,[4] which forms the 2D image using a nonfixed, temporal projection. Figure 3 illustrates how a vertical slit can scan the street scenes displayed in Figure 2 when the camera is positioned sideways along a smooth path. This viewing scheme is an orthogonal-perspective projection of scenes: orthogonal toward the camera path and perspective along the vertical slit. Generally speaking, common modes of transportation such as a four-wheeled vehicle, ship, train, or airplane can provide a smooth path for the camera.

Compared with existing approaches that model a route using graphics models,[5,6] our route panorama has an advantage in capturing scenes. It doesn't require taking discrete images by manual operation or texture mapping onto geometry models. A route panorama can be ready after driving a vehicle in town for a while. It yields a continuous image scroll that other image stitching or mosaic approaches[7-9] cannot, in principle, realize. A mosaicing approach works well for stitching images taken by a camera rotating at a static position. For a translating camera, however, scenes at different depths have different disparities (or optical flow vectors) in consecutive images. Overlapping scenes at one depth will add layers and create dissonant scenes at other depths, much like overlapping stereo images.

A route panorama requires only a small fraction of the data of a video sequence and has a continuous format when accessed. If we pile a sequence of video images together along the time axis, we obtain a 3D data volume called the spatial-temporal volume that's full of pixels (see Figure 4). The route panorama comprises pixel lines in consecutive image frames, which correspond to a 2D data sheet in the spatial-temporal volume. Ideally, if the image frame has a width w (in pixels), a route panorama has only 1/w of the data size of the entire video sequence (w is 200 to 300), since we only extract one pixel line from each frame when viewing through the slit. The route panorama neglects the redundant scenes in consecutive video frames, which we can observe from the traces of patterns in the epipolar-plane images (Figure 4 shows one EPI).
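
The slit-scanning step described above can be sketched in a few lines. The following Python fragment is a minimal sketch assuming OpenCV and NumPy are available; the file name and slit column are hypothetical placeholders, not values from the article. It copies one vertical pixel line per frame and pastes the lines side by side into the long image belt.

```python
import cv2          # video decoding (assumed available)
import numpy as np

def build_route_panorama(video_path: str, slit_x: int) -> np.ndarray:
    """Paste the vertical pixel line at column slit_x of every frame
    into one long 2D image belt (the route panorama)."""
    cap = cv2.VideoCapture(video_path)
    slit_views = []
    while True:
        ok, frame = cap.read()
        if not ok:                              # end of the video sequence
            break
        slit_views.append(frame[:, slit_x, :])  # one H x 3 slit view
    cap.release()
    # Stack the slit views along the t (route) axis: the result is H x T x 3.
    return np.stack(slit_views, axis=1)

# Hypothetical usage with a side-looking video:
# belt = build_route_panorama("street_drive.avi", slit_x=320)
# cv2.imwrite("route_panorama.png", belt)
```

Even a one-hour drive contributes only one column per frame to the belt, which is the 1/w data reduction noted above.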

This shows a promising property of the route panorama as an Internet medium, which can deliver a large amount of information with small data. The missing scenes are objects under occlusion when exposed to the slit. Also, as with normal 2D images capturing dynamic objects, a route panorama only freezes instantaneous slit views, rather than capturing object movements as video does.

Figure 5 is another example of a route panorama. It displays a segment of the First Main Street of China. We used a small digital video camera on a double-decker bus to record the 5-km route panorama. No image device other than an image grabber is required in processing.

Figure 3. Route scene scanning by constantly cutting a vertical pixel line in the image frame and pasting it into another continuous image memory when the camera moves along a smooth path on the horizontal plane.

Projecting scenes

Compared with existing media such as photos, video, and panoramic views, the route panorama has its own projection properties. For instance, much can depend on the smoothness of the ride while the camera is recording. In an ideal situation roads are flat and the vehicle has solid suspension, a relatively long wheelbase, and travels at a slow speed. Under these conditions, the camera has less up-and-down translation and can traverse a smooth path along a street.

In this article, we assume the camera axis is set horizontally for simplicity. We define a path-oriented coordinate system S-Y-D for the projection of scenes toward a smooth camera trace S on the horizontal plane. We denote a point in such a coordinate system as P(S, Y, D), where S is the camera-passed length from a local starting point (see Figure 3b), Y is the height of the point from the plane containing the camera path, and D is the depth of the point from the camera focus. Assume t and y are the horizontal and vertical axes of the route panorama, respectively. The projection of P in the route panorama, denoted by p(t, y), is

t = S/r,   y = fY/D   (1)

where r (m/frame) is the slit sampling interval on the camera trace, which is the division of the vehicle's speed V (m/second) by the camera's frame rate (fps). The camera's focal length f (in pixels) is known in advance after calibration.

Because the route panorama employs an orthogonal-perspective projection, the aspect ratio of an object depends on its distance from the path. Figure 6 displays a comparison of views between an ordinary perspective image and a route panorama. The further the object, the lower the object height is in the route panorama. Object widths in the route panorama, however, are proportional to their real widths facing the street. In this sense, distant objects are extended horizontally in the route panorama.
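
As a worked illustration of Equation 1 (a sketch only; the speed, frame rate, focal length, and scene point are made-up numbers, not values from the article), the function below maps a scene point P(S, Y, D) to route-panorama coordinates p(t, y) and shows the aspect-ratio effect: the projected height shrinks with depth while the t coordinate does not depend on depth at all.

```python
def project_to_route_panorama(S, Y, D, V=10.0, fps=30.0, f=400.0):
    """Orthogonal-perspective projection of Equation 1.
    S: length passed along the camera path (m)
    Y: height above the plane of the camera path (m)
    D: depth from the camera path (m)
    V: vehicle speed (m/s); fps: frame rate; f: focal length (pixels)."""
    r = V / fps          # slit sampling interval on the path (m/frame)
    t = S / r            # horizontal axis: proportional to distance traveled
    y = f * Y / D        # perspective scaling along the vertical slit
    return t, y

# The top corner of a 10 m high facade, 20 m along the route:
print(project_to_route_panorama(S=20.0, Y=10.0, D=15.0))  # (60.0, ~266.7)  close facade
print(project_to_route_panorama(S=20.0, Y=10.0, D=60.0))  # (60.0, ~66.7)   distant facade
```

The same S always lands at the same t, which is why object widths stay proportional to their real widths while heights fall off with depth.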

Figure 4. Data size of a route panorama is one sheet out of a volume of video images (the spatial-temporal volume).

Figure 5. A segment of route panorama generated from a video taken on a bus (before removing shaking components).

Figure 6. 2D projections of 3D objects in a route panorama compared with a perspective projection image. W stands for the object width, H is object height, and D is object depth. (a) Ordinary perspective projection. (b) Orthogonal-perspective projection. (c) A typical object in perspective projection and (d) in an orthogonal-perspective projection image.

Thus, route panoramas aren't likely to miss large architectures. Small objects such as trees, poles, and signboards are usually narrower than buildings along the t axis and might disappear after squeezing in the t direction, while buildings won't disappear. In a normal perspective projection image, a small tree close to the camera may occasionally occlude a large building behind it.

We examine several sets of structure lines typically appearing on 3D architectures (see Figures 6c and 6d) and find their shapes in the route panorama from a linear path. Assuming a line has direction vector V = (a, b, c) in the global coordinate system, with the X axis parallel to the camera path, the line sets are

- L1 = {V | c = 0}: lines on vertical planes parallel to the camera path. These lines may appear on the front walls of buildings.
- L2 = {V | a = c = 0} ⊂ L1: vertical lines in the 3D space. These lines are vertical rims on architectures.
- L3 = {V | c ≠ 0}: lines stretching in depth from the camera path.
- L4 = {V | c ≠ 0, b = 0} ⊂ L3: horizontal 3D lines nonparallel to the camera path.

Obviously, lines in L2 are projected as vertical lines in the route panorama through the vertical slit. Denote two points P1(S1, Y1, D1) and P2(S2, Y2, D2) on a line, where P1P2 = τV; their projections in the route panorama are p1 and p2 according to the projection model in Equation 1. For a line in L1, where ΔD = D2 − D1 = 0 and ΔY/ΔS is constant (= b/a), the projection of the line becomes

p2 − p1 = ((S2 − S1)/r, fY2/D2 − fY1/D1) = (ΔS/r, fΔY/D1) = τ(a/r, fb/D1)

which is linear in the route panorama. Therefore, a line in L1 is still projected as a line in the route panorama for a linear path.

The most significant difference from perspective projection is a curving effect on line set L3 (Figure 6d). For a line in L3, its projection in the route panorama is a curve, because point P2 projects to

(t2, y2) = (S2/r, fY2/D2) = ((S1 + aτ)/r, f(Y1 + bτ)/(D1 + cτ))

which is a hyperbolic function of τ. This curve approaches a horizontal asymptotic line y = fb/c from p1 when τ → ∞. Particularly for lines in L4, their projections are curves approaching the projection of the horizon (y = 0) in the route panorama.

The path curvature also affects the lines' curving effect if the camera moves on a curved trace. Because of space, we omit the analysis here. Nevertheless, we can obtain reasonably good visual indexes of route scenes as long as we allow for smooth curves in the route panorama.

Another interesting property of the route panorama is the common asymptote for a set of parallel lines stretching in depth (parallel lines in L3). Under perspective projection, parallel lines with a depth change in the 3D space project onto the image plane as nonparallel lines, and their extensions on the image plane cross at a common point called the vanishing point (according to the principle in computer vision). In the route panorama obtained from a linear camera path, however, a set of 3D parallel lines stretching in depth has a common asymptotic line. This is because parallel lines in L3 have the same direction (a, b, c), and their projections in the route panorama all approach the same horizontal asymptotic line y = fb/c when τ → ∞.
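
The curving and asymptote behavior can be verified numerically. The short sketch below (Python with NumPy; the focal length, sampling interval, and line parameters are arbitrary illustration values) traces the projection of a line in L3 and shows y creeping toward the asymptote fb/c.

```python
import numpy as np

def project_line_L3(P1, V, f=400.0, r=1.0 / 3.0, taus=None):
    """Project the points P1 + tau*V of a 3D line onto the route panorama.
    P1 = (S1, Y1, D1); V = (a, b, c) with c != 0 (line set L3)."""
    if taus is None:
        taus = np.linspace(0.0, 200.0, 9)
    S1, Y1, D1 = P1
    a, b, c = V
    t = (S1 + a * taus) / r
    y = f * (Y1 + b * taus) / (D1 + c * taus)   # hyperbolic in tau
    return t, y

# An arbitrary line receding in depth with direction (1, 0.5, 2):
t, y = project_line_L3(P1=(0.0, 2.0, 10.0), V=(1.0, 0.5, 2.0))
print(np.round(y, 2))                      # 80.0 ... 99.51: bends toward a flat line
print("asymptote y =", 400.0 * 0.5 / 2.0)  # fb/c = 100.0
```

Any other line with the same direction (a, b, c) produces a curve with the same asymptote, which is the route-panorama counterpart of the vanishing point.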

If we fix the camera axis so that the plane of sight through the slit isn't perpendicular to the camera path, we obtain a parallel-perspective projection along a linear path, because all the perspective planes of sight are parallel. We can further extend this to a bended-parallel-perspective projection when the camera moves along a curved path. Most properties of the orthogonal-perspective projection extend similarly.

Stationary image blur and close-object filtering

When we're actually recording a route panorama, we obtain slit views by cutting a pixel line in the image frame of a video camera. Every slit view itself is then a narrow-perspective projection. The sampling rate of the slit has a limit lower than the video rate. If the vehicle speed isn't slow enough, this is reflected in the route panorama, because the panorama is actually the connection of narrow perspective projections at discrete positions along the route (as Figure 7 depicts). Scenes contained in a narrow wedge are projected onto the one-pixel line at each time instance. We examine surfaces that can appear in three depth ranges from the camera path. First, at the depth where surface 2 in Figure 7 is located, each part of the surface is taken into consecutive slit views without overlapping, just as a normal perspective projection does. We call this the just-sampling range (depth).

Figure 7. Oversampling range, just-sampling depth, and undersampling range of the route panorama.

Second, for a surface closer than the just-sampling range, the consecutive slit views don't cover every fine part of the surface (surface 3 in Figure 7). Surfaces in this range are undersampled in the route panorama. If the spatial frequency of the intensity distribution on the surface is low, that is, the surface has relatively homogeneous intensity, we can recover the original intensity distribution from the sampled slit views (the route panorama), according to the Nyquist theorem in digital signal processing. Otherwise, we may lose some details within that range. Therefore, the route panorama has a function of filtering out close objects such as trees, poles, people, and so forth. By reducing the camera's sampling rate, this filtering effect becomes clearer and more distinct. This is helpful when we're mainly interested in architectures along a street.

Third, if a surface is farther than the just-sampling range, the camera could oversample the surface points. Because a slit view accumulates intensities in the narrow perspective wedge, a point on surface 1 may be counted in the overlapped wedges of consecutive slit views. Therefore, a distant object point that stays at one position in the perspective projection may cause a horizontal blur in the route panorama. We call this phenomenon stationary blur, since it's the converse of the motion-blur effect in a dynamic image, where a fast translating point wipes across several pixels during the image exposure.

We can give the just-sampling depth's numerical computation in a more general form to include bended parallel-perspective projection. If we set a global coordinate system O-XYZ, we can describe the smooth camera path by S[X(t), Z(t)]. If the vehicle is moving on a straight lane without obvious turns, the camera path has almost zero curvature (κ ≈ 0).
Selecting a camera observing side, we can divide a path roughly into linear, concave, or convex segments, depending on the sign of curvature. For simplicity, we assume the radius of curvature R(t) of a curved camera path is constant between two consecutive sampling positions S1 and S2, where R(t) > 0 for a concave path, R(t) < 0 for a convex path, and R(t) → ∞ for a linear path. The curve length between S1 and S2 is r. The wedge's angle is 2θ, where f tan θ = 1/2, because we cut one pixel as the slit width (see Figure 8). For bended parallel-perspective projection, the plane of sight, which is the wedge's central plane, has an angle α from the camera translation direction, that is, the curve's tangent vector.

Figure 8. Just-sampling range for a curved camera path. (a) Curved camera path and a physical sampling wedge. (b) Two consecutive slit views and just-sampling depth.

The two consecutive wedges of thin perspective projection meet at a vertical line through point Pj in Figure 8. On the horizontal plane we have the vector relation S1Pj = S1S2 + S2Pj in triangle S1S2Pj, whose chord has length |S1S2| = 2R(t) sin(r/(2R(t))). Using the sine theorem, we then obtain |S1Pj| and |S2Pj| as

|S1Pj| = 2R(t) sin(r/(2R(t))) · sin(α + θ + r/(2R(t))) / sin(2θ + r/R(t))
|S2Pj| = 2R(t) sin(r/(2R(t))) · sin(α − θ − r/(2R(t))) / sin(2θ + r/R(t))

It isn't difficult to further calculate the just-sampling depth Dj in the plane of sight to the just-sampled surface; Figure 9 shows the equation for this.

Figure 9. Calculating the just-sampling depth Dj in the plane of sight to the just-sampled surface.

It's important to note that the just-sampling range not only relies on the camera's sampling rate, vehicle speed, image resolution, and focal length, but also depends on the camera path's curvature. The just-sampling range tends to be close when the camera moves on a concave path and far on a linear path. When the camera path changes from linear to convex (R(t) varies from −∞ to 0⁻), Dj extends to infinity (Dj → ∞) and then starts to yield negative values (Pj flips to the path's other side). This means that the consecutive perspective wedges won't intersect when the convex path reaches a high curvature, and the entire depth range toward infinity is undersampled.

For a simple orthogonal-perspective projection toward a linear path, we can simplify the equation in Figure 9 to

Dj = r / (2 tan θ)

by setting α = π/2 and R(t) → ∞. Overall, there are differently sampled regions in a route panorama depending on the subjects' depths. We can select a sampling rate so that the just-sampling range is approximately at the front surfaces of the buildings of interest.
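
As a quick numerical check of the simplified linear-path case (a sketch with assumed example values for speed, frame rate, and focal length rather than figures from the article): because f tan θ = 1/2, the formula Dj = r/(2 tan θ) reduces to Dj = r·f, with r equal to V divided by the frame rate.

```python
def just_sampling_depth(V, fps, f_pixels):
    """Just-sampling depth for a linear path with the slit orthogonal to the
    motion (alpha = pi/2): Dj = r / (2 tan(theta)), with f*tan(theta) = 1/2."""
    r = V / fps                          # slit sampling interval (m/frame)
    tan_theta = 1.0 / (2.0 * f_pixels)   # half-wedge angle of a one-pixel slit
    return r / (2.0 * tan_theta)         # equals r * f_pixels

# Assumed example: 10 m/s (36 km/h), 30 frames/s, focal length 300 pixels.
print(just_sampling_depth(V=10.0, fps=30.0, f_pixels=300.0))  # 100.0 m
```

Under these assumed numbers, facades about 100 m from the path are just-sampled; closer surfaces fall in the undersampling range, and more distant ones are oversampled and show stationary blur.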

Figure 10. Vehicle and camera model in taking a route panorama. Note that (Tx, Ty, Tz, Rx, Ry, Rz) = (forward translation, up-and-down translation, translation sideways, pitch, pan, roll), respectively. Also, translation sideways doesn't occur for a vehicle movement.

The collection of slit views is a process of smoothing the spatial intensity distribution: the output value at a point is obtained by averaging intensities over a space around it. We can estimate the degree of stationary blur as follows. At depth D, the width of the perspective wedge is W = D tan θ. We average the color distribution at depth D over W to produce a pixel value for the slit view, which is the convolution between the intensity distribution and a rectangular pulse function with width W and height 1/W. If we set a standard test pattern at depth D that's a step edge with unit contrast or sharpness, we can easily verify that the convolved result is a ramp edge with its sharpness reduced to 1/(D tan θ). Therefore, an edge's sharpness is inversely proportional to its depth. This is important when estimating objects' depth in a route panorama.

If a segment of route panorama mainly contains objects far away, we can squeeze it along the t axis to reduce the stationary blurring effect and, at the same time, reduce shape distortion. This scaling may visually improve the objects' appearance in the route panorama if there's no requirement to keep the exact horizontal scale or resolution of the route panorama.
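
The 1/(D tan θ) falloff of edge sharpness can be reproduced with a one-dimensional simulation. The sketch below (Python with NumPy; the spatial step and the depths are arbitrary illustration values) convolves a unit step edge with the rectangular pulse of width W = D tan θ and reports the maximum slope of the result.

```python
import numpy as np

def blurred_edge_slope(D, tan_theta, dx=0.01):
    """Convolve a unit step edge with a box of width W = D * tan_theta and
    return the steepest slope of the blurred edge (per unit length)."""
    W = D * tan_theta
    x = np.arange(-5.0, 5.0, dx)
    step = (x >= 0).astype(float)                 # unit-contrast step edge
    box = np.ones(max(int(round(W / dx)), 1))
    box /= box.sum()                              # rectangular pulse, height 1/W
    ramp = np.convolve(step, box, mode="same")    # the step becomes a ramp
    return np.max(np.diff(ramp)) / dx             # ~ 1/W = 1/(D * tan_theta)

tan_theta = 1.0 / 600.0                           # e.g., f = 300 pixels
for D in (30.0, 60.0, 120.0):                     # arbitrary depths (m)
    print(D, round(blurred_edge_slope(D, tan_theta), 1))  # 20.0, 10.0, 5.0
```

Doubling the depth halves the edge slope, which is the relation the article uses for estimating object depth from stationary blur.
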
Dealing with camera shaking

Improving image quality is a crucial step toward the real application of route panoramas in multimedia and the Internet. In Figure 5, we pushed a small video camera firmly against a window frame of the bus to avoid uncontrollable accidental camera movement. We can still observe severe zigzags on horizontal structural lines in the route panorama. This is because the camera shook when the vehicle moved over an uneven road. To cope with the camera shaking, some have tried to compensate by using a gyroscope. However, adding special devices might increase the difficulty of spreading this technology. Our approach was to develop an algorithm that rectifies the distortion according to some constraints from scenes and motion.

We can describe the camera movement at any instance by three degrees of translation and three degrees of rotation (as displayed in Figure 10). Among them, the camera pitch caused by left-and-right sway and the up-and-down translation caused by the vehicle shaking on uneven roads are the most significant components affecting the image quality. The latter only yields a small up-and-down optical flow in the image if the vehicle bumping is less than several inches.

Overall, camera pitch influences image quality the most, and luckily we can compensate for it with an algorithm that reduces camera shaking (many algorithms along these lines exist).[10,11] Most of these algorithms work by detecting a dominant motion component between consecutive frames in the image sequence. For a route panorama, however, we only need to deal with shaking components between consecutive slit lines. The idea is to filter the horizontal zigzagged lines in the route panorama to make them smooth, which involves spatial processing. We do this by using the following criteria:

- smooth structural lines in the 3D space should also be smooth in a route panorama, and
- vertical camera shaking (in pitch) joggles the slit view instantly in the opposite direction.

As Figure 11 illustrates, we estimate the camera motion from the joggled curves in the route panorama and then align vertical pixel lines accordingly to recover the original straight structural lines. The way to find an instant camera shake is to check whether all the straight lines joggle simultaneously at that position. We track line segments horizontally in the route panorama after edge detection and calculate their consecutive vertical deviations along the t axis.

At each t position, we use a median filter to obtain the common vertical deviation of all the lines, which yields the camera's shaking component. The median filter prevents taking an original curve in the scene as a camera-joggled line, which would result in a wrong pitch value. After obtaining the sequence of camera parameters along the t axis, we prepare a window shifting along the horizontal axis and apply another median filter to the camera sequence in the window, which eliminates disturbances from abrupt vehicle shaking.

Suppose the original structure lines in the scenes are horizontal in an ideal route panorama (Figure 11a). The camera, however, shakes vertically over time (Figure 11b), which joggles the structure lines inversely in the captured route panorama (Figure 11c). By shifting all the vertical pixel lines according to the estimated camera motion sequence, we can align the curved lines properly to make horizontal structure lines smooth in the route panorama (Figure 11d).

Figure 11. Recovering straight structure lines from camera-joggled curves in a route panorama.

Figure 12. Recovering smooth structure lines in the route panorama by removing the camera-shaking components.

Figure 12 shows the route panorama of Figure 5 after removing the camera shake in pitch. This algorithm is good at removing small zigzags on the structure lines to produce smooth curves. Once we apply the algorithm, it's easy to modify the route panorama to make major lines smooth and straight.
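
A minimal sketch of this de-shaking step follows (Python with NumPy; it assumes the per-line vertical deviations have already been measured by the edge tracking described above, and the smoothing-window size is an arbitrary choice, not a value from the article). It takes the median deviation across the tracked lines at each t as the pitch-induced shift, smooths it with a second median filter over a sliding window, and shifts the panorama's columns back accordingly.

```python
import numpy as np

def estimate_shake(deviations, window=9):
    """deviations: (num_lines, T) array of each tracked line's vertical
    deviation at every t position (NaN where a line is not visible)."""
    # Median across lines at each t: a genuine curve in one scene line is outvoted.
    shake = np.nanmedian(deviations, axis=0)
    # Second median filter over a sliding window along t to drop abrupt spikes.
    pad = window // 2
    padded = np.pad(shake, pad, mode="edge")
    return np.array([np.median(padded[i:i + window]) for i in range(shake.size)])

def remove_shake(panorama, shake):
    """Shift every slit view (column) of the H x T panorama vertically by the
    opposite of the estimated shake so that structure lines become smooth."""
    corrected = np.empty_like(panorama)
    for t in range(panorama.shape[1]):
        corrected[:, t] = np.roll(panorama[:, t], -int(round(shake[t])), axis=0)
    return corrected
```

Rolling the columns is only a sketch of the alignment; subpixel shifts and cropping of the wrapped border would be handled in a real implementation.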

Real-time transmission and display

Our next step is to transmit a long route panorama on the Internet and to seamlessly scroll it back and forth. Displaying and streaming route panoramas gives users the freedom to easily maneuver along the route.

We developed three kinds of route panorama displays to augment a virtual tour. The first type is a long, opened form of the route panorama (see Figure 2). The second type is a side-window view (see Figure 13) that continuously reveals a section of the route panorama back and forth; we call it a route image scroll. You can control the direction of movement and the scrolling speed with a mouse. The third type is a forward view of the vehicle for street traversing; we call it a traversing window (see Figure 14).

Figure 13. Real-time streaming data transmission over the Internet to show route scenes.

Figure 14. The traversing window dynamically displays two sides of route panoramas in an open, cylindrical panoramic view for virtual navigation along a street.

We can combine these different displays in various ways. In rendering a traversing window, we map both side-route panoramas onto two sidewalls along the street and then project them to a local panoramic view (a cylindrical image frame). We then display the opened 2D form of this panoramic view so that users can observe the street stretching forward as well as architectures passing by while traversing the route. We render the traversing window continuously according to the moving speed specified by the mouse. Although the traversing window isn't a true 3D display, major portions of the traversing window have an optical flow that resembles real 3D scenes. As another form of use, it's even possible to display these pseudo-3D routes within a car's navigation system.

As a route panorama extends to several miles, it's unwise to download the whole image and then display it. We developed a streaming data transmission function in Java that can display route image scrolls and traversing windows during download. Because of the route panoramas' small data sizes, we achieved much faster transmission of street views than video.
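
The route-image-scroll display itself reduces to cropping a moving viewport out of the long image belt. Here is a small sketch of that idea (Python with NumPy; the window width and the interaction loop are hypothetical, and the article's own viewer was written in Java):

```python
import numpy as np

def scroll_viewport(belt, t_center, window=640):
    """Return the window of the H x T image belt centered at column t_center,
    clamped to the two ends of the route."""
    T = belt.shape[1]
    left = int(np.clip(t_center - window // 2, 0, max(T - window, 0)))
    return belt[:, left:left + window]

# Hypothetical interaction loop: the mouse sets the scroll speed in columns
# per refresh, and only the visible window has to be decoded at a time.
# t = 0.0
# while viewer_is_open():
#     t += mouse_speed()
#     display(scroll_viewport(belt, int(t)))
```

Because only a narrow window is needed at any moment, the belt can keep streaming in the background while the first downloaded sections are already scrollable.
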
The image belt provides a visual profile of a long street because of its compactness. By adding clickable regions in the image, the route panorama becomes a flexible visual index of routes for Web page linking. On the other hand, we can automatically scroll a route panorama in a small window to give viewers the feeling that they're viewing architectures and shops from a sightseeing bus. With two cameras directed at the left and right sides of the vehicle, we can establish two side views of a route by synchronizing the route panoramas. If we drive vehicles along every street in a town to collect all the route panoramas, we can generate a visual map of the town for virtual tourism on the Internet.

With the tools discussed here, we can register and visualize an urban area using panoramic views, route panoramas, route image scrolls, and maps. All these images have much less data compared to video and look more realistic than 3D computer-aided design models. We can link areas within a city map on the Web to corresponding areas in the route panoramas so that clicking a spot on the map updates the route image scroll accordingly (and vice versa). Eventually, we can use these tools to visualize large-scale spaces such as a facility, district, town, or even an entire city.

Conclusion

Streets have existed for thousands of years and there are millions of streets in the world now. These streets hold a tremendous amount of information, including rich visual contexts that are closely related to our lifestyles and reflect human civilization and history. Registering and visualizing streets in an effective way is important to our culture and commercial life. With the Route Panorama software, when you click a map to follow a certain route, the route panorama will also be scrolled accordingly, showing real scenes along the route. This will greatly enhance the visualization of Geographic Information Systems (GIS).

Because of the 2D characteristics of captured route panoramas, we can use them in a variety of ways. For instance, we can display and scroll them on wireless phone screens or handheld terminals for navigation in cities or facilities. By connecting a route panorama database with the Global Positioning System, we can locate our position in the city and display the corresponding segment of a route panorama on a liquid crystal display. Displaying route panoramas and panoramic images is basically a raster copy of sections of images. Hence, the proposed techniques are even applicable to 2D animation and game applications, potentially pr
