Comparison Of Navigation Techniques For Large Digital Images


Comparison of Navigation Techniques for Large Digital Images

Bradley M. Hemminger,1,3 Anne Bauers,2 and Jian Yang1

Medical images are examined on computer screens in a variety of contexts. Frequently, these images are larger than computer screens, and computer applications support different paradigms for user navigation of large images. This paper reports on a systematic investigation of which interaction techniques are the most effective for navigating images larger than the screen size for the purpose of detecting small image features. An experiment compared five different types of geometrically zoomable interaction techniques, each at two speeds (fast and slow update rates), for the task of finding a known feature in the image. There were statistically significant performance differences between several groupings of the techniques. The fast versions of the ArrowKey, Pointer, and ScrollBar techniques performed the best. In general, techniques that enable both intuitive and systematic searching performed the best at the fast speed, while techniques that minimize the number of interactions with the image were more effective at the slow speed. Additionally, based on a postexperiment questionnaire and qualitative comparison, users expressed a clear preference for the Pointer technique, which allowed them to more freely and naturally interact with the image.

KEY WORDS: User interfaces, human factors, medical image display, interaction techniques, pan, zoom, performance evaluation

INTRODUCTION

Viewing images larger than the user's display screen is now a common occurrence. It occurs both because the spatial resolution of digital images that people interact with continues to increase and because of the increasing variety of smaller-resolution screens in use today (desktops, laptops, PDAs, cell phones, etc.).
This leads to an increased need for interaction techniques that enable the user to successfully and quickly navigate images larger than their screen size.

People view large digital images on a computer screen in many different kinds of situations. This paper draws from work in many fields to address one of the most common tasks in medical imaging: finding a specific small-scale feature in a very large image. An example is mammographers looking for microcalcifications or masses in mammograms. For this study, large images are defined as images that have a spatial resolution significantly larger than that of their viewing device, i.e., at least several times larger in area. The available resolution may be further constrained by the user operating within a window on that screen. For instance, a user may wish to navigate a digital mammogram image that is 40,000 × 50,000 pixels on a personal computer screen that is 1,024 × 768 pixels, in a window of size 800 × 600 pixels.

1 From the School of Information and Library Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3360, USA.
2 From 4909 Abbott Ave. S., Minneapolis, MN 55410, USA.
3 From the Department of Radiology, School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3360, USA.

This research was supported in part by the National Institutes of Health (NIH) Grant # RO1 CA60193-05, US Army Medical Research and Materiel Command Grant # DAMD 17-94-J-4345, NIH RO1-CA 44060, and NIH PO1-CA 47982.

Correspondence to: Bradley M. Hemminger, Department of Radiology, School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3360, USA; tel: 1-919-9662998; fax: 1-919-9668071; e-mail: bmh@ils.unc.edu

Journal of Digital Imaging. Copyright © 2008 by Society for Imaging Informatics in Medicine. doi: 10.1007/s10278-008-9133-0

In the past, computer and network speeds limited the speed at which such large images could be manipulated by the display device, limiting the types of interaction techniques available and their effectiveness. As computer and network speeds have increased, it is now possible to interactively manipulate images by panning and zooming them in real time on most computer-based display systems, including the graphics cards found on standard personal computers. The availability of interactive techniques supporting real-time panning and zooming provides for the possibility of improved human–computer interactions. However, most interactions in existing commercial applications, as well as freely available ones, do not take advantage of improved interaction techniques or necessarily use the techniques best suited to the capabilities of their particular display device. To test different interaction techniques, five different interaction techniques supported by imaging applications were selected.

In order to quantitatively compare the performance of different techniques, we must be able to measure their performance on a specific task. There are many types of tasks and contexts in which users view large images. In this study, we chose to examine the task of finding a particular small-scale feature within a large image. This task was chosen because it is a common task in medical imaging, as well as in other related fields such as satellite imaging.1,2 In addition to the interaction technique, the speed of updating the image view may affect the quality of the interaction. Several factors can affect the update rate, including processor speed and network connection speed. Increasingly, radiologists read from teleradiology systems, where images may be displayed on their local computer from a remote image server. To model this situation, where images may be loaded over a slower internet connection as compared to directly from local computer memory, two display update rate conditions were tested.
The slower update rate also corresponds to the typically slower computational speeds of small devices (PDAs, cell phones) and serves to model these situations as well. A change in the speed of image updates on the screen can dramatically affect the user experience resulting from the same interaction technique. To address this issue, we tested five different interaction techniques, with each technique evaluated with both a fast and a slow update rate.

BACKGROUND AND RELATED WORK

There has been interest in viewing large digital images since the start of digital computers, and especially since the advent of raster image displays. Several decades ago, researchers began to consider digital image interpretation in the context of image display.3 Today, digital image viewing and interpretation plays a vital role in many fields, including much of medical practice. Digital images are now routinely used for much of medical practice, including radiology.4–6

This paper is concerned with navigational and diagnostic uses (as defined by Plaisant et al.7) of digital images when displayed on screens of significantly smaller size. We limited our focus to techniques used on standard computing devices, i.e., those not having special displays or input devices, and used geometric zooming. Nongeometric methods (like fisheye lens zooming) are not considered because the size and spatial distortions that occur to the images are not acceptable in medical imaging practice. Interfaces that provide the ability to zoom and pan an image have been termed "zoomable" interfaces in the human–computer interaction literature.8 Two well-developed environments that support development and testing of general zoomable interfaces are the Pad9 and Jazz toolkits.10 To date, few studies have examined digital image viewing from the perspective of maximizing effective interface design for the task of navigating and searching out features within a single large image.
There is, however, a significant body of literature in related areas.

Studies on Related Topics

Many researchers have examined the transition from analog to digital presentations, especially in medical imaging.11–16 Substantial work has been done with nongeometric zoomable interfaces, including semantic zooming,8,17 distortion-based methods (fisheye),18–20 and sweet spots on large screens.21 A summary of these different types of methods can be found in Schaffer et al.22 Additionally, much work has focused on searching through collections of objects. Examples include finding a single image from a collection of images,9,23–26 viewing large text documents or collections of documents,22,27 and viewing web pages.28 Methods that involve changing the speed of panning depending on the zoom scale may have some relevance to our results. These methods have been developed to allow users to move slowly at small scales (fine detail) and more quickly over large scales (overviews). Cockburn et al.29 found that two different speed-dependent automatic zooming interfaces performed better than fixed-speed or scrollbar interfaces when searching for notable locations in a large one-dimensional textual document. Ware and Fleet30 tested five different choices for automatically adjusting the panning speed, primarily based on zoom scale. They found that two of the adaptive automatic methods worked better than three other options, including fixed-speed panning, for the task of finding small-scale boxes artificially added to a large map. Their task differs from our study in that their targets were easily identified at the fine-detail scale. Difficult-to-detect targets require slower, more careful panning at the fine-detail scale, which probably negates the advantage of automatic zooming methods for our task.

Closely Related Studies

One of the first articles addressing navigational techniques for large images was that of Beard and Walker,31 which found that pointer-based pan and zoom techniques performed better than scrollbars for navigating large image spaces to locate specific words located on tree nodes. They followed this work with a review of the requirements and design principles for radiological workstations32,33 and an evaluation of the relative effects of available screen space and system response time on the interpretation speed of radiologists.34,35 In general, faster response times for the user interface, larger screen space, and simpler interfaces (mental models) performed better.33 This was followed by timing studies that established that computer workstations using navigational techniques to interact with images larger than the physical screen size could perform as well as or better than their analog radiology film-based displays.11,16,34,35 Gutwin and Fedak20 studied the effect of displaying standard workstation application interfaces on small-screen devices like PDAs. They found that techniques that supported zooming (fisheye, standard zoom) were more effective than panning alone, and that which technique was most effective depended on the task. Kaptelinin36 studied scrollbars and pointer panning, the latter method evaluated with and without zooming and overviews. His test set was a large array of folder icons, with the overall image size nine times the screen size. Users were required to locate and open the folders to complete the task. He found that the pointer panning technique performed faster than scrollbars and was qualitatively preferred, likely because it did not require panning movements to be broken down into separate horizontal and vertical scrollbar movements. He also found that the addition of zooming improved task speed. Hemminger37 evaluated several different digital large-image interaction techniques as a preliminary step in choosing one technique (Pointer) to compare computer monitor versus analog film display for mammography readings.16 However, that evaluation was based on the users' qualitative judgments and did not compare the techniques quantitatively.

Despite the relative lack of research in the specific area of digital-image-viewing techniques, many applications exist for viewing digital photographs, images, and maps. Online map providers such as Mapquest (available at http://www.mapquest.com, accessed September 2005) and Google Maps (available at http://maps.google.com/, accessed September 2005), as well as the National Imagery and Mapping Agency38 and the United States Geological Survey,39 provide map viewing and navigating capabilities to site visitors.
Specialized systems, such as the Senographe DMR (GE Medical Systems, Milwaukee, WI, USA), are used for detection tasks by radiologists; software packages such as ArcView GIS40 support digital viewing of feature (raster) data or image data. Berinstein41 reviewed five image-viewing software packages with zooming capabilities, VuePrint, VidFun, Lens, GraphX, and E-Z Viewer, which were frequently used by libraries. The transition from film to digital cameras in the consumer market has resulted in a wide selection of photographic image manipulation applications.

These tools use a variety of different interaction techniques to give viewers access to images at different resolutions. There are two basic classes of interactions involved. The first is zooming, which refers to the magnification of the image. The spatial resolution of the image as it is originally acquired is referred to as the "full resolution." Different zoom levels that shrink the image in spatial resolution are provided so that the image can be shrunk down to fit the screen. The second operation is panning, which refers to spatial movement through the image at its current zoom level. Most tools use some combination of these two techniques.

Prominent paradigms for zooming in and out of images, and some example applications that use them, include: the use of onscreen buttons or toolbars,35–39 clicking within an image to magnify a small portion of that image (FFView, available at http://www.feedface.com/projects/ffview.html, accessed September 2005), or clicking within the image to magnify the entire image with the clicked point at the center (ArcView GIS40). Prominent image panning paradigms and example applications include the use of scroll bars (Mapquest, available at http://www.mapquest.com, accessed September 2005; Microsoft Office Picture Manager and Microsoft Office Paint, available at http://microsoft.com, accessed September 2005; Adobe PhotoShop, available at http://adobe.com/, 2005),40 moving a "magnification area" over the image in the manner of a magnifying glass (FFView, available at http://www.feedface.com/projects/ffview.html, accessed September 2005), clicking on arrows or using the keyboard arrows to move over an image (Mapquest, available at http://www.mapquest.com, accessed September 2005), panning vertically only via the mouse scroll wheel (Adobe PhotoShop, available at http://adobe.com/, 2005),42 and dragging the image via a pointer device movement (Google Maps, available at http://maps.google.com/, accessed September 2005; Microsoft Office Picture Manager and Microsoft Office Paint, available at http://microsoft.com, accessed September 2005).

Thus, while many systems exist to view digital images, and digital image viewing is considered an important component of practice in many fields, there is no guidance from the literature regarding what geometric zoomable interaction techniques are best suited for navigating large images and, in particular, for the task of finding small features of interest within an image.

MATERIALS AND METHODS

The main aim was to determine which of five different commonly used types of interaction techniques were the most effective for helping observers detect small-scale features in large images, and which of the techniques were qualitatively preferred by the users. Secondary aims included testing the main hypothesis when interaction techniques had slow update rates (such as might occur in teleradiology) and trying to identify the major features of the interaction techniques that caused their success or failure. The study comprised both quantitative and qualitative parts. The quantitative part was the experiment to measure the users' speed at finding features in large images when using different interaction techniques. There were three qualitative parts of the study: observations by the experimenter of the subjects during the experiment, a postexperiment questionnaire, and a qualitative comparison by the subject of all five interaction techniques on a single test image.

Pilot Experiment

To ensure we had developed the image-viewing techniques effectively and chosen appropriate targets within the images, we ran a pilot experiment. Three observers, who did not participate in the study, participated in the pilot. They each viewed 60 images using each of the five fast versions of the techniques, to ensure that appropriate targets had been selected and to identify problems with the implementations of the techniques themselves. They then viewed ten images using each of the five slow versions of the techniques. Feedback from the pilot observers was used to refine the techniques and to eliminate target choices that, on average, were extremely simple or extremely difficult to locate.
Measurements of the pilot observers' completion times were also used to estimate the number of training trials needed to reach proficiency with the techniques. Once the experiment began, the techniques and targets were fixed.

Experimental Design

Quantitative

This study evaluated five different interaction techniques at two update rates (fast, slow) to determine which technique and update rate combinations were the most effective in terms of speed at finding a target within the image. Because the same interaction technique used at a different update rate can produce a substantially different user interaction, each of the combinations is treated as a separate method. An analysis of variance study design using a linear model for the task completion time was chosen to compare the performance of the ten different methods. The images used in the study were large grayscale satellite images with very small features to be detected. These images were chosen because they are of a similar size to the largest digital medical images; they were representative of the general visual task as well as the medical-imaging-specific task, and they allowed the use of student observers. Prior work by Puff et al.42 established that students' performance on such basic visual detection tasks served as a cost-effective surrogate for radiologists' performance.

The task of finding a small target within a large image is naturally variable, affected by the image contents and each observer's individual searching style. To minimize variance in each user's performance, users received a significant amount of training to become proficient with the interaction method on which they would be tested. The number of study trials was also chosen to be large enough to help control for this variability. This led to having each user perform with only a single interaction method, because the alternative (a within-subject design) would have been prohibitive due to the number of trials required if each participant were to test with all ten interaction methods.

A total of 40 participants were recruited by flyers and e-mail for the study. Participants had to be over 18 years of age and have good vision (corrected was acceptable). They were students, faculty, and staff from the University of North Carolina at Chapel Hill (primarily graduate students from the School of Information and Library Science). Thirty-one participants were women and nine were men.

Each participant completed five demonstration images, 40 training images, and 120 study images for the experiment.
They were each randomly assigned one of the ten interaction methods, which they used for the entire study. At the beginning of the first session, the participant completed an Institutional Review Board consent form. Then, the experimenter explained the purpose and format of the study and demonstrated the image-viewing tool with the five-image demonstration set. Next, the participant completed the training set of 40 images, followed by the study set. The study set consisted of 120 images in a randomized order, partitioned into four sets. The presentation order of the four image sets was counterbalanced across observers. Participants read images in multiple sessions. Most observers read in five separate sessions (training set and four study sets), although some completed it in fewer by doubling up sessions. Participants were required to take mandatory breaks (10 min/h) during the sessions to avoid fatigue. At the beginning of each new session, the participant was asked to complete a five-image retraining set to refamiliarize them with the interaction tool before beginning the next study image set. If the time between sessions exceeded 1 week, participants were required to complete a ten-image retraining set.

Qualitative

During the experiment, the researcher took notes on the observers' performance, problems they encountered, and unsolicited comments they made during the test. When participants had completed all of the image sets, they completed the postexperiment questionnaire ("Appendix 1"). Last, they were asked to try all of the interaction techniques using an additional test image to compare the methods and then rank them.

Images, Targets, and Screen Size

To test the viewing mechanisms, participants were asked to find targets, or specific details, within a number of digital grayscale photographs of Orange County, NC, USA. These photographs are 5,000 × 5,000 pixels in size and were produced by the US Geological Survey.
Since participants were asked to find small details within the images, knowledge of Orange County did not assist participants in task completion. The targets were subparts of the full digital photograph and are 170 × 170 pixels in size. They were parts of small image features such as landscapes, roads, and houses, which could be uniquely identified, but only at high resolution. Target locations were evenly distributed across the images, so that results from participants who began each search in a particular location would not be biased. "Appendix 2" shows the distribution of targets within the images for the 160 images in the training and test sets. The screen resolution of the computer display was 1,152 × 864 pixels, and the actual size of the display area for the image was 1,146 × 760 pixels. Thus, only about 3.5% of the full-resolution image could be shown on the screen at one time. "Appendix 3" shows a full image and an example target from that image.

Presentation and Zoom Levels

We tested five types of image-viewing techniques in the study. Each technique supported the following capabilities:

- Ability to view both the image and the visual target at all times. The visual target was always onscreen at full resolution so that, if participants were viewing the image at full resolution, they would be able to see the target at an identical scale.
- The entire image could be seen at once (by shrinking the image to fit the screen).
- All parts of the image were able to be viewed at full resolution, although only a small portion of the full image could be seen at once when doing this.
- Ability to choose a portion of the image as the target and get feedback as to whether the selection was correct or not.

An example screenshot is shown in Fig. 1, showing the Pointer interaction method at zoom level 3 (ZL3). The target can be seen in the upper right corner.

Users would strike a key to begin the next trial. The application would time how long it took until they correctly identified the target. Identification of the target was done by the user hitting the spacebar while the cursor was over the target. Users would continue to search for and guess the target location until they found it correctly.

Four levels of zoom were defined, ranging from a size where the whole image could be seen at once (ZL1) to the full-resolution image (ZL4). The choice of four zoom levels was determined by having the difference between adjacent zoom levels be a factor of 2 in each dimension, based on previous work that found this to be an efficient ratio between zoom levels, performing faster than continuous zoom for similar tasks.33,37 The image sizes for the four zoom levels were 675 × 675 pixels (ZL1), 1,250 × 1,250 pixels (ZL2), 2,500 × 2,500 pixels (ZL3), and 5,000 × 5,000 pixels (ZL4).
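The zoom-level geometry above determines how much of the image is visible at each level. A minimal Python sketch (illustrative only; the study software is not publicly available, so the names here are ours) computes the visible fraction for the study's 1,146 × 760 pixel display area:

```python
VIEW_W, VIEW_H = 1146, 760       # display area used in the study, in pixels

# Adjacent zoom levels differ by roughly a factor of 2 in each dimension;
# ZL1 (675 px) fits the whole image on screen, ZL4 is full resolution.
zoom_sizes = {1: 675, 2: 1250, 3: 2500, 4: 5000}

def visible_fraction(level):
    """Fraction of the image's area visible in the viewport at a zoom level."""
    size = zoom_sizes[level]
    w = min(VIEW_W, size)        # viewport cannot show more than the image
    h = min(VIEW_H, size)
    return (w * h) / (size * size)

for zl in (1, 2, 3, 4):
    print(f"ZL{zl}: {visible_fraction(zl):.3f} of the image visible")
```

At ZL4 this evaluates to 1,146 × 760 / 5,000² ≈ 0.035, i.e., roughly 3.5% of the image, consistent with the figures quoted in the text.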
Thus, when viewing the image at ZL4, only about 1/28th of the image could be seen on the screen at any one time. The MagLens and Section techniques used only one intermediate zoom level, in both cases similar to ZL3 of the other three techniques. The same terminology (ZL1, ZL2, ZL3, ZL4) is used to describe the zoom levels consistently across all the methods, with their specific differences described in the next section. "Appendix 4" contains an illustration of the four zoom levels. Resizing the image between zoom levels was done via bilinear interpolation.

Fig. 1. Sample screen from the Pointer interaction technique. The target is shown on the top right. The navigation overview is on the upper left, with crosshairs showing the current cursor location. The user is currently at Zoom Level 3 and positioned slightly above and left of the center of the full image.

Interaction Techniques

Based on our review of the literature and the techniques commonly available, we chose five different interaction techniques to evaluate.

ScrollBar

The ScrollBar technique allows the participant to pan around the picture by manipulating horizontal and vertical scroll bars at the right and bottom edges of the screen, similar to many current image- and text-viewing applications, in particular Microsoft Office applications. Zooming in and out of the image is accomplished using two onscreen buttons (ZoomIn and ZoomOut), located in the upper-left-hand corner of the screen. Four levels of zoom were supported. Image zooming is centered about the previous image center.

MagLens

The MagLens technique shows the entire image (ZL1) while providing a square area (512 × 512 pixels) that acts as a magnifying glass (showing a higher-resolution view underneath it). Using the left mouse button, the participant may pan the MagLens over the image to view all parts of the image at the current zoom level. Clicking the right mouse button dynamically changes the zoom level at which the area beneath the MagLens is viewed. Only three levels of zoom were supported (ZL1, ZL3, ZL4) because the incremental difference of using ZL2 for the MagLens area was not found to be effective in the pilot experiment and was eliminated. Thus, if the zoom level is set to ZL1, the participant is viewing the entire image at ZL1 with no part of the image zoomed in to see higher resolution. If the participant clicks once, the MagLens square shows the image below it at ZL3 while the image outside of the MagLens stays at ZL1.
Clicking again would increase the zoom of the MagLens area to ZL4, and a further click cycles back to ZL1 (no zoomed area). This interface style is found in generic image-processing applications, especially in the sciences, engineering, and medicine.

Pointer

The Pointer technique allows the participant to zoom in and out of the image by clicking the right (magnify) and left (minify) mouse buttons. Zooming is centered on the location of the pointing device (the cursor on screen). Thus, the user can point to and zoom in directly on an area of interest, as opposed to centering it first and then zooming. The Pointer method supports all four zoom levels. Panning is accomplished by holding the left mouse button down and dragging the cursor. We found that many users strongly identified with one of two mental models for the panning motion: either they were grabbing a viewer above the map and moving it, or they were grabbing the map and moving it below a fixed viewer. This corresponded to the movement of the mouse drag matching the movement of the view (a right drag caused rightward movement of the map) or the inverse (a right drag caused leftward map movement), respectively. A software setting controlled this. The experimenter observed each participant's initial reaction during the demonstration trials and configured the technique to their preferred mental model. The individual components (panning by dragging, and pointer-based zooming) are often implemented, although this particular combined interface was not commonly available until recently (for instance, it is now available in Google Maps (available at http://maps.google.com/, accessed November 2007), using the scroll wheel for continuous zoom).
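Cursor-centered zooming of the kind the Pointer technique uses can be expressed as a small coordinate transform: the image point under the cursor must map to the same screen position before and after the scale change. The following sketch is hypothetical (it is not the study's code; the function and parameter names are ours):

```python
def zoom_about_cursor(offset_x, offset_y, scale, new_scale, cursor_x, cursor_y):
    """Compute the new pan offset for a cursor-centered zoom.

    offset_x, offset_y: image coordinates (full resolution) of the
        viewport's top-left corner.
    scale: current screen pixels per image pixel; new_scale: target scale.
    cursor_x, cursor_y: cursor position in screen coordinates.
    Returns the new (offset_x, offset_y).
    """
    # Image point currently under the cursor:
    img_x = offset_x + cursor_x / scale
    img_y = offset_y + cursor_y / scale
    # Choose the new offset so the same image point maps back to the
    # same screen position at the new scale:
    new_offset_x = img_x - cursor_x / new_scale
    new_offset_y = img_y - cursor_y / new_scale
    return new_offset_x, new_offset_y

# Zooming in from ZL3 (scale 0.5) to ZL4 (scale 1.0) about screen
# point (573, 380) keeps that image point under the cursor:
print(zoom_about_cursor(1000, 800, 0.5, 1.0, 573, 380))  # → (1573.0, 1180.0)
```

With center-based zooming (as in the ScrollBar and ArrowKey techniques), `cursor_x`/`cursor_y` would simply be fixed at the viewport center.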
It is similar to the original Pad interface,9 which used the center and right mouse buttons for zooming in and out. The Pointer interface used in this study is the same one qualitatively chosen as the best of these same five (fast) techniques in a medical imaging study by Hemminger.37

ArrowKey

The ArrowKey technique works similarly to the Pointer technique but uses the keyboard for manipulation instead of the mouse. The arrow keys on the keypad are used to pan the image in either a vertical or horizontal direction in small discrete steps. As with the Pointer interface, a software toggle controlled the correspondence between the key and the direction of movement and was configured to match the user's preference. The ArrowKey method supported all four levels of zoom. Zooming is accomplished by clicking the keypad Ins key (zoom in) or Del key (zoom out). The technique always zooms into and out of the image at the point at the center of the screen. This interface sometimes serves as a secondary interface to a pointer device for personal computer applications; it is more common as a primary interface on mobile devices, which have only small keypads for input.

Section

This technique conceptually divides each image into equal-size sections and provides direct access to each section through a single key press. A set of keys on the computer keyboard was mapped to the image sections so as to maintain a spatial correspondence, i.e., pushing the key in the upper right causes the upper-right section of the image to be shown at higher resolution. In our experiment, the screen area was divided into nine rectangles, which were mapped to the one through nine buttons on the keyboard's numeric keypad. The upper-left-hand section of the image would be selected
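The Section technique's spatial key mapping can be sketched as follows. This is an illustrative reconstruction, not the study's code: keypad keys 1–9 map to a 3 × 3 grid of image sections so that key position matches section position (7 = upper left, 9 = upper right, 1 = lower left, and so on).

```python
FULL = 5000   # the study images are 5,000 x 5,000 pixels, split into 9 sections

def key_to_section(key):
    """Return (left, top, right, bottom) image bounds for keypad key 1-9."""
    # Numeric-keypad rows from top to bottom are (7,8,9), (4,5,6), (1,2,3),
    # so key 7 is image row 0 (top) and key 1 is image row 2 (bottom).
    row = (9 - key) // 3          # 0 = top row of the image
    col = (key - 1) % 3           # 0 = left column of the image
    w = FULL // 3                 # section width/height (integer pixels)
    return (col * w, row * w, (col + 1) * w, (row + 1) * w)

print(key_to_section(7))  # upper-left section of the image
print(key_to_section(9))  # upper-right section of the image
```

Pressing a key would then display the returned region at the technique's higher zoom level, rather than requiring incremental panning to reach it.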

