Chauffeured By The User: Usability In The Electronic Library


Jerilyn R. Veldof
Michael J. Prasse
Victoria A. Mills

KEYWORDS. Usability testing, product evaluation, system design, information-seeking behavior, World Wide Web

Michael J. Prasse is Consultant, User Interface Design, and manager of the OCLC Usability Lab and the OCLC Human-Computer Interaction (HCI) Lab, OCLC Online Computer Library Center, 6565 Frantz Rd., MC 449, Dublin, OH 43017 (e-mail: prasse@oclc.org). Victoria A. Mills is Bibliographic Access Librarian and a member of the Access . . . Project Team, University of Arizona Library, Main Library, 1510 E. University, Tucson, AZ 85721-0055 (e-mail: vamills@bird.library.arizona.edu).

© 1999 by The Haworth Press, Inc. All rights reserved.

INFORMATION TECHNOLOGY PLANNING

INTRODUCTION

Librarianship has clearly evolved beyond a profession that only organizes print materials. Digitizing special collections, journals, and data sets and creating and managing online public access catalogs (OPACs) and World Wide Web (WWW) sites are becoming part of library work. As librarians strive to become experts in understanding information-seeking behavior in this new electronic environment, they must learn about usability, one of the most dominant trends in computing today. Usability means that the people who use a product can do so quickly and easily to accomplish their tasks. Usability focuses on the users and an understanding of what they want and need to accomplish. It requires that users, not the designer of the product, determine when a product is easy to use. It is an excellent way for librarians to begin to find answers to questions such as: How do people look for information? How would they prefer to get it? How would they prefer to see it displayed, delivered, and processed?

For many of us, adopting this kind of user-centered focus should not entail a major shift in philosophy. Many libraries are already moving into a user-centered paradigm where we challenge ourselves on all fronts to create services that are user-focused. At the University of Arizona, for example, teams are expected to conduct needs assessments, collect data on our processes, and measure success with tools that solicit customer satisfaction ratings. This is easy enough when it comes to tried-and-true services with a long track record of complaints: interlibrary loan, reserves, and shelving, for example, were quick to undertake improvement at the University of Arizona Library. Yet the burgeoning "virtual library" which we were haphazardly building on the WWW somehow escaped scrutiny. This needed to change.
In order to design sites that meet users' needs and expectations, both the University of Arizona and OCLC have done usability testing in various ways. This paper will explore usability and usability research as it relates to Web applications in libraries. It will then show how usability evaluation might be approached, bringing in examples from OCLC and the University of Arizona.

Usability is the degree to which a user can successfully learn and use a product to achieve a goal. The International Standards Organization (ISO) formally defines it as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use."1 When usability is evaluated and improved upon during all phases of system design, from paper prototype to beta, users will have high success rates and high satisfaction rates with the final product. The producers of the product will also have avoided the many costs associated with correcting product usability after the product has been released.

When Did Usability Become Important?

In the early days of PC software, the development of functionality predominated. Applications that could do a lot, such as VisiCalc, quickly became best sellers, regardless of their level of usability. However, as more and more applications acquired the same capabilities, the ease of using those capabilities increased in importance. The introduction of the Apple Macintosh in the mid-1980s was a landmark for increasing understanding of the importance of usability in online products. It demonstrated that consumers were willing to pay more for a computer if it was easy to use, even if the functionality of the software had not essentially changed. With the introduction of Windows 3.0 in the early 1990s, the importance of usability increased, and for many companies, the integration of usability testing and software development became reality. Companies began hiring usability specialists or creating new divisions with names like Human Factors Group or Usability Engineering Team.
Microsoft and Apple are still industry exemplars in the area of usability. They both have usability professionals that play key roles in product development. At Microsoft, the Usability Group began in 1988. Usability at Microsoft is defined as "strategies for getting information about users into the development process in a timely way."2 The Group's Usability Specialists do usability testing at all stages of development, focusing on exploration of new ideas early in development and progressing to confirmatory testing (e.g., beta testing) toward the end of the development cycle. The approach is similar to that of a consultant offering usability services to developers and design teams when requested.

At Apple, the User-Aided Design Group also began in 1988, with testing of the documentation for Hypercard. Documentationists had been doing various forms of usability evaluation many years before its application to software, so finding that usability testing in a company began in the Documentation section is not unusual. The Group also uses a consultant philosophy, which allows greater flexibility in the allocation of resources, but also did not necessarily tightly couple usability testing with a project's design team.3 This philosophy has changed over time with the success of the Group, such that the Group's cost is part of the project's budget. The Group also now provides design skills as well as usability testing.

Measuring or Evaluating Usability

Usability evaluation methods generally fall into one of two categories: those that involve real users and those that do not.4 Techniques that have little to no actual user involvement are often referred to as discount (or guerilla) usability methods. The term discount is not used pejoratively but rather indicates a technique that may be done inexpensively (since users are not always involved) but still return very valuable information. The second category of usability evaluation methods is the more traditional usability test. In a usability test, actual users are observed attempting to achieve some designated goal. Since both kinds of testing have advantages and disadvantages, a combination of both techniques may be most cost effective.

Discount Usability Methods

There are many discount usability methods; only a few will be discussed. The most basic method, called heuristic evaluation, uses a set of heuristics ("rules of thumb") applied by design experts to infer the problems a user may have using a software interface. For example, two heuristics might be: (1) The system should always keep users informed about what is going on, through appropriate feedback within a reasonable time; and (2) Users should not have to wonder whether different words, situations, or actions mean the same thing. With these types of heuristics in mind, several experts independently evaluate the interface, often with a set of target tasks. Expected problems are noted, as well as their anticipated severity and extent. When each expert has

evaluated the interface with all the tasks, they meet and, for the first time, discuss the problems they have identified. A single combined evaluation is created, with the problems identified by more than one expert and given a high degree of severity and extent placed at the top of the list. Heuristic evaluations can be done fairly quickly and inexpensively. However, they generally find fewer usability problems than do usability tests involving actual target users.

Usability inspections involve designers and developers acting and trying to think as users. These inspectors are provided with a product description, user profile, and user tasks (including expected goals). When doing these tasks, the areas where they experience confusion or cannot complete a task are assumed to be areas of likely usability problems. This method is again relatively inexpensive and quick, and it may have the added benefit of getting designers to think more like users. However, trying to think like users is still not the same as having real users test the system.

In the card-sorting technique, a set of cards is created, each labeled with a single concept or potential menu option. Users are then asked to sort the cards into meaningful groups. Users create a title for each group, and combine these groups into meaningful, larger groups. In this way, a menu structure that can be used for a system is created. An example is Nielsen and Sano, who developed the SUN Microsystems internal Web home page. In their card-sorting task, users were asked to sort into meaningful groups a set of 3 x 5 note cards, each labeled with the name of an information service that might be included in the Web site. Users were then asked to sort each of these initial groups into a smaller set of groups and to invent a name for the group.
These results were compiled and analyzed for four users, and a menu structure of fifteen groups was created.

The card-sorting technique can be combined with another quick and inexpensive technique sometimes called the mix-and-match method or icon-intuitiveness evaluation. Icons (both with and without labels attached) or test labels are assigned to each organizational grouping. Users are asked what each icon or label means, and that is compared to the intended meaning of the icon or label. Icons/labels that do not match their intended meaning are redesigned, and a second evaluation is conducted. A set of icons/labels that reasonably matches its intended meaning is eventually created.
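As a rough illustration of how card-sort data like Nielsen and Sano's can be compiled, the sketch below counts how often participants place two cards in the same group and then merges strongly co-occurring cards into candidate menu groups. The card names, participant data, and threshold are all invented for illustration; they are not taken from the study described above.

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence(sorts):
    """Count how many participants placed each pair of cards in the same group.

    `sorts` holds one card sort per participant; each sort is a list of
    groups, and each group is a list of card labels.
    """
    counts = defaultdict(int)
    for sort in sorts:
        for group in sort:
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] += 1
    return counts

def cluster(cards, counts, threshold):
    """Merge cards whose pairwise co-occurrence meets the threshold (union-find)."""
    parent = {c: c for c in cards}
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c
    for (a, b), n in counts.items():
        if n >= threshold:
            parent[find(a)] = find(b)
    groups = defaultdict(set)
    for c in cards:
        groups[find(c)].add(c)
    return sorted(sorted(g) for g in groups.values())

# Hypothetical example: three participants sorting five library-site cards.
sorts = [
    [["Catalog", "Databases"], ["Hours", "Directions"], ["Ask a Librarian"]],
    [["Catalog", "Databases", "Ask a Librarian"], ["Hours", "Directions"]],
    [["Catalog", "Databases"], ["Hours", "Directions", "Ask a Librarian"]],
]
cards = ["Ask a Librarian", "Catalog", "Databases", "Directions", "Hours"]
menu = cluster(cards, cooccurrence(sorts), threshold=2)
print(menu)  # [['Ask a Librarian'], ['Catalog', 'Databases'], ['Directions', 'Hours']]
```

In practice the "analysis" is usually done by eye with the cards spread on a table; the point of the sketch is only that agreement across participants, not any single participant's sort, is what determines the menu structure.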

Usability Testing

Formal usability testing involves real users and more resources and budget than do the discount usability techniques. Usability testing is the observation and analysis of user behavior while users use a product or product prototype to achieve a goal. It has five key components:

1. The goal is to improve the usability of a product.
2. Testers represent real users.
3. Testers do real tasks.
4. User behavior and commentary are observed and recorded.
5. Data is analyzed to diagnose problems and recommend corrections.10

For software developers and companies, usability testing is often conducted in a specially constructed lab where testers fitting the profile of an expected user group are videotaped and observed via a two-way mirror. In a typical usability test, testers are provided a brief introduction to the product and asked to complete a series of tasks. They are encouraged to think aloud while doing the tasks, verbalizing what they are doing and why. In addition, testers may be asked to point out areas of confusion and anything they particularly like or dislike.11 When the tasks are completed, the tester may complete a questionnaire and be interviewed. After the tester has left, the design team will discuss the test, what problems were revealed, and possible solutions. Soon after, an analysis may be distributed. The analysis typically lists the time a problem occurred, a brief problem description, and several suggested solutions. At the end of all the tests, a summary analysis is also distributed.

It is important to note for libraries that good usability testing is not as dependent on the physical facilities available as on: (1) the observation of users who accurately represent the target population, and (2) the ability of the observers to detect real usability problems. Hence, using paper and pencil to take notes while observing patrons in a reference room can also yield valuable data.
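The per-test analysis (time of occurrence, brief description, suggested solutions) and the end-of-cycle summary described above can be kept in a very simple record structure. The following is a minimal sketch of one way to do it; the field names and sample problems are invented for illustration:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Problem:
    session: int        # which test session observed it
    minute: int         # time into the session when it occurred
    description: str    # brief problem description
    suggestions: tuple  # candidate fixes proposed by the team

def summarize(problems):
    """Rank problems by how often they recurred across sessions (the summary analysis)."""
    hits = Counter(p.description for p in problems)
    return hits.most_common()

# Hypothetical log from two test sessions.
log = [
    Problem(1, 4, "missed the search box", ("move box above the fold",)),
    Problem(1, 12, "unclear link label 'Resources'", ("rename to 'Databases'",)),
    Problem(2, 3, "missed the search box", ("enlarge box",)),
]
print(summarize(log))  # [('missed the search box', 2), ("unclear link label 'Resources'", 1)]
```

Nothing here requires a lab: the same structure works for paper-and-pencil notes taken in a reference room, which is the point the paragraph above makes about physical facilities.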
Indeed, both techniques combined will yield the best results: lab testing for detailed analysis and blatant usability problems, field testing for "real world" data and fine tuning. It is also not critical that the item being tested be completely functional. Quite often, a paper prototype (also called low-fidelity) can be

used to achieve results comparable to those of a completely functional product.12 In such a technique, users are given a set of tasks and asked to indicate what they would do. If this involves a screen change, then a paper version of this second screen is displayed to the user. If the user action would result in a dialog box instead, a paper representation of the dialog box is placed on top of the current screen, and so on.

Regardless of the technique, the fundamental belief underlying all usability testing is that data from actual users is essential to understanding the usability of a product or service. This may seem obvious, but the popularity of usability testing has increased only recently, though its beginnings can be traced to the late 1970s or earlier.

Usability and the Library

Libraries have been slow to tap into this field of knowledge created by usability and human factors specialists like those at Apple and Microsoft. One basic reason for this is that libraries, for the most part, have not been producers of software or computer interfaces. In the past, libraries themselves have been customers for computer/software products. They bought automated library systems, CD-ROMs, and networking software and then customized those products when possible. Libraries might have been beta testers for a system and thus helped in the design, but librarians, with few exceptions, were not the designers. Librarians may have complained to the companies who sold the products if the product did not work like they or their customers anticipated. They may have joined user groups for these products and asked for improvements in the product along with all the other users. But librarians had very little control over either the design or redesign of these products. What they could control, however, were the instructional tools, teaching sessions,
and reference assistance which provided a layer of intervention between the patron and the product. Librarians sometimes became like triage nurses, soothing frayed patron nerves and providing as much help as they possibly could to make online experiences successful.

Librarians, therefore, have always been concerned with how users seek information in automated environments. Pursuit of this concern played an important role in attempts to provide better service by removing barriers to using various online products for the users and in recommending design changes to vendors. The segment of library

literature that explores this research focuses on end-user behavior in automated systems, online catalogs, networked databases, and CD-ROMs. Although end-user studies and usability studies share some of the same research methodology and tools, the studies differ in several ways. Figure 1 compares end-user studies with usability evaluation.

Findings from end-user studies are an invaluable way to begin to approach the design or redesign requirements of a product. They may warn about design flaws or inform about users' preferences or needs, but they are no substitute for the evaluation of usability, which will actually determine whether a specific product or Web site is more or less usable for its target market.

FIGURE 1. Comparison Table

End-User Studies:
- Performed by librarians or information specialists who are usually not working with the designers of the product being studied.
- Conducted primarily to understand users. Problems users are having are identified so that instructional tools, workshops, and reference staff training can be designed to better help users with the product. The eventual improvement of the product may be a by-product.
- Results of these studies are often generic and applicable to other OPACs or CD-ROMs. They may involve a variety of products to determine overall problems.
- Users are the focus. Users are studied to see why they use the system and how they interact with it. Users are usually observed doing their own tasks in the system.
- Studies are usually done on the finished product that is already available to the public.

Usability Evaluation:
- Performed by trained testers hired by or working with the designers of the product being studied (who may also be librarians or information specialists).
- Conducted to improve the usability of the product. Problems users are having are identified so that the product can be improved. Development of online instructional tools, workshops, and reference staff training may be a by-product.
- Results of these studies involve a specific product to determine specific problems with product design, although general implications for other products may be gleaned.
- The product is the focus. The product is studied through the behavior of the users. Users are given tasks to complete in the system. They are observed to see how they think and use the system to complete those tasks.
- Studies are usually done on a prototype or beta version of the product before it is ready for the public.

Relevance to Libraries

Usability is particularly relevant to librarians as their roles change to information specialists and system designers. More libraries, for example, are beginning knowledge management projects such as designing electronic journals and online exhibits of their special collections. And with the development of the World Wide Web, libraries on a large scale have taken real steps towards becoming "designers." Libraries are now creating the Web gateway to the library, its resources, and the entire world of Internet resources. Libraries finally have control over the organization and design of a powerful information tool. We are responsible for the display, the design elements, and the usability of this tool.

This is a new role for us and, unfortunately, it has caught many libraries unprepared. While many libraries jumped right in and began producing Web pages, this production was done with great abandon. Some libraries' Web sites grew with little coordination or systematic planning about the overall look, feel, and design of the site. As a Web page was created, it was linked to the library homepage, and if it did not fit into a category on that site, a new category was created. Applying established design principles and conducting usability tests on Web sites was just not done in the early days of the Web. The Web was "the in thing," and just having a Web site was considered a wonderful accomplishment. The purpose and quality of what was being done was not scrutinized in detail because we were all so dazzled by the Web itself. Now that the Web is several years old, some of that dazzle has worn off. In addition, libraries are becoming more concerned with user needs, customer self-sufficiency, and being user-centered organizations. We are facing the challenge of trying to fit customer needs and expectations into our services and systems, rather than forcing our customers to mold their needs and expectations to fit the predefined structure of the library. We are seeing that computers have not necessarily made everything easier and better for our customers. In fact, the multiplicity of products and platforms has added a barrier to the use of information in the library. All of this makes it imperative that, when it is in our control, we must create systems that are easy to learn and use and that remove as many barriers as possible. Usability testing is a way to ensure that we achieve this objective.

In October 1997 a search in the online index Library Literature on the term "usability" brought up fifteen citations. A similar search in the computer index Inspec brought up 1,132 citations. Clearly the computer field is much more tuned in to the concepts of usability and usability evaluation than is librarianship. However, librarians are beginning to be aware of and to take advantage of usability evaluation, as experiences at Simmons College,14 the University of Washington,15 OCLC, and the University of Arizona Library will attest. An important book written by Bryce L. Allen from the School of Library and Information Science at the University of Missouri called Information Tasks: Toward a User-Centered Approach to Information Systems also supports this direction. Allen cautions, "Most contemporary services seem to be created on the 'If you build it, he will come' principle, where 'he' is the imagined user. Of course, there is no guarantee that users will be willing or able to employ services that are assembled without serious attention to their needs."16 It is time libraries get serious about usability evaluation. It is time we look outside our profession to see what we can learn from usability professionals.

User Evaluation and the Web

Much of the publishing in the area of Web users and their searching/navigation behavior has largely been in the domain of designers who create guidelines or rules based on their design expertise but not necessarily on Web usability testing. Although these guidelines can provide an excellent starting place for librarians creating Web sites, these guidelines should not be relied upon solely, and they should never be substituted for actual usability testing with real users of the sites. The following is an example of some of these kinds of guidelines/rules, taken from The 7 Keys to Effective Web Sites:

1. A site must be visually appealing
2. A site must be valuable, useful or fun
3. A site must be current and timely
4. A site must be easy to find and use
5. A site must have intuitive on-page navigation
6. A site must involve the user
7. A site must be responsive to its users.17

Not all of these guidelines are necessarily applicable to library Web sites. Often these rules emerge from an assumption (based on minimal studies) that users access the Web to browse around, and not primarily to obtain information. A study at Georgia Tech in 1996 drew from 59,000 users and concluded that 77% of their users described their primary Web activity as browsing. Their use of the Web was not task-specific.18 Library users, however, may be more task-oriented and thus less interested in bells and whistles. Indeed, some research shows that users who approach the Web for information retrieval search the Web differently than would many of the Georgia Tech respondents. Among other things, these users report that sites using such attractions as animated graphics and sound are mere annoyances and distract from their tasks.19 This kind of finding, however, is still very new in the Web arena.

Jared Spool's book Web Site Usability: A Designer's Guide is the first published study that has attempted to explore Web usability for those focused on information retrieval.20 Spool's study had 50 users test usability for specific tasks on nine popular sites on the Web, including Travelocity, a site to book airline tickets and make other reservations (http://www.travelocity.com); Edmund's, a site to get car and truck prices (http://www.edmunds.com); and Fidelity, where users can find information on Fidelity mutual funds and investing opportunities (http://www.fidelity.com). In addition to applying findings directly to the individual sites studied, overall results revealed five major implications for Web site design, some of which surprised even the testers:

Implication 1: Graphic design neither helps nor hurts in the search for information.

Implication 2: Text links are vital. They are more often considered before graphical ones. Predictability of these links is probably the highest indicator of user success: "The better users could predict where a link would lead, the more successful they were in finding information."

Implication 3: Navigation and content are inseparable. Separating content and the navigational structure (creating "shell sites") leads to generic links that then make it more difficult for users to predict what they will find, therefore decreasing success rates.

Implication 4: Information retrieval is different than surfing. Users who are task-oriented click on links that they feel certain will lead them to the information they are seeking, and are more distracted by visual noise on a Web site.

Implication 5: Web sites aren't like software.

A follow-up study by Spool et al. has presented some new information.21 In the earlier study, novice Internet users often said they would have done better with more Internet experience. The follow-up study was designed to investigate this hypothesis. They found that there was no correlation between a user's understanding of the Internet and the number of Web search tasks correctly completed. However, experienced users had developed what Spool refers to as "defensive mechanisms," behaviors designed to avoid the pitfalls of poor Web user interface design. For example, experienced users would scroll to the bottom of a Web page on first viewing it, while novice users would not. Experienced users would look for ways to get back to the starting point, such as links back to the home page. They would also actually read search tips, while novice users would not. Experienced users were more likely to criticize the look of a site. And most interestingly, experienced users were less likely to see the Web as a good place to find useful information. This may be due to hype versus reality; it would be difficult for any system to be as good as the Web is often said to be on the evening news, and experienced users know this fact.

A user study of the Web by Pollock and Hockley indicates that without some a priori Internet training, users may be very discouraged by their first use of a Web search system.22 They found that users are surprised at the breadth of Web searching, including its international scope. They were not happy, however, with the quality of the material found, or the difficulty it took for them to find it. Not surprisingly, users expected the computer to understand true natural language (versus a system-controlled vocabulary). The authors conclude with several recommendations:

1. Search engines should concentrate on doing simple searches well before moving to support advanced users.
2. Search results should be returned to the user as quickly as possible, with clear progress indicators.

3. Search engines should communicate that searching on the Internet is a process, not a single event. Many searches may be needed before the desired results are obtained.
4. Provide intelligent support. Suggest variants.

Nielsen and Sano have also published results from usability testing focused on the SUN Microsystems internal Web home page. Testing methods included a study using the card-sorting technique (used to determine menu structure for the site), an evaluation of icon intuitiveness, and two usability tests. In their conclusion, the authors found that, first of all, people have little patience for poorly designed sites. Users were not likely to return if the site had numerous system errors or 'under construction' symbols. Secondly, users do not want to scroll. They felt Web pages that required scrolling to view the most relevant sections were poorly designed. Finally, users do not want to read. They scan for hyperlinks.

Shneiderman, Byrd, and Croft studied information retrieval interfaces. They concluded with a set of eight guidelines for design of a usable information retrieval interface:

1. Be consistent. If you use 'Sources' at one point in the interface, do not switch to 'databases' later on.
2. Provide shortcuts for experienced users. For example: let experienced users enter a complete search term as au:smith rather than selecting 'Author' from a list and typing 'smith' separately.
3. Offer feedback to help improve the search. For example, a system that suggests alternative terms, such as 'feline' for 'cat', will be more usable.
4. Design for closure. Letting a user know when they have viewed all the options in a menu or results list can minimize patron time at the terminal. Such simple cues as placing the text 'End of Results' at the bottom of the last page of results can increase usability greatly.
5. Error handling . . .
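Guideline 2's fielded shortcut (au:smith) amounts to a tiny query syntax that serves experienced users without penalizing novices, who can still type a bare term. The sketch below shows one way an interface might accept both forms; the field prefixes, function names, and default field are invented for illustration, not drawn from any particular catalog system.

```python
# Hypothetical field prefixes an OPAC-style search box might accept.
FIELDS = {"au": "author", "ti": "title", "su": "subject"}

def parse_query(text, default_field="keyword"):
    """Turn 'au:smith' into ('author', 'smith'); bare terms fall back to the default field."""
    prefix, sep, term = text.partition(":")
    if sep and prefix.lower() in FIELDS:
        return FIELDS[prefix.lower()], term.strip()
    return default_field, text.strip()

print(parse_query("au:smith"))          # shortcut path for experienced users
print(parse_query("Smith"))             # novice path: plain keyword search
print(parse_query("ti:War and Peace"))  # prefixes are case-insensitive
```

Because unrecognized prefixes fall through to a keyword search, a mistyped shortcut degrades gracefully instead of producing an error, which also serves guideline 5's concern with error handling.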

