Microprocessor Optimizations for the Internet of Things: A Survey

Tosiron Adegbija, Member, IEEE, Anita Rogacs, Chandrakant Patel, Fellow, IEEE, and Ann Gordon-Ross, Member, IEEE

Abstract—The Internet of Things (IoT) refers to a pervasive presence of interconnected and uniquely identifiable physical devices. These devices' goal is to gather data and drive actions in order to improve productivity, and ultimately reduce or eliminate reliance on human intervention for data acquisition, interpretation, and use. The proliferation of these connected low-power devices will result in a data explosion that will significantly increase data transmission costs with respect to energy consumption and latency. Edge computing reduces these costs by performing computations at the edge nodes, prior to data transmission, to interpret and/or utilize the data. While much research has focused on the IoT's connected nature and communication challenges, the challenges of IoT embedded computing with respect to device microprocessors have received much less attention. This article explores IoT applications' execution characteristics from a microarchitectural perspective and the microarchitectural characteristics that will enable efficient and effective edge computing. To tractably represent a wide variety of next-generation IoT applications, we present a broad IoT application classification methodology based on application functions, to enable quicker workload characterizations for IoT microprocessors. We then survey and discuss potential microarchitectural optimizations and computing paradigms that will enable the design of right-provisioned microprocessors that are efficient, configurable, extensible, and scalable.
Our work provides a foundation for the analysis and design of a diverse set of microprocessor architectures for next-generation IoT devices.

Index Terms—Internet of Things, edge computing, low-power embedded systems, microprocessor optimizations, IoT survey, adaptable microprocessors, heterogeneous architectures, energy harvesting, approximate computing.

I. INTRODUCTION AND MOTIVATION

The Internet of Things (IoT) is an emerging technology that refers to a pervasive presence of interconnected and uniquely identifiable physical devices, comprising an expansive variety of devices, protocols, domains, and applications. The IoT will involve devices that gather data and drive actions in order to improve productivity, and ultimately reduce or eliminate reliance on human intervention for data acquisition, interpretation, and use [9]. The IoT has been described as one of the disruptive technologies that will transform life, business, and the global economy [66].

T. Adegbija is with the Department of Electrical and Computer Engineering, University of Arizona, USA, e-mail: tosiron@email.arizona.edu.
A. Rogacs and C. Patel are with Hewlett-Packard (HP) Labs, USA, e-mail: rogacs@hp.com, chandrakant.patel@hp.com.
A. Gordon-Ross is with the University of Florida, USA and the Center for High Performance Reconfigurable Computing (CHREC) at UF, e-mail: ann@ece.ufl.edu.

Copyright © 2015 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending an email to pubs-permissions@ieee.org.

Fig. 1: Illustration of the high-level components of the Internet of Things.
Based on analysis of key potential IoT use-cases (e.g., healthcare, smart cities, smart homes, transportation, manufacturing, etc.), it has been estimated that by 2020, the IoT will constitute a trillion-dollar economic impact and include more than 50 billion low-power devices that will generate petabytes of data [29], [91], [106]. Due to the IoT's expected growth and potential impact, much research has focused on the IoT's communication and software layers [11], [34], [61], [68]; however, the challenges of IoT computing, especially with respect to device microprocessors, have received much less attention. Computing on IoT devices introduces substantial new challenges, since IoT devices' microprocessors must satisfy growing computational and memory demands, maintain connectivity, and adhere to stringent design and operational constraints, such as low cost, low energy budgets, and in some cases, real-time constraints. These challenges necessitate new research focus on microarchitectural optimizations that will enable designers to develop right-provisioned architectures that are efficient, configurable, extensible, and scalable for next-generation IoT devices.

Figure 1 depicts an IoT use-case that illustrates the high-level components of the traditional IoT model. The IoT typically comprises several low-power/low-performance edge nodes, such as sensor nodes, that gather data and transmit the data to high-performance head nodes, such as servers, that perform computations for visualization and analytics. In a data center, for example, data aggregation from edge nodes facilitates power and cooling management [14], [73].

However, the growth of the IoT and the resulting exponential increase in acquired/transmitted data pose significant bandwidth and latency challenges. These challenges are exacerbated by the intrinsic resource constraints of most embedded edge nodes (e.g., size, battery capacity, real-time deadlines, cost, etc.). These resource constraints must be taken into account in the design process, and may make it more difficult to achieve design objectives (e.g., minimizing energy, size, etc.). Additionally, increasing consumer demands for high-performance IoT applications will necessitate acquisition and transmission of complex data. For example, a potentially impactful IoT use-case is medical diagnostics [67]. With the advent of technological advances such as cheap portable magnetic resonance imaging (MRI) devices and portable ultrasound machines, several gigabytes (GBs) of high-resolution images will be transmitted to medical personnel for remote data processing and medical diagnosis. In some cases, this system must scale to a network of several portable medical devices that transfer data to medical personnel. Transmitting this data will result in bandwidth bottlenecks and pose additional challenges for real-time scenarios (e.g., medical emergencies) where the latency must adhere to stringent deadline constraints.

The IoT can also incur significant, and potentially unsustainable, energy overheads. Previous work [13], [57] established that the energy consumed while transmitting data is significantly more than the energy consumed while performing computations on the data.
For example, the energy required by Rockwell Automation's sensor nodes to transmit one bit of data is 1500-2000x more than the energy required to execute a single instruction (depending on the transmission range and specific computations) [76].

To address these challenges, fog computing [16] has been proposed as a virtualized platform that provides compute, storage, and networking services between edge nodes and cloud computing data centers. Rather than performing computations in the cloud, fog computing reduces the bandwidth bottleneck and latency by moving computation closer to the edge nodes. Our study focuses on further reducing the bandwidth, latency, and energy consumption through edge computing, where the edge nodes are directly equipped with sufficient computation capacity in order to minimize data transmission [4].

Edge computing performs computations that process, interpret, and use data at the edge nodes. Performing these computations on the edge nodes minimizes data transmission, thereby improving latency, bandwidth, and energy consumption. For example, in the medical diagnostics use-case described above, rather than sending several GBs of MRI data to the medical personnel for diagnoses, the portable MRI machine (the edge node) is equipped with sufficient computational capabilities and algorithms to extract information and interpret the data. Only processed data (e.g., information about an anomaly in the patient) is transmitted to the medical personnel, thus speeding up the diagnoses and reducing the MRI machine's energy consumption. Alternatively, the data could be quantifiably reduced using intelligent algorithms and computations, such that only important information is transmitted to the medical personnel.

Gaura et al. [30] examined the benefits of edge mining, in which data mining takes place on the edge devices. The authors showed that edge mining has the potential to reduce the amount of transmitted data, thus reducing energy consumption and storage requirements.
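The transmit-vs-compute tradeoff above can be made concrete with a back-of-the-envelope sketch. The 1500x per-bit figure is the lower bound cited from [76]; the data sizes and the instructions-per-bit cost of the local reduction are invented, illustrative assumptions.

```python
# Back-of-the-envelope sketch of the transmit-vs-compute tradeoff.
# E_TX_BIT / E_INSTR = 1500 is the cited lower bound; all other
# figures are illustrative assumptions, not measurements.

E_INSTR = 1.0      # energy units per executed instruction
E_TX_BIT = 1500.0  # energy units per transmitted bit

def transmission_energy(bits):
    """Energy to transmit raw data with no edge processing."""
    return bits * E_TX_BIT

def edge_energy(bits_in, bits_out, instrs_per_bit):
    """Energy to reduce the data locally, then transmit only the result."""
    compute = bits_in * instrs_per_bit * E_INSTR
    transmit = bits_out * E_TX_BIT
    return compute + transmit

raw = transmission_energy(8_000_000)        # transmit 1 MB of raw samples
edge = edge_energy(8_000_000, 8_000, 100)   # reduce 1 MB to 1 KB locally
print(f"savings: {raw / edge:.1f}x")        # roughly 15x under these assumptions
```

Even at a generous 100 instructions per input bit, transmission dominates the budget; this is the arithmetic that motivates edge computing throughout this survey.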
However, the edge nodes' computing capabilities must be sufficient/right-provisioned to perform and sustain the required computations, while adhering to the nodes' design constraints (e.g., form factor, energy consumption, etc.) [83].

This paper explores microarchitectural optimizations and emerging computing paradigms that will enable edge computing on the IoT. To ensure that microprocessor architectures designed and/or selected for the IoT have sufficient computing capabilities, a holistic approach, involving both application and microarchitecture characteristics, must be taken to determine microarchitectural design tradeoffs. However, due to the wide variety of IoT applications and the diverse set of available architectures, determining the appropriate architectures is very challenging. The study presented herein seeks to address these challenges and motivate future research in this direction.

In this paper, we perform an expansive study and characterization of the emerging IoT application space and propose an application classification to broadly represent IoT applications with respect to their execution characteristics. To enable the design of right-provisioned microprocessors, we propose the use of computational kernels that provide a tractable starting point for representing key computations that occur in the IoT application space. Using computational kernels, rather than full applications, follows the computational dwarfs methodology [8] and allows IoT computational patterns to be accurately represented at a high level of abstraction.

Furthermore, we propose a high-level design methodology for identifying right-provisioned architectures for edge computing use-cases, based on the executing applications and the applications' execution characteristics (e.g., compute intensity, memory intensity, etc.).
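One illustrative way to operationalize such execution characteristics is a roofline-style ratio of operations to memory traffic. The classifier below is a hypothetical sketch, not part of the surveyed methodology; the 1.0 ops/byte threshold and the kernel profiles are arbitrary, for illustration only.

```python
# Hypothetical sketch: label a kernel compute- or memory-intensive from
# its arithmetic intensity (operations per byte of memory traffic).
# The 1.0 ops/byte threshold is an arbitrary illustrative cutoff.

def classify_kernel(ops, bytes_moved, threshold=1.0):
    """Return a coarse execution-characteristic label for a kernel."""
    intensity = ops / bytes_moved
    return "compute-intensive" if intensity >= threshold else "memory-intensive"

# Invented profiles for two kernel styles discussed in this survey:
print(classify_kernel(ops=5e9, bytes_moved=1e9))  # FFT-like workload
print(classify_kernel(ops=1e8, bytes_moved=1e9))  # transpose-like workload
```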
Finally, in order to motivate future research, we survey a few potential microprocessor optimizations and computing paradigms that will enable the design of right-provisioned IoT microprocessor architectures.

II. HIGH-LEVEL IOT CHARACTERISTICS AND THEIR DEMANDS ON MICROPROCESSOR ARCHITECTURES

The IoT's characteristics necessitate new designs and optimizations for microprocessors that will be employed in IoT devices. We briefly describe seven key characteristics—based on previous research [75]—that, together, distinguish the IoT from other connected systems: intelligence, heterogeneity, complexity, scale, real-time constraints, spatial constraints, and inter-node support. We also describe the demands that these characteristics place on microprocessor architectures.

Intelligence: Since the goal of the IoT is to reduce reliance on human intervention for data acquisition and use [9], raw data must be autonomously collected and processed to create actionable information. IoT microprocessors must be able to dynamically adapt to varying runtime execution scenarios and adaptable data characteristics [32].

Heterogeneity: One of the key characteristics of the IoT is that it involves a high degree of heterogeneity, featuring different kinds of devices, applications, and contexts [75]. Thus, IoT microprocessors must be specialized to the different execution characteristics of IoT applications. IoT microprocessor heterogeneity may be chip-level—a single chip with heterogeneous cores—or network-level, where different devices feature different kinds of cores. Despite this heterogeneity, the devices must be able to seamlessly communicate with each other and share resources for efficient data interpretation and use.

Complexity: The organization and management of the IoT will be very complex. Apart from the large numbers of heterogeneous architectures, the architectures must be able to execute a wide variety of applications, many of which may be memory- and compute-intensive. Interactions between the different IoT devices will dynamically vary. Some devices will be added to the IoT network, while others will be removed; these changes may impact individual devices' execution behaviors.

Scale: The IoT will comprise more than 50 billion devices by 2020, and the numbers are expected to grow continuously [91]. In addition to the increase in the number of devices, the interactions among them will also increase. To support this scale, IoT microprocessors must be efficient—cost, energy, and area efficient—and constitute minimal overhead to the IoT device. In addition, the microprocessors must be able to portably execute different kinds of applications.

Real-time constraints: Some of the most important IoT use-cases—for example, patient monitoring, medical diagnostics, aircraft monitoring—involve real-time constraints, where execution must adhere to stringent deadlines.
IoT microprocessors must be able to dynamically determine and adhere to deadlines, based on various inputs, such as user inputs, application characteristics, and quality of service.

Spatial constraints: Several IoT use-cases are location-based. An IoT device's location may change throughout the device's lifetime. In addition, the device may be exposed to variable, and potentially non-ideal, environmental conditions. For example, tracking devices may be exposed to extreme heat, extreme cold, and/or rain at different times or in different locations. Thus, IoT microprocessors must feature fault tolerance and adaptability that allow them to adhere to variable operating conditions.

Inter-node support: The IoT will comprise several devices/nodes that can share execution resources among themselves. Due to the wide variety of IoT applications that may execute on a device, and the stringent resource constraints, it may be impractical to equip every device with all the execution resources it will require throughout its lifetime. Thus, to maintain efficient execution, IoT devices must be able to share execution resources with each other, when necessary.

III. IOT APPLICATION CLASSIFICATION

The IoT offers computing potential for many application domains, including transportation and logistics, healthcare, smart environments, personal and social domains [11], etc. One of the key goals of the IoT, from an edge computing perspective, is to equip edge devices with sufficient resources to perform computations that would otherwise have been transferred to a high-performance device. In order to rightly provision these devices, we must first understand the potential applications that will be executed on the devices.

Previous works have proposed classifications for various IoT components. Gubbi et al. [34] presented a taxonomy for a high-level definition of IoT components with respect to hardware, middleware, and presentation/data visualization.
Tilak et al. [93] presented a taxonomy to classify wireless sensor networks according to different communication functions, data delivery models, and network dynamics. Tory et al. [94] presented a high-level visualization taxonomy that classified algorithms based on the characteristics of the data models.

However, there is currently very little research that characterizes these applications with respect to their execution characteristics. One of the biggest challenges the IoT presents is the huge number and diversity of use-cases and potential applications that will be executed on IoT devices. This challenge is exacerbated by the fact that only a small fraction of these applications are currently available in society. Thus, a significant amount of foresight is required in designing microprocessor architectures to support the IoT's emergence and growth.

Much prior work has characterized IoT applications according to different use-cases and domains. For example, Atzori et al. [11] and Sundmaeker et al. [91] categorized IoT applications into three domains: industry, environment, and society. Asin et al. [10] categorized IoT applications into 54 domains under twelve categories. In this work, our goal is a tractable and extensible classification that enables us to identify the IoT applications' key execution characteristics.

As an initial step towards understanding IoT applications' execution characteristics, we performed an expansive study of IoT use-cases and the application functions present in these use-cases. Since it is impractical to consider every IoT application within these use-cases/application domains, based on our study, we propose an application classification methodology that provides a high-level, broad, and tractable representation of a variety of IoT applications using the application functions.
Our IoT application classification consists of six key application functions:

- sensing
- communications
- image processing
- compression (lossy/lossless)
- security
- fault tolerance

We note that this classification is not exhaustive; however, it represents a wide variety of current and potential IoT applications. The classification also provides an extensible framework that allows emerging applications/application domains to be analyzed. In this section, we describe the application functions and motivate these functions using a medical diagnostics use-case, where applicable, or other specific examples of current and/or emerging IoT applications.

A. Sensing

Sensing involves data acquisition (e.g., temperature, pressure, motion, etc.) about objects or phenomena, and will remain one of the most common functions in IoT applications. In these applications, activities, information, and data of interest are gathered for further processing and decision making. We use sensing in our IoT application classification to represent applications where data acquired using sensors must be converted to a more usable form. Our motivating example for sensing applications is sensor fusion [70], where sensed data from multiple sensors are fused to create data that is considered qualitatively or quantitatively more accurate and robust than the original data.

Sensor fusion algorithms can involve various levels of complexity and compute/memory intensity. For example, sensor fusion could involve aggregating data from various sources using simple mathematical computations, such as addition, minimum, maximum, mean, etc.
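The simple aggregation style of fusion just described can be sketched in a few lines; the sensor readings below are invented for illustration.

```python
# Minimal sketch of simple-aggregation sensor fusion: combining redundant
# scalar readings into a single, more robust estimate using elementary
# statistics. Sensor values are invented for illustration.
from statistics import mean

def fuse_readings(readings):
    """Fuse redundant sensor samples with mean/min/max aggregation."""
    return {
        "mean": mean(readings),
        "min": min(readings),
        "max": max(readings),
    }

# Three temperature sensors observing the same phenomenon:
fused = fuse_readings([21.9, 22.1, 22.4])
print(fused)
```

Fusing vector data (e.g., video streams) replaces these scalar aggregations with far heavier intermediate processing, which is what drives the compute/memory intensity range noted above.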
Alternatively, sensor fusion could involve more computationally complex/expensive applications, such as fusing vector data (e.g., video streams from multiple sources), which requires a substantial increase in intermediate processing.

In a medical diagnostics use-case, for example, sensing is vital in a body area network [19], where non-invasive sensors can be used to automatically monitor a patient's physiological activities, including blood pressure, heart rate, motion, etc. Several sensing devices, such as portable electrocardiography (ECG), electroencephalography (EEG), and electromyography (EMG) machines, and motion and blood pressure sensors, could be equipped with additional computational resources and algorithms that enable the devices to not only gather data, but also analyze the data in order to reduce the amount of transmitted data, with minimal energy or area overheads.

B. Communications

Communications is one of the most common IoT application functions due to the IoT's intrinsic connected structure, where data transfers traverse several connected nodes. There are many communication technologies (e.g., Bluetooth, Wi-Fi, etc.) and communication protocols (e.g., the transmission control protocol (TCP), the emerging 6LoWPAN (IPv6 over low-power wireless personal area networks), etc.). In this work, we highlight software-defined radio (SDR) [59], which is a communication system in which physical layer functions (e.g., filters, modems, etc.) that are typically implemented in hardware are implemented in software.

SDR is an emerging and rapidly developing communication system that is driving the innovation of communications technology, and promises to impact all areas of communication. SDR is growing in popularity, and is attractive for the IoT, because of its inherent flexibility, which allows for flexible incorporation and enhancement of multiple radio functions, bands, and modes, without requiring hardware updates.
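The kind of physical-layer function that SDR moves from hardware into software can be illustrated with a minimal sketch: a direct-form FIR low-pass filter applied to digitized samples. The taps here are a simple 4-point moving average, chosen only for illustration.

```python
# Sketch of a physical-layer function implemented in software, SDR-style:
# a direct-form FIR filter. The taps form a 4-point moving average, a
# crude low-pass filter chosen purely for illustration.

def fir_filter(samples, taps):
    """Convolve input samples with the filter taps (direct-form FIR)."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, tap in enumerate(taps):
            if n - k >= 0:          # skip samples before the stream starts
                acc += tap * samples[n - k]
        out.append(acc)
    return out

taps = [0.25, 0.25, 0.25, 0.25]           # moving-average low-pass
noisy = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]    # alternating (high-frequency) input
print(fir_filter(noisy, taps))            # high-frequency swing is attenuated
```

In a real SDR pipeline the same tap array could be retargeted to a different band or mode by a software update alone, which is the flexibility argument made above.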
SDR typically involves an antenna, an analog-to-digital converter (ADC) connected to the antenna (for receiving), and a digital-to-analog converter (DAC) connected to the antenna (for transmitting). Digital signal processing (DSP) operations (e.g., the Fast Fourier Transform (FFT)) are then used to convert the input signals to any form required by the application.

Even though SDR applications are typically compute intensive, with small data and instruction memory footprints, recent work [20] shows that the overheads of SDR can be kept small in the IoT domain by focusing on optimizing the key kernels (e.g., synchronization and finite impulse response (FIR) filtering) that dominate SDR computations and power consumption. In general, SDR algorithms can be efficiently executed using general-purpose microprocessors or more specialized processors, such as digital signal processors (DSPs) or field-programmable gate arrays (FPGAs). Alternatively, heterogeneous architectures [40] can also combine different kinds of microprocessors to satisfy different operations' execution requirements while minimizing overheads, such as energy consumption. Other examples of communication applications include packet switching and TCP/IP.

C. Image Processing

In the IoT context, image processing represents applications that involve any form of signal processing where the input is an image or video stream from which characteristics/parameters must be extracted/identified. Additionally, this classification also involves applications in which an image/video input must be converted to a more usable form. Several emerging IoT applications, such as automatic number plate recognition, traffic sign recognition, face recognition, etc., involve various forms of image processing.
For example, face recognition involves operations, such as face detection, landmark recognition, feature extraction, and feature classification, all of which involve image processing.

Image processing is important for several impactful IoT use-cases, and necessitates microarchitectures that can efficiently perform image processing operations. For example, in medical diagnostics, image processing can be used to increase the reliability and reproducibility of disease diagnostics. Image processing can provide medical personnel with quantitative data from historical images, which can be used to supplement qualitative data currently used by specialists. In addition, portable medical devices, e.g., portable ultrasounds, can be equipped with image processing applications to provide speedy analysis for remote assessment of patients [69].

The National Institutes of Health (NIH) supports the Medical Image Processing, Analysis, and Visualization (MIPAV) application [67], which enables medical researchers to easily share research data and enhance their ability to diagnose, monitor, and treat medical disorders. However, since image processing applications are typically data-rich, and both memory and compute intensive, novel optimization techniques are required to enable the efficient execution of these applications in the context of IoT edge computing. Furthermore, some image processing applications require large input, intermediate, or output data to be stored (e.g., medical imaging), thus requiring a large amount of storage.

D. Compression

With the increase in data and bandwidth-limited systems, compression can reduce communication requirements to ensure that data is quickly retrieved, transmitted, and/or analyzed. Several emerging IoT use-cases will involve large volumes of data, which will necessitate efficient compression techniques to accommodate the rapid growth of the data and reduce transmission latency and bandwidth costs [100]. Additionally, since most IoT devices are resource-constrained, compression also reduces storage requirements when data must be stored on the edge node. For example, data gathered using sensors in a body area network can be quantifiably and intelligently reduced in order to minimize transmission and storage requirements for medical diagnosis devices.

Compression involves encoding information using fewer bits than the original representation. The data can be encoded at the data source before storage or transmission, known as source encoding, or during transmission, known as channel coding [6]. In our studies, however, we focus on source encoding, as this type of encoding will be more relevant in the context of edge computing.

Compression techniques can be broadly classified as lossy or lossless compression. Lossy compression (e.g., JPEG) typically exploits the perceptibility of the data in question, and removes unnecessary data, such that the lost data is imperceptible to the user. Alternatively, lossless compression removes statistically redundant data in order to concisely represent data. Lossless compression typically achieves a lower compression ratio and is usually more compute and memory intensive than lossy compression. However, lossy compression may be unsuitable in some scenarios where high data fidelity is required to maintain the quality of service (QoS) (e.g., in medical imaging).

E. Security

Since IoT devices are often deployed in open or potentially unsafe environments, where the devices are susceptible to malicious attacks, security applications are necessary to maintain the integrity of both the devices and the data. Furthermore, sensitive scenarios (e.g., medical diagnostics) may require security applications to prevent unauthorized access to sensitive data and functions. Implantable medical devices, such as pacemakers, implantable cardiac defibrillators, and neurostimulators, are especially susceptible to potentially fatal security and privacy issues, such as replay attacks [37], [48]. Since medical device security is still in its infancy, there still exists a wide knowledge gap with respect to the microprocessor characteristics that will support security algorithms' execution requirements without sacrificing the devices' functional requirements.

We highlight data encryption [89], which is a common technique for ensuring data confidentiality, wherein an encryption algorithm is used to generate encrypted data that can only be read/used if decrypted. Data encryption applications (e.g., the secure hash algorithm) are typically compute and memory intensive, since encryption speed is also dependent on the memory access latency for data retrieval and storage.

Fig. 2: Illustration of a high-level IoT microprocessor design life-cycle.

F. Fault Tolerance

Fault tolerance [39] refers to a system's ability to operate properly when some of its components fail. Fault tolerant applications are especially vital since IoT devices may be deployed in harsh and unattended environments, where QoS must be maintained in potentially adverse conditions, such as cryogenic to extremely high temperatures, shock, vibration, etc. In some emerging IoT devices, such as implantable medical devices, fault tolerance could be the single most critical requirement, since faults can be potentially fatal.
Thus, fault tolerance must be incorporated into such devices without accruing significant overheads.

Fault tolerance can be achieved in different ways. Hardware-based techniques usually rely on redundancy—RAID (redundant array of independent disks) [74] is a common example—wherein redundant disks or devices are used to provide fault tolerance in the event of a failure. This kind of redundancy can be achieved in IoT devices using a dedicated IoT device, or integrated into a larger, less constrained device, in order to minimize the attendant overheads of redundancy. Alternatively, redundancy can be incorporated directly into the IoT devices, at the expense of area and power overheads. To reduce the overheads from hardware-based fault tolerance, software-based fault tolerance can also be employed. Software-based fault tolerance [39], [80], [95] involves applications and algorithms that perform operations, such as memory scrubbing, cyclic redundancy checks (CRC), error detection and correction, etc.

IV. DETERMINING IOT MICROPROCESSOR CONFIGURATIONS

One of the major challenges for IoT microprocessor design is determining the best microprocessor configurations that satisfy the IoT device's execution requirements. In this section, we describe a sample high-level process through which an IoT microprocessor can be designed and optimized.

TABLE I: Application functions and sample representative kernels.

Application function | Kernel
Sensing              | Dense matrix transpose
Communications       | Fast Fourier Transform (FFT)
Image processing     | Dense matrix multiplication
Lossy compression    | jpeg
Lossless compression | lz4
Security             | Secure Hash Algorithm (sha)
Fault tolerance      | Cyclic redundancy check (crc)

Figure 2 illustrates a high-level IoT microprocessor design life-cycle, consisting of six steps. First, the use-case needs to be specified. This step describes the overall functionality and behavior of the IoT device, which will dictate the microprocessor requirements. Based on the use-case, the required applications to achieve the desired functionality are then specified. For example, a medical diagnostics use-case involving a portable ultrasound device [50] may require applications for image capture, anomaly detection, anomaly recognition, data encryption, and data transmission. Thereafter, the specific functions within each application are determined, and these functions are broken down into their respective computational kernels. Computational kernels are basic execution block
