Architecture of the IBM System/360


G. M. Amdahl, G. A. Blaauw, F. P. Brooks, Jr.

Abstract: The architecture* of the newly announced IBM System/360 features four innovations:

1. An approach to storage which permits and exploits very large capacities, hierarchies of speeds, read-only storage for microprogram control, flexible storage protection, and simple program relocation.

2. An input/output system offering new degrees of concurrent operation, data rates approaching 5,000,000 characters/second, integrated design of hardware and software, a new low-cost, multiple-channel package sharing main-frame hardware, new provisions for device status information, and a standard channel interface between central processing unit and input/output devices.

3. A truly general-purpose machine organization offering new supervisory facilities, powerful logical processing operations, and a wide variety of data formats.

4. Strict upward and downward machine-language compatibility over a line of six models having a performance range factor of 50.

This paper discusses in detail the objectives of the design and the rationale for the main features of the architecture. Emphasis is given to the problems raised by the need for compatibility among central processing units of various size and by the conflicting demands of commercial, scientific, real-time, and logical information processing. A tabular summary of the architecture is shown in the Appendices.

Introduction

The design philosophies of the new general-purpose machine organization for the IBM System/360 are discussed in this paper.† In addition to showing the architecture* of the new family of data processing systems, we point out the various engineering problems encountered in attempts to make the system design compatible, at the program bit level, for large and small models.
The compatibility was to extend not only to models of any size but also to their various applications: scientific, commercial, real-time, and so on.

* The term architecture is used here to describe the attributes of a system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flow and controls, the logical design, and the physical implementation.

† Additional details concerning the architecture, engineering design, programming, and application of the IBM System/360 will appear in a series of articles in the IBM Systems Journal.

IBM Journal, April 1964

The section that follows describes the objectives of the new system design, i.e., that it serve as a base for new technologies and applications, that it be general-purpose, efficient, and strictly program compatible in all models. The remainder of the paper is devoted to the design problems faced, the alternatives considered, and the decisions made for data format, data and instruction codes, storage assignments, and input/output controls.

Design objectives

The new architecture builds upon but differs from the designs that have gradually evolved since 1950. The evolution of the computer had included, besides major technological improvements, several important systems concepts and developments:

1. Adaptation to business data processing.

2. Growing importance of the total system, especially the input/output aspects.

3. Universal use of assembly programs, compilers, and other metaprograms.

4. Development of magnetic recording on tapes, drums, and disks.

5. Hundred-fold expansion of storage capacities.

6. Adaptation for real-time systems.

During this period most new computer models, from the point of view of their logical structure, were improved, enlarged, or technologically recast versions of the machines developed in the early 1950's. IBM products are not atypical; the evolution has gone from IBM 701 to 7094, from 650 to 7074, from 702 to 7080, and from 1401 to 7010. The system characteristics to be described here, however, are a new approach to logical structure and function, designed for the needs of the next decade as a coordinated set of data processing systems.

Advanced concepts

It was recognized from the start that the design had to embody recent conceptual advances, and hence, if necessary, be incompatible with existing products. To this end, the following premises were considered:

1. Since computers develop into families, any proposed design would have to lend itself to growth and to successor machines.

2. Input/output (I/O) devices make systems specifically useful for given applications. A general method was needed for using I/O devices differing in data rate, access, and function.

3. The real value of an information system is properly measured by answers-per-month, not bits-per-microsecond. The former criterion required specific advances to increase throughput for a given internal speed, to shorten turnaround time for a given throughput, and to make the whole complex of machines and programming systems easier to use.

4. The functions of the central processing unit (CPU) proper are specific to its application only a minor fraction of the time.
The functions required by the system for its own operation, e.g., compiling, input/output management, and the addressing of and within complex data structures, use a major share of time. These functions had to be made efficient, and need not be different in machines designed for different applications.

5. The input/output channel and the input/output control program had to be designed for each other.

6. Machine systems had to be capable of both real-time and multiprogrammed, or time-shared, applications. To realize this capability requires: a comprehensive interruption system, tamper-proof storage protection, a protected supervisor program, supervisor-controlled program switching, supervisor control of all input/output (including unit assignment), nonstop operation (no HALT), easy program relocation, simple writing of read-only or unmodified programs, a timer, and interpretive consoles.

7. It must be possible and straightforward to assemble systems with redundant I/O, storages, and CPU's so that the system can operate when modules fail.

8. Storage capacities of more than the commonly available 32,000 words would be required.

9. Certain types of problems require floating-point word length of more than 36 bits.

10. As CPU's become increasingly reliable, built-in thorough checking against hardware malfunction is imperative for all systems, regardless of application.

11. Built-in hardware fault-locating aids are essential to reduce down-times.
Furthermore, identification of individual malfunctions and of individual invalidities in program syntax would have to be provided.

Open-ended design

The new design had to provide a dependable base for a decade of customer planning and customer programming, and continuing laboratory developments, whether in technology, application and programming techniques, system configuration, or special requirements.

The various circuit, storage, and input/output technologies used in a system change at different times, causing corresponding changes in their relative speeds and costs. To take advantage of these changes, it is desirable that the design permit asynchronous operation of these components with respect to each other.

Changing application and programming techniques would require open-endedness in function. Current trends had to be extrapolated and their consequences anticipated. This anticipation could be achieved by direct provision, e.g., by increasing storage capacities and by using multiple-CPU systems, various new I/O devices, and time sharing. Anticipation might also take the form of generalization of function, as in code-independent scan and translation facilities, or it might consist of judiciously reserving spare bits, operation codes, and blocks of operation codes, for new modes, operations, or sets of operations.
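The code-independent translation facility mentioned above can be pictured as a table-driven byte mapping, in which every 8-bit value is simply replaced by the table entry it indexes, so the operation carries no knowledge of any particular character code. A minimal Python sketch; the uppercasing table is a hypothetical example, not a machine-defined code:

```python
# Table-driven byte translation in the spirit of a generalized,
# code-independent translate facility: each input byte is replaced
# by the 256-entry table value it indexes.

def translate(data: bytes, table: bytes) -> bytes:
    """Replace each byte b of data with table[b]."""
    assert len(table) == 256
    return bytes(table[b] for b in data)

# A hypothetical table mapping lowercase ASCII letters to uppercase,
# leaving every other byte value unchanged.
table = bytearray(range(256))
for b in range(ord('a'), ord('z') + 1):
    table[b] = b - 32

print(translate(b'ibm system/360', bytes(table)))  # b'IBM SYSTEM/360'
```

Because the table itself is ordinary data, the same operation serves any code conversion, scan, or classification task the programmer cares to define.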

Changing requirements for system configuration would demand not only such approaches as a standard interface between I/O devices and control unit, but also capabilities for a machine to directly sense, control, and respond to other equipment modules via paths outside the normal data routes. These capabilities permit the construction of supersystems that can be dynamically reconfigured under program control, to adapt more precisely to specialized functions or to give graceful degradation.

In many particular applications, some special (and often minor) modification enhances the utility of the system. These modifications (RPQ's), which may correct some shortsightedness of the original design, often embody operations not fully anticipated. In any event, a good general design would obviate certain modifications and accommodate others.

General-purpose function

The machine design would have to provide individual system configurations for large and small, separate and mixed applications as found in commercial, scientific, real-time, data-reduction, communications, language, and logical data processing. The CPU design would have to be facile for each of these applications. Special facilities such as decimal or floating-point arithmetic might be required only for one or another application class and would be offered as options, but they would have to be integral, from the viewpoint of logical structure, with the design. In particular, the general-purpose objective dictated that:

1. Logical power of great generality would have to be provided, so that all combinations of bits in data entities would be allowed and might be manipulated with operations whose power and utility depend upon the general nature of representations rather than upon any specific selection of them.

2. Operations would have to be code-independent except, of course, where code definition is essential to operation, as in arithmetic.
In particular, all bit combinations should be acceptable as data; no combination should exert any control function when it appears in a data stream.

3. The individual bit would have to be separately manipulatable.

4. The general addressing system would have to be able to refer to small units of bits, preferably the unit used for characters.

Further, the implications of general-purpose CPU design for communications-oriented systems indicated a radical departure from current systems philosophy. The conventional CPU, for example, is augmented by an independent stored-program unit (such as the IBM 7750 or 7740) to handle all communications functions. Since the new CPU would easily perform such logical functions as code translation and message assembly, communications lines would be attached directly to the I/O channel via a control unit that would perform only character assembly and the electrical line-handling functions.

Efficient performance

The basic measure of a good design is high performance in comparison to other designs having the same cost. This measure cannot be ignored in designing a compatible line. Hence each individual model and systems configuration in the line would have to be competitive with systems that are specialized in function, performance level, or both. That this goal is feasible in spite of handicaps introduced by the compatibility requirement was due to the especially important cost savings that would be realized due to compatibility.

Intermodel compatibility

The design had to yield a range of models with internal performance varying from approximately that of the IBM 1401 to well beyond that of the IBM 7030 (STRETCH). As already mentioned, all models would have to be strictly program compatible, upward and downward, at the program bit level. The phrase "strictly program compatible" requires a more technically precise definition.
Here it means that a valid program, whose logic will not depend implicitly upon time of execution and which runs upon configuration A, will also run on configuration B if the latter includes at least the required storage, at least the required I/O devices, and at least the required optional features. Invalid programs, i.e., those which violate the programming manual, are not constrained to yield the same results on all models. The manual identifies not only the results of all dependable operations, but also those results of exceptional and/or invalid operations that are not dependable. Programs dependent on execution time will operate compatibly if the dependence is explicit, and, for example, if completion of an I/O operation or the timer are tested.

Compatibility would ensure that the user's expanding needs be easily accommodated by any model. Compatibility would also ensure maximum utility of programming support prepared by the manufacturer, maximum sharing of programs generated by the user, ability to use small systems to back up large ones, and exceptional freedom in configuring systems for particular applications.

It required a new concept and mode of thought to make the compatibility objective even conceivable. In the last few years, many computer architects had realized, usually implicitly, that logical structure (as seen by the programmer) and physical structure (as seen by the engineer) are quite different. Thus each may see registers, counters, etc.,
that to the other are not at all real entities. This was not so in the computers of the 1950's. The explicit recognition of the duality of structure opened the way for the compatibility within System/360. The compatibility requirement dictated that the basic architecture had to embrace different technologies, different storage-circuit speed ratios, different data path widths, and different data-flow complexities. The basic machine structure and implementation at the various performance levels are shown in Fig. 1.

[Figure 1 Machine structure and implementation: a table giving, for each model, the main-store type, capacity in 8-bit bytes, cycle time, data-path width in bits excluding parity, read-only store characteristics, and register implementation.]

The design decisions

Certain decisions for the architectural design became mileposts, because they (a) established prominent characteristics of the System/360, (b) resolved problems concerning the compatibility objective, thus illuminating the essential differences between small models and large, or (c) resolved problems concerning the general-purpose objective, thus illuminating the essential differences among applications. The sections that follow discuss these decisions, the problems faced, the alternatives considered, and the reasons for the outcome.

Data format

The decision on basic format (which affected character size, word size, instruction field, number of index registers, input/output implementation, instruction set layout, storage capacity, character code, etc.) was whether data length modules should go as 2^n or 3·2^n. Even though many matters of format were considered in the basic choice, we will for convenience treat the major components of the decision as if they were independent.

Character size, 6 vs 4/8. In character size, the fundamental problem is that decimal digits require 4 bits, while alphanumeric characters require 6 bits. Three obvious alternatives were considered: 6 bits for all, with 2 bits wasted on numeric data; 4 bits for digits, 8 for alphanumeric, with 2 bits wasted on alphanumeric; and 4 bits for digits, 6 for alphanumeric, which would require adoption of a 12-bit module as the minimum addressable element. The 7-bit character, which incorporated a binary recoding of decimal digit pairs, was also briefly examined.

The 4/6 approach was rejected because (a) it was desired to have the versatility and power of manipulating character streams and addressing individual characters, even in models where decimal arithmetic is not used, (b) limiting the alphabetic character to 6 bits seemed short-sighted, and (c) the engineering complexities of this approach might well cost more than the wasted bits in the character.

The straight-6 approach, used in the IBM 702-7080 and 1401-7010 families, as well as in other manufacturers' systems, had the advantages of familiar usage, existing I/O equipment, simple specification of field structure, and commensurability with a 48-bit floating-point word and a 24-bit instruction field.

The 4/8 approach, used in the IBM 650-7074 family and elsewhere, had greater coding efficiency, spare bits in the alphabetic set (allowing the set to grow), and commensurability with a 32/64-bit floating-point word and a 16-bit instruction field.
Most important of these factors was coding efficiency, which arises from the fact that the use of numeric data in business records is more than twice as frequent as alphanumeric. This efficiency implies, for a given hardware investment, better use of core storage, faster tapes, and more capacious disks.

Floating-point word length, 48 vs 32/64. For large models addition time goes up slowly with word length, and multiplication time rises almost linearly. For small, serial models, addition time rises linearly and multiplication as the square of word length. Input/output time for data files rises linearly. Large machines more often require high precision; small machines more urgently require short operands. For this aspect of the basic format problem, then, definite conflicts arose because of compatibility.

Good data were unavailable on the distribution of required precision by the number of problems or running time. Indeed, accurate measures could not be acquired on such coarse parameters as frequency of double-precision operation on 36-bit and 48-bit machines. The question became whether to force all problems to the longer 48-bit word, or whether to provide 64 to take care of precision-sensitive problems adequately, and either 32 or 36 to give faster speed and better coding efficiency for the rest. The choice was made for the IBM System/360 to have both 64- and 32-bit length floating point. This choice offers the user the option of making the speed/space vs precision trade-off to best suit his requirements. The user of the large models is expected to employ 64-bit words most of the time. The user of the smaller models will find the 32-bit length advantageous in most of his work. All floating-point models have both lengths and operate identically.

Hexadecimal floating-point radix. With no conflicts in questions of large vs small machines, base 16 was selected for floating point.
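The short (32-bit) hexadecimal floating-point word can be made concrete with a small decoder. The layout sketched here, a sign bit, a 7-bit excess-64 exponent, and a 24-bit fraction with the value fraction × 16^(exponent − 64), follows the System/360 format; the paper itself states only the radix and the 24-bit fraction, so treat this as an illustrative sketch:

```python
# Decode a System/360-style short (32-bit) hexadecimal floating-point
# word: sign bit, 7-bit excess-64 exponent, 24-bit fraction with the
# binary point at the left of the fraction.

def decode_hex_float(word: int) -> float:
    sign = -1.0 if word >> 31 else 1.0
    exponent = ((word >> 24) & 0x7F) - 64   # excess-64 -> true exponent
    fraction = (word & 0xFFFFFF) / 2**24    # 24 fraction bits, value < 1
    return sign * fraction * 16.0**exponent

# +1.0 is fraction 0x100000 (1/16) with biased exponent 65: (1/16)*16 = 1.0
print(decode_hex_float(0x41100000))  # 1.0
```

Note that a normalized fraction need only have a nonzero leading hexadecimal digit, so up to three leading fraction bits may be zero; this is exactly the loss that reduces the 24-bit fraction to the 21-bit minimum precision discussed below.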
Studies by Sweeney show that the frequencies of pre-shift, overflow, and precision-loss post-shift on floating-point addition are substantially reduced by this choice. He has shown that, compared with base 2, the percentage frequency of occurrence of overflow is 5 versus 20, with the frequencies of the other events reduced in comparable proportion. Thus speed is noticeably enhanced. Also, simpler shifting paths, with fewer logic levels, will accomplish a higher proportion of all required pre-shifting in a single pass. For example, circuits shifting 0, 1, 2, 3, or 4 binary places cover 82% of the base-2 pre-shifts. Substantially simpler circuits shifting 0, 1, or 2 hexadecimal places cover 93% of all base-16 pre-shifts. This simplification yields higher speed for the large models and lower cost for the small ones.

The most substantial disadvantage of adopting base 16 is the shift in bit usage from exponent to fraction. Thus, for a given range and a given minimum precision, base 16 requires 2 fewer exponent bits and 3 more fraction bits than does base 2. Alternatively and equivalently, rounding and truncation effects are 8 times as large for a given fraction length. For the 64-bit length, this is no problem. For the 32-bit length, with its 24-bit fraction, the minimum precision is reduced to the equivalent of 21 bits. Because the 64-bit length was available for problems where the minimum precision cramped the user, the greater speed and simplicity of base 16 was chosen.

Significance arithmetic. Many schemes yielding an estimate of the significance of computed results have been proposed. One such scheme, a modified form of unnormalized arithmetic, was for a time incorporated in the design. The scheme was finally discarded when simulation runs showed this mode of operation to cost about one hexadecimal digit of actual significance developed, as compared with normalized operation. Furthermore, the
significance estimate yielded for a given problem varied substantially with the test data used.

Sign representations. For the fixed-point arithmetic system, which is binary, the two's complement representation for negative numbers was selected. The well-known virtues of this system are the unique representation of zero and the absence of recomplementation. These substantial advantages are augmented by several properties especially useful in address arithmetic, particularly in the large models, where address arithmetic has its own hardware. With two's complement notation, this indexing hardware requires no true/complement gates and thus works faster. In the smaller, serial models, the fact that high-order bits of address arithmetic can be elided without changing the low-order bits also permits a gain in speed. The same truncation property simplifies double-precision calculations. Furthermore, for table calculation, rounding or truncation to an integer changes all variables in the same direction, thus giving a more acceptable distribution than does an absolute-value-plus-sign representation.

The established commercial rounding convention made the use of complement notation awkward for decimal data; therefore, absolute-value-plus-sign is used here. In floating point, the engineering virtues of normalizing only high-order zeros, and of having all zeros represent the smallest possible number, decided the choice in favor of absolute-value-plus-sign.

Variable- versus fixed-length decimal fields. Since the fields of business records vary substantially in length, coding efficiency (and hence tape speed, file capacity, CPU speed, etc.) can be gained by operating directly on variable-length fields. This is easy for serial-by-byte machines, and the IBM 1401-7010 and 702-7080 families are among those so designed.
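The serial-by-byte handling of variable-length decimal fields can be sketched as a packed-decimal encoding: two 4-bit digits per byte, with a sign code in the final nibble. The 0xC/0xD sign codes used below follow the later System/360 convention and are an assumption here, not something specified in the text above:

```python
# Variable-length decimal fields packed two 4-bit digits per byte,
# sign in the last nibble (0xC = plus, 0xD = minus, assumed codes).

def pack_decimal(value: int) -> bytes:
    digits = str(abs(value))
    nibbles = [int(d) for d in digits] + [0xC if value >= 0 else 0xD]
    if len(nibbles) % 2:                      # pad to whole bytes
        nibbles.insert(0, 0)
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

def unpack_decimal(field: bytes) -> int:
    nibbles = [n for b in field for n in (b >> 4, b & 0xF)]
    sign = -1 if nibbles[-1] == 0xD else 1
    return sign * int(''.join(str(n) for n in nibbles[:-1]))

print(pack_decimal(-1234).hex())             # '01234d'
print(unpack_decimal(pack_decimal(-1234)))   # -1234
```

A five-digit signed number occupies three bytes here rather than five or six character positions, which is the coding-efficiency gain the text attributes to 4-bit digits.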
A less flexible structure is more appropriate for a more parallel machine, and the IBM 650-7074 family is among those designed with fixed-word-length decimal arithmetic.

As one would expect, the storage efficiency advantage of the variable data format is diminished by the extra instruction information required for length specification. While the fixed format is preferable for the larger machines, the variable format was adopted because (a) the small commercial users are numerous and only recently trained in variable-format concepts, and (b) the large commercial system is usually I/O-limited; hence the internal performance disadvantage of the variable format is more than compensated by the gain in effective tape rate.

Decimal accumulators versus storage-storage operation. A closely related question involving large/small models concerned the use of an accumulator as one of the operands in decimal arithmetic, versus the use of storage locations for all operands and results. This issue is pertinent even after a decision has been made for variable-length fields in storage; for example, it distinguishes IBM 702-7080 arithmetic from that of the IBM 1401-7010 family.

The large models readily afford registers or local stores and get a speed enhancement from using these as accumulators. For the small model, using core storage for logical registers, addition to an accumulator is no faster than addition to a programmer-specified location. Addition of two arbitrary operands and storage of the result becomes LOAD, ADD, STORE, however, and this operation is substantially slower for the small models than the MOVE, ADD sequence appropriate to storage-storage operation. Business arithmetic operations (as hand coded and especially as compiled from COBOL) often take this latter form and rarely occur in strings where intermediate results are profitably held in accumulators. In address arithmetic and floating-point arithmetic, quite the opposite is true.

Field specification: word-marks versus length.
Variable-length fields can be specified in the data via delimiter characters or word-marks, or in the instruction via specification of field length or start-finish limits. For business data, the word-mark has some slight advantage in storage efficiency: one extra bit per 8-bit character would cost less than 4 extra length bits per 16-bit address. Furthermore, instructions, and hence addresses, usually occupy most core storage space in business computers. However, the word-mark approach implies the use of word-marks on instructions, too, and here the cost is without compensating function. The same is true of all fixed-field data, an important consideration in a general-purpose design. On balance, storage efficiency is about equal; the field specification was put in the instruction to allow all data combinations to be valid and to give easier and more direct programming, particularly since it provides convenient addressing of parts of fields. Length was chosen over limit specification to simplify program relocation and instruction modification.

ASCII vs BCD codes. The selection of the 8-bit character size in 1961 proved wise by 1963, when the American Standards Association adopted a 7-bit standard character code for information interchange (ASCII). This 7-bit code is now under final consideration by the International Standards Organization for adoption as an international standards recommendation. The question became "Why not adopt ASCII as the only internal code for System/360?" The reason against such exclusive adoption was the widespread use of the BCD code derived from and easily translated to the IBM card code. To facilitate use of both codes, the central processing units are designed with a high degree of code independence, with generalized code translation facilities, and with program-selectable BCD or ASCII modes for code-dependent instructions. Nevertheless, a choice had to be made for the code-sensitive I/O devices and for the programming support, and the solution was to offer both codes, fully supported, as a user option. Systems with either option will, of course, easily read or write I/O media with the other code. The extended BCD interchange code and an 8-bit representation of the 7-bit ASCII are shown in Fig. 2.

[Figure 2a Extended binary-coded-decimal (BCD) interchange code. Figure 2b 8-bit representation of the 7-bit American Standard Code for Information Interchange.]

Boundary alignment. A major compatibility problem concerned alignment of field boundaries. Different models were to have different widths of storage and data flow, and therefore each model had a different set of preferences. For the 8-bit wide model the characters might have been aligned on character boundaries, with no further constraints. In the 64-bit wide model it might have been preferred to have no fields split between different 64-bit double-words. The general rule adopted (Fig. 3) was that each fixed field must begin at a multiple of its field length, and variable-length decimal and character fields are unconstrained and are processed serially in all models.

All models must insure that programmers will adhere to these rules. This policing is essential to prevent the use of technically invalid programs that might work beautifully on small models but not on large ones. Such an outcome would undermine compatibility. The general rule, which has very few and very minor exceptions, is that invalidities defined in the manual are detected in the hardware and cause an interruption. This type of interruption is distinct from an interruption caused by machine malfunctions.

Instruction decisions

Pushdown stack vs addressed registers. Serious consideration was given to a design based on a pushdown accumulator or stack. This plan was abandoned in favor of several registers, each explicitly addressed. Since the advantages of the pushdown organization are discussed in the literature, it suffices here to enumerate the disadvantages which prompted the decision to use an addressed-register organization:
1. The performance advantage of a pushdown stack organization is derived principally from the presence of several fast registers, not from the way they are used or specified.

2. The fraction of "surfacings" of data in the stack which are profitable is only about one-half in general use, because of the occurrence of repeated operands (both constants and common factors). This suggests the use of operations such as TOP and SWAP, which respectively copy submerged data to the active positions and assist in clearing submerged data when the information is no longer needed.

3. With TOP's and SWAP's counted, the substantial instruction density gained by the widespread use of implicit addresses is about equalled by that of the same instructions with explicit, but truncated, addresses which specify only the fast registers.

4. In any practical implementation, the depth of the stack has a limit. The register housekeeping eliminated by the pushdown organization reappears as management of a finite-depth stack and as specification of locations of submerged data for TOP's and SWAP's. Further, when part of a full stack must be dumped to make room for new data, it is the bottom part, not the active part, which should be dumped.

5. Subroutine transparency, i.e., the ability to use a subroutine recursively, is one of the apparent advantages of the stack. However, the disadvantage is that the transparency does not materialize unless additional independent stacks are introduced for addressing purposes.

6. Fitting variable-length fields into a fixed-width stack is awkward.

In the final analysis, the stack organization
