The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks


The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
Nicholas Carlini, Google Brain; Chang Liu, University of California, Berkeley; Úlfar Erlingsson, Google Brain; Jernej Kos, National University of Singapore; Dawn Song, University of California, Berkeley
https://www.usenix.org/conference/usenixsecurity19/presentation/carlini
This paper is included in the Proceedings of the 28th USENIX Security Symposium, August 14–16, 2019, Santa Clara, CA, USA. ISBN 978-1-939133-06-9. Open access to the Proceedings of the 28th USENIX Security Symposium is sponsored by USENIX.

Nicholas Carlini 1,2   Chang Liu 2   Úlfar Erlingsson 1   Jernej Kos 3   Dawn Song 2
1 Google Brain   2 University of California, Berkeley   3 National University of Singapore

Abstract

This paper describes a testing methodology for quantitatively assessing the risk that rare or unique training-data sequences are unintentionally memorized by generative sequence models—a common type of machine-learning model. Because such models are sometimes trained on sensitive data (e.g., the text of users' private messages), this methodology can benefit privacy by allowing deep-learning practitioners to select means of training that minimize such memorization.

In experiments, we show that unintended memorization is a persistent, hard-to-avoid issue that can have serious consequences. Specifically, for models trained without consideration of memorization, we describe new, efficient procedures that can extract unique, secret sequences, such as credit card numbers. We show that our testing strategy is a practical and easy-to-use first line of defense, e.g., by describing its application to quantitatively limit data exposure in Google's Smart Compose, a commercial text-completion neural network trained on millions of users' email messages.

1 Introduction

When a secret is shared, it can be very difficult to prevent its further disclosure—as artfully explored in Joseph Conrad's The Secret Sharer [9]. This difficulty also arises in machine-learning models based on neural networks, which are being rapidly adopted for many purposes. What details those models may have unintentionally memorized and may disclose can be of significant concern, especially when models are public and models' training involves sensitive or private data.

Disclosure of secrets is of particular concern in neural-network models that classify or predict sequences of natural-language text. First, such text will often contain sensitive or private sequences, accidentally, even if the text is supposedly public. Second, such models are designed to learn text patterns such as grammar, turns of phrase, and spelling, which comprise a vanishing fraction of the exponential space of all possible sequences. Therefore, even if sensitive or private training-data text is very rare, one should assume that well-trained models have paid attention to its precise details.

Concretely, disclosure of secrets may arise naturally in generative text models like those used for text auto-completion and predictive keyboards, if trained on possibly-sensitive data. The users of such models may discover—either by accident or on purpose—that entering certain text prefixes causes the models to output surprisingly-revealing text completions. For example, users may find that the input "my social-security number is. . ." gets auto-completed to an obvious secret (such as a valid-looking SSN not their own), or find that other inputs are auto-completed to text with oddly-specific details. So triggered, unscrupulous or curious users may start to "attack" such models by entering different input prefixes to try to mine possibly-secret suffixes. Therefore, for generative text models, assessing and reducing the chances that secrets may be disclosed in this manner is a key practical concern.

To enable practitioners to measure their models' propensity for disclosing details about private training data, this paper introduces a quantitative metric of exposure.
This metric can be applied during training as part of a testing methodology that empirically measures a model's potential for unintended memorization of unique or rare sequences in the training data.

Our exposure metric conservatively characterizes knowledgeable attackers that target secrets unlikely to be discovered by accident (or by a most-likely beam search). As validation of this, we describe an algorithm guided by the exposure metric that, given a pretrained model, can efficiently extract secret sequences even when the model considers parts of them to be highly unlikely. We demonstrate our algorithm's effectiveness in experiments, e.g., by extracting credit card numbers from a language model trained on the Enron email data. Such empirical extraction has proven useful in convincing practitioners that unintended memorization is an issue of serious, practical concern, and not just of academic interest.

Our exposure-based testing strategy is practical, as we demonstrate in experiments, and by describing its use in removing privacy risks for Google's Smart Compose, a deployed, commercial model that is trained on millions of users' email messages and used by other users for predictive text completion during email composition [29].

In evaluating our exposure metric, we find unintended memorization to be both commonplace and hard to prevent. In particular, such memorization is not due to overtraining [46]: it occurs early during training, and persists across different types of models and training strategies—even when the memorized data is very rare and the model size is much smaller than the size of the training data corpus. Furthermore, we show that simple, intuitive regularization approaches such as early-stopping and dropout are insufficient to prevent unintended memorization. Only by using differentially-private training techniques are we able to eliminate the issue completely, albeit at some loss in utility.

[Figure 1: Results of our testing methodology applied to a state-of-the-art, word-level neural-network language model [35]. Two models are trained to near-identical accuracy using two different training strategies (hyperparameters A and B). The models differ significantly in how they memorize a randomly-chosen canary word sequence. Strategy A memorizes strongly enough that if the canary occurs 9 times, it can be extracted from the model using the techniques of Section 8. Axes: canary exposure in the trained model versus repetitions of the canary in the training data.]

Threat Model and Testing Methodology. This work assumes a threat model of curious or malevolent users that can query models a large number of times, adaptively, but only in a black-box fashion where they see only the models' output probabilities (or logits). Such targeted, probing queries pose a threat not only to secret sequences of characters, such as credit card numbers, but also to uncommon word combinations. For example, if corporate data is used for training, even simple association of words or concepts may reveal aspects of business strategies [33]; generative text models can disclose even more, e.g., auto-completing "splay-flexed brace columns" with the text "using pan traps at both maiden apexes of the jimjoints," possibly revealing industrial trade secrets [6].

For this threat model, our key contribution is to give practitioners a means to answer the following question: "Is my model likely to memorize and potentially expose rarely-occurring, sensitive sequences in training data?" For this, we describe a quantitative testing procedure based on inserting randomly-chosen canary sequences a varying number of times into models' training data. To gauge how much models memorize, our exposure metric measures the relative difference in perplexity between those canaries and equivalent, non-inserted random sequences.

Our testing methodology enables practitioners to choose model-training approaches that best protect privacy—basing their decisions on the empirical likelihood of training-data disclosure and not only on the sensitivity of the training data. Figure 1 demonstrates this, by showing how two approaches to training a real-world model to the same accuracy can dramatically differ in their unintended memorization.

2 Background: Neural Networks

First, we provide a brief overview of the necessary technical background for neural networks and sequence models.

2.1 Concepts, Notation, and Training

A neural network is a parameterized function f_θ(·) that is designed to approximate an arbitrary function. Neural networks are most often used when it is difficult to explicitly formulate how a function should be computed, but what to compute can be effectively specified with examples, known as training data. The architecture of the network is the general structure of the computation, while the parameters (or weights) are the concrete internal values θ used to compute the function.

We use standard notation [20]. Given a training set X = {(x_i, y_i)}_{i=1}^m consisting of m examples x_i and labels y_i, the process of training teaches the neural network to map each given example to its corresponding label. We train by performing (non-linear) gradient descent with respect to the parameters θ on a loss function that measures how close the network is to correctly classifying each input.
The most commonly usedloss function is cross-entropy loss: given distributions p andq we have H(p, q) z p(z) log(q(z)), with per-exampleloss L(x, y, θ) H( fθ (x), y) for fθ .During training, we first sample a random minibatch B0consisting of labeled training examples {(x̄ j , ȳ j )}mj 1 drawnfrom X (where m0 is the batch size; often between 32 and1024). Gradient descent then updates the weights θ of theneural network by setting01 mθnew θold η 0 θ L(x̄ j , ȳ j , θ)m j 1That is, we adjust the weights η-far in the direction that minimizes the loss of the network on this batch B using the currentweights θold . Here, η is called the learning rate.In order to reach maximum accuracy (i.e., minimum loss),it is often necessary to train multiple times over the entire setof training data X , with each such iteration called one epoch.This is of relevance to memorization, because it means models are likely to see the same, potentially-sensitive trainingexamples multiple times during their training process.2.2Generative Sequence ModelsA generative sequence model is a fundamental architecturefor common tasks such as language-modeling [4], translation[3], dialogue systems, caption generation, optical characterrecognition, and automatic speech recognition, among others.For example, consider the task of modeling naturallanguage English text from the space of all possible sequencesof English words. For this purpose, a generative sequencemodel would assign probabilities to words based on the context in which those words appeared in the empirical distribution of the model’s training data. For example, the modelUSENIX Association

2.2 Generative Sequence Models

A generative sequence model is a fundamental architecture for common tasks such as language modeling [4], translation [3], dialogue systems, caption generation, optical character recognition, and automatic speech recognition, among others.

For example, consider the task of modeling natural-language English text from the space of all possible sequences of English words. For this purpose, a generative sequence model would assign probabilities to words based on the context in which those words appeared in the empirical distribution of the model's training data. For example, the model might assign the token "lamb" a high probability after seeing the sequence of words "Mary had a little", and the token "the" a low probability because—although "the" is a very common word—this prefix of words requires a noun to come next, to fit the distribution of natural, valid English.

Formally, generative sequence models are designed to generate a sequence of tokens x_1 ... x_n according to an (unknown) distribution Pr(x_1 ... x_n). Generative sequence models estimate this distribution, which can be decomposed, via the chain rule of probability, as Pr(x_1 ... x_n) = Π_{i=1}^n Pr(x_i | x_1 ... x_{i−1}). Each individual computation Pr(x_i | x_1 ... x_{i−1}) represents the probability of token x_i occurring at timestep i with previous tokens x_1 to x_{i−1}.

Modern generative sequence models most frequently employ neural networks to estimate each conditional distribution. To do this, a neural network is trained (using gradient descent to update the neural-network weights θ) to output the conditional probability distribution over output tokens, given input tokens x_1 to x_{i−1}, that maximizes the likelihood of the training-data text corpus. For such models, Pr(x_i | x_1 ... x_{i−1}) is defined as the probability of the token x_i as returned by evaluating the neural network f_θ(x_1 ... x_{i−1}).

Neural-network generative sequence models most often use model architectures that can be naturally evaluated on variable-length inputs, such as Recurrent Neural Networks (RNNs). RNNs are evaluated using a current token (e.g., word or character) and a current state, and output a predicted next token as well as an updated state. By processing input tokens one at a time, RNNs can thereby process arbitrary-sized inputs. In this paper we use LSTMs [23] or qRNNs [5].

2.3 Overfitting in Machine Learning

[Figure 2: Overtraining. Cross-entropy loss on the training data (training loss) and on held-out data (validation loss) versus training epochs.]

Overfitting is one of the core difficulties in machine learning. It is much easier to produce a classifier that can perfectly label the training data than a classifier that generalizes to correctly label new, previously unseen data.

Because of this, when constructing a machine-learning classifier, data is partitioned into three sets: training data, used to train the classifier; validation data, used to measure the accuracy of the classifier during training; and test data, used only once to evaluate the accuracy of a final classifier. Measuring the "training loss" and "testing loss", averaged across the entire training or test inputs, allows detecting when overfitting has occurred due to overtraining, i.e., training for too many steps [46].

Figure 2 shows a typical example of the problem of overtraining (here the result of training a large language model on a small dataset, which quickly causes overfitting). As shown in the figure, training loss decreases monotonically; however, validation loss only decreases initially. Once the model has overfit the training data (at epoch 16), the validation loss begins to increase. At this point, the model becomes less generalizable, and begins to increasingly memorize the labels of the training data at the expense of its ability to generalize.

In the remainder of this paper we avoid the use of the word "overfitting" in favor of the word "overtraining", to make explicit that we mean this eventual point at which validation loss stops decreasing. None of our results are due to overtraining. Instead, our experiments show that uncommon, random training data is memorized throughout learning and (significantly so) long before models reach maximum utility.
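The overtraining point illustrated by Figure 2 can be located mechanically by tracking validation loss across epochs. The sketch below assumes two hypothetical helpers, train_one_epoch and evaluate, which are not part of the paper; it only shows how the point where validation loss stops decreasing is identified, and, as argued elsewhere in the paper, stopping there does not by itself prevent unintended memorization.

```python
def find_overtraining_point(model, train_data, val_data, epochs,
                            train_one_epoch, evaluate):
    """Train for a fixed number of epochs and report the epoch at which
    validation loss bottoms out; training past that point is 'overtraining'.

    train_one_epoch(model, data) -> mean training loss   (hypothetical helper)
    evaluate(model, data)        -> mean validation loss (hypothetical helper)
    """
    history = []
    for epoch in range(epochs):
        train_loss = train_one_epoch(model, train_data)
        val_loss = evaluate(model, val_data)
        history.append((epoch, train_loss, val_loss))
    # Validation loss decreases, then starts increasing; its minimum marks
    # the last epoch before overtraining begins (epoch 16 in Figure 2).
    best_epoch = min(history, key=lambda record: record[2])[0]
    return best_epoch, history
```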
3 Do Neural Nets Unintentionally Memorize?

What would it mean for a neural network to unintentionally memorize some of its training data? Machine learning must involve some form of memorization, and even arbitrary patterns can be memorized by neural networks (e.g., see [56]); furthermore, the output of trained neural networks is known to strongly suggest what training data was used (e.g., see the membership oracle work of [41]). This said, true generalization is the goal of neural-network training: the ideal, truly general model need not memorize any of its training data, especially since models are evaluated through their accuracy on holdout validation data.

Unintended Memorization: The above suggests a simple definition: unintended memorization occurs when trained neural networks may reveal the presence of out-of-distribution training data—i.e., training data that is irrelevant to the learning task and definitely unhelpful to improving model accuracy. Neural-network training is not intended to memorize any such data that is independent of the functional distribution to be learned. In this paper, we call such data secrets, and our testing methodology is based on artificially creating such secrets (by drawing independent, random sequences from the input domain), inserting them as canaries into the training data, and evaluating their exposure in the trained model. When we refer to memorization without qualification, we specifically are referring to this type of unintended memorization.

Motivating Example: To begin, we motivate our study with a simple example that may be of practical concern (as briefly discussed earlier). Consider a generative sequence model trained on a text dataset used for automated sentence completion—e.g., one such as might be used in a text-composition assistant. Ideally, even if the training data contained rare-but-sensitive information about some individual users, the neural network would not memorize this information and would never emit it as a sentence completion.

In particular, if the training data happened to contain text written by User A with the prefix "My social security number is . . .", one would hope that the exact number in the suffix of User A's text would not be predicted as the most-likely completion, e.g., if User B were to type that text prefix.

Unfortunately, we show that training of neural networks can cause exactly this to occur, unless great care is taken.

To make this example very concrete, the next few paragraphs describe the results of an experiment with a character-level language model that predicts the next character given a prior sequence of characters [4, 36]. Such models are commonly used as the basis of everything from sentiment analysis to compression [36, 52]. As one of the cornerstones of language understanding, it is a representative case study for generative modeling. (Later, in Section 6.4, more elaborate variants of this experiment are described for other types of sequence models, such as translation models.)

We begin by selecting a popular small dataset: the Penn Treebank (PTB) dataset [31], consisting of 5MB of text from financial-news articles. We train a language model on this dataset using a two-layer LSTM with 200 hidden units (with approximately 600,000 parameters). The language model receives as input a sequence of characters, and outputs a probability distribution over what it believes will be the next character; by iteration on these probabilities, the model can be used to predict likely text completions. Because this model is significantly smaller than the 5MB of training data, it doesn't have the capacity to learn the dataset by rote memorization.

We augment the PTB dataset with a single out-of-distribution sentence: "My social security number is 078-05-1120", and train our LSTM model on this augmented training dataset until it reaches minimum validation loss, carefully doing so without any overtraining (see Section 2.3).

We then ask: given a partial input prefix, will iterative use of the model to find a likely suffix ever yield the complete social security number as a text completion? We find the answer to our question to be an emphatic "Yes!", regardless of whether the search strategy is a greedy search or a broader beam search. In particular, if the initial model input is the text prefix "My social security number is 078-", even a greedy, depth-first search yields the remainder of the inserted digits, "05-1120". In repeating this experiment, the results held consistent: whenever the first two to four digits of the SSN were given, the model would complete the remaining seven to five digits.

Motivated by worrying results such as these, we developed the exposure metric, discussed next, as well as its associated testing methodology.

4 Measuring Unintended Memorization

Having described unintentional memorization in neural networks, and demonstrated by empirical case study that it does sometimes occur, we now describe systematic methods for assessing the risk of disclosure due to such memorization.

4.1 Notation and Setup

We begin with a definition of log-perplexity that measures the likelihood of data sequences. Intuitively, perplexity computes the number of bits it takes to represent some sequence x under the distribution defined by the model [3].

Definition 1. The log-perplexity of a sequence x is

    Px_θ(x_1 ... x_n) = −log2 Pr(x_1 ... x_n | f_θ) = Σ_{i=1}^n ( −log2 Pr(x_i | f_θ(x_1 ... x_{i−1})) )

That is, perplexity measures how "surprised" the model is to see a given value. A higher perplexity indicates the model is "more surprised" by the sequence; a lower perplexity indicates the sequence is more likely to be a normal sequence (i.e., perplexity is inversely correlated with likelihood).
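Definition 1 translates directly into code. The sketch below computes the log-perplexity of a token sequence given a model object with a next_token_probability(prefix, token) method; that interface is a hypothetical stand-in for whatever API a concrete language model exposes, not something defined in the paper.

```python
import math

def log_perplexity(model, tokens):
    """Px_theta(x_1...x_n) = sum_i ( -log2 Pr(x_i | f_theta(x_1...x_{i-1})) ).

    `tokens` is a sequence of characters or words; `model.next_token_probability`
    is a hypothetical method returning the conditional probability of `token`
    given the preceding `prefix`.
    """
    total = 0.0
    for i, token in enumerate(tokens):
        p = model.next_token_probability(tokens[:i], token)
        total += -math.log2(p)
    return total
```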
Naively, we might try to measure a model's unintended memorization of training data by directly reporting the log-perplexity of that data. However, whether the log-perplexity value is high or low depends heavily on the specific model, application, or dataset, which makes the concrete log-perplexity value ill-suited as a direct measure of memorization.

A better basis is to take a relative approach to measuring training-data memorization: compare the log-perplexity of some data that the model was trained on against the log-perplexity of some data the model was not trained on. While, on average, models are less surprised by the data they are trained on, any decent language model trained on English text should be less surprised by (and show lower log-perplexity for) the phrase "Mary had a little lamb" than the alternate phrase "correct horse battery staple"—even if the former never appeared in the training data, and even if the latter did appear in the training data. Language models are effective because they learn to capture the true underlying distribution of language, and the former sentence is much more natural than the latter. Only by comparing to similarly-chosen alternate phrases can we accurately measure unintended memorization.

Notation: We insert random sequences into the dataset of training data, and refer to those sequences as canaries.¹ We create canaries based on a format sequence that specifies how the canary sequence values are chosen randomly using randomness r, from some randomness space R. In format sequences, the "holes" (denoted □) are filled with random values; for example, the format s = "The random number is □□□□□□□□□" might be filled with a specific, random number, if R was the space of digits 0 to 9.

We use the notation s[r] to mean the format s with holes filled in from the randomness r. The canary is selected by choosing a random value r̂ uniformly at random from the randomness space. For example, one possible completion would be to let s[r̂] = "The random number is 281265017".

¹ Canaries, as in "a canary in a coal mine."
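The format-and-randomness notation can be made concrete with a short sketch. Here '#' marks the holes of the format (shown as blank boxes above), and the digit alphabet and nine-digit format mirror the running example; the function names are illustrative.

```python
import random

DIGITS = "0123456789"

def fill_format(fmt, r, hole="#"):
    """Return s[r]: the format string with each hole replaced, in order,
    by the corresponding symbol of the randomness r."""
    symbols = iter(r)
    return "".join(next(symbols) if ch == hole else ch for ch in fmt)

def sample_canary(fmt, alphabet=DIGITS, hole="#"):
    """Draw r uniformly at random from the randomness space R = alphabet^(number of holes)."""
    r = "".join(random.choice(alphabet) for ch in fmt if ch == hole)
    return fill_format(fmt, r, hole), r

# For example, s = "The random number is #########" might be completed as
# s[r^] = "The random number is 281265017" for r^ = "281265017".
canary, r_hat = sample_canary("The random number is #########")
```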

Table 1: Possible sequences sorted by log-perplexity. The inserted canary—281265017—has the lowest log-perplexity. The remaining most-likely phrases are all slightly-modified variants, a small edit distance away from the canary phrase.

    Highest-likelihood sequences (in increasing order of log-perplexity):
    The random number is 281265017
    The random number is 281265117
    The random number is 281265011
    The random number is 286265117
    The random number is 528126501
    The random number is 281266511
    The random number is 287265017
    The random number is 281265111

4.2 The Precise Exposure Metric

The remainder of this section discusses how we can measure the degree to which an individual canary s[r̂] is memorized when inserted in the dataset. We begin with a useful definition.

Definition 2. The rank of a canary s[r] is

    rank_θ(s[r]) = | { r′ ∈ R : Px_θ(s[r′]) ≤ Px_θ(s[r]) } |

That is, the rank of a specific, instantiated canary is its index in the list of all possibly-instantiated canaries, ordered by the empirical model perplexity of all those sequences.

For example, we can train a new language model on the PTB dataset, using the same LSTM model architecture as before, and insert the specific canary s[r̂] = "The random number is 281265017". Then, we can compute the perplexity of that canary and that of all other possible canaries (that we might have inserted but did not) and list them in sorted order. Table 1 shows the lowest-perplexity candidate canaries from such an experiment.² We find that the canary we insert has rank 1: no other candidate canary s[r′] has lower perplexity.

² The results in this list are not affected by the choice of the prefix text, which might as well have been "any random text." Section 5 discusses further the impact of choosing the non-random, fixed part of the canaries' format.

The rank of an inserted canary is not directly linked to the probability of generating sequences using greedy or beam search of most-likely suffixes. Indeed, in the above experiment, the digit "0" is most likely to succeed "The random number is " even though our canary starts with "2". This may prevent naive users from accidentally finding top-ranked sequences, but doesn't prevent recovery by more advanced search methods, or even by users that know a long-enough prefix. (Section 8 describes advanced extraction methods.)

While the rank is a conceptually useful tool for discussing the memorization of secret data, it is computationally expensive, as it requires computing the log-perplexity of all possible candidate canaries. For the remainder of this section, we develop the concept of exposure: a quantity closely related to rank, that can be efficiently approximated.

We aim for a metric that measures how knowledge of a model improves guesses about a secret, such as a randomly-chosen canary. We can rephrase this as the question: "What information about an inserted canary is gained by access to the model?" Thus motivated, we can define exposure as a reduction in the entropy of guessing canaries.

Definition 3. The guessing entropy is the number of guesses E(X) required in an optimal strategy to guess the value of a discrete random variable X.

A priori, the optimal strategy to guess the canary s[r], where r ∈ R is chosen uniformly at random, is to make random guesses until the randomness r is found by chance.
Therefore, we should expect to make E(s[r]) = (1/2)|R| guesses before successfully guessing the value r.

Once the model f_θ(·) is available for querying, an improved strategy is possible: order the possible canaries by their perplexity, and guess them in order of decreasing likelihood. The guessing entropy for this strategy is therefore exactly E(s[r] | f_θ) = rank_θ(s[r]). Note that this may not be the optimal strategy—improved guessing strategies may exist—but this strategy is clearly effective. So the reduction of work, when given access to the model f_θ(·), is given by

    E(s[r]) / E(s[r] | f_θ) = ((1/2)|R|) / rank_θ(s[r])

Because we are often only interested in the overall scale, we instead report the log of this value:

    log2 ( E(s[r]) / E(s[r] | f_θ) ) = log2 ( ((1/2)|R|) / rank_θ(s[r]) ) = log2 |R| − log2 rank_θ(s[r]) − 1

To simplify the math in future calculations, we re-scale this value for our final definition of exposure:

Definition 4. Given a canary s[r], a model with parameters θ, and the randomness space R, the exposure of s[r] is

    exposure_θ(s[r]) = log2 |R| − log2 rank_θ(s[r])

Note that |R| is a constant. Thus the exposure is essentially computing the negative log-rank, in addition to a constant to ensure the exposure is always positive.

Exposure is a real value ranging between 0 and log2 |R|. Its maximum can be achieved only by the most-likely, top-ranked canary; conversely, its minimum of 0 is achieved by the least-likely canary. Across possibly-inserted canaries, the median exposure is 1.

Notably, exposure is not a normalized metric: i.e., the magnitude of exposure values depends on the size of the search space. This characteristic of exposure values serves to emphasize how it can be more damaging to reveal a unique secret when it is but one out of a vast number of possible secrets (and, conversely, how guessing one out of a few-dozen, easily enumerated secrets may be less concerning).
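Definitions 2 and 4 can be computed exactly when the randomness space is small enough to enumerate. The sketch below reuses the hypothetical log_perplexity and fill_format helpers from the earlier sketches and brute-forces the rank over a tiny digit space; it illustrates the definitions rather than the paper's implementation (Section 4.3 gives efficient approximations for realistic spaces).

```python
import math
from itertools import product

def rank_and_exposure(model, fmt, r_inserted, alphabet="0123456789", holes=4):
    """Compute rank_theta(s[r]) and exposure_theta(s[r]) by enumerating all of R.

    Feasible only for tiny spaces: |R| = len(alphabet) ** holes candidates,
    each requiring one log-perplexity evaluation.
    """
    target = log_perplexity(model, fill_format(fmt, r_inserted))
    r_size = len(alphabet) ** holes
    # rank = number of candidate canaries at least as likely as the inserted one
    # (the inserted canary itself is counted, so rank >= 1).
    rank = sum(
        1
        for cand in product(alphabet, repeat=holes)
        if log_perplexity(model, fill_format(fmt, "".join(cand))) <= target
    )
    exposure = math.log2(r_size) - math.log2(rank)
    return rank, exposure
```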

4.3 Efficiently Approximating Exposure

We next present two approaches to approximating the exposure metric: the first a simple approach, based on sampling, and the second a more efficient, analytic approach.

Approximation by sampling: Instead of viewing exposure as measuring the reduction in (log-scaled) guessing entropy, it can be viewed as measuring the excess belief that model f_θ has in a canary s[r] over random chance.

Theorem 1. The exposure metric can also be computed as

    exposure_θ(s[r]) = −log2 Pr_{t∈R} [ Px_θ(s[t]) ≤ Px_θ(s[r]) ]

Proof:

    exposure_θ(s[r]) = log2 |R| − log2 rank_θ(s[r])
                     = −log2 ( rank_θ(s[r]) / |R| )
                     = −log2 ( |{ t ∈ R : Px_θ(s[t]) ≤ Px_θ(s[r]) }| / |R| )
                     = −log2 Pr_{t∈R} [ Px_θ(s[t]) ≤ Px_θ(s[r]) ]

This gives us a method to approximate exposure: randomly choose some small space S ⊂ R (with |S| ≪ |R|) and then compute an estimate of the exposure as

    exposure_θ(s[r]) ≈ −log2 Pr_{t∈S} [ Px_θ(s[t]) ≤ Px_θ(s[r]) ]

However, this sampling method is inefficient if only very few alternate canaries have lower entropy than s[r], in which case |S| may have to be large to obtain an accurate estimate.

Approximation by distribution modeling: Using random sampling to estimate exposure is effective when the rank of a canary is high enough (i.e., when random search is likely to find canary candidates s[t] where Px_θ(s[t]) ≤ Px_θ(s[r])). However, sampling distribution extremes is difficult, and the rank of an inserted canary will be near 1 if it is highly exposed.

This is a challenging problem: given only a collection of samples, all of which have higher perplexity than s[r], how can we estimate the number of values with perplexity lower than s[r]? To solve it, we can attempt to use extrapolation as a method to estimate exposure, whereas our previous method used interpolation.

[Figure 3: Skew normal fit to the measured perplexity distribution. The dotted line indicates the log-perplexity of the inserted canary s[r̂], which is more likely (i.e., has lower perplexity) than any other candidate canary s[r′]. Axes: frequency (×10^4) / skew-normal density versus log-perplexity of candidate s[r].]

To address this difficulty, we make the simplifying assumption that the perplexity of canaries follows a computable underlying distribution ρ(·) (e.g., a normal distribution). To approximate exposure_θ(s[r]), …
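Although the passage above is cut off, it has already stated the two ingredients of the approximation: estimate the probability in Theorem 1 from a random sample S ⊂ R, and, when the canary is so exposed that no sampled candidate is at least as likely, extrapolate by fitting a parametric distribution to the sampled perplexities (the skew-normal of Figure 3). The sketch below follows that outline, reusing the hypothetical helpers from the earlier sketches and assuming SciPy is available; the sample size and fallback constant are illustrative choices, not values from the paper.

```python
import math
import random
from scipy.stats import skewnorm

def approximate_exposure(model, fmt, r_inserted, alphabet="0123456789",
                         holes=9, n_samples=10_000):
    """Estimate exposure_theta(s[r]) from a random sample S of the space R."""
    target = log_perplexity(model, fill_format(fmt, r_inserted))
    samples = [
        log_perplexity(model, fill_format(
            fmt, "".join(random.choice(alphabet) for _ in range(holes))))
        for _ in range(n_samples)
    ]

    # Sampling estimate (Theorem 1): -log2 of the fraction of sampled
    # candidates that are at least as likely as the inserted canary.
    frac = sum(1 for s in samples if s <= target) / n_samples
    if frac > 0:
        return -math.log2(frac)

    # Extrapolation: fit a skew-normal to the sampled log-perplexities and use
    # its left-tail probability at the canary's log-perplexity (an assumption
    # in the spirit of the distribution-modeling discussion above).
    a, loc, scale = skewnorm.fit(samples)
    tail = skewnorm.cdf(target, a, loc=loc, scale=scale)
    return -math.log2(max(tail, 1e-300))
```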

