Mitigating Uncertainty In Document Classification


Xuchao Zhang†‡, Fanglan Chen†, Chang-Tien Lu†, Naren Ramakrishnan†
†Discovery Analytics Center, Virginia Tech, Falls Church, VA, USA
‡NEC Laboratories America, Inc., Princeton, NJ, USA
†{xuczhang, fanglanc, ctlu, naren}@vt.edu, ‡xuczhang@nec-labs.com

Abstract

The uncertainty measurement of classifiers' predictions is especially important in applications such as medical diagnosis that need to ensure limited human resources can focus on the most uncertain predictions returned by machine learning models. However, few existing uncertainty models attempt to improve overall prediction accuracy when human resources are involved in the text classification task. In this paper, we propose a novel neural-network-based model that applies a new dropout-entropy method for uncertainty measurement. We also design a metric learning method on feature representations, which can boost the performance of dropout-based uncertainty methods by yielding smaller prediction variance in accurate prediction trials. Extensive experiments on real-world data sets demonstrate that our method can achieve a considerable improvement in overall prediction accuracy compared to existing approaches. In particular, our model improved the accuracy from 0.78 to 0.92 when 30% of the most uncertain predictions were handed over to human experts on the "20NewsGroup" data.

1 Introduction

Machine learning algorithms are gradually taking over from human operators in tasks such as machine translation (Bahdanau et al., 2014), optical character recognition (Mithe et al., 2013), and face recognition (Parkhi et al., 2015). However, some real-world applications require higher accuracy than the results achieved by state-of-the-art algorithms, which makes it difficult to apply these algorithms directly in certain scenarios. For example, a medical diagnosis system (van der Westhuizen and Lasenby, 2017) is expected to have very high accuracy to support correct decision making for medical practitioners. Although domain experts can achieve high performance in these challenging tasks, it is not always feasible to rely on limited and expensive human input for large-scale data sets. Therefore, if we have a model with 70% prediction accuracy, it is natural to ask what percentage of the data should be handed to domain experts to achieve an overall accuracy rate above 90%. To maximize the value of limited human resources while achieving desirable results, modeling uncertainty accurately is extremely important, so that domain experts can focus on the most uncertain results returned by machine learning models.

Most existing uncertainty models are based on Bayesian modeling, which is not only time-consuming but also unable to handle large-scale data sets. Deep neural networks (DNNs) have attracted increasing attention in recent years and have been reported to achieve state-of-the-art performance in various machine learning tasks (Yang et al., 2016; Iyyer et al., 2014). However, unlike probabilistic models, DNNs are still at an early stage of development in regard to providing the model uncertainty of their predictions. Work that addresses prediction uncertainty in DNNs commonly suffers from the following issues in the text classification task. Firstly, few researchers have sought to improve overall prediction performance when only limited human resources are available.
Unlike existing methods, which focus on the value of the uncertainty itself, this problem requires involving domain experts and therefore places the emphasis on the order of the uncertain predictions. For example, the importance of the distance between feature representations is neglected by the majority of existing models, but it is actually crucial for improving the order of uncertain predictions, especially during the pre-training of embedding vectors. Moreover, methods proposed for continuous feature spaces cannot be applied to discrete text data. For example, adversarial training is used in some uncertainty models (Goodfellow et al., 2014; Lakshminarayanan et al., 2017; Mandelbaum and Weinshall, 2017). However, due to its dependence on gradient-based methods to generate adversarial examples, this approach is not applicable to discrete text data.

In order to simultaneously address all these problems in existing methods, the work presented in this paper adopts a DNN-based approach that incorporates a novel dropout-entropy uncertainty measurement method along with metric learning on the feature representation to handle the uncertainty problem in the document classification task. The study's main contributions can be summarized as follows:

- A novel DNN-based text classification model is proposed to achieve higher model accuracy with limited human input. In this new approach, a reliable uncertainty model learns to identify the accurate predictions with smaller estimated uncertainty.
- Metric learning on the feature representation is designed to boost the performance of dropout-based uncertainty methods in the text classification task. Specifically, the shortened intra-class distance and enlarged inter-class distance can reduce the prediction variance and increase the confidence of the accurate predictions.
- A new dropout-entropy method based on the Bayesian approximation property of Dropout in DNNs is presented. Specifically, we measure the model uncertainty in terms of the information entropy of multiple dropout-based evaluations combined with a de-noising mask operation.
- Extensive experiments on real-world data sets demonstrate that our proposed approach consistently outperforms existing methods. In particular, the macro-F1 score can be increased from 0.78 to 0.92 by assigning 25% of the labeling work to human experts in a 20-class text classification task.

The rest of this paper is organized as follows. Section 2 reviews related work, and Section 3 provides a detailed description of our proposed model. The experiments on multiple real-world data sets are presented in Section 4. The paper concludes with a summary of the research in Section 5.

2 Related Work

The work related to this paper falls into two subtopics, described as follows.

2.1 Model Uncertainty

Existing uncertainty models are usually based on Bayesian modeling. Traditional Bayesian models such as the Gaussian Process (GP) can measure the uncertainty of a model; however, as a non-parametric model, the time complexity of GP grows with the size of the data, which makes it intractable in many real-world applications.

Conformal Prediction (CP) was proposed as a new approach to obtain confidence values (Vovk et al., 1999). Unlike traditional underlying algorithms, conformal predictors provide each of the predictions with a measure of confidence. Also, a measure of "credibility" serves as an indicator of how suitable the training data are for the classification task (Shafer and Vovk, 2008). Different from Bayesian-based methods, CP approaches obtain probabilistically valid results that rely only on the independent and identically distributed assumption. The drawback of CP methods is their computational inefficiency, which makes CP unsuitable for any model that requires a long training time, such as deep neural networks.

With the recent surge of research on DNNs, the associated uncertainty models have received a great deal of attention. Bayesian Neural Networks are a class of neural networks that are capable of modeling uncertainty (Denker and LeCun, 1990; Hernández-Lobato and Adams, 2015).
These models not only generate predictions but also provide the corresponding variance (uncertainty) of the predictions. However, as the number of model parameters increases, these models become computationally more expensive (Wang and Yeung, 2016). Lee et al. proposed a computationally efficient uncertainty method that treats deep neural networks as Gaussian processes (Lee et al., 2017). Due to its kernel-based design, however, it is not straightforward to apply this method to the deep network structures used for text classification. Gal and Ghahramani used dropout in DNNs as an approximate Bayesian inference in deep Gaussian processes (Gal and Ghahramani, 2016) to mitigate the problem of representing uncertainty in deep learning without sacrificing computational efficiency. Dropout-based methods have also been extended to various tasks such as computer vision (Kendall and Gal, 2017), autonomous vehicle safety (McAllister et al., 2017) and medical decision making (van der Westhuizen and Lasenby, 2017). However, few of these methods are specifically designed for text classification, and they lack consideration of improving the overall accuracy in scenarios where domain experts can be involved in the process.

2.2 Metric Learning

Metric learning (Xing et al., 2003; Weinberger et al., 2006) algorithms design distance metrics that capture the relationships among data representations. This approach has been widely used in various machine learning applications, including image segmentation (Gong et al., 2013), face recognition (Guillaumin et al., 2009), document retrieval (Xu et al., 2012), and collaborative filtering (Hsieh et al., 2017). Weinberger et al. proposed a large margin nearest neighbor (LMNN) method (Weinberger et al., 2006) that learns a metric to minimize the number of class impostors based on pull and push losses. However, as yet there have been no reports of work focusing specifically on mitigating prediction uncertainty. Mandelbaum and Weinshall (Mandelbaum and Weinshall, 2017) measured model uncertainty by the distance to the feature representations of the training data, but this makes the uncertainty measurement inefficient because it requires an iteration over the entire training data set. To the best of our knowledge, we are the first to apply metric learning to mitigate model uncertainty in the text classification task. We also demonstrate that metric learning can be applied to dropout-based approaches to improve their prediction uncertainty.

3 Model

In this section, we propose a DNN-based approach to predict document categories with high confidence for the accurate predictions and high uncertainty for the inaccurate predictions. The overall architecture of the proposed model is presented in Section 3.1. The technical details for the metric loss and the model uncertainty predictions are described in Sections 3.2 and 3.3, respectively.

Figure 1: Overall Architecture of Proposed Model

3.1 Model Overview

In order to measure the uncertainty of the predictions for the document classification task, we propose a neural-network-based model augmented with dropout-entropy uncertainty measurement and incorporating metric learning in its feature representation. The overall structure of the proposed model is shown in Figure 1. Our proposed model has four layers: 1) Input layer. The input layer is represented by the word embeddings of each word in the document. By default, all word vectors are initialized by GloVe (Pennington et al., 2014) word vectors pre-trained on Wikipedia with an embedding dimension of 200. 2) Sequence modeling layer. The sequence modeling layer extracts the feature representations from the word vectors. This is usually implemented by Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs). In this paper, we focus on a CNN implementation that utilizes 3 kernels with filter sizes of 3, 4 and 5, respectively, followed by a max pooling operation applied to the output of the sequence model.
3) Dropout layer. The convolutional layers usually contain a relatively small number of parameters compared to the fully connected layers. It is therefore reasonable to assume that CNN layers suffer less from overfitting, so Dropout is not usually used after CNN layers, as it achieves only a trivial performance improvement (Srivastava et al., 2014). However, since there is only one fully-connected layer in our model, we opted to add one Dropout layer after the CNN layer, not only to prevent overfitting, but also to measure prediction uncertainty (Gal and Ghahramani, 2016). The Dropout operation is randomly applied to the activations during the training and uncertainty measurement phases, but is not applied during the evaluation phase. 4) Output layer. The output is produced by a fully connected layer followed by a softmax. The loss function of our model is the combination of the cross-entropy loss of the prediction and the metric loss of the feature representation. We regard the output of the Dropout layer as the representation of the document and feed it into a metric loss function. The purpose here is to penalize large distances between feature representations in the same class and small distances between feature representations of different classes. The details of the metric loss function are described in Section 3.2.
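To make the four layers concrete, the following is a minimal sketch of how such an architecture could be wired together, assuming PyTorch. The kernel sizes (3, 4, 5), the 200-dimensional embeddings, and the placement of the single Dropout layer follow the description above, while the number of filters per kernel, the dropout rate, and all names are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNNWithUncertainty(nn.Module):
    """Embedding -> CNN (filter sizes 3/4/5) -> max pooling -> dropout -> FC + softmax.
    num_filters and dropout_p are illustrative; the paper does not state them."""

    def __init__(self, vocab_size, num_classes, embed_dim=200,
                 num_filters=100, dropout_p=0.5):
        super().__init__()
        # In practice the embedding would be initialized with pre-trained GloVe vectors.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, kernel_size=k) for k in (3, 4, 5)]
        )
        self.dropout = nn.Dropout(dropout_p)
        self.fc = nn.Linear(3 * num_filters, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        x = self.embedding(token_ids).transpose(1, 2)        # (batch, embed_dim, seq_len)
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        features = self.dropout(torch.cat(pooled, dim=1))    # document representation
        logits = self.fc(features)                           # cross-entropy is applied to these
        return logits, features
```

In this sketch, features is the document representation that the metric loss of Section 3.2 operates on, and the overall training objective would add that metric loss to the cross-entropy computed from logits.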

Figure 2: Feature representations with no metric learning (left) and metric learning (right).

3.2 Metric Learning on Text Features

For uncertainty learning in the text feature space, our purpose is to ensure that the Euclidean distance between intra-class instances is much smaller than that between inter-class instances. To achieve this, we use metric learning to train the desirable embeddings. Specifically, let r_i and r_j be the feature representations of instances i and j, respectively; then the Euclidean distance between them is defined as D(r_i, r_j) = \frac{1}{d} \| r_i - r_j \|_2^2, where d is the dimension of the feature representation.

Suppose the data instances in the training data contain n classes and are categorized into n subsets \{S_k\}_{k=1}^{n}, where S_k denotes the set of data instances belonging to class k. The intra-class loss penalizes large Euclidean distances between the feature representations in the same class, which can be formalized as Equation (1):

L_{intra}(k) = \frac{2}{|S_k|(|S_k| - 1)} \sum_{i, j \in S_k,\, i < j} D(r_i, r_j)    (1)

where |S_k| represents the number of elements in set S_k. The loss is the sum of the feature distances between each possible pair in the same class set, normalized by the number of unique pairs belonging to that class set.

The inter-class loss ensures large feature distances between different classes, which is formally defined as Equation (2):

L_{inter}(p, q) = \frac{1}{|S_p| \cdot |S_q|} \sum_{i \in S_p, j \in S_q} \big[ m - D(r_i, r_j) \big]_+    (2)

where m is a metric margin constant used to distinguish between the intra- and inter-classes, and [z]_+ = \max(0, z) denotes the standard hinge loss. If the feature distance between instances from different classes is larger than m, the loss is zero. Otherwise, we use the value of m minus the distance as the penalty loss, with a larger m representing a larger inter-class distance. This parameter usually varies when we use different word embedding methods. In our experiments, we found that a small m is normally needed when the word embedding is initialized by a pre-trained word vector method such as GloVe (Pennington et al., 2014); a larger m is required if word vectors are initialized randomly.

The overall metric loss function, defined in Equation (3), combines the intra-class loss and inter-class loss over all the classes:

L_{metric} = \sum_{k=1}^{n} \Big( L_{intra}(k) + \lambda \sum_{i \neq k} L_{inter}(k, i) \Big)    (3)

where \lambda is a pre-defined parameter that weights the importance of the intra- and inter-class losses. We set \lambda to 0.1 by default.
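As a concrete reading of Equations (1)-(3), the following is a minimal sketch, assuming PyTorch and a mini-batch approximation of the class subsets S_k, of how the metric loss could be computed from a batch of document representations. The default margin and lambda mirror values stated elsewhere in the paper (m = 0.5 for the Dropout-plus-metric baseline, λ = 0.1), and the function name and batch-wise formulation are illustrative assumptions, not the authors' released code.

```python
import torch

def metric_loss(features, labels, margin=0.5, lam=0.1):
    """Batch-wise sketch of Equations (1)-(3): small intra-class distances,
    inter-class distances pushed beyond the margin m."""
    d = features.size(1)
    # D(r_i, r_j) = (1/d) * ||r_i - r_j||_2^2 for every pair in the batch
    dist = torch.cdist(features, features, p=2).pow(2) / d

    loss = features.new_zeros(())
    classes = labels.unique()
    for k in classes:
        idx_k = (labels == k).nonzero(as_tuple=True)[0]
        if idx_k.numel() > 1:
            # Equation (1): mean distance over the unique same-class pairs
            n_k = idx_k.numel()
            intra = dist[idx_k][:, idx_k].triu(diagonal=1).sum()
            loss = loss + 2.0 * intra / (n_k * (n_k - 1))
        for q in classes:
            if q == k:
                continue
            idx_q = (labels == q).nonzero(as_tuple=True)[0]
            # Equation (2): hinge on the margin for every cross-class pair
            inter = (margin - dist[idx_k][:, idx_q]).clamp(min=0).mean()
            loss = loss + lam * inter            # Equation (3): weighted by lambda
    return loss
```

During training this term would simply be added to the cross-entropy loss of the classifier.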

Figure 2 illustrates an example of a three-class feature representation in two dimensions. The left-hand figure shows the feature distribution trained with no metric learning. Clearly, the intra-class feature distances are large, sometimes even exceeding the inter-class distances near the decision boundary. However, the features trained by metric learning, shown in the right-hand figure, exhibit clear gaps between the inter-class predictions. This means the predictions with dropout are less likely to fall into an inaccurate class, which also reduces the variance across dropout prediction trials. The example shown in Figure 2 has eight dropout predictions, three of which are classified into an inaccurate class when no metric learning is applied, compared to only one inaccurate prediction with metric learning.

3.3 Uncertainty Measurement

Bayesian models such as the Gaussian process (Rasmussen, 2004) provide a powerful tool to identify low-confidence regions of the input space. Recently, Dropout (Srivastava et al., 2014), as used in deep neural networks, has been shown to serve as a Bayesian approximation that represents the model uncertainty in deep learning (Gal and Ghahramani, 2016). Based on this work, we propose a novel information-entropy-based dropout method to measure the model uncertainty in combination with metric learning for text classification. Given an input data instance x*, we assume the corresponding output of our model is y*. The output computed by our model incorporates a dropout mechanism in its evaluation mode, which means the activations of the intermediate layers with Dropout are not reduced by a factor. When we repeat the process k times, we obtain the output vector y* = {y*_1, ..., y*_k}. Note that the outputs are not all the same, since the output here is generated by applying dropout after the feature representation layer in Figure 1.

Figure 3: Example of the dropout-entropy method.

Given the output y* of k trials with Dropout, our proposed uncertainty method has the following four steps, as shown in Figure 3: (1) Bin count. We use a bin count to calculate the frequency of each class. For example, if class 2 appears 24 times in the dropout output vector y*, the bin count for class 2 is 24. (2) Mask. We use the mask step to remove random noise from the frequency vector. In this step, we keep the original values of the largest m elements and set the remaining ones to zero. The value of m is usually chosen to be 2/3 of the total class number when there are more than 10 classes; otherwise, we skip this step. (3) Normalization. We use the normalization step to calculate the probability of each class. (4) Information entropy. The information entropy is calculated by u = -\sum_{i=1}^{c} p_k(i) \log p_k(i), where p_k(i) represents the frequency probability of the i-th class over the k trials and c is the number of classes. We use the entropy value as the uncertainty score, so that the smaller the entropy value, the more confident the model is in its output. Take the case in Figure 3 as an example. When the frequency of class 2 is 24, the entropy is 1.204.
If the outputs of the 50 trials mostly belong to class 2, the entropy becomes 0.401, which means that the model is less uncertain about the predictive results.
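The four steps map directly onto a short routine. Below is a minimal sketch, assuming PyTorch and a model that returns (logits, features) as in the Section 3.1 sketch; the function name, the batch-of-one interface, and the use of natural logarithms are assumptions made for illustration rather than details taken from the paper.

```python
import torch

def dropout_entropy(model, token_ids, num_classes, k=50, mask_frac=2/3):
    """Dropout-entropy score for a single document (batch of one):
    bin count -> mask -> normalization -> information entropy."""
    model.train()                                # keep dropout active during the trials
    counts = torch.zeros(num_classes)
    with torch.no_grad():
        for _ in range(k):                       # k stochastic forward passes
            logits, _ = model(token_ids)
            counts[int(logits.argmax(dim=1))] += 1   # bin count of the predicted class
    if num_classes > 10:                         # mask step: keep only the m largest bins
        m = int(mask_frac * num_classes)
        threshold = counts.topk(m).values.min()
        counts = torch.where(counts >= threshold, counts, torch.zeros_like(counts))
    probs = counts / counts.sum()                # normalization
    probs = probs[probs > 0]
    return float(-(probs * probs.log()).sum())   # entropy: smaller = more confident
```

The resulting scores can then be sorted so that the most uncertain fraction of predictions is routed to human experts, as evaluated in Section 4.1.2.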

4 Experiment

In this section, the performance of the proposed model uncertainty approach is evaluated on multiple real-world document classification data sets. After an introduction of the experiment settings in Section 4.1, we compare the performance achieved by the proposed method against that of existing state-of-the-art methods, along with an analysis of the parameter settings and metric learning, in Section 4.2. Due to space limitations, the detailed experiment results on different sequence models can be accessed in the full version (https://xuczhang.github.io/papers/naacl19 uncertainty), and the source code is also available for download.

Table 1: Uncertainty Scores for the 20 NewsGroup Dataset (20 Categories)

4.1 Experimental Setup

In our experiments, all word vectors are initialized by pre-trained GloVe (Pennington et al., 2014) word vectors by default. The word embedding vectors are pre-trained on Wikipedia 2014 with a word vector dimension of 200. We trained all the DNN-based models with a batch size of 32 samples, a momentum of 0.9, and an initial learning rate of 0.001, using the Adam (Kingma and Ba, 2014) optimization algorithm.

4.1.1 Datasets and Labels

We conducted experiments on three publicly available datasets: 1) 20 Newsgroups (Lang, 1995; http://qwone.com/~jason/20Newsgroups/): a collection of 20,000 documents, partitioned evenly across 20 different newsgroups; 2) IMDb Reviews (Maas et al., 2011): 50,000 popular movie reviews with binary positive or negative labels from the IMDb website; and 3) Amazon Reviews (McAuley and Leskovec, 2013): a collection of reviews from Amazon spanning the period from May 1996 to July 2013, from which we used the Sports and Outdoors category, with 272,630 data samples and rating labels from 1 to 5. For all three data sets, we randomly selected 70% of the data samples as the training set, 10% as the validation set and 20% as the test set.

4.1.2 Evaluation Metrics

In order to answer the question "What percentage of data should be transferred to domain experts to achieve an overall accuracy rate above 90%?", we measure the classification performance under various uncertainty ratios. Specifically, assuming the entire testing set S has size n and the uncertainty ratio is r, we remove the most uncertain samples S_r from S based on the uncertainty ratio r, where the size of the uncertainty set S_r is r · n. We assume the uncertain samples S_r handed to domain experts achieve 100% accuracy. If the uncertainty ratio r equals 0, the model performs without any uncertainty measurement concerns.

For the binary classification task, we use the accuracy and F1-score to measure the classification performance on the testing set S \ S_r for different uncertainty ratios r. Similarly, for multi-class tasks, we use the micro-F1 and macro-F1 scores with the same settings as for the binary classification.
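As a concrete reading of this protocol, the following is a minimal sketch, assuming NumPy and scikit-learn, of scoring only the kept set S \ S_r after the most uncertain fraction r has been removed (those samples being assumed to go to human experts); the function name and the joint micro/macro reporting are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import f1_score

def f1_at_uncertainty_ratio(y_true, y_pred, uncertainty, ratio):
    """Score only the kept set S \\ S_r: the `ratio` most uncertain predictions
    are removed (assumed to be handled by human experts)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    keep = np.argsort(uncertainty)[: n - int(ratio * n)]   # most confident predictions
    micro = f1_score(y_true[keep], y_pred[keep], average="micro")
    macro = f1_score(y_true[keep], y_pred[keep], average="macro")
    return micro, macro
```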

Table 2: Uncertainty Scores for the IMDb Dataset (2 Categories)

Table 3: Uncertainty Scores for the Amazon Dataset (5 Categories)

4.1.3 Comparison Methods

The following methods are included in the performance comparison: 1) Penultimate Layer Variance (PL-Variance). Activations before the softmax layer in a deep neural network reveal the uncertainty of the prediction (Zaragoza and d'Alche-Buc, 1998). As a baseline method, we use the variance of the output of the fully connected layer in Figure 1 as the uncertainty weight. 2) Deep Neural Networks as Gaussian Processes (NNGP) (Lee et al., 2017). This approach applies a Gaussian process to perform Bayesian inference for deep neural networks, with a computationally efficient pipeline used to compute the covariance function of the Gaussian process. The default parameter settings in the released source code (https://github.com/brain-research/nngp) were applied in our experiments. 3) Distance-based Confidence (Distance) (Mandelbaum and Weinshall, 2017). This method assigns confidence scores based on the data embedding compared to the training data. We set its nearest neighbor parameter k = 10. 4) Dropout (Gal and Ghahramani, 2016). Here, dropout training in DNNs is treated as an approximation of Bayesian inference in deep Gaussian processes. We set the sample number T to 100 in our experiments. 5) Dropout + Metric. In order to validate the effectiveness of our metric learning, we applied our proposed metric learning method to the Dropout method. The metric margin m and coefficient λ were set to 0.5 and 0.1, respectively.

Table 4: Embedding vs. No Pre-trained Embedding

Figure 4: Prediction performance for different metric margin settings.

6) Our proposed method. We evaluate our proposed method in two different settings, Dropout-Entropy alone (DE) and Dropout-Entropy with metric learning (DE + Metric). Here, we set the sample number T = 100 and the coefficient λ = 0.1, while the metric margin m may vary across data sets.

4.2 Experimental Results

This subsection presents the results of the uncertainty performance comparison and the analysis of the metric learning and parameter settings.

4.2.1 Uncertainty Results

Table 1 shows the Micro-F1 and Macro-F1 scores for ratios of eliminated uncertain predictions ranging from 10% to 40% on the 20NewsGroup data set. To demonstrate its effect, metric learning was also applied to the baseline method Dropout and to our proposed method DE. The improvement ratios compared to the results with no uncertainty elimination, shown in the 0% column, are presented after the F1 scores. Based on these results, we can conclude that: 1) Our proposed method, DE + Metric, significantly improves both the Micro- and Macro-F1 scores when a portion of uncertain predictions is eliminated. For example, the Micro-F1 improves from 0.78 to 0.92 when 30% of the uncertain predictions are eliminated. 2) Comparing the results obtained by DE and DE + Metric, metric learning significantly improves the results for different uncertainty ratio settings. Similar results can be observed when comparing Dropout and Dropout + Metric. For example, the Micro-F1 scores for Dropout + Metric are around 5% better than those of the Dropout method alone, boosting them from 0.851 to 0.892 at a 30% uncertainty ratio. 3) The DE method outperforms all the other methods when metric learning is not applied. Specifically, DE is around 4% better than the Dropout method in terms of the Micro-F1 score.

The results for the IMDb and Amazon data sets are presented in Table 2 and Table 3. When comparing our proposed model's performance across the three data sets, we found that greater improvements are achieved on multi-class rather than binary classification data sets. One possible explanation is that a comparatively large portion of multi-class features are close to the decision boundary in the feature space. Through the metric learning strategy of minimizing the intra-class distance while maximizing the inter-class distance, the feature distance between inter-class predictions is enlarged and the quality of the embeddings is greatly enhanced.

4.2.2 Analysis of Metric Learning

The impact of metric learning on the feature representation is analyzed in this section. Figure 5 shows the 300-dimension feature representations for the 20 NewsGroup testing data set, with Figure 5(a) presenting the features trained without metric learning and Figure 5(b) those trained by metric learning with a margin parameter m = 10. We used the t-SNE algorithm (Maaten and Hinton, 2008) to visualize the high-dimensional features in the form of two-dimensional images. From the results, we can clearly see that the distances between the inter-class features are significantly enlarged compared to the features trained without metric learning shown in Figure 5(a). This enlarged inter-class spacing means that dropout-based uncertainty methods have smaller prediction variances when their dropout prediction trials are accurate.

Figure 5: Feature visualization of the 20 NewsGroup testing data set in two dimensions by the t-SNE algorithm: (a) no metric learning; (b) metric margin m = 10.
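For reference, a visualization like Figure 5 can be produced with an off-the-shelf t-SNE implementation. The sketch below assumes scikit-learn and Matplotlib; the function name and plotting choices are illustrative rather than the authors' code.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_feature_space(features, labels, title):
    """Project high-dimensional document features to 2-D with t-SNE and
    color the points by class, similar in spirit to Figure 5."""
    coords = TSNE(n_components=2, random_state=0).fit_transform(np.asarray(features))
    plt.scatter(coords[:, 0], coords[:, 1], c=labels, s=5, cmap="tab20")
    plt.title(title)
    plt.show()
```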

Impact of Word Embedding. We also analyzed the impact of our proposed methods on different word embedding initialization methods, including random initialization and pre-trained GloVe word vectors in 200 dimensions. Table 4 shows the Micro-F1 results for the different uncertainty ratios. We can observe that: 1) The performance of the GloVe-based methods is around 15% better than that of the randomly initialized methods across the different uncertainty ratios. 2) Metric learning based on a GloVe initialization generally outperforms metric learning based on a random initialization. For instance, the F1 score of GloVe rises by 0.29 when the uncertainty ratio is 20%, while for the random method it only increases by 0.04.

4.2.3 Parameter Analysis

The impact of the metric margin and word embeddings

