Personality Prediction Using Machine Learning Classifiers


Journal of Applied Technology and Innovation (e-ISSN: 2600-7304) vol. 5, no. 1 (2021)

Xin Yee Chin, Han Yang Lau, Man Pan Chow, Zhi Xin Chong, Zailan Arabee Abdul Salam
School of Computing, Asia Pacific University of Technology and Innovation (APU), Kuala Lumpur, Malaysia
cindychin0724@gmail.com, hans-lau2@hotmail.com, chrischow1995@gmail.com, zhixin20000803@gmail.com, zailan@apu.edu.my

Abstract— Personality is a fundamental basis of human behaviour. At its most basic, personality comprises the patterns of thought, feeling and behaviour that make an individual unique, and it directly or indirectly influences a person's interactions and preferences. This research uses different learning algorithms and data mining concepts to mine the data features and learn from their patterns. The aim of this experiment is to explore different algorithm options when modifying a personality prediction source code based on the logistic regression algorithm, and to find out whether the classification accuracy can be improved. The dataset used for training stores the five characteristics of different people known as the Big Five personality traits: openness, neuroticism, conscientiousness, agreeableness and extraversion. An overview and comparison are then provided of the different measures taken to reduce the issues faced by researchers in this field. The classification methods implemented are Support Vector Machine, Ridge Classifier, Naive Bayes, Logistic Regression and Voting Classifier. Testing results showed that Logistic Regression still outperformed the other methods.

Keywords— machine learning; personality prediction; Big Five personality; regression

I. INTRODUCTION

Personality is all about the characteristic patterns of feeling, behaving and thinking of an individual. Personality embraces a person's moods, opinions and attitudes, and it is expressed most clearly and understandably when interacting with someone. Personality provides the ability to distinguish one person from another, which can be observed in the workplace environment and elsewhere. Although there are many more ways to explain what personality exactly is, from the psychological point of view there are two main explanations. The first pertains to the consistency of differences between humans: in this sense, the study of personality focuses on classifying and identifying human psychological patterns. The second emphasises the qualities that make people alike and that help distinguish psychological man from other species. Personality theorists are accordingly directed to research those regularities among people that can usefully define the nature of man, and the other factors that influence the course of life. Understanding personality is important and useful. Personality gives people an idea of how leading and influencing communication can take place in certain conditions. For example, personality traits such as agreeableness and extraversion will mostly improve the chance of communication.
By contrast, people with personality traits such as high self-esteem are most likely to remain silent in the workplace. Personality therefore shows that it can be useful in many situations. In this experiment, machine learning is applied to judge and classify personality. In the previous source code, logistic regression was used to classify the Big Five personalities. The Big Five personality traits are openness, neuroticism, conscientiousness, agreeableness and extraversion. Classifying these personality traits is useful in many ways; one reason to classify personality is to check the suitability of an employee, whose personality is often tested in real time to determine which job position he or she fits particularly well.

In this research, different algorithms are added to further explore the dataset and test whether higher accuracy can be achieved. The classification methods added to the original code are Support Vector Machine, Ridge Classifier, Naïve Bayes, Logistic Regression and Voting Classifier, with Logistic Regression being the default algorithm of the source code.

Critical analysis was performed on similar projects and papers that used different methods. [1] used a multiclass Support Vector Machine (SVM) to perform personality classification based on handwriting. The personalities are Optimistic, Extrovert, Introvert, Sloppy and Energetic, making this a multiclass classification problem. A histogram of oriented gradients performs feature extraction on the handwriting data, and noise removal is performed using adaptive thresholding, followed by resizing to reorient the image. Multiclass SVM classification is then applied, using a polynomial kernel to map the feature space to a higher dimension, where a hyperplane is created that classifies the handwriting features into different classes. Even with a limited dataset and feature extraction, it achieves 80% accuracy.

The study in [2] used the position of the user's iris, based on Eye Accessing Cues from Neuro-Linguistic Programming, to predict personality. The iris position is an indicator of the mind's internal representational system, i.e. of which brain sections are currently active. A Support Vector Machine takes a rectangular crop of the eye of 9000 pixels as input. The visual, auditory and kinaesthetic (VAK) learning style is used because it best conveys the personality of the person. 215 images of eyes (features) are pre-processed with an eye detection procedure called the Cascade Object Detector, resized to a smaller size and classified with SVM. The results show that the Radial Basis Function kernel (standard Gaussian kernel) has the best accuracy at 84.9%, followed by the linear kernel at 83.7%, and that a train-test split of 75:25 gives the best accuracy at 84.9%, followed by 70:30 at 82.8%.

Allouch, Azaria and Azoulay [4] used a voting classifier algorithm to assist children with special needs with communication in social encounters by recognising possibly insulting or harmful sentences. The dataset consists of interviews with parents of ASD children, categorised into five categories. Audio is translated to text for text classification by the voting classifier (an ensemble method), which performs a voting protocol and chooses the result that the majority of algorithms suggest. The algorithms used in the voting classifier included random forest, SVM, ridge classifier, extra trees, a Bayesian inference method, MLP and k-nearest neighbours. The voting classifier achieved the best accuracy at 71.2%, followed by random forest at 71% and an embedded convolutional neural network at 69.6% [4]. Using a combined set of neural networks achieved even better results at 71.4% accuracy, the downside being a significantly prolonged training period and the need for more training data.

It can be concluded that SVMs with polynomial and RBF kernels are good algorithms with high accuracy that have been used successfully in other projects. The voting classifier is a promising method that can achieve the highest accuracy, with the downside of needing longer training time and more computational resources, but it is still faster than typical convolutional neural networks. It is therefore recommended that these algorithms and methods be incorporated into our project to further improve and iterate on our results.

II. MATERIALS AND METHODS

A. Hardware

For this experiment, the system was implemented on an Acer Swift 3 laptop. The technical specifications are listed in Table I.

TABLE I. SPECIFICATION OF ACER SWIFT 3

Specification       Description
Processor           2.30 GHz hexa-core Intel Core i5 (8th Gen) processor
Storage             512 GB solid state drive (SSD)
Memory              8 GB of LPDDR4 onboard memory
Operating System    Windows 10

B. Programming Language

The Python language is used in this research because Python is easy to understand, with its English-like syntax, and because it is free and open source. For example, the source code can be downloaded and modified because Python carries an OSI-approved open-source licence. Python also improves productivity, as it is a productive language in which developers can execute code line by line. Furthermore, almost any library can be imported, and the package index contains around 200,000 packages.

C. Software

In this research, we use the Anaconda Navigator to run the packages, as it allows us to manage conda packages, environments and channels.
Data scientists often work with different versions of packages and environments, and the Anaconda Navigator helps keep those versions separate. For instance, we can use the Navigator to import a library or package without typing conda commands. We can also modify the parameters of the source code and run it in the Navigator. The dataset used is described in Table II.

TABLE II. DATASET DESCRIPTION

S.No   Attribute           Values
1      Gender              Male/Female
2      Age                 17-28
3      Openness            1-8
4      Neuroticism         1-8
5      Conscientiousness   1-8
6      Agreeableness       1-8
7      Extraversion        1-8

Class label description:
No. of class labels: 5
Type: Nominal
Values: Extraverted, Serious, Responsible, Lively, Dependable

D. Original system

The steps involved in the system's work, sketched in code below, are:
- Import all the libraries needed.
- Load the train dataset from the train dataset file.
- Pre-process the train dataset:
  (a) Data transformation is performed by encoding the nominal data type (Male and Female) into binary numbers (0 and 1).
  (b) Among the dataset columns, the first 7 columns are used for training purposes and the remaining column for testing purposes.
- Train the Logistic Regression classifier using the train dataset.
- Load the test dataset from the test dataset file.
- Pre-process the test dataset:
  (a) Data transformation is performed by encoding the nominal data type (Male and Female) into binary numbers (0 and 1).
- Evaluate the performance of the model (Logistic Regression) by predicting the test dataset.
- Store the prediction output to a file.
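The following is a minimal sketch of these steps, assuming CSV files named 'train.csv' and 'test.csv' with the column layout of Table II; the file and column names are assumptions for illustration, as the paper does not publish its code:

```python
# Hedged sketch of the original pipeline described above.
# File names and the "Gender" column name are assumptions based on Table II.
import pandas as pd
from sklearn.linear_model import LogisticRegression

train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

# Data transformation: encode the nominal Gender attribute into binary numbers.
for df in (train, test):
    df["Gender"] = df["Gender"].map({"Male": 1, "Female": 0})

# The first 7 columns are features; the last column holds the personality label.
X_train, y_train = train.iloc[:, :7], train.iloc[:, 7]
X_test = test.iloc[:, :7]

# Train the Logistic Regression classifier (configuration from Section III.A).
model = LogisticRegression(multi_class="multinomial",
                           solver="newton-cg", max_iter=1500)
model.fit(X_train, y_train)

# Store the prediction output to a file.
pd.Series(model.predict(X_test)).to_csv("predictions.csv", index=False)
```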

III. RESULT AND DISCUSSION

In order to classify the personalities, a number of techniques are implemented. During the training phase, the models are trained using the features of the Big Five personality observations. In the validation phase, the models are used to predict the unlabelled features.

A. Logistic Regression

Logistic regression is one of the classification techniques; it is a common and useful regression method for solving multinomial classification problems, i.e. for handling multiple classes. For example, we can use logistic regression to predict personality or to predict cancer, as logistic regression is mostly used for predictive analysis. Multinomial regression presents the relationship between a nominal dependent variable and the independent variables.

As shown in Fig. 1, a network with C output neurons gathers its class scores in a vector s, from which the sigmoid and softmax outputs are generated. The sigmoid, also called the logistic function, squashes each element s_i of the vector into the range (0, 1). Softmax is a function, not a loss; it also squashes the vector into the range (0, 1), and the resulting components of the vector sum to 1 (Fig. 2). Finally, the target is stored in a vector t, with one positive class and C-1 negative classes. Since this is multi-class classification, each sample can belong to only one of the C classes, which is known as a single-label classification problem.

The classifier is configured as (multi_class='multinomial', solver='newton-cg', max_iter=1500).

multi_class: {'auto', 'ovr', 'multinomial'}. The 'multinomial' option minimises the loss over the whole probability distribution, even if the data is binary. The 'liblinear' solver cannot be applied to the multinomial option, as 'liblinear' accepts binary schemes only; with 'auto', binary data or the 'liblinear' solver leads to 'ovr' being selected, and 'multinomial' otherwise. Multinomial logistic regression can handle a dependent variable with more than two levels, and our assignment uses 'multinomial' because the predicted personality takes more than two nominal values.

solver: {'newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'}. There are five solvers for logistic regression. Only the 'newton-cg', 'lbfgs', 'sag' and 'saga' solvers can be applied to multiclass problems, as they handle the multinomial loss, and they support the 'l2' penalty or no penalty. We chose the 'newton-cg' solver in our assignment, a method that uses the exact Hessian matrix. Among all the solvers, 'newton-cg' generated a higher accuracy than the other four.

max_iter: int, default 100. This is the maximum number of iterations taken by the solver to converge. Our assignment uses 1500 iterations; changing the parameter to 1000 lowers the accuracy.

Fig. 1. Cross-entropy loss.
Fig. 2. Sigmoid and softmax.

In the multiclass case, the cross-entropy loss is used in the training algorithm when the option is set to 'multinomial':

$f(s_i) = \frac{1}{1 + e^{-s_i}}$    (3)

$f(s)_i = \frac{e^{s_i}}{\sum_{j}^{C} e^{s_j}}$    (4)

$CE = -\sum_{i}^{C} t_i \log(f(s)_i)$    (5)

For C = 2 classes, equation (5) reduces to $CE = -t_1 \log(f(s_1)) - (1 - t_1)\log(1 - f(s_1))$. The values $t_i$ and $s_i$ are the ground truth of the cross-entropy loss and the network score for each of the C classes, and $f(s_i)$ refers to the activations (sigmoid or softmax) that are applied to the scores before the CE loss is calculated.
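To make the loss concrete, here is a hedged NumPy sketch of equations (3)-(5); the score vector s and the one-hot target t are invented values for illustration only:

```python
# Sketch of eqs. (3)-(5): sigmoid, softmax and the multiclass cross-entropy
# loss minimised when multi_class='multinomial'. Inputs are made up.
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))      # eq. (3)

def softmax(s):
    e = np.exp(s - s.max())              # subtract max for numerical stability
    return e / e.sum()                   # eq. (4); components sum to 1

def cross_entropy(t, p):
    return -np.sum(t * np.log(p))        # eq. (5)

s = np.array([2.0, 1.0, 0.1, -1.0, 0.5]) # scores for the C = 5 classes
t = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # one positive class, C-1 negative

p = softmax(s)
print(p, cross_entropy(t, p))
```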
B. Support Vector Machine

Support vector machines are supervised learning algorithms mainly used for classification and for solving regression problems, in which roles they are also named support vector classification (SVC) and support vector regression (SVR) [5]. An SVM produces a hyperplane that splits the features into different domains in a higher dimension. The type of kernel implemented here is the polynomial kernel, whose formula is shown in Equation (6):

$K(X_1, X_2) = (a + X_1^T X_2)^b$    (6)

$X_1$ and $X_2$ are points in the input space, a determines the constant, while b sets the degree of the polynomial. After applying this formula, the points are mapped into a higher dimension Z, as in Equations (7) and (8):

$Z_a(X_a) = (1, a_1, a_2, a_1^2, a_2^2, a_1 a_2)$    (7)

$Z_b(X_b) = (1, b_1, b_2, b_1^2, b_2^2, b_1 b_2)$    (8)

To solve the SVM, we would have to take the dot product of each pair of data points and multiply the results; using the kernel trick, the dot product can instead be calculated simply by raising its value to a power [5]:

$Z_a^T Z_b = k(X_a, X_b) = (1 + X_a^T X_b)^2$    (9)

$Z_a^T Z_b = 1 + a_1 b_1 + a_2 b_2 + a_1^2 b_1^2 + a_2^2 b_2^2 + a_1 a_2 b_1 b_2$    (10)

A two-dimensional relationship can be handled with a support vector classifier, while in a higher-dimensional relationship a hyperplane is needed to split the features into different domains. If there are m dimensions, the equation of the hyperplane is shown in Equation (11):

$y = w_0 + w_1 x_1 + w_2 x_2 + \dots = w_0 + \sum_{i=1}^{m} w_i x_i = w_0 + w^T X = b + w^T X$    (11)

Here $w$ is the weight vector $(w_1, w_2, \dots, w_m)$, b is the bias term $w_0$, and X holds the variables. Thus we can conclude that $w^T x + b \ge 0$ for points with $d_i = +1$ and $w^T x + b < 0$ for points with $d_i = -1$. Finally, the algorithm creates a hyperplane (or a line) that separates the data into classes, and this hyperplane can classify the test dataset. SVM is really effective in higher dimensions. The input vectors that touch the margin in Fig. 3 are picked as the "tips" of the vectors.

Fig. 3. Hyperplane.
Fig. 4. Hyperplane with actual support vectors.
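As an illustrative sketch, a polynomial-kernel SVM matching equation (9) can be configured in scikit-learn as below; X_train, y_train and X_test are assumed to come from the pre-processing step described earlier:

```python
# Hedged sketch of the polynomial-kernel SVM of eqs. (6) and (9).
# scikit-learn's poly kernel computes (gamma * x1 @ x2 + coef0) ** degree,
# so degree=2, coef0=1, gamma=1 reproduces K(x1, x2) = (1 + x1.T @ x2)^2.
from sklearn.svm import SVC

poly_svm = SVC(kernel="poly", degree=2, coef0=1.0, gamma=1.0)
poly_svm.fit(X_train, y_train)           # assumed pre-processed data
predictions = poly_svm.predict(X_test)   # classify with the learned hyperplane
```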

C. Ridge Classification

Ridge classification is simply ridge regression with the response values converted into -1 and +1, after which the problem is treated as a normal regression task. It is a traditional machine learning algorithm that improves on least squares regression and is often used to handle multicollinearity. In the presence of multicollinear data, regression coefficients have significantly large standard errors, which reduces the prediction accuracy of the coefficients. A small constant is added to the original least squares estimator $\hat{\beta} = (X'X)^{-1} X'Y$, forming:

$\hat{\beta}_{ridge} = (X'X + \lambda I_p)^{-1} X'Y$    (12)

Ridge regression performs L2 regularisation by penalising the squared magnitude of the features' coefficients to reduce the error between actual and predicted observations. $\hat{\beta}_{ridge}$ is selected to minimise the penalised sum of squares:

$\sum_{i=1}^{n} \Big( y_i - \sum_{j=1}^{p} x_{ij} \beta_j \Big)^2 + \lambda \sum_{j=1}^{p} \beta_j^2$    (13)

which is equivalent to minimising

$\sum_{i=1}^{n} \Big( y_i - \sum_{j=1}^{p} x_{ij} \beta_j \Big)^2$    (14)

subject to, for some $c > 0$,

$\sum_{j=1}^{p} \beta_j^2 \le c$    (15)

Therefore, by placing a constraining/penalty term on certain parameters, ridge regression further minimises the residual sum of squares. A constant $\lambda$ is chosen as the penalty weight and is multiplied with the squared coefficient vector; the larger the coefficients, the more the optimisation function is penalised.
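As an illustrative sketch (not the paper's exact code), the closed-form estimator of equation (12) and its scikit-learn counterpart can be written as follows; the data variables are assumed from the earlier pre-processing:

```python
# Hedged sketch of ridge: eq. (12) in closed form with NumPy, and
# scikit-learn's RidgeClassifier, which converts class labels to -1/+1
# targets and solves the same penalised least-squares problem.
import numpy as np
from sklearn.linear_model import RidgeClassifier

def ridge_beta(X, y, lam):
    """beta_ridge = (X'X + lambda * I_p)^(-1) X'y, eq. (12)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

ridge = RidgeClassifier(alpha=1.0)   # alpha plays the role of lambda
ridge.fit(X_train, y_train)          # assumed pre-processed data
```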

D. Naive Bayes

Naïve Bayes is one of the statistical algorithms used for classification, based on Bayes' theorem. For example, a fruit may be considered an orange if it is orange in colour and 3 inches in diameter. Even though these features may depend on each other, each property contributes independently to the probability that the fruit is an orange, and this is the meaning of "naïve" [3].

The naïve Bayes algorithm is commonly applied to large amounts of data and is suitable for classifying various applications, for example text classification, recommender systems, spam filtering and so forth. It is fast and accurate, providing a high accuracy of prediction. A few naïve Bayes models have been created to perform different tasks according to their suitability. The equation below shows the formula for calculating the probability that a document occurs in a class, using multinomial naïve Bayes classification:

$P(t|d) = P(C) \times P(t_1|c) \, P(t_2|c) \, P(t_3|c) \cdots P(t_n|c)$    (16)

To calculate the probability that a document occurs in a class, the prior probability and the probability of the nth word are needed [6]. Equation (17) shows the formula for the prior probability P(C), and Equation (18) shows the formula for the probability of the nth word:

$P(C) = \frac{N_c}{N}$    (17)

$P(t_n|C) = \frac{count(t_n, c) + 1}{count(c) + |V|}$    (18)

These give the probability of a word being specific to a particular category in a set of documents. Equation (19) shows the formula for TF-IDF (term frequency-inverse document frequency), a statistical method for observing how relevant a word is to its document [6]:

$tfidf_t = f_{t,d} \times \log \frac{N}{df_t}$    (19)

The equation below shows the formula for the conditional probability, known as the likelihood: the probability of a word appearing in a document given that the document belongs to a class:

$P(t_n|C) = \frac{W_{ct} + 1}{\big( \sum_{W' \in V} W'_{ct} \big) + B}$    (20)

E. Voting Classifier

Also known as an ensemble method, a voting classifier is a wrapper for a set of different models that are trained in parallel; it then predicts the output class based on the highest probability or the most votes. Ensemble methods tend to produce less error and less over-fitting. There are two basic types of voting classifier.

Majority voting: in the simplest case, the class that receives the majority of the votes predicted by the individual models is the output class:

$\hat{y} = mode\{C_1(x), C_2(x), \dots, C_m(x)\}$    (21)

Soft voting: the probability vectors for each predicted class from all the classifiers are summed and averaged:

$\hat{y} = \arg\max_i \sum_{j=1}^{m} w_j \, p_{ij}$    (22)

where $w_j$ is the weight that can be assigned to the jth classifier. Soft voting is recommended only if each classifier is well calibrated. A voting classifier makes the most of the different algorithms and, if done right, should yield better performance than any single model. It is important that the set of classifiers is diverse, so that their errors do not aggregate.
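For illustration, a hard-voting wrapper in the spirit of equation (21), combining the same family of base models used in this paper, might be built as below; the member estimators and their parameters are illustrative choices, not the authors' exact configuration:

```python
# Hedged sketch of majority (hard) voting, eq. (21): each base model casts
# one vote and the modal class wins. Member settings are assumptions.
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression, RidgeClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC

voter = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(multi_class="multinomial",
                                  solver="newton-cg", max_iter=1500)),
        ("nb", MultinomialNB()),        # multinomial NB of eqs. (16)-(18)
        ("ridge", RidgeClassifier()),
        ("svm", SVC(kernel="rbf")),
    ],
    voting="hard",                      # majority vote; "soft" averages eq. (22)
)
voter.fit(X_train, y_train)             # assumed pre-processed data
```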
IV. DISCUSSION ON IMPLEMENTATION

After the pre-processing of the data is done, five types of algorithms are implemented. Nominal data were encoded into binary, where 1 represents male and 0 represents female. In total, 7 prediction models were appended to a model array: Logistic Regression, Multinomial Naive Bayes, Support Vector Machine, SVM with polynomial kernel, SVM with RBF kernel, Ridge Classifier and Voting Classifier. All the prediction models were trained using the train dataset. After training, the models were used to predict the test dataset, and the performance of each prediction model is printed as a classification report, as sketched below.
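A minimal sketch of this evaluation loop, assuming `models` holds the seven (name, estimator) pairs listed above and the train/test splits from pre-processing:

```python
# Sketch of the evaluation step: fit each model in the model array,
# predict the test set, and print precision/recall/F1/support per class.
from sklearn.metrics import classification_report

for name, model in models:              # assumed list of (name, estimator)
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    print(f"Classifier: {name}")
    print(classification_report(y_test, y_pred))
```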

V. RESULT

To evaluate the performance of the different classifiers and make the results more precise, several different metrics are used in the results: precision, recall, F1-score and accuracy.

Classifier: Logistic Regression
Classification Report:

TABLE III. PERFORMANCE METRIC FOR LOGISTIC REGRESSION

                F1-score   Support
Accuracy        0.86       315
Macro avg.      0.77       315
Weighted avg.   0.85       315

Classifier: Multinomial Naive Bayes
Classification Report:

TABLE IV. PERFORMANCE METRIC FOR MULTINOMIAL NAIVE BAYES

                F1-score   Support
Macro avg.      0.38       315
Weighted avg.   0.58       315

Classifier: Support Vector Machine (SVM)
Classification Report:

TABLE V. PERFORMANCE METRIC FOR SVM

                F1-score   Support
Weighted avg.   0.68       315

Classifier: SVM with Polynomial kernel
Classification Report:

TABLE VI. PERFORMANCE METRIC FOR SVM WITH POLYNOMIAL KERNEL

                F1-score   Support
Accuracy        0.57       315
Weighted avg.   0.48       315

Classifier: SVM with RBF kernel
Classification Report:

TABLE VII. PERFORMANCE METRIC FOR SVM WITH RBF KERNEL

                Precision  Recall   F1-score   Support
Accuracy                            0.58       315
Macro avg.      0.24       0.28     0.25       315
Weighted avg.   0.43       0.58     0.48       315

Classifier: Ridge Classifier
Classification Report:

TABLE VIII. PERFORMANCE METRIC FOR RIDGE CLASSIFIER

                F1-score   Support
Accuracy        0.83       315
Macro avg.      0.71       315
Weighted avg.   0.81       315

Classifier: Voting Classifier
Classification Report:

TABLE IX. PERFORMANCE METRIC FOR VOTING CLASSIFIER

                F1-score   Support
Accuracy        0.75       315
Weighted avg.   0.71       315

Fig. 5. Bar chart of accuracy for each algorithm.

Based on Fig. 5, Logistic Regression remains the best performing algorithm at 85%, followed by the Ridge Classifier at 83% and then the Voting Classifier at 75%. Logistic Regression continues to outperform the other models because, as a linear model, it does not overfit the training data. Ridge also performed well because it avoids overfitting by using L2 regularisation and can deal with multicollinearity. The Voting Classifier came third, proving that ensemble methods are not always necessarily better than the individual classifiers, especially when some of those classifiers are subpar. SVM with the RBF kernel (58%) and SVM with the polynomial kernel (57%) performed worst. It is also possible that the dataset is very close to being linearly separable, which favours the linear models.

From this study, we know that an ensemble can be worse than the individual models: for example, taking the average of the true model and a bad model would give a fairly bad model. There is no absolute guarantee that an ensemble model performs better than an individual model, but if many weak individual classifiers are built and combined, the overall performance should be better than that of an individual model. The real gains come from so-called unstable models such as decision trees, where each observation usually has an impact on the decision boundary. More stable models like SVMs do not gain as much, because resampling usually does not affect the support vectors much, although ensembling still tends to improve performance on average.

VI. CONCLUSION

In this research, we have studied 5 machine learning classifiers for classifying human personality. The main purpose of this research is to study the new algorithms and to improve on the accuracy of the original algorithm, Logistic Regression. The new algorithms added are Support Vector Machine, Naive Bayes Classifier, Ridge Classifier and Voting Classifier. Precision, recall, accuracy and F1-score are used to measure the performance of all the classifiers. The overall results show that Logistic Regression still outperformed all the newly added algorithms. Moreover, personality prediction is an abstract model, meaning that it describes a phenomenon, whereas concrete models have a direct analogue result in machine learning; abstract models therefore cannot obtain as high an accuracy as concrete models.

REFERENCES

[1] A. Chitlangia and G. Malathi, "Handwriting Analysis based on Histogram of Oriented Gradient for Predicting Personality Traits using SVM," Procedia Computer Science, vol. 165, pp. 384-390, 2019.
[2] S. Ramli and S. Nordin, "Personality Prediction Based on Iris Position Classification Using Support Vector Machines," Indonesian Journal of Electrical Engineering and Computer Science, vol. 9, no. 3, pp. 667-672, 2018.
[3] A. P. Wibawa, A. C. Kurniawan, D. M. P. Murti, R. P. Adiperkasa, S. M. Putra, S. A. Kurniawan and Y. R. Nugraha, "Naïve Bayes Classifier for Journal Quartile Classification," International Journal of Recent Contributions from Engineering, Science & IT (iJES), vol. 7, no. 2, pp. 91-99, 2019.
[4] M. Allouch, A. Azaria and R. Azoulay, "Detecting Sentences that May be Harmful to Children with Special Needs," 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI), pp. 1209-1213, 2019.
[5] N. Guenther and M. Schonlau, "Support vector machines," The Stata Journal, vol. 16, no. 4, pp. 917-937, 2016.
[6] Y. Artissa, I. Asror and S. A. Faraby, "Personality Classification based on Facebook status text using Multinomial Naïve Bayes method," Journal of Physics: Conference Series, vol. 1192, no. 1, p. 012003, IOP Publishing, 2019.

