A New Decision Tree for Recognition of Persian Handwritten Characters (IJCA)


International Journal of Computer Applications (0975 – 8887), Volume 44, No. 6, April 2012

A New Decision Tree for Recognition of Persian Handwritten Characters

Mohammad et al.
Department of Computer Engineering, University of Isfahan, Iran

ABSTRACT
In this paper a binary decision tree based on neural networks, Support Vector Machines and K-Nearest Neighbor is presented for the recognition of Persian handwritten isolated digits and characters. In the proposed method, part of the training data is divided into two clusters using a clustering algorithm, and this process continues until each subtree reaches clusters with optimum clustering; the tree leaves are the final clusters. According to the clustering results, classifiers such as ANN and SVM can perform correctly, and so the decision tree can be built. Part of the test data is set aside as validation data, and at each node of the tree the classifier with the highest recognition accuracy on the validation data is selected. Recognition accuracy at 8, 20 and 33 clusters has been evaluated and compared with other existing methods. Recognition accuracies of 98.72% and 97.3% are obtained on the IFHCDB database for the 8-class and 20-class problems, respectively, and 98.9% accuracy is achieved on the HODA database.

General Terms
Persian handwritten recognition, Feature extraction, Supervised learning, Classification, Binary decision tree, Unsupervised learning.

Keywords
Support Vector Machine, K-nearest neighbor, Decision tree, Self-organizing map, Neural networks, Multi-class classification.

1. INTRODUCTION
Handwritten character recognition is one of the still-open problems in pattern recognition, with diverse applications such as reading checks, recognizing car plates, and reading handwritten postal codes.
In recent years the use of SVM and ANN has given good results in recognizing Persian handwritten digits and characters, with recognition rates above 97% and 95%, respectively [1-6]. Ebrahimpur et al. [1] used Loci features in the feature-extraction phase and a neural network with a mixture of experts in the classification phase, achieving 97.52% accuracy in recognizing Farsi handwritten digits. Abdleazeem and El-Sherif [7] used gradient features with two-hidden-layer neural networks and SVM for recognition of Arabic handwritten digits; the accuracy of their system was 99.2% with neural networks and 99.48% with SVM. Ziaratban et al. [8] divided the 32 Persian characters into groups, extracted features in the feature-extraction phase, and achieved 93.15% accuracy using neural networks. In [9], Sadri et al. present a system for recognizing handwritten courtesy amounts on Persian bank checks. They used different feature-extraction algorithms such as outer profiles, chain codes and zoning, and several recognition algorithms such as neural networks and SVM; their neural networks had two hidden layers with 74 neurons each, and their SVM used an RBF kernel. The best accuracy of their system, 96.5%, was obtained with neural networks.

In this paper we use different feature-extraction methods, and in the recognition phase we use SVM, KNN and neural networks. For SVM we use different methods to create a multi-class recognition system, such as OVA, OVO and SVM-BDT. In the clustering phase, similar characters were, as far as possible, placed into a single cluster; the number of clusters is usually chosen as 8 or 20. Table 1 shows printed and handwritten Persian characters in 32 classes.

The rest of the paper is organized as follows: Section 2 describes pre-processing, Section 3 introduces the feature-extraction methods, and Section 4 introduces the different classification methods we have used in this paper.
Section 5 explains the details of the proposed method, Section 6 presents experimental results and comparisons, and Section 7 concludes and outlines future work.

2. PREPROCESSING AND IMPROVING IMAGES
In this section we remove image noise with image-processing algorithms and normalize the images. The database images are in grayscale format. In the first step of the pre-processing phase we convert the grayscale images into binary images using Otsu's method [10]; if the images contain noise or discontinuities, we fix them using morphological algorithms [12]. In the last step, the images are normalized by the normalization method described in [11], and the image size is converted to 45×45 pixels. Figure 1 shows the pre-processing steps used in the paper.
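As a concrete sketch of these two steps, the snippet below implements Otsu's threshold [10] and a simple size normalization to 45×45 pixels in pure NumPy. The nearest-neighbor resampling stands in for the normalization method of [11], and the morphological repair step is omitted, so treat this as an illustration under those assumptions rather than the paper's exact pipeline.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance
    (Otsu's method [10]) for a uint8 grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2          # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def preprocess(gray, size=45):
    """Binarize with Otsu's threshold, then normalize to size x size
    pixels with nearest-neighbor sampling (a stand-in for the
    normalization method of [11])."""
    binary = (gray >= otsu_threshold(gray)).astype(np.uint8)
    rows = np.arange(size) * binary.shape[0] // size
    cols = np.arange(size) * binary.shape[1] // size
    return binary[np.ix_(rows, cols)]
```

Applied to a 90×77 grayscale character image (the IFH-CDB image size), `preprocess` yields a 45×45 binary image as described above.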

Table 1. Persian handwritten and printed characters
Printed characters: اآ ب پ ت ث ج چ ح خ د ذ ر ز ژ س ش ص ض ط ظ ع غ ف ق ک گ ل م ن و ه ی
Handwritten characters: [image samples in the original]

Figure 1. (a) An example of binarization using the Otsu method; (b) the structuring element for the morphological operation used to fill discontinued strokes; (c) normalization of characters.

3. FEATURE EXTRACTION
We used different algorithms for extracting features; the most important of them are briefly described below: zoning [13], outer profiles [14] and crossing counts [15].

3.1 Zoning
In this method the image is divided into 3×3 zones, and the ratio of the number of black pixels to the number of white pixels in each region is calculated. Figure 2 shows an example of this method.

Figure 2. An example of extracting zoning features.

3.2 Outer Profiles
Each of the views of a character (top, bottom, left and right) has a particular profile, and features can be extracted from each of them. For example, to extract features from the top view, we start from the first left pixel and go down until the first black pixel, store the obtained row number as the first feature, and do the same for the other columns. Figure 3 shows an example of this method.

3.3 Crossing Count
In this method the image is scanned line by line, and the number of color changes from black to white or vice versa is counted; the number of features extracted in this first phase is 45. The same is done for the columns, so that eventually a feature vector with 90 elements is formed. An example of this method is shown in Figure 4.

Figure 4. An example of extracting crossing-count features.

4. CLASSIFICATION AND RECOGNITION
The classifiers used in the recognition phase are described in the following subsections.

4.1 K-Nearest Neighbor
This classifier has no learning process and is able to solve multi-class classification problems. Each sample of the training set has a label that defines its class. When a sample comes from the test set for classification, the distance between the test sample and all training samples is measured.
Different distance measures can be used, such as the Euclidean, Hamming and correlation distances. Let x_i be an input sample with k features (x_i1, x_i2, ..., x_ik), let n be the total number of input samples (i = 1, 2, ..., n) and k the total number of features (j = 1, 2, ..., k). Eq. (1) shows the Euclidean distance between samples x_i and x_l (l = 1, 2, ..., n):

d(x_i, x_l) = sqrt((x_i1 - x_l1)^2 + (x_i2 - x_l2)^2 + ... + (x_ik - x_lk)^2)    (1)

Then the k samples (neighbors) with minimum distance from the test sample are selected, and the class label of the test sample is found by majority voting among the k neighbors. For example, with the 1-nearest-neighbor rule, if ω is the true class of a training sample, the predicted class of a test sample x is set equal to the true class ω of its nearest neighbor, where m_i is a nearest neighbor of x if d(m_i, x) = min_j {d(m_j, x)}.

Figure 3. An example of extracting outer profile features.

4.2 Support Vector Machine
The Support Vector Machine [16] is a set of supervised learning methods used for classification and regression. A data point in SVM can be seen as a vector in a p-dimensional space (a list of p numbers). The goal of SVM is to maximize the margin between the two classes, so it selects a hyperplane that has

maximum distance from the nearest data points on both sides of this separator. If such a hyperplane exists, it is known as the maximum-margin hyperplane. The decision function that separates the data is determined by a subset of the training examples, called the support vectors. Similarly, we must build the hyperplane so as to minimize the probability of false separation. The concept is illustrated in Figure 5.

Figure 5. Linear separating hyperplanes for the separable case (support vectors, margin width).

SVM is inherently a binary classifier and cannot be used directly in multi-class problems. The following subsections review methods for creating a multi-class classifier from SVMs.

4.2.2 OVA (One versus All)
In OVA training there is one binary SVM per class: one class is labeled +1 and the remaining N-1 classes are labeled -1. Figure 7 illustrates OVA training for a 6-class problem; the number of SVMs in the training phase equals the number of classes. In the test phase the input sample is given to all SVMs, each of which returns a positive or negative answer, and the class of the input image is chosen by the SVM with the highest confidence. The disadvantage of this method is that training is costly for a very large training set, because each class must be trained against the data of all other classes. The methods described next are better than OVA in terms of speed and accuracy.

4.2.3 DAGSVM (Directed Acyclic Graph)
In this method the training phase is the same as in OVO. In the test phase the input sample is first sent to the SVM for classes c1 and c2; if c1 is chosen, candidate c2 is removed completely and the other N-1 candidates remain, and the elimination is repeated until one class is left. The recognition rate of this method is the same as that of OVO, but its recognition speed is higher.

4.2.4 SVM-BDT (Support Vector Machines utilizing Binary Decision Tree)
In the method described in [18], the dataset is divided into two groups by a clustering algorithm.
Then the process continues until the data related to each class is placed in a group of its own. Figure 8 shows this clustering for a 7-class problem. (The number of classes is denoted by N for all methods. During the study, SVM with a polynomial kernel proved to have the best recognition accuracy compared with other kernels such as RBF and MLP.)

Figure 8. Clustering of the 7-class problem.

4.2.1 OVO (One versus One)
One method for constructing a multi-class Support Vector Machine is OVO [17]. In this method each class is trained against each of the N-1 other classes, so N(N-1)/2 SVMs are required for training. As shown in Figure 6, for N = 4 this means 6 SVMs. The disadvantage of this method is its low speed in the recognition phase, because the number of binary SVMs is large.

Figure 6. OVO decomposes a k-class classification problem into k(k-1)/2 binary classification problems.

Figure 7. OVA decomposes a k-class classification problem into k binary classification problems; each binary classifier distinguishes instances of one class from all other remaining classes.
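The OVO decomposition can be sketched generically as follows. Here `fit_pair` stands in for training a binary SVM on the samples of two classes; in the usage below it is replaced by a trivial nearest-center rule purely for illustration.

```python
from itertools import combinations
from collections import Counter

def train_ovo(classes, fit_pair):
    """OVO training: one binary classifier for every unordered pair of
    classes, N(N-1)/2 in total. fit_pair(a, b) stands in for training
    an SVM on the samples of classes a and b only."""
    return {(a, b): fit_pair(a, b) for a, b in combinations(classes, 2)}

def predict_ovo(models, x):
    """OVO test phase: every pairwise model votes for one of its two
    classes, and the class with the most votes wins."""
    votes = Counter(model(x) for model in models.values())
    return votes.most_common(1)[0][0]
```

For N = 4 classes, `train_ovo` builds 6 pairwise models, matching the N(N-1)/2 count in the text, and every test sample must be run through all of them, which is the low recognition speed noted above.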

According to Figure 8, classes 1, 5 and 7 have been placed in the first group, and classes 2, 3, 4 and 6 in the second group. The decision tree can now be constructed using SVMs; Figure 9 shows how this tree is built. Leaves are shown with triangles, and the number of internal nodes equals the number of SVMs that must be trained: a problem with N classes has N-1 internal nodes. This gives more speed and accuracy: in the training phase we need N-1 SVMs, and in the test phase at most log2(N) SVMs are evaluated. The lower number of SVMs in the test phase makes recognition faster.

Figure 9. Illustration of SVM-BDT.

4.3 Neural Network
A neural network consists of neurons that approximate the functions of the brain. An artificial neural network can be considered a parallel computing system consisting of many small computational units connected together; in machine learning and classification problems, an ANN is used as a classifier.

5. DECISION TREE BASED ON ANN, SVM AND KNN
In this method we use the SVM-BDT idea. The features extracted from each image are a combination of zoning and crossing counts, because these had the best performance; the total number of combined features is 315. Our problem has 33 classes. Initially the 33 classes are divided into two clusters by SOM clustering, and they are then trained by SVM and ANN; SVM with a polynomial kernel performed better than the other kernels, and the number of hidden layers and epochs of the neural network is selected separately at each node. The clustering procedure is repeated for the resulting clusters, so that the most similar classes stay together as long as possible; Figure 11 shows the decision tree for our problem. In this method N-1 classifiers are required. As described in Section 4, a decision tree is one way of building a multi-class classifier from support vector machines, but here ANN and KNN classifiers are also candidates at each node of the binary decision tree. We have tuned the parameters (iterations and hidden layers) of the networks. Table 2 shows the recognition rate of each classifier at each node of the tree.

Table 2. Result of ANN, SVM, and KNN classifiers on each node of the tree (%)
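A minimal sketch of the zoning and crossing-count features that make up this 315-element vector. Section 3 describes 3×3 zones; the 15×15 zone grid below is an assumption chosen only because it yields 225 zoning features, which together with the 90 crossing counts gives the stated 315 elements, so the exact grid size remains uncertain.

```python
import numpy as np

def zoning_features(binary, grid=15):
    """Zoning (Section 3.1): split the image into zones and return the
    ratio of black (1) to white (0) pixels per zone. The grid=15
    default is an assumption (see lead-in); the guard avoids division
    by zero for all-black zones."""
    h, w = binary.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            zone = binary[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            black = int(zone.sum())
            feats.append(black / max(zone.size - black, 1))
    return np.array(feats)

def crossing_count_features(binary):
    """Crossing counts (Section 3.3): per row and per column, the
    number of black/white transitions; 45 + 45 = 90 features for a
    45x45 image."""
    b = binary.astype(np.int8)                     # avoid uint8 wraparound
    rows = np.abs(np.diff(b, axis=1)).sum(axis=1)  # transitions per row
    cols = np.abs(np.diff(b, axis=0)).sum(axis=0)  # transitions per column
    return np.concatenate([rows, cols])

def combined_features(binary):
    """The 315-element combined feature vector used in this section."""
    return np.concatenate([zoning_features(binary),
                           crossing_count_features(binary)])
```

A horizontal bar in a 45×45 image, for instance, produces exactly two transitions on each row it crosses.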
In this paper we use a Multi-Layer Perceptron (MLP) network [19] trained with the scaled conjugate gradient algorithm [20].

4.4 SOM (Self-Organizing Map)
SOM [21] is an unsupervised learning algorithm that can be used for clustering problems. It consists of two layers, an input layer and a competition layer; the number of output neurons is usually equal to the number of clusters we want to form. Figure 10 shows the architecture of the SOM with two output neurons, with inputs f1, f2, f3, ..., f315, as used in this paper.

Figure 10. SOM with two neurons in the competition layer.
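The core of Section 5 (a two-neuron SOM splits the current set of classes in two, and the split repeats until every leaf holds one class) can be sketched as follows. The winner-take-all update without a neighborhood function, clustering class-mean vectors instead of raw samples, and initializing the weights from the first and last sample are all simplifying assumptions, and the training of the per-node SVM/ANN/KNN classifiers is omitted.

```python
import numpy as np

def som_split(X, epochs=20, lr=0.5):
    """Two-output-neuron SOM (Figure 10), reduced to winner-take-all:
    each sample pulls its closer unit toward it, with a decaying
    learning rate. Returns a 0/1 cluster label per row of X."""
    W = np.stack([X[0], X[-1]]).astype(float)   # init from data (assumption)
    for e in range(epochs):
        rate = lr * (1.0 - e / epochs)
        for x in X:
            winner = int(np.argmin(((W - x) ** 2).sum(axis=1)))
            W[winner] += rate * (x - W[winner])  # pull winner toward x
    return ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1).argmin(1)

def build_tree(class_means, labels):
    """Recursively split the class set in two with the SOM until each
    leaf holds one class (Figure 11); each internal node is where a
    binary SVM/ANN/KNN classifier would be trained, N-1 in total."""
    if len(labels) == 1:
        return labels[0]                         # leaf: a single class
    assign = som_split(np.array([class_means[c] for c in labels]))
    left = [c for c, a in zip(labels, assign) if a == 0]
    right = [c for c, a in zip(labels, assign) if a == 1]
    if not left or not right:                    # degenerate split: halve
        mid = len(labels) // 2
        left, right = labels[:mid], labels[mid:]
    return (build_tree(class_means, left), build_tree(class_means, right))
```

With four well-separated class means, the sketch produces a balanced tree whose first split groups the two nearby classes on each side, mirroring how similar character classes are kept together as long as possible.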

Figure 11. Decision tree created by SOM.

After training the classifiers for each node, we select validation data from the test data and test recognition at each node with KNN, SVM and ANN; we then choose the classifier with the best recognition accuracy as the main classifier of that node. According to Table 2, at most nodes the neural networks recognize better than KNN and SVM; SVM is more accurate than the other two classifiers at nodes 1, 27 and 32, and KNN only at nodes 3 and 5. Thus, for each node, the classifier with the highest accuracy is selected.

6. EXPERIMENTAL RESULTS
We used two databases, IFH-CDB [22] and HODA [23], to evaluate our method; they are described below, and each dataset is divided into training, validation and test sets.

The IFH-CDB database includes 52320 Persian isolated characters on which researchers test their methods. The images in the database were extracted from registration forms of training centers. The size of the images in this collection is 90×77 pixels, in 300 dpi grayscale format. We used 32400 samples for training, 16620 samples for testing, and 3300 samples as the validation set.

The HODA database was introduced by Khosravi and Kabir, who collected handwritten Persian digits from universities' entrance-exam forms. The database consists of 80,000 samples. Following [24], we consider it an easy database, so we selected the 9000 harder samples, namely those that were classified into more than three classes by the k-nearest-neighbor method with k = 6.

Some Persian characters such as 'س', 'ش', 'ص', 'ض' or 'ح', 'خ', 'ج', 'چ' are very similar, and only their dots distinguish them from each other. Because of this, researchers in previous studies have tried to put such characters into a single cluster and thereby reduce the number of classes. Figures 12 and 13 show groupings of the problem into 8 and 20 classes, respectively.

Figure 12. Similar classes of Persian characters grouped into 8 classes.

Figure 13.
Similar classes of Persian characters grouped into 20 classes.

In the following, the recognition accuracy of the proposed method is compared with that of other methods and previous research for 8, 20 and 33 classes. The number of test samples was 16620, and 3300 samples were used for validation. Table 3 shows the recognition accuracy obtained with the different methods.
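The per-node selection described above, together with the KNN rule of Section 4.1, can be sketched as follows. The validation accuracies in the usage example are invented for illustration and are not taken from Table 2.

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """k-nearest-neighbor rule of Section 4.1: rank training samples by
    Euclidean distance (Eq. 1) and take a majority vote over the k
    closest labels."""
    dists = np.sqrt(((train_X - x) ** 2).sum(axis=1))   # d(x_i, x) for all i
    nearest = np.argsort(dists)[:k]                     # indices of k smallest
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

def select_node_classifiers(val_accuracy):
    """Per-node model selection: keep, for every tree node, the
    classifier with the highest accuracy on the validation set (the
    role Table 2 plays in the paper). Input: {node: {name: accuracy}}."""
    return {node: max(accs, key=accs.get) for node, accs in val_accuracy.items()}
```

With hypothetical accuracies such as `{1: {'ANN': 99.1, 'SVM': 99.5, 'KNN': 98.7}}`, the selector keeps 'SVM' at node 1, consistent with the paper's report that SVM won at nodes 1, 27 and 32.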

Table 3. Results obtained with the different classifiers (%)

As shown in Table 3, the recognition accuracy of the proposed method is higher than that of the other methods, such as neural networks and support vector machines, in all clustering modes. Table 4 shows the confusion matrix when the number of classes is 8. We also tested our method on the HODA dataset; Table 5 shows the confusion matrix for the handwritten Persian digits in that dataset. Tables 6 and 7 compare the accuracy of the proposed method with methods used in recent research on the IFHCDB and HODA datasets, respectively.

Table 4. Confusion matrix for the 8-class problem of our method on the IFH-CDB dataset (%)

Table 5. Confusion matrix of the proposed method on the HODA dataset (%)

Table 6. Results of different algorithms on the IFHCDB dataset

Algorithm               Train size   Test size   No. of classes   Accuracy (%)
Alaei et al. [2]        36682        15338       8                98.1
Alaei et al. [2]        36682        15338       32               96.68
Ziaratban et al. [8]    11471        7647        8                93.15
Mozaffari et al. [25]   -            -           -                94.82
Dehghani et al. [26]    -            -           8                71.82
Mowlai et al. [27]      3200         2880        8                32.75
Dehghan et al. [28]     1600         1600        20               96.92
Shanbezade et al. [29]  1800         1200        32               87
Proposed method         36000        13320       8                98.72
Proposed method         36000        13320       20               97.3
Proposed method         36000        13320       33               94.82

Table 7. Results of different algorithms on the HODA dataset

Algorithm                     Train size   Test size   Accuracy (%)
Ebrahimpour et al. [1]        60,000       20,000      97.52
Ebrahimpour et al. [24]       6000         2000        95.3
Javidi and Sharifizadeh [30]  6000         2000        98.16
Javidi et al. [31]            6000         2000        97.73
Moradi et al. [32]            18000        2000        96
Proposed method               6000         2000        98.9

7. CONCLUSION
In this paper we introduced a binary decision tree for the recognition of Persian handwritten isolated characters; at each node of the tree, a binary classifier was used.
We combined the zoning and crossing-count methods in the feature-extraction phase, creating a feature vector with 315 elements. We employed SOM to create the binary decision tree. In the training phase we trained an SVM and an ANN for each node of the decision tree. A part of the test data, called the validation set, was used for selecting a classifier for each node: the classifier with the highest recognition accuracy at each node of the tree was selected. The recognition accuracies obtained for 8 and 20 clusters (98.72% and 97.3%) were higher than those of the previous methods. Most of the misclassified samples were related to clusters that are very similar to each other; this caused low accuracy when the number of clusters equaled the number of classes. In future work we plan to use more efficient feature-extraction methods, such as connected components, to extract the main body of the image and its dot information and thereby remove some of the confusion among similar classes.

8. REFERENCES
[1] R. Ebrahimpur, M. R. Moradian, A. Esmkhani et al., "Recognition of Persian handwritten digits using Characterization Loci and Mixture of Experts," International Journal of Digital Content Technology and its Applications, vol. 3, 2009.
[2] A. Alaei, P. Nagabhushan, and U. Pal, "A New Two-Stage Scheme for the Recognition of Persian Handwritten Characters," pp. 130-135.
[3] D. Ghosh, T. Dube, and A. P. Shivaprasad, "Script Recognition: A Review," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2142-2161, 2010.
[4] S. S. Ahranjany, F. Razzazi, and M. H. Ghassemian, "A very high accuracy handwritten character recognition system for Farsi/Arabic digits using Convolutional Neural Networks," pp. 1585-1592.
[5] C. L. Liu and C. Y. Suen, "A new benchmark on the recognition of handwritten Bangla and Farsi numeral characters," Pattern Recognition, vol. 42, no. 12, pp. 3287-3295, 2009.
[6] A. Borji, M. Hamidi, and F. Mahmoudi, "Robust Handwritten Character Recognition with Features Inspired by Visual Ventral Stream," Neural Processing Letters, vol. 28, no. 2, pp. 97-111, 2008.
[7] S. Abdleazeem and E. El-Sherif, "Arabic handwritten digit recognition," International Journal of Document Analysis and Recognition (IJDAR), vol. 11, no. 3, pp. 127-141, 2008.
[8] M. Ziaratban, K. Faez, and F. Allahveiradi, "Novel Statistical Description for the Structure of Isolated Farsi/Arabic Handwritten Characters," pp. 332-337.
[9] J. Sadri, Y. Akbari, M. J. Jalili et al., "A New System for Recognition of Handwritten Persian Bank Checks," pp. 925-930.
[10] N. Otsu, "A Threshold Selection Method from Gray-Level Histograms," IEEE Transactions on Systems, Man and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979.
[11] M. Cheriet, N. Kharma, C. L. Liu et al., Character Recognition Systems: A Guide for Students and Practitioners, John Wiley & Sons, 2007.
[12] R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing Using MATLAB, Gatesmark Publishing, 2009.
[13] S. V. Rajashekararadhya and P. V. Ranjan, "Zone-based Feature Extraction Algorithm for Handwritten Numeral Recognition of Kannada Script," pp. 525-528.
[14] J. Sadri, C. Y. Suen, and T. D. Bui, "Application of Support Vector Machines for Recognition of Handwritten Arabic/Persian Digits," in 2nd Conference on Machine Vision and Image Processing & Applications, Iran, 2003, pp. 300-307.
[15] H. Soltanzadeh and M. Rahmati, "Recognition of Persian handwritten digits using image profiles of multiple orientations," Pattern Recognition Letters, vol. 25, no. 14, pp. 1569-1576, 2004.
[16] V. Vapnik, Statistical Learning Theory, Wiley, New York, 1998.
[17] X. Peng and A. K. Chan, "Support vector machines for multi-class signal classification with unbalanced samples," pp. 1116-1119, vol. 2.
[18] G. Madzarov, D. Gjorgjevikj, and I. Ghorbev, "A Multi-Class SVM Classifier Utilizing Binary Decision Tree," pp. 77-81.
[19] P. Y. Simard, D. Steinkraus, and J. C. Platt, "Best practices for convolutional neural networks applied to visual document analysis," pp. 958-963.
[20] M. Martin Fodslette, "A scaled conjugate gradient algorithm for fast supervised learning," Neural Networks, vol. 6, no. 4, pp. 525-533, 1993.
[21] T. Kohonen, "The self-organizing map," Proceedings of the IEEE, vol. 78, no. 9, pp. 1464-1480, 1990.
[22] S. Mozaffari, K. Faez, F. Faradji et al., "A comprehensive isolated Farsi/Arabic character database for handwritten OCR research," pp. 385-389.
[23] H. Khosravi and E. Kabir, "Introducing a very large dataset of handwritten Farsi digits and a study on their varieties," Pattern Recognition Letters, vol. 28, no. 10, pp. 1133-1141, 2007.
[24] R. Ebrahimpur, A. Esmkhani, and F. Faradji, "Farsi handwritten digit recognition based on mixture of RBF experts," IEICE Electronics Express, vol. 7, no. 14, pp. 1014-1019, 2010.
[25] S. Mozaffari, K. Faez, and H. R. Kanan, "Recognition of isolated handwritten Farsi/Arabic alphanumeric using fractal codes," pp. 104-108.
[26] A. Dehghani, F. Shabini, and P. Nava, "Off-line recognition of isolated Persian handwritten characters using multiple hidden Markov models," pp. 506-510.
[27] A. Mowlaei and K. Faez, "Recognition of isolated handwritten Persian/Arabic characters and numerals using support vector machines," pp. 547-554.
[28] M. Dehghan and K. Faez, "Farsi handwritten character recognition with moment invariants," pp. 507-510, vol. 2.
[29] J. Shanbezadeh, H. Pezashki, and A. Sarrafzadeh, "Features Extraction from Farsi Hand Written Letters," pp. 35-40.
[30] M. M. Javidi and F. Sharifizadeh, "A Modified Decision Templates Method for Persian Handwritten Digit Recognition," Journal of American Science, vol. 8, no. 1, pp. 504-512, 2012.
[31] M. M. Javidi, R. Ebrahimpur, and F. Sharifizadeh, "Persian handwritten digits recognition: A divide and conquer approach based on mixture of MLP experts," International Journal of the Physical Sciences, vol. 6, no. 30, pp. 7007-7015, 2011.
[32] M. Moradi, M. A. Pourmina, and F. Razzazi, "FPGA-Based Farsi Handwritten Digit Recognition System," International Journal of Simulation Systems, Science & Technology, vol. 11, pp. 17-22, 2010.

