ISSN: 2319-8753
International Journal of Innovative Research in Science, Engineering and Technology
(An ISO 3297: 2007 Certified Organization)
Vol. 3, Issue 4, April 2014

A Novel Approach for Face Pattern Identification and Illumination

Viniya. P (1), Peeroli. H (2)

(1) PG Scholar, Applied Electronics, Mohamed Sathak Engineering College, Kilakarai, Tamil Nadu, India
(2) HOD, Department of ECE, Mohamed Sathak Engineering College, Kilakarai, Tamil Nadu, India

Abstract—Extracting facial features is a key step in facial expression recognition (FER). Inaccurate feature extraction often results in erroneous categorization of facial expressions. In robotic applications especially, environmental factors such as illumination variation may cause an FER system to extract features inaccurately. In this paper, we propose a robust facial feature point extraction method to recognize facial expressions under various lighting conditions. Before facial features are extracted, a face is localized and segmented from a digitized image frame. The face preprocessing stage consists of face normalization and feature region localization steps so that facial features can be extracted efficiently. Once regions of interest corresponding to the relevant features are determined, Gabor jets based on the Gabor wavelet transform are applied to extract the facial points. Gabor jets are more stable and reliable than gray-level values, which suffer from ambiguity as well as illumination variation when representing local features.
The proposed algorithm has two advantages: only one face training image is needed to train the classifier, using facial block features, and with lower data dimensions the proposed system is more computationally efficient. The main aim of this work is to improve the accuracy of the face recognition system by using multiple training images.

Keywords—Facial expression recognition, AdaBoost classifier, Dynamic Bayesian Network, Gabor transform.

I. INTRODUCTION

The tracking and recognition of facial activities from images or videos have attracted great attention in the computer vision field. The recovery of facial activities in an image sequence is an important and challenging problem. In recent years, many computer vision techniques have been developed to track or recognize facial activities at three levels. First, at the bottom level, facial feature tracking, which usually detects and tracks prominent facial feature points (i.e., the facial landmarks) surrounding facial components (e.g., mouth, eyebrows), captures the detailed face shape information.

Copyright to IJIRSET    www.ijirset.com

Second, facial action recognition, i.e., recognizing the facial Action Units (AUs) defined in the Facial Action Coding System (FACS), tries to recognize meaningful facial activities (e.g., lid tightener, eyebrow raiser). At the top level, facial expression analysis attempts to recognize facial expressions that represent human emotional states. Facial feature tracking, AU recognition, and expression recognition represent the facial activities at three levels, from local to global, and they are interdependent problems.

For example, facial feature tracking can be used in the feature extraction stage of expression/AU recognition, and expression/AU recognition results can provide a prior distribution for the facial feature points. However, most current methods only track or recognize the facial activities at one or two levels, and track them separately, either ignoring their interactions or limiting the interaction to one direction.
In addition, the estimates obtained by image-based methods at each level are always uncertain and ambiguous because of noise, occlusion, and the imperfect nature of vision algorithms. In this paper, a Dynamic Bayesian Network (DBN) is used to capture the facial interactions at the different levels. In particular, not only can facial feature tracking contribute to expression/AU recognition, but expression/AU recognition also helps to further improve facial feature tracking performance.

FACE RECOGNITION

Face recognition is one of the most important applications of Gabor wavelets. The face image is convolved with a set of Gabor wavelets, and the resulting images are further processed for recognition purposes. The Gabor wavelets are usually called Gabor filters in

the scope of applications. The various proposed approaches can be roughly classified into analytic and holistic approaches. In analytic approaches, some feature points are detected on the face, especially the important facial landmarks such as the eyes, nose, and mouth. These detected points are called fiducial points, and the local features extracted at these points, the distances and angles between them, and some quantitative measures derived from the face are used for face recognition. In contrast to using information only from key feature points, holistic approaches extract features from the whole face image. Normalization of face size and rotation is a particularly important preprocessing step for making the recognition robust. The eigenface, based on principal component analysis (PCA), and the Fisherface, based on linear discriminant analysis (LDA), are two of the most well-known holistic approaches.

II. RELATED WORKS

A. Facial Feature Tracking

Facial feature points encode critical information about face shape and face shape deformation. Accurate location and tracking of facial feature points are important in applications such as animation and computer graphics. Generally, facial feature point tracking technologies can be classified into two categories: model-free and model-based tracking algorithms. Model-free approaches are general-purpose point trackers that use no prior knowledge of the object. Each feature point is usually detected and tracked individually by performing a local search for the best matching position.
However, the model-free methods are susceptible to inevitable tracking errors due to the aperture problem, noise, and occlusion.

Model-based methods, such as the Active Shape Model (ASM), Active Appearance Model (AAM), and Direct Appearance Model (DAM), on the other hand, focus on explicitly modeling the shape of the objects. The ASM proposed by Cootes et al. is a popular statistical model-based approach to represent deformable objects, where shapes are represented by a set of feature points. Feature points are first searched individually, and then Principal Component Analysis (PCA) is applied to analyze the modes of shape variation so that the object shape can only deform in the specific ways found in the training data. Robust parameter estimation and Gabor wavelets have also been employed in ASM to improve the robustness and accuracy of the feature point search. The AAM and DAM were subsequently proposed to combine constraints of both shape variation and texture variation. In conventional statistical models, e.g., ASM, the feature point positions are updated (or projected) simultaneously. Intuitively, human faces have a sophisticated structure, and a simple parallel mechanism may not be adequate to describe the interactions among facial feature points.

B. Expression/AU Recognition

Facial expression recognition systems usually try to recognize either the six basic expressions or the AUs. Over the past decades, there has been extensive research on facial expression analysis. Current methods in this area can be grouped into two categories: image-based methods and model-based methods. Image-based approaches, which focus on recognizing facial actions by observing the representative facial appearance changes, usually try to classify expressions or AUs independently and statically. This kind of method usually consists of two key stages.
First, various facial features, such as optical flow, explicit feature measurements (e.g., length of wrinkles and degree of eye opening), Haar features, Local Binary Pattern (LBP) features, independent component analysis (ICA), feature points, Gabor wavelets, etc., are extracted to represent the facial gestures or facial movements. Given the extracted facial features, the expressions/AUs are identified by recognition engines such as Neural Networks, Support Vector Machines (SVM), rule-based approaches, AdaBoost classifiers, Sparse Representation (SR) classifiers, etc. The common weakness of image-based methods for AU recognition is that they tend to recognize each AU or certain AU combinations individually and statically, directly from the image data, ignoring the semantic and dynamic relationships among AUs, although some of them analyze the temporal properties of facial features. Model-based methods overcome this weakness by making use of the relationships among AUs and recognizing the AUs simultaneously.

C. Simultaneous Facial Activity Tracking/Recognition

The idea of combining tracking with recognition has been attempted before, for example in simultaneous facial feature tracking and expression recognition, and in integrating face tracking with video coding. However, in most of these works, the interaction between facial feature tracking and facial expression recognition is one-way, i.e., facial feature tracking results are fed to facial expression recognition. There is no feedback from the recognition results to facial feature tracking. Most recently, Dornaika et al. and Chen & Ji

improved the facial feature tracking performance. However, in "Simultaneous facial action tracking and expression recognition in the presence of head motion," they only modeled six expressions and needed to retrain the model for each new subject, while in "A hierarchical framework for simultaneous facial activity tracking," they represented all upper facial action units in one vector node and thereby ignored the semantic relationships among AUs, which is a key point for improving AU recognition accuracy.

III. PROPOSED SYSTEM

A. Facial Activity Model

A Dynamic Bayesian Network is a directed graphical model; a DBN is general enough to capture complex relationships among variables. Specifically, the global facial expression is the main cause that produces certain AU configurations, which in turn cause local muscle movements, and hence feature point movements. For example, a global facial expression (e.g., happiness) dictates the AU configuration, which in turn dictates the facial muscle movements and hence the facial feature point positions. For the facial expression at the top level, we focus on recognizing the six basic facial expressions, i.e., happiness, surprise, sadness, fear, disgust, and anger. Though psychologists presently agree that there are ten basic human emotions, most current research in facial expression recognition mainly focuses on the six major emotions, partly because they are the most basic, culturally and ethnically independent expressions, and partly because most current facial expression databases provide the six emotion labels. Various computer vision techniques are used to track the facial feature points and to obtain measurements of the facial motions, i.e., the AUs. These measurements are then used as evidence to infer the true states of the three-level facial activities simultaneously.
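The three-level causal structure just described (expression at the top, AUs in the middle, image measurements at the bottom) can be illustrated with a toy single-slice Bayesian network. All names and probability values below are illustrative assumptions, not parameters from the paper, and a full DBN would additionally link consecutive time slices to model temporal evolution.

```python
# Toy single-slice Bayesian network: Expression -> AU -> Measurement.
# All probability tables are illustrative placeholders, not values from the paper.

EXPRESSIONS = ["happiness", "surprise"]

# Prior over expressions.
p_expr = {"happiness": 0.5, "surprise": 0.5}

# P(AU active | expression), e.g. AU12 (lip corner puller) for happiness.
p_au_given_expr = {"happiness": 0.9, "surprise": 0.2}

# P(measurement fires | AU state): the image-based detector is noisy.
p_meas_given_au = {True: 0.8, False: 0.1}

def posterior_expression(measurement: bool) -> dict:
    """P(expression | measurement), enumerating over the hidden AU state."""
    joint = {}
    for e in EXPRESSIONS:
        total = 0.0
        for au in (True, False):
            p_au = p_au_given_expr[e] if au else 1.0 - p_au_given_expr[e]
            p_m = p_meas_given_au[au] if measurement else 1.0 - p_meas_given_au[au]
            total += p_expr[e] * p_au * p_m
        joint[e] = total
    z = sum(joint.values())
    return {e: v / z for e, v in joint.items()}

post = posterior_expression(measurement=True)
print(post)  # the AU detector firing raises the posterior of happiness
```

The same mechanism runs in both directions: the measurement updates the belief over AUs and expressions, while the expression prior constrains which AU states (and hence which feature point configurations) are plausible.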
B. Gabor Wavelet Representation of Faces

Daugman generalized the Gabor function to the following 2D form in order to model the receptive fields of the orientation-selective simple cells:

\psi_i(\mathbf{x}) = \frac{\|\mathbf{k}_i\|^2}{\sigma^2} \exp\Big(-\frac{\|\mathbf{k}_i\|^2 \|\mathbf{x}\|^2}{2\sigma^2}\Big) \Big[\exp(i\,\mathbf{k}_i \cdot \mathbf{x}) - \exp\Big(-\frac{\sigma^2}{2}\Big)\Big]   (1)

Each \psi_i is a plane wave characterized by the wave vector \mathbf{k}_i, enveloped by a Gaussian function, where \sigma is the standard deviation of this Gaussian. The center frequency of the i-th filter is given by the characteristic wave vector \mathbf{k}_i, with scale and orientation given by (k_\nu, \theta_\mu). The first term in the brackets determines the oscillatory part of the kernel, and the second term compensates for the DC value of the kernel. By subtracting the DC response, the Gabor filter becomes insensitive to the overall level of illumination.

Daugman and others have proposed that an ensemble of simple cells is best modeled as a family of 2D Gabor wavelets sampling the frequency domain in a log-polar manner. This class is equivalent to a family of affine coherent states generated by rotation and dilation. The decomposition of an image I into these states is called the wavelet transform of the image:

(W I)(\mathbf{k}_i, \mathbf{x}_0) = \int \psi_i(\mathbf{x}_0 - \mathbf{x})\, I(\mathbf{x})\, d^2\mathbf{x}   (3)

Fig 1: Block Diagram

The proposed facial activity recognition system consists of two main stages: offline facial activity model construction, and online facial motion measurement and inference. Specifically, using training data and subjective domain knowledge, the facial activity model is constructed offline. During online recognition, the facial motion measurements are used as evidence to infer the three-level facial activities.
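Equation (1) translates directly into code. Below is a minimal NumPy sketch of the DC-compensated Gabor kernel and of a Gabor jet, i.e., the vector of filter-response magnitudes at one feature point. The bank of 5 scales and 8 orientations with \sigma = \pi is a common parameter choice assumed here, not one stated in the paper, and the random array merely stands in for a face crop.

```python
import numpy as np

def gabor_kernel(size: int, kv: float, theta: float, sigma: float = np.pi) -> np.ndarray:
    """2D Gabor kernel in Daugman's form: a plane wave with wave vector k,
    enveloped by a Gaussian, with the DC response subtracted so the filter
    is insensitive to the overall illumination level (Eq. (1))."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    kx, ky = kv * np.cos(theta), kv * np.sin(theta)
    k2, r2 = kv ** 2, x ** 2 + y ** 2
    envelope = (k2 / sigma ** 2) * np.exp(-k2 * r2 / (2 * sigma ** 2))
    wave = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2)  # DC-compensated
    return envelope * wave

# A small filter bank over 5 scales and 8 orientations, as used for Gabor jets.
bank = [gabor_kernel(31, kv=np.pi / 2 ** (v / 2), theta=mu * np.pi / 8)
        for v in range(5) for mu in range(8)]

# A random array stands in for a face image; the jet at a feature point is the
# vector of response magnitudes of all filters centred on that point.
img = np.random.rand(64, 64)
patch = img[16:47, 16:47]  # 31x31 region centred on a hypothetical feature point
jet = np.array([np.abs(np.sum(k * patch)) for k in bank])
print(jet.shape)  # one magnitude per (scale, orientation) pair
```

Because the DC term is subtracted, adding a constant brightness offset to the patch leaves each jet coefficient (approximately) unchanged, which is exactly the illumination insensitivity the text describes.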

Fig 2: Example of a Facial Image Response to the Above Gabor Filters. A) Original face image (from the Stirling database). B) Filter responses.

The input face image and the amplitudes of the Gabor filter responses are shown in Figure 2. One of the techniques used in the literature for Gabor-based face recognition uses the responses on a grid representing the facial topography for coding the face. Instead of using the graph nodes, high-energized points can be used in the comparisons, which forms the basis of this work. This approach not only reduces computational complexity, but also improves the performance in the presence of occlusions.

C. AdaBoost Classifier

Boosting is a method for combining a collection of weak classification functions (weak learners) to form a stronger classifier. AdaBoost is an adaptive algorithm for boosting a sequence of classifiers, and it is a kind of large-margin classifier. A weak classifier, in simple terms, is a decision rule that classifies a test sample as either a positive or a negative sample. A weighted linear combination of these weak classifiers forms a strong classifier with improved detection performance. The boosting process has two stages, corresponding to training and detection. In the training stage, a very large set of labeled samples is used to identify the better-performing weak classifiers, and a strong classifier network is constructed as a weighted linear combination of these weak classifiers. The output of the training stage is a trained classifier network that can be used in the detection phase to classify samples as positive or negative. The idea of boosting is to combine a set of simple rules or weak classifiers to form an ensemble such that the performance of a single ensemble member is improved. For example, given a family of weak classifiers and a set of training data consisting of positive and negative samples, the AdaBoost approach can be used to select a subset of weak classifiers and the classification function.

AdaBoost requires no prior knowledge; that is, no information about the structure or features of the face is required when it is used for face detection. Given a set of training samples and weak classifiers, the boosting process automatically chooses the optimal feature set and the classifier function. The approach is adaptive in the sense that misclassified samples are given higher weight in order to increase the discriminative power of the classifier. As a result, easily classified samples are handled in the first iterations and receive less weight, while harder samples with higher weights are used to train the later iterations. The theoretical training error converges to zero, as proved by Freund and Schapire (1995): training on a set of positive and negative samples reaches zero error after a finite number of iterations. Given a feature set T = {(x1, y1), (x2, y2), (x3, y3), ...}, where xi is a training sample and yi is the binary class of the sample (1 positive, 0 negative), a final boosted classifier network is formed from the subset of selected features after an arbitrary number of iterations, as shown in the following equation:

H(x) = 1 if \sum_{t=1}^{T} \alpha_t h_t(x) \ge \frac{1}{2} \sum_{t=1}^{T} \alpha_t, and 0 otherwise

where \alpha_t is the weight assigned to the t-th classifier and h_t is the classifier decision. AdaBoost training is an iterative process, and the accuracy of the final classifier function depends on the number of iterations and on whether the training error converges to zero after a finite number of iterations. A classifier is trained as follows by Viola et al.:

- Given example images (x1, y1), ..., (xn, yn), where yi ∈ {0, 1} for negative and positive samples respectively.
- Initialize weights w(1,i) = 1/(2m) for yi = 0 and w(1,i) = 1/(2l) for yi = 1, where m and l are the numbers of negative and positive samples respectively.

IV. EXPERIMENTAL RESULT

The experimental results cover two conditions: the first is the illumination condition and the second is the facial expression condition. The test image and the equivalent
Test image and equivalentsuch that the performance of a single ensemble memberimage of the illumination image is shown in the figure.is improved . For example given a family of weakCopyright to IJIRSETwww.ijirset.com11752

The recognition rate for the original image is 97.8042%. The original image and the test image are rotated to recover the original test image.

ILLUMINATION

FACIAL EXPRESSION

The test image and the equivalent image for the facial expression condition are shown in the figure. The facial expression recognition rate is 97.8033%. The graph plots the parameter proportion against the subject proportion. Four quantities are used to construct the plot: False Negative, True Positive (sensitivity), True Negative (specificity), and False Positive. Sensitivity is the probability that the test is positive on unhealthy data; specificity is the probability that the test is negative on healthy data.
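The two rates defined above follow directly from the four counts; the counts in the example below are hypothetical illustrations, not results from the experiments.

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple:
    """Sensitivity = TP / (TP + FN): probability the test is positive on
    'unhealthy' (truly positive) data.  Specificity = TN / (TN + FP):
    probability the test is negative on 'healthy' (truly negative) data."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for illustration (not results from the paper):
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=85, fp=15)
print(sens, spec)  # 0.9 0.85
```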

V. CONCLUSION

In this paper, we proposed a hierarchical framework based on a Dynamic Bayesian Network for simultaneous facial feature tracking and facial expression recognition. By systematically representing and modeling the interrelationships among the different levels of facial activities, as well as their temporal evolution, the proposed model achieved significant improvements in both facial feature tracking and AU recognition. The improvements for facial feature points and AUs come mainly from combining the facial action model with the image measurements. Specifically, erroneous facial feature measurements and AU measurements can be compensated by the model's built-in relationships among the different levels of facial activities and its built-in temporal relationships. Since our model systematically captures and combines prior knowledge with image measurements, our system may achieve better results, with little change to the model, as image-based computer vision technology improves. In this paper, we evaluated our model on posed expression databases with frontal-view images. In future work, we plan to introduce rigid head movements, i.e., head pose, into the model to handle multi-view faces.

REFERENCES

[1] M. Valstar and M. Pantic, "Fully automatic recognition of the temporal
[7] F. Dornaika and F. Davoine, "Simultaneous facial action tracking and expression recognition in the presence of head motion," Int. J. Comput. Vis., vol. 76, no. 3, pp. 257-281, 2008.
[8] Y. Tong, W. Liao, and Q. Ji, "Facial action unit recognition by exploiting their dynamic and semantic relationships," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 10, pp. 1683-1699, Oct. 2007.
[9] Y. Tong, Y. Wang, Z. Zhu, and Q. Ji, "Robust facial feature tracking under varying face pose and facial expression," Pattern Recognit., vol. 40, no. 11, pp. 3195-3208, 2007.
[10] G. Zhao and M. Pietikainen, "Dynamic texture recognition using local binary patterns with an application to facial expressions," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 6, pp. 915-928, Jun. 2007.

