Multimodal Biometric Identification System using Deep Learning


International Journal of Scientific Research in Computer Science and Engineering
Research Paper, Vol.8, Issue.5, pp.01-07, October (2020)
E-ISSN: 2320-7639
DOI: https://doi.org/10.26438/ijsrcse/v8i5.17

Multimodal Biometric Identification System using Deep Learning

Bhavya D.N.1*, Chethan H.K.2
1 Department of Computer Science and Engg., Maharaja Research Foundation
2 Department of Computer Science and Engg., MIT Thandavapura, Karnataka, India

*Corresponding Author: bhavyavijay.dn@gmail.com, Tel.: 00-12345-54321

Available online at: www.isroset.org

Received: 12/Oct/2020, Accepted: 22/Oct/2020, Online: 31/Aug/2020

Abstract— It is well known that a multimodal biometric system performs better, overcomes the limitations of a unimodal biometric system, and gives better classification accuracy. This paper proposes a novel multimodal biometric recognition system based on feature-level fusion and a deep learning model. The significance of this paper is that it focuses on the issue of selecting the best feature extraction and classification techniques, by investigating different types of feature extraction techniques on different databases of the given modalities, namely face, palmprint and iris. We propose unimodal biometric recognition using a Convolutional Neural Network (CNN). The results of unimodal recognition are then combined using two-layer fusion to build multimodal biometric recognition. Features such as Histogram of Gradients, Zernike Moments and Pseudo-Zernike Moments are extracted. The proposed multimodal recognition method shows better recognition accuracy than unimodal recognition.

Keywords— Multimodal, HOG, Zernike, PZM, CNN

I. INTRODUCTION

Biometrics is a progressively growing technology widely used in forensics, secured access, medicine, robotics and a wide array of commercial applications. As security breaches and transaction fraud increase, reliable identification and verification are needed. A biometric-based recognition system makes a person's identification decision using specific physiological or behavioral features. Most of the biometric systems currently in use employ a single biometric trait. A biometric system using a single biometric trait is referred to as a unimodal biometric system. Unfortunately, recognition systems developed with a single biometric trait suffer from noise, intra-class similarity and spoof attacks.

The rest of the paper is organized as follows. An overview of multimodal biometrics and its related work is discussed in Section 2. The proposed model is presented in Section 3. In Section 4, the experimental results of the proposed model are brought out with sufficient experimentation. Finally, conclusions are drawn in Section 5.

II. RELATED WORK

Assaad & Serpen [1] introduced a simulation study of a fusion (combination) algorithm for an ensemble classifier that enables multimodal biometric user authentication. The research aimed at a transformation-based score fusion technique with vocal and facial recognition classifier outputs as input. The fusion method is solely reliant on the score obtained from every biometric module. Gupta & Gupta [2] suggested an effective multimodal authentication model that utilizes palm-dorsal vein patterns. There are four levels of fusion in the system: multi-algorithm fusion, information fusion, feature fusion and score fusion. Multi-algorithm fusion is employed for extracting the genuine vein pattern from a vein image through the usage of several vein extraction algorithms.
Thepade & Bhondave [3] introduced multimodal biometric methods with iris and palmprint features. Various colour spaces are considered with score-level fusion for obtaining the suggested multimodal identification method through the usage of Block Truncation Coding (BTC) with Bit Plane Slicing. Experiments were carried out on a test bed comprising sixty pairs of iris and palmprint images of ten people. Mohamad et al. [4] surveyed various data fusion methods and schemes for application in multimodal biometric systems through the usage of facial features and palmprint images. Multimodal biometric systems are capable of overcoming various restrictions present in unimodal biometric systems such as intra-class differences, lesser discriminative capacity, noisy data and repetitive attributes. Sim et al. [5] suggested a technique which fused facial and iris biometric features with a weighted score-level fusion method for flexible fusion of matching scores from these two modalities on the basis of their weight availability. The model attains excellent accuracy and possesses a great decidability index that considerably separates the intra- and inter-class distances. Aravinth & Valarmathy [6] suggested a score-level fusion method for multimodal biometric recognition through Artificial Bee Colony (ABC) based Neural Networks (NN). Fusion of scores has

its basis in NNs, where ABC is used as the training algorithm, and recognition is carried out on the basis of the scores acquired from the ABC-based NN. Rao & Reddy [7] suggested a technique for managing access control to ensure an adequate degree of security through the usage of adaptive combinations of multimodal matching scores. Daniel & Monica [8] suggested a novel technique of data fusion. They combined information acquired from two traits, namely fingerprint and iris, at the feature extraction level. A decision regarding the identity of an individual was made for the suggested multimodal system and for every unimodal system examined. Furthermore, the performance of all systems was contrasted and presented. Tharwat et al. [9] suggested two multimodal biometric authentication techniques through the usage of ear and Finger Knuckle (FK) images. The investigators suggested a technique based on fusing the images of ear and finger knuckle prior to the feature level so that there is no loss of data. The investigators also suggested a multi-level fusion technique at the image and classification levels. Shanthini & Swamynathan [10] suggested a novel mode of fusion of multimodal biometric systems where the quantity of security services offered by the system rises as the requirement for security rises; it is primarily used in hierarchy-based extreme-security Mobile Adhoc NETworks (MANET), for instance, on an active battlefield. Furthermore, the investigators detailed various multimodal methods and algorithms suggested by several other experts, the mode of utilizing multiple modalities, the objective of multimodal fusion, and so on. Chetty & Lipton [11] suggested a new local feature analysis and feature-level fusion method for the detection of tampering or forgery in facial-biometric based online access control settings. Ahmad et al. [12] suggested the usage of a multimodal biometric system for facial features and palmprint images through fusion methods at the feature level. Gabor-based image processing was used for the extraction of discriminant attributes, and PCA and LDA were used for reducing the dimensions of all modalities. The output attributes of LDA undergo serial combination and classification by Euclidean distance classifiers. He et al. [13] analyzed the performance of SUM rule-based score-level and support vector machine (SVM)-based score-level fusion. Nageshkumar et al. [14] suggested an authentication technique for multimodal biometric identification through the usage of two features, face and palmprint. The suggested model was formulated for applications where the training data comprise face and palmprint images. Integration of palmprint and facial features improves the resilience of the authentication. Raghavendra et al. [15] suggested a new framework for optimum fusion of match scores through the usage of a Gaussian Mixture Model (GMM) and a Monte Carlo technique. The suggested fusion method has the capacity to deal with small sets of match scores, as is frequently the case in biometric fusion, and with randomly distributed match scores. Chaudhary & Nath [16] suggested a novel multimodal biometric system which incorporates several attributes of a person for recognition, capable of alleviating the issues faced by unimodal biometric systems while concurrently enhancing recognition performance. Mukherjee et al.
[17] suggested a new adaptive weight- and exponent-based function mapping the matching scores from various biometric sources into one amalgamated matching score to be used by classifiers for further decision making. Kumar & Devi [18] built a fused biometric individual recognition through the usage of SVM in a multimodal method for effective identification in various security components. The multimodal identification procedure is developed by integrating data from several biometric sources. Baig et al. [19] suggested a cascaded classifier-based model for utilization in biometric recognition. The suggested model uses a set of weak classifiers for reducing the registered users' database to a smaller set of candidate users. Bharadi et al. [20] suggested a multimodal biometric system on the basis of iris and fingerprints. Textural feature extraction is carried out through the usage of hybrid wavelets. Fingerprint and iris features were extracted by multi-level decomposition of the sample image utilizing a novel family of wavelets known as hybrid wavelets. A kNN classifier is used for unimodal fingerprint and multi-instance iris recognition. Gawande & Hajari [21] suggested multimodal biometrics for overcoming the restrictions of unimodal biometric systems; among the several fusion levels, feature-level fusion is anticipated to yield the best recognition. Elmir et al. [22] examined two significant problems associated with score-level fusion. The investigators examined the performance of a score-level fusion based multimodal biometric system against various unimodal biometric systems based on voice and fingerprint modalities, and against a bimodal biometric system based on feature-level fusion of the same modalities. Razzak et al. [23] proposed an effective multimodal low-resolution face and finger vein biometric recognition system on the basis of class-specific to client-specific linear discriminant analyses and finger vein fusion at the score level. Soviany & Puscoci [24] proposed an optimized multimodal biometric system for identification applications. Viswanathan & Chitra [25] suggested a multimodal biometric system through the usage of fingerprint images and finger vein patterns with optimized Radial Basis Function (RBF) kernel classifiers for the identification of authorized users. Shekhar et al. [26] proposed a multimodal system through sparse linear fusion of training data, constraining observations from various modalities of the test subject to share their sparse representation. Bhatt et al. [27] suggested a model for dynamic classifier selection and fusion on the basis of the quality of gallery and probe images related to every modality with several classifiers.

III. METHODOLOGY

In this work we propose a multimodal biometric recognition model using a convolutional neural network based on face, iris and palmprint. The framework of the proposed method is shown in Figure 1. Two-layer feature fusion is proposed with the convolutional layer through four different modes.

Figure 1. The framework of the proposed multimodal biometric recognition method based on CNN

A. Feature Extraction

Histogram of Gradients

HOG [28] features capture the local appearance and shape of the object by finding the distribution of intensity gradients of pixels as well as the orientations of edge pixels. This technique takes the image as input, divides it into small square cells, and computes the histogram of gradient directions and edge directions for each of these cells, based on central differences. The Sobel filter is used to extract both the x- and y-directional gradients of the image. Then the magnitude and orientation of these gradients are obtained, where each pixel is represented as a gradient vector consisting of both magnitude and direction, defined as

magnitude(x, y) = \sqrt{Gdh(x, y)^2 + Gdv(x, y)^2}

where Gdh and Gdv are the x- and y-directional gradients of the image. Then, a histogram of the orientations of edge pixels is obtained for each cell, after the image is divided into small connected areas. To achieve this, each cell is examined to calculate the gradient angles of all its pixels, which are then approximated into a number of buckets so that the magnitudes of similar orientations can be collected in the form of a histogram. The bucket orientations are uniformly spread over [0°, 180°] for unsigned angles and [0°, 360°] for signed angle derivation. The histogram size of each cell is specified by the number of buckets being applied. Finally, a final HOG feature vector is formed from the histograms and the buckets, which is fed as input to a classifier. The extracted features are represented in regions through cell sizes by varying the size parameters.
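To make the HOG pipeline above concrete, the following is a minimal Python/NumPy sketch, not the authors' implementation: it uses central-difference gradients (np.gradient) rather than a Sobel filter, and the 8-pixel cell size and 9 unsigned-orientation buckets are illustrative assumptions, since the paper does not report its HOG parameters.

```python
import numpy as np

def hog_features(image, cell_size=8, n_buckets=9):
    """Minimal HOG sketch: gradient magnitude/orientation per pixel,
    magnitude-weighted orientation histogram per cell, concatenated."""
    img = np.asarray(image, dtype=np.float64)
    # Central-difference gradients along rows (y) and columns (x).
    gdv, gdh = np.gradient(img)
    magnitude = np.sqrt(gdh ** 2 + gdv ** 2)
    # Unsigned orientations, folded into [0, 180) degrees.
    orientation = np.rad2deg(np.arctan2(gdv, gdh)) % 180.0

    h, w = img.shape
    bins = np.linspace(0.0, 180.0, n_buckets + 1)
    histograms = []
    for r in range(0, h - cell_size + 1, cell_size):
        for c in range(0, w - cell_size + 1, cell_size):
            mag = magnitude[r:r + cell_size, c:c + cell_size].ravel()
            ori = orientation[r:r + cell_size, c:c + cell_size].ravel()
            # Magnitude-weighted histogram of gradient orientations.
            hist, _ = np.histogram(ori, bins=bins, weights=mag)
            histograms.append(hist)
    feature = np.concatenate(histograms)
    # Simple global L2 normalisation (block normalisation omitted).
    return feature / (np.linalg.norm(feature) + 1e-12)
```

A full implementation with block normalisation is available as skimage.feature.hog, which also exposes the cell-size parameter varied in this work.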
Zernike moment

The Zernike moment [29] is an orthogonal moment that carries scale and rotation invariance properties and is therefore effective for recognition. Zernike moments belong to a class of continuous orthogonal moments defined over the unit circle. Thus, for a square image, the calculation of Zernike moments involves a square-to-circle mapping that maps pixel coordinates into the range of the unit circle. Zernike moments (ZM) are orthogonal moments based on Zernike polynomials. Orthogonality here implies that there is no redundancy or overlap of information between the moments, so the moments are uniquely determined by their orders. The distinctive property of ZM is the invariance of its magnitude with respect to rotation. The calculation of Zernike moments involves three stages: computation of the radial polynomials, computation of the Zernike basis functions, and computation of the Zernike moments by projecting the image onto the Zernike basis functions. For a given input image, the calculation of Zernike moments depends on the computation of the Zernike radial polynomial. The real-valued 1-D radial polynomial R_{n,m} is defined as

R_{n,m}(\rho) = \sum_{s=0}^{(n-|m|)/2} \frac{(-1)^s (n-s)!}{s!\left(\frac{n+|m|}{2}-s\right)!\left(\frac{n-|m|}{2}-s\right)!}\, \rho^{\,n-2s}

where n denotes the order of the radial polynomial and is a non-negative integer, m denotes the repetition of the azimuthal angle and is a positive or negative integer satisfying the constraints |m| ≤ n and n − |m| even, and ρ is the length of the vector from the origin to (x, y). Using the above expression, the complex-valued 2-D Zernike moment functions, defined inside a circle of radius one, are formed as

V_{n,m}(\rho, \theta) = R_{n,m}(\rho)\, e^{jm\theta}, \quad \rho \le 1.

The complex Zernike polynomials satisfy the orthogonality condition

\int_{0}^{2\pi}\!\!\int_{0}^{1} V_{n,m}^{*}(\rho, \theta)\, V_{p,q}(\rho, \theta)\, \rho\, d\rho\, d\theta = \frac{\pi}{n+1}\, \delta_{np}\, \delta_{mq}

where * denotes the complex conjugate.

Pseudo-Zernike moment

PZMs [30] are defined by projecting the image intensity function onto the pseudo-Zernike polynomials and are characterized using a polar coordinate representation of the feature space. Hence, they are commonly used in image understanding applications that require geometric invariance. The two-dimensional pseudo-Zernike moment (PZM), PZ_{mn}, of order m with repetition n is defined in terms of polar coordinates (r, θ) inside the unit circle circumscribing the whole image as

PZ_{mn} = \frac{m+1}{\pi} \int_{0}^{2\pi}\!\!\int_{0}^{1} \left[P_{mn}(r, \theta)\right]^{*} f(r, \theta)\, r\, dr\, d\theta

where * denotes the complex conjugate, and P_{mn}(r, θ) is the pseudo-Zernike polynomial given by

P_{mn}(r, \theta) = RAD_{mn}(r)\, e^{jn\theta}

where RAD_{mn}(r) is the real-valued radial polynomial defined as

RAD_{mn}(r) = \sum_{s=0}^{m-|n|} \frac{(-1)^s (2m+1-s)!}{s!\,(m-|n|-s)!\,(m+|n|+1-s)!}\, r^{\,m-s}.
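As a companion to the definitions above, here is a small Python/NumPy sketch of the Zernike radial polynomial and the discrete Zernike moment over the unit disk; the pseudo-Zernike moment follows the same pattern with its own radial polynomial. The square-to-circle mapping and the discrete area weight are standard choices assumed for this sketch rather than details taken from the paper.

```python
import numpy as np
from math import factorial

def radial_poly(rho, n, m):
    """Real-valued Zernike radial polynomial R_{n,m}(rho)."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        coeff = ((-1) ** s * factorial(n - s) /
                 (factorial(s) *
                  factorial((n + m) // 2 - s) *
                  factorial((n - m) // 2 - s)))
        R += coeff * rho ** (n - 2 * s)
    return R

def zernike_moment(image, n, m):
    """Discrete Z_{n,m} of a square grayscale image mapped to the unit disk."""
    img = np.asarray(image, dtype=np.float64)
    N = img.shape[0]
    # Square-to-circle mapping: pixel centres into [-1, 1] x [-1, 1].
    coords = (2.0 * np.arange(N) - N + 1) / (N - 1)
    x, y = np.meshgrid(coords, coords)
    rho = np.sqrt(x ** 2 + y ** 2)
    theta = np.arctan2(y, x)
    inside = rho <= 1.0
    # V*_{n,m}(rho, theta) = R_{n,m}(rho) * exp(-j m theta)
    v_conj = radial_poly(rho, n, m) * np.exp(-1j * m * theta)
    area = (2.0 / (N - 1)) ** 2  # area of one pixel after the mapping
    return (n + 1) / np.pi * np.sum(img[inside] * v_conj[inside]) * area

# The rotation-invariant feature used for recognition is |Z_{n,m}|.
```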

Network Model

In this paper, AlexNet [33] is used as the basis of the recognition task. Table 1 shows the 4-layer, 5-layer, 6-layer, 7-layer and 8-layer network models used to evaluate the recognition performance; the optimal network is then selected for the multimodal biometric system. Here C denotes a convolution layer, P a pooling layer, and FC a fully connected layer.

Table 1. The five network models
4-layer: C C P FC1 FC2
5-layer: C P C P FC1 FC2
6-layer: C C P C P FC1 FC2
7-layer: C C P C C P FC1 FC2
8-layer: C C P C P C P FC1 FC2

The steps of the proposed training methodology are summarized as follows (a sketch of one such network and its training loop is given after the list):
1. Split the dataset into training, validation and test sets.
2. Pick the CNN architecture.
3. Train the architecture.
4. Evaluate the architecture.
5. Repeat steps 3 and 4 for N epochs.
6. Pick the best configuration, i.e. the one with the least error.
7. Evaluate the final system on the test set.
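The paper gives the layer orderings in Table 1 but not the filter counts, kernel sizes or number of classes, so the following Keras sketch of the 6-layer configuration (C C P C P FC1 FC2) fills those in with illustrative values; it is one plausible reading of the table and of the training steps above, not the authors' exact network.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_6layer_cnn(input_shape=(64, 64, 1), num_classes=10):
    """Sketch of the 6-layer model in Table 1: C C P C P FC1 FC2.
    Filter counts, kernel sizes and num_classes are assumptions."""
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, padding="same", activation="relu"),  # C
        layers.Conv2D(32, 3, padding="same", activation="relu"),  # C
        layers.MaxPooling2D(2),                                   # P
        layers.Conv2D(64, 3, padding="same", activation="relu"),  # C
        layers.MaxPooling2D(2),                                   # P
        layers.Flatten(),
        layers.Dense(256, activation="relu"),                     # FC1
        layers.Dense(num_classes, activation="softmax"),          # FC2
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Steps 1-7 above, in outline (x_* are image tensors, y_* integer labels):
# model = build_6layer_cnn()
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=N)
# model.evaluate(x_test, y_test)
```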

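Before the experimental results, a note on how the fused features could be evaluated in practice. The paper does not publish code for its two-layer fusion or for the deep learning classifier, so the sketch below uses the simplest feature-level fusion (concatenation of the HOG, ZM and PZM vectors), a 70:30 split as in the experiments, and an SVM as a stand-in classifier purely to illustrate how the reported accuracy, precision, recall and F-measure could be computed.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def fuse_features(hog_feats, zm_feats, pzm_feats):
    """Feature-level fusion by concatenation (one row per sample)."""
    return np.hstack([hog_feats, zm_feats, pzm_feats])

def evaluate(features, labels, test_size=0.3, seed=0):
    """70:30 split, fit a stand-in classifier, report the usual measures."""
    x_tr, x_te, y_tr, y_te = train_test_split(
        features, labels, test_size=test_size,
        random_state=seed, stratify=labels)
    clf = SVC(kernel="rbf").fit(x_tr, y_tr)
    y_pred = clf.predict(x_te)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_te, y_pred, average="macro", zero_division=0)
    return accuracy_score(y_te, y_pred), prec, rec, f1
```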
IV. EXPERIMENTATION RESULTS

We have used the CMU PIE facial database [31] with a total of 1,632 images, the CASIA 1.0 standard iris database with a total of 700 images, and the Poly-U database [32], a benchmark database for palmprint recognition. Figure 2 shows samples of the images.

Figure 2. Sample images from the CMU PIE, CASIA and Poly-U databases

In this work, input image sizes of 64 × 64 pixels and 128 × 128 pixels are investigated. The experiment is conducted on face, iris and palmprint with the different CNN network models. The experimental results are shown in Figure 3, where each model is run 10 times.

Figure 3. Results of all the network models (layers) for the three databases

The three proposed features, viz. ZM, HOG and PZM, are considered both individually and in combination. We analyzed the results and found that all the features have reasonably good recognition accuracy when the training and testing samples are in the ratio of 70:30, compared to the 30:70 and 50:50 ratios. However, the fusion of all three features performs better than any individual feature for all three datasets, irrespective of the number of training and testing samples. From the experimental outcome, it is evident that every combination of features with the deep learning classifier has improved classification accuracy compared to any individual feature and classifier. From our analysis of the results, we also observe that the deep-learning-based feature response produces better accuracy than any other combination of features most of the time. Figure 4 shows the accuracy of the individual features and their combinations. In addition, we also evaluated the performance using measures such as precision, recall and F-measure, shown in Figure 5. Further, the experiments show that the accuracy of the multimodal recognition method is better than that of unimodal recognition.

Figure 4. Accuracy of the individual and fused features

Figure 5. Class-wise performance analysis in terms of precision, recall and F-measure for varying samples

V. CONCLUSION

In this work we propose a face, palmprint and iris based multimodal biometric system built on feature-level fusion and a deep learning approach. The experiments are performed with different databases and techniques. First, we developed a unimodal biometric recognition system using a CNN with three different databases, one per modality, to evaluate the performance of each dataset and of the feature-level algorithm. Afterwards, a feature-level fusion approach is used to fuse the features of the other modalities. This fused feature set is scaled and classified using a deep learning classifier.

REFERENCES

[1] Assaad, FS & Serpen, G 2015, 'Transformation based Score Fusion Algorithm for Multi-modal Biometric user Authentication through Ensemble Classification', Procedia Computer Science, vol. 61, pp. 410-415.
[2] Gupta, P & Gupta, P 2015, 'Multi-modal fusion of palm-dorsa vein pattern for accurate personal authentication', Knowledge-Based Systems, vol. 81, pp. 117-130.
[3] Thepade, SD & Bhondave, RK 2015, 'Multimodal identification technique using Iris & Palmprint traits with matching score level in various Color Spaces with BTC of bit plane slices', 2015 IEEE International Conference on Industrial Instrumentation and Control (ICIC), pp. 1469-1473.
[4] Mohamad, N, Ahmad, MI, Ngadiran, R, Ilyas, MZ, Isa, MNM & Saad, P 2014, 'Investigation of information fusion in face and palmprint multimodal biometrics', in Electronic Design (ICED), 2014 2nd International Conference on, IEEE, pp. 347-350.
[5] Sim, HM, Asmuni, H, Hassan, R & Othman, RM 2014, 'Multimodal biometrics: Weighted score level fusion based on non-ideal iris and face images', Expert Systems with Applications, vol. 41, no. 11, pp. 5390-5404.
[6] Aravinth, J & Valarmathy, S 2013, 'Score-Level Fusion Technique for Multi-Modal Biometric Recognition using ABC-Based Neural Network', International Review on Computers and Software (IRECOS), vol. 8, no. 8, pp. 1889-1900.
[7] Rao, TS & Reddy, ES 2013, 'Multimodal Biometric Authentication Based on Score Normalization Technique', in Intelligent Informatics, Springer Berlin Heidelberg, pp. 425-434.
[8] Daniel, DM & Monica, B 2012, 'A data fusion technique designed for multimodal biometric systems', in 2012 10th International Symposium on Electronics and Telecommunications.
[9] Tharwat, A, Ibrahim, AF & Ali, H 2012, 'Multimodal biometric authentication algorithm using ear and finger knuckle images', 2012 Seventh IEEE International Conference on Computer Engineering & Systems, pp. 176-179.
[10] Shanthini, B & Swamynathan, S 2011, 'A secure authentication system using multimodal biometrics for high security MANETs', in Advances in Computing and Information Technology, Springer Berlin Heidelberg, pp. 290-307.
[11] Chetty, G & Lipton, M 2010, 'Multimodal feature fusion for video forgery detection', in Information Fusion (FUSION), 2010 13th Conference on, IEEE, pp. 1-7.
[12] Ahmad, MI, Woo, WL & Dlay, SS 2010, 'Multimodal biometric fusion at feature level: Face and palmprint', in Communication Systems Networks and Digital Signal Processing (CSNDSP), 2010 7th International Symposium on, IEEE, pp. 801-805.
[13] He, M, Horng, SJ, Fan, P, Run, RS, Chen, RJ, Lai, JL & Sentosa, KO 2010, 'Performance evaluation of score level fusion in multimodal biometric systems', Pattern Recognition, vol. 43, no. 5, pp. 1789-1800.
[14] Nageshkumar, M, Mahesh, PK & Swamy, MS 2009, 'An efficient secure multimodal biometric fusion using palmprint and face image', International Journal of Computer Science, vol. 2, no. 1, pp. 49-53.
[15] Raghavendra, R, Rao, A & Hemantha Kumar, G 2009, 'A novel approach for multimodal biometric score fusion using Gaussian mixture model and Monte Carlo method', in Advances in Recent Technologies in Communication and Computing, ARTCom'09, International Conference on, IEEE, pp. 90-92.
[16] Chaudhary, S & Nath, R 2015, 'A New Multimodal Biometric Recognition System Integrating Iris, Face and Voice', International Journal of Advanced Research in Computer Science and Software Engineering, vol. 5, no. 4, pp. 145-150.
[17] Mukherjee, S, Pal, K, Majumder, BP, Saha, C, Panigrahi, BK & Das, S 2014, 'Differential evolution based score level fusion for multi-modal biometric systems', in Computational Intelligence in Biometrics and Identity Management (CIBIM), 2014 IEEE Symposium on, IEEE, pp. 38-44.
[18] Kumar, GS & Devi, CJ 2014, 'A Multimodal SVM Approach for Fused Biometric Recognition', International Journal of Computer Science and Information Technologies, vol. 5, no. 3, pp. 3327-3330.
[19] Baig, A, Bouridane, A, Kurugollu, F & Albesher, B 2014, 'Cascaded multimodal biometric recognition framework', Biometrics, IET, vol. 3, no. 1, pp. 16-28.
[20] Bharadi, VA, Pandya, B & Nemade, B 2014, 'Multimodal biometric recognition using iris & fingerprint: By texture feature extraction using hybrid wavelets', in Confluence The Next Generation Information Technology Summit (Confluence), 2014 5th International Conference, IEEE, pp. 697-702.
[21] Gawande, U & Hajari, K 2013, 'Adaptive Cascade Classifier based Multimodal Biometric Recognition and Identification System', International Journal of Applied Information Systems (IJAIS), vol. 6, no. 2, pp. 42-47.
[22] Elmir, Y, Elberrichi, Z & Adjoudj, R 2011, 'Score level Fusion based Multimodal Biometric Identification', in CIIA.
[23] Razzak, MI, Alghathbar, MKKK & Yusof, R 2011, 'Multimodal biometric recognition based on fusion of low resolution face and finger veins', International Journal of Innovative Computing, Information and Control, ICIC International, vol. 7, no. 8, pp. 4679-4689.
[24] Soviany, S & Puscoci, S 2014, 'An optimized multimodal biometric system with hierarchical classifiers and reduced features', 2014 IEEE International Symposium on Medical Measurements and Applications (MeMeA), pp. 1-6.
[25] Viswanathan, A & Chitra, S 2014, 'Optimized Radial Basis Function Classifier for Multi Modal Biometrics', Research Journal of Applied Sciences, Engineering and Technology, vol. 8, no. 4, pp. 521-529.
[26] Shekhar, S, Patel, VM, Nasrabadi, NM & Chellappa, R 2014, 'Joint sparse representation for robust multimodal biometrics recognition', IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 1, pp. 113-126.
[27] Bhatt, HS, Bharadwaj, S, Vatsa, M, Singh, R, Ross, A & Noore, A 2011, 'A framework for quality-based biometric classifier selection', in Biometrics (IJCB), 2011 International Joint Conference on, IEEE, pp. 1-7.
[28] N. Dalal and B. Triggs, "Histograms of Oriented Gradients for Human Detection", Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 1, pp. 886-893, 2005.
[29] A. Khotanzad and Y. H. Hong, "Invariant image recognition by Zernike moments", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 5, pp. 489-497, May 1990, doi: 10.1109/34.55109.
[30] J. Herman, J. S. Rani and D. Devaraj, "Face Recognition Using Generalized Pseudo-Zernike Moment", 2009 Annual IEEE India Conference, Gujarat, 2009, pp. 1-4, doi: 10.1109/INDCON.2009.5409386.
[31] T. Sim, S. Baker, M. Bsat, "The CMU pose, illumination, and expression (PIE) database", Proceedings of Fifth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 53-58, 2002.
[32] D. Zhang, W. Shu, "Two novel characteristics in Palm Print verification: datum point invariance and line feature matching", Pattern Recognition, vol. 33, pp. 691-70, 1999.
[33] A. Krizhevsky, I. Sutskever, G. E. Hinton, "ImageNet classification with deep convolutional neural networks", International Conference on Neural Information Processing Systems, pp. 1097-1105, 2012.
[34] Y. Badhe, H. Balbatti, N. Kaladagi, K. Kumar, "IRIS Recognition and Authentication System for Enhancing Data Security", International Journal of Computer Sciences and Engineering, Vol.2, Issue.3, pp.55-59, 2014.
[35] P.S. Hiremath, M. Hiremath, "Symbolic Factorial Discriminant Analysis for 3D Face Recognition", International Journal of Computer Sciences and Engineering, Vol.2, Issue.1, pp.6-12, 2014.
[36] Rohini M., Arsha P., "Detection of Microaneurysm using Machine Learning Techniques", International Journal of Scientific Research in Network Security and Communication, Vol.7, Issue.3, pp.1-6, 2019.
[37] Hemant Kumar Soni, "Machine Learning – A New Paradigm of AI", International Journal of Scientific Research in Network Security and Communication, Vol.7, Issue.3, pp.31-32, 2019.

AUTHORS PROFILE

Bhavya D. N. completed her B.E. and M.Tech. in Computer Science from Visvesvaraya Technological University, Karnataka, India. She is presently pursuing a Ph.D. at the University of Mysore, Karnataka. Her research interests include image processing, pattern recognition and multimodal biometrics.

Chethan H. K. completed his B.Sc., M.Sc. and Ph.D. from the University of Mysore, Karnataka, India. He is presently working as a Professor at Maharaja Institute of Technology, Thandavapura, Karnataka, India. He is guiding eight Ph.D. students in several domains and has guided several projects for bachelor's and master's students. He has published papers in international conferences and journals.

