Towards More Accurate Iris Recognition Using Deeply Learned Spatially Corresponding Features


Zijing Zhao, Ajay Kumar
Department of Computing, The Hong Kong Polytechnic University
Hung Hom, Kowloon, Hong Kong
jason.zhao@connect.polyu.hk, ajay.kumar@polyu.edu.hk

Abstract

This paper proposes an accurate and generalizable deep learning framework for iris recognition. The proposed framework is based on a fully convolutional network (FCN), which generates spatially corresponding iris feature descriptors. A specially designed Extended Triplet Loss (ETL) function is introduced to incorporate the bit-shifting and non-iris masking, which are found necessary for learning discriminative spatial iris features. We also developed a sub-network to provide appropriate information for identifying meaningful iris regions, which serves as essential input for the newly developed ETL. Thorough experiments on four publicly available databases suggest that the proposed framework consistently outperforms several classic and state-of-the-art iris recognition approaches. More importantly, our model exhibits superior generalization capability as, unlike popular methods in the literature, it does not essentially require database-specific parameter tuning, which is another key advantage over other approaches.

1. Introduction

Iris recognition has emerged as one of the most accurate and reliable biometric approaches for human recognition. Automated iris recognition systems have therefore been widely deployed for various applications, from border control [22], citizen authentication [23] and forensics [24] to commercial products [25]. The usefulness of iris recognition has motivated increasing research effort in the past decades toward more accurate and robust iris matching algorithms under different circumstances [1-6].

In recent years, deep learning has gained tremendous success, especially in the area of computer vision, and has accomplished state-of-the-art performance for a number of tasks such as general image classification [17], object detection [18] and face recognition [15] [19]. However, unlike face recognition, the field of iris recognition has, to the best of our knowledge, paid almost no attention to incorporating the remarkable capabilities of deep learning to achieve performance superior to popular or state-of-the-art iris recognition methods.

In this paper we propose a new deep learning based iris recognition framework which not only achieves satisfactory matching accuracy but also exhibits outstanding generalization capability across different databases. With the design of an effective fully convolutional network, our model is able to significantly reduce the parameter space and learn comprehensive iris features which generalize well on different datasets. A newly developed Extended Triplet Loss (ETL) function provides meaningful and extensive supervision to the iris feature learning process with a limited amount of training data.

The main contributions of this paper can be summarized as follows: (i) We develop a new deep learning based iris recognition framework which generalizes well across different databases that represent diverse deployment environments. A new Extended Triplet Loss function has been developed to successfully address the nature of iris patterns for learning comprehensive iris features (more details in Sections 2.2 and 3). Significant advancement has therefore been made to bridge the gap between deep learning and iris recognition. (ii) Under fair comparison, our approach consistently outperforms several state-of-the-art methods on different datasets.
Even under the challenging scenario of not having any parameter tuning on the target dataset, our model can still achieve performance superior to state-of-the-art methods that have been extensively tuned.

1.1. Related Work

One of the most classic and effective approaches for automated iris recognition was proposed by Daugman [1] in 2002. In his work, a Gabor filter is applied to the segmented and normalized iris image, and the responses are then binarized as the IrisCode. The Hamming distance between two IrisCodes is used as the dissimilarity score for verification. Based on [1], a 1D log-Gabor filter was proposed in [2] to replace the 2D Gabor filter for more efficient iris feature extraction. A different approach, developed in [3] in 2007, exploited discrete cosine transforms (DCT) for analyzing the frequency information of image blocks and generating binary iris features. Another frequency-information based approach was proposed in [5] in 2008, in which 2D discrete Fourier transforms (DFT) were employed.
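To make this classic pipeline concrete, the following is a minimal NumPy sketch of IrisCode-style matching: binary codes are compared with a masked fractional Hamming distance, and the comparison is repeated over a small range of horizontal bit shifts to compensate for eye rotation. The function names and the shift range are illustrative, not taken from the cited papers.

```python
import numpy as np

def fractional_hamming(code1, code2, mask1, mask2):
    """Masked fractional Hamming distance between two binary iris codes."""
    valid = mask1 & mask2                      # bits valid in both samples
    n = valid.sum()
    if n == 0:
        return 1.0                             # no comparable bits
    return np.logical_xor(code1, code2)[valid].sum() / n

def match_iriscodes(code1, code2, mask1, mask2, max_shift=8):
    """Best (minimum) distance over horizontal bit shifts, Daugman-style.
    axis=1 is assumed to be the angular (width) dimension of the unwrapped iris."""
    best = 1.0
    for b in range(-max_shift, max_shift + 1):
        d = fractional_hamming(np.roll(code1, b, axis=1), code2,
                               np.roll(mask1, b, axis=1), mask2)
        best = min(best, d)
    return best
```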

In 2009, the multi-lobe differential filter (MLDF), a specific kind of ordinal filter, was proposed in [4] as an alternative to the Gabor/log-Gabor filters for generating iris templates.

Unlike its popularity for various computer vision tasks, especially face recognition, deep learning's potential for iris recognition has not yet been fully exploited in the literature. There has been very little attention on exploring iris recognition using deep learning. A deep representation for iris was proposed in [27] in 2015, but its purpose was spoofing detection rather than iris recognition. A recent approach named DeepIrisNet [28] has investigated deep learning based frameworks for general iris recognition. This work is essentially a direct application of typical convolutional neural networks (CNN) without much optimization for iris patterns. Our reproducible experimental comparison in Section 5.3 further indicates that, under fair comparison, this approach [28] cannot deliver superior performance even over other popular methods. Another recent work [37] attempted to employ a deep belief net (DBN) for iris recognition. Its core component, however, is the optimal Gabor filter selection, while the DBN is again a simple application on the IrisCode without iris-specific optimization. The above studies have made preliminary explorations but failed to establish substantial connections between iris recognition and deep learning.

1.2. Limitations and Challenges

Despite the popularity of iris recognition in biometrics, conventional iris feature descriptors do have several limitations. The summaries of earlier work in [7] and [8] reveal that existing methods can achieve satisfactory performance, but the performance needs to be further improved to meet the expectations of a wider range of deployments. Besides, traditional iris features, such as IrisCode, are mostly based on empirical models which apply hand-crafted filters or feature generators. As a result, these models rely heavily on parameter selection when applied to different databases or imaging environments. Although there are some standards on iris image format [29], the selection of parameters for feature extraction remains empirical, or based on training methods such as boosting [30]. This situation can be observed in [4], where eight different combinations of parameters for ordinal filters delivered varying performance on three databases, or in [9], which employed two sets of parameters for the log-Gabor filter on two databases after extensive tuning. Another limitation is that, due to the simplicity of conventional iris descriptors, they are less promising for fully exploiting the underlying distribution of the various types of iris data available today. Learning the data distribution from a large number of samples to further advance performance is one of the key trends nowadays.

Figure 1: Illustration of key steps for iris image preprocessing (iris detection, normalization, enhancement).

Deep learning has the potential to address the above limitations, since the parameters in deep neural networks are learned from data instead of being empirically set, and deep architectures are known to have good generalization capability. However, new challenges emerge while incorporating typical deep learning architectures (e.g., CNN) for iris recognition, which can be primarily attributed to the nature of iris patterns. Different from faces, iris patterns are observed to reveal little structural information or meaningful hierarchies. Iris texture is believed to be random [31].
Earlier promising works on iris recognition [1-5] mainly employed small-size filters or block-based operations to obtain iris features. We can therefore infer that the most discriminative information in an iris pattern comes from the local intensity distribution of the iris image rather than from global features, if any. CNNs are known to be effective for extracting features from low level to high level, and from local to global, due to the combination of convolutional layers and fully connected layers [20]. However, as discussed above, high-level and global features may not be optimal for iris representation.

This paper aims to develop a more accurate and robust deep learning based iris feature representation framework, making solid contributions towards fully discovering the potential of deep learning for iris recognition. Such objectives have not yet been pursued in the literature. Different from [28] and [37], this paper proposes a novel deep network and a customized loss function, which are highly optimized for extracting discriminative iris features and which have been comparatively evaluated against several state-of-the-art methods on multiple iris image databases.

The rest of this paper is organized as follows: Sections 2-4 detail the proposed approach in terms of network architecture, improved triplet loss function and feature encoding, respectively; Section 5 presents the experimental configurations, results and analysis; finally, the key conclusions from this paper are presented in Section 6.

2. Network Architecture

We have developed a highly optimized and unified deep learning architecture, referred to as UniNet, for both iris region masking and feature extraction, which is based on fully convolutional networks (FCN) [15].

A new customized loss function, named Extended Triplet Loss (ETL), has been developed to accommodate the nature of iris texture in supervised learning. The motivations and technical details for the proposed approach are explained in the following sections.

Figure 2: Detailed structures for FeatNet (top) and MaskNet (bottom). FeatNet generates a single-channel feature map for each sample for matching. MaskNet outputs a two-channel map, on which the values for each pixel along the two channels represent the probabilities of belonging to the iris and non-iris regions, respectively.

Table 1: Layer configurations for FeatNet and MaskNet.

FeatNet:
  Layer        | Type            | Kernel size | Stride | # Output channels
  Conv1        | Convolution     | 3 x 7       | 1      | 16
  Conv2        | Convolution     | 3 x 5       | 1      | 24
  Conv3        | Convolution     | 3 x 3       | 1      | 32
  Conv4        | Convolution     | 3 x 3       | 1      | 1
  Tanh1, 2, 3  | TanH activation | /           | /      | /
  Pool1, 2, 3  | Average pooling | 2 x 2       | 2      | /

MaskNet:
  Layer        | Type            | Kernel size | Stride | # Output channels
  Conv1        | Convolution     | 3 x 3       | 1      | 16
  Conv2        | Convolution     | 3 x 3       | 1      | 32
  Conv2_s      | Convolution     | 1 x 1       | 1      | 2
  Conv3        | Convolution     | 3 x 3       | 1      | 64
  Conv3_s      | Convolution     | 1 x 1       | 1      | 2
  Conv4        | Convolution     | 3 x 3       | 1      | 128
  Conv4_s      | Convolution     | 1 x 1       | 1      | 2
  Pool1, 2     | Max pooling     | 2 x 2       | 2      | /
  Pool3        | Max pooling     | 4 x 4       | 4      | /

2.1. Image Preprocessing

For all the experiments presented in this paper, we use a recent iris segmentation approach [10] for iris detection and normalization. The resolution after normalization is uniformly set to 64 x 512. We then apply a simple contrast enhancement process, which adjusts the image intensity so that 5% of the pixels are saturated at low and high intensities. The enhanced images are used as input to the deep network for training and testing. Figure 1 illustrates the key steps of image preprocessing.

2.2. Fully Convolutional Network

The proposed unified network (termed UniNet) is composed of two sub-networks, FeatNet and MaskNet, whose detailed structures are presented in Figure 2 and Table 1. Both sub-networks are based on fully convolutional networks (FCN), which were originally developed for semantic segmentation [15]. Different from a common convolutional neural network (CNN), an FCN does not have fully connected layers. The major components of an FCN are convolutional layers, pooling layers, activation layers, etc. Since all these layers operate on local regions around pixels from their bottom map, the output map can preserve spatial correspondence with the original input image. By incorporating up-sampling layers, an FCN is able to perform pixel-to-pixel prediction. In the following, we detail the two components of UniNet, starting with a code sketch of their overall structure.
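The sketch below reads Table 1 and Figure 2 literally as a PyTorch module pair; the padding choices, the bilinear up-sampling mode, MaskNet's activation function, and the summation-based fusion of its score maps are our assumptions, since these details are only shown graphically in the original figure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatNet(nn.Module):
    """Sketch of FeatNet per Table 1; padding and upsampling mode are assumptions."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=(3, 7), padding=(1, 3))
        self.conv2 = nn.Conv2d(16, 24, kernel_size=(3, 5), padding=(1, 2))
        self.conv3 = nn.Conv2d(24, 32, kernel_size=(3, 3), padding=(1, 1))
        self.pool = nn.AvgPool2d(2, stride=2)
        # Conv4 fuses the multi-scale stack (16 + 24 + 32 channels) into one map.
        self.conv4 = nn.Conv2d(72, 1, kernel_size=(3, 3), padding=(1, 1))

    def forward(self, x):
        size = x.shape[-2:]                          # e.g. 64 x 512 after normalization
        t1 = torch.tanh(self.conv1(x))               # Tanh1, full resolution
        t2 = torch.tanh(self.conv2(self.pool(t1)))   # Tanh2, 1/2 resolution
        t3 = torch.tanh(self.conv3(self.pool(t2)))   # Tanh3, 1/4 resolution
        up = lambda t: F.interpolate(t, size=size, mode='bilinear', align_corners=False)
        stack = torch.cat([t1, up(t2), up(t3)], dim=1)  # spatially aligned feature stack
        return self.conv4(stack)                     # single-channel feature map

class MaskNet(nn.Module):
    """Sketch of MaskNet: simplified FCN for two-class (iris / non-iris) prediction."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
        self.conv4 = nn.Conv2d(64, 128, 3, padding=1)
        self.score2 = nn.Conv2d(32, 2, 1)   # Conv2_s
        self.score3 = nn.Conv2d(64, 2, 1)   # Conv3_s
        self.score4 = nn.Conv2d(128, 2, 1)  # Conv4_s

    def forward(self, x):
        size = x.shape[-2:]
        c1 = F.relu(self.conv1(x))                     # activation type assumed
        c2 = F.relu(self.conv2(F.max_pool2d(c1, 2)))   # Pool1: 2x2, stride 2
        c3 = F.relu(self.conv3(F.max_pool2d(c2, 2)))   # Pool2: 2x2, stride 2
        c4 = F.relu(self.conv4(F.max_pool2d(c3, 4)))   # Pool3: 4x4, stride 4
        up = lambda t: F.interpolate(t, size=size, mode='bilinear', align_corners=False)
        # FCN-style fusion of the per-scale two-channel score maps
        return up(self.score2(c2)) + up(self.score3(c3)) + up(self.score4(c4))
```

Bundling these two modules as "UniNet" then yields, for each input image, a feature map from FeatNet and an iris mask from MaskNet (softmax over the two score channels).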

FeatNet

FeatNet is designed for extracting discriminative iris features which can be used in matching. As shown in Figure 2, the input iris image is forwarded through several convolutional, activation and pooling layers. The network activations at different scales, i.e., TanH1-3, are then up-sampled if necessary to the size of the original input. These features form a multi-channel feature stack which contains rich information from different scales, and they are finally convolved again to generate an integrated single-channel feature map.

The reason for selecting an FCN instead of a CNN for iris feature extraction primarily lies in the earlier analysis of iris patterns in Section 1.2, i.e., the most discriminative information of an iris probably comes from small and local patterns. An FCN is able to maintain local pixel-to-pixel correspondence between input and output, and is therefore a better candidate for iris feature extraction.

MaskNet

MaskNet performs non-iris region masking for normalized iris images, which can be regarded as a specific instance of semantic segmentation. It is basically a simplified version of the FCNs proposed in [15]. Similar to those in [15], MaskNet is supervised by a pixel-wise softmax loss, where each pixel is classified into one of two classes, i.e., iris or non-iris. In our practice, MaskNet is trained with 500 randomly selected samples from the training set of the ND-IRIS-0405 database, with ground-truth masks manually generated by us. We would like to clarify that the main focus of this paper is on learning an effective iris feature representation. MaskNet is developed to provide adequate and immediate information for masking non-iris regions, which is necessary for the newly designed loss function (detailed in Section 3) and also for the matching process. The placement of MaskNet in the unified network also preserves the possibility that iris masks may be jointly optimized/fine-tuned with the feature representations, which is one of our future research goals. At this stage, however, MaskNet is pre-trained and fixed while learning the iris features. A sample evaluation of its performance is provided in the supplementary file.

2.3. Triplet-based Network Architecture

A triplet network [16] was implemented for learning the convolutional kernels in FeatNet. The overall structure of the triplet network in the training stage is illustrated in Figure 3.

Figure 3: Triplet-based network organization for training.

As shown in the figure, three identical UniNets, whose weights are kept identical during training, are placed in parallel to forward and back-propagate the data and gradients for the anchor, positive and negative samples, respectively. The anchor-positive (AP) pair should come from the same person, while the anchor-negative (AN) pair comes from different persons. The triplet loss function in such an architecture attempts to reduce the anchor-positive distance and meanwhile increase the anchor-negative distance. However, in order to ensure more appropriate and effective supervision when generating iris features with the FCN, we improve the original triplet loss by incorporating a bit-shifting operation; a sketch of this training organization follows. The improved loss function is referred to as the Extended Triplet Loss (ETL), whose motivation and mechanism are detailed in Section 3.
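As a minimal sketch of the organization in Figure 3, a single UniNet instance can simply be applied three times per mini-batch, so the weight sharing among the three branches is automatic. The assumption that `uninet(x)` returns a (feature map, iris mask) pair is ours, and `extended_triplet_loss` refers to the loss sketched at the end of Section 3.

```python
import torch

def train_step(uninet, optimizer, anchor, positive, negative):
    """One training iteration over a mini-batch of triplets (cf. Figure 3)."""
    f_a, m_a = uninet(anchor)      # feature map and iris mask for the anchor
    f_p, m_p = uninet(positive)    # same module object => shared weights
    f_n, m_n = uninet(negative)
    loss = extended_triplet_loss(f_a, f_p, f_n, m_a, m_p, m_n)
    optimizer.zero_grad()
    loss.backward()                # autograd realizes the gradients of Section 3
    optimizer.step()
    return loss.item()
```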
3. Extended Triplet Loss Function

3.1. Triplet Loss Function Incorporating Masks and Bit-Shifting

The original loss function for a triplet network is defined as follows:

$$L = \frac{1}{N}\sum_{i=1}^{N}\Big[\,\big\|f_i^A - f_i^P\big\|^2 - \big\|f_i^A - f_i^N\big\|^2 + \alpha\,\Big]_{+} \qquad (1)$$

where $N$ is the number of triplet samples in a mini-batch, and $f_i^A$, $f_i^P$ and $f_i^N$ are the feature maps of the anchor, positive and negative images in the $i$-th triplet, respectively. The symbol $[\,\cdot\,]_+$ is the same as used in [16] and is equivalent to $\max(\,\cdot\,, 0)$. $\alpha$ is a preset parameter to control the desired margin between the anchor-positive distance and the anchor-negative distance. Optimizing the above loss will lead to the anchor-positive distance being reduced and the anchor-negative distance being enlarged until their margin is larger than a certain value.

In our case, however, using the Euclidean distance as the dissimilarity metric is far from sufficient. As discussed earlier, since we propose using spatial features which have the same resolution as the input, the matching process has to deal with non-iris region masking and horizontal shifting, which are frequently observed in iris samples as illustrated in Figure 4. Therefore, in the following we extend the original triplet loss function into what we refer to as the Extended Triplet Loss (ETL):

$$\mathrm{ETL} = \frac{1}{N}\sum_{i=1}^{N}\Big[\,D(f_i^A, f_i^P) - D(f_i^A, f_i^N) + \alpha\,\Big]_{+} \qquad (2)$$

where $D(f^1, f^2)$ represents the Minimum Shifted and Masked Distance (MMSD) function, defined as follows:

$$D(f^1, f^2) = \min_{-B \le b \le B}\big\{\,FD(f^1_b, f^2)\,\big\} \qquad (3)$$

where $FD$ is the Fractional Distance, which takes the feature masks into consideration:

$$FD(f^1, f^2) = \frac{1}{|M|}\sum_{(x,y)\in M}\big(f^1_{x,y} - f^2_{x,y}\big)^2, \qquad M = \{(x,y)\mid m^1_{x,y}\ne 0 \ \text{and}\ m^2_{x,y}\ne 0\} \qquad (4)$$

where $m^1$ and $m^2$ are the binary masks for the two feature maps, in which zero means the current position is non-iris. In other words, $FD$ only measures the distances at valid iris pixel positions, and normalizes the total distance by the number of valid pixels. In (3), the subscript $b$ means the feature map has been shifted horizontally by $b$ pixels, i.e., a shifted feature map has the following spatial correspondence with the original one:

$$f_b[x_b, y] = f[x, y], \qquad x_b = (x - b + w) \bmod w \qquad (5)$$

where $x, y$ are the spatial coordinates, $w$ is the map width, and $x_b$ is obtained by shifting the pixel to the left by a step of $b$. Note that when $x$ is less than $b$, the pixel position is wrapped around to the right end of the map, as the iris map is normalized by unwrapping the original iris circularly, so the left end is physically connected with the right end. When $b$ is negative, the bit-shifting operation shifts the map to the right by $-b$ pixels. Computing the MMSD between feature maps then makes iris feature matching meaningful.

Figure 4: Illustration of occlusions (labeled in blue) and the horizontal translation which usually exists between two normalized iris images, even from the same iris.

In order to keep the notation simple for the upcoming derivation, we denote the offsets that fulfill the MMSD of the AP-pair and the AN-pair as follows:

$$b_{AP} = \operatorname*{arg\,min}_{-B\le b\le B}\big\{FD(f^A_b, f^P)\big\}, \qquad b_{AN} = \operatorname*{arg\,min}_{-B\le b\le B}\big\{FD(f^A_b, f^N)\big\} \qquad (6)$$

During the back-propagation (BP) of the training process, the gradients (or partial derivatives) of the new loss with respect to the anchor, positive and negative feature maps need to be computed. For simplicity, let us first derive the partial derivative w.r.t. the positive feature map $f^P$. From (2) it can be derived that, for one sample in the batch:

$$\frac{\partial\,\mathrm{ETL}}{\partial f^P} = \begin{cases} 0, & \text{if } \mathrm{ETL} = 0 \\[6pt] \dfrac{1}{N}\,\dfrac{\partial\,\mathrm{ETL}}{\partial D(f^A, f^P)}\,\dfrac{\partial D(f^A, f^P)}{\partial f^P}, & \text{otherwise} \end{cases} \qquad (7)$$

Again from (2) we can see that $\mathrm{ETL} = 0$ is equivalent to $D(f^A, f^P) - D(f^A, f^N) + \alpha \le 0$. We only need to show the derivation when $\mathrm{ETL}$ is not 0. Let us define the set of common valid iris pixel positions for the AP pair as:

$$M_{AP} = \big\{(x,y)\mid m^A[x,y]\ne 0 \ \text{and}\ m^P[x_{b_{AP}},y]\ne 0\big\} \qquad (8)$$

From (3) and (4) we have the following pixel-wise derivatives:

$$\frac{\partial D(f^A,f^P)}{\partial f^P[x,y]} = \frac{\partial\,FD(f^A_{b_{AP}}, f^P)}{\partial f^P[x,y]} = \begin{cases} 0, & \text{if } (x,y)\notin M_{AP} \text{ or } \mathrm{ETL}=0 \\[6pt] -\dfrac{2}{|M_{AP}|}\big(f^A[x_{b_{AP}},y] - f^P[x,y]\big), & \text{otherwise} \end{cases} \qquad (9)$$

And apparently $\partial\,\mathrm{ETL}/\partial D(f^A, f^P) = 1$; thus, from (7) and (9):

$$\frac{\partial\,\mathrm{ETL}}{\partial f^P[x,y]} = \begin{cases} 0, & \text{if } (x,y)\notin M_{AP} \text{ or } \mathrm{ETL}=0 \\[6pt] -\dfrac{2\big(f^A[x_{b_{AP}},y] - f^P[x,y]\big)}{N\,|M_{AP}|}, & \text{otherwise} \end{cases} \qquad (10)$$

Similarly, for the partial derivatives on the negative feature map, we have:

$$\frac{\partial\,\mathrm{ETL}}{\partial f^N[x,y]} = \begin{cases} 0, & \text{if } (x,y)\notin M_{AN} \text{ or } \mathrm{ETL}=0 \\[6pt] \dfrac{2\big(f^A[x_{b_{AN}},y] - f^N[x,y]\big)}{N\,|M_{AN}|}, & \text{otherwise} \end{cases} \qquad (11)$$

The final step is to calculate the derivatives w.r.t. the anchor feature map. It can be seen from (3)-(5) that shifting the first map to the left by $b$ pixels is equivalent to shifting the second map to the right by $b$ pixels. Making use of this property, we have $FD(f^A_{b_{AP}}, f^P) = FD(f^A, f^P_{-b_{AP}})$ and $FD(f^A_{b_{AN}}, f^N) = FD(f^A, f^N_{-b_{AN}})$. It is therefore quite straightforward to obtain from (2)-(4):

$$\frac{\partial\,\mathrm{ETL}}{\partial f^A[x,y]} = -\frac{\partial\,\mathrm{ETL}}{\partial f^P[x_{-b_{AP}},y]} - \frac{\partial\,\mathrm{ETL}}{\partial f^N[x_{-b_{AN}},y]} \qquad (12)$$

After calculating the derivative maps w.r.t. $f^A$, $f^P$ and $f^N$ respectively, the rest of the BP process is the same as for common convolutional neural networks.
The above derivation shows that gradients are computed only for pixels that are not masked. In this way, features are learned only within valid iris regions, while non-iris regions are ignored since they are not of interest. After the last convolutional layer, a single-channel feature map is generated which can be used to measure similarities between iris samples.
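The derivation above can also be realized directly with automatic differentiation: if the masked fractional distance of (4) and the circular shift of (5) are expressed as differentiable tensor operations, the gradients (7)-(12) are exactly what autograd produces, including the routing of gradients through the minimizing shifts $b_{AP}$ and $b_{AN}$ of (6) and the zeroing of masked pixels. A minimal PyTorch sketch, assuming feature maps and binary masks of shape (N, 1, H, W) and placeholder values for the hyperparameters $\alpha$ and $B$:

```python
import torch

def fractional_distance(f1, f2, m1, m2):
    """Eq. (4): masked mean squared distance over jointly valid iris pixels."""
    valid = (m1 != 0) & (m2 != 0)                     # set M in Eq. (4)
    n = valid.flatten(1).sum(dim=1).clamp(min=1)      # |M| per sample
    sq = ((f1 - f2) ** 2) * valid                     # zero out non-iris pixels
    return sq.flatten(1).sum(dim=1) / n

def mmsd(f1, f2, m1, m2, B=8):
    """Eq. (3): minimum over circular horizontal shifts (Eq. (5)) of Eq. (4)."""
    dists = [fractional_distance(torch.roll(f1, b, dims=-1), f2,
                                 torch.roll(m1, b, dims=-1), m2)
             for b in range(-B, B + 1)]
    return torch.stack(dists, dim=0).min(dim=0).values

def extended_triplet_loss(f_a, f_p, f_n, m_a, m_p, m_n, alpha=0.2, B=8):
    """Eq. (2): hinge of (AP distance - AN distance + margin), batch-averaged.
    alpha and B here are illustrative placeholders, not values from the paper."""
    d_ap = mmsd(f_a, f_p, m_a, m_p, B)
    d_an = mmsd(f_a, f_n, m_a, m_n, B)
    return torch.clamp(d_ap - d_an + alpha, min=0).mean()
```

The per-sample minimum inside `mmsd` performs the $b_{AP}$ / $b_{AN}$ selection of (6), so back-propagation only flows through the minimizing shift, matching (10)-(12).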

Figure 5: Illustration of the feature binarization process (FeatNet output -> compute mean over the MaskNet-masked iris region -> binarize against the mean -> additionally mask values within t of the mean -> matching).

4. Feature Encoding and Matching

We perform a simple encoding process on the feature map output by UniNet. The feature maps originally contain real values, and it is straightforward to measure the fractional Euclidean distance between the masked maps for matching, as the network is trained in this manner. However, binary features are more popular in most research works on iris recognition (e.g., [1]-[6], [9]), since it is widely accepted by the community that binary features are more resistant to illumination change, blurring and other underlying noise. Besides, binary features consume less storage and enable faster matching. Therefore, we also investigated the feasibility of binarizing our features with a reasonable scheme, described as follows (a code sketch is given after the database descriptions in Section 5.1):

For each output feature map, the mean value m of the elements within the non-masked iris regions is first computed. This mean value is then used as the threshold to binarize the original feature map. In order to avoid marginal errors, elements with feature values v close to m (i.e., |v - m| < t) are regarded as less reliable and are masked together with the original mask output by MaskNet. Such a further masking step is conceptually similar to "Fragile Bits" [12], which discovered that some bits in the IrisCode, with filtered responses near the axes of the complex space, are less consistent or reliable. The range threshold t for masking unreliable bits is uniformly set to 0.6 for all the experiments. The feature encoding process is illustrated in Figure 5. For matching, we use the fractional Hamming distance [2] between the binarized feature maps with the extended masks. It is observed that using the binary features does not degrade the performance compared with using the real-valued features, and even yields slight improvements in some cross-dataset scenarios, probably due to the factors discussed above.

5. Experiments and Results

Thorough experiments were conducted to evaluate the performance of the proposed approach from various aspects. The following sections detail the experimental settings along with the reproducible [38] results.

Figure 6: Sample raw images from the four employed databases (ND-IRIS-0405, CASIA.v4-distance, IITD, WVU Non-ideal).

5.1. Databases and Protocols

We employed the following four publicly available databases in our experiments:

ND-IRIS-0405 Iris Image Dataset (ICE 2006)
This database [32] contains 64,980 iris samples from 356 subjects and is one of the most popular iris databases in the literature. The training set for this database is composed of the first 25 left-eye images from all the subjects, and the test set consists of the first 10 right-eye images from all the subjects. The test set, after removing some falsely segmented samples, contains 14,791 genuine pairs and 5,743,130 imposter pairs.

CASIA Iris Image Database V4 - distance
This database (subset) [33] includes 2,446 samples from 142 subjects. Each sample captures the upper part of the face and therefore contains both the left and right irises. The images were acquired from 3 meters away. An OpenCV-implemented eye detector [36] was applied to crop the eye regions from the original images. The training set consists of all the right-eye images from all the subjects, and the test set comprises all the left-eye images. The test set generates 20,702 genuine pairs and 2,969,533 imposter pairs.
IITD Iris Database
The IITD database [34] contains 2,240 image samples from 224 subjects. All of the right-eye iris images were used as the training set, while the first five left-eye images were used as the test set. The test set contains 2,240 genuine pairs and 624,400 imposter pairs.

WVU Non-ideal Iris Database - Release 1
The WVU Non-ideal database [35] (Rel1 subset) comprises 3,043 iris samples from 231 subjects, which were acquired under different extents of off-angle viewing, illumination change, occlusion, etc. The training set consists of all of the right-eye images, and the test set was formed by the first five left-eye images from all the subjects. The test set has 2,251 genuine pairs and 643,565 imposter pairs.

From the above descriptions we can observe that the imaging conditions for these databases are quite different. Sample images from the four employed datasets are provided in Figure 6, where noticeable variation in image quality can be observed.
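As promised in Section 4, the following sketch shows the mean-threshold binarization with fragile-bit masking used for encoding; t = 0.6 is the value stated in Section 4, while the function and variable names are ours. Matching then reuses the masked fractional Hamming distance (with bit shifts) sketched in Section 1.1.

```python
import numpy as np

def encode(feature_map, iris_mask, t=0.6):
    """Binarize a real-valued feature map against its mean over valid iris pixels,
    masking 'fragile' values that lie within t of the mean (Section 4).
    iris_mask is assumed to be a boolean array from MaskNet."""
    m = feature_map[iris_mask].mean()            # mean over the non-masked iris region
    code = feature_map >= m                      # value >= mean -> 1, else 0
    reliable = np.abs(feature_map - m) >= t      # drop bits too close to the mean
    return code, iris_mask & reliable            # binary code and extended mask

# Matching with the helper sketched in Section 1.1:
#   code1, xmask1 = encode(f1, m1)
#   code2, xmask2 = encode(f2, m2)
#   score = match_iriscodes(code1, code2, xmask1, xmask2)
```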

It is therefore judicious to assume that these databases can represent diverse deployment environments.

5.2. Test Configurations

We incorporated the following two configurations during the test phase for an extensive evaluation of the proposed model.

CrossDB
In the CrossDB configuration, we use ND-IRIS-0405 as the training set. During testing, the trained model was directly applied to CASIA.v4-distance and IITD without any further tuning. The purpose of the CrossDB setting is to examine the generalization capability of the proposed framework under the challenging scenario in which few training samples are available.

WithinDB
In this configuration we use the network trained on ND-IRIS-0405 as the initial model, then fine-tune it using the independent training set from the target database. The fine-tuned network is then evaluated on the respective test set. Being capable of learning from data is the key advantage of deep learning, so it is judicious to examine the best possible performance of the proposed model by fine-tuning it with some samples from the target database. The fine-tuned models from the WithinDB configuration are expected to perform better than those from CrossDB, due to the higher consistency of image quality between the training set and the test set.

It should be noted that in both of the above configurations, the training set and test set are totally separated, i.e., none of the iris images overlap between the training set and the test set. All the experimental results were generated under an all-to-all matching protocol, i.e., the scores of every image pair in the test set have been computed.

5.3. Comparison with Earlier Works

Figure 7: ROCs (genuine accept rate vs. false accept rate) comparing IrisCode (2D Gabor - OSIRIS), IrisCode (1D log-Gabor), Ordinal and our approach (CrossDB and WithinDB) on the four employed databases: (a) ND-IRIS-0405, (b) CASIA.v4-distance, (c) IITD, (d) WVU Non-ideal. Best viewed in color.

Table 2: Summary of false reject rates (FRR) at 0.1% false accept rate (FAR) and equal error rates (EER) for the comparison.

We present comparative experimental results using several highly competitive benchmarks. The Gabor filter based IrisCode [1] has been the most widely deployed iris feature descriptor, largely due to the fact that few alternative iris features in the literature are universally accepted as better than IrisCodes. Instead, the majority of recent works on iris biometrics focus on improving segmentation and/or normalization models [10] [11], applying multi-score fusion [9] or feature-bit selection [12]. In other words, in the context of iris feature representations, IrisCode is still the most popular and highly competitive approach, and is therefore a fair benchmark for the performance evaluation. IrisCode has a number of advanced versions. From the publicly available ones, we selected OSIRIS [13], an open-source tool for iris recognition. It implements a bank of multiple tunable 2D Gabor filters that can encode iris patterns at different scales, and is therefore a highly credible competitor.
Another classic implementation of IrisCode is based on 1D log-Gabor filter(s) [2], which is claimed to encode iris patterns more efficiently, and is also widely chosen as a benchmark in a variety of research works (e.g., [6], [10]). Therefore, this approach is also investigated. Apart from the Gabor-series filters, the ordinal filters proposed in [4] can serve as a different type of iris feature extractor to complement the comparisons. The aforeme

