
2020 JETIR April 2020, Volume 7, Issue 4, www.jetir.org (ISSN-2349-5162)

NEURAL NETWORK BASED EFFICIENT GLAUCOMA DETECTION USING DENET

1. M. RAVI, 2. Y. RUPA SESHA VARMA, 3. Y. TANUJA DURGA BHAVANI, 4. M. SRAVANI
1. Assistant Professor (Guide); 2, 3, 4. B.Tech Final Year Students
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING, USHA RAMA COLLEGE OF ENGINEERING AND TECHNOLOGY, VIJAYAWADA, INDIA.

ABSTRACT: Glaucoma is a chronic eye disease that leads to irreversible vision loss. The cup-to-disc ratio (CDR) plays an important role in the screening and diagnosis of glaucoma, so accurate and automatic segmentation of the optic disc (OD) and optic cup (OC) from fundus images is a fundamental task in present-day biomedical applications. We present a residual learning framework that eases the training of networks substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence that these residual networks are easier to optimize and can gain accuracy from considerably increased depth.

KEYWORDS: Glaucoma, cup-to-disc ratio, optic disc, optic nerve, deep learning.

INTRODUCTION:
Automated image analysis and processing is of great significance in the early detection, screening and treatment planning of various retinal, ophthalmic and systemic diseases, especially because of its non-invasiveness [5]. Precise detection and accurate analysis of ophthalmic pathologies for timely treatment is essential in preventing vision loss.
The development of Computer-Assisted Diagnostic (CAD) systems to assist clinicians in the diagnosis and prognosis of retinal diseases has a vital role in improving healthcare, particularly in developing countries with a shortage of professional ophthalmologists [6]. Accurate localization and precise segmentation of the optic disc (OD) is the first step in developing CAD systems for early detection of ophthalmic diseases such as glaucoma and diabetic retinopathy (DR). Glaucoma is the second leading cause of visual loss and blindness around the globe. Glaucoma patients are mostly unaware of the effects at early stages, until visual loss progresses; over a 5-year period, damage to the optic nerve fibres increases to 63% [10]. Glaucoma is characterized by changes in the colour, shape and depth of the OD. The presence of parapapillary atrophy produces bright regions around the OD rim, distorting its elliptical shape [6]. Moreover, progressive optic nerve fibre damage causes structural changes in the OD, optic nerve head and nerve fibre layer, which result in an increase in the optic cup-to-disc ratio (CDR). The CDR can be assessed by estimating the diameter and area of the OD, the area of the rim, and the diameter of the optic cup. Accurate and fast OD segmentation and analysis is therefore the first step towards a computer-assisted diagnostic system for glaucoma screening in large population-based studies. DR is an eye disease that causes complications due to excessive glucose levels in the blood of diabetic patients. Diabetic Macular Edema (DME) is commonly associated with DR and swells the retina through fluid leaking from macular blood vessels. The number and position of exudates relative to the fovea can be assessed to estimate the risk of future disease development.
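The CDR assessment described above can be illustrated with a toy computation from binary OD/OC masks (a minimal sketch for illustration only; the mask layout and the vertical-diameter convention are assumptions, not the paper's method):

```python
import numpy as np

def vertical_diameter(mask):
    """Vertical extent (in pixels) of a binary mask: the largest
    number of foreground pixels found in any single column."""
    return int(mask.sum(axis=0).max())

def cup_to_disc_ratio(cup_mask, disc_mask):
    """Vertical CDR: cup diameter divided by disc diameter."""
    return vertical_diameter(cup_mask) / vertical_diameter(disc_mask)

# Toy example: a 10-pixel-tall disc containing a 4-pixel-tall cup.
disc = np.zeros((20, 20), dtype=bool)
cup = np.zeros((20, 20), dtype=bool)
disc[5:15, 5:15] = True   # disc spans rows 5..14 -> vertical diameter 10
cup[8:12, 8:12] = True    # cup spans rows 8..11  -> vertical diameter 4
print(cup_to_disc_ratio(cup, disc))  # 0.4
```

In practice the masks would come from the OD/OC segmentation stage rather than being drawn by hand as here.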
This step needs the accurate identification of landmark features in fundus images, such as the optic disc.

Figure-1: Retinal image features

Deep convolutional neural networks [2, 1] have led to a series of breakthroughs for image classification [2, 5, 4]. Deep networks naturally integrate low/mid/high-level features [50] and classifiers in an end-to-end multi-layer fashion, and the "levels" of features can be enriched by the number of stacked layers (depth). Recent evidence [1, 4] reveals that network depth is of crucial importance, and the leading results [4] on the

challenging ImageNet dataset [36] all exploit "very deep" [41] models, with a depth of sixteen [4] to thirty [1].

LITERATURE SURVEY:
The problem of optic nerve detection has rarely received unique attention. It has been investigated as a precursor to other tasks, for example identifying a starting point for blood vessel segmentation [1, 2]. It has also been investigated as a by-product of general retinal image segmentation, for instance into separate identifications of arteries, veins, the nerve, the fovea, and lesions [1, 6, 12]. Here we review these related works.

In [12] a method is presented to segment a retinal image into arteries, veins, the optic disk, the macula, and background. The method is based upon split-and-merge segmentation, followed by feature-based classification. The features used for classification include region intensity and shape. The primary goal of the paper was vessel measurement; the nerve was identified only to prevent its inclusion in the measurement of vessels. Ten healthy retinas and ten retinas with arterial hypertension were used for experiments. Quantitative results for nerve detection were not provided. A similar approach was taken in [6], in which the segmentation was accomplished using matched spatial filters of bright and dark blobs; quantitative results for nerve detection were again not provided.

In [7] a method is presented to segment a retinal image into vessels, the nerve, the fovea, scotomas, and subretinal leakages. Nerve detection is based upon the transform of gradient edges into a Hough space describing circles. The search is restricted to one-third of the image based upon a priori knowledge of the expected general location of the nerve. Eleven retinas with age-related macular degeneration (ARMD) were used for experiments; in 10 out of 11 cases the nerve was successfully detected.

In [1] a method is presented to segment a retinal image into arteries, veins, the optic disk, and lesions.
Nerve detection is based upon tracking the vessel network to a common starting point. The tracking process uses the angles between vessels at branch points to identify the trunk. A result is shown for two images; quantitative results were not provided.

Figure-2: Proposed block

Our Disc-aware Ensemble Network (DENet) takes into account two levels of fundus image information, the global image and the local disc region, as shown in the figure below. The global image level provides a coarse structure representation of the whole fundus image, while the local disc region is utilized to learn a fine representation around the optic disc.

Fig-3: Architecture of our DENet

FUNDUS IMAGE LEVEL:
In our DENet, two streams are employed to learn representations at the global fundus image level. The first stream is a standard classification network using a Residual Network (ResNet) [3]. ResNet is based on a convolutional neural network and introduces shortcut connections to handle the vanishing gradient problem in very deep networks, as shown in Fig. 3. We utilize ResNet-50 as the backbone model to learn the global representation of the whole fundus image directly; it consists of 5 down-sampling blocks, followed by a global max pooling layer and a fully connected (FC) layer for glaucoma screening. The input image of this stream is resized to 224×224 to enable use of the pre-trained model in [4] as initialization for our network.
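The residual computation that ResNet's shortcut connections perform can be sketched in plain numpy (a sketch, not the paper's implementation; the two-layer block shape and the placeholder weights are assumptions):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """A basic residual block: the layers learn F(x) = w2 @ relu(w1 @ x),
    and the shortcut adds the input back, giving y = relu(F(x) + x)."""
    fx = w2 @ relu(w1 @ x)   # the residual function F(x)
    return relu(fx + x)      # identity shortcut, then non-linearity

rng = np.random.default_rng(0)
x = rng.standard_normal(8)

# With zero weights the residual function vanishes, F(x) = 0, and the
# block reduces to relu(x): the network can always fall back to a
# near-identity mapping, which is what eases optimization at depth.
w_zero = np.zeros((8, 8))
assert np.allclose(residual_block(x, w_zero, w_zero), relu(x))
```

This is why stacking many such blocks remains trainable: each block only has to learn a correction to its input rather than a full unreferenced mapping.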

The second global-level stream is the segmentation-guided network, which localizes the optic disc region and produces a detection result based on the disc-segmentation representation. As shown in Fig. 3, the main architecture of the segmentation-guided network is adapted from the U-shape convolutional network (U-Net) [3], an efficient fully convolutional neural network for biomedical image segmentation. Similar to the original U-Net architecture, our method consists of an encoder path (left side) and a decoder path (right side). Each encoder stage performs a convolutional layer with a filter bank to produce a set of encoder feature maps, with the element-wise rectified-linear (ReLU) activation function. The decoder path also utilizes convolutional layers to output the decoder feature maps. Skip connections transfer the corresponding feature maps from the encoder path and concatenate them with the up-sampled decoder feature maps. Finally, a classifier applies a 1×1 convolutional layer with sigmoid activation as the pixel-wise classification to produce the disc probability map. Moreover, we extend a new branch from the saddle layer of the U-shape network, where the spatial scale is smallest (i.e., 40×40) and the number of channels is highest (i.e., 512-D). The extended branch forms an implicit vector via average pooling and flatten layers, then connects two fully connected layers to produce a glaucoma classification probability. This pipeline embeds the segmentation-guided representation through the convolutional filters on the decoder path of the U-shape network.

Fig-4: (A) The detailed architecture of our segmentation-guided network

In our global fundus-image-level networks, two loss functions are employed. The first is the binary cross-entropy loss for the glaucoma detection layer.
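The binary cross-entropy loss just mentioned, together with the Dice coefficient loss used for the disc map, can be sketched in numpy (a sketch only; the squared-denominator Dice form and the smoothing constant `eps` are assumptions):

```python
import numpy as np

def bce_loss(p, g):
    """Binary cross-entropy between predicted probabilities p and labels g."""
    p = np.clip(p, 1e-7, 1 - 1e-7)   # avoid log(0)
    return float(np.mean(-(g * np.log(p) + (1 - g) * np.log(1 - p))))

def dice_loss(p, g, eps=1e-7):
    """Dice coefficient loss over a predicted disc probability map p
    (values in [0,1]) and a binary ground-truth mask g."""
    num = 2.0 * np.sum(p * g)
    den = np.sum(p ** 2) + np.sum(g ** 2) + eps
    return float(1.0 - num / den)

g = np.array([1.0, 1.0, 0.0, 0.0])
assert dice_loss(g, g) < 1e-6          # perfect overlap -> loss near 0
assert dice_loss(1.0 - g, g) > 0.999   # no overlap -> loss near 1
```

Both functions are differentiable in `p`, which is what allows them to be combined in a single backpropagation pass.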
The other is the Dice coefficient loss for assessing disc segmentation [12], defined as

L_Dice = 1 − (2 Σᵢ pᵢ gᵢ) / (Σᵢ pᵢ² + Σᵢ gᵢ²),

where the sums run over the N pixels, and pᵢ ∈ [0, 1] and gᵢ ∈ {0, 1} denote the predicted probability and the binary ground-truth label for the disc region, respectively. The Dice coefficient loss can be differentiated, yielding the gradient

∂L_Dice/∂pⱼ = −2 [ gⱼ (Σᵢ pᵢ² + Σᵢ gᵢ²) − 2 pⱼ (Σᵢ pᵢ gᵢ) ] / (Σᵢ pᵢ² + Σᵢ gᵢ²)².

These two losses are efficiently integrated into backpropagation via standard stochastic gradient descent (SGD). Note that we use two phases to train the segmentation-guided model. First, the U-shape segmentation network for disc detection is trained on pixel-level disc training data with the Dice coefficient loss. Then the parameters of the CNN layers are frozen, and the fully connected layers for the classification task are trained using glaucoma detection training data. We use separate phases to train the segmentation-guided model instead of multi-task single-stage training for the following reasons: 1) using the disc-segmentation representation for screening adds diversity to the proposed network; 2) pixel-level label data for disc segmentation is more expensive than image-level label data for glaucoma detection, and the separate stages can employ different training datasets and configurations (e.g., different batch sizes and image numbers).

OPTIC DISC REGION LEVEL:
The second level of our network is based on the local optic disc region, which is cropped based on the preceding segmentation-guided network. The local disc region preserves more detailed information at higher resolution, which benefits learning a fine representation. Two local streams are used in our network to learn representations of the local disc region. In our method, we apply a pixel-wise polar transformation to transfer the original image to the polar coordinate system. Let p(u, v) denote a point on the original Cartesian plane; its corresponding point in the polar coordinate system is denoted p′(θ, r), as shown in Fig. 4, where r and θ are the radius and directional angle of the original point p, respectively.
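The pixel-wise polar transformation can be sketched by sampling the Cartesian image along radii from the disc centre (a sketch under assumptions: nearest-neighbour sampling, and placeholder grid sizes; only the parameter names O, R, θ come from the text):

```python
import numpy as np

def polar_transform(img, center, R, n_theta=64):
    """Map img (H x W) to a polar image of shape (R, n_theta):
    row = radius r in [0, R), column = angle theta in [0, 2*pi)."""
    u0, v0 = center
    polar = np.zeros((R, n_theta), dtype=img.dtype)
    for r in range(R):
        for t in range(n_theta):
            theta = 2.0 * np.pi * t / n_theta
            u = int(round(u0 + r * np.cos(theta)))   # Cartesian row
            v = int(round(v0 + r * np.sin(theta)))   # Cartesian column
            if 0 <= u < img.shape[0] and 0 <= v < img.shape[1]:
                polar[r, t] = img[u, v]
    return polar

# A bright disc of radius 5 centred at (16, 16) becomes, in polar
# coordinates, a band covering all angles at small radii.
img = np.zeros((32, 32))
uu, vv = np.mgrid[0:32, 0:32]
img[(uu - 16) ** 2 + (vv - 16) ** 2 < 25] = 1.0
polar = polar_transform(img, (16, 16), R=10)
assert polar[:4].min() == 1.0 and polar[8:].max() == 0.0
```

Production code would use a vectorized remap with interpolation, but the loop above makes the pixel-wise mapping explicit.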

Three parameters control the polar transformation: the disc centre O(u₀, v₀), the polar radius R, and the polar angle θ. The polar transform mapping is

u = u₀ + r cos θ,  v = v₀ + r sin θ.

Fig-5: Illustration of the mapping from the Cartesian coordinate system

The height and width of the polar image are set to the polar radius R and the discretization 2π/s, where s is the stride. The disc polar transformation has the following advantages:
1) Since the disc and cup are structured as near-concentric circles, the polar transformation enlarges the cup region by interpolation, and the enlarged cup region displays more detail. As shown in Fig. 4, the proportion of the cup region is increased and more balanced than in the original fundus image.
2) Due to the pixel-wise mapping, the polar transformation is equivariant to data augmentation on the original fundus image [33]. For example, moving the transformation centre O(u₀, v₀) corresponds to a drift translation in polar coordinates; using different polar radii R is the same as augmenting with various scaling factors; and changing the polar angle θ shifts the image along the horizontal coordinate. Thus data augmentation for deep learning can be performed by applying the polar transformation with various parameters.

MORPHOLOGICAL IMAGE PROCESSING:
Binary images may contain numerous imperfections. In particular, the binary regions produced by simple thresholding are distorted by noise and texture. Morphological image processing pursues the goal of removing these imperfections by accounting for the form and structure of the image. These techniques can also be extended to greyscale images.

BASIC CONCEPTS:
Morphological image processing is a collection of non-linear operations related to the shape or morphology of features in an image.
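The noise removal described above is typically done with an opening: an erosion followed by a dilation with a small structuring element (a plain-numpy sketch; the 3×3 square element is an assumed choice):

```python
import numpy as np

def erode(mask):
    """3x3 binary erosion: a pixel survives only if its whole
    3x3 neighbourhood is foreground (zero-padded at the border)."""
    p = np.pad(mask, 1)
    out = np.ones_like(mask)
    for di in range(3):
        for dj in range(3):
            out &= p[di:di + mask.shape[0], dj:dj + mask.shape[1]]
    return out

def dilate(mask):
    """3x3 binary dilation: a pixel is set if any neighbour is set."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for di in range(3):
        for dj in range(3):
            out |= p[di:di + mask.shape[0], dj:dj + mask.shape[1]]
    return out

def opening(mask):
    """Opening = erosion then dilation: removes small specks while
    largely preserving the shape of larger regions."""
    return dilate(erode(mask))

# A 6x6 blob survives opening; an isolated single-pixel speck does not.
m = np.zeros((12, 12), dtype=bool)
m[2:8, 2:8] = True   # blob
m[10, 10] = True     # speck (noise)
o = opening(m)
assert o[4, 4] and not o[10, 10]
```

Libraries such as scipy.ndimage or scikit-image provide optimized versions of these operators, including greyscale variants.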
Morphological operations rely only on the relative ordering of pixel values, not on their numerical values, and are therefore especially suited to the processing of binary images. Morphological operations can also be applied to greyscale images whose light transfer functions are unknown, so that their absolute pixel values are of no or minor interest.

CONVOLUTIONAL NEURAL NETWORKS:
Convolutional neural networks sound like a weird combination of biology and math with a little computer science sprinkled in, but these networks have been some of the most influential innovations in the field of computer vision. 2012 was the first year that neural nets grew to prominence, as Alex Krizhevsky used them to win that year's ImageNet competition (basically, the annual Olympics of computer vision), dropping the classification error record from 26% to 15%, an astounding improvement at the time. Ever since then, a host of companies have been using deep learning at the core of their services. Facebook uses neural nets for its automatic tagging algorithms, Google for photo search, Pinterest for its home feed personalization, and Instagram for its search infrastructure.

RESULT:
Fig-6: Input image
Fig-7: Neural network architecture plot
Fig-8: Classification results
Fig-9: Final results

CONCLUSION:
Glaucoma is a kind of eye disease which damages the optic nerve and leads to vision loss. It is asymptomatic in the beginning but eventually leads to loss of vision if untreated. This project proposed a novel Disc-aware Ensemble Network (DENet) for automatic glaucoma screening, which integrates four deep streams at different levels and modules. The multiple levels and modules are beneficial for incorporating hierarchical representations, while the disc-aware constraint guarantees contextual information from the optic disc region for glaucoma screening.

REFERENCES:
[1] Y.-C. Tham, X. Li, T. Y. Wong, H. A. Quigley, T. Aung, and C.-Y. Cheng, "Global prevalence of glaucoma and projections of glaucoma burden through 2040: A systematic review and meta-analysis," Ophthalmology, vol. 121, no. 11, pp. 2081–2090, 2014.
[2] S. Y. Shen, T. Y. Wong, P. J. Foster, J.-L. Loo, M. Rosman, S.-C. Loon, W. L. Wong, S.-M. Saw, and T. Aung, "The prevalence and types of glaucoma in Malay people: The Singapore Malay Eye Study," Investigative Ophthalmology and Visual Science, vol. 49, no. 9, p. 3846, 2008.
[3] J. Jonas, W. Budde, and S. Panda-Jonas, "Ophthalmoscopic evaluation of the optic nerve head," Survey of Ophthalmology, vol. 43, no. 4, pp. 293–320, 1999.
[4] J. E. Morgan, N. J. L. Sheen, R. V. North, Y. Choong, and E. Ansari, "Digital imaging of the optic nerve head: monoscopic and stereoscopic analysis," British Journal of Ophthalmology, vol. 89, no. 7, pp. 879–884, 2005.

[5] H. Fu, Y. Xu, S. Lin, X. Zhang, D. Wong, J. Liu, and A. Frangi, "Segmentation and quantification for angle-closure glaucoma assessment in anterior segment OCT," IEEE Trans. Med. Imag., vol. 36, no. 9, pp. 1930–1938, 2017.
[6] J. B. Jonas, A. Bergua, P. Schmitz-Valckenberg, K. I. Papastathopoulos, and W. M. Budde, "Ranking of optic disc variables for detection of glaucomatous optic nerve damage," Invest. Ophthalmol. Vis. Sci., vol. 41, no. 7, pp. 1764–1773, 2000.
[7] M. D. Hancox, "Optic disc size, an important consideration in the glaucoma evaluation," Clinical Eye and Vision Care, vol. 11, no. 2, pp. 59–62, 1999.
[8] G. D. Joshi, J. Sivaswamy, and S. R. Krishnadas, "Optic disk and cup segmentation from monocular colour retinal images for glaucoma assessment," IEEE Trans. Med. Imag., vol. 30, no. 6, pp. 1192–1205, 2011.
[9] F. Yin, J. Liu, S. H. Ong, Y. Sun, D. W. K. Wong, N. M. Tan, C. Cheung, M. Baskaran, T. Aung, and T. Y. Wong, "Model-based optic nerve head segmentation on retinal fundus images," in Proc. EMBC, 2011, pp. 2626–2629.
[10] J. Cheng, J. Liu, Y. Xu, F. Yin, D. Wong, N. Tan, D. Tao, C.-Y. Cheng, T. Aung, and T. Wong, "Superpixel classification based optic disc and optic cup segmentation for glaucoma screening," IEEE Trans. Med. Imag., vol. 32, no. 6, pp. 1019–1032, 2013.
[11] J. Cheng, D. Tao, D. W. K. Wong, and J. Liu, "Quadratic divergence regularized SVM for optic disc segmentation," Biomed. Opt. Express, vol. 8, no. 5, pp. 2687–2696, 2017.
[12] H. Fu, J. Cheng, Y. Xu, D. Wong, J. Liu, and X. Cao, "Joint optic disc and cup segmentation based on multi-label deep network and polar transformation," IEEE Trans. Med. Imag., 2018.
[13] J. Cheng, F. Yin, D. W. K. Wong, D. Tao, and J. Liu, "Sparse dissimilarity-constrained coding for glaucoma screening," vol. 62, no. 5, pp. 1395–1403, 2015.
[14] R. Bock, J. Meier, L. G. Nyul, J. Hornegger, and G. Michelson, "Glaucoma risk index: Automated glaucoma detection from color fundus images," Medical Image Analysis, vol. 14, no. 3, pp. 471–481, 2010.
[15] S. Dua, U. Rajendra Acharya, P. Chowriappa, and S. Vinitha Sree, "Wavelet-based energy features for glaucomatous image classification," IEEE Transactions on Information Technology in Biomedicine, vol. 16, no. 1, pp. 80–87, 2012.

