PET Image Reconstruction Using Convolutional Neural Network and Generative Adversarial Network in Sinogram Domain


IGNAS KUPCIKEVICIUS
Biomedical Engineering & Informatics
School of Medicine and Health, Aalborg University
June 2019

Abstract

Introduction: Artifacts and noise in PET imaging are caused by multiple factors, including a lowered radiotracer dose and information loss in the form of missing pixels or missing projections. Recently, deep learning based algorithms have achieved promising results in the medical imaging field, including PET denoising and CT sinogram inpainting, especially using the Convolutional Neural Network (CNN) and Generative Adversarial Network (GAN) architectures. Aim: This article aims to compare CNN and GAN approaches for missing-data reconstruction in the PET sinogram domain. Methods: An end-to-end framework, from the PET image to the sinogram domain and back to the PET image domain, was proposed. The Radon transform was applied to convert PET images into sinograms. The first model was a CNN encoder-decoder based network with four skip connections. An effective strategy was applied to efficiently train on more heavily corrupted PET sinograms by loading previously trained weights. The second approach was the GAN network, with the generator designed similarly to the CNN encoder-decoder, and the discriminator containing four convolutional layers to classify generated sinograms as artificially generated or ground truth. The proposed framework ended by applying the filtered back projection algorithm to transform sinograms back to the PET image domain. Results: The results revealed that the GAN outperformed the CNN by a small margin. The average PSNR and SSIM scores across all five corruption levels were 41.44 and 0.977 for the CNN versus 42.34 and 0.983 for the GAN when predicting missing pixels. The differences in the two metrics between the CNN and GAN were larger when predicting missing projections: 40.13 and 0.866 versus 46.84 and 0.989. Additionally, the GAN performed noticeably better when 90% of the sinogram data were removed, resulting in a qualitatively sharper and more detailed reconstructed image compared to the CNN. Discussion: Differences in network architecture, chosen hyperparameters, and objective functions might be the reasons why the GAN performed better than the CNN. Even though the study had some limitations, promising results were achieved, which motivates further experiments.

Keywords: PET reconstruction, sinogram domain, CNN, GAN, FBP, Radon transform

1. INTRODUCTION

Positron Emission Tomography (PET) is one of the major imaging modalities, broadly used in hospitals for diagnostic and intervention purposes [1]. High-quality PET images are often used in diagnosing brain diseases and disorders, such as Alzheimer's disease and dementia [2]. Unfortunately, reconstructed PET images usually contain more noise and artifacts than magnetic resonance imaging (MRI) and computed tomography (CT) images, due to constraints such as the low number of coincidence events (photons that reach the detector) imposed by a reduced radiotracer dose [3]. Additionally, noise and artifacts in PET are caused by missing pixels or missing projections, which appear in high-resolution PET scanners [4, 5]. All these factors degrade PET image quality by making the resulting image blurred, unclear, and hard to interpret. Therefore, scientists and researchers seek different solutions to overcome these limitations and improve PET image quality. Moreover, the ability to use less data while still predicting high-quality medical images would allow less radiation to achieve the desired outcome; there is a trade-off between high-quality PET images and the restriction on available events that comes from minimizing exposure for health reasons [6]. If high-quality PET images could be predicted from lower data representations, a lower amount of radiotracer would need to be used.

A PET system detects pairs of gamma rays emitted from an injected radioactive tracer attached to a biologically active molecule. These coincident gamma pairs are recorded to generate a 3D volume using computer analysis [2]. According to the report "Biological Effects of Ionizing Radiation (BEIR VII)", the estimated risk of cancer increases by about 0.04% per single PET scan, which multiplies after repeated examinations. On the other hand, when the radiation dose is reduced, the quality of PET images degrades due to the lower number of detected photons [1], which negatively affects the diagnostic capabilities of PET. To address radiotracer dose related issues, Shi et al. [1] used a regression forest-based framework to predict normal-dose PET images from low-dose images. The study showed promising results, as the mean standardized uptake value of the predicted standard-dose images was close to the ground truth, and proved that a machine learning approach can be applied successfully to PET images. However, it also had limitations: images were predicted voxelwise, without accounting for the relationship with neighboring voxels, and the study was conducted on a small number of subjects.

Recently, PET systems started using silicon photomultipliers (SiPMs) as detectors for photon registration. According to the literature [5], there is a high probability that some of the SiPMs may not function properly, resulting in a loss of counts in pixels. Such "dead" pixels are pixels with missing data values, causing degraded quality of the reconstructed PET images. Additionally, missing pixels may arise due to problems in the processing electronics [5]. The study by Jong et al. [4] investigated methods to compensate for missing data caused by the gaps between high-resolution PET scanner detectors and faulty electronic parts. Two approaches were applied: a bilinear interpolation approach and a model-based approach. The model-based approach outperformed bilinear interpolation, but at the price of a long processing time [4].

A variety of studies have used commonly known algorithms for improving PET image quality. Firstly, Gaussian filtering and bilateral filtering were used to improve PET post-reconstruction; however, such filtering resulted in reduced edge preservation and blurred images [7]. Furthermore, the block-matching 3D (BM3D) [8] approach was applied to PET images and showed structural improvements [9]. However, that study used simulated rather than real PET data, and BM3D is not effective on natural images due to the lack of highly correlated matching blocks [10]. Another attempt to improve medical image quality is to apply filtering techniques in the sinogram domain instead of on reconstructed images. A sinogram consists of angle-based histograms; this format is usually the raw data format in PET systems [2]. In theory, when saving PET data in the sinogram domain, every detected event is stored using the angle and the offset of the scanned tissue point, which represents a single event.
Multiple sorted events create the sinogram (Figure 1), which is usually a smaller data format than the reconstructed image.

Figure 1: Photon annihilation events in a PET detector ring.

A comparison study by Yu and Muhammed [11] suggested that the signal-to-noise ratio (SNR) improves with pre-reconstruction denoising, where PET sinograms are denoised before applying a reconstruction algorithm, compared to post-reconstruction algorithms. However, the study used only classical filtering approaches, which tend to induce artifacts, and was not extended to investigate the performance of more recent neural network based approaches.

Finally, convolutional neural networks (CNNs) have recently become a state-of-the-art technique in image analysis [12]. A study by An et al. [13] proposed a deep CNN for full-dose PET image reconstruction based on local patches from low-dose PET. A CNN has also been applied to CT sinograms to predict the residuals, i.e., the differences between noisy and ground truth images. The residuals were subtracted from the noisy sinograms, resulting in higher quality reconstructed CT images [12]. Even though the study showed promising results on CT, it is still unknown how sinogram based reconstruction using a CNN would perform on PET images.

Another widespread machine learning approach, the Generative Adversarial Network (GAN), has recently received considerable attention [14, 15, 6]. GANs are generative models with two main components: a generator and a discriminator. The generator learns to replicate realistic input images, while the discriminator tries to distinguish between generated and real images. A study by Zhao et al. [16] suggested a sinogram inpainting network which uses a GAN to solve the limited-angle CT reconstruction problem. The study reported promising results in CT sinogram inpainting, but again there was no indication of how such an approach could be applied to PET images. Therefore, it is still unknown how CNNs and GANs would perform on PET sinograms. This is the key interest of this study.

This study aims to use CNN and GAN approaches for corrupted PET image reconstruction in the sinogram domain. Additionally, to observe the dependency of model performance on the available image data, two data representations with multiple levels of corruption are introduced.

2. METHODS

Data acquisition

The publicly available dataset was acquired from the Cancer Imaging Archive (TCIA: http://www.cancerimagingarchive.net), an archive of medical images of cancer created by the National Cancer Institute (NCI) [17]. The Soft-tissue-Sarcoma dataset was chosen from TCIA because, based on a thorough search of the relevant datasets, it yielded a large number of tested subjects, giving the high number of training samples necessary to train a deep neural network [18]. The dataset consists of 51 patients with histologically proven soft-tissue sarcomas of the lower extremities in PET/CT/MRI image formats [17, 19].

For this study, PET data from all 51 subjects were used, containing DICOM (Digital Imaging and Communications in Medicine) images of 128 x 128 pixel resolution, in total 13,417 2D images. The fludeoxyglucose (FDG) radiotracer was used when performing PET scans on a PET/CT scanner (Discovery ST, GE Healthcare). A median of 420 MBq of FDG was injected intravenously, followed by 60 min of body imaging acquisition [17]. Additionally, attenuation correction was performed, which reduces the severe artifacts induced by the high number of photons lost during the procedure [17].

From the 51 subjects, five random patients were put aside as the unseen test dataset, which corresponds to approximately 10% of the entire dataset. This test set was used only for the final results. The remaining 46 subjects correspond to 12,082 PET images, which were divided into training and validation datasets with a ratio of 70% to 30%, respectively. The training and validation datasets were used during the training phase and for hyperparameter tuning. After each training phase, qualitative results were inspected to evaluate the reconstructed images in terms of visible structural details, artifacts, and blur. Qualitative results represent the real-world scenario, since this is how doctors perceive image information in the working environment. Any image distortion increases the risk of misdiagnosis.
Thus, the quality of reconstructed images has a direct impact on patient treatment [20].

Radon transform

The TCIA data was in reconstructed PET image format. Therefore, the Radon transform was applied to convert the data into the raw sinogram domain. The Radon transform of an image described by the function f(x, y) is defined as the combination of the line integrals through f(x, y) along the vertical and horizontal projections, i.e., the y and x axes, respectively [21]. Mathematically, the Radon transform is defined as:

\[ \mathcal{R}f(\theta, s) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x, y)\, \delta(x \cos\theta + y \sin\theta - s)\, dx\, dy \]

where θ is the angle of the line and s is the perpendicular offset of the line [21].

In this study, the data were transformed using the skimage library in the Python coding environment. Skimage allows two parameters to be defined before the transformation: the projection angles theta, which were left at the default of 180 degrees, and circle, which when set to True produces a uniform sinogram of 128 projections by 128 detectors, corresponding to the size of the original image. This property was helpful since the CNN requires a uniform dataset (Figure 2).

Figure 2: Radon transform applied to a PET image (a). The resulting sinogram (b) is used for further preprocessing.

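For illustration, this conversion step can be sketched with skimage as below. This is a minimal sketch: the function name is illustrative, and the choice of 128 projection angles is an assumption made to match the 128 x 128 sinogram size described above (skimage's default theta would instead produce 180 projection columns).

```python
import numpy as np
from skimage.transform import radon

def image_to_sinogram(image: np.ndarray) -> np.ndarray:
    """Forward-project a square PET slice into the sinogram domain."""
    # 128 projection angles over 0-180 degrees, so the sinogram matches the
    # 128 x 128 size reported in the text (an assumption; skimage's default
    # theta would yield 180 projection columns).
    theta = np.linspace(0.0, 180.0, image.shape[0], endpoint=False)
    # circle=True assumes the object fits inside the inscribed circle,
    # keeping the number of detector bins equal to the image width.
    return radon(image, theta=theta, circle=True)

# Usage: pet_slice is a (128, 128) array taken from one DICOM slice.
# sinogram = image_to_sinogram(pet_slice)
```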
Two data representations were created to address missing-data problems in the PET imaging domain: missing pixels and missing projections. Missing pixels were simulated by removing random data points from the PET sinogram. The amount of removed data was increased over 5 levels: 10%, 30%, 50%, 70%, and 90% (Figure 3a), both to reflect multiple scenarios of missing patterns and to investigate how far the CNN and GAN can go in properly compensating for missing pixel values. The second data representation focused on simulating missing projections, which appear in PET sinograms due to the gaps between detector heads in the scanner or because of faulty detectors [4]. It was also presented at 5 different levels (Figure 3b).

Figure 3: Representations of corrupted sinograms with (a) simulated missing pixels and (b) missing projections at 5 different levels.

Intensity normalisation

Applying the Radon transform converted all of the data into the sinogram domain. Further pre-processing was then applied to prepare the data for the neural networks. The data were rescaled so that all values fall within the range 0 to 1. Such intensity normalization helps to keep the weights relatively small during the training phase; large weight values usually result in poor performance and an unstable model, so it is critical to rescale the input and output data before presenting them to the network [22]. This was achieved by applying the following equation:

\[ Z_i = \frac{Y_i - Y_{\min}}{Y_{\max} - Y_{\min}} \]

where Z_i is the rescaled data and Y_i the original data. Both the corruption and the rescaling steps are sketched below.
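A minimal sketch of these preprocessing steps follows. The zero fill value for removed data and the column-wise removal of whole projections are assumptions, since the text does not specify them; the function names are illustrative.

```python
import numpy as np

def min_max_normalise(y: np.ndarray) -> np.ndarray:
    """Rescale a sinogram to [0, 1] (the Z_i equation above)."""
    return (y - y.min()) / (y.max() - y.min())

def remove_pixels(sinogram: np.ndarray, fraction: float, rng) -> np.ndarray:
    """Simulate 'dead' pixels by zeroing a random fraction of data points."""
    mask = rng.random(sinogram.shape) < fraction
    corrupted = sinogram.copy()
    corrupted[mask] = 0.0  # zero fill is an assumption
    return corrupted

def remove_projections(sinogram: np.ndarray, fraction: float, rng) -> np.ndarray:
    """Simulate missing projections by zeroing whole angle columns."""
    n_angles = sinogram.shape[1]
    drop = rng.choice(n_angles, size=int(fraction * n_angles), replace=False)
    corrupted = sinogram.copy()
    corrupted[:, drop] = 0.0
    return corrupted

rng = np.random.default_rng(0)
for level in (0.1, 0.3, 0.5, 0.7, 0.9):  # the five corruption levels
    # `sinogram` comes from the Radon transform sketch above.
    x = remove_pixels(min_max_normalise(sinogram), level, rng)
```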
Model architecture

In this paper, two different architectures were used and compared for lost-data reconstruction in the PET sinogram domain. Most of the hyperparameter tuning was performed manually, while some hyperparameters were chosen according to the literature. The rectified linear unit (ReLU) was used as the activation function throughout the proposed networks, since it is mathematically simple yet effective and is recommended in the literature [18]. Different optimizers were tested, and the Adam solver [23] with a learning rate of 0.003 showed the fastest convergence and the best results for both the CNN and GAN models.

CNN Architecture

The first architecture is a convolutional neural network based autoencoder inspired by the U-net CNN for image segmentation [24]. The proposed network has four convolutional layers with 32, 48, 64, and 64 filters, respectively. These layers encode the image data into a smaller form that contains the automatically learned most important features. The compressed data are then fed through four deconvolutional layers with 64, 64, 48, and 32 filters, respectively; this deconvolutional part of the network is called the decoder. It reconstructs the compressed image data back into full-size (128 x 128) images. A 3 x 3 filter size was used throughout the entire network, with a stride of one, based on similar studies [24, 25]. Additionally, four skip connections were added, which shuttle low-level features to the high-level features [16]. Such direct connections have been shown to strengthen feature propagation, encourage feature reuse, and lower the computational cost by reducing the number of parameters [26].

In the CNN model, the training and validation losses were computed after every batch between the predicted and ground truth values using the mean squared error (MSE), also known as the L2 loss. MSE is one of the most commonly used objective functions and is available by default in the Keras library; it was used in the CNN model since it is a de facto standard objective function in neural networks [27]. A minimal sketch of this architecture is given below.

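The text specifies the filter counts, 3 x 3 kernels, stride of one, ReLU activations, four skip connections, the MSE loss, and the Adam optimizer with a 0.003 learning rate; the pooling and upsampling layers and the sigmoid output in the Keras sketch below are assumptions added to make the sketch complete.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(input_shape=(128, 128, 1)):
    inputs = keras.Input(shape=input_shape)
    skips, x = [], inputs
    # Encoder: 32, 48, 64, 64 filters; 2x downsampling per stage is assumed.
    for filters in (32, 48, 64, 64):
        x = layers.Conv2D(filters, 3, strides=1, padding="same",
                          activation="relu")(x)
        skips.append(x)                      # saved for the skip connections
        x = layers.MaxPooling2D(2)(x)
    # Decoder: 64, 64, 48, 32 filters, each preceded by 2x upsampling.
    for filters, skip in zip((64, 64, 48, 32), reversed(skips)):
        x = layers.UpSampling2D(2)(x)
        x = layers.Concatenate()([x, skip])  # one of the four skip connections
        x = layers.Conv2D(filters, 3, strides=1, padding="same",
                          activation="relu")(x)
    # Sigmoid output matches the [0, 1] normalised sinograms (an assumption).
    outputs = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.003),
                  loss="mse")
    return model

model = build_cnn()
# Weight-transfer strategy described next: reuse weights trained on the 10%
# corruption level when training higher levels, e.g. (hypothetical filename)
# model.load_weights("cnn_missing_pixels_10.h5")
```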
Two CNN models were trained from scratch: one with the data representation of 10% missing pixels and the second with a similar amount of missing projections. The models were trained for 100 epochs, until the MSE error stopped decreasing, which indicated that the models had started to overfit. The weights from these models were then loaded to retrain on the data representations with higher levels of corruption. This time, approximately 30 epochs per training run were enough before the models started to overfit. Thus, using previously trained weights accelerated training by achieving faster convergence while maintaining the high quality of the reconstructed PET sinograms. A similar strategy has been reported where pretrained 2D weights were used for 3D model training [28].

GAN Architecture

In this paper, the proposed GAN was based on the 'pix2pix' network (Figure 4) [29], since it was previously applied successfully to the image inpainting problem, where a large removed portion of a sinogram was reconstructed with promising results [15]. Unlike conventional neural networks, a GAN consists of two models: a generator and a discriminator. The GAN automatically learns a goal-specific loss function, which classifies the output as real (ground truth) or 'fake' (artificially generated), while at the same time training the generative model to minimize the loss error (Isola et al. [29]). Blurred images are ranked as 'fake'; therefore, the GAN adjusts the generative model weights to produce more realistic PET images. This working principle is the GAN's biggest advantage over a regular CNN with its pre-defined loss functions, such as the mean squared error (MSE) [28]. For the GAN generator model, the previously described CNN encoder-decoder architecture with skip connections was used. The discriminator consists of 4 convolutional layers with 64, 128, 256, and 512 filters, respectively. The final output layer was a convolutional layer with one filter, followed by a sigmoid function.

Figure 4: Training a GAN by passing corrupted images (x) through the generator (G) to synthesize images (G(x)) indistinguishable from the real images (Y). The discriminator (D) learns to classify between real and fake images, while the generator tries to fool the discriminator.

In the GAN model, the objective function was similar to the one used in the 'pix2pix' paper [29], mathematically described as a combination of the GAN objective and the L1 loss, also known as the mean absolute error:

\[ G^{*} = \arg\min_{G}\max_{D}\, \mathcal{L}_{GAN}(G, D) + \lambda\, \mathcal{L}_{L1}(G) \]

where G* is the final objective, G the generator, D the discriminator, \(\mathcal{L}_{GAN}(G, D)\) the adversarial objective of the generator and discriminator, and \(\mathcal{L}_{L1}(G)\) the L1 loss. G tries to minimize the objective function against an adversarial D that acts in opposition and maximizes it [29].

Additionally, the discriminator was designed to penalize predictions on defined 4 x 4 pixel patches instead of the full image. Such a strategy was suggested in the 'pix2pix' paper [29], which demonstrated that patches smaller than the full-size image produced a high-quality outcome, had fewer parameters, and made training faster. A condensed training-step sketch is given below.

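The adversarial training can be sketched as follows, assuming the generator is the encoder-decoder above. Conditioning the discriminator on the corrupted input, the strided 4 x 4 kernels, and the weighting λ = 100 are assumptions carried over from pix2pix [29], not details stated in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model, Input

def build_discriminator(shape=(128, 128, 1)):
    # Four convolutional layers with 64, 128, 256, 512 filters, ending in a
    # single-filter convolution with a sigmoid, as described in the text.
    corrupted, target = Input(shape), Input(shape)
    x = layers.Concatenate()([corrupted, target])  # pix2pix-style conditioning
    for filters in (64, 128, 256, 512):
        x = layers.Conv2D(filters, 4, strides=2, padding="same",
                          activation="relu")(x)
    patch = layers.Conv2D(1, 4, padding="same", activation="sigmoid")(x)
    return Model([corrupted, target], patch)  # patch-wise real/fake scores

bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(3e-3)
d_opt = tf.keras.optimizers.Adam(3e-3)

@tf.function
def train_step(x, y, generator, discriminator, lam=100.0):
    # One adversarial update: D separates real from generated sinograms,
    # while G minimizes the GAN loss plus the lambda-weighted L1 term from
    # the objective above.
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake = generator(x, training=True)
        d_real = discriminator([x, y], training=True)
        d_fake = discriminator([x, fake], training=True)
        g_loss = (bce(tf.ones_like(d_fake), d_fake)
                  + lam * tf.reduce_mean(tf.abs(y - fake)))
        d_loss = (bce(tf.ones_like(d_real), d_real)
                  + bce(tf.zeros_like(d_fake), d_fake))
    g_opt.apply_gradients(zip(
        g_tape.gradient(g_loss, generator.trainable_variables),
        generator.trainable_variables))
    d_opt.apply_gradients(zip(
        d_tape.gradient(d_loss, discriminator.trainable_variables),
        discriminator.trainable_variables))
```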
Inverse Radon Transform

One of the easiest ways to reconstruct an image from the sinogram domain is by using filtered back projection (FBP). Plain backprojection induces a blurring effect in image space, so additional filtering is used to overcome this limitation and correct the blurring effect to some extent [30]. In this study, the different available filters were tested to see which performed best on the TCIA dataset. The ramp filter showed noticeably better results and was used when transforming the sinograms back into the PET image domain.

Reconstruction quality was measured by the peak signal-to-noise ratio (PSNR), expressed in decibels (dB) [10]. PSNR can be defined using the mean squared error (MSE) measured between the ground truth and corrupted representations:

\[ MSE = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left[ I(i, j) - K(i, j) \right]^{2}, \qquad PSNR = 10 \log_{10}\!\left( \frac{MAX_{I}^{2}}{MSE} \right) \]

where I is the ground truth image, K the compared image, m and n the image dimensions, and MAX_I the maximum possible pixel value.

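Under the same assumptions as the earlier sketches, the final reconstruction and the quality metrics reported in the abstract (PSNR and SSIM) can be computed with skimage; filter_name="ramp" selects the ramp filter in recent skimage versions, and the variable names are illustrative placeholders.

```python
import numpy as np
from skimage.transform import iradon
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def sinogram_to_image(sinogram: np.ndarray) -> np.ndarray:
    """Filtered back projection with the ramp filter chosen in the text."""
    theta = np.linspace(0.0, 180.0, sinogram.shape[1], endpoint=False)
    return iradon(sinogram, theta=theta, circle=True, filter_name="ramp")

# predicted_sinogram and ground_truth are placeholders for arrays in [0, 1].
reconstruction = sinogram_to_image(predicted_sinogram)
psnr = peak_signal_noise_ratio(ground_truth, reconstruction, data_range=1.0)
ssim = structural_similarity(ground_truth, reconstruction, data_range=1.0)
```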