Unsupervised Spatial–Spectral Feature Learning by 3D Convolutional Autoencoder for Hyperspectral Classification


IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, VOL. 57, NO. 9, SEPTEMBER 2019

Shaohui Mei, Member, IEEE, Jingyu Ji, Yunhao Geng, Zhi Zhang, Xu Li, Member, IEEE, and Qian Du, Fellow, IEEE

Abstract—Feature learning technologies using convolutional neural networks (CNNs) have shown superior performance over traditional hand-crafted feature extraction algorithms. However, a large number of labeled samples are generally required for a CNN to learn effective features under a classification task, and such samples are difficult to obtain for hyperspectral remote sensing images. Therefore, in this paper, an unsupervised spatial–spectral feature learning strategy is proposed for hyperspectral images using a 3-dimensional (3D) convolutional autoencoder (3D-CAE). The proposed 3D-CAE consists of 3D or elementwise operations only, such as 3D convolution, 3D pooling, and 3D batch normalization, to maximally explore spatial–spectral structure information for feature extraction. A companion 3D convolutional decoder network is also designed to reconstruct the input patterns to the proposed 3D-CAE, by which all the parameters involved in the network can be trained without labeled training samples. As a result, effective features are learned in an unsupervised mode in which label information of pixels is not required. Experimental results on several benchmark hyperspectral data sets have demonstrated that the proposed 3D-CAE is very effective in extracting spatial–spectral features and outperforms not only traditional unsupervised feature extraction algorithms but also many supervised feature extraction algorithms in classification applications.

Index Terms—Convolutional neural network (CNN), feature learning, hyperspectral, spatial–spectral.

Manuscript received December 12, 2018; revised February 24, 2019; accepted March 28, 2019. Date of publication April 22, 2019; date of current version August 27, 2019. This work was supported in part by the National Natural Science Foundation of China under Grants 61671383 and 61301235, in part by the Fundamental Research Funds for the Central Universities under Grant 3102018AX001, in part by the Natural Science Foundation of Shaanxi Province under Grant 2018JM6005, and in part by the China Postdoctoral Science Foundation under Grant 2014M550872. (Corresponding author: Shaohui Mei.) S. Mei, J. Ji, Y. Geng, and X. Li are with the School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710129, China (e-mail: meish@nwpu.edu.cn). Z. Zhang is with the State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing 100101, China. Q. Du is with the Department of Electrical and Computer Engineering and the Geosystems Research Institute, Mississippi State University, Starkville, MS 39762 USA. Digital Object Identifier 10.1109/TGRS.2019.2908756

I. INTRODUCTION

HYPERSPECTRAL imaging technology, which collects electromagnetic spectral information in hundreds of contiguous narrow bands, can identify ground objects according to their unique spectral characteristics.
Taking advantage of such rich spectral information, the hyperspectral image classification task, which assigns all the pixels in an image to different categories, has been widely used in many applications, such as land-cover mapping, mineral exploration, and water pollution detection [1], [2]. However, raw hyperspectral images always suffer from spectral variations caused by sensor noise and by changes in illumination, environmental, atmospheric, and temporal conditions [3]. Such within-class variation significantly degrades classification performance [4]. Therefore, feature extraction is usually performed as a preprocessing step to enhance the separability between classes in hyperspectral classification tasks.

During the past decades, many strategies have been proposed to extract effective features prior to classification. According to whether labeled information is used, feature extraction methods can be classified into two categories: supervised and unsupervised. Supervised feature extraction methods require samples with known class labels to enhance the discriminability among different classes; linear discriminant analysis (LDA) [5] and nonparametric weighted feature extraction (NWFE) [6] are two typical representatives. Many variants of these two methods have also been proposed in recent years, such as modified Fisher's LDA [7], regularized LDA [8], modified NWFE using spatial and spectral information [9], and kernel NWFE [10].

Unsupervised feature extraction algorithms automatically extract features from raw data without labeled information. One of the best-known unsupervised methods is principal component analysis (PCA), which has been widely used for hyperspectral image processing [11]. A tensorial version of PCA has also been proposed to extract spectral–spatial features of hyperspectral images [12]. Many manifold learning-based methods have been applied to reduce the dimensionality of hyperspectral images [13], such as locally linear embedding [14], Laplacian eigenmaps [15], and local tangent space alignment [16]. By considering spatial information around the data points, these local methods can preserve local spatial neighborhoods and detect the manifold embedded in a high-dimensional feature space. Their linear approximations, such as neighborhood preserving embedding (NPE) [17], locality preserving projection (LPP) [18], and linear local tangent space alignment (LLTSA) [19], were also

applied to feature extraction of hyperspectral images [20]. In addition, graph-based discriminant analysis methods have been proposed, e.g., graph-based discriminant analysis with spectral similarity (GDA-SS) [21] and sparse and low-rank graph-based discriminant analysis (SLGDA) [22]. A Laplacian-regularized collaborative graph-based discriminant analysis (LapCGDA) framework was also proposed in [23], in which a Laplacian graph of the data manifold is incorporated into CGDA [24].

The aforementioned feature extraction algorithms, namely, PCA, manifold learning, LDA, and so on, extract a fixed pattern of features using a small number of adjustable parameters from the original data, relying mainly on human ingenuity and prior knowledge [25], [26]. Different from these hand-crafted feature extraction algorithms [27], [28], feature learning methods can automatically learn effective features from the data itself. Owing to rapid developments in deep learning, feature learning by a deep neural network (DNN) has been developed to learn effective features from huge amounts of data in many applications, such as image classification [29]–[31] and object detection [32], [33]. As a typical deep learning technique for feature learning, a convolutional neural network (CNN) often contains millions of parameters to be learned, e.g., VGG16 [34]. When these parameters are well optimized under a classification task, both feature quality and classification performance can be enhanced.

Recently, deep learning-based techniques have also been applied to hyperspectral image processing. Hu et al. [35] first used a CNN constructed by a spectral convolution operator for hyperspectral image classification (denoted as 1-dimensional (1D)-CNN in this paper). Makantasis et al. [36] integrated spatial–spectral inputs into a CNN (randomized PCA-CNN) for classification. Li et al. [37] proposed to use a CNN to classify pixel pairs constructed from local neighborhoods, assigning class labels by majority voting. Liu et al. [38] proposed a Siamese CNN (S-CNN) that adopts a margin ranking loss function to guarantee low intraclass and high interclass variability. As for feature learning of hyperspectral images, Mei et al. [39] first proposed a sensor-specific spatial–spectral feature learning concept using CNN techniques, including the ability of feature extraction, transferring, and fine-tuning to different images acquired by the same sensor. Zhao and Du [40] fused the spectral features extracted by a balanced local discriminant embedding algorithm and the spatial features learned by a CNN for hyperspectral classification. Although these deep learning-based methods have achieved satisfying performance in feature learning and classification, a large number of labeled samples is required to train these DNNs in a supervised manner under a classification task. Due to the difficulty of obtaining labeled training samples for hyperspectral images, it is difficult to increase the accuracy of such DNN-based supervised feature learning methods.

Unsupervised feature learning using DNNs has also gained much attention. For example, the highly efficient enforcing population and lifetime sparsity (EPLS) algorithm [41] is used to train DNNs in a greedy layerwise fashion for unsupervised learning of sparse features of hyperspectral images [42].
An autoencoder (AE) is an artificial neural network used for learning a valid encoding of data in an unsupervised manner [26], [43]. It learns a representation of input samples by reconstructing their input patterns with a minimum reconstruction error. A deep AE (DAE) was first used in hyperspectral image classification and feature learning by Chen et al. [44], in which PCA was adopted for dimension reduction in the spectral dimension and a flattening step was then used to arrange the "neighbor region" as a 1D vector to integrate spatial–spectral information. The stacked sparse AE (SSAE) first used AEs for sparse spectral feature learning and multiscale spatial feature learning separately and then fused these two kinds of features for classification [45]. In these two algorithms, the spatial structure information may be flattened away when PCA is used for dimensionality reduction. An improved version of the AE, namely, the spatial-updated DAE (SDAE), was proposed by Ma et al. [46], in which sample similarity is considered by adding a regularization term to the energy function and features are updated by integrating contextual information. The 3D convolution has also been adopted in an AE to explore the spatial context for feature extraction [47]. Although these AE-based techniques extract effective spatial–spectral features, spatial information is not sufficiently explored in the network.

In this paper, by extending our preliminary work in [47], unsupervised feature learning by a 3D convolutional AE (3D-CAE) is proposed, in which only 3D or elementwise operations, such as 3D convolution, 3D pooling, 3D batch normalization, and the parametric rectified linear unit (PReLU) [48], are used to maximally explore spatial structure information for spatial–spectral feature extraction. It should be noted that the proposed network is trained by developing a companion 3D convolutional decoder network to reconstruct the input to the proposed 3D-CAE, by which labeled samples are not required in the training process. As a result, spatial–spectral structure information can be encoded, and effective spatial–spectral features are learned in an unsupervised mode. Finally, extensive experiments on three benchmark hyperspectral data sets are conducted to demonstrate the effectiveness of the proposed 3D-CAE for unsupervised spatial–spectral feature learning.

In summary, the main contributions of this paper are twofold.

1) Unsupervised feature learning is conducted using the 3D-CAE, by which labeled samples are not required in the feature learning process. Instead, the input samples to the proposed 3D-CAE are used as ground truth to train the parameters involved in the 3D-CAE. Such unsupervised feature learning is especially useful for hyperspectral applications, where training samples are rare and difficult to obtain.

2) The structure information in hyperspectral images is preserved by constructing an AE using 3D and elementwise operations only, e.g., 3D convolution, 3D pooling, and 3D batch normalization. Thus, the proposed 3D-CAE is very efficient in learning spatial–spectral features, since all the flattening operations (e.g., fully connected layers) in traditional AE-based networks [44], [46], [47] are excluded.

The remainder of this paper is organized as follows. Section II presents the proposed 3D-CAE for unsupervised spatial–spectral feature learning. Section III reports and discusses experimental results over three benchmark hyperspectral data sets. Finally, conclusions are drawn in Section IV.

II. PROPOSED 3D-CAE FOR UNSUPERVISED SPATIAL–SPECTRAL FEATURE LEARNING

A. 3D Operations for CAE

A hyperspectral image is represented by a 3D cube, which contains a 2-dimensional spatial context and 1D spectral information. In order to explore both the spatial context and spectral discrimination simultaneously, only 3D or elementwise operations are adopted in the proposed 3D-CAE for hyperspectral images, including 3D convolution, 3D deconvolution, 3D pooling, 3D batch normalization, and the elementwise PReLU function.

Fig. 1. Illustration of 3D convolution.

1) 3D Convolution: As shown in Fig. 1, for an input $I \in \mathbb{R}^3$ of size $D_x^{(I)} \times D_y^{(I)} \times D_z^{(I)}$, when a 3D convolution is applied using a kernel $W \in \mathbb{R}^3$ of size $D_x^{(W)} \times D_y^{(W)} \times D_z^{(W)}$ (with $D_x^{(W)} \le D_x^{(I)}$, $D_y^{(W)} \le D_y^{(I)}$, and $D_z^{(W)} \le D_z^{(I)}$), its output is defined as

$$O_{x,y,z} = b + \sum_{p=0}^{D_x^{(W)}-1} \sum_{q=0}^{D_y^{(W)}-1} \sum_{r=0}^{D_z^{(W)}-1} W_{p,q,r}\, I_{x \cdot s_x + p,\; y \cdot s_y + q,\; z \cdot s_z + r} \qquad (1)$$

for $x = 1, 2, \ldots, D_x^O$, $y = 1, 2, \ldots, D_y^O$, and $z = 1, 2, \ldots, D_z^O$, where $O_{x,y,z}$ denotes the $(x, y, z)$th element of the output $O \in \mathbb{R}^3$, $(s_x, s_y, s_z)$ represents the stride in the three dimensions, $b$ denotes the bias, and $D_x^O$, $D_y^O$, and $D_z^O$ represent the sizes of the output $O$, defined as

$$D_x^O = \left\lfloor \frac{D_x^{(I)} - D_x^{(W)}}{s_x} \right\rfloor + 1, \quad D_y^O = \left\lfloor \frac{D_y^{(I)} - D_y^{(W)}}{s_y} \right\rfloor + 1, \quad D_z^O = \left\lfloor \frac{D_z^{(I)} - D_z^{(W)}}{s_z} \right\rfloor + 1 \qquad (2)$$

where $\lfloor \cdot \rfloor$ represents rounding toward zero.

When such 3D convolution is applied to a hyperspectral image cube, spatial–spectral features can be extracted, since the 3D convolution is conducted in both the spatial and spectral domains simultaneously. In general, dozens of 3D convolution kernels are stacked in just one layer to explore different kinds of spatial–spectral features in a local cube, producing dozens of feature cubes. When several such 3D convolution layers are connected sequentially, the 3D convolution must be conducted with an extra fixed dimension to handle the inputs from multiple feature cubes simultaneously. Therefore, in the proposed feature learning of hyperspectral images, the 3D convolution kernel is defined as $W \in \mathbb{R}^4$ of size $D_x \times D_y \times D_z \times D$, where the extra fourth dimension $D$ represents the number of 3D feature cubes input to the convolutional layer. Suppose the input to the $i$th convolution layer is defined as $I_i \in \mathbb{R}^4$ of size $D_x^{(I_i)} \times D_y^{(I_i)} \times D_z^{(I_i)} \times D_i$. Without loss of generality, if the original hyperspectral image cube is fed as input, $D = 1$. As a result, the 3D convolution in the $i$th convolution layer is represented as

$$O_{i,j}^{x,y,z} = b_{i,j} + \sum_{k=0}^{D_i - 1} \sum_{p=0}^{D_x - 1} \sum_{q=0}^{D_y - 1} \sum_{r=0}^{D_z - 1} W_{j,k}^{p,q,r}\, I_{i,k}^{x \cdot s_x + p,\; y \cdot s_y + q,\; z \cdot s_z + r} \qquad (3)$$

where the subscripts $i$ and $j$ index the convolutional layer and the convolutional kernels in a layer, respectively. Obviously, the structure of the input is not flattened in such 3D convolution.
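To make the indexing in (1)–(3) concrete, here is a minimal NumPy sketch (not from the paper; the array sizes and stride are illustrative) of the single-kernel strided 3D convolution in (1), with the output size checked against (2). Note that the code uses 0-based output indices, whereas (1) enumerates outputs from 1.

```python
import numpy as np

def conv3d(I, W, b=0.0, strides=(1, 1, 1)):
    """Valid strided 3D convolution of one kernel W over cube I, per (1)."""
    sx, sy, sz = strides
    # Output size per (2): floor((D_I - D_W) / s) + 1 in each dimension.
    out_shape = tuple((di - dw) // s + 1
                      for di, dw, s in zip(I.shape, W.shape, strides))
    O = np.empty(out_shape)
    for x in range(out_shape[0]):
        for y in range(out_shape[1]):
            for z in range(out_shape[2]):
                patch = I[x*sx : x*sx + W.shape[0],
                          y*sy : y*sy + W.shape[1],
                          z*sz : z*sz + W.shape[2]]
                O[x, y, z] = b + np.sum(W * patch)  # the triple sum in (1)
    return O

# A 5x5 spatial neighborhood with 200 bands, convolved by a 3x3x8 kernel
# with spectral stride 2 (all sizes here are made-up examples).
I = np.random.rand(5, 5, 200)
W = np.random.rand(3, 3, 8)
print(conv3d(I, W, strides=(1, 1, 2)).shape)  # (3, 3, 97), as (2) predicts
```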
Fig. 2. Comparison of convolution and deconvolution.

2) 3D Deconvolution: Deconvolution, also known as transposed convolution, can be viewed as the reverse of a convolutional layer. Being capable of mapping an input from a low-dimensional space to a high-dimensional space, it is often adopted in CNNs for image or voxel reconstruction in many applications, such as image semantic segmentation [49], style transfer [50], and image inpainting [51]. As shown in Fig. 2, deconvolution is realized by a padding process followed by convolution: the input is first padded with zeros to generate an intermediate input that is larger than the output, and convolution is then applied to this intermediate input to generate the output. Similarly, in 3D deconvolution, the input is padded in all three dimensions [i.e., x, y, and z in (3)] before the 3D convolution is applied.
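To see how this padding-then-convolving view maps onto a standard framework call, the following Keras sketch (a framework illustration; the shapes, filter count, and kernel size are assumptions, not the paper's configuration) upsamples a feature cube with a transposed 3D convolution. With "same" padding, a stride of 2 along the spectral axis doubles that dimension, roughly inverting a stride-2 Conv3D:

```python
import tensorflow as tf

# A small feature cube: batch of 1, 3x3 spatial, 24 spectral positions,
# 16 feature channels (Keras 3D layers expect channels-last 5D input).
x = tf.random.normal((1, 3, 3, 24, 16))

deconv = tf.keras.layers.Conv3DTranspose(
    filters=8, kernel_size=(3, 3, 8), strides=(1, 1, 2), padding="same")
print(deconv(x).shape)  # (1, 3, 3, 48, 8): spectral axis upsampled 24 -> 48
```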

3) 3D Batch Normalization: Assume that $X_i$ ($i = 1, 2, \ldots, M_i$) $\in \mathbb{R}^3$ is a minibatch of inputs and that the output of 3D batch normalization, denoted as $Y_i$ ($i = 1, 2, \ldots, M_i$) $\in \mathbb{R}^3$, is of the same size as $X_i$, where $M_i$ denotes the number of feature maps in the minibatch. The 3D batch normalization is represented as

$$Y_i = \frac{X_i - \operatorname{mean}_{(M_i)}[X]}{\sqrt{\operatorname{Var}_{(M_i)}[X] + \epsilon}} \cdot \gamma + \beta, \quad i = 1, 2, \ldots, M_i \qquad (4)$$

where $\operatorname{mean}_{(M_i)}[X]$ and $\operatorname{Var}_{(M_i)}[X]$, respectively, represent the mean and variance of $X_i$, calculated in each of the three dimensions over a minibatch, $\gamma$ and $\beta$ are learnable parameters, and $\epsilon$ is set to $10^{-5}$ by default. During training, this layer stores the mean and variance of all batches, and the average of these training statistics is used for normalization in the evaluation procedure. Such a strategy enables the network to be used for all kinds of samples without defining a specific value for a certain batch.

4) 3D Pooling: A max pooling layer can reduce the number of training parameters of a CNN [35]. Traditionally, 3D max pooling is used for spatial–temporal feature learning in video action recognition and detection tasks [52], [53]. Similarly, 3D max pooling can be used for spatial–spectral feature learning of hyperspectral images. In a DNN, dozens of 3D convolution kernels are applied to the same input in a layer to explore different features, and pooling is then used to summarize these features such that only the dominating feature is retained. Suppose pooling is applied to the features extracted using $T$ 3D convolution kernels $W_t$, $t = 1, 2, \ldots, T$; the 3D max pooling is then defined as

$$O^{x,y,z} = \max_t F_t^{x,y,z} \qquad (5)$$

where $F_t^{x,y,z}$ represents the features extracted using the 3D convolution kernel $W_t$ and $O^{x,y,z}$ represents the feature at position $(x, y, z)$ after 3D max pooling. Note that the structure information of the input to the 3D convolutional layer is not flattened in such 3D max pooling.
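As a concrete reading of (4) and (5), the NumPy fragment below (shapes are illustrative, and the batch statistics are computed in one plausible way rather than necessarily the paper's exact implementation) normalizes a minibatch of feature cubes and then keeps the elementwise maximum over the responses of $T$ kernels:

```python
import numpy as np

eps, gamma, beta = 1e-5, 1.0, 0.0  # epsilon default and learnable scale/shift

# F: responses of T = 4 kernels for a minibatch of 32 cubes of size 3x3x97.
F = np.random.rand(32, 4, 3, 3, 97)

# 3D batch normalization per (4): center and scale using minibatch statistics.
mean = F.mean(axis=0, keepdims=True)
var = F.var(axis=0, keepdims=True)
Y = (F - mean) / np.sqrt(var + eps) * gamma + beta

# 3D max pooling per (5): elementwise maximum across the T kernel responses,
# so only the dominating feature at each (x, y, z) position is retained.
O = Y.max(axis=1)
print(O.shape)  # (32, 3, 3, 97)
```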
B. Proposed 3D-CAE for Unsupervised Spatial–Spectral Feature Learning

Fig. 3. Framework of the proposed 3D-CAE for unsupervised spatial–spectral feature learning of hyperspectral images.

When the proposed 3D-CAE is used for unsupervised spatial–spectral feature learning, it is first trained with an "encoding–decoding" step, in which a hyperspectral data cube is provided to the 3D-CAE for feature learning and then reconstructed using the learned features. After the network has been trained to recover the hyperspectral data cube well from the learned features, it can be used to extract spatial–spectral features. As shown in Fig. 3, the proposed unsupervised spatial–spectral feature learning is conducted in the following three steps: 1) constructing an encoder for spatial–spectral feature learning; 2) constructing a decoder to train the encoder under a reconstruction task; and 3) extracting unsupervised features from hyperspectral images using the encoder of the proposed 3D-CAE.

A CAE can automatically learn effective features under a reconstruction task without labeled samples. Therefore, it has been applied to unsupervised feature extraction of hyperspectral images [44]. However, the flattening method in [44], which simply arranges different neighboring pixels as a vector, loses the spatial structure information that has been demonstrated to be crucial in hyperspectral applications. Therefore, as shown in Fig. 4, a novel multilayer 3D-CAE is proposed for unsupervised feature learning of hyperspectral images, in which the neighboring cubes of pixels are fed directly into the encoder of the proposed 3D-CAE without any other handcrafted operators, and 3D convolution is then used to explore the spatial context for feature learning.

Fig. 4. Architecture of the proposed 3D-CAE for feature extraction of hyperspectral images.

In this paper, for a pixel $p_{x,y} \in \mathbb{R}^{c \times 1}$ located at $(x, y)$ on the image plane, a square patch of size $s \times s$ centered at $(x, y)$ is considered as its spatial context, where $c$ is the number of channels (spectral bands). Therefore, in the proposed 3D-CAE, in order to fully explore the spatial context of $p_{x,y}$, its spatial neighborhood $I_{(x,y)} \in \mathbb{R}^{s \times s \times c}$ is fed directly to the encoder without any other transformations.

As shown in Fig. 4, the encoder of the proposed 3D-CAE stacks 3D convolutional layers after the input layer to learn spatial–spectral features within a spatial neighborhood, in which 3D convolution is applied in the spatial and spectral domains simultaneously. In the encoder, 3D batch normalization is adopted to normalize the features generated by each 3D convolutional layer so that the feature weights lie in the same range; thus, a large learning rate can be used to speed up the training process [54]. After multiple 3D convolutional layers, 3D max pooling is finally utilized to gather the local features explored by the different 3D convolution filters. As a result, a global spatial–spectral feature is produced as the output of the encoder of the proposed 3D-CAE.
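A minimal Keras sketch of such an encoder follows. It is an assumption-laden illustration: the layer count, kernel sizes, strides, and filter numbers are placeholders rather than the settings of Table III, and the final pooling layer is named "Pool2" only to mirror the layer name used in Section III:

```python
import tensorflow as tf
from tensorflow.keras import layers

s, c = 5, 200  # assumed patch size and band count

encoder = tf.keras.Sequential([
    layers.Input(shape=(s, s, c, 1)),           # an s x s x c neighborhood cube
    layers.Conv3D(24, (3, 3, 8), strides=(1, 1, 2), padding="same"),
    layers.BatchNormalization(),                # batch normalization, cf. (4)
    layers.PReLU(shared_axes=[1, 2, 3]),        # elementwise PReLU activation
    layers.Conv3D(48, (3, 3, 8), strides=(1, 1, 2), padding="same"),
    layers.BatchNormalization(),
    layers.PReLU(shared_axes=[1, 2, 3]),
    layers.MaxPool3D(pool_size=(1, 1, 2), name="Pool2"),  # 3D max pooling
])
encoder.summary()  # feature cube of shape (5, 5, 25, 48) under these placeholders
```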

In order to train the encoder of the proposed 3D-CAE for feature extraction, as shown in Fig. 4, a companion 3D convolutional decoder network is designed to reconstruct the input hyperspectral cubes from the features extracted by the encoder. With such a training strategy, a large number of labeled samples is not required. In this paper, the 3D convolutional decoder network has a structure mirroring that of the encoder; however, 3D transposed convolutional layers are stacked to reconstruct the hyperspectral cube from the encoded features, instead of the 3D convolution layers used in the encoder to extract features. The backpropagation (BP) method is used to train the network with a loss function designed as

$$\text{loss} = \frac{1}{s \cdot s \cdot c} \sum_{x=0}^{s-1} \sum_{y=0}^{s-1} \sum_{z=0}^{c-1} \left( I_{x,y,z} - \hat{I}_{x,y,z} \right)^2 + \frac{\lambda}{2} \| W \|_2^2 \qquad (6)$$

where $I_{x,y,z}$ represents the value at position $(x, y, z)$ of the input $I \in \mathbb{R}^{s \times s \times c}$ to the encoder, $\hat{I}_{x,y,z}$ represents its value as reconstructed by the 3D convolutional decoder network during training, $W$ consists of the weights of all the layers, and $\lambda$ is a hyperparameter set to 0.0005. The first term in (6) measures the reconstruction error, while the second, regularization, term forces the weights toward the origin. Such a weight-decay term can greatly reduce the generalization error over testing samples.

In both the encoder and its companion 3D convolutional decoder network, the PReLU activation function [48], which is an elementwise operation that does not flatten the spatial structure, is adopted for all convolutional and deconvolutional layers, since it improves model fitting without extra computational cost or overfitting risk [48].

When the encoder has been well trained by its companion 3D convolutional decoder network under the reconstruction task of hyperspectral images, it is then used independently to extract spatial–spectral features of the pixels in an image for other applications, such as classification and object detection.
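A companion sketch of the decoder and of the loss in (6) follows, using the same placeholder shapes as the encoder sketch above. The mean squared error supplies the first term of (6); attaching an L2 kernel regularizer of strength λ/2 to each layer is one way (an assumption, not a mechanism prescribed by the paper) to realize the weight-decay term:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

lam = 0.0005  # the hyperparameter lambda in (6)
l2 = regularizers.l2(lam / 2)

# Mirrored decoder: transposed 3D convolutions map the encoded features
# (assumed shape 5 x 5 x 25 x 48, matching the encoder sketch) back to the
# original 5 x 5 x 200 x 1 input cube.
decoder = tf.keras.Sequential([
    layers.Input(shape=(5, 5, 25, 48)),
    layers.Conv3DTranspose(48, (3, 3, 8), strides=(1, 1, 2), padding="same",
                           kernel_regularizer=l2),
    layers.PReLU(shared_axes=[1, 2, 3]),
    layers.Conv3DTranspose(24, (3, 3, 8), strides=(1, 1, 2), padding="same",
                           kernel_regularizer=l2),
    layers.PReLU(shared_axes=[1, 2, 3]),
    layers.Conv3DTranspose(1, (3, 3, 8), strides=(1, 1, 2), padding="same",
                           kernel_regularizer=l2),
])
print(decoder.output_shape)  # (None, 5, 5, 200, 1): the input cube is recovered
```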

III. EXPERIMENTS

In this section, extensive experiments are conducted to verify the performance of the proposed 3D-CAE for unsupervised spatial–spectral feature learning of hyperspectral images.

A. Experimental Results Over Data Sets Acquired by the AVIRIS Sensor

In this experiment, two benchmark data sets acquired by the AVIRIS sensor, i.e., the Indian Pines data set and the Salinas Valley data set, are adopted for evaluation.¹ As shown in Fig. 5(a), the Indian Pines data set contains 145 × 145 pixels with a ground resolution of 17 m. According to the ground-truth classification map of the Indian Pines data set shown in Fig. 5(b), this area mainly contains 16 different land-cover classes of agriculture, as listed in Table I. The Salinas Valley data set, shown in Fig. 6, contains 512 × 217 pixels with a ground resolution of 3.7 m. As shown in Table II, 16 classes such as vegetables, bare soils, and vineyard fields are adopted.

¹ Available online at http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes

Fig. 5. (a) Pseudocolor image of the Indian Pines data set. (b) Ground-truth classification map of the Indian Pines data set.

Fig. 6. (a) Pseudocolor image of the Salinas Valley data set. (b) Ground-truth classification map of the Salinas Valley data set.

TABLE I. Class labels and train–test distribution of samples for the Indian Pines data set.

TABLE II. Class labels and train–test distribution of samples for the Salinas data set.

The proposed 3D-CAE is designed, trained, and tested using the Keras framework.² The parameter settings of the proposed 3D-CAE are listed in Table III. The network is trained using "adagrad" on a GeForce GTX 1080 GPU for 200 epochs, with a learning rate of 0.01 and a minibatch size of 32. In order to learn the spatial–spectral feature of a pixel, the pixels in its 5 × 5 neighborhood are fed to the network, with border pixels mirror-padded. In addition, the training set and validation set are divided by a ratio of 1:9.

² https://keras.io/

TABLE III. Parameter settings of the proposed 3D-CAE when applied to data sets acquired by the AVIRIS sensor.

After the network is well trained under the reconstruction task, it is used to extract the spatial–spectral features of pixels by flattening the output of the "Pool2" layer into feature vectors. In order to evaluate the effectiveness of the features learned by the proposed 3D-CAE, the traditional support vector machine (SVM) classifier with a radial basis function kernel, implemented using the LIBSVM package [55], is used for classification. In this experiment, as shown in Tables I and II, 10% and 5% of the samples of each class are used for training in the Indian Pines and Salinas Valley data sets, respectively, and the others are used for validation. In addition, a tenfold cross-validation strategy is used for evaluation.

In this experiment, both unsupervised and supervised feature reduction methods are adopted for comparison. For unsupervised feature reduction, PCA, NPE [17], LPP [58], DAE [44], tensorial PCA (TPCA) [12], SSAE [45], and EPLS [42] are adopted, whereas four supervised feature reduction methods, including LDA [7], local Fisher's discriminant analysis (LFDA) [56], sparse graph-based discriminant analysis (SGDA) [57], and SLGDA [22], are adopted for comparison. Note that the results of LFDA, SGDA, and SLGDA are taken from [22]. PCA, NPE, LPP, and LDA are implemented using code published online,³ in which the reduced feature dimension in PCA is set to 40 and the neighborhood size in NPE and LPP is set to 5 × 5. TPCA is implemented as explained in [12], in which 10% of the pixels are randomly selected to construct the correlation tensor. In DAE [44], the dimensionality of the spectral dimension is reduced to 4 by PCA, and the size of the "neighbor region" is set to 5 × 5. In the multiscale spatial feature learning of SSAE [45], the spatial scales are set to 3, 5, and 7. In EPLS [42], the spatial neighborhood is set to 5 × 5. In addition, two typical CNNs that learn effective features in a supervised manner, i.e., the 1D-CNN in [35] and the S-CNN [38], are also adopted for comparison. In these two CNN-based algorithms, the number of samples adopted for training in the feature extraction step is identical to that used to train the subsequent classifier.

³ …ionReduction.html
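Putting the reported settings together, the end-to-end sketch below trains the autoencoder and classifies the flattened "Pool2" features with an RBF-kernel SVM. It assumes the `encoder` and `decoder` objects from the earlier sketches, plus arrays `cubes` (mirror-padded 5 × 5 neighborhood cubes), `labels`, `train_idx`, and `test_idx` that are not defined in the paper; scikit-learn's `SVC`, which wraps LIBSVM, stands in for the LIBSVM package itself:

```python
import tensorflow as tf
from sklearn.svm import SVC

autoencoder = tf.keras.Sequential([encoder, decoder])

# Training settings reported in the paper: "adagrad", learning rate 0.01,
# 200 epochs, minibatch 32, and a 1:9 train/validation division.
autoencoder.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.01),
                    loss="mse")  # reconstruction target is the input itself
autoencoder.fit(cubes, cubes, epochs=200, batch_size=32, validation_split=0.9)

# Flatten the output of the "Pool2" layer as the per-pixel feature vector.
feat_model = tf.keras.Model(encoder.input,
                            encoder.get_layer("Pool2").output)
feats = feat_model.predict(cubes).reshape(len(cubes), -1)

# RBF-kernel SVM on the learned features.
svm = SVC(kernel="rbf").fit(feats[train_idx], labels[train_idx])
print(svm.score(feats[test_idx], labels[test_idx]))
```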
The quantitative results over these two data sets, evaluated by average accuracy (AA) and overall accuracy (OA), are listed in Tables IV and V, respectively. The corresponding visual results are shown in Figs. 7 and 8.

TABLE IV. Classification accuracy of different feature extraction algorithms over the Indian Pines data set.

TABLE V. Classification accuracy of different feature extraction algorithms over the Salinas Valley data set.

Fig. 7. Classification maps generated by different feature extraction algorithms over the Indian Pines data set. (a) LDA [7]. (b) 1D-CNN [35]. (c) S-CNN [38]. (d) PCA. (e) NPE [17]. (f) LPP [58]. (g) DAE [44]. (h) TPCA [12]. (i) SSAE [45]. (j) EPLS [42]. (k) Proposed 3D-CAE.

The observations are as follows.

1) The features learned by the proposed 3D-CAE outperform nearly all the other considered features over these two data sets, covering both supervised and unsupervised feature extraction algorithms.

The proposed 3D-CAE offers performance similar to that of the S-CNN and TPCA on the Salinas Valley data set in terms of OA.

2) On the Indian Pines data set, where all the classes are somewhat difficult to discriminate, the superiority of the features learned by the proposed 3D-CAE is more obvious. Of the 16 classes, the proposed 3D-CAE yields the best performance on 7 classes while approaching the best on the other 9 classes.

3) Although both SSAE and DAE use the same AE idea as the proposed 3D-CAE, the spatial context in these two algorithms is flattened by the use of PCA for dimensionality reduction before feature extraction. On the contrary, the proposed 3D-CAE significantly improves on the existing AE-based algorithms by using only 3D or elementwise operations to retain the structure information.

4) The proposed 3D
