Quality-Aware Images

Zhou Wang, Member, IEEE, Guixing Wu, Student Member, IEEE, Hamid R. Sheikh, Member, IEEE, Eero P. Simoncelli, Senior Member, IEEE, En-Hui Yang, Senior Member, IEEE, and Alan C. Bovik, Fellow, IEEE

IEEE Transactions on Image Processing, to appear

Abstract— We propose the concept of quality-aware image, in which certain extracted features of the original (high-quality) image are embedded into the image data as invisible hidden messages. When a distorted version of such an image is received, users can decode the hidden messages and use them to provide an objective measure of the quality of the distorted image. To demonstrate the idea, we build a practical quality-aware image encoding, decoding and quality analysis system, which employs 1) a novel reduced-reference image quality assessment algorithm based on a statistical model of natural images, and 2) a previously developed quantization watermarking-based data hiding technique in the wavelet transform domain. A MatLab implementation of the system is available online at http://www.cns.nyu.edu/~lcv/qaware/.

Keywords— quality-aware image, image quality assessment, reduced-reference image quality assessment, natural image statistics, generalized Gaussian density, information hiding, image watermarking, image communication

Manuscript received August 30, 2004; revised February 14, 2005. Z. Wang and E. P. Simoncelli are supported by the Howard Hughes Medical Institute. G. Wu and E.-H. Yang are supported by the Natural Sciences and Engineering Research Council of Canada, the Premier's Research Excellence Award, the Canada Foundation for Innovation, the Ontario Distinguished Research Award, and the Canada Research Chairs Program. H. R. Sheikh and A. C. Bovik are supported by the National Science Foundation. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Zhigang Fan. Z. Wang and E. P. Simoncelli are with the Center for Neural Science and the Courant Institute of Mathematical Sciences, New York University, New York, NY 10012 USA (e-mail: zhouwang@ieee.org; eero.simoncelli@nyu.edu). G. Wu and E.-H. Yang are with the Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada (e-mail: g2wu@bbcr.uwaterloo.ca; ehyang@bbcr.uwaterloo.ca). H. R. Sheikh is with Texas Instruments, Inc., Dallas, TX 75243 USA (e-mail: hamid.sheikh@ieee.org). A. C. Bovik is with the Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712 USA (e-mail: bovik@ece.utexas.edu).

I. Introduction

Digital images are subject to a variety of distortions during compression, transmission, processing, and reproduction. In order to maintain, control and possibly enhance the quality of the image and video data being delivered, it is important for data management systems (e.g., network video servers) to be able to identify and quantify quality degradations on the fly. Since most of the image data will eventually be consumed by humans, the most reliable means of assessing image quality is subjective evaluation. However, subjective testing is expensive and time-consuming. On the other hand, most objective image/video quality assessment methods proposed in the literature [1]–[3] are not applicable in this scenario, because they are full-reference (FR) methods that require access to the original images as references. Therefore, it is highly desirable to develop quality assessment algorithms that do not require full access to the reference images.

Unfortunately, no-reference (NR) or "blind" image quality assessment is an extremely difficult task. Most proposed NR quality metrics are designed for one or a set of predefined specific distortion types [4]–[10] and may not generalize to images degraded by other types of distortions. Moreover, knowledge of the distortions that arise between the original and corrupted images is in general not available to image quality assessment systems. Thus, it is desirable to have a more general image quality assessment system that is applicable to a wide variety of distortions. However, to the best of our knowledge, no such method has been proposed and extensively tested.

One interesting recent development in image/video quality assessment research is the design of reduced-reference (RR) methods for quality assessment [2], [3]. These methods do not require full access to reference images, but only need partial information, in the form of a set of extracted features. Conceptually, RR methods make the quality assessment task easier than NR methods by paying the additional cost of transmitting side information to the users. The standard deployment of an RR method requires the side information to be sent through an ancillary data channel [3]. However, this restricts the application scope of the method, because an additional data channel may be inconvenient or expensive to provide. An alternative solution would be to send the side information in the same channel as the images being transmitted. For example, the side information could be included as a component of the image data structure (e.g., as part of the header of the image format). However, this strategy would be difficult to implement in existing large-scale, heterogeneous networks such as the Internet, because it requires all the users in the communication network to adopt a new image format, or to amend all existing image formats to allow the side information to be included. Besides, lossy data transmission and typical image format conversions may cause loss of the original image headers.

In this paper, we propose the concept of quality-aware image, in which extracted features of the reference image are embedded as hidden messages. When a distorted version of such an image is received, the users can decode the hidden messages and use them to help evaluate the quality of the distorted image using an RR quality assessment method. There are several advantages of this approach:

- It uses an RR method, which makes the image quality assessment task feasible (as compared to FR and NR methods).
- It does not affect the conventional usage of the image data, because the data hiding process causes only invisible changes to the image.
- It does not require a separate data channel to transmit the side information.
- It allows the image data to be stored, converted and distributed using any existing or user-defined format without losing the functionality of "quality-awareness", provided the hidden messages are not corrupted during lossy format conversion.
- It provides the users with a chance to partially "repair" the received distorted images by making use of the embedded features.

This study is largely inspired by [11], [12] and [13], where a pseudo-random bit sequence or a watermark image is hidden inside the image being transmitted. The bit error rate or the degradation of the watermark image measured at the receiver side is then used as an indication of the quality degradation of the host image signal. These methods are perhaps the first attempts to use information hiding technologies for the estimation of image quality degradation. Nevertheless, strictly speaking, they are not image quality assessment methods, because no extracted features of either the reference or the distorted images are actually used in the quality evaluation process. Instead, the distortion processes that occur in the distortion channel are gauged, in the hope that the estimated channel distortion correlates well with the perceptual image degradation incurred during transmission through the channel. However, such a connection is obscured by the nature (e.g., complexity) of the image signals and the types of image distortions, which have variable effects on perceived image quality. In addition, these methods provide no clue about how the received distorted images can be corrected.

Information hiding or digital watermarking has been an active research area in the last decade. Traditionally, these techniques have been designed for security-related applications such as copyright protection and data authentication. Recently, researchers have attempted to broaden their application scope to non-security oriented applications [14], [15]. Quality-aware images mainly belong to this second category (see Section V for discussion), and they bring about new challenges in the selection and design of information hiding techniques.

II. Quality-Aware Image

A. Framework

A system diagram of the quality-aware image encoding, decoding and quality analysis system is shown in Fig. 1. A feature extraction process is first applied to the original image, which is assumed to have perfect quality. The quality-aware image is obtained by embedding these features as invisible messages into the original image. The quality-aware image may then pass through a "distortion process" before it reaches the receiver side. Here the "distortion process" is general in concept. It can be a distortion channel in an image communication system, with possibly lossy compression, noise contamination and/or postprocessing involved. It can also be any other process that may alter the image.

[Fig. 1. Quality-aware image encoding, decoding and quality analysis system. Block diagram: feature extraction on the original image produces RR features that are embedded as a hidden message using a key; at the receiver, a decoder and quality analyzer uses the same key to recover the message and outputs a quality measure.]

At the receiver side, the hidden messages are first decoded from the distorted quality-aware image. For the messages to be decoded correctly, the key for information embedding and decoding must be shared between the sender and the receiver. Depending on the application environment, there may be different ways to distribute the embedding key.
One simple solution is to attach the key to the decoder software and/or to publish the key, so that it can be easily obtained by all potential users of quality-aware images. Note that the key is independent of the image and can be the same for all quality-aware images, so it does not need to be transmitted with the image data. The decoded messages are translated back into the features of the reference image. Next, a feature extraction procedure corresponding to the one at the sender side is applied to the distorted image. The resulting features are then compared with those of the reference image to yield a quality score for the distorted quality-aware image.

In order to improve robustness, error detection/correction coding techniques may be applied before the information embedding process. Nevertheless, the hidden messages may still be decoded incorrectly when the distortions are extremely severe. At the receiver side, the system must be able to detect such situations (based on the error detection and correction code) and report a failure message instead of a quality score.
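The end-to-end data flow can be summarized by the following Python sketch. It is only a structural illustration: the helper functions and the block-standard-deviation "features" are hypothetical stand-ins chosen to keep the example self-contained and runnable, not the features or the wavelet-domain embedding method described in Section III.

```python
# Structural sketch of the quality-aware image pipeline (illustrative only).
# All helpers below are hypothetical stand-ins, not the paper's algorithms.
import numpy as np

def extract_features(img: np.ndarray) -> np.ndarray:
    # Stand-in for the GGD subband features of Sec. III-A: crude standard
    # deviations over an 8x8 grid of blocks, just to have something to carry.
    h, w = img.shape
    return img.reshape(8, h // 8, 8, w // 8).std(axis=(1, 3)).ravel()

def embed_features(img: np.ndarray, feats: np.ndarray):
    # Stand-in for the wavelet-domain quantization watermarking of Sec. III-B;
    # here the message is simply carried alongside the (unchanged) image.
    return img.copy(), feats.copy()

def distortion_channel(img: np.ndarray, sigma: float = 10.0) -> np.ndarray:
    # Example "distortion process": additive white Gaussian noise.
    rng = np.random.default_rng(0)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0, 255)

def rr_quality(received: np.ndarray, decoded_feats: np.ndarray) -> float:
    # RR comparison: recompute the features on the received image and compare
    # them with the reference features recovered from the hidden message.
    return float(np.abs(extract_features(received) - decoded_feats).mean())

reference = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(float)
qa_image, hidden = embed_features(reference, extract_features(reference))
received = distortion_channel(qa_image)
print("RR distortion estimate:", rr_quality(received, hidden))  # larger = worse
```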

B. Design Considerations

Designing an effective quality-aware image system is a challenging task. On the one hand, in order to provide effective quality prediction, the RR quality assessment system needs to know as much as possible about the reference image, so the information hiding system would need to embed a fairly large amount of information. On the other hand, in order for the hidden messages to be invisible and to survive a wide variety and degree of distortions, the amount of information that can be embedded is limited. The RR quality assessment system must observe this limit and carefully select a set of features that can be encoded within it. These features must be highly relevant to image quality degradations. They must also provide an efficient summary of the reference image.

Another issue that may need to be considered is that many data hiding techniques tend to change certain statistical features of the original image (e.g., [16], [17]). This could potentially conflict with quality assessment systems, because such systems may rely on the way these statistical features change as an indication of quality degradation.

To summarize, a successful quality-aware image system must provide a good trade-off between data hiding load, embedding distortion, robustness, and the accuracy of image quality prediction.

C. A Simple Example

Perhaps the simplest way to implement a quality-aware image system is to embed a certain number of (perhaps randomly selected) reference image pixels as hidden messages. For synchronization purposes, the positions of these pixels also need to be embedded. At the receiver side, the decoded reference image pixels are compared with the corresponding distorted image pixels, and a distortion/quality metric, such as the mean squared error (MSE) or the peak signal-to-noise ratio (PSNR), is estimated.

Such a system, although simple, is quite weak in several respects. First, it requires a high data hiding rate. For example, for a 512 × 512, 8 bits/pixel gray scale image, embedding 1% of the image pixels (together with 2 × 9 bits for encoding each pixel position) requires a total of 68146 bits, a heavy load for most robust information hiding systems. Second, such a small number of pixels is unlikely to allow accurate estimation of the distortion metrics, unless the distortion between the reference and distorted images is independently and identically distributed noise. The obvious drawbacks of this simple example lead us to consider image features that are more efficient in summarizing image information and more effective in evaluating image quality.
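The 68146-bit figure follows directly from the numbers above; a few lines of Python make the bookkeeping explicit (a back-of-the-envelope check only, not part of the proposed system):

```python
# Bit cost of the naive pixel-embedding example: 1% of the pixels of a
# 512 x 512, 8 bits/pixel image, each with its position coded in 2 x 9 bits.
height = width = 512
bits_per_value = 8                      # gray level of one embedded pixel
bits_per_position = 2 * 9               # 9 bits each for the row and column indices
n_pixels = int(0.01 * height * width)   # 1% of 262144 pixels -> 2621 pixels
total_bits = n_pixels * (bits_per_value + bits_per_position)
print(n_pixels, total_bits)             # 2621 68146
```
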
III. Implementation

A. RR Quality Assessment

Here, we propose a new RR quality assessment method based on statistics computed for natural images in the wavelet transform domain. Wavelet transforms provide a convenient framework for the localized representation of signals simultaneously in space and frequency. They have been widely used to model the processing in the early stages of biological visual systems and have also become the preferred form of representation for many image processing and computer vision algorithms. In recent years, natural image statistics have played an important role in the understanding of sensory neural behaviors of the human visual system [18]. In the image processing literature, statistical prior models of natural images have been employed as fundamental ingredients in a large number of image coding and estimation algorithms (e.g., [19]–[21]). They have also been used for image quality assessment purposes (e.g., [8]).

Figure 2 shows the histograms of the coefficients computed from one of the wavelet subbands in a steerable pyramid decomposition [22] (a type of redundant wavelet transform that avoids aliasing in the subbands). It has been pointed out that the marginal distributions of such oriented bandpass filter responses of natural images are highly kurtotic (with sharp peaks at zero and much longer tails than a Gaussian density, as demonstrated in Fig. 2(a)) and have a number of important implications for the sensory neural coding of natural visual scenes [23]. In [24], [25], it was demonstrated that many natural-looking texture images can be synthesized by matching the histograms of the responses of a set of well-selected bandpass filters. Psychophysical visual sensitivity to histogram changes of wavelet textures has also been studied (e.g., [26], [27]). In Fig. 2, it can be seen that the marginal distribution of the wavelet coefficients changes in different ways for different types of image distortion. Such histogram changes in images contaminated with white Gaussian noise have been observed previously and used for image denoising [19], [20].

Let p(x) and q(x) denote the probability density functions of the wavelet coefficients (assumed to be independently and identically distributed) in the same subband of two images, respectively. Let x = {x_1, ..., x_N} be a set of N randomly and independently selected coefficients. The log-likelihoods of x being drawn from p(x) and q(x) are

  l(p) = \frac{1}{N} \sum_{n=1}^{N} \log p(x_n)  and  l(q) = \frac{1}{N} \sum_{n=1}^{N} \log q(x_n) ,   (1)

respectively. Now assume that p(x) is the true probability density of the coefficients. By the law of large numbers, when N is large, the difference of the log-likelihoods (or, equivalently, the log-likelihood ratio) between p(x) and q(x) asymptotically approaches the Kullback-Leibler distance (KLD) [28] between p(x) and q(x):

  l(p) - l(q) \rightarrow d(p \| q) = \int p(x) \log \frac{p(x)}{q(x)} \, dx .   (2)

In previous work, a number of authors have pointed out the relationship between the KLD and the log-likelihood function and have used the KLD to compare images, mainly for classification and retrieval purposes [29]–[32]. The KLD has also been applied to the distributions of image pixel intensity values for the evaluation of compressed image quality [33], [34]. Here, we use the KLD to quantify the difference between the wavelet coefficient distributions of a perfect-quality reference image and a distorted image (denoted as p(x) and q(x), respectively, in what follows). To make an effective estimation, the coefficient histograms of both images must be available. The latter can easily be computed from the received distorted image. The difficulty is in obtaining the coefficient histogram of the reference image at the receiver side. Transmitting all the histogram bins as hidden messages would result in either a heavy data load (when the bin step size is fine) or a weaker statistical characterization (when the bin step size is coarse).

[Fig. 2. Comparisons of wavelet coefficient histograms (solid curves) calculated from the same horizontal subband in the steerable pyramid decomposition [22]. (a) original (reference) "buildings" image (cropped for visibility); (b) JPEG2000 compressed image; (c) white Gaussian noise contaminated image; (d) Gaussian blurred image. The histogram of the original image coefficients is well fitted by a generalized Gaussian density model (dashed curves).]

One important discovery in the literature of natural image statistics is that the marginal distribution of the coefficients in individual wavelet subbands can be well fitted with a 2-parameter generalized Gaussian density (GGD) model [35]:

  p_m(x) = \frac{\beta}{2 \alpha \Gamma(1/\beta)} \, e^{-(|x|/\alpha)^{\beta}} ,   (3)

where \Gamma(a) = \int_0^{\infty} t^{a-1} e^{-t} \, dt (for a > 0) is the Gamma function. One fitting example is shown in Fig. 2(a) as the dashed curve. This model provides a very efficient means of summarizing the coefficient histogram of the reference image, so that only two model parameters {α, β} need to be transmitted to the receiver as hidden messages. The model has been used in previous work for image compression [21] and texture image retrieval [32]. In addition to the fitting parameters α and β, we also embed the prediction error as a third parameter, defined as the KLD between p_m(x) and p(x):

  d(p_m \| p) = \int p_m(x) \log \frac{p_m(x)}{p(x)} \, dx .   (4)

In practice, this quantity has to be evaluated numerically using histograms:

  d(p_m \| p) = \sum_{i=1}^{L} P_m(i) \log \frac{P_m(i)}{P(i)} ,   (5)

where P(i) and P_m(i) are the normalized heights of the i-th histogram bins, and L is the number of bins in the histograms.

At the receiver side, we wish to compute an approximation to Eq. (2), the KLD between the coefficient distribution of the original image, p(x), and that of the distorted image, q(x). Since we do not have the original histogram, we replace the expectation over p(x) with an expectation over the model density, p_m(x):

  \hat{d}(p \| q) = \int p_m(x) \log \frac{p(x)}{q(x)} \, dx   (6)
                  = d(p_m \| q) - d(p_m \| p) .   (7)

The second term is simply the KLD between the original histogram and the model (Eq. (4)), which is embedded in the image by the encoder. The first term is the KLD between p_m(x) and q(x), the histogram of the distorted image:

  d(p_m \| q) = \int p_m(x) \log \frac{p_m(x)}{q(x)} \, dx .   (8)

This is computed by the receiver from the histogram bins of the distorted wavelet coefficients (analogous to Eq. (5)). Note that, unlike the encoding side, we avoid fitting q(x) with a GGD model, which may not be appropriate for the distorted data.

Finally, the overall distortion between the distorted and reference images is defined as

  D = \log_2 \left( 1 + \frac{1}{D_0} \sum_{k=1}^{K} \left| \hat{d}^k (p^k \| q^k) \right| \right) ,   (9)

where K is the number of subbands, p^k and q^k are the probability density functions of the k-th subbands of the reference and distorted images, respectively, \hat{d}^k is the estimate of the KLD between p^k and q^k, and D_0 is a constant used to control the scale of the distortion measure.
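To illustrate the receiver-side computation, the following Python sketch evaluates Eqs. (3), (5), (7) and (9) on synthetic data. The GGD parameters, the number of histogram bins, the value of d(p_m‖p), and the constant D_0 below are illustrative choices rather than the settings of the actual system, and the gradient-descent fitting performed at the encoder is omitted: α, β and d(p_m‖p) are simply assumed to have been recovered from the hidden message.

```python
# Numerical sketch of the RR distortion measure (Eqs. (3), (5), (7), (9)).
# Parameter values and the toy data are illustrative, not the system settings.
import numpy as np
from scipy.special import gamma

def ggd_pdf(x, alpha, beta):
    # Generalized Gaussian density, Eq. (3).
    return beta / (2 * alpha * gamma(1 / beta)) * np.exp(-(np.abs(x) / alpha) ** beta)

def kld_hist(p, q, eps=1e-12):
    # Discrete KLD between two normalized histograms, as in Eq. (5).
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def subband_kld(coeffs_distorted, alpha, beta, d_pm_p, bins=64):
    # Eq. (7): d_hat(p||q) = d(p_m||q) - d(p_m||p).  d(p_m||p) comes from the
    # hidden message; d(p_m||q) is computed from the received coefficients.
    counts, edges = np.histogram(coeffs_distorted, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    pm = ggd_pdf(centers, alpha, beta)
    return kld_hist(pm, counts.astype(float)) - d_pm_p

def overall_distortion(subband_klds, D0=0.1):
    # Eq. (9): D = log2(1 + (1/D0) * sum_k |d_hat^k|).
    return float(np.log2(1 + np.sum(np.abs(subband_klds)) / D0))

# Toy example: a "reference" subband with a GGD (Laplacian, beta = 1)
# distribution, and a white-noise-contaminated "distorted" version.
rng = np.random.default_rng(0)
ref = rng.laplace(0.0, 2.0, 50000)
dist = ref + rng.normal(0.0, 3.0, ref.size)
d_hat = subband_kld(dist, alpha=2.0, beta=1.0, d_pm_p=0.0)
print("D =", overall_distortion([d_hat]))
```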

[Fig. 3. Feature extraction system at the encoder side. Block diagram: the original image is decomposed by a steerable pyramid transform, the coefficient statistics of selected subbands are summarized, and the resulting parameters form the bitstream to be embedded.]

Figure 3 illustrates our implementation of the feature extraction system at the encoder side. We first apply a 3-scale, 4-orientation steerable pyramid transform [22] to decompose the image into 12 oriented subbands (4 for each scale) plus the highpass and lowpass residuals, as demonstrated in Fig. 4. For each subband, the histogram of the coefficients is computed and the feature parameters {α, β, d(p_m‖p)} are estimated using a gradient descent algorithm that minimizes the KLD between p(x) and p_m(x). Six of the 12 oriented subbands (as shown in Fig. 4) are selected for feature extraction. The major criterion for selecting these subbands is to reduce the data rate of the RR features while maintaining the quality prediction performance. Specifically, in the Fourier domain, adjacent steerable pyramid subbands (in both scale and orientation) overlap significantly, whereas there is essentially no overlap between non-adjacent subbands. Therefore, the 6 subbands marked in Fig. 4 are selected to reduce the use of redundant information. Furthermore, in our tests, selecting the other 6 oriented subbands or all 12 oriented subbands gives similar overall quality prediction performance. Finally, the extracted scalar features are quantized to finite precision. Both β and d(p_m‖p) are quantized to 8-bit precision, and α is represented as an 11-bit floating point number, with 8 bits for the mantissa and 3 bits for the exponent. These quantization precisions were hand-picked to represent the features in a limited number of bits while maintaining a reasonable approximation of the features. The final result is a total of (8 + 8 + 8 + 3) × 6 = 162 bits that are embedded into the image.

[Fig. 4. Steerable pyramid decomposition [22] of an image (highpass residual band not shown). A set of selected subbands (marked with dashed boxes) is used for GGD feature extraction.]

B. Information Embedding

To embed the extracted features into the image, we choose to use an existing dithered uniform scalar quantization watermarking method in the wavelet transform domain. This method is a simple case of the class of quantization-index-modulation information embedding techniques [36], which allow for "blind" decoding (decoding does not require access to the reference image) and achieve a good tradeoff between data-hiding rate and robustness. The information embedding system is illustrated in Fig. 5.

We first use a five-scale separable wavelet transform (specifically, a quadrature mirror filter transform [37]) to decompose the reference image into 16 subbands, including the horizontal, vertical and diagonal subbands at each scale, and a low-frequency residual band. In order to embed one bit of information m ∈ {0, 1} into a wavelet coefficient c, the coefficient is altered according to the following rule:

  c_q = Q(c - d(m)) + d(m) \triangleq Q_m(c) ,   (10)

where c_q is the altered coefficient, Q(·) is a base quantization operator with quantization step size Δ, and d(m) is a dithering operator defined as

  d(m) = \begin{cases} -\Delta/4, & \text{if } m = 0 \\ +\Delta/4, & \text{if } m = 1 \end{cases} .   (11)

At the receiver side, a distorted coefficient c_d is obtained and used to estimate the embedded bit based on the minimum distance criterion:

  \hat{m}(c_d) = \arg \min_{m \in \{0,1\}} \left\| c_d - Q_m(c_d) \right\| .   (12)
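A small Python sketch of Eqs. (10)-(12) is given below for a single coefficient; the step size Δ is an arbitrary illustrative value, and the sign convention of the dither in Eq. (11) follows the usual form of this scheme.

```python
# Dithered uniform scalar quantization embedding of one bit (Eqs. (10)-(12)).
# DELTA is an illustrative step size; the deployed system's value may differ.
import numpy as np

DELTA = 8.0

def dither(m: int) -> float:
    # Eq. (11): d(0) = -Delta/4, d(1) = +Delta/4.
    return -DELTA / 4 if m == 0 else DELTA / 4

def embed_bit(c: float, m: int) -> float:
    # Eq. (10): c_q = Q(c - d(m)) + d(m), with Q a uniform quantizer of step DELTA.
    return float(DELTA * np.round((c - dither(m)) / DELTA) + dither(m))

def decode_bit(c_d: float) -> int:
    # Eq. (12): choose the bit whose dithered quantization lattice is closest.
    return min((0, 1), key=lambda m: abs(c_d - embed_bit(c_d, m)))

c = 37.3                                   # an arbitrary wavelet coefficient
for m in (0, 1):
    c_q = embed_bit(c, m)
    received = c_q + 1.5                   # distortion below DELTA/4 = 2.0
    print(m, c_q, decode_bit(received))    # the embedded bit is recovered
```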

[Fig. 5. Information embedding system. Block diagram: wavelet coefficients at positions given by the embedding key are quantized to carry the error-correction-coded message bits, yielding the quality-aware image.]

We embed the hidden messages into the horizontal, vertical and diagonal subbands at the fifth scale (counted from fine to coarse) of the wavelet decomposition. We choose these low-frequency components because they usually have high signal energy and are less likely to be significantly altered by typical image processing operations. Moreover, this selection avoids conflict with the proposed RR quality assessment method, which is based on detecting the statistical changes of the wavelet coefficients at the finer scales. To further improve robustness, two error detection/correction techniques are employed. First, a 16-bit cyclic redundancy check (CRC) code [38] is computed and attached to the 162 information bits. Second, the resulting 178 bits are further encoded using a binary (15,5,7) BCH code [38], which can correct up to 3 bit errors in every 15 bits. As a result, a total of 540 bits are generated. The same number of wavelet coefficients are randomly selected from the fifth scale of the wavelet transform, and every bit is encoded into one coefficient using Eq. (10). The positions of the coefficients are shared between the sender and the receiver as the embedding key.

At the receiver side, we first apply the same wavelet transform to the received image. The embedded 540 bits are then extracted from the corresponding wavelet coefficients using Eq. (12) and decoded with the BCH decoder. The decoded 178 bits are split into the corresponding 162 information bits and 16 CRC bits. We then calculate a new set of CRC bits from the decoded information bits and compare them with the decoded CRC bits. If any of the CRC bits is incorrect, the system reports a failure message. Otherwise, the extracted 162 information bits are converted back into the scalar features of the reference image and relayed to the quality assessment system. Finally, a quality score of the distorted image is reported.

A failure message may be reported for several reasons. It could be that the received image is not a quality-aware image (no side information has been embedded) or that the embedded information is desynchronized (e.g., by image editing). It could also be that the image quality degradation is so severe that the embedded information cannot be completely recovered. It is often useful to distinguish between the two cases, because in the latter case a failure message can serve as an indication of low image quality. One way to make such a distinction is to examine the percentage R of correct CRC bits, because statistically only in the latter case may R be significantly higher than 50%. Following the general idea of [11]–[13], one can take a further step and use R as a factor in the prediction of image quality in the very low quality range, although the accuracy may be complicated by the nature (e.g., complexity) of the images being evaluated.
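The payload bookkeeping described above can be verified with a short Python sketch; it only reproduces the bit counts stated in the text (the 178 CRC-protected bits are assumed to be padded to fill complete 5-bit BCH input blocks):

```python
# Payload bookkeeping for the embedded message (counts as stated in the text).
import math

bits_per_subband = 8 + (8 + 3) + 8       # beta, alpha (8-bit mantissa + 3-bit exponent), d(p_m||p)
info_bits = 6 * bits_per_subband         # 6 selected subbands -> 162 bits
with_crc = info_bits + 16                # 16-bit CRC           -> 178 bits
bch_blocks = math.ceil(with_crc / 5)     # (15,5,7) BCH: 5 info bits per codeword (padded)
embedded_bits = 15 * bch_blocks          # -> 540 bits, one per selected coefficient
print(info_bits, with_crc, embedded_bits)  # 162 178 540
```
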
IV. Test

A. Performance of Quality Assessment

In order to evaluate and compare the performance of image quality assessment algorithms, we built a large image database (the LIVE image database, available online [39]) and conducted an extensive subjective experiment to assess the quality of the images in the database. The database contains 29 high-resolution (typically 768 × 512) original images altered with five types of distortions at different distortion levels. The distorted images were divided into seven datasets. Datasets 1 (87 images) and 2 (82 images) are JPEG2000 compressed images; Datasets 3 (87 images) and 4 (88 images) are JPEG compressed images; and Datasets 5, 6 and 7 (each containing 145 images) are distorted with white Gaussian noise, Gaussian blur, and transmission errors in the JPEG2000 bitstream using a fast-fading Rayleigh channel model, respectively. Subjects were asked to report their perception of quality on a continuous linear scale, and each image was rated by 20-25 subjects. The raw scores for each subject were converted into Z-scores and rescaled within each dataset to fill the range from 1 to 100. The mean opinion score and the standard deviation of the subjective scores were then computed for each image.

Three measures are computed to quantify the performance of the proposed quality assessment method. First, following the procedure given in the Video Quality Experts Group (VQEG) Phase I FR-TV test [42], we use a logistic function to provide a nonlinear mapping between the objective and subjective scores:

  f(s) = \frac{a_1 - a_2}{1 + \exp\left( -(s - a_3)/a_4 \right)} + a_2 ,   (13)

where s is the objective score and a_1, a_2, a_3 and a_4 are the model parameters, which are found numerically using a nonlinear regression process with the MatLab optimization toolbox. After the nonlinear mapping, the correlation coefficient between the predicted and true subjective scores is computed as a measure of prediction accuracy.
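One possible way to carry out the logistic fit of Eq. (13) is sketched below with SciPy; the scores are synthetic and only illustrate the mapping and the subsequent correlation computation, not the actual LIVE data.

```python
# Fitting the logistic mapping of Eq. (13) and computing the correlation
# coefficient after the mapping. The scores below are synthetic examples.
import numpy as np
from scipy.optimize import curve_fit

def logistic(s, a1, a2, a3, a4):
    # Eq. (13): f(s) = (a1 - a2) / (1 + exp(-(s - a3) / a4)) + a2
    return (a1 - a2) / (1 + np.exp(-(s - a3) / a4)) + a2

rng = np.random.default_rng(0)
s = np.linspace(0.0, 10.0, 80)                     # objective scores
mos = logistic(s, 90.0, 10.0, 5.0, 1.5) + rng.normal(0.0, 2.0, s.size)  # synthetic MOS
params, _ = curve_fit(logistic, s, mos, p0=[90.0, 10.0, 5.0, 1.0])
predicted = logistic(s, *params)
print("correlation coefficient:", np.corrcoef(predicted, mos)[0, 1])
```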

TABLE I
Performance evaluation of image quality measures using the LIVE database [39]. JP2: JPEG2000 dataset; JPG: JPEG dataset; Noise: white Gaussian noise dataset; Blur: Gaussian blur dataset; Error: transmission error dataset.

Correlation Coefficient (prediction accuracy)

Method               Type   JP2 (1)   JP2 (2)   JPG (1)   JPG (2)   Noise    Blur     Error
(number of images)          87        82        87        88        145      145      145
Proposed             RR     0.9353    0.9490    0.8452    0.9695    0.8889   0.8872   0.9175
PSNR                 FR     0.9337    0.8948    0.9015    0.9136    0.9866   0.7742   0.8811
Sarnoff [40]         FR     0.9706    0.9650    0.9589    0.9837    0.9631   0.9480   0.9144
MSSIM [41]           FR     0.9676    0.9669    0.9647    0.9856    0.9706   0.9361   0.9439
Wang et al. [7]      NR     N/A       N/A       0.9592    0.9808    N/A
Sheikh et al. [8]    NR
