Quantization - Stanford University


Input-output characteristic of a scalar quantizer

[Figure: staircase characteristic of a scalar quantizer, mapping the input signal $x$ to the output $\hat{x} = Q(x)$, with $M$ representative levels $\hat{x}_q$ and $M-1$ decision thresholds $t_q$.]

Sometimes this convention is used: the quantizer is decomposed into an encoder mapping $q = Q(x)$ and a decoder mapping $\hat{x}_q = Q^{-1}(q)$.

Example of a quantized waveform

[Figure: original and quantized signal (top), quantization error (bottom).]

Lloyd-Max scalar quantizer

Problem: For a signal $X$ with given PDF $f_X(x)$, find a quantizer with $M$ representative levels such that $d = \mathrm{MSE} = E\big[(X - \hat{X})^2\big] \to \min$.

Solution: Lloyd-Max quantizer [Lloyd, 1957] [Max, 1960]. Necessary (but not sufficient) conditions:

$M-1$ decision thresholds exactly half-way between representative levels:
$t_q = \frac{1}{2}\left(\hat{x}_{q-1} + \hat{x}_q\right), \quad q = 1, 2, \ldots, M-1$

$M$ representative levels in the centroid of the PDF between two successive decision thresholds:
$\hat{x}_q = \frac{\int_{t_q}^{t_{q+1}} x\, f_X(x)\, dx}{\int_{t_q}^{t_{q+1}} f_X(x)\, dx}, \quad q = 0, 1, \ldots, M-1$

Iterative Lloyd-Max quantizer design

1. Guess an initial set of representative levels $\hat{x}_q$, $q = 0, 1, \ldots, M-1$.
2. Calculate the decision thresholds
   $t_q = \frac{1}{2}\left(\hat{x}_{q-1} + \hat{x}_q\right), \quad q = 1, 2, \ldots, M-1$
3. Calculate new representative levels
   $\hat{x}_q = \frac{\int_{t_q}^{t_{q+1}} x\, f_X(x)\, dx}{\int_{t_q}^{t_{q+1}} f_X(x)\, dx}, \quad q = 0, 1, \ldots, M-1$
4. Repeat 2. and 3. until no further distortion reduction. (A numerical sketch follows below.)
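
This loop is easy to run numerically. Below is a minimal sketch for a zero-mean, unit-variance Gaussian PDF, using the closed-form centroid of a truncated Gaussian; it reproduces the M = 4 design on the next slide. The function name lloyd_max and the fixed iteration count (standing in for the distortion-based stopping test of step 4) are illustrative choices, not from the course.

```python
# Sketch of the iterative Lloyd-Max design for a zero-mean, unit-variance
# Gaussian PDF. Names are illustrative, not from the course material.
import numpy as np
from scipy.stats import norm

def lloyd_max(levels, iters=50):
    """Alternate the threshold update (step 2) and centroid update (step 3)."""
    levels = np.sort(np.asarray(levels, dtype=float))
    for _ in range(iters):
        # Step 2: thresholds half-way between neighboring representative levels
        t = 0.5 * (levels[:-1] + levels[1:])
        edges = np.concatenate(([-np.inf], t, [np.inf]))
        lo, hi = edges[:-1], edges[1:]
        # Step 3: centroid of the Gaussian PDF in each cell, using
        # E[X | lo < X < hi] = (phi(lo) - phi(hi)) / (Phi(hi) - Phi(lo))
        levels = (norm.pdf(lo) - norm.pdf(hi)) / (norm.cdf(hi) - norm.cdf(lo))
    return t, levels

t, x_hat = lloyd_max([-2.0, -1.0, 1.0, 2.0])
print(np.round(t, 2))      # -> [-0.98  0.    0.98]
print(np.round(x_hat, 2))  # -> [-1.51 -0.45  0.45  1.51]
```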

Example of use of the Lloyd algorithm (I)

X: zero-mean, unit-variance Gaussian r.v. Design a scalar quantizer with 4 quantization indices with minimum expected distortion D*.

Optimum quantizer, obtained with the Lloyd algorithm:
Decision thresholds: -0.98, 0, 0.98
Representative levels: -1.51, -0.45, 0.45, 1.51
D* = 0.12 (SNR = 9.30 dB)

[Figure: cell boundaries and reconstruction levels against the Gaussian PDF.]

Example of use of the Lloyd algorithm (II): convergence

Initial quantizer A: decision thresholds -3, 0, 3. Initial quantizer B: decision thresholds -1/2, 0, 1/2.

[Figure: quantization function, distortion D, and SNR_OUT versus iteration number for both initializations; SNR_OUT converges to about 9.298 dB in both cases.]

After 6 iterations, in both cases, (D - D*)/D* < 1%.

Example of use of the Lloyd algorithm (III)

X: zero-mean, unit-variance Laplacian r.v. Design a scalar quantizer with 4 quantization indices with minimum expected distortion D*.

Optimum quantizer, obtained with the Lloyd algorithm:
Decision thresholds: -1.13, 0, 1.13
Representative levels: -1.83, -0.42, 0.42, 1.83
D* = 0.18 (SNR = 7.54 dB)

[Figure: thresholds and representative levels against the Laplacian PDF.]

Example of use of the Lloyd algorithm (IV): convergence

Initial quantizer A: decision thresholds -3, 0, 3. Initial quantizer B: decision thresholds -1/2, 0, 1/2.

[Figure: quantization function, distortion D, and SNR versus iteration number for both initializations; SNR converges to about 7.54 dB in both cases.]

After 6 iterations, in both cases, (D - D*)/D* < 1%.

Lloyd algorithm with training data

1. Guess an initial set of representative levels $\hat{x}_q$, $q = 0, 1, \ldots, M-1$.
2. Assign each sample $x_i$ in the training set $T$ to the closest representative $\hat{x}_q$:
   $B_q = \{x \in T : Q(x) = q\}, \quad q = 0, 1, \ldots, M-1$
3. Calculate new representative levels
   $\hat{x}_q = \frac{1}{|B_q|} \sum_{x \in B_q} x, \quad q = 0, 1, \ldots, M-1$
4. Repeat 2. and 3. until no further distortion reduction. (See the sketch below.)
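
A minimal sketch of this training-data variant follows; with squared-error distortion it is one-dimensional k-means. The name lloyd_train and the empty-cell handling are illustrative assumptions.

```python
# Sketch of the training-data Lloyd algorithm (k-means in one dimension).
import numpy as np

def lloyd_train(T, levels, iters=100):
    levels = np.sort(np.asarray(levels, dtype=float))
    for _ in range(iters):
        # Step 2: assign each training sample to its closest representative
        q = np.argmin(np.abs(T[:, None] - levels[None, :]), axis=1)
        # Step 3: each representative becomes the mean of its cell B_q;
        # empty cells are simply left unchanged in this sketch
        for k in range(levels.size):
            if np.any(q == k):
                levels[k] = T[q == k].mean()
    return np.sort(levels)

rng = np.random.default_rng(0)
T = rng.standard_normal(100_000)                  # Gaussian training set
print(np.round(lloyd_train(T, [-2.0, -1.0, 1.0, 2.0]), 2))
# -> approximately [-1.51 -0.45  0.45  1.51], matching the PDF-based design
```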

Lloyd-Max quantizer properties

Zero-mean quantization error: $E\big[X - \hat{X}\big] = 0$

Quantization error and reconstruction decorrelated: $E\big[(X - \hat{X})\,\hat{X}\big] = 0$

Variance subtraction property: $\sigma_{\hat{X}}^2 = \sigma_X^2 - E\big[(X - \hat{X})^2\big]$
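
These properties are straightforward to verify numerically. The sketch below (an illustration, not from the course) checks all three by Monte Carlo, reusing the 4-level Gaussian design from the earlier example.

```python
# Numerical sanity check of the three Lloyd-Max properties for the 4-level
# Gaussian quantizer designed above.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1_000_000)
t = np.array([-0.9816, 0.0, 0.9816])                  # decision thresholds
levels = np.array([-1.510, -0.4528, 0.4528, 1.510])   # representative levels
x_hat = levels[np.searchsorted(t, x)]
err = x - x_hat

print(err.mean())                     # ~0: zero-mean quantization error
print(np.mean(err * x_hat))           # ~0: error decorrelated from reconstruction
print(x_hat.var(), x.var() - np.mean(err**2))   # both ~0.88: variance subtraction
```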

High rate approximation

Approximate solution of the "Max quantization problem," assuming high rate and a smooth PDF [Panter, Dite, 1951]:

$\Delta x(x) \approx \mathrm{const} \cdot f_X(x)^{-1/3}$

where $\Delta x(x)$ is the distance between two successive quantizer representative levels and $f_X(x)$ is the probability density function of $x$.

Approximation for the quantization error variance, with $M$ the number of representative levels:

$d = E\big[(X - \hat{X})^2\big] \approx \frac{1}{12 M^2} \left( \int f_X(x)^{1/3}\, dx \right)^3$

High rate approximation (cont.)

High-rate distortion-rate function for the scalar Lloyd-Max quantizer:

$d(R) = \varepsilon^2\, \sigma_X^2\, 2^{-2R}$ with $\varepsilon^2 = \frac{1}{12\,\sigma_X^2} \left( \int f_X(x)^{1/3}\, dx \right)^3$

Some example values of $\varepsilon^2$:
Uniform: $1$
Laplacian: $9/2 = 4.5$
Gaussian: $\sqrt{3}\,\pi/2 \approx 2.72$
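
As a sanity check on these constants, the sketch below (an illustration, not from the course) evaluates the Panter-Dite expression for $\varepsilon^2$ by numerical integration for unit-variance Gaussian and Laplacian PDFs.

```python
# Evaluate eps^2 = (1/(12 sigma^2)) * (integral of f_X^(1/3))^3 numerically.
import numpy as np
from scipy.integrate import quad

def eps2(pdf):
    # symmetric, unit-variance PDFs: integrate over [0, inf) and double
    I, _ = quad(lambda x: pdf(x) ** (1.0 / 3.0), 0, np.inf)
    return (2.0 * I) ** 3 / 12.0

gauss = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
laplace = lambda x: np.exp(-np.sqrt(2) * abs(x)) / np.sqrt(2)   # unit variance

print(eps2(gauss), np.sqrt(3) * np.pi / 2)   # both ~2.721
print(eps2(laplace), 9 / 2)                  # both ~4.5
```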

High rate approximation (cont.)

Partial distortion theorem: each interval makes an (approximately) equal contribution to the overall mean-squared error:

$\Pr[t_i \le X < t_{i+1}]\; E\big[(X - \hat{X})^2 \mid t_i \le X < t_{i+1}\big] \approx \Pr[t_j \le X < t_{j+1}]\; E\big[(X - \hat{X})^2 \mid t_j \le X < t_{j+1}\big]$ for all $i, j$

[Panter, Dite, 1951], [Fejes Toth, 1959], [Gersho, 1979]
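
The sketch below (an illustration) estimates the per-interval contributions for the 4-level Gaussian quantizer from the earlier example. Since M = 4 is far from the high-rate regime, the contributions come out only roughly equal, and get closer as M grows.

```python
# Per-cell contribution Pr[cell k] * E[(X - X_hat)^2 | cell k], estimated
# by Monte Carlo for the 4-level Gaussian Lloyd-Max quantizer.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(2_000_000)
t = np.array([-0.9816, 0.0, 0.9816])
levels = np.array([-1.510, -0.4528, 0.4528, 1.510])
q = np.searchsorted(t, x)

contrib = [np.mean((x - levels[k])**2 * (q == k)) for k in range(4)]
print(np.round(contrib, 4))   # roughly [0.032 0.026 0.026 0.032]; sums to ~0.1175
```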

Entropy-constrained scalar quantizer

The Lloyd-Max quantizer is optimum for fixed-rate encoding; how can we do better for variable-length encoding of the quantizer index?

Problem: For a signal $x$ with given pdf $f_X(x)$, find a quantizer with rate
$R = H(\hat{X}) = -\sum_{q=0}^{M-1} p_q \log_2 p_q$
such that $d = \mathrm{MSE} = E\big[(X - \hat{X})^2\big] \to \min$.

Solution: Lagrangian cost function
$J = d + \lambda R = E\big[(X - \hat{X})^2\big] + \lambda\, H(\hat{X}) \to \min$

Iterative entropy-constrained scalar quantizer design

1. Guess an initial set of representative levels $\hat{x}_q$, $q = 0, 1, \ldots, M-1$, and corresponding probabilities $p_q$.
2. Calculate the $M-1$ decision thresholds
   $t_q = \frac{\hat{x}_{q-1} + \hat{x}_q}{2} + \frac{\lambda}{2} \cdot \frac{\log_2 p_{q-1} - \log_2 p_q}{\hat{x}_q - \hat{x}_{q-1}}, \quad q = 1, 2, \ldots, M-1$
3. Calculate $M$ new representative levels and probabilities $p_q$:
   $\hat{x}_q = \frac{\int_{t_q}^{t_{q+1}} x\, f_X(x)\, dx}{\int_{t_q}^{t_{q+1}} f_X(x)\, dx}, \qquad p_q = \int_{t_q}^{t_{q+1}} f_X(x)\, dx, \quad q = 0, 1, \ldots, M-1$
4. Repeat 2. and 3. until no further reduction in the Lagrangian cost. (Sketch below.)
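
A minimal numerical sketch of this loop for a unit-variance Gaussian follows. The function name ecsq, the clamping of near-empty cells, and the particular lambda (chosen by hand to land near R of about 2 bits, as in the next example) are assumptions of this sketch, not course material.

```python
# Sketch of the entropy-constrained scalar quantizer design iteration.
import numpy as np
from scipy.stats import norm

def ecsq(levels, lam, iters=300):
    levels = np.sort(np.asarray(levels, dtype=float))
    p = np.full(levels.size, 1.0 / levels.size)     # step 1: initial p_q
    for _ in range(iters):
        # Step 2: thresholds = midpoints plus a rate-dependent shift
        shift = 0.5 * lam * (np.log2(p[:-1]) - np.log2(p[1:])) / np.diff(levels)
        t = np.sort(0.5 * (levels[:-1] + levels[1:]) + shift)
        edges = np.concatenate(([-np.inf], t, [np.inf]))
        lo, hi = edges[:-1], edges[1:]
        # Step 3: new cell probabilities and centroids under the Gaussian PDF;
        # cells may effectively die (p -> 0), the clamp keeps arithmetic finite
        p = np.maximum(norm.cdf(hi) - norm.cdf(lo), 1e-15)
        levels = (norm.pdf(lo) - norm.pdf(hi)) / p
    return levels, p

levels, p = ecsq(np.linspace(-5, 5, 15), lam=0.12)
rate = -np.sum(p * np.log2(p))
dist = 1.0 - np.sum(p * levels**2)   # variance subtraction at a centroid solution
print(round(rate, 3), "bits,", round(-10 * np.log10(dist), 2), "dB SNR")
# should land near R ~ 2 bits and SNR ~ 10.5 dB (cf. the next example)
```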

Lloyd algorithm for entropy-constrained quantizer design based on a training set

1. Guess an initial set of representative levels $\hat{x}_q$, $q = 0, 1, \ldots, M-1$, and corresponding probabilities $p_q$.
2. Assign each sample $x_i$ in the training set $T$ to the representative $\hat{x}_q$ minimizing the Lagrangian cost
   $J_q = |x_i - \hat{x}_q|^2 - \lambda \log_2 p_q$
   $B_q = \{x \in T : Q(x) = q\}$
3. Calculate new representative levels and probabilities $p_q$:
   $\hat{x}_q = \frac{1}{|B_q|} \sum_{x \in B_q} x, \qquad p_q = \frac{|B_q|}{|T|}, \quad q = 0, 1, \ldots, M-1$
4. Repeat 2. and 3. until no further reduction in the overall Lagrangian cost
   $J = \sum_i |x_i - Q(x_i)|^2 - \lambda \sum_i \log_2 p_{q(x_i)}$
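
A corresponding sketch on training data (illustrative code and names): the assignment step charges each candidate cell its full Lagrangian cost, and the probabilities are re-estimated from the cell counts.

```python
# Sketch of entropy-constrained quantizer design on a training set.
import numpy as np

def ec_lloyd_train(T, levels, lam, iters=100):
    levels = np.asarray(levels, dtype=float)
    p = np.full(levels.size, 1.0 / levels.size)
    for _ in range(iters):
        # Step 2: per-sample Lagrangian cost for every representative
        J = (T[:, None] - levels[None, :])**2 - lam * np.log2(p[None, :])
        q = np.argmin(J, axis=1)
        # Step 3: update levels and probabilities from the cells B_q
        for k in range(levels.size):
            if np.any(q == k):
                levels[k] = T[q == k].mean()
        p = np.maximum(np.bincount(q, minlength=levels.size) / T.size, 1e-12)
    return levels, p

rng = np.random.default_rng(3)
T = rng.standard_normal(200_000)
levels, p = ec_lloyd_train(T, np.linspace(-5, 5, 15), lam=0.12)
print(round(-np.sum(p * np.log2(p)), 3), "bits")   # ~2 bits for this lambda
```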

Example of the EC Lloyd algorithm (I)

X: zero-mean, unit-variance Gaussian r.v. Design an entropy-constrained scalar quantizer with rate R = 2 bits and minimum distortion D*.

Optimum quantizer, obtained with the entropy-constrained Lloyd algorithm:
11 intervals (in [-6, 6]), almost uniform
D* = 0.09 (SNR = 10.53 dB), R = 2.0035 bits (compare to the fixed-length example)

[Figure: thresholds and representative levels.]

Example of the EC Lloyd algorithm (II)

Same Lagrangian multiplier λ used in all experiments.

Initial quantizer A: 15 intervals (> 11) in [-6, 6], all of the same length.
Initial quantizer B: only 4 intervals (< 11) in [-6, 6], all of the same length.

[Figure: quantization function, SNR, and rate R versus iteration number. Quantizer A converges to SNR = 10.5364 dB at R = 2.0044 bits; quantizer B to SNR = 8.9452 dB at R = 1.7723 bits.]

Example of the EC Lloyd algorithm (III)

X: zero-mean, unit-variance Laplacian r.v. Design an entropy-constrained scalar quantizer with rate R = 2 bits and minimum distortion D*.

Optimum quantizer, obtained with the entropy-constrained Lloyd algorithm:
21 intervals (in [-10, 10]), almost uniform
D* = 0.07 (SNR = 11.38 dB), R = 2.0023 bits (compare to the fixed-length example)

[Figure: decision thresholds and representative levels.]

Example of the EC Lloyd algorithm (IV)

Same Lagrangian multiplier λ used in all experiments.

Initial quantizer A: 25 intervals (> 21, odd) in [-10, 10], all of the same length.
Initial quantizer B: only 4 intervals (< 21) in [-10, 10], all of the same length.

[Figure: quantization function, SNR, and rate R versus iteration number. Quantizer A converges to SNR = 11.4073 dB at R = 2.0063 bits; quantizer B to SNR = 7.4111 dB at R = 1.6186 bits.]

Convergence in cost is faster than convergence of the decision thresholds.

High-rate results for EC scalar quantizers

For MSE distortion and high rates, uniform quantizers (followed by entropy coding) are optimum [Gish, Pierce, 1968].

Distortion and entropy for a smooth PDF and a fine quantizer interval $\Delta$:
$d = \frac{\Delta^2}{12}$
$H(\hat{X}) \approx h(X) - \log_2 \Delta$

Distortion-rate function:
$d(R) = \frac{1}{12}\, 2^{2 h(X)}\, 2^{-2R}$

which is a factor $\frac{\pi e}{6}$ (or 1.53 dB) from the Shannon Lower Bound
$D(R) = \frac{1}{2\pi e}\, 2^{2 h(X)}\, 2^{-2R}$
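
Both high-rate formulas are easy to confirm by simulation. The sketch below (an illustration) quantizes a unit Gaussian with a fine uniform quantizer and compares the measured distortion and index entropy against $\Delta^2/12$ and $h(X) - \log_2 \Delta$.

```python
# Check d = Delta^2/12 and H(X_hat) ~ h(X) - log2(Delta) for a fine uniform
# quantizer on N(0,1), where h(X) = 0.5*log2(2*pi*e).
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(2_000_000)
delta = 0.05
q = np.round(x / delta)                       # uniform (mid-tread) quantizer
d = np.mean((x - q * delta)**2)
p = np.bincount((q - q.min()).astype(int)) / x.size
H = -np.sum(p[p > 0] * np.log2(p[p > 0]))
h = 0.5 * np.log2(2 * np.pi * np.e)           # differential entropy of N(0,1)

print(d, delta**2 / 12)                       # ~2.08e-4 in both cases
print(H, h - np.log2(delta))                  # ~6.37 bits in both cases
```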

Comparison of high-rate performance of scalar quantizers

High-rate distortion-rate function: $d(R) = \varepsilon^2\, \sigma_X^2\, 2^{-2R}$

Scaling factor $\varepsilon^2$:

PDF        | Shannon Lower Bound  | Lloyd-Max             | Entropy-coded
Uniform    | 6/(pi e) ~ 0.703     | 1                     | 1
Laplacian  | e/pi ~ 0.865         | 9/2 = 4.5             | e^2/6 ~ 1.232
Gaussian   | 1                    | sqrt(3) pi/2 ~ 2.72   | pi e/6 ~ 1.423

Deadzone uniform quantizer

[Figure: staircase characteristic, quantizer output $\hat{x}$ versus quantizer input $x$; the quantization interval around zero (the deadzone) is wider than the other intervals.]
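
The slide shows only the characteristic, so as one concrete reading of it, here is a common deadzone uniform quantizer rule (a sketch, not necessarily the exact convention of the figure): the zero cell is twice as wide as the other cells, and reconstruction is at the cell midpoint.

```python
# A common deadzone uniform quantizer: index q = 0 for |x| < delta, so the
# zero cell has width 2*delta while all other cells have width delta.
import numpy as np

def deadzone_quantize(x, delta):
    return np.sign(x) * np.floor(np.abs(x) / delta)

def deadzone_dequantize(q, delta):
    # reconstruct at the cell midpoint; the zero cell maps to 0
    return np.sign(q) * (np.abs(q) + 0.5) * delta

x = np.array([-1.3, -0.2, 0.4, 2.7])
q = deadzone_quantize(x, delta=1.0)
print(q, deadzone_dequantize(q, delta=1.0))
# e.g. -1.3 -> index -1 -> -1.5;  -0.2 and 0.4 fall in the deadzone -> 0
```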

Embedded quantizers

Motivation: "scalability", i.e., decoding of compressed bitstreams at different rates (with different qualities).

[Figure: nested quantization intervals of quantizers Q0, Q1, Q2 on the $x$ axis.]

In general, only one quantizer can be optimum (exception: uniform quantizers).

Example: Lloyd-Max quantizers for Gaussian PDF

The 2-bit and 3-bit optimal quantizers are not embeddable, so there is a performance loss for embedded quantizers.

[Figure: 2-bit Lloyd-Max quantizer with thresholds -0.98, 0, 0.98 and index bits 00, 01, 10, 11, compared with the 3-bit Lloyd-Max quantizer with thresholds -1.75, -1.05, -0.50, 0, 0.50, 1.05, 1.75; the 2-bit cells are not unions of 3-bit cells.]

Information theoretic analysis

"Successive refinement": embedded coding at multiple rates without loss relative to the R-D function:
$E[d(X, \hat{X}_1)] \le D_1, \quad I(X; \hat{X}_1) = R(D_1)$
$E[d(X, \hat{X}_2)] \le D_2, \quad I(X; \hat{X}_2) = R(D_2)$

Successive refinement with distortions $D_1$ and $D_2 \le D_1$ can be achieved iff there exists a conditional distribution
$f_{\hat{X}_1 \hat{X}_2 \mid X}(\hat{x}_1, \hat{x}_2, x) = f_{\hat{X}_2 \mid X}(\hat{x}_2, x)\; f_{\hat{X}_1 \mid \hat{X}_2}(\hat{x}_1, \hat{x}_2)$

Markov chain condition: $X \to \hat{X}_2 \to \hat{X}_1$

[Equitz, Cover, 1991]

Embedded deadzone uniform quantizers

[Figure: embedded deadzone uniform quantizers Q0, Q1, Q2 on the $x$ axis; the step size doubles from one level to the next, and the deadzone width doubles with it.]

Supported in JPEG-2000, with a general deadzone parameter, for the quantization of wavelet coefficients.
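
One way to see the embedding (an illustration using the simple deadzone rule sketched earlier, not the course's code): dropping least-significant bits of the fine quantizer's index magnitude yields exactly the index of the coarser quantizer with doubled step size.

```python
# Dropping one LSB of the deadzone index magnitude = doubling the step size.
# This holds because floor(floor(|x|/D)/2) = floor(|x|/(2D)).
import numpy as np

def deadzone_index(x, delta):
    return (np.sign(x) * np.floor(np.abs(x) / delta)).astype(int)

x = np.random.default_rng(5).normal(scale=4.0, size=100_000)
q_fine = deadzone_index(x, 1.0)     # finest quantizer, step delta
q_mid = deadzone_index(x, 2.0)      # step 2*delta
q_coarse = deadzone_index(x, 4.0)   # step 4*delta

# magnitude bits shift right by one per coarsening step, sign is kept
assert np.all(np.sign(q_fine) * (np.abs(q_fine) >> 1) == q_mid)
assert np.all(np.sign(q_fine) * (np.abs(q_fine) >> 2) == q_coarse)
```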

Vector quantization

[Section title slide.]

LBG algorithm

Lloyd algorithm generalized for VQ [Linde, Buzo, Gray, 1980]. Iterate between:
- Best partitioning of the training set for given representative vectors
- Best representative vectors for given partitioning of the training set

Assumption: fixed code word length. Code book unstructured: full search. (A minimal sketch follows.)
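
A minimal LBG sketch (illustrative code and names): the two alternating steps are the nearest-neighbor partition, implemented as a full search over an unstructured codebook, and the centroid update.

```python
# Sketch of the LBG algorithm (k-means on vectors, fixed codebook size).
import numpy as np

def lbg(train, codebook, iters=50):
    codebook = np.asarray(codebook, dtype=float)
    for _ in range(iters):
        # best partitioning for given representative vectors (full search)
        d2 = ((train[:, None, :] - codebook[None, :, :])**2).sum(axis=2)
        q = np.argmin(d2, axis=1)
        # best representative vectors for given partitioning (cell means)
        for k in range(codebook.shape[0]):
            if np.any(q == k):
                codebook[k] = train[q == k].mean(axis=0)
    return codebook

rng = np.random.default_rng(6)
train = rng.standard_normal((20_000, 2))        # 2-D training vectors
cb = lbg(train, rng.standard_normal((16, 2)))   # 16 code vectors: 4 bit/vector
```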

Design of entropy-coded vector quantizers

Extended LBG algorithm for entropy-coded VQ [Chou, Lookabaugh, Gray, 1989].

Lagrangian cost function: solve the unconstrained problem rather than the constrained problem:
$J = d + \lambda R = E\big[\|X - \hat{X}\|^2\big] + \lambda\, H(\hat{X}) \to \min$

Unstructured code book: full search for
$\min_q \; J = \|x_i - \hat{x}_q\|^2 - \lambda \log_2 p_q$

The most general coder structure: any source coder can be interpreted as a VQ with VLC!

Lattice vector quantization

[Figure: two-dimensional signal space (amplitude 1 versus amplitude 2) showing the source pdf contours, the lattice cells, and the representative vectors.]
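
As a small concrete example of lattice VQ (an illustration, not from the course), the sketch below quantizes 2-D points to the hexagonal lattice by viewing it as two shifted rectangular sublattices: round within each sublattice, then keep the closer candidate. No codebook is stored or searched, which is the point of lattice VQ.

```python
# Nearest-neighbor quantization to the hexagonal (A2) lattice, built as the
# union of a rectangular lattice and a half-cell-shifted copy.
import numpy as np

def quantize_hex(x, delta=1.0):
    basis = np.array([delta, delta * np.sqrt(3)])           # rectangular cell
    cands = []
    for shift in (np.zeros(2), basis / 2):                  # the two cosets
        cands.append(np.round((x - shift) / basis) * basis + shift)
    cands = np.stack(cands)                                 # shape (2, N, 2)
    best = np.argmin(((cands - x)**2).sum(axis=-1), axis=0) # closer candidate
    return np.take_along_axis(cands, best[None, :, None], axis=0)[0]

x = np.random.default_rng(7).normal(size=(5, 2))
print(quantize_hex(x, delta=0.5))
```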

8D VQ of a Gauss-Markov source

[Figure: SNR in dB (0 to 18) versus rate in bit/sample (0 to 1.0) for a Gauss-Markov source with correlation r = 0.95; curves for LBG with fixed CWL, LBG with variable CWL, E8-lattice VQ, and scalar quantization.]

8D VQ of a memoryless Laplacian source

[Figure: SNR in dB (0 to 9) versus rate in bit/sample (0 to 1.0); curves for E8-lattice VQ, LBG with variable CWL, scalar quantization, and LBG with fixed CWL.]

Reading

Taubman, Marcellin, JPEG2000: Image Compression Fundamentals, Standards and Practice, Sections 3.2, 3.4.
J. Max, "Quantizing for Minimum Distortion," IEEE Trans. Information Theory, vol. 6, no. 1, pp. 7-12, March 1960.
S. P. Lloyd, "Least Squares Quantization in PCM," IEEE Trans. Information Theory, vol. 28, no. 2, pp. 129-137, March 1982.
P. A. Chou, T. Lookabaugh, R. M. Gray, "Entropy-constrained vector quantization," IEEE Trans. Acoustics, Speech, and Signal Processing, vol. 37, no. 1, pp. 31-42, January 1989.
