
From log-determinant inequalities to Gaussian entanglement via recoverability theory
L. Lami, C. Hirche, G. Adesso, and A. Winter, IEEE 2017
arXiv:1703.06149

Outline of the talk
- A bridge between probability theory, matrix analysis, and quantum optics. Summary of results.
- Properties of log-det conditional mutual information.
- Gaussian states in a nutshell.
- Main result: the Rényi-2 Gaussian squashed entanglement coincides with the Rényi-2 Gaussian entanglement of formation for Gaussian states.
- Conclusions & open problems.

Connecting probability theory and matrix analysis

It has been known for a long time that one can turn information-theoretic inequalities into determinantal inequalities by applying them to Gaussian random variables.¹

Gaussian: $T \sim \mathcal{N}(0, V)$, $T \in \mathbb{R}^N$, with density
$$p_V(t) = \frac{e^{-\frac{1}{2} t^\top V^{-1} t}}{\sqrt{(2\pi)^N \det V}} .$$
Differential Rényi entropies:
$$h_\alpha(T) = \frac{1}{1-\alpha} \ln \int p_V(t)^\alpha \, \mathrm{d}^N t = \frac{1}{2} \ln \det V + \frac{N}{2} \ln 2\pi + \frac{N}{2} \, \frac{\ln \alpha}{\alpha - 1} .$$
All differential Rényi entropies reduce to $\frac{1}{2} \ln \det V$ up to additive constants! Balanced entropy inequalities become inequalities between linear combinations of log-determinants.

1. T.M. Cover and J.A. Thomas. Determinant inequalities via information theory. SIAM J. Matrix Anal. Appl. 9(3):384-392, 1988.
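As a quick numeric illustration (a minimal numpy sketch; the helper name renyi_entropy_gaussian is mine), one can check that $h_\alpha(T) - \frac{1}{2}\ln\det V$ is indeed independent of $V$:

```python
import numpy as np

def renyi_entropy_gaussian(V, alpha):
    """Closed form for T ~ N(0, V) in R^N:
    h_alpha = 1/2 ln det V + N/2 ln(2 pi) + N/2 * ln(alpha)/(alpha - 1)."""
    N = V.shape[0]
    return (0.5 * np.linalg.slogdet(V)[1] + 0.5 * N * np.log(2 * np.pi)
            + 0.5 * N * np.log(alpha) / (alpha - 1))

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); V1 = A @ A.T + np.eye(3)
B = rng.standard_normal((3, 3)); V2 = B @ B.T + np.eye(3)
for alpha in (0.5, 2.0, 3.0):
    c1 = renyi_entropy_gaussian(V1, alpha) - 0.5 * np.linalg.slogdet(V1)[1]
    c2 = renyi_entropy_gaussian(V2, alpha) - 0.5 * np.linalg.slogdet(V2)[1]
    assert np.isclose(c1, c2)   # the additive constant does not depend on V
```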

Example: strong subadditivity

Strong subadditivity (SSA) is the most important "Shannon-type" entropy inequality. It tells us that any three random variables $T_A, T_B, T_C$ satisfy
$$I(T_A : T_B \mid T_C) := H(T_A T_C) + H(T_B T_C) - H(T_C) - H(T_A T_B T_C) \ge 0 .$$
When the three variables are jointly normal, $T = (T_A, T_B, T_C) \sim \mathcal{N}(V)$ with
$$V_{ABC} = \begin{pmatrix} V_A & X & Y \\ X^\top & V_B & Z \\ Y^\top & Z^\top & V_C \end{pmatrix},$$
this becomes
$$I(T_A : T_B \mid T_C) = \frac{1}{2} \ln \frac{\det V_{AC} \, \det V_{BC}}{\det V_C \, \det V_{ABC}} =: I_M(A : B \mid C)_V .$$
The log-det CMI $I_M$ is the conditional mutual information formed using the log-det entropy defined on positive definite matrices, $M(V) := \frac{1}{2} \ln \det V$.
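A minimal numpy check of SSA in its log-det form (the helper logdet_cmi and the index-block convention are mine):

```python
import numpy as np

def logdet_cmi(V, a, b, c):
    """I_M(A:B|C)_V = 1/2 ln[det V_AC det V_BC / (det V_C det V_ABC)],
    with a, b, c lists of row/column indices for the three blocks."""
    s = lambda i: np.linalg.slogdet(V[np.ix_(i, i)])[1]
    return 0.5 * (s(a + c) + s(b + c) - s(c) - np.linalg.slogdet(V)[1])

# SSA: I_M(A:B|C)_V >= 0 for any positive definite V.
rng = np.random.default_rng(1)
G = rng.standard_normal((6, 6)); V = G @ G.T + np.eye(6)
assert logdet_cmi(V, [0, 1], [2, 3], [4, 5]) >= 0
```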

The grand plan

Why is this relevant for quantum information?
- In continuous-variable systems, Gaussian random variables model the outcomes of Gaussian measurements performed on Gaussian states.
- Rényi-2 entropies of Gaussian states are given by log-determinant expressions.

[Diagram: entropy inequalities for Gaussian random variables ↔ log-det inequalities for positive matrices (classical); entropy inequalities for measurement outcomes ↔ Rényi-2 entropy inequalities, connected by Gaussian measurements on Gaussian states (quantum, for Gaussian states).]

This correspondence led to the introduction of operationally motivated Rényi-2 entropic quantifiers for Gaussian states.²

2. L. Mišta Jr. and R. Tatham. Gaussian intrinsic entanglement. Phys. Rev. Lett. 117:240505, 2016.

Our results in a nutshell

We study general properties of the log-det conditional mutual information:
- we analyse its behaviour under various matrix operations, most notably matrix inversion;
- we show, among other things, that the log-det mutual information is convex on the geodesics of the "trace metric".

We then establish remainder terms for the strong subadditivity inequality. This is done in two ways:
- perturbing known bounds; and
- exploiting new techniques based on recoverability theory.

Our main result establishes the equality between two apparently very different Gaussian entanglement measures, when computed on Gaussian states:
- the Rényi-2 Gaussian squashed entanglement; and
- the Rényi-2 Gaussian entanglement of formation.

Schur complements

Definition. For a block matrix
$$V_{AB} = \begin{pmatrix} V_A & X \\ X^\top & V_B \end{pmatrix},$$
the Schur complement is
$$V_{AB} / V_A := V_B - X^\top V_A^{-1} X .$$
Schur complements answer a number of problems in matrix analysis and probability theory:³
- Positivity of block matrices: $V_{AB} > 0 \iff V_A > 0$ and $V_{AB}/V_A > 0$.
- Determinant factorisation: $\det(V_{AB}) = \det(V_A) \, \det(V_{AB}/V_A)$.
- Formula for the block inverse: $(V_{AB}^{-1})_B = (V_{AB}/V_A)^{-1}$.
- Conditional distribution of normal variables: $T_{AB} \sim \mathcal{N}(V_{AB}) \implies T_B \mid (T_A = t) \sim \mathcal{N}(V_{AB}/V_A)$.

3. F. Zhang (ed.). The Schur Complement and Its Applications. Springer, New York, 2005.
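A short numpy sketch verifying the determinant factorisation and block-inverse formulae on a random positive definite matrix (the index conventions are mine):

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((5, 5)); V = G @ G.T + np.eye(5)
a, b = [0, 1], [2, 3, 4]
VA, X = V[np.ix_(a, a)], V[np.ix_(a, b)]
S = V[np.ix_(b, b)] - X.T @ np.linalg.solve(VA, X)   # Schur complement V / V_A

# det V = det V_A * det(V / V_A)
assert np.isclose(np.linalg.det(V), np.linalg.det(VA) * np.linalg.det(S))
# (V^{-1})_B = (V / V_A)^{-1}: the B block of the inverse
assert np.allclose(np.linalg.inv(V)[np.ix_(b, b)], np.linalg.inv(S))
```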

First properties of log-det CMI

Log-det (conditional) mutual information:
$$I_M(A : B \mid C)_V = \frac{1}{2} \ln \frac{\det V_{AC} \, \det V_{BC}}{\det V_C \, \det V_{ABC}}, \qquad I_M(A : B)_W = \frac{1}{2} \ln \frac{\det W_A \, \det W_B}{\det W_{AB}} .$$

Theorem. For all $V_{ABC} > 0$, one has
$$I_M(A : B \mid C)_V = I_M(A : B)_{V_{ABC}/V_C}, \qquad I_M(A : B \mid C)_V = I_M(A : B)_{V^{-1}} .$$

These are two ways to reduce a conditional mutual information to a simple mutual information. The second one, in particular, is somewhat surprising. It will come in handy later.

Sketch of proof. For the first identity, observe that $T_{AB} \mid (T_C = t)$ is distributed normally, with covariance matrix $V_{ABC}/V_C$ (which is independent of $t$). Then
$$I_M(A : B \mid C)_V = I(T_A : T_B \mid T_C) = \mathbb{E}_{T_C} \big[ I(T_A : T_B) \mid T_C \big] = \mathbb{E}_{T_C} \, I_M(A : B)_{V_{ABC}/V_C} = I_M(A : B)_{V_{ABC}/V_C} .$$
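Both identities are easy to confirm numerically. A sketch, assuming the block ordering (A, B, C) on a random positive definite matrix (the helper names cmi/mi are mine):

```python
import numpy as np

def logdet(M): return np.linalg.slogdet(M)[1]

def cmi(V, a, b, c):
    """Log-det CMI for index blocks a, b, c."""
    s = lambda i: logdet(V[np.ix_(i, i)])
    return 0.5 * (s(a + c) + s(b + c) - s(c) - logdet(V))

def mi(W, a, b):
    """Log-det MI: only the A, B and AB blocks of W enter."""
    s = lambda i: logdet(W[np.ix_(i, i)])
    return 0.5 * (s(a) + s(b) - s(a + b))

rng = np.random.default_rng(3)
G = rng.standard_normal((6, 6)); V = G @ G.T + np.eye(6)
a, b, c = [0, 1], [2, 3], [4, 5]
# First identity: condition on C via the Schur complement V_ABC / V_C.
ab = a + b
S = V[np.ix_(ab, ab)] - V[np.ix_(ab, c)] @ np.linalg.solve(V[np.ix_(c, c)], V[np.ix_(c, ab)])
assert np.isclose(cmi(V, a, b, c), mi(S, [0, 1], [2, 3]))
# Second identity: the same CMI read off the inverse matrix.
assert np.isclose(cmi(V, a, b, c), mi(np.linalg.inv(V), a, b))
```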

Second statement: combine the block-inversion formulae with the determinant factorisation rule. From
$$(V^{-1})_{AB} = (V_{ABC}/V_C)^{-1}, \qquad (V^{-1})_A = (V_{ABC}/V_{BC})^{-1}, \qquad (V^{-1})_B = (V_{ABC}/V_{AC})^{-1},$$
we get
$$\begin{aligned} I_M(A : B)_{V^{-1}} &= \frac{1}{2} \ln \frac{\det (V^{-1})_A \, \det (V^{-1})_B}{\det (V^{-1})_{AB}} \\ &= \frac{1}{2} \ln \frac{\det(V_{ABC}/V_{BC})^{-1} \, \det(V_{ABC}/V_{AC})^{-1}}{\det(V_{ABC}/V_C)^{-1}} \\ &= \frac{1}{2} \ln \frac{(\det V_{ABC})(\det V_C)^{-1}}{(\det V_{ABC})(\det V_{BC})^{-1} \, (\det V_{ABC})(\det V_{AC})^{-1}} \\ &= \frac{1}{2} \ln \frac{\det V_{AC} \, \det V_{BC}}{\det V_{ABC} \, \det V_C} \\ &= I_M(A : B \mid C)_V . \end{aligned}$$

Application: lower bounds on log-det CMI

Strong subadditivity is saturated iff the variables form a Markov chain. In other words,
$$I(T_A : T_B \mid T_C) = 0 \iff T_A - T_C - T_B .$$
Problem: when $T = (T_A, T_B, T_C)$ is Gaussian with covariance matrix
$$V_{ABC} = \begin{pmatrix} V_A & X & Y \\ X^\top & V_B & Z \\ Y^\top & Z^\top & V_C \end{pmatrix},$$
how can we read this off the covariance matrix? The question was answered by Ando & Petz,⁴ but here we can give a one-line proof:
$$0 = I(T_A : T_B \mid T_C) = I_M(A : B \mid C)_V = I_M(A : B)_{V^{-1}} .$$
Note that $I_M(A : B)_{V^{-1}} = 0$ holds iff the off-diagonal $(A, B)$ block of $V^{-1}$ vanishes. Introducing the projectors $\Pi_A$ and $\Pi_B$ onto the $A$ and $B$ subspaces, this can be rephrased as
$$0 = \Pi_A \, V_{ABC}^{-1} \, \Pi_B^\top = -(V_{ABC}/V_{BC})^{-1} \, \big( X - Y V_C^{-1} Z^\top \big) \, (V_{BC}/V_C)^{-1} .$$
Saturation condition (= Markov chain condition):
$$X - Y V_C^{-1} Z^\top = 0 .$$

4. T. Ando and D. Petz. Acta Sci. Math. (Szeged) 75:265-281, 2009.
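A numpy sketch of the saturation condition: build a covariance matrix with $X = Y V_C^{-1} Z^\top$ by hand and check that the log-det CMI vanishes (the 0.1 scale is an arbitrary choice that keeps $V$ positive definite; the assert guards this):

```python
import numpy as np

def logdet(M): return np.linalg.slogdet(M)[1]

rng = np.random.default_rng(4)
d = 2
VA, VB, VC = np.eye(d), np.eye(d), np.eye(d)
Y = 0.1 * rng.standard_normal((d, d))   # A-C correlations
Z = 0.1 * rng.standard_normal((d, d))   # B-C correlations
X = Y @ np.linalg.solve(VC, Z.T)        # Markov condition: X = Y V_C^{-1} Z^T
V = np.block([[VA, X, Y], [X.T, VB, Z], [Y.T, Z.T, VC]])
assert np.all(np.linalg.eigvalsh(V) > 0)   # still a valid covariance matrix

a, b, c = [0, 1], [2, 3], [4, 5]
s = lambda i: logdet(V[np.ix_(i, i)])
cmi = 0.5 * (s(a + c) + s(b + c) - s(c) - logdet(V))
assert np.isclose(cmi, 0.0)   # SSA saturated iff T_A - T_C - T_B is Markov
```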

The advantage of this approach over the traditional one is that, by working a bit harder, one can perturb this saturation condition and get a remainder term:
$$I_M(A : B \mid C)_V \ \ge \ \frac{1}{2} \left\| V_A^{-1/2} \big( X - Y V_C^{-1} Z^\top \big) V_B^{-1/2} \right\|_2^2 .$$
Other remainder terms can be obtained by transforming the log-det CMI into a relative entropy and then applying any lower bound to the latter (e.g. the negative log-fidelity):
$$I(T_A : T_B \mid T_C) = D(T \, \| \, T'), \qquad p_{T'}(t_A, t_B, t_C) := p_{T_A T_C}(t_A, t_C) \, p_{T_B \mid T_C}(t_B \mid t_C) .$$
A necessary condition for this strategy to succeed is that we work out the distribution of $T'$: this new variable can be thought of as an "attempt" to reconstruct the original $T$ once $T_B$ has been lost, assuming that $T_A - T_C - T_B$ is a Markov chain. Also $T'$ is distributed normally:
$$T' \sim \mathcal{N}(V'), \qquad V'_{ABC} := \begin{pmatrix} V_A & Y V_C^{-1} Z^\top & Y \\ Z V_C^{-1} Y^\top & V_B & Z \\ Y^\top & Z^\top & V_C \end{pmatrix} .$$
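A sketch checking the identity $I(T_A : T_B \mid T_C) = D(T \,\|\, T')$ numerically, using the closed-form Gaussian relative entropy $D(\mathcal{N}(V) \,\|\, \mathcal{N}(V')) = \frac{1}{2}\big[\mathrm{Tr}(V'^{-1}V) - d + \ln(\det V'/\det V)\big]$ (a standard formula, not from the slides):

```python
import numpy as np

def logdet(M): return np.linalg.slogdet(M)[1]

rng = np.random.default_rng(8)
G = rng.standard_normal((6, 6)); V = G @ G.T + np.eye(6)
a, b, c = [0, 1], [2, 3], [4, 5]
VA, VB, VC = (V[np.ix_(i, i)] for i in (a, b, c))
Y, Z = V[np.ix_(a, c)], V[np.ix_(b, c)]

# Markovian reconstruction V': replace the (A,B) block by Y V_C^{-1} Z^T.
Xp = Y @ np.linalg.solve(VC, Z.T)
Vp = np.block([[VA, Xp, Y], [Xp.T, VB, Z], [Y.T, Z.T, VC]])

# Gaussian relative entropy D(N(V) || N(V')) for equal means.
d = V.shape[0]
D = 0.5 * (np.trace(np.linalg.solve(Vp, V)) - d + logdet(Vp) - logdet(V))

s = lambda i: logdet(V[np.ix_(i, i)])
cmi = 0.5 * (s(a + c) + s(b + c) - s(c) - logdet(V))
assert np.isclose(D, cmi)   # I(T_A : T_B | T_C) = D(T || T')
```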

Matrix geometric mean

The set $\mathcal{P}_N$ of positive definite matrices is a differentiable manifold. All tangent spaces $T_K$ are isomorphic to $T_{\mathbb{1}}$ (and hence to each other):
$$T_K \ni X \mapsto K^{-1/2} X K^{-1/2} \in T_{\mathbb{1}} .$$
$T_{\mathbb{1}}$ ($\simeq$ Hermitian matrices) has a natural metric that comes from the Hilbert-Schmidt norm. This induces a metric, called the trace metric, on the whole manifold:
$$\mathrm{d}s := \big\| K^{-1/2} \, \mathrm{d}K \, K^{-1/2} \big\|_2 = \Big( \mathrm{Tr} \, \big( K^{-1} \mathrm{d}K \big)^2 \Big)^{1/2} .$$
Then $\mathcal{P}_N$ becomes a Riemannian manifold. How are its geodesics shaped? As it turns out, one can give an analytical expression⁵ for the geodesic connecting $M$ and $N$:
$$\gamma_{M \to N}(t) = M^{1/2} \big( M^{-1/2} N M^{-1/2} \big)^t M^{1/2} =: M \,\#_t\, N ,$$
the weighted geometric mean.

5. M. Moakher. SIAM J. Matrix Anal. Appl. 26(3):735-747, 2005.
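A numpy sketch of the weighted geometric mean via eigendecomposition (powm/gmean are my names; scipy's fractional_matrix_power would also work):

```python
import numpy as np

def powm(S, t):
    """Fractional power S^t of a symmetric positive definite matrix."""
    w, U = np.linalg.eigh(S)
    return (U * w**t) @ U.T

def gmean(M, N, t):
    """Weighted geometric mean M #_t N = M^{1/2} (M^{-1/2} N M^{-1/2})^t M^{1/2}:
    the point at parameter t on the trace-metric geodesic from M to N."""
    Mh, Mhi = powm(M, 0.5), powm(M, -0.5)
    A = Mhi @ N @ Mhi
    return Mh @ powm((A + A.T) / 2, t) @ Mh   # re-symmetrise against round-off

rng = np.random.default_rng(5)
G1 = rng.standard_normal((4, 4)); M = G1 @ G1.T + np.eye(4)
G2 = rng.standard_normal((4, 4)); N = G2 @ G2.T + np.eye(4)
# Geodesic endpoints: M #_0 N = M and M #_1 N = N.
assert np.allclose(gmean(M, N, 0.0), M) and np.allclose(gmean(M, N, 1.0), N)
```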

The weighted geometric mean enjoys a wealth of useful properties:⁶
- Determinant factorisation: $\det(M \,\#_t\, N) = (\det M)^{1-t} (\det N)^t$.
- Monotonicity under positive maps: $\Phi(M \,\#_t\, N) \le \Phi(M) \,\#_t\, \Phi(N)$.

Consider bipartite block matrices $V_{AB}, W_{AB}$. Applying this monotonicity property to the map that projects onto the subspace $A$, we get
$$(V \,\#_t\, W)_A = \Pi_A (V \,\#_t\, W) \Pi_A \le (\Pi_A V \Pi_A) \,\#_t\, (\Pi_A W \Pi_A) = V_A \,\#_t\, W_A .$$
Taking the determinant:
$$\det (V \,\#_t\, W)_A \le \det (V_A \,\#_t\, W_A) = (\det V_A)^{1-t} (\det W_A)^t .$$

6. T. Ando. Linear Algebra Appl. 26:203-241, 1979.
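Continuing the sketch above (reusing gmean, M, N), both determinantal properties can be checked directly:

```python
t = 0.3
W = gmean(M, N, t)
# Determinant factorisation holds exactly for the full matrices...
assert np.isclose(np.linalg.det(W),
                  np.linalg.det(M)**(1 - t) * np.linalg.det(N)**t)
# ...while for a principal block it weakens to the inequality derived above.
a = [0, 1]
assert np.linalg.det(W[np.ix_(a, a)]) <= \
       np.linalg.det(M[np.ix_(a, a)])**(1 - t) * np.linalg.det(N[np.ix_(a, a)])**t
```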

An important property of log-det MI

Theorem. The log-det mutual information is convex on the geodesics of the trace metric, i.e.
$$I_M(A : B)_{V \#_t W} \le (1-t) \, I_M(A : B)_V + t \, I_M(A : B)_W .$$
This is surprising, given that in general the log-det mutual information is not convex in the covariance matrix! It is also useful, as we shall see.

Proof. Applying the determinantal inequality we have just found:
$$\begin{aligned} I_M(A : B)_{V \#_t W} &= \frac{1}{2} \ln \frac{\det(V \,\#_t\, W)_A \, \det(V \,\#_t\, W)_B}{\det(V \,\#_t\, W)_{AB}} \\ &\le \frac{1}{2} \ln \frac{(\det V_A)^{1-t} (\det W_A)^t \, (\det V_B)^{1-t} (\det W_B)^t}{(\det V_{AB})^{1-t} (\det W_{AB})^t} \\ &= \frac{1-t}{2} \ln \frac{\det V_A \, \det V_B}{\det V_{AB}} + \frac{t}{2} \ln \frac{\det W_A \, \det W_B}{\det W_{AB}} \\ &= (1-t) \, I_M(A : B)_V + t \, I_M(A : B)_W . \end{aligned}$$
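The convexity statement is again easy to test numerically; a sketch reusing gmean from the sketch above:

```python
import numpy as np

def mi(W, a, b):
    """Log-det mutual information 1/2 ln[det W_A det W_B / det W_AB]."""
    s = lambda i: np.linalg.slogdet(W[np.ix_(i, i)])[1]
    return 0.5 * (s(a) + s(b) - s(a + b))

rng = np.random.default_rng(6)
G1 = rng.standard_normal((4, 4)); V = G1 @ G1.T + np.eye(4)
G2 = rng.standard_normal((4, 4)); W = G2 @ G2.T + np.eye(4)
a, b = [0, 1], [2, 3]
for t in (0.25, 0.5, 0.75):
    assert mi(gmean(V, W, t), a, b) <= (1 - t) * mi(V, a, b) + t * mi(W, a, b) + 1e-12
```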

Where's the quantum?

Until now we have explored the connections between classical probability theory and matrix analysis. Why is this relevant for quantum information?

[Diagram, as before: entropy inequalities for Gaussian random variables ↔ log-det inequalities for positive matrices (classical); entropy inequalities for measurement outcomes ↔ Rényi-2 entropy inequalities, connected by Gaussian measurements on Gaussian states (quantum).]

First we need to introduce the basic formalism of quantum optics: Gaussian states, quantum covariance matrices, etc.

Quantum Gaussian states

Quantum optics = quantum mechanics applied to a finite number of harmonic oscillators:
$$[\hat{x}_j, \hat{p}_k] = i \, \delta_{jk}, \qquad \hat{r} := (\hat{x}_1, \ldots, \hat{x}_n, \hat{p}_1, \ldots, \hat{p}_n)^\top, \qquad [\hat{r}, \hat{r}^\top] = i \, \Omega = i \begin{pmatrix} 0 & \mathbb{1} \\ -\mathbb{1} & 0 \end{pmatrix} .$$
Thermal states of quadratic Hamiltonians, also called Gaussian states, form a privileged class of experimentally relevant quantum states. Like their classical relatives, they are parametrised by a mean vector $w$ and a covariance matrix $V$. Covariance matrices of $n$-mode quantum states are exactly those $2n \times 2n$ real matrices such that
$$V \ge i \Omega \qquad \text{(Heisenberg uncertainty principle)} .$$
Real symmetric matrices satisfying the above condition are called quantum covariance matrices (QCMs). Pure states are represented by minimal QCMs, or equivalently by QCMs with determinant 1:
$$\hat{\rho}_G(V, w) \text{ pure} \iff V \ge i\Omega \text{ and } \det V = 1 .$$
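A minimal bona fide test for QCMs, assuming the slide's conventions ($V \ge i\Omega$, pure $\iff \det V = 1$, quadratures ordered $x_1 \ldots x_n, p_1 \ldots p_n$; the helper name is mine):

```python
import numpy as np

def is_qcm(V, tol=1e-10):
    """Check the bona fide condition V + i*Omega >= 0 for an n-mode QCM."""
    n = V.shape[0] // 2
    Omega = np.block([[np.zeros((n, n)), np.eye(n)],
                      [-np.eye(n), np.zeros((n, n))]])
    H = V + 1j * Omega   # Hermitian: V symmetric, Omega antisymmetric
    return np.all(np.linalg.eigvalsh(H) >= -tol)

assert is_qcm(np.eye(2))              # vacuum: pure, det V = 1
assert is_qcm(3 * np.eye(2))          # thermal state: mixed, det V = 9 > 1
assert not is_qcm(0.5 * np.eye(2))    # violates V >= i*Omega
```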

Experimentally, Gaussian measurements are easily accessible. These can be described by POVMs parametrised by another QCM, called the seed. When one makes a Gaussian measurement described by a seed $\gamma$ on a Gaussian state with covariance matrix $V$, the outcome $T$ is again distributed normally:
$$T \sim \mathcal{N}\!\left( \tfrac{1}{2}(V + \gamma) \right) .$$
Hence, its differential entropy becomes
$$h(T) = \frac{1}{2} \ln \det \tfrac{1}{2}(V + \gamma) + n \, (\ln 2\pi + 1) .$$
(The quantum entropy of the Gaussian state itself is significantly more complicated.)

Moral: log-determinant entropies are the right thing to look at if what you care about are measured correlations. To recover log-determinant expressions from the quantum state directly, one has to work with Rényi-2 entropies:
$$S_2(\hat{\rho}_G(V, w)) := -\ln \mathrm{Tr} \big[ \hat{\rho}_G(V, w)^2 \big] = \frac{1}{2} \ln \det V .$$
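A sketch of the two entropy formulas on a single-mode thermal state, assuming the outcome-covariance rule $T \sim \mathcal{N}(\frac{1}{2}(V + \gamma))$ stated above (helper names are mine):

```python
import numpy as np

def renyi2_entropy(V):
    """Renyi-2 entropy of a Gaussian state: S_2 = 1/2 ln det V
    (convention in which pure states have det V = 1)."""
    return 0.5 * np.linalg.slogdet(V)[1]

def measured_entropy(V, gamma):
    """Differential entropy of the outcome of a Gaussian measurement with
    seed gamma on an n-mode Gaussian state with QCM V: T ~ N(0, (V+gamma)/2)."""
    n = V.shape[0] // 2
    return 0.5 * np.linalg.slogdet((V + gamma) / 2)[1] + n * (np.log(2 * np.pi) + 1)

nu = 3.0
V = nu * np.eye(2)    # single-mode thermal state, det V = nu^2
gamma = np.eye(2)     # heterodyne-like seed: a pure single-mode QCM
assert np.isclose(renyi2_entropy(V), np.log(nu))
print(measured_entropy(V, gamma))   # = ln((nu+1)/2) + ln(2*pi) + 1
```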

Gaussian entanglement measures

Consider a bipartite Gaussian state. How do we quantify its entanglement? An important measure is the Rényi-$\alpha$ entanglement of formation, i.e. the convex roof of the Rényi-$\alpha$ entanglement entropy. Since we are dealing with Gaussian states, it makes sense to restrict to Gaussian decompositions in the convex roof, and to look at $\alpha = 2$. In this way one obtains the Rényi-2 Gaussian entanglement of formation.⁷ The choice $\alpha = 2$ makes the expression extremely simple at the level of covariance matrices:
$$E_{F,2}^G(A : B)_V = \inf \left\{ \tfrac{1}{2} \, I_M(A : B)_\gamma \, : \ \gamma_{AB} \text{ pure QCM and } \gamma_{AB} \le V_{AB} \right\} .$$
It has been conjectured to be linked to the secret key distillation rate in the Gaussian setting [Mišta & Tatham, PRL 2016].

7. Wolf et al., Phys. Rev. A 69:052320, 2004 — Adesso et al., Phys. Rev. Lett. 109:190502, 2012.

Main result

Theorem. For any quantum covariance matrix $V_{ABC}$, twice the Rényi-2 Gaussian entanglement of formation between $A$ and $B$ is a lower bound on the log-det CMI:
$$\frac{1}{2} \, I_M(A : B \mid C)_V \ \ge \ E_{F,2}^G(A : B)_V .$$
Furthermore, the r.h.s. can be recovered by taking the infimum of the l.h.s. over all (legal) extensions $V_{ABC}$ of $V_{AB}$:
$$\inf_{V_{ABC}} \ \frac{1}{2} \, I_M(A : B \mid C)_V \ = \ E_{F,2}^G(A : B)_V .$$

Sketch of proof (first inequality). Start by defining⁸
$$\gamma_{AB} := (V_{ABC}/V_C) \ \#_{1/2} \ \big( \Omega_{AB} \, (V_{ABC}/V_C)^{-1} \, \Omega_{AB}^\top \big) .$$
Even if it is not obvious at first glance, this is always a QCM, and moreover $\gamma_{AB} \le V_{AB}$. Now compute its determinant:
$$\det \gamma_{AB} = \big( \det(V_{ABC}/V_C) \big)^{1/2} \, \big( \det(V_{ABC}/V_C)^{-1} \big)^{1/2} = 1 .$$
Hence, this $\gamma_{AB}$ is a pure QCM. This means that we can use it as an ansatz in the infimum that defines the Rényi-2 Gaussian entanglement of formation!

8. L. Lami, C. Hirche, G. Adesso, and A. Winter. Phys. Rev. Lett. 117:220502, 2016.
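The ansatz can be checked numerically: a sketch that builds a random 3-mode QCM, forms $\gamma_{AB}$, and verifies $\det \gamma_{AB} = 1$ and $\gamma_{AB} \le V_{AB}$ (gmean as in the earlier sketch; generating QCMs as $V = GG^\top + \mathbb{1} \ge \mathbb{1} \ge i\Omega$ is my shortcut, not from the slides):

```python
import numpy as np

def powm(S, t):
    w, U = np.linalg.eigh(S)
    return (U * w**t) @ U.T

def gmean(M, N, t=0.5):
    Mh, Mhi = powm(M, 0.5), powm(M, -0.5)
    A = Mhi @ N @ Mhi
    return Mh @ powm((A + A.T) / 2, t) @ Mh

rng = np.random.default_rng(9)
G = rng.standard_normal((6, 6))
V = G @ G.T + np.eye(6)        # V >= 1 >= i*Omega: a valid 3-mode QCM
ab, c = [0, 1, 3, 4], [2, 5]   # xxpp ordering: A = mode 1, B = mode 2, C = mode 3
S = V[np.ix_(ab, ab)] - V[np.ix_(ab, c)] @ np.linalg.solve(V[np.ix_(c, c)], V[np.ix_(c, ab)])
Om = np.block([[np.zeros((2, 2)), np.eye(2)],
               [-np.eye(2), np.zeros((2, 2))]])          # symplectic form on AB
gamma = gmean(S, Om @ np.linalg.inv(S) @ Om.T, 0.5)
assert np.isclose(np.linalg.det(gamma), 1.0)             # pure QCM: det = 1
assert np.all(np.linalg.eigvalsh(V[np.ix_(ab, ab)] - gamma) >= -1e-8)  # gamma <= V_AB
```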

Doing so yields:
$$\begin{aligned} E_{F,2}^G(A : B)_V &= \inf \left\{ \tfrac{1}{2} \, I_M(A : B)_\gamma : \gamma \text{ pure QCM}, \ \gamma \le V_{AB} \right\} \\ &\le \tfrac{1}{2} \, I_M(A : B)_{(V_{ABC}/V_C) \,\#_{1/2}\, (\Omega (V_{ABC}/V_C)^{-1} \Omega^\top)} \\ &\le \tfrac{1}{4} \, I_M(A : B)_{V_{ABC}/V_C} + \tfrac{1}{4} \, I_M(A : B)_{\Omega (V_{ABC}/V_C)^{-1} \Omega^\top} \\ &= \tfrac{1}{4} \, I_M(A : B)_{V_{ABC}/V_C} + \tfrac{1}{4} \, I_M(A : B)_{(V_{ABC}/V_C)^{-1}} \\ &= \tfrac{1}{4} \, I_M(A : B \mid C)_V + \tfrac{1}{4} \, I_M(A : B \mid C)_V \\ &= \tfrac{1}{2} \, I_M(A : B \mid C)_V . \end{aligned}$$
The second inequality is the convexity of the log-det MI on the geodesics of the trace metric; the next equality gets rid of $\Omega$ (an orthogonal matrix); the last steps use the properties of the log-det CMI, $I_M(A : B \mid C)_V = I_M(A : B)_{V_{ABC}/V_C}$ and $I_M(A : B \mid C)_V = I_M(A : B)_{V^{-1}}$.

In the second part of the proof we have to construct suitable extensions that saturate the above bound (a bit more cumbersome).

Consequences
$$\inf_{V_{ABC}} \ \frac{1}{2} \, I_M(A : B \mid C)_V \ = \ E_{F,2}^G(A : B)_V$$
The theorem reduces the infimum on the l.h.s., which is in principle over extensions of unbounded dimension, to an optimisation over a compact set of matrices of fixed dimension. The optimised mutual information is reminiscent of the squashed entanglement:⁹
$$E_{sq}(A : B)_\rho := \inf_{\rho_{ABC}} \ \frac{1}{2} \, I(A : B \mid C)_\rho .$$
In fact, it is a "Rényi-2 Gaussian" version of the squashed entanglement. For comparison, remember that a simple expression for the von Neumann squashed entanglement remains out of reach, even for very simple states. Our results may be useful to tackle a conjecture in [Mišta & Tatham, PRL 2016]: the Rényi-2 Gaussian entanglement of formation coincides with the Gaussian intrinsic entanglement, i.e. the intrinsic information of the measured correlations, when all the parties are assumed to employ only Gaussian processing.

9. R.R. Tucci, arXiv:quant-ph/9909041 — M. Christandl and A. Winter, J. Math. Phys. 45(3):829-840, 2004.

Conclusions

Log-determinant expressions appear:
- in the entropies of normal variables;
- in the entropies of the outcomes of Gaussian measurements on Gaussian states;
- in the Rényi-2 entropies of Gaussian states.

The log-determinant mutual information enjoys many useful properties: for instance, it is convex on the geodesics of the trace metric. These properties can be used to show that the Rényi-2 Gaussian squashed entanglement coincides with the Rényi-2 Gaussian entanglement of formation. This may shed light on the connections between these quantifiers and the cryptographically motivated Gaussian intrinsic entanglement.

Thank you!

