Is there a reverse Chernoff bound, one that shows the tail probability is at least so large?

That is, if X1, X2, …, Xn are independent random variables, can we bound the probability that their sum deviates from its mean from below?


Your example is asking too much: with p = n^(−2/3), a standard Chernoff bound shows that Pr[|T∩S1| ≥ √1.1 · n^(1/3)] and Pr[|T∩S2| ≤ n^(1/3)/√1.1] are at most exp(−c·n^(1/3)) for some c.

—
Colin McQuillan,
You're right, I got confused about which term in the Chernoff bound has the square. I've edited the question to reflect a weaker bound. I don't think it will help in my current application, but it might be interesting for other reasons.

—
Ashwinkumar BV,
Answers:

Here is an explicit proof that a standard Chernoff bound is tight up to constant factors in the exponent for a particular range of the parameters. (In particular, whenever the variables are 0 or 1, and 1 with probability 1/2 or less, and ε ∈ (0, 1/2), and the Chernoff upper bound is less than a constant.)

If you find an error, please let me know.

**Lemma 1.** (tightness of the Chernoff bound)
*Let X be the average of k independent 0/1 random variables (r.v.'s). For any ε ∈ (0, 1/2] and p ∈ (0, 1/2], assuming ε²pk ≥ 3,*

(i)
*If each r.v. is 1 with probability at most p, then*
Pr[X ≤ (1−ε)p] ≥ exp(−9ε²pk).

(ii)
*If each r.v. is 1 with probability at least p, then*
Pr[X ≥ (1+ε)p] ≥ exp(−9ε²pk).
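As a quick sanity check on Lemma 1(i), the lower bound can be compared against the exact binomial tail. A short Python sketch (the parameters are arbitrary, chosen only to satisfy ε²pk ≥ 3):

```python
from math import comb, exp

def lower_tail(k: int, p: float, eps: float) -> float:
    """Exact Pr[X <= (1-eps)p] for X = average of k Bernoulli(p) variables."""
    cutoff = int((1 - eps) * p * k)  # largest i with i/k <= (1-eps)p
    return sum(comb(k, i) * p**i * (1 - p)**(k - i) for i in range(cutoff + 1))

# Parameters satisfying the lemma's assumption eps^2 * p * k >= 3.
k, p, eps = 1000, 0.5, 0.1
assert eps**2 * p * k >= 3

exact = lower_tail(k, p, eps)
bound = exp(-9 * eps**2 * p * k)
assert exact >= bound  # Lemma 1(i): the tail is at least exp(-9 eps^2 p k)
```

Here the exact tail is on the order of 10^−3 while the lemma guarantees only exp(−45), consistent with the bound being tight only up to the constant in the exponent.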

*Proof.*
We use the following observation:

**Claim 1.** If 1 ≤ ℓ ≤ k−1, then

(k choose ℓ) ≥ (1/(e√(2πℓ))) · (k/ℓ)^ℓ · (k/(k−ℓ))^(k−ℓ).

**Proof of Claim 1.**
By Stirling's approximation, i! = √(2πi)·(i/e)^i·e^λ where λ ∈ [1/(12i+1), 1/(12i)].

Thus (k choose ℓ), which is k!/(ℓ!(k−ℓ)!), is at least

(√(2πk)/(√(2πℓ)·√(2π(k−ℓ)))) · (k/ℓ)^ℓ · (k/(k−ℓ))^(k−ℓ) · e^(1/(12k+1) − 1/(12ℓ) − 1/(12(k−ℓ)))
≥ (1/(e√(2πℓ))) · (k/ℓ)^ℓ · (k/(k−ℓ))^(k−ℓ),

using √(2πk) ≥ √(2π(k−ℓ)) and 1/(12ℓ) + 1/(12(k−ℓ)) ≤ 1/6 < 1. QED
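Claim 1 is easy to spot-check against Python's exact binomial coefficients; a small sketch:

```python
from math import comb, sqrt, pi, e

def claim1_lower_bound(k: int, l: int) -> float:
    """Right-hand side of Claim 1: (1/(e*sqrt(2*pi*l))) * (k/l)^l * (k/(k-l))^(k-l)."""
    return (1 / (e * sqrt(2 * pi * l))) * (k / l)**l * (k / (k - l))**(k - l)

# Check the claim for every admissible l at several values of k.
for k in (2, 10, 50, 200):
    for l in range(1, k):
        assert comb(k, l) >= claim1_lower_bound(k, l)
```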

*Proof of Lemma 1 Part (i).*
Without loss of generality assume each 0/1 random variable in the sum X is 1 with probability *exactly* p (raising any smaller probability to p only decreases the lower-tail probability). Then Pr[X ≤ (1−ε)p] equals the sum ∑_{i=0}^{⌊(1−ε)pk⌋} Pr[X = i/k], where Pr[X = i/k] = (k choose i)·p^i·(1−p)^(k−i).

Fix ℓ = ⌊(1−2ε)pk⌋ + 1.

The assumptions ε²pk ≥ 3 and ε ≤ 1/2 give εpk ≥ 6, so 1 ≤ ℓ ≤ pk, and the sum above contains at least εpk − 1 terms with index i ≥ ℓ. The terms are nondecreasing in i over this range, so each such term is at least Pr[X = ℓ/k], and by Claim 1,

Pr[X ≤ (1−ε)p] ≥ (εpk − 1)·(k choose ℓ)·p^ℓ·(1−p)^(k−ℓ) ≥ A·B,

where A = (εpk − 1)/(e√(2πℓ)) and B = (pk/ℓ)^ℓ·((1−p)k/(k−ℓ))^(k−ℓ).

To finish we show A ≥ exp(−ε²pk) and B ≥ exp(−8ε²pk).

**Claim 2.** A ≥ exp(−ε²pk).

**Proof of Claim 2.**
By definition, ℓ ≤ (1−2ε)pk + 1 ≤ pk + 1 ≤ 2pk (using pk ≥ εpk ≥ 6), so √(2πℓ) ≤ √(4πpk) and

A ≥ (εpk − 1)/(e√(4πpk)).

From ε²pk ≥ 3 we get εpk ≥ √(3pk), and from εpk ≥ 6 we get εpk − 1 ≥ (5/6)εpk. Combining,

A ≥ (5/6)·√(3pk)/(e√(4πpk)) = 5√3/(6e√(4π)) > 0.1 > exp(−3) ≥ exp(−ε²pk). QED

**Claim 3.** B ≥ exp(−8ε²pk).

**Proof of Claim 3.**
Fix δ = 1 − ℓ/(pk), so that ℓ = (1−δ)pk. The choice of ℓ ensures 0 ≤ δ ≤ 2ε, because (1−2ε)pk ≤ ℓ ≤ pk. Taking logarithms,

ln B = −ℓ·ln(1−δ) − (k−ℓ)·ln(1 + δp/(1−p)).

Using −ln(1−δ) ≥ δ + δ²/2, ln(1+x) ≤ x, and k − ℓ = k(1−p+δp),

ln B ≥ (1−δ)pk(δ + δ²/2) − kδp(1−p+δp)/(1−p) = −pkδ²(1/2 + δ/2 + p/(1−p)) ≥ −2pkδ² ≥ −8ε²pk,

using δ ≤ 2ε ≤ 1 and p ≤ 1/2. QED

Claims 2 and 3
imply AB ≥ exp(−ε²pk)·exp(−8ε²pk) = exp(−9ε²pk). This proves Part (i).

*Proof of Lemma 1 Part (ii).*
Without loss of generality assume each random variable is 1 with probability *exactly* p (lowering any larger probability to p only decreases the upper-tail probability).

Note Pr[X ≥ (1+ε)p] = ∑_{i=⌈(1+ε)pk⌉}^{k} Pr[X = i/k].

Of these, the εpk − 1 terms of smallest index are each at least Pr[X = ℓ/k] for ℓ = ⌈(1+2ε)pk⌉ − 1, since the terms are nonincreasing in i above the mode. From here the argument of Part (i) goes through with ℓ = (1+δ)pk for some δ ≤ 2ε, and the estimates in Claims 2 and 3 hold with the same constants. QED

Several [math processing error]s -- any chance of fixing them?

—
Aryeh
Those math expressions used to display just fine. For some reason the \choose command is not working in mathjax. Neither is \binom. E.g. $a \choose b$ gives (ab). Presumably this is a bug in the mathjax configuration. Hopefully it will be fixed soon. Meanwhile see Lemma 5.2 in the appendix of arxiv.org/pdf/cs/0205046v2.pdf or cs.ucr.edu/~neal/Klein15Number.

—
Neal Young
The Berry-Esseen theorem can give tail probability lower bounds, as long as they are larger than n^(−1/2).

Another tool you can use is the Paley-Zygmund inequality. It implies that for any even integer k and any real-valued random variable X,

Pr[|X| ≥ ½·(E[X^k])^(1/k)] ≥ (E[X^k])² / (4·E[X^(2k)]).

Together with the multinomial theorem, which lets you compute the even moments E[X^k] explicitly for X a sum of independent random variables, this can give concrete tail lower bounds.
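For illustration, here is a small Python sketch checking the Paley-Zygmund consequence above for X a sum of n uniform ±1 signs, with the moments computed exactly from the binomial distribution (the parameters n and k here are arbitrary):

```python
from math import comb

def rademacher_moment(n: int, k: int) -> float:
    """E[X^k] where X is a sum of n independent uniform +-1 signs (exact)."""
    # X = n - 2j with probability comb(n, j)/2^n, where j = number of -1 signs.
    return sum(comb(n, j) * (n - 2 * j)**k for j in range(n + 1)) / 2**n

def tail_prob(n: int, t: float) -> float:
    """Exact Pr[|X| >= t] for the same X."""
    return sum(comb(n, j) for j in range(n + 1) if abs(n - 2 * j) >= t) / 2**n

n, k = 20, 4  # k must be even for the Paley-Zygmund argument
mk, m2k = rademacher_moment(n, k), rademacher_moment(n, 2 * k)
threshold = 0.5 * mk**(1 / k)
# Paley-Zygmund: Pr[|X| >= (1/2) E[X^k]^(1/k)] >= E[X^k]^2 / (4 E[X^2k])
assert tail_prob(n, threshold) >= mk**2 / (4 * m2k)
```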

If you are indeed okay with bounding sums of Bernoulli trials (and not, say, bounded random variables), the following is pretty tight.

**Slud's Inequality***. Let {Xi}_{i=1}^{n} be i.i.d. draws from a Bernoulli r.v. with E[X1] = p, and let an integer k ≤ n be given. If either (a) p ≤ 1/4 and np ≤ k, or (b) np ≤ k ≤ n(1−p), then

Pr[∑_i Xi ≥ k] ≥ 1 − Φ((k − np)/√(np(1−p))),

where Φ is the cdf of a standard normal.

(Treating the argument to Φ as a deviation in standard units, this matches what the central limit theorem predicts.)

From here, you can use standard lower bounds on the Gaussian tail, e.g. 1 − Φ(t) ≥ φ(t)·t/(t²+1), where φ is the standard normal density.
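A short Python sketch comparing Slud's lower bound with the exact binomial tail in case (a) (p ≤ 1/4, np ≤ k), with Φ computed via `math.erf`:

```python
from math import comb, erf, sqrt

def phi(x: float) -> float:
    """Standard normal cdf."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def binom_upper_tail(n: int, p: float, k: int) -> float:
    """Exact Pr[sum of n Bernoulli(p) >= k]."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Case (a) of Slud's inequality: p <= 1/4 and np <= k.
n, p = 200, 0.2
for k in range(int(n * p), n // 2):
    slud = 1 - phi((k - n * p) / sqrt(n * p * (1 - p)))
    assert binom_upper_tail(n, p, k) >= slud  # the exact tail dominates the Gaussian tail
```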

Other than that, and what other people have said, you can also try using the Binomial directly, perhaps with some Stirling.

(*) Some newer statements of Slud's inequality leave out some of these conditions; I've reproduced the one in Slud's paper.

The de Moivre-Laplace Theorem shows that variables like |T∩S1|, being sums of independent indicator variables, are asymptotically normally distributed, which yields matching tail lower bounds in the central regime.

For lower bounds on tail probabilities like n^(−c), see *Random Graphs* by Béla Bollobás, Cambridge, 2nd edition, where further references are given to *An Introduction to Probability and Its Applications* by Feller and *Foundations of Probability* by Rényi.

The Generalized Littlewood-Offord Theorem isn't exactly what you want, but it gives what I think of as a "reverse Chernoff" bound by showing that the sum of random variables is unlikely to fall within a small range around any particular value (including the expectation). Perhaps it will be useful.

Formally, the theorem is as follows.

**Generalized Littlewood-Offord Theorem**: Let a1, …, an be real numbers with |ai| ≥ 1 for each i, and let X1, …, Xn be independent uniformly random signs in {−1, +1}. Then for any open interval I of length 2,

Pr[∑_i ai·Xi ∈ I] ≤ (n choose ⌊n/2⌋)/2^n = O(n^(−1/2)).
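The bound is easy to verify exhaustively for small n. The following Python sketch checks it by enumerating all sign patterns, for one arbitrary choice of coefficients with |ai| ≥ 1 and a few arbitrary interval centers:

```python
from itertools import product
from math import comb
import random

def small_ball_prob(a: list, center: float) -> float:
    """Pr[sum_i a_i X_i lies in the open interval (center-1, center+1)], by enumeration."""
    n = len(a)
    hits = sum(
        1 for signs in product((-1, 1), repeat=n)
        if abs(sum(s * ai for s, ai in zip(signs, a)) - center) < 1
    )
    return hits / 2**n

random.seed(0)
n = 12
a = [1 + random.random() for _ in range(n)]  # all coefficients satisfy |a_i| >= 1
bound = comb(n, n // 2) / 2**n  # Littlewood-Offord bound for an open interval of length 2
for center in (0.0, 1.5, sum(a) / 3):
    assert small_ball_prob(a, center) <= bound
```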

It may be helpful to others to know that this type of result is also known as a "small ball inequality" and Nguyen and Vu have a terrific survey people.math.osu.edu/nguyen.1261/cikk/LO-survey.pdf. My perspective here slightly differs from yours. I think of a "reverse Chernoff" bound as giving a lower estimate of the probability mass of the small ball around 0. I think of a small ball inequality as qualitatively saying that the small ball probability is maximized by the ball at 0. In this sense reverse Chernoff bounds are usually easier to prove than small ball inequalities.

—
Sasho Nikolov
The exponent in the standard Chernoff bound as it is stated on Wikipedia is tight for 0/1-valued random variables. Let 0 < p < 1 and let X1, …, Xn be independent 0/1 random variables, each 1 with probability p. Then for any ε ∈ (0, 1−p),

2^(−n·D(p+ε ‖ p)) / (n+1) ≤ Pr[(1/n)·∑_i Xi ≥ p + ε] ≤ 2^(−n·D(p+ε ‖ p)).

Here, D(x‖y) = x·log2(x/y) + (1−x)·log2((1−x)/(1−y)) is the Kullback-Leibler divergence, in bits, between Bernoulli distributions with parameters x and y.

As mentioned, the upper bound in the inequality above is proved on Wikipedia (https://en.wikipedia.org/wiki/Chernoff_bound) under the name "Chernoff-Hoeffding Theorem, additive form". The lower bound can be proved using e.g. the "method of types". See Lemma II.2 in [1]. Also, this is covered in the classic textbook on information theory by Cover and Thomas.

[1] Imre Csiszár: The Method of Types. IEEE Transactions on Information Theory (1998). http://dx.doi.org/10.1109/18.720546

It is also worth noting that D(p+δp ‖ p) = (p/(2−2p))·δ² + O(δ³) (taking D in nats), and for the common case p = 1/2 it is δ²/2 + O(δ⁴). This shows that when δ = O(n^(−1/3)), the typical exp(−Cδ²n) bound is sharp (and when δ = O(n^(−1/4)) for p = 1/2).
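The quadratic approximation is easy to confirm numerically (D is taken in nats here, which is the convention the stated coefficients match):

```python
from math import log

def kl(x: float, y: float) -> float:
    """Binary KL divergence D(x || y) in nats."""
    return x * log(x / y) + (1 - x) * log((1 - x) / (1 - y))

# Check the quadratic approximation D(p + dp*p || p) ~ (p/(2-2p)) * dp^2
# for small relative deviations dp; the error should be O(dp^3).
for p in (0.1, 0.3, 0.5):
    for dp in (0.01, 0.001):
        exact = kl(p + dp * p, p)
        approx = (p / (2 - 2 * p)) * dp**2
        assert abs(exact - approx) <= 10 * dp**3
```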

—
Thomas Ahle

Licensed under cc by-sa 3.0
with attribution required.