A very simple version of the central limit theorem, as below
Answers:
Good question (+1)!!
You will recall that for independent random variables $X$ and $Y$, $\operatorname{Var}(X+Y) = \operatorname{Var}(X) + \operatorname{Var}(Y)$ and $\operatorname{Var}(a \cdot X) = a^2 \cdot \operatorname{Var}(X)$. So the variance of $\sum_{i=1}^n X_i$ is $n\sigma^2$, and the variance of $\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i$ is $n\sigma^2/n^2 = \sigma^2/n$.
That takes care of the variance. To standardize a random variable, you divide it by its standard deviation. As you know, the expected value of $\bar{X}$ is $\mu$, so the variable

$$\frac{\bar{X} - \mu}{\sqrt{\sigma^2/n}} = \sqrt{n}\,\frac{\bar{X} - \mu}{\sigma}$$

has expected value $0$ and variance $1$.
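As a quick numerical sanity check (my addition, not part of the original answer, using simulated normal data), the following sketch confirms that $\operatorname{Var}(\bar{X}) = \sigma^2/n$ and that dividing $\bar{X} - \mu$ by $\sigma/\sqrt{n}$ yields mean $0$ and variance $1$:

```python
# Simulation sketch (assumed normal population; any distribution with
# finite variance would behave the same way for the variance identities).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.0, 2.0, 50, 200_000

samples = rng.normal(mu, sigma, size=(reps, n))
xbar = samples.mean(axis=1)

print(xbar.var())                       # close to sigma**2 / n = 0.08
z = (xbar - mu) / (sigma / np.sqrt(n))  # standardize the sample mean
print(z.mean(), z.var())                # close to 0 and 1
```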
As for your second point, I believe the equation shown above shows that you have to divide by $\sigma$ and not by $\sigma^2$ to standardize the expression, which explains why you use $s_n$ (the estimator of $\sigma$) and not $s_n^2$ (the estimator of $\sigma^2$).
Addition: @whuber suggests discussing why the scaling is by $\sqrt{n}$. He does so in his answer, but because that answer is very long I will try to capture the essence of his argument (which is a reconstruction of de Moivre's thoughts).
If you add up a large number of $+1$'s and $-1$'s, you can approximate the probability that the sum will equal $j$ by elementary counting. The log of this probability is proportional to $-j^2/n$. So if we want the probability above to converge to a constant as $n$ grows large, we must use a normalizing factor of order $\sqrt{n}$.
Using modern (post de Moivre) mathematical tools, you can see the approximation mentioned above by noticing that the sought probability is

$$P(S_n = j) = \binom{n}{\frac{n+j}{2}}\, 2^{-n},$$

which we approximate by Stirling's formula as

$$P(S_n = j) \approx \sqrt{\frac{2}{\pi n}}\, e^{-j^2/(2n)}.$$
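To make this concrete, here is a small check (my addition, not part of the original answer) that the exact binomial probability for a sum of $n$ independent $\pm 1$ steps is well approximated by $\sqrt{2/(\pi n)}\,e^{-j^2/(2n)}$:

```python
# Compare the exact probability that a sum of n independent +/-1 steps
# equals j with the Stirling-formula approximation.
from math import comb, exp, pi, sqrt

n = 1000                      # number of +/-1 steps (even)
for j in (0, 10, 30, 60):     # j must have the same parity as n
    exact = comb(n, (n + j) // 2) / 2**n
    approx = sqrt(2 / (pi * n)) * exp(-j**2 / (2 * n))
    print(j, exact, approx)   # the two columns agree closely
```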
There is a nice theory of what kinds of distributions can be limiting distributions of sums of random variables. A nice resource is the following book by Petrov, which I personally enjoyed immensely.
It turns out that if you are investigating limits of this type

$$\frac{1}{a_n}\sum_{i=1}^n X_i - b_n,$$

for sequences of constants $a_n > 0$ and $b_n$, then only a restricted class of limiting distributions can appear.
There is a lot of mathematics involved, which boils down to several theorems that completely characterize what happens in the limit. One such theorem is due to Feller:
Theorem Let $\{X_n\}$ be a sequence of independent random variables, $V_n$ be the distribution function of $X_n$, and $a_n$ be a sequence of positive constants. In order that

$$\max_{1\le k\le n} P(|X_k| \ge \varepsilon a_n) \to 0, \text{ for every fixed } \varepsilon > 0,$$

and

$$\sup_x \left| P\!\left(a_n^{-1}\sum_{k=1}^n X_k < x\right) - \Phi(x) \right| \to 0,$$

it is necessary and sufficient that

$$\sum_{k=1}^n P(|X_k| \ge \varepsilon a_n) \to 0 \text{ for every fixed } \varepsilon > 0,$$

and

$$a_n^{-2}\sum_{k=1}^n\left(\int_{|x|<a_n} x^2\, dV_k(x) - \left(\int_{|x|<a_n} x\, dV_k(x)\right)^2\right) \to 1, \qquad a_n^{-1}\sum_{k=1}^n \int_{|x|<a_n} x\, dV_k(x) \to 0.$$
This theorem then gives you an idea of what $a_n$ should look like.
The general theory in the book is constructed in such a way that the norming constant $a_n$ is not restricted in any way, but the final theorems, which give necessary and sufficient conditions, do not leave any room for a norming constant other than $\sqrt{n}$.
Here $s$ represents the sample standard deviation of the sample mean: $s^2$ is the sample variance of the sample mean, and it equals $S^2/n$, where $S^2$ is the sample estimate of the population variance. Since $s = S/\sqrt{n}$, that explains how $\sqrt{n}$ appears in the first formula. Note there would be a $\sigma$ in the denominator if the limit were $N(0,1)$, but the limit is given as $N(0,\sigma^2)$. Since $S$ is a consistent estimate of $\sigma$, it is used in the second equation to take $\sigma$ out of the limit.
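As a quick illustration of that last point (my own sketch, with an assumed exponential population so the data are far from normal), replacing $\sigma$ by the consistent estimate $S$ still leaves the standardized statistic approximately $N(0,1)$:

```python
# Sketch: sqrt(n)*(xbar - mu)/S is approximately N(0, 1) for large n,
# even though sigma was replaced by the sample estimate S.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 0.0, 3.0, 400, 20_000

# Shifted exponential draws with mean mu and standard deviation sigma.
x = rng.exponential(sigma, size=(reps, n)) - sigma + mu
xbar = x.mean(axis=1)
S = x.std(axis=1, ddof=1)          # consistent estimate of sigma
t = np.sqrt(n) * (xbar - mu) / S

print(t.mean(), t.std())           # roughly 0 and 1
```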
Intuitively, if $Z_n \to Z$ in distribution for some random variable $Z$, we should expect that $\operatorname{Var}(Z_n)$ is roughly equal to $\operatorname{Var}(Z)$; it seems like a pretty reasonable expectation, though I don't think it is necessary in general. The reason for the $\sqrt{n}$ in the first expression is that the variance of $\bar{X}_n - \mu$ goes to $0$ like $1/n$, and so the $\sqrt{n}$ is inflating the variance so that the expression simply has variance equal to $\sigma^2$. In the second expression, the term $s_n$ in the denominator grows like $\sqrt{n}$, while the variance of the numerator grows like $n$, so we again have that the variance of the whole expression is a constant.
Essentially, we know something "interesting" is happening with the distribution of $\bar{X}_n$, but if we don't properly center and scale it we won't be able to see it. I've heard this described sometimes as needing to adjust the microscope. If we don't blow up $\bar{X}_n - \mu$ by $\sqrt{n}$, then we just have $\bar{X}_n - \mu \to 0$ in distribution by the weak law; an interesting result in its own right but not as informative as the CLT. If we inflate by any factor which is dominated by $\sqrt{n}$, we still get $0$ in the limit, while any factor which dominates $\sqrt{n}$ blows the expression up. It turns out $\sqrt{n}$ is just the right magnification to be able to see what is going on in this case (note: all convergence here is in distribution; there is another level of magnification which is interesting for almost sure convergence, which gives rise to the law of the iterated logarithm).
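The microscope metaphor above can be sketched numerically (my illustration, not from the original answer): scale $\bar{X}_n - \mu$ by $n^{\alpha}$ and watch the spread. For $\alpha < 1/2$ the fluctuations vanish, for $\alpha = 1/2$ they hold steady near $\sigma$, and for $\alpha > 1/2$ they blow up.

```python
# Scale (xbar - mu) by n**alpha for several n and watch the standard
# deviation of the scaled quantity: it behaves like n**(alpha - 0.5).
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 0.0, 1.0
results = {}
for n in (10, 100, 1_000, 10_000):
    xbar = rng.normal(mu, sigma, size=(1_000, n)).mean(axis=1)
    for alpha in (0.25, 0.5, 0.75):
        results[(n, alpha)] = (n ** alpha * (xbar - mu)).std()
        print(n, alpha, round(results[(n, alpha)], 3))
# alpha = 0.5 keeps the spread near sigma = 1 for every n;
# alpha = 0.25 shrinks it toward 0, alpha = 0.75 inflates it.
```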