Does the series formed from a uniformly bounded sequence {X_i} of independent random variables converge or diverge depending on the behavior of the variances?

If $\{X_i\}$ is a uniformly bounded sequence of independent random variables, does $\sum_{i=1}^{\infty} (X_i - E[X_i])$ converge or diverge according to whether $\sum_{i=1}^{\infty} \sigma_i^2$ converges or diverges?
I have searched everywhere for a proof with little success. Can anyone provide a reference or a proof? Thank you.
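
For what it's worth, this is the setting of Kolmogorov's two-series theorem (with a converse available for uniformly bounded summands). Below is a small simulation sketch of the dichotomy, not a proof; the choice of uniform variables and of the bounds $a_i$ is mine, purely for illustration.

    # X_i uniform on (-a_i, a_i): uniformly bounded when a_i <= 1, E[X_i] = 0,
    # sigma_i^2 = a_i^2 / 3. With a_i = 1/i the variances are summable and the
    # partial sums settle; with a_i = 1 they keep wandering (scale ~ sqrt(n)).
    set.seed(11)
    n <- 5000
    partial_sums <- function(a) cumsum(runif(n, -a, a))
    S_summable  <- partial_sums(1 / (1:n))   # sum sigma_i^2 < infinity
    S_divergent <- partial_sums(rep(1, n))   # sum sigma_i^2 = infinity
    tail(S_summable, 3)    # hovers near a limit
    tail(S_divergent, 3)   # still fluctuating widely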

Variance estimation, covariance of linear regression estimators

I am examining this exercise: https://i.imgur.com/eDMtCaa.png From this image https://i.imgur.com/1WBO6iA.png I believe that the desired values are given directly by the matrix provided. However, I have some doubt, because that would be too easy, and because of the word "estimate."

Is that what they are looking for? For example, $\operatorname{var}(\beta_1) = 0.5$. Would that be the answer for part (a)?
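
If the matrix in the exercise is the estimated covariance matrix of the coefficient estimates, then yes: the variances are its diagonal entries and the covariances are the off-diagonal ones. A made-up R illustration of how such a matrix is read (the data and model here are mine, not from the exercise):

    set.seed(9)
    x1 <- rnorm(50); x2 <- rnorm(50)
    y  <- 1 + 2 * x1 - x2 + rnorm(50)
    fit <- lm(y ~ x1 + x2)
    V <- vcov(fit)        # estimated covariance matrix of the coefficient estimates
    V["x1", "x1"]         # estimated var(beta_1 hat): a diagonal entry
    V["x1", "x2"]         # estimated cov(beta_1 hat, beta_2 hat): an off-diagonal entry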

st.statistics – Linear regression: equivalence of forms of the minimum variance affine unbiased estimator

Background

Consider the linear regression model:

$$ y = X\beta + e, \qquad E(e) = 0, \qquad E(ee^T) = V $$

It is well known that the minimum variance affine unbiased estimator (MVAUE) of $\beta$ exists if and only if $X$ has linearly independent columns. In this case, the MVAUE is unique and given by

$$ \hat\beta = My = \arg\min_\beta \, (y - X\beta)^T V_0^+ (y - X\beta) $$

where

$$
\begin{align}
M &:= (X^T V_0^+ X)^{-1} X^T V_0^+ \\
V_0 &:= V + XUX^T
\end{align}
$$

and $U$ is any positive semidefinite matrix such that $\mathrm{col}\, X \subseteq \mathrm{col}\, V_0$, where $V_0 := V + XUX^T$. The superscript "$+$" denotes the Moore-Penrose pseudoinverse.

Chapter 4 (in particular, section i) of "Linear Statistical Inference and Its Applications" by C. R. Rao and Chapter 13 of Magnus and Neudecker's "Matrix Differential Calculus with Applications in Statistics and Econometrics" are two good references on the subject, for the curious.

My question

I want to show that the matrix

$$ M = (X^T (V + XUX^T)^+ X)^{-1} X^T (V + XUX^T)^+ $$

is independent of the particular choice of $U$, as long as $X$ has linearly independent columns and $U$ satisfies the aforementioned conditions (although I would be happy with a solution that strengthens the assumption on $U$ to $U > 0$).

I have solved the problem for two special cases, $V = 0$ and $V > 0$, and I offer these solutions below. A proof of the case where $V \geq 0$ is singular but nonzero has been elusive.

Partial solution: assume $V = 0$ and $U > 0$

We will show that $M = X^+$. First note that $MX = I$. Thus $XMX = X$ and $MXM = M$. Now observe

$$
\begin{align}
X^T (XUX^T)^+ X &= (U^{-1} X^+ X U) X^T (XUX^T)^+ X (U X^T X^{+T} U^{-1}) \\
&= (U^{-1} X^+) (X U X^T) (XUX^T)^+ (X U X^T) (X^{+T} U^{-1}) \\
&= (U^{-1} X^+) (X U X^T) (X^{+T} U^{-1}) \\
&= U^{-1} U U^{-1} \\
&= U^{-1}
\end{align}
$$

Thus

$$
\begin{align}
XM &= X (X^T (XUX^T)^+ X)^{-1} X^T (XUX^T)^+ \\
&= X U X^T (XUX^T)^+
\end{align}
$$

This is symmetric, by the properties of $(XUX^T)^+$. We have shown that $M$ satisfies the four conditions required to be the Moore-Penrose inverse of $X$.

Partial solution: assume $V > 0$ and $U > 0$

Applying the Woodbury matrix identity gives

$$
\begin{align}
(V + XUX^T)^{-1} &= V^{-1} - V^{-1} X (X^T V^{-1} X + U^{-1})^{-1} X^T V^{-1} \\
(V + XUX^T)^{-1} X &= V^{-1} X (X^T V^{-1} X + U^{-1})^{-1} U^{-1} \\
X^T (V + XUX^T)^{-1} X &= X^T V^{-1} X (X^T V^{-1} X + U^{-1})^{-1} U^{-1} \\
(X^T (V + XUX^T)^{-1} X)^{-1} &= U + (X^T V^{-1} X)^{-1}
\end{align}
$$

By combining these results in the right way, one can derive

$$ M = (X^T V^{-1} X)^{-1} X^T V^{-1} $$
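
Not a proof, but a numerical spot check of the claimed invariance in the $V > 0$ case may be reassuring; the dimensions and the particular random matrices below are made up, and ginv() from the MASS package plays the role of the Moore-Penrose pseudoinverse.

    library(MASS)  # for ginv(), the Moore-Penrose pseudoinverse
    set.seed(1)
    n <- 8; k <- 3
    X <- matrix(rnorm(n * k), n, k)          # full column rank (a.s.)
    A <- matrix(rnorm(n * n), n, n)
    V <- crossprod(A) + diag(n)              # a positive definite V

    M_of <- function(U) {
      V0 <- V + X %*% U %*% t(X)
      solve(t(X) %*% ginv(V0) %*% X) %*% t(X) %*% ginv(V0)
    }

    U1 <- diag(k)                            # one choice of U > 0
    B  <- matrix(rnorm(k * k), k, k)
    U2 <- crossprod(B) + diag(k)             # another choice of U > 0

    max(abs(M_of(U1) - M_of(U2)))            # ~ 1e-12: M unchanged by U
    max(abs(M_of(U1) - solve(t(X) %*% solve(V) %*% X) %*% t(X) %*% solve(V)))  # matches the GLS form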

Find the expectation and variance of the number of "V" shapes in a random graph

Q: There are $n$ vertices, and each pair of vertices is connected by an edge independently with probability $p$. Find the expectation and variance of the number of "V" shapes in the random graph, where a "V" is formed by $3$ vertices $\{i, j, k\}$ with exactly $2$ edges among them.


First, I let $N$ be the number of V shapes in the random graph and found that $E(N) = \binom{n}{3} \cdot 3p^2(1-p)$.

$\operatorname{Var}(N) = \sum_{(i,j,k)} \operatorname{Var}(Y) + \sum_{(i,j,k) \neq (i',j',k')} \operatorname{Cov}(Y, Y')$

where $Y$ is the indicator that the vertices $i, j, k$ form a V shape in the random graph.

$\operatorname{Var}(Y) = E(Y^2) - E(Y)^2 = 3p^2(1-p) - 9p^4(1-p)^2$

$\operatorname{Cov}(Y, Y') = E(Y_{\{i,j,k\}} Y_{\{i,j,k'\}}) - E(Y_{\{i,j,k\}}) E(Y_{\{i,j,k'\}}) = 3p^3(1-p)^2 - 9p^4(1-p)^2$

Therefore I have $\operatorname{Var}(N) = \binom{n}{3}\left(3p^2(1-p) - 9p^4(1-p)^2\right) + \binom{n}{2}\binom{n-2}{2} \cdot 2\left(3p^3(1-p)^2 - 9p^4(1-p)^2\right)$

Can anyone help me check whether my answer is correct? Thank you.
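
One way to check the numbers (though not the derivation itself) is a Monte Carlo simulation; a sketch in R, with the graph size and edge probability chosen arbitrarily, that counts triples with exactly two edges and compares the empirical mean and variance to the formulas above:

    set.seed(42)
    n <- 7; p <- 0.3; reps <- 20000

    count_V <- function(n, p) {
      # sample the adjacency matrix of G(n, p)
      A <- matrix(0, n, n)
      A[upper.tri(A)] <- rbinom(n * (n - 1) / 2, 1, p)
      A <- A + t(A)
      total <- 0
      for (trip in combn(n, 3, simplify = FALSE)) {
        e <- A[trip[1], trip[2]] + A[trip[1], trip[3]] + A[trip[2], trip[3]]
        if (e == 2) total <- total + 1   # exactly two edges: a "V"
      }
      total
    }

    N <- replicate(reps, count_V(n, p))
    EN_formula   <- choose(n, 3) * 3 * p^2 * (1 - p)
    VarN_formula <- choose(n, 3) * (3 * p^2 * (1 - p) - 9 * p^4 * (1 - p)^2) +
      choose(n, 2) * choose(n - 2, 2) * 2 *
      (3 * p^3 * (1 - p)^2 - 9 * p^4 * (1 - p)^2)
    c(empirical = mean(N), formula = EN_formula)
    c(empirical = var(N),  formula = VarN_formula)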

pr.probability – Variance of weighted sums of uniform variables

Suppose that $\{X_i\}$, $1 \le i \le n$, are i.i.d. uniform random variables on the interval $(-c/2, c/2)$, $c \in \mathbb{R}^+$. Let $\{\alpha_i \mid \alpha_i \in \mathbb{R}\}$ be such that $\sum_{i=1}^n \alpha_i^2 = 1$ (that is, $\alpha = (\alpha_1, \ldots, \alpha_n) \in S^{n-1}$). Finally, let $Y = \sum_{i=1}^n \alpha_i X_i$ be the weighted sum of the $X_i$. Is there a convenient (i.e., closed-form) representation of the variance of $Y$? Should I expect to know much more about $Y$, apart from guessing that we should have $\mathbb{E}(Y) = 0$?

Unfortunately, I have hardly received any training in statistics, but I would like to understand something about the properties of a certain class of random vectors for my work.
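
Since the $X_i$ are independent with $\operatorname{Var}(X_i) = c^2/12$, one has $\operatorname{Var}(Y) = \sum_i \alpha_i^2 \, c^2/12 = c^2/12$, whatever $\alpha$ is. A small R illustration of this (the particular $n$, $c$ and $\alpha$ below are arbitrary):

    set.seed(7)
    n <- 10; c_len <- 2
    alpha <- rnorm(n); alpha <- alpha / sqrt(sum(alpha^2))   # random point on the unit sphere
    Y <- replicate(50000, sum(alpha * runif(n, -c_len / 2, c_len / 2)))
    c(empirical = var(Y), c_squared_over_12 = c_len^2 / 12)  # the two agree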

correlation – inner product equal to 1? (where X has mean zero and variance 1)

I was looking at a proof concerning two random variables, each correlated with a third.

I cannot understand some parts of the proof regarding the inner product. For example, the proof assumes that X, a random variable, has mean 0 and variance 1, and so does Y. Why is it then that $\langle X, X \rangle = 1$ and that $\langle X, Y \rangle$ is the correlation
coefficient of X and Y? For $\langle X, X \rangle$, I would think $\operatorname{Var}(X) = 1 = E(X^2) - 0 = \langle X, X \rangle / n$, but not $\langle X, X \rangle$.
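
A guess at the source of confusion: if the proof defines the inner product as $\langle X, Y \rangle = E[XY]$ (an expectation, not a raw sample dot product), then $\langle X, X \rangle = \operatorname{Var}(X) = 1$ and $\langle X, Y \rangle$ is exactly the correlation for standardized variables; the division by $n$ only appears when the expectation is approximated by a sample average. A small R illustration of that distinction (the 0.6 correlation and the sample size are arbitrary):

    set.seed(1)
    n <- 1e5
    x <- rnorm(n)
    y <- 0.6 * x + sqrt(1 - 0.6^2) * rnorm(n)   # both ~ N(0, 1), cor(x, y) ~ 0.6
    sum(x * x)          # ~ n, the raw sample dot product
    sum(x * x) / n      # ~ 1 = Var(X) = E[X^2]
    sum(x * y) / n      # ~ 0.6 = the correlation coefficient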

st.statistics: minimizing the asymptotic variance of an ergodic average subject to a set of constraints

Let

  • $(E, \mathcal E, \lambda)$ and $(E', \mathcal E', \lambda')$ be measure spaces
  • $I$ be a nonempty finite set
  • $\varphi_i : E' \to E$ be $(\mathcal E', \mathcal E)$-measurable with $$\lambda' \circ \varphi_i^{-1} = q_i\lambda \tag1$$ for $i \in I$
  • $p, q_i : E \to (0, \infty)$ be $\mathcal E$-measurable with $$\int p \,{\rm d}\lambda = \int q_i \,{\rm d}\lambda = 1$$
  • $w_i : E \to (0, 1)$ be $\mathcal E$-measurable, $w_i' := w_i \circ \varphi_i$, $$p_i' := \begin{cases} \frac p{q_i} \circ \varphi_i & \text{on } \left\{q_i \circ \varphi_i > 0\right\} \\ 0 & \text{on } \left\{q_i \circ \varphi_i = 0\right\} \end{cases}$$ and $$f_i' := \begin{cases} \frac f{q_i} \circ \varphi_i & \text{on } \left\{q_i \circ \varphi_i > 0\right\} \\ 0 & \text{on } \left\{q_i \circ \varphi_i = 0\right\} \end{cases}$$ for $i \in I$
  • $\zeta$ denote the counting measure on $(I, 2^I)$ and $$\nu' := w'p'\,(\zeta \otimes \lambda')$$

Let $f \in L^2(\lambda)$. Assume $$\{q_i = 0\} \subseteq \{w_i p = 0\},\tag2$$ $$\{p = 0\} \subseteq \{f = 0\}\tag3$$ and $$\{pf \ne 0\} \subseteq \left\{\sum_{i \in I} w_i = 1\right\}.\tag4$$

Let $((T_n, X_n'))_{n \in \mathbb N}$ be the Markov chain (assumed to be in stationarity) generated by the Metropolis-Hastings algorithm with target distribution $\nu'$ and $$A_n := \frac1n \sum_{i = 0}^{n - 1} \frac{f'}{p'}(T_i, X_i') \;\;\; \text{for } n \in \mathbb N.$$ I want to minimize the asymptotic variance $$\sigma^2 := \lim_{n \to \infty} n \operatorname{Var} A_n$$ with respect to the $w_i$. How can we do that?

I know that if $(Y_n)_{n \in \mathbb N_0}$ is any time-homogeneous Markov chain, $\mu := \mathcal L(Y_0)$, $g \in L^2(\mu)$ and $B_n := \frac1n \sum_{i = 0}^{n - 1} g(Y_i)$, then $\operatorname{Var} B_n = \frac1n \left(\operatorname{Var}_\mu(g) + 2\sum_{i = 1}^{n - 1} \left(1 - \frac in\right) \operatorname{Cov}(g(Y_0), g(Y_i))\right)$. Furthermore, if $L^2_0(\mu) := \left\{h \in L^2(\mu) : \int h \,{\rm d}\mu = 0\right\}$, $\mathcal D(G) := \left\{h_0 \in L^2_0(\mu) : \left(\sum_{i = 0}^n \kappa^i h_0\right)_{n \in \mathbb N_0} \text{ is convergent}\right\}$, $$Gh_0 := \sum_{n = 0}^\infty \kappa^n h_0 \;\;\; \text{for } h_0 \in \mathcal D(G),$$ and $g_0 := g - \int g \,{\rm d}\mu \in \mathcal D(G)$, then $$n \operatorname{Var} B_n \xrightarrow{n \to \infty} 2\langle Gg_0, g_0\rangle_{L^2(\mu)} - \operatorname{Var}_\mu(g) \tag5.$$ In particular, letting $\mathcal L := -(1 - \kappa)$, we can consider the spectral gap of $\mathcal L$, $$\operatorname{gap} \mathcal L = \inf_{\substack{h \in L^2(\mu) \setminus \{0\} \\ 1 \,\perp\, h}} \frac{\langle -\mathcal L h, h\rangle_{L^2(\mu)}}{\left\|h\right\|_{L^2(\mu)}^2} = 1 - \left\|\kappa\right\|_{\mathfrak L(L^2_0(\mu))},$$ where we consider $\kappa$ as a nonnegative self-adjoint operator on $L^2(\mu)$. With this definition, the right-hand side of $(5)$ is at most $\left(\frac2{\operatorname{gap} \mathcal L} - 1\right) \operatorname{Var}_\mu(g)$.
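
Not an answer to the optimization question, but here is a finite-state sanity check of the limit in $(5)$ that I find useful; everything below (the two-state kernel, $g$, the numbers) is made up, $\kappa$ is played by the transition matrix $P$, and $Gg_0$ is obtained by solving $(I - P)x = g_0$ among mean-zero functions.

    set.seed(3)
    a <- 0.3; b <- 0.2
    P  <- matrix(c(1 - a, a, b, 1 - b), 2, 2, byrow = TRUE)
    mu <- c(b, a) / (a + b)                  # stationary distribution of P
    g  <- c(1, 4)
    g0 <- g - sum(mu * g)                    # g centered under mu

    # G g0 = sum_{n >= 0} P^n g0 solves (I - P) x = g0; pin the solution down
    # by also requiring sum(mu * x) = 0 (appended as an extra equation).
    Gg0 <- qr.solve(rbind(diag(2) - P, mu), c(g0, 0))
    sigma2_formula <- 2 * sum(mu * Gg0 * g0) - sum(mu * g0^2)

    # Empirical n * Var(B_n) from chains started in stationarity
    B_n <- function(n) {
      y <- numeric(n)
      y[1] <- sample(1:2, 1, prob = mu)
      for (t in 2:n) y[t] <- sample(1:2, 1, prob = P[y[t - 1], ])
      mean(g[y])
    }
    n <- 1000
    c(formula = sigma2_formula, empirical = n * var(replicate(3000, B_n(n))))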

stochastic processes: variance of a random variable obtained from a linear transformation

Edit: I needed to revise this question, as suggested.

Suppose there are $N$ realizations of a Gaussian process, denoted as vectors $\mathbf{z}_j \in \mathbb{R}^n$ for $j = 1, \ldots, N$. Let $y$ be a random variable such that $y = \sum_{j=1}^{N} (\mathbf{B}\mathbf{z}_j)(i)$,
where $\mathbf{B}$ is a unitary matrix. What is the variance of $y$?

Notation: boldface denotes a vector or matrix. $(\mathbf{B}\mathbf{x})(i)$ denotes the $i$-th entry of the vector $\mathbf{B}\mathbf{x}$.
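
The question does not say how the realizations $\mathbf{z}_j$ are related to each other. Under the added assumption that they are i.i.d. $N(\mathbf{0}, \Sigma)$ draws, $\operatorname{Var}(y) = N\,(\mathbf{B}\Sigma\mathbf{B}^T)_{ii}$. A small R sketch of that special case (all the numbers below are made up, and a real orthogonal matrix stands in for the unitary $\mathbf{B}$):

    library(MASS)   # for mvrnorm()
    set.seed(2)
    n <- 5; N <- 3; i <- 2
    A <- matrix(rnorm(n * n), n, n)
    Sigma <- crossprod(A) + diag(n)          # an arbitrary covariance matrix
    B <- qr.Q(qr(A))                         # an orthogonal (real unitary) matrix

    y <- replicate(20000, {
      z <- mvrnorm(N, mu = rep(0, n), Sigma = Sigma)   # N x n matrix of i.i.d. draws
      sum((B %*% t(z))[i, ])                           # sum_j (B z_j)(i)
    })
    c(empirical = var(y), formula = N * (B %*% Sigma %*% t(B))[i, i])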

probability – variance of a fair coin

Consider that Vamshi decides to toss a fair coin repeatedly until he gets a tail. He makes at most $4$ tosses. The value of the variance of $T$ is ______


I tried this the way one computes the standard deviation of student marks:

$$\begin{array}{c|cccc} x & 1 & 2 & 3 & 4 \\ \hline P(x) & \frac{1}{2} & \frac{1}{4} & \frac{1}{8} & \frac{1}{16} \end{array}$$

The average is $\frac{1}{4}\left(\frac{1}{2} + \frac{1}{2^2} + \frac{1}{2^3} + \frac{1}{2^4}\right) = \frac{15}{64}$

Then, the variance will be,

$\frac{1}{4}\left(\left(\frac{15}{64} - \frac{1}{2}\right)^{2} + \left(\frac{15}{64} - \frac{1}{4}\right)^{2} + \left(\frac{15}{64} - \frac{1}{8}\right)^{2} + \left(\frac{15}{64} - \frac{1}{16}\right)^{2}\right) = \frac{460}{16384}$


But the given answer is as follows.

$E\left(X^{2}\right) = 1^{2} \times \frac{1}{2} + 2^{2} \times \frac{1}{4} + 3^{2} \times \frac{1}{8} + 4^{2} \times \frac{1}{16}$

$E\left(X\right) = 1 \times \frac{1}{2} + 2 \times \frac{1}{4} + 3 \times \frac{1}{8} + 4 \times \frac{1}{16}$

$V\left(X\right) = E\left(X^{2}\right) - \left(E\left(X\right)\right)^{2} = \frac{252}{256}$


Why is my approach giving incorrect results?
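
The short reason is that the variance of a distribution weights each value $x$ by its probability $P(x)$, whereas the "student marks" calculation above averages the probabilities themselves and weights everything equally by $1/4$. A short R comparison of the two computations, reproducing the numbers above:

    x <- 1:4
    p <- c(1/2, 1/4, 1/8, 1/16)

    # The question's approach: treats the four probabilities as equally likely data
    m_unw <- mean(p)                         # 15/64, which is not E(X)
    sum((m_unw - p)^2) / 4                   # 460/16384

    # Probability-weighted computation
    EX  <- sum(x * p)                        # 13/8
    EX2 <- sum(x^2 * p)                      # 29/8
    EX2 - EX^2                               # 63/64 = 252/256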

probability – the variance of a sample from a normal population

Please consider the problem and my solution below. I agree with the answer in the back of the book, but somehow my solution does not seem right to me. Did I do it the right way?

Problem:
A normal population has a variance of $15$. If samples of size $5$ are drawn from this population, what percentage of them can be expected to have a variance (a) less than $10$?

Answer:

Let $S^2$ be the sample variance and $n$ the sample size. The quantity $nS^2/\sigma^2$ has a chi-square distribution with $4$ degrees of freedom.
\begin{align*}
\sigma^2 &= 15 \\
\frac{nS^2}{\sigma^2} &= \frac{5S^2}{15} = \frac{S^2}{3} \\
S^2 &= 10 \\
\frac{nS^2}{\sigma^2} &= 10/3 \\
\end{align*}

Using R we find:
pchisq(10/3, df = 4) = 0.496
Therefore the answer is $0.496$.
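
A quick simulation cross-check of the $0.496$ figure (a sketch only; note that R's var() divides by $n - 1$, while the $S^2$ used here divides by $n$, hence the adjustment below):

    set.seed(5)
    n <- 5; sigma2 <- 15; reps <- 1e5
    # sample variance with divisor n, so that n * S^2 / sigma^2 ~ chi-square(n - 1)
    S2 <- replicate(reps, var(rnorm(n, sd = sqrt(sigma2))) * (n - 1) / n)
    mean(S2 < 10)                         # ~ 0.496
    pchisq(n * 10 / sigma2, df = n - 1)   # 0.496, the value used above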