probability – Showing the independence of two random centred Gaussian vectors

Let $X = (X_1, \ldots, X_d)$ be a centred Gaussian vector composed of i.i.d. random variables. I have two questions. The first one is whether my approach is correct.

  1. I want to show that, $O$ being an orthogonal $d \times d$ matrix, $OX$ has the same law as $X$.

The way I did it was as follows: a general Gaussian vector $X$ has the law $N(\mu_X, \Sigma_X)$. I want to show that $Y = OX$ has the same law as $X$; since $X = O^{-1}Y$, this amounts to saying that $P_Y(y) \propto P_X(O^{-1}y)$

(leaving the normalisation constant away) $\propto \exp\left(-\tfrac{1}{2}(O^{-1}y - \mu_X)^T \Sigma_X^{-1}(O^{-1}y - \mu_X)\right) = \exp\left(-\tfrac{1}{2}(y - O\mu_X)^T O^{-T}\Sigma_X^{-1}O^{-1}(y - O\mu_X)\right) = \exp\left(-\tfrac{1}{2}(y - O\mu_X)^T (O\Sigma_X O^T)^{-1}(y - O\mu_X)\right)$, which is the density of $N(O\mu_X, O\Sigma_X O^T)$. Therefore this has the same law as $X$. Is this correct? And if so, what is the last sentence of my argument?
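
As a sanity check on this claim (not a proof), here is a quick numerical sketch, assuming for illustration that the coordinates are standard normal and $O$ is a Haar-random orthogonal matrix, comparing the first two moments of $X$ and $OX$:

```python
import numpy as np
from scipy.stats import ortho_group  # random orthogonal matrix (Haar measure)

rng = np.random.default_rng(0)
d, n_samples = 4, 200_000

O = ortho_group.rvs(dim=d, random_state=0)   # a fixed orthogonal d x d matrix
X = rng.standard_normal((n_samples, d))      # rows are i.i.d. N(0, I_d) vectors
Y = X @ O.T                                  # each row is O applied to the corresponding row of X

# Both sample means should be ~0 and both sample covariances ~I_d.
print("mean of X :", X.mean(axis=0).round(3))
print("mean of OX:", Y.mean(axis=0).round(3))
print("cov of X :\n", np.cov(X, rowvar=False).round(3))
print("cov of OX:\n", np.cov(Y, rowvar=False).round(3))
```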

  2. I want to show that when $a=(a_1,\ldots,a_d)$ and $b=(b_1,\ldots,b_d)$ are two orthogonal vectors in $\mathbb R^d$, then, by considering an orthogonal matrix $O$ whose first two columns coincide with $a$ and $b$, one can show that $\sum_{i=1}^d a_i X_i$ and $\sum_{i=1}^d b_i X_i$ are independent. How is this done? (A quick numerical check of the claim is sketched below.)
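
Here is the corresponding check for the second question: a Monte Carlo sketch (again assuming standard-normal coordinates and an orthogonal pair $a$, $b$ chosen purely for illustration) that the two linear combinations are uncorrelated, which for jointly Gaussian variables is equivalent to independence:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_samples = 5, 500_000

# Two orthogonal vectors chosen for illustration (any orthogonal pair works).
a = np.array([1.0, 1.0, 0.0, 2.0, 0.0])
b = np.array([1.0, -1.0, 3.0, 0.0, 1.0])
assert abs(a @ b) < 1e-12                 # check orthogonality

X = rng.standard_normal((n_samples, d))   # i.i.d. centred Gaussian coordinates
S, T = X @ a, X @ b                       # S = sum_i a_i X_i,  T = sum_i b_i X_i

# S and T are jointly Gaussian, so zero correlation is equivalent to independence;
# the sample correlation should therefore be close to 0.
print("sample correlation:", np.corrcoef(S, T)[0, 1])
```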

pr.probability – Computing the expectation of a functional applied to a Gaussian Process

First, a definition: a process $Z$ over $\mathbb R^n$ is said to be a Gaussian Process on $\mathbb R^n$ with mean function $m(\cdot)$ and covariance function $k(\cdot, \cdot)$ if for any integer $p$ and any set of points $\{ x_1, \ldots, x_p \} \subset \mathbb R^n$, the vector $(Z(x_i))_{1 \leq i \leq p}$ is multivariate Gaussian with mean $(m(x_i))_{1 \leq i \leq p}$ and covariance $(k(x_i, x_j))_{1 \leq i, j \leq p}$.

Given $Z$, a Gaussian Process that is almost surely exponentially integrable on $(0, 1)^n$, and $U = (U_i)_{1 \leq i \leq n}$ a vector of i.i.d. random variables distributed uniformly on $(0, 1)$, can I say anything about the following quantity?

$$\mathbb{E}_Z \left(\dfrac{1}{\mathbb{E}_U \left( e^{Z(U)}\right)} \right)$$

where $\mathbb{E}_Z$ (resp. $\mathbb{E}_U$) denotes the expectation taken with respect to $Z$ (resp. $U$).

So far, I have studied sufficient conditions on $m$ and $k$ for $Z$ to be exponentially integrable almost surely, and therefore to have $\mathbb{E}_U \left( e^{Z(U)}\right) < +\infty$.

However, I don’t know where to start for $\mathbb{E}_Z \left(\dfrac{1}{\mathbb{E}_U \left( e^{Z(U)}\right)} \right)$; do you have any pointers or interesting references I can read?
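
In case experimenting helps: a minimal Monte Carlo sketch, assuming for illustration $n = 1$, $m \equiv 0$, and a squared-exponential covariance $k(x,x') = \exp(-(x-x')^2/(2\ell^2))$ (none of which is imposed by the question), which estimates the nested expectation by sampling paths of $Z$ on a grid and replacing $\mathbb{E}_U$ by the grid average:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choices (not from the question): n = 1, m(x) = 0,
# squared-exponential covariance with length scale ell.
ell = 0.2
grid = np.linspace(0.0, 1.0, 200)                     # discretisation of (0, 1)
K = np.exp(-0.5 * (grid[:, None] - grid[None, :])**2 / ell**2)
K += 1e-10 * np.eye(len(grid))                        # jitter for numerical stability
L = np.linalg.cholesky(K)

n_paths = 2000
vals = []
for _ in range(n_paths):
    Z = L @ rng.standard_normal(len(grid))            # one sample path of Z on the grid
    inner = np.mean(np.exp(Z))                        # ~ E_U[e^{Z(U)}] for U uniform on (0,1)
    vals.append(1.0 / inner)

print("Monte Carlo estimate of E_Z[1 / E_U[e^{Z(U)}]]:", np.mean(vals))
```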

Thanks in advance.

brownian motion – Compute an integral related to a Gaussian

Can anyone help me solve this integral? I think we can try polar coordinates and some properties of the Gaussian density, but I have been stuck for a long time. WolframAlpha also cannot compute this integral.
$$
\int_{0}^{\infty} \int_{-2^{n/2} x}^{\infty} \frac{1}{2 \pi\sqrt{j-1}} \exp \left(-\frac{x^{2}}{2(j-1)}-\frac{y^{2}}{2}\right) \, dy \, dx
$$

Thanks a lot!
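
A numerical sketch (with illustrative values of $n$ and $j$, which the question leaves free): it evaluates the double integral directly with scipy, and compares against the closed form suggested by the substitution $u = x/\sqrt{j-1}$ followed by polar coordinates, which turns the integral into the mass of a wedge of angle $\pi/2 + \arctan\left(2^{n/2}\sqrt{j-1}\right)$ under a standard bivariate Gaussian (worth double-checking before relying on it):

```python
import numpy as np
from scipy import integrate

# Illustrative parameter values (the question leaves n and j free).
n, j = 2, 3
c = 2 ** (n / 2) * np.sqrt(j - 1)

# Direct numerical evaluation: dblquad integrates over y first (inner), then x (outer);
# the inner lower limit depends on x through -2^{n/2} x.
def integrand(y, x):
    return np.exp(-x**2 / (2 * (j - 1)) - y**2 / 2) / (2 * np.pi * np.sqrt(j - 1))

val, err = integrate.dblquad(integrand, 0, np.inf,
                             lambda x: -2 ** (n / 2) * x, lambda x: np.inf)

# Substituting u = x / sqrt(j-1) gives the probability that a standard 2-D Gaussian
# lies in the wedge {u > 0, y > -c u}, whose angle is pi/2 + arctan(c), which
# suggests the closed form below.
closed_form = 0.25 + np.arctan(c) / (2 * np.pi)

print("numerical               :", val, "+/-", err)
print("closed form (conjectured):", closed_form)
```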

linear algebra – Limit spectrum of composition of projections onto random Gaussian vectors

Let $n > p$, and let $X \in \mathbb{R}^{n \times p}$ be a matrix whose columns $X_1, \ldots, X_p$ are zero-mean Gaussian, of covariance $(\rho^{|i - j|})_{i, j \in \{1, \ldots, p\}}$ ($\rho \in (0, 1)$).

Are there known results (asymptotic or not) on the eigenvalue distribution of:

$$\left(\mathrm{Id} - \tfrac{1}{\|X_1\|^2} X_1 X_1^\top \right)
\cdots \left(\mathrm{Id} - \tfrac{1}{\|X_p\|^2} X_p X_p^\top \right) \enspace ?$$

From a geometrical point of view, this is the matrix of the map which projects sequentially onto the orthogonal complement of the span of $X_p$, then onto that of $X_{p-1}$, etc., so all eigenvalues are in the unit disk; $0$ is an eigenvalue, and $1$ is also an eigenvalue since $n > p$.
I would expect the spectrum to “move towards” 1 as $rho$ increases, but are there any quantitative results on that?
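
For experimentation, a small numerical sketch (with illustrative sizes, and reading the covariance as saying that each row of $X$ is $N(0, \Sigma)$ with $\Sigma_{ij} = \rho^{|i-j|}$, which is one interpretation of the setup above) that builds the product of projectors and looks at its spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, rho = 50, 10, 0.5     # illustrative sizes

# Columns X_1, ..., X_p: here each row of X is drawn from N(0, Sigma) with
# Sigma_{ij} = rho^{|i-j|}, one reading of the covariance structure above.
Sigma = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)   # n x p

M = np.eye(n)
for k in range(p):                      # build (I - P_1) ... (I - P_p), left to right
    x = X[:, k]
    M = M @ (np.eye(n) - np.outer(x, x) / (x @ x))

eig = np.linalg.eigvals(M)
print("largest |eigenvalues| :", np.sort(np.abs(eig))[-5:])
print("smallest |eigenvalues|:", np.sort(np.abs(eig))[:5])
```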

reference request – (random fields / gaussian process): On rewriting a certain expectation as a kernel function

Let $v = (v_1,\ldots,v_n)$ and $w = (w_{1,1},\ldots,w_{1,m},\ldots, w_{n,m})$ be random vectors with i.i.d. coordinates, with $v$ independent of $w$, where $w_{i,j} \sim N(0,1/m)$ and $v_i \sim N(0,1/n)$, for $i=1,\ldots,n$, $j=1,\ldots,m$. Define a random function $f_{w,v}:\mathbb R^m \to \mathbb R$ by $f_{w,v}(x) := \sum_{i=1}^n v_i\,\phi\left(\sum_{j=1}^m w_{i,j}x_j\right)$, where $\phi(t):=\max(t,0)$. Fix two points $a, b \in \mathbb R^m$.

Question. How to go about computing
$$
p_{m,n}(a,b) := \mathbb P\left(f_{w,v}(a)\, f_{w,v}(b) > 0\right),
$$

or even just $\lim_{n \to \infty}\lim_{m \to \infty}p_{m,n}(a,b)$?

Note that $p_{m,n}(a,b)$ is simply the probability that the random field $x \mapsto f_{w,v}(x)$ has the same sign at $x=a$ and $x=b$.

Observations

  • Conditioned on $w$, we compute (see this math.SE post)
    $$\mathbb P\left(f_{w,v}(a)\,f_{w,v}(b) > 0 \mid w\right) = \kappa_0(\phi_w(a),\phi_w(b)),
    $$

    where $\mathbb R^n \ni \phi_w(x) := \left(\phi\left(\sum_{j=1}^m w_{i,j}x_j\right)\right)_{1 \le i \le n}$, and $\kappa_0(z,z') := 1-\frac{1}{\pi}\arccos\left(\frac{z^T z'}{\|z\|\,\|z'\|}\right) \in (0, 1)$ is the arc-cosine kernel of order $0$.
  • Thus, we have the identity

$$
p_{m,n}(a,b) = \mathbb E_w\left(\kappa_0(\phi_w(a),\phi_w(b))\right).\tag{2}
$$

Question. Can the formula (2) be further simplified? Is it linked to some other kernels?
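
A Monte Carlo sketch (with illustrative $m$, $n$, $a$, $b$) checking identity (2) numerically, by comparing the direct sign-agreement frequency with the average of $\kappa_0(\phi_w(a), \phi_w(b))$ over draws of $w$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 50                      # illustrative sizes
a = rng.standard_normal(m)
b = rng.standard_normal(m)

def kappa0(z, zp):
    """Arc-cosine kernel of order 0: 1 - angle(z, z') / pi."""
    c = z @ zp / (np.linalg.norm(z) * np.linalg.norm(zp))
    return 1.0 - np.arccos(np.clip(c, -1.0, 1.0)) / np.pi

n_mc, same_sign, kernel_vals = 20_000, 0, []
for _ in range(n_mc):
    w = rng.normal(0.0, np.sqrt(1.0 / m), size=(n, m))
    v = rng.normal(0.0, np.sqrt(1.0 / n), size=n)
    phi_a, phi_b = np.maximum(w @ a, 0.0), np.maximum(w @ b, 0.0)
    same_sign += (v @ phi_a) * (v @ phi_b) > 0
    kernel_vals.append(kappa0(phi_a, phi_b))

print("direct Monte Carlo p_{m,n}(a,b) ~", same_sign / n_mc)
print("E_w[kappa_0(...)] (identity 2)  ~", np.mean(kernel_vals))
```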

Does additive Gaussian noise preserve the Shannon entropy ordering?

Suppose that $Z$ is a Gaussian random variable independent of $X$ and $Y$. Moreover, suppose that $h(X) \geq h(Y)$, where $h(\cdot)$ is the differential Shannon entropy.

Does the relation $h(X+Z) \geq h(Y+Z)$ hold in general, or under some conditions? Certainly, if they are Gaussian, it holds.

pr.probability – Projection onto column space perturbed by Gaussian noise

Suppose we have a matrix $X\in\mathbb{R}^{m\times n}$ (with $m \ge n$) with i.i.d. standard Gaussian entries, and suppose we have a noise matrix $W\in\mathbb{R}^{m\times n}$ with i.i.d. Gaussian entries, but with some small variance $\sigma_W < 1$. We know that the columns of $X$ and of $X+W$ are linearly independent almost surely, so each set of columns forms a basis of its span. I am interested in knowing how the noise $W$ changes the projection matrix $X(X^TX)^{-1}X^T$ of $X$. For instance, denoting $X_W = X+W$ and $x_j$ the $j$-th column of $X$, can we say anything about $X_W(X_W^TX_W)^{-1}X_W^Tx_j$? I am especially interested in references that discuss this kind of problem. I know standard matrix perturbation theory results could be applied here, but I am looking for a more systematic and refined analysis.
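
For concreteness, a small numerical sketch (treating $\sigma_W$ as a standard deviation and using illustrative sizes) that measures how far the perturbed projector moves a column of $X$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 50                       # m >= n so the columns are linearly independent a.s.
X = rng.standard_normal((m, n))
j = 0                                # look at the first column

def proj(A):
    """Orthogonal projector onto the column space of A."""
    return A @ np.linalg.solve(A.T @ A, A.T)

x_j = X[:, j]
for sigma_W in (0.01, 0.05, 0.1, 0.5):
    W = sigma_W * rng.standard_normal((m, n))
    P_W = proj(X + W)
    # x_j lies in col(X), so the unperturbed projector fixes it exactly;
    # measure how far P_W x_j drifts from x_j.
    rel_err = np.linalg.norm(P_W @ x_j - x_j) / np.linalg.norm(x_j)
    print(f"sigma_W = {sigma_W:4.2f}  ->  ||P_W x_j - x_j|| / ||x_j|| = {rel_err:.4f}")
```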

pr.probability – Convolution of two Gaussian mixture models

Suppose I have two independent random variables $X$ and $Y$, each modeled by a Gaussian mixture model (GMM). That is,
$$
f(x)=\sum_{k=1}^K \pi_k\, \mathcal{N}\left(x \mid \mu_k,\sigma_k\right)
$$

$$
g(y)=\sum_{i=1}^N \lambda_i\, \mathcal{N}\left(y \mid \mu_i,\sigma_i\right)
$$

Now, I would like to know the PDF of another variable $Z$, where $Z=X+Y$.

Is there anyone who can write the explicit PDF of $Z$?
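
For what it's worth, a quick numerical sketch (treating the $\sigma$'s as standard deviations) comparing the numerically convolved density with the candidate closed form, namely the mixture over all pairs of components $\sum_{k,i}\pi_k\lambda_i\,\mathcal N\left(z \mid \mu_k+\mu_i,\sqrt{\sigma_k^2+\sigma_i^2}\right)$:

```python
import numpy as np
from scipy.stats import norm

# Illustrative mixture parameters (sigma's taken as standard deviations here).
pi_, mu_f, sig_f = np.array([0.3, 0.7]), np.array([-1.0, 2.0]), np.array([0.5, 1.0])
lam, mu_g, sig_g = np.array([0.6, 0.4]), np.array([0.0, 3.0]), np.array([1.5, 0.8])

z = np.linspace(-10, 15, 2001)
dz = z[1] - z[0]

f = sum(p * norm.pdf(z, m, s) for p, m, s in zip(pi_, mu_f, sig_f))
g = sum(l * norm.pdf(z, m, s) for l, m, s in zip(lam, mu_g, sig_g))

# Numerical convolution of the two densities on the grid.
conv = np.convolve(f, g) * dz
z_conv = np.linspace(2 * z[0], 2 * z[-1], len(conv))

# Candidate closed form: mixture over all K*N pairs, means and variances added.
h = sum(p * l * norm.pdf(z_conv, mf + mg, np.hypot(sf, sg))
        for p, mf, sf in zip(pi_, mu_f, sig_f)
        for l, mg, sg in zip(lam, mu_g, sig_g))

print("max |numerical conv - pairwise mixture| =", np.abs(conv - h).max())
```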

How do you calculate the expected value $E\left\{e^{-|X|}\right\}$, e.g. for Gaussian $X$?

If $X$ is a random variable, I would like to be able to calculate something like $$E\left\{e^{-|X|}\right\}$$
How can I do this, e.g., for a normally distributed $X$?
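
A sketch for the zero-mean Gaussian case (the general case splits the integral at $0$ in the same way): direct numerical integration compared with the closed form $2e^{\sigma^2/2}\Phi(-\sigma)$ that completing the square suggests (worth re-deriving before relying on it):

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

# Illustrative case: X ~ N(0, sigma^2).
sigma = 1.3

# Direct numerical integration of e^{-|x|} against the Gaussian density
# (using symmetry: integrate over x > 0 and double).
val = 2 * integrate.quad(lambda x: np.exp(-x) * norm.pdf(x, 0, sigma), 0, np.inf)[0]

# Splitting at 0 and completing the square suggests the closed form
# E[e^{-|X|}] = 2 * exp(sigma^2 / 2) * Phi(-sigma).
closed = 2 * np.exp(sigma**2 / 2) * norm.cdf(-sigma)

print("numerical  :", val)
print("closed form:", closed)
```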

Can Gaussian elimination find a zero vector when $m < n$?

Given $m$ rows with $n$ variables, where $m < n$, and given that there exists a linear combination of some of the rows that gives the zero vector.

Is it guaranteed that computing the reduced row echelon form with Gaussian elimination will produce a zero row?
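
For intuition (an illustrative example, not a proof), a small sympy check: when the rows are linearly dependent, the row rank is below $m$, so reducing to RREF leaves at least one all-zero row:

```python
import sympy as sp

# A small example: 3 rows in 5 variables (m < n) where row3 = row1 + row2,
# so the rows are linearly dependent.
A = sp.Matrix([
    [1, 2, 0, 1, 3],
    [0, 1, 1, 2, 1],
    [1, 3, 1, 3, 4],   # = row1 + row2
])

rref, pivots = A.rref()
print(rref)            # the last row comes out as all zeros
print("rank =", len(pivots), "out of", A.rows, "rows")
```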