probability theory – Control the sum of the square of martingale increments

Let $X_n$ be a martingale which is $L^1$-bounded: $\sup_{n} E(|X_n|) = K < \infty$. Prove that
$$
\sum_{i=2}^\infty (X_i - X_{i-1})^2 < \infty \quad \text{a.s.}
$$

Our professor gave a hint: consider a truncation. Introduce a stopping time $\tau_L := \inf\{n : |X_n| \ge L\}$ and show that
$$
E\left(\sum_{i=2}^n (X_i - X_{i-1})^2 \, 1_{\{\tau_L > n\}}\right)
$$

is uniformly bounded in $n$.

I do not know how to deal with the sum $\sum_{i=2}^n (X_i - X_{i-1})^2 \, 1_{\{\tau_L > n\}}$. Do we have
$$
E\left(\sum_{i=2}^n (X_i - X_{i-1})^2 \, 1_{\{\tau_L > n\}}\right) = E\left((X_n - X_1)^2 \, 1_{\{\tau_L > n\}}\right) \le 4L^2?
$$
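For reference, a hedged sketch of the identity the hint seems to aim at, stated for a square-integrable martingale $M$ with increments $d_i := M_i - M_{i-1}$ (square-integrability is an extra assumption that has to be checked for whatever stopped version of $X$ one works with). For $i < j$, the tower property gives $E(d_i d_j) = E\big(d_i \, E(d_j \mid \mathcal{F}_{j-1})\big) = 0$, so the increments are orthogonal and
$$
E\big((M_n - M_1)^2\big) = E\Big(\Big(\sum_{i=2}^n d_i\Big)^2\Big) = \sum_{i=2}^n E\big(d_i^2\big).
$$
Applying this to a stopped process such as $M_n := X_{n \wedge \tau_L}$, whose increments coincide with those of $X$ on $\{\tau_L > n\}$, is one way to make the hint precise. Note that the proposed equality above is not immediate as written: $1_{\{\tau_L > n\}}$ is not $\mathcal{F}_{i-1}$-measurable, so the cross terms do not obviously vanish, which is presumably why the stopping time is introduced.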

pr.probability – Probability Density Function (Find C1)

Let $R$ be the region inside the curve $r = 2 + 4\cos(\theta)$. I was first asked to find the value of $C_1$ so that $f(r,\theta)$ is a joint density function, where $f(r,\theta) = C_1(1 - r^2)$ if $(r,\theta) \in R$. The issue is that I am not exactly sure how to do this. In my mind, I am thinking I could integrate the function $1 - r^2$ first and then $2 + 4\cos(\theta)$ to find the ratio, but no textbook does this.

So how exactly should I go about this? All the examples in my textbook provide explicit bounds, which makes it easy, but this one does not. I know I can find the bounds easily, but I am not sure what I am looking for: should I just use the bounds of the function $z = 1 - r^2$ for everything that is inside the curve $r = 2 + 4\cos(\theta)$?
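A minimal numerical sketch of the normalization step, under two assumptions that the problem statement does not settle: the density is taken with respect to $dr\,d\theta$ (no polar Jacobian $r$ in the integral), and the region is described by $0 \le r \le 2 + 4\cos(\theta)$ for the angles where $2 + 4\cos(\theta) \ge 0$, i.e. $-2\pi/3 \le \theta \le 2\pi/3$. The idea is simply to compute $\iint_R (1 - r^2)\,dr\,d\theta$ and take $C_1$ as its reciprocal; the inner loop of the limaçon and the sign of $1 - r^2$ for $r > 1$ are further wrinkles to check against the intended reading of the problem.

```python
# Numerical normalization sketch (assumptions noted above).
import numpy as np
from scipy.integrate import dblquad

theta_lo, theta_hi = -2 * np.pi / 3, 2 * np.pi / 3   # where 2 + 4*cos(theta) >= 0

# dblquad integrates the inner variable (r) first, from 0 up to the curve.
integral, err = dblquad(
    lambda r, theta: 1 - r**2,            # integrand f(r, theta) / C1
    theta_lo, theta_hi,                   # outer limits: theta
    lambda theta: 0.0,                    # inner lower limit: r = 0
    lambda theta: 2 + 4 * np.cos(theta),  # inner upper limit: r on the curve
)

C1 = 1.0 / integral
print("integral of (1 - r^2) over R:", integral, " =>  C1 =", C1)
```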

probability – How to calculate distribution of sine of a random variable

I am trying to prove
$$
\int_{\Xi} \exp\left(|\xi|^{a}\right) \mathbb{P}(\mathrm{d}\xi) < \infty,
$$
where the random variable $\xi$ has the distribution of $\sin(X)$ with $X \sim N\left(\mu, \sigma^{2}\right)$.

I figured out how to calculate the first two moments. I am just wondering how to calculate the distribution of $\sin(X)$.

Any hint would be appreciated.
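Two hedged observations that may help. First, since $|\xi| = |\sin(X)| \le 1$ almost surely, $\exp(|\xi|^a) \le e$ for any $a > 0$, so the integral above is automatically finite. Second, the density of $Y = \sin(X)$ can be written with the change-of-variables formula, summing the normal density over the preimages $x = \arcsin(y) + 2k\pi$ and $x = \pi - \arcsin(y) + 2k\pi$ and dividing by $|\cos x| = \sqrt{1 - y^2}$. The NumPy/SciPy sketch below, with the sum over $k$ truncated, is only meant to illustrate that formula:

```python
# Hedged sketch of the change-of-variables density of Y = sin(X), X ~ N(mu, sigma^2).
# The infinite sum over branches is truncated at |k| <= k_max, so it is approximate.
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import norm

def sin_normal_pdf(y, mu=0.0, sigma=1.0, k_max=20):
    """Approximate density of sin(X) at points y in (-1, 1)."""
    y = np.asarray(y, dtype=float)
    base = np.arcsin(y)
    total = np.zeros_like(y)
    for k in range(-k_max, k_max + 1):
        # the two families of solutions of sin(x) = y
        for x in (base + 2 * np.pi * k, np.pi - base + 2 * np.pi * k):
            total += norm.pdf(x, loc=mu, scale=sigma)
    return total / np.sqrt(1 - y**2)

# Cross-check against Monte Carlo: E[sin(X)] from the density vs. the sample mean.
rng = np.random.default_rng(0)
samples = np.sin(rng.normal(1.0, 2.0, size=500_000))
grid = np.linspace(-0.9999, 0.9999, 20001)
pdf = sin_normal_pdf(grid, mu=1.0, sigma=2.0)
print("E[sin X] via density     ~", trapezoid(grid * pdf, grid))
print("E[sin X] via Monte Carlo ~", samples.mean())
```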

probability theory – Weakly stationary white noise sequences that are identically distributed but not independent

Let $\{Z_t\}$ be a sequence of random variables that is weakly stationary, i.e., $\mathbb{E}(Z_t) = 0$, $\mathrm{Var}(Z_t) = \sigma^2$, and the $Z_t$'s are uncorrelated: $\mathrm{cov}(Z_s, Z_t) = 0$ for all $t \neq s$. Can we create such a sequence of $Z_t$ that are not independent? Also, can we have it the other way around, that is, the $Z_t$'s are independent but not identically distributed?

My try:
For the first question, I am thinking of $Z_t = (-1)^t X_t$ where $X_t \sim N(0,1)$, but I do not know how to show they are dependent.
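For what it is worth, a standard example (different from the $(-1)^t X_t$ attempt; if the $X_t$ there are independent, then $(-1)^t X_t$ is again an i.i.d. $N(0,1)$ sequence by symmetry, so it gives no dependence) is $Z_t = \varepsilon_t \varepsilon_{t-1}$ with $\varepsilon_t$ i.i.d. $N(0,1)$: the $Z_t$ have mean $0$, variance $1$ and are uncorrelated, but consecutive terms share $\varepsilon_t$ and are therefore dependent, which shows up in the correlation of their squares. A small simulation sketch, offered as an illustration rather than a proof:

```python
# Z_t = e_t * e_{t-1} with e_t i.i.d. N(0,1): white noise, but not independent.
import numpy as np

rng = np.random.default_rng(42)
e = rng.standard_normal(1_000_001)
z = e[1:] * e[:-1]                                     # Z_t = e_t * e_{t-1}

lag1_corr = np.corrcoef(z[:-1], z[1:])[0, 1]           # ~ 0: uncorrelated
lag1_sq_corr = np.corrcoef(z[:-1]**2, z[1:]**2)[0, 1]  # clearly > 0: dependent

print(f"corr(Z_t, Z_t+1)     ~ {lag1_corr:.4f}")
print(f"corr(Z_t^2, Z_t+1^2) ~ {lag1_sq_corr:.4f}")
```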

probability theory – Variance of the sum of the even numbers rolled

I tried to solve the following problem for my probability theory class:

A die is rolled $100$ times. If $X$ is the sum of the even numbers
rolled, find the expected value and the variance of $X$.

My attempt:

Let $X_i$ be the number of times we rolled $i$ $\left(\text{for } i = \overline{1,6}\right)$. For example, if after 100 tries, we rolled $5$ seven times, then $X_5 = 7$.

Then $X=2X_2 + 4X_4 + 6X_6$, so by the linearity of expectation, we have:

$$ \mathbb{E}(X) = \mathbb{E}(2X_2 + 4X_4 + 6X_6) = 2\mathbb{E}(X_2) + 4\mathbb{E}(X_4) + 6\mathbb{E}(X_6) $$

Also,
$$
\mathbb{P}(X_i = k) = {100 \choose k} \left( \dfrac{1}{6} \right)^k \left( \dfrac{5}{6} \right)^{100-k}
$$

Therefore, $X_i \sim \text{Bin}\left(100, \dfrac{1}{6}\right)$, so we have $\mathbb{E}(X_i) = \dfrac{100}{6}$
$$
\Rightarrow \mathbb{E}(X) = \dfrac{100}{6}(2+4+6) = 200.
$$

To find the variance, I know that for two independent random variables $A$ and $B$, we have $$\text{Var}(A+B) = \text{Var}(A) + \text{Var}(B),$$
which would eventually solve my problem, but somehow, I can’t figure out whether $X_2, X_4, X_6$ are independent…

Any help would be appreciated!
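Not a resolution of the independence question, but a hedged Monte Carlo check that may be useful for validating whatever variance formula you end up with. A different decomposition from the $X_2, X_4, X_6$ one, writing $X = \sum_{j=1}^{100} Y_j$ where $Y_j$ is the value of roll $j$ if it is even and $0$ otherwise (so each $Y_j$ is $0$ with probability $1/2$ and $2, 4, 6$ with probability $1/6$ each, independently across rolls), gives $\mathbb{E}(X) = 100 \cdot 2 = 200$ and $\text{Var}(X) = 100\left(\tfrac{56}{6} - 4\right) = \tfrac{1600}{3} \approx 533.3$; the simulation below should land close to those targets.

```python
# Monte Carlo check: simulate 100 die rolls many times and sum the even faces.
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_rolls = 50_000, 100

rolls = rng.integers(1, 7, size=(n_sims, n_rolls))   # fair die, faces 1..6
even = np.where(rolls % 2 == 0, rolls, 0)            # keep only the even faces
x = even.sum(axis=1)                                 # X = sum of even numbers rolled

print("empirical E(X)   ~", x.mean())                # should be close to 200
print("empirical Var(X) ~", x.var())                 # should be close to 1600/3 ~ 533.3
```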

probability – distribution of inverse of random Bernoulli matrices

Recently I have been reading up for a paper of mine, and I seem to be stuck.

I have read the related work, such as the paper by Tao and Vu. I get the gist of it, but my query is about what distribution can be expected for the inverse of a random Bernoulli matrix, and what its expectation can be.

My guess so far has been that the distribution remains the same after inversion, i.e. still a random Bernoulli matrix.

Any explanation or resources for reading are welcome.
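A quick empirical sketch, under the assumption of $n \times n$ matrices with i.i.d. $\pm 1$ entries (each sign with probability $1/2$), discarding the rare draws whose inversion fails: the entries of the inverse spread over a wide range of values rather than being two-valued, so at least in this experiment the inverse does not look like a Bernoulli matrix. This is only a numerical illustration, not a substitute for the Tao and Vu literature on such inverses.

```python
# Empirical look at the entries of the inverse of random +-1 matrices.
import numpy as np

rng = np.random.default_rng(1)
n, trials = 20, 2000
entries = []

for _ in range(trials):
    a = rng.choice([-1.0, 1.0], size=(n, n))
    try:
        entries.append(np.linalg.inv(a).ravel())
    except np.linalg.LinAlgError:   # skip draws where inversion fails (singular matrices)
        continue

entries = np.concatenate(entries)
print("min / mean / max of inverse entries:", entries.min(), entries.mean(), entries.max())
print("fraction of entries with |value| close to 1:", np.mean(np.isclose(np.abs(entries), 1.0)))
```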

probability – Random values with the same distribution

My question relates to the very basics of probability theory.
I can't understand why two functions that have the same distribution are not necessarily equal (apart from a set of measure zero).

For example, take the central limit theorem: there is a countable set of the “same” functions, and their sum converges to another function…

Is there any geometric answer (or picture) why the functions with the same distribution are not equal?
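A minimal sketch of the distinction, using a standard example: if $U$ is uniform on $[0,1]$ and $V = 1 - U$, then $U$ and $V$ have exactly the same distribution, yet $U = V$ only on the single outcome $U = 1/2$, an event of probability zero. Equality in distribution compares the induced measures (the “histograms”), not the values the two functions take at each sample point.

```python
# Same distribution, (almost) never equal: U ~ Uniform(0,1) versus V = 1 - U.
import numpy as np

rng = np.random.default_rng(7)
u = rng.uniform(size=100_000)
v = 1.0 - u

# The distributions agree: the quantiles (and hence the histograms) match.
print("quantiles of U:", np.quantile(u, [0.25, 0.5, 0.75]))
print("quantiles of V:", np.quantile(v, [0.25, 0.5, 0.75]))

# But as functions on the sample space they are almost never (nearly) equal.
print("fraction of samples with |U - V| < 0.01:", np.mean(np.abs(u - v) < 0.01))
```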

combinatorics – Probability of exactly k runs of wins

How do I calculate the probability of exactly $k$ runs of wins out of exactly $m$ wins and $n$ losses? A run of wins means 1 or more successive wins.

My try: divide the $m+n$ matches into portions

$L_1 W_1 L_2 W_2 L_3 W_3 \dots L_k W_k L_{k+1}$

Here $L_i$ denotes a run of losses, i.e., one or more consecutive losses. Similarly, $W_i$ denotes a run of wins.

For exactly $k$ runs of wins, $L_2, \dots, L_k$ should have at least $1$ loss, but $L_1$ and $L_{k+1}$ can have $0$ losses as well. All $W_i$'s should have at least one win.

Now, the number of ways to get all the different $L_i$'s satisfying this criterion is the same as arranging $k$ sticks between $n$ stars so that none of the sticks are adjacent. This is the same as putting $(n-k+1)$ balls into $k-1$ boxes, with empty boxes allowed $= (k-1)^{n-k+1}$. But each loss is indistinguishable from another, so the total number of ways of forming $L_1, \dots, L_{k+1}$ is $\frac{(k-1)^{n-k+1}}{n!}$.

Similarly, the number of ways to get all the different $W_i$'s satisfying the criterion is the same as putting $m$ indistinguishable balls into $k$ boxes, with empty boxes not allowed, which is the same as putting $(m-k)$ indistinguishable balls into $k$ boxes, with empty boxes allowed $= \frac{k^{m-k}}{m!}$.

So, the total number of ways $= \frac{(k-1)^{n-k+1} \cdot k^{m-k}}{m! \, n!}$.

Probability of exactly $k$ runs of wins $= \frac{(k-1)^{n-k+1} \cdot k^{m-k}}{m! \, n! \, (m+n)!}$.

But I am not feeling confident about this logic. It would be good if anyone could confirm that it is correct, or point out any loophole.
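It may be easier to gain confidence by checking against a brute-force count. The sketch below assumes every arrangement of $m$ wins and $n$ losses is equally likely, enumerates all arrangements for small $m, n$, and counts those with exactly $k$ maximal win-runs; it also prints one commonly quoted closed form, $\binom{m-1}{k-1}\binom{n+1}{k}\big/\binom{m+n}{m}$, so any candidate expression can be compared against both.

```python
# Brute-force check of P(exactly k runs of wins) for small m, n.
from itertools import combinations
from math import comb

def prob_exact_k_win_runs(m, n, k):
    """Enumerate the positions of the m wins among m+n slots and count win-runs."""
    favourable = 0
    for wins in combinations(range(m + n), m):
        # a new run starts whenever a win position is not adjacent to the previous one
        runs = sum(1 for i, p in enumerate(wins) if i == 0 or p != wins[i - 1] + 1)
        if runs == k:
            favourable += 1
    return favourable / comb(m + n, m)

for m, n, k in [(3, 2, 1), (3, 2, 2), (4, 4, 2), (5, 3, 3)]:
    brute = prob_exact_k_win_runs(m, n, k)
    closed = comb(m - 1, k - 1) * comb(n + 1, k) / comb(m + n, m)
    print(f"m={m}, n={n}, k={k}: brute force = {brute:.6f}, closed form = {closed:.6f}")
```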

Derivation of the Expectation Value of a Probability Distribution

I am currently a quantum mechanics student, and we use the expectation value of probability distributions regularly. It intuitively makes sense why the expectation value is defined the way it is, but I was wondering: is there a derivation of the expression for the expectation value? I have not been able to derive it myself, and searching for derivations has also been fruitless. I would love to see exactly where it comes from instead of using it somewhat blindly. Thank you for any help!
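One common way to make the definition feel less arbitrary, sketched below under the usual long-run-average interpretation: the weighted sum $\sum_i x_i\, p_i$ (and its continuum analogue $\int x f(x)\,dx$) is exactly the number that sample averages of repeated independent measurements converge to, by the law of large numbers, so defining the expectation value this way matches what averaging over many identical experiments returns. The snippet is an illustration of that convergence, not a rigorous derivation.

```python
# Sample averages converging to the weighted average / integral that defines E[X].
import numpy as np

rng = np.random.default_rng(3)

# Discrete case: a loaded three-outcome "measurement"
values = np.array([1.0, 2.0, 5.0])
probs = np.array([0.5, 0.3, 0.2])
expectation = np.sum(values * probs)            # definition: sum of x_i * p_i
samples = rng.choice(values, p=probs, size=1_000_000)
print("weighted average:", expectation, "  sample mean:", samples.mean())

# Continuous case: X ~ Exponential(1), where the integral of x * e^(-x) dx equals 1
samples = rng.exponential(scale=1.0, size=1_000_000)
print("integral value: 1.0   sample mean:", samples.mean())
```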

probability distributions – How to generate a Gaussian dataset in matlab

I want to generate a Gaussian dataset with dependent pairs of random variables using the randn function. The dataset consists of a total of 900 trials of independent Gaussian pairs. The first random variable should have a variance of 1 and the second a variance of 16 (so a covariance matrix G = (1, 0; 0, 16)). The pairs in the dataset will then be multiplied by the following matrix:
[Matrix image]
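A NumPy sketch of the same recipe, since the actual transformation matrix is only shown in the linked image and is left as a placeholder $A$ here. In MATLAB, randn plays the role of np.random.standard_normal, and the same scaling and multiplication steps carry over.

```python
# Generate 900 independent Gaussian pairs with covariance G = diag(1, 16),
# then transform each pair by a 2x2 matrix A (placeholder values below,
# since the actual matrix is only given as an image in the post).
import numpy as np

rng = np.random.default_rng(0)
n_trials = 900

z = rng.standard_normal((n_trials, 2))     # i.i.d. N(0, 1) pairs
x = z * np.array([1.0, 4.0])               # std devs 1 and 4  =>  variances 1 and 16

A = np.array([[1.0, 0.0],                  # placeholder: replace with the matrix from the post
              [0.0, 1.0]])
y = x @ A.T                                # each row is A @ (pair); covariance becomes A @ G @ A.T

print(np.cov(x, rowvar=False))             # ~ diag(1, 16)
print(np.cov(y, rowvar=False))             # ~ A @ G @ A.T
```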