## probability: on an unusual stochastic gradient ascent

Let $$Y$$ be a random variable with parametric probability density function $$p(y \mid \theta)$$. Assume that $$p(y \mid \theta)$$ is concave and differentiable in $$\theta \in \Theta \subset \mathbb{R}$$ and that its partial derivative w.r.t. $$\theta$$ is Lipschitz continuous with constant $$L > 0$$, i.e.,
$$\left| \frac{\partial p(y \mid \theta_1)}{\partial \theta} - \frac{\partial p(y \mid \theta_2)}{\partial \theta} \right| \le L \, |\theta_1 - \theta_2|$$
for all $$\theta_1, \theta_2 \in \Theta$$.

Suppose we are also given a sequence of observations $$\{Y_t\}_{t \in \mathbb{Z}}$$ of $$Y$$.

Then, letting
$$\theta_{t+1} = \theta_t + \alpha \, \nabla p(Y_t \mid \theta_t), \quad t \in \mathbb{Z},$$

where $$0 < \alpha \le 1/L$$ and an initial value $$\theta_0$$ is chosen to start the iteration, the claim is that

$$E_{Y_t}\left( I(p(\cdot \mid \theta), p(\cdot \mid \theta_t)) - I(p(\cdot \mid \theta), p(\cdot \mid \theta_{t+1})) \mid Y_{t-1} \right) > 0 \qquad \forall t \in \mathbb{Z}$$

where $$I(p(\cdot \mid \theta), p(\cdot \mid \theta_t))$$ is the Kullback–Leibler divergence, defined as usual by

$$I(p(\cdot \mid \theta), p(\cdot \mid \theta_t)) := \int_{\mathbb{R}} p(y \mid \theta) \ln\left( \frac{p(y \mid \theta)}{p(y \mid \theta_t)} \right) dy$$

(Also suppose that $$\theta_t \ne \theta$$.)

The motivation for this claim is that, since we are performing a stochastic gradient ascent step in the parameter, we are on average moving toward the maximizer of the likelihood; it should then follow (by the principle of maximum likelihood) that at each step we get closer, in Kullback–Leibler divergence, to $$p(\cdot \mid \theta)$$, the true probability density. Is this true?
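As a quick empirical probe of the claim, here is a small simulation (a sketch only: the Gaussian family $$p(y \mid \theta) = \mathcal N(\theta, 1)$$, the step size, and all constants are my own choices, not part of the question). It runs the stated iteration, which uses the gradient of $$p$$ itself rather than of $$\ln p$$, and tracks the KL divergence $$(\theta - \theta_t)^2/2$$ averaged over independent runs:

```python
import numpy as np

# Toy example (my choice): p(y | theta) = N(theta, 1), for which
#   d/dtheta p(y | theta) = p(y | theta) * (y - theta)
# and KL( N(theta,1) || N(theta_t,1) ) = (theta - theta_t)^2 / 2.
rng = np.random.default_rng(0)
theta_true, theta0, alpha, steps, runs = 0.0, 2.0, 0.1, 2000, 200

def density(y, th):
    return np.exp(-0.5 * (y - th) ** 2) / np.sqrt(2.0 * np.pi)

thetas = np.full(runs, theta0)
for _ in range(steps):
    y = rng.normal(theta_true, 1.0, size=runs)  # one fresh observation per run
    thetas = thetas + alpha * density(y, thetas) * (y - thetas)

kl_start = 0.5 * (theta_true - theta0) ** 2
kl_end = float(np.mean(0.5 * (theta_true - thetas) ** 2))
print(kl_start, kl_end)
```

On this toy example the averaged divergence does shrink, which is consistent with the intuition; of course, a simulation proves nothing about the general statement.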

## stochastic processes – convergence of the modulus of Brownian motion

I'm looking for a brief argument that for a Brownian motion started at some point $$x$$ we have
$$\begin{equation} |B_t| \to \infty \quad \text{a.s.} \end{equation}$$ as $$t \to \infty$$.

I thought maybe the law of the iterated logarithm could give this. I also hope the statement is actually true.
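Not a proof, but a cheap way to look at the behavior numerically (a sketch; the horizon, step count, and seed are arbitrary choices of mine):

```python
import numpy as np

# Simulate a standard Brownian motion started at x on [0, T] by summing
# independent N(0, dt) increments, then inspect |B_t| late in the run.
rng = np.random.default_rng(1)
x, T, n = 0.0, 10_000.0, 1_000_000
dt = T / n
B = x + np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n))

late = np.abs(B[n // 2:])              # the window [T/2, T]
print(late.min(), late.max())
```

Note that the law of the iterated logarithm controls the $$\limsup$$ of $$B_t$$ (normalized by $$\sqrt{2t \ln\ln t}$$); the claim above also concerns the $$\liminf$$ of $$|B_t|$$, which a simulation can at best hint at.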

## stochastic processes: find the spectral measure of a stationary centered sequence with a covariance function

I am trying to find the spectral measure of a stationary centered sequence $$\{\xi_n\}_{n \in \Bbb Z}$$ with covariance function $$\gamma_n = a^{|n|}$$, where $$a \in \Bbb C$$, $$|a| = 1$$. The spectral measure is the finite measure $$F$$, defined on the $$\sigma$$-algebra of Borel subsets of the segment $$(-\pi, \pi)$$, such that $$\gamma_n = \int_{-\pi}^{\pi} e^{i \lambda n} F(d\lambda)$$.
My solution is: $$a^{|n|} = e^{i \alpha |n|}$$, with $$\alpha \in (0, 2\pi)$$ (writing $$a = e^{i\alpha}$$, which is possible since $$|a| = 1$$).
Thus, $$e^{i \alpha |n|} = \int_{-\pi}^{\pi} e^{i \lambda n} F(d\lambda) = \int_{0}^{2\pi} e^{i (\theta - \pi) n} F(d\theta)$$. I need to know how $$F$$ is defined in this case.

## Probability – The definition of an \$m\$-dimensional Gaussian process, and the definition of a sequence of stochastic processes converging in \$L^2\$

Q1) What does it mean to say that $$X$$ is an $$m$$-dimensional Gaussian process?

I guess it means that for all $$0 \le t_1 < \dots < t_k$$, the vector $$X = (X_{t_1}, \dots, X_{t_k})$$ is Gaussian. But here I am confused, because if the $$X_{t_i}$$ are real-valued r.v.'s, then "$$X$$ is a Gaussian vector" means that for all $$\alpha \in \mathbb{R}^k$$, the r.v. $$\langle \alpha, X \rangle$$ is normally distributed. But since here the $$X_{t_i}$$ are $$m$$-dimensional vectors, I don't understand what it would mean for $$\langle \alpha, X \rangle$$ to be normally distributed; I'm not even sure the expression makes sense.

Q2) Let $$(X^n)$$ be a sequence of stochastic processes. What does it mean that $$X^n \to X$$ in $$L^2$$?

(1) Would it be that for every $$t$$, $$\mathbb{E}\big((X_t^n - X_t)^2\big) \to 0$$ as $$n \to \infty$$?

(2) That for all $$0 \le t_1 < \dots < t_k$$, $$\mathbb{E}\big( \|(X_{t_1}^n - X_{t_1}, \dots, X_{t_k}^n - X_{t_k})\|_2^2 \big) \to 0$$
as $$n \to \infty$$, where $$\|\cdot\|_2$$ denotes the Euclidean norm?

(3) Something else?

(By the way, I have the impression that (1) and (2) are equivalent, right?)
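For what it's worth, (1) and (2) do look equivalent to me, since the squared Euclidean norm expands into a finite sum:

$$\mathbb{E}\big( \|(X_{t_1}^n - X_{t_1}, \dots, X_{t_k}^n - X_{t_k})\|_2^2 \big) = \sum_{i=1}^{k} \mathbb{E}\big( (X_{t_i}^n - X_{t_i})^2 \big),$$

so (1) gives (2) term by term, and (2) with $$k = 1$$ recovers (1).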

## Stochastic processes: what is the probability that the mouse eats the cheese?

I am working through an exercise on Markov chains.

A cheerful mouse moves in a maze. If at time $$n$$ it is in a room with $$k$$ horizontally or vertically adjacent rooms, then at time $$n+1$$ it will be in one of those $$k$$ adjacent rooms, choosing one at random, each with probability $$1/k$$. A fat, lazy cat sits in room $$3$$ at all times, and a piece of cheese waits for the mouse in room $$5$$. The mouse starts in room $$1$$. See the following figure:

The cat is not completely lazy: if the mouse enters the room inhabited by the cat, the cat will eat it. Also, if the mouse eats the cheese, it rests there forever. Let $$X_n$$ be the position of the mouse at time $$n$$.

What is the probability that the mouse can eat the cheese?

From the graph, the transition matrix is as follows:

$$P = \begin{pmatrix} 0 & 1/2 & 0 & 1/2 & 0 \\ 1/2 & 0 & 1/2 & 0 & 0 \\ 0 & 1/2 & 0 & 1/2 & 0 \\ 1/3 & 0 & 1/3 & 0 & 1/3 \\ 0 & 0 & 0 & 1 & 0 \end{pmatrix}$$

So the probability that the mouse gets to eat the cheese is $$\mathbb{P}\left( \exists n \in \mathbb{N} : X_n = 5 \right).$$

Could you please give me some hints on how to calculate this probability? Thank you very much!
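In case it helps later readers, here is a first-step-analysis sketch in code. It assumes rooms $$3$$ (cat) and $$5$$ (cheese) are treated as absorbing, which is how I read the problem; the adjacencies are the ones encoded in the matrix above, and the exact fractions come from Python's `fractions`:

```python
from fractions import Fraction as F

# h[i] = P(reach cheese room 5 before cat room 3 | mouse starts in room i),
# with boundary values h3 = 0 (eaten) and h5 = 1 (cheese).
# First-step equations from the maze graph:
#   h1 = 1/2 h2 + 1/2 h4
#   h2 = 1/2 h1 + 1/2 h3          = 1/2 h1
#   h4 = 1/3 h1 + 1/3 h3 + 1/3 h5 = 1/3 h1 + 1/3
# Substituting h2 and h4 into the equation for h1:
#   h1 = 1/4 h1 + 1/6 h1 + 1/6
h1 = F(1, 6) / (1 - F(1, 4) - F(1, 6))
h2 = F(1, 2) * h1
h4 = F(1, 3) * h1 + F(1, 3)
print(h1, h2, h4)  # 2/7 1/7 3/7
```

Under these assumptions the mouse starting in room $$1$$ eats the cheese with probability $$2/7$$.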

## Proving a Taylor expansion of a stochastic function

Does anyone have an idea how to prove this formula?
$$E(f(x + B_t)) = f(x) + \frac{1}{2} \int_0^t E\big(f''(x + B_s)\big) \, ds$$
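One standard route (a sketch, under the extra assumption that $$f$$ is $$C^2$$ with bounded derivatives, so that the stochastic integral below is a true martingale) is Itô's formula applied to $$f(x + B_t)$$:

$$f(x + B_t) = f(x) + \int_0^t f'(x + B_s) \, dB_s + \frac{1}{2} \int_0^t f''(x + B_s) \, ds.$$

Taking expectations kills the stochastic integral term (a martingale started at $$0$$), and Fubini's theorem then yields the stated formula.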

## Stochastic calculus – LDP, contraction principle, Freidlin–Wentzell theory in Dembo and Zeitouni

I'm having trouble with a simple proof in Dembo and Zeitouni. Specifically, Theorem 5.6.7 on page 215, which is a Freidlin–Wentzell theorem.

The idea of the proof is to approximate the original SDE:

$$\begin{equation} dX_t^{\epsilon} = b(X_t^{\epsilon}) \, dt + \sqrt{\epsilon} \, \sigma(X_t^{\epsilon}) \, dW_t, \quad 0 \le t \le 1, \quad X_0^{\epsilon} = x, \end{equation}$$

by its discretization (Euler scheme):

$$\begin{equation} dX_t^{\epsilon, m} = b\big(X^{\epsilon, m}_{\frac{\lfloor mt \rfloor}{m}}\big) \, dt + \sqrt{\epsilon} \, \sigma\big(X^{\epsilon, m}_{\frac{\lfloor mt \rfloor}{m}}\big) \, dW_t, \quad 0 \le t \le 1, \quad X_0^{\epsilon, m} = x, \end{equation}$$
then prove the LDP for the discretization and transfer it to the original SDE by exponential equivalence.

What I don't understand is the LDP for the Euler scheme. We need to show the continuity of the map $$F$$ defined on page 215. This was my attempt:

\begin{align*}
\sup_{t \in (\frac{k}{m}, \frac{k+1}{m})} e(t) \le{}& C \, e\big(\tfrac{k}{m}\big) + \sup_{t \in (\frac{k}{m}, \frac{k+1}{m})} \left| \sigma\big(h_1(\tfrac{k}{m})\big)\big(g_1(t) - g_1(\tfrac{k}{m})\big) + \sigma\big(h_2(\tfrac{k}{m})\big)\big(g_2(t) - g_2(\tfrac{k}{m})\big) \right| \\
\le{}& C \, e\big(\tfrac{k}{m}\big) + \sup_{t \in (\frac{k}{m}, \frac{k+1}{m})} \left| C\big(g_1(t) - g_2(t) + g_2(t) - g_1(\tfrac{k}{m})\big) \right| + \left| C\big(g_2(t) - g_1(t) + g_1(t) - g_2(\tfrac{k}{m})\big) \right| \\
\le{}& C\big( e(\tfrac{k}{m}) + \|g_1 - g_2\| \big)
\end{align*}
with a constant $$C < \infty$$ coming from the Lipschitz constants and the bounds on the coefficients, which may change from step to step (the book says $$C$$ depends only on $$g_1$$ and $$g_2$$, and on $$m$$, since it comes from the local Lipschitz property; where am I making a mistake?).

Next, notice that $$e(0) = 0$$, so for $$k = 0$$ the bound gives us
$$\sup_{t \in (0, \frac{1}{m})} e(t) \le C \, \|g_1 - g_2\|,$$
so we can make $$C \, e(\frac{1}{m})$$ as small as we want whenever $$\|g_1 - g_2\|$$ is taken small enough. Iterating this procedure over $$k = 0, \ldots, m-1$$ gives the continuity of $$F^m$$.

## stochastic integral convergence in probability

My goal is to show that for every $$t \ge 0$$,
$$\begin{equation} \frac{\int_{t}^{t+h} H_s \, dB_s}{B_{t+h} - B_t} \rightarrow H_t \end{equation}$$
in probability as $$h \to 0$$, where $$B$$ is a Brownian motion and $$H$$ is continuous and bounded.

Almost the same question was asked here: "Continuity" of the stochastic integral w.r.t. Brownian motion

$$\begin{equation} P\left( \left| \frac{\int_{t}^{t+h} H_s \, dB_s}{B_{t+h} - B_t} - H_t \right| > \epsilon \right) \le \operatorname{const}(\epsilon, K) \, \frac{1}{h} \, E\left( \int_{t}^{t+h} (H_s - H_t)^2 \, ds \right) + \frac{\epsilon}{4} \end{equation}$$
by Markov's inequality (applied with $$x^2$$) and the Itô isometry. What I don't understand is how it follows that
$$\begin{equation} \frac{1}{h} \, E\left( \int_{t}^{t+h} (H_s - H_t)^2 \, ds \right) \end{equation}$$
converges to $$0$$ as $$h \to 0$$. Any ideas?
(I hope I have not made any mistakes in the process).
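One way this can be argued (a sketch, using only the stated continuity and boundedness of $$H$$): since

$$\frac{1}{h} \, E\left( \int_{t}^{t+h} (H_s - H_t)^2 \, ds \right) \le \sup_{s \in [t, t+h]} E\big( (H_s - H_t)^2 \big),$$

and for each fixed $$s \downarrow t$$ we have $$(H_s - H_t)^2 \to 0$$ pathwise by continuity, while boundedness of $$H$$ provides a dominating constant, dominated convergence gives $$E\big((H_s - H_t)^2\big) \to 0$$ as $$s \to t$$, so the right-hand side tends to $$0$$ as $$h \to 0$$.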

## Stochastic calculus: if \$M_tN_t\$ is a martingale and \$M_t\$ is a martingale, is \$N_t\$ a martingale?

Suppose that $$M_t$$, defined by $$M_t := \exp\left( \int_0^t a_s \, dB_s - \frac{1}{2} \int_0^t a_s^2 \, ds \right)$$, is a martingale. Now define the process $$N_t$$ as

$$N_t := \exp\left( \int_0^t b_s \, dB_s - \frac{1}{2} \int_0^t b_s^2 \, ds \right).$$ Suppose we are given that $$M_t N_t$$ is a martingale. Does this imply that $$N_t$$ is a martingale, given that $$P\left( \int_0^t b_s^2 \, ds < \infty \right) = 1$$? Clearly $$N_t$$ is always a local martingale, since $$dN_t = N_t b_t \, dB_t$$.

Any help is appreciated. Thank you!

## reference request – Stochastic ordering of the empirical mean

The answer is no. Indeed, suppose that $$X_1, X_2, \dots$$ are i.i.d. Bernoulli random variables (r.v.'s), each with mean $$p$$, let $$\bar X_n := \frac{1}{n} \sum_1^n X_i$$, and set
$$d_{n,p}(t) := P(|\bar X_{n+1} - p| > t) - P(|\bar X_n - p| > t).$$
The graph of $$\big\{ \big(t, d_{1,1/5}(t)\big) : 0 \le t \le 4/5 \big\}$$ is shown here:

We see that the (right-continuous) function $$d_{1,1/5}$$ takes values of both signs.
In particular, $$d_{1,1/5}(1/5) = 4/25 > 0 > -4/25 = d_{1,1/5}(3/10)$$.

Therefore, the family of r.v.'s $$\big( |\bar X_n - p| \big)_{n=1}^{\infty}$$ is not stochastically monotone.
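The two displayed values can be checked exactly with a small script (mine, using nothing beyond the definitions above):

```python
from fractions import Fraction as F

# Exact check of d_{1,1/5} at t = 1/5 and t = 3/10, where
# Xbar_1 = X_1 and Xbar_2 = (X_1 + X_2)/2 for iid Bernoulli(1/5).
p = F(1, 5)

def tail(dist, t):
    # P(|Xbar - p| > t) for a discrete law given as (value, prob) pairs
    return sum(pr for v, pr in dist if abs(v - p) > t)

xbar1 = [(F(0), 1 - p), (F(1), p)]
xbar2 = [(F(0), (1 - p) ** 2), (F(1, 2), 2 * p * (1 - p)), (F(1), p ** 2)]

def d(t):
    return tail(xbar2, t) - tail(xbar1, t)

print(d(F(1, 5)), d(F(3, 10)))  # 4/25 -4/25
```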