stochastic processes: variance of a random variable obtained from a linear transformation

Edit: I have revised this question as suggested.

Suppose there are $N$ realizations of a Gaussian process, denoted as vectors $\mathbf{z}_{j} \in \mathbb{R}^{n}$ for $j = 1, \ldots, N$. Let $y$ be a random variable such that
$$y = \sum_{j=1}^{N} (\mathbf{B} \mathbf{z}_{j})(i),$$
where $\mathbf{B}$ is a unitary matrix. What is the variance of $y$?

Notation: boldface denotes a vector or matrix, and $(\mathbf{B}\mathbf{x})(i)$ denotes the $i$-th entry of the vector $\mathbf{B}\mathbf{x}$.
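To accompany the question, here is a minimal Monte Carlo sketch in MATLAB. It assumes (my assumption, for concreteness) that the $\mathbf{z}_j$ are i.i.d. draws from $\mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma})$ and that $\mathbf{B}$ is real orthogonal, in which case one would expect $\operatorname{Var}(y) = N\,(\mathbf{B}\boldsymbol{\Sigma}\mathbf{B}^{*})_{ii}$:

% Monte Carlo sketch (assumptions: z_j i.i.d. N(0,Sigma), B real orthogonal)
rng(1)
n = 5; N = 3; i = 2;          % Dimension, number of realizations, fixed entry i
A = randn(n); Sigma = A*A';   % An arbitrary covariance matrix (assumption)
C = chol(Sigma,'lower');      % For sampling z ~ N(0,Sigma)
[B,~] = qr(randn(n));         % A real orthogonal (unitary) matrix

M = 1e5;                      % Monte Carlo repetitions
y = zeros(M,1);
for m = 1:M
    z = C*randn(n,N);         % Columns are the realizations z_j
    w = B*z;                  % Apply B to every realization
    y(m) = sum(w(i,:));       % y = sum_j (B z_j)(i)
end

var_mc = var(y)               % Monte Carlo estimate
S = B*Sigma*B';
var_theory = N*S(i,i)         % N * (B Sigma B')_(i,i)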

stochastic integrals: does $\left| \int_0^T f(t, \omega)\, dW_t \right| \leq \int_0^T |f(t, \omega)|\, dW_t$ hold?

I was wondering whether the inequality
$$\left| \int_0^T f(t, \omega)\, dW_t \right| \leq \int_0^T |f(t, \omega)|\, dW_t$$ holds for stochastic integrals. In fact, I cannot find that property in any book, nor on Google, so I have some doubts. What do you think?
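For what it is worth, even the deterministic integrand $f \equiv -1$ seems to break it: the left-hand side is $|-W_T| = |W_T|$, while the right-hand side is $\int_0^T 1\, dW_t = W_T$, which is negative on about half of all paths. A minimal MATLAB sketch of this special case (a numerical illustration only, not a proof):

% Sanity-check sketch for the special case f = -1
rng(1)
T = 1; nsteps = 1e3; npaths = 1e4;
dW = sqrt(T/nsteps)*randn(nsteps,npaths); % Brownian increments
WT = sum(dW,1);                           % W_T on each path
lhs = abs(-WT);                           % |int_0^T f dW_t| = |W_T|
rhs = WT;                                 % int_0^T |f| dW_t = W_T
frac_violated = mean(lhs > rhs)           % ~0.5: fails on half the paths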

Definition – Can the index set of a stochastic process be finite?

Consider the following definition of the index set of a stochastic process:

The set $T$ is called the index set or parameter set of the
stochastic process. Often this set is some subset of the real line,
such as the natural numbers or an interval, giving the set $T$ the
interpretation of time. In addition to these sets, the index set $T$
can be another linearly ordered set or a more general set, such as
the Cartesian plane $R^2$ or $n$-dimensional Euclidean space $R^n$,
where an element $t \in T$ can represent a point in space. But in
general, more results and theorems are possible for stochastic
processes when the index set is ordered.

It follows from the definition that the index set can be infinite. Can it also be finite?

performance: kernel-based conditional moments of increments from multiple stochastic processes

I have written this function as part of a research project that involves analyzing time-series data from stochastic processes. We have a small number (from 1 to 3) of independent observations of a scalar time series. The observations have different lengths, and each one contains approximately $10^4$–$10^5$ data points. The function below, nKBR_moments.m, takes a cell array of observations as input, along with other settings, and generates statistical quantities known as "moments of conditional increments". These are the variables M1 and M2. For more details of the theory, this research paper describes a similar method.

For research purposes, the function will eventually be evaluated tens of thousands of times on a desktop computer. One evaluation of this function takes approximately 3 seconds with the test script provided below. Thoughts on optimizing the code's performance, memory usage, or scalability are appreciated.

MATLAB function:

function [Xcentre,M1,M2] = nKBR_moments(X,tau_in,Npoints,xLims,h)
%Kernel based moments, n-data
%
%   Notes:
%   Calculates kernel based moments for a given stochastic time-series.
%   Uses Epanechnikov kernel with built in computational advantages. Uses
%   Nadaraya-Watson estimator. Calculates moments from n sources of data.
%   
%
%   Inputs:
%   - "X"                       Observed variables, cell array of data
%   - "tau_in"                  Time-shift indexes
%   - "Npoints"                 Number of evaluation points
%   - "xLims"                   Limits in upper and lower evaluation points
%   - "h"                       Bandwidth
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Processing
dX = (xLims(2)-xLims(1))/(Npoints-1); % Bins increment
Xcentre = xLims(1):dX:xLims(2); % Grid
heff = h*sqrt(5); % Effective bandwidth, for setting up bins
eta = floor(heff/dX+0.5); % Bandwidth for bins optimizing

% Epanechnikov kernel
K= @(u) 0*(u.^2>1)+3/4*(1-u.^2).*(u.^2<=1);
Ks = @(u) K(u/sqrt(5))/sqrt(5); % Silverman's definition of the kernel (Silverman, 1986)
Kh = @(u) Ks(u/h)/h; % Changing bandwidth

% Sort all data into bins
Bextend = dX*(eta+0.5); % Extend bins
edges = xLims(1)-Bextend:dX:xLims(2)+Bextend; % Edges
ndata = numel(X); % Number of data-sets
Xloc = cell(1,ndata); % Preallocate histogram location data
nXdata = cellfun(@numel,X); % Number of x data
key = 1:max(nXdata); % Data key
for nd = 1:ndata
    [~,~,Xloc{nd}] = histcounts(X{nd},edges); % Sort
end
Xbinloc = eta+(1:Npoints); % Bin locations
BinBeg = Xbinloc-eta; % Bin beginnings
BinEnd = Xbinloc+eta; % Bin endings

% Preallocate
Ntau = numel(tau_in); % Number of time-steps
[M1,M2] = deal(zeros(Ntau,Npoints)); % Moments
[iX,iXkey,XU,Khj,yinc,Khjt] = deal(cell(1,ndata)); % Preallocate increment data

% Pre calculate increments
inc = cell(Ntau,ndata);
for nd = 1:ndata
    poss_tau_ind = 1:nXdata(nd); % Possible time-shifts
    for tt = 1:Ntau
        tau_c = tau_in(tt); % Chosen shift
        tau_ind = poss_tau_ind(1+tau_c:end); % Chosen indices
        inc{tt,nd} = X{nd}(tau_ind) - X{nd}(tau_ind - tau_c);
    end
end

% Loop over evaluation points
for ii = 1:Npoints

    % Start and end bins
    kBinBeg = BinBeg(ii);
    kBinEnd = BinEnd(ii);

    % Data and weights
    for nd = 1:ndata
        iX{nd} = and(kBinBeg<=Xloc{nd},Xloc{nd}<=kBinEnd); % Data in bins
        iXkey{nd} = key(iX{nd}); % Data key
        XU{nd} = X{nd}(iX{nd}); % Unshifted data
        Khj{nd} = Kh(Xcentre(ii)-XU{nd}); % Weights
    end

    % For each shift
    for tt = 1:Ntau
        tau_c = tau_in(tt); % Chosen shift

        % Get data
        for nd = 1:ndata            
            XUin = iXkey{nd}; % Unshifted data indices
            XUin(XUin>nXdata(nd)-tau_c) = []; % Clip overflow
            yinc{nd} = inc{tt,nd}(XUin); % Increments
            Khjt{nd} = Khj{nd}(1:numel(yinc{nd})); % Clipped weight vector
        end

        % Concatenate data
        ytt = [yinc{:}];
        Khjtt = [Khjt{:}];

        % Increments and moments
        sumKhjtt = sum(Khjtt);
        M1(tt,ii) = sum(Khjtt.*ytt)/sumKhjtt;

        y2 = (ytt - M1(tt,ii)).^2; % Squared (with correction)
        M2(tt,ii) = sum(Khjtt.*y2)/sumKhjtt;
    end
end
end

MATLAB test script (no comments are required for this):

%% nKBR_testing
clearvars,close all

%% Parameters

% Simulation settings
n_sims = 10; % Number of simulations
dt = 0.001; % Time-step
tend1 = 40; % Time-end, process 1
tend2 = 36; % Time-end, process 2
x0 = 0; % Start position
eta = 0; % Mean
D = 1; % Noise amplitude
gamma = 1; % Drift slope

% Analysis settings
tau_in = 1:60; % Time-shift indexes
Npoints = 50; % Number of evaluation points
xLims = [-1,1]; % Limits of evaluation
h = 0.5; % Kernel bandwidth

%% Simulating
t1 = 0:dt:tend1;
t2 = 0:dt:tend2;

% Realize an Ornstein Uhlenbeck process
rng('default')
ex1 = exp(-gamma*t1);
ex2 = exp(-gamma*t2);
x1 = x0*ex1 + eta*(1-ex1) + sqrt(D)*ex1.*cumsum(exp(gamma*t1).*[0,sqrt(2*dt)*randn(1,numel(t1)-1)]);
x2 = x0*ex2 + eta*(1-ex2) + sqrt(D)*ex2.*cumsum(exp(gamma*t2).*[0,sqrt(2*dt)*randn(1,numel(t2)-1)]);

%% Calculating and timing moments

tic
for ns = 1:n_sims
    [~,M1,M2] = nKBR_moments({x1,x2},tau_in,Npoints,xLims,h);
end
nKBR_moments_time = toc;
nKBR_average_time = nKBR_moments_time/n_sims

%% Plotting

figure
hold on,box on
plot(t1,x1)
plot(t2,x2)
xlabel('Time')
ylabel('Amplitude')
title('Two Ornstein-Uhlenbeck processes')

figure
subplot(1,2,1)
box on
plot(dt*tau_in,M1,'k')
xlabel('Time-shift, tau')
title('M^{(1)}')
subplot(1,2,2)
box on
plot(dt*tau_in,M2,'k')
xlabel('Time-shift, tau')
title('M^{(2)}')

The test script will create two figures similar to the following.

[Figure: time-series data of the two OU processes]

[Figure: calculated moments of the processes]
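As an aside on the timing methodology alone (a suggestion, not part of the original script): timings like the above can also be obtained with `timeit`, which handles warm-up and repetition automatically and typically gives more stable estimates than a manual `tic`/`toc` loop. A minimal sketch:

% Alternative timing with timeit (handles warm-up and repetition)
f = @() nKBR_moments({x1,x2},tau_in,Npoints,xLims,h);
nKBR_average_time = timeit(f,3) % Request all three outputs, as in the loop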

Probability: stochastic processes and continuity of expectations

Let $X$ be a continuous stochastic process on $(0, 1)$ such that $\mathbb E(X_t)$ is finite for every $t \in (0, 1)$. Given any non-null subset $Y$ of the probability space, define $\mathbb Q_Y$ to be the restricted probability measure $\mathbb Q_Y(E) = P(E \cap Y) / P(Y)$.

Does there always exist a non-null $Y$ such that the function $f: (0, 1) \to \mathbb R$ defined by $f(t) = \mathbb E_{\mathbb Q_Y}(X_t)$ is continuous a.e.?

Name of a stochastic process.

Suppose we have $n > 1$ cells arranged in a row. Each cell contains a coin.

We label the coins evenly with integers ranging from $1$ to $k$, where $k$ is chosen in such a way that $\ell k = n$ for some parameter $\ell \geq 1$. The order of the numbering is random.

Then we roll a $k$-sided die showing the numbers $1, 2, \dots, k$ with probabilities $p_1, p_2, \dots, p_k$. All coins labeled with the number shown by the die are moved from their cell to the adjacent cell (the adjacent cell of the last cell is the first cell). If there is already a coin in the target cell, the coins are stacked. Only the top coin moves.

To clarify things a bit, consider the following example: $n = 9$ and $\ell = 3$, so we have $k = 3$; that is, each coin is labeled with a number ranging from $1$ to $3$. There are three groups: three coins with a 1, three coins with a 2, and three coins with a 3.
A possible ordering could be
$$\begin{align*} \left( \begin{array}{c} 1 \\ 2 \\ 1 \\ 3 \\ 2 \\ 2 \\ 1 \\ 3 \\ 3 \end{array} \right) \end{align*},$$
that is, coins 1, 3 and 7 are labeled with $1$, coins 2, 5 and 6 with $2$, and the remaining coins with $3$. If we now rolled the die and the face showed $1$, we would move coins 1, 3 and 7 to their adjacent cells and continue with the next roll.

What is the name of this stochastic process if we are interested in the number of coins in a given cell at round $t$? Is this game a known and studied stochastic process? A simulation sketch of the dynamics follows below.
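To make the dynamics concrete, here is a minimal MATLAB simulation sketch of the game as I read it; the circular adjacency and the rule that only a coin currently on top of its stack moves are my reading of the rules above:

% Minimal simulation sketch (assumptions: circular cells; only a coin
% on top of its stack moves when its number is rolled; moves are
% simultaneous within a round).
rng(1)
n = 9; k = 3; ell = n/k;           % Cells, labels, coins per label
p = ones(1,k)/k;                   % Die probabilities p_1..p_k
labels = repmat(1:k,1,ell);
labels = labels(randperm(n));      % Random labeling of the n coins
stacks = num2cell(labels);         % stacks{c}(end) is the top coin of cell c

T = 100;                           % Number of rounds
counts = zeros(T,n);               % Coins per cell after each round
for t = 1:T
    r = find(rand <= cumsum(p),1); % Die roll
    src = find(cellfun(@(s) ~isempty(s) && s(end)==r, stacks));
    for c = src                    % Remove the moving top coins first...
        stacks{c}(end) = [];
    end
    for c = src                    % ...then place them on adjacent cells
        nxt = mod(c,n)+1;          % Last cell wraps around to the first
        stacks{nxt}(end+1) = r;    % Every moving coin carries label r
    end
    counts(t,:) = cellfun(@numel,stacks);
end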

Stochastic processes: the probability distribution of the "derivative" of a random variable

Disclaimer: cross-posted on math.SE.

Let's set the stage:

Consider a stochastic PDE of the form

$$\partial_t h(x, t) = H(x, t) + \chi(x, t),$$
where $H$ is a deterministic function and $\chi(x, t)$ is a random variable.

In my case, the approximate solution of this sPDE is known (through experiments and numerical simulations):

$$h(x, t) \approx G(x, t) + \epsilon(x, t),$$
where $\epsilon$ is a stochastic variable.

Of course, the solution $h$ is not differentiable in the usual sense, but if the underlying distribution of $\epsilon$ is symmetric, like a Gaussian, then by observing $\partial_t h$ long enough, the deviations of $h$ from $G$ will cancel out, so that $G$ can be determined experimentally or numerically quite accurately.

However, this forces us to know the distribution of the changes in the values of $\epsilon$.

In this sense, this amounts to "taking the derivative" of $\epsilon$.

A brief analysis suggested to me that, if the underlying probability distribution of $\epsilon$ is $g$ (let's suppose $\epsilon$ is a function of $t$ only, for the sake of argument), then

the probability that a change of $z - w$ occurs is $g(z)g(w)$: if the value of $\epsilon$ at time $t$ is $z$, then at $t + dt$ the probability that $\epsilon(t + dt) = w$ is $g(w)$; hence, considering that the probability that $\epsilon(t) = z$ in the first place is $g(z)$, the probability that (warning: abuse of notation) $d\epsilon = z - w$ is $g(w)g(z)$ (of course, this needs some normalization, but that is irrelevant to what I want to ask here).

For example, if my analysis is correct, the "derivative" of a Gaussian random variable remains Gaussian.
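As a minimal numerical check of that last claim (under the strong assumption that consecutive samples of $\epsilon$ are independent draws from the same Gaussian $g$), the increments are indeed Gaussian, with twice the variance:

% Increments of i.i.d. Gaussian samples (assumption: independence of
% consecutive samples; then differences of Gaussians stay Gaussian)
rng(1)
sigma = 0.7; Nsamp = 1e6;
eps_t = sigma*randn(1,Nsamp); % Samples of epsilon at successive times
deps = diff(eps_t);           % The "derivative"-like increments
mean_deps = mean(deps)        % ~0, by symmetry
var_deps = var(deps)          % ~2*sigma^2: variance doubles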


Question:

Considering how "elementary" this idea made me wonder, is there a theory that captures the calculation in such a random variable? I would also like to integrate random variables (although I have not thought what that would mean physically or intuitively). I am looking for references / documents that deal with this type of theory; Not only do I "derive" from a random variable in a random sense, I need exactly above the way of thinking in theory.

I mean, I am aware of the existence of Itô calculus and Malliavin calculus, but every time I tried to learn what they are about, or what the underlying idea is (such as what differentiating means physically in these theories), people would start throwing around terminology I don't know. Don't get me wrong, I am also a math student, but in math, developing theory without giving any motivation or basic idea is so common, and I hate it so much, that I don't read math books anymore.

Stochastic processes: if they exist, are the limits of "almost sure convergence" and "convergence in mean" the same for a sequence of random variables?

I have a sequence of random variables $(X_n)_{n \in \mathbb{N}_0}$ that converges both "almost surely" and "in mean" to random variables $A$ and $B$, respectively:
$$
P(\lim X_n = A) = 1 \quad \text{(almost sure convergence)} \\
E(|X_n - B|) \rightarrow 0 \quad \text{(convergence in mean)}
$$

My guess would be that $A = B$ almost surely, but I could not find a proof.

I've already looked at the Wikipedia page on stochastic convergence, but I could not find any information about this particular case there.

Do you know of any results about this?

What additional properties does $(X_n)_n$ need to have so that $A = B$ almost surely?

Do you know of any example in which $P(A \neq B) > 0$?
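For what it is worth, a sketch using standard facts (modulo the usual measure-theoretic details) seems to give $A = B$ almost surely with no additional assumptions. Convergence in mean implies convergence in probability, by Markov's inequality,

$$P(|X_n - B| > \varepsilon) \leq \frac{E(|X_n - B|)}{\varepsilon} \rightarrow 0,$$

and convergence in probability implies almost sure convergence along some subsequence $(X_{n_j})$. Since $X_n \rightarrow A$ almost surely, the subsequence also converges to $A$ almost surely, hence $A = B$ almost surely; in particular, no example with $P(A \neq B) > 0$ should exist.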

Stochastic calculus – Distances between up-crossings and down-crossings in Gaussian processes

Given a Gaussian process $g := \mathcal{GP}\left(\mu, \Sigma\right)$,
where $\mu$ is the mean and $\Sigma$ is the covariance function, I am interested in estimating the mean value $L_m$ of the distances between up-crossings and down-crossings of a constant level $u$, that is, these distances:

[Figure: a sample path with the distances between successive up- and down-crossings of the level marked]

In this plot I use $u = 0$, but ideally I would like $u$ to be generic. I suspect this is related to Rice's formula, which estimates the number of up-crossings for a given Gaussian process over a domain of given length, but I do not know how.
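For reference, the standard statement of Rice's formula applies to a stationary, zero-mean Gaussian process with covariance function $r(\tau)$ (a special case of the generic $\mathcal{GP}(\mu, \Sigma)$ above, so stationarity is an assumption here): the expected number of up-crossings of the level $u$ per unit time is

$$\mathbb{E}[N_u^+] = \frac{1}{2\pi} \sqrt{\frac{-r''(0)}{r(0)}}\, \exp\!\left(-\frac{u^2}{2 r(0)}\right).$$

If the excursions are stationary and ergodic, a renewal-type argument connects this rate to the quantity in question: the long-run fraction of time spent above $u$ equals the mean excursion length times the up-crossing rate, so the mean distance between an up-crossing and the next down-crossing would be $P(g_t > u) / \mathbb{E}[N_u^+]$.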

stochastic calculus: condition under which g(B_t) is a martingale

How can we find all functions $g$ such that $g(B_t)$ is a martingale? (Suppose that $g(x)$ is twice continuously differentiable.) Letting $X_t = g(B_t)$ and using Itô's formula, I get

$$dX_t = g'(B_t)\,dB_t + \frac{1}{2} g''(B_t)\,dt,$$

then, integrating both sides,

$$X_t = X_0 + \int_0^t g'(B_s)\,dB_s + \frac{1}{2} \int_0^t g''(B_s)\,ds.$$

It is required that $\textbf{E}[X_t \mid \mathcal{F}_s] = X_s$ for $t \geq s$.

It follows that we need

$$\textbf{E}\left[ \int_s^t g'(B_u)\,dB_u + \frac{1}{2} \int_s^t g''(B_u)\,du \,\Big|\, \mathcal{F}_s \right] = 0.$$

Is this right? I am not sure how to continue from here. Any suggestions?
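One way to continue, if I am not mistaken: under the usual integrability conditions on $g'$, the Itô integral $\int_s^t g'(B_u)\,dB_u$ has zero conditional expectation given $\mathcal{F}_s$, so the requirement reduces to

$$\textbf{E}\left[ \int_s^t g''(B_u)\,du \,\Big|\, \mathcal{F}_s \right] = 0 \quad \text{for all } t \geq s,$$

which, for continuous $g''$, forces $g'' \equiv 0$. Hence $g(x) = ax + b$: exactly the affine functions make $g(B_t)$ a martingale.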