## probability – Confusion on when to use CDF and Poisson process

I’m going through the MIT OCW probability course (6.041sc), but I’m having trouble knowing when to use the CDF versus the Poisson process. Here’s the problem (Recitation 15, problem 1).

## Problem Statement:

Beginning at time $$t=0$$, we begin using bulbs, one at a time, to illuminate a room. Bulbs are replaced immediately upon failure. Each new bulb is selected independently by an equally likely choice between a type-A bulb and a type-B bulb. The lifetime, $$X$$, of any particular bulb of a particular type is a random variable, independent of everything else, with the following PDF:
$$\begin{aligned}\text{for type-A bulbs: } f_X(x) &= \begin{cases}e^{-x}, & x\geq0,\\0, & \text{otherwise}\end{cases}\\\text{for type-B bulbs: } f_X(x) &= \begin{cases}3e^{-3x}, & x\geq0,\\0, & \text{otherwise}\end{cases}\end{aligned}$$

Find the probability that there are no bulb failures before time $$t$$.

## My Attempt:

I used the total probability theorem and then computed the CDF, $$F_X(t)=P(X\leq t)$$:

$$\begin{aligned}P(\text{no bulb failure before time }t)&=P(A)P(X\leq t|A)+P(B)P(X\leq t|B)\\&=\frac{1}{2}\int_0^t e^{-x}\,dx+\frac{1}{2}\int_0^t 3e^{-3x}\,dx\\&=\frac{1}{2}\left(1-e^{-t}\right)+\frac{1}{2}\left(1-e^{-3t}\right)\end{aligned}$$

## Solution:

$$P(\text{no bulb failure before time }t)=\frac{1}{2}e^{-t}+\frac{1}{2}e^{-3t}$$

I was able to reproduce this result using the PMF for the number of arrivals $$N_t$$ in a Poisson process with rate $$\lambda$$, over an interval of length $$t$$:

$$P_{N_t}(k)=e^{-\lambda t}\frac{(\lambda t)^k}{k!}, \quad k=0,1,\dots$$

In this context we’re looking at no arrivals, so $$k=0$$. I figured that the arrival rate would be $$\lambda=1$$ for type-A (and $$\lambda=3$$ for type-B), but I’m not sure why. Plugging in the appropriate numbers and using the total probability theorem, we get the answer above.
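As a numerical sanity check (a sketch; the test values of $$t$$ are arbitrary), the Poisson $$k=0$$ expression agrees with the complement $$1-F_X(t)$$ of the CDF, averaged over the two bulb types:

```python
import math

def p_poisson_zero(t):
    # P(N_t = 0) averaged over the equally likely bulb types (rates 1 and 3)
    return 0.5 * math.exp(-t) + 0.5 * math.exp(-3 * t)

def p_survival(t):
    # P(X > t) = 1 - F_X(t), averaged over the two types
    cdf_a = 1 - math.exp(-t)       # type-A CDF
    cdf_b = 1 - math.exp(-3 * t)   # type-B CDF
    return 0.5 * (1 - cdf_a) + 0.5 * (1 - cdf_b)

for t in (0.1, 1.0, 2.5):
    assert abs(p_poisson_zero(t) - p_survival(t)) < 1e-12
print("match")
```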

## My questions:

1. Why did the CDF give me a different result? I’m sure that computing $$P(X\leq t)$$ was a valid approach, since that’s the probability of the lifetime being at most $$t$$, but I must have some conceptual misunderstanding here.
2. How would I know that the arrival rate for type-A is $$1$$ (and $$3$$ for type-B)? The only justification I can think of is that the two lifetimes are exponentially distributed with parameters $$\lambda=1$$ and $$\lambda=3$$, respectively.

## statistics – Problem about Multivariate Probability Distributions

I was solving this problem and I have a doubt regarding part (c).

I attach the statement as an image:

*(problem statement image not reproduced here)*

My question is about part (c); maybe it is simple, but I do not know how to approach it correctly.

Thank you,

A.N.J.

## If $\nu^{\ast 2}$ is a tight probability measure, is $\nu$ itself tight?

Let $$E$$ be a normed $$\mathbb R$$-vector space and $$\mu$$ be a tight$$^1$$ probability measure on $$\mathcal B(E)$$. Assume $$\nu$$ is another probability measure on $$\mathcal B(E)$$ and$$^2$$ $$\mu=\nu^{\ast k}\tag1$$ for some $$k\in\mathbb N$$.

Are we able to show that $$\nu$$ is tight as well?

I know how we can prove that the convolution of tight measures is tight and how we can show that if $$\nu_1$$ and $$\nu_1\ast\nu_2$$ are tight, then $$\nu_2$$ is tight as well, but I’m not sure how we could prove the desired claim here.
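For reference, the standard argument I have in mind for the first of these facts: choose compact $$K_i$$ with $$\nu_i(K_i^c)<\varepsilon/2$$; the Minkowski sum $$K_1+K_2=\theta_2(K_1\times K_2)$$ is compact, and

```latex
\begin{aligned}
(\nu_1\ast\nu_2)\bigl((K_1+K_2)^c\bigr)
  &= (\nu_1\otimes\nu_2)\bigl(\theta_2^{-1}\bigl((K_1+K_2)^c\bigr)\bigr) \\
  &\le (\nu_1\otimes\nu_2)\bigl((K_1\times K_2)^c\bigr) \\
  &\le \nu_1(K_1^c) + \nu_2(K_2^c) < \varepsilon,
\end{aligned}
```

where the first inequality uses $$\theta_2(K_1\times K_2)\subseteq K_1+K_2$$ and the second is a union bound.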

(To give some context: I would like to show that if $$\mu$$ is infinitely divisible (i.e. for all $$n\in\mathbb N$$, there is a probability measure $$\nu_n$$ such that $$\mu=\nu_n^{\ast n}$$) and tight, then the convolution roots $$\mu^{\ast\frac1n}$$ are well-defined for all $$n\in\mathbb N$$. In order to show uniqueness, we need that the $$\nu_n$$ are tight.)

$$^1$$ i.e. for all $$\varepsilon>0$$, there is a compact $$K\in\mathcal B(E)$$ with $$\mu(K^c)<\varepsilon$$.

$$^2$$ If $$\nu_1,\ldots,\nu_k$$ are measures on $$\mathcal B(E)$$ and $$\theta_k:E^k\to E,\;x\mapsto x_1+\cdots+x_k,$$ then the convolution of $$\nu_1,\ldots,\nu_k$$ is defined to be the pushforward measure $$\nu_1\ast\cdots\ast\nu_k:=\theta_k(\nu_1\otimes\cdots\otimes\nu_k)$$ of the product measure $$\nu_1\otimes\cdots\otimes\nu_k$$ with respect to $$\theta_k$$. If $$\nu_1=\cdots=\nu_k$$, we simply write $$\nu_1^{\ast k}:=\nu_1\ast\cdots\ast\nu_k$$.

## probability theory – Proving asymptotic independence of random variables

I’m working on proving asymptotic independence of two sequences of discrete random variables. Suppose I have two sequences of random variables $$X_n,Y_n$$ with probability generating functions $$G_{X_n}(z),G_{Y_n}(z)$$, respectively. When $$X_n,Y_n$$ converge in distribution to $$X,Y$$, respectively, I would like to prove that

$$G_{X_n,Y_n}(z_X,z_Y) \xrightarrow{n \rightarrow \infty} G_{X}(z_X) \cdot G_{Y}(z_Y).$$
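To see what the factorization looks like at finite $$n$$ when the coordinates really are independent, here is a small numerical check (a sketch; the Poisson rates $$2,3$$ and the evaluation points $$z_X=0.4$$, $$z_Y=0.7$$ are arbitrary choices):

```python
import math

def pgf_poisson(z, lam, terms=60):
    # truncated series for G(z) = E[z^X], X ~ Poi(lam)
    return sum(z ** k * math.exp(-lam) * lam ** k / math.factorial(k)
               for k in range(terms))

def joint_pgf_indep(zx, zy, la, lb, terms=60):
    # joint PGF E[zx^X zy^Y] for independent X ~ Poi(la), Y ~ Poi(lb)
    return sum(zx ** j * zy ** k
               * math.exp(-la) * la ** j / math.factorial(j)
               * math.exp(-lb) * lb ** k / math.factorial(k)
               for j in range(terms) for k in range(terms))

zx, zy, la, lb = 0.4, 0.7, 2.0, 3.0
lhs = joint_pgf_indep(zx, zy, la, lb)
rhs = pgf_poisson(zx, la) * pgf_poisson(zy, lb)
assert abs(lhs - rhs) < 1e-9  # factorization holds for independent coordinates
print("ok")
```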

Now suppose that $$Y_n$$ does not converge in distribution to any $$Y$$, for example because $$Y_n \sim \mathrm{Poi}(\lambda = n)$$. Would it be sufficient to prove asymptotic independence by showing that either

$$\frac{G_{X_n,Y_n}(z_X,z_Y)}{G_{X_n}(z_X) \cdot G_{Y_n}(z_Y)} \xrightarrow{n \rightarrow \infty} 1$$

or

$$G_{X_n,Y_n}(z_X,z_Y) - G_{X_n}(z_X) \cdot G_{Y_n}(z_Y) \xrightarrow{n \rightarrow \infty} 0\,?$$

## plotting – Dot Density Plot for 3D Probability Density Function of {x, y, z}

I have been calculating probability density functions for different states of the hydrogen atom, and want to plot them in 3D using a dot density plot like this one. However, I have not figured out how to convert my function into a list of sample points that can be plotted as a density plot. Here are the three functions I’ve been working on.

    probs =
    {(E^(-2 Im[ArcTan[x, y]] - Re[Sqrt[x^2 + y^2 + z^2]]) Abs[x^2 + y^2])/(896 Pi),
     (2 E^(2 Im[ArcTan[x, y]] - 2/3 Re[Sqrt[x^2 + y^2 + z^2]]) Abs[Sqrt[x^2 + y^2] z]^2)/(45927 Pi),
     (3 E^(-4 Im[ArcTan[x, y]] - 1/2 Re[Sqrt[x^2 + y^2 + z^2]]) Abs[(x^2 + y^2) (-12 + Sqrt[x^2 + y^2 + z^2])]^2)/(29360128 Pi)}


These are for the states $$|2,1,1\rangle$$, $$|3,2,-1\rangle$$, and $$|4,2,2\rangle$$, respectively.
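One way to get a list of dots is rejection sampling (a minimal Python sketch rather than Mathematica, using only the first density; for real $$x,y,z$$ the $$\mathrm{Im}[\dots]$$ term vanishes, and the box size and density bound are ad hoc assumptions):

```python
import math
import random

def density(x, y, z):
    # first density in `probs`: e^{-r} (x^2 + y^2) / (896 Pi)
    # (for real x, y, z the Im[ArcTan[x, y]] exponent is zero)
    r = math.sqrt(x * x + y * y + z * z)
    return math.exp(-r) * (x * x + y * y) / (896 * math.pi)

def rejection_sample(n, box=12.0, fmax=2.5e-4, seed=0):
    # draw n points with frequency proportional to `density`, by rejection
    # sampling in the cube [-box, box]^3; fmax is a rough upper bound on
    # the density (its maximum is about 1.9e-4)
    rng = random.Random(seed)
    pts = []
    while len(pts) < n:
        x, y, z = (rng.uniform(-box, box) for _ in range(3))
        if rng.uniform(0, fmax) < density(x, y, z):
            pts.append((x, y, z))
    return pts

pts = rejection_sample(2000)
print(len(pts))  # 2000 dots, ready for a 3D scatter plot
```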

## probability or statistics – Stochastic noise with known probability density function

How can I generate samples of a random function whose functional “probability density” is known? When I say “probability density” for a random function $$\xi(\mathbf{q},t)$$, I mean that there is a functional $$P(\xi(\mathbf{q},t))$$ such that all mean values over the $$\xi$$ realizations are calculated as the following path integral:

$$\langle A(\xi)\rangle_{\xi}=\int \mathcal{D}\xi\, A(\xi)\, P(\xi).$$

In my particular case this PDF is the following:

$$P(\xi(\mathbf{q},t))= \exp \left(- \int d z \int \frac{d^{2} q}{(2 \pi)^{2}}\, q^{11/3}\,|\xi|^{2}\right)$$

So the process is Gaussian in a functional sense. I understand that it is always possible to discretize the model and simulate the process manually, but is there some built-in instrument in Mathematica that allows you to deal with such things?
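The manual discretization is simple here because the weight is diagonal in $$q$$-space: each Fourier mode is an independent complex Gaussian with variance $$\propto q^{-11/3}$$. A minimal 1D sketch of that idea (grid size, spacing, and overall normalization are arbitrary assumptions):

```python
import cmath
import random

def sample_field(n=64, dq=0.1, seed=1):
    # Each Fourier mode xi(q) is an independent complex Gaussian whose
    # variance scales as q^(-11/3), matching P ~ exp(-sum_q q^(11/3) |xi|^2).
    rng = random.Random(seed)
    qs = [dq * (k + 1) for k in range(n)]        # skip q = 0 (divergent weight)
    stds = [q ** (-11.0 / 6.0) for q in qs]      # std dev = sqrt(q^(-11/3))
    modes = [complex(rng.gauss(0, s), rng.gauss(0, s)) for s in stds]

    def field(x):
        # one real-space realization via a direct (slow) Fourier sum
        return sum((m * cmath.exp(1j * q * x)).real for m, q in zip(modes, qs))

    return field

xi = sample_field()
print(xi(0.0), xi(1.0))  # two point values of one realization
```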

## probability distributions – Finding $c$ and $d$ such that two functions represent densities

Given are the functions

$$f(x) = \left\{ \begin{array}{ll} cx^2 & \mbox{if } x \in (0,1) \\ 0 & \mbox{else} \end{array} \right.$$

and

$$g(x) = \left\{ \begin{array}{ll} \frac{1}{\sqrt{x}} & \mbox{if } x \in (0,d) \\ 0 & \mbox{else} \end{array} \right.$$

I’m stuck when it comes to finding $$c$$ and $$d$$ so that $$f$$ and $$g$$ are densities, and I don’t know what the right approach is.

Once we have these values, how can we calculate the following probability for a random variable $$X$$ with density $$f$$?

$$P\left(X \leq \frac{1}{2}\right)$$

Any help is appreciated.
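A density must integrate to $$1$$ over its support, which pins down candidate values. A numerical check (a sketch: $$c=3$$ solves $$c/3=1$$ and $$d=\tfrac14$$ solves $$2\sqrt d=1$$, and the last line evaluates the requested probability):

```python
def integrate(f, a, b, n=100000):
    # simple midpoint-rule numerical integration
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# a density must integrate to 1 over its support
c = 3.0    # candidate from c * 1^3 / 3 = 1
d = 0.25   # candidate from 2 * sqrt(d) = 1

assert abs(integrate(lambda x: c * x ** 2, 0.0, 1.0) - 1.0) < 1e-6
assert abs(integrate(lambda x: x ** -0.5, 0.0, d) - 1.0) < 5e-3
print(integrate(lambda x: c * x ** 2, 0.0, 0.5))  # P(X <= 1/2) = (1/2)^3 = 0.125
```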

## probability theory – expected value of a map generation algorithm

I designed a program to create a map for my 2D game, and I have three questions…

algorithm:

step 1:

    create a cell at (0, 0), select it as the first cell, and mark this step as round 0

step 2:

    in round i (starting from 1), for every cell created in round i - 1:
        for each adjacent index in up, down, left, and right:
            generate a random value in (0, 1); create a new cell at this index if the random value is less than P

step 3:

    if some cells were created in this round, go to step 2; otherwise finish the algorithm


Here is my Python code:

    import random

    def calc(P):
        # P maps the round number to a creation probability
        mp = {}                                   # cell index -> creation round
        s = (0, 0)
        ds = ((-1, 0), (0, -1), (1, 0), (0, 1))   # up, down, left, right
        q = [s]                                   # queue of created cells
        ql = 0
        mp[s] = 1
        while len(q) > ql:
            idx = q[ql]
            rnd = mp.get(idx, -1)
            ql += 1
            for d in ds:
                cur_idx = (idx[0] + d[0], idx[1] + d[1])
                if mp.get(cur_idx, -1) == -1 and P(rnd) > random.random():
                    mp[cur_idx] = rnd + 1
                    q.append(cur_idx)
        return len(mp)  # count of cells
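The simulation tables below come from averaging many runs; for instance, for a constant $$P$$ (a standalone sketch repeating the generation routine; the run count and seed are arbitrary):

```python
import random

def calc(P):
    # BFS map generation, repeated here so this snippet runs standalone
    mp = {(0, 0): 1}
    q = [(0, 0)]
    ql = 0
    while len(q) > ql:
        idx = q[ql]
        rnd = mp[idx]
        ql += 1
        for dx, dy in ((-1, 0), (0, -1), (1, 0), (0, 1)):
            nxt = (idx[0] + dx, idx[1] + dy)
            if nxt not in mp and random.random() < P(rnd):
                mp[nxt] = rnd + 1
                q.append(nxt)
    return len(mp)

# estimate the mean cell count for constant P = 0.3 by averaging many runs
random.seed(0)
mean = sum(calc(lambda r: 0.3) for _ in range(2000)) / 2000
print(mean)  # close to the simulated value of about 7.1
```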


This algorithm creates a map via a creation probability $$P$$ that gradually decays with the generation round. But I don’t know how to calculate the expected number of cells created by the algorithm; here $$P$$ is a function of the round.

question 1: what is the expected number of cells created when $$P(\mathtt{round}) = C$$, where $$C$$ is a constant greater than or equal to $$0$$?

| $$C$$ | Simulated mean cell count |
| --- | --- |
| 0.1 | 1.545 |
| 0.15 | 2.043 |
| 0.2 | 3.051 |
| 0.25 | 4.316 |
| 0.3 | 7.108 |
| 0.35 | 13.104 |
| 0.4 | 30.791 |
| 0.45 | 160.748 |

question 2: for the number of generated cells to stay finite, what should $$P(\mathtt{round})$$ satisfy?

question 3: what is the expected number of cells created when
$$P(\mathtt{round}) = \exp(-\mathtt{round}/a),$$
with $$a>1$$?

| $$a$$ | Simulated mean cell count |
| --- | --- |
| 1 | 3.132 |
| 2 | 7.985 |
| 3 | 14.951 |
| 4 | 24.016 |
| 5 | 34.462 |
| 10 | 117.747 |
| 15 | 243.395 |
| 20 | 413.916 |
| 25 | 627.373 |
| 30 | 886.66 |
| 35 | 1180.763 |
| 40 | 1512.886 |
| 45 | 1888.011 |
| 50 | 2319.398 |
| 60 | 3274.22 |
| 70 | 4403.592 |
| 80 | 5690.979 |
| 90 | 7134.92 |

## Probability a permutation is turned into a cycle

Let $$M$$ be a $$0/1$$ square matrix with exactly one $$1$$ per row and column (a permutation matrix).

If you permute the columns and rows independently and uniformly at random, what is the probability that the resulting permutation matrix is a complete cycle?
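If I'm reading the setup right, independent uniform row and column permutations make $$P_\sigma M P_\tau$$ uniform over all permutation matrices, so the question reduces to the chance that a uniform random permutation of $$n$$ elements is a single $$n$$-cycle. A brute-force check for small $$n$$ (a sketch; `n = 5` is an arbitrary choice):

```python
import itertools

def is_complete_cycle(perm):
    # follow 0 -> perm[0] -> ... and check the cycle through 0 covers all n
    n = len(perm)
    seen, j = 1, perm[0]
    while j != 0:
        j = perm[j]
        seen += 1
    return seen == n

n = 5
perms = list(itertools.permutations(range(n)))
frac = sum(is_complete_cycle(p) for p in perms) / len(perms)
print(frac)  # 24/120 = 0.2, i.e. 1/n for n = 5
```

The count of $$n$$-cycles is $$(n-1)!$$ out of $$n!$$ permutations, suggesting the answer $$1/n$$.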