Probability of infected people – Mathematics Stack Exchange

Suppose that a town has a population of $100$ and $10$ of them are infected with coronavirus. Assume that I am currently not infected and that, if I meet a group of one or more people that includes an infected person, then I will get infected.

(a) What is the probability that I will get infected when I meet with one person?

(b) What is the probability that I will get infected when I meet with two persons?

(c) What is the probability that I will get infected when I meet with three persons?

I was wondering whether there is a probabilistic approach to this problem, rather than brute force or plugging into some definition, i.e. pure logical thinking and counting?
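To make the counting approach concrete, here is a minimal sketch in Python, assuming the group I meet is drawn uniformly at random, without replacement, from the other $99$ residents ($10$ infected, $89$ healthy); the function name is mine:

    from math import comb

    def p_infected(k, others=99, infected=10):
        """Probability that a uniformly random group of k of the other
        residents contains at least one infected person."""
        healthy = others - infected
        return 1 - comb(healthy, k) / comb(others, k)

    for k in (1, 2, 3):
        print(k, p_infected(k))

The complement rule (one minus the probability that the group contains no infected person) does all the counting, which is the kind of pure counting I was hoping for.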

artificial intelligence – In counterfactual regret minimization, why are additions to regret weighted by reach probability?

I’m reading the algorithm on page 12 of An Introduction to Counterfactual Regret Minimization. On lines 25 and 26, we accumulate new values into $r_i$ and $s_i$:

  • $25. \quad r_I(a) \leftarrow r_I(a) + \pi_{-i} \cdot \left(v_{\sigma_{I \rightarrow a}}(a) - v_{\sigma}(a)\right)$
  • $26. \quad s_I(a) \leftarrow s_I(a) + \pi_{i} \cdot \sigma^{t}(I, a)$

$r_I(a)$ is the accumulated regret for information set $I$ and action $a$. $s_I(a)$ is the accumulated strategy for information set $I$ and action $a$.

$\pi_{i}$ is the probability of reaching this game state for the learning player (for whom we’re updating strategy and regret values in the current CFR iteration). $\pi_{-i}$ is the probability of reaching this game state for the other player.

Why do we multiply by $\pi_{-i}$ and $\pi_{i}$ to accumulate the regret and strategy on lines 25 and 26? Couldn’t we just do this:

  • $25. \quad r_I(a) \leftarrow r_I(a) + \left(v_{\sigma_{I \rightarrow a}}(a) - v_{\sigma}(a)\right)$
  • $26. \quad s_I(a) \leftarrow s_I(a) + \sigma^{t}(I, a)$

It seems to me it doesn’t matter exactly how much we adjust the strategy and regrets in this CFR iteration—so long as we do enough CFR iterations, won’t we end up with good values for $r_I$ and $s_I$ in the end?
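To make the two variants concrete, here is a toy sketch in Python of just the accumulation step; all names and numbers ($\pi_i$, $\pi_{-i}$, the values) are placeholders I made up, not taken from the tutorial:

    from collections import defaultdict

    r = defaultdict(float)  # accumulated regrets r_I(a)
    s = defaultdict(float)  # accumulated strategy sums s_I(a)

    I = "example_infoset"
    actions = ["fold", "call"]

    pi_i = 0.5        # learning player's contribution to the reach probability
    pi_neg_i = 0.25   # opponent's (and chance's) contribution to the reach probability

    sigma_t = {"fold": 0.4, "call": 0.6}         # current strategy at I
    v_sigma = 1.0                                # counterfactual value of playing sigma
    v_sigma_I_to_a = {"fold": 0.2, "call": 1.5}  # value when a is always chosen at I

    for a in actions:
        # Line 25: regret increment weighted by the opponent's reach probability.
        r[(I, a)] += pi_neg_i * (v_sigma_I_to_a[a] - v_sigma)
        # Line 26: strategy increment weighted by the learning player's reach probability.
        s[(I, a)] += pi_i * sigma_t[a]
        # The unweighted variant I am asking about would simply drop pi_neg_i and pi_i here.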

pr.probability – Probability of wind speed generator hitting a certain m/s

So I’m making this game, in which there will be wind. Here are the specifications:

  • Wind speed can range from 0 to 30 m/s (meters per second, must not be confused with milliseconds).
  • The wind speed changes with a certain interval, for example every 5 or 10 minutes.
  • Every time the wind speed changes, it will either go up or down by a random amount between 0 and 1 m/s.
  • Whether the wind speed goes up or down at each change depends on a function f(x), where x is the current wind speed. This function is something I will modify as I’m developing the game, to stabilize the wind speed.
  • If the wind speed is at its maximum or minimum (30 or 0), it cannot go higher or lower; for example, if the speed is 30 and a change tells it to go up, it will stay at 30.

Question:
How do I calculate the probability that the wind speed hits y (between 0 and 30) within z changes? Example: if the wind speed changes 60 times, what is the chance that it will at some point reach 30?
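In case an empirical answer is good enough, here is a Monte Carlo sketch in Python; the f(x) below and the starting speed of 15 m/s are placeholders for whatever I end up using:

    import random

    def prob_up(x):
        # Placeholder f(x): probability of an upward step, pushing the speed toward 15 m/s.
        return 1.0 - x / 30.0

    def hits_target(y, z, start=15.0):
        speed = start
        for _ in range(z):
            step = random.uniform(0.0, 1.0)      # random amount between 0 and 1 m/s
            if random.random() < prob_up(speed):
                speed = min(30.0, speed + step)  # clamp at the 30 m/s maximum
            else:
                speed = max(0.0, speed - step)   # clamp at the 0 m/s minimum
            if speed >= y:
                return True
        return False

    trials = 50_000
    estimate = sum(hits_target(30.0, 60) for _ in range(trials)) / trials
    print(f"Estimated probability of reaching 30 m/s within 60 changes: {estimate:.4f}")

For an exact answer the speed could instead be discretized and treated as a Markov chain with an absorbing state at y, but the simulation is the quickest way to sanity-check f(x).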

This question is something I have been thinking about for a while, but I cannot seem to find anything that even closely resembles an answer; it’s way over my head.
Thank you for your time. 🙂

syntax – Probability Generating Function(s)?

I’m very new to Mathematica; I have started to use it because I have been tackling some physics-related problems (I’m an engineer by training).

Specifically, I’m dealing with master equations, and one of the tools used to solve master equations under certain conditions is the probability generating function, i.e.

$T(z) = \sum_{a=0}^{\infty} z^{a} P(a)$

where $P(a)$ is a probability mass function with discrete support, e.g. the Poisson distribution.
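For concreteness, with the Poisson pmf $P(a) = e^{-\lambda}\lambda^{a}/a!$ the sum can be carried out by hand:

$$T(z) = \sum_{a=0}^{\infty} z^{a}\, e^{-\lambda}\frac{\lambda^{a}}{a!} = e^{-\lambda}\sum_{a=0}^{\infty}\frac{(\lambda z)^{a}}{a!} = e^{\lambda(z-1)},$$

and this is the kind of closed-form result I would like Mathematica to produce.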

  • Is there a way to compute such a transformation?
  • Should I use the Z-transform which has a similar definition?
  • Can somebody show me how the problem would be approached as I’m very
    new to the syntax?

probability – Showing almost-sure convergence, given condition.

Let $(\Omega, \mathcal{F}, P)$ be a probability space with $X_1, X_2, \ldots : \Omega \to \mathbb{R}$ independent random variables. Take $E(X_i)=0$ for all $i \in \mathbb{N}$ and
$$\sum_i E\left(X_i^{2}\,\chi_{\{|X_i|\le 1\}} + |X_i|\,\chi_{\{|X_i|>1\}}\right) < \infty.$$

Show that $\sum_i X_i$ converges $P$-almost everywhere.

Now I am thinking of using Kolmogorov’s three-series theorem. I am struggling to convert the characteristic functions into probabilities; this is a method I haven’t been able to understand properly in class.
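For reference, here is how the assumption lines up with the three series (only a sketch, writing $Y_i = X_i\,\chi_{\{|X_i|\le 1\}}$ for the truncation at level $1$):

$$\sum_i P(|X_i|>1) \le \sum_i E\left(|X_i|\,\chi_{\{|X_i|>1\}}\right) < \infty, \qquad \sum_i \operatorname{Var}(Y_i) \le \sum_i E\left(X_i^{2}\,\chi_{\{|X_i|\le 1\}}\right) < \infty,$$

and since $E(X_i)=0$ we have $|E(Y_i)| = |E(X_i\,\chi_{\{|X_i|>1\}})| \le E\left(|X_i|\,\chi_{\{|X_i|>1\}}\right)$, which is again summable.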

probability – Exponential moments of expected return time

Let $\left(X_n\right)_{n\geqslant 0}$ be a Markov chain on $E$ (a finite set of cardinality $m$). For all $x\in E$, we define the return time to $x$ by
$$\tau_x=\inf\left\{n\geqslant 1\colon X_n=x\right\}.$$

If the chain is irreducible, we know that for any $x\in E$, we have $\mathbb{E}_x\left(\tau_x\right)<\infty$.

Show that there exists $\varepsilon>0$ such that $$\mathbb{E}_x\left(\exp\left(\varepsilon\tau_x\right)\right)<+\infty.$$

Deduce that for all $p\geqslant 1$, $$\mathbb{E}_x\left(\left(\tau_x\right)^{p}\right)<+\infty.$$

The second part is easy if we have the first part, but I do not know how to get this one…

Any hints?
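In case it helps to spell out one standard route (a sketch, not a full argument): since $E$ is finite and the chain is irreducible, there exist $m\in\mathbb{N}$ and $\delta>0$ such that $\mathbb{P}_y\left(\tau_x\leqslant m\right)\geqslant\delta$ for every $y\in E$. Iterating this with the Markov property gives the geometric tail bound
$$\mathbb{P}_x\left(\tau_x>km\right)\leqslant\left(1-\delta\right)^{k}\quad\text{for all }k\geqslant 0,$$
so $\mathbb{E}_x\left(\exp\left(\varepsilon\tau_x\right)\right)<+\infty$ as soon as $e^{\varepsilon m}(1-\delta)<1$.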

Probability question: choosing a random integer from [0, a]

Each time, we choose a random integer from $[0, a]$. What is the probability that, after $k$ draws, the sum of all chosen numbers is greater than or equal to $b$?
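A small sketch in Python, assuming each draw is independent and uniform on the integers $\{0, 1, \ldots, a\}$ (the function name and example values are mine):

    def prob_sum_at_least(a, k, b):
        """Exact P(sum of k independent uniform draws from {0,...,a} >= b),
        computed by dynamic programming over the running sum."""
        dist = {0: 1.0}                      # distribution of the sum after 0 draws
        for _ in range(k):
            new = {}
            for total, p in dist.items():
                for value in range(a + 1):   # each value 0..a has probability 1/(a+1)
                    new[total + value] = new.get(total + value, 0.0) + p / (a + 1)
            dist = new
        return sum(p for total, p in dist.items() if total >= b)

    print(prob_sum_at_least(a=6, k=3, b=10))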

probability theory – Optimal algorithm to distinguish given black box access

This is a variant of this question. Consider two probability distributions $D$ and $U$ over $n$-bit strings, where $U$ is the uniform distribution. Assume that $D$ and $U$ are far apart in total variation distance, i.e.,
\begin{equation}
d_{\text{TV}}(D, U) \geq \frac{2}{3}.
\end{equation}

We are not given an explicit description of $D$; we only have black-box access, i.e., a sampling device that can sample from $D$. Consider a sample $z \in \{0, 1\}^{n}$, taken from either $D$ or $U$. We want to know which is the case, and to do that we consider polynomial-time algorithms that use the sampling device. I am looking for a single deterministic algorithm that is optimal for every $D$.

Let $A$ be this optimal algorithm. For each $D$, our algorithm $A$ is optimal in the sense that

$$
\Pr_{z \sim D} \left(A(z) = 1\right) \geq \frac{3}{4}, \\
\Pr_{z \sim U} \left(A(z) = 0\right) \geq \frac{3}{4}.
$$

Output $1$ is interpreted as “the sample comes from $D$,” and output $0$ is interpreted as “the sample comes from $U$.”

Let $A$ use the black-box sampling device at most a polynomial number of times and get samples $z_{1}, z_{2}, \ldots, z_{k}$ from $D$, for some polynomial $k$. My intuition is that, if this optimal algorithm decides that $z$ indeed came from $D$, then it must be true that $z_{i} = z$ for at least one $i \in \{1, \ldots, k\}$. In other words, since we know nothing about $D$ or its support, we have to “see” $z$ at least once in the samples we collect from $D$ to ascertain that $z$ indeed came from $D$. How do I mathematically formalize this statement?

Also, does this same intuition hold if we are given a polynomial number of samples as input (taken from either $D$ or $U$) instead of just one and are also given access to a black-box sampler for $D$?

probability or statistics – How to find the inverse of the CDF of the geometric distribution?

The CDF of the geometric distribution is given by $F(x)=1-(1-p)^x$.
I want to calculate its inverse, for example $F^{-1}(U)$.

I am doing the following

f[x_] := 1 - (1 - p)^x;  InverseFunction[f[u]]

But I do not get anything.

My ultimate goal is to generate samples of a random variable that has a geometric distribution.
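In case it helps, the inversion itself can be done by hand (a sketch; the exact rounding depends on which convention of the geometric distribution is wanted): solving $u = F(x) = 1-(1-p)^{x}$ for $x$ gives
$$F^{-1}(u) = \frac{\log(1-u)}{\log(1-p)},$$
so with $U$ uniform on $(0,1)$, taking $X=\left\lceil \log(1-U)/\log(1-p)\right\rceil$ produces samples that are geometric on $\{1,2,\ldots\}$ by inverse-transform sampling.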

probability – stopping time of random walk

Let $(S_{1},S_{2},\ldots)$ be a symmetric simple random walk (with each step $X_{n}$ taking only the values $1$ and $-1$ with equal probabilities). Let $A=\min\{n\colon S_{n}=5\}$ and $B=\min\{n\colon S_{n}<-3\}$, and we want to know whether $\min(A,B)+1$ is a stopping time of the sequence $X=(X_{1},X_{2},\ldots)$. I have no clue how to solve it. Thanks for any help.
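One way to sanity-check the claim (a sketch): $A$ and $B$ are first hitting times, hence stopping times, and $T=\min(A,B)$ is again a stopping time. For $T+1$, note that $\{T+1=n\}=\{T=n-1\}\in\mathcal{F}_{n-1}\subseteq\mathcal{F}_{n}$, which is exactly what the definition of a stopping time requires.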