Probability – Testing a friend's claim about a drop probability

This concerns an item drop in a game.

A friend of mine does not believe that, if a boss has a probability of 1/100 of dropping a package and the item is then obtained from the package with probability 1/25.6, this is easier than obtaining the item directly from the boss with a 1/2560 chance.
How can I prove that this is incorrect/correct?
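Since the package drop and the item roll are independent, the chained probability is exactly $\frac{1}{100} \times \frac{1}{25.6} = \frac{1}{2560}$, the same as the direct route. A quick Monte Carlo sanity check (a minimal sketch; the trial count and seed are arbitrary):

```python
import random

random.seed(0)
TRIALS = 1_000_000

# Route 1: boss drops a package (1/100), which then yields the item (1/25.6).
package_route = sum(
    1 for _ in range(TRIALS)
    if random.random() < 1 / 100 and random.random() < 1 / 25.6
)

# Route 2: boss drops the item directly (1/2560).
direct_route = sum(1 for _ in range(TRIALS) if random.random() < 1 / 2560)

print(package_route / TRIALS, direct_route / TRIALS)  # both hover near 1/2560 ≈ 0.00039
```

With a million trials both estimates land within sampling noise of each other, so neither route is easier.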

Markov chains: probability of passing through all other states and finally landing in one particular state

This is a cross-post from math.stackexchange.com; there has been no response there.

Given a finite-state Markov chain with constant transition probabilities, what is the method for calculating the probability that a path lands in a particular state for the first time only after having passed through all the other states? Is there any reference? The following puzzle is an example.

$12$ people sit at a round table to play a variant of the telephone game. They are numbered from $1$ to $12$ in clockwise order, that is, people with adjacent numbers (including $1$ and $12$) sit next to each other. Person #1 chooses a secret word and begins the game by randomly selecting one of the two neighbors and whispering the word to that person. Upon hearing the word, each person continues the game by randomly selecting one of the neighbors and whispering the word. The game ends when everyone knows the secret word.

What is the probability that the last person to hear the word is person number $6$?
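Under the natural reading that the word performs a simple random walk on the 12-cycle, the classical result that the last new vertex visited by such a walk is uniform over the remaining $11$ vertices suggests the answer $1/11$, independent of the target's number. A simulation sketch (trial count and seed are illustrative):

```python
import random

random.seed(1)
N = 12
TRIALS = 100_000

def last_to_hear():
    """Random walk of the whisper around the table; returns the last new listener."""
    pos = 0                       # person 1, as a 0-based position
    heard = {0}
    while len(heard) < N:
        pos = (pos + random.choice((-1, 1))) % N
        heard.add(pos)
    return pos + 1                # back to 1-based numbering

rate = sum(last_to_hear() == 6 for _ in range(TRIALS)) / TRIALS
print(rate)   # close to 1/11 ≈ 0.0909
```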

probability – decomposition of expected values

I have four random variables, $S_1$, $P_1$, $S_2$, $P_2$, with the following properties:

  • $S_1$ and $P_2$ are independent
  • $S_2$ and $P_1$ are independent

All other pairs are correlated.

I want to evaluate the expected value
$$
E[P_1 P_2 S_1 S_2].
$$

Is there any way to break it down into correlators, e.g.
$$
E[P_1 P_2], \quad E[S_1 S_2], \ \dots
$$

Can some kind of "conditional" expectation values be used?
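In general no such factorization exists, but in one important special case it does: if the four variables are jointly Gaussian with zero mean, the Isserlis (Wick) theorem decomposes the fourth moment into pairwise correlators, and the assumed independencies kill one of the three terms. A sketch with a made-up covariance matrix satisfying the two independence conditions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed zero-mean jointly Gaussian example; variable order (S1, P1, S2, P2).
# The zero entries encode the stated independencies S1 ⊥ P2 and S2 ⊥ P1.
cov = np.array([
    [1.0, 0.5, 0.3, 0.0],
    [0.5, 1.0, 0.0, 0.2],
    [0.3, 0.0, 1.0, 0.4],
    [0.0, 0.2, 0.4, 1.0],
])

s1, p1, s2, p2 = rng.multivariate_normal(np.zeros(4), cov, size=1_000_000).T
mc = np.mean(s1 * p1 * s2 * p2)

# Isserlis/Wick: E[S1 P1 S2 P2] = E[S1 P1] E[S2 P2] + E[S1 S2] E[P1 P2]
#                                + E[S1 P2] E[P1 S2]; the last term vanishes here.
wick = cov[0, 1] * cov[2, 3] + cov[0, 2] * cov[1, 3] + cov[0, 3] * cov[1, 2]

print(mc, wick)   # both close to 0.26
```

Outside the Gaussian case no pairwise decomposition holds in general, so higher-order cumulants (or the joint law itself) are needed.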

Probability of $k$ events at an arbitrary time step, with no events at earlier time steps

I want to calculate the following probability. Suppose we have a Poisson distribution $P(k,t) = \frac{(a(1-s)^t)^k}{k!} \exp(-a(1-s)^t)$ with mean $a(1-s)^t$, where $0 \le a \le 0.5$ and $0 \le s \le 1$ are fixed parameters, so the mean decreases with the discrete time $t$. Now I want to calculate the probability of having $k>0$ events at an arbitrary time step, while no event ($k=0$) occurs at the previous time steps.
Initially I thought that the solution would be $P(k) = \sum_{k=1}^{\infty} P(k,t) \prod_{t'=1}^{\infty} P(0,t')$, but this probability is wrong and is not normalized. Can you help me? Thanks in advance 🙂
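One observation worth checking numerically: the product over no-event factors should arguably run only over the time steps before $t$, and even then the distribution over the first event time is genuinely sub-normalized, because with a decaying mean there is positive probability that no event ever occurs. A sketch with illustrative values of $a$ and $s$:

```python
import math

# Illustrative fixed parameters (the question has 0 <= a <= 0.5, 0 <= s <= 1).
a, s = 0.5, 0.1

def p_zero(t):
    """P(0, t) = exp(-lambda_t) with mean lambda_t = a * (1 - s)**t."""
    return math.exp(-a * (1 - s) ** t)

def first_event_at(t):
    """No events at steps 1 .. t-1, then at least one event at step t."""
    survive = math.prod(p_zero(u) for u in range(1, t))
    return survive * (1.0 - p_zero(t))

T = 500
total = sum(first_event_at(t) for t in range(1, T))

# The defect is the probability that no event ever occurs: exp(-sum of all means).
never = math.exp(-sum(a * (1 - s) ** t for t in range(1, T)))
print(total, total + never)   # total + never == 1 (up to float error)
```

The telescoping sum shows the defect exactly: the first-event probabilities plus the "never" probability add to one, so normalizing over $t$ alone is impossible.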

pr.probability – continuous analog of discrete probability trick

When discussing functions that take random input from a discrete distribution, a standard argument is the following: let $f: D \rightarrow O$ be some function, and assume that $O$ is finite. For each $o \in O$ we can define $P_o := \Pr_x[f(x) = o]$. Then,
$$ \sum_{o \in O} P_o = 1 $$
and
$$ \sum_{o \in O} P_o^2 \leq \max_{o \in O} P_o \sum_{o \in O} P_o = \max_{o \in O} P_o. $$
Can this argument be repeated in the continuous domain? If $f: \mathbb{R} \rightarrow \mathbb{R}$ receives input from a continuous distribution, is there an analogous argument saying that the sum of the squared probabilities of these events (that $f$ takes a particular value) is bounded above by the probability of the most likely event?
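The natural continuous analog replaces the sum by an integral of the squared density: $\int p(x)^2\,dx \le \big(\sup_x p(x)\big) \int p(x)\,dx = \sup_x p(x)$, by the same one-line argument, provided the density is bounded (individual values have probability zero in the continuous case, so one works with densities or collision probabilities). The discrete inequality itself is easy to check numerically (a throwaway sketch):

```python
import random

random.seed(2)

# A random discrete distribution over 10 outcomes.
weights = [random.random() for _ in range(10)]
probs = [w / sum(weights) for w in weights]

# Collision probability: Pr[f(x) = f(x')] for two independent samples.
collision = sum(p * p for p in probs)

print(collision, max(probs))   # collision <= max(probs) always holds
```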

Thank you very much for the help

probability – Evaluating the mean of the uniform distribution using the mgf

For the uniform $[0,1]$ random variable, the mgf is $M(s) = \frac{e^s - 1}{s}$ and its derivative is $M'(s) = \frac{se^s - e^s + 1}{s^2}$. To calculate the mean we have to take the limit $\lim_{s \to 0} M'(s)$. Why is it that for this distribution we have to take the limit and cannot evaluate at $0$, while for other distributions we can evaluate at $0$ directly?
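The formula $\frac{e^s-1}{s}$ is undefined at $s=0$ only because of the division by $s$; the singularity is removable, and the series expansion $1 + s/2 + s^2/6 + \cdots$ shows the mgf is perfectly finite there. If SymPy is available, the limit can be checked symbolically (a small sketch):

```python
import sympy as sp

s = sp.symbols('s')
M = (sp.exp(s) - 1) / s                 # mgf of Uniform[0, 1]

print(sp.series(M, s, 0, 3))            # 1 + s/2 + s**2/6 + O(s**3)

mean = sp.limit(sp.diff(M, s), s, 0)    # the removable singularity has limit 1/2
print(mean)                             # 1/2
```

The series also explains the contrast with other distributions: when the mgf's closed form has no division by $s$, the coefficient of $s$ can be read off (or $M'(0)$ evaluated) directly.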

A probability that involves a tournament

A tournament is made up of 8 players. It works in the following way:

  • they all start with 100 points;
  • each match is between two players;
  • there cannot be ties;
  • after a match, the winner keeps the same number of points and the loser loses 6;
  • in this tournament, each player has already played exactly 5 matches.

We know that one player has won all 5 of his matches. What is the probability that, if a sixth match occurs, this winner plays against a player who has a lower score than everyone else?

[Credits to my friend Gabriel]
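The problem does not say how the schedule of matches or the sixth opponent is chosen, so any numeric answer needs extra modeling assumptions. Purely as an illustration, here is a simulation assuming the 20 matches form a random 5-regular pairing (configuration model, repeat pairings allowed), every match not involving the undefeated player is a fair coin flip, and the sixth opponent is drawn uniformly from the other seven players:

```python
import random

random.seed(3)

N, GAMES_EACH, LOSS = 8, 5, 6

def random_schedule():
    """Configuration model: give each player 5 match 'stubs' and pair them up."""
    while True:
        stubs = [p for p in range(N) for _ in range(GAMES_EACH)]
        random.shuffle(stubs)
        matches = list(zip(stubs[::2], stubs[1::2]))
        if all(a != b for a, b in matches):   # reject self-pairings
            return matches

def trial():
    losses = [0] * N
    for a, b in random_schedule():
        if 0 in (a, b):                       # condition: player 0 wins all his matches
            loser = b if a == 0 else a
        else:
            loser = random.choice((a, b))
        losses[loser] += 1
    scores = [100 - LOSS * k for k in losses]
    opponent = random.randrange(1, N)         # sixth opponent, chosen uniformly
    others = [scores[p] for p in range(1, N) if p != opponent]
    return scores[opponent] < min(others)     # strictly the lowest score

TRIALS = 40_000
rate = sum(trial() for _ in range(TRIALS)) / TRIALS
print(rate)
```

Different assumptions (e.g. no repeated pairings, or a deliberately chosen opponent) would give different numbers, which is worth pinning down before computing anything.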

Probability question from Rohatgi – Mathematics Stack Exchange

I have a problem with this question.

Let A, B, and C be three boxes with three, four, and five cells, respectively. There are three yellow balls numbered 1 to 3, four green balls numbered 1 to 4, and five red balls numbered 1 to 5. The yellow balls are placed randomly in box A, the green balls in B, and the red balls in C, with no cell receiving more than one ball. Find the probability that exactly one of the boxes shows no matches.
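Reading "a match" as ball $i$ landing in cell $i$, the three boxes are independent, and the probability that a box of size $n$ shows no match is $D_n/n!$ with $D_n$ the number of derangements. A short exact computation under that interpretation:

```python
from fractions import Fraction
from math import factorial

def derangements(n):
    """D_n: permutations of n items with no fixed point."""
    d = [1, 0]
    for k in range(2, n + 1):
        d.append((k - 1) * (d[k - 1] + d[k - 2]))
    return d[n]

def p_no_match(n):
    """Probability that a randomly filled box of size n has no match."""
    return Fraction(derangements(n), factorial(n))

sizes = [3, 4, 5]
answer = Fraction(0)
for i in range(len(sizes)):          # box i is the one with no matches
    term = Fraction(1)
    for j, n in enumerate(sizes):
        term *= p_no_match(n) if j == i else 1 - p_no_match(n)
    answer += term

print(answer)   # 319/720
```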

matrices – What are the properties of $\mathbf{M}$ so that $\mathbf{pM} = \mathbf{e}$ is achieved with $\mathbf{p}$ a probability vector?

Assume that $\mathbf{M}$ is an $N \times N$ non-singular matrix with non-negative elements, where each row adds up to $N$. Assume that $\mathbf{p}$ is a probability vector: each element is positive, less than 1, and the elements of $\mathbf{p}$ sum to one. Then, what are the properties of $\mathbf{M}$ for

$$ \mathbf{pM} = \mathbf{e} $$

to hold, where $\mathbf{e}$ is the row vector of all ones? Keep in mind that the equation above always has a solution, since $\mathbf{M}$ is an $N \times N$ non-singular matrix, but it is not always solved by a probability vector $\mathbf{p}$; I mean, if $\mathbf{M}$ does not satisfy certain properties, then the above is achieved with $\mathbf{p}$ not being a probability vector.

I conjecture that $\mathbf{M}$ must be such that there exists a matrix $\hat{\mathbf{M}}$, obtained by permuting the rows of $\mathbf{M}$, that has its largest elements on its main diagonal.
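One easy observation: because every row of $\mathbf{M}$ sums to $N$, any solution of $\mathbf{pM}=\mathbf{e}$ automatically has elements summing to one, so only positivity can fail. A NumPy sketch with a randomly generated, row-rescaled matrix (the example matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

N = 4
# Hypothetical example: a random non-negative matrix, rescaled so each row sums to N.
M = rng.random((N, N))
M = N * M / M.sum(axis=1, keepdims=True)

# Solve p M = e for the row vector p, i.e. M^T p^T = e^T.
p = np.linalg.solve(M.T, np.ones(N))

# Each row of M sums to N, so M 1 = N 1; then p M 1 = e 1 gives (sum p) * N = N,
# i.e. the elements of p always sum to one. Only positivity can fail.
print(p, p.sum())
print(bool(np.all((p > 0) & (p < 1))))
```

Testing the conjecture then reduces to checking, over many such matrices, whether positivity of $\mathbf{p}$ coincides with the row-permutation property.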

probability distributions – find the entropy of the Markov chain

There is an information source over the alphabet $A = \{a, b, c\}$, represented by the following state transition diagram:

[state transition diagram]

The $i$-th output of this information source is represented by a random variable $X_i$. It is known that the source is now in state $s_1$. In this state, let $H(X_i \mid s_1)$ denote the entropy of the next symbol $X_i$. Find the value of $H(X_i \mid s_1)$; then, for the entropy of this information source, calculate $H(X_i \mid X_{i-1})$ and $H(X_i)$, respectively. Assume $i$ is quite large.

How can I find $H(X_i \mid s_1)$? I know that
$$ H(X_i \mid s_1) = -\sum_{i} p(x_i, s_1) \log_b\!\left(p(x_i \mid s_1)\right) = -\sum_{i} p(x_i, s_1) \log_b\!\left(\frac{p(x_i, s_1)}{p(s_1)}\right), $$
but I do not know $p(s_1)$.

$$ A = \begin{pmatrix} 0.25 & 0.75 & 0 \\ 0.5 & 0 & 0.5 \\ 0 & 0.7 & 0.3 \end{pmatrix}. $$

From the matrix I can tell that $p(s_1 \mid s_1) = 0.25$, etc.

But what is the probability of $s_1$? And how can I calculate $H(X_i \mid X_{i-1})$? Is this the stationary distribution, too?
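Assuming the chain is irreducible and aperiodic, for large $i$ the state distribution is the stationary $\pi$ solving $\pi A = \pi$, and then $H(X_i \mid X_{i-1}) = \sum_j \pi_j H(\text{row } j)$; if each state emits one distinct symbol, $H(X_i) = H(\pi)$. A NumPy sketch under those assumptions:

```python
import numpy as np

A = np.array([
    [0.25, 0.75, 0.0],
    [0.5,  0.0,  0.5],
    [0.0,  0.7,  0.3],
])

def entropy(p):
    """Shannon entropy in bits, skipping zero entries."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Stationary distribution: solve pi A = pi together with sum(pi) = 1.
n = A.shape[0]
lhs = np.vstack([A.T - np.eye(n), np.ones(n)])
rhs = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)

h_given_s1 = entropy(A[0])                               # H(X_i | s_1)
h_rate = sum(pi[i] * entropy(A[i]) for i in range(n))    # H(X_i | X_{i-1})
h_marginal = entropy(pi)                                 # H(X_i), for large i
print(pi)            # ≈ [0.28, 0.42, 0.30]
print(h_given_s1)    # ≈ 0.811 bits
print(h_rate, h_marginal)
```

Note that $H(X_i \mid s_1)$ needs only the first row of $A$, not $p(s_1)$; the stationary $\pi$ is needed only for the averaged quantities $H(X_i \mid X_{i-1})$ and $H(X_i)$.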