mathematics – Calculating probabilities for a custom drop table system

I have a “drop table” / “loot table” / “item table” / “whatever you want to call it” system and I need to solve two Problems. Apologies in advance for the text wall :/

The Problems:

  1. Calculate the probability of pulling a specific item from a table
  2. Calculate the probability of pulling ANY item from a table with a specific tag (such as “Epic”, “Rare”, “Weapon”, “Sword”, “Currency”, etc)

A few details about the system:

  1. Tables are set up with a certain number of ‘Slots’ and ‘Rolls’
  2. Slots determines the total number of unique items that can be dropped from the table
  3. Rolls determines how many times we ‘pull’ an item out of the table
  4. Each row within a table will drop one of a pre-calculated set of items
  5. Each row can be marked as ‘Guaranteed’, ‘Random’ or both (more on that in a bit)
  6. Rows have a ‘MaxRollsToConsume’ field (0 == infinite rolls), and a Weight field (used for weighted randomness, 0 Weight = never pulled)

And a bit of pseudo code for how we actually pull items out of the table

create a guaranteed set of rows
create a random pool of rows
create a result with a set number of slots available to fill 

(note: adding an item to the result will only fill a slot if that item does not already exist in the result)

foreach row in table
   if row is guaranteed, then add to guaranteed set
   if row is random, then add to random pool

foreach row in guaranteed set
   pull item out of row and add it to result
   consume a roll

while there are rolls remaining and empty slots in the result
   select a row from the random pool using weighted randomness and respecting the MaxRollsToConsume field
   pull item out of row and add it to result
   consume a roll
   if this row has reached its max rolls then remove it from the random pool

while there are rolls remaining
   select an item from the current result using weighted randomness and respecting the MaxRollsToConsume field
   add the item to the result
   consume a roll

return the result
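
In case it helps, here is a minimal Python sketch of how I read the pseudocode above (the Row fields and function names are just illustrative, not my actual implementation):

import random
from dataclasses import dataclass

@dataclass
class Row:
    item: str
    weight: float = 1.0      # 0 = never pulled at random
    max_rolls: int = 0       # MaxRollsToConsume; 0 = infinite
    guaranteed: bool = False
    is_random: bool = True

def pull_from_table(rows, slots, rolls):
    result = []                          # items pulled so far (duplicates allowed)
    consumed = {id(r): 0 for r in rows}  # rolls consumed per row

    def take(row):
        result.append(row.item)
        consumed[id(row)] += 1

    def exhausted(row):
        return row.max_rolls != 0 and consumed[id(row)] >= row.max_rolls

    # guaranteed rows always fire, each consuming a roll
    for row in (r for r in rows if r.guaranteed):
        take(row)
        rolls -= 1

    # weighted pulls from the random pool while empty slots remain
    # (a slot only fills when a *new* item enters the result)
    pool = [r for r in rows if r.is_random and r.weight > 0]
    while rolls > 0 and len(set(result)) < slots and pool:
        row = random.choices(pool, weights=[r.weight for r in pool])[0]
        take(row)
        rolls -= 1
        if exhausted(row):
            pool.remove(row)

    # leftover rolls duplicate items already present in the result
    while rolls > 0:
        dup = [r for r in rows if r.item in result and r.weight > 0 and not exhausted(r)]
        if not dup:
            break
        take(random.choices(dup, weights=[r.weight for r in dup])[0])
        rolls -= 1

    return result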

I understand how to calculate the probabilities for the ‘Guaranteed’ rows because, well, they are events that are guaranteed to happen so it’s pretty easy to calculate.

Where I’m falling short is on how to calculate the ‘Random’ rows (especially with respect to the MaxRollsToConsume field) when only pulling a certain number of times. For example, I have 5 random rows but only 2 Rolls to consume. How do I calculate the probability of an item being pulled by one of those 2 Rolls while also taking into account how many Rolls are allowed per row?
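
The furthest I’ve gotten is the special case where every random row has MaxRollsToConsume = 1, so the 2 Rolls behave like weighted sampling without replacement. With weights $w_i$ and $W = \sum_k w_k$, the chance that row $i$’s item shows up in one of the 2 Rolls would be

$$
\Pr(i) = \frac{w_i}{W} + \sum_{j \neq i} \frac{w_j}{W} \cdot \frac{w_i}{W - w_j}
$$

(either the first Roll hits $i$ directly, or some other row $j$ is pulled first and $i$ is pulled from the reduced pool). But I don’t see how to extend this once a row can absorb several Rolls before leaving the pool.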

I’m fairly confident that if I can solve Problem 1 with some help then I’ll be able to solve Problem 2 on my own. Any help is appreciated and thanks in advance! If anything is unclear or more info is needed please let me know! 🙂

statistics – Is it possible to model these probabilities in AnyDice?

I was asked by a pal to help him model a dice mechanic in AnyDice. I must admit I am an absolute neophyte with it, so I offered to solve it using software I’m better versed in; still, I’d like to be able to help him do this in AnyDice.

The mechanic is as follows:

  • The player and their opponent are each assigned a pool of dice. This is done via other mechanics of the game, the details of which are not germane. Suffice to say, the player will have some set of dice (say 2D6, 1D8, and 1D12) that will face off against the opponent’s pool (which will generally be different from the player’s, say 3D6, 2D8 and 1D12).
  • The player and their opponent roll their pools.
  • The opponent notes their highest value die. This is the target.
  • The player counts the number of their dice that have a higher value than the target, if any.
  • The count of the dice exceeding the target, if any, is the number of success points.
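
Since I can’t express this in AnyDice yet, I wrote a quick Monte Carlo sketch in Python (the software I’m better versed in) to have reference numbers that an AnyDice program should reproduce; the pools below are the example ones from the first bullet:

import random
from collections import Counter

def successes(player_pool, opponent_pool, trials=100_000):
    # pools are lists of die sizes, e.g. [6, 6, 8, 12] for 2D6 + 1D8 + 1D12
    dist = Counter()
    for _ in range(trials):
        target = max(random.randint(1, s) for s in opponent_pool)    # opponent's highest die
        n = sum(random.randint(1, s) > target for s in player_pool)  # dice strictly above it
        dist[n] += 1
    return {k: v / trials for k, v in sorted(dist.items())}

print(successes([6, 6, 8, 12], [6, 6, 6, 8, 8, 12]))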

I searched the AnyDice tag here for questions that might be similar, the closest I found was “Modelling an opposed dice pool mechanic in AnyDice”, specifically the answer by Ilmari Karonen.

That question and answer, however, deals with only a single die type.

Can a question like “What are the probabilities of N successes when rolling 4D6 and 6D20 as the player against 6D6 and 4D20 for the opponent?” be handled in AnyDice and produce a chart similar to the one below?

[example chart: probability of each possible number of successes]

Which notation to use to express summing probabilities with increasing conditionals?

How to write this:
$$
\Pr(x_1) + \Pr(x_2|x_1) + \Pr(x_3|x_1,x_2) + \ldots + \Pr(x_n | x_1,x_2,\ldots,x_{n-1})
$$

but by using the $\sum$ notation?

I’m trying to write an entropy equation whose conditionals grow in the $\ldots$ places:
$$
- \sum_{i\in\{1,2,\ldots,n\}} \Pr(x_i | \ldots) \ln(\Pr(x_i | \ldots))
$$

but I don’t know how to express it.
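
The closest I’ve come up with is spelling the growing list out inside the sum, with the convention that the $i=1$ term is unconditional:
$$
- \sum_{i=1}^{n} \Pr(x_i | x_1, \ldots, x_{i-1}) \ln(\Pr(x_i | x_1, \ldots, x_{i-1}))
$$
but I don’t know whether letting $x_1, \ldots, x_{i-1}$ denote an empty list when $i = 1$ is acceptable, or whether there is a more standard notation for this.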

tools – How to calculate the probabilities for eliminative dice pools (dice cancelling mechanic) in Neon City Overdrive?

The game Neon City Overdrive uses the following resolution mechanic for checks:

  1. create a pool of Action Dice and (possibly) another pool of differently-colored Danger Dice (all d6, generally up to 5 or 6 dice in each pool)
  2. roll all the dice
  3. each Danger Die cancels out an Action Die with the same value – both are discarded
  4. the highest remaining Action Die (if there is any) is the result (the precise meaning of which is irrelevant for the purposes of this question)
    • any extra Action Dice showing 6 (i.e. in addition to the single highest die read as the result) provide a critical success (called a boon)
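
Since I can’t yet write this in AnyDice, here is a Monte Carlo sketch in Python of my reading of steps 1–4 (names and pool sizes are just for illustration); a correct AnyDice program should match its output:

import random
from collections import Counter

def nco(action, danger, trials=100_000):
    # empirical distribution of (result, boons) for `action` Action Dice vs `danger` Danger Dice
    dist = Counter()
    for _ in range(trials):
        a = Counter(random.randint(1, 6) for _ in range(action))
        d = Counter(random.randint(1, 6) for _ in range(danger))
        left = a - d                    # one-to-one cancellation by face value
        if not left:
            dist["no result"] += 1
            continue
        result = max(left)
        boons = left[6] - (1 if result == 6 else 0)  # extra surviving 6s beyond the result die
        dist[(result, boons)] += 1
    return {k: v / trials for k, v in dist.items()}

print(nco(4, 3))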

I’m struggling to find the proper way to model the probabilities of this mechanic in AnyDice.

I realize that a good starting point would be this answer to a very similar question regarding the mechanic in Technoir (which clearly was a source of inspiration for Neon City Overdrive). Unfortunately, despite my best efforts I can’t say I fully comprehend how the code provided there works, and there’s an important difference between the two games: in Technoir a single “negative die” eliminates all matching “positive dice”, whereas in NCO this happens on a one-to-one basis.

I would be very grateful for any help.

neural networks – Using the feature embedding of the output from a transformer to represent probabilities of categorical data

I was considering using a transformer on input data that can be represented as an embedding, so that I can use the attention mechanism of the transformer architecture; my data has variable input and output lengths, and the input is sequential. My output data is supposed to be either numerical values or probabilities for each output variable. The output was originally supposed to be 13 numerical values, but I decided to use a probability score as a way of normalizing the output. My question: can I use two output vectors with 7 features each instead of 13 numeric outputs? Each feature would map to one of the original outputs, and the last feature would always be 0, since PyTorch expects the output to have the same number of features as the input, and my input variables are embedded as 7 features. Should this approach work? I am unsure how the loss function would work here, or whether there is a loss function that would allow for this.
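
For concreteness, here is a minimal PyTorch sketch of the shape I have in mind (d_model = 7, the two decoder query positions, and the target values are all illustrative assumptions, not my real data):

import torch
import torch.nn as nn
import torch.nn.functional as F

d_model = 7  # my input variables are embedded as 7 features
model = nn.Transformer(d_model=d_model, nhead=7, batch_first=True)

src = torch.randn(1, 10, d_model)  # one variable-length input sequence (length 10 here)
tgt = torch.zeros(1, 2, d_model)   # 2 query positions -> two 7-feature output vectors

out = model(src, tgt)                   # shape (1, 2, 7)
log_probs = F.log_softmax(out, dim=-1)  # treat each output vector as a distribution

# hypothetical targets: two probability vectors whose last feature is always 0
# (note: softmax can only approach 0 on that feature, never reach it exactly)
target = torch.tensor([[[0.2, 0.2, 0.2, 0.2, 0.1, 0.1, 0.0],
                        [0.3, 0.3, 0.1, 0.1, 0.1, 0.1, 0.0]]])

loss = F.kl_div(log_probs, target, reduction="batchmean")  # KL divergence as the loss
loss.backward()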

Asymptotically vanishing probabilities

Let $P_n$ be a sequence of probability measures on $(\mathbb{R}^{n},\mathcal{R}^{n})$, where $\mathcal{R}^{n}$ is the Borel $\sigma$-field associated to $\mathbb{R}^n$. For any sequence of measurable sets $E_n'\in \mathcal{R}^n$ and $\epsilon>0$, define the $\epsilon$-enlargements via
$$
\phi_\epsilon(E_n')=\bigcup_{x \in E_n'}\{\times_{i=1}^{n} (x_i-\epsilon,x_i+\epsilon)\}
$$

where $x=(x_1,\ldots,x_n)\in \mathbb{R}^n$.

Question. Let $E_n \in \mathcal{R}^n$, and assume that each $E_n$ is not a singleton. Let $P_n$ have no atoms and assume that there exists $\overline{\epsilon}>0$ such that, for all $\epsilon\in(0,\overline{\epsilon})$ and $E_n'$ such that $\phi_\epsilon(E_n')=E_n$,
$$
\lim_{n\to \infty}P_n(E_n')=0.
$$

Since $\epsilon$ can be taken arbitrarily small, can we also conclude that $\lim_{n\to \infty}P_n(E_n)=0$?

Clearly, $E_n' \subset E_n$, thus $0=\lim_{n\to \infty}P_n(E_n')\leq\liminf_{n\to \infty}P_n(E_n)$. I was considering a quite simple argument by contradiction: if for every $\delta>0$ and $n_0$ there exists some $n\geq n_0$ such that $P_n(E_n)>\delta$, then there should also exist a sufficiently small $\epsilon>0$ such that $P_n(E_n')>\delta$, where $E_n'$ satisfies $\phi_\epsilon(E_n')=E_n$. But this should lead to a contradiction. Am I missing some nasty behaviour of the sets $E_n$ and/or the probability measures $P_n$ which makes this simple reasoning invalid?

Channel coding and error probability. Where do these probabilities come from?

Where do the following probabilities come from?

We consider the BSCε with ε = 0.1 and the block code C = {c1, c2} with the code words c1 = 010 and c2 = 101. On the received word y we use the decoder D = {D1, D2}, which decodes the word to the code word with the lowest Hamming distance to y. Determine D1 and D2 and the global error probability
ERROR(D) if the code words have the same probability.
Hint: For an output y there exists only one x that leads to a failed decoding.
(y = 100 only gets decoded wrongly if the sent message was x = c1 = 010.) So the term (1 − p(D(y)|y)) is equal to Pr(X = x|Y = y) for a suitable x.

Now:

Hamming distances:
$$\begin{array}{c|cc}
y & 010 & 101 \\
\hline\hline
000 & 1 & 2 \\
001 & 2 & 1 \\
010 & 0 & 3 \\
011 & 1 & 2 \\
100 & 2 & 1 \\
101 & 3 & 0 \\
110 & 1 & 2 \\
111 & 2 & 1
\end{array}$$

$$D_1 = \{000, 010, 011, 110\} \quad \text{(decides for } 010\text{)}$$
$$D_2 = \{001, 100, 101, 111\} \quad \text{(decides for } 101\text{)}$$

$$\begin{aligned}
\mathrm{ERROR}(D) &= \sum_{y \in \Sigma_A^3} p(y)\,\bigl(1 - p(D(y) \mid y)\bigr) \\
&= \overbrace{2 \cdot p(y)\bigl(1 - p(D(y) \mid y)\bigr)}^{y \in \{010,\,101\}} + \overbrace{6 \cdot p(y)\bigl(1 - p(D(y) \mid y)\bigr)}^{\text{remaining } y} \\
&= 2 \cdot \left(\frac{729}{2000} + \frac{1}{2000}\right)\left(\frac{7}{250}\right) + 6 \cdot \left(\frac{81}{2000} + \frac{9}{2000}\right)\left(\frac{757}{1000}\right)
\end{aligned}$$
How do I get to the probabilities $\frac{7}{250}$ and $\frac{757}{1000}$?

I don’t get this calculation; it is supposed to be correct, but I don’t see how to arrive at these probabilities.

Could someone explain this to me?
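
In case a numeric cross-check helps: here is a short Python script for the same setup (ε = 0.1, uniform priors, minimum-distance decoding). Interestingly, the global error probability it computes comes out to exactly 0.028 = 7/250, but I still can’t place the 757/1000.

from itertools import product

eps = 0.1
codewords = ["010", "101"]

def p_y_given_x(y, x):
    # BSC: each bit flips independently with probability eps
    d = sum(a != b for a, b in zip(y, x))
    return eps**d * (1 - eps)**(len(y) - d)

error = 0.0
for bits in product("01", repeat=3):
    y = "".join(bits)
    # joint probabilities p(x, y) with uniform prior 1/2 per codeword
    joint = [0.5 * p_y_given_x(y, c) for c in codewords]
    p_y = sum(joint)
    # minimum-distance decoding picks the codeword with the larger posterior
    # (equivalent to MAP here, since eps < 1/2 and the priors are equal)
    error += p_y * (1 - max(joint) / p_y)

print(error)  # 0.028 = 7/250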

How to compute probabilities on the natural numbers?

Given a simple function f:

f[n_, b_] := Mod[n, b]

How to calculate the probability of f[n, b] == 0 for arbitrary $n \in \mathbb{N}^+$? That is, the probability of f[n, b] == 0 is $1/b$ for $n \in \mathbb{N}^+$.

In fact, this function is trivial, but I'm looking for general ways to solve these kinds of problems. I'm not sure whether Probability is suitable for this, as it requires a distribution.
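
To clarify what I mean by "probability" here (my own reading, in case it matters): I have the natural density in mind,
$$
\Pr(f(n,b)=0) = \lim_{m\to\infty} \frac{\#\{n \le m : \operatorname{Mod}(n,b) = 0\}}{m} = \lim_{m\to\infty} \frac{\lfloor m/b \rfloor}{m} = \frac{1}{b},
$$
and I would like a general way to obtain such results symbolically.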

pr.probability – Which orthant probabilities are the greatest? (For a multivariate normal distribution)

I have a $k$-dimensional multivariate normal distribution $X \sim N(0, \Sigma)$ with covariance matrix $\Sigma$. $\Sigma$ has two distinct eigenvalues, say $\lambda_1 > \lambda_2$, with orthogonal eigenspaces $V_1$ and $V_2$. I am interested in orthant probabilities: given an orthant defined by $\epsilon = (\epsilon_1, \ldots, \epsilon_k)$, with each $\epsilon_i \in \{1, -1\}$, there is an orthant probability $p_\epsilon = \Pr(\forall i: \; \epsilon_i X_i \geq 0)$.

There seems to be a literature on how to find closed forms for these in special cases, but I don't necessarily need a closed form; I just want to know when $p_\epsilon \geq p_{\epsilon'}$. We can write $\epsilon$ as a sum $\epsilon = v_1 + v_2$ with $v_1 \in V_1$ and $v_2 \in V_2$, and $k = \|\epsilon\|^2 = \|v_1\|^2 + \|v_2\|^2$. My guess is that the larger $\|v_1\|$ is, the larger $p_\epsilon$ is (recall that $\lambda_1 > \lambda_2$). Geometrically, this means that the diagonal vector of the orthant defined by $\epsilon$ is closer to the long axes of the ellipsoids that are the equidensity contours of the distribution.

Is this true? And if so, how can it be proved? I'm stumped (although I know little about probability).

(In the particular case that interests me, I have a complete graph $K_\ell$, and I'm generating a random map from the edges of $K_\ell$ to $\mathbb{R}$. The covariance matrix has, as its two eigenspaces, the cut space and the cycle space of $K_\ell$.)
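
For what it's worth, here is a small NumPy Monte Carlo sketch I used to sanity-check the guess in a toy case ($k = 3$, $V_1$ spanned by the all-ones vector, $\lambda_1 = 4$, $\lambda_2 = 1$; all of these choices are arbitrary). The estimates do come out ordered by $\|v_1\|^2$ as conjectured, but that is obviously not a proof.

import numpy as np
from itertools import product

rng = np.random.default_rng(0)
k, lam1, lam2, N = 3, 4.0, 1.0, 200_000

u = np.ones(k) / np.sqrt(k)  # V1 = span(u); V2 = its orthogonal complement
Sigma = lam2 * np.eye(k) + (lam1 - lam2) * np.outer(u, u)
X = rng.multivariate_normal(np.zeros(k), Sigma, size=N)

for eps in product([1, -1], repeat=k):
    e = np.array(eps)
    p_hat = np.mean(np.all(e * X >= 0, axis=1))  # estimated orthant probability
    v1_norm2 = (e @ u) ** 2                      # squared norm of the projection onto V1
    print(eps, "||v1||^2 =", round(v1_norm2, 3), "p =", round(p_hat, 4))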

pr.probability – Do finitely additive disintegrable probabilities have disintegrable extensions?

Let $\Omega$ be a set, let $\mathscr F$ be a $\sigma$-field of subsets of $\Omega$, and let $\mathscr G$ be a sub-$\sigma$-field of $\mathscr F$. The $\mathscr G$-atom of $\omega \in \Omega$ is defined to be
$$\mathscr G(\omega) := \bigcap_{\omega \in G \in \mathscr G} G.$$
Note that $\pi := \{\mathscr G(\omega) : \omega \in \Omega\}$ is a (not necessarily measurable) partition of $\Omega$.

A $\pi$-strategy $\sigma(\cdot \mid \cdot)$ is a function from $\mathscr F \times \Omega$ into $[0,1]$ with the following properties:

(i) $\sigma$ is constant on $\mathscr G$-atoms, that is, for all $\omega, \omega' \in \Omega$, if $\mathscr G(\omega) = \mathscr G(\omega')$, then $\sigma(\cdot \mid \omega) = \sigma(\cdot \mid \omega')$;

(ii) for every $\omega \in \Omega$, $\sigma(\cdot \mid \omega)$ is a finitely additive probability measure on $(\Omega, \mathscr F)$;

(iii) for every $\omega \in \Omega$ and $G \in \mathscr G$, $\sigma(G \mid \omega) = \delta_\omega(G)$, where $\delta_\omega$ is the point mass at $\omega$.

If $P$ is a finitely additive probability on $(\Omega, \mathscr F)$, we say that $P$ is disintegrable in $\pi$ if there is a $\pi$-strategy $\sigma$ such that
$$P(A) = \int \sigma(A \mid \omega) \, \mu(d\omega)$$
for some finitely additive probability measure $\mu$ on $(\Omega, 2^\Omega)$ and all $A \in \mathscr F$. (Equivalently, $P$ is in the closed convex hull of $\{\sigma(\cdot \mid \omega) : \omega \in \Omega\}$, with the closure taken in the product topology on $[0,1]^{\mathscr F}$.)

Question. If $P$ is disintegrable in $\pi$ and $\mathscr H$ is a $\sigma$-field of subsets of $\Omega$ such that $\mathscr H \supset \mathscr F$, is there an extension of $P$ to $(\Omega, \mathscr H)$ that is disintegrable in $\pi$?
Question. Yes $ P $ is disintegrable in $ pi $ Y $ mathscr H $ is a $ sigma $-field of subsets of $ Omega $ such that $ mathscr H supset mathscr F $, Is there an extension of $ P $ to $ ( Omega, mathscr H) $ that's disintegrable in $ pi $?