graphs – Preservation, Loss or Gain Detection in a Crafting Game with Items and Recipes

Suppose we are designing a game like Minecraft where we have many items $i_1, i_2, \ldots, i_n \in I$ and many recipes $r_1, r_2, \ldots, r_m \in R$. Recipes are functions $r: (I \times \mathbb{N})^n \rightarrow I \times \mathbb{N}$. That is, they take some items with nonnegative integer quantities and produce an integer quantity of another item.

For example, the cake recipe in Minecraft is:

3 milk + 3 wheat + 2 sugar + 1 egg $\rightarrow$ 1 cake

… and the recipe for torches is:

1 stick + 1 charcoal $\rightarrow$ 4 torches

Some recipes might even be reversible, for example:
9 diamonds $\leftrightarrow$ 1 diamond block

If there is any combination of recipes that we can apply repeatedly to end up with more of the items we started with, then the game is poorly balanced and players can exploit it.
It is more desirable to design the game with recipes that conserve items, or possibly lose some (like thermodynamic entropy in the real world; toast cannot easily be un-toasted).

Is there an efficient algorithm that can decide whether a set of recipes:

  • conserves items?
  • loses items to inefficiency?
  • gains items?

Is there an efficient algorithm that can find problematic recipes if a game is out of balance?

My first thought is that there is a graph structure / max-flow problem here, but it is very complex and looks like a knapsack problem. Or maybe it could be formulated as a SAT problem; that is what I am considering coding at the moment, but there could be something more efficient.

We could encode the recipes in a matrix $\mathbf{R} \in \mathbb{R}^{m \times n}$ where rows correspond to recipes and columns correspond to items. An entry is negative if the recipe consumes that item, positive if it produces it, and zero if it is unused. Similar to the well-known matrix method for detecting cycles in a graph, we could raise $\mathbf{R}$ to a high power and take row sums to see whether item totals keep growing, stay balanced, or go negative. However, I am not sure this will always work.
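To make the knapsack-style search concrete, here is a sketch: represent each recipe as a net item-change vector and brute-force small nonnegative application counts. The recipe data is made up for illustration (one recipe is deliberately unbalanced), and the sketch ignores crafting order and whether intermediate inventory can go negative:

```python
from itertools import product

# Recipes as net item changes: consumed items negative, produced positive.
# "block_to_diamonds" is deliberately unbalanced for illustration.
RECIPES = {
    "diamonds_to_block": {"diamond": -9, "diamond_block": +1},
    "block_to_diamonds": {"diamond_block": -1, "diamond": +10},  # buggy: should be +9
}
ITEMS = sorted({item for recipe in RECIPES.values() for item in recipe})

def net_change(counts):
    """Total item change when each recipe is applied counts[name] times."""
    total = dict.fromkeys(ITEMS, 0)
    for name, k in counts.items():
        for item, delta in RECIPES[name].items():
            total[item] += k * delta
    return total

def find_gain(max_uses=5):
    """Search small combinations for one that loses nothing and gains something."""
    names = list(RECIPES)
    for combo in product(range(max_uses + 1), repeat=len(names)):
        if not any(combo):
            continue  # skip the do-nothing combination
        counts = dict(zip(names, combo))
        total = net_change(counts)
        if all(v >= 0 for v in total.values()) and any(v > 0 for v in total.values()):
            return counts, total
    return None

print(find_gain())  # one use of each recipe nets a free diamond
```

This brute force is exponential in the number of recipes, so it only answers the "find problematic recipes" question for tiny games. For something efficient, the same matrix $\mathbf{R}$ could be handed to a linear-programming solver: ignoring crafting order, the game gains items exactly when there is some $x \geq 0$, $x \neq 0$, with $\mathbf{R}^\top x \geq 0$ componentwise and strictly positive somewhere, and the support of $x$ names the problematic recipes.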

Any discussion, code, or recommended reading is greatly appreciated.

Git: replacing files while merging (or preserving) their history

Long ago, and long before I joined the project, my project was migrated from clearcase to git. The migration left the following file layout:

.
├── bar
│   ├── bar.c
│   └── bar.h
├── foo
│   ├── foo.c
│   └── foo.h
└── patches
    ├── bar
    │   └── bar.c
    └── foo
        └── foo.h

The build system makes files under patches hide the files they patch. The unpatched files have been kept as a record of the patches made back in the old clearcase days.
The patched files have (normally) not been changed since the migration.

Now is the time to apply these patches!

I see two options:

Fool developers into believing there was never a patch!

# This totally is a pseudo-script. Sorry, friday night, not at work anymore!
pfile=patches/bar/bar.c
ofile=bar/bar.c
for commit in $(git log --reverse --format=%H -- "$pfile"); do
    git show "$commit:$pfile" > "$ofile"
    git add "$ofile"
    git commit -C "$commit"   # reuse message (%B), author (%aN) and date (%cI)
done

(This could probably be done with some kind of loop I have not worked out today; feel free to improve it.)

Pros

  • git log -- bar/bar.c really shows the full file history, with the application of the clearcase patch as one of its commits.
  • If I want to run a filter to move foo and bar into their own repositories, history will be preserved (and clean)

Cons

  • This adds a lot of commits!
  • It does not easily preserve commit consistency: if commit 01234567 modified both bar/bar.h and patches/bar/bar.c, it will now show up as two different commits

Preserve the commits. This approach can be found, for example, here:

git rm bar/bar.c
git commit -m 'Remove unpatched clearcase file'
git mv patches/bar/bar.c bar/bar.c
git commit -m 'Apply clearcase and git patches'

Pros

  • Few commits added
  • If I set git config log.follow true (thanks @VonC), my fellow developers will hardly notice when browsing the history, yet the initial patch application remains clear. That will be enough.

Cons

  • Unless a lot of effort is put in, this is not robust against future (and expected) subdirectory filters.
  • The initial clearcase patch is hidden. To see the history of the unpatched files, I will need some git trickery now that --follow is the default, or to set up an alias get_clearcase_patch that computes the diff.

Is there a third option I have not seen (one that preserves history; I thought of one that, of course, does not ^^)?
Are there pros and cons I have not seen?
What is the best solution?

dnd 5e – Self-preservation: how do I DM NPCs that want to live?

It strikes me that humanoid NPCs do not fear death as much as they should. Combat encounters often end in total massacre, when realistically a single fatality would impress the seriousness of the situation on everyone present, resulting in a terrified retreat or an unconditional surrender. Anyone who is wounded should begin to favor preserving their own life over success in combat. Obviously, these rules do not apply when NPCs are forced to fight against their will.

Does anyone have experience running this style of campaign? How does a combat scene play out under these constraints? How would experience points be awarded? Can the game still be rewarding and fun?

Probability: conditional expectation and variance of a mean-preserving spread

My first post here, and my math skills are more than a little rusty. I have a simple question for you: suppose that Y is a mean-preserving spread of X.
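For reference (my addition, not part of the original question), the standard construction: $Y$ is a mean-preserving spread of $X$ when

```latex
Y \stackrel{d}{=} X + Z \quad \text{with} \quad \mathbb{E}[Z \mid X] = 0,
```

which forces $\mathbb{E}[Y] = \mathbb{E}[X]$ while $\operatorname{Var}(Y) \ge \operatorname{Var}(X)$, i.e. $Y$ has the same mean as $X$ but is at least as dispersed.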

Is it always true that E(X | X > Y) < E(Y | Y < X)? How can this be proved?

Is it also true that, for some value of C > 0, Var(Y | Y > C) < Var(X | X > C)?

Thank you very much in advance!

multivariable calculus – Separability of a function preserved only under linear transformation

I have a question about the following problem:

Let $u: \mathbb{R}^{n} \rightarrow \mathbb{R}$ be a multivariable function which has the property of "additive separability", meaning that we can write $\displaystyle u(x) = \sum_{l=1}^{n} u_{l}(x_{l})$.
We need to show that a monotonic transformation preserves this property ONLY if the transformation is affine.

The first step is to show that an affine function preserves this "additive separability" property:

Consider the function $\hat{u}(x) = a u(x) + b$, where $a > 0$ and $b \in \mathbb{R}$.
Then:

\begin{align}
\hat{u}(x) &= a \sum_{l=1}^{n} u_{l}(x_{l}) + b \\
&= \sum_{l=1}^{n} a\, u_{l}(x_{l}) + b \\
&= \sum_{l=1}^{n} \left( a\, u_{l}(x_{l}) + \frac{b}{n} \right) \\
&= \sum_{l=1}^{n} \tilde{u}_{l}(x_{l})
\end{align}
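A quick numerical sanity check of this first part (my own sketch, outside the proof): in two dimensions, a function $h$ is additively separable if and only if $h(a,c) + h(b,d) = h(a,d) + h(b,c)$ for every "rectangle" of points. So we can verify that an affine transform of a separable $u$ keeps this identity, while a non-affine monotonic transform such as $\exp$ breaks it:

```python
import math

def u(x, y):
    # An additively separable function: u(x, y) = u1(x) + u2(y)
    return x**2 + y**3

def rectangle_gap(h, a, b, c, d):
    """Zero for every rectangle iff h is additively separable (in 2D)."""
    return h(a, c) + h(b, d) - h(a, d) - h(b, c)

affine = lambda x, y: 2.0 * u(x, y) + 5.0    # affine transform of u
nonaffine = lambda x, y: math.exp(u(x, y))   # non-affine monotonic transform

print(rectangle_gap(affine, 0.1, 0.7, -0.3, 0.5))     # ~0: still separable
print(rectangle_gap(nonaffine, 0.1, 0.7, -0.3, 0.5))  # nonzero: separability lost
```

One rectangle with a nonzero gap is enough to refute separability; showing separability, of course, still needs the algebraic argument above.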

Now, my problem is to understand the second part of the test:

We need to show that if a monotonic transformation preserves the property, it is necessarily an affine transformation.
The proof that I found says:
Consider a monotonic transformation $u^{*}(x) = f(u(x))$.
We want to show that if $u^{*}(x) = f(u(x)) = f\left( \sum_{l=1}^{n} u_{l}(x_{l}) \right) = \sum_{l=1}^{n} f_{l}(x_{l})$, then $f$ is an affine function.

First:

$\dfrac{\partial u^{*}(x)}{\partial x_{j}} = \dfrac{\partial f\left( \sum_{l=1}^{n} u_{l}(x_{l}) \right)}{\partial x_{j}} = f'\left( \sum_{l=1}^{n} u_{l}(x_{l}) \right) u_{j}'(x_{j}) \hspace{2cm} (1)$

Also, since we know that $u^{*}(x) = \sum_{l=1}^{n} f_{l}(x_{l})$, then:

$\dfrac{\partial u^{*}(x)}{\partial x_{j}} = \dfrac{\partial \sum_{l=1}^{n} f_{l}(x_{l})}{\partial x_{j}} = f_{j}'(x_{j}) \hspace{2cm} (2)$

From (1) and (2):

$f'\left( \sum_{l=1}^{n} u_{l}(x_{l}) \right) = \dfrac{f_{j}'(x_{j})}{u_{j}'(x_{j})} \hspace{2cm} (3)$

So, to show that $f$ is affine, it suffices to prove that $f'$ is constant.

Take the following two vectors, $x^{1}$ and $x^{2}$, both in $\mathbb{R}^{n}$, such that:

$\sum_{l=1}^{n} u_{l}(x^{1}_{l}) \neq \sum_{l=1}^{n} u_{l}(x^{2}_{l})$ and $x^{1}_{j} = x^{2}_{j} = x_{j}$

Then, due to (3):

\begin{align}
f'\left( \sum_{l=1}^{n} u_{l}(x^{1}_{l}) \right) &= \dfrac{f_{j}'(x^{1}_{j})}{u_{j}'(x^{1}_{j})} = \dfrac{f_{j}'(x_{j})}{u_{j}'(x_{j})} \\
f'\left( \sum_{l=1}^{n} u_{l}(x^{2}_{l}) \right) &= \dfrac{f_{j}'(x^{2}_{j})}{u_{j}'(x^{2}_{j})} = \dfrac{f_{j}'(x_{j})}{u_{j}'(x_{j})}
\end{align}

So finally:

$f'\left( \sum_{l=1}^{n} u_{l}(x^{1}_{l}) \right) = f'\left( \sum_{l=1}^{n} u_{l}(x^{2}_{l}) \right)$

$\square$

I understand everything except why we can assert, without loss of generality, that the $j$-th components of $x^{1}$ and $x^{2}$ are equal. Is there another way to prove this without that assumption? It seems like a strong assumption.

Thank you.