## ❕NEWS – Ethereum Miners are unhappy about the new transaction fee model

As we all know, the fee for a crypto transaction depends heavily on network traffic at the time the transaction is made. If you increase the fee you are willing to pay, your transaction can take priority and you will notice that your funds are transferred sooner.

However, the new Ethereum transaction fee model aims to do away with this, and it has miners outraged. Under the new plan, EIP-1559, the transaction fee is set at a standard base rate, with only a small tip added on top for the miners.
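For intuition, the proposed split can be sketched in a few lines. The fee numbers below are made up purely for illustration; what is accurate is the structure: under EIP-1559 the base fee is set by the protocol and burned (destroyed), and the miner only receives the tip.

```python
# Illustrative sketch of the EIP-1559 fee split; all prices are made up.
# The protocol sets a base fee per gas, which is burned,
# and the miner only receives the optional tip on top.

def fee_split(gas_used, base_fee_per_gas, tip_per_gas):
    """Return (total paid by the sender, amount burned, amount to the miner)."""
    burned = gas_used * base_fee_per_gas
    to_miner = gas_used * tip_per_gas
    return burned + to_miner, burned, to_miner

# A plain ETH transfer costs 21000 gas; the per-gas prices here are arbitrary.
total, burned, to_miner = fee_split(21000, base_fee_per_gas=100, tip_per_gas=2)
print(total, burned, to_miner)
```

Compared to the current model, the sender no longer bids the whole fee to the miner, which is exactly why miners object.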

Of course this will be highly beneficial to us: as crypto users it means we can transact at any time without having to worry about traffic. But miners have the most to lose from this, according to this article.

What do you guys think? I look forward to your comments. I, for one, am all for this idea and hope something similar is even implemented in BTC in the future.

## nvidia – Different memory allocation on GTX 1080 ti, Tesla k80, Tesla v100 for the same pytorch model

I have tried loading a DistilBERT model in PyTorch on 3 different GPUs (GeForce GTX 1080 Ti, Tesla K80, Tesla V100). According to the PyTorch CUDA profiler, the memory consumption is identical on all of these GPUs (534 MB). But `nvidia-smi` shows different memory consumption for each of them (GTX 1080 Ti: 1181 MB, Tesla K80: 898 MB, Tesla V100: 1714 MB).

I chose the V100 hoping to accommodate more processes thanks to its extra memory. But because of this overhead, I am not able to fit any more processes on the V100 than on the K80.

Versions: Python 3.6.11, transformers==2.3.0, torch==1.6.0

Any help would be appreciated.

The following shows the memory consumption on each GPU.

----------------GTX 1080 Ti---------------------

```
2020-10-19 02:11:04,147 - CE - INFO - torch.cuda.max_memory_allocated() : 514.33154296875
2020-10-19 02:11:04,147 - CE - INFO - torch.cuda.memory_allocated() : 514.33154296875
2020-10-19 02:11:04,147 - CE - INFO - torch.cuda.memory_reserved() : 534.0
2020-10-19 02:11:04,148 - CE - INFO - torch.cuda.max_memory_reserved() : 534.0
```

The output of `nvidia-smi`:

```
2020-10-19 02:11:04,221 - CE - INFO - | ID | Name                | Serial          | UUID                                     || GPU temp. | GPU util. | Memory util. || Memory total | Memory used | Memory free || Display mode | Display active |
2020-10-19 02:11:04,222 - CE - INFO - |  0 | GeForce GTX 1080 Ti | (Not Supported) | GPU-58d5d4d3-07a1-81b4-ba67-8d6b46e342fb ||       50C |       15% |          11% ||      11178MB |      1181MB |      9997MB || Disabled     | Disabled       |
```

----------------Tesla K80---------------------

```
2020-10-19 12:15:37,030 - CE - INFO - torch.cuda.max_memory_allocated() : 514.33154296875
2020-10-19 12:15:37,031 - CE - INFO - torch.cuda.memory_allocated() : 514.33154296875
2020-10-19 12:15:37,031 - CE - INFO - torch.cuda.memory_reserved() : 534.0
2020-10-19 12:15:37,031 - CE - INFO - torch.cuda.max_memory_reserved() : 534.0
```

The output of `nvidia-smi`:

```
2020-10-19 12:15:37,081 - CE - INFO - | ID | Name      | Serial        | UUID                                     || GPU temp. | GPU util. | Memory util. || Memory total | Memory used | Memory free || Display mode | Display active |
2020-10-19 12:15:37,081 - CE - INFO - |  0 | Tesla K80 | 0324516191902 | GPU-1e7baee8-174b-2178-7115-cf4a063a8923 ||       50C |        3% |           8% ||      11441MB |       898MB |     10543MB || Disabled     | Disabled       |
```

----------------Tesla V100---------------------

```
2020-10-20 08:18:42,952 - CE - INFO - torch.cuda.max_memory_allocated() : 514.33154296875
2020-10-20 08:18:42,952 - CE - INFO - torch.cuda.memory_allocated() : 514.33154296875
2020-10-20 08:18:42,953 - CE - INFO - torch.cuda.memory_reserved() : 534.0
2020-10-20 08:18:42,953 - CE - INFO - torch.cuda.max_memory_reserved() : 534.0
```

The output of `nvidia-smi`:

```
2020-10-20 08:18:43,020 - CE - INFO - | ID | Name                 | Serial        | UUID                                     || GPU temp. | GPU util. | Memory util. || Memory total | Memory used | Memory free || Display mode | Display active |
2020-10-20 08:18:43,020 - CE - INFO - |  0 | Tesla V100-SXM2-16GB | 0323617004258 | GPU-849088a3-508a-1737-7611-75a087f18085 ||       29C |        0% |          11% ||      16160MB |      1714MB |     14446MB || Enabled      | Disabled       |
```
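One quick observation from the logs above: subtracting the memory PyTorch reserved from the figure `nvidia-smi` reports isolates the per-GPU overhead outside PyTorch's allocator (CUDA context plus driver bookkeeping), which is the part that actually differs between the cards. A back-of-the-envelope check using only the numbers already shown:

```python
# Gap between process memory reported by nvidia-smi and the memory
# PyTorch's caching allocator reserved, using the figures logged above.
reserved_mb = 534  # torch.cuda.memory_reserved() was identical on every GPU

nvidia_smi_used_mb = {
    "GTX 1080 Ti": 1181,
    "Tesla K80": 898,
    "Tesla V100": 1714,
}

# The remainder is overhead outside PyTorch (CUDA context, driver
# bookkeeping), which varies by GPU architecture and driver version.
overhead_mb = {gpu: used - reserved_mb for gpu, used in nvidia_smi_used_mb.items()}
print(overhead_mb)
```

This doesn't by itself explain why the overhead is so much larger on the V100, but it does confirm the model occupies the same 534 MB everywhere; only the overhead outside PyTorch's allocator changes.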

## Does response variable scaling in a multivariate linear model affect the independence of the observations?

e.g. in a multivariate linear model: response matrix Y (n x q) regressed onto X (n x p). Does column-wise scaling of Y (that is, dividing each column y.j by its standard deviation) affect the independence of the n rows (observations)?
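One way to build intuition: column-wise scaling of Y is right-multiplication by a diagonal matrix, so each row of the scaled matrix depends only on the corresponding row of Y, and rows are never mixed. A small pure-Python sketch with toy numbers:

```python
# Column-wise scaling: multiply column j by a scale factor s_j.
# This is Y @ D with D diagonal, so rows are never combined.

Y = [[1.0, 10.0],
     [2.0, 20.0],
     [3.0, 60.0]]

# Toy per-column factors (in practice, 1 / standard deviation of each column).
scales = [0.5, 0.25]

def scale_columns(M, s):
    return [[v * f for v, f in zip(row, s)] for row in M]

scaled = scale_columns(Y, scales)

# Scaling a single observation in isolation yields the same row,
# showing that no information flows between rows.
assert scale_columns([Y[1]], scales)[0] == scaled[1]
print(scaled)
```

The sketch only shows that the transformation acts row-locally; whether that settles the statistical question of independence is exactly what the post is asking.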

## programming languages – How would you model Rust procedural macros?

In the Rust programming language one can write a compiler extension function that works on the abstract syntax tree, effectively modifying source code before it gets converted into machine instructions.
In other words, a macro function has the signature

```
Abstract Syntax Tree -> Abstract Syntax Tree
```

Can these be thought of as higher order functions? Usually higher order functions use function composition to produce their output, not source code manipulation.
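Python's standard `ast` module gives a convenient way to experiment with exactly this shape: a transformer is literally a function from one syntax tree to another, applied before compilation. A minimal sketch (the add-to-multiply rewrite is arbitrary, chosen only to make the AST -> AST signature visible):

```python
import ast

class AddToMul(ast.NodeTransformer):
    """An AST -> AST function: rewrite every `a + b` into `a * b`."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # transform children first
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        return node

tree = ast.parse("2 + 3", mode="eval")                    # source -> AST
tree = ast.fix_missing_locations(AddToMul().visit(tree))  # AST -> AST
print(eval(compile(tree, "<ast>", "eval")))               # prints 6
```

As with a Rust procedural macro, the rewrite happens before code generation; whether that qualifies as a higher-order function arguably depends on whether you treat the syntax tree as a first-class representation of a function or merely as data.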

## Facebook M2M-100 Language Translation Model Now Open Sourced

Facebook has a new multilingual AI translation model, M2M-100, covering 100 languages, which it is releasing as open source.

## database design – Cardinality in a Logical Model by number of rows Vs natural relationship between entities

How should one determine cardinality in a logical model?

Should it be based on how the rows of one entity relate to another entity, or should we consider the natural relationship between the entities, i.e. the conceptual relationship?

Example: if I have an entity Course and an entity Course Type, what would be the cardinality? Each course can have only one course type. For example, Bachelor of Arts is a course of course type Bachelors, and Master of Science is of course type Masters.

If I have Course Type as part of the Course entity, then Course Type would contain only the list of valid course types, and the relationship would be “many-to-one” (non-identifying), since many courses can share one course type.

On the other hand, if I model it in such a way that the Course Type entity has Course ID (foreign key) and Course Type, then the relationship between Course and Course Type is “one-to-one” (identifying).

Basically what I am trying to understand is: which of the following is right?

each course has one course type OR many courses have one course type

How should one make this decision ? Are there any guidelines ?

P.S.: I am a beginner and am using Oracle Data Modeler.
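As a concrete sketch of the many-to-one reading (the class and attribute names are just the examples from the question, not Oracle Data Modeler output):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CourseType:
    name: str

@dataclass
class Course:
    name: str
    course_type: CourseType  # many courses -> one course type

bachelors = CourseType("Bachelors")
masters = CourseType("Masters")

courses = [
    Course("Bachelor of Arts", bachelors),
    Course("Bachelor of Science", bachelors),  # shares the same type
    Course("Master of Science", masters),
]

# Each course has exactly one type, but one type is shared by many courses.
print(sum(c.course_type == bachelors for c in courses))  # prints 2
```

Each individual course still “has one course type”, but the relationship as a whole is many-to-one because a single type is shared by many courses; cardinality in a logical model is usually stated from this relationship-wide point of view, which is one way to reconcile the two phrasings above.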

## What do MacBook Pro 2020 model codes mean?

There’s no 2020 or 2021 model of the 16″ MacBook Pro currently. The 16″ MacBook Pro only exists as the Late 2019 model – there are no others.

The codes you have refer to various build-to-order variants. For example, you can buy the 16″ model with 16 GB, 32 GB or 64 GB RAM; there are 3 different CPU models, 3 GPU variants, 5 SSD sizes, and so on.

## dnd 5e – How do I model the fighter’s Great Weapon Fighting fighting style in Anydice?

You seem to be asking for something like the built-in `[explode DIE]` function in AnyDice, except for rerolling the die (once only) if the original roll is below a certain limit.

If you take a look at the AnyDice function library (in the left-hand-side menu) and click the `explode` entry, it actually has a convenient “Do it yourself” section that shows how to reimplement the built-in `explode` function yourself. The trick for making the syntax nice and clean is to use two functions: a wrapper function that takes in the unrolled die as a parameter and calls a helper function for every possible outcome of the roll (i.e. passing the same die to the helper function, which expects a number).

We can use the same trick here:

```
function: reroll DIE:d if under LIMIT:n {
  result: [reroll DIE as DIE if under LIMIT]
}
function: reroll ROLL:n as DIE:d if under LIMIT:n {
  if ROLL < LIMIT { result: DIE }
  else { result: ROLL }
}

loop SIDES over {4,6,8,12,20} {
  output [reroll dSIDES if under 3] named "d[SIDES] with GWF"
}
```

Here, `[reroll DIE if under LIMIT]` is the wrapper function, which simply calls the inner function `[reroll ROLL as DIE if under LIMIT]` for every possible roll of the die. The inner function then just checks if the roll is below the limit, and if so, returns the “re-rolled” die instead of the original roll.

Of course, you could also just call the inner function directly, as in:

```
loop SIDES over {4,6,8,12,20} {
  output [reroll dSIDES as dSIDES if under 3] named "d[SIDES] with GWF"
}
```

and get the same results. But sometimes it’s nice to avoid repeating a parameter like that. In fact, if we’re only interested in modeling rerolls due to Great Weapon Fighting, we might as well leave the constant `LIMIT` parameter out, too, and simplify our wrapper function into just:

```
function: gwf DIE:d {
  result: [reroll DIE as DIE if under 3]
}
```

Bonus: The output of the function(s) given above is itself a die (i.e. a probability distribution over the integers), and thus can be assigned into a custom die that “automatically rerolls itself”. You can then roll as many of these custom dice as you want, or even mix them with other dice.

For example, to get the results of rolling 2dX with Great Weapon Fighting, you could do:

```
loop SIDES over {4,6,8,12,20} {
  GWF: [gwf dSIDES]
  output 2dGWF named "2d[SIDES] with GWF"
}
```

or, alternatively, just:

```
loop SIDES over {4,6,8,12,20} {
  output 2d[gwf dSIDES] named "2d[SIDES] with GWF"
}
```
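If you want to sanity-check the AnyDice output, the same reroll-once-if-below-3 mechanic is easy to enumerate exactly outside it, for example in Python with exact fractions:

```python
from fractions import Fraction

def gwf_distribution(sides, limit=3):
    """Distribution of a die rerolled once if the first roll is below `limit`."""
    dist = {}
    p = Fraction(1, sides)
    for first in range(1, sides + 1):
        if first < limit:
            # Reroll: the first roll is discarded, the new roll is kept.
            for second in range(1, sides + 1):
                dist[second] = dist.get(second, 0) + p * p
        else:
            dist[first] = dist.get(first, 0) + p
    return dist

d6 = gwf_distribution(6)
mean = sum(value * prob for value, prob in d6.items())
print(mean)  # prints 25/6
```

For a d6 this gives a mean of 25/6 ≈ 4.17, which should match the AnyDice output above.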

## set theory – Notions of “completeness” and “sufficiency” of a mathematical model

I’m modelling a real-world problem as having instances $$i$$ in a set $$P$$. The structure of the problem and the model itself are irrelevant to my question so I’ll omit them.

I define certain restrictions $$A$$ on $$P$$ using logical formulae over the structure of $$i$$.
$$A$$ is a necessary condition, i.e. any problem in the problem domain, if represented as some $$i \in P$$, satisfies $$A$$. We can then consider $$P$$ restricted to $$A$$, i.e. $$P' = \{p : p \in P \land p \text{ satisfies } A\}$$.

But it’s possible that $$\exists i' \in P'$$ such that $$i'$$ is a valid mathematical structure but doesn’t actually correspond to any valid real-world problem.

What is the standard terminology to say

1. All valid problem instances are a part of $$P’$$ (I’m informally calling this “sufficiency”)
2. All problem instances which are a part of $$P’$$ are valid (I’m informally calling this “completeness”)

I may then use this terminology to say that in my example, $$P’$$ is sufficient but incomplete (replacing these two words with the actual terminology)

## fitting – How to fit 3 data sets to a model of 3 differential equations?

I want to fit 3 data sets to a model consisting of 3 differential equations with 7 parameters, and to find the parameter values that best fit the data.

I have searched for several related examples and borrowed from this one: https://mathematica.stackexchange.com/questions/28461/how-to-fit-3-data-sets-to-a-model-of-4-differential-equations

```
sol = ParametricNDSolveValue[{Sg'[t] == -((kg Sg[t] X[t])/(Ksg + Sg[t])),
    Sg[0] == 490,
    Sc'[t] == -((kc Sc[t] Sg[t] X[t])/((Ksc + Sc[t]) (Ksg + Sg[t]))),
    Sc[0] == 230,
    X'[t] == -b X[t] - (kc Sc[t] Sg[t] X[t])/((Ksc + Sc[t]) (Ksg + Sg[t]) T) +
      (kg Sg[t] X[t] Y)/(Ksg + Sg[t]), X[0] == 22},
   {Sg, Sc, X}, {t, 0, 100}, {kg, Ksg, kc, Ksc, b, T, Y}];

abscissae = {0., 18., 30., 45., 58., 64., 68., 73., 78., 83.5, 90.5, 95., 99.};
ordinates = {{490.18, 467.06, 442.16, 420.82, 322.32, 248.67, 209.15,
    161.54, 98.73, 28.71, 5.34, 0.76, 0.31},
   {231.3, 232.8, 209.1, 167.1, 127.3, 100.0, 87.5, 76.8, 52.8,
    52.7, 57.7, 57.0, 58.5},
   {22, 30, 36, 60, 77, 92, 107, 115, 125, 138, 151, 156, 156}};

data = ordinates;
ListLinePlot[data, DataRange -> {0, 100}, PlotRange -> All,
 AxesOrigin -> {0, 0}]

transformedData = {ConstantArray[Range@Length[ordinates],
     Length[abscissae]] // Transpose,
    ConstantArray[abscissae, Length[ordinates]], data}~Flatten~{{2, 3}, {1}};

model[kg_, Ksg_, kc_, Ksc_, b_, T_, Y_][i_, t_] :=
  Through[sol[kg, Ksg, kc, Ksc, b, T, Y][t], List][[i]] /;
   And @@ NumericQ /@ {kg, Ksg, kc, Ksc, b, T, Y, i, t};

fit = NonlinearModelFit[transformedData,
   model[kg, Ksg, kc, Ksc, b, T, Y][i, t], {kg, Ksg, kc, Ksc, b, T, Y},
   {i, t}, Method -> "Gradient"];
Show[Plot[Evaluate[Table[fit[i, t], {i, 3}]], {t, 0, 100},
  PlotLegends -> {Sg, Sc, X}],
 ListPlot[data, DataRange -> {0, 100}, PlotRange -> All,
  AxesOrigin -> {0, 0}]]
```

But it’s not working well. This is the first time I have used Mathematica for fitting. I would really appreciate your help!