## geometry: sine and cosine functions seem discontinuous, which breaks my mathematical calculations

I'm sorry to bother you with this, as there may be a very obvious answer, but still: I just stumbled upon some old trigonometry stuff and realized that:

`sin(0π) ≠ 0` and
`sin(π/2) ≠ 1` and
`sin(3π/2) ≠ -1`

but rather the value at all those sine inputs should be `undefined`. Since all these numbers (0, 1, -1) are actually limits, we cannot have a ratio of 0, since that would mean there is an angle of 0, which is not possible in a right triangle.

The same applies to 1 and -1, since there cannot be two right angles in a triangle :).

then, later:

### the `range` of sine

it is not (-1, 1) but rather: (-1, 0) and (0, 1)

### the `domain` (sine inputs) is:

every `x` in the real numbers such that:

x % π != 0 # <- otherwise we get a ratio of 0, which is impossible

x % (π/2) != 0 # <- otherwise we get a ratio of 1, which is impossible

x % (3π/2) != 0 # <- otherwise we get a ratio of -1, which is impossible

The chart should look like this, with the points excluded from the chart:

[plot of sin(x) with those points excluded]

My question is: `why is my thinking wrong?`

Otherwise, all those fancy things like Euler's identity won't work for π. For example:

e^(iπ) = cos(π) + i * sin(π)

it would mean that:

e^(iπ) = undefined + i * undefined

which makes no sense.

`Are all those -1, 0, 1 values just little helper crutches to keep us going so the math will "somehow" work?`

• the same applies to the cosine function, of course
** sorry for the somewhat awkward notation, I've been ruined by programming
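For reference, the standard unit-circle definitions that math libraries implement return finite values at all of these inputs, and Euler's identity checks out numerically. A quick Python sketch (a numerical illustration only, not a proof):

```python
import cmath
import math

# sine is defined (and finite) at 0, pi/2, and 3*pi/2
print(math.sin(0))                # 0.0 exactly
print(math.sin(math.pi / 2))      # 1.0, up to floating-point rounding
print(math.sin(3 * math.pi / 2))  # -1.0, up to floating-point rounding

# Euler's identity: e^(i*pi) = cos(pi) + i*sin(pi) = -1
z = cmath.exp(1j * math.pi)
print(abs(z - (-1)))  # tiny rounding error, on the order of 1e-16
```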

## dnd 5e – Mounting size calculations

First of all: Triceratops is a Huge beast, not Medium (MM p. 80).

If it were actually Medium, then only Tiny and Small creatures could ride it. PHB p. 198:

A willing creature that is at least one size larger than you and that has an appropriate anatomy can serve as a mount

A rider and his mount are still considered different creatures and retain their statistics, including size.

## Design – How to keep user inputs consistent with the assumed inputs to decrease calculations in the backend?

Background:

I am writing a clinical trial simulator. The user defines options for future trials, for example a trial with 100 placebo patients, 200 treatment patients, an "optimistic" outcome scenario, etc. There may be 1 to 20,000 such options. For each option, between 10,000 and 100,000 test results are simulated. The simulated data are later used for analyses, per assumed option.

Implementation:

It is an Angular / Electron desktop application (in the future there will be a web version). The front end sends a REST API request with the test options. The Python back end performs the simulations in parallel and stores the results in a PostgreSQL database, one row per option. The database lives on the user's laptop or external storage. Only one user works on a simulation at a time.

Performance:

Simulations can take a long time and their results require a lot of memory. Therefore, once the user adds test options, I simulate only those additions rather than re-simulating every option. I also allow aborting a running simulation.

Data consistency:

The inputs the user assumes (the test options) must match the inputs of the last saved simulations (and subsequent analyses). Therefore, I always want to notify the user about any difference between the front end and the database, and analyses would be disabled while such a difference exists.

Question:

How could I ensure this consistency? For example, the front end could track three sets of test options: (1) for completed simulations, (2) for simulations in progress, and (3) for current user inputs with no simulation initiated. But this seems fragile. Also, I don't feel comfortable using the front end, instead of the back end and the database, as the source of truth. Does this front-end approach make sense, or should I go with the back end? Or what else would work best?

Note: similar issues are analyzed when designing predictive modeling software, although with stable user inputs and a focus on performance. Beyond that, I have not found much relevant information on SE.
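One back-end-centric variant would be to store a canonical fingerprint (hash) of the option set in the database next to the saved simulations, and have the front end compare fingerprints instead of tracking three option sets itself. A minimal Python sketch (the function name `options_fingerprint` and the option keys are made up for illustration):

```python
import hashlib
import json

def options_fingerprint(options):
    """Order-independent SHA-256 digest of a list of option dicts."""
    # canonicalize each option (sorted keys), then sort the list itself
    canonical = sorted(json.dumps(opt, sort_keys=True) for opt in options)
    return hashlib.sha256('\n'.join(canonical).encode('utf-8')).hexdigest()

saved = [{'placebo': 100, 'treatment': 200, 'scenario': 'optimistic'}]
current = [{'scenario': 'optimistic', 'treatment': 200, 'placebo': 100}]

# the same option set matches regardless of key or element order
assert options_fingerprint(saved) == options_fingerprint(current)

# any edit changes the fingerprint, so analyses can be disabled on mismatch
current.append({'placebo': 50, 'treatment': 50, 'scenario': 'pessimistic'})
assert options_fingerprint(saved) != options_fingerprint(current)
```

With this shape the database (via the back end) stays the source of truth: the front end only recomputes the fingerprint of what the user currently sees and asks the back end whether it matches the last saved one.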

## acegen – slow calculations with Mathematica 12.0 + AceGen 7.006

I have updated Mathematica to version 12.0 and AceGen to version 7.006. I am surprised that calculations now take much longer than with the earlier AceGen 6.804. For example, using the built-in example:

AceGen -> Help -> AceFEM Manual -> Examples of AceFEM -> Cyclic tension test, advanced postprocessing, animations

I have the following total absolute times:

AceGen 7.006: 122.9 s

AceGen 6.804: 14.3 s

Can anyone explain the reason? Obviously the calculations are run on the same computer and software; the only difference is the AceGen version. Now I can't compute anything advanced; it gets stuck at the first step.

## python: simple carbon emissions from fuel and fire calculations

I would like to get the simple carbon emission calculations below written in the most Pythonic / direct way.

```
FUEL_FACTORS = {'diesel': {'energy_factor': 38.6,
                           'co2_emission_factor': 69.9,
                           'ch4_emission_factor': 0.1,
                           'n2o_emission_factor': 0.5},
                'petrol': {'energy_factor': 34.2,
                           'co2_emission_factor': 67.4,
                           'ch4_emission_factor': 0.5,
                           'n2o_emission_factor': 1.8},
                'avgas':  {'energy_factor': 33.1,
                           'co2_emission_factor': 67.0,
                           'ch4_emission_factor': 0.05,
                           'n2o_emission_factor': 0.7}
                }

FIRE_FACTORS = {'gwp_co2': 1,
                'gwp_ch4': 25,
                'gwp_n2o': 298,
                'ch4_co2_mass_ratio': 7.182e-3,
                'n2o_co2_mass_ratio': 1.329e-5,
                }

def get_fuel_emission_factor(fuel):
    """Get a fuel's total CO2 emission factor."""
    factors = FUEL_FACTORS[fuel]
    energy_f = factors['energy_factor']
    fuel_emission_factor = 0
    for gas in ('co2', 'ch4', 'n2o'):
        gas_emission_factor = factors[gas + '_emission_factor']
        fuel_emission_factor += energy_f * gas_emission_factor * 1e-3
    return fuel_emission_factor

def fuel_to_co2(fuel, quantity):
    """Get total tCO2 emitted per kL of fuel."""
    fuel_emission_factor = get_fuel_emission_factor(fuel)
    return fuel_emission_factor * quantity

def burned_c_to_co2(tC):
    """Get CH4 and N2O emissions in tCO2 from tonnes of C burned."""
    ch4_emissions = tC * FIRE_FACTORS['gwp_ch4'] * FIRE_FACTORS['ch4_co2_mass_ratio']
    n2o_emissions = tC * FIRE_FACTORS['gwp_n2o'] * FIRE_FACTORS['n2o_co2_mass_ratio']
    return ch4_emissions + n2o_emissions

fuel_emissions = fuel_to_co2('petrol', 2.7395)
fire_emissions = burned_c_to_co2(5.74)
```
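With the dictionary indexing as above, the loop in `get_fuel_emission_factor` can also be collapsed into a `sum()` over a generator expression. One possible shape (a sketch, shown here with just the `petrol` entry for brevity):

```python
FUEL_FACTORS = {  # subset of the table above, for illustration
    'petrol': {'energy_factor': 34.2,
               'co2_emission_factor': 67.4,
               'ch4_emission_factor': 0.5,
               'n2o_emission_factor': 1.8},
}

def get_fuel_emission_factor(fuel):
    """Total CO2 emission factor for a fuel, summed over the three gases."""
    factors = FUEL_FACTORS[fuel]
    return sum(factors['energy_factor'] * factors[gas + '_emission_factor'] * 1e-3
               for gas in ('co2', 'ch4', 'n2o'))
```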

## vb.net – use FormatNumber or FormatCurrency for my calculations in VB.Net

```
Private Function Suma() As String
    Dim remu, sac, vac, ext, asig As Decimal
    Dim total As Decimal

    remu = FormatNumber(TextBox1.Text, 2)
    ext = FormatNumber(TextBox2.Text, 2)
    asig = FormatNumber(TextBox3.Text, 2)
    vac = FormatNumber(TextBox4.Text, 2)
    sac = FormatNumber(TextBox5.Text, 2)

    total = remu + ext + asig + vac + sac

    Return FormatNumber(total, 2, , , TriState.False)
End Function
```

## computational geometry: is there any way to perform calculations of the Mandelbrot set using only integers?

I would like to create a program in JavaScript (JS) that draws the Mandelbrot set with arbitrary precision (zoom). JS has a built-in BigInt type that supports basic operations such as +, *, /, and exponentiation on arbitrary-precision integers. JS does not support arbitrary-precision floating-point calculations.

Question: Is there a way to perform the Mandelbrot set calculations using only integer-based arithmetic (with arbitrary precision)? If so, how should I do it?
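Yes: the usual approach is fixed-point arithmetic. Pick a scale factor S (a power of 10 or 2), represent every coordinate x as the integer round(x·S), and divide one factor of S back out after every multiplication. A sketch in Python, whose built-in ints are arbitrary precision just like BigInt (the function name is mine; a JS port would use BigInt division, which truncates rather than floors for negatives, a sub-pixel difference):

```python
def mandel_iters(cx, cy, scale, max_iter):
    """Escape-time iteration count for c = (cx + i*cy) / scale,
    using integer arithmetic only."""
    zx, zy = 0, 0
    limit = 4 * scale * scale  # |z|^2 > 4, in scaled units
    for n in range(max_iter):
        zx2, zy2 = zx * zx, zy * zy  # these products carry a factor of scale**2
        if zx2 + zy2 > limit:
            return n
        # z <- z^2 + c; divide one factor of scale back out of each product
        zx, zy = (zx2 - zy2) // scale + cx, (2 * zx * zy) // scale + cy
    return max_iter

SCALE = 10 ** 8
print(mandel_iters(0, 0, SCALE, 50))      # c = 0 is in the set: never escapes
print(mandel_iters(SCALE, 0, SCALE, 50))  # c = 1 escapes after a few iterations
```

For deep zooms, increase SCALE so the pixel spacing stays well above 1/SCALE; precision then grows automatically with the integers.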

## dnd 5e: can this "AC by CR" chart be used in "DPR by level" calculations?

Inspired by this, this, and this, I just processed this list of monsters to get the average AC (and standard deviation) of monsters by CR. It comprises all SRD monsters, I think.

A common DPR chart shows the average DPR of a character at a given level. DPR often depends on the character's hit bonus and the monsters' AC. For example, at level 10, Bob can:

• Increase his damage bonus by +2
• Increase his hit bonus by +2

We can't judge which one is better without knowing how often Bob's attacks hit his enemies.

Can this chart be used in DPR calculations? Ideally, it could be translated to find, for a character of a given level, the average AC of the monsters they face.

One way would be to consider the character's level to match the CR of the enemies. So, for example, Bob would calculate his accuracy against a CR 10 enemy. Bob has +5 to hit, deals 9 damage per attack with 3 attacks per round, and a CR 10 monster has an average AC of 18.

• With +2 damage, Bob has a 40% hit chance, 11 damage per attack and 13.2 DPR.
• With +2 to hit, Bob has a 50% chance to hit, 9 damage per attack and 13.5 DPR.
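The two bullet computations can be reproduced directly (a Python sketch of the arithmetic; it ignores crits and automatic misses for simplicity):

```python
def hit_chance(to_hit, ac):
    """Chance that d20 + to_hit meets or beats ac (no crit/auto-miss rules)."""
    needed_roll = ac - to_hit
    return max(0, min(20, 21 - needed_roll)) / 20

def dpr(to_hit, ac, damage, attacks):
    return hit_chance(to_hit, ac) * damage * attacks

# Bob vs. an average CR 10 monster (AC 18), 3 attacks per round:
print(dpr(5, 18, 11, 3))  # +2 damage: 0.40 * 11 * 3 = 13.2
print(dpr(7, 18, 9, 3))   # +2 to hit: 0.50 *  9 * 3 = 13.5
```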

If matching monster CR to character level is not appropriate, what would be? Can we use the standard encounter-building rules to build a table matching player level to average enemy AC?

## calculation: density of the absolute value of a normal random variable (error in calculations)

Suppose the random variable $$X$$ is distributed as $$N(0, \sigma^2)$$.

We want to find the density of $$| X |$$.

Let $$F$$ and $$f$$ be the cumulative distribution function and the density function of $$|X|$$, respectively.

Then
$$F(x) = \mathbb{P}(-x \leq X \leq x) = \int_0^x \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{u^2}{2\sigma^2}} \, du - \int_0^{-x} \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{u^2}{2\sigma^2}} \, du.$$
Differentiation and application of the fundamental theorem of calculus give
$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{x^2}{2\sigma^2}} - \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{(-x)^2}{2\sigma^2}} \cdot (-1) = \frac{2}{\sqrt{2\pi}\,\sigma} e^{-\frac{x^2}{2\sigma^2}}.$$
However,
$$\int_{\mathbb{R}} \frac{2}{\sqrt{2\pi}\,\sigma} e^{-\frac{x^2}{2\sigma^2}} \, dx = 2,$$
so $$\frac{2}{\sqrt{2\pi}\,\sigma} e^{-\frac{x^2}{2\sigma^2}}$$ cannot be a density function.

I have checked this simple calculation many times and I still cannot find the error. Any ideas?
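One way to localize where the factor of 2 comes from is a numeric sanity check of the candidate density (a Python sketch with σ = 1): integrated over [0, ∞), the support of |X|, it gives 1, while the quoted integral over all of ℝ counts each value of |X| twice:

```python
import math

def f(x, sigma=1.0):
    # candidate density from the derivation above
    return 2.0 / (math.sqrt(2.0 * math.pi) * sigma) * math.exp(-x * x / (2.0 * sigma ** 2))

def midpoint_integral(g, a, b, n=200000):
    # simple midpoint rule; accurate enough for this smooth integrand
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

area_half_line = midpoint_integral(f, 0.0, 10.0)    # over the support [0, inf)
area_real_line = midpoint_integral(f, -10.0, 10.0)  # the integral quoted above
print(area_half_line)  # approximately 1.0
print(area_real_line)  # approximately 2.0
```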

## linear algebra – large-scale projected minimum eigenvalue calculations

I am interested in efficient numerical procedures for solving large-scale instances of the following projected minimum eigenvalue problem:

$$\mu = \min_{v \in \ker(A)} \frac{v^T H v}{\lVert v \rVert^2}$$

where $$H$$ is a symmetric $$n \times n$$ matrix and $$A$$ is a (possibly rank-deficient) $$m \times n$$ matrix. I am aware that $$\mu$$ can (in principle) be determined directly as

$$\mu = \lambda_{\min}\left(Z^T H Z\right)$$

where $$Z$$ is an $$n \times p$$ matrix whose columns form an orthonormal basis for $$\ker(A)$$; however, for my intended use case $$H$$ and $$A$$ are large enough that performing an orthogonal decomposition of $$A$$ would be prohibitively expensive. On the other hand, I assume that I can perform matrix-vector products with each of them efficiently, and that I have efficiently computable preconditioning operators at my disposal for $$H$$ and/or the block matrix

$$\begin{pmatrix} H & A^T \\ A & 0 \end{pmatrix},$$

and possibly for $$A$$ as well.

Sorry if this is a trivial question: this seems to be the kind of problem that has probably been studied before, but I still haven't found anything in the literature that seems to fit these limitations. Any suggestions / pointers would be greatly appreciated!
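For what it's worth, on small instances the direct route through an explicit null-space basis is easy to set up as a reference to validate any large-scale scheme against. A NumPy sketch (the SVD below is precisely the orthogonal decomposition that is too expensive at scale, where one would instead reach for an iterative eigensolver with the kernel projector applied matrix-free):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 40, 12
H = rng.standard_normal((n, n))
H = (H + H.T) / 2                  # make H symmetric
A = rng.standard_normal((m, n))

# orthonormal basis Z for ker(A): right singular vectors of A whose
# singular values are (numerically) zero
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-12 * s[0]))
Z = Vt[rank:].T                    # n x p, with Z^T Z = I and A Z = 0

# mu = smallest eigenvalue of the projected matrix Z^T H Z
mu = np.linalg.eigvalsh(Z.T @ H @ Z).min()

# sanity check: mu lower-bounds the Rayleigh quotient of any kernel vector
v = Z @ rng.standard_normal(Z.shape[1])
assert mu <= (v @ H @ v) / (v @ v) + 1e-10
```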