calculus: density of the absolute value of a normal random variable (error in calculation)

Suppose the random variable $X$ is distributed as $N(0, \sigma^2)$.

We want to find the density of $ | X | $.

Let $F$ and $f$ be the cumulative distribution function and the density function of $|X|$, respectively.

So
$$ F(x) = \mathbb{P}(-x \leq X \leq x) = \int_0^x \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{u^2}{2\sigma^2}} \, du - \int_0^{-x} \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{u^2}{2\sigma^2}} \, du. $$
Differentiation and application of the fundamental theorem of calculus give
$$ f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{x^2}{2\sigma^2}} - \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{(-x)^2}{2\sigma^2}} (-1) = \frac{2}{\sqrt{2\pi}\,\sigma} e^{-\frac{x^2}{2\sigma^2}}. $$
However,
$$ \int_{\mathbb{R}} \frac{2}{\sqrt{2\pi}\,\sigma} e^{-\frac{x^2}{2\sigma^2}} \, dx = 2, $$
so $\frac{2}{\sqrt{2\pi}\,\sigma} e^{-\frac{x^2}{2\sigma^2}}$ cannot be a density function.

I have checked this simple calculation many times and I still cannot find the error. Any ideas?
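Not part of the original question, but a quick numerical check of the two integrals (a sketch in plain Python; the value `sigma = 2.0` is an arbitrary choice) makes the discrepancy concrete:

```python
import math

sigma = 2.0  # arbitrary choice of standard deviation

def f(x):
    # The candidate density derived above: 2/(sqrt(2*pi)*sigma) * exp(-x^2 / (2*sigma^2))
    return 2.0 / (math.sqrt(2 * math.pi) * sigma) * math.exp(-x * x / (2 * sigma * sigma))

def midpoint_integral(g, a, b, n=200_000):
    # Simple midpoint rule; plenty accurate for this smooth, rapidly decaying integrand
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

over_reals = midpoint_integral(f, -40.0, 40.0)    # integral over (effectively) all of R
over_half_line = midpoint_integral(f, 0.0, 40.0)  # integral over [0, oo) only

print(over_reals)      # ~2.0, the value computed in the question
print(over_half_line)  # ~1.0
```

The two printed values differ by exactly a factor of two, which is the tension the question is asking about.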

linear algebra: large-scale projected minimum eigenvalue calculations

I am interested in efficient numerical procedures for solving large-scale instances of the following projected minimum eigenvalue problem:

$ \mu = \min_{v \in \ker(A)} \frac{v^T H v}{\lVert v \rVert^2} $

where $H$ is a symmetric $n \times n$ matrix and $A$ is a (possibly rank-deficient) $m \times n$ matrix. I am aware that $\mu$ can (in principle) be determined directly as:

$ \mu = \lambda_{\min}\left(Z^T H Z\right) $

where $Z$ is an $n \times p$ matrix whose columns form an orthonormal basis for $\ker(A)$; however, for my intended use case $H$ and $A$ are large enough that performing an orthogonal decomposition of $A$ would be prohibitively expensive. On the other hand, I assume that I can perform matrix-vector products with each of them efficiently, and that I have efficiently computable preconditioners at my disposal for $H$ and/or the block matrix:

$ \begin{pmatrix}
H & A^T \\
A & 0
\end{pmatrix} $
,

and possibly for $A$ as well.

Sorry if this is a trivial question: this seems to be the kind of problem that has probably been studied before, but I still haven't found anything in the literature that seems to fit these limitations. Any suggestions / pointers would be greatly appreciated!
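For reference, here is a small-scale sketch (Python with SciPy, not part of the question) of the direct approach described above; the matrices and sizes are made up for illustration, and the explicit `null_space` step is exactly what becomes infeasible at scale:

```python
import numpy as np
from scipy.linalg import null_space, eigh

rng = np.random.default_rng(1)
n, m = 8, 3

# A random symmetric H and a rank-deficient A (third row is a combination of the first two)
H = rng.standard_normal((n, n))
H = (H + H.T) / 2
A = rng.standard_normal((m, n))
A[2] = A[0] + A[1]

# Direct approach: orthonormal basis Z of ker(A), then the smallest eigenvalue of Z^T H Z
Z = null_space(A)
mu = eigh(Z.T @ H @ Z, eigvals_only=True)[0]

# Sanity check against the Rayleigh-quotient definition over random vectors in ker(A);
# mu should be a lower bound for every sampled quotient
V = Z @ rng.standard_normal((Z.shape[1], 10_000))
quotients = np.einsum('ij,ij->j', V, H @ V) / np.einsum('ij,ij->j', V, V)
print(mu, quotients.min())
```

In the large-scale regime the question targets, one would replace the explicit null-space factorization with an iterative eigensolver that works through matrix-vector products, but that is beyond this sketch.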

centos7 – iowait-bound server – load calculations and process scheduling

I have a script I wrote that runs badblocks against a disk shelf full of drives, and I am trying to understand the server load that develops and at what point the load becomes critical in this use case.

In general, I have adhered to the rule of thumb that a server load <= #-of-cores is ideal, while <= 2x #-of-cores generally will not cause significant performance degradation unless the server is handling latency-sensitive real-time workloads, but I don't think that generality applies in this use case.

In the following snapshot from top, you can see that I am running badblocks against 8 devices and the associated load is ~8, which I understand, since there are effectively 8 processes stuck in the run queue due to the nature of badblocks. But only 2 CPU cores are tied up by these processes. So, a couple of questions:

1.> Am I slowing down my badblocks runs by attempting so many tests simultaneously and, if so, why aren't the available cores being used?

2.> I assume this generally "non-ideal" CPU load would not affect servicing other requests, such as data shared from other drives on the server (assuming there is no bottleneck on the SAS card), because 2 cores are free and available, correct?

3.> If 2 cores are capable of handling 8 badblocks processes without impact between them (as shown), why do 2 badblocks processes tie up one core while a third causes a second core to be used? One would assume that 8 processes should consume 3-4 cores, not 2 optimally scheduled ones, right?

The platform is CentOS 7 | – | The processor is an Intel E3-1220 v2 (quad-core, no hyper-threading) | – | The disk shelf is connected to the server through an external SAS HBA (no RAID)

top - 16:03:12 up 6 days, 15:21, 13 users,  load average: 7.84, 7.52, 6.67
Tasks: 171 total,   2 running, 169 sleeping,   0 stopped,   0 zombie
%Cpu0  :  0.0 us,  0.0 sy,  0.0 ni, 99.7 id,  0.3 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  :  0.3 us,  6.0 sy,  0.0 ni,  0.0 id, 93.6 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu2  :  0.0 us,  0.0 sy,  0.0 ni, 95.7 id, 4.3 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu3  :  2.3 us,  3.0 sy,  0.0 ni,  0.0 id, 94.7 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  7978820 total,  7516404 free,   252320 used,   210096 buff/cache
KiB Swap:  4194300 total,  4194300 free,        0 used.  7459724 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
22322 root      20   0  122700   9164    832 D   3.3  0.1   18:36.77 badblocks
22394 root      20   0  122700   9164    832 D   1.3  0.1  15:52.98 badblocks
23165 root      20   0  122700   9152    820 D   1.3  0.1   0:36.94 badblocks
23186 root      20   0  122700   5792    808 D   1.3  0.1   0:02.54 badblocks
23193 root      20   0  122700   5004    768 D   1.3  0.1   0:02.17 badblocks
23166 root      20   0  122700   9152    820 D   1.0  0.1   0:36.11 badblocks
23167 root      20   0  122700   9148    820 D   1.0  0.1   0:39.74 badblocks
23194 root      20   0  122700   6584    808 D   1.0  0.1   0:01.47 badblocks

Solving equations: is it an error in the formula or an inaccuracy in the numerical integration?

According to the code/calculations below, it seems that one only needs higher values of MaxRecursion. Then only slow-convergence messages ("NIntegrate::slwcon") are issued.

a = 7; m = 5; n = 1;
Print["\nEquation: z^", m, " - ", a, "*z^", n, " - 1 = 0\n"];
Print["Ordinary solution:"];
NSolve[z^m - a z^n - 1 == 0, z]
sol = z /. NSolve[z^m - a z^n - 1 == 0, z]

(* During evaluation of In[88]:= 
Equation: z^5 - 7*z^1 - 1 = 0

During evaluation of In[88]:= Ordinary solution: *)

(* {{z -> -1.58871}, {z -> -0.142866}, {z -> 
   0.0355442 - 1.62852 I}, {z -> 0.0355442 + 1.62852 I}, {z -> 
   1.66049}}

{-1.58871, -0.142866, 0.0355442 - 1.62852 I, 
 0.0355442 + 1.62852 I, 1.66049} *)

Print["Solution with definite integration:"]; S = 
Table[Exp[2 j Pi I/m] + 
   1/(2 Pi I) (Exp[(2 j + 1) Pi I/m]*
       NIntegrate[
        Log[1 + a t^n/(1 + t^m) Exp[(2 j + 1) Pi I n/m]], {t, 0, 
         Infinity}, MaxRecursion -> 200] - 
      Exp[(2 j - 1) Pi I/m]*
       NIntegrate[
        Log[1 + a t^n/(1 + t^m) Exp[(2 j - 1) Pi I n/m]], {t, 0, 
         Infinity}, MaxRecursion -> 200]), {j, 0, m - 1}];

(*
During evaluation of In[93]:= Solution with definite integration:

During evaluation of In[93]:= NIntegrate::slwcon: Numerical integration converging too slowly; suspect one of the following: singularity, value of the integration is 0, highly oscillatory integrand, or WorkingPrecision too small.

During evaluation of In[93]:= NIntegrate::slwcon: Numerical integration converging too slowly; suspect one of the following: singularity, value of the integration is 0, highly oscillatory integrand, or WorkingPrecision too small. *)

S 

(* {1.66049 + 0. I, 
     0.0355442 + 1.62852 I, -1.58871 + 7.8*10^-8 I, -0.142866 - 
     7.8*10^-8 I, 0.0355442 - 1.62852 I} *)

Here we see that we get all the values of the "ordinary solution" (up to a certain tolerance):

Complement[S, sol, 
 SameTest -> (Abs[#1 - #2]/Norm[{#1, #2}, Infinity] < 10^-6 &)]

(* {} *)

javascript – If I want to create a site that performs calculations, should I do the calculations in the browser or on the server?

I have already written the code in Python with tkinter and sqlite.
I am learning HTML, CSS and JavaScript.

Since I intend to create a site that offers the same functionality, I am unsure how to start. Should I write the code in JavaScript? But if so, it will run in the browser and my code will be exposed to plagiarism … And if I do it on the server, will the calculation take much longer? I would also run the risk that, if the number of users grows, the site could collapse, right?

Any advice … thank you very much …

Mining theory: how many hash calculations are needed to earn one BTC?

Double SHA-256 follows a uniform distribution, which means that all hashes have the same probability of occurring. Therefore, your chance of finding a block header hash that is less than the target is the same for each round of hashing.

With the current mining difficulty of 10183488432890, the target bits are 0x171ba3d1. This means that you will need to find a block header hash that is less than or equal to 0x0000000000000000001ba3d10000000000000000000000000000000000000000. For one round of double hashing, that is a probability of 2.28631×10^-23. Call it P. Then the probability of not finding a valid hash is (1-P). For N hash attempts, the probability of not finding a valid hash is (1-P)^N. To make sure you find a block, we must make sure that (1-P)^N -> 0. As P -> 0, (1-P)^N = 1-N*P (using the Taylor expansion). Solving both equations you get N = 1/P = 4.3738×10^22.

At your hash rate of 1 TH/s, it will take you 4.3738×10^10 seconds, which is 1386.91 years.
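The arithmetic above can be reproduced with a few lines of Python (a sketch; the target bits are the ones quoted in the answer):

```python
bits = 0x171ba3d1
exponent = bits >> 24          # 0x17
coefficient = bits & 0xFFFFFF  # 0x1ba3d1

# Expand the compact "bits" encoding into the full 256-bit target
target = coefficient * 256 ** (exponent - 3)

# Probability that a single (double-)hash round is at or below the target
P = target / 2 ** 256
N = 1 / P  # expected number of hashes to find a block

seconds = N / 1e12             # at a hash rate of 1 TH/s
years = seconds / (365 * 24 * 3600)

print(f"P = {P:.5e}")          # ~2.2863e-23
print(f"N = {N:.4e}")          # ~4.3738e+22
print(f"years = {years:.1f}")  # ~1386.9
```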

Mining theory: are the profitability calculations of mining-rig builders/sellers true?

Regarding this question here, "How many hash calculations are needed to earn one BTC?", it takes about 1387 years at a hash rate of 1 TH/s to earn 1 BTC.

So, if someone has a machine with a hash rate of 53 TH/s and the machine runs 24 hours a day, 7 days a week, he can earn 1 BTC in 1387/53 = 26.1 years, and if he has 100 of those machines, he can earn 1 BTC in 95.5 days, or almost 4 BTC per year.

So, if the price of 1 BTC is about $9,500 today, you earn 9500 * 4 = $38,000 per year (let's assume electricity is free!). But many websites, such as https://www.asicminervalue.com/miners/bitmain/antminer-s17-pro-53th, calculate the profitability of the same machine at $4,333 per year, which would be $433,300 per year for 100 machines! (And they even account for electricity consumption.)

How is it possible?
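For reference, the asker's numbers can be checked with a few lines of Python (using the rounded 1387-year figure from the linked question):

```python
YEARS_PER_BTC_AT_1THS = 1387.0  # from the linked question, at 1 TH/s
machine_rate = 53.0             # TH/s for one Antminer S17 Pro
machines = 100

years_one_machine = YEARS_PER_BTC_AT_1THS / machine_rate  # ~26.2 years per BTC
years_hundred = years_one_machine / machines              # ~0.26 years per BTC
days_hundred = years_hundred * 365                        # ~95.5 days per BTC
btc_per_year = 1 / years_hundred                          # ~3.8 BTC per year

revenue = btc_per_year * 9500                             # at $9,500 per BTC

print(f"{years_one_machine:.1f} years, {days_hundred:.1f} days, "
      f"{btc_per_year:.2f} BTC/yr, ${revenue:,.0f}/yr")
```

The resulting back-of-the-envelope revenue for 100 machines is on the order of $36k/year, roughly an order of magnitude below the ~$433k/year implied by the seller's figure, which is the discrepancy the question is about.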

Are Google Play storage space calculations inaccurate?

Why exactly does this "More storage space needed" message appear when installing an application through the Google Play Store, even when there is, theoretically, enough space available ("x MB more needed")? On top of that, after deleting one or more applications, the message keeps appearing, asking for additional space. Interestingly, sometimes x becomes larger than it was before removing some applications. Only when I delete very large applications does the store proceed with the installation. Is it because the Play Store reserves part of the storage space for itself, making it unavailable for anything else?

java – Backtracking – optimizing my code to eliminate some unnecessary calculations

I solved the following problem using backtracking:

We are given a list of military units, with their weight and their
(numerical) strength. We have several ships, each with a loading
capacity limit. Determine how to load the ships with military units so
that no ship's load capacity is exceeded and we have
the maximum possible military strength on our ships.

However, my code is not fast enough, since many unnecessary calculations are being performed that do not lead to a valid solution. How would you optimize my code? My idea is to use memoization, but how would you implement it? Thanks for any help!

I tried to comment and refactor my code as best I could.

import java.io.File;
import java.io.FileNotFoundException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Scanner;

public class HeroesOnTheBoat {

    private int[] arrayWithGeneratedNumbers;
    private int numberOfShips;
    private int max;
    private int[] bestArray;
    private ArrayList<Integer> strengths;
    private ArrayList<Integer> weights;
    private ArrayList<Integer> carryingCapacities;

    public HeroesOnTheBoat() {
        carryingCapacities = new ArrayList<>(); // carrying capacities of the ships
        strengths = new ArrayList<>(); // strengths of the units
        weights = new ArrayList<>(); // weights of the units
        max = 0; // global variable max for tracking the best strength
    }

    private void generate(int fromIndex) { // generate all combinations of numbers between 0 and numberOfShips
        if (fromIndex == arrayWithGeneratedNumbers.length) { // 0 represents a unit that is left behind
            processGeneratedNumbers(); // 1,2,3,...,n represent the ship the unit is loaded on
            return; // the index of a number in the array represents a specific unit
        }

        for (int i = 0; i <= numberOfShips; i++) {
            arrayWithGeneratedNumbers[fromIndex] = i;
            generate(fromIndex + 1);
        }
    }

    public void input(String input) {
        Scanner sc = null;
        try {
            sc = new Scanner(new File(input)); // load the input from a text file
            numberOfShips = sc.nextInt(); // read the number of ships
            for (int i = 0; i < numberOfShips; i++) { // add carrying capacities to the list
                carryingCapacities.add(sc.nextInt());
            }
            while (sc.hasNext()) {
                weights.add(sc.nextInt());
                strengths.add(sc.nextInt());
            }
            bestArray = new int[weights.size()]; // array where we will remember the best combination of units
            arrayWithGeneratedNumbers = new int[weights.size()]; // array where we will generate numbers
            generate(0); // run the generation
            System.out.println(Arrays.toString(bestArray) + " this is the best layout of units"); // after the generation is over
            System.out.println(max + " this is the max strength we can achieve"); // print the results
        } catch (FileNotFoundException e) {
            System.err.println("FileNotFound");
        } finally {
            if (sc != null) {
                sc.close();
            }
        }
    }

    public void processGeneratedNumbers() {
        int currentStrength = 0; // process every generated result
        boolean carryingCapacityWasExceeded = false;
        int[] currentWeight = new int[numberOfShips + 1];
        for (int i = 0; i < arrayWithGeneratedNumbers.length; i++) { // compute the weight on every ship and on the ground
            currentWeight[arrayWithGeneratedNumbers[i]] += weights.get(i);
        }
        for (int i = 0; i < currentWeight.length; i++) { // is any capacity exceeded?
            if (i != 0 && currentWeight[i] > carryingCapacities.get(i - 1)) { // ignore 0: it represents the ground (units left behind)
                carryingCapacityWasExceeded = true;
            }
        }
        if (!carryingCapacityWasExceeded) { // if no capacity is exceeded
            for (int i = 0; i < arrayWithGeneratedNumbers.length; i++) { // compute the strength
                if (arrayWithGeneratedNumbers[i] != 0) { // ignore units left behind on the ground
                    currentStrength += strengths.get(i);
                }
            }
            if (currentStrength > max) { // is the current strength better than the global maximum?
                max = currentStrength; // replace the global maximum
                bestArray = arrayWithGeneratedNumbers.clone(); // remember the new best layout of units
            }
        }
    }

    public static void main(String[] args) {
        HeroesOnTheBoat g = new HeroesOnTheBoat();
        g.input("inputFile");
    }
}

Sample input:

2 60 40
30 400
20 500
50 100
40 100
30 50
60 75
40 20

Differential geometry: detailed calculations of the geodesic equation (using a Lagrangian or not)

Given a manifold and a path $\gamma$ on this manifold, I want to know whether this path really is a geodesic.

From what I read, I should evaluate the geodesic equation along that path, then check whether it equals zero. Therefore, I should calculate:

$ {d^2 x^\mu \over ds^2} + \Gamma^\mu{}_{\alpha\beta} {dx^\alpha \over ds} {dx^\beta \over ds} $

where, if I understand correctly, $s$ is the variable that parameterizes $\gamma$, and $x^\mu$ are the components of $\gamma$ in some generalized coordinate system.

Even understanding the Christoffel symbols is really difficult for me, and I may not be ready to use them yet.

I read that there is another (probably simpler) formulation that uses a Lagrangian, for which, if I am right, I would need to calculate:

$ \frac{\partial \dot{\gamma}}{\partial x^\mu}
- \frac{\mathrm{d}}{\mathrm{d}s} \left( \frac{\partial \dot{\gamma}}{\partial \dot{x}^\mu} \right) $

Again, I am failing to manipulate such expressions.

I would like you to help me with this simple example: in $\mathbb{R}^3$, let the manifold be the sphere of radius $1$ centered at $O$, and consider:

$ \gamma : s \mapsto \begin{pmatrix} \cos s \\ \sin s \\ 0 \end{pmatrix} $

Obviously, this parametrizes a great circle of the sphere and is therefore a geodesic. So I know that our calculations should give $0$, and yet I don't get that. Assume we are using the Euclidean metric.

Computing $\dot{\gamma} : s \mapsto \begin{pmatrix} -\sin s \\ \cos s \\ 0 \end{pmatrix}$ is not what poses a problem.

To my knowledge, we have $x^1 = \cos s$, so what does $\frac{\partial (-\sin s)}{\partial \cos s}$ evaluate to? Does it mean I should express $-\sin s$ in terms of $\cos s$, which would give something like $-\sin s = \pm\sqrt{1 - (\cos s)^2}$?
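Not part of the original question, but the claim that this great circle satisfies the geodesic equation can be checked symbolically, for instance with Python's SymPy, using spherical coordinates $(\theta, \varphi)$ on the unit sphere (where the induced metric is $d\theta^2 + \sin^2\theta \, d\varphi^2$ and the equator is $\theta = \pi/2$, $\varphi = s$):

```python
import sympy as sp

s, Theta, Phi = sp.symbols('s Theta Phi')
coords = [Theta, Phi]

# Induced metric of the unit sphere in coordinates (theta, phi)
g = sp.Matrix([[1, 0], [0, sp.sin(Theta)**2]])
ginv = g.inv()

def christoffel(mu, a, b):
    # Gamma^mu_{ab} = (1/2) g^{mu l} (d_a g_{lb} + d_b g_{la} - d_l g_{ab})
    return sp.Rational(1, 2) * sum(
        ginv[mu, l] * (sp.diff(g[l, b], coords[a])
                       + sp.diff(g[l, a], coords[b])
                       - sp.diff(g[a, b], coords[l]))
        for l in range(2))

# The curve gamma from the question, in these coordinates: theta(s) = pi/2, phi(s) = s
curve = [sp.pi / 2, s]
vel = [sp.diff(c, s) for c in curve]     # velocity components: [0, 1]
acc = [sp.diff(c, s, 2) for c in curve]  # acceleration components: [0, 0]

residuals = []
for mu in range(2):
    # Left-hand side of the geodesic equation along the curve
    total = acc[mu] + sum(
        christoffel(mu, a, b).subs(Theta, curve[0]) * vel[a] * vel[b]
        for a in range(2) for b in range(2))
    residuals.append(sp.simplify(total))

print(residuals)  # [0, 0]: the equator satisfies the geodesic equation
```

The only nonzero Christoffel symbols here are $\Gamma^\theta{}_{\varphi\varphi} = -\sin\theta\cos\theta$ and $\Gamma^\varphi{}_{\theta\varphi} = \cot\theta$, and both vanish or are multiplied by $\dot\theta = 0$ at $\theta = \pi/2$, which is why both residuals are zero.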

A few notes about my background:

  • I understand Einstein's summation convention;
  • I am familiar with Leibniz differential notation, although detailing the steps does not hurt;
  • I have been introduced to tensors, but I cannot pretend to master them;
  • I know the differential vector operators, but I would rather avoid using nabla notation;
  • I know that the Lagrangian is somehow a "potential" that we want to minimize, but I have no intuition about it.

Thanks for your attention.