## Problem calculating a numerical integration in C++ for a probability table

This program prints a table of results of a probability equation using Simpson's method of numerical integration and appends it to a txt file. None of the results should be greater than 1, yet some are, and I do not know what the problem is.
Within the code, n is the number of iterations that the "Calculate" function performs; the function returns a string so it can be written to the txt file more easily. There are 600 equations and each one runs n cycles, so n cannot be very large either.
The integral is of the normal distribution, to build its tables from -6 to 6.

```cpp
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <fstream>
#include <string>
using namespace std;

string Calculate(float b) {
    char bchar[4], finalchar[12];
    float a = -6;
    float n = 15000;
    int i;
    float sum = 0.0, power, formula, constant, e;
    float final = (b - a) / (2 * n);
    float sigx = (b - a) / n;
    constant = (1 / 2.506628274631000502415765284811); // 1/sqrt(2*pi)
    // printf("sigx: %f\n", sigx);
    float x = a;
    for (i = 0; i <= n; i++) {
        // printf("X%d=%f\n", (i - 1), x);
        power = pow(x, 2);
        power /= -2;
        e = pow(2.71828182, power);

        if (i == 0 || i == n) {
            formula = e;
        } else {
            formula = (e * 2);
        }
        sum += formula;
        // printf("Sum=%.25f\n", sum);
        x += sigx;
    }
    final = final * sum * constant;
    // printf("B=%.2f --> Result=%.10f\n", b, final);

    sprintf(bchar, "%.2f", b);
    sprintf(finalchar, "%.10f", final);

    string bstring = bchar;
    string finalstring = finalchar;
    string response = bstring + "-->" + finalstring;
    // cout << response << endl;
    return response;
}

int main() {
    printf("This program shows the results of the numeric integral of:\n");
    printf("(1/((2pi)^(1/2)))(e^((-1/2)(z^2)))\n");
    printf("Evaluated from -6 to different limits up to 6\n");
    system("pause");
    float b = -5.98;
    int i = 1;
    string response;
    ofstream write("bes.txt");
    int cont = 1;

    printf("\n");
    while (b <= 6.00) {
        // compute and record the result for this upper limit
        response = Calculate(b);
        write << response << endl;

        i++;
        b = b + 0.02;
        if (i == 26 || i == 76 || i == 128 || i == 181 || i == 233 || i == 578 ||
            i == 338 || i == 394 || i == 451 || i == 501 || i == 551) {
            b = b + 0.000001;
        }
        if (i == 578) {
            b = b - 0.000001;
        }
        if (cont == 4) {
            cont = 1;
        } else {
            cont++;
        }
    }
    write.close();
    return 0;
}
```

## Probability theory: given a stochastic process \$X\$ and a measure \$\mu\$ on Skorohod space, how can we prove that \$\mu\$ is the law of \$X\$ up to dependence on the initial value?

Let

• $$(\Omega,\mathcal A,\operatorname P)$$ be a probability space
• $$(E,d)$$ be a locally compact, complete and separable metric space
• $$D([0,\infty),E):=\left\{x:[0,\infty)\to E\mid x\text{ is càdlàg}\right\}$$ be equipped with the Skorohod topology
• $$X:\Omega\to D([0,\infty),E)$$ be $$(\mathcal A,\mathcal B(D([0,\infty),E)))$$-measurable
• $$\mu$$ be a probability measure on $$\mathcal B(D([0,\infty),E))$$

How can we formalize the condition that $$\mu$$ is "essentially the distribution of $$X$$ up to the dependence on the initial value"?

Since you probably do not know exactly what I mean, let me explain it in more detail: since $$E$$ is Polish, $$D([0,\infty),E)$$ is Polish as well and hence there is a Markov kernel $$\kappa$$ with source $$(E,\mathcal B(E))$$ and target $$(D([0,\infty),E),\mathcal B(D([0,\infty),E)))$$ with $$\operatorname P\left[X\in B\mid X_0\right]=\kappa(X_0,B)\;\;\;\text{almost surely for all }B\in\mathcal B(D([0,\infty),E)).\tag1$$ Note that $$\operatorname P\left[X\in B\right]=\int\operatorname P\left[X_0\in{\rm d}x_0\right]\kappa(x_0,B)\;\;\;\text{for all }B\in\mathcal B(D([0,\infty),E)).\tag2$$ Now let $$\mu_0$$ be a probability measure on $$\mathcal B(E)$$ and $$\mu:=\mu_0\kappa$$ be the composition of $$\mu_0$$ and $$\kappa$$. Then $$\mu\left(\left\{x\in D([0,\infty),E):x(0)\in B_0\right\}\right)=\mu_0(B_0)\;\;\;\text{for all }B_0\in\mathcal B(E)\tag3$$ and $$\mu(B)=\int\mu_0({\rm d}x_0)\,\kappa(x_0,B)\;\;\;\text{for all }B\in\mathcal B(D([0,\infty),E)).\tag4$$

If $$\mu$$ is of the form $$(4)$$, then we see from $$(2)$$ that $$\mu$$ is "essentially the distribution of $$X$$ up to the dependence on the initial value". How can we formalize this?

Intuitively, $$\mu$$ and $$\operatorname P\left[X\in\;\cdot\;\right]$$ must coincide on some kind of "trace" of $$\mathcal B(D([0,\infty),E))$$. In particular, on $$\left\{B\in\mathcal B(D([0,\infty),E)):\mu_0(\pi_0(B))=\operatorname P\left[X_0\in\pi_0(B)\right]\right\},\tag5$$ where $$\pi_t:D([0,\infty),E)\to E,\;\;\;x\mapsto x(t).\tag6$$ But is $$\pi_0(B)\in\mathcal B(E)$$ for all $$B\in\mathcal B(D([0,\infty),E))$$? We clearly know that $$\mathcal B(D([0,\infty),E))\supseteq\sigma(\pi_t:t\ge0)\tag7$$ (and by separability of $$E$$ we even get equality).

## Probability of being matched in a chess tournament.

The chess clubs of two schools consist, respectively, of 8 and 9 players. Four members of each club are chosen at random to participate in a contest between the two schools. The chosen players of one team are randomly matched with those of the other team, and each pair plays a game of chess. Suppose that Rebecca and her sister Elise are in chess clubs in different schools. What is the probability that Rebecca and Elise are paired?

The answer is 4/8 * 4/9 * 1/4. How do you get the 1/4? Wouldn't there be 16 possible pairings, one of which is sister against sister, so shouldn't it be 4/8 * 4/9 * 1/16?

## Probability: show that additive Gaussian noise never increases sparsity

Let $$\mathbf{1}\in\mathbb{R}^d$$ be the $$d$$-dimensional all-ones vector and let $$n\sim\mathcal{N}(0,\sigma^2 I_{d\times d})$$. Show that
$$\frac{\|\mathbf{1}+n\|_1}{\|\mathbf{1}+n\|_2}\ge c\sqrt{d}$$
with high probability in $$d$$, for a constant $$c$$ (independent of $$\sigma,d$$).

That is, prove that adding Gaussian noise never significantly improves sparsity in the sense of the $$\ell_1/\ell_2$$ ratio. The generalization to arbitrary dense vectors is, of course, welcome.

## Functional analysis. Convergence of the regression coefficients to the probability density.

By simulation we create a vector $$Y=(y_1,y_2,\dots,y_n)$$, where each $$y_i\in\mathbb R$$ is drawn independently from a given non-degenerate distribution.

Next we create by simulation a vector $$\xi=(\xi_1,\xi_2,\dots,\xi_n)$$, where the $$\xi_i$$ are independent realizations of a random variable that takes only a finite number of values $$\alpha_1,\alpha_2,\dots,\alpha_k$$ with probabilities $$p_1,p_2,\dots,p_k$$ respectively. The $$\alpha_i$$ are given.

Suppose we have a function $$f:\mathbb R\to\mathbb R$$.

We regress $$\begin{bmatrix} f(y_1+\xi_1)\\ f(y_2+\xi_2)\\ \vdots\\ f(y_n+\xi_n)\end{bmatrix}$$ on $$\begin{bmatrix} f(y_1+\alpha_1) & f(y_1+\alpha_2) & \cdots & f(y_1+\alpha_k)\\ f(y_2+\alpha_1) & f(y_2+\alpha_2) & \cdots & f(y_2+\alpha_k)\\ \vdots & \vdots & \ddots & \vdots\\ f(y_n+\alpha_1) & f(y_n+\alpha_2) & \cdots & f(y_n+\alpha_k)\end{bmatrix}$$

By regression I mean that we optimize the $$\beta_j$$ to minimize:

$$\sum_{i=1}^n\Big(f(Y+\xi)-\sum_{j=1}^k\beta_j f(Y+\alpha_j)\Big)^2$$

Intuitively I think that as $$n\to\infty$$ the least squares procedure should give us the following equation:

$$f(Y+\xi)=p_1\,f(Y+\alpha_1)+p_2\,f(Y+\alpha_2)+\dots+p_k\,f(Y+\alpha_k)$$

where $$f(Y+\xi)$$ and $$f(Y+\alpha_i)$$ are just shorthand for the column vectors above.

So my guess is that as $$n\to\infty$$, $$\beta_i\to p_i$$.

My question is: what conditions should be imposed on the function $$f$$ to obtain the equation above? Is my intuition correct that we should normally get such an equation? Maybe we also need to impose some conditions on the distribution of the $$y_i$$.

## probability: how to calculate this formula \$\operatorname{E}\big(\frac{2}{n}Y_{i}\sum_{j=1}^{n}Y_{j}\big)\$

I am working through the derivation of the sample variance.

$$\begin{aligned}\operatorname{E}[\sigma_Y^2]&=\operatorname{E}\left[\frac{1}{n}\sum_{i=1}^{n}\left(Y_i-\frac{1}{n}\sum_{j=1}^{n}Y_j\right)^2\right]&&(1.1)\\[5pt]&=\frac{1}{n}\sum_{i=1}^{n}\operatorname{E}\left[Y_i^2-\frac{2}{n}Y_i\sum_{j=1}^{n}Y_j+\frac{1}{n^2}\sum_{j=1}^{n}Y_j\sum_{k=1}^{n}Y_k\right]&&(1.2)\\[5pt]&=\frac{1}{n}\sum_{i=1}^{n}\left[\frac{n-2}{n}\operatorname{E}[Y_i^2]-\frac{2}{n}\sum_{j\neq i}\operatorname{E}[Y_iY_j]+\frac{1}{n^2}\sum_{j=1}^{n}\sum_{k\neq j}^{n}\operatorname{E}[Y_jY_k]+\frac{1}{n^2}\sum_{j=1}^{n}\operatorname{E}[Y_j^2]\right]&&(1.3)\end{aligned}$$

It is easy to see how equation (1.1) leads to equation (1.2).

I am trying to understand how equation (1.2) leads to equation (1.3).

$$\begin{aligned}\operatorname{E}[\sigma_Y^2]&=\operatorname{E}\left[\frac{1}{n}\sum_{i=1}^{n}\left(Y_i-\frac{1}{n}\sum_{j=1}^{n}Y_j\right)^2\right]&&(1.1)\\[5pt]&=\frac{1}{n}\sum_{i=1}^{n}\operatorname{E}\left[Y_i^2-\frac{2}{n}Y_i\sum_{j=1}^{n}Y_j+\frac{1}{n^2}\sum_{j=1}^{n}Y_j\sum_{k=1}^{n}Y_k\right]&&(1.2)\\[5pt]&=\frac{1}{n}\sum_{i=1}^{n}\left[\operatorname{E}(Y_i^2)-\operatorname{E}\Big(\frac{2}{n}Y_i\sum_{j=1}^{n}Y_j\Big)+\operatorname{E}\Big(\frac{1}{n^2}\sum_{j=1}^{n}Y_j\sum_{k=1}^{n}Y_k\Big)\right]&&(2.2)\end{aligned}$$

By linearity of expectation, equation (1.2) leads to equation (2.2).

The question is: how do I calculate the second term inside the brackets in equation (2.2)?

$$\operatorname{E}\Big(\frac{2}{n}Y_i\sum_{j=1}^{n}Y_j\Big)$$
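One way to see the step, for what it is worth, is that it is just linearity of expectation after splitting off the $$j=i$$ summand:

$$\operatorname{E}\Big(\frac{2}{n}Y_i\sum_{j=1}^{n}Y_j\Big)=\frac{2}{n}\operatorname{E}\Big(Y_i^2+\sum_{j\neq i}Y_iY_j\Big)=\frac{2}{n}\operatorname{E}[Y_i^2]+\frac{2}{n}\sum_{j\neq i}\operatorname{E}[Y_iY_j].$$

Combining the first piece with the leading $$\operatorname{E}(Y_i^2)$$ of (2.2) gives the coefficient $$\big(1-\frac{2}{n}\big)\operatorname{E}[Y_i^2]=\frac{n-2}{n}\operatorname{E}[Y_i^2]$$ appearing in (1.3).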

## probability – How to set up the problem \$P(Z^4 - 17 \ge 9)\$ for the Exponential(4) distribution

I am working on part (d) of a problem that was discussed here on Math.StackExchange. The answer to that problem makes sense to me, and I have been able to use it for other similar problems and their variations. Part (d), however, has a different setup than what I have seen before, and I was hoping someone could give me a hint or a way to approach it. 🙂

The problem is to calculate $$P(Z^4-17\ge 9)$$ for the $$\text{Exponential}(4)$$ distribution. I know from the back of the book that the answer is $$e^{-25}$$. I am not sure what to do algebraically with the $$Z^4-17\ge 9$$ part of the problem to establish the limits of the integral of the distribution's density function.

For example, I tried isolating $$Z^4$$ and taking the fourth root, but the exponent of the final $$e$$ term did not equal $$-25$$.

So, in summary, what is the strategy to use in this type of problem?

Thank you!

## textbooks of calculus, linear algebra, and probability and statistics for non-math majors

What are the most popular calculus, linear algebra, and probability and statistics textbooks for non-math majors in the United States?

## Probability and balanced functions

A function $$f:\{0,1\}^n\rightarrow\{0,1\}$$ is said to be balanced if half of its inputs map to $$0$$ and the other half map to $$1$$ (a partition of the domain in two). If I want to test whether $$f$$ is balanced or constant, I can choose $$k\leq 2^{n-1}$$ inputs, say $$x_1,\cdots,x_k$$, and test whether $$f(x_1)=\cdots=f(x_k)$$. Now I wonder what the probability of failure is, that is, that I choose $$k$$ inputs and all map to $$0$$, or all map to $$1$$, when in fact I have just chosen $$k$$ elements from one cell of a partition of $$\{0,1\}^n$$.

Suppose that $$f$$ is a balanced function. I want to determine the probability that $$f(x_1)=\cdots=f(x_k)$$ even though $$f$$ is balanced rather than constant. If the $$k$$ inputs are distinct, I think the probability should be
$$\frac{1}{2}\cdot\frac{2^{n-1}-1}{2^n}\cdots\frac{2^{n-1}-k}{2^n},$$ while if the $$k$$ inputs are not necessarily distinct, then the probability is $$\frac{1}{2^k}$$. Is this correct?

## probability or statistics: the simulation only runs for a small number of iterations

The following code does not run for a larger number of iterations. It runs for n = 5, 10, etc., but not for n = 50 or more. I tried limiting the precision goal and accuracy goal in 'FindDistributionParameters'. Any suggestions? I wonder where I should try NumericQ!

```
ClearAll[a0, d0]
ZP[sigma_, alpha_, theta_] :=
  ProbabilityDistribution[
   alpha theta/sigma (1 - Exp[-(x/sigma)^alpha])^(theta - 1) *
     Exp[-(x/sigma)^alpha] (x/sigma)^(alpha - 1), {x, 0, Infinity},
   Assumptions -> sigma > 0 && alpha > 0 && theta > 0];
```