## Monte Carlo method

How do you use a Monte Carlo method to approximate the area of the region enclosed by both the circle of radius 1 centered at the origin and the circle of radius 2 centered at (1,1)?
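One way to do it, sketched in Python (function and parameter names are mine): the intersection region is a subset of the unit circle, so it suffices to sample the unit circle's bounding box [-1, 1]² and count points that fall inside both circles. The standard circle–circle intersection (lens) formula gives an exact area of about 2.556 for these two circles, which the estimate should approach.

```python
import random

def estimate_intersection_area(n_samples, seed=0):
    """Estimate the area of the region inside BOTH circles:
    x^2 + y^2 <= 1 and (x-1)^2 + (y-1)^2 <= 4."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        # Sample uniformly in [-1, 1]^2, the bounding box of the unit circle;
        # every point of the intersection lies in this box.
        x = rng.uniform(-1.0, 1.0)
        y = rng.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0 and (x - 1.0) ** 2 + (y - 1.0) ** 2 <= 4.0:
            hits += 1
    return 4.0 * hits / n_samples  # box area is 4

print(estimate_intersection_area(200000))
```

The estimator's standard error scales as 1/√n, so a few hundred thousand samples give two to three correct digits.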

## Performance improvement of a Monte Carlo simulation of the XY model

I usually write my Monte Carlo code in Fortran for speed, but I was doing a quick and dirty job and wrote one in Mathematica for the XY model on a square lattice (see the Kosterlitz-Thouless transition).

Monte Carlo is usually about speed, since we are generally interested in very large systems (that is, the "thermodynamic limit"). I usually use Mathematica for quick and dirty calculations and for its graphics capabilities, and I have rarely considered my code's performance very carefully. I'm curious whether anyone can point out important flaws in this code.

```
(*parameters*)
L = 20;
T = 1.0;
α = 1.;
pi = N@Pi;

(*acceptance rate array*)
accepts = ConstantArray[1, L^2];

(*mod for periodic boundaries*)
mod[n_] := 1 + Mod[n - 1, L]

(*list of all lattice indices*)
pos = Flatten[Table[{i, j}, {i, L}, {j, L}], 1];

(*list of neighbors of site (i,j)*)
neighbors = Table[{{i, mod[j + 1]}, {mod[i + 1], j}, {i, mod[j - 1]}, {mod[i - 1], j}}, {i, 1, L}, {j, 1, L}];

(*initialized lattice of 2d spins*)
XY = Map[Normalize, RandomReal[{-1., 1.}, {L, L, 2}], {-2}];

(*Metropolis update for site (i,j)*)
MCUpdate[{i_, j_}] := (
  v = XY[[i, j]]; (*current vector*)
  u = RotationMatrix[α RandomReal[{-pi, pi}]].v; (*new proposed vector*)
  dE = (Total@Extract[XY, neighbors[[i, j]]]).(u - v); (*change in energy from switching to the new vector*)
  n = j + (i - 1) L;
  If[RandomReal[] < Min[1, Quiet@Exp[-dE/T]],
    (*update vector if Metropolis condition is satisfied*)
    XY[[i, j]] = u;
    accepts[[n]] = 1
    ,
    (*reject update*)
    accepts[[n]] = 0
  ]
)

(*performs n sweeps of the lattice*)
Sweep[n_] := Do[
  MCUpdate /@ pos;
  (*make sure everything is normalized*)
  XY = Map[Normalize, XY, {-2}];
  If[Mean[accepts] < 0.45, α *= .95,
    If[Mean[accepts] > 0.55, α *= 1.05]];
  , n]
```

Some basic explanations:

• `accepts` tracks the last `L^2` attempts to update one of the variables, so that the magnitude of the proposed changes, `α`, can be adjusted to keep the acceptance rate near `0.5`.
• `MCUpdate` proposes a rotation of the vector at site `(i,j)`, calculates the change in energy `dE` due to this change, and then applies the Metropolis condition to accept or reject with probability `min(1, exp(-dE/T))`, where `T` is the temperature.
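The per-site update just described can be sketched language-agnostically; here is a minimal Python version for a single spin (a sketch, not the author's implementation — `metropolis_step` and its arguments are hypothetical names, and the sign convention for `dE` mirrors the Mathematica code above):

```python
import math
import random

def metropolis_step(spin, neighbor_sum, T, alpha, rng):
    """One Metropolis update of a single 2D unit spin.

    spin and neighbor_sum are (x, y) tuples; neighbor_sum is the vector sum
    of the four neighboring spins, computed by the caller.
    Returns the (possibly unchanged) spin."""
    sx, sy = spin
    # Propose: rotate by a random angle scaled by the step size alpha.
    theta = alpha * rng.uniform(-math.pi, math.pi)
    c, s = math.cos(theta), math.sin(theta)
    ux, uy = c * sx - s * sy, s * sx + c * sy
    nx, ny = neighbor_sum
    # Energy change, with the same sign convention as the question's code.
    dE = nx * (ux - sx) + ny * (uy - sy)
    # Accept with probability min(1, exp(-dE/T)); dE <= 0 is always accepted.
    if dE <= 0 or rng.random() < math.exp(-dE / T):
        return (ux, uy)
    return (sx, sy)

rng = random.Random(1)
print(metropolis_step((1.0, 0.0), (0.3, -0.2), T=1.0, alpha=1.0, rng=rng))
```

Because the proposal is a pure rotation, the returned spin stays normalized, which is why the periodic re-normalization in `Sweep` only has to correct floating-point drift.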

If you want to play with this, the following function will find all the vortices and color them:

```
(*list of indices for each plaquette*)
plaq = Flatten[Table[{{i, j}, {i, mod[j + 1]}, {mod[i + 1], mod[j + 1]}, {mod[i + 1], j}}, {i, L}, {j, L}], 1];

(*find vortices and color them*)
vortexcolors = {Blue, Transparent, Red};
vortex := Table[
  p = plaq[[i]];
  spins = Extract[XY, plaq[[i]]];
  Δ = Round@Sum[
     s1 = spins[[j]]~Append~0; s2 = spins[[1 + Mod[j, 4]]]~Append~0;
     Sign[Cross[s1, s2].{0, 0, 1}] ArcCos[s1.s2]/(2 Pi)
     ,
     {j, 4}];
  Graphics[{vortexcolors[[Δ + 2]], Disk[p[[1]] - {.5, .5}, 0.25]}], {i, L^2}]
```

and this will let you plot all the vortices, starting from a random configuration and "sweeping" the lattice a number of times to reach thermal equilibrium:

```
(*random starting configuration*)
XY = Map[Normalize, RandomReal[{-1., 1.}, {L, L, 2}], {-2}];
(*arrows to represent vectors*)
plotXY[col_] := (
  Map[(p = First@Position[XY, #] - {1, 1}; Graphics[{col, Arrow[{p, p + #/1.5}]}]) &, XY, {-2}]
)
(*100 Monte Carlo sweeps*)
Sweep[100];
(*plot the configuration with vortices and the square lattice grid*)
p1 = {plotXY[Black], vortex};
grid = Graphics[{Lighter@Red, Line[#]}] & /@ (Table[{{i, 0}, {i, L}}, {i, 0, L - 1}]~Join~Table[{{0, i}, {L, i}}, {i, 0, L - 1}]);
Show[{grid, p1}, PlotRange -> {{-1, L}, {-1, L}}]
```

Edit: As an aside, is there a better way than `Quiet@Exp[-dE/T]` to deal with the warning that `Exp[-large number]` cannot be represented as a machine-precision number? I would expect it to simply underflow to zero.
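Regarding the `Quiet@Exp` question: a common pattern is to short-circuit on `dE <= 0`, so the exponential is only ever evaluated with a negative argument, where underflow to zero is harmless and no warning arises. A sketch of the idea in Python (`accept_move` is a hypothetical name; the same structure works in Mathematica with an `If` on `dE`):

```python
import math
import random

def accept_move(dE, T, rng):
    """Metropolis acceptance without evaluating exp() of a huge argument.

    For dE <= 0 the move is always accepted, so exp is never called there;
    for dE > 0 the argument of exp is negative, and at worst it underflows
    harmlessly to 0.0 (so very unfavorable moves are simply rejected)."""
    return dE <= 0 or rng.random() < math.exp(-dE / T)

rng = random.Random(0)
print(accept_move(-5.0, 1.0, rng))  # downhill move: always accepted
print(accept_move(1e9, 1.0, rng))   # exp underflows to 0.0: rejected
```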

## c – Evaluation of π using Monte Carlo methods – Serial vs OMP

I wrote this simple code to estimate π using the Monte Carlo method. This is the serial version:

```c
long double compute_pi_serial(const long interval) {

    srand(time(NULL));

    double x, y;
    long i, circle = 0;

    for (i = 0; i < interval; i++) {
        x = rand() / (double) RAND_MAX;
        y = rand() / (double) RAND_MAX;

        if (pow(x, 2) + pow(y, 2) <= 1.0) circle++;
    }

    return (long double) circle / interval * 4.0;
}
```

Afterwards, I wrote a parallel version using `OpenMP`, and this is the result:

```c
long double compute_pi_omp(const long interval, const int threads) {

    double x, y;
    long i, circle = 0;

    #pragma omp parallel private(x, y) num_threads(threads)
    {

        srand(SEED);

        #pragma omp for reduction(+:circle)
        for (i = 0; i < interval; i++) {
            x = rand() / (double) RAND_MAX;
            y = rand() / (double) RAND_MAX;

            if (pow(x, 2) + pow(y, 2) <= 1.0) circle++;
        }
    }

    return (long double) circle / interval * 4.0;
}
```

Is this an efficient approach, or is there a more efficient version, still using `OpenMP`?
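One thing worth checking before tuning: `rand()` keeps hidden shared state, so calling it from several threads serializes (and can corrupt) the generator; each thread needs its own stream (`rand_r` with a private seed in C, or one `std::mt19937` per thread in C++). The per-worker-stream idea, sketched in Python with hypothetical names:

```python
import random

def pi_worker(n, seed):
    """Estimate the hit count from n samples using a private RNG stream."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return inside

def compute_pi(total, workers=4, base_seed=12345):
    """Each 'thread' gets its own generator seeded differently, so there
    is no shared hidden state and the result is reproducible."""
    n = total // workers
    inside = sum(pi_worker(n, base_seed + i) for i in range(workers))
    return 4.0 * inside / (n * workers)

print(compute_pi(400000))
```

The same structure maps directly onto the `reduction(+:circle)` loop above once every thread draws from its own generator.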

## Can you solve the problem of the shortest route using the Monte Carlo Tree Search?

I think Monte Carlo Tree Search could be used to find the shortest path, but it seems that this method is only used with win/lose results in the simulation step.

If we take the length of the path as the result of the simulation step, how would backpropagation work? It seems that a node along the optimal path could be penalized if a child ends up with a long path during the simulation step.
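One common way to handle this (a sketch of the idea, not a standard prescription): map the rollout's path length to a bounded reward so that shorter paths score higher, and backpropagate the running average rather than a win/loss count. A single unlucky long rollout then only slightly lowers a good node's mean value, and more visits wash it out. Minimal Python sketch with a hypothetical `Node` structure:

```python
class Node:
    def __init__(self, parent=None):
        self.parent = parent
        self.visits = 0
        self.value_sum = 0.0  # running sum of rewards

    def mean_value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def backpropagate(leaf, path_length):
    """Turn a simulated path length (a cost) into a reward in (0, 1]
    and propagate it up to the root."""
    reward = 1.0 / (1.0 + path_length)  # shorter path -> larger reward
    node = leaf
    while node is not None:
        node.visits += 1
        node.value_sum += reward
        node = node.parent

root = Node()
child = Node(parent=root)
backpropagate(child, path_length=3.0)  # a short rollout
backpropagate(child, path_length=9.0)  # an unlucky long rollout
print(root.visits, round(root.mean_value(), 3))
```

Because selection uses the mean (plus an exploration bonus), the node is not permanently "penalized" by one long simulation, only nudged.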

## Order each column in a list contained in an association (about the replicas of Monte Carlo)

I have an association that contains three 1000×200 matrices. Each row of a matrix corresponds to a different set of parameters, while each column contains the results of the function for a specific value of x.

I want to sort each column in ascending order, so that the first row contains the lowest values and the last row the highest ones. These rows will be used to build a function that represents the envelope of the different results of a Monte Carlo method.

I tried using `Sort` together with `Transpose`, but the result is messy.

Thanks for the help
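For reference, the transpose-sort-transpose idea can be sketched in plain Python (in Mathematica, something like `Transpose[Sort /@ Transpose[m]]` expresses the same operation; the small matrix here is a stand-in for one 1000×200 replica matrix):

```python
# Small stand-in for one of the 1000 x 200 replica matrices.
m = [[3.0, 1.0],
     [1.0, 5.0],
     [2.0, 0.0]]

# Transpose, sort each row (formerly a column), transpose back.
# After this, row 0 holds the per-column minima, the last row the maxima.
sorted_cols = [sorted(col) for col in zip(*m)]
sorted_m = [list(row) for row in zip(*sorted_cols)]
print(sorted_m)
```

Each column is sorted independently, so the rows of `sorted_m` are exactly the order statistics needed for an envelope.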

## Artificial intelligence – Value of C in the Monte Carlo tree search algorithm

I have implemented the Monte Carlo tree search algorithm for a game:
https://en.wikipedia.org/wiki/Monte_Carlo_tree_search

My question is: what should be the value of C in the equation at https://www.geeksforgeeks.org/ml-monte-carlo-tree-search-mcts/?

The general consensus is that you should use √2, but when I use a value of 10 it plays well, while with √2 it loses many times.

So, how can I find the value of C that best suits my problem?
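For context, C weights the exploration term in the UCT score, so a large C can make a rarely-visited child outrank a clearly stronger, well-explored one. A small sketch with hypothetical numbers showing the trade-off; in practice C is usually tuned empirically, e.g. by a grid search over self-play matches:

```python
import math

def uct_score(wins, visits, parent_visits, C):
    """UCT: average value plus a C-weighted exploration bonus."""
    if visits == 0:
        return float("inf")
    return wins / visits + C * math.sqrt(math.log(parent_visits) / visits)

# Hypothetical children: a well-explored strong move vs. an
# under-explored weak one, under a parent with 140 visits.
strong = (60, 100)  # 60 wins in 100 visits
weak = (5, 40)      # 5 wins in 40 visits
for C in (math.sqrt(2), 10.0):
    prefers_strong = uct_score(*strong, 140, C) > uct_score(*weak, 140, C)
    print(round(C, 3), prefers_strong)
```

With these numbers, C = √2 prefers the stronger child while C = 10 prefers the under-explored one; note also that rewards should be normalized (e.g. to [0, 1]) before comparing C values, since the "right" C scales with the reward range.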

## pr.probability – Separable importance sampling for Monte Carlo integration

I want to calculate a multivariate integral over the hypercube $$R$$:

$$\int_R f(x)\,dx$$

I use a basic Monte Carlo method with importance sampling:

$$I_f = \int_R f(x)\,dx = \int_R \frac{f(x)}{\omega(x)}\,\omega(x)\,dx = E\left[\frac{f(X)}{\omega(X)}\right]$$

The optimal density function (which minimizes the variance) is:

$$\omega(x) = \frac{|f(x)|}{\int_R |f(z)|\,dz}$$

But now I consider a separable density function. I have to prove (and this is my question) that the optimum in this case is (assume that $$R = [0,1]^d$$):

$$\omega_1(x_1) = \frac{\sqrt{\int_0^1 \cdots \int_0^1 \frac{f^2(x_1,\dots,x_d)}{\omega_2(x_2)\cdots\omega_d(x_d)}\,dx_2 \cdots dx_d}}{\int_0^1 \sqrt{\int_0^1 \cdots \int_0^1 \frac{f^2(x_1,\dots,x_d)}{\omega_2(x_2)\cdots\omega_d(x_d)}\,dx_2 \cdots dx_d}\,dx_1}$$

$$\omega(x_1,\dots,x_d) = \omega_1(x_1)\cdots\omega_d(x_d)$$
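A sketch of the standard argument for why this is the optimum (my reconstruction, holding $\omega_2,\dots,\omega_d$ fixed and varying only $\omega_1$):

```latex
% One-sample variance of the importance-sampling estimator with a separable density:
\operatorname{Var}\!\left[\frac{f(X)}{\omega(X)}\right]
  = \int_{[0,1]^d} \frac{f^2(x)}{\omega_1(x_1)\cdots\omega_d(x_d)}\,dx - I_f^2,
\qquad
g(x_1) := \int_0^1\!\cdots\!\int_0^1
  \frac{f^2(x_1,\dots,x_d)}{\omega_2(x_2)\cdots\omega_d(x_d)}\,dx_2\cdots dx_d .
% So we minimize \int_0^1 g(x_1)/\omega_1(x_1)\,dx_1 subject to \int_0^1 \omega_1 = 1.
% By the Cauchy-Schwarz inequality,
\left(\int_0^1 \sqrt{g(x_1)}\,dx_1\right)^{\!2}
  \le \left(\int_0^1 \frac{g(x_1)}{\omega_1(x_1)}\,dx_1\right)
      \left(\int_0^1 \omega_1(x_1)\,dx_1\right),
% with equality iff \omega_1 \propto \sqrt{g}, i.e. exactly the stated \omega_1(x_1).
```

Repeating the argument coordinate by coordinate gives the full separable optimum.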

Sorry, my English is not very good.

## Decrease in stochastic advantage in general Monte Carlo algorithms

Recall that a Monte Carlo algorithm is $$p$$-correct if it gives a correct answer with probability at least $$p$$. For decision problems, where the answer is binary, repeating the MC algorithm can increase the confidence that a correct answer is obtained.

However, when more than one answer is correct, this confidence may decrease. I wonder how many times $$k$$ such an algorithm must be executed for the confidence in the answer to fall below 50%.

This is what I have done:

Suppose that $$k = 3$$ and that I have a 75%-correct MC algorithm that generates 5 possible answers, 4 of which are correct and 1 incorrect. I believe that in this case the 75% "correctness" of the algorithm is split among all the possible correct answers, so that each correct answer occurs with probability 75%/4.

Suppose that the correct answers are $$a, b, c, d$$ and an incorrect answer is $$e$$.

In this case, if I list all the possible output combinations of an MC algorithm executed $$k = 3$$ times, I get a list of tuples $$(a, a, a)$$, $$(a, a, b)$$, and so on, each of which is associated with a probability of occurring. For example, $$(a, a, a)$$ has probability $$(0.75/4)^3$$ of occurring and $$(a, a, e)$$ has probability $$(0.75/4)^2 \cdot 0.25$$.

So, if I build a table associating each possible tuple with its probability of occurrence, and add up the probabilities of all the events in which the algorithm would yield the correct answer by majority vote (ties are broken by random selection), this should give me the final "confidence" of the algorithm. In this case, I get numbers around 0.65–0.75 (due to the randomness in the tie-breaking).

However, I could not find a $$k$$ that brings this value below 50%.

Any idea if I'm doing something wrong? The help is very appreciated.
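The setup can be checked directly by simulation (a sketch that assumes, as in the question, four correct answers at probability 0.75/4 each and one wrong answer at 0.25; the function name is mine):

```python
import random
from collections import Counter

def majority_confidence(k, trials=100000, seed=0):
    """Run the 5-answer algorithm k times, take a plurality vote
    (ties broken uniformly at random) and return the fraction of
    trials in which the winning answer is correct."""
    rng = random.Random(seed)
    answers = ["a", "b", "c", "d", "e"]          # a-d correct, e wrong
    weights = [0.1875, 0.1875, 0.1875, 0.1875, 0.25]
    correct = 0
    for _ in range(trials):
        votes = Counter(rng.choices(answers, weights=weights, k=k))
        top = max(votes.values())
        winner = rng.choice([a for a, v in votes.items() if v == top])
        if winner != "e":
            correct += 1
    return correct / trials

print(majority_confidence(3))
```

Exact enumeration for k = 3 gives about 0.738, consistent with the 0.65–0.75 range reported. Note also that e is individually the most likely single output (0.25 versus 0.1875 for each correct answer), so by the law of large numbers the plurality vote selects e with probability tending to 1 as k grows: the confidence does eventually fall below 50%, just at larger k than those tried.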

## Monte Carlo algorithms: Is there a problem that two opposing Monte Carlo algorithms can solve?

I started reading about probabilistic algorithms and Monte Carlo algorithms. Since a Monte Carlo algorithm can only be certain about one of True or False, I was wondering whether, for the same problem, there can be two opposing Monte Carlo algorithms each capable of giving a certain answer. (By opposing, I mean that one would be certain when the answer is FALSE, and the other would be certain when it is TRUE.)

For example: there is a Monte Carlo primality-testing algorithm that can check whether a number "n" is prime. If the answer is FALSE, then "n" is certainly not prime (it is a composite number). However, if the answer is TRUE, then "n" COULD be prime (with some probability). As far as I know, there is no efficient (Monte Carlo) algorithm able to say with certainty that "n" is prime.
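The false-biased direction described can be sketched with a Fermat test (a simplified stand-in for Miller–Rabin; note that Carmichael numbers can fool this particular test, which Miller–Rabin fixes):

```python
import random

def fermat_test(n, rounds=20, seed=0):
    """False-biased Monte Carlo primality check: a False answer is certain
    (a Fermat witness proves n composite), a True answer only means
    'probably prime'."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    rng = random.Random(seed)
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # certainly composite
    return True  # probably prime

print(fermat_test(221), fermat_test(101))
```

A true-biased counterpart would have to produce a certificate of primality instead; such algorithms exist (e.g. based on primality certificates), which is what makes the "two opposing algorithms" question interesting.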

## c ++ – Fill a hand with random cards that are not yet drawn – Monte Carlo

I would like to know if there is a more efficient way to speed up the code below. This function is intended to complete a poker hand with the remaining cards, using a Mersenne Twister, for a Monte Carlo simulation.

```cpp
void HandEvaluator::RandomFill(std::vector<std::shared_ptr<Card>>& _Set, std::vector<std::shared_ptr<Card>>& _Dead, unsigned int _Target) {
    // Add the cards that are currently in Set as dead cards
    for (auto const& CardInSet : _Set)
    {
        if (CardInSet == nullptr)
            break;

        _Dead.push_back(CardInSet);
    }

    unsigned int RequiredAmt = _Target - _Set.size();

    unsigned int CardIndex = 0;
    std::uniform_int_distribution<unsigned int> CardsDistribution(0, 51);

    for (unsigned int Index = 0; Index < RequiredAmt; Index++)
    {
        while (true)
        {
            CardIndex = CardsDistribution(MTGenerator);

            // Reject the draw if it matches any dead card
            bool IsDead = false;
            for (auto const& Dead : _Dead)
            {
                if (ReferenceDeck[CardIndex]->GetRank() == Dead->GetRank() && ReferenceDeck[CardIndex]->GetSuit() == Dead->GetSuit())
                {
                    IsDead = true;
                    break;
                }
            }

            if (!IsDead)
            {
                _Set.push_back(ReferenceDeck[CardIndex]);
                break;
            }
        }
    }
}
```

The Visual Studio profiler identified this line

```cpp
CardIndex = CardsDistribution(MTGenerator);
```

as the main contributor to the high computation time. Is the Mersenne Twister itself not suited to a Monte Carlo simulation, so that another PRNG should be used instead? Or are there inefficient lines that I have missed?
It is the main culprit of the high computing time. Is Mersenne Twister itself not intended for a Monte Carlo simulation and, instead, should another PRNG be used? Or are there some inefficient lines that I had missed?