plotting – How to plot and animate numerically calculated Poisson integral

I want to plot an expression formed from numerically calculated Poisson integrals (i.e. heat-kernel convolutions, built from the fundamental solution of the heat equation). I can only get numerical values. This question arises from my previous one.

First we solve an ODE system and extract the solutions.

s = NDSolve[{u'[x] == -3 W[x] + x, W'[x] == u[x] - W[x]^3, u[0] == -1, 
    W[0] == 1}, {u, W}, {x, 0, 200}]
G = First[u /. s]
g = First[W /. s]

They will serve as initial data in the Poisson integrals.
Now we choose the parameter \[Epsilon], particular values of x and t, and the integration limits.

\[Epsilon] = 1/10
T = -1/2
X = 10
p1 = -200
p2 = 200

Now we construct the expression.

Q1 = 1/(2 Sqrt[Pi*(T + 1)*\[Epsilon]^2]) NIntegrate[
   Exp[-Abs[X - \[Xi]]^2/(4*(T + 1)*\[Epsilon]^2)] g[\[Xi]] G[\[Xi]]*
    (-1/(2*\[Epsilon]^2)), {\[Xi], p1, p2}]
Q2 = 1/(2 Sqrt[Pi*(T + 1)*\[Epsilon]^2]) NIntegrate[
   Exp[-Abs[X - \[Xi]]^2/(4*(T + 1)*\[Epsilon]^2)] g[\[Xi]], {\[Xi], p1, p2}]
q = (-2 \[Epsilon]^2)*(Q1/Q2)

I need to plot and animate q over arbitrary x and t intervals; with the code above I can only get single numerical values.

I tried to build a table of plots and then animate it like this:

plots = Table[
   Plot[(1/(2 Sqrt[Pi*(t + 1)*\[Epsilon]^2]) NIntegrate[
        Exp[-Abs[x - \[Xi]]^2/(4*(t + 1)*\[Epsilon]^2)] g[\[Xi]] G[\[Xi]]*
         (-1/(2*\[Epsilon]^2)), {\[Xi], p1, p2}])/(1/(
        2 Sqrt[Pi*(t + 1)*\[Epsilon]^2]) NIntegrate[
        Exp[-Abs[x - \[Xi]]^2/(4*(t + 1)*\[Epsilon]^2)] g[\[Xi]], {\[Xi], p1, p2}]), 
    {x, 0, 2}, PlotRange -> {-10, 10}], {t, -2, 0, .25}];

But Mathematica just keeps running, and I receive the error that a number “is too small to represent as a normalized machine number”.

I think it should be very simple but I am a newbie in Wolfram Mathematica so I’m sorry if the question is too trivial. Hope to get help.
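
For reference, the computation above can be cross-checked outside Mathematica. The following is my own Python/SciPy translation (a sketch, not the original code): it solves the same ODE system and then evaluates Q1, Q2, and q by quadrature. I clip the integration range to [0, 200], where the ODE solution actually exists; the original p1 = -200 would ask the interpolants for values outside their domain.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# Solve u' = -3 W + x, W' = u - W^3 with u(0) = -1, W(0) = 1 on [0, 200].
# LSODA is used because W' = u - W^3 becomes stiff as u grows.
sol = solve_ivp(lambda x, y: [-3*y[1] + x, y[0] - y[1]**3],
                (0, 200), [-1.0, 1.0], method="LSODA",
                dense_output=True, rtol=1e-8, atol=1e-8)
G = lambda xi: sol.sol(xi)[0]   # u(x), as in the question
g = lambda xi: sol.sol(xi)[1]   # W(x)

eps, T, X = 0.1, -0.5, 10.0
p1, p2 = 0.0, 200.0             # clipped from [-200, 200] to the ODE domain

def kernel(xi):
    # Heat kernel centred at X with variance 2*(T + 1)*eps^2
    return np.exp(-abs(X - xi)**2 / (4*(T + 1)*eps**2))

norm = 1/(2*np.sqrt(np.pi*(T + 1)*eps**2))
Q1 = norm*quad(lambda xi: kernel(xi)*g(xi)*G(xi)*(-1/(2*eps**2)),
               p1, p2, points=[X], limit=200)[0]
Q2 = norm*quad(lambda xi: kernel(xi)*g(xi), p1, p2, points=[X], limit=200)[0]
q = -2*eps**2 * Q1/Q2
print(q)
```

Plotting or animating then amounts to evaluating q on a grid of x and t values in exactly this way.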

equation solving – Numerically obtaining roots as a function of another variable

I have an equation $f(x,y) =0$ that Mathematica can solve numerically using NSolve for a fixed $y$. I want to perform the following operation:

For y = 0.5 to 1
    alpha(y) = max(real roots of f(x, y) = 0)



This is not difficult to do in Matlab. I was wondering whether it is possible to do the above easily in Mathematica, and if so, how I can achieve it.
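
As a sketch of the loop being described, here is a Python version, with a hypothetical cubic $f(x,y) = x^3 - 2x + y$ standing in for the unspecified $f$ (chosen so that it has three real roots for every $y$ in the range, which makes the "max" step non-trivial):

```python
import numpy as np

# Hypothetical example: f(x, y) = x^3 - 2x + y (the question's f is not given).
def max_real_root(y):
    roots = np.roots([1, 0, -2, y])                  # roots of x^3 - 2x + y
    real = roots[np.abs(roots.imag) < 1e-9].real     # keep the real roots
    return real.max()

ys = np.linspace(0.5, 1.0, 11)
alpha = np.array([max_real_root(y) for y in ys])     # alpha(y)
print(alpha)   # alpha(1) = 1, since x^3 - 2x + 1 = (x - 1)(x^2 + x - 1)
```

For non-polynomial $f$ the same loop works with a bracketing root-finder per $y$ instead of `np.roots`; in Mathematica the analogue is `Table` over `y` with `NSolve` (or `FindRoot`) inside.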

numerical value – N won’t evaluate expression numerically

Using N[expr] is not evaluating this expression numerically for me, and I can’t figure out why. I originally thought it was because I wasn’t using NIntegrate, but I’ve seen multiple examples of people using N[integral expr] to get a numerical result.

c1 = 1/(Integrate[e^(-0.04 x), {x, 5, 60}]);
c2 = 1/(Integrate[e^(-0.16 x), {x, 5, 60}]);
f1[x_] := c1*e^(-0.04 x);
f2[x_] := c2*e^(-0.16 x);
P1 = N[f1[10]*f1[32]*f1[38]*f1[40]]
P2 = N[f2[10]*f2[32]*f2[38]*f2[40]]

out $$\frac{\log ^4(e)}{\left(\frac{25.}{e^{0.2}}-\frac{25.}{e^{2.4}}\right)^4 e^{4.8}}$$
out $$\frac{\log ^4(e)}{\left(\frac{6.25}{e^{0.8}}-\frac{6.25}{e^{9.6}}\right)^4 e^{19.2}}$$

I’ve tried evaluating it numerically at different stages, for example at c1 or inside the function definitions, and still get no numerical result. Any help would be appreciated.
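
For reference, here is what the intended numbers look like when computed in Python, under the assumption that `e` is meant to be the exponential constant:

```python
import numpy as np
from scipy.integrate import quad

# Same normalization constants and products, with e taken as exp(1) (np.exp).
c1 = 1/quad(lambda x: np.exp(-0.04*x), 5, 60)[0]
c2 = 1/quad(lambda x: np.exp(-0.16*x), 5, 60)[0]
f1 = lambda x: c1*np.exp(-0.04*x)
f2 = lambda x: c2*np.exp(-0.16*x)
P1 = f1(10)*f1(32)*f1(38)*f1(40)
P2 = f2(10)*f2(32)*f2(38)*f2(40)
print(P1, P2)
```

The appearance of `log(e)` in the Mathematica output above suggests that `e` there is being treated as an undefined symbol rather than the constant, which would explain why no purely numerical result comes out.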

numerical methods – How can I compute (exp(t) – 1)/t in a numerically stable way?

The expression (exp(t) - 1)/t converges to 1 as t tends to 0. However, when computed numerically, we get a different story:

In [19]: (exp(10**(-12)) - 1) * (10**12)
Out[19]: 1.000088900582341

In [20]: (exp(10**(-13)) - 1) * (10**13)
Out[20]: 0.9992007221626409

In [21]: (exp(10**(-14)) - 1) * (10**14)
Out[21]: 0.9992007221626409

In [22]: (exp(10**(-15)) - 1) * (10**15)
Out[22]: 1.1102230246251565

In [23]: (exp(10**(-16)) - 1) * (10**16)
Out[23]: 0.0

Is there some way I can compute this expression without encountering these problems? I’ve thought of using a power series but I’m wary of implementing this myself as I’m not sure of implementation details like how many terms to use.

If it’s relevant, I’m using Python with scipy and numpy.
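
Since scipy/numpy are mentioned: both `math.expm1` and `numpy.expm1` compute $e^t - 1$ directly, without forming the cancellation-prone difference, so no hand-rolled power series is needed. A quick check:

```python
import numpy as np

# expm1(t) evaluates exp(t) - 1 without catastrophic cancellation,
# so expm1(t)/t stays accurate for arbitrarily small t.
for t in (1e-12, 1e-13, 1e-14, 1e-15, 1e-16):
    print(t, np.expm1(t) / t)
```

Each ratio agrees with 1 to machine precision, in contrast to the session above.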

numerical integration – Numerically Approximating Solutions to Differential Equation

I’m trying to numerically approximate solutions to a messy differential equation, given below
$$\left(1-\alpha \frac{1}{\pi'(s_2)}\right)\left(s_2-\pi(s_2)+\frac{\beta}{2}\pi(s_2)-\frac{\alpha\beta}{2}s_2\right)+(p-\alpha s_2)\left(-1+\frac{\beta}{2}-\frac{\alpha\beta}{2\pi'(s_2)}\right)=0$$
I want to understand how the solution $\pi(s_2)$ changes as we vary $\alpha$ and $\beta$, and what forms such solutions take. The initial condition is $\pi(1)=1$, though I am open to investigating other boundary conditions that aren’t $\pi(0)=0$. However, Mathematica does not give any output, and I’m not sure why. My code is given below.

DSolve[{-(p[s2] - a s2) + (s2 - p[s2]) (1 - a/p'[s2]) + 
    b (p[s2] - a s2) (1 - a/p'[s2]) == 0, p[1] == 1}, p[s2], s2]

Manipulate[
 NSolve[-((-1 + a) Log[-a + p[s2]/s2] + 
      a Log[1 - a b + ((-2 + b) p[s2])/s2])/(-1 + 2 a) == 
   (Log[1 - a] - a Log[1 - a] - a Log[-1 + b - a b])/(-1 + 2 a) + Log[s2], 
  p[s2]], {a, -1, 1, .1}, {s2, 0, 1, .1}, {b, -1, 1, 0.1}]

differential equations – Ways to numerically find the monodromy group of an ODE

I’m interested in finding the monodromy group of some solutions of an ODE that I know only either approximately or numerically. For example, I might have a 3rd order ODE of which I cannot find exact solutions, but I can find them to very high accuracy as a series expansion. For the sake of the example, however, let me take an ODE of which I know the solutions, so I can compare whatever numerical method I have to the analytic solution.

Let me take the ODE
$$f''(z)+\frac{(2-5 z) f'(z)}{4 z (1-z)}-\frac{3 f(z)}{64 (z-1)^2}=0$$
that has the two linearly independent solutions
$$f_1=\frac{\sqrt{1-z}+\sqrt{z}+1}{2 \sqrt{\sqrt{z}+1}\,(1-z)^{1/8}}$$
$$f_2=\frac{-\sqrt{1-z}+\sqrt{z}+1}{\sqrt{\sqrt{z}+1}\,(1-z)^{1/8}}$$

Say that I start with $0<z<1$, I analytically continue $z$ to the complex plane, go around $z=1$, which is a branch point, and I come back to $0<z<1$. From the explicit solutions, we can see that they change to
$$\left( \begin{array}{c} f_1 \\ f_2 \end{array} \right) \to
\left( \begin{array}{cc} 0 & \frac{1}{2} e^{-\frac{1}{4} i \pi} \\
2 e^{-\frac{1}{4} i \pi} & 4 \end{array} \right)
\left( \begin{array}{c} f_1 \\ f_2 \end{array} \right)$$

Now, let’s assume I don’t know the exact form of $f_1$ and $f_2$. Maybe I know them, for example, as series expansions around $z=0$ to as many orders as I want. I want to find out how the monodromy group acts on these solutions.

One way to do this would be as explained here. The idea is to discretize the path around $z=1$, solve the ODE as a series expansion at each of these points, and match these approximate solutions to the ones nearby. In this way we go around $z=1$ approximately. I’ve tried this and, assuming I’ve done it correctly, I get a sensible result, but even with 16 points and keeping roughly 200 orders of the series expansion, I get an error of roughly 1–2%.

I would like to get these numbers with a higher precision. Is there any way that I could do this, without using the explicit form of $f_1$ and $f_2$? Maybe there’s a smart way of using NDSolve?
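
For what it’s worth, here is a sketch of the NDSolve-style idea (written in Python/SciPy rather than Mathematica; my own translation): continue a basis of solutions around a circle enclosing $z=1$ and read off the monodromy matrix. Liouville’s formula gives $\det M = \exp(-2\pi i \operatorname{Res}_{z=1} p(z)) = i$ for one counterclockwise loop, which serves as an accuracy check independent of the explicit $f_1$, $f_2$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# The ODE in the form f''(z) + p(z) f'(z) + q(z) f(z) = 0.
def p(z):
    return (2 - 5*z) / (4*z*(1 - z))

def q(z):
    return -3 / (64*(z - 1)**2)

# Counterclockwise circle of radius r around z = 1, starting and ending
# at the real point z0 = 1 - r; transport (f, f') along it.
r = 0.5

def rhs(s, y):
    z  = 1 + r*np.exp(1j*s)
    dz = 1j*r*np.exp(1j*s)              # dz/ds
    f, fp = y                           # fp = df/dz
    return [dz*fp, dz*(-p(z)*fp - q(z)*f)]

def transport(y0):
    sol = solve_ivp(rhs, (np.pi, 3*np.pi), np.asarray(y0, dtype=complex),
                    rtol=1e-12, atol=1e-12)
    return sol.y[:, -1]

# Columns give the monodromy matrix in the (f, f') frame at z0.
M = np.column_stack([transport([1, 0]), transport([0, 1])])
print(M, np.linalg.det(M))   # det should come out as i
```

The same loop discretization works for a higher-order ODE known only through high-accuracy series coefficients, since only the (rational) coefficient functions enter the right-hand side; tightening `rtol`/`atol` then controls the precision directly.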

interpolation – Numerically solve a non-differential equation with InterpolatingFunction as output

I have an equation that I need to solve numerically over an entire domain (ideally I would like an InterpolatingFunction object as output), and it does not involve any derivatives. I have found a way to do it using NDSolve, but I feel there should be a better solution. Here is a simple example to show my NDSolve method:


The equation: $y(x)^2 + y(x) = x$, with solutions
$y(x) = \frac{1}{2}\left(-1 \pm \sqrt{1+4x}\right)$

Numerically Solving using NDSolve:

NDSolve[D[y[x]^2 + y[x], x] == D[x, x] && y[0]^2 + y[0] == 0, y[x], {x, -5, 5}] // Quiet

This gives me two InterpolatingFunctions that, when plotted, match the analytical solutions, but it feels very hacky. I have also noticed that for some functions it does not produce solutions over the entire domain. Is there a better way to do this?
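
For contrast, outside Mathematica the usual way to get an interpolating function for a derivative-free equation is to root-find on a grid and interpolate the samples, which is essentially what an InterpolatingFunction stores anyway. A Python sketch for the same example, following the branch through $y(0)=0$ (my choice; the other branch works the same way):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.interpolate import CubicSpline

# Solve y^2 + y - x = 0 pointwise on a grid (the branch through y(0) = 0),
# then interpolate the samples into a callable function of x.
xs = np.linspace(0, 5, 200)
ys = [brentq(lambda y: y**2 + y - x, -0.5, 10) for x in xs]
y_interp = CubicSpline(xs, ys)
print(y_interp(2.0))   # ~1, the exact root of y^2 + y = 2 on this branch
```

The grid-plus-interpolation approach also makes the domain restriction explicit: where no root exists (or the branch turns around), the root-finder fails loudly instead of silently truncating the solution.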

numerical algorithms – Numerically solving an ode with infinitely many variables of which only finitely many are significant in magnitude

Suppose I have an ODE that involves infinitely many variables, with the property that at any given time only finitely many of them are large enough to be of interest (say $>10^{-10}$). However, at different times, different variables may become large.

It is also the case that given such a set of interesting variables, only a finite number of equations contain terms that are large. This is somewhat like a generalized version of locality.

The question is, is there any research on solving such equations numerically? My idea is to keep track of the variables of interest, and also “secondary” variables, which are significantly (in magnitude) coupled with the “interesting” variables; we can also keep track of “tertiary” variables and so on. We then go on solving the ode, ignoring the uninteresting variables (assuming them to be 0), and check regularly if a new variable comes into (or goes out of) interest.

To give the background, I’m working on an artificial chemistry simulation. All the reactions and their reaction-rate formulae (following Arrhenius) are determined by my set of rules. For example, $$A + X \to B \\ B \to X + C$$ simulates the conversion $A \xrightarrow{X} C$ with catalyst $X$. In this case, $C$ is initially small but eventually becomes large. This system is finite, and so is solvable by conventional methods. But if the reactions are infinite (but recursively enumerable and decidable, and, for simplicity, each combination of reactants only results in a finite number of reactions), they create an infinitely large system of ODEs.
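
The bookkeeping described above can be sketched concretely. Below is my own toy Python/SciPy version on the finite example network: integrate in windows, freeze sub-threshold variables at 0, and before each window promote variables that are large, or driven by large ones (the "secondary" variables), back into the active set. The rate constants and window length are arbitrary choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy network  A + X -> B,  B -> X + C  with mass-action rates
# r1 = k1*A*X and r2 = k2*B.
k1, k2, thresh = 1.0, 1.0, 1e-10

def rhs(t, y):
    A, X, B, C = y
    r1, r2 = k1*A*X, k2*B
    return np.array([-r1, -r1 + r2, r1 - r2, r2])

y = np.array([1.0, 0.1, 0.0, 0.0])   # A, X, B, C
for _ in range(50):                   # 50 windows of length 1
    # Active = already large, or currently driven by the large variables.
    active = (np.abs(y) > thresh) | (np.abs(rhs(0, y)) > thresh)
    def masked(t, z, active=active):
        full = np.where(active, z, 0.0)       # inactive variables read as 0
        return np.where(active, rhs(t, full), 0.0)
    y = solve_ivp(masked, (0, 1.0), y, rtol=1e-8, atol=1e-12).y[:, -1]
print(y)   # C has grown large while A has been consumed
```

In the infinite setting, `y` would be a dictionary keyed by species, and the activation scan would enumerate only the (finitely many) reactions fed by the currently active set; note that freezing a variable while its source terms are nonzero does discard a little mass, which is the price of the truncation.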

Numerically integrating exact differentials – Mathematica Stack Exchange

I would like to be able to numerically integrate an exact differential 2-form. Mathematica can do this symbolically using DSolve (though I don’t know how to insert an initial condition), but I get a “system is overdetermined” error when I try to do this numerically using NDSolve.

For instance, consider the following symbolic example posted by bbgodfrey on this forum:

DSolve[{D[f[x, y], x] == 1 + 2 E^(x^2 + y^2) x Cos[x + y] - E^(x^2 + y^2) Sin[x + y], 
   D[f[x, y], y] == 3 y^2 + 2 E^(x^2 + y^2) y Cos[x + y] - E^(x^2 + y^2) Sin[x + y]}, 
  f[x, y], {x, y}] // FullSimplify

output: x + y^3 + C[1] + E^(x^2 + y^2) Cos[x + y]

That works fine… but there are two problems. The first (smaller) problem is that putting in the initial condition f[0, 0] == 0 (which forces C[1] to be -1) doesn’t work:

DSolveValue[{D[f[x, y], x] == 
    1 + 2 E^(x^2 + y^2) x Cos[x + y] - E^(x^2 + y^2) Sin[x + y], 
   D[f[x, y], y] == 
    3 y^2 + 2 E^(x^2 + y^2) y Cos[x + y] - E^(x^2 + y^2) Sin[x + y], 
   f[0, 0] == 0}, f[x, y], {x, y}] // FullSimplify

output: DSolveValue[{D[f[x, y], x] == 
    1 + 2 E^(x^2 + y^2) x Cos[x + y] - E^(x^2 + y^2) Sin[x + y], 
   D[f[x, y], y] == 
    3 y^2 + 2 E^(x^2 + y^2) y Cos[x + y] - E^(x^2 + y^2) Sin[x + y], 
   f[0, 0] == 0}, f[x, y], {x, y}] // FullSimplify

But the second (more important) problem is that I can’t get NDSolve to numerically solve the exact PDE (with the initial condition f(0,0)=0):

NDSolveValue[{D[f[x, y], x] == 
   1 + 2 E^(x^2 + y^2) x Cos[x + y] - E^(x^2 + y^2) Sin[x + y], 
  D[f[x, y], y] == 
   3 y^2 + 2 E^(x^2 + y^2) y Cos[x + y] - E^(x^2 + y^2) Sin[x + y], 
  f[0, 0] == 0}, f, {x, -2, 2}, {y, -2, 2}]

output: NDSolveValue::overdet: There are fewer dependent variables, {f[x, y]}, than equations, so the system is overdetermined.

Can anyone help?

numerical methods – I have a function which depends on two variables, $\sigma(P,B)$. If I know $\sigma$ and $P$, can I always find $B$ numerically?

To give some detail:

I am running some numerical simulations in Matlab, using Gaussian data which is put through a transformation process. The output data has a standard deviation, $\sigma$, and the transformation takes input parameters $B$ and $P$.

I have plotted some simulations, which result in a graph showing that $\sigma$ closely follows a power law:
$$\sigma = a (BP)^b$$

where I have calculated the parameters $a$ and $b$ to good precision. This function is of course one-to-one in $BP$, so a given product $BP$ will always return the same $\sigma$, to within an error bar or so.

My next step is to essentially reverse-engineer the process: I feed the programme a $P$, tell it what I want $\sigma$ to be, and ask it to transform the data with the corresponding $B$.
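
For what it’s worth, if $\sigma$ really is a function of the product $BP$ alone, the inversion step itself is just algebra. A minimal Python sketch, with hypothetical stand-in values for the fitted $a$ and $b$:

```python
# Invert sigma = a*(B*P)**b for B, given a target sigma and a known P.
# a and b stand in for the fitted values from the simulations.
a, b = 2.0, 0.7

def sigma(B, P):
    return a*(B*P)**b

def B_for(target_sigma, P):
    return (target_sigma/a)**(1.0/b) / P

P = 3.0
B = B_for(1.5, P)
print(sigma(B, P))   # recovers the target value 1.5
```

If the simulated ratio still wanders far from 1 after an inversion like this, the discrepancy must come from the transformation itself, for example $\sigma$ depending on $B$ and $P$ separately rather than only through the product $BP$, or the power-law fit breaking down outside the fitted range.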

However, I feel like I’m going crazy. When I get it to tell me the ratio of the measured standard deviation to the one I wanted, it is never $1$ as I’d hope; if I’m lucky it’s $0.6$, but sometimes it can be as much as $30$ or as low as $0.1$.

I’ve checked and triple checked my code. There are no other parameters that I haven’t accounted for, at least that I can think of.

My question is simple: is there a mathematical reason why my method shouldn’t work? I’m at a loss.