## Plot the numerical solution of the differential equation for $$0 \le t \le 50$$: $$x'' + 0.15\,x' - x + x^3 = 0.3\cos t\,,\quad x(0) = -1\,,\quad x'(0) = 1$$

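For reference, the requested curve can be reproduced with a quick numerical sketch (SciPy here, purely as a cross-check; the original setting is presumably NDSolve):

```python
# Minimal sketch of the requested plot: rewrite the forced Duffing-type
# equation x'' + 0.15 x' - x + x^3 = 0.3 cos t as a first-order system.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    x, v = y
    return [v, 0.3*np.cos(t) - 0.15*v + x - x**3]

sol = solve_ivp(rhs, (0.0, 50.0), [-1.0, 1.0],
                dense_output=True, rtol=1e-8, atol=1e-10)
t = np.linspace(0.0, 50.0, 2000)
x = sol.sol(t)[0]
# import matplotlib.pyplot as plt; plt.plot(t, x); plt.show()
```

The tolerances are tightened because the cubic nonlinearity makes the trajectory sensitive to integration error.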
What am I doing wrong here?

## numerical value – How to extract an angle $$0 \le \varphi < 2\pi$$

Suppose you have a complex number in the standard form $$z = a + i\,b$$ (where $$a$$ and $$b$$ are real numbers). How can we convert it to the polar form $$z = A\,e^{i\varphi}$$? More specifically, I would like to get a pair of numbers: $$A > 0$$ and $$0 \le \varphi < 2\pi$$.

The command `Abs[z]` gives us the amplitude $$A > 0$$ (a real number). The command `Arg[z]` gives an angle, but apparently not always one satisfying $$0 \le \varphi < 2\pi$$ (for example, `Arg[-4.20 - 5.86 I] = -2.192`, which is not what I need). So what would be the simplest way to get an angle $$\varphi$$ that stays positive, measured in the usual counter-clockwise sense of rotation?
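The analogous computation in Python/NumPy shows the idea: `np.angle` is the counterpart of `Arg` and returns values in $$(-\pi, \pi]$$, and reducing modulo $$2\pi$$ shifts them into $$[0, 2\pi)$$. (In Mathematica the same idea would be `Mod[Arg[z], 2 Pi]`.)

```python
import numpy as np

def polar(z):
    """Return (A, phi) with A = |z| >= 0 and 0 <= phi < 2*pi."""
    return abs(z), np.angle(z) % (2*np.pi)

A, phi = polar(-4.20 - 5.86j)   # phi = 2*pi - 2.192... ~ 4.091
```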

## differential equations – Forcing WhenEvent and NDSolve to evaluate numerically, and best practice

I’m trying to implement a hybrid Kalman filter for a nonlinear system.
The following code works, but it is very slow because it performs symbolic evaluation.
What I would like is for the filter equations to first substitute numerical values into the time-dependent functions, and only then perform the actions specified by the algorithm step.
This is the code; only the last part is what really interests me.

``````(*SOME FUNCTIONS I'VE USED*)

CreateWhiteNoise[\[Mu]_, s_, t0_, tfin_, dt_, p_] := Module[{distr, n = Length[\[Mu]], listt, nc, listrn},
  listt = Range[t0, tfin, dt];
  nc = Length[listt];
  distr = MultinormalDistribution[\[Mu], s];
  listrn = RandomVariate[distr, nc];
  (Interpolation[Thread[Join[{listt, listrn[[All, #]]}]]][p]) & /@ Range@n
  ] (*Just create a white-noise continuous-time sample*)

ListForm[mat_] := DeleteCases[(Thread@# & /@ Thread@mat // Flatten), True]

SymRed[mat_] := Module[{i, j, temp},
  temp = Thread@# & /@ Thread@mat;
  Normal@SparseArray[{{i_, j_} /; i >= j :> temp[[i, j]], {i_, j_} /; i < j :> True}, Dimensions@temp] // ListForm
  ]

(*SYSTEM PART*)
t0 = 0;
tfin = 4 Pi;
pstl = {xstl, ystl};
R0l[\[Theta]_] = {{Cos[\[Theta]], -Sin[\[Theta]]}, {Sin[\[Theta]], Cos[\[Theta]]}};
pmk1 = {xmk1, ymk1};
pmk2 = {xmk2, ymk2};
numericalvalues = Thread[{l1, l2, xstl, ystl, xmk1, ymk1, xmk2, ymk2} -> {8, 5, 3, -2, 32, 16, -15, 2}];

qalist = {x, y, \[Theta], xstls, ystls};
qa[t_] = (ToString[#] <> "[t]") & /@ qalist // ToExpression;
qad[t_] = D[qa[t], t];
qadd[t_] = D[qa[t], {t, 2}];

v[t_] = 2;
\[Omega][t_] = 1;
input[t_] = {v[t], \[Omega][t]};

g1 = {Cos[#], Sin[#], 0, 0, 0} &@(#[[3]]) &;
g2 = {0, 0, 1, 0, 0};
dyn = (g1@#*v[t] + g2*\[Omega][t]) &;

pstf = ({#1, #2} + R0l[#3].{#4, #5}) &;
h1 = (Sqrt[#.#] &[pmk1 - pstf @@ #]) &;
h2 = (Sqrt[#.#] &[pmk2 - pstf @@ #]) &;

cov = {{0.008, 0}, {0, 0.008}}; (*(0.008 // Sqrt)*3 == 0.268 (m)*)
media = {0, 0};
noise = CreateWhiteNoise[media, cov, t0, tfin + 0.1, 0.04, t];

h = {h1@#, h2@#} &;
hn = (h@# + noise) &;
output[t_] = Join[h@qa[t], hn@qa[t]] /. numericalvalues;

eqdyn = qad[t] == dyn@qa[t];

qa0 = Flatten@{-2, 1, Pi/3, pstl} /. numericalvalues;
eqin = qa[0] == qa0;

monitor = {qa[t], input[t], output[t]};

{state, in, out} = NDSolveValue[{eqdyn, eqin}, monitor, {t, t0, tfin}];

(*ParametricPlot[state[[1 ;; 2]], {t, t0, tfin}] -> circumference*)

(*FILTER PART*)
qlisthat = (ToString@# <> "hat" // ToExpression) & /@ qalist;
qahat[t_] = (ToString@# <> "[t]" // ToExpression) & /@ qlisthat;
qahatd[t_] = D[qahat[t], t];
nstate = Length@qlisthat;
Pmat = Normal@SparseArray[{{i_, j_} /; i >= j :> ToExpression["p" <> ToString@i <> ToString@j],
    {i_, j_} /; i < j :> ToExpression["p" <> ToString@j <> ToString@i]}, nstate*{1, 1}];
P[t_] = Array[ToExpression[(ToString@Pmat[[#1, #2]] <> "[t]")] &, Dimensions[Pmat]];
Pd[t_] = D[P[t], t];
P0 = IdentityMatrix[nstate]*{0.005, 0.005, 0.3 Degree, 0.005, 0.005};
stima0 = RandomVariate[MultinormalDistribution[qa0, P0]];
initstima = qahat[0] == stima0;
initcov = P[0] == P0 // SymRed;

predoutput = h@qahat[t] /. numericalvalues;
errstima = {xstls[t] - xstlshat[t], ystls[t] - ystlshat[t]};
A = D[dyn@qa[t], {qa[t]}] /. Thread[qa[t] -> qahat[t]];
Ci = D[h@qa[t], {qa[t]}] /. Thread[qa[t] -> qahat[t]] /. numericalvalues;
K = P[t].Ci\[Transpose].Inverse[cov];
errpred = hn@qa[t] - predoutput /. numericalvalues;

initEKF = {initstima, initcov};

monitorEKF = {qa[t], qahat[t], errstima};

(*INTERESTING PART - HYBRID KALMAN FILTER*)

dt = 0.1;

Sk = Ci.P[t].Ci\[Transpose] + cov;
Lk = P[t].Ci\[Transpose].Inverse[Sk];
updateP = (IdentityMatrix[nstate] - Lk.Ci).P[t].Transpose[IdentityMatrix[nstate] - Lk.Ci] + Lk.cov.Lk\[Transpose];
updateStima = qahat[t] + Lk.errpred;

eqCorr = WhenEvent[Mod[t, dt],
  Evaluate@{qahat[t] -> updateStima, P[t] -> updateP // SymRed, {"RestartIntegration"}}];

eqPredStima = qahatd[t] == dyn@qahat[t];
eqPredCov = Pd[t] == A.P[t] + P[t].A\[Transpose] // SymRed;
eqEkfIbrid = {eqPredStima, eqPredCov, eqCorr};

{statoEkfIbrid, stimaEkfIbrid, errstimaEkfIbrid} =
  NDSolveValue[{eqns, eqin, eqEkfIbrid, initEKF, altEKF}, monitorEKF, {t, t0, 2},
   Method -> {"EquationSimplification" -> "Residual"}];

Plot[errstimaEkfIbrid, {t, t0, 2}]

``````

At the moment the code is terribly slow; I’m sure it can be improved by forcing numerical evaluation in some way.

Another question: what is the best practice for formulating an algorithm like this one and, more generally, for making NDSolve perform at its best? Do you think it is better to write out the arguments of each quantity explicitly, so that the `_?NumericQ` pattern test can be used to force numerical evaluation? I have also sometimes seen NDSolve used inside Block; does that help?
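Setting the Mathematica specifics aside, the correction step above (`Sk`, `Lk`, `updateP`, `updateStima`) is the standard Joseph-form EKF measurement update. As a point of comparison, here is a minimal NumPy sketch of that one step (generic matrices, not the asker's system):

```python
import numpy as np

def ekf_update(xhat, P, z, h, C, R):
    """Joseph-form EKF measurement update.

    xhat: prior state estimate; P: prior covariance; z: measurement;
    h: predicted measurement h(xhat); C: measurement Jacobian;
    R: measurement noise covariance.
    """
    S = C @ P @ C.T + R                 # innovation covariance (Sk)
    L = P @ C.T @ np.linalg.inv(S)      # Kalman gain (Lk)
    x_new = xhat + L @ (z - h)          # corrected estimate (updateStima)
    I = np.eye(len(xhat))
    # Joseph form keeps P_new symmetric positive semi-definite (updateP)
    P_new = (I - L @ C) @ P @ (I - L @ C).T + L @ R @ L.T
    return x_new, P_new
```

The Joseph form costs a few extra matrix products but is numerically more robust than the shorter `(I - LC)P` form, which is presumably why the original code uses it.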

## numerical integration – Solving Fredholm Equation with composite unknown function

I would like to numerically solve a Fredholm Equation where the unknown function is composite. For example, an equation like the one described in Solving Fredholm Equation of the second kind but having composite functions as unknowns.

Consider then the Fredholm Equation:
$$\phi\left(\frac{x^2}{2}-1\right) = 1 + \frac12 \int_{0}^{\pi} \cos\left(x-s\right)\, \phi\left(\frac{s^2}{2}-1\right)\, ds$$
for $$x\in(0,\pi)$$.

How could one use Mathematica to find a numerical solution?
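One possible approach (sketched here in Python rather than Mathematica): since the unknown only ever appears through the combination $$u(s) = \phi(s^2/2 - 1)$$, first solve the standard Fredholm equation $$u(x) = 1 + \frac12\int_0^\pi \cos(x-s)\,u(s)\,ds$$ for $$u$$ by the Nyström method, then read off $$\phi(w) = u(\sqrt{2w+2})$$ on $$w \in (-1, \pi^2/2 - 1)$$.

```python
# Nystrom method: replace the integral by a trapezoid rule on a grid,
# turning the Fredholm equation into the linear system (I - K W/2) u = 1.
import numpy as np

n = 200
s = np.linspace(0.0, np.pi, n)
w = np.full(n, np.pi/(n - 1))
w[0] = w[-1] = np.pi/(2*(n - 1))            # trapezoid weights
K = np.cos(s[:, None] - s[None, :])         # kernel cos(x - s) on the grid
u = np.linalg.solve(np.eye(n) - 0.5*K*w, np.ones(n))

def phi(t):
    """The composite unknown, recovered via t = s^2/2 - 1."""
    return np.interp(np.sqrt(2.0*t + 2.0), s, u)
```

For this particular kernel the equation is separable (cos(x-s) = cos x cos s + sin x sin s), so the exact solution u(x) = 1 + 4 sin(x)/(4 - pi) is available to validate the discretization.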

## numerical integration – How to define a polygonal region in 2D to subsequently integrate over it?

Here is an example in 12.2.

``````poly = Polygon[{{0, 0}, {1/2, Sqrt[3]/2}, {1, 1/Sqrt[3]}, {1, 0}}];
NIntegrate[Log[x + y + 1], {x, y} \[Element] poly]
``````

`0.366623`

Let us verify it by

``````Integrate[Log[x + y + 1], {x, y} \[Element] poly]
``````

`-((36 - 12 Sqrt[3] + 12 Log[2] + 228 Sqrt[3] Log[2] + 138 Log[3] + 54 Sqrt[3] Log[3] + 9 Log[4] - 3 Sqrt[3] Log[4] + 48 Sqrt[3] Log[6] - 2 Log[8] - 2 Sqrt[3] Log[8] + 2 Sqrt[3] Log[9] - 48 Sqrt[3] Log[2 - 2/Sqrt[3]] + 90 Log[2 - Sqrt[3]] + 54 Sqrt[3] Log[2 - Sqrt[3]] - 90 Log[3 - Sqrt[3]] - 54 Sqrt[3] Log[3 - Sqrt[3]] - 180 Log[-1 + Sqrt[3]] - 108 Sqrt[3] Log[-1 + Sqrt[3]] + 72 Log[1 + Sqrt[3]] - 48 Sqrt[3] Log[1 + Sqrt[3]] - 36 Log[2 + Sqrt[3]] + 24 Sqrt[3] Log[2 + Sqrt[3]] - 18 Log[3 + Sqrt[3]] - 90 Sqrt[3] Log[3 + Sqrt[3]] - 48 Log[6 + Sqrt[3]] - 52 Sqrt[3] Log[6 + Sqrt[3]] - 72 Log[3 + 2 Sqrt[3]] + 48 Sqrt[3] Log[3 + 2 Sqrt[3]] + 36 Log[9 + 5 Sqrt[3]] - 24 Sqrt[3] Log[9 + 5 Sqrt[3]])/(8 Sqrt[3] (19 + 11 Sqrt[3]) (-45 + 26 Sqrt[3])))`

``````N[%]
``````

`0.366623`

Addition. `NIntegrate` produces a different result if the vertices are taken counter-clockwise, as

``````poly1 = Polygon[{{1, 1/Sqrt[3]}, {1/2, Sqrt[3]/2}, {0, 0}, {1, 0}}];
NIntegrate[Log[x + y + 1], {x, y} \[Element] poly1]
``````

`0.17812`

shows. `Integrate` produces the same value:

``````Integrate[Log[x + y + 1], {x, y} \[Element] poly1]
``````

`(-18 - 18 Sqrt[3] + 117 Log[2] + 88 Sqrt[3] Log[2] + 297 Log[3] + 141 Sqrt[3] Log[3] - 417 Log[4] - 209 Sqrt[3] Log[4] - 574 Log[8] - 287 Sqrt[3] Log[8] - 396 Log[27] - 228 Sqrt[3] Log[27] + Log[216] + Sqrt[3] Log[216] + 6 Log[1728] + 4 Sqrt[3] Log[1728] - 6 Log[46656] - 4 Sqrt[3] Log[46656] + 4 Log[452984832] + 2 Sqrt[3] Log[452984832] - 594 Log[18 - 8 Sqrt[3]] - 342 Sqrt[3] Log[18 - 8 Sqrt[3]] - 288 Log[11 - 5 Sqrt[3]] - 144 Sqrt[3] Log[11 - 5 Sqrt[3]] + 594 Log[9 - 3 Sqrt[3]] + 342 Sqrt[3] Log[9 - 3 Sqrt[3]] + 1188 Log[5 - Sqrt[3]] + 684 Sqrt[3] Log[5 - Sqrt[3]] + 288 Log[-8 (-2 + Sqrt[3])] + 144 Sqrt[3] Log[-8 (-2 + Sqrt[3])] - 591 Log[6 + Sqrt[3]] - 279 Sqrt[3] Log[6 + Sqrt[3]] + 1179 Log[7 + Sqrt[3]] + 567 Sqrt[3] Log[7 + Sqrt[3]] + 297 Log[15 + 8 Sqrt[3]] + 141 Sqrt[3] Log[15 + 8 Sqrt[3]] - 297 Log[17 + 9 Sqrt[3]] - 141 Sqrt[3] Log[17 + 9 Sqrt[3]])/(8 Sqrt[3] (-3 + 2 Sqrt[3]) (9 + 5 Sqrt[3]))`

``````N[%]
``````

`0.17812`
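The two vertex lists traverse the same quadrilateral in opposite orientations, which appears to be what changes the result. A quick way to check the orientation of a vertex list is the shoelace formula (a Python sketch, independent of Mathematica):

```python
import numpy as np

def signed_area(vertices):
    """Shoelace formula: positive for counter-clockwise vertex order."""
    v = np.asarray(vertices, dtype=float)
    x, y = v[:, 0], v[:, 1]
    return 0.5 * np.sum(x*np.roll(y, -1) - np.roll(x, -1)*y)

s3 = 3**0.5
poly_cw  = [(0, 0), (0.5, s3/2), (1, 1/s3), (1, 0)]   # original order
poly_ccw = [(1, 1/s3), (0.5, s3/2), (0, 0), (1, 0)]   # reversed traversal
```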

## numerical methods – Does there exist a transform that allows me to solve arbitrary linear real-valued differential equations by solving upper-triangular matrix ODEs?

Linear differential equations can be solved using the matrix exponential. However, for upper triangular matrices we can observe the following simplification (diagonal matrices commute):

$$x(t)=\exp\left(\begin{bmatrix} d_1 & u_1 & u_2 \\ 0 & d_2 & u_3 \\ 0 & 0 & d_3 \end{bmatrix}(t-t_0)\right)x_0 = \big(\begin{pmatrix} \exp(d_1(t-t_0)) \\ \exp(d_2(t-t_0)) \\ \exp(d_3(t-t_0)) \end{pmatrix} \cdot \exp\big(\begin{bmatrix} 0 & u_1 & u_2 \\ 0 & 0 & u_3 \\ 0 & 0 & 0 \end{bmatrix}(t-t_0)\big)\big)x_0$$
This can be simplified further, since the strictly upper triangular matrix is nilpotent:
$$x(t)=\Big(\begin{pmatrix} \exp(d_1(t-t_0)) \\ \exp(d_2(t-t_0)) \\ \exp(d_3(t-t_0)) \end{pmatrix} \cdot \big(\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}+\begin{bmatrix} 0 & u_1 & u_2 \\ 0 & 0 & u_3 \\ 0 & 0 & 0 \end{bmatrix}(t-t_0)+\frac12\begin{bmatrix} 0 & 0 & u_1 u_3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}(t-t_0)^2\big)\Big)x_0$$
$$x(t)=\Big(\begin{pmatrix} \exp(d_1(t-t_0)) \\ \exp(d_2(t-t_0)) \\ \exp(d_3(t-t_0)) \end{pmatrix} \cdot \begin{bmatrix} 1 & u_1(t-t_0) & u_2(t-t_0) + \frac12 u_1 u_3(t-t_0)^2 \\ 0 & 1 & u_3(t-t_0) \\ 0 & 0 & 1 \end{bmatrix}\Big)x_0$$
This is extremely simple to solve, in particular if it is known that the degree of nilpotency is much smaller than the size of the matrix. Systems with the same degree of nilpotency as this one occur in rigid-body dynamics.

### Does there exist a transform that turns an arbitrary real linear ODE into (one or more) upper triangular linear (complex) ODEs, whose solutions I can transform back into a solution of the original ODE?

The obvious idea of splitting the matrix into a strictly upper triangular, a strictly lower triangular, and a diagonal part doesn’t work, since the strictly lower part doesn’t commute with (diagonal + strictly upper). I’m not very familiar with matrix transforms or transforms of differential equations.
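For what it's worth, the complex Schur decomposition performs exactly this kind of reduction: any real square matrix A is unitarily similar to an upper triangular complex matrix T, so x' = A x becomes the upper triangular system y' = T y with y = U^H x. A SciPy sketch on a random test matrix:

```python
import numpy as np
from scipy.linalg import schur, expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Complex Schur form: A = U T U^H with T upper triangular.
T, U = schur(A, output='complex')

# Solve the triangular system y' = T y, then map back:
# x(t) = U exp(T t) U^H x0.
t = 0.7
x0 = rng.standard_normal(4)
x_t = (U @ expm(T*t) @ U.conj().T @ x0).real
```

For a real matrix one can also use the real Schur form (`output='real'`), which is quasi-upper-triangular with 2x2 blocks for complex-conjugate eigenvalue pairs, avoiding complex arithmetic.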

## numerical integration – High Precision NIntegrate

Define the integrand (as a function of two variables $$x$$ and $$y$$):

``````integrand = -((64 (1 - x)^(7/2) (Sqrt[x] Sqrt[y] Sqrt[1 - x y] (-3 + 2 x y) (-1 + 4 x y) - 3 ArcSin[Sqrt[x] Sqrt[y]]) HypergeometricPFQ[{1, 4, 89/20}, {5/2, 109/20}, x])/(1869 Pi^(3/2) x^2 y^(11/10) (1 - x y)^(5/2)))
``````

I am trying to do the following numerical integral

``````nint[prec_] := NIntegrate[integrand, {x, 0, 1}, {y, 0, 1}, WorkingPrecision -> prec] // Timing
``````

For instance, at precision 10 I get:

``````In[985]:= nint[10]

Out[985]= {12.3328, 0.08959551665}
``````

I would like to get as much precision as possible, ideally around 40 digits. Sadly, the problem currently scales very badly as I increase the `WorkingPrecision`: `WorkingPrecision -> 20` required 3349 seconds. Also, I do not think I am actually getting as many digits as the working precision suggests, since the `WorkingPrecision -> 20` result matches the `WorkingPrecision -> 10` result to only 6 digits, not the 10 I might have hoped for.

Are there any tricks to improve the precision or the speed of the integral? I would be happy to run the integral overnight, or even for several days, if that would get me to 40 digits.
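One avenue worth trying outside Mathematica is arbitrary-precision tanh-sinh quadrature, e.g. with mpmath, which handles endpoint singularities well and scales reasonably with the number of digits. The sketch below uses a toy 2-D integrand with a known closed form (not the integrand above, whose HypergeometricPFQ factor would need mpmath's `hyper`) just to show the pattern of requesting ~40 digits:

```python
# mpmath sketch: nested arbitrary-precision quadrature over the unit
# square, with guard digits beyond the ~40 requested.
import mpmath as mp

mp.mp.dps = 50                       # working precision in decimal digits

f = lambda x, y: 1/(1 + x + y)       # toy integrand; exact value log(27/16)
val = mp.quad(f, [0, 1], [0, 1])     # nested quadrature over [0,1]x[0,1]
```

Nested 1-D quadrature at high precision is expensive but embarrassingly parallel in the outer variable, which helps for overnight runs.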

## numerical linear algebra – Eigenvectors of the 1d Laplacian in Python – why do I get different answers?

Consider this code

``````import numpy as np
from numpy import pi
from matplotlib import pyplot as plt
import scipy.linalg as linalg

## Domain
L = 5
M = 50
dx = L/(M-1)
xx = np.linspace(0, L, M)

## Diff matrix
row0 = np.zeros(M)
row0[[-1, 0, 1]] = (1, -2, 1)
row0 /= dx**2
D = linalg.circulant(row0)

# Dirichlet BCs -- implicitly set boundaries to 0
D = D[1:-1, 1:-1]

# Use negative laplacian
D *= -1

## Eigenvalue decomposition
ews, evs = linalg.eigh(D)

# ews, evs = linalg.eig(D)
# ews = ews.real

## Plot
fig, (ax1, ax2) = plt.subplots(nrows=2)
modulation = np.cos(pi/dx * xx[1:-1])  # (-1, +1, -1, +1, ...)
for ev in evs.T[:3]:
    ax1.plot(xx[1:-1], ev)
    ax2.plot(xx[1:-1], ev*modulation)
ax1.set_title("Unmodulated")
ax2.set_title("Modulated by (-1, +1, -1, +1, ...)")
plt.show(block=False)
fig.tight_layout()
``````

with the output shown in the attached figure (two panels: "Unmodulated" and "Modulated by (-1, +1, -1, +1, ...)").

Now, if I don’t do `D *= -1`, then the contents of the two panels are swapped (i.e. the modulation becomes necessary to recover the true eigenvectors).
But theoretically, the eigenvectors should not have changed. So why do I get different answers?

Note1: both the modulated and unmodulated eigenvectors pass the validation of comparing `ev` to `D @ ev`. But the eigenvectors are supposed to be unique (up to a scaling), so what gives?

Note2: The issue seems related to aliasing, since the modulation is by a high-frequency signal. But I can’t fathom what that has to do with eigenvalue problems.

Note3: If I uncomment the use of the (general) `eig` function, then it does not matter whether `D *= -1` was done or not (the eigenvectors always require the modulation). Isn’t this disconcerting for the “stability” of these routines?

Note4: this is not the same issue as in here, as I take care to transpose the eigenvector matrix.
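Part of the behaviour can be isolated without any modulation: `eigh` returns eigenpairs sorted by ascending eigenvalue, so negating the matrix reverses which eigenvectors appear "first". Matching pairs by eigenvalue rather than by column position makes the two runs agree (a sketch, building the same Dirichlet stencil directly with `toeplitz`):

```python
import numpy as np
from scipy.linalg import eigh, toeplitz

M = 50
col = np.zeros(M - 2)
col[0], col[1] = -2.0, 1.0
A = toeplitz(col)               # unscaled Dirichlet 1-D Laplacian stencil

ews_pos, evs_pos = eigh(-A)     # negative Laplacian, eigenvalues ascending
ews_neg, evs_neg = eigh(A)      # negated operator: ordering is reversed

# Eigenvalue lam of -A corresponds to -lam of A, which eigh placed at the
# mirrored column index; the eigenvectors there agree up to sign.
v1 = evs_pos[:, 0]
v2 = evs_neg[:, -1]
```

Since the eigenvalues of this tridiagonal Toeplitz matrix are all simple, each eigenvector is unique up to sign, and the mirrored columns line up exactly.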

## compile – Numerical errors in compiled function involving real and complex numbers as output

I am trying to use the `Compile` command in Mathematica to reduce computation time, because of its fast execution within Mathematica and its ability to use a `C` compiler for even faster computation. As this is my first time using the command, I referred to the documentation and to some usage by Leonid Shilfrin and others in an answer to this post.

After a lot of searching, I was not able to find a satisfying answer regarding the `Compile` command with both `Real` and `Complex` outputs. I am writing code analogous to my actual code, which involves a similar problem, just stripped of the many parameters and the extra complexity. The function mentioned here takes real inputs (random numbers generated by `RandomReal` in the actual problem), given explicitly, and returns one mandatory real output and one complex (possibly real in some cases) output.

``````someFunction = Compile[{{x, _Real}},
  Module[{data},
   data = ArcSin[x] // N;
   {x, data}]
  ]
``````

which works fine without any error within the input range `-1 <= x <= 1`, but raises an error beyond the specified range due to the complex output.

``````someFunction[0.5]
(*{0.5, 0.523599}*)
someFunction[2]
CompiledFunction::cfn: Numerical error encountered at instruction 1; proceeding with uncompiled evaluation.
(*{2, 1.5708 - 1.31696 I}*)
``````

The answer is indeed correct in the latter case, but the error produced during execution creates problems (making the computation even slower in the actual case). Using ``CompiledFunctionTools`CompilePrint[someFunction]`` to debug this issue, I found that there is no room to accommodate a `Complex` number as the output of `ArcSin` in this case.

I need to get rid of this error. In trying to do so, I have used `CompilationOptions` with `{"InlineExternalDefinitions" -> False}` and `{"InlineExternalDefinitions" -> True}`, following some answers here, without actually knowing their purpose. I also tried using `Join` to avoid `MainEvaluate` in my original problem, but none of this worked well. I need to use a `Module` inside `Compile`, and I encountered errors when trying to define any sort of pure function. The output is also inside the `Module`, to minimize memory usage. I would also like to know what the letters `R`, `A`, `C`, `I`, etc. mean in the output of `CompilePrint`, so that I can debug similar issues myself in the future.

My actual computation is very CPU- and memory-intensive, because the actual function (similar to `someFunction` above) takes 12 (up to 20 in some cases) arguments (generated by `RandomReal`), returns 8 outputs (some of which may be complex in some cases), and loops 10^9 times, which I am going to run from a script file with a `.m` extension. This is the reason for using `Compile` and `Module` rather than a pure function defined with `SetDelayed`.

It would be extremely helpful if someone could guide me through this problem, or point me to any similar posts related to this kind of problem which I might have missed. Thanks in advance for any suggestions.
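As a side note, the same real-versus-complex output tension exists in other numerical environments. NumPy's answer is a separate entry point (`np.emath`) that promotes to complex only when the real function would fail; the analogous Mathematica workaround, commonly suggested but stated here as an assumption to verify, is to force a complex type inside `Compile`, e.g. `ArcSin[x + 0. I]`. A Python sketch of the NumPy side:

```python
import numpy as np

def arcsin_c(x):
    """Arcsine that never raises a domain warning: np.emath.arcsin
    returns a real result for |x| <= 1 and promotes to complex otherwise,
    unlike np.arcsin, which warns and returns nan for |x| > 1."""
    return np.emath.arcsin(x)

z = arcsin_c(2.0)   # complex result, no warning
```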

## numerical value – N won’t evaluate expression numerically

Using `N[expr]` is not evaluating this expression numerically for me, and I can’t figure out why. I originally thought it was because I wasn’t using `NIntegrate`, but I’ve seen multiple examples of people using `N[integral expr]` to get a numerical result.

``````c1 = 1/(Integrate[e^(-0.04 x), {x, 5, 60}]);
c2 = 1/(Integrate[e^(-0.16 x), {x, 5, 60}]);
f1[x_] := c1*e^(-0.04 x);
f2[x_] := c2*e^(-0.16 x);
P1 = N[f1[10]*f1[32]*f1[38]*f1[40]]
P2 = N[f2[10]*f2[32]*f2[38]*f2[40]]
``````

out $$\frac{\log^4(e)}{\left(\frac{25.}{e^{0.2}}-\frac{25.}{e^{2.4}}\right)^4 e^{4.8}}$$
out $$\frac{\log^4(e)}{\left(\frac{6.25}{e^{0.8}}-\frac{6.25}{e^{9.6}}\right)^4 e^{19.2}}$$

I’ve tried evaluating it numerically at different stages, like at c1 or inside the function, and still get no numerical result. Any help would be appreciated.
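(The surviving `log(e)` factors in the output suggest that lowercase `e` is being treated as an undefined symbol; Mathematica's exponential constant is `E`, or `Exp[...]`.) As a cross-check of the numbers the computation should produce, here is the same calculation in Python, where the exponential is unambiguous (a sketch):

```python
# Normalize two truncated exponential densities on [5, 60] and evaluate
# the product of densities at the four sample points, as in the post.
import math
from scipy.integrate import quad

c1 = 1 / quad(lambda x: math.exp(-0.04*x), 5, 60)[0]
c2 = 1 / quad(lambda x: math.exp(-0.16*x), 5, 60)[0]

f1 = lambda x: c1 * math.exp(-0.04*x)
f2 = lambda x: c2 * math.exp(-0.16*x)

P1 = f1(10) * f1(32) * f1(38) * f1(40)
P2 = f2(10) * f2(32) * f2(38) * f2(40)
```

Since 10 + 32 + 38 + 40 = 120, P1 collapses to c1^4 * exp(-4.8), matching the structure of the symbolic output above once e is interpreted as the constant.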