## real analysis: check whether the following first-order differential equation is linear

Consider $$(y^2 - 1)\,dx + x\,dy = 0.$$

When I consider $$x$$ as the independent and $$y$$ as the dependent variable, then

$$\frac{dy}{dx} = \frac{1 - y^2}{x},$$ which is clearly not linear.

When I consider $$y$$ as the independent and $$x$$ as the dependent variable, then

$$\frac{dx}{dy} = \frac{-x}{y^2 - 1},$$ but since $$\frac{1}{y^2 - 1}$$ is not continuous at $$y = \pm 1$$, the equation is not linear there. Can I say that it is linear on $$\mathbb{R} \setminus \{-1, 1\}$$?
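To make the check explicit, the second equation can be rearranged into the standard linear form $x' + P(y)\,x = Q(y)$ (a routine step, shown here for reference):

```latex
\frac{dx}{dy} + \frac{1}{y^2 - 1}\, x = 0,
\qquad P(y) = \frac{1}{y^2 - 1}, \quad Q(y) = 0,
```

which is linear in $x$ on any interval avoiding $y = \pm 1$, the points where $P$ is undefined.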

Can anyone check what I have written?

Thank you.

## differential equations: Nonatomic expression expected at position 1 in First[None]

This is a follow-up question to a previous question: Solve a PDE system in a polynomial domain by parts.

I tried to solve the system of equations from one of my previous posts, with different coefficients:

``````
<.125);
F = {r, z} \[Function] {r f[z], z};
mesh1 = ToElementMesh["Coordinates" -> F @@@ mesh["Coordinates"],
   "MeshElements" -> mesh["MeshElements"]];
mesh1["Wireframe"]
``````

``````
Emod = 2*10^11; \[Nu] = 0.3; \[Rho] = 7850; g = 9.8066;
System = {Emod/((1 + \[Nu]) (1 - 2 \[Nu])) ((1 - \[Nu]) (D[r*U[r, z], r, r])/r -
      \[Nu]*D[r*V[r, z], r, z]/r) +
    (Emod/(2 (1 + \[Nu]))) (D[U[r, z], z, z] + D[V[r, z], r, z]) == 0,
   (Emod/(2 (1 + \[Nu]))) (D[r*U[r, z], r, z]/r + D[r*V[r, z], r, r]/r) +
    (Emod/((1 + \[Nu]) (1 - 2 \[Nu]))) ((1 - \[Nu]) D[V[r, z], z, z] -
      \[Nu]*D[U[r, z], r, z]) == 0,
   U[r, 4] == 0, V[r, 4] == 0, V[r, 0] == 0.00001}

{uif1, vif1} = NDSolveValue[System, {U, V}, Element[{r, z}, mesh1]];
\[Sigma]r = Function[{r, z},
   Evaluate[Emod/((\[Nu] + 1) (2 \[Nu] - 1)) ((\[Nu] - 1) D[uif1[r, z], r] -
      \[Nu] (D[vif1[r, z], r] + (uif1[r, z])/r))]];
P = Plot3D[\[Sigma]r[r, z], {r, f[0], f[4]}, {z, 0, 4}]
Export["Sigma_r.dat", First@FirstCase[P, _GraphicsComplex, None, \[Infinity]], "TSV"];
``````

If I give any value r0 $$<$$ 0.5 (for example, setting r0 equal to the first value of the coefficient a, r0 = 0.33), I get the error: `Nonatomic expression expected at position 1 in First[None]`.
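The message means that `FirstCase[P, _GraphicsComplex, None, Infinity]` returned its default `None` (no `GraphicsComplex` was found inside `P`, typically because `Plot3D` failed to produce a rendered surface), and `First` was then applied to the atom `None`. As a defensive sketch (not a fix for the underlying plot failure), the export step can guard against that case:

```mathematica
gc = FirstCase[P, _GraphicsComplex, None, Infinity];
If[gc === None,
  Print["no GraphicsComplex in P - check whether Plot3D evaluated"],
  Export["Sigma_r.dat", First[gc], "TSV"]
]
```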

## differential equations: use of MinValue with NDSolve

A simplified example looks like this. The basic idea is to minimize `fHelper` by adjusting `u` for each `t` (the control is `\[Alpha]` in the real application), and then use the result `f` in a differential equation. In the real-world application, the functions and the differential equation are much more complicated.

``````
fHelper[t_, u_] := t + u;
f[t_] := MinValue[{fHelper[t, u], u >= 0.15, u <= 1.5}, u];
solution = NDSolve[{g'
``````

The last line produces a series of errors, which seem to be related to a failed evaluation of `f`.

``````
NMinimize::nnum:
The function value 0.211738 + t is not a number at {u} = {0.211738}.

NMinimize::nnum:
The function value 0.211738 + t is not a number at {u} = {0.211738}.

NMinimize::nnum:
The function value 0.211738 + t is not a number at {u} = {0.211738}.

General::stop: Further output of NMinimize::nnum
will be suppressed during this calculation.
``````

I checked `f` to see if there is something wrong with it, but it behaves exactly like a normal function, and I have no idea why Mathematica complains.

``````
In:= f[0.01]

Out= 0.16

In:= f[0.02]

Out= 0.17
``````
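A common cause of these `nnum` messages (offered here as a likely explanation, not a certain diagnosis of the truncated code above) is that `NDSolve` first evaluates `f` with a symbolic `t`, so `MinValue` receives a non-numeric objective such as `0.211738 + t`. Restricting the definition to numeric arguments usually avoids this:

```mathematica
fHelper[t_, u_] := t + u;
(* the _?NumericQ pattern keeps f unevaluated for symbolic t *)
f[t_?NumericQ] := MinValue[{fHelper[t, u], 0.15 <= u <= 1.5}, u];
f[0.01] (* 0.16, as in the output above *)
```

With this definition, `f[t]` with a symbolic `t` simply stays unevaluated instead of handing `MinValue` a symbolic objective.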

## differential equations – Mathematica 12: avoid failures when using NDSolve in a large domain

I have a set of coupled PDEs that I would like to solve numerically using Mathematica (11.3 or 12.0).
The problem is that I need to solve on a large domain to make sure there are no boundary effects.
Mathematica seems to have trouble with such a calculation (probably just because of my RAM), so I decided to split the problem into small time intervals, export the pieces of the solution, and restart with the last solution as the new initial conditions.
Now Mathematica keeps failing in this calculation, for example after 30 iterations of the loop. Why? Is there any way to avoid that?

Here is the code:

``````
$HistoryLength = 0; (* save *)
(*PDEs*)
pde11 := D[pp[t, x], t] ==
   1.*Laplacian[pp[t, x], {x}] +
    pp[t, x]*(1 - c11*pp[t, x] - z[t, x]/(1 + pp[t, x]^2));
pde21 := D[z[t, x], t] ==
   1.*Laplacian[z[t, x], {x}] +
    z[t, x]*(eps*pp[t, x]/(1 + pp[t, x]^2) - m);
(*Initial conditions*)
lo = 7498;
hi = 7502;
domlen = 15000;
ic11[x_] := Which[x > lo && x < hi, 6, True, 0];
ic21[x_] := Which[x < hi && x > lo, 0.5, True, 1/c11];
eps = 1.4434; m = 0.3; c11 = 0.1732;
tfin = 30;
For[i = 0, i <= IntegerPart[6000/30], i++,
 Print[i];
 sol1d = NDSolve[{pde11, pde21, z[0, x] == ic11[x],
    pp[0, x] == ic21[x]}, {pp, z}, {t, 0, tfin}, {x, 0, domlen},
   MaxStepSize -> 0.1];
 resultsForExport = {};
 For[j = 0, j < tfin, j = j + 0.1,
  resultsForExport =
   Append[resultsForExport, Evaluate[z[j, 7500]] /. sol1d];
  ];
 resultsForExport = Flatten[resultsForExport];
 Export["largedomain" <> ToString[i] <> ".dat", resultsForExport];
 ic11[x_] := sol1d[[1, 2, 2]][tfin, x];
 ic21[x_] := sol1d[[1, 1, 2]][tfin, x];
 ]
``````

I guess my export may not be very elegant, sorry.
I've run this several times (it takes quite a while) and it still fails between $$i = 20$$ and $$i = 30$$.

Any thoughts / help / solutions / comments are appreciated.

## dg. differential geometry: how many critical values are there for this function on the Stiefel manifold

Let $$\mathcal{M} = \{X \in \mathbb{R}^{n \times r} : X^T X = I\}$$ and define a function $$f$$ on $$\mathcal{M}$$:
$$f(X) = \sum_{i=1}^r \left( \sum_{j=1}^r v_j X_{ij}^2 \right)^2$$
where $$(v_1, \ldots, v_r)$$ is a unit vector. How many critical values does $$f$$ have?

Obviously, $$f(X) \in (0, 1)$$. The set of critical values of $$f$$ is defined as
$$\{ f(X) : X \in \mathcal{M},\ \nabla f|_X = 0 \}$$
where $$\nabla$$ is the gradient of $$f$$ on $$\mathcal{M}$$.

If the exact number cannot be obtained, can we obtain an upper bound on it?

## differential geometry – Hessian matrix in cylindrical coordinates

I have a scalar-valued function, f, defined on a Euclidean space of 2N dimensions. I want to Taylor expand this function about a point $$P$$. I need to be able to write out explicitly all the terms in the expansion to at least 2nd order.

If I were working in Cartesian coordinates, I would define a basis such that $$P = (x_1^\prime, y_1^\prime, x_2^\prime, y_2^\prime, \ldots, x_N^\prime, y_N^\prime)$$, and the Taylor expansion would be given by
$$f(x_1, y_1, \ldots) = f(x_1^\prime, y_1^\prime, \ldots) + \sum_{i=1}^N \Big( (x_i - x_i^\prime) \frac{\partial f}{\partial x_i}\Big|_{x_1^\prime, y_1^\prime, \ldots} + (y_i - y_i^\prime) \frac{\partial f}{\partial y_i}\Big|_{x_1^\prime, y_1^\prime, \ldots} \Big) + \frac{1}{2!} \sum_{i=1}^N \sum_{j=1}^N \Big( (x_i - x_i^\prime)(x_j - x_j^\prime) \frac{\partial^2 f}{\partial x_i \partial x_j}\Big|_{x_1^\prime, y_1^\prime, \ldots} + (x_i - x_i^\prime)(y_j - y_j^\prime) \frac{\partial^2 f}{\partial x_i \partial y_j}\Big|_{x_1^\prime, y_1^\prime, \ldots} + (y_i - y_i^\prime)(x_j - x_j^\prime) \frac{\partial^2 f}{\partial y_i \partial x_j}\Big|_{x_1^\prime, y_1^\prime, \ldots} + (y_i - y_i^\prime)(y_j - y_j^\prime) \frac{\partial^2 f}{\partial y_i \partial y_j}\Big|_{x_1^\prime, y_1^\prime, \ldots} \Big) + \cdots$$

However, I want to work in polar coordinates, $$(r_1, \theta_1, r_2, \theta_2, \ldots)$$. So I should define $$P = (r_1^\prime, \theta_1^\prime, \ldots)$$, and the Taylor expansion, written explicitly to first order, looks like the following (if I have this correct).

$$f(r_1, \theta_1, r_2, \theta_2, \ldots) = f(r_1^\prime, \theta_1^\prime, \ldots) + \sum_{i=1}^N \Big( (r_i - r_i^\prime) \frac{\partial f}{\partial r_i}\Big|_{r_1^\prime, \theta_1^\prime, \ldots} + r_i (\theta_i - \theta_i^\prime) \frac{\partial f}{\partial \theta_i}\Big|_{r_1^\prime, \theta_1^\prime, \ldots} \Big) + \cdots$$

I feel that this formula must be written down somewhere, but I can't find it. I know that the second-order terms can be written as a tensor contraction $$x^i H_{ij} x^j$$, where $$H_{ij}$$ is the Hessian matrix (tensor), so it would suffice to find an explicit formula for the Hessian in a polar coordinate basis.

Can anyone write out the second-order terms in the Taylor expansion, or equivalently, provide the elements of the Hessian in a polar basis? Keep in mind that I am an engineer, so ideally I am looking for an answer written explicitly in polar coordinates, rather than in terms of covariant gradients, Levi-Civita symbols, etc., although any help toward the explicit formula is greatly appreciated.
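For reference, the standard result for a single $(r, \theta)$ pair is the following (it extends blockwise to all $N$ pairs, since distinct pairs contribute ordinary mixed partials). In the orthonormal frame $e_r = \partial_r$, $e_\theta = \frac{1}{r}\partial_\theta$, the components of the covariant Hessian are

```latex
H_{rr} = \frac{\partial^2 f}{\partial r^2}, \qquad
H_{r\theta} = H_{\theta r}
  = \frac{1}{r}\frac{\partial^2 f}{\partial r\,\partial\theta}
  - \frac{1}{r^2}\frac{\partial f}{\partial \theta}, \qquad
H_{\theta\theta}
  = \frac{1}{r^2}\frac{\partial^2 f}{\partial \theta^2}
  + \frac{1}{r}\frac{\partial f}{\partial r},
```

so, to leading order in the displacements $\Delta r = r - r^\prime$ and $\Delta\theta = \theta - \theta^\prime$, the second-order term is $\frac{1}{2}\,(\Delta r,\ r\,\Delta\theta)\, H\, (\Delta r,\ r\,\Delta\theta)^T$ with everything evaluated at $P$.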

## Differential equations: recursive function definition and change of variable

In my code I solve a nonlinear ODE system to obtain the positive functions `RadEnDens` and `PhiEnDens`, both functions of `b` (by the way, this is an early-universe cosmology problem…):

``````
Trh = 10^3;
Mpl = 2.4*10^18;
grh = 106.75;
Gammaphi = Sqrt[4 Pi^3*grh/45]*Trh^2/Mpl;
const = Gammaphi*Sqrt[3]*Mpl;
h[a_] := Sqrt[y[a] + x[a]];
x0 = 0.001;
y0 = 10^64;

system1 = {y'[b] + 3*y[b] == -const*y[b]/h[b], x'[b] + 4*x[b] == const*y[b]/h[b], x[0] == x0, y[0] == y0};
{xsol, ysol} = NDSolveValue[system1, {x[b], y[b]}, {b, 0, 60}];
system2 = ReplacePart[system1 /. {x -> (Exp[\[CurlyPhi][#]] &), y -> (Exp[\[Psi][#]] &)},
   {{3} -> \[CurlyPhi][0] == Log[x0], {4} -> \[Psi][0] == Log[y0]}];
{nds1, nds2} = NDSolveValue[system2, {\[CurlyPhi], \[Psi]}, {b, 0, 60}, WorkingPrecision -> MachinePrecision];
PhiEnDens[b_] := Exp[nds2[b]];
hubble[b_] := Sqrt[RadEnDens[b] + PhiEnDens[b]]/(Sqrt[3]*Mpl);

mydata = Import["gstar.xlsx", {"Data", 1, All, ;; 2}];
g = Interpolation[mydata, Method -> "Spline"];
gstar[t_] := Piecewise[{{g[t], t <= 1000}, {g[1000], t > 1000}}];
``````

Here, I created the function `g` from https://drive.google.com/file/d/1UhkYdoaqXQIsf6opOx-EBgG7xPxxHga3/view

I want to do two things now:

1) Define a function $$T$$ (the temperature) implicitly, as something like `T=(30/(Pi^2*g[T[b]])*RadEnDens[b])^(1/4)`, but written this way it doesn't work. In general, it should be a decreasing function.

2) Use `T` as my new variable instead of `b`, and therefore redefine all the various quantities (`RadEnDens[T]` and `PhiEnDens[T]` in particular) in terms of it.
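For point 1), since `T` appears on both sides of the defining relation, one possible approach (a sketch only, assuming `RadEnDens` and `gstar` are defined as above; `Temp` is a hypothetical name) is to solve the implicit equation numerically at each `b`:

```mathematica
(* Temp[b] solves T == (30/(Pi^2 gstar[T]) RadEnDens[b])^(1/4) for T;
   Trh is used as the starting guess for the root search *)
Temp[b_?NumericQ] :=
 T /. FindRoot[T == (30/(Pi^2*gstar[T])*RadEnDens[b])^(1/4), {T, Trh}]
```

If `Temp` turns out to be monotonically decreasing as expected, interpolating `{Temp[b], b}` pairs would give the inverse map `b(T)` needed for point 2); this is just one way to set it up.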

Thanks for your help, I'm going crazy with this :P

## dg. differential geometry – Expressing a Riemannian metric as a pullback metric

For a Riemannian manifold $$M$$ with some original metric $$g$$, and any other possible metric $$g'$$ on $$M$$: is there a diffeomorphism $$f: M \rightarrow M$$ such that the pullback metric $$f^*g$$ is $$g'$$? If this is not true in general, for what kinds of metrics is it true, and what properties make it fail in general? I am specifically interested in the simple case where $$M$$ is $$\mathbb{R}^n$$ and $$g$$ is the usual Euclidean metric, but I am also interested in the general case.
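For reference, the pullback metric in question is defined pointwise by

```latex
(f^*g)_p(u, v) = g_{f(p)}\big(df_p\, u,\ df_p\, v\big),
\qquad u, v \in T_pM,
```

or in coordinates $(f^*g)_{ij}(x) = \frac{\partial f^a}{\partial x^i}\frac{\partial f^b}{\partial x^j}\, g_{ab}(f(x))$, so the question is asking when the nonlinear system $f^*g = g'$ admits a diffeomorphism as a solution.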

## dg. differential geometry – S1-equivariant forms

I was reading the article “Equivariant Cohomology” by L. W. Tu. On page 3 (page 425) it is described that any $$S^1$$-equivariant $$2n$$-form on a compact, oriented, smooth manifold $$M$$ is given by $$\alpha = \omega_{2n} + \cdots + \omega_0 u^n$$, where $$u \in H^*_{S^1}(pt; \mathbb{R})$$ and each $$\omega_{2j}$$ is an $$S^1$$-invariant $$2j$$-form on $$M$$.

$$\underline{\text{Ques-1}}$$:

Let's take an $$S^1$$-equivariant 2-form $$\alpha_2$$ on $$M$$; then clearly one should get an $$S^1$$-equivariant 3-form as $$d_g(\alpha_2) = d_g(\omega_2 + \omega_0 u)$$, where $$\omega_2$$ is an $$S^1$$-invariant 2-form, $$\omega_0$$ is an $$S^1$$-invariant smooth function, and $$d_g$$ is the equivariant exterior derivative, defined by $$d_g(\alpha)(X) = (d + i_{X^{\#}})(\alpha(X)).$$

But $$d_g(\alpha_2) = d\omega_2\ +$$ a second term involving the contraction of a zero-form. So how is this an equivariant 3-form? Can someone please help me?
Or are the only equivariant forms on $$M$$ the even-degree ones?

$$\underline{\text{Ques-2}}$$:

Since $$M$$ can be embedded in some $$\mathbb{R}^n$$ (Whitney embedding theorem), I am curious whether we can somehow think of (maybe "extension" is the right word) the above $$S^1$$-equivariant 2-form as an $$S^1$$-equivariant 2-form on some $$\mathbb{R}^n$$.

If possible, it should be expressed explicitly in terms of some $$f\, dx \wedge dy$$ for some $$f$$, or something like that, I guess.