differential equations: I am more or less close to solving the Laplace equation, but I need help with something

I have written the following Mathematica code to solve the Laplace equation using the finite difference method.

  In[1]:= Remove[a, b, Nx, Ny, h, xgrid, ygrid, u, i, j]
a = 0; b = 0.5; n = 4;
h = (b - a)/n;
xgrid = Table[x[i] -> a + i h, {i, 1, n}];
ygrid = Table[y[j] -> a + j h, {j, 1, n}];
eqnstemplate = {-4 u[i, j] + u[i + 1, j] + u[i - 1, j] + u[i, j - 1] +
      u[i, j + 1] == 0};
BC1 = Table[u[i, 0] == 0, {i, 1, n - 1}];
BC2 = Table[u[i, 4] == 200 x[i], {i, 1, n - 1}];
BC3 = Table[u[4, j] == 200 y[j], {j, 2, n - 1}];
BC4 = Table[u[0, j] == 0, {j, 2, n - 1}];
Eqns = Table[eqnstemplate, {i, 1, n - 1}, {j, 1, n - 1}] /. xgrid /. 
    ygrid // Flatten;
systemEqns = Join[Eqns, BC1, BC2, BC3, BC4] /. xgrid /. ygrid


Out[12]= {u[0, 1] + u[1, 0] - 4 u[1, 1] + u[1, 2] + u[2, 1] == 0, 
 u[0, 2] + u[1, 1] - 4 u[1, 2] + u[1, 3] + u[2, 2] == 0, 
 u[0, 3] + u[1, 2] - 4 u[1, 3] + u[1, 4] + u[2, 3] == 0, 
 u[1, 1] + u[2, 0] - 4 u[2, 1] + u[2, 2] + u[3, 1] == 0, 
 u[1, 2] + u[2, 1] - 4 u[2, 2] + u[2, 3] + u[3, 2] == 0, 
 u[1, 3] + u[2, 2] - 4 u[2, 3] + u[2, 4] + u[3, 3] == 0, 
 u[2, 1] + u[3, 0] - 4 u[3, 1] + u[3, 2] + u[4, 1] == 0, 
 u[2, 2] + u[3, 1] - 4 u[3, 2] + u[3, 3] + u[4, 2] == 0, 
 u[2, 3] + u[3, 2] - 4 u[3, 3] + u[3, 4] + u[4, 3] == 0, u[1, 0] == 0,
  u[2, 0] == 0, u[3, 0] == 0, u[1, 4] == 25., u[2, 4] == 50., 
 u[3, 4] == 75., u[4, 2] == 50., u[4, 3] == 75., u[0, 2] == 0, 
 u[0, 3] == 0}

I need this in the form of a matrix for the unknown variables only, with the known boundary values substituted in, so that I obtain the linear system that arises from discretizing the Laplace equation.
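A minimal sketch of one way to pull out the coefficient matrix, assuming the definitions above (note that, as written, BC3 and BC4 start at j = 2, so u[4, 1] and u[0, 1] never receive a boundary value and will stay symbolic unless those ranges are extended to {j, 1, n - 1}):

    (* turn the boundary equations into replacement rules, substitute them into the
       nine interior equations, then read off the matrix and the right-hand side *)
    bcRules = Flatten[ToRules /@ (Join[BC1, BC2, BC3, BC4] /. xgrid /. ygrid)];
    unknowns = Flatten[Table[u[i, j], {i, 1, n - 1}, {j, 1, n - 1}]];
    interior = Eqns /. bcRules;
    {bvec, amat} = Normal[CoefficientArrays[interior, unknowns]];
    (* interior is equivalent to amat.unknowns + bvec == 0, i.e. amat.unknowns == -bvec *)
    MatrixForm[amat]
    LinearSolve[amat, -bvec]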

Plotting the solution of a differential equation

M1 = Array[Subscript[y, #1, #2][t] &, {2, 2}];
M0 = {{1, 0.00001}, {0.00001, 0}};
ci = Thread[Flatten[M1] == Flatten[M0]] /. {t -> 0};
s = NDSolve[{I D[M1, t] == (M''.M1 - M1.M'')/20, ci}, Variables[M1], {t, 0, 10}]

I have solved the above differential equation. It gives four different solutions, namely the four matrix elements of M1 as interpolating functions.

How do I plot these different solutions? I tried

Plot[M1 /. s, {t, 0, 10}]

But it shows nothing.
I'm sorry, I forgot to mention: M'' is defined as an array,

M''={{3.58368*10^-6, -9.3358*10^-6}, {-9.3358*10^-6, -3.58368*10^-6}}

It shows the solution as

{{Subscript[y, 1, 1][t] -> InterpolatingFunction[{{0., 10.}}, <>][t], 
  Subscript[y, 1, 2][t] -> InterpolatingFunction[{{0., 10.}}, <>][t], 
  Subscript[y, 2, 1][t] -> InterpolatingFunction[{{0., 10.}}, <>][t], 
  Subscript[y, 2, 2][t] -> InterpolatingFunction[{{0., 10.}}, <>][t]}}
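A minimal sketch of one way to plot them, assuming s is the list of rules shown above: Plot holds its argument, so the replacement has to be forced with Evaluate, and the 2×2 matrix flattened into a list of scalar expressions.

    (* flatten the 2x2 matrix of solutions and evaluate the rules before Plot sees them *)
    Plot[Evaluate[Flatten[M1 /. First[s]]], {t, 0, 10}]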

tensors: I'm not sure why the geodesic deviation equation involves a second ordinary derivative

My question is related to this link (18:10)

Recently I watched a video about the derivative of a volume.
In this video, the presenter relates the ordinary derivative of the volume to the Ricci tensor, but I get stuck when I try to understand it.

My question is that I don't understand why, at 18:10,
$$\ddot{S}^{u_j}{}_j$$
is equal to
$$-R^{u_j}{}_{xyz}\, S^y{}_j\, v^z\, v^x .$$

The reason is that
$$\ddot{S}^{u_j}{}_j$$
should not be equal to
$$-R^{u_j}{}_{xyz}\, S^y{}_j\, v^z\, v^x$$
(where $$(\nabla_v \nabla_v S)^{u_j} = R^{u_j}{}_{xyz}\, S^y{}_j\, v^z\, v^x$$), since they are different things:
the first is the second ordinary derivative, and the second is fully the second covariant derivative.

I cannot see why it is not the second ordinary derivative, since the volume itself is a scalar.
If you take the ordinary derivative of the volume, the logic should carry through in the same way, and the second derivative of the vector components
(which appear in the decomposition via the product rule) should be the ordinary kind of derivative.

I give an example of why they are not equal:
Given a 2D volume (an area) spanned by the vectors $$\vec{A}$$ and $$\vec{B},$$

the volume is $$V = \epsilon_{ijk} A^i B^j .$$
If you take the ordinary derivative of the volume, $$dV/d\lambda,$$ this implies taking the ordinary derivative of $$A^i$$ (as a result of the product rule), which is $$dA^i/d\lambda .$$
However, this is only part of $$\nabla_v A$$:
$$(\nabla_v A)^k = \underbrace{\frac{dA^k}{d\lambda}}_{(1)} + \underbrace{v^j\, \Gamma^k{}_{ij}\, A^i}_{(2)} = v^j \frac{\partial A^k}{\partial x^j} + v^j\, \Gamma^k{}_{ij}\, A^i ,$$

where $$\Gamma^k{}_{ij}$$ is the connection coefficient that gets lost, and $$v^j\, \partial(\;)/\partial x^j$$ is the same as $$d(\;)/d\lambda .$$

This shows that the ordinary derivative of $$A^i$$ is only term (1), and term (2) is neglected:
(1) is only the ordinary-derivative part of the volume derivative, while the full derivative also contains the connection term (2).
It is clear that (1) alone is not equal to (1) + (2).

Following a similar line of reasoning, I think the second ordinary derivative should not be the same as the second covariant derivative, so I am not sure why they are equal:
$$\ddot{S}^{u_j}{}_j = \frac{d^2 S^{u_j}{}_j}{d\lambda^2} \;\overset{?}{=}\; (\nabla_v \nabla_v S)^{u_j} = R^{u_j}{}_{xyz}\, S^y{}_j\, v^z\, v^x .$$
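Written out for $S$ in the same way as for $A$ above (first derivative only, same index conventions), the two operations differ by exactly the connection term:

$$(\nabla_v S)^k = v^j \frac{\partial S^k}{\partial x^j} + v^j\, \Gamma^k{}_{ij}\, S^i = \frac{dS^k}{d\lambda} + v^j\, \Gamma^k{}_{ij}\, S^i .$$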

Could anyone help me, please?
Here is my photo: [image]

Nonlinear Schrödinger equation

I am trying to solve the nonlinear Schrödinger equation for a fiber-optic model in one dimension. I used the Gross-Pitaevskii equation and the Bogoliubov-de Gennes approximation method to solve it. The code is the following:

https://drive.google.com/open?id=1I8jhP2kCnSiq9y53ipiOUMHhCYVv8QVF
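The linked notebook is not reproduced here. As a stand-alone point of reference only, here is a minimal sketch of integrating a 1D nonlinear Schrödinger equation with NDSolve by splitting the field into real and imaginary parts; the sech initial pulse, the box size, and the coefficients are assumptions, not values taken from the linked code:

    (* 1D NLS: I u_t + (1/2) u_xx + |u|^2 u == 0, with u = p + I q *)
    L = 20; T = 10;
    nls = {D[p[x, t], t] == -(1/2) D[q[x, t], {x, 2}] - (p[x, t]^2 + q[x, t]^2) q[x, t],
           D[q[x, t], t] ==  (1/2) D[p[x, t], {x, 2}] + (p[x, t]^2 + q[x, t]^2) p[x, t]};
    ics = {p[x, 0] == Sech[x], q[x, 0] == 0};          (* assumed bright pulse *)
    bcs = {p[-L, t] == p[L, t], q[-L, t] == q[L, t]};  (* assumed periodic box *)
    sol = First@NDSolve[Join[nls, ics, bcs], {p, q}, {x, -L, L}, {t, 0, T}];
    Plot3D[Evaluate[(p[x, t]^2 + q[x, t]^2) /. sol], {x, -L, L}, {t, 0, T},
      PlotRange -> All, AxesLabel -> {"x", "t", "|u|^2"}]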

Thank you

Prove the equation using the geometric series formula $1 + k + k(k-1) + \cdots + k(k-1)^{n-1} = \frac{k(k-1)^n - 2}{k-2}$

Prove the equation using the geometric series formula:

$$1 + k + k(k-1) + \cdots + k(k-1)^{n-1} = \frac{k(k-1)^n - 2}{k-2}$$

Please look at the photo: [image]
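For reference, a minimal worked check using the geometric sum formula, treating everything after the leading $1$ as a geometric series with first term $k$ and ratio $k-1$ (assuming $k \neq 2$):

$$1 + \sum_{m=0}^{n-1} k(k-1)^m = 1 + k\,\frac{(k-1)^n - 1}{(k-1) - 1} = \frac{(k-2) + k(k-1)^n - k}{k-2} = \frac{k(k-1)^n - 2}{k-2}.$$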

game mechanics – Simple orbital transfer and interception equation

I am working on a game that has a game board made up of objects that orbit the center of the board. You can think of it as our solar system, but very simplified in terms of physics. All objects have a mass and gravity of 1.0. Their orbital velocities are simply a linear function of their radius from the center (the Sun).

The player has a base, which in terms of our solar system would be Earth. The player needs to send a spaceship to another planet in a different orbit. The spacecraft has a constant acceleration (or deceleration) and unlimited fuel.

Simple example:

I need to find an equation for this spacecraft to go to one of the planets and intercept it. I have looked at some web pages on orbital mechanics, but this is a little beyond my mathematical skills, and I think it is too complex for what I am trying to achieve. Any help would be greatly appreciated.

NOTE: If this is more suitable for Mathematics SE, do not hesitate to move it.

Algebraic geometry: show that, in general, an equation of a locus that is a straight line is a linear equation in two variables

Many books state the following result without giving any proof:

An equation of a locus that is a straight line is a linear equation in two variables.

I have tried hard to prove this, but I think I have made some mistakes.
If you find any error, please tell me, and also post the "correct" proof of it.

Consider three points on that straight line: $(x_1, y_1), (x_2, y_2), (x_3, y_3)$.

Now, since they are collinear, the area of the triangle formed with these points as vertices is zero, that is,
$$\Delta = \frac{1}{2}\,\lvert x_1(y_2 - y_3) + x_2(y_3 - y_1) + x_3(y_1 - y_2)\rvert = 0 .$$

So:
$$\lvert x_1(y_2 - y_3) + x_2(y_3 - y_1) + x_3(y_1 - y_2)\rvert = 0 .$$

Now consider an equation $ax + by + c = 0$.

There may be two cases:

(1) $ax + by + c = 0$ is the equation of the straight line; (2) $ax + by + c = 0$ is not the equation of the straight line.

If we take case (2) to be true, then we can say that
$$ax_1 + by_1 + c \neq 0$$
$$ax_2 + by_2 + c \neq 0$$
$$ax_3 + by_3 + c \neq 0 .$$

If we multiply the first relation by $x_2$ and the second by $x_1$, and then subtract the second from the first, we get
$$b \neq \frac{-c(x_2 - x_1)}{y_1 x_2 - y_2 x_1} .$$
In the same way, eliminating $b$, we obtain
$$a \neq \frac{-c(y_2 - y_1)}{x_1 y_2 - x_2 y_1} .$$

Now, substituting these values into the third relation, we get
$$\frac{-c(y_2 - y_1)}{x_1 y_2 - x_2 y_1}\, x_3 + \frac{(-c)(x_2 - x_1)}{x_2 y_1 - x_1 y_2}\, y_3 + c \neq 0 .$$

Now, multiplying by $-1$ in the numerator of the coefficient of $x_3$ and in the denominator of the coefficient of $y_3$, we get

$$c\left(\frac{(y_1 - y_2)\,x_3}{x_1 y_2 - x_2 y_1} + \frac{(x_2 - x_1)\,y_3}{x_1 y_2 - x_2 y_1} + 1\right) \neq 0$$

$$\left(\frac{c}{x_1 y_2 - x_2 y_1}\right)\left\{x_3(y_1 - y_2) + x_2 y_3 - x_1 y_3 + x_1 y_2 - x_2 y_1\right\} \neq 0$$

$$x_1(y_2 - y_3) + x_2(y_3 - y_1) + x_3(y_1 - y_2) \neq 0 .$$

This result is a contradiction, so our assumption that (2) is true is incorrect. If we instead choose (1) to be true, then, following similar steps, we arrive at the correct result, so (1) is correct. $Q.E.D.$

Incorrect solution to a simple equation

The solution that Solve gives for this simple equation

(9 + 12*x + x^2)/(3 + x) == 6 - 18/(3 + x)

is {{x -> -3}}. Likewise, Reduce gives x == -3. Why?
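A minimal check of what is happening, assuming the equation above: the difference of the two sides contains a common factor of (3 + x) that cancels on simplification.

    eq = (9 + 12*x + x^2)/(3 + x) == 6 - 18/(3 + x);
    Simplify[Subtract @@ eq]  (* should reduce to 3 + x once the common (3 + x) factor cancels *)

Note that both sides of the original equation are themselves undefined at x = -3 (the denominators vanish), which is what makes the reported solution look suspect.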

Rectify the integral curves of the equation $\dot x = x + \cos t$

I am studying ODEs. My teacher suggested Arnold's book. I am studying rectification, and it is not at all clear to me.
Do you know any link that explains what rectification means and what steps to follow?
Here is an example question that I have no idea how to approach:

(a) Rectify the integral curves of the equation $\dot x = x + \cos t$.
(b) Linearize at $(0, 1)$ ... (I have, however, seen an example of linearization using the Jacobian matrix and discussing the stability of the points, etc.)
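Not the rectification itself, but for reference, the explicit integral curves obtained from the standard integrating-factor computation for a linear first-order ODE (usually the starting point before changing to rectifying coordinates):

$$\dot x = x + \cos t \;\Longrightarrow\; x(t) = C e^{t} + \frac{\sin t - \cos t}{2}, \qquad C \in \mathbb{R}.$$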

Physics – Solve the functional equation $h(y) + h^{-1}(y) = 2y + y^2$

I was trying to solve a certain physics problem and ran into a functional equation containing a function $h$ and its inverse $h^{-1}$:
\begin{equation}
h(y) + h^{-1}(y) = 2y + y^2. \tag{1}
\end{equation}

First I thought there might be differentiable solutions, so after differentiating we get
$$
h'(y) + \frac{1}{h'(y)} = 2 + 2y.
$$

However, after solving for $h'(y)$ and integrating, I get a function that cannot satisfy the initial equation $(1)$. By contradiction, one can see from here that $(1)$ cannot have a differentiable solution.

Q: Does $(1)$ have a unique solution, and is it possible to find it in closed form?

Equation $(1)$ seems quite simple, and probably also simple to analyze, but I couldn't figure out how.


Background information: The physics problem I was trying to solve is finding the dependence of the current $J_0$ on the voltage $U_0$ in this infinite chain, which contains nonlinear elements with quadratic volt-ampere characteristics $I(V) = \alpha V^2$ and ohmic resistances $R$:
[circuit diagram]

According to dimensional analysis, one can write
$$
J_0(U_0) = \frac{1}{\alpha R^2}\, f(\alpha R U_0).
$$

Solving a simple system of equations, I obtained the following functional equation for the unknown function $f$:
$$
f(x) = (x - f(x))^2 + f(x - f(x)). \tag{2}
$$

Now introduce another function $h$ via
$$
x - f(x) = h(x).
$$

Then equation $(2)$ becomes
$$
x - h(x) = h^2(x) + h(x) - h(h(x)).
$$

Let $h(x) = y$; then $x = h^{-1}(y)$, and we get $(1)$.
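Spelled out, the substitution gives
$$
h^{-1}(y) - y = y^2 + y - h(y),
$$
which rearranges to $h(y) + h^{-1}(y) = 2y + y^2$, i.e. equation $(1)$.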

This is not a textbook problem and I don't even know if it has a solution. I was studying it out of curiosity.