dynamic – how to solve these two coupled differential equations?

I want to solve the following equations to find k when r max is 15, 17, and 19, and then plot tension (kx) against angle (theta). Is there a way to solve this when there are many unknowns?

$$ m g \sin\theta(t) - m\big(r''(t) - r(t)\,\theta'(t)^2\big) = k\,(r(t) - 14) \tag{1} $$

$$ g \cos\theta(t) = r(t)\,\theta''(t) + 2\,r'(t)\,\theta'(t) \tag{2} $$

with $m = 81$, $g = 9.81$, $r_0 = 15$, and $\theta_0 = \arcsin\big((r_{\max} - 12)/15\big)$ for $r_{\max} \in \{15, 17, 19\}$.
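A sketch of one possible numerical attack (in Python/SciPy rather than Mathematica): treat k as the unknown, integrate the system for a trial k, and adjust k with a root-finder until the maximum radius hits the target $r_{\max}$. The initial conditions $r'(0) = \theta'(0) = 0$ are my own assumptions, since the post does not state them.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, g = 81.0, 9.81            # values given in the post
r0, L0 = 15.0, 14.0          # r(0) = r0; 14 is the natural length in eq. (1)
rmax = 17.0                  # one of the three targets 15, 17, 19
theta0 = np.arcsin((rmax - 12.0) / 15.0)   # theta(0) as given in the post

def rhs(t, s, k):
    # state s = [r, r', theta, theta']; eqs. (1)-(2) solved for r'' and theta''
    r, rd, th, thd = s
    rdd = g * np.sin(th) + r * thd**2 - (k / m) * (r - L0)
    thdd = (g * np.cos(th) - 2.0 * rd * thd) / r
    return [rd, rdd, thd, thdd]

def max_radius(k, t_end=10.0):
    # integrate from rest (r'(0) = theta'(0) = 0 is an assumption)
    sol = solve_ivp(rhs, (0.0, t_end), [r0, 0.0, theta0, 0.0],
                    args=(k,), max_step=0.01)
    return sol.y[0].max(), sol

# for a trial spring constant, record the trajectory and the tension k (r - 14);
# a bracketing root-finder (e.g. scipy.optimize.brentq) applied to
# max_radius(k)[0] - rmax would then pin down k for each target
k_trial = 500.0
mr, sol = max_radius(k_trial)
tension = k_trial * (sol.y[0] - L0)   # plot this against the angle sol.y[2]
```

Once k is found, `tension` versus `sol.y[2]` gives the requested tension-vs-angle plot.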

Thank you very much.

differential equations – optimization of second order ODE with more than one parameter

I want to optimize a second order ODE with more than one parameter:

Exp[2*alpha*x]*D[theta[x], {x, 2}] + (2*alpha + A)*Exp[2*alpha*x]*
  D[theta[x], x] - B^2*(theta[x] - thetaa) - C*(theta[x] - thetaa)^2 + D*Exp[2*alpha*x] == 0;

With boundary conditions

theta[1] == 1, theta'[0] == 0.20;

Here A, B, C, D are parameters. I want to maximize theta[x] over different parameter values.
I have no idea how to do it; please help me.
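One standard route is to solve the boundary-value problem numerically for each parameter set and then scan (or optimize) over the parameters, tracking the peak of theta. A sketch in Python/SciPy; the values of alpha, thetaa, and the parameter grid below are illustrative assumptions. (Note also that C and D are reserved symbols in Mathematica, so renaming them there is advisable.)

```python
import numpy as np
from scipy.integrate import solve_bvp

alpha, thetaa = 0.5, 0.0   # sample values; the post leaves them unspecified

def solve_for(A, B, C, D):
    # theta'' = e^{-2 alpha x} [B^2 (th - thetaa) + C (th - thetaa)^2]
    #           - D - (2 alpha + A) theta'
    def rhs(x, y):
        th, dth = y
        e = np.exp(2 * alpha * x)
        return np.vstack([dth,
                          (B**2 * (th - thetaa) + C * (th - thetaa)**2) / e
                          - D - (2 * alpha + A) * dth])
    def bc(ya, yb):
        # theta'(0) == 0.20 and theta(1) == 1
        return np.array([ya[1] - 0.20, yb[0] - 1.0])
    x = np.linspace(0.0, 1.0, 50)
    y0 = np.vstack([np.ones(50), np.zeros(50)])   # crude initial guess
    return solve_bvp(rhs, bc, x, y0)

# crude sweep: maximize the peak of theta over a grid of A values
best = max((solve_for(A, 1.0, 0.1, 1.0).y[0].max(), A)
           for A in np.linspace(0.0, 2.0, 5))
```

For a serious optimization, the grid search can be replaced by `scipy.optimize.minimize` over the parameter vector.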

Plot the numerical solution of the differential equation for $0 \le t \le 50$: $x'' + 0.15x' - x + x^3 = 0.3\cos t$, $x(0) = -1$, $x'(0) = 1$.

What am I doing wrong here?

In[195]:= DSolve[{x''[t] + 0.15 x'[t] - x[t] + x[t]^3 == 0.3 Cos[t], x[0] == -1, x'[0] == 0}, x[t], t]
Out[195]= DSolve[{-x[t] + x[t]^3 + 0.15 Derivative[1][x][t] + x''[t] == 0.3 Cos[t], x[0] == -1, Derivative[1][x][0] == 0}, x[t], t]
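For what it's worth: DSolve returning the input unevaluated is its standard signal that no closed-form solution was found, and this forced Duffing-type equation has none, so a numerical solver (NDSolve in Mathematica) is the tool. Note also that the problem statement has x'(0) = 1 while the DSolve call uses x'(0) == 0. A numerical cross-check outside Mathematica, in Python/SciPy:

```python
import numpy as np
from scipy.integrate import solve_ivp

# x'' + 0.15 x' - x + x^3 == 0.3 cos t, rewritten as a first-order system
def rhs(t, y):
    x, v = y
    return [v, 0.3 * np.cos(t) - 0.15 * v + x - x**3]

sol = solve_ivp(rhs, (0.0, 50.0), [-1.0, 1.0],   # x(0) = -1, x'(0) = 1
                dense_output=True, rtol=1e-8, atol=1e-10)
t = np.linspace(0.0, 50.0, 1001)
x = sol.sol(t)[0]    # sample the solution for plotting
```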

stability in odes – Finding all equilibrium points of a differential system

So, I want to find all equilibrium points of the following differential system:

$$\begin{aligned}
x' &= gz - hx\\
y' &= \frac{c}{a+bx} + ky\\
z' &= ey - fz
\end{aligned}$$

And I know I can write it as:
$$\begin{pmatrix}
h & 0 & g\\
? & k & 0\\
0 & e & f
\end{pmatrix}$$

But I don’t know what to do with the $\frac{c}{a+bx}$ (hence the ? in the matrix).

I also tried setting them all equal to zero, and I ended up with the following problem:

$x=\frac{g}{h}z$ and $y=\frac{f}{e}z \implies \frac{c}{a+\frac{bgz}{h}}=-\frac{kfz}{e}$. And finding $z$ from there is a nightmare.

It would be great if you could give me any insight.
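If the symbolic formula for z gets hopeless, a numerical fallback is to find the roots of the reduced scalar equation for z (the condition $y'=0$ after eliminating $x=\frac{g}{h}z$ and $y=\frac{f}{e}z$) for concrete parameter values, then recover x and y. A sketch in Python; the parameter values are purely illustrative (k is taken negative so that a real root exists on the bracket):

```python
import numpy as np
from scipy.optimize import brentq

# illustrative parameter values (the post keeps them symbolic)
a = b = c = e = f = g = h = 1.0
k = -1.0

def residual(z):
    # y' at a candidate equilibrium, after substituting x = (g/h) z
    # (from x' = 0) and y = (f/e) z (from z' = 0)
    return c / (a + b * g * z / h) + k * f * z / e

z_star = brentq(residual, 0.0, 10.0)   # needs a sign change on the bracket
x_star, y_star = (g / h) * z_star, (f / e) * z_star
```

For these sample values the equation reduces to $\frac{1}{1+z}=z$, whose positive root is the golden-ratio conjugate $(\sqrt5-1)/2$.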

differential geometry – Nowhere-vanishing form $\omega$ on $S^1$.

This is an example (19.8 and 17.15) from An Introduction to Manifolds by Tu.

Let $S^1$ be the unit circle defined by $x^2+y^2=1$ in $\mathbb{R}^2$. The 1-form $dx$ restricts from $\mathbb{R}^2$ to a 1-form on $S^1$. At each point $p\in S^1$, the domain of $(dx\mid_{S^1})_p$ is $T_p(S^1)$ instead of $T_p(\mathbb{R}^2)$: $(dx\mid_{S^1})_p:T_p(S^1)\rightarrow\mathbb{R}$. At $p=(1, 0)$, a basis for the tangent space $T_p(S^1)$ is $\partial/\partial y$. Since $(dx)_p(\frac{\partial}{\partial y})=0$, we see that although $dx$ is a nowhere-vanishing 1-form on $\mathbb{R}^2$, it vanishes at $(1, 0)$ when restricted to $S^1$. Define a 1-form $\omega$ on $S^1$ by $\omega=\frac{dy}{x}$ on $U_x$ and $\omega=-\frac{dx}{y}$ on $U_y$, where $U_x=\{(x,y)\in S^1\mid x\neq 0\}$ and $U_y=\{(x,y)\in S^1\mid y\neq 0\}$.

I understand $\omega$ is $C^\infty$ and nowhere-vanishing. I want to understand why $\omega$ on $S^1$ is the form $-y\,dx+x\,dy$ of the example below:

Example 17.15 (A 1-form on the circle). The velocity vector field of the unit circle $c(t)=(x,y)=(\cos t, \sin t)$ in $\mathbb{R}^2$ is $c'(t)=(-\sin t, \cos t)=(-y, x)$. Thus $X=-y\frac{\partial}{\partial x}+x\frac{\partial}{\partial y}$ is a $C^\infty$ vector field on the unit circle $S^1$. What this notation means is that if $x,y$ are the standard coordinates on $\mathbb{R}^2$ and $i:S^1\hookrightarrow\mathbb{R}^2$ is the inclusion map, then at a point $p=(x,y)\in S^1$, one has $i_\ast X_p=-y\,\partial/\partial x\mid_p+x\,\partial/\partial y\mid_p$, where $\partial/\partial x\mid_p$ and $\partial/\partial y\mid_p$ are tangent vectors at $p$ in $\mathbb{R}^2$. Then if $\omega=-y\,dx+x\,dy$ on $S^1$, then $\omega(X)\equiv 1$.
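For reference, the identification follows from differentiating the circle's defining equation: $x^2+y^2=1$ gives $x\,dx+y\,dy=0$ on $S^1$, so

```latex
\begin{aligned}
\text{on } U_x\ (x \neq 0):\quad dx = -\frac{y}{x}\,dy
  \;\Longrightarrow\; -y\,dx + x\,dy
  = \frac{y^2}{x}\,dy + x\,dy = \frac{x^2+y^2}{x}\,dy = \frac{dy}{x},\\
\text{on } U_y\ (y \neq 0):\quad dy = -\frac{x}{y}\,dx
  \;\Longrightarrow\; -y\,dx + x\,dy
  = -y\,dx - \frac{x^2}{y}\,dx = -\frac{x^2+y^2}{y}\,dx = -\frac{dx}{y},
\end{aligned}
```

which are exactly the two local expressions defining $\omega$.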

partial differential equations – Solving Black-Scholes PDE for any non path dependent arbitrary final condition

Good evening,

I’m currently working on the following problem for my PDE class and I would like an opinion on it.

Let’s consider the Black-Scholes model with time-varying volatility, $\sigma = \sigma(t)$, and time-varying risk-free rate, $r=r(t)$:

$$ V_t + \frac{\sigma^2(t)}{2}S^2 V_{SS} + r(t)S V_S - r(t)V = 0, \quad S>0,\ 0<t<T $$

And the following final condition: $$V(S,T) = \phi(S), \quad S>0$$ where $\phi$ represents the option’s payoff.

I started by considering the change of variables $$ S = e^x, \qquad t = T - \theta. $$ This allowed me to consider the following functions:

$$ U(x,\theta) = V(e^x,T-\theta), \quad \hat\sigma(\theta) = \sigma(T-\theta), \quad \hat r(\theta) = r(T-\theta). $$ This also turned my final condition into an initial condition, $U(x,0) = \phi(e^x)$, and I derived the following transformed equation:

$$ U_{\theta} = \frac{\hat\sigma^2(\theta)}{2}U_{xx} + \Big(\hat r(\theta) - \frac{\hat\sigma^2(\theta)}{2}\Big)U_x - \hat r(\theta)U, \quad x \in \mathbb{R},\ 0 < \theta < T $$

Then, I introduced a new time variable, $$ \tau(\theta) = \frac{1}{2} \int_{0}^{\theta} \hat\sigma^2(\xi)\,d\xi. $$ I managed to prove that this function is a bijection from the interval $(0,T)$ to an interval $(0,\Upsilon)$. Therefore $\tau$ is invertible and we can write $\theta = \theta(\tau)$.

With this new time variable, I defined the functions $$ R(\tau) = \hat r(\theta(\tau)), \quad \Sigma(\tau) = \hat\sigma(\theta(\tau)), $$ which then allowed me to define $$ k(\tau) = \frac{2R(\tau)}{\Sigma^2(\tau)}. $$

Given $u(x,\tau) = U(x,\theta(\tau))$, I derived the following new equation: $$u_{\tau} = u_{xx} + (k(\tau)-1)u_x - k(\tau)u, \quad x \in \mathbb{R},\ 0 < \tau < \Upsilon. $$

I then defined the following “updating factor”, $$d(\tau) = e^{\int_{0}^{\tau}k(\xi)\,d\xi},$$ and a new function $$ v(x,\tau) = d(\tau)u(x,\tau). $$ This new function allowed me to derive $$v_\tau = v_{xx} + (k(\tau)-1)v_x, \quad x \in \mathbb{R},\ 0 < \tau < \Upsilon. $$

I then solved the following PDE problem:

$$ \psi_\tau = (k(\tau)-1)\psi_x, \quad x \in \mathbb{R},\ 0<\tau<\Upsilon, $$
$$ \psi(x,0) = x. $$

This problem has the solution $$\psi(x,\tau) = x + \int_{0}^{\tau} \big(k(\xi)-1\big)\,d\xi. $$

With this $\psi$, writing $y = \psi(x,\tau)$, I made a further transformation, $$v(x,\tau) = w(\psi(x,\tau),\tau). $$ This transformation allowed me to arrive at the heat equation, $$w_\tau = w_{yy}, $$ with the initial condition $$w(y,0) = \phi(e^y).$$
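For reference, once the heat equation is reached, the standard Gaussian-kernel representation solves it with this initial condition:

```latex
w(y,\tau) \;=\; \frac{1}{\sqrt{4\pi\tau}}
  \int_{-\infty}^{\infty} e^{-\frac{(y-s)^2}{4\tau}}\, \phi(e^{s})\, ds,
  \qquad 0 < \tau < \Upsilon,
```

after which $V$ can be recovered by undoing the chain of substitutions $w \to v \to u \to U \to V$.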

Having all of these transforms and functions, my main goal is to solve the first problem, given all this information above.

$$ V_t + \frac{\sigma^2(t)}{2}S^2 V_{SS} + r(t)S V_S - r(t)V = 0, \quad S>0,\ 0<t<T, $$ $$V(S,T) = \phi(S), \quad S>0. $$

To keep this post from being twice as long as it already is, I won’t spell out the reasoning behind these steps, but I’m glad to provide it if needed.

My question is the following: should I start by solving the heat equation with a Fourier transform and then reverse each transform I’ve done so far, one by one? Or is there a simpler way to solve this? I’ve also looked into the Carr-Madan decomposition for this matter, but I haven’t learned that yet.

I was hoping for some kind of clue to give me a starting point, because I’m really lost in all of this “mess”. I really appreciate it if you have read this far, and I apologize for the long post.

Thank you!

differential equations – boundary values with DSolve and plotting

I have been examining https://library.wolfram.com/infocenter/Books/8509/DifferentialEquationSolvingWithDSolve.pdf but cannot resolve my syntax issue:

eqn = 0.1*y''[x] + 2 y'[x] + 2 y[x] == 0;
sol = DSolve[{eqn, y[0] == 0, y[1] == 1}, y[x], x]

but this yields

(screenshot of the DSolve output)

I am not sure why it writes “True, True” instead of applying the boundary conditions.
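A likely culprit is the inexact coefficient 0.1: symbolic solvers generally behave better with exact input, so it is worth trying 1/10 in place of 0.1 and applying N afterwards. As a cross-check outside Mathematica, SymPy solves the same two-point boundary-value problem with an exact coefficient:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# same BVP, but with the exact coefficient 1/10 instead of the machine number 0.1
ode = sp.Eq(sp.Rational(1, 10) * y(x).diff(x, 2) + 2 * y(x).diff(x) + 2 * y(x), 0)
sol = sp.dsolve(ode, y(x), ics={y(0): 0, y(1): 1})   # both conditions applied
```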

differential equations – time-dependent hamiltonian with random numbers

I have a Hamiltonian (Z) in matrix form. I solved it for time-independent random real numbers; now I want to introduce time dependence in such a way that at any time the random real numbers change within the range {-Sqrt[3 \[Sigma]2], Sqrt[3 \[Sigma]2]}. Here is my code:

Nmax = 100; (*Number of sites*)

tini = 0; (*initial time*)

tmax = 200; (*maximal time*)

\[Sigma]2 = 0.1; (*Variance*)

n0 = 50; (*initial condition*)

ra = 1; (*coupling range*)

\[Psi]ini = Table[KroneckerDelta[n0 - i], {i, 1, Nmax}];

RR = RandomReal[{-Sqrt[3*\[Sigma]2], Sqrt[3*\[Sigma]2]}, Nmax];

Z = Table[
    Sum[KroneckerDelta[i - j + k], {k, 1, ra}] + 
     Sum[KroneckerDelta[i - j - k], {k, 1, ra}], {i, 1, Nmax}, {j, 1, 
     Nmax}] + DiagonalMatrix[RR];

usol = NDSolveValue[{I D[\[Psi][t], t] == 
     Z.\[Psi][t], \[Psi][0] == \[Psi]ini}, \[Psi], {t, tini, tmax}];

What can I do to introduce this time dependence and solve the differential equation (usol)? I hope my question is clear.
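One common reading of "the random numbers change in time" is piecewise-constant disorder: redraw the diagonal every interval dt and integrate segment by segment (in Mathematica this could be a loop over NDSolveValue calls, each restarted from the previous final state). A scaled-down sketch of the idea in Python/SciPy; the chain length, time span, and redraw interval dt are my own assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
nmax, ra, sigma2 = 20, 1, 0.1        # smaller chain than the post's 100 sites
n0, t_end, dt = 10, 2.0, 0.5         # redraw the disorder every dt (an assumption)
amp = np.sqrt(3 * sigma2)

# hopping part of Z: ones on the first `ra` off-diagonals
hop = sum(np.eye(nmax, k=d) + np.eye(nmax, k=-d) for d in range(1, ra + 1))

psi = np.zeros(nmax, dtype=complex)
psi[n0 - 1] = 1.0                    # KroneckerDelta initial state

t = 0.0
while t < t_end:
    Z = hop + np.diag(rng.uniform(-amp, amp, nmax))   # fresh diagonal disorder
    # i psi' = Z psi  =>  psi' = -i Z psi on this segment
    sol = solve_ivp(lambda s, y: -1j * (Z @ y), (t, t + dt), psi,
                    rtol=1e-9, atol=1e-12)
    psi = sol.y[:, -1]               # restart the next segment from here
    t += dt
```

Since each segment's Hamiltonian is Hermitian, the norm of psi should be conserved, which makes a handy sanity check.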

differential equations – How to pose Dirichlet and Neumann BCs on same boundary?

Let’s look at the Laplace equation on a rectangular domain:

Eq0 = Inactive[Laplacian][u[x, y], {x, y}]
\[CapitalOmega] = Rectangle[{0, 0}, {2, 1}]

and try to solve it with various pairs of Dirichlet and Neumann BCs on the horizontal boundaries:

BCD0 = DirichletCondition[u[x, y] == 0, y == 0]
BCD1 = DirichletCondition[u[x, y] == 1, y == 1]
BCN0 = NeumannValue[1, y == 0]
BCN1 = NeumannValue[1, y == 1]

NDSolve yields a reasonable solution when the Dirichlet and Neumann BCs are posed on different edges of the rectangle. For example:

u1 = NDSolveValue[{Eq0 == BCN1, BCD0}, 
  u, {x, y} \[Element] \[CapitalOmega]]
ContourPlot[u1[x, y], {x, y} \[Element] \[CapitalOmega], 
 AspectRatio -> Automatic, PlotLegends -> Automatic]

(contour plot of u1)

However, it fails if the BCs are set on the same edge:

u2 = NDSolveValue[{Eq0 == BCN0, BCD0}, 
  u, {x, y} \[Element] \[CapitalOmega]]
ContourPlot[u2[x, y], {x, y} \[Element] \[CapitalOmega], 
 AspectRatio -> Automatic, PlotLegends -> Automatic]


Nevertheless, it is obvious that a solution exists and is equal to u[x_, y_] = y.

My question is: is it possible to set two BCs on the same edge of the rectangle?
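As far as I know, the FEM formulation behind NDSolve expects boundary data of different types on different portions of the boundary: two conditions on the same edge turn the elliptic problem into a Cauchy problem, which is ill-posed in general even when (as here) a solution exists. Just to illustrate that the Cauchy data on y == 0 does determine u = y, here is a naive finite-difference continuation in Python (a sketch, not a recommended method, since this marching amplifies noise for general data):

```python
import numpy as np

# march Laplace's equation upward from y = 0 using both pieces of Cauchy data
# there: u(x, 0) = 0 and u_y(x, 0) = 1; exact for the solution u = y
nx, ny, h = 21, 11, 0.1
u = np.zeros((ny, nx))                 # u[j, i] ~ u(x_i, y_j)
u[1] = u[0] + h * 1.0                  # first row from the Neumann value

for j in range(1, ny - 1):
    uxx = np.zeros(nx)
    uxx[1:-1] = (u[j, 2:] - 2.0 * u[j, 1:-1] + u[j, :-2]) / h**2
    # u_yy = -u_xx  =>  central difference in y gives the next row
    u[j + 1] = 2.0 * u[j] - u[j - 1] - h**2 * uxx

# for this data the marched rows reproduce u(x, y) = y
```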

differential equations – Series solution of an ODE with nonpolynomial coefficients

Basically, I have a second-order differential equation for g(y) (given below as odey) and I want to obtain a series solution at $y=\infty$, where g(y) should vanish. That would be easy if the ODE had polynomial coefficients, since then the Frobenius method could be used. But in my case the coefficients are not polynomial because of the presence of powers proportional to p (which can take positive non-integer values). I have also expanded ir at infinity and kept terms up to first order (given by irInf), since if I use ir directly, the ODE becomes a mess later.

ir[y_] := Sqrt[-5 + y^2 + (3 2^(1/3))/(2 + 10 y^2 - y^4 + Sqrt[64 y^2 + 48 y^4 + 12 y^6 + y^8])^(1/3) - (6 2^(1/3) y^2)/(2 + 10 y^2 - y^4 + Sqrt[64 y^2 + 48 y^4 + 12 y^6 + y^8])^(1/3) + (3 (2 + 10 y^2 - y^4 + Sqrt[64 y^2 + 48 y^4 + 12 y^6 + y^8])^(1/3))/2^(1/3)]
dir[y_] := D[ir[x], x] /. x -> y
irInf[y_] = Series[ir[y], {y, Infinity, 1}] // Normal

p = 1/10; (*p >= 0*)
odey = (2 irInf[y] - p irInf[y]^(1 - p)) D[irInf[y], y] g'[y] + irInf[y]^2 g''[y] - l (l + 1) g[y] // Simplify

What steps can I take to solve this? Thanks.
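Since ir(y) behaves like y to leading order at infinity, a natural first step is the leading-order equation, which is of Euler type: $y^2 g'' + 2y g' - l(l+1)g = 0$ (the term carrying p is subleading for p > 0). Its decaying branch is $g \sim y^{-l-1}$, which suggests seeking the series solution as $g(y) = y^{-l-1}\,(c_0 + c_1 y^{-s} + \cdots)$, where the correction exponents must accommodate the non-integer powers coming from p. A quick SymPy check of the leading order:

```python
import sympy as sp

y, l = sp.symbols('y l', positive=True)

# leading order at y -> infinity: ir(y) ~ y, so the ODE reduces to the
# Euler equation  y^2 g'' + 2 y g' - l (l + 1) g = 0
g = y**(-l - 1)                      # candidate decaying solution
residual = sp.simplify(y**2 * sp.diff(g, y, 2)
                       + 2 * y * sp.diff(g, y)
                       - l * (l + 1) * g)
```

The residual vanishing identically confirms $y^{-l-1}$ as the decaying leading-order behavior; the subleading terms of irInf and the p-term then generate the corrections.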