## ca.classical analysis and odes – Almost-differential functional equations

The ODE $$y'(x)+P(x)y(x)=Q(x)$$ has solution $$I(x)y(x)=\int I(x)Q(x)\,dx$$ where $$I(x)=\exp\int P(x)\,dx$$. Equivalently, $$Y(x)+P(x)\int_0^x Y(t)\,dt=Q(x)\tag{1}$$ has solution $$Y(x)=\frac{d}{dx}\frac{\int I(x)Q^*(x)\,dx}{I(x)}=Q^*(x)-\frac{I'(x)\int I(x)Q^*(x)\,dx}{I(x)^2}$$ where $$Y=y'$$ and $$Q^*(x)=Q(x)+P(x)y(0)$$. Equation $$(1)$$ gives the limiting case, where $$\int_0^x Y(t)\,dt=\lim_{n\to\infty}\frac{x}{n}\sum_{k=0}^{n}Y\left(\frac{kx}{n}\right).$$ Given $$P(x),Q(x)$$, what can be said about the solutions of the functional equation $$Y(nx)+\frac{xP(x)}{n}\sum_{k=0}^{n}Y(kx)=Q(x),\tag{2}$$ where $$n$$ is no longer under the limit? That is, what is the behaviour of the families of solutions to $$(2)$$ as $$n$$ increases?
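The limiting Riemann-sum identity can be sanity-checked numerically; a minimal sketch (the choices $$Y(t)=\cos t$$ and $$x=1$$ are illustrative, not from the question):

```python
import math

Y = math.cos          # illustrative choice of Y
x = 1.0
exact = math.sin(x)   # integral of cos(t) from 0 to x

# (x/n) * sum_{k=0}^{n} Y(k x / n) should approach the integral as n grows
for n in (10, 100, 1000):
    approx = (x / n) * sum(Y(k * x / n) for k in range(n + 1))
    print(n, abs(approx - exact))
```

The printed error shrinks roughly like $$1/n$$, consistent with $$(1)$$ being the $$n\to\infty$$ limit of a discretized equation.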

(Cross-posted on MathSE but received no input.)

## dynamic – how to solve the two differential equations?

I want to solve the following equations to find k when r max is 15, 17, and 19, and plot tension (k x) versus angle (theta). Is there any way to solve this if it has many unknowns?

```
m g Sin[\[Theta][t]] - (m (r''[t] - r[t] \[Theta]'[t]^2)) == k (r[t] - 14)   (* eq. 1 *)

g Cos[\[Theta][t]] == r[t] \[Theta]''[t] + 2 r'[t] \[Theta]'[t]              (* eq. 2 *)
```

m = 81, g = 9.81, r0 = 15, theta0 = ArcSin[(rmax - 12)/15], where rmax is 15, 17, or 19.
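For what it's worth, here is a rough sketch (in Python, classic RK4) of how one could integrate this system for a trial `k` and collect (angle, tension) pairs for the plot. The value `k = 500` and the time span are made-up placeholders, not from the question, and the signs follow the two equations as written above:

```python
import math

# r''     = g*sin(theta) + r*theta'^2 - (k/m)*(r - 14)   (from eq. 1)
# theta'' = (g*cos(theta) - 2*r'*theta') / r              (from eq. 2)
m, g, k = 81.0, 9.81, 500.0          # k is a trial value, not from the question
rmax = 15.0
r0, th0 = 15.0, math.asin((rmax - 12.0) / 15.0)

def deriv(s):
    r, rd, th, thd = s
    rdd = g * math.sin(th) + r * thd ** 2 - (k / m) * (r - 14.0)
    thdd = (g * math.cos(th) - 2.0 * rd * thd) / r
    return [rd, rdd, thd, thdd]

def rk4_step(s, h):
    k1 = deriv(s)
    k2 = deriv([si + 0.5 * h * ki for si, ki in zip(s, k1)])
    k3 = deriv([si + 0.5 * h * ki for si, ki in zip(s, k2)])
    k4 = deriv([si + h * ki for si, ki in zip(s, k3)])
    return [si + h / 6.0 * (a + 2 * b + 2 * c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

state, h = [r0, 0.0, th0, 0.0], 1e-3
trace = []   # (theta, tension) pairs for the tension-vs-angle plot
for _ in range(2000):
    state = rk4_step(state, h)
    trace.append((state[2], k * (state[0] - 14.0)))  # tension = k*(r - 14)

print(len(trace), all(math.isfinite(v) for p in trace for v in p))
```

Repeating this for each rmax (hence each theta0) and each candidate `k` gives the family of tension-vs-angle curves to compare.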

thank you very much.

## differential equations – optimization of second order ODE with more than one parameter

I want to optimize a second-order ODE with more than one parameter:

```
Exp[2*alpha*x]*D[theta[x], {x, 2}] + (2*alpha + A)*Exp[2*alpha*x]*
  D[theta[x], x] - B^2*(theta[x] - thetaa) - CC*(theta[x] - thetaa)^2 + DD*Exp[2*alpha*x] == 0;
```

With boundary conditions

```
theta[1] == 1, theta'[0] == 0.20;
```

Here `A`, `B`, `CC`, `DD` are parameters (`C` and `D` are protected built-in symbols in Mathematica, so they are renamed here). I want to maximize `theta[x]` for different parameters.
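Since one condition sits at each end (`theta'[0]` given, `theta[1]` given), a classical approach is shooting on the unknown `theta[0]`. A hedged Python sketch of that idea; the parameter values (`alpha`, `A`, `B`, `C`, `D`, `thetaa`) are made-up placeholders, not from the question:

```python
import math

# illustrative parameter values (assumptions, not from the question)
alpha, A, B, C_, D_, thetaa = 0.1, 0.5, 1.0, 0.1, 0.2, 0.3

def rhs(x, th, thp):
    # solve the ODE for theta'' after dividing through by Exp[2 alpha x]
    e = math.exp(-2.0 * alpha * x)
    return e * (B**2 * (th - thetaa) + C_ * (th - thetaa)**2) - D_ \
           - (2.0 * alpha + A) * thp

def theta_at_1(th0, n=2000):
    # RK4 on the IVP theta(0) = th0, theta'(0) = 0.20; returns theta(1)
    h, x, th, thp = 1.0 / n, 0.0, th0, 0.20
    for _ in range(n):
        k1t, k1p = thp, rhs(x, th, thp)
        k2t, k2p = thp + 0.5*h*k1p, rhs(x + 0.5*h, th + 0.5*h*k1t, thp + 0.5*h*k1p)
        k3t, k3p = thp + 0.5*h*k2p, rhs(x + 0.5*h, th + 0.5*h*k2t, thp + 0.5*h*k2p)
        k4t, k4p = thp + h*k3p,     rhs(x + h,     th + h*k3t,     thp + h*k3p)
        th  += h / 6.0 * (k1t + 2*k2t + 2*k3t + k4t)
        thp += h / 6.0 * (k1p + 2*k2p + 2*k3p + k4p)
        x += h
    return th

# bisection on theta(0) so that theta(1) == 1
lo, hi = -5.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if theta_at_1(mid) < 1.0:
        lo = mid
    else:
        hi = mid
theta0 = 0.5 * (lo + hi)
print(abs(theta_at_1(theta0) - 1.0) < 1e-6)
```

Once the BVP can be solved for a given parameter set, maximizing `theta[x]` over parameters is an outer optimization loop around this inner solve (in Mathematica, `ParametricNDSolve` plus `NMaximize` would play the same roles).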

## partial differential equations – Solving Black-Scholes PDE for any non path dependent arbitrary final condition

Good evening,

I’m currently working on the following problem for my PDE class and I would like an opinion on it.

Let’s consider the Black-Scholes model with time-varying volatility, $$\sigma = \sigma(t)$$, and time-varying risk-free rate, $$r = r(t)$$:

$$V_t + \frac{\sigma^2(t)}{2}S^2 V_{SS} + r(t)S\,V_S - r(t)V = 0,\qquad S>0,\quad 0<t<T$$

And the following final condition: $$V(S,T) = \phi(S),\qquad S>0,$$ where $$\phi$$ represents the option’s payoff.

I started by considering the change of variables $$S = e^x,\qquad t = T - \theta.$$ This allowed me to consider the following functions:

$$U(x,\theta) = V(e^x,\,T-\theta),\qquad \hat\sigma(\theta) = \sigma(T-\theta),\qquad \hat r(\theta) = r(T-\theta).$$ This also turned my final condition into an initial condition, $$U(x,0) = \phi(e^x)$$, and I derived the following transformed equation:

$$U_{\theta} = \frac{\hat\sigma^2(\theta)}{2}U_{xx} + \Big(\hat r(\theta) - \frac{\hat\sigma^2(\theta)}{2}\Big)U_x - \hat r(\theta)U,\qquad x \in \mathbb{R},\quad 0 < \theta < T$$

Then, I introduced a new time variable, $$\tau(\theta) = \frac{1}{2} \int_{0}^{\theta} \hat\sigma^2(\xi)\,d\xi.$$ I managed to prove that this function is a bijection from the interval $$(0,T)$$ to an interval $$(0,\Upsilon)$$. Therefore, $$\tau$$ is invertible and we can write $$\theta = \theta(\tau)$$.

With this new time variable, I defined the functions $$R(\tau) = \hat r(\theta(\tau)),\qquad \Sigma(\tau) = \hat\sigma(\theta(\tau)),$$ which then allowed me to define $$k(\tau) = \frac{2R(\tau)}{\Sigma^2(\tau)}.$$

Given $$u(x,\tau) = U(x,\theta(\tau))$$, I derived the new equation $$u_{\tau} = u_{xx} + (k(\tau)-1)u_x - k(\tau)u,\qquad x \in \mathbb{R},\quad 0 < \tau < \Upsilon.$$

I then defined the following “updating factor”, $$d(\tau) = e^{\int_{0}^{\tau}k(\xi)\,d\xi},$$ and a new function $$v(x,\tau) = d(\tau)u(x,\tau).$$ This new function allowed me to derive the transformed equation $$v_\tau = v_{xx} + (k(\tau)-1)v_x,\qquad x \in \mathbb{R},\quad 0 < \tau < \Upsilon.$$

I then solved the following PDE problem,

$$\psi_\tau = (k(\tau)-1)\psi_x,\qquad x \in \mathbb{R},\quad 0 < \tau < \Upsilon$$
$$\psi(x,0) = x$$

This problem has the solution $$\psi(x,\tau) = x + \int_{0}^{\tau} \big(k(\xi)-1\big)\,d\xi.$$

With this $$\psi$$, writing $$y = \psi$$, I made a new transformation with the function $$v(x,\tau) = w(\psi(x,\tau),\tau).$$ This transformation allowed me to reach the heat equation, $$w_\tau = w_{yy},$$ with the initial condition $$w(y,0) = \phi(e^y).$$
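For reference, once the problem is reduced to the heat equation, the standard heat-kernel representation applies; a sketch of the usual formula (to be checked against your conventions), followed by undoing each transform:

```latex
% heat-kernel representation of the solution of w_\tau = w_{yy}, w(y,0) = \phi(e^y):
w(y,\tau) = \frac{1}{\sqrt{4\pi\tau}} \int_{-\infty}^{\infty}
            e^{-\frac{(y-z)^2}{4\tau}}\, \phi(e^{z}) \, dz
% undoing the transforms, in order:
% v(x,\tau) = w(\psi(x,\tau),\tau), \quad u = v / d(\tau),
% U(x,\theta) = u(x,\tau(\theta)), \quad V(S,t) = U(\ln S,\, T - t)
```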

Having all of these transforms and functions, my main goal is to solve the first problem, given all this information above.

$$V_t + \frac{\sigma^2(t)}{2}S^2 V_{SS} + r(t)S\,V_S - r(t)V = 0,\qquad S>0,\quad 0<t<T$$
$$V(S,T) = \phi(S),\qquad S>0$$

In order not to make this post twice as long as it already is, I won’t spell out the reasoning behind these derivations, but I’m glad to provide it if needed.

My question here is the following: should I start by solving the heat equation, by applying a Fourier transform, and reverse each transform I’ve done so far one by one? Or is there a simpler way to solve this? I’ve also looked into the Carr-Madan decomposition for this matter, but I haven’t learned that yet.

I’m looking for some kind of clue to serve as a starting point, because I’m really lost in all of this “mess”. I really appreciate it if you have read this far, and I apologize for the long post.

Thank you!

## differential equations – boundary values with DSolve and plotting

I have been examining https://library.wolfram.com/infocenter/Books/8509/DifferentialEquationSolvingWithDSolve.pdf but cannot resolve my syntax issue:

```
eqn = 0.1*y''[x] + 2 y'[x] + 2 y[x] == 0;
sol = DSolve[{eqn, y[0] == 0, y[1] == 1}, y[x], x]
```

but this yields a result containing “True, True”.

I am not sure why “True, True” appears instead of the boundary conditions being applied.
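Independently of the DSolve issue, this BVP has an elementary closed form that can be used to cross-check whatever Mathematica returns; a quick Python sketch (the equation is rescaled to `y'' + 20 y' + 20 y == 0`):

```python
import math

# characteristic roots of y'' + 20 y' + 20 y == 0 (the ODE multiplied by 10)
r1 = -10.0 + 4.0 * math.sqrt(5.0)
r2 = -10.0 - 4.0 * math.sqrt(5.0)

# y(x) = a (e^{r1 x} - e^{r2 x}) already satisfies y(0) == 0;
# choose a so that y(1) == 1
a = 1.0 / (math.exp(r1) - math.exp(r2))

def y(x):   return a * (math.exp(r1 * x) - math.exp(r2 * x))
def yp(x):  return a * (r1 * math.exp(r1 * x) - r2 * math.exp(r2 * x))
def ypp(x): return a * (r1**2 * math.exp(r1 * x) - r2**2 * math.exp(r2 * x))

assert abs(y(0.0)) < 1e-12 and abs(y(1.0) - 1.0) < 1e-12   # both BCs hold
residual = max(abs(0.1 * ypp(x) + 2 * yp(x) + 2 * y(x)) for x in (0.2, 0.5, 0.8))
assert residual < 1e-9                                     # the ODE holds
print("closed form verified")
```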

## differential equations – time dependent hamiltonian with random numbers

I have a Hamiltonian `Z` in matrix form. I solved it for time-independent random real numbers; now I want to introduce time dependence in such a way that at any time the random real numbers change within the range `{-Sqrt[3 \[Sigma]2], Sqrt[3 \[Sigma]2]}`. Here is my code:

```
Nmax = 100;       (*number of sites*)
tini = 0;         (*initial time*)
tmax = 200;       (*maximal time*)
\[Sigma]2 = 0.1;  (*variance*)
n0 = 50;          (*initial condition*)
ra = 1;           (*coupling range*)

\[Psi]ini = Table[KroneckerDelta[n0 - i], {i, 1, Nmax}];

RR = RandomReal[{-Sqrt[3*\[Sigma]2], Sqrt[3*\[Sigma]2]}, Nmax];

Z = Table[
     Sum[KroneckerDelta[i - j + k], {k, 1, ra}] +
      Sum[KroneckerDelta[i - j - k], {k, 1, ra}], {i, 1, Nmax}, {j, 1,
      Nmax}] + DiagonalMatrix[RR];

usol = NDSolveValue[{I D[\[Psi][t], t] ==
     Z.\[Psi][t], \[Psi][0] == \[Psi]ini}, \[Psi], {t, tini, tmax}];
```

What can I do to introduce this time dependence and solve the differential equation (`usol`)? I hope my question is clear.
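One common way to get this kind of time dependence is to make the diagonal noise piecewise constant in time, redrawing it once per interval (in Mathematica this could be built with a seeded, interval-indexed function fed to `NDSolve`). A small pure-Python sketch of the idea, with a reduced `Nmax` and a plain Euler step; all sizes here are illustrative, not from the question:

```python
import math, random

Nmax, sigma2, ra, n0 = 10, 0.1, 1, 5
bound = math.sqrt(3 * sigma2)

def noise(t):
    # piecewise-constant diagonal noise, redrawn once per unit time;
    # seeding with the interval index makes RR(t) a well-defined function of t
    rng = random.Random(int(t))
    return [rng.uniform(-bound, bound) for _ in range(Nmax)]

def H(t):
    # hopping of range ra plus the time-dependent random diagonal
    RR = noise(t)
    return [[(1 if 0 < abs(i - j) <= ra else 0) + (RR[i] if i == j else 0.0)
             for j in range(Nmax)] for i in range(Nmax)]

# Euler step for I psi'(t) == Z(t).psi(t), i.e. psi' = -I Z(t) psi
psi = [1.0 + 0.0j if i == n0 - 1 else 0.0j for i in range(Nmax)]
dt, steps = 2e-3, 1000
for s in range(steps):
    Ht = H(s * dt)
    psi = [p - 1j * dt * sum(Ht[i][j] * psi[j] for j in range(Nmax))
           for i, p in enumerate(psi)]

norm = sum(abs(p) ** 2 for p in psi)
print(0.9 < norm < 1.1)  # Euler conserves the norm only approximately
```

The same structure carries over to the Mathematica code: replace the fixed `RR` by a function of `t` inside `Z`, and pass `Z[t]` to `NDSolveValue`.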

## How to solve non-linear equations coming out of lagrange multiplier?

Using Lagrange multipliers I obtained the following system of equations:

\begin{align*}
x+y+z &= 20\\
x^2 + y^2 + z^2 &= 200\\
yz &= \lambda + 2\mu x\\
xz &= \lambda + 2\mu y\\
xy &= \lambda + 2\mu z
\end{align*}

I am struggling to solve this system of equations. I have managed to relate $$\lambda$$ and $$\mu$$. First I square the first equation and substitute the second equation:

$$(x+y+z)^2 = 200 + 2xy + 2xz + 2yz = 400$$

$$xy+xz+yz = 100$$

Then I add the last three equations

$$xy+xz+yz = 3\lambda + 2\mu(x+y+z) = 3\lambda + 40\mu = 100$$

$$3\lambda = 100 - 40\mu$$

But then I am stuck.
I have confirmed that there are 6 distinct solutions to this system. What is the strategy when trying to solve this kind of system of equations?
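One standard move (a sketch, not claiming it is the intended route): subtract the multiplier equations pairwise, e.g. $$yz - xz = 2\mu(x-y)$$, which factors as $$(y-x)(z+2\mu)=0$$; by symmetry, at a generic solution two of the variables coincide. Taking $$x=y$$ reduces the two constraints to a quadratic, and permutations then account for all six points:

```python
import itertools, math

# Case x == y (the other cases are permutations by symmetry):
# 2x + z = 20 and 2x^2 + z^2 = 200, with z = 20 - 2x, give
# 6x^2 - 80x + 200 = 0
a, b, c = 6.0, -80.0, 200.0
disc = math.sqrt(b * b - 4 * a * c)
roots = [(-b - disc) / (2 * a), (-b + disc) / (2 * a)]  # the two x values

solutions = set()
for x in roots:
    z = 20 - 2 * x
    # all distinct arrangements of (x, x, z)
    for perm in set(itertools.permutations((round(x, 9), round(x, 9), round(z, 9)))):
        solutions.add(perm)

# every candidate satisfies both constraints
for (x, y, z) in solutions:
    assert abs(x + y + z - 20) < 1e-6
    assert abs(x * x + y * y + z * z - 200) < 1e-6

print(len(solutions))  # the six distinct solution points
```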

## differential equations – How to pose Dirichlet and Neumann BCs on same boundary?

Let’s look at the Laplace equation in a rectangular area:

```
Eq0 = Inactive[Laplacian][u[x, y], {x, y}]
\[CapitalOmega] = Rectangle[{0, 0}, {2, 1}]
```

and try to solve it with various pairs of Dirichlet and Neumann BCs on the horizontal boundaries:

```
BCD0 = DirichletCondition[u[x, y] == 0, y == 0]
BCD1 = DirichletCondition[u[x, y] == 1, y == 1]
BCN0 = NeumannValue[1, y == 0]
BCN1 = NeumannValue[1, y == 1]
```

NDSolve yields a reasonable solution when the Dirichlet and Neumann BCs are posed on different edges of the rectangle. For example:

```
u1 = NDSolveValue[{Eq0 == BCN1, BCD0},
  u, {x, y} \[Element] \[CapitalOmega]]
ContourPlot[u1[x, y], {x, y} \[Element] \[CapitalOmega],
  AspectRatio -> Automatic, PlotLegends -> Automatic]
```

However, it fails if the BCs are set on the same edge:

```
u2 = NDSolveValue[{Eq0 == BCN0, BCD0},
  u, {x, y} \[Element] \[CapitalOmega]]
ContourPlot[u2[x, y], {x, y} \[Element] \[CapitalOmega],
  AspectRatio -> Automatic, PlotLegends -> Automatic]
```

Nevertheless, it is obvious that a solution exists and is equal to `u[x_, y_] = y`.

My question is: is it possible to set two BCs on the same edge of the rectangle?
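As a side check (in Python, independent of the FEM question), the candidate `u[x_, y_] = y` is indeed harmonic and matches both pieces of data on the edge `y == 0`, so the continuous problem is at least consistent:

```python
# u(x, y) = y: check Laplace's equation and both conditions on the edge y == 0
def u(x, y):
    return y

h = 1e-4
for (x0, y0) in [(0.5, 0.5), (1.3, 0.2)]:
    lap = (u(x0 + h, y0) + u(x0 - h, y0) + u(x0, y0 + h) + u(x0, y0 - h)
           - 4 * u(x0, y0)) / h**2
    assert abs(lap) < 1e-6        # five-point Laplacian vanishes

assert u(0.7, 0.0) == 0           # Dirichlet data u == 0 on y == 0
dy = (u(0.7, h) - u(0.7, 0.0)) / h
assert abs(dy - 1.0) < 1e-9       # normal-derivative data u_y == 1 on y == 0
print("consistent")
```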

## differential equations – Series solution of an ODE with nonpolynomial coefficients

Basically, I have a second-order differential equation for `g(y)` (given below as `odey`) and I want to obtain a series solution at $$y=\infty$$, where `g(y)` should vanish. That would be easy if the ODE had polynomial coefficients, since then the Frobenius method could be used. But in my case the coefficients are not polynomial because of the presence of powers proportional to `p` (which can take positive non-integer values). I have also expanded `ir` at infinity and kept terms up to first order (giving `irInf`), since using `ir` directly would make the ODE a mess later.

```
ir[y_] := Sqrt[-5 + y^2 + (3 2^(1/3))/(2 + 10 y^2 - y^4 + Sqrt[64 y^2 + 48 y^4 + 12 y^6 + y^8])^(1/3) - (6 2^(1/3) y^2)/(2 + 10 y^2 - y^4 + Sqrt[64 y^2 + 48 y^4 + 12 y^6 + y^8])^(1/3) + (3 (2 + 10 y^2 - y^4 + Sqrt[64 y^2 + 48 y^4 + 12 y^6 + y^8])^(1/3))/2^(1/3)]
dir[y_] := D[ir[x], x] /. x -> y
irInf[y_] = Series[ir[y], {y, Infinity, 1}] // Normal

p = 1/10; (*p >= 0*)
odey = (2 irInf[y] - p irInf[y]^(1 - p)) D[irInf[y], y] g'[y] + irInf[y]^2 g''[y] - l (l + 1) g[y] // Simplify
```

What steps can I take to solve this? Thanks
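If `irInf[y]` grows like $$y$$ at leading order (an assumption here, worth checking against the actual `Series` output), then for large $$y$$ the ODE reduces to an Euler equation, which at least fixes the decaying exponent to target with a series ansatz:

```latex
% assuming irInf(y) ~ y as y -> infinity, the dominant balance of odey is
y^2 g''(y) + 2y\, g'(y) - l(l+1)\, g(y) \approx 0
% (the p irInf^{1-p} term is subdominant to 2 irInf for p > 0).
% This is an Euler equation: trying g = y^s gives s(s+1) = l(l+1),
% i.e. s = l or s = -(l+1); the branch vanishing at infinity is
g(y) \sim c\, y^{-(l+1)}, \qquad y \to \infty
```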
