Solve Partial Differential Equation with Neumann Boundary Condition

I am trying to solve the heat equation with certain boundary conditions (one-dimensional in space).
I tried it this way:

heatequation = D[T[x, t], t] - a D[T[x, t], {x, 2}] == 0;
mysol = NDSolve[{heatequation, T[0, t] == 4, D[1, 0][T[x, t]][0, t] == 100,
   T[x, 0] == 293}, T, {x, 0, 1}, {t, 0, 600}]

I get this error: 0 is not a valid variable.

Which zero is this about? The one in the Neumann boundary? The stack trace does not make it clear.

Thank you in advance for your help.
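For comparison, the same kind of problem can be prototyped without NDSolve. Below is a minimal explicit finite-difference sketch in Python, assuming $a=1$, a Dirichlet condition at $x=0$, and the Neumann condition $T_x=100$ at $x=1$ (whether that placement is what was intended is a guess; the grid and step sizes are illustrative choices, not from the post):

```python
import numpy as np

# Minimal explicit finite-difference sketch of T_t = a T_xx on x in [0, 1],
# assuming a Dirichlet condition T(0, t) = 4 at the left end and a Neumann
# condition T_x(1, t) = 100 at the right end, with T(x, 0) = 293.
# a = 1 and the grid/step sizes are illustrative choices.
a = 1.0
nx = 51
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / a              # below the explicit stability limit 0.5*dx^2/a
T = np.full(nx, 293.0)
T[0] = 4.0

for _ in range(20000):            # integrates to t = 20000*dt = 3.2
    Tn = T.copy()
    T[1:-1] = Tn[1:-1] + a * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    T[0] = 4.0                    # Dirichlet end
    T[-1] = T[-2] + 100.0 * dx    # one-sided Neumann: (T[-1]-T[-2])/dx = 100

# by t ~ 3 the profile is close to the steady state T(x) = 4 + 100*x
print(T[0], T[25], T[-1])
```

Note that imposing both a Dirichlet and a Neumann condition at the same end, as the NDSolve call above appears to do, would leave the other end unconstrained.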

differential equations – How to implement limit boundary condition in solving PDE

I have to solve a partial differential equation for a function $F(x,t)$ where one of the boundary condition is formulated in terms of a limit:

$\lim_{x\rightarrow +\infty} e^x\, \partial_x F(x,t)=0$

Is it possible to implement this as a boundary condition in NDSolve (or possibly NDSolve`FiniteDifferenceDerivative)?
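One standard numerical workaround (a sketch of the general idea, not NDSolve-specific) is to truncate the domain at some large $L$ and impose the condition there; since $e^L \neq 0$ for finite $L$, the truncated condition reduces to a plain Neumann condition:

```latex
e^{L}\,\partial_x F(L,t) = 0 \quad\Longleftrightarrow\quad \partial_x F(L,t) = 0 \qquad (L < \infty)
```

How well this approximates the original limit condition depends on how fast $\partial_x F$ actually decays relative to $e^{-x}$.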

at.algebraic topology – Surface separating the boundary of a cylinder

Let $M^2$ be a connected closed surface. Suppose there exists a smooth embedding of a connected closed surface $N$ into the interior of $M \times [0,1]$ such that $N$ separates $M \times \{0\}$ and $M \times \{1\}$.

If $N$ is homeomorphic to $M$, can we prove that the region bounded by $M \times \{0\}$ and $N$ is homeomorphic to $M \times [0,1]$?

If the conclusion is true, can we generalize it to the higher dimensional case?

Second order elliptic PDE problem with boundary conditions whose solutions depend continuously on the initial data

Consider the following problem
$$\begin{cases}
-\Delta u+cu=f, & x\in\Omega\\
u=g, & x\in\partial\Omega
\end{cases}$$

where $\Omega\subseteq\mathbb R^n$ is open with regular boundary, $c\geq0$ is a constant, $f\in L^2(\Omega)$ and $g$ is the trace of a function $G\in H^1(\Omega)$. If we consider $u$ a weak solution to this problem, and define $U=u-G\in H_0^1(\Omega)$, it is easy to see that $U$ is a weak solution to the following problem
$$\begin{cases}
-\Delta U+cU=f+\Delta G-cG, & x\in\Omega\\
U=0, & x\in\partial\Omega
\end{cases}$$

It is also easy to see that we can apply the Lax–Milgram theorem with the bilinear form
$$B(u,v)=\int_\Omega\left(\sum_{i=1}^n u_{x_i}v_{x_i}+cuv\right)$$
and the bounded linear functional
$$L_f(v)=\int_\Omega(f-cG)v-\int_\Omega\sum_{i=1}^n G_{x_i}v_{x_i}$$
to conclude that there exists a unique weak solution $U$ to the auxiliary problem defined above. If we define $u=U+G\in H^1(\Omega)$, it is clear that this function will be a solution to the original problem.

Now to the question: I would like to prove that this solution $u$ depends continuously on the initial data, that is, that there exists a constant $C>0$ such that
$$\lVert u\rVert_{H^1(\Omega)}\leq C\left(\lVert f\rVert_{L^2(\Omega)}+\lVert G\rVert_{H^1(\Omega)}\right)$$
I feel that the work I have done to prove that $L_f$ is bounded should be relevant for our purposes, because
$$\lVert u\rVert_{H^1(\Omega)}\leq\lVert U\rVert_{H^1(\Omega)}+\lVert G\rVert_{H^1(\Omega)}$$
and
$$\lVert U\rVert_{H^1(\Omega)}\leq C B(U,U)^{1/2}= C|L_f(U)|^{1/2}.$$
The problem is that I don't know how to manipulate $L_f(U)$ to obtain the result. I have managed to prove a completely useless inequality, for it involves the norm of $U$.

I would appreciate any kind of suggestion. Thanks in advance for your answers.

P.S. The problem is that a priori $\Delta G$ doesn't have to be in $L^2(\Omega)$, which makes it hard to use the $H^2$ regularity of $U$ (which would solve the problem instantly).
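For what it's worth, one route that avoids $\Delta G$ entirely is to bound $L_f$ directly against the $H^1$ norm (by Cauchy–Schwarz, term by term) and use the coercivity of $B$ on $H_0^1(\Omega)$, which follows from Poincaré's inequality with some constant $\beta>0$:

```latex
\beta\,\lVert U\rVert_{H^1(\Omega)}^2 \leq B(U,U) = L_f(U)
\leq \left(\lVert f\rVert_{L^2(\Omega)} + c\,\lVert G\rVert_{L^2(\Omega)}
+ \lVert\nabla G\rVert_{L^2(\Omega)}\right)\lVert U\rVert_{H^1(\Omega)}
```

Dividing by $\lVert U\rVert_{H^1(\Omega)}$ gives $\lVert U\rVert_{H^1(\Omega)}\leq C(\lVert f\rVert_{L^2(\Omega)}+\lVert G\rVert_{H^1(\Omega)})$, and the triangle inequality transfers this estimate to $u=U+G$.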

P.P.S. Also posted this question on SE.

fa.functional analysis – Poincaré inequality for $H^2$ functions satisfying homogeneous Robin boundary conditions

Let $\Omega\subset\mathbb{R}^3$ be a bounded smooth domain. In general, for a Poincaré inequality of the type
$$\|u\|_{L^2}\le C \|\nabla u\|_{L^2}$$
to hold for all $u\in X\subset H^1(\Omega)$ with $C$ independent of $u$, the space $X$ must not contain constant translates. That is, if we consider $u+M$ for large $M>0$, the left-hand side of the inequality increases indefinitely while the right-hand side is unchanged, so we need some extra constraint in the definition of $X$. Common choices are $X=H^1_0(\Omega)$ or $X=\{u\in H^1(\Omega) \mid \int_\Omega u\,dx=0\}$.

Here's my question. Suppose we'd like to say that there exists $C$ such that for all $u\in X=\{u\in H^2(\Omega)\mid(\partial_n u+u)|_{\partial\Omega}=0\}\subset H^2(\Omega)$ we have
$$\|u\|_{L^2}\le C\|\nabla u\|_{L^2}.$$
First, is this true? If so, how does one prove such a statement? Essentially the requirement that $u$ satisfies the homogeneous Robin condition $(\partial_n u+u)|_{\partial\Omega}=0$ should at least formally rule out constant translates, since $(\partial_n u+u)|_{\partial\Omega}=0$ is not invariant under translation of $u$ by constants.

My guess is that it IS true; however, the usual proof I know of such statements relies on some compactness argument. For example, if $X$ were simply $H_0^1(\Omega)$, then, for the sake of contradiction, assume that there exists a sequence $u_n\in H_0^1$ such that
$$\|u_n\|_{L^2}\ge n\|\nabla u_n\|_{L^2}.$$
Then, defining $v_n=u_n/\|u_n\|_{L^2}$, we have
$$\frac{1}{n}\ge \|\nabla v_n\|_{L^2}.$$
Thus we have a bounded sequence in $H^1$ and a subsequence that converges strongly in $L^2$ and weakly in $H^1$ to some $v\in H^1$. Because $\|\nabla v_n\|_{L^2}\to 0$, $v$ is constant. And since the trace map is continuous (and weakly continuous) from $H^1(\Omega)$ to $H^{\frac{1}{2}}(\partial\Omega)$, we have that $v$ is in fact in $H^1_0(\Omega)$ and therefore $v=0$. Then we have a contradiction, because $\|v_n\|_{L^2}=1$ for each $n$ implies that $\|v\|_{L^2}=1$.

Now this argument doesn't work for Robin boundary conditions, because the relevant (Robin) trace operator is continuous and weakly continuous from $H^2(\Omega)$ to $H^{\frac{1}{2}}(\partial\Omega)$. In particular, if $v$ is the weak $H^1$ limit of a sequence $v_n\in H^2$, then $v$ could be in $H^1$ but not $H^2$, and thus the notion of the normal derivative $(\partial_n v)|_{\partial\Omega}$ may not even make sense for $v$. And without being able to say $(\partial_n v+v)|_{\partial\Omega}=0$, we can't necessarily conclude that $v=0$ as we did in the previous paragraph. So this is where I'm stuck. Any help would be appreciated.
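As a point of comparison, the same compactness argument in $H^1$ does prove the standard Friedrichs-type inequality, which trades the boundary condition for a boundary term:

```latex
\|u\|_{L^2(\Omega)}^2 \leq C\left(\|\nabla u\|_{L^2(\Omega)}^2
+ \|u\|_{L^2(\partial\Omega)}^2\right) \qquad \text{for all } u \in H^1(\Omega)
```

Here only the Dirichlet trace appears, which is weakly continuous on $H^1$, so no normal derivative of the weak limit is needed; whether this can be combined with the Robin condition to settle the question above is exactly the open point.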

complex analysis – Bounded analytic function that can not be extended to the boundary of the unit disc

I am trying to find the Luzin example promised on the Encyclopedia of Mathematics website, but I don't have access to Luzin's collected works, which the article refers to. In particular, I don't even read Russian, so I doubt I could make much use of them. Does anyone know the actual counterexample and could explain the construction? I don't need a proof that most functions can be extended; I would simply like to see an actual example of a function that cannot be extended to the boundary at some points, potentially even on an infinite (uncountable?) subset of measure $0$ of the boundary of the unit disc.

Cheers for any help on this.

ordinary differential equations – Given boundary conditions and initial condition, solve the PDE

Question: solve the given PDE:

$$\frac{\partial C}{\partial t} = a \frac{\partial^2 C}{\partial x^2} - kC$$
where $a$, $k$ are constants.

boundary conditions: $C = C_0$ at $x=0$, $C=0$ as $x \to \infty$

initial condition : $C=0$ at $t=0$

My attempt :

$$C(x,t) = X(x)\,T(t)$$
$$C = X\,T$$
$$\boxed{\frac{\partial C}{\partial t} = T'X, \quad \frac{\partial C}{\partial x}= X'T, \quad \frac{\partial^2 C}{\partial x^2} = X''T}$$
replacing the partial derivatives in the original equation:
$$T'X = a\,X''T - k(X\,T)$$
$$X(T' + kT) = a\,X''T$$
$$\frac{T'}{T} = a\,\frac{X''}{X} - k$$
$$\boxed{\frac{T'}{T} = J, \quad a\,\frac{X''}{X} - k = J}$$
equating each side to a constant $J = -\lambda^2$:
$$\boxed{\frac{dT}{dt} = -\lambda^2 T, \quad \frac{d^2X}{dx^2} = \left(\frac{-\lambda^2 + k}{a}\right)X}$$
solving each differential equation:
$$T(t) = Ae^{-\lambda t}, \quad X(x) = B\cos\left(x\sqrt{\frac{\lambda^2 - k}{a}}\right) + C\sin\left(-x\sqrt{\frac{\lambda^2 - k}{a}}\right)$$
Note: $D = AB$ and $E = AC$
$$\boxed{C(x,t) = \left(D\cos\left(x\sqrt{\frac{\lambda^2 - k}{a}}\right) + E\sin\left(x\sqrt{\frac{\lambda^2 - k}{a}}\right)\right)e^{-\lambda t}}$$
first boundary condition, $C = C_0$ at $x=0$:
$$D e^{-\lambda t} = C_0$$
second boundary condition, $C=0$ as $x \to \infty$: DNE
$$C(x,t) = C_0\cos\left(x\sqrt{\frac{\lambda^2 - k}{a}}\right) + E\sin\left(x\sqrt{\frac{\lambda^2 - k}{a}}\right)e^{-\lambda t}$$
using the initial condition $C=0$ at $t=0$:
$$C_0\cos\left(x\sqrt{\frac{\lambda^2 - k}{a}}\right) = -E\sin\left(x\sqrt{\frac{\lambda^2 - k}{a}}\right)$$
final equation:
$$\boxed{C(x,t) = -E\sin\left(x\sqrt{\frac{\lambda^2 - k}{a}}\right) + E\sin\left(x\sqrt{\frac{\lambda^2 - k}{a}}\right)e^{-\lambda t}}$$
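A quick way to sanity-check a separated solution is to substitute it back into the PDE numerically. Below is a small Python/NumPy sketch (parameter values are arbitrary illustrative choices; note it takes $T(t)=e^{-\lambda^2 t}$, which is what $\frac{dT}{dt}=-\lambda^2 T$ integrates to):

```python
import numpy as np

# Numerical residual check of the separated form for C_t = a C_xx - k C.
# Parameter values a, k, lam are arbitrary illustrative choices;
# T(t) = exp(-lam^2 t) is used, since dT/dt = -lam^2 T integrates to that.
a, k, lam = 2.0, 0.5, 1.5

def C(x, t):
    return np.exp(-lam**2 * t) * np.cos(x * np.sqrt((lam**2 - k) / a))

h = 1e-4                          # finite-difference step
x0, t0 = 0.3, 0.7                 # arbitrary interior test point
Ct  = (C(x0, t0 + h) - C(x0, t0 - h)) / (2 * h)
Cxx = (C(x0 + h, t0) - 2 * C(x0, t0) + C(x0 - h, t0)) / h**2
residual = Ct - (a * Cxx - k * C(x0, t0))
print(abs(residual))              # ~0 when the separated form solves the PDE
```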

Could you please verify the answer? Also, is "second boundary condition: DNE" true?

differential equations – How to solve the problem with a particular “Boundary value condition”?

When I try to solve the boundary value problem with the following code

Clear["Global`*"]
u = 0.1; v = 0.1; c1 = u (1 - 3 v)/(1 - 2 v); c2 = u (1 - v)/(1 - 2 v);
c3 = u (2 v)/(1 - 2 v); A = 0.6; B = 1.;
eqs = r''[R] + (c1/(c2*R) - R/(2 r[R]^2)) r'[R]^3 - 1/(2 R) r'[R] == 0;
bcs = {r'[A] == -A (c2/(c1*A^2 + c3*r[B]^2))^(1/2),
   r'[B] == -B (c2/(c1*B^2 + c3*r[A]^2))^(1/2)};
NDSolve[Flatten@Join[{eqs, bcs}], r, {R, A, B}]

I get Power::infy warnings. Can anyone explain this warning or help solve this boundary value problem?

differential geometry – Calculate surface normals at the boundary of a Graphics3D object

First of all, we need two more options in boundedOpenCone. The option BoundaryStyle -> Automatic creates a Line on the boundary, so we can easily locate the coordinates of points on the boundary. PlotPoints -> 100 isn't strictly necessary, but makes the resulting boundary smoother.

boundedOpenCone[centre_, tip_, Rc_, vec1_, vec2_, sign_] :=
 Module[{v1, v2, v3, e1, e2, e3},
  (*function to make 3d parametric plot of the section of a cone bounded between
  two vectors: tvec1 and tvec2*)
  {v1, v2, v3} = # & /@ HodgeDual[centre - tip];
  e1 = Normalize[v1];
  e3 = Normalize[centre - tip];
  e2 = Cross[e1, e3];
  ParametricPlot3D[
   s*tip + (1 - s)*(centre + Rc*(Cos[t]*e1 + Sin[t]*e2)), {t, 0, 2 Pi}, {s, 0, 1},
   Boxed -> False, Axes -> False, Mesh -> None, BoundaryStyle -> Automatic,
   RegionFunction ->
    Function[{x, y, z},
     RegionMember[HalfSpace[sign*Cross[vec1 - tip, vec2 - tip], tip], {x, y, z}]],
   PlotPoints -> 100, PlotStyle -> ColorData["Rainbow"][1]]]

vec1 = {1, 0, 0}; vec2 = (1/Sqrt[2])*{1, 1, 0};
coneTip = {0, 0, 3};
cvec = {0, 0, 0};
Rc = Norm[vec1 - cvec];

pplot = boundedOpenCone(cvec, coneTip, Rc, vec1, vec2, -1);

Then we modify normalsShow from the documentation of VertexNormals a little, to preserve only the normals on the boundary:

boundarynormals[g_Graphics3D] :=
  Module[{pl, vl, boundaryindexlst = Flatten@Cases[g, Line[a_] :> a, Infinity]},
   {pl, vl} = First@Cases[g,
      GraphicsComplex[pl_, prims_, VertexNormals -> vl_,
        opts___?OptionQ] :> Transpose[Transpose[{pl, vl}][[boundaryindexlst]]],
      Infinity];
   Transpose@{pl, pl + vl/3}];

vectors = boundarynormals@pplot;

Graphics3D[{Arrowheads[0.01], Arrow@vectors}]~Show~pplot


complex analysis – Where do the boundary conditions go in 2D Laplace equation?

While reading Fourier Series and Boundary Value Problems by R.V. Churchill, I came across a very strange solution strategy for solving the wave equation.

Consider the wave equation $u_{tt} = a^2u_{xx}$ subject to the boundary conditions $u(x,0)=f(x)$ and $u_t(x,0) = 0$.

By using the substitutions $r = x+at$ and $s=x-at$, the problem simplifies to $u_{rs} = 0$, which can be solved easily by successive integration, getting $u = g(r)+h(s)$. If we now change coordinates back and solve subject to the two boundary conditions, we have the solution:
$$
u = \frac{1}{2}\bigl(f(x+at)+f(x-at)\bigr)
$$

We only had 2 boundary conditions to start, but we should have had 4 free choices throughout the solution process. What happened to the other 2 choices?
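For reference, with a general second initial condition $u_t(x,0)=\psi(x)$ (the notation $\psi$ is introduced here to avoid clashing with $g(r)$ above), the solution of $u_{rs}=0$ specializes to d'Alembert's formula:

```latex
u(x,t) = \frac{1}{2}\bigl(f(x+at) + f(x-at)\bigr)
+ \frac{1}{2a}\int_{x-at}^{x+at} \psi(\sigma)\,d\sigma
```

which reduces to the expression above when $\psi\equiv 0$; the two arbitrary functions $g(r)$ and $h(s)$ are pinned down (up to an additive constant) by the two conditions at $t=0$.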

I have a theory that it might be easier to hash this out if we make the strange substitution $a=i$, which transforms the wave equation into Laplace's equation.
So we now have the equation $u_{tt}+u_{xx}=0$ subject to $u(x,0)=f(x)$ and $u_t(x,0) = 0$, with a solution given by:
$$
u = \frac{1}{2}\bigl(f(x+it)+f(x-it)\bigr)
$$

This, shockingly, will still give you actual solutions to the problem; e.g. $f(x)=x(2-x)$ yields the solution $u = x(2-x) + t^2$, which indeed is a solution to Laplace's equation.

These are solutions you just cannot get using a Fourier method, because we do not have the periodic boundary conditions we are comfortable with. Still, I am lost as to where these two free choices went. I wonder if there are certain conditions on $f$ that actually take the place of our boundary conditions. Does $f$ have to be a harmonic function? How does that limit the values of $u$?

I am confused and would appreciate any of your thoughts!