## Existence of a solution of a first-order ordinary differential equation

Consider the IVP: $$y'(t) = 2\sqrt{y(t)}, \quad y(0) = a, \quad t \in \mathbb{R}.$$
So here $$f(t, y) = 2\sqrt{y}$$ is continuous, but the partial derivative of $$f$$ with respect to $$y$$ is not continuous at $$y = 0$$.
Therefore, we cannot conclude uniqueness of the solution.

And indeed, if $$a = 0$$, we get infinitely many solutions.

So here is my confusion: does the continuity of the partial derivative of $$f$$ with respect to $$y$$ depend on the value of $$a$$?

If $$a > 0$$, what happens to its continuity? If it is continuous, then, according to Picard's theorem, we could conclude uniqueness.
So:

1. Is the partial derivative of $$f$$ with respect to $$y$$ continuous if $$a > 0$$?
2. If so, on what region does Picard's theorem require the partial derivative of $$f$$ with respect to $$y$$ to be continuous?
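The non-uniqueness at $$a = 0$$ can be checked directly: every function $$y_c(t) = \max(t - c, 0)^2$$ with cutoff $$c \ge 0$$ solves the IVP. A minimal numerical sketch (the sample points and cutoffs below are arbitrary choices of mine):

```python
import math

def y_c(t, c):
    """Candidate solution: y_c(t) = (t - c)^2 for t >= c, and 0 otherwise."""
    return max(t - c, 0.0) ** 2

def dy_c(t, c):
    """Exact derivative of y_c."""
    return 2.0 * max(t - c, 0.0)

# For every cutoff c >= 0, y_c satisfies y(0) = 0 and y' = 2*sqrt(y),
# so the IVP with a = 0 has infinitely many solutions.
for c in [0.0, 0.5, 2.0]:
    assert y_c(0.0, c) == 0.0                 # initial condition y(0) = 0
    for t in [0.0, 0.25, 1.0, 3.0]:
        assert abs(dy_c(t, c) - 2.0 * math.sqrt(y_c(t, c))) < 1e-12
print("every y_c solves the IVP with a = 0")
```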

## Algebraic geometry: the Picard scheme of an ordinary singular curve

Let $$k$$ be an algebraically closed field, $$C$$ a proper, reduced, connected scheme over $$k$$ of dimension 1 whose singularities are at worst ordinary, $$\pi : \tilde{C} \to C$$ the normalization of $$C$$, and $$D_i$$ the irreducible components of $$\tilde{C}$$.
Write $$f : C \to \operatorname{Spec} k$$ and $$g : \tilde{C} \to \operatorname{Spec} k$$ for the structure morphisms.
Then, since group algebraic spaces locally of finite type over a field are schemes, the Picard functor of $$C$$ is representable by a scheme; denote its identity component by $$J$$.
Since $$C$$ is a curve, $$J$$ is smooth, so it is an algebraic group.
How can I show that $$J$$ is semi-abelian?

This is what I have tried:

This is Proposition 10 in Chapter 9.2 of Néron Models by Bosch et al.
First write $$r = (\text{the number of irreducible components of } C)$$, $$C_{\text{sing}} = \{ x_i \}_{i = 1, \dots, N}$$, $$\pi^{-1}(x_i) = \{ x_{ij} \}_{j = 1, \dots, n_i}$$, and let $$M$$ be the rank of $$H^1(\Gamma, \mathbb{Z})$$, where $$\Gamma$$ is the graph associated with $$C$$.
We have a short exact sequence (S) $$1 \to \mathscr{O}_C^* \to \pi_* \mathscr{O}_{\tilde{C}}^* \to \mathscr{Q} \to 1$$, where $$\mathscr{Q} = \oplus_i k_{x_i}^{n_i - 1}$$ and $$k_x$$ is the skyscraper sheaf at $$x \in C$$ associated with $$k$$.
Then the author says that we have the long exact sequence
$$1 \to f_* \mathbb{G}_{m, C} \to f_* \pi_* \mathbb{G}_{m, \tilde{C}} \to f_* \mathscr{Q} \to R^1 f_* \mathbb{G}_{m, C} \to R^1 (f_* \pi_*) \mathbb{G}_{m, \tilde{C}} \to 1$$
in the big étale topology on $$\operatorname{Spec} k$$.
I do not understand why.
(Also, I do not know what $$\mathscr{Q}$$ is as an étale sheaf.
A quasi-coherent sheaf of modules (in the usual sense) induces a big étale sheaf.
But $$\mathscr{Q}$$ is not a quasi-coherent module.)

I think there is a "big étale version" of (S), $$1 \to \mathbb{G}_{m, C} \to \pi_* \mathbb{G}_{m, \tilde{C}} \to \mathscr{C} \to 1$$, in the big étale topology on $$C$$, where
$$\mathscr{C} = \oplus_i i_{x_i *} \mathbb{G}_{m, k}^{n_i - 1}$$ and $$i_x$$ is the canonical morphism $$\{ x \} \to C$$.
If so, then since $$R^1 f_* \mathbb{G}_{m, C} = \operatorname{Pic}_{C/k}$$, $$(R^1 f_*) \circ \pi_* = R^1 g_*$$, $$R^1 f_* \mathscr{C} = 0$$, and $$f_* \mathscr{C} = \mathbb{G}_{m, k}^M$$, we have
$$0 \to \mathbb{G}_m \to \mathbb{G}_m^r \to \mathbb{G}_m^M \to \operatorname{Pic}_{C/k} \to \operatorname{Pic}_{\tilde{C}/k} \to 0$$
in the big étale topology on $$\operatorname{Spec} k$$, which is what I want.
How can I show this?

I could show this for the standard Néron $$n$$-gon (over a general base scheme $$S$$).
(For the definition of the Néron polygon, see the paper of Deligne–Rapoport.)
But I could not show it for general stable curves.

Any other proof will also be appreciated, but if possible I would like references that are available online.

Thank you!

## [Politics] Open-ended question: According to the Republican Party's tax philosophy, instead of giving ordinary people \$1,000 each, shouldn't millionaires get the millions and let it trickle down?


## Ordinary differential equations: where am I going wrong in calculating the divergence of a vector field?

$$x' = y + x(\alpha - x^2 - y^2)$$

$$y' = -x + y(1 - x^2 - y^2)$$

I think I have found its polar-coordinate form:

$$r' = r \left( (\alpha - 1) \cos^2 \theta + 1 - r^2 \right)$$

$$\theta' = -1 - (\alpha - 1) \sin \theta \cos \theta$$

I am trying to calculate the divergence.

$$F_r = r' = r \left( (\alpha - 1) \cos^2 \theta + 1 - r^2 \right)$$

$$F_\theta = \theta' = -1 - (\alpha - 1) \sin \theta \cos \theta$$

$$F_z = 0$$

I should be using the cylindrical version of the divergence formula, right?

$$\operatorname{div} F = \frac{1}{r} \frac{\partial}{\partial r} \left( r F_r \right) + \frac{1}{r} \frac{\partial F_\theta}{\partial \theta} + \frac{\partial F_z}{\partial z}$$

When I substitute everything and work through the algebra, I get

$$r F_r = (\alpha - 1) r^2 \cos^2 \theta + r^2 - r^4$$

$$\frac{\partial}{\partial r}(r F_r) = 2 (\alpha - 1) r \cos^2 \theta + 2r - 4r^3$$

$$\frac{1}{r} \frac{\partial}{\partial r}(r F_r) = 2 (\alpha - 1) \cos^2 \theta + 2 - 4r^2$$

and

$$F_\theta = -1 - (\alpha - 1) \sin \theta \cos \theta$$

$$\frac{\partial F_\theta}{\partial \theta} = -(\alpha - 1) \left( \cos^2 \theta - \sin^2 \theta \right)$$

$$\frac{1}{r} \frac{\partial F_\theta}{\partial \theta} = -\frac{\alpha - 1}{r} \left( \cos^2 \theta - \sin^2 \theta \right)$$

so

$$\operatorname{div} F = 2 (\alpha - 1) \cos^2 \theta + 2 - 4 r^2 - \frac{\alpha - 1}{r} \left( \cos^2 \theta - \sin^2 \theta \right)$$

But checking the book, it seems I should be getting

$$\operatorname{div} F = \alpha + 1 - 4 r^2$$

I have checked and redone it many times, but it does not work out. What did I do wrong?
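For comparison, here is a quick symbolic check (a SymPy sketch of mine, not from the original post) of the divergence computed directly in Cartesian coordinates, which reproduces the book's answer:

```python
import sympy as sp

x, y, alpha = sp.symbols('x y alpha', real=True)

# The planar vector field in Cartesian coordinates.
Fx = y + x * (alpha - x**2 - y**2)
Fy = -x + y * (1 - x**2 - y**2)

# Cartesian divergence: dFx/dx + dFy/dy.
div = sp.expand(sp.diff(Fx, x) + sp.diff(Fy, y))

# It simplifies to alpha + 1 - 4*(x^2 + y^2), i.e. alpha + 1 - 4 r^2.
assert sp.simplify(div - (alpha + 1 - 4 * (x**2 + y**2))) == 0
print(div)
```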

## Ordinary differential equations – On solving for the general solution of a planar system

Problem: solve for a general solution of $$y' = \begin{bmatrix} 0 & 4 \\ -2 & -4 \end{bmatrix} y$$

Solving for the eigenvalues, I obtain $$\lambda_1 = -2 + 2i$$.

$$A - \lambda_1 I = \begin{bmatrix} 2-2i & 4 \\ -2 & -2-2i \end{bmatrix}$$

$$(2-2i)x + 4y = 0 \rightarrow v_1 \ (\text{eigenvector}) = \begin{pmatrix} 2 \\ i-1 \end{pmatrix}$$

Question 1: For this matrix, why is it not necessary to row-reduce? Why is the second row redundant? Are there other linearly independent eigenvectors that also fit here? Do the initial conditions limit the possible choices of eigenvector?

Similarly, $$\lambda_2 = -2 - 2i$$ gives $$v_2 = (2, -i-1)^T$$.

The fundamental solutions are $$Z_1(t) = e^{(-2+2i)t} (2, i-1)^T$$ and $$Z_2(t) = e^{(-2-2i)t} (2, -i-1)^T$$.

So the real-valued fundamental solutions are $$y_1(t) = \frac{1}{2}(Z_1 + Z_2)$$ and $$y_2(t) = \frac{1}{2i}(Z_1 - Z_2)$$.

Question 2: For the real-valued fundamental solutions, I do not understand why $$y_1(t) = \frac{1}{2}(Z_1 + Z_2)$$ and $$y_2(t) = \frac{1}{2i}(Z_1 - Z_2)$$. Where do the $$\frac{1}{2}$$ and $$\frac{1}{2i}$$ come from?
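As a sanity check (a NumPy sketch of mine, not part of the original question), the eigenvalues and the eigenvector $$v_1$$ can be verified numerically:

```python
import numpy as np

A = np.array([[0.0, 4.0],
              [-2.0, -4.0]])

# The eigenvalues should be -2 + 2i and -2 - 2i.
eigvals = np.linalg.eigvals(A)
assert np.allclose(sorted(eigvals, key=lambda z: z.imag), [-2 - 2j, -2 + 2j])

# v1 = (2, i - 1)^T should be an eigenvector for lambda_1 = -2 + 2i.
lam1 = -2.0 + 2.0j
v1 = np.array([2.0, -1.0 + 1.0j])
assert np.allclose(A @ v1, lam1 * v1)
print("eigenvalues and eigenvector verified")
```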

## Ordinary differential equations: determining the stability of an ODE system

Consider $$x' = a - (1+b)x + x^2 y, \qquad y' = bx - x^2 y.$$

Find all the equilibrium points in terms of $$a, b$$. Linearize the system near the points and compute the eigenvalues. Determine the conditions on $$a, b$$ for the stability or instability of the equilibrium points.

What I have done:

The equilibrium points are given by $$(x, y) = (0, b)$$ if $$a = 0$$ and $$(x, y) = (a, b/a)$$ if $$a \neq 0$$. For the first point $$(x, y) = (0, b)$$, that is, when $$a = 0$$, I found that the eigenvalues are $$\lambda_1 = -(1+b)$$ and $$\lambda_2 = 0$$. So what I see is that for this equilibrium point we have a stable line of equilibria if $$b \in (-1, \infty)$$; I am not sure how to show whether this is asymptotic stability or Lyapunov stability. Then, for $$b \in (-\infty, -1)$$, I have $$\lambda_1 > 0$$, which makes this point unstable.

Next, for the second point $$(x, y) = (a, b/a)$$, that is, when $$a \neq 0$$, I get $$\lambda_{\pm} = \frac{-(a^2 - (b-1)) \pm \sqrt{(a^2 - (b-1))^2 - 4a^2}}{2},$$ which comes from substituting this equilibrium point into the Jacobian matrix $$Df(x, y)$$. I have trouble working through the cases here. I would really appreciate any help.
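A short symbolic check (a SymPy sketch of mine, not part of the original question) of the Jacobian at the second equilibrium confirms the eigenvalue formula above via its trace and determinant:

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', real=True)

# Right-hand side of the system.
F = sp.Matrix([a - (1 + b) * x + x**2 * y,
               b * x - x**2 * y])

# Jacobian, evaluated at the equilibrium (x, y) = (a, b/a) for a != 0.
J = F.jacobian([x, y]).subs({x: a, y: b / a})

tr = sp.simplify(J.trace())   # trace = -(a**2 - (b - 1))
det = sp.simplify(J.det())    # determinant = a**2

assert sp.simplify(tr + a**2 - (b - 1)) == 0
assert sp.simplify(det - a**2) == 0
# Hence lambda_pm = (tr +- sqrt(tr**2 - 4*det)) / 2, matching the formula above.
```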

## dnd 5e – Can an ordinary whip be poisoned?

I am playing a ranger/rogue who wields a whip, and I wonder whether it is possible to poison a whip, both from a rules standpoint and from a practical standpoint.

As for the rules, there is this from the DMG:

> Injury. A creature that takes slashing or piercing damage from a weapon or ammunition coated with injury poison is exposed to its effects.

Since a whip deals slashing damage, this suggests that it is at least legal. The PHB seems to support this:

> Poison, basic. You can use the poison in this vial to coat one slashing or piercing weapon or up to three pieces of ammunition.
> Applying the poison takes an action …

Am I missing something that restricts this to bladed or metal weapons?

So, assuming it is legal, the other question is: does it make sense? I am not an expert on whips, but at our local Renaissance Faire there is a guy who dips his chain whip in gasoline and sets it alight, which suggests that a whip could absorb a liquid such as a poison.

Is there anyone out there who can make the case for or against poisoning a whip?

## Ordinary differential equations: how to introduce variations in the competitive Lotka-Volterra model?

First, we have these two Lotka-Volterra equations for prey and predators, respectively:

$$\frac{dx}{dt} = r_x x (1 - \alpha y)$$
$$\frac{dy}{dt} = r_y y (\beta x - 1)$$
$$r_x, r_y, \alpha, \beta > 0$$

These equations describe the predator–prey model without competition among individuals of the same species. If there were such competition among the prey and among the predators, the equations would be:

$$\frac{dx}{dt} = r_x x (1 - x - \alpha y)$$
$$\frac{dy}{dt} = r_y y (\beta x + \gamma y - 1)$$
$$r_x, r_y, \alpha, \beta > 0, \quad \gamma \in \mathbb{R}$$

My question is: how should I modify these equations if I wanted to introduce further conditions besides competition between species, such as, for example, the life expectancy of both species, parasitism, disease, or seasonal food scarcity?
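For illustration only, here is a minimal numerical sketch of one way such an extra condition could enter: a hypothetical seasonal food-scarcity factor `s(t)` modulating the prey growth rate (all parameter values and the form of `s(t)` are my own assumptions, not part of the model above):

```python
import math

def step(x, y, t, dt, rx=1.0, ry=0.5, alpha=0.8, beta=1.2, gamma=0.1):
    """One explicit Euler step of the competitive Lotka-Volterra system,
    with an illustrative seasonal modulation s(t) of the prey growth rate."""
    s = 1.0 + 0.3 * math.sin(2 * math.pi * t)   # hypothetical seasonal food term
    dx = rx * s * x * (1 - x - alpha * y)
    dy = ry * y * (beta * x + gamma * y - 1)
    return x + dt * dx, y + dt * dy

# Integrate from an arbitrary initial condition.
x, y, dt = 0.5, 0.5, 1e-3
for i in range(20000):                          # t runs from 0 to 20
    x, y = step(x, y, i * dt, dt)

assert x > 0 and y > 0                          # populations stay positive
print(f"x = {x:.4f}, y = {y:.4f}")
```

Other conditions (parasitism, disease, mortality from limited life expectancy) would enter the same way, as additional terms in `dx` and `dy`.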

Thanks for the help!

## Ordinary differential equations – existence and uniqueness theorems for ODEs beyond Picard and Peano?

Picard's uniqueness theorem requires Lipschitz continuity of $$f$$ for uniqueness. But I have also seen examples where $$f$$ is not Lipschitz and yet the solution is unique. Similarly with Peano's existence theorem.

So I have $$2$$ questions:

1. Are there more general existence and uniqueness theorems for ODEs?
2. If not, is there a theorem that guarantees uniqueness even when $$f$$ is not Lipschitz, or existence even when $$f$$ is not continuous?

If the answer to either of the above questions is yes, could anyone recommend some resources where I can learn more?