## equation solving – Instability (divergence) in NIntegrate solution

I am facing a problem with a solution computed by `NIntegrate`. Here is my `NIntegrate` command:

``````For[i = 2, i <= Length[t], i++,
 x = coef*NIntegrate[(1 - (ah*θ[[i - 1]])^m)/Sqrt[t[[i]] - t0], {t0, 0, t[[i]]}];
 θ = Insert[θ, x, -1]];
``````

And here is the solution:

The solution diverges from the real answer at around t = 2.5. When I increase the time step, the divergence happens later, but it still happens. I have tried changing `AccuracyGoal` and `PrecisionGoal`, but that didn't help.
Any help would be appreciated!
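As an aside, since the factor $$1 - (ah\,\theta_{i-1})^m$$ does not depend on the integration variable, the inner integral has the closed form $$\int_0^{t_i} dt_0/\sqrt{t_i - t_0} = 2\sqrt{t_i}$$, so `NIntegrate` (and its endpoint singularity) can be avoided entirely. A sketch, assuming the posted loop reflects the intended computation:

``````(* the t0 integral is elementary because the θ factor is constant in t0 *)
For[i = 2, i <= Length[t], i++,
 x = coef*(1 - (ah*θ[[i - 1]])^m)*2*Sqrt[t[[i]]];
 θ = Insert[θ, x, -1]];
``````

If the divergence persists with the exact integral, the instability lies in the time-stepping recursion itself rather than in the quadrature.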

## equation solving – How to remove unnecessary answers from NSolve without losing speed?

This code:

``````eqs =
{Ca/u^2 +
r (6 a (c^2 - d^2) + 12 b c d + 6 (a^2 + b^2 + c^2 + d^2) Ca +
Ca^3) == 0,
a (1/u^2 - 1) +
r (3 a (a^2 + b^2) + 6 a (c^2 + d^2) + 3 c^2 Ca - 3 d^2 Ca +
3 a Ca^2) == 1/2,
b (1/u^2 - 1) +
r (3 b (a^2 + b^2) + 6 b (c^2 + d^2) + 6 c d Ca + 3 b Ca^2) == 0,
c (1/u^2 - 1/4) +
r (6 (a^2 + b^2) c + 3 c (c^2 + d^2) + 6 (a c + b d) Ca +
3 c Ca^2) == 0,
d (1/u^2 - 1/4) +
r (6 (a^2 + b^2) d + 3 d (c^2 + d^2) + 6 (-a d + b c) Ca +
3 d Ca^2) == 0, c > 0, d >= 0,
QQQ == c^2 + d^2, QQQ != 0
} // Rationalize[#, 0] &;
NSolve[eqs /. {u -> 5, r -> 0.04}, {a, b, c, d, Ca, QQQ}, Reals]
``````

``````{{a -> -0.732167, b -> 0, c -> 1.06332, d -> 0, Ca -> 0.443594,
QQQ -> 1.13066},
{a -> -0.732167, b -> 0, c -> 1.06332, d -> 0,
Ca -> 0.443594, QQQ -> 1.13066},
{a -> -0.732167, b -> 0,
c -> 1.06332, d -> 0, Ca -> 0.443594,
QQQ -> 1.13066}, {a -> -0.732167, b -> 0, c -> 1.06332, d -> 0,
Ca -> 0.443594, QQQ -> 1.13066}, {a -> -0.698614, b -> 0,
c -> 0.622043, d -> 0.622043, Ca -> 0,
QQQ -> 0.773876}, {a -> -0.698614, b -> 0, c -> 0.622043,
d -> 0.622043, Ca -> 0, QQQ -> 0.773876}, {a -> -0.698614, b -> 0,
c -> 0.622043, d -> 0.622043, Ca -> 0,
QQQ -> 0.773876}, {a -> -0.698614, b -> 0, c -> 0.622043,
d -> 0.622043, Ca -> 0, QQQ -> 0.773876}, {a -> -0.698614, b -> 0,
c -> 0.622043, d -> 0.622043, Ca -> 0,
QQQ -> 0.773876}, {a -> -0.698614, b -> 0, c -> 0.622043,
d -> 0.622043, Ca -> 0, QQQ -> 0.773876}, {a -> -0.698614, b -> 0,
c -> 0.622043, d -> 0.622043, Ca -> 0, QQQ -> 0.773876}}
``````

But as you can see, there are several identical solutions.
And if I set `WorkingPrecision -> 3`, for example, the code slows down.
Is there some other method to remove the duplicates without making the code slower?
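One post-processing sketch that drops near-identical solutions without touching the solve itself (reusing `eqs` from the code above; the `10^-6` tolerance is an assumption about how close two solutions must be to count as duplicates):

``````sols = NSolve[eqs /. {u -> 5, r -> 0.04}, {a, b, c, d, Ca, QQQ}, Reals];
(* treat two solutions as identical when their value vectors agree to 10^-6 *)
DeleteDuplicates[sols, Norm[Values[#1] - Values[#2]] < 10^-6 &]
``````

Since `NSolve` itself runs unchanged, this costs only a quadratic-in-the-number-of-solutions comparison afterwards.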

## pr.probability – A functional equation involving the inverse function

$$g(x)-g^{-1}(x)=\varepsilon\,p(x)\quad\forall x\in\mathbb R\tag1$$
has a solution $$g\colon\mathbb R\to\mathbb R$$, which is an increasing continuous function such that $$g(x)>x$$ for all real $$x$$.

It is clear that, if a pdf $$p\in P$$ is good, then for any real $$a$$ and any real $$b>0$$ the pdf $$p_{a,b}$$ given by the formula $$p_{a,b}(x):=b\,p(a+bx)$$ for real $$x$$ is good as well.

The problem here is to characterize the set of all good pdf's $$p\in P$$.

Of course, there is always a tautological characterization: a pdf $$p\in P$$ is good if and only if it is good. Any non-tautological characterization would be of interest, including incomplete ones, such as conditions that are only sufficient or only necessary for the goodness. In particular, it would be of interest to know if the "triangular" pdf $$p_\triangle$$ given by the formula $$p_\triangle(x):=\max(0,1-|x|)$$ for real $$x$$ is good.

This question is related to this answer.

## “Further output of NIntegrate” ERROR at diffusion equation

Good day everybody,

I am trying to solve the differential equation for diffusion,

$$\partial_t u = A\,\partial_x\!\left(u^{m}\,\partial_x u\right),$$

where $$m$$ is the diffusion coefficient. In this case I am trying to make $$m$$ a function of $$x$$ and solve the equation numerically.

``````GCo = First[u /. NDSolve[{D[u[x, t], t] == dCo*D[u[x, t]^mCo[x]*D[u[x, t], x], x],
     (D[u[x, t], x] /. x -> -20) == 0, (D[u[x, t], x] /. x -> 40) == 0,
     u[x, 0] == FC0[x]}, u, {x, -20, 40}, {t, 0, 10}]]
``````
``````

where `FC0[x]` is the initial condition.

I get the following errors:

“General::stop: Further output of NIntegrate::nlim will be suppressed during this calculation.”

“General::stop: Further output of General::munfl will be suppressed during this calculation.”

Is there any way to solve this?

Thank you very much for your help

## Steps to find the real part of the stress equation


Could someone explain how the solution shown in the image was obtained?

## Uniqueness of solution to heat equation when initial condition is a generalized function

Let $$u(t,x)$$ be a solution to the heat equation $$\partial_t u = \partial_{xx} u, \quad (t,x) \in (0,T) \times (-1,1)$$ subject to the initial/boundary conditions
$$u(0,x) = f(x), \quad x \in (-1,1), \qquad u(t,\pm 1) = g^{\pm}(t), \quad t \in (0,T),$$ with the usual compatibility conditions in the corners: $$f(\pm 1) = g^{\pm}(0)$$. Suppose also that $$f$$ and $$g^{\pm}$$ are bounded and continuous. Then one can invoke the maximum principle or the energy method to prove that $$u$$ is the only solution.

What happens when $$f$$ or $$g^{\pm}$$ are unbounded? Say, for instance, when $$f(x) = \delta_0(x)$$ (a point mass at zero) and $$g^{\pm} \equiv 0$$? This problem has a solution that can easily be represented as a series.
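For concreteness, if the series in question is the Dirichlet eigenfunction expansion on $$(-1,1)$$ (an assumption about which representation is meant), it reads
$$u(t,x)=\sum_{n=1}^\infty \sin\Big(\frac{n\pi}{2}\Big)\,\sin\Big(\frac{n\pi(x+1)}{2}\Big)\,e^{-(n\pi/2)^2 t},$$
where only odd $$n$$ contribute, since $$\sin(n\pi/2)=0$$ for even $$n$$.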

How does one go about proving uniqueness in such a situation?

In fact, come to think of it, how does one prove the uniqueness of the fundamental solution $$v(t,x) = e^{-x^2/(4t)}/\sqrt{4\pi t}$$?

Is it some kind of weak uniqueness, where you show uniqueness of all classical solutions resulting from mollification of initial/boundary conditions? Is that the best one can do?

Any references would be deeply appreciated.

## equation solving – Approaches to expression reduction

I am still learning how to use Mathematica more efficiently, but something that I keep bumping into is problems with it reducing formulas. Often, I’ll ask it to compute a certain formula (take a derivative, Fourier transform, etc.) and the output is sometimes very messy, but could easily be reduced to something much neater.

It's usually not a big deal, because I can carry out the simplification on my own, but it would still be convenient to have a systematic way of reducing everything, rather than applying little tricks on a case-by-case basis. In some cases I have tried `FullSimplify` or `TrigReduce`, even with certain assumptions, without much success. For example, this should reduce

``````Cos[θ] ((Sqrt[-((g H)/(-1 + Sin[θ]))] Sqrt[-g H (-1 + Sin[θ])])/g - (H Sin[θ])/(-1 + Sin[θ]))
``````
``````

$$\cos(\theta) \left(\frac{\sqrt{-\frac{g H}{\sin(\theta)-1}}\,\sqrt{-g H (\sin(\theta)-1)}}{g}-\frac{H \sin(\theta)}{\sin(\theta)-1}\right)$$

to this

``````Cos[θ] (H - (H Sin[θ])/(-1 + Sin[θ]))
``````
``````

$$\cos(\theta) \left(H-\frac{H \sin(\theta)}{\sin(\theta)-1}\right)$$

But even `FullSimplify` just returns the same expression. I don't have other examples off the top of my head, but I have been in similar situations before, where I want to reduce something but just can't seem to make any progress.

Are there extra ways of forcing it into ‘seeing’ the possible simplifications? Or is it possible that from its perspective the expression is reduced enough? To be honest, I am not asking for a specific case of simplification, but more on the general approaches one would take to simplify/reduce an expression.
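For the example above, supplying assumptions that fix the signs under the square roots lets the two radicals combine. This is a sketch; the assumption $$g, H > 0$$ together with $$-\pi/2 < \theta < \pi/2$$ is one sufficient choice, not the only one:

``````expr = Cos[θ] ((Sqrt[-((g H)/(-1 + Sin[θ]))] Sqrt[-g H (-1 + Sin[θ])])/g -
     (H Sin[θ])/(-1 + Sin[θ]));
(* with these sign assumptions the product of radicals reduces to g H *)
Simplify[expr, Assumptions -> {g > 0, H > 0, -Pi/2 < θ < Pi/2}]
``````

Without assumptions, `Simplify` cannot combine $$\sqrt{A}\sqrt{B}$$ into $$\sqrt{AB}$$, because that identity fails for general complex arguments; telling it the radicands are positive is what unlocks the reduction.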

## integration – Number of roots of the equation $f(x)= \int_0^x (t-1)(t-2)(t-3)(t-4)\,dt =0$

I have the following question before me:
Find the number of roots of the equation $$f(x)= \int_0^x (t-1)(t-2)(t-3)(t-4)\,dt =0$$ in the interval $$(0,5)$$.
$$0$$ is clearly one of the roots. But how can I find the other roots, if any? I tried evaluating the integral and came up with a fifth-degree polynomial in $$x$$ with no constant term. The resulting equation seemed quite daunting. How can I get the roots more quickly? Please suggest.
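For reference, expanding the integrand and integrating term by term gives the fifth-degree polynomial mentioned above:
$$f(x)=\int_0^x\left(t^4-10t^3+35t^2-50t+24\right)dt=\frac{x^5}{5}-\frac{5x^4}{2}+\frac{35x^3}{3}-25x^2+24x.$$
Since $$f'(x)=(x-1)(x-2)(x-3)(x-4)$$, the sign of $$f$$ at the critical points $$1,2,3,4$$ determines where $$f$$ can cross zero, which locates any further roots in $$(0,5)$$ without solving the quintic directly.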

## solve this equation with Solve or Reduce?

In this equation I must solve for `rb`, but MMA is reluctant to calculate it.
Am I doing something wrong? The logarithms should always be positive.

``````Reduce[qb == (ta - tb)/(ri ((1/(ri*hi)) + Log[(re/ri)/k] +
       Log[(rb/re)/k0] + (1/(re*he)))) && ri != 0 && hi != 0 &&
   re != 0 && he != 0 && k != 0 && k0 != 0 && rb != 0 &&
   rb/(k0 re) > 0 && re/(k ri) > 0, rb] // FullSimplify
``````

Note that `ri` is a single symbol, not `r*i`.
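Since `rb` enters only through the single `Log[(rb/re)/k0]` term, the equation is linear in that logarithm and can be inverted directly. A sketch reusing the symbols above, with the inequality side conditions dropped so that `Solve` handles just the equation:

``````(* isolate the lone Log term; Solve inverts it via the exponential *)
sol = Solve[qb == (ta - tb)/(ri ((1/(ri*hi)) + Log[(re/ri)/k] +
        Log[(rb/re)/k0] + (1/(re*he)))), rb]
``````

Equivalently, by hand: $$\log\frac{rb}{k_0\,re} = \frac{ta - tb}{qb\,ri} - \frac{1}{ri\,hi} - \log\frac{re}{k\,ri} - \frac{1}{re\,he}$$, so `rb` is `k0 re` times the exponential of the right-hand side.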

## Differential Equation Solution By Power Series

Solve $$(1 + x)y' = py, \quad y(0) = 1,$$ where $$p$$ is an arbitrary constant.

First I plugged in the guess $$y = \sum_{n = 0}^\infty a_n x^n$$:

$$(1 + x)\left(\sum_{n = 0}^\infty a_n x^n\right)' = p\sum_{n = 0}^\infty a_n x^n$$

Then I expanded the derivative and multiplication:

$$\sum_{n = 0}^\infty n a_n x^{n - 1} + \sum_{n = 0}^\infty n a_n x^n = p\sum_{n = 0}^\infty a_n x^n$$

Then I shifted the left index (the first term yielding $$0$$ allows the lower bound to remain $$0$$) and algebraically combined the summations:

$$\sum_{n = 0}^\infty \left((n + 1)a_{n + 1} + (n - p)a_n\right) x^n = 0$$

This leads to the following recurrence relation:

$$a_{n + 1} = \frac{p - n}{n + 1}a_n$$

Thus for various values of $$n$$:

$$a_1 = p a_0$$, $$a_2 = \frac{p(p - 1)}{2}a_0$$, $$a_3 = \frac{p(p - 1)(p - 2)}{6} a_0$$, etc.

So, applying the definitions of the exponential Taylor series and the falling factorial, the guessed solution would be:

$$y = \sum_{n = 0}^\infty \frac{p! \, a_0 x^n}{n! \, (p - n)!} = \sum_{n = 0}^\infty a_0 e^x p^{\underline n}$$

Solving the initial value problem:

$$1 = \sum_{n = 0}^\infty a_0 e^0 p^{\underline n} \implies a_0 = \frac{1}{\sum_{n = 0}^\infty p^{\underline n}}$$

My final solution is:

$$y = \frac{\sum_{n = 0}^\infty e^x p^{\underline n}}{\sum_{n = 0}^\infty p^{\underline n}}$$

However, the answer is supposed to be $$y = (1 + x)^p$$. Are these identical, or did I make an error somewhere?
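For comparison, evaluating the series directly at $$x = 0$$: every term with $$n \ge 1$$ vanishes there, so $$y(0) = a_0$$, and the initial condition forces $$a_0 = 1$$ with no infinite sum involved. The recurrence then gives $$a_n = \frac{p(p-1)\cdots(p-n+1)}{n!} = \binom{p}{n}$$, and
$$y = \sum_{n = 0}^\infty \binom{p}{n} x^n = (1 + x)^p$$
by the binomial series, which matches the expected answer.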