Integration technique, with a continuous second derivative

calculus and analysis – Derivative for conditional expectation of non-independent Gaussian variables is increasing

Consider two non-independent Gaussian random variables:
$$(t,c)\sim\text{BiNormal}\big((\mu_t,\mu_c),(\sigma_t,\sigma_c),\rho\big)$$

I’m interested in understanding when (i.e. for which values of the distribution parameters) it’s true that
$$\frac{\partial\, E(t \mid c<f(t)+x)}{\partial x}>0$$
where

f[t_] := b*CDF[NormalDistribution[ΞΌa, Οƒa], t]

What I’ve tried:

  1. I’ve tried generating an analytical expression for the conditional expectation, but this fails (output same as input)

Expectation[t \[Conditioned] c < b*CDF[NormalDistribution[ΞΌa, Οƒa], t] + x, {t, c} \[Distributed] BinormalDistribution[{ΞΌt, ΞΌc}, {Οƒt, Οƒc}, ρ]]

  2. I’ve also tried spelling out the expectation via integrals over the joint density and then taking the derivative. This does produce an analytical expression, but I cannot reduce it (error message: “This system cannot be solved with the methods available to Reduce”). A rough numerical sanity check is sketched after these attempts.

D[
  Integrate[t*PDF[BinormalDistribution[{ΞΌt, ΞΌc}, {Οƒt, Οƒc}, ρ], {t, c}],
     {c, -∞, CDF[NormalDistribution[ΞΌa, Οƒa], t] + x}, {t, -∞, ∞}] /
   Integrate[PDF[BinormalDistribution[{ΞΌt, ΞΌc}, {Οƒt, Οƒc}, ρ], {t, c}],
     {c, -∞, CDF[NormalDistribution[ΞΌa, Οƒa], t] + x}, {t, -∞, ∞}],
  x]
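
As a rough cross-check of the monotonicity claim itself, here is an editorial sketch (not from the original post; the parameter values are arbitrary illustrations) that estimates the conditional expectation by Monte Carlo for two nearby values of x and compares them:

(* Illustrative parameter values only; they are not from the question. *)
ΞΌt = 0; ΞΌc = 0; Οƒt = 1; Οƒc = 1; ρ = 0.5; ΞΌa = 0; Οƒa = 1; b = 1;
f[t_] := b*CDF[NormalDistribution[ΞΌa, Οƒa], t];

(* Monte Carlo estimate of E[t | c < f(t) + x]. *)
condMean[x_, n_: 10^5] := Module[{pts, sel},
  pts = RandomVariate[BinormalDistribution[{ΞΌt, ΞΌc}, {Οƒt, Οƒc}, ρ], n];
  sel = Select[pts, #[[2]] < f[#[[1]]] + x &];
  Mean[sel[[All, 1]]]]

{condMean[0.0], condMean[0.5]}  (* if the derivative is positive here, the second value should be larger *)

This only probes one point of the parameter space, of course; it does not replace the symbolic reduction.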


Here is a related question on the analytical treatment of this problem: https://math.stackexchange.com/questions/3692958/parameters-for-which-conditional-expectation-of-non-independent-gaussian-variabl

Is there any inference we can make about a function if its second derivative is constant?

I recently learned about the 2nd derivative test, and while solving some problems I encountered functions with a constant second derivative. I made the simple inference that such a function has only one of the two kinds of extrema (a single minimum or a single maximum, but not both). Is there any other inference we can make, given a constant value for the second derivative?
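
A small worked note (an editorial addition, stating only standard calculus facts): if $f''(x)=c$ for every $x$, integrating twice gives
$$f'(x)=cx+b,\qquad f(x)=\tfrac{c}{2}x^2+bx+a,$$
so for $c\neq 0$ the graph is a parabola: convex everywhere with a single global minimum when $c>0$, concave everywhere with a single global maximum when $c<0$; for $c=0$ the function is affine and has no extrema.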

Limit of the convolution of derivative of Gaussian heat kernel

I’m looking for the following limit:
$$\lim_{\varepsilon\to 0^+}\int_{-\sqrt{\varepsilon}}^{\sqrt{\varepsilon}}\frac{1}{\sqrt{2\pi}\,\varepsilon^{3/2}}\left(-1+\frac{x^2}{\varepsilon}\right)e^{-\frac{x^2}{2\varepsilon}}\,l(a+x)\,dx=\;???$$
where $l$ is a bounded and nice function ($l\in C^{\infty}$) with $l(a)\neq 0$. We remark that
$$\frac{\partial^2}{\partial x^2}\,\frac{1}{\sqrt{2\pi\varepsilon}}e^{-\frac{x^2}{2\varepsilon}}=\frac{1}{\sqrt{2\pi}\,\varepsilon^{3/2}}\left(-1+\frac{x^2}{\varepsilon}\right)e^{-\frac{x^2}{2\varepsilon}}.$$
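
One observation that may help (an editorial note, not part of the original post): substituting $x=\sqrt{\varepsilon}\,u$ gives
$$\int_{-\sqrt{\varepsilon}}^{\sqrt{\varepsilon}}\frac{1}{\sqrt{2\pi}\,\varepsilon^{3/2}}\left(-1+\frac{x^2}{\varepsilon}\right)e^{-\frac{x^2}{2\varepsilon}}\,l(a+x)\,dx
=\frac{1}{\sqrt{2\pi}\,\varepsilon}\int_{-1}^{1}(u^2-1)\,e^{-u^2/2}\,l(a+\sqrt{\varepsilon}\,u)\,du,$$
and since $\int_{-1}^{1}(u^2-1)e^{-u^2/2}\,du=\big[-u\,e^{-u^2/2}\big]_{-1}^{1}=-2e^{-1/2}\neq 0$ while $l(a+\sqrt{\varepsilon}\,u)\to l(a)\neq 0$ uniformly on $[-1,1]$, the expression scales like $1/\varepsilon$ rather than settling to a finite value.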

symbolic – Defer evaluation of function but not of its derivative

Does this work for you? I only tested it on your example.

ClearAll[f, x];
f[x_] := Exp[x^2]
(f'[x] /. DownValues[f][[1, 2]] -> HoldForm[f[x]]) // Simplify

Mathematica graphics

  (1 + D[f[x], x] /. DownValues[f][[1, 2]] -> HoldForm[f[x]]) // Simplify

Mathematica graphics

You can also use Defer in place of HoldForm above. I do not see how this would work if you Defer the original f[x] at the source, since Mathematica would not be able to take its derivative in the first place. So the idea is to take the derivative first, and then replace the down value of f (its body) by the name f[x] back in the result.
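
To see what the replacement rule is actually targeting, it can help to inspect the down value directly (a small editorial check, assuming the definition of f above):

DownValues[f][[1, 2]]  (* the right-hand side of f's definition, which evaluates to E^(x^2) *)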

If this does not work for you, I will delete this answer.

calculus – How to calculate the derivative of the phase in this complex equation?

I am trying to find the group delay of a filter for which I have already solved the phase delay here.

Phase delay and group delay are described by:

$$ \tau_\phi(\omega) = -\frac{\phi(\omega)}{\omega} $$

$$ \tau_g(\omega) = -\frac{d\phi(\omega)}{d\omega} $$

$\omega$ represents the normalized frequency, where $f_s$ is the sampling frequency:

$$
\omega = \frac{2\pi f}{f_s}
$$

Therefore, for the group delay, I must take the derivative of the phase.

My final equations for phase delay were:

$$ \phi(\omega) = \operatorname{atan2}(n, m) - \operatorname{atan2}(q, p) $$

$$ \tau_{\phi}(\omega) = -\frac{\phi(\omega)}{\omega} $$

$m$, $n$, $p$, and $q$ are real-valued constants obtained from the filter coefficients.

So at this point I think I just need to take the derivative of $\phi(\omega) = \operatorname{atan2}(n, m) - \operatorname{atan2}(q, p)$ with respect to $\omega$, and that's it. Any help with what that derivative looks like in this case? This calculation is currently over my head.
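
For reference, a standard identity that may be what is needed here (an editorial note; it assumes $n$, $m$, $q$, $p$ are in fact differentiable functions of $\omega$, which they must be for the group delay to be nonzero):
$$\frac{d}{d\omega}\operatorname{atan2}\big(n(\omega),m(\omega)\big)
=\frac{m(\omega)\,n'(\omega)-n(\omega)\,m'(\omega)}{m(\omega)^2+n(\omega)^2},$$
so that, away from the branch cut of $\operatorname{atan2}$,
$$\tau_g(\omega) = -\frac{m n' - n m'}{m^2+n^2} + \frac{p q' - q p'}{p^2+q^2}.$$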

Thank you.

Show that the derivative of $\operatorname{tr}(A X^{-1} B)$ w.r.t. $X$ is $-(X^{-1} A B\, X^{-1})^T$

where $X$ is an $N\times N$ matrix, $A$ is $1\times N$, and $B$ is $N\times 1$.
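
One standard route (an editorial sketch; the resulting factor ordering should be checked against the convention intended in the title) uses the differential of the matrix inverse:
$$dX^{-1} = -X^{-1}(dX)X^{-1}
\;\Longrightarrow\;
d\,\operatorname{tr}(AX^{-1}B) = -\operatorname{tr}\!\left(AX^{-1}(dX)X^{-1}B\right) = -\operatorname{tr}\!\left(X^{-1}BAX^{-1}\,dX\right),$$
and reading the gradient off from $df=\operatorname{tr}(G^{T}\,dX)$ gives $G=-\left(X^{-1}BAX^{-1}\right)^{T}$.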

How to plot the derivative of the eigenfunctions obtained using NDEigensystem?

I am using NDEigensystem to get the eigenfunctions of the Laplacian operator. After that I want to plot $\frac{1}{\text{funs}[[1]]}\,\frac{d\,\text{funs}[[1]]}{dx}$ against $x$, where funs is the list of eigenfunctions obtained. Can anyone tell me how I can do this?
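
A minimal sketch of one way to do this (an editorial example, assuming a 1D Laplacian on [0, 1] with Dirichlet boundary conditions; the operator, interval, and number of eigenfunctions are placeholders, not from the question):

(* Compute the first 4 eigenpairs of -u''[x] on [0, 1] with u(0) = u(1) = 0. *)
{vals, funs} = NDEigensystem[{-u''[x], DirichletCondition[u[x] == 0, True]},
   u[x], {x, 0, 1}, 4];

(* Because u[x] (not u) was given as the dependent variable, each entry of
   funs is an expression in x, so D works on it directly. *)
Plot[D[funs[[1]], x]/funs[[1]], {x, 0, 1}, PlotRange -> All]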

spectral theory – Fourier transform of the Green function and its derivative

Consider a Sturm–Liouville operator $L$ on $(0,+\infty)$, using the notation of: https://www.encyclopediaofmath.org/index.php/Titchmarsh-Weyl_m-function

Assume $a = 0$, $\alpha \in (0,\pi)$ is fixed, $w \equiv 1$, and suppose we are in the limit-point case, so that $m(\lambda)$ is unique and, for $\beta \in (0,\pi)$ fixed,
$$ m(\lambda) = \ell_{\infty}(\lambda) = \lim_{b\to+\infty}\ell_b(\lambda) = \lim_{b\to+\infty} -\frac{\cot\beta\,\phi(b,\lambda) + p(b)\,\phi'(b,\lambda)}{\cot\beta\,\psi(b,\lambda) + p(b)\,\psi'(b,\lambda)}. $$
For $0 < b \leq +\infty$, consider the Green function on $(0,b)$:
$$ G_b(x,y,\lambda) := \begin{cases} \phi(x,\lambda)\,\chi_b(y,\lambda) & 0\leq x\leq y\leq b, \\
\chi_b(x,\lambda)\,\phi(y,\lambda) & b\geq x > y\geq 0, \\
0 & x,y > b.
\end{cases} $$

In Levitan and Sargsjan's Sturm–Liouville and Dirac Operators, on page 57, the Fourier transform of $\frac{dG}{dx}(x,y,z)$ is given exactly as
$$ \int_0^{+\infty} \frac{dG}{dx}(x,y,z)\,\phi(y,\lambda)\,dy = \frac{\phi'(x,\lambda)}{z-\lambda}, $$
but at this point in the book (so far we have proved the limit-point and limit-circle theorem, an integral expansion theorem for arbitrary functions, and an integral representation for the resolvent; the surjectivity of the Fourier transform, for example, has not yet been proved), it is unclear why this is true. In fact, in the regular case we have
$$ \int_0^{b} \frac{dG_b}{dx}(x,y,z)\,\phi(y,\lambda_{n,b})\,dy = \frac{\phi'(x,\lambda_{n,b})}{z-\lambda_{n,b}}, $$
where the $\lambda_{n,b}$ are the eigenvalues of the Sturm–Liouville operator $L$ restricted to $(0,b)$.
In the book, it is said that it is enough "to take the limit $b \to +\infty$", but I don't understand why this should work.

Real analysis: finding the derivative of a reparameterized variable

Hello MathExchange people,

I am currently working to derive a derivative that seems to be a bit stubborn.

Suppose I have a multivariate function $C:\mathbb{R}^{n,n}\times\mathbb{R}^{n,n}\to\mathbb{R},\ (p,q)\mapsto x$ with derivative $\frac{\partial C}{\partial q_{ij}} = \frac{-p_{ij}}{q_{ij}}$. Now I want to parameterize the $q_{ij}$ with a softargmax such that $q_{ij} = \frac{\exp(z_{ij})}{\sum_{k}\sum_{l}\exp(z_{kl})}$ and obtain $\frac{\partial C}{\partial z_{ij}}$.

What I have done so far is simply split the derivative into two factors with the chain rule and use the derivative above, like this:

$$
\begin{align*}
\frac{\partial C}{\partial z_{ij}} &= \frac{\partial C}{\partial q_{ij}}\,\frac{\partial q_{ij}}{\partial z_{ij}} \\
&= \frac{\partial C}{\partial q_{ij}}\,\frac{\partial \frac{\exp(z_{ij})}{\sum_{k}\sum_{l}\exp(z_{kl})}}{\partial z_{ij}} \\
&= \frac{\partial C}{\partial q_{ij}}\,\frac{\exp(z_{ij})\sum_{k}\sum_{l}\exp(z_{kl}) - \exp(z_{ij})\exp(z_{ij})}{\left(\sum_{k}\sum_{l}\exp(z_{kl})\right)^2} \\
&= \frac{\partial C}{\partial q_{ij}}\,(q_{ij} - q_{ij}^2) \\
&= \frac{-p_{ij}}{q_{ij}}\,q_{ij}(1-q_{ij}) \\
&= -p_{ij} + p_{ij}q_{ij}
\end{align*}
$$

The problem is that I know I should arrive at $-p_{ij} + q_{ij}$. Does anyone see where I made my mistake?
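
One step worth double-checking (an editorial note, under the assumption that $\sum_{k,l}p_{kl}=1$, i.e. $p$ is normalized like a probability distribution): every $q_{kl}$ depends on $z_{ij}$ through the shared normalizer, so the chain rule must sum over all indices, not just $(i,j)$:
$$\frac{\partial C}{\partial z_{ij}}
=\sum_{k,l}\frac{\partial C}{\partial q_{kl}}\,\frac{\partial q_{kl}}{\partial z_{ij}}
=\sum_{k,l}\frac{-p_{kl}}{q_{kl}}\,q_{kl}\left(\delta_{ik}\delta_{jl}-q_{ij}\right)
=-p_{ij}+q_{ij}\sum_{k,l}p_{kl},$$
which reduces to $-p_{ij}+q_{ij}$ exactly when $\sum_{k,l}p_{kl}=1$.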