inverse – How to get x value where area under curve is some number?

My goal is to find the x value for which the area under the curve (from x to infinity) is 0.05. But the code I have tried gives errors. How can I correct it?

Solve[Integrate[(E^(-x/2) x^(3/2))/(3 Sqrt[2 Pi]), {x, x, Infinity}] == 0.05, x]
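For what it's worth, the integrand is exactly the chi-squared density with 5 degrees of freedom (since $2^{5/2}\,\Gamma(5/2) = 3\sqrt{2\pi}$), so the sought $x$ is its 95th percentile. A numeric cross-check, sketched in stdlib-only Python with Simpson integration and bisection (grid sizes and cutoffs are arbitrary choices of mine):

```python
import math

def chi2_5_pdf(t):
    # x^(3/2) e^(-x/2) / (3 Sqrt[2 Pi]) -- the chi-squared(5) density
    return t ** 1.5 * math.exp(-t / 2) / (3 * math.sqrt(2 * math.pi))

def tail(x, upper=300.0, m=3000):
    # Simpson's rule for the area under the curve from x to (effectively) infinity
    h = (upper - x) / m
    s = chi2_5_pdf(x) + chi2_5_pdf(upper)
    for i in range(1, m):
        s += chi2_5_pdf(x + i * h) * (4 if i % 2 else 2)
    return s * h / 3

# bisect for the x where the tail area equals 0.05 (tail is decreasing in x)
lo, hi = 1e-9, 100.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if tail(mid) > 0.05:
        lo = mid
    else:
        hi = mid

x_star = 0.5 * (lo + hi)
print(x_star)  # about 11.07, the 95th percentile of chi-squared(5)
```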

plotting – Plot a functional relation involving a derivative and inverse

I have a function f(x) obtained by solving a certain ODE. Thus, it is given as an interpolation function. I need to plot $\frac{dx}{df}$ as a function of $f$.

Below I will give a simple analytical example just to explain what I mean. Let $$f(x)=\arctan(x).$$ Then we have
$$\frac{df}{dx}=\frac{1}{1+x^2},\quad \text{or} \quad \frac{dx}{df}=1+x^2.$$

Now we express $x$ in terms of $f$, i.e., $$x=\tan(f),$$ and substitute into the equation above.

Thus, given $f(x)=\arctan(x)$, I would like to get a plot of $$1+\tan(f)^2.$$

One naive way to do it is to parametrize $f$ and $\frac{dx}{df}$ in terms of $x$ and use ParametricPlot:

ParametricPlot[{f[x], 1/f'[x]}, {x, -10, 10}, AspectRatio -> 1]

However, for my numerically defined function this does not work very well. Additionally, I would like to get the dependence in functional form, ideally again as an interpolation function. How can I achieve this? Maybe it is possible to formulate the problem as an ODE and use NDSolve?
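For a numerically defined $f$, one simple realization of this parametric idea is to tabulate $(f(x_i),\, 1/f'(x_i))$ pairs on a grid and interpolate in $f$. Here is a plain-Python sketch on the $\arctan$ toy example (the grid and the query point are arbitrary choices of mine):

```python
import bisect
import math

# stand-in for the ODE solution: f(x) = arctan(x) and its derivative on a grid
xs = [i / 10 for i in range(-100, 101)]
fs = [math.atan(x) for x in xs]      # monotone increasing, so f is invertible
dxdf = [1 + x * x for x in xs]       # dx/df = 1/f'(x)

def dxdf_of_f(fval):
    """Piecewise-linear interpolant of dx/df as a function of f."""
    i = min(max(bisect.bisect_left(fs, fval), 1), len(fs) - 1)
    t = (fval - fs[i - 1]) / (fs[i] - fs[i - 1])
    return dxdf[i - 1] + t * (dxdf[i] - dxdf[i - 1])

# sanity check against the exact relation dx/df = 1 + tan(f)^2
print(dxdf_of_f(0.5), 1 + math.tan(0.5) ** 2)
```

The same tabulate-and-interpolate step works when $f$ itself is an interpolation function, as long as it is monotone on the range of interest.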

co.combinatorics – What is the inverse Laplace transform of $\frac{(1/s)_{n}}{s}$?


So far, I have found (p. 5) the following generating functions of the unsigned Stirling numbers of the first kind:

\begin{equation} \tag{1} \label{1} \sum_{l=1}^{n} |S_{1}(n,l)|z^{l} = (z)_{n} = \prod_{k=0}^{n-1} (z+k) =: g(z), \end{equation} and \begin{equation} \tag{2} \label{2} \sum_{n=l}^{\infty} \frac{|S_{1}(n,l)|}{n!}z^{n} = (-1)^{l} \frac{\ln^{l}(1-z)}{l!}. \end{equation}

Instead of summing the latter expression over $n$, I'm curious whether there is a simpler or different expression for the generating sum when it is summed over the other index:

\begin{equation} \tag{3} \label{3} f(z) := \sum_{k=1}^{n} \frac{|S_{1}(n,k)|}{k!}z^{k}. \end{equation} (One could also take the sum from $k=1$ to $k=\infty$, as $|S_{1}(n,k)| = 0$ when $k>n$.)
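Identity $(1)$ gives a direct way to tabulate the $|S_{1}(n,k)|$ that enter $(3)$: expand $\prod_{k=0}^{n-1}(z+k)$ by repeated polynomial multiplication. A small sketch (the function name is mine):

```python
def rising_factorial_coeffs(n):
    """Coefficients of prod_{k=0}^{n-1} (z + k); entry l is |S1(n, l)|."""
    coeffs = [1]                      # the constant polynomial 1
    for k in range(n):                # multiply the polynomial by (z + k)
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i] += c * k           # contribution of the "+ k" factor
            new[i + 1] += c           # contribution of the "z" factor
        coeffs = new
    return coeffs

print(rising_factorial_coeffs(4))  # [0, 6, 11, 6, 1], e.g. |S1(4, 2)| = 11
```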

Work so far

Approach 1

We see that $(3)$ is the egf version of the ogf in $(1)$. So one of the ways I've tried to find $f(\cdot)$ is by converting the first equation into the third by applying the inverse Laplace transform to $g(1/s)/s$.

From $(1)$, we see that $g(1/s)/s = \frac{(1/s)_{n}}{s}$. The tricky part of finding the inverse Laplace transform lies in the numerator. Therefore, I tried to express it in terms of other functions whose inverse Laplace transforms might be known.

For instance, note that the Chu–Vandermonde identity states: \begin{equation} \tag{4} \label{4} {}_2F_1\left(-n,b;c;1\right) = \frac{(c-b)_{n}}{(c)_{n}}. \end{equation}

Now, set $c=1$ and $b = 1 - 1/s$. Then:

\begin{equation} \tag{5} \label{5} {}_2F_1\left(-n,1-1/s;1;1\right) = \frac{(1/s)_{n}}{n!} =: h(s). \end{equation}

If the inverse Laplace transform of $h(\cdot)$ were known, then I would only have to multiply by $n!$ and convolve with $\{ \mathcal{L}^{-1} (1/s) \} (t) = u(t)$, where $u(t)$ is the unit step function.

However, I did not find an expression of the inverse Laplace transform of the hypergeometric function in $(5)$ in the “Tables of Laplace Transforms” by Oberhettinger and Badii (1973).

Approach 2

Another approach I tried is to note that \begin{equation} \tag{6} \label{6} g(1/s) = (1/s)_{n} = \frac{\Gamma(1/s + n)}{\Gamma(1/s)}. \end{equation}

Unfortunately, the inverse Laplace transform of \begin{equation} \tag{7} \label{7} q(s) := \frac{\Gamma(s+a)}{\Gamma(s+b)} \end{equation} is only given by Oberhettinger and Badii (p. 308) in the case $\Re(b-a) > 0$, which is not the case here.

Approach 3

Finally, I tried rewriting $g(1/s)/s$ as follows:

\begin{align} g(1/s)/s &= \frac{1}{s^{2}} \cdot \frac{1+s}{s} \cdot \frac{1+2s}{s} \cdots \frac{1+(n-1)s}{s} \\
&= \frac{ \prod_{k=1}^{n-1} (1+ks) }{s^{n+1}} \\
&= \frac{(n-1)! \prod_{k=1}^{n-1}\Big(s+\frac{1}{k} \Big) }{s^{n+1}}. \end{align}

Observe that $\{ \mathcal{L}^{-1} s^{-(n+1)} \}(t) = t^{n}/n!$, so we are left with finding the inverse Laplace transform of the numerator (and convolving afterwards). However, I have not been able to do so thus far.
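As a sanity check on this rewriting: with $(z)_{n} = \prod_{k=0}^{n-1}(z+k)$ as in $(1)$, the product runs over $k = 1,\dots,n-1$ and the denominator is $s^{n+1}$. Exact rational arithmetic in Python (helper names are mine) confirms the identity:

```python
from fractions import Fraction

def g_over_s(n, s):
    """(1/s)_n / s computed directly from the definition of (z)_n."""
    p = Fraction(1)
    for k in range(n):
        p *= Fraction(1) / s + k
    return p / s

def rewritten(n, s):
    """prod_{k=1}^{n-1} (1 + k s) / s^(n+1), the rewritten form."""
    num = Fraction(1)
    for k in range(1, n):
        num *= 1 + k * s
    return num / s ** (n + 1)

s = Fraction(3, 7)
print(all(g_over_s(n, s) == rewritten(n, s) for n in (1, 2, 3, 5)))  # True
```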


  1. Is the inverse Laplace transform of $\frac{(1/s)_{n}}{s}$ known, or can it be calculated somehow?
  2. Are there already any other expressions known for $f(cdot)$ that can be found by other means than the ones I laid out so far?

N.B. this is a more elaborate version of a question I asked earlier on MSE.

inverse trigonometric functions in traditional form

Is there a way to have 'arcsin' rather than 'sin^-1' displayed for arcsin in TraditionalForm? I couldn't find any way to set options customizing how expressions are rendered in traditional form, and I feel that 'arcsin' is much more widely used among mathematicians than sin^-1.
Thanks in advance to everybody.

Is the inverse of the MST cut property true? Why?

If we partition the nodes of a graph into sets A and B, and there is an edge e whose weight is larger than that of any other edge crossing the cut between A and B, is it true that e can never be in the minimum spanning tree?
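One way to probe this is to run Kruskal's algorithm on a small example. In the sketch below (a minimal implementation, names mine), the heaviest edge crossing the cut {0} | {1, 2} still ends up in the MST, because it is the only edge reaching vertex 2:

```python
def kruskal(n, edges):
    """Minimal Kruskal's algorithm; edges are (weight, u, v) tuples."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

# cut A = {0}, B = {1, 2}: both edges cross it, yet the heavier one (weight 2)
# is forced into the MST
print(kruskal(3, [(1, 0, 1), (2, 0, 2)]))  # [(1, 0, 1), (2, 0, 2)]
```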

Limit of the averaged convolution of two convergent sequences of complex numbers equals the product of their limits

Below I state my question. It's pretty much self-explanatory.

Let $(z_{n})_{n=0}^{+\infty}$ and $(w_{n})_{n=0}^{+\infty}$ be convergent sequences of complex numbers. Prove that

$$\lim_{n \rightarrow +\infty}\frac{z_{0}w_{n}+z_{1}w_{n-1}+\dots+z_{n}w_{0}}{n+1} =\Big(\lim_{k \rightarrow +\infty}z_{k}\Big)\Big(\lim_{m \rightarrow +\infty}w_{m}\Big)$$

What I have got so far:

I have recognized that
$$\frac{z_{0}w_{n}+z_{1}w_{n-1}+\dots+z_{n}w_{0}}{n+1} = \frac{1}{n+1}\sum_{i=0}^{n} z_{i}w_{n-i}.$$

I am kind of stuck here right now. I have tried to split the limit (using the product property) so as to use the following result:

$$\lim_{n\rightarrow +\infty}\frac{z_{0}+z_{1}+\dots+z_{n-1}}{n} = \lim_{k}z_{k},$$

for any convergent sequence $(z_{n})_{n=0}^{+\infty}$ of complex numbers.

If I could prove that $\sum_{i=0}^{n} z_{i}w_{n-i}$ can be rewritten as $v_{0}+v_{1}+\dots+v_{n}$ with $\lim_{n}v_{n} = (\lim_{k \rightarrow +\infty}z_{k})(\lim_{m \rightarrow +\infty}w_{m})$, the exercise would become a lot easier.
I wasn't able to do that; if anyone could help me with this one I would be really thankful. Have a nice one, guys.
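The claimed limit is easy to check numerically for concrete sequences before proving it (the sample sequences below are my choice):

```python
n = 2000
z = [1 + 1 / (i + 1) for i in range(n + 1)]  # converges to 1
w = [2 - 1 / (i + 1) for i in range(n + 1)]  # converges to 2

# averaged convolution (z0*wn + z1*w(n-1) + ... + zn*w0) / (n + 1)
avg = sum(z[i] * w[n - i] for i in range(n + 1)) / (n + 1)
print(avg)  # close to (lim z)(lim w) = 2
```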

reference request – Hadamard’s global inverse function theorem for an open subset of $mathbb{R}^n$?

Hadamard's global inverse function theorem states: Let $F : \mathbb{R}^N \to \mathbb{R}^N$ be a $C^2$ mapping. Suppose that $F(0) = 0$ and that the Jacobian determinant of $F$ is nonzero at each point. Suppose furthermore that $F$ is proper (the inverse image of a compact set is compact). Then $F$ is one-to-one and onto.

What I was wondering is: can we replace the domain $\mathbb{R}^N$ with an open subset $U$? In other words, does the following version of Hadamard's theorem hold?

Let $U \subset \mathbb{R}^N$ be the unit $L^2$ ball centered at $0$ and $F : U \to \mathbb{R}^N$ be a $C^2$ mapping. Suppose that $F(0) = 0$ and that the Jacobian determinant of $F$ is nonzero at each point of $U$. Suppose furthermore that $F$ is proper (the inverse image of a compact set is compact). Then $F$ is one-to-one and surjective onto its image.

I have searched around, but I could only find Hadamard's theorem for $\mathbb{R}^N$. If someone is familiar with this topic, I would appreciate any comments or references.

Getting inverse of x^2 with positive x

My goal is to get the inverse of x^2 for positive x.
But the result always shows -Sqrt[y].
How can I correct this?

u[x_] := x^2
$Assumptions = x > 0;
Refine[InverseFunction[u][y], y \[Element] Reals]

I have tried this code also, but it also shows -Sqrt[y].

Refine[x /. Solve[y == u[x], x][[1]], y \[Element] Reals]

Asymptotic expansion around infinity for inverse cdf of normal distribution

I'm trying to get an asymptotic expansion as $x\rightarrow\infty$ for a particular expression. I have

f[x_] := 1/x*InverseCDF[NormalDistribution[0, 1], 1 - Exp[-x^(1/4)]];

As $x\rightarrow\infty$, we have $f(x)\rightarrow 0$, but I am interested in how it behaves at $\infty$. I tried using AsymptoticSolve and Series, but neither seems to get me a nice expansion.

How can I proceed with this?
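For what it's worth, the leading behavior can be worked out by hand: with $\varepsilon = e^{-x^{1/4}}$, the standard normal quantile satisfies $\Phi^{-1}(1-\varepsilon) \sim \sqrt{2\ln(1/\varepsilon)} = \sqrt{2}\,x^{1/8}$, so $f(x) \sim \sqrt{2}\,x^{-7/8}$ to leading order, with slowly decaying logarithmic corrections. A rough numeric sanity check in Python (the test point is an arbitrary choice of mine; `statistics.NormalDist` is in the stdlib):

```python
import math
from statistics import NormalDist

def f(x):
    # 1/x * InverseCDF[NormalDistribution[0, 1], 1 - Exp[-x^(1/4)]]
    return NormalDist().inv_cdf(1 - math.exp(-x ** 0.25)) / x

x = 20.0 ** 4          # large, but exp(-x^(1/4)) is still above machine epsilon
leading = math.sqrt(2) * x ** (-7 / 8)
print(f(x) / leading)  # ratio slowly approaches 1 as x grows
```

Note that for much larger $x$, `1 - math.exp(-x ** 0.25)` rounds to exactly 1.0 in double precision and the quantile is no longer computable this way.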

artificial intelligence – Proving facts in inverse reinforcement learning

I was going through the paper titled "Algorithms for Inverse Reinforcement Learning" by Andrew Ng and Russell.

It states the following basics:

  • MDP $M$ is a tuple $(S,A,\{P_{sa}\},\gamma,R)$, where

    • $S$ is a finite set of $N$ states
    • $A=\{a_1,\dots,a_k\}$ is a set of $k$ actions
    • $\{P_{sa}(\cdot)\}$ are the transition probabilities upon taking action $a$ in state $s$.
    • $R:S\rightarrow \mathbb{R}$ is a reinforcement function (I guess this is what is also called the reward function). For simplicity in exposition, rewards are written as $R(s)$ rather than $R(s,a)$; the extension is trivial.
  • A policy is defined as any map $\pi : S \rightarrow A$

  • Bellman equation for the value function: $V^\pi(s)=R(s)+\gamma \sum_{s'}P_{s\pi(s)}(s')V^\pi(s')\quad\quad\dots(1)$

  • Bellman equation for the Q function: $Q^\pi(s,a)=R(s)+\gamma \sum_{s'}P_{sa}(s')V^\pi(s')\quad\quad\dots(2)$

  • Bellman optimality: the policy $\pi$ is optimal iff, for all $s\in S$, $\pi(s)\in \operatorname{argmax}_{a\in A}Q^\pi(s,a)\quad\quad\dots(3)$

  • All these can be represented as vectors indexed by state, for which we
    adopt boldface notation $\pmb{P},\pmb{R},\pmb{V}$.

  • Inverse reinforcement learning is: given an MDP $M=(S,A,P_{sa},\gamma,\pi)$, find $R$ such that $\pi$ is an optimal policy for $M$.

  • By renaming actions if necessary, we will assume
    without loss of generality that $\pi(s) = a_1$.

The paper then states the following theorem, its proof, and a related remark:

Theorem: Let a finite state space $S$, a set of actions $A=\{a_1,\dots, a_k\}$, transition probability matrices $\{\pmb{P}_{a}\}$, and a discount factor $\gamma \in (0, 1)$ be given. Then the policy $\pi$ given by $\pi(s) \equiv a_1$ is optimal iff, for all $a = a_2, \dots , a_k$, the reward $\pmb{R}$ satisfies $$(\pmb{P}_{a_1}-\pmb{P}_a)(\pmb{I}-\gamma\pmb{P}_{a_1})^{-1}\pmb{R}\succcurlyeq 0 \quad\quad \dots(4)$$
Equation (1) can be rewritten as
$\therefore\pmb{V}^\pi=(\pmb{I}-\gamma\pmb{P}_{a_1})^{-1}\pmb{R}\quad\quad \dots(5)$
Putting equation $(2)$ into $(3)$, we see that $\pi$ is optimal iff
$\pi(s)\in \operatorname{arg\,max}_{a\in A}\sum_{s'}P_{sa}(s')V^\pi(s') \quad\forall s\in S$
$\iff \sum_{s'}P_{sa_1}(s')V^\pi(s')\geq\sum_{s'}P_{sa}(s')V^\pi(s')\quad\quad\quad\forall s\in S,\, a\in A$
$\iff \pmb{P}_{a_1}\pmb{V}^\pi\succcurlyeq\pmb{P}_{a}\pmb{V}^\pi\quad\quad\quad\forall a\in A\setminus\{a_1\} \quad\quad \dots(6)$
$\iff\pmb{P}_{a_1} (\pmb{I}-\gamma\pmb{P}_{a_1})^{-1}\pmb{R}\succcurlyeq\pmb{P}_{a} (\pmb{I}-\gamma\pmb{P}_{a_1})^{-1}\pmb{R} \quad\quad \text{...from (5)}$
Hence proved.

Remark: Using a very similar argument, it is easy to show (essentially by replacing all inequalities in the proof above with strict inequalities) that the condition $(\pmb{P}_{a_1}-\pmb{P}_a)(\pmb{I}-\gamma\pmb{P}_{a_1})^{-1}\pmb{R}\succ 0$ is necessary and sufficient for $\pi\equiv a_1$ to be the unique optimal policy.
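Condition (4) and the greedy optimality of $a_1$ can be compared numerically on a toy MDP. The sketch below (2 states, 2 actions; the numbers are my choice) checks that the sign of $((\pmb{P}_{a_1}-\pmb{P}_{a_2})\pmb{V}^\pi)_s$ matches whether $a_1$ is greedy at $s$; the two quantities differ only by the factor $\gamma$, since the reward terms cancel:

```python
gamma = 0.9
P1 = [[0.9, 0.1], [0.2, 0.8]]  # transitions under a1
P2 = [[0.5, 0.5], [0.6, 0.4]]  # transitions under a2
R = [1.0, 0.0]                 # state-only reward

# V^pi = (I - gamma P1)^{-1} R for pi = a1 everywhere (2x2 inverse by hand)
a, b = 1 - gamma * P1[0][0], -gamma * P1[0][1]
c, d = -gamma * P1[1][0], 1 - gamma * P1[1][1]
det = a * d - b * c
V = [(d * R[0] - b * R[1]) / det, (-c * R[0] + a * R[1]) / det]

for s in (0, 1):
    lhs = sum((P1[s][t] - P2[s][t]) * V[t] for t in (0, 1))  # ((P1 - P2) V)[s]
    q1 = R[s] + gamma * sum(P1[s][t] * V[t] for t in (0, 1))
    q2 = R[s] + gamma * sum(P2[s][t] * V[t] for t in (0, 1))
    print(s, lhs >= 0, q1 >= q2)  # the two booleans agree at every state
```

With these numbers the condition fails at state 1, and correspondingly $a_2$ is greedy there, so $\pi \equiv a_1$ is not optimal for this $R$.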

I don't know if the above text from the paper is relevant for what I want to prove; still, I stated it as background.

I want to prove the following:

  1. If we take $R : S \rightarrow \mathbb{R}$, show that there need not exist an $R$ such that $\pi^*$ is the unique optimal policy for $(S, A, T, R, \gamma)$.
  2. If we take $R : S \times A \rightarrow \mathbb{R}$, show that there must exist an $R$ such that $\pi^*$ is the unique optimal policy for $(S, A, T, R, \gamma)$.

I guess point 1 follows directly from the above theorem, as it says "$\pi$ is optimal iff ..." and not "uniquely optimal iff". Also, I feel it follows from the operator $\succcurlyeq$ in equation $(6)$. In addition, I feel it is quite intuitive: if every action from a given state yields the same reward, then different policies choosing different actions will collect the same reward from that state, hence resulting in the same value function.

I don't feel point 2 is correct. I guess this directly follows from the remark above, which requires an additional condition to hold for $\pi$ to be uniquely optimal, and this condition won't hold if we simply define $R : S \times A \rightarrow \mathbb{R}$ instead of $R : S \rightarrow \mathbb{R}$. Additionally, I feel this condition would hold iff we had $=$ in equation $(3)$ instead of $\in$ (as this would replace all $\succcurlyeq$ with $\succ$ in the proof). Also, this follows directly from point 1 itself: we can still have the same reward for all actions from a given state despite defining the reward as $R : S \times A \rightarrow \mathbb{R}$ instead of $R : S \rightarrow \mathbb{R}$, which is the case with point 1.

Am I correct with the analysis in the last two paragraphs?


After some more thinking, I felt I was doing it all wrong. Also, I feel the text from the paper I quoted is of not much help in proving these two points. So let me restate new intuition for the proofs of the two points:

  1. For $R: S\rightarrow \mathbb{R}$: if some state $S_1$ and next state $S_2$ have two actions between them, $S_1 \xrightarrow{a_1} S_2$ and $S_1 \xrightarrow{a_2} S_2$, and the optimal policy $\pi_1^*$ chooses $a_1$, then a policy $\pi_2^*$ choosing $a_2$ will also be optimal, so neither is "uniquely" optimal, since both $a_1$ and $a_2$ yield the same reward (the reward is associated with $S_1$ rather than with $(S_1,a_x)$).

  2. For $R: S\times A\rightarrow\mathbb{R}$: we can assign a large reward, say $+\infty$, to every action specified by the given $\pi^*$ and $-\infty$ to all other actions. This reward assignment makes $\pi^*$ the unique optimal policy.
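The intuition in point 1 can be made concrete with a two-state MDP in which both actions from state 0 lead deterministically to the absorbing state 1 (a minimal sketch; the numbers are mine):

```python
gamma = 0.9
R = [1.0, 0.0]                        # reward depends on the state only
P = {
    "a1": [[0.0, 1.0], [0.0, 1.0]],   # a1: 0 -> 1, 1 -> 1
    "a2": [[0.0, 1.0], [0.0, 1.0]],   # a2: same transitions from state 0
}

V = [0.0, 0.0]
for _ in range(200):                  # value iteration
    V = [R[s] + gamma * max(sum(P[a][s][t] * V[t] for t in (0, 1)) for a in P)
         for s in (0, 1)]

Q = {a: [R[s] + gamma * sum(P[a][s][t] * V[t] for t in (0, 1)) for s in (0, 1)]
     for a in P}
print(Q["a1"][0], Q["a2"][0])  # identical, so neither policy is uniquely optimal
```

Since the Q-values of $a_1$ and $a_2$ tie exactly in state 0, both deterministic policies are optimal and no state-only $R$ can break the tie, which is the content of point 1.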

Is the above logic correct and enough to prove the given points?