## convex optimization – Convexity of the Hamiltonian: optimal control

Consider $$H(x, p)=\sup_{u \in U}\{-f(x, u) \cdot p-f^0(x, u)\}.$$
This is the classical Hamiltonian arising from an infinite-horizon optimal control problem for ODEs, with state equation
$$\begin{equation}\label{state_eq_finite_dim} \left\{\begin{array}{l} y^{\prime}(t)=f(y(t), u(t)), \quad t>0 \\ y(0)=x \in \mathbb{R}^n \end{array}\right. \end{equation}$$
where $$u \in \mathcal{U}=\{u:(0,+\infty) \rightarrow U \text{ measurable}\}$$ and $$U \subset \mathbb{R}^n$$,

and the problem is to minimize
$$\begin{equation} J(x, u)=\int_{0}^{+\infty} e^{-\lambda t} f^0(y_x(t), u(t)) \, dt. \end{equation}$$

Assume for simplicity that $$f, f^0 \in C^1_B$$ and that $$f$$ is Lipschitz uniformly in $$u$$.

Now in (Bardi, Martino, and Italo Capuzzo-Dolcetta. Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations. Springer Science & Business Media, 1997.) it is claimed that for fixed $$x$$ the function $$H(x,p)$$ is convex in $$p$$.

How do you see that?
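The key observation is that, for fixed $$x$$ and fixed $$u$$, the map $$p \mapsto -f(x,u)\cdot p - f^0(x,u)$$ is affine in $$p$$, and a pointwise supremum of affine functions is always convex; no regularity of $$f$$, $$f^0$$, or structure of $$U$$ is needed. A minimal numerical illustration (not a proof), with toy dynamics and cost invented purely for this sketch:

```python
import numpy as np

# Illustration only: for fixed x, each p -> -f(x,u).p - f0(x,u) is AFFINE
# in p, so H(x, .) = sup over u of affine functions is convex.  The
# dynamics f and running cost f0 below are made up for this sketch.
rng = np.random.default_rng(0)

def f(x, u):
    return np.array([np.sin(u) * x[0], np.cos(u) + x[1]])

def f0(x, u):
    return u**2 + x @ x

def H(x, p, U=np.linspace(-1.0, 1.0, 101)):
    # sup over a finite grid standing in for the control set U
    return max(-f(x, u) @ p - f0(x, u) for u in U)

x = np.array([0.3, -0.7])
for _ in range(200):
    p, q = rng.normal(size=2), rng.normal(size=2)
    t = rng.uniform()
    # convexity of H(x, .) along the segment [p, q]
    assert H(x, t*p + (1-t)*q) <= t*H(x, p) + (1-t)*H(x, q) + 1e-9
```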

## Convexity of a log quadratic function

My objective function is $$\log_2(x^2+x+1)$$. Is this a quasiconvex function? If not, is it possible to rewrite it as a convex function?
Thanks!
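Since $$x^2+x+1$$ is convex (and positive everywhere) and $$\log_2$$ is increasing, the composition is quasiconvex: an increasing transform preserves quasiconvexity. It is not convex on all of $$\mathbb{R}$$, since the second derivative $$\frac{-2x^2-2x+1}{(x^2+x+1)^2\ln 2}$$ changes sign. A small numerical sanity check of both claims:

```python
import numpy as np

# g = log2(x^2 + x + 1): quasiconvex (increasing transform of a convex
# function) but not convex, since g'' changes sign.
def g(x):
    return np.log2(x**2 + x + 1)

def g2(x):
    # second derivative, computed by hand from g = ln(x^2+x+1)/ln 2
    return (-2*x**2 - 2*x + 1) / ((x**2 + x + 1)**2 * np.log(2))

rng = np.random.default_rng(1)
for _ in range(500):
    a, b = rng.uniform(-10, 10, size=2)
    t = rng.uniform()
    # quasiconvexity: value on a segment never exceeds max of endpoints
    assert g(t*a + (1-t)*b) <= max(g(a), g(b)) + 1e-12

assert g2(0.0) > 0 and g2(3.0) < 0   # sign change => not convex
```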

## functional analysis – Logarithmic convexity of the norm of \$W^{s,p}(\Omega)\$ in \$s\$, analogous to that of \$H^s(\Omega)\$

I have seen the following logarithmic convexity property:

Let $$s$$ and $$s'$$ be two real numbers and suppose $$u\in H^{\max\{s,s'\}}(\mathbb{R}^n)$$; then for any $$0\le t\le 1$$,
$$\|u\|_{ts+(1-t)s'}\le C\|u\|_s^{t}\|u\|_{s'}^{1-t}.$$

I am wondering whether a similar result holds for the Sobolev space $$W^{s,p}(\mathbb{R}^n)$$.

Any help or reference will be greatly appreciated.
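For the $$H^s$$ case the inequality follows, with $$C=1$$, from Hölder's inequality on the Fourier side, since $$\|u\|_s^2=\int (1+|\xi|^2)^s|\hat u(\xi)|^2\,d\xi$$. A numerical sketch of the discrete analogue on a periodic grid (the sample function `u` is arbitrary, chosen only for the check):

```python
import numpy as np

# Discrete analogue of the H^s norm on a periodic grid:
#   ||u||_s^2 = sum_k (1 + k^2)^s |u_hat_k|^2,
# for which Holder's inequality gives log-convexity in s with C = 1.
N = 256
x = np.linspace(0, 2*np.pi, N, endpoint=False)
u = np.exp(np.sin(x)) + 0.5*np.cos(3*x)      # arbitrary smooth sample

uh = np.fft.fft(u) / N
k = np.fft.fftfreq(N, d=1.0/N)               # integer frequencies

def Hs_norm(s):
    return np.sqrt(np.sum((1 + k**2)**s * np.abs(uh)**2))

s, s2, t = 2.0, 0.5, 0.3
interp = Hs_norm(t*s + (1-t)*s2)
bound = Hs_norm(s)**t * Hs_norm(s2)**(1-t)
assert interp <= bound * (1 + 1e-12)         # log-convexity in s
```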

## convex analysis – Convexity and linear optimization

Consider the optimization problem $$\min_{x\in S}f(x)$$, where $$f(x)=\max_{i=1,\dots,m}\{a_i x+b_i\}$$ and $$S$$ is a polyhedron contained in $$\mathbb{R}^n$$.

First I want to show that the function $$f$$ is a convex function.

What I have done so far: we know the $$\max$$ of convex functions is convex, and each $$a_i x+b_i$$ is affine, hence convex, so $$f(x)$$ is convex. Now I am stuck at the $$\min_{x\in S}f(x)$$ part. I am not sure how to handle the $$\min$$ over the region $$S$$.

Second, I want to show that the optimization problem $$\min_{x\in S}f(x)$$ can be solved as a linear optimization problem. I don't have any idea how to approach this one.

Any help would be appreciated!
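For the second part, the standard epigraph reformulation turns the min–max problem into an LP: introduce a scalar $$t$$ and minimize $$t$$ subject to $$a_i x + b_i \le t$$ for all $$i$$, together with the constraints defining $$S$$. A sketch with `scipy.optimize.linprog` on made-up data, where the box $$[-1,1]^n$$ stands in for the polyhedron $$S$$:

```python
import numpy as np
from scipy.optimize import linprog

# Epigraph reformulation:  min_x max_i (a_i x + b_i)  over S = {x : G x <= h}
# becomes   min_{x,t} t   s.t.  a_i x - t <= -b_i (all i),  G x <= h.
rng = np.random.default_rng(2)
n, m = 3, 5
A = rng.normal(size=(m, n)); b = rng.normal(size=m)
G = np.vstack([np.eye(n), -np.eye(n)])       # example polyhedron: box [-1,1]^n
h = np.ones(2*n)

# variables z = (x, t); the objective picks out t
c = np.zeros(n + 1); c[-1] = 1.0
A_ub = np.block([[A, -np.ones((m, 1))],
                 [G, np.zeros((2*n, 1))]])
b_ub = np.concatenate([-b, h])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)]*(n+1))
x_opt, t_opt = res.x[:n], res.x[-1]

# at the optimum, t equals f(x_opt) = max_i (a_i x_opt + b_i)
assert res.success
assert abs(t_opt - np.max(A @ x_opt + b)) < 1e-6
```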

## economics – Microeconomics Doubt about Convexity and Satiation [\$u(x_1,x_2)=-x_1^2-2x_2^2+2x_1x_2-10x_1+40x_2\$]

I am having some difficulty with this question: prove that $$u(x_1,x_2)=-x_1^2-2x_2^2+2x_1x_2-10x_1+40x_2$$ represents a strictly convex preference and has a global satiation point.

I really don't know how to approach this problem. For the convexity part, I think I need to prove that $$u(x_1,x_2)$$ is strictly quasi-concave, but I don't know how to do that.

Thank you very much.
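A sketch of the standard approach: the utility is quadratic, so its Hessian is the constant matrix $$\begin{pmatrix}-2 & 2\\ 2 & -4\end{pmatrix}$$. Negative definiteness gives strict concavity, hence strict quasi-concavity, i.e. a strictly convex preference; the unique stationary point is then the global maximum (the satiation point). The algebra, checked numerically:

```python
import numpy as np

# Hessian of u(x1,x2) = -x1^2 - 2 x2^2 + 2 x1 x2 - 10 x1 + 40 x2
H = np.array([[-2.0, 2.0],
              [2.0, -4.0]])
assert np.all(np.linalg.eigvalsh(H) < 0)   # negative definite => strictly concave

# satiation point: grad u = 0, i.e.
#   -2 x1 + 2 x2 - 10 = 0
#    2 x1 - 4 x2 + 40 = 0
grad_A = np.array([[-2.0, 2.0], [2.0, -4.0]])
grad_b = np.array([10.0, -40.0])           # constants moved to the RHS
x_star = np.linalg.solve(grad_A, grad_b)
assert np.allclose(x_star, [10.0, 15.0])   # satiation point (x1, x2) = (10, 15)
```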

## Is the Lagrangian relaxation a convex optimization problem or not?

We know that a regular maximization LP problem has the form
$$z^* = \max_x\ c^T x \quad \text{s.t.}\quad x \in X,\ Ax \leq b,$$
where $$b \in \mathbb{R}^m$$. There is a technique called Lagrangian relaxation, which can make the problem easier to solve. The Lagrangian relaxation is given by
$$z(u) = \max_x\ c^T x + u^T(b-Ax) \quad \text{s.t.}\quad x \in X.$$
The Lagrangian relaxation $$z(u)$$ provides an upper bound on $$z^*$$ for any $$u \geq 0$$, $$u \in \mathbb{R}^m$$. Therefore, the tightest possible Lagrangian bound is
$$\min_{u \geq 0} z(u).$$
This is a good upper bound on the original optimal value. But the textbook also mentions that the above minimization problem (in $$u$$) is a convex optimization problem. Could someone explain why $$z(u)$$ is a convex function of $$u$$? Thank you very much.
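The key fact: for each fixed $$x$$, the map $$u \mapsto c^T x + u^T(b - Ax)$$ is affine in $$u$$, and $$z(u)$$ is the pointwise supremum of these affine functions over $$x \in X$$, hence convex regardless of the structure of $$X$$ (it may be discrete, nonconvex, etc.). A numerical sketch with a hypothetical finite stand-in for $$X$$:

```python
import numpy as np

# z(u) = max over x in X of the AFFINE-in-u function c.x + u.(b - A x),
# so z is convex in u.  Here X is a made-up finite set of points.
rng = np.random.default_rng(3)
m, n = 4, 3
A = rng.normal(size=(m, n)); b = rng.normal(size=m); c = rng.normal(size=n)
X = [rng.normal(size=n) for _ in range(20)]

def z(u):
    return max(c @ x + u @ (b - A @ x) for x in X)

for _ in range(500):
    u, v = np.abs(rng.normal(size=m)), np.abs(rng.normal(size=m))
    t = rng.uniform()
    # convexity of z along the segment [u, v]
    assert z(t*u + (1-t)*v) <= t*z(u) + (1-t)*z(v) + 1e-9
```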

## On the mean value theorem and convexity

Let $$f$$ be a function of class $$C^2$$ on $$(a,b)$$ such that $$f'' \ge k$$ for some $$k\in\mathbb{R}$$.

Show that for all $$x \in (a,b)$$, we have $$\frac{f(x)-f(a)}{x-a} \le \frac{f(b)-f(a)}{b-a}-\frac{k}{2}(b-x).$$

Conversely, if this inequality holds on every subinterval $$(c,d) \subset (a,b)$$, do we have $$f'' \ge k$$?
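For the forward direction, a useful reduction: $$g(x) = f(x) - \tfrac{k}{2}x^2$$ satisfies $$g'' \ge 0$$, so $$g$$ is convex, and the standard slope inequality $$\frac{g(x)-g(a)}{x-a} \le \frac{g(b)-g(a)}{b-a}$$ rearranges to exactly the stated estimate. A quick numerical sanity check with $$f = \exp$$ on $$(0,1)$$, where $$f'' = e^x \ge 1 = k$$:

```python
import numpy as np

# Check the inequality (f(x)-f(a))/(x-a) <= (f(b)-f(a))/(b-a) - (k/2)(b-x)
# for f = exp on (0,1), where f'' = e^x >= 1 = k.
a, b, k = 0.0, 1.0, 1.0
f = np.exp
for x in np.linspace(0.01, 0.99, 99):
    lhs = (f(x) - f(a)) / (x - a)
    rhs = (f(b) - f(a)) / (b - a) - 0.5 * k * (b - x)
    assert lhs <= rhs + 1e-12
```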

## matrices – Convexity of a set defined by a positive semidefinite matrix

Can anybody help me with this question?
Let $$A\in \mathbb{R}^{n\times n}$$ be a positive semidefinite matrix and let $$\alpha \geq 0$$. I want to prove the following:

$$M_{\alpha} = \{ x \in \mathbb{R}^n \mid x^{T}Ax \leq \alpha\}$$ is convex.
To prove this, I am advised to first establish that for any $$\lambda,\mu \in \mathbb{R}$$,
$$(\lambda x+\mu y)^{T}A(\lambda x+\mu y)=\lambda^2 x^{T}Ax+2\lambda\mu\, x^{T}Ay+\mu^2 y^{T}Ay.$$

Please excuse my bad writing.
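With the hint: take $$\lambda+\mu=1$$, $$\lambda,\mu\ge 0$$; the expansion together with the Cauchy–Schwarz inequality for the semi-inner product $$\langle x,y\rangle_A = x^T A y$$ (valid because $$A$$ is PSD) gives $$(\lambda x+\mu y)^T A(\lambda x+\mu y) \le \lambda^2\alpha + 2\lambda\mu\alpha + \mu^2\alpha = \alpha$$. A numerical sanity check, with a random PSD matrix and points rescaled into $$M_\alpha$$, all chosen for illustration:

```python
import numpy as np

# Checks both the convexity of M_alpha = {x : x^T A x <= alpha} and the
# Cauchy-Schwarz step  x^T A y <= sqrt(x^T A x) sqrt(y^T A y)  for PSD A.
rng = np.random.default_rng(4)
n, alpha = 4, 2.0
B = rng.normal(size=(n, n))
A = B.T @ B                      # positive semidefinite by construction

def q(x):
    return x @ A @ x

def sample_in_M():
    x = rng.normal(size=n)
    return x * np.sqrt(alpha / q(x)) * rng.uniform()   # forces q(x) <= alpha

for _ in range(500):
    x, y = sample_in_M(), sample_in_M()
    lam = rng.uniform()
    assert q(lam*x + (1-lam)*y) <= alpha + 1e-9     # combination stays in M_alpha
    assert x @ A @ y <= np.sqrt(q(x) * q(y)) + 1e-9  # Cauchy-Schwarz for <.,.>_A
```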

## Deciding convexity of univariate function

In general, it is NP-hard to decide the convexity of a function. Is it also NP-hard to decide whether a generic univariate (one-dimensional) function is convex?
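One practical remark: for a black-box univariate function, sampling can cheaply *refute* convexity (a single violated second-difference inequality disproves it) but can never *certify* it everywhere. A heuristic sketch of that refutation test:

```python
import numpy as np

# Heuristic only: second differences on a grid are nonnegative for a convex
# function, so one negative value refutes convexity; passing the test does
# NOT certify convexity off the grid.
def second_differences(f, lo, hi, n=10001):
    xs = np.linspace(lo, hi, n)
    ys = f(xs)
    return ys[:-2] - 2*ys[1:-1] + ys[2:]

assert np.all(second_differences(np.exp, -5, 5) >= 0)       # exp is convex
assert np.any(second_differences(np.sin, 0, 2*np.pi) < 0)   # sin is not
```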

## convex analysis – Property of a function on a cone implies convexity?

Let $$E$$ be a Hilbert space with inner product $$\langle\cdot,\cdot\rangle:E\times E\to\mathbb{R}$$ and let $$C$$ be a subset of $$E$$ such that for any $$x\in C$$ and $$y\in E$$, $$y$$ is in $$C$$ if and only if $$\langle x,y\rangle\geq 0$$. Now $$C$$ is a cone, since if $$x\in C$$ then clearly $$ax$$ is also in $$C$$ for $$a\geq 0$$.

Let $$f:C\to (0,\infty)$$ be a function satisfying the following property:

If for some $$x,y\in C$$ we have $$\langle x,z\rangle\leq \langle y,z\rangle$$ for all $$z\in C$$, then $$f(x)\geq f(y)$$.

I have a feeling that this forces $$f$$ to have some kind of convexity, so here is my question :

Does that guarantee that the level sets $$L_{\leq f(y)}=\{ x\in C:f(x)\leq f(y) \}$$ for all $$y\in C$$ are convex sets? Are they closed?

And what about the level sets $$L_{\geq f(y)}=\{ x\in C:f(x)\geq f(y) \}$$?

Let $$x,y,z\in C$$ be such that $$f(x),f(y)\leq f(z)$$ (respectively $$\geq$$) and, with $$\lambda\in(0,1)$$, let $$u=\lambda x+(1-\lambda)y$$. Suppose toward a contradiction that $$f(u)>f(z)$$ (resp. $$<$$). By the contrapositive of the property above, there is $$v\in C$$ such that $$\langle z, v \rangle>\langle u,v\rangle=\lambda \langle x,v\rangle+(1-\lambda)\langle y,v\rangle$$ (resp. $$<$$). It feels like the only useful information this gives is that either $$\langle z, v \rangle>\langle x,v\rangle$$ or $$\langle z, v \rangle>\langle y,v\rangle$$ (resp. $$<$$), and without loss of generality we can assume it holds for $$x$$. I am not sure whether this can lead to anything useful, but I would be grateful if someone could provide a counterexample or a proof.