convex optimization – Must $Ax$ be positive in Lagrangian relaxation?

In Lagrangian relaxation, the constraints can be relaxed into the objective function.

$\min C^T x$
subject to $Ax = b$

Then we have $C^T x + u^T(Ax - b)$,
where $u^T$ is a positive number; say $u^T = 1$ and $b = 10$.

For example (positive case)

If $Ax = 100$, then the penalty term becomes $1(100 - 10) = 90$, which contributes a positive number to the Lagrangian dual.

(negative case)
However, if $Ax = -30$, then the penalty term becomes $1(-30 - 10) = -40$, which contributes a negative number to the Lagrangian dual.

When the violation is in the negative direction, the penalty term lowers the cost function in the minimization:

$\min C^T x + u^T(Ax - b)$

$\min C^T x - 40$

It seems weird.

So must $Ax$ be a positive number?
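For reference, here is a minimal numerical restatement of the arithmetic above (the function name `relaxed_objective` and the dummy value of $C^T x$ are mine, chosen only to reproduce the two cases in the question):

```python
# Minimal sketch of the arithmetic in the question, with u = 1 and b = 10.
def relaxed_objective(cTx, u, Ax, b):
    """Relaxed objective C^T x + u^T (Ax - b), written for scalar data."""
    return cTx + u * (Ax - b)

cTx = 0.0  # dummy value of C^T x, so only the penalty term is visible
for Ax in (100, -30):
    print(f"Ax = {Ax:4d}  ->  u*(Ax - b) = {relaxed_objective(cTx, 1, Ax, 10):+.0f}")
# Ax = 100 gives +90 and Ax = -30 gives -40, the two cases discussed above.
```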

convex analysis – Any example of known strong convexity constant?

A continuously differentiable function $f(x)$ is strongly convex on $\mathbb{R}^{n}$ if there exists a positive constant $\mu$ such that for any $x, y \in \mathbb{R}^{n}$,
\begin{align}
f(y)\geq f(x) + \langle \nabla f(x), y-x\rangle + \frac{1}{2}\mu \|y-x\|^2.
\end{align}

I’m curious about the strong convexity constant $\mu$. I heard that it is known in some limited cases.

Could anyone tell me what kinds of functions have a known value of $\mu$? For example, is it known for a quadratic function? I searched on Google but I haven’t found any useful source.
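For what it’s worth, here is the standard worked case of a quadratic (not part of the original post): take $f(x) = \frac{1}{2}x^T Q x + b^T x$ with $Q$ symmetric positive definite, so that $\nabla f(x) = Qx + b$. Then
\begin{align}
f(y) - f(x) - \langle \nabla f(x), y-x\rangle = \frac{1}{2}(y-x)^T Q (y-x) \geq \frac{1}{2}\lambda_{\min}(Q)\,\|y-x\|^2,
\end{align}
so the definition above holds with $\mu = \lambda_{\min}(Q)$, the smallest eigenvalue of $Q$; taking $y-x$ along the corresponding eigenvector shows that no larger constant works.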

Quadratic convex function without a lower bound

Does there exist a specific convex (not strict) quadratic function that has no lower bound?

convex analysis – Is this Hessian matrix positive semidefinite?

I’m trying to prove the convexity of a function from its Hessian matrix.

He = $\begin{bmatrix}
\frac{1}{x_1} - \frac{1}{\sum_{i=1}^n x_i} & \frac{-1}{\sum_{i=1}^n x_i} & \cdots & \frac{-1}{\sum_{i=1}^n x_i}\\
\frac{-1}{\sum_{i=1}^n x_i} & \frac{1}{x_2} - \frac{1}{\sum_{i=1}^n x_i} & \cdots & \frac{-1}{\sum_{i=1}^n x_i}\\
\vdots & & \ddots & \vdots \\
\frac{-1}{\sum_{i=1}^n x_i} & \frac{-1}{\sum_{i=1}^n x_i} & \cdots & \frac{1}{x_n} - \frac{1}{\sum_{i=1}^n x_i}
\end{bmatrix}$

My idea was to separate this into two matrices and see if we could get something out of that:

He = $\begin{bmatrix}
\frac{1}{x_1} & 0 & \cdots & 0\\
0 & \frac{1}{x_2} & \cdots & 0\\
\vdots & & \ddots & \vdots\\
0 & 0 & \cdots & \frac{1}{x_n}
\end{bmatrix}$

+

$\begin{bmatrix}
\frac{-1}{\sum_{i=1}^n x_i} & \frac{-1}{\sum_{i=1}^n x_i} & \cdots & \frac{-1}{\sum_{i=1}^n x_i}\\
\frac{-1}{\sum_{i=1}^n x_i} & \frac{-1}{\sum_{i=1}^n x_i} & \cdots & \frac{-1}{\sum_{i=1}^n x_i}\\
\vdots & & \ddots & \vdots\\
\frac{-1}{\sum_{i=1}^n x_i} & \frac{-1}{\sum_{i=1}^n x_i} & \cdots & \frac{-1}{\sum_{i=1}^n x_i}
\end{bmatrix}$

Knowing that $x_1, x_2, \ldots, x_n \in \mathbb{R}_{++}$,

we can see that the first matrix is positive definite and the second matrix consists of the same value in each position, and this value must be negative.

Also, since $\dfrac{1}{x_i} > \dfrac{1}{\sum_{i=1}^n x_i}$ for all $i$, the entries on the diagonal of the original Hessian must all be positive.

However, I don’t know what else I can do with this.
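Not a proof, but a quick numerical sanity check may help. The sketch below (my own code, using NumPy) builds the matrix exactly as written above, $\operatorname{diag}(1/x_i) - \frac{1}{\sum_i x_i}\mathbf{1}\mathbf{1}^T$, and inspects its smallest eigenvalue for random positive $x$:

```python
import numpy as np

rng = np.random.default_rng(0)

def hessian(x):
    """The matrix from the question: diag(1/x_i) minus (1/sum_i x_i) in every entry."""
    n, S = len(x), x.sum()
    return np.diag(1.0 / x) - np.ones((n, n)) / S

# Spot-check positive semidefiniteness on a few random positive vectors.
for _ in range(5):
    x = rng.uniform(0.1, 10.0, size=6)        # x_1, ..., x_n > 0
    smallest = np.linalg.eigvalsh(hessian(x)).min()
    print(f"smallest eigenvalue: {smallest:+.2e}")
# The smallest eigenvalue comes out as ~0 (up to rounding), consistent with the
# matrix being positive semidefinite but singular (it sends x itself to zero).
```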

convex analysis – Proving convexity of $25y^2 - x^2 \geq 9$

I need to show that the set defined by the following inequality is convex:

$$25y^2 - x^2 \geq 9$$

So I came up with the following:

Suppose we have $x = \langle x_1, x_2 \rangle,\; y = \langle y_1, y_2 \rangle \in S \text{ for } 0 \leq t \leq 1$.

Then we get the following:

$$(1-t)x + ty = \langle (1-t)x_1 + ty_1, (1-t)x_2+ty_2 \rangle =\\
25\left((1-t)x_2+ty_2 \right)\left((1-t)x_2+ty_2 \right) - ((1-t)x_1 + ty_1)((1-t)x_1+ty_1)=\\
25((1-t)^2x_2^2 + 2(1-t)tx_2y_2 + t^2y_2^2) \;-\; \ldots \\
((1-t)^2x_1^2 + 2(1-t)tx_1y_1 + t^2y_1^2) = \\
(1-t)^2 (25x_2^2 - x_1^2) + t^2 (25x_2^2 - x_1^2) \;+\; \ldots
$$

I am a bit confused by the exponents and, as a result, by the cross terms, e.g. $2(1-t)tx_2y_2$. Am I doing this correctly or am I making a mistake?

I was also thinking about using a composition of functions, but I am not quite sure.

nonlinear optimization – Prove that a polygon is convex over a circle

The problem

Let $C_A$ (resp. $C_B$) be a circle with center $A = (x_A,0)$ (resp. $B = (x_B,0)$) and radius $r_A$ (resp. $r_B$).

For $k = 0,1,2,3,4$, let $D_k$ be points on $C_A$ with $D_0 = (x_0,0)$.

Let $D_0D_1D_2D_3D_4$ be a polygon (not necessarily convex) such that $(D_kD_{k+1})$ is tangent to $C_B$ for $k = 0,\ldots,3$.

I have to show that $D_0D_1D_2D_3D_4$ is convex (i.e. $D_0 = D_4$) if and only if

$$ d^2 = r_A^2 + r_B^2 - r_B \sqrt{ r_B^2 + 4r_A^2 } $$

where $d$ is the distance between $A$ and $B$. ($d$ can be seen as the smallest distance between $A$ and $B$ for which the polygon is convex.)


My work

I proved this implication: if $D_0D_1D_2D_3D_4$ is convex (i.e. $D_0 = D_4$), then

$$ d^2 = r_A^2 + r_B^2 - r_B \sqrt{ r_B^2 + 4r_A^2 } $$

But I don’t know how to prove the converse: i.e. if $ d^2 = r_A^2 + r_B^2 - r_B \sqrt{ r_B^2 + 4r_A^2 } $, then $D_0D_1D_2D_3D_4$ is convex (i.e. $D_0 = D_4$).

I tried to prove this by contradiction. I suppose $ d^2 = r_A^2 + r_B^2 - r_B \sqrt{ r_B^2 + 4r_A^2 } $ holds. I suppose $D_0 \neq D_4$ and I have to find a contradiction. Even if we suppose $D_0 \neq D_4$, we can suppose $D_4 = D_1$. Let $D_0 = (x_0,y_0) = (x_0,0)$, $D_1 = (x_1,y_1)$ and $D_4 = (x_4,y_4)$. We start with $x_1 = x_4$ and $y_1 = y_4$ and we should get $x_4 = x_0$ and $y_4 = y_0 = 0$ with the relation

$$ d^2 = r_A^2 + r_B^2 - r_B \sqrt{ r_B^2 + 4r_A^2 } $$

to get a contradiction. But I don’t know if this is the correct way to prove it…

Convex hull of the union of scaled polytopes creates few new edges

Let $P$ and $Q$ be two polygons in $\mathbb{R}^2$. Given $a > 0$, denote by $aP$ its image under the dilation by $a$ centered around the origin (i.e. the polygon obtained by replacing each vertex $(p_0,p_1)$ by the vertex $(ap_0, ap_1)$). Let $C(P,Q)$ denote the convex hull of the union $P \cup Q$, and let $N(P,Q)$ denote the number of new edges created (i.e. edges in $C(P, Q)$ which do not appear in $P$ or $Q$). Let $P+Q$ denote the Minkowski sum of the two polygons.

Based on some empirical results, I was led to the below heuristic (if $X$ is a Gaussian centered at $0$, then $|X|$ follows the half-normal distribution).

Heuristic: Let $a,b$ be chosen from the half-normal distribution, and $P, Q$ be two fixed convex polygons in $\mathbb{R}^2$ that are “sufficiently generic”. Then:
$$ \mathbb{E}(N(aP, bQ))<10$$

Unfortunately it turns out that this does not hold for all choices of polygons $P$ and $Q$, and finding a counterexample is not difficult (see below for details). Above, $10$ is an arbitrarily chosen constant; the key point is that it does not depend on the number of vertices of the polygons $P$ and $Q$. Let a Gaussian random polygon $P$ be the convex hull of $n$ random, independent points in $\mathbb{R}^2$ sampled according to the standard normal distribution, for some $n$; here $n$ is the “size” of the polygon. So this leads me to my real question:

Question: What are some conditions which ensure that the above inequality holds? These conditions should be “generic” in some sense, and should be satisfied (with high probability) when $P$ and $Q$ are Gaussian random polygons of fixed size.

In particular, an answer to the above question should rigorously prove that the heuristic holds (with high probability) when $P$ and $Q$ are Gaussian random polygons. The next question is a more general version, with Minkowski sums.

Question 2: Let $a_1, \cdots, a_m, b_1, \cdots, b_n$ be chosen from a half-normal distribution, and $P_1, \cdots, P_m, Q_1, \cdots, Q_n$ be polygons in $\mathbb{R}^2$. Consider the following statement:
$$ \mathbb{E}(N(a_1P_1+\cdots+a_mP_m, b_1Q_1+\cdots+b_nQ_n)) < 10$$
What are some conditions on these polygons which ensure that the above inequality holds? These conditions should be “generic” in some sense, and should be satisfied (with high probability) when the $P_i$ and the $Q_j$ are Gaussian random polygons of fixed sizes.

If the phrasing of the above questions is too vague, I’d be happy with a proof that the above inequalities hold, with high probability, when the polygons are Gaussian random polygons. One special case I’m interested in is when the size of the Gaussian polygons is 2 (i.e. they are line segments).

The counterexample for Q1 can be constructed as follows. Suppose $O P_1 P_2 \cdots P_n$ is a convex polygon, with the vertices in clockwise order. Let $P_i P_{i+2}$ intersect $O P_{i+1}$ at the point $Q_{i+1}$. The points should be chosen carefully so that the ratio $\frac{OQ_{i+1}}{OP_{i+1}} < \epsilon$, where $\epsilon>0$ is small (this can be done inductively). Let $P$ be the polygon $OP_2 P_4 \ldots$ and $Q$ the polygon $OP_1 P_3 \ldots$. With this choice, it is easy to see that $\mathbb{E}(N(aP, bQ))$ grows linearly with $n$.
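To make the heuristic concrete, here is a small Monte Carlo sketch (my own code, not from the question) that estimates $\mathbb{E}(N(aP, bQ))$ for Gaussian random polygons using `scipy.spatial.ConvexHull`; the helper names `hull_edges` and `count_new_edges` are ad hoc:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

def hull_edges(points, offset=0):
    """Edges of the convex hull of `points`, as frozensets of global point indices."""
    verts = ConvexHull(points).vertices          # counterclockwise order for 2-D hulls
    return {frozenset((int(verts[i]) + offset, int(verts[(i + 1) % len(verts)]) + offset))
            for i in range(len(verts))}

def count_new_edges(P, Q):
    """N(P, Q): edges of conv(P u Q) that are edges of neither P nor Q."""
    edges_P = hull_edges(P)
    edges_Q = hull_edges(Q, offset=len(P))
    edges_C = hull_edges(np.vstack([P, Q]))
    return len(edges_C - (edges_P | edges_Q))

# Monte Carlo estimate of E[N(aP, bQ)] for Gaussian random polygons of size n.
n, trials = 30, 200
samples = []
for _ in range(trials):
    P = rng.standard_normal((n, 2))              # Gaussian random polygon = hull of these points
    Q = rng.standard_normal((n, 2))
    a, b = np.abs(rng.standard_normal(2))        # half-normal scalings
    samples.append(count_new_edges(a * P, b * Q))
print("estimated E[N(aP, bQ)] over", trials, "trials:", np.mean(samples))
```

Comparing edges by point indices rather than coordinates avoids floating-point matching issues; with Gaussian points, coincident vertices have probability zero.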

discrete geometry – Cutting Convex Regions into Equal Diameter and Equal Least Width Pieces

Diameter of a Convex Region is the greatest distance between any pair of points in the region.
Least Width of a 2D convex region can be defined as the least distance between any pair of parallel lines that touch the region.

  1. Given a positive integer n, can every 2D convex region C be divided into n convex pieces, all of the same diameter? The pieces ought to be non-degenerate and have finite area.

  2. If the answer to 1 is yes, how does one minimize the common diameter of the n pieces?

  3. Consider dividing C into n convex pieces such that the maximum diameter among the pieces is a minimum. Will such a partition necessarily require all pieces to have the same diameter? This looks unlikely, but I have no counterexample.

  4. For any n, can any C be divided into n convex non-degenerate pieces, all of the same least width?

  5. If 4 has a “yes” answer, how does one maximize the common least width of the n pieces?

  6. If the least width among the n convex pieces into which C is cut is to be maximized, will such a partition necessarily be one where all pieces have the same least width? Again, I have no counterexample.

These questions have obvious analogs in higher dimensions and other geometries.

reference request – If $C_1\subseteq C_2$ are two closed convex cones that are pointed with $\partial C_1\subseteq \partial C_2$, then is $C_1=C_2$?

Let $C_1$ and $C_2$ be two proper, full dimensional, closed convex cones in $\mathbb{R}^n$ that are pointed. Suppose that $C_1\subseteq C_2$ and that the boundary of $C_1$ is contained in the boundary of $C_2$. Then is $C_1=C_2$? Any references for a result of this form would be welcome. I suspect this to be true, and I have some rough ideas about how to prove it, but my arguments seem messy and I am worried that my intuition from the low-dimensional and finitely generated cases goes wrong in the generality I am considering.

By pointed I mean that $(-C)\cap C=\{0\}$, and by full dimensional I mean that $C$ spans $\mathbb{R}^n$. I am particularly interested in the case where $C_1,C_2$ are not necessarily finitely generated, though partial results in the finitely generated or finitely generated rational case would be welcome.

discrete geometry – On Some Centers of Convex Regions Based On Partitions

These questions are inspired by Yaglom and Boltyanskii’s ‘Convex Figures’.

Winternitz Theorem: If a 2D convex figure is divided into 2 parts by a line l that passes through its center of gravity, the ratio of the areas of the two parts always lies between the bounds 4/5 and 5/4.

Y and B also prove that for any triangle, there is no point O other than its center of gravity (the centroid) for which the ratio of the partial areas into which the triangle is subdivided by lines through O can be enclosed within narrower bounds.

Question 1: For a general convex 2D region, is the centre of mass still the point such that the areas into which the region is divided by lines through that point are closest to each other? If the point we seek is not necessarily the centre of mass (which seems unlikely), then it could be called the ‘area partition center’ of the region, and finding this center for a given general region could be an algorithmic question.

Y and B also prove:
Let a bounded curve of length L that may consist of separate pieces be given in the plane. Then there is a point O in the plane so that each line through O divides the curve into 2 parts each having a length of not less than L/3.

Question 2: If L is the boundary of a single convex region, there must be a point O’ in its interior such that any line through O’ divides the boundary into 2 portions whose lengths are in a ratio closer to 1:1 than 1:3. What is the best bound on this ratio for convex regions?

Let us define the perimeter partition center of a 2D convex region as that point P in its interior such that the 2 portions into which any line through P divides the outer boundary are guaranteed to be closest to each other in length.

Example: For an isosceles triangle with a very narrow base, this perimeter partition center is close to the midpoint of the bisector of its apex angle, and so is clearly different from the centroid.

Question 3: Given a general convex region (even just a triangle), find its perimeter partition center.

Note: These questions have obvious 3D analogs with volume and surface area replacing area and perimeter.
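Returning to Question 3 in the plane: here is a brute-force sketch (my own code, with ad hoc names; a sampling search, not an exact algorithm) for the narrow isosceles triangle of the example above. For a candidate interior point O it sweeps lines through O, computes the two boundary-arc lengths they cut off, and records the worst split ratio; it then searches along the symmetry axis, where the example suggests the center lies, for the point with the best guaranteed ratio, and compares it with the centroid.

```python
import numpy as np

def arclength_table(V):
    """Cumulative arc length at each vertex of the closed polygon V, plus edge lengths."""
    edges = np.roll(V, -1, axis=0) - V
    lengths = np.linalg.norm(edges, axis=1)
    return np.concatenate([[0.0], np.cumsum(lengths)]), lengths

def exit_arclength(V, cum, lengths, A, O):
    """Arc-length position where the line through boundary point A and interior
    point O meets the boundary again (the intersection other than A)."""
    d = O - A
    best_u, best_s = None, None
    for i in range(len(V)):
        P, e = V[i], V[(i + 1) % len(V)] - V[i]
        M = np.column_stack([d, -e])             # solve A + u*d = P + v*e
        if abs(np.linalg.det(M)) < 1e-12:
            continue
        u, v = np.linalg.solve(M, P - A)
        if -1e-9 <= v <= 1 + 1e-9 and u > 0.5:   # u ~ 0 is the intersection at A itself
            if best_u is None or u < best_u:
                best_u, best_s = u, cum[i] + np.clip(v, 0.0, 1.0) * lengths[i]
    return best_s

def worst_split_ratio(V, O, samples=300):
    """Worst perimeter-split ratio (shorter arc / longer arc) over lines through O."""
    cum, lengths = arclength_table(V)
    L, worst = cum[-1], 1.0
    for s in np.linspace(0.0, L, samples, endpoint=False):
        i = np.searchsorted(cum, s, side="right") - 1
        A = V[i] + (s - cum[i]) / lengths[i] * (V[(i + 1) % len(V)] - V[i])
        t = exit_arclength(V, cum, lengths, A, O)
        d = min(abs(t - s), L - abs(t - s))
        worst = min(worst, d / (L - d))
    return worst

# Narrow isosceles triangle: apex at the origin, short base far to the right.
V = np.array([[0.0, 0.0], [10.0, -0.5], [10.0, 0.5]])
centroid = V.mean(axis=0)
print("centroid ratio:", round(worst_split_ratio(V, centroid), 3))

# Search along the apex-angle bisector (the x-axis, the symmetry axis).
candidates = [np.array([x, 0.0]) for x in np.linspace(0.5, 9.5, 37)]
best = max(candidates, key=lambda O: worst_split_ratio(V, O))
print("best point found:", best, "ratio:", round(worst_split_ratio(V, best), 3))
```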