## reference request – About the maxima of injective holomorphic maps in $\mathbb{C}^n$


## real analysis – Cauchy sequence with $\{x_n : n \in \mathbb{N}\}$ not closed converges

Suppose $(x_n)_{n \in \mathbb{N}}$ is a Cauchy sequence in a metric space $X$ and $A = \{x_n : n \in \mathbb{N}\}$ is not closed. Prove that there is $x \in X$ such that $x_n \longrightarrow x$.

Since $A$ is not closed:
$$\forall y \in A,\ \exists \varepsilon > 0 \text{ s.t. } S(y, \varepsilon) \subset A$$
and since $(x_n)$ is Cauchy:
$$\forall \varepsilon > 0,\ \exists n_0 \in \mathbb{N} \text{ s.t. } \forall n, m \geq n_0:\ \rho(x_n, x_m) < \varepsilon$$
But I cannot see a way to connect these two facts to prove convergence. Any initial hints would be much appreciated.

## gr.group theory – a finite generating set for the group $SL_2(\mathbb{F}_2[t, t^{-1}])$

I guess you knew this already (the paper you mention deals with finite presentability, a harder problem).

Denoting by $(E_{ij})$ the standard basis of the matrix space, and writing $e_{ij}(t) = I + tE_{ij}$ and $d_{ij}(a) = aE_{ii} + a^{-1}E_{jj}$, the group $\mathrm{SL}_2(\mathbf{F}_p[t,t^{-1}])$ for $p$ prime is generated by $\{e_{12}(1), e_{12}(t), e_{21}(1), e_{21}(t), d_{12}(t)\}$.

Indeed, since $\mathbf{F}_p[t,t^{-1}]$ is a Euclidean ring, $\mathrm{SL}_2(\mathbf{F}_p[t,t^{-1}])$ is generated by elementary matrices. Using conjugation by $d_{12}(t)$ and the four elementary generators given, one obtains all the remaining elementary matrices.

(It is not hard to check that $e_{21}(t)$ is redundant in this generating set, and also that no two of these 5 elements generate the group.)

## nt.number theory – Is it true that $\{x^3 + 2y^3 + 3z^3 : x, y, z \in \mathbb{Z}\} = \mathbb{Z}$?

It is easy to see that no integer congruent to $4$ or $-4$ modulo $9$ can be written as a sum of three integer cubes. In view of this and of Question 331163, I proposed the following conjecture in March 2019.

Conjecture. Every integer $n$ can be written as $x^3 + 2y^3 + 3z^3$ with $x, y, z$ integers; that is,
$$\{x^3 + 2y^3 + 3z^3 : x, y, z \in \mathbb{Z}\} = \mathbb{Z}.$$

This conjecture has an interesting application: under the conjecture, my result on Hilbert's Tenth Problem implies that there is no effective algorithm to decide, for a general polynomial $P(x_1, \ldots, x_{33})$ with integer coefficients, whether the Diophantine equation
$$P(x_1^3, \ldots, x_{33}^3) = 0$$
has integer solutions.

Recently, my PhD student Chen Wang checked my conjecture extensively. He found that
$$\{0, \ldots, 5000\} \setminus \{x^3 + 2y^3 + 3z^3 : x, y, z \in \{-30000, \ldots, 30000\}\}$$
contains only four numbers: $36, 288, 2304, 4500$.
Note that
$$288 = 2^3 \times 36, \quad 2304 = 4^3 \times 36, \quad 4500 = 5^3 \times 36.$$
So, to finish the verification of the conjecture for all $n = 0, \ldots, 5000$, it remains to find $x, y, z \in \mathbb{Z}$ with $x^3 + 2y^3 + 3z^3 = 36$.

QUESTION. Are there integers $x, y, z$ satisfying $x^3 + 2y^3 + 3z^3 = 36$?
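Searches of this kind are easy to script. Below is a minimal brute-force sketch in Python (my own illustration, not Chen Wang's actual code; the function name `find_representation` and the bound `B` are hypothetical). Note that the box here is far smaller than the $\{-30000, \ldots, 30000\}$ range used in the verification above, so it cannot settle the hard case $n = 36$.

```python
# Sketch: for each (y, z) in a box of half-width B, test whether
# n - 2*y^3 - 3*z^3 is a perfect cube.
def find_representation(n, B):
    """Return (x, y, z) with x^3 + 2y^3 + 3z^3 = n and |y|, |z| <= B,
    or None if no such triple exists within the box."""
    for y in range(-B, B + 1):
        for z in range(-B, B + 1):
            r = n - 2 * y**3 - 3 * z**3
            # Integer cube root via floats, with a guard against rounding error.
            x = round(abs(r) ** (1 / 3)) * (1 if r >= 0 else -1)
            for c in (x - 1, x, x + 1):
                if c**3 == r:
                    return (c, y, z)
    return None

print(find_representation(5, 10))   # finds some small representation of 5
print(find_representation(36, 20))  # None: 36 resists such small boxes
```

The second call returns `None` because any triple with $|y|, |z| \le 20$ forces $|x| \le 34$, which lies inside the exhaustively searched range reported above.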

## nt.number theory – How many $\mathbb{Q}$-bases can be constructed from the vector set $\log(1), \cdots, \log(n)$?

How many $\mathbb{Q}$-bases can be constructed from the set of vectors $\log(1), \cdots, \log(n)$?

Data for $n = 1, 2, 3, \cdots$ computed with Sage:

$$1, 1, 1, 2, 2, 5, 5, 7, 11, 25, 25, 38, 38, 84, 150, 178, 178, 235, 235$$

Context:
The real numbers $\log(p_1), \cdots, \log(p_r)$, where $p_i$ is the $i$-th prime, are known to be linearly independent over the rationals $\mathbb{Q}$.

Example:

[{}]     -> For n = 1 we have a(1) = 1; we count {} as the basis of V_0 = {0}
[{2}]    -> For n = 2 we have a(2) = 1, the basis {2}
[{2, 3}] -> For n = 3 we have a(3) = 1, the basis {2, 3}
[{2, 3}, {3, 4}] -> For n = 4 we have a(4) = 2, the bases {2, 3} and {3, 4}
[{2, 3, 5}, {3, 4, 5}] -> a(5) = 2
[{2, 3, 5}, {2, 5, 6}, {3, 4, 5}, {3, 5, 6}, {4, 5, 6}] -> a(6) = 5
[{2, 3, 5, 7}, {2, 5, 6, 7}, {3, 4, 5, 7}, {3, 5, 6, 7}, {4, 5, 6, 7}] -> a(7) = 5
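The counts above can be reproduced with a short script (a Python sketch rather than the original Sage code; the names `a`, `rank`, and `exponent_vector` are my own). Since $\log k = \sum_p e_p(k) \log p$ and the logarithms of the primes are linearly independent over $\mathbb{Q}$, each $\log k$ may be identified with the exponent vector of $k$; then $a(n)$ counts the size-$\pi(n)$ subsets of $\{2, \ldots, n\}$ whose exponent vectors are linearly independent (with $\log 1 = 0$ contributing only the empty basis for $n = 1$).

```python
from fractions import Fraction
from itertools import combinations

def primes_upto(n):
    return [p for p in range(2, n + 1)
            if all(p % d for d in range(2, int(p**0.5) + 1))]

def exponent_vector(k, primes):
    # log k = sum_p e_p(k) log p, so k is identified with (e_p(k))_p.
    v = []
    for p in primes:
        e = 0
        while k % p == 0:
            k //= p
            e += 1
        v.append(e)
    return v

def rank(rows):
    # Exact Gaussian elimination over Q.
    m = [[Fraction(x) for x in row] for row in rows]
    rk, col, ncols = 0, 0, len(m[0])
    while rk < len(m) and col < ncols:
        piv = next((i for i in range(rk, len(m)) if m[i][col]), None)
        if piv is None:
            col += 1
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(rk + 1, len(m)):
            f = m[i][col] / m[rk][col]
            m[i] = [u - f * w for u, w in zip(m[i], m[rk])]
        rk, col = rk + 1, col + 1
    return rk

def a(n):
    primes = primes_upto(n)
    r = len(primes)
    if r == 0:
        return 1  # n = 1: the empty set is the basis of V_0 = {0}
    vecs = {k: exponent_vector(k, primes) for k in range(2, n + 1)}
    # A size-r subset is a basis iff its exponent vectors are independent.
    return sum(rank([vecs[k] for k in S]) == r
               for S in combinations(range(2, n + 1), r))

print([a(n) for n in range(1, 11)])
```

The output matches the first ten terms of the data above.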


## linear algebra – Generators for the semigroup $\mathrm{SL}(n, \mathbb{N})$

For $2 \times 2$ matrices we have the following result.

Any matrix in $\mathrm{SL}(2, \mathbb{Z})$ with non-negative entries can be obtained from $\mathrm{Id}_2$ by repeatedly adding one column to another.

Proof: It suffices to prove that if $\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \mathrm{SL}(2, \mathbb{Z}) \setminus \{\mathrm{Id}_2\}$ has non-negative entries,
then
$$\begin{pmatrix} a-b & b \\ c-d & d \end{pmatrix} \text{ or } \begin{pmatrix} a & b-a \\ c & d-c \end{pmatrix}$$
has non-negative entries too. After this one can finish by induction. To prove it, suppose $a$ is the largest entry of the matrix. If $a = 1$ then we get the matrices
$$\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}, \quad \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \tag{$\star$}$$
and we are done. Otherwise $a > 1$, whence
$$d - c \leq d - bc/a = (ad - bc)/a = 1/a,$$
from which $d - c \leq 0$ (it is an integer less than $1$), and we are back in the first case.
The cases where the maximum entry is not $a$ are handled similarly. $\blacksquare$

This can be restated by saying that the elementary matrices in $(\star)$ generate the semigroup $\mathrm{SL}(2, \mathbb{N})$.
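The induction in the proof is effectively an algorithm: subtract one column from the other while the entries stay non-negative. Here is a minimal sketch in Python (my own illustration; `decompose` and `mul` are hypothetical names) that factors a matrix of $\mathrm{SL}(2, \mathbb{N})$ into the two elementary matrices of $(\star)$:

```python
L = ((1, 0), (1, 1))  # adds column 2 to column 1
R = ((1, 1), (0, 1))  # adds column 1 to column 2

def mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def decompose(M):
    """Return elementary matrices E_1, ..., E_k with M = E_1 * ... * E_k."""
    (a, b), (c, d) = M
    assert a * d - b * c == 1 and min(a, b, c, d) >= 0
    word = []
    while (a, b, c, d) != (1, 0, 0, 1):
        if a >= b and c >= d:      # subtract column 2 from column 1
            a, c = a - b, c - d
            word.append(L)
        else:                      # subtract column 1 from column 2
            b, d = b - a, d - c
            word.append(R)
        assert min(a, b, c, d) >= 0  # the key step of the proof
    word.reverse()                   # unrolling M = M' * E gives product order
    return word

M = ((5, 2), (2, 1))  # in SL(2, N): det = 5*1 - 2*2 = 1
word = decompose(M)
P = ((1, 0), (0, 1))
for E in word:
    P = mul(P, E)
assert P == M
print(len(word), "elementary factors")
```

For this $M$ the algorithm produces the word $R \cdot R \cdot L \cdot L$, and multiplying it back out recovers $M$.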

My question is:

Is it true that $\mathrm{SL}(n, \mathbb{N})$ can be generated by elementary matrices, similarly to what happens in the case $n = 2$?

I guess this has already been discussed in the literature, so a good reference would suffice.

## reference request – Involutions in $\mathbb{F}_p[[x]]$

If $p \neq 2$, an involution of $\mathbb{F}_p[[x]]$ with zero constant term must have $\pm 1$ as the coefficient of $x$. If this leading coefficient is $1$, we can see inductively that every higher coefficient is zero (writing $f(x) = x + a x^n + \dots$, we get $f(f(x)) = x + 2a x^n + \dots$, so $a = 0$). Of course, this trivial involution can be lifted.

On the other hand, if the leading coefficient is $-1$, then we can consider $x f(x)$, a power series with leading term $-x^2$. Look at the unique continuous algebra homomorphism $\mathbb{F}_p[[x]] \to \mathbb{F}_p[[x]]$ sending $x$ to $f(x)$. Under this automorphism, $-x f(x)$ is fixed, so $\sqrt{-x f(x)}$ (unique up to $\pm 1$) is sent either to itself or to minus itself. Since its leading term is nonzero, it is sent to minus itself. Up to reparameterization by the invertible power series $\sqrt{-x f(x)}$, this is the standard involution $x \mapsto -x$. We can lift $\sqrt{-x f(x)}$ to the integers, where the power series of its inverse function will also be integral, and use these to lift the involution.

In characteristic two, the situation is worse, since there are more involutions. For example, we can take $f(x) = \left(x^n/(1 + x^n)\right)^{1/n}$ for any odd $n$. Some of these can be lifted (for example, if $n = 1$ we can take $f(x) = -x/(1 - x)$), but I do not know whether all of them can be.
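A quick computational sanity check of the $n = 1$ case (my own sketch, not part of the answer): in characteristic $2$ the series $f(x) = x/(1+x)$ should satisfy $f(f(x)) = x$. Working with truncated power series over $\mathbb{F}_2$, represented as coefficient lists:

```python
N = 20  # truncation order; all series are lists of N coefficients mod 2

def mul(f, g):
    h = [0] * N
    for i, a in enumerate(f):
        if a:
            for j, b in enumerate(g):
                if i + j < N:
                    h[i + j] ^= a & b  # addition in F_2 is XOR
    return h

def inverse(f):
    # Multiplicative inverse when f[0] = 1, solved coefficient by coefficient.
    g = [1] + [0] * (N - 1)
    for k in range(1, N):
        g[k] = sum(f[i] * g[k - i] for i in range(1, k + 1)) % 2
    return g

def compose(f, g):
    # f(g(x)); valid up to order N because g has zero constant term.
    assert g[0] == 0
    h, p = [0] * N, [1] + [0] * (N - 1)  # p runs through the powers of g
    for a in f:
        if a:
            h = [u ^ v for u, v in zip(h, p)]
        p = mul(p, g)
    return h

x = [0, 1] + [0] * (N - 2)
f = mul(x, inverse([1, 1] + [0] * (N - 2)))  # f = x/(1+x) = x + x^2 + x^3 + ...
assert compose(f, f) == x                    # f is an involution mod x^N
print("f(f(x)) = x holds up to order", N)
```

Note that $-x/(1-x)$ and $x/(1+x)$ coincide in characteristic $2$, so this is exactly the liftable example mentioned above.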

## pr.probability – Taylor's theorem for a composition with $\min: \mathbb{R}^2 \to \mathbb{R}$ and Lebesgue differentiation almost everywhere

Let

• $f \in C^3(\mathbb{R})$ with $f > 0$
• $g := \ln f$ (and assume $g'$ is Lipschitz continuous)
• $n \in \mathbb{N}$, $s(x, y) := \sum_{i=1}^n \left(g(y_i) - g(x_i)\right)$ and $h(x, y) := \min\left(1, e^{s(x, y)}\right)$ for $x, y \in \mathbb{R}^n$
• $x \in \mathbb{R}^n$ and let $Y$ be an $\mathbb{R}^n$-valued normally distributed random variable on a probability space $(\Omega, \mathcal{A}, \operatorname{P})$ with mean vector $x$ and covariance matrix $\sigma I_n$ for some $\sigma > 0$ ($I_n$ denoting the $n \times n$ identity matrix)

I want to make the following argument rigorous: by Taylor's theorem, $$\begin{split}h(x,Y)-h(x,(x_1,Y_2,\ldots,Y_n))&=\frac{\partial h}{\partial y_1}(x,(x_1,Y_2,\ldots,Y_n))(Y_1-x_1)\\&\quad+\frac12\frac{\partial^2h}{\partial y_1^2}(x,(Z_1,Y_2,\ldots,Y_n))(Y_1-x_1)^2\end{split}\tag1$$ for some real-valued random variable $Z_1$ with $Z_1\in[\min(x_1,Y_1),\max(x_1,Y_1)]$. Thus, $$\begin{split}\left.\operatorname E\left[h(x,(y_1,Y_2,\ldots,Y_n))\right]\right|_{y_1=Y_1}&=\operatorname E\left[\min\left(1,e^A\right)\right]+g'(x_1)\operatorname E\left[1_{\{A<0\}}e^A\right](Y_1-x_1)\\&\quad+\frac12\left(g''(Z_1)+\left|g'(Z_1)\right|^2\right)\left.\operatorname E\left[1_{\{B<0\}}e^B\right]\right|_{z_1=Z_1}(Y_1-x_1)^2.\end{split}\tag2$$ Above I wrote $A:=\sum_{i=2}^n(g(Y_i)-g(x_i))$ and $B:=g(z_1)-g(x_1)+\sum_{i=2}^n(g(Y_i)-g(x_i))$ to make the equation more readable (they have to be substituted wherever they occur).

Question 1: There are two issues. The first is that $(x,y)\mapsto\min(x,y)$ is partially differentiable in both arguments except on the diagonal $\Delta_2:=\left\{(x,y)\in\mathbb{R}^2:x=y\right\}$. Can we conclude the existence of $Z_1$ anyway? Note that $$\frac{\partial h}{\partial y_1}(x,y)=\begin{cases}g'(y_1)e^{s(x,y)}&\text{if }s(x,y)<0\\0&\text{if }s(x,y)>0\end{cases}\tag3$$ and $$\frac{\partial^2h}{\partial y_1^2}(x,y)=\begin{cases}\left(g''(y_1)+|g'(y_1)|^2\right)e^{s(x,y)}&\text{if }s(x,y)<0\\0&\text{if }s(x,y)>0\end{cases}\tag4$$ for all $y\in\mathbb{R}^n$.

Question 2: The second issue is the case $s(x,y)=0$. For $(3)$ to hold, we have to show that the probability of the corresponding event is $0$ (this seems to be related to the question of whether the set on which the function in question is not differentiable has Lebesgue measure $0$; and it is clear that $\Delta_2$ has Lebesgue measure $0$). How can we do that?

While it is clear that $h$ is partially differentiable with respect to the second variable except on a countable set, it is not clear to me why $h$ should even be twice differentiable with respect to the second variable except on a set of Lebesgue measure $0$ (at least) (see this related question).

## Lie algebra of $\left(\begin{array}{c c} a & b \\ & a^2 \end{array}\right)$ in $GL_2(\mathbb{R})$

I am working on the following question.

Let $G$ be the set of invertible real matrices of the form $\left[\begin{array}{c c} a & b \\ & a^2 \end{array}\right]$. Determine the Lie algebra $L$ of $G$, and compute the bracket on $L$.

I'm familiar with how to derive the Lie algebra of a linear group like $U_n$, $SU_n$, etc., but I'm not sure what to do in a more explicit case like this.
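Not an answer, but a numerical sanity check of the natural candidate (my own sketch; all names are hypothetical). Differentiating a path $t \mapsto \left[\begin{array}{c c} a(t) & b(t) \\ & a(t)^2 \end{array}\right]$ at the identity (where $a(0)=1$, $b(0)=0$) suggests $L = \left\{\left[\begin{array}{c c} t & s \\ & 2t \end{array}\right]\right\}$ with bracket $[X, Y] = (s_1 t_2 - s_2 t_1) E_{12}$. The script checks that $\exp$ maps $L$ into $G$ and that the bracket stays in $L$:

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(X, terms=40):
    # Truncated power series for the matrix exponential; ample here.
    S = [[1.0, 0.0], [0.0, 1.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        P = [[v / k for v in row] for row in mul(P, X)]
        S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return S

def elem(t, s):
    # Candidate Lie algebra element [[t, s], [0, 2t]].
    return [[t, s], [0.0, 2 * t]]

X, Y = elem(0.3, -1.2), elem(-0.7, 0.5)

E = expm(X)
assert abs(E[1][1] - E[0][0] ** 2) < 1e-10  # exp(X) has the form [[a, b], [0, a^2]]
assert abs(E[1][0]) < 1e-12

bracket = [[u - v for u, v in zip(r1, r2)]
           for r1, r2 in zip(mul(X, Y), mul(Y, X))]
expected = elem(0.0, (-1.2) * (-0.7) - 0.5 * 0.3)  # s1*t2 - s2*t1
assert all(abs(bracket[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
print("exp maps L into G, and [X, Y] lies in L")
```

The check that $\exp(X)$ has lower-right entry equal to the square of its upper-left entry reflects the relation $e^{2t} = (e^t)^2$, which is exactly why the candidate $L$ exponentiates into $G$.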

## Self-similar lattices in $\mathbb{R}^d$

Let $\Lambda \subset \mathbb{R}^d$ be a discrete subgroup; up to decreasing $d$ we assume it is of the form $A\mathbb{Z}^d$ with $A \in GL(d)$. Up to dilation we assume that the shortest vector in $\Lambda \setminus \{0\}$ has length $1$.

I would like to call such a $\Lambda$ "self-similar" if for each $p \in \Lambda \setminus \{0\}$ one can complete $p$ to a sublattice $\Lambda' \subset \Lambda$ of the form $\Lambda' = \lambda R \Lambda$ with $\lambda = |p|$ and $R \in O(d)$ (that is, $\Lambda'$ is a rotated, dilated copy of $\Lambda$ such that $p$ is one of the shortest nonzero vectors in $\Lambda'$).

My motivation was that the lattices $\mathbb{Z}^2$ and $A_2$ (the equilateral triangular lattice) in $d = 2$ are self-similar, and I wondered how rare this property is: what are other examples of self-similar lattices in other dimensions?

Pointers to possibly related concepts are very welcome.