## general topology – How is $\overline{C_\epsilon}$ convex and $C := \bigcap_{\epsilon > 0} \overline{C_\epsilon} \neq \emptyset$ in a CAT$(0)$ space?

Let $(X, d)$ be a CAT$(0)$ space, $\{x_n\} \subset X$ be bounded, and $K \subset X$ be closed and convex. Define $\varphi \colon X \longrightarrow \mathbb{R}$ by $\varphi(x) = \limsup\limits_{n \to \infty} d(x, x_n)$ for each $x \in X$. Then there is a unique point $u \in K$ such that $\varphi(u) = \inf\limits_{x \in K} \varphi(x)$.

Proof

Let $r = \inf\limits_{x \in K} \varphi(x)$ and $\epsilon > 0$. Then there is $x_0 \in K$ such that $\varphi(x_0) < r + \epsilon$. This implies that there is $N \in \mathbb{N}$ such that
$$x_0 \in \bigcup_{n \geq N} \left( \bigcap_{k \geq n} B(x_k, r + \epsilon) \cap K \right) \subset \bigcup_{n \geq 1} \left( \bigcap_{k \geq n} B(x_k, r + \epsilon) \cap K \right).$$
Define $C_\epsilon := \bigcup_{n \geq 1} \left( \bigcap_{k \geq n} B(x_k, r + \epsilon) \cap K \right)$. It is clear that $C_\epsilon$ is convex.

Question: How is $\overline{C_\epsilon}$ convex, and why is $C := \bigcap_{\epsilon > 0} \overline{C_\epsilon} \neq \emptyset$?

Details: The article I am currently reading is Dhompongsa et al. To show that $\overline{C_\epsilon}$ is convex, Dhompongsa et al. refer to Proposition 1.4 (1). I just have a hard time understanding it. Any help, please?
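For what it is worth, here is a standard argument that $\overline{C_\epsilon}$ is convex; I do not have the cited Proposition 1.4 (1) in front of me, so this may differ from the authors' route. It uses only convexity of the metric in CAT$(0)$ spaces:

```latex
\text{Let } x_j \to x,\; y_j \to y \text{ with } x_j, y_j \in C_\epsilon.
\text{ For } t \in [0,1] \text{ we have } (1-t)x_j \oplus t\,y_j \in C_\epsilon
\text{ by convexity of } C_\epsilon, \text{ and}
\\[4pt]
d\bigl((1-t)x_j \oplus t\,y_j,\;(1-t)x \oplus t\,y\bigr)
  \;\le\; (1-t)\, d(x_j, x) + t\, d(y_j, y) \;\longrightarrow\; 0.
```

Hence $(1-t)x \oplus t\,y$ is a limit of points of $C_\epsilon$ and lies in $\overline{C_\epsilon}$, so $\overline{C_\epsilon}$ is convex. For $C \neq \emptyset$, note that the sets $\overline{C_\epsilon}$ are nonempty (each contains some $x_0$ as above), bounded, closed, convex, and nested as $\epsilon$ decreases; nonemptiness of such a nested intersection is a standard fact for complete CAT$(0)$ spaces.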

## finite automata – ε-NFA to DFA – initial state with only epsilon transitions

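On the heading's question: in the subset construction, the DFA's initial state is the ε-closure of the ε-NFA's start state, even when that start state has only ε-transitions and nothing else. A minimal sketch (state names and table encoding are hypothetical, just for illustration):

```python
def epsilon_closure(states, eps):
    """All NFA states reachable from `states` via epsilon moves alone."""
    closure, stack = set(states), list(states)
    while stack:
        s = stack.pop()
        for t in eps.get(s, ()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return frozenset(closure)

# Example: q0 has only epsilon transitions, to q1 and q2; q1 has one to q3.
eps = {"q0": {"q1", "q2"}, "q1": {"q3"}}
start = epsilon_closure({"q0"}, eps)  # this set is the DFA's initial state
print(sorted(start))  # → ['q0', 'q1', 'q2', 'q3']
```

The DFA then proceeds as usual: from this start set, move on each input symbol and take the ε-closure of the result.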

## logic – kPDA handling multiple epsilon transitions

I have been assigned to build a kPDA with 2 stacks that handles $\{\, w\#w \mid w \in \{0,1\}^* \,\}$. I understand that the # delimits the two strings, but I'm not sure of the logic when starting the stacks with epsilon. I asked my teacher, verbally, if this works and he said no, but I cannot think of another way.
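Not necessarily the intended construction, but one common 2-stack strategy (push the first $w$ onto stack 1; on #, pour stack 1 into stack 2 to restore the original order; then match the second $w$ against stack 2) can be sanity-checked with a small simulation (helper names are my own):

```python
def accepts(s):
    """Simulate a 2-stack strategy for { w#w : w in {0,1}* }."""
    if s.count("#") != 1:
        return False
    w1, w2 = s.split("#")
    stack1, stack2 = [], []
    for c in w1:                         # phase 1: push the first w onto stack 1
        if c not in "01":
            return False
        stack1.append(c)
    while stack1:                        # phase 2 (on '#'): pour stack 1 into
        stack2.append(stack1.pop())      # stack 2, recovering the original order
    for c in w2:                         # phase 3: match second w against stack 2
        if not stack2 or stack2.pop() != c:
            return False
    return not stack2                    # accept iff stack 2 emptied exactly

print(accepts("0110#0110"), accepts("01#10"))  # → True False
```

The pour step is where the second stack earns its keep: a single stack would only let you compare against the *reversal* of $w$, which is why one-stack PDAs recognize $w\#w^R$ but not $w\#w$.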

## How can I prove with an $\epsilon$–$\delta$ proof that $\lim_{x \to \frac{1}{e}} e^{x^{x^x}} < 2$?

This is not a homework question. I just wanted to refresh my epsilon–delta proofs and came across this one: I struggled for an hour and have no idea where to start.
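Before hunting for a $\delta$, it may help to know what the limit actually is: $e^{x^{x^x}}$ is continuous at $x = \tfrac{1}{e}$, so the limit is just the value there, and a quick numeric check (my own sanity check, not part of any proof) puts it safely below 2:

```python
import math

x = 1 / math.e
value = math.exp(x ** (x ** x))  # e^(x^(x^x)) at x = 1/e
print(value)                     # ≈ 1.65, comfortably below 2
assert value < 2
```

Knowing the limit is about $1.65$ suggests a workable plan: show the limit equals this value by continuity, then argue $1.65 + \epsilon < 2$ for a suitable fixed $\epsilon$.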

## limits: What values of $\alpha$, if any, yield $f = \mathcal{O}(\epsilon^\alpha)$ as $\epsilon \downarrow 0$?

I need to solve the following exercise:

Exercise: What values of $\alpha$, if any, yield $f = \mathcal{O}(\epsilon^\alpha)$ as $\epsilon \downarrow 0$ if
$$f = \dfrac{1}{1 - e^\epsilon}\,?$$

What I have tried:

I know I can use the following theorem:

If
$$\lim_{\epsilon \downarrow \epsilon_0} \dfrac{f(\epsilon)}{\phi(\epsilon)} = L,$$
where $-\infty < L < \infty$, then $f = \mathcal{O}(\phi)$ as $\epsilon \downarrow \epsilon_0$.

So I think I need to evaluate the limit
$$\lim_{\epsilon \downarrow 0} \dfrac{1}{\epsilon^\alpha (1 - e^\epsilon)}$$
and find out for which values of $\alpha$ it exists. The limit exists if $\epsilon^\alpha (1 - e^\epsilon)$ diverges to infinity or converges to any number other than $0$. I know that $1 - e^\epsilon \to 1 - 1 = 0$ as $\epsilon \downarrow 0$. So the limit exists if $\epsilon^\alpha \to \infty$ faster than $1 - e^\epsilon \to 0$, because in that case $\epsilon^\alpha (1 - e^\epsilon) \to \infty$ and

$$\lim_{\epsilon \downarrow 0} \dfrac{1}{\epsilon^\alpha (1 - e^\epsilon)} = 0.$$
If $\alpha > 0$ then $\lim_{\epsilon \downarrow 0} \epsilon^\alpha = 0$, so we know that at least $\alpha \leq 0$ is necessary. If $\alpha = 0$ then $\lim_{\epsilon \downarrow 0} \epsilon^\alpha = 1$. If $\alpha < 0$ then $\lim_{\epsilon \downarrow 0} \epsilon^\alpha = \infty$. That is why I think that for all values of $\alpha < 0$ we have $f = \mathcal{O}(\epsilon^\alpha)$.

Question: If my reasoning about when the limit exists is correct, I think the weak point of my solution so far is that I do not actually show/know whether it is true that $\epsilon^\alpha \to \infty$ faster than $1 - e^\epsilon \to 0$. Is this true? And if so, how can I prove it?
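Not a proof, but a numeric spot check of the ratio $f(\epsilon)/\epsilon^\alpha$ for a few trial exponents (my own experiment, not part of the exercise) shows directly which $\alpha$ keep the ratio bounded as $\epsilon \downarrow 0$:

```python
import math

def ratio(eps, alpha):
    """f(eps) / eps**alpha with f(eps) = 1 / (1 - e**eps)."""
    return (1.0 / (1.0 - math.exp(eps))) / eps ** alpha

# Sample the ratio at eps = 1e-2, 1e-4, 1e-6 for several alpha.
for alpha in (-2.0, -1.0, -0.5):
    print(alpha, [round(ratio(10.0 ** -k, alpha), 4) for k in (2, 4, 6)])
```

Since $1 - e^\epsilon \sim -\epsilon$ as $\epsilon \downarrow 0$, the ratio behaves like $-\epsilon^{-1-\alpha}$, which is what these numbers reflect; that asymptotic equivalence (e.g. via Taylor expansion of $e^\epsilon$) is the missing comparison between $\epsilon^\alpha$ and $1 - e^\epsilon$.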

## Probability: anti-concentration: upper bound for $P(\sup_{a \in \mathbb{S}_{n-1}} \sum_{i=1}^n a_i^2 Z_i^2 \ge \epsilon)$

Let $\mathbb{S}_{n-1}$ be the unit sphere in $\mathbb{R}^n$ and $Z_1, \ldots, Z_n$ be an i.i.d. sample from $\mathcal{N}(0, 1)$.

Given $\epsilon > 0$ (which may be assumed very small), what is a reasonable upper bound for the tail probability $P(\sup_{a \in \mathbb{S}_{n-1}} \sum_{i=1}^n a_i^2 Z_i^2 \ge \epsilon)$?

• Using ideas from this other answer (MO link), one can establish the non-uniform anti-concentration bound $P(\sum_{i=1}^n a_i^2 Z_i^2 \le \epsilon) \le \sqrt{e\epsilon}$ for every fixed $a \in \mathbb{S}_{n-1}$.

• The uniform analogue is another story. Can covering numbers be used?
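One observation that makes the supremum tractable (my own remark, not from the question): since $\sum_i a_i^2 = 1$ on the sphere, $\sum_i a_i^2 Z_i^2$ is a weighted average of the $Z_i^2$, so $\sup_{a} \sum_i a_i^2 Z_i^2 = \max_i Z_i^2$ exactly. A Monte Carlo estimate of the tail is then easy:

```python
import random

def tail_prob(n, eps, trials=20000, seed=0):
    """Estimate P(max_i Z_i^2 >= eps), which equals the sup over the sphere."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if max(rng.gauss(0, 1) ** 2 for _ in range(n)) >= eps:
            hits += 1
    return hits / trials

# For small eps the sup-tail is essentially 1: with overwhelming probability
# some |Z_i| exceeds sqrt(eps).
print(tail_prob(n=10, eps=1e-4))  # → 1.0
```

This suggests the upper-tail question with $\sup$ is nearly trivial for small $\epsilon$ (the probability tends to $1$), so no covering-number argument is needed for it; the delicate uniform statement is the lower-tail version matching the bullet above.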

## Linear algebra: if $B$ is a small perturbation of the positive definite matrix $A$, do we have $B > \epsilon A$?

Suppose $A = (a_{ij})$ is a positive definite (symmetric) matrix, and $B$ is another symmetric matrix.

Question: If $B$ is in a small enough neighborhood $U$ of $A$, then it seems that $B$ must also be positive definite. Also, for which values of $\epsilon > 0$ can we find a neighborhood $U$ such that $B > \epsilon A$?

If both $A$ and $B$ are diagonal matrices, then this is trivial. But in general, since we can only diagonalize one of them, I'm afraid there will be some problem.
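A quick way to experiment (a 2×2 sketch of my own using Sylvester's criterion, not a general proof): $B > \epsilon A$ means $B - \epsilon A$ is positive definite, so one can perturb $A$ slightly and test which $\epsilon$ survive.

```python
def is_pos_def_2x2(m):
    """Sylvester's criterion for a symmetric 2x2 matrix: m > 0 iff
    m[0][0] > 0 and det(m) > 0."""
    return m[0][0] > 0 and m[0][0] * m[1][1] - m[0][1] * m[1][0] > 0

def minus(x, y, scale=1.0):
    """Entrywise x - scale*y for 2x2 matrices."""
    return [[x[i][j] - scale * y[i][j] for j in range(2)] for i in range(2)]

A = [[2.0, 1.0], [1.0, 2.0]]        # positive definite: eigenvalues 1 and 3
E = [[0.05, -0.02], [-0.02, 0.03]]  # small symmetric perturbation
B = [[A[i][j] + E[i][j] for j in range(2)] for i in range(2)]

print(is_pos_def_2x2(B))                 # → True  (B stays positive definite)
print(is_pos_def_2x2(minus(B, A, 0.5)))  # → True  (B > 0.5*A for this B)
print(is_pos_def_2x2(minus(B, A, 2.0)))  # → False (B > 2A fails for this B)
```

The pattern this illustrates: for any fixed $\epsilon < 1$, writing $B - \epsilon A = (1 - \epsilon)A + (B - A)$ shows positive definiteness survives once $\|B - A\|$ is small relative to $(1-\epsilon)\lambda_{\min}(A)$, with no simultaneous diagonalization needed.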