## finite automata – State complexity of converting epsilon-NFAs to NFAs without epsilon transitions

I am well aware of the result showing that one can convert an epsilon-NFA (that is, an NFA with epsilon transitions) $$A$$ into an NFA without epsilon transitions $$A'$$ such that $$L(A) = L(A')$$.

Is there a similar result comparing the state complexities of $$A$$ and $$A'$$? That is, if one has an epsilon-NFA $$A$$ with $$n$$ states, is it true that there exists an equivalent NFA without epsilon transitions $$A'$$ that also has $$n$$ states?
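For intuition, the standard epsilon-removal construction keeps the state set unchanged and only rewires transitions and accepting states, which is exactly why $$n$$ states suffice. Below is a minimal Python sketch; the example automaton, the `''`-as-epsilon encoding, and the helper names `eclose` and `removed_delta` are all illustrative assumptions, not a fixed API:

```python
# Hypothetical epsilon-NFA with states {0, 1, 2}; '' marks an epsilon move.
# delta maps (state, symbol) -> set of successor states.
delta = {
    (0, ''): {1},
    (1, 'a'): {2},
    (2, ''): {0},
}
states, start, accept = {0, 1, 2}, 0, {2}

def eclose(q):
    """Epsilon-closure of a state: everything reachable via '' moves."""
    seen, stack = {q}, [q]
    while stack:
        p = stack.pop()
        for r in delta.get((p, ''), set()):
            if r not in seen:
                seen.add(r)
                stack.append(r)
    return seen

def removed_delta(q, x):
    """Transition of the epsilon-free NFA on the SAME state set:
    delta'(q, x) = union of ECLOSE(r) for r in delta(p, x), p in ECLOSE(q)."""
    out = set()
    for p in eclose(q):
        for r in delta.get((p, x), set()):
            out |= eclose(r)
    return out

# A state accepts in the new NFA iff its closure meets the old accepting set.
new_accept = {q for q in states if eclose(q) & accept}

print(removed_delta(0, 'a'))   # {0, 1, 2}
print(new_accept)              # {2}
```

Note that no new states are introduced; the cost of the removal shows up in the number of transitions, not in the number of states.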

## probability theory: show that there is an \$ \epsilon > 0 \$ such that \$ P(X_n > \epsilon, \text{i.o.}) = 1 \$.

Let $$X_1, X_2, X_3, \dots$$ be a sequence of i.i.d. random variables. Suppose $$P(X_n \ge 0) = 1$$ and $$P(X_n > 0) > 0$$ for each $$n \in \mathbb{N} := \mathbb{Z} \cap (1, \infty)$$.

Show that there is an $$\epsilon > 0$$ such that $$P(X_n > \epsilon, \text{ i.o.}) = 1$$.

My attempt:

For each $$n \in \mathbb{N}$$, put $$\epsilon_n = \inf\{X_n(\omega) : \omega \in \Omega\}$$. Note that since $$P(X_n > 0) > 0$$ for each $$n$$, there is $$\alpha_n > 0$$ such that $$P(X_n > \alpha_n) > 0$$; this is why we are guaranteed that infinitely many of the $$\epsilon_n$$ are positive (I think…?).

Now put $$\epsilon = \sup_{n} \epsilon_n$$ and, for each $$n$$, define the event $$E_n = \{X_n > \epsilon\}$$. It is clear that $$\epsilon$$ is positive (since infinitely many of the $$\epsilon_n$$ are positive). Now it would be nice to argue (via Borel–Cantelli) that $$\sum P(E_n) = \infty$$, but I have not had much luck doing so…
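For what it's worth, one standard route (a sketch, not necessarily the intended one) avoids the pointwise infima entirely. By continuity from below,

$$P(X_1 > 0) = \lim_{k \rightarrow \infty} P\left(X_1 > \tfrac{1}{k}\right) > 0,$$

so there is some $$\epsilon = 1/k$$ with $$p := P(X_1 > \epsilon) > 0$$. Since the $$X_n$$ are identically distributed, $$\sum_n P(X_n > \epsilon) = \sum_n p = \infty$$, and since they are independent, the second Borel–Cantelli lemma gives $$P(X_n > \epsilon, \text{ i.o.}) = 1$$.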

## Formal languages: why is FOLLOW not necessary for LL(1) grammars without \$ \epsilon \$-transitions?

I am aware of how FIRST and FOLLOW sets are used to build a parse table for LL(1) grammars.

However, I have come across this statement from my notes:

> With $$\epsilon$$-productions in the grammar we may have to look
> beyond the current nonterminal to what may come next

To me, this suggests that FOLLOW is not necessary for LL(1) grammars that have no $$\epsilon$$-transitions. Am I wrong? And if I am not, why is this the case?
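To see why this is plausible, a small sketch may help: in an epsilon-free grammar every right-hand side is nonempty, so predicting a production only requires FIRST of its leading symbol. The toy grammar and helper names below are illustrative assumptions, not taken from the notes:

```python
# A toy epsilon-free grammar (hypothetical example): uppercase = nonterminal.
grammar = {
    'S': [['a', 'A'], ['b', 'B']],
    'A': [['c']],
    'B': [['d']],
}

def first_of(symbol):
    """FIRST set; with no epsilon productions, FIRST(X alpha) = FIRST(X),
    so only the first symbol of each right-hand side matters."""
    if symbol not in grammar:          # terminal
        return {symbol}
    out = set()
    for rhs in grammar[symbol]:
        out |= first_of(rhs[0])        # never need to look past rhs[0]
    return out

# LL(1) table built from FIRST alone: table[(nonterminal, lookahead)] = production.
table = {}
for nt, productions in grammar.items():
    for rhs in productions:
        for t in first_of(rhs[0]):
            assert (nt, t) not in table, "grammar is not LL(1)"
            table[(nt, t)] = rhs

print(table[('S', 'a')])   # ['a', 'A']
```

Nothing here ever needs to look past the end of a right-hand side, which is the only situation where FOLLOW would enter.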

Thank you

## parsing – \$ f: U \rightarrow V \$ a \$ C^2 \$ diffeomorphism, \$ \forall\, a \in U, \exists\, r > 0 \$ such that \$ f\left(B_{a}(\epsilon)\right) \$ is convex \$ \forall \epsilon \leq r \$

Let $$U, V$$ be open sets in $$\Bbb{R}^n$$ and $$f: U \rightarrow V$$ a diffeomorphism of class $$C^2$$. I need to show that, for every $$a \in U$$, there exists $$r > 0$$ such that the image of the open ball centered at $$a$$ with radius $$\epsilon$$ is convex, for every $$\epsilon \leq r$$.

My idea is very simple, and that's why I think I'm forgetting something. I found some different answers to this question, like this one. But I want to know what is wrong with my thinking.

Well, I actually just used the fact that $$f^{-1}$$ is continuous and $$f$$ is surjective. If $$a \in U$$, let $$A$$ be an open ball around $$a$$. Then $$f(A)$$ is an open set, so there is $$r > 0$$ such that $$B_{f(a)}(r) \subset f(A)$$. So I can use these balls as the convex sets.

Is this wrong? And what can I do to solve the question? I did not understand the answer linked above, as it uses some statements about the Hessian, and I have not studied Hessians yet. The context is the inverse function theorem.

## real analysis: finite cover by clopen sets with diameter at most \$ \epsilon \$.

Here is the problem:

Let $$X$$ be a compact metric space that is totally disconnected, and let $$\epsilon > 0.$$

(a) Show that $$X$$ has a finite cover $$\mathcal{A}$$ by clopen sets with diameter at most $$\epsilon.$$

My attempt

With the help of many people here on this site, I was able to prove that:

If $$X$$ is a compact metric space that is totally disconnected, then for each $$r > 0$$ and every $$x \in X,$$ there is a clopen set $$U$$ such that $$x \in U$$ and $$U \subseteq B_{r}(x).$$

I feel this can help me in proving the first part, but I don't know how. Could someone please clarify this for me?

Also, I got hints here about a finite cover of clopen sets, but I still can't write the solution out completely. Any help would be appreciated.
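One way the quoted lemma might feed into part (a) (a sketch under the assumption that the lemma above holds): apply it with $$r = \epsilon/2$$ to get, for each $$x \in X$$, a clopen set $$U_x$$ with

$$x \in U_x \subseteq B_{\epsilon/2}(x), \qquad \operatorname{diam}(U_x) \leq \epsilon,$$

and then use compactness to extract a finite subcover $$\mathcal{A} = \{U_{x_1}, \dots, U_{x_k}\}$$ from the open cover $$\{U_x\}_{x \in X}$$.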

## abstract algebra – Smallest positive integer \$ n \$ such that \$ \sigma^n = \epsilon \$ for every \$ \sigma \in S_5 \$

Find the smallest positive integer $$n$$ such that $$\sigma^n = \epsilon$$ for each $$\sigma \in S_5$$.

I am doing some practice problems from a textbook and I found this question. It is not included in the questions answered in the appendix, so I am a little unsure about how to do this efficiently. In previous examples the order of $$S_n$$ was small enough to calculate $$n$$ manually.
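One low-tech way to check an answer (a brute-force sketch; the helper name `perm_order` is made up for this example) is to take the lcm of the orders of all elements of $$S_5$$, i.e. the exponent of the group:

```python
from itertools import permutations
from math import gcd

def perm_order(p):
    """Order of a permutation given as a tuple p with p[i] = image of i:
    the lcm of the lengths of its cycles."""
    order = 1
    for i in range(len(p)):
        # length of the cycle containing i
        j, length = p[i], 1
        while j != i:
            j, length = p[j], length + 1
        order = order * length // gcd(order, length)   # lcm so far
    return order

# Smallest n with sigma^n = identity for ALL sigma in S_5
# is the lcm of all element orders.
n = 1
for p in permutations(range(5)):
    o = perm_order(p)
    n = n * o // gcd(n, o)

print(n)   # 60 = lcm(1, 2, 3, 4, 5, 6)
```

The element orders in $$S_5$$ are the lcms of the cycle types (partitions of 5), namely 1, 2, 3, 4, 5, and 6, so the brute force agrees with the hand computation $$\operatorname{lcm}(1,\dots,6) = 60$$.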

I also came across this question, which asks something very similar, but in that case the permutation is given as a product of disjoint cycles. I am not completely sure how to proceed.

## formal languages – NFA-\$ \epsilon \$ extended transition function for reversed strings

It is well known that for an $$\text{NFA-}\epsilon$$ the extended transition function is defined as follows:
$$\begin{align*} \hat\delta &: Q \times \Sigma^* \rightarrow \mathbb{P}(Q) \\ \hat\delta(q, \epsilon) &= \text{ECLOSURE}(q) \\ \hat\delta(q, \alpha x) &= \bigcup_{p_i \in \hat\delta(q, \alpha)} \text{ECLOSURE}(\delta(p_i, x)) \quad \text{for } \alpha \in \Sigma^* \text{ and } x \in \Sigma \end{align*}$$

I would like to give an equivalent definition for the function but now considering a string $$w in Sigma ^ *$$ such that $$w = x alpha$$ with the same definitions for $$x$$ Y $$alpha$$ given below. But what I only have is the base case (i.e. $$w = epsilon$$) which is quite the same. Any clues to define the function like this?
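In case it helps to experiment, here is a small Python sketch of what a head-first recursion might look like: consume the first symbol $$x$$ (passing through epsilon-closures on both sides), then recurse on the remainder $$\alpha$$. The example automaton and names are assumptions, and this is a guess at the intended definition rather than a verified one:

```python
# Hypothetical epsilon-NFA; '' marks epsilon moves; delta: (state, symbol) -> set.
delta = {
    (0, ''): {1},
    (1, 'a'): {2},
    (2, 'b'): {2},
}

def eclose(q):
    """Epsilon-closure via DFS over '' transitions."""
    seen, stack = {q}, [q]
    while stack:
        p = stack.pop()
        for r in delta.get((p, ''), set()):
            if r not in seen:
                seen.add(r)
                stack.append(r)
    return seen

def dhat(q, w):
    """Head-first variant: dhat(q, epsilon) = ECLOSURE(q), and
    dhat(q, x + alpha) first consumes x, then recurses on alpha."""
    if w == '':
        return eclose(q)
    x, alpha = w[0], w[1:]
    # states reachable from ECLOSURE(q) on x, closed under epsilon again
    step = set()
    for r in eclose(q):
        for s in delta.get((r, x), set()):
            step |= eclose(s)
    # union of the recursive calls over all intermediate states
    out = set()
    for p in step:
        out |= dhat(p, alpha)
    return out

print(dhat(0, 'ab'))   # {2}
```

On the same automaton this agrees with the tail-first definition above, which is some evidence (not a proof) that the two recursions compute the same function.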

## \$ \epsilon \$-net under the Hausdorff distance – MathOverflow

Consider linear subspaces of $$\mathbb{R}^n$$. For two subspaces $$X$$ and $$Y$$, we define their Hausdorff distance as
$$d_{\mathrm{H}}(X, Y) = \max\left\{\, \sup_{x \in X,\, \|x\|_2 = 1} \inf_{y \in Y,\, \|y\|_2 = 1} d(x, y),\ \sup_{y \in Y,\, \|y\|_2 = 1} \inf_{x \in X,\, \|x\|_2 = 1} d(x, y) \,\right\}.$$

For $$\epsilon > 0$$, a set of subspaces $$X_1, \dots, X_m$$ is called an $$\epsilon$$-net under the Hausdorff distance if, for any subspace $$Y$$, there is some $$X_i$$ such that
$$d_{\mathrm{H}}(X_i, Y) < \epsilon.$$
The question is to provide upper and lower bounds on $$m$$.

## Epsilon-delta proofs for rational functions

I'm stuck trying this problem; how do I proceed?

## complexity theory: simple question about epsilon and approximation algorithms

I am getting very confused. I reached a point where I had to compute a limit as $$n \rightarrow \infty$$ for an optimization problem, and I arrived at a fairly simple one: $$\lim_{n \rightarrow \infty} 3 - \frac{7}{n}$$.

Now I plugged in $$3 - \epsilon$$ and I'm trying to prove that there cannot be an $$\epsilon > 0$$ such that the algorithm's approximation ratio is $$3 - \epsilon$$, because there is a "bigger estimate". This is the part I am not sure about: what is the correct direction of the inequality, $$3 - \frac{7}{n} > 3 - \epsilon$$ or the opposite? I am trying to show that the approximation ratio approaches 3.
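As a sanity check on the direction (a rough numeric sketch, assuming the ratio really is $$3 - \frac{7}{n}$$): for any fixed $$\epsilon > 0$$, as soon as $$n > 7/\epsilon$$ we have $$\frac{7}{n} < \epsilon$$ and hence $$3 - \frac{7}{n} > 3 - \epsilon$$, i.e. the sequence eventually exceeds every level strictly below 3.

```python
# Numeric sanity check: for fixed eps > 0, the ratio 3 - 7/n exceeds
# 3 - eps as soon as n > 7 / eps (since then 7/n < eps).
def ratio(n):
    return 3 - 7 / n

for eps in (1.0, 0.1, 0.01):
    n = int(7 / eps) + 1           # first integer past the threshold
    assert ratio(n) > 3 - eps      # the claimed inequality direction holds
    print(eps, n, ratio(n))
```

So the direction written in the question is the one that holds for all large $$n$$, which is what rules out a $$3 - \epsilon$$ guarantee.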

I think what I wrote is the right way, but I'm not sure. I would appreciate knowing what is right in this case. Thank you.