general topology – In a locally compact, second countable Hausdorff space there is a sequence of compact subsets $K_n$ with $K_{n-1} \subseteq \overset{\circ}{K_n}$ and $\bigcup_n K_n = E$

Let $(E, \tau)$ be a second countable, locally compact Hausdorff space. I want to show that there is a sequence $(K_n)_{n \in \mathbb N_0} \subseteq E$ such that $K_n$ is compact and $K_{n-1} \subseteq \overset{\circ}{K_n}$ for all $n \in \mathbb N$, and $\bigcup_{n \in \mathbb N_0} K_n = E$.

The question is basically answered on MathOverflow, but there is a part of the proof that I do not understand.

Since $E$ is second countable, $\tau$ has a countable base $\mathcal B$. It is easy to see that $$\mathcal B_c := \left\{U \in \mathcal B : \overline U \text{ is compact}\right\}$$ is again a base for $\tau$.

Now, the construction described in the answer is as follows:
Let $U_0 \in \mathcal B_c$ and $K_0 := \overline{U_0}$. Given $(U_0, K_0), \ldots, (U_{n-1}, K_{n-1})$, let $U_n \in \mathcal B_c \setminus \left\{U_0, \ldots, U_{n-1}\right\}$, let $\mathcal C$ be a finite subcover of $K_{n-1}$, and set $K_n := \overline{U_n} \cup \overline{\bigcup \mathcal C}$.

The problematic part is the cover $\mathcal C$ of $K_{n-1}$ (why a subcover?). Could $\mathcal C$ actually be arbitrary? What if we choose $\mathcal C = \{E\}$?
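For what it is worth, here is how I currently read the role of the finite subcover (my own reconstruction, not taken from the linked answer): $\mathcal C$ is drawn from $\mathcal B_c$, so each of its members has compact closure. Since $\mathcal B_c$ is a base, $K_{n-1}$ is covered by members of $\mathcal B_c$, and compactness of $K_{n-1}$ yields a finite subcover $\mathcal C \subseteq \mathcal B_c$. Then
$$K_{n-1} \subseteq \bigcup \mathcal C \subseteq \overline{\bigcup \mathcal C} = \bigcup_{U \in \mathcal C} \overline U \subseteq K_n,$$
so $K_n$ is compact (the union of $\overline{U_n}$ with finitely many compact closures), and $K_{n-1}$ lies in the open set $\bigcup \mathcal C \subseteq \overset{\circ}{K_n}$. With $\mathcal C = \{E\}$ neither conclusion would follow, since $\overline E = E$ need not be compact.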

Dynamic programming: computing all products of $n-1$ factors when $n$ factors are given

Let's assume we have an operator
$$\times : E^2 \to E$$
of which we only know that it is associative. Say that a multiplication $e \times f$ always takes time $M$, for all $e, f \in E$.

Now we are given $n$ elements $e_1, \ldots, e_n \in E$, and have the task of computing all $n$ products
$$\def\bigtimes{\mathop{\vcenter{\huge\times}}} p_j := \bigtimes_{\substack{i = 1 \\ i \neq j}}^{n} e_i$$

Naively multiplying everything out, computing the $n$ products takes $O(n^2 M)$ time.

A more sophisticated approach, which runs in $O(n^{3/2} M)$, splits $\bigtimes_{i=1}^n e_i$ at arbitrary positions into $k = \sqrt n$ factors.
For each of the $n$ products we then only have to recompute one of these factors and multiply the resulting factor together with the other $k-1$ factors.
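To make that concrete, here is a rough Python sketch of the $\sqrt n$-block scheme (the function name `all_but_one_products` and the `times` argument standing for the abstract operator are mine; only associativity is used, so the factors stay in their original contiguous order):

from math import isqrt

def all_but_one_products(elems, times):
    # p_j = e_1 x ... x e_{j-1} x e_{j+1} x ... x e_n for every j,
    # via sqrt(n)-sized blocks: O(n^(3/2)) applications of `times`.
    n = len(elems)
    if n < 2:
        raise ValueError("each p_j would be an empty product for n < 2")

    def prod(seq):
        acc = seq[0]
        for x in seq[1:]:
            acc = times(acc, x)
        return acc

    size = max(1, isqrt(n))                            # block size ~ sqrt(n)
    blocks = [elems[i:i + size] for i in range(0, n, size)]
    block_prods = [prod(b) for b in blocks]            # about n multiplications in total

    result = []
    for j in range(n):
        b, off = divmod(j, size)                       # block containing e_j
        inner = blocks[b][:off] + blocks[b][off + 1:]  # that block with e_j removed
        parts = block_prods[:b] + ([prod(inner)] if inner else []) + block_prods[b + 1:]
        result.append(prod(parts))                     # about 2*sqrt(n) multiplications per j
    return result

# example with ordinary integer multiplication as the operator
print(all_but_one_products([2, 3, 4, 5], lambda a, b: a * b))  # [60, 40, 30, 24]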

What is the fastest algorithm for this problem and how does it look?

What is the asymptotic bound of $1 \cdot n + 2(n-1) + 3(n-2) + \dots + (n-1) \cdot 2 + n$?

Following Ryan's comment, distributing the $i$ and splitting the sum gives
$$\sum_{i=1}^{n} i\,(n-(i-1)) = \sum_{i=1}^{n} i \cdot n + \sum_{i=1}^{n} \left(i - i^2\right),$$ which is approximately $\frac{1}{2}n^3 - \frac{1}{3}n^3$, so the asymptotic bound is, in fact, $O(n^3)$.
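One way to double-check the cubic growth (my own addition, not from Ryan's comment): the sum has the exact closed form
$$\sum_{i=1}^{n} i\,(n-i+1) = (n+1)\sum_{i=1}^{n} i - \sum_{i=1}^{n} i^2 = \frac{n(n+1)^2}{2} - \frac{n(n+1)(2n+1)}{6} = \frac{n(n+1)(n+2)}{6},$$
which is indeed $\Theta(n^3)$.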

Why is $\frac{X^{p^N}-1}{X^{p^{N-1}}-1}$ irreducible over $\mathbb{Q}_p$?

Can anyone tell me why this polynomial is irreducible? Thank you.

calculus – How do I compute $\lim\limits_{x \to 8} \frac{\sqrt[3]{x}-2}{x-8} = \frac{1}{12}$ using $x^n - a^n = (x-a)(x^{n-1} + x^{n-2}a + x^{n-3}a^2 + \dots + xa^{n-2} + a^{n-1})$?

I am asked to compute $\lim\limits_{x \to 8} \frac{\sqrt[3]{x}-2}{x-8} = \frac{1}{12}$ using the factoring formula

$x^n - a^n = (x-a)(x^{n-1} + x^{n-2}a + x^{n-3}a^2 + \dots + xa^{n-2} + a^{n-1})$

I have rewritten the limit as

$\lim\limits_{x \to 8} \frac{x^{1/3} - 8^{1/3}}{x-8} = \frac{1}{12}$

I know that the $x-8$ will cancel, but I do not know how to plug the values of $x$ and $a$ into the formula. I do not know where to stop plugging in values; the fractional exponent confuses me.
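For what it is worth, here is one way the substitution can go (my own working, taking $n = 3$, replacing $x$ in the formula by $x^{1/3}$, and taking $a = 2$, so it may differ from what the book intends):
$$x - 8 = \left(x^{1/3}\right)^3 - 2^3 = \left(x^{1/3} - 2\right)\left(x^{2/3} + 2x^{1/3} + 4\right),$$
so that
$$\frac{x^{1/3}-2}{x-8} = \frac{1}{x^{2/3} + 2x^{1/3} + 4} \longrightarrow \frac{1}{4 + 4 + 4} = \frac{1}{12} \quad \text{as } x \to 8.$$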

Algorithms – Solving T(n) = 3T(n-1) + 2

I am trying to get better at solving recurrence relations, so I am making up my own simple ones and trying to solve them. I came up with the following recurrence.

T(n) = 3T(n-1) + 2, and we say that T(1) = 1

T(n) = 3[3T(n-2) + 2] + 2
T(n) = 3[3[3T(n-3) + 2] + 2] + 2
T(n) = 3[3[3[3T(n-4) + 2] + 2] + 2] + 2 = 81T(n-4) + 80
.
.
.
We say that k = n-1.
T(n) = 3^k T(n-k) + 3^k - 1
T(n) = 3^k T(1) + 3^k - 1

I am stuck on how to finish this and how to find the time complexity in Big O.
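As a quick sanity check of the expansion (my own snippet; it uses the closed form that falls out of the expansion with k = n-1 and T(1) = 1, namely T(n) = 2 * 3^(n-1) - 1, i.e. Theta(3^n)):

def T_rec(n):
    # the recurrence itself: T(1) = 1, T(n) = 3T(n-1) + 2
    return 1 if n == 1 else 3 * T_rec(n - 1) + 2

def T_closed(n):
    # candidate closed form from the expansion: 3^(n-1) * T(1) + 3^(n-1) - 1
    return 2 * 3 ** (n - 1) - 1

# the two agree on small inputs, which supports T(n) = Theta(3^n)
assert all(T_rec(n) == T_closed(n) for n in range(1, 15))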

turing machines: space and time complexity of $L = \{a^n b^{n^2} \mid n \ge 1\}$

Consider the following language:
$$L = \left\{a^n b^{n^2} \mid n \ge 1\right\},$$

When determining the time and space complexity on a multi-tape TM, we can use two memory tapes: the first to count $n$, and the second to repeat that count of $n$ a total of $n$ times. Because of the way we use the second tape, this should give a $\Theta(n^2)$ space complexity, and I would say the same for the time complexity. I thought this was correct, but the given solution is $TM(x) = |x| + n + 2$, where $x$ is, supposedly, the input string, hence $\Theta(|x|)$. That sounds right too, so is my reasoning completely wrong, or is it just a different way of expressing the same thing?

Could we have reasoned differently and said, for example: for each $a$ we write a symbol on the first tape, and then we count the $b$'s by scanning those symbols back and forth $n$ times? This time the space complexity should be $\Theta(n)$, while the time complexity should remain unchanged. What would change if we had a single-tape TM?
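Just to make the second strategy concrete, here is a rough Python sketch of the bookkeeping I have in mind (my own illustration, not a faithful TM simulation; the name `accepts` is mine): one marker per $a$ on the work tape, and the $b$-block consumed in $n$ passes of $n$ symbols each, i.e. $\Theta(n)$ extra space and $\Theta(n^2)$ steps.

def accepts(x: str) -> bool:
    # leading block of a's, followed only by b's
    n = len(x) - len(x.lstrip('a'))
    bs = x[n:]
    if n < 1 or bs.strip('b'):
        return False
    tape = ['#'] * n                  # one marker per 'a': Theta(n) extra space
    i = 0                             # position of the read head in the b-block
    for _ in range(n):                # n passes over the marker tape ...
        for _ in tape:                # ... each pass consuming n b's
            if i >= len(bs):
                return False          # ran out of b's too early
            i += 1
    return i == len(bs)               # accept iff exactly n^2 b's were consumed

# a^2 b^4 is in L, a^2 b^3 is not
print(accepts("aabbbb"), accepts("aabbb"))  # True False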

Probability: anti-concentration: upper bound for $P\left(\sup_{a \in \mathbb S_{n-1}} \sum_{i=1}^n a_i^2 Z_i^2 \ge \epsilon\right)$

Let $\mathbb S_{n-1}$ be the unit sphere in $\mathbb R^n$ and let $z_1, \ldots, z_n$ be an i.i.d. sample from $\mathcal N(0, 1)$.

Given $\epsilon > 0$ (which can be assumed to be very small), what is a reasonable upper bound for the tail probability $P\left(\sup_{a \in \mathbb S_{n-1}} \sum_{i=1}^n a_i^2 z_i^2 \ge \epsilon\right)$?

  • Using ideas from this other answer (MO link), one can establish the non-uniform anti-concentration bound $P\left(\sum_{i=1}^n a_i^2 z_i^2 \le \epsilon\right) \le \sqrt{e\epsilon}$ for all $a \in \mathbb S_{n-1}$.

  • The uniform analogue is another story. Can covering numbers be used?

Additional explanation of the steps in a proof that $\sum_{k=0}^{n} k \cdot \binom{n}{k} = n \cdot 2^{n-1}$

So, this is one of the questions in my textbook, which seems to be quite common: $$\sum_{k=0}^{n} k \cdot \binom{n}{k} = n \cdot 2^{n-1}$$

The same book provides the following solution:
$$\sum_{k=0}^{n} k \cdot \binom{n}{k} = \sum_{k=0}^{n} n \cdot \binom{n-1}{k-1} = n \sum_{k=0}^{n-1} \binom{n-1}{k} = n \cdot 2^{n-1}$$

What is not clear to me is (1) how to get from $$\sum_{k=0}^{n} n \cdot \binom{n-1}{k-1}$$ to $$n \sum_{k=0}^{n-1} \binom{n-1}{k}$$ and (2) from $$n \sum_{k=0}^{n-1} \binom{n-1}{k}$$ to $$n \cdot 2^{n-1}$$, respectively.
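In case it helps to see the two steps written out (my own working, so it may phrase things differently from the book): for (1), pull the constant $n$ out of the sum, drop the $k=0$ term (it vanishes because $\binom{n-1}{-1} = 0$), and reindex with $j = k-1$:
$$\sum_{k=0}^{n} n \cdot \binom{n-1}{k-1} = n \sum_{k=1}^{n} \binom{n-1}{k-1} = n \sum_{j=0}^{n-1} \binom{n-1}{j}.$$
For (2), the binomial theorem with $x = y = 1$ gives
$$\sum_{j=0}^{n-1} \binom{n-1}{j} = (1+1)^{n-1} = 2^{n-1},$$
hence $n \cdot 2^{n-1}$.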