## linear algebra: search for a closed form for the recurrence relations \$ a_n = n \, a_{n-1} + 1 \$ and \$ a_n = n \, a_{n-1} + n \$?

Consider the sequence defined by
$$\begin{cases} a_0 = 1 \\ a_n = n \, a_{n-1} + 1 & \text{if } n \ge 1 \end{cases}$$
Find a closed form for $$a_n$$.

The second case is the following:
$$\begin{cases} a_0 = 1 \\ a_n = n \, a_{n-1} + n & \text{if } n \ge 1 \end{cases}$$
Find a closed form for $$a_n$$.
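For what it's worth, unrolling the first recurrence suggests $$a_n = \sum_{k=0}^n \frac{n!}{k!}$$, and the substitution $$b_n = a_n + 1$$ turns the second recurrence into the first one with $$b_0 = 2$$, suggesting $$a_n = n! + \sum_{k=0}^n \frac{n!}{k!} - 1$$. These closed forms are my own guesses, not part of the original post; a quick exact-arithmetic check:

```python
from math import factorial

def a1(n):
    """a_n = n*a_{n-1} + 1 with a_0 = 1, computed directly from the recurrence."""
    a = 1
    for i in range(1, n + 1):
        a = i * a + 1
    return a

def a2(n):
    """a_n = n*a_{n-1} + n with a_0 = 1, computed directly from the recurrence."""
    a = 1
    for i in range(1, n + 1):
        a = i * a + i
    return a

for n in range(15):
    closed1 = sum(factorial(n) // factorial(k) for k in range(n + 1))
    closed2 = factorial(n) + closed1 - 1
    assert a1(n) == closed1
    assert a2(n) == closed2
```

The check uses integer division `factorial(n) // factorial(k)`, which is exact since $$k \le n$$.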

## nt.number theory – Are there infinitely many zeros of \$ \sum_{r=1}^{n-1} \mu(r) \gcd(n,r) \$?

Let $$\mu(n)$$ be the Möbius function and $$S(x)$$ be the number of positive integers $$n \le x$$ such that

$$\sum_{r=1}^{n-1} \mu(r) \gcd(n,r) = 0.$$

My experimental data for $$n \le 2.7 \times 10^5$$ seem to suggest that the number of solutions $$\le x$$ grows roughly like

$$S(x) \sim (\zeta(2) - 1)\sqrt{x}.$$
Is there any explanation for this? The square root may come from the growth rate of the Mertens function $$M(n) = \sum_{r \le n} \mu(r)$$, while the appearance of $$\zeta(2)$$ may be due to the fact that it appears in many sums related to the Pillai function $$P(n) = \sum_{r \le n} \gcd(n,r)$$.

I also observed that $$200$$ out of $$230$$ zeros for $$n \le 1.3 \times 10^5$$ were prime numbers, which indicates that the zeros could be dominated by primes. For a prime $$p$$, $$\sum_{r=1}^{p-1} \mu(r) \gcd(p,r) = M(p-1)$$, so I guess it is more likely for a sequence of $$\pm 1$$ to add up to $$0$$ than for a sequence of integers of larger absolute value.
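For readers who want to reproduce the data, here is a small self-contained sketch (sieve-based Möbius function plus a brute-force scan; the function and variable names are mine):

```python
from math import gcd

def mobius_upto(N):
    """Sieve mu(1..N): flip the sign once per prime factor, zero out non-squarefree n."""
    mu = [1] * (N + 1)
    sieve = [True] * (N + 1)
    for p in range(2, N + 1):
        if sieve[p]:
            for m in range(p, N + 1, p):
                if m > p:
                    sieve[m] = False
                mu[m] = -mu[m]
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0
    return mu

N = 2000
mu = mobius_upto(N)

def f(n):
    """The sum from the question: sum_{r=1}^{n-1} mu(r) * gcd(n, r)."""
    return sum(mu[r] * gcd(n, r) for r in range(1, n))

# for a prime p, gcd(p, r) = 1 for all r < p, so f(p) collapses to Mertens M(p-1)
M = lambda x: sum(mu[1:x + 1])

zeros = [n for n in range(2, N + 1) if f(n) == 0]
```

The brute-force scan is $$O(N^2)$$, so pushing it to $$n \le 2.7 \times 10^5$$ as in the post would need the obvious optimizations (e.g. restricting $$r$$ to divisog-classes of $$n$$).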

FYI: The question was posted on MSE. It got upvotes but no answers, and the comments suggested that it is difficult. Therefore, I am posting it on MO.

## inequalities – (Dis)prove \$ k \geqq \exp(1) = e \implies (k - x_{1} x_{2} \cdots x_{n-1})(k - x_{2} x_{3} \cdots x_{n}) \cdots (k - x_{n} x_{1} \cdots x_{n-2}) \geqq (1-k)^{n} \$

I post the following conjecture about inequalities with power functions; is the following (in)equality valid?

Given $$\{x_{1}, x_{2}, x_{3}, \ldots, x_{n}\} \subset \mathbb{R}_{+}^{n}$$ such that $$x_{1} + x_{2} + x_{3} + \cdots + x_{n} = n \geqq 5$$ and $$k$$ a constant: if
$$k \geqq \exp(1) = e, \ \text{then} \ (k - x_{1} x_{2} \cdots x_{n-1})(k - x_{2} x_{3} \cdots x_{n}) \cdots (k - x_{n} x_{1} \cdots x_{n-2}) \geqq (1-k)^{n}.$$
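Not an answer, but a numerical probe may be useful. By AM-GM, each factor satisfies $$\prod_{i \neq j} x_i \le \left(\frac{n - x_j}{n-1}\right)^{n-1} < e$$, so for $$k \geqq e$$ every factor is positive; for odd $$n$$ the right-hand side is negative and the inequality then holds trivially (this observation is mine, not from the post). A hedged sketch that spot-checks random points for $$n = 5$$ (helper names are my own):

```python
import math, random

def check(xs, k):
    """Return (LHS, RHS) of the conjectured inequality for the point xs."""
    n = len(xs)
    lhs = 1.0
    for j in range(n):
        # cyclic factor omitting x_j; computed by division purely for simplicity
        prod_except_j = math.prod(xs) / xs[j]
        lhs *= k - prod_except_j
    return lhs, (1 - k) ** n

random.seed(0)
k = math.e
for _ in range(1000):
    xs = [random.uniform(0.01, 1.0) for _ in range(5)]
    s = sum(xs)
    xs = [x * 5 / s for x in xs]   # normalize so that x_1 + ... + x_5 = 5
    lhs, rhs = check(xs, k)
    assert lhs >= rhs
```

Note that for even $$n$$ the two sides coincide at the symmetric point $$x_1 = \cdots = x_n = 1$$, so the even case is the delicate one and is deliberately not asserted here.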

## traveling salesman: why is the size of the search space of TSP \$ (n-1)! \$ instead of \$ n! \$?

I'm just learning about the traveling salesman problem, and I've been playing with it. I am not sure if what I have found is just a special case or not. My teacher says that the size of the search space is $$n!$$, where $$n$$ is the number of cities.

Suppose we have a table with the cities $$A, B, C, D$$ on each axis, where the entries represent the distances between cities:

$$\begin{array}{|c|c|c|c|c|} \hline & A & B & C & D \\ \hline A & 0 & 2 & 3 & 4 \\ \hline B & 2 & 0 & 10 & 5 \\ \hline C & 3 & 10 & 0 & 15 \\ \hline D & 4 & 5 & 15 & 0 \\ \hline \end{array}$$

From this, I can generate several permutations of possible routes, and their lengths:
$$A \to B \to C \to D \to A: 31$$
$$A \to B \to D \to C \to A: 25$$
$$A \to C \to B \to D \to A: 22$$
$$A \to C \to D \to B \to A: 25$$
$$A \to D \to B \to C \to A: 22$$
$$A \to D \to C \to B \to A: 31$$

Now, it seems to me that each of these permutations is cyclic. Thus, $$B \to C \to D \to A \to B: 31$$ and $$A \to B \to C \to D \to A: 31$$ are equivalent, for example.

So, in fact, it does not matter which city you choose initially, because the resulting shortest route will be equivalent to one of the routes above.

And of course, there are $$3!$$ of these tours, or $$(n-1)!$$ of them. So why isn't the size of the search space $$(n-1)!$$ instead of $$n!$$? Maybe there is a problem with my understanding of what the size of the search space means; I'm not sure.
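The counting argument can be made concrete by brute force: fixing the start city $$A$$ leaves $$(n-1)! = 3! = 6$$ permutations of the remaining cities, which reproduces exactly the six tours listed above. A quick sketch:

```python
from itertools import permutations

# distance table from the question
D = {'A': {'A': 0, 'B': 2,  'C': 3,  'D': 4},
     'B': {'A': 2, 'B': 0,  'C': 10, 'D': 5},
     'C': {'A': 3, 'B': 10, 'C': 0,  'D': 15},
     'D': {'A': 4, 'B': 5,  'C': 15, 'D': 0}}

def tour_length(tour):
    """Length of the closed tour that visits `tour` in order and returns home."""
    return sum(D[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

# fix the start city 'A': only the other n-1 cities are permuted
tours = [('A',) + p for p in permutations('BCD')]
lengths = [tour_length(t) for t in tours]
```

With a symmetric distance table each tour and its reversal have the same length, so one could even argue for $$(n-1)!/2$$ distinct tours; the $$n!$$ count simply treats every ordering of all $$n$$ cities as a separate candidate.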

## Relating \$ \int_0^1 \frac{(\ln x)^{n-1} (\ln(1+x))^{p}}{x} dx \$ and \$ \int_0^1 \frac{(\ln x)^{n} (\ln(1+x))^{p-1}}{1+x} dx \$

This post evaluates the integral
$$I = \int_0^1 \frac{\ln^2(x) \, \ln^3(1+x)}{x} \, dx$$

as

$$I = -\frac{\pi^6}{252} - 18 \zeta(\bar{5},1) + 3 \zeta^2(3) \tag{1}$$

where,

$$\zeta(\bar{5},1) = \frac{1}{24} \int_0^1 \frac{\ln^4(x) \, \ln(1+x)}{1+x} \, {\rm d}x$$

More succinctly,

$$I = -12 \, S_{3,3}(-1) \tag{2}$$

with the generalized Nielsen polylogarithm $$S_{n,p}(z)$$.

Question: How do we show that $$\zeta(\bar{5},1)$$ is just a generalized Nielsen polylogarithm in disguise? Or, in general, that

\begin{aligned} S_{n,p}(-1) &= C_1 \int_0^1 \frac{(\ln x)^{n-1} \big(\ln(1+x)\big)^p}{x} \, dx \\ &= C_2 \int_0^1 \frac{(\ln x)^n \big(\ln(1+x)\big)^{p-1}}{1+x} \, dx \end{aligned}

where,
$$C_1 = \frac{(-1)^{n+p-1}}{(n-1)! \, p!}, \qquad C_2 = \frac{(-1)^{n+p}}{n! \, (p-1)!}$$

Thus,

$$\zeta(\bar{5},1) = S_{4,2}(-1)$$
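As a sanity check on the displayed relation: a single integration by parts with $$u = (\ln(1+x))^p$$ and $$dv = (\ln x)^{n-1} \frac{dx}{x}$$ (the boundary terms vanish) gives $$\int_0^1 \frac{(\ln x)^{n-1} (\ln(1+x))^p}{x} dx = -\frac{p}{n} \int_0^1 \frac{(\ln x)^n (\ln(1+x))^{p-1}}{1+x} dx$$, which is exactly $$C_2/C_1 = -p/n$$. A numerical confirmation (my own sketch; the substitution $$x = e^{-t}$$ removes the endpoint singularity):

```python
import math

def simpson(f, a, b, m=20000):
    """Composite Simpson rule on [a, b]; m must be even."""
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, m))
    return s * h / 3

n, p = 2, 2
# after x = e^{-t}, both integrals become smooth integrals over [0, oo);
# the integrands decay like e^{-2t}, so truncating at t = 40 is harmless
I1 = simpson(lambda t: (-t) ** (n - 1) * math.log(1 + math.exp(-t)) ** p, 0, 40)
I2 = simpson(lambda t: (-t) ** n * math.log(1 + math.exp(-t)) ** (p - 1)
                       * math.exp(-t) / (1 + math.exp(-t)), 0, 40)

# integration by parts predicts I1 = -(p/n) * I2
assert abs(I1 + (p / n) * I2) < 1e-6 * abs(I2)
```

This only verifies the equality of the two integral representations, not the identification with $$S_{n,p}(-1)$$ itself.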

## Simplification of \$ \sum_{n=1}^{+\infty} \frac{z^{n-1} s^{n}}{(n-1)! \, n!} \$

How can I simplify $$\sum_{n=1}^{+\infty} \frac{z^{n-1} s^{n}}{(n-1)! \, n!}$$ for $$z$$ and $$s$$ positive real numbers?
I thought about finding a function $$f$$ such that $$f^{(n)}(0) = \frac{1}{(n+1)!}$$ for all $$n$$.
I also thought of representing the sum as the inner product of two vectors with exponential components in a Hilbert space.
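Reindexing with $$m = n - 1$$ gives $$s \sum_{m \ge 0} \frac{(zs)^m}{m!(m+1)!}$$, which matches the series of the modified Bessel function $$I_1$$; the sum therefore appears to equal $$\sqrt{s/z} \, I_1(2\sqrt{zs})$$ (my own observation, worth double-checking). A numerical cross-check against the standard integral representation $$I_1(w) = \frac{1}{\pi} \int_0^\pi e^{w \cos\theta} \cos\theta \, d\theta$$:

```python
import math

def series_sum(z, s, terms=60):
    """Direct partial sum of sum_{n>=1} z^(n-1) s^n / ((n-1)! n!)."""
    return sum(z ** (n - 1) * s ** n / (math.factorial(n - 1) * math.factorial(n))
               for n in range(1, terms + 1))

def bessel_i1(w, N=4000):
    """I_1(w) via its integral representation, composite Simpson rule."""
    h = math.pi / N
    g = lambda t: math.exp(w * math.cos(t)) * math.cos(t)
    s = g(0) + g(math.pi) + sum((4 if i % 2 else 2) * g(i * h) for i in range(1, N))
    return s * h / 3 / math.pi

z, s = 2.0, 3.0
closed = math.sqrt(s / z) * bessel_i1(2 * math.sqrt(z * s))
assert abs(series_sum(z, s) - closed) < 1e-8 * closed
```

The agreement to many digits over several $$(z, s)$$ pairs supports the closed form, but of course does not replace a proof (which follows by comparing the series term by term).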

## A limit with the sum \$ S_n = 1 + \frac{n-1}{n+2} + \frac{n-1}{n+2} \cdot \frac{n-2}{n+3} + \cdots + \frac{n-1}{n+2} \cdot \frac{n-2}{n+3} \cdots \frac{1}{2n} \$

$$S_n = 1 + \frac{n-1}{n+2} + \frac{n-1}{n+2} \cdot \frac{n-2}{n+3} + \cdots + \frac{n-1}{n+2} \cdot \frac{n-2}{n+3} \cdots \frac{1}{2n}.$$ Then $$S_n/\sqrt{n}$$ tends to $$\frac{\sqrt{\pi}}{2}$$.

How can one show this? The general term in $$S_n$$ is $$\frac{n-1}{n+2} \cdot \frac{n-2}{n+3} \cdots \frac{n-k}{n+k+1} = \frac{C_{n+1}^{k+2}}{C_{n+k+1}^{k+2}}$$ where $$C_n^k = \frac{n!}{k!(n-k)!}$$. So, how to proceed?
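A quick numerical experiment pins down the limiting constant as $$\approx 0.886 = \frac{\sqrt{\pi}}{2}$$, consistent with approximating the $$k$$-th term by $$e^{-k^2/n}$$ and the sum by a Gaussian integral (this heuristic is mine, not from the post). A sketch:

```python
import math

def S(n):
    """Compute S_n by accumulating the telescoping products term by term."""
    total, term = 1.0, 1.0
    for k in range(1, n):
        term *= (n - k) / (n + k + 1)
        total += term
        if term < 1e-18:   # remaining terms are negligible
            break
    return total

n = 10 ** 6
ratio = S(n) / math.sqrt(n)
assert abs(ratio - math.sqrt(math.pi) / 2) < 0.01
```

The convergence is slow (the error behaves like $$1/\sqrt{n}$$), which is why a large $$n$$ is used; the early-exit test keeps the loop to a few thousand iterations.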

## polynomials – Roots of \$ x^n - x^{n-1} - \cdots - x - 1 \$

It's easy to see that $$f(x) = x^n - x^{n-1} - \cdots - x - 1$$ has only one positive root $$\alpha$$, which lies in the interval $$(1,2)$$. But it is claimed that this root is a Pisot number, that is, the other roots lie in $$\{z \in \mathbb{C} : |z| < 1\}$$. I tried the following, but I failed. I considered $$P(x) = (x-1)f(x) = x^{n+1} - 2x^n + 1 = x^n(x-2) + 1$$ and then tried to use Rouché's theorem, choosing $$g(x) = -x^n(x-2)$$ and trying to show that $$1 < |g(z)| + |P(z)|$$ for $$|z| = 1$$. Proving this would imply that $$P(x)$$ has $$n$$ roots in $$\{z \in \mathbb{C} : |z| < 1\}$$, which shows the claim. But this inequality fails at $$z = 1$$. Can you give me an idea of how this claim can be proven?
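A numerical experiment is at least consistent with the Pisot claim. The sketch below finds all roots of $$f$$ with a plain Durand-Kerner iteration (pure Python; function names are mine) and checks that exactly one root has modulus $$\ge 1$$:

```python
def poly_eval(coeffs, x):
    """Horner evaluation; coeffs lists the leading coefficient first."""
    v = 0j
    for c in coeffs:
        v = v * x + c
    return v

def durand_kerner(coeffs, iters=500):
    """All complex roots of a monic polynomial via Durand-Kerner iteration."""
    deg = len(coeffs) - 1
    roots = [(0.4 + 0.9j) ** k for k in range(deg)]   # standard starting points
    for _ in range(iters):
        for i in range(deg):
            denom = 1 + 0j
            for j in range(deg):
                if j != i:
                    denom *= roots[i] - roots[j]
            roots[i] -= poly_eval(coeffs, roots[i]) / denom
    return roots

n = 8
coeffs = [1] + [-1] * n          # x^n - x^(n-1) - ... - x - 1
roots = durand_kerner(coeffs)
outside = [r for r in roots if abs(r) >= 1]
```

For this particular family the inner roots approach the unit circle as $$n$$ grows, so the numerical margin shrinks with $$n$$; the experiment is illustrative only, not a proof.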

## general topology – In a locally compact, second countable Hausdorff space, there is a sequence of compact subsets \$ K_n \$ with \$ K_{n-1} \subseteq \overset{\circ}{K_n} \$ and \$ \bigcup_n K_n = E \$

Let $$(E, \tau)$$ be a second countable, locally compact Hausdorff space. I want to show that there is a sequence $$(K_n)_{n \in \mathbb{N}_0}$$ of subsets of $$E$$ such that $$K_n$$ is compact, $$K_{n-1} \subseteq \overset{\circ}{K_n}$$ for all $$n \in \mathbb{N}$$, and $$\bigcup_{n \in \mathbb{N}_0} K_n = E$$.

The question is basically answered on MathOverflow, but there is a part of the proof that I do not understand.

Since $$E$$ is second countable, $$\tau$$ has a countable base $$\mathcal{B}$$. It's easy to see that $$\mathcal{B}_c := \left\{U \in \mathcal{B} : \overline{U} \text{ is compact}\right\}$$ is again a base for $$\tau$$.

Now, the construction described in the answer is as follows:
Let $$U_0 \in \mathcal{B}_c$$ and $$K_0 := \overline{U_0}$$. Given $$(U_0, K_0), \ldots, (U_{n-1}, K_{n-1})$$, let $$U_n \in \mathcal{B}_c \setminus \left\{U_0, \ldots, U_{n-1}\right\}$$, let $$\mathcal{C}$$ be a finite subcover of $$K_{n-1}$$, and set $$K_n := \overline{U_n} \cup \overline{\bigcup \mathcal{C}}$$.

The problematic part is the cover $$\mathcal{C}$$ of $$K_{n-1}$$ (why a subcover?). Couldn't $$\mathcal{C}$$ actually be arbitrary? What if we choose $$\mathcal{C} = \{E\}$$?

## Dynamic programming: computing all products of \$ n-1 \$ factors when \$ n \$ factors are given

Let's assume we have an operator
$$\times : E^2 \to E$$
of which we only know that it is associative. Let's say a multiplication $$e \times f$$ always takes time $$M$$, for all $$e, f \in E$$.

Now we are given $$n$$ elements $$e_1, \ldots, e_n \in E$$, and we have the task of computing all $$n$$ products
$$\def\bigtimes{\mathop{\vcenter{\huge\times}}} p_j := \bigtimes_{\substack{i=1 \\ i \neq j}}^n e_i$$

Naively, computing all $$n$$ products takes $$O(n^2 M)$$.

A more sophisticated approach that runs in $$O(n^{3/2} M)$$ splits $$\bigtimes_{i=1}^n e_i$$ at arbitrary positions into $$k = \sqrt{n}$$ factors.
For each of the $$n$$ products, we then only have to recompute one factor and multiply the result together with the other $$k-1$$ factors.

What is the fastest algorithm for this problem, and what does it look like?
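For comparison, a classical prefix/suffix-product scheme uses only associativity and achieves $$O(nM)$$ with roughly $$3n$$ multiplications: precompute the prefixes $$e_1 \times \cdots \times e_{i-1}$$ and suffixes $$e_{i+1} \times \cdots \times e_n$$, then combine one of each. A sketch (string concatenation stands in for the abstract associative $$\times$$; names are mine):

```python
def all_but_one_products(elems, op):
    """p_j = elems[0] x ... x elems[j-1] x elems[j+1] x ... x elems[-1],
    using only associativity of `op` (about 3n applications; n >= 2 assumed)."""
    n = len(elems)
    prefix = [None] * n   # prefix[i] = e_0 x ... x e_{i-1}
    suffix = [None] * n   # suffix[i] = e_{i+1} x ... x e_{n-1}
    acc = elems[0]
    for i in range(1, n):
        prefix[i] = acc
        acc = op(acc, elems[i])
    acc = elems[n - 1]
    for i in range(n - 2, -1, -1):
        suffix[i] = acc
        acc = op(elems[i], acc)
    return [suffix[i] if prefix[i] is None
            else prefix[i] if suffix[i] is None
            else op(prefix[i], suffix[i])
            for i in range(n)]

products = all_but_one_products(list("abcd"), lambda a, b: a + b)
```

The non-commutative test operator (concatenation) matters: it confirms the factors are combined in the original left-to-right order, which a commutative example would not detect.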