integration – Proof that $\int_{\pi/6}^{\pi/2} \frac{x}{\sin x}\,dx \le \frac{\pi^2}{6}$

After some calculations I noticed that if I take $\frac{3}{2}x$, then integrating gives $\frac{3}{4}x^2 + C$, and
$$\int_{\pi/6}^{\pi/2} \frac{3}{2}x\,dx = \left[\frac{3}{4}x^2\right]_{\pi/6}^{\pi/2} = \frac{\pi^2}{6},$$ so I should show that
$$\frac{x}{\sin x} \le \frac{3x}{2}.$$

But the last inequality is not true on all of $[\pi/6, \pi/2]$…
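A quick numerical sanity check of the claimed bound (a minimal sketch, assuming NumPy and SciPy are available; the comparison at $x = \pi/6$ is where the proposed pointwise inequality breaks down):

```python
import numpy as np
from scipy.integrate import quad

# Left-hand side: integral of x / sin(x) over [pi/6, pi/2].
value, _ = quad(lambda x: x / np.sin(x), np.pi / 6, np.pi / 2)
print(value, np.pi**2 / 6)        # roughly 1.30 vs 1.64, so the bound is plausible

# The pointwise comparison x/sin(x) <= (3/2)x fails at the left endpoint:
x = np.pi / 6
print(x / np.sin(x), 1.5 * x)     # roughly 1.05 vs 0.79
```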

real analysis – For $g(x) = \int_0^\infty \frac{1}{x+y} f(y)\,dy$, show that $m\{x \in (0,\infty) : g(x) > \lambda\} \le \frac{1}{\lambda}\,\lVert f\rVert_{L^1}$

Q: For $x \in (0,\infty)$, let
\begin{align*}
g(x) &= \int_0^\infty \frac{1}{x+y} f(y)\,dy.
\end{align*}

Show that for $f \in L^1(0,\infty)$:
\begin{align*}
m\{x \in (0,\infty) : g(x) > \lambda\} &\le \frac{\lVert f\rVert_{L^1}}{\lambda}.
\end{align*}

My work: let

\begin{align*}
E_\lambda &= \{x \in (0,\infty) : g(x) > \lambda\}.
\end{align*}

By Chebyshev's inequality,

\begin{align*}
m(E_\lambda) &\le \frac{1}{\lambda} \int_0^\infty g(x)\,dx \\
&= \frac{1}{\lambda} \int_0^\infty \int_0^\infty \frac{1}{x+y} f(y)\,dy\,dx.
\end{align*}

I'm a little stuck here. Is there any integration technique to solve or simplify this integral?
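As a numerical sanity check of the statement itself (a sketch only; the choice $f(y) = e^{-y}$ with $\lVert f\rVert_{L^1} = 1$, the truncated grids, and the values of $\lambda$ are illustrative assumptions):

```python
import numpy as np

# Illustrative f with ||f||_{L^1} = 1 (an assumption made only for this check).
f = lambda y: np.exp(-y)

# Truncated grids standing in for (0, infinity).
y, dy = np.linspace(1e-6, 60.0, 60_000, retstep=True)
x, dx = np.linspace(1e-4, 20.0, 4_000, retstep=True)

# g(x) = int_0^inf f(y)/(x+y) dy, approximated by a Riemann sum on the y-grid.
g = np.array([np.sum(f(y) / (xi + y)) * dy for xi in x])

for lam in (0.25, 0.5, 1.0, 2.0):
    measure = dx * np.count_nonzero(g > lam)   # m{x : g(x) > lam}, truncated
    print(lam, measure, 1.0 / lam)             # claim: measure <= ||f||_1 / lam
```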

nt.number theory – About the determinants $\det\left[(i\pm j)\left(\frac{i\pm j}{p}\right)\right]_{1 \le i,j \le (p-1)/2}$

Let $p$ be an odd prime and define
$$D_p^+ := \det\left[(i+j)\left(\frac{i+j}{p}\right)\right]_{1 \le i,j \le (p-1)/2}$$
and $$D_p^- := \det\left[(i-j)\left(\frac{i-j}{p}\right)\right]_{1 \le i,j \le (p-1)/2},$$
where $\left(\frac{\cdot}{p}\right)$ is the Legendre symbol.

QUESTION. Is my conjecture below (formulated in 2013) true? How can one prove it?

Conjecture. For any prime $p > 5$ with $p \equiv 1 \pmod 4$, we have
$$\left(\frac{D_p^+}{p}\right) = 1 = \left(\frac{D_p^-}{p}\right).$$

Remark. (i) I have verified the conjecture for all primes $5 < p < 1200$ with $p \equiv 1 \pmod 4$.

(ii) For any prime $p \equiv 1 \pmod 4$, clearly $D_p^-$ is the determinant of a skew-symmetric matrix and hence is a perfect square by a result of Cayley. But I cannot show that $p \nmid D_p^-$ for all primes $p > 5$ with $p \equiv 1 \pmod 4$.

In my opinion, the conjecture does not look very difficult. Your comments toward its solution are welcome!
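For what it is worth, a short SymPy script along the lines of the verification in Remark (i), redone here for small primes $p \equiv 1 \pmod 4$ (a sketch; the range of $p$ is an arbitrary choice):

```python
from sympy import Matrix, isprime

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, returned as -1, 0 or 1."""
    a = int(a) % p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def D(p, sign):
    """D_p^+ (sign=+1) or D_p^- (sign=-1) as an exact integer determinant."""
    n = (p - 1) // 2
    M = Matrix(n, n, lambda i, j:
               ((i + 1) + sign * (j + 1)) * legendre((i + 1) + sign * (j + 1), p))
    return M.det()

for p in range(13, 70, 4):            # candidates congruent to 1 mod 4
    if isprime(p):
        print(p, legendre(D(p, +1), p), legendre(D(p, -1), p))   # conjecture: 1 1
```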

Show that $\Lambda(t) \le -\frac{C}{2}\Lambda'(t)$ if and only if $e^{2t/C}\Lambda(t)$ is non-increasing

Let $\Lambda \in C^1([0,\infty))$ and $C > 0$. Why does $$\Lambda(t) \le -\frac{C}{2}\Lambda'(t) \;\;\; \text{for all } t > 0$$ hold if and only if $e^{2t/C}\Lambda(t)$ is non-increasing in $t$?

Is this just an application of Grönwall's inequality?
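For reference, here is the short computation that the equivalence seems to reduce to (a sketch; it uses only the product rule and the fact that $e^{2t/C} > 0$, so Grönwall's inequality is not needed):
$$\frac{d}{dt}\left(e^{2t/C}\Lambda(t)\right) = e^{2t/C}\left(\Lambda'(t) + \frac{2}{C}\Lambda(t)\right) \le 0 \quad\Longleftrightarrow\quad \Lambda(t) \le -\frac{C}{2}\Lambda'(t),$$
and for a $C^1$ function, being non-increasing on $(0,\infty)$ is equivalent to its derivative being $\le 0$ there.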

real analysis: are there $\alpha > 0$, $\beta \in (0,1)$ such that $\dfrac{\sum_{k=1}^n a_k}{n} \le \alpha\,(a_1 \cdots a_n)^{1/n} + \beta \max_i(a_i)$ holds?

Let $a_1 \ge a_2 \ge \cdots \ge a_n \ge 0$ be given non-negative numbers. My question is this:

Are there some $\beta \in (0,1)$, $\alpha > 0$ such that $$\dfrac{a_1 + \cdots + a_n}{n} \le \alpha\,(a_1 \cdots a_n)^{1/n} + \beta a_1?$$

This article derives the above inequality with $\alpha = 1$ and $\beta = 1/n$, but with $\sum_{1 \le i < j \le n} |a_i - a_j|$ in place of $a_1$ multiplied by $\beta$. Also, its method does not seem applicable to my question. Is there any known result about this, or is it an open question? Any ideas regarding how to proceed? Thanks in advance.
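A brute-force numerical probe (a sketch; the candidate pairs $(\alpha, \beta)$ and the random test sequences are arbitrary choices, so this can only hunt for counterexamples, not prove anything):

```python
import numpy as np

rng = np.random.default_rng(0)

def violated(alpha, beta, a):
    """True if the non-negative sequence a violates the proposed inequality."""
    a = np.sort(a)[::-1]                                        # a_1 >= ... >= a_n
    gm = np.exp(np.mean(np.log(a))) if np.all(a > 0) else 0.0   # geometric mean
    return a.mean() > alpha * gm + beta * a[0] + 1e-12

for alpha, beta in [(1.0, 0.5), (2.0, 0.5), (0.5, 0.9)]:        # candidate pairs (assumed)
    bad = 0
    for _ in range(20_000):
        n = int(rng.integers(2, 12))
        a = rng.exponential(size=n)
        if rng.random() < 0.3:                                  # also try tails of zeros
            a[int(rng.integers(1, n)):] = 0.0
        bad += violated(alpha, beta, a)
    print(alpha, beta, "violations found:", bad)
```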

measure theory – $U$ exponentially distributed, $E[U] = 1$, $X(t) = \chi_{\{U \le t\}}$. Looking for the one-dimensional marginal and finite-dimensional distributions of $X$

Let $U$ be a random variable with exponential distribution, $\mathbb{E}[U] = 1$, and

$$X(t) = \chi_{\{U \le t\}}, \quad Y(t) = \chi_{\{U < t\}}, \quad t \in \mathbb{R}_+$$

($\chi$ denotes the indicator function).

Then $(X(t),\, t \ge 0)$ and $(Y(t),\, t \ge 0)$ are continuous-time stochastic processes with values in $\{0,1\}$.

I need to describe the one-dimensional marginal distributions of $X$ and $Y$, that is, the distributions of $X(t)$ and $Y(t)$ for $t \ge 0$. I also need to describe the finite-dimensional distributions of $X$ and $Y$. According to the exercise, it suffices to give the probabilities of a single point.

We have never done anything with stochastic processes before, so I really do not know what I should do; any help is appreciated!
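A sketch of the one-dimensional marginal, using only the CDF of the exponential distribution (rate $1$, since $\mathbb{E}[U] = 1$):
$$P(X(t) = 1) = P(U \le t) = 1 - e^{-t}, \qquad P(X(t) = 0) = e^{-t}, \qquad t \ge 0,$$
and since $U$ has a continuous distribution, $P(U < t) = P(U \le t)$, so $Y(t)$ has the same one-dimensional marginal.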

Does $\pi(x) \ge \log\log x$ hold for $2 \le x \le e^{e^3} < 5.3 \times 10^8$?

The book Theory of Numbers by G. H. Hardy et al. proves $\pi(x) \ge \log\log x$ for $x > e^{e^3}$. Is there a way to prove that it also holds for $2 \le x \le e^{e^3}$, or otherwise (in the worst case) some reliable source to verify this numerically?
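A partial numerical check (a sketch using SymPy's primepi; it does not by itself cover the whole range, but it indicates how much slack the bound has):

```python
import math
from sympy import primepi

# log(log x) is increasing, so on [2, e^{e^3}] its maximum is at the right endpoint:
print(math.log(math.log(math.exp(math.exp(3)))))     # = 3 (up to rounding)

# Direct check at small integer values of x:
for x in range(2, 101):
    assert primepi(x) >= math.log(math.log(x)), x

# Since pi(x) >= 3 already for x >= 5 and log(log x) <= 3 on the whole range,
# the remaining range only needs pi(x) >= 3; this is not a proof as written,
# just an indication of the slack.
print("checked x = 2..100")
```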

partitions such that each number $k$ ($1 \le k \le n$) appears at most $k$ times

Let $\lambda$ be a partition of $n$ such that each number $k$ ($1 \le k \le n$) appears at most $k$ times in $\lambda$.

For example: $\lambda = 6+6+6+6+3+2+2+1$.

Is there a special name for this type of partition?

In addition, it is easy to write down the generating function of these partitions, just as for other types of restricted partitions; see the sketch below.
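For instance, a short script (a sketch; the cutoff $N = 30$ is an arbitrary choice) that expands the generating function $\prod_{k \ge 1}\bigl(1 + x^k + x^{2k} + \cdots + x^{k\cdot k}\bigr)$ and prints the number of such partitions for $n = 1, \dots, N$:

```python
# Count partitions of n in which the part k appears at most k times, by
# expanding prod_{k>=1} (1 + x^k + x^{2k} + ... + x^{k*k}) up to degree N.
N = 30                                     # arbitrary cutoff for illustration
coeffs = [0] * (N + 1)
coeffs[0] = 1
for k in range(1, N + 1):
    new = [0] * (N + 1)
    for deg, c in enumerate(coeffs):
        if c:
            for j in range(k + 1):         # the part k is used j times
                if deg + j * k <= N:
                    new[deg + j * k] += c
    coeffs = new

print(coeffs[1:])                          # counts for n = 1, 2, ..., N
```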

If possible, please share some combinatorial significance of these partitions.

Thank you very much for your time.

Have a nice day.

functional analysis: how to find the desired $M$ such that for every $\alpha \in R^3$, $|T(\alpha)| \le M|\alpha|$

Let $R^3$ denote Euclidean space and $|\cdot|$ denote the usual norm. For a matrix $$T = \begin{pmatrix}
a_1 & a_2 & a_3 \\
a_4 & a_5 & a_6 \\
a_7 & a_8 & a_9
\end{pmatrix}$$
how can we find the desired $M$ such that for each $\alpha \in R^3$, $$|T(\alpha)| \le M|\alpha|\,?$$
(a) Find $e_1, e_2, e_3$ which form a basis for $R^3$; then we can have $|T(e_i)| \le A_i|e_i|$. Let $M_1 = \max\{A_1, A_2, A_3\}$.

(b) Find $e_1, e_2, e_3$ that form an orthogonal basis of $R^3$; then we can have $|T(e_i)| \le A_i|e_i|$. Let $M_2 = \max\{A_1, A_2, A_3\}$.

(c) Let $t_1, t_2, t_3$ denote the eigenvalues of $T$, and let $M_3 = \max\{t_1, t_2, t_3\}$.
Can we prove that $M_1$, $M_2$, or $M_3$ is the desired $M$ we are looking for?

Then, in a more general setting: let $H_1, H_2$ denote two normed spaces and let $T$ be an operator from $H_1$ to $H_2$. With a slight modification, can we prove that $M_1$, $M_2$, or $M_3$ is the desired $M$ we are looking for?
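A quick numerical experiment (a sketch; the random $3 \times 3$ matrix is an arbitrary illustration) comparing these candidates with the true operator norm, i.e. the largest singular value:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=(3, 3))                   # arbitrary example matrix

op_norm = np.linalg.norm(T, 2)                # largest singular value: the optimal M
e = np.eye(3)                                 # standard (orthonormal) basis
M12 = max(np.linalg.norm(T @ e[:, i]) for i in range(3))   # max_i |T e_i|
M3 = max(abs(np.linalg.eigvals(T)))           # largest eigenvalue in modulus

print("operator norm:", op_norm)
print("max |T e_i|  :", M12)
print("max |eigval| :", M3)
# For a generic non-symmetric T both candidates come out smaller than the
# operator norm, so on their own they need not give a valid M.
```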