real analysis – Khintchine’s Inequality variant

Let $f_n\in L_\infty((0,1))$ and let $(r_n)_n$ be the Rademacher sequence. For each $n\in\mathbb{N}$ we define the function $g_n:(0,1)^2\to\mathbb{R}$ by $g_n(x_1,x_2)=r_n(x_1)f_n(x_2)$. Show that for every $1\le p<\infty$ there exists a constant $c(p)$ such that:
$$\Big\|\sum_{n=1}^k g_n\Big\|_{L_p((0,1)^2)}\ \geq\ c(p)\,\Big\|\Big(\sum_{n=1}^k f_n^2\Big)^{\frac{1}{2}}\Big\|_{L_p((0,1))}.$$

I can see that this is essentially Khintchine’s inequality with the $f_n$’s replacing the constants, so I suspect that a proof might mimic the original one, but I am not certain how to proceed. Also, is there any way to estimate $c(p)$?
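As a numerical sanity check (not a proof), for small $k$ one can enumerate every Rademacher sign pattern in the $x_1$ variable and discretise $x_2$ on a grid. The candidate constant below is the classical Khintchine constant $A_p$ ($A_1 = 1/\sqrt{2}$, $A_p = 1$ for $p \ge 2$), which is my guess for what a proof mimicking the classical one should yield; the grid size, the random choice of the $f_n$, and the helper `check` are all illustrative assumptions:

```python
# Sanity check of ||sum g_n||_{L_p((0,1)^2)} >= c(p) ||(sum f_n^2)^(1/2)||_{L_p((0,1))}:
# enumerate all 2^k Rademacher sign patterns (the x_1 average is then exact)
# and discretise the x_2 integral on a uniform grid.
import itertools
import numpy as np

def check(p, c_p, k=4, m=200, seed=0):
    rng = np.random.default_rng(seed)
    f = rng.standard_normal((k, m))          # values f_n(x_2) on an m-point grid
    # LHS^p: average over sign patterns of the x_2-average of |sum eps_n f_n|^p
    lhs_p = np.mean([np.mean(np.abs(np.array(eps) @ f) ** p)
                     for eps in itertools.product([-1.0, 1.0], repeat=k)])
    rhs = np.mean(((f ** 2).sum(axis=0)) ** (p / 2)) ** (1 / p)
    return lhs_p ** (1 / p), c_p * rhs

lhs, rhs = check(p=1, c_p=1 / np.sqrt(2))    # guessed c(1) = A_1 = 1/sqrt(2)
assert lhs >= rhs
lhs, rhs = check(p=3, c_p=1.0)               # for p >= 2, c(p) = 1 should work
assert lhs >= rhs
```

The check is exact in the $x_1$ variable, since the average over all sign patterns is the true Rademacher expectation; only the $x_2$ integral is discretised.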

How to find the threshold of hash inequality at a given time?

For a miner to mine a block in Bitcoin, he must find a nonce such that the hash of the block is $\le$ a threshold. At a given time, let the total hash rate of the network be $A$ hashes/sec and the mining rate be $f$ blocks/sec. Is there any way to calculate this threshold from these parameters?
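A back-of-the-envelope sketch of my understanding (not the exact consensus rule; Bitcoin encodes the actual target compactly in the block header): a SHA-256 hash is essentially a uniform draw on $[0, 2^{256})$, so a single attempt succeeds with probability about $T/2^{256}$ for threshold $T$. With $A$ hashes/sec the expected block rate is $f = A \cdot T / 2^{256}$, hence $T \approx f \cdot 2^{256} / A$. The function name and example figures below are illustrative:

```python
# Threshold estimate from network hash rate A (hashes/sec) and block rate f
# (blocks/sec), assuming each hash is an independent uniform draw on [0, 2^256).
def threshold(hashrate_A, blocks_per_sec_f):
    """Approximate target threshold: success probability per hash = T / 2^256."""
    return int(blocks_per_sec_f * float(1 << 256) / hashrate_A)

# Illustrative numbers: ~1.2e20 hashes/sec, one block every 600 seconds.
T = threshold(1.2e20, 1 / 600)
assert 0 < T < 1 << 256
```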

Inequality problem – Mathematics Stack Exchange

What are the values of $B_1$, $B_2$, and $B_s$ so that the following inequality is satisfied for any values of $U_1$ and $U_2$, where $B_1, B_2, B_s, U_1, U_2 \in \mathbb{R}^+$?

$ B_1 U_1^2 + B_2 U_2^2 < B_s (U_1-U_2)^2 $

reference request – a square root inequality for symmetric matrices?

In this post all my matrices will be $\mathbb{R}^{N\times N}$ symmetric positive semi-definite (psd), but I am also interested in the Hermitian case. In particular the square root $A^{\frac 12}$ of a psd matrix $A$ is defined unambiguously via the spectral theorem.
Also, I use the conventional Frobenius scalar product and norm.

Question: is the following inequality true
$$\|A^{\frac 12}-B^{\frac 12}\|^2\leq C_N\,\|A-B\|\quad???$$

for all psd matrices $A,B$ and a positive constant $C_N$ depending on the dimension only.

For non-negative scalars (i.e. $N=1$) this amounts to asking whether $|\sqrt a-\sqrt b|^2\leq C|a-b|$, which of course is true due to $|\sqrt a-\sqrt b|^2=|\sqrt a-\sqrt b|\times |\sqrt a-\sqrt b|\leq |\sqrt a-\sqrt b| \times |\sqrt a+\sqrt b|=|a-b|$.

If $A$ and $B$ commute then by simultaneous diagonalisation we can assume that $A=\operatorname{diag}(a_i)$ and $B=\operatorname{diag}(b_i)$, hence from the scalar case
$$\|A^{\frac 12}-B^{\frac 12}\|^2
=\sum\limits_{i=1}^N |\sqrt{a_i}-\sqrt{b_i}|^2
\leq \sum\limits_{i=1}^N |a_i-b_i|
\leq \sqrt N \left(\sum\limits_{i=1}^N |a_i-b_i|^2\right)^{\frac 12}=\sqrt N\,\|A-B\|$$

Some hidden convexity seems to be involved, but in the general (non-commuting) case I am embarrassingly not even sure that the statement holds true, and I cannot even get started. Since I am pretty sure that this is either blatantly false or otherwise well-known and referenced, I would like to avoid wasting more time reinventing the wheel than I already have.
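For what it's worth, random experiments are consistent with $C_N=\sqrt N$, the constant from the commuting case. If I remember correctly this should follow from the Powers–Størmer inequality $\|A^{1/2}-B^{1/2}\|^2\le\|A-B\|_{S_1}$ (trace norm) combined with $\|\cdot\|_{S_1}\le\sqrt N\,\|\cdot\|$, but please treat that as a pointer to double-check rather than a certainty. A minimal numpy sketch (evidence, not a proof; dimension and sample count are arbitrary):

```python
# Test ||A^(1/2) - B^(1/2)||_F^2 <= sqrt(N) ||A - B||_F on random psd matrices.
import numpy as np

def sqrtm_psd(A):
    """Square root of a symmetric psd matrix via the spectral theorem."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.T

rng = np.random.default_rng(1)
N = 6
worst = 0.0
for _ in range(1000):
    X, Y = rng.standard_normal((2, N, N))
    A, B = X @ X.T, Y @ Y.T                                   # random psd pair
    lhs = np.linalg.norm(sqrtm_psd(A) - sqrtm_psd(B), "fro") ** 2
    rhs = np.sqrt(N) * np.linalg.norm(A - B, "fro")
    worst = max(worst, lhs / rhs)
assert worst < 1.0                                            # no counterexample found
```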

This post and that post seem to be related but do not quite get me where I want (unless I missed something?)

Context: this question arises for technical purposes in a problem I’m currently working on, related to the Bures distance between psd matrices, defined as
$$d(A,B)=\min\limits_U \|A^{\frac 12}-B^{\frac 12}U\|$$

(the infimum runs over unitary matrices $UU^t=\mathrm{Id}$).

inequality – Upper bound on $x$ where $2^x \leq (ax)^4$

We have some constant $a > 1$ and we know the following inequality:
$$2^x \leq (ax)^4$$

And need to find an upper bound on $x$.

I thought of trying to calculate where $2^x$ intersects $(ax)^4$; the larger intersection would then be an upper bound for $x$.
So this is what I did: I called the value where they intersect $t$ and solved:

$$2^t = (at)^4$$
$$t\ln 2 = 4\ln(at)$$

And therefore:

$$x\leq \max\left\{\frac{e^{-W_0\left(-\frac{\ln 2}{4a}\right)}}{a},\ \frac{e^{-W_{-1}\left(-\frac{\ln 2}{4a}\right)}}{a}\right\}$$

But I don’t know how to continue from here. How can I bound this expression with $W$?
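A numerical check of the expression (assuming SciPy's `scipy.special.lambertw`; the choice $a=2$ and the tolerances are arbitrary): the $W_0$ and $W_{-1}$ branches give exactly the two crossing points of $2^t$ and $(at)^4$, and past the larger one the exponential dominates, so the larger one is indeed the upper bound on $x$:

```python
# Verify that t = exp(-W(-ln2/(4a)))/a solves 2^t = (a t)^4 on both real branches.
import numpy as np
from scipy.special import lambertw

a = 2.0
c = -np.log(2) / (4 * a)                    # > -1/e for every a > 1, so both branches are real
t0 = np.exp(-lambertw(c, 0).real) / a       # smaller crossing point
t1 = np.exp(-lambertw(c, -1).real) / a      # larger crossing point = upper bound on x
for t in (t0, t1):
    assert abs(2 ** t - (a * t) ** 4) < 1e-6 * (a * t) ** 4
x = 1.1 * t1                                # beyond t1 the exponential wins
assert 2 ** x > (a * x) ** 4
```

For a rougher closed form: the larger root satisfies $t\ln 2 = 4\ln a + 4\ln t$, so $t_1 \sim \frac{4}{\ln 2}\ln a$ as $a\to\infty$.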

linear algebra – Inequality for the trace of the $n$-th power of a semi-definite matrix with trace smaller than 1

Let $M$ be an $m\times m$ positive semi-definite matrix with $\operatorname{tr}(M)\le 1$. Is there some non-trivial inequality of the type $\operatorname{tr}(M^n)\ge f(\operatorname{tr}(M^i), \operatorname{tr}(M^j))$ with $f$ some function and $i,j\le n$?

And what if $M$ is not positive semi-definite but just satisfies $\operatorname{tr}(M)\le 1$?

Just to give some context, I am trying to compute a sequence $F_n = \operatorname{tr}(M^n) \in (0,1)$ and I expect it to be monotonically decreasing in $n$, but the matrices are too big to find exact results (either by hand or numerically), so I’m trying to find some bound on how this scales with $n$. An inequality of that type would be a good start, since I can compute $F_n$ for small $n$ by other means (boring physics stuff :P).

Any hint would be helpful. Thanks!
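Not a full answer, but two elementary observations, sketched numerically below (the random matrix and its normalisation are just illustrative). First, psd plus $\operatorname{tr}(M)\le 1$ forces every eigenvalue into $[0,1]$, so $F_n$ is automatically non-increasing. Second, Cauchy–Schwarz applied to the eigenvalues gives $\operatorname{tr}(M^n)^2 \le \operatorname{tr}(M^{n-1})\operatorname{tr}(M^{n+1})$, i.e. $\operatorname{tr}(M^{n+1}) \ge \operatorname{tr}(M^n)^2/\operatorname{tr}(M^{n-1})$, which has the requested form. Without psd-ness both can fail (e.g. eigenvalues $2$ and $-1.5$ give $\operatorname{tr}(M)=0.5$ but $\operatorname{tr}(M^2)=6.25$).

```python
# Check monotonicity of F_n = tr(M^n) and the Cauchy-Schwarz bound
# tr(M^n)^2 <= tr(M^(n-1)) tr(M^(n+1)) for a random psd M with tr(M) < 1.
import numpy as np

rng = np.random.default_rng(0)
m = 8
X = rng.standard_normal((m, m))
M = (X @ X.T) / (1.5 * np.trace(X @ X.T))   # psd with tr(M) = 2/3 < 1
F = [np.trace(np.linalg.matrix_power(M, n)) for n in range(1, 8)]
assert all(F[i + 1] <= F[i] for i in range(len(F) - 1))           # non-increasing
assert all(F[i + 1] ** 2 <= F[i] * F[i + 2] * (1 + 1e-12)
           for i in range(len(F) - 2))                            # log-convexity
```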

math quiz – Jensen's inequality question

Below is the picture of the Jensen's inequality question.

[image: an application of Jensen's inequality, from the book Challenge and Thrill of Pre-College Mathematics]

Below is my approach to proving the inequality.

[image: my approach ... sorry for the bad handwriting!]

Can anyone verify whether my proof is okay? I feel like this is a roundabout proof.

If there is an error, a detailed explanation would be helpful and appreciated.

Thank you

linear algebra – An inequality on the Gram matrix

Let $\{x_1,\dots,x_r,x_{r+1}\}$ be $r+1$ linearly independent vectors in $\mathbb{R}^n$. I conjecture that the following inequality is true:

$$\det \operatorname{Gram}\{x_1,\dots,x_{r+1}\}
\le \langle x_{r+1}, x_{r+1}\rangle \cdot
\det \operatorname{Gram}\{x_1,\dots,x_r\}$$

I also conjecture that equality holds if $x_{r+1}$ is orthogonal to the subspace spanned by $\{x_1,\dots,x_r\}$.

Recall that a Gram matrix is the matrix whose entries are the inner products $\langle x_i, x_j\rangle$.

When $r=1$ this is obvious. For higher dimensions, I think this is true due to the following geometric insight: the squared volume of the parallelepiped spanned by $\{x_1,\dots,x_r\}$ equals the determinant of the above Gram matrix. Adding a new edge to the original parallelepiped, and thereby increasing the dimension of the object by one, the new volume is maximized when this new edge is perpendicular to the original parallelepiped; otherwise it is smaller. But I don't know how to prove this rigorously.
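Random experiments are consistent with the conjecture; a minimal numpy sketch (evidence only, with arbitrary choices $n=7$, $r=4$), using the fact that the Gram matrix of the rows of a matrix $X$ is $XX^T$, and matching the geometric picture $\det\operatorname{Gram}_{r+1} = \det\operatorname{Gram}_r \cdot d^2$ with $d$ the distance from $x_{r+1}$ to the span:

```python
# Test det Gram(x_1..x_{r+1}) <= <x_{r+1}, x_{r+1}> det Gram(x_1..x_r)
# on random vectors (taken as the rows of X).
import numpy as np

def gram_det(X):
    """Determinant of the Gram matrix of the rows of X."""
    return np.linalg.det(X @ X.T)

rng = np.random.default_rng(0)
n, r = 7, 4
for _ in range(1000):
    X = rng.standard_normal((r + 1, n))             # rows x_1, ..., x_{r+1}
    lhs = gram_det(X)
    rhs = (X[-1] @ X[-1]) * gram_det(X[:-1])
    assert lhs <= rhs * (1 + 1e-9)
```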

entropy – A difficult mutual information inequality

Let $X_0, X_1, X_2$ be three independent, uniformly distributed bits, and let $B$ be a random variable such that $I(X_0:B)=0$, $I(X_1:B)=0$, $I(X_2:B)=0$, and $I(X_1\oplus X_2:B)=0$. I have to show that $I(X_0,X_1,X_2:B)\leq 1$.

(where $I(M:N)$ is the Shannon mutual information.)

I can show that $I(X_0,X_1,X_2:B)\leq 2$ by using the mutual information chain rule: $I(X_0,X_1,X_2:B) = I(X_0:B) + I(X_1,X_2:B \mid X_0) = H(X_1,X_2 \mid X_0) - H(X_1,X_2 \mid B,X_0) = 2 - H(X_1,X_2 \mid B,X_0) \leq 2$.

(where $H(\cdot)$ is the Shannon entropy.)

But I can't get any further; please help.
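Not the requested proof, but a consistency check that may help: the choice $B = X_0 \oplus X_1$ (my example) satisfies all four constraints and achieves $I(X_0,X_1,X_2:B) = 1$, so the claimed bound, if true, is tight. A small Python verification over the uniform joint distribution, with a hand-rolled helper `mi`:

```python
# Verify the four constraints and the value of I(X0,X1,X2 : B) for B = X0 XOR X1.
import itertools
import math
from collections import Counter

def mi(pairs):
    """Mutual information in bits of a list of equally likely (u, v) outcomes."""
    n = len(pairs)
    pj = Counter(pairs)                      # joint counts
    pu = Counter(u for u, _ in pairs)        # marginal counts of u
    pv = Counter(v for _, v in pairs)        # marginal counts of v
    return sum(c / n * math.log2(c * n / (pu[u] * pv[v]))
               for (u, v), c in pj.items())

atoms = list(itertools.product([0, 1], repeat=3))       # uniform (X0, X1, X2)
B = {x: x[0] ^ x[1] for x in atoms}
assert abs(mi([(x[0], B[x]) for x in atoms])) < 1e-12           # I(X0:B) = 0
assert abs(mi([(x[1], B[x]) for x in atoms])) < 1e-12           # I(X1:B) = 0
assert abs(mi([(x[2], B[x]) for x in atoms])) < 1e-12           # I(X2:B) = 0
assert abs(mi([(x[1] ^ x[2], B[x]) for x in atoms])) < 1e-12    # I(X1^X2:B) = 0
assert abs(mi([(x, B[x]) for x in atoms]) - 1.0) < 1e-12        # I(X0,X1,X2:B) = 1
```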

inequality – Let $a$ be a real number with $a>1$ and let $x, y \ge a$; find a number $c$ such that $\left|\frac{x+y}{(x^2-1)(y^2-1)}\right| \le c$

I'm trying to do a uniform continuity proof and I'm stuck at this step of the scratch work.
Let $f(x) = \frac{1}{x^2-1}$; show that $f(x)$ is uniformly continuous on $(a,\infty)$ where $a>1$.
So far I have, for $x, y \in (a,\infty)$:
$$\left|\frac{1}{x^2-1} - \frac{1}{y^2-1}\right| = \left|\frac{y^2-x^2}{(x^2-1)(y^2-1)}\right| = |x-y|\left|\frac{x+y}{(x^2-1)(y^2-1)}\right|.$$
So I'm trying to get a constant in terms of $a$ that I can use for my $\delta$ in the proof. Any advice?
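One possible route (my sketch; the constant is surely not optimal): split $\frac{x+y}{(x^2-1)(y^2-1)} = \frac{x}{x^2-1}\cdot\frac{1}{y^2-1} + \frac{y}{y^2-1}\cdot\frac{1}{x^2-1}$. Both $t \mapsto \frac{t}{t^2-1}$ and $t \mapsto \frac{1}{t^2-1}$ are decreasing on $(1,\infty)$, so each factor is largest at $t=a$, giving the candidate constant $c = \frac{2a}{(a^2-1)^2}$. A quick numerical spot-check (the value $a=1.5$ and the sampling are arbitrary):

```python
# Check that (x+y)/((x^2-1)(y^2-1)) <= 2a/(a^2-1)^2 for random x, y > a.
import numpy as np

a = 1.5
c = 2 * a / (a ** 2 - 1) ** 2
rng = np.random.default_rng(0)
x = a + rng.exponential(size=10_000)        # random points in (a, infinity)
y = a + rng.exponential(size=10_000)
vals = (x + y) / ((x ** 2 - 1) * (y ** 2 - 1))
assert vals.max() <= c                      # the supremum c is approached as x, y -> a
```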