Solving a Quadratic Congruence Modulo a Prime

I am currently going through some questions to prep for a test on quadratic congruences when I came across this: $x^2 \equiv 7 \pmod{1009}$. I have already proven that a solution for $x$ exists by verifying that the Legendre symbol is equal to $1$. How do I solve for $x$?
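
Since $1009 \equiv 1 \pmod 8$ (note $1008 = 2^4 \cdot 63$), the shortcut $x \equiv 7^{(p+1)/4} \pmod p$ available when $p \equiv 3 \pmod 4$ does not apply, and the Tonelli–Shanks algorithm is the usual route. As a quick check, a minimal sketch using sympy:

    from sympy.ntheory.residue_ntheory import sqrt_mod

    # Both square roots of 7 modulo the prime 1009
    print(sqrt_mod(7, 1009, all_roots=True))  # returns x and 1009 - x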

stochastic processes – The quadratic variation of the following process…

Let $B$ denote a Brownian motion, and let a stochastic process $X$ be defined as follows: $$X_{t}=e^{3B_{t}}+\int_{0}^{t}B_{s}\,ds.$$

What is the quadratic variation of $X^2$?

I got the following result: $$\int_{0}^{t}36e^{12B_{s}}\,ds+2\int_{0}^{t}B_{s}\,ds\cdot\int_{0}^{t}36e^{9B_{s}}\,ds.$$

Is this result correct? I calculated $X_{t}^{2}$, then I tried to find the quadratic variation.
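
For comparison, a sketch of the standard Itô computation (worth double-checking against your steps): since $d(e^{3B_t}) = 3e^{3B_t}\,dB_t + \frac{9}{2}e^{3B_t}\,dt$ and $\int_0^t B_s\,ds$ has finite variation,
$$dX_t = 3e^{3B_t}\,dB_t + \Big(\frac{9}{2}e^{3B_t} + B_t\Big)\,dt, \qquad d[X]_t = 9e^{6B_t}\,dt.$$
Since $d(X_t^2) = 2X_t\,dX_t + d[X]_t$, the martingale part of $X^2$ has coefficient $6X_te^{3B_t}$, so
$$[X^2]_t = \int_0^t 36\,X_s^2\,e^{6B_s}\,ds,$$
which does not split into a product of separate time integrals.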

linear algebra – Matrix equation with quadratic form

Let $X,Y\in\mathbb{R}^{n\times k}$, $\Lambda(\alpha) = \text{diag}(\alpha)$ with $\alpha\in\mathbb{R}^k$, and let $c,d\in\mathbb{R}^k$ have positive entries. Let

$$A_i(\alpha) = (X\Lambda(\alpha) X^T)^{-1}x_ix_i^T(X\Lambda(\alpha) X^T)^{-1},$$ $$B_i(\alpha) = (Y\Lambda(\alpha) Y^T)^{-1}y_iy_i^T(Y\Lambda(\alpha) Y^T)^{-1},$$ where $x_i, y_i\in\mathbb{R}^n$ are the $i$-th columns of matrices $X$ and $Y$, respectively.

Is there any efficient way to solve the following system of equations for $alpha$?

$$
c_i\,(x_i^TA_i(\alpha)x_i) = d_i\,(y_i^TB_i(\alpha)y_i), \quad i = 1,\dots,k.
$$
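
One simplification may help: since $A_i(\alpha) = M x_i x_i^T M$ with $M = (X\Lambda(\alpha)X^T)^{-1}$, the quadratic form collapses to $x_i^T A_i(\alpha) x_i = (x_i^T M x_i)^2$, and similarly for $B_i$. With that, a generic least-squares solver is easy to try; a minimal sketch (Python/scipy, with made-up shapes and data as placeholders):

    import numpy as np
    from scipy.optimize import least_squares

    # Made-up shapes and data for illustration; substitute your own X, Y, c, d
    n, k = 5, 8
    rng = np.random.default_rng(0)
    X, Y = rng.standard_normal((n, k)), rng.standard_normal((n, k))
    c, d = rng.uniform(1, 2, size=k), rng.uniform(1, 2, size=k)

    def residuals(alpha):
        Mx = np.linalg.inv(X @ np.diag(alpha) @ X.T)  # (X Lambda(alpha) X^T)^{-1}
        My = np.linalg.inv(Y @ np.diag(alpha) @ Y.T)
        qx = np.einsum('ij,jk,ki->i', X.T, Mx, X)     # x_i^T Mx x_i for each i
        qy = np.einsum('ij,jk,ki->i', Y.T, My, Y)
        return c * qx**2 - d * qy**2                  # want all residuals = 0

    sol = least_squares(residuals, x0=np.ones(k), bounds=(1e-9, np.inf))
    print(sol.x, np.abs(sol.fun).max())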

Quadratic Programming with quadratic constraints in R

I want to solve the quadratic optimization problem

minimize t(x) %*% Q %*% x + t(mu) %*% x
s.t. t(x) %*% A %*% x >= b

in R. Unfortunately, quadprog only allows for linear constraints.
Is there an alternative?
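
One option is a general nonlinear solver: in R, packages such as nloptr or alabama accept nonlinear (hence quadratic) constraints. Note also that $x^T A x \ge b$ is a non-convex constraint for positive semidefinite $A$, so only local solutions are guaranteed. For illustration, a minimal sketch of the same problem with scipy in Python, using made-up data:

    import numpy as np
    from scipy.optimize import minimize

    # Toy data (assumptions; substitute your own Q, mu, A, b)
    Q = np.array([[2.0, 0.5], [0.5, 1.0]])
    mu = np.array([1.0, -1.0])
    A = np.eye(2)
    b = 1.0

    obj = lambda x: x @ Q @ x + mu @ x
    cons = [{'type': 'ineq', 'fun': lambda x: x @ A @ x - b}]  # x^T A x >= b iff fun >= 0

    res = minimize(obj, x0=np.array([1.0, 1.0]), method='SLSQP', constraints=cons)
    print(res.x, res.fun)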

pr.probability – Proof check: Using the Hanson-Wright inequality to concentrate a quadratic form $y^\top A y$ where both $y$ and $A$ are random but independent

Disclaimer. I don't know if this is the right venue to ask this. I'm working out a bigger proof, and somewhere down the line, I've used an argument I'm not quite sure about.


Let $n$ be a large positive integer and let $X$ be a random $n \times n$ matrix such that all the singular values of $X$ are at most $1$ (i.e., $\|X\|_{\mathrm{op}} \le 1$) with probability $1-o(1)$. For concreteness, one may consider $X \sim N(0,s^2/n)^{n \times n}$ for an appropriate absolute constant $s>0$. Let $y$ be a random vector in $\mathbb{R}^n$, with iid coordinates uniformly distributed on $\{\pm 1\}$, and independent of $X$.

Goal. I wish to argue that $\|Xy\|_2=\Omega(\sqrt{n})$ w.p. $1-o(1)$.

Here is my argument. Suppose $\|X\|_{\mathrm{op}} \le 1$ except on an event $\mathcal E$ of probability $o(1)$. Let $A := XX^\top$. Conditioning on $\mathcal E^c$ and applying the Hanson-Wright inequality, we know that there exists an absolute constant $K>0$ such that for all $t \ge 0$,

$$
\mathbb P\Big(\big|y^\top A y - \mathbb E (y^\top Ay \mid \mathcal E^c)\big| \ge t \,\Big|\, \mathcal E^c\Big) \le 2\exp\Big(-\Big(\frac{t^2}{K^4\|A\|_F^2}\land \frac{t}{K^2\|A\|_{\mathrm{op}}}\Big)\Big). \tag{1}
$$

Now, one does the following computations

  • $\mathbb E(y^\top A y \mid \mathcal E^c)=\operatorname{trace}(A) = \operatorname{trace}(XX^\top) = \|X\|_F^2 \le n\|X\|_{\mathrm{op}}^2 \le n$, since $\|X\|_{\mathrm{op}} \le 1$ by assumption.
  • $\|A\|_F^2=\operatorname{trace}(XX^\top XX^\top) = \operatorname{trace}((X^\top X)^2) = \sum_{i=1}^{n} \lambda_i(X^\top X)^2 = \sum_{i=1}^{n} \sigma_i(X)^4 \le n\|X\|_{\mathrm{op}}^4 \le n$.
  • $\|A\|_{\mathrm{op}} \le \|A\|_F \le \sqrt{n}$.

Taking $t=n/2$, the RHS of (1) simplifies to $2\exp\big(-\big(\frac{n}{4K^4} \land \frac{\sqrt{n}}{2K^2}\big)\big)=e^{-\Omega(\sqrt{n})} = o(1)$. Putting things together then gives

$$
\begin{split}
\mathbb P(\|Xy\|_2 \ge \sqrt{n/2}) &= \mathbb P(y^\top A y - n \ge -n/2) \ge \mathbb P(y^\top A y - \mathbb E(y^\top A y \mid \mathcal E^c) \le n/2)\\
&\ge \mathbb P(|y^\top A y - \mathbb E (y^\top Ay \mid \mathcal E^c)| \le n/2 \mid \mathcal E^c) \\
&\ge \mathbb P(\mathcal E^c)\,\mathbb P(|y^\top A y - \mathbb E (y^\top Ay \mid \mathcal E^c)| \le n/2 \mid \mathcal E^c)\\
&= (1 - o(1))\cdot(1 - o(1)) = 1 - o(1).
\end{split}
$$

We conclude that $\|X^\top y\|_2 = \Omega(\sqrt{n})$ w.p. $1-o(1)$.

Question. Is my above use of the Hanson-Wright inequality correct?
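
Not a substitute for checking the argument, but the conclusion itself is easy to sanity-check by simulation; a small sketch (the value $s = 0.5$ is an arbitrary choice):

    import numpy as np

    # Monte-Carlo sanity check of the conclusion: with X having iid N(0, s^2/n)
    # entries and y Rademacher, ||X^T y||_2 / sqrt(n) should stay bounded away from 0
    rng = np.random.default_rng(0)
    n, s, trials = 1000, 0.5, 50
    ratios = []
    for _ in range(trials):
        X = (s / np.sqrt(n)) * rng.standard_normal((n, n))
        y = rng.choice([-1.0, 1.0], size=n)
        ratios.append(np.linalg.norm(X.T @ y) / np.sqrt(n))
    print(min(ratios), max(ratios))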

matrices – Solving two quadratic matrix equations

Given $10 \times 10$ matrices $A$ and $B$, I would like to find a $10 \times 10$ matrix $X$ such that

$$A = X B X^T \tag{1}$$

$$B = X A X^T \tag{2}$$

How can I solve this? If there is a way to solve only equation (1) or (2), that is OK as well.

If anyone can already solve this and show me the way, that's fine too.

Matrix $A$:

   ((0.125+0.03125i,0,0,0,0,-0.0625,-0.0625,-0.03125,0,0),      
    (0,0.0625,0,0,0,0,-0.0625,0,0,0),
    (0,0,0.0625,0,0,0,0,-0.0625,0,0),
    (0,0,0,0.15625,0,0,-0.03125i,0,-0.0625,-0.0625),
    (0,0,0,0,0.0625,0,0,0,0,-0.0625),
    (-0.0625,0,0,0,0,0.0625,0,0,0,0),
    (-0.0625,-0.0625,0,-0.03125i,0,0,0.125+0.03125i,0,0,0),
    (-0.03125,0,-0.0625,0,0,0,0,0.15625,0,0),
    (0,0,0,-0.0625,0,0,0,0,0.0625,0),
    (0,0,0,-0.0625,-0.0625,0,0,0,0,0.125))

Matrix $B$:

   ((0.15625,0,0,0,0,-0.0625,-0.0625,-0.03125i,0,0),        
    (0,0.0625,0,0,0,0,-0.0625,0,0,0),
    (0,0,0.0625,0,0,0,0,-0.0625,0,0),
    (0,0,0,0.125+0.03125i,0,0,-0.03125,0,-0.0625,-0.0625),
    (0,0,0,0,0.0625,0,0,0,0,-0.0625),
    (-0.0625,0,0,0,0,0.0625,0,0,0,0),
    (-0.0625,-0.0625,0,-0.03125,0,0,0.125+0.03125i,0,0,0),
    (-0.03125i,0,-0.0625,0,0,0,0,0.09375,0,0),
    (0,0,0,-0.0625,0,0,0,0,0.0625,0),
    (0,0,0,-0.0625,-0.0625,0,0,0,0,0.125))

Thanks!
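
I don't know a closed-form method for the coupled pair, but a numerical search is straightforward to set up. A minimal sketch (Python/numpy; it assumes $A$ and $B$ have been entered as complex arrays, and solve_coupled is my own helper name). A final residual near zero certifies a solution; a stubbornly positive one suggests there is none of this form:

    import numpy as np
    from scipy.optimize import minimize

    def solve_coupled(A, B, restarts=20, seed=0):
        """Search for X with A = X B X^T and B = X A X^T by minimizing the
        squared residuals over the real and imaginary parts of X.
        Note: X^T is the plain transpose, not the conjugate transpose."""
        n = A.shape[0]
        rng = np.random.default_rng(seed)

        def cost(v):
            X = (v[:n*n] + 1j * v[n*n:]).reshape(n, n)
            r1 = A - X @ B @ X.T
            r2 = B - X @ A @ X.T
            return np.sum(np.abs(r1)**2) + np.sum(np.abs(r2)**2)

        best = None
        for _ in range(restarts):
            res = minimize(cost, rng.standard_normal(2 * n * n), method='BFGS')
            if best is None or res.fun < best.fun:
                best = res
        return (best.x[:n*n] + 1j * best.x[n*n:]).reshape(n, n), best.fun

    # Usage: enter A and B from above as complex numpy arrays, then
    # X, resid = solve_coupled(A, B)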

mathematical optimization – Is there a better way to simulate quadratic cost MPC problems?

I would like to know if there is a more straightforward or better way, in terms of code length and accuracy, to simulate MPC (or, essentially, optimal control) problems compared to what I am doing.

The receding horizon optimal control problem I want to simulate is as follows:
$$
\begin{aligned}
\min_{\{u_0, \dots, u_{N-1}\}} \;\; & \sum_{p=0}^{N-1} x_p^T P x_p + x_N^T Q x_N \\
\text{subject to} \;\; & x_{k+1} = A x_k + B u_k, \quad k = 0, 1, \dots, N-1 \\
& A_x x_k \leq b_x, \quad x_k \in \mathbb{R}^n, \quad k = 0, 1, \dots, N-1 \\
& A_u u_k \leq b_u, \quad u_k \in \mathbb{R}^m, \quad k = 0, 1, \dots, N-1 \\
& A_N x_N \leq b_N, \quad x_N \in \mathbb{R}^n \\
& x_0 = x(0)
\end{aligned}
$$

What I have done is translate the above problem into a form which I can give to Mathematica's QuadraticOptimization solver. Going as per Sec. 11.3.1 of Predictive Control for Linear and Hybrid Systems (the book is available for free on the author's website, so this is not an illegitimate copy), the problem can be re-cast as below:
$$
\begin{aligned}
\min_{V_0} \;\; & V_0^T M V_0 \\
\text{subject to} \;\; & J_0 V_0 \leq w_0 \\
& \begin{bmatrix} 0 & \cdots & 0 & 1 & \cdots & 1 \end{bmatrix} V_0 = x(0)
\end{aligned}
$$

where $V_0=\{u_0,\dots,u_{N-1},x(0)\}$ and the row matrix in the second constraint has $mN$ zeros and $n$ ones, which basically means that the last $n$ components of $V_0$ make up the vector $x(0)$.
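
For reference, the substitution underlying this batch form (a sketch, with the notation above): the states are eliminated via
$$x_k = A^k x(0) + \sum_{j=0}^{k-1} A^{k-1-j} B u_j, \qquad k = 1,\dots,N,$$
so each state constraint becomes an affine constraint on $V_0$, which is what produces the dense data $M$, $J_0$, and $w_0$.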

Is there a more straightforward way?

A bit of background

This works, but I don't know if it gives correct results. I tried comparing my MPC results with a few examples from research papers and lecture notes, and the outcome was a mixed bag: for some, the results matched perfectly; for some, they matched only for certain time periods; and for some, the solver reported that no points satisfy the constraints. It is of course an issue for a separate question on Mathematica.SE, but this is what prompted me to ask this question.

Thank you for your help!

nt.number theory – Sum of inverse squares of numbers divisible only by primes in the kernel of a quadratic character

Let $\chi$ be a primitive quadratic Dirichlet character of modulus $m$, and consider the product
$$\prod_{\substack{p \text{ prime} \\ \chi(p) = 1}} (1-p^{-2})^{-1}.$$

What can we say about the value of this product? Do we have good upper or lower bounds?

Some observations, ideas, and auxiliary questions

  • When $\chi$ is trivial, it has value $\zeta(2)$.
  • In general, since the Chebotarev density theorem (CDT) tells us that $\chi(p)$ is equidistributed in the limit, I would “want” the value to be something like

$$\Big(\zeta(2)\prod_{p \mid m} (1-p^{-2})\Big)^{\frac{1}{2}}.$$

However, if I'm not mistaken, it seems that the error terms in effective forms of CDT may cause this to be very far from the truth. We can't ignore what happens before we are close to equidistribution, as the tail and the head are both $O(1)$. We can't even control the error term well (without GRH) because of Siegel zeroes.

  • I don’t think we can appeal to Dirichlet density versions of CDT since those only tell us things in the limit as $s$ goes to $1$ and here $s = 2$.
  • Is there a way to “Dirichlet character”-ify a proof of $\zeta(2) = \pi^2/6$ to get a formula for this more general case? At least with Euler's proof via Weierstrass factorization, it seems that we would need some holomorphic function which has zeroes whenever $\chi(n) = 1$.

I had a few other ideas but they all seem to run into the same basic problem of “can’t ignore the stuff before the limit”… am I missing something?
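
One standard Euler-product manipulation may be worth recording (a sketch; please double-check the bookkeeping). Write $\zeta^{(m)}(2) = \zeta(2)\prod_{p \mid m}(1-p^{-2})$ for $\zeta(2)$ with the Euler factors at $p \mid m$ removed. Splitting the Euler products of $\zeta^{(m)}(2)$ and $L(2,\chi)$ according to $\chi(p) = \pm 1$ gives

$$\zeta^{(m)}(2)\,L(2,\chi) = \prod_{\chi(p)=1}(1-p^{-2})^{-2}\prod_{\chi(p)=-1}(1-p^{-4})^{-1},$$

so the product in question equals

$$\Big(\zeta(2)\prod_{p\mid m}(1-p^{-2})\cdot L(2,\chi)\cdot\prod_{\chi(p)=-1}(1-p^{-4})\Big)^{1/2}.$$

This is the guessed value times $\big(L(2,\chi)\prod_{\chi(p)=-1}(1-p^{-4})\big)^{1/2}$, and both extra factors are bounded between absolute constants; there is no Siegel-zero issue at $s=2$ since the Euler products converge absolutely there.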

Quadratic Diophantine equations with all values prime

Given a quadratic Diophantine equation over the integers in two variables, can we say much about when it has only finitely many solutions with the additional assumption that both variables are prime?

That is, given integers $a$, $b$, $c$, $d$, and $e$, with $a$ and $c$ both non-zero, it seems reasonable to conjecture that there are only finitely many primes $x$ and $y$ with $$ax^2 +bxy +cy^2 + dx +ey=0,$$ barring the exceptional case where the left-hand side itself factors.

The obvious heuristic here is that solutions to the equation (whether or not they are prime) should grow roughly exponentially, based on standard methods of solving quadratic Diophantine equations. Thus, if we call the $n$th solution $(x_n,y_n)$, then the chance that a given coordinate is prime should be $O(\frac{1}{\log k^n})=O(\frac{1}{n})$ (where $k$ is some constant), so the chance that both are prime is about $O(\frac{1}{n^2})$, and the relevant series converges, since $\sum_{n \geq 1} \frac{1}{n^2}$ converges.

There are some cases where it is not hard to prove this sort of conjecture, using completely elementary methods. For example, it is not hard to show the following:

Proposition: Suppose that $p$ is prime, $b \geq 1$, $m \geq 2$, $a \geq 1$, and $$p^2 + bp + ma^2 = mq^2.$$ Then $p \leq b + 4am$.

Proof sketch: The equation can be written as $$p(p+b)=m(q-a)(q+a),$$ and so $p$ needs to divide one of the terms on the right-hand side.

One interesting thing here is that the proposition is stronger than what the above heuristic would suggest: the heuristic uses both variables being prime, whereas here we only need that $p$ is prime, and no assumption is made about the primality of $q$.
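
Not a proof of anything, but the proposition is easy to stress-test numerically; a small sketch (the triples $(b,m,a)$ are arbitrary choices):

    from math import isqrt
    from sympy import primerange

    # Stress-test the proposition: every prime p with p^2 + b*p + m*a^2 = m*q^2
    # (q a positive integer) should satisfy p <= b + 4*a*m
    for b, m, a in [(1, 2, 1), (3, 2, 1), (5, 3, 2)]:  # arbitrary sample triples
        bound = b + 4 * a * m
        for p in primerange(2, 100000):
            val = p * p + b * p + m * a * a
            if val % m == 0:
                q = isqrt(val // m)
                if q > 0 and q * q == val // m:
                    assert p <= bound, (b, m, a, p, q)
                    print(f"b={b}, m={m}, a={a}: p={p}, q={q} (bound {bound})")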

My guess is that answering this question in complete generality is going to be tough. So my question then is what broad sets of cases can we prove this for?

calculus – How to evaluate a function with a quadratic argument or higher?

If I have a function of the form $f(2x+5) = 4x-3$ and I am asked to evaluate $f(8)$, I set $t=2x+5$, from which $x= (t-5)/2$; then $f(t) = 4\left(\frac{t-5}{2}\right) - 3$, I take $t=8$, and that's it.
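As a quick sanity check of this substitution (a sketch in Python/sympy; the variable names are mine):

    import sympy as sp

    # Check the substitution in the linear case: f(2x+5) = 4x-3, evaluate f(8)
    x, t = sp.symbols('x t')
    x_of_t = sp.solve(sp.Eq(t, 2*x + 5), x)[0]  # x = (t - 5)/2
    f_of_t = 4*x_of_t - 3                       # f(t) = 2t - 13
    print(f_of_t.subs(t, 8))                    # -> 3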

But how do I do it here? If I do the above, I get a quadratic or a cubic equation:

a) $f(x^2 + 1/x^2) = x + 1/x$; find $f(4)$.
b) $f(x^3 + 1/x^3) = x + 1/x$; find $f(4)$.

Thank you in advance!