linear algebra – Full rankedness of the sum of two matrices satisfying a certain condition

Suppose that $\mathbf{A}$ and $\mathbf{B}$ are two $N\times M$ matrices with $N\leq M$ and $\text{rank}(\mathbf{A}) = \text{rank}(\mathbf{B}) = N$.

Question: Is the following statement true? Why?

If $\text{rank}\big((\cos\alpha\,\mathbf{I})\mathbf{A}+(\sin\alpha\,\mathbf{I})\mathbf{B}\big) = N$, then $\text{rank}(\mathbf{C}\mathbf{A}+\mathbf{S}\mathbf{B}) = N$,

where $\mathbf{I}$ is the $N\times N$ identity matrix, and $\mathbf{C}$ and $\mathbf{S}$ are $N\times N$ diagonal matrices with $\mathbf{C} = \text{diag}(\cos\theta_1,\cdots,\cos\theta_N)$ and $\mathbf{S} = \text{diag}(\sin\theta_1,\cdots,\sin\theta_N)$.

Thoughts: Since $\mathbf{C}$, $\mathbf{S}$ and $\mathbf{I}$ are row-equivalent, I am trying to prove that $(\cos\alpha\,\mathbf{I})\mathbf{A}+(\sin\alpha\,\mathbf{I})\mathbf{B}$ and $\mathbf{C}\mathbf{A}+\mathbf{S}\mathbf{B}$ are also row-equivalent. Any ideas on how to proceed with this?
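
For what it is worth, a quick numerical sanity check is easy to run (a minimal sketch using NumPy; the random matrices and the values of $\alpha$ and $\theta_i$ below are arbitrary stand-ins, and passing the check of course proves nothing):

```python
import numpy as np

# Numerical probe of the claim with random full-rank A, B and arbitrary angles.
rng = np.random.default_rng(0)
N, M = 4, 6
A = rng.standard_normal((N, M))   # rank N with probability 1
B = rng.standard_normal((N, M))   # rank N with probability 1

alpha = 0.7
theta = rng.uniform(0.0, 2.0 * np.pi, N)
C = np.diag(np.cos(theta))
S = np.diag(np.sin(theta))

rank_alpha = np.linalg.matrix_rank(np.cos(alpha) * A + np.sin(alpha) * B)
rank_CS = np.linalg.matrix_rank(C @ A + S @ B)
print(rank_alpha, rank_CS)        # both equal N for generic data; not a proof
```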

linear algebra – Where is the negative sign coming from in the distributive step

I came across a proof in the book Quantum Computing for Computer Scientists. It shows the derivation of the formula for division of complex numbers, which I am trying to follow. It starts with
$$
(x,y) = \frac{(a_{1}, b_{1})}{(a_{2}, b_{2})}
$$

where $a_1$, $b_1$ represent the real and imaginary parts of a complex number $c_1$, and likewise for $a_2$ and $b_2$. The next step in the proof says:

then by definition of division as the inverse of multiplication
$$
(a_{1}, b_{1}) = (x,y) \times (a_{2}, b_{2})
$$

which I understand fine, but then it says

or
$$
(a_{1}, b_{1}) = (a_{2}x - b_{2}y, a_{2}y + b_{2}x)
$$

My question is: where does that minus sign come from? Everything is positive. I expected it to be:
$$
(a_{1}, b_{1}) = (a_{2}x + b_{2}y, a_{2}y + b_{2}x)
$$

I am guessing my understanding of how multiplication works here is wrong. Any help is much appreciated.
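
For reference, multiplication of ordered pairs is defined by $(a,b)\times(c,d) = (ac-bd,\ ad+bc)$; the minus sign encodes $i^2=-1$. Below is a small sketch (the sample values are arbitrary) comparing that pair formula with Python's built-in complex arithmetic:

```python
# Pair multiplication (a, b) x (c, d) = (a*c - b*d, a*d + b*c);
# the minus sign comes from i**2 = -1.
def mul(p, q):
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

a2, b2, x, y = 3.0, 4.0, 1.0, 2.0          # arbitrary sample values
print(mul((a2, b2), (x, y)))               # (a2*x - b2*y, a2*y + b2*x) = (-5.0, 10.0)
print(complex(a2, b2) * complex(x, y))     # (-5+10j), the same number
```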

linear algebra – Column-equivalence of two vectors of projected vector-valued functions

I have two $1\times N$ row vectors containing projections of a series of vector functions $\vec{F}_i: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ onto two distinct unit vectors in $\mathbb{R}^2$:

$\vec{v}_1 = (\vec{F}_1(x,y)\cdot \vec{n}_1, \vec{F}_2(x,y)\cdot \vec{n}_1, \cdots, \vec{F}_N(x,y)\cdot \vec{n}_1)$

$\vec{v}_2 = (\vec{F}_1(x,y)\cdot \vec{n}_2, \vec{F}_2(x,y)\cdot \vec{n}_2, \cdots, \vec{F}_N(x,y)\cdot \vec{n}_2)$

where $\vec{F}_i(x,y) = (f_i(x,y), g_i(x,y))^T$ with $i = 1,\cdots,N$, and $\vec{n}_j = (\cos\theta_j, \sin\theta_j)^T$ are unit vectors in $\mathbb{R}^2$.

Question: Are $\vec{v}_1$ and $\vec{v}_2$ column-equivalent?

Knowing that $\vec{n}_1$ and $\vec{n}_2$ differ by a 2D rotation matrix, I've been trying to prove the column-equivalence by looking for an $N\times N$ invertible matrix $\mathbf{P}$ such that $\vec{v}_2 = \vec{v}_1\mathbf{P}$, but without success.

If $\vec{n}_2$ were simply $\vec{n}_1$ scaled by a factor $a$ ($\vec{n}_2 = a\vec{n}_1$), then $\mathbf{P}$ would be the diagonal matrix $\mathbf{P} = a\mathbf{I}$, but does $\mathbf{P}$ exist in the case of a rotation?
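
A possibly useful identity to test numerically: with $\vec{n}_1^{\perp} = (-\sin\theta_1, \cos\theta_1)^T$ and $\Delta = \theta_2-\theta_1$, one has $\vec{n}_2 = \cos\Delta\,\vec{n}_1 + \sin\Delta\,\vec{n}_1^{\perp}$, so entrywise $\vec{F}_i\cdot\vec{n}_2 = \cos\Delta\,(\vec{F}_i\cdot\vec{n}_1) + \sin\Delta\,(\vec{F}_i\cdot\vec{n}_1^{\perp})$. A short NumPy sketch checking this at a sample point (the angles and the stand-in values for the $\vec{F}_i$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
theta1, theta2 = 0.4, 1.3                              # arbitrary angles
d = theta2 - theta1

n1 = np.array([np.cos(theta1), np.sin(theta1)])
n1_perp = np.array([-np.sin(theta1), np.cos(theta1)])  # n1 rotated by 90 degrees
n2 = np.array([np.cos(theta2), np.sin(theta2)])

F = rng.standard_normal((N, 2))                        # stand-in for the F_i at one point

v1, v1_perp, v2 = F @ n1, F @ n1_perp, F @ n2
print(np.allclose(v2, np.cos(d) * v1 + np.sin(d) * v1_perp))   # True
```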

linear algebra – Orthogonal endomorphisms in finite dimension

As far as I understand it, an orthogonal endomorphism in finite dimension is represented by an orthogonal matrix only if the basis in which the matrix is written is orthonormal. Doesn't that pose a problem, since there will then exist bases in which the matrix of the orthogonal map is not orthogonal?
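
A small sketch of the phenomenon (assuming NumPy; the rotation angle and the skewed basis below are arbitrary): the same orthogonal map has an orthogonal matrix in the standard basis but not in a non-orthonormal basis.

```python
import numpy as np

t = 0.9
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])   # orthogonal map, standard (orthonormal) basis

P = np.array([[1.0, 1.0],
              [0.0, 1.0]])                # columns form a non-orthonormal basis
R_skew = np.linalg.inv(P) @ R @ P         # matrix of the SAME map in that basis

def is_orthogonal(M):
    return np.allclose(M.T @ M, np.eye(M.shape[0]))

print(is_orthogonal(R))       # True
print(is_orthogonal(R_skew))  # False: the matrix representation is basis-dependent
```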

linear algebra – Show that if $ad-bc ne 0$ then a system of equations has a unique solution. Know answer. Trouble understanding it.

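Since the question body did not come through, here is a minimal sketch of the standard argument, assuming the system is $ax+by=e$, $cx+dy=f$: eliminating one variable at a time gives

$$d(ax+by)-b(cx+dy)=(ad-bc)x=de-bf, \qquad a(cx+dy)-c(ax+by)=(ad-bc)y=af-ce,$$

so if $ad-bc\neq 0$ then $x=\frac{de-bf}{ad-bc}$ and $y=\frac{af-ce}{ad-bc}$ are forced, and substituting back shows they indeed solve the system, hence the solution exists and is unique.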

linear algebra – Eigenvalues of block matrix

Given a positive definite matrix $A \in \mathbb{R}^{n\times n}$ and a general matrix $B \in \mathbb{R}^{m\times n}$, can I say something about the eigenvalues of

$$T = \begin{bmatrix} \alpha A & \alpha B^T \\ \beta B & 0 \end{bmatrix},$$

with $\alpha, \beta \in \mathbb{R}$? Can I perhaps give bounds on the eigenvalues of $T$ as a function of $\alpha, \beta$?
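
Not a bound, but a quick way to experiment numerically (a sketch with NumPy; the sizes, the values of $\alpha$ and $\beta$, and the random matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 3
alpha, beta = 1.0, 0.5                     # arbitrary choices

G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)                # symmetric positive definite
B = rng.standard_normal((m, n))

T = np.block([[alpha * A,          alpha * B.T],
              [beta * B,  np.zeros((m, m))]])

# T is not symmetric unless alpha == beta, so eigenvalues may be complex.
print(np.sort_complex(np.linalg.eigvals(T)))
```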

linear algebra – Asking for help to understand the notation

The following is taken from a text which tries to prove a relation between the dimensions of subspaces.

[image of the excerpt containing Lemma 2.14 omitted]

My first question comes from

the canonical injections $in_1:M\to M\oplus N$

in lines 3 and 4. My understanding of the direct sum $M\oplus N$ is that, when we write down this notation, $M$ and $N$ must be independent, i.e., $M\cap N=0$. But since $M$ and $N$ are two arbitrary subspaces, as indicated in lines 1 and 2, they are not necessarily independent. What does the author mean here? Does $\oplus$ mean something other than direct sum? Furthermore, if we assume $M\oplus N$ denotes the direct sum, problems come up immediately. In line 4, the author wrote

injections $f:M\cap N\to M\oplus N$

If $M\cap N=0$, then the function $f$ is just the trivial map $0\to 0$, and so is the following $g$. I believe this is not what the author intended to define.

The second question arises from the third line of Lemma 2.14. I don't really know what that line of symbols, or the phrase "a short exact sequence" in the subsequent line, means. Does that line of arrows mean something arcane in mathematical circles? Of course, the question may simply be due to my not understanding what $f$ means in the first place. I have tried hard to understand what the author wanted to say, so could you please help me understand all of this? Thanks a lot.
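
In case it helps with the notation, a common reading (an assumption about what the text means, since the image is not reproduced here) is that $M\oplus N$ denotes the external direct sum $\{(m,n): m\in M,\ n\in N\}$, which is defined for arbitrary subspaces, and that the maps fit into the short exact sequence

$$0 \longrightarrow M\cap N \xrightarrow{\ f\ } M\oplus N \xrightarrow{\ g\ } M+N \longrightarrow 0, \qquad f(v)=(v,v), \quad g(m,n)=m-n.$$

Exactness (the image of each map equals the kernel of the next, $f$ is injective, $g$ is surjective) then gives $\dim(M\cap N)+\dim(M+N)=\dim(M\oplus N)=\dim M+\dim N$.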

time complexity – Average case analysis of linear search

Suppose we have an array $(1..n)$ and we run linear search on it to find $x$, with the following specification: the probability that $x$ is in the first half of the array is $p$, the probability that $x$ is in the second half is $3p$, and within each half every element is equally likely to be $x$. Calculate the average-case cost of linear search.

Is my answer correct?

$E(\text{successful})+\text{unsuccessful}=\sum_{i=1}^{n/2}p\,i + \sum_{i=\frac{n}{2}+1}^{n}3p\,i + (1-p)(1-3p)n$
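
One way to sanity-check the expression is a small Monte Carlo simulation (a sketch; it assumes each element of the first half is the target with probability $2p/n$, each element of the second half with probability $6p/n$, the search misses with probability $1-4p$, and a miss costs $n$ comparisons):

```python
import random

def simulate(n=100, p=0.05, trials=200_000):
    """Average number of comparisons of linear search under the stated model."""
    half = n // 2
    total = 0
    for _ in range(trials):
        u = random.random()
        if u < p:                       # x is in the first half, uniformly
            pos = random.randint(1, half)
        elif u < 4 * p:                 # x is in the second half, uniformly
            pos = random.randint(half + 1, n)
        else:                           # x is absent: scan all n elements
            pos = n
        total += pos
    return total / trials

print(simulate())   # compare against the closed-form expression evaluated at n, p
```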

nt.number theory – Defining the set of $r$-bit squares in mixed integer linear programs with fewer than $r$ integer variables

Consider the set $\mathcal R(A,b)$ defined by

$$\mathcal R(A,b)=\{z\in\mathbb Z\cap(0,2^r-1):\exists(x,y)\in\mathbb Z^k\times\mathbb R^{2^m} \mbox{ such that } A(x,y,z)'\leq b\},$$
where $k=o(r)$, $m=o(r)$, $mk=\Omega(r)$, $A\in\mathbb Q^{\ell\times(k+2^m+1)}$, $b\in\mathbb Q^\ell$, $\ell=\mathrm{poly}(r)$, $\log\max_{i,j}|A_{ij}|=O(2^{t})$, $\log\max_{i}|b_i|=O(2^{t})$ and $t=o(r)$.

Is it known whether
$$\mathcal R(A,b)\neq\mathcal{Sq}_r$$
always holds for every $A,b$, where $\mathcal{Sq}_r$ is the set of squares in $\{0,1,2,\dots,2^r-1\}$ (the $r$-bit squares)?

Verbally speaking, I am asking whether $k=\Omega(r)$ is necessary to define the set of $r$-bit square values.

Is it clear that the squares cannot be defined with a single existential quantifier when we are allowed only $o(r)$ integer variables ($z\in\mathbb Z^{k}$), up to $2^{o(r)}$ real variables ($y\in\mathbb R^{2^m}$), and coefficients of $2^t$ bits with $t=o(r)$ in the program?

Note that if $k=r$ then even $m=t=O(\log r)$ suffices, so the question is whether, once the number of integer variables is reduced, increasing the number of real variables cannot help to define squares.

linear algebra – Derivative of multilinear function

I'm stuck on the proof that a multilinear function of differentiable arguments is differentiable.

I want to understand the proof, but I'm stuck.

Theorem: Let $x_1, \dots, x_n$ be differentiable vector functions. Then the multilinear function $M(x_1, \dots, x_n)$ is differentiable.

In the proof,
[image of the proof omitted]

I know that a multilinear function is linear in each of $x_1, \dots, x_n$, so that

if all $x_i$ with $i \neq j$ are held fixed, $M$ is a linear function of the remaining argument $x_j$.

But I can’t understand that proof.

Can anyone help me?

Thanks in advance
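
In case it helps, the statement usually being proved here (written as a hedged summary, since the image of the proof is not reproduced) is the product-rule expansion

$$\frac{d}{dt}\,M\big(x_1(t),\dots,x_n(t)\big)=\sum_{j=1}^{n}M\big(x_1(t),\dots,x_{j-1}(t),\,x_j'(t),\,x_{j+1}(t),\dots,x_n(t)\big),$$

which is obtained by adding and subtracting mixed terms so that consecutive differences change only one argument at a time, and then using linearity of $M$ in that single argument.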