matrix – Use of a function inside of a table

I want to write a function that generates an n×n matrix whose indices run from 1, …, n and whose (i,j)th entry is f(i,j) for some function f. The code I have right now is

genMat[n_, f_[x_, y_]] := Table[f[i, j], {i, n}, {j, n}]

but if I set f[x_, y_] := x*y, then genMat[4, f[x, y]] evaluates to {{2, 3, 4, 5}, {3, 4, 5, 6}, {4, 5, 6, 7}, {5, 6, 7, 8}}, the matrix generated by adding the values together as opposed to multiplying them. It does this for all other n as well. Does anybody know what is going on here? Thanks.
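Not the Mathematica semantics themselves, but a hint at the usual convention: pass the function itself rather than an already-evaluated expression like f[x, y] (in the call genMat[4, f[x, y]], the argument f[x, y] is evaluated before genMat ever sees it). A minimal Python analogue of the pass-the-function style, with stand-in names:

def gen_mat(n, f):
    """n x n matrix whose (i, j) entry is f(i, j), with 1-based indices."""
    return [[f(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]

f = lambda x, y: x * y

# Pass the function object itself, not an evaluated expression f(x, y):
print(gen_mat(4, f))  # [[1, 2, 3, 4], [2, 4, 6, 8], [3, 6, 9, 12], [4, 8, 12, 16]]

With the analogous signature genMat[n_, f_], the call would be genMat[4, f].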

linear algebra – Block matrix of tensor product

$K$ is a field and $K^n$ is a vector space with basis $(e_1, \ldots, e_n)$. The tensor product $K^n \otimes K^n$ has the basis $\mathcal{B} = (e_1 \otimes e_1, \ldots, e_1 \otimes e_n, e_2 \otimes e_1, \ldots, e_2 \otimes e_n, e_3 \otimes e_1, \ldots, e_n \otimes e_n)$.

Consider matrices $A, B \in M(n \times n; K)$ and the induced map $A \otimes B : K^n \otimes K^n \to K^n \otimes K^n$. What does its matrix with respect to $\mathcal{B}$ look like as a block matrix?

a. $A \otimes B = \begin{pmatrix} b_{11} A & b_{12} A & \cdots & b_{1n} A \\ b_{21} A & b_{22} A & \cdots & b_{2n} A \\ \vdots & \vdots & \ddots & \vdots \\ b_{n1} A & b_{n2} A & \cdots & b_{nn} A \end{pmatrix}$

b. $A \otimes B = \begin{pmatrix} a_{11} B & a_{12} B & \cdots & a_{1n} B \\ a_{21} B & a_{22} B & \cdots & a_{2n} B \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} B & a_{n2} B & \cdots & a_{nn} B \end{pmatrix}$

c. $A \otimes B = \begin{pmatrix} A & \cdots & A & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ A & \cdots & A & 0 & \cdots & 0 \\ 0 & \cdots & 0 & B & \cdots & B \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & B & \cdots & B \end{pmatrix}$
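One way to check which convention matches the ordering of $\mathcal{B}$ is to compute a single column for $n=2$. Since $(A \otimes B)(e_j \otimes e_l) = Ae_j \otimes Be_l$, the image of the third basis vector $e_2 \otimes e_1$ is

$$(A \otimes B)(e_2 \otimes e_1) = Ae_2 \otimes Be_1 = \sum_{i,k} a_{i2} b_{k1}\,(e_i \otimes e_k) = \begin{pmatrix} a_{12}b_{11} \\ a_{12}b_{21} \\ a_{22}b_{11} \\ a_{22}b_{21} \end{pmatrix}$$

in the coordinates of $\mathcal{B}$, which can be compared with the third column of each candidate matrix.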

performance tuning – Is there any way to accelerate matrix dot multiplication?

I’m trying to code moment computation, which needs massive amounts of matrix dot multiplication.

My current code is below.

XSub = Compile[{{m}},
   Module[{}, xsub = Total[LegendreP[m, kXList]]; xsub],
   CompilationTarget -> "C", RuntimeAttributes -> {Listable},
   RuntimeOptions -> {"CatchMachineIntegerOverflow" -> False}];

YSub = Compile[{{n}},
   Module[{}, ysub = Total[LegendreP[n, kYList]]; ysub],
   CompilationTarget -> "C", RuntimeAttributes -> {Listable},
   RuntimeOptions -> {"CatchMachineIntegerOverflow" -> False}];

Mom = Compile[{{m}, {n}},
   Module[{}, moment = ((2*m + 1) (2*n + 1)/((k^2)*W*H))*XSub[m].A.YSub[n];
    moment], CompilationTarget -> "C",
   RuntimeAttributes -> {Listable}, Parallelization -> True,
   RuntimeOptions -> {"CatchMachineIntegerOverflow" -> False}];

momentMatrix = ParallelTable[Mom[col - row, row], {col, 0, Mmax}, {row, 0, col}] (*//AbsoluteTiming*)

where A is a 256×256 matrix, XSub and YSub each return a list of length 256, and momentMatrix is a triangular matrix.

So far, with Mmax = 100, the elapsed time is 14 s. My target is to compute larger images (i.e., larger A, XSub, and YSub) at higher orders (i.e., larger Mmax), which will make it slower.

I suppose the most time-consuming part is the matrix dot multiplication in the function Mom.

Could anyone give some suggestions to accelerate my code?
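Not a full answer, but one standard restructuring: precompute the XSub/YSub vectors for every order once, so that all (Mmax+1)² scalar results XSub[m].A.YSub[n] come out of two large matrix products. A sketch of the idea in Python/NumPy for concreteness (kX, kY, A, k, W, H, and Mmax are hypothetical stand-ins for the question's data):

import numpy as np
from scipy.special import eval_legendre

# Hypothetical stand-ins for the question's kXList, kYList, A, k, W, H, Mmax.
kX = np.linspace(-1.0, 1.0, 256)
kY = np.linspace(-1.0, 1.0, 256)
A = np.random.rand(256, 256)
k, W, H, Mmax = 1.0, 256.0, 256.0, 100

# Row m plays the role of XSub[m]/YSub[n]: all Legendre orders evaluated at once.
orders = np.arange(Mmax + 1)
X = eval_legendre(orders[:, None], kX[None, :])  # shape (Mmax+1, 256)
Y = eval_legendre(orders[:, None], kY[None, :])  # shape (Mmax+1, 256)

# Every XSub[m].A.YSub[n] at once: two big matrix products instead of
# (Mmax+1)^2 small vector-matrix-vector products.
prefactor = np.outer(2 * orders + 1, 2 * orders + 1) / (k**2 * W * H)
moments = prefactor * (X @ A @ Y.T)              # shape (Mmax+1, Mmax+1)

The same restructuring applies in Mathematica: schematically, build the matrices X and Y of Legendre values (e.g. with Outer, if kXList is a flat list), compute X.A.Transpose[Y] once, multiply by the prefactor matrix, and read the triangular momentMatrix off by indexing. The dot products then run through optimized BLAS once instead of (Mmax+1)² times.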

matrices – How to prove this equation about the matrix determinant?

How can one prove the following equation for the determinant of a matrix $M$:

$$|M|=\frac{(M \cdot a) \times (M \cdot b) \cdot (M \cdot c)}{(a \times b) \cdot c},$$

where $a$, $b$ and $c$ are arbitrary vectors.

This equation is encountered on p. 55 of An Introduction to Continuum Mechanics by J. N. Reddy. It is subsequently used to prove that the determinant of the deformation gradient is the change of volume during deformation.

I would be grateful if somebody could shed some light on it.
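A possible route, assuming $a, b, c \in \mathbb{R}^3$ with $(a \times b) \cdot c \neq 0$: the scalar triple product of three vectors is the determinant of the matrix having them as columns, so

$$(M \cdot a) \times (M \cdot b) \cdot (M \cdot c) = \det\begin{pmatrix} Ma & Mb & Mc \end{pmatrix} = \det\bigl(M \begin{pmatrix} a & b & c \end{pmatrix}\bigr) = |M|\,\det\begin{pmatrix} a & b & c \end{pmatrix} = |M|\,(a \times b) \cdot c,$$

and dividing by $(a \times b) \cdot c$ gives the claimed formula.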

java – Recognize L-Shape in a 4×4 Matrix

I’m trying to make a board game where the player has to press four buttons to make an L. It can be mirrored or rotated; it just has to be an L.
But if the player presses four buttons and makes another shape, like a line or a Z, the system must recognize it and alert the player.
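Whatever the Java specifics, the shape test itself is small: normalize the four pressed cells to the origin and compare against the precomputed orientations of the L. A sketch of that idea in Python (the logic ports directly to Java; BASE_L and the coordinate convention are assumptions):

# One way to test whether four pressed cells form an L, under all
# rotations and mirrorings. Cells are (row, col) pairs on the 4x4 board;
# BASE_L is an assumed reference orientation of the L tetromino.
BASE_L = {(0, 0), (1, 0), (2, 0), (2, 1)}

def orientations(shape):
    """All rotations/reflections of a cell set, translated to the origin."""
    result = set()
    cells = list(shape)
    for _ in range(4):
        cells = [(c, -r) for r, c in cells]                    # rotate 90 deg
        for variant in (cells, [(r, -c) for r, c in cells]):   # and its mirror
            min_r = min(r for r, _ in variant)
            min_c = min(c for _, c in variant)
            result.add(frozenset((r - min_r, c - min_c) for r, c in variant))
    return result

L_SHAPES = orientations(BASE_L)

def is_l(pressed):
    """pressed: exactly four (row, col) positions chosen by the player."""
    cells = set(pressed)
    min_r = min(r for r, _ in cells)
    min_c = min(c for _, c in cells)
    return frozenset((r - min_r, c - min_c) for r, c in cells) in L_SHAPES

print(is_l([(0, 0), (1, 0), (2, 0), (2, 1)]))  # True  (an L)
print(is_l([(0, 0), (0, 1), (0, 2), (0, 3)]))  # False (a line)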

R – Storing values from a nested loop in a matrix

I am trying to store values from a nested loop in a matrix. I’ve provided a simplified version of the code below:

for (i in 1:2) {
  for (j in 1:2) {

    Test <- i + j

    # Create summary table of looping results
    Summary_Loop <- matrix(ncol = 2, nrow = 2)
    Summary_Loop[i, j] <- Test

    # End of loop
  }
}

Summary_Loop <- as.table(Summary_Loop)
print(Summary_Loop)

For some reason it is only storing the last value from the loop in the matrix: the entries (1,1), (1,2), and (2,1) are NA. Can anyone tell me how to get the matrix to store all the values?

Note: this is in RStudio.

Thanks
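For what it's worth, the symptom (only the last entry filled, everything else NA) is consistent with the matrix being re-created on every pass through the inner loop. A minimal sketch of the allocate-once pattern, in Python/NumPy for illustration (the variable names are stand-ins for the R ones):

import numpy as np

# Allocate the summary matrix ONCE, before the loops; creating it inside
# the loop would wipe out entries stored on earlier iterations.
summary_loop = np.full((2, 2), np.nan)

for i in range(1, 3):          # R's 1:2
    for j in range(1, 3):
        summary_loop[i - 1, j - 1] = i + j

print(summary_loop)
# [[2. 3.]
#  [3. 4.]]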

linear algebra – Show that if $A$ is a symmetric $n\times n$ matrix then $q(x)=x^TAx=\sum_{i=1}^n\lambda_ix_i^2$

I found the following on Wikipedia: if $A$ is a symmetric $n\times n$ matrix, then
$$q(x)=x^TAx=\sum_{i=1}^n\lambda_ix_i^2,$$
where the $\lambda_i$ are the eigenvalues of $A$. I tried to prove this by writing $A$ as $PDP^T$ (with $P$ orthogonal and $D$ diagonal), but it didn't work out:
\begin{align*}
q(x)=&(x^TP)D(P^Tx)\\
=&
\begin{pmatrix}x_1&\dots&x_n\end{pmatrix}
\begin{pmatrix}
p_{11}&\dots&p_{1n}\\
\vdots&\ddots&\vdots\\
p_{n1}&\dots&p_{nn}
\end{pmatrix}
\begin{pmatrix}
\lambda_1&\dots&0\\
\vdots&\ddots&\vdots\\
0&\dots&\lambda_n
\end{pmatrix}
\begin{pmatrix}
p_{11}&\dots&p_{n1}\\
\vdots&\ddots&\vdots\\
p_{1n}&\dots&p_{nn}
\end{pmatrix}
\begin{pmatrix}x_1\\\vdots\\x_n\end{pmatrix}\\
=&
\begin{pmatrix}\sum_{i=1}^nx_ip_{i1}&\dots&\sum_{i=1}^nx_ip_{in}\end{pmatrix}
\begin{pmatrix}
\lambda_1&\dots&0\\
\vdots&\ddots&\vdots\\
0&\dots&\lambda_n
\end{pmatrix}
\begin{pmatrix}\sum_{i=1}^nx_ip_{i1}\\\vdots\\\sum_{i=1}^nx_ip_{in}\end{pmatrix}\\
=&\begin{pmatrix}\lambda_1\sum_{i=1}^nx_ip_{i1}&\dots&\lambda_n\sum_{i=1}^nx_ip_{in}\end{pmatrix}
\begin{pmatrix}\sum_{i=1}^nx_ip_{i1}\\\vdots\\\sum_{i=1}^nx_ip_{in}\end{pmatrix}\\
=&\sum_{j=1}^n\lambda_j\left(\sum_{i=1}^nx_ip_{ij}\right)^2
\end{align*}

This is all I have so far. Could someone help me with this? Thank you.
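For what it's worth, the last line above may already be the intended statement: writing $y = P^Tx$ for the coordinates of $x$ with respect to the orthonormal eigenbasis (the columns of $P$), it reads

$$q(x) = \sum_{j=1}^n\lambda_j\left(\sum_{i=1}^nx_ip_{ij}\right)^2 = \sum_{j=1}^n\lambda_j y_j^2,$$

so the formula $\sum_i\lambda_ix_i^2$ presumably assumes $x$ is expressed in that eigenbasis.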

Is this benchmark sufficient to consider my algorithm an efficient matrix multiplication algorithm?

I built a matrix multiplication algorithm and now I need some thoughts about the following benchmark.

C++ std::chrono::high_resolution_clock time (microseconds):

  1. (Dim) 256 -> (Naive algo) 296807, (My algo) 187479
  2. (Dim) 512 -> (Naive algo) 2249495, (My algo) 1359046
  3. (Dim) 1024 -> (Naive algo) 27930970, (My algo) 12309645

FYI, I have no knowledge of the Strassen matrix multiplication algorithm, nor of how to use a GitHub project. I therefore took some benchmarks from a GitHub account for the sake of a comparison with the Strassen matrix multiplication algorithm. Those are below.

C++ std::chrono::high_resolution_clock time (microseconds):

  1. (Dim) 256 -> (Naive algo) 260281, (Strassen algo) 216970
  2. (Dim) 512 -> (Naive algo) 2122299, (Strassen algo) 1580466
  3. (Dim) 1024 -> (Naive algo) 2…, (Strassen algo) 14696774

I don’t know the specs of the GitHub user’s computer. I assume it is better than mine, judging from the two execution times of the naive matrix multiplication algorithm. (Quoted from https://github.com/rangelak/Strassen-Matrix-Multiplication)

Specs of my computer:

Processor: Intel Core i7, 6th gen (2.60 GHz)

RAM: 8 GB

What do you think about my algorithm after observing the benchmarks above, and what can I expect when comparing my algorithm with the Strassen matrix multiplication algorithm?
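As a rough absolute yardstick (a sketch; it assumes the classical 2·n³ flop count for dense matrix multiplication), the reported times can be converted into effective throughput:

# Rough effective-throughput check: classical matrix multiplication does
# about 2*n^3 floating-point operations. Times are the question's
# measurements for "My algo", in microseconds.
times_us = {256: 187479, 512: 1359046, 1024: 12309645}

for n, t in times_us.items():
    gflops = 2 * n**3 / (t * 1e-6) / 1e9
    print(f"n={n}: {gflops:.2f} GFLOP/s")
# n=256: 0.18 GFLOP/s; n=512: 0.20 GFLOP/s; n=1024: 0.17 GFLOP/s

All three sizes land around 0.2 GFLOP/s. For comparison, an optimized BLAS (e.g. OpenBLAS or MKL) on a similar CPU typically sustains tens of GFLOP/s, so beating the naive triple loop is necessary but far from sufficient to call an implementation efficient.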

mp.mathematical-physics – Diagonalization of the generalized 1-particle density matrix

Let $\mathscr{H}$ be a complex separable Hilbert space and $\mathscr{F}$ the corresponding fermionic Fock space generated by $\mathscr{H}$. Let $\rho: \mathscr{L}(\mathscr{F}) \to \mathbb{C}$ be a bounded linear functional on all bounded operators of $\mathscr{F}$ with $\rho(I)=1$ and $\rho(A^*)=\rho(A)^*$, and define the 1-particle density matrix (1-pdm) as the unique bounded self-adjoint $\Gamma: \mathscr{H}\oplus \mathscr{H}^* \to \mathscr{H}\oplus \mathscr{H}^*$ such that

$$
\langle x|\Gamma y\rangle = \rho\bigl((c^*(y_1)+c(y_2))(c(x_1)+c^*(x_2))\bigr)
$$

where $x=x_1 \oplus \bar{x}_2$ and $y=y_1 \oplus \bar{y}_2$ (I use the notation $\bar{x}(\cdot) = \langle x|\cdot\rangle$) and $c,c^*$ are the annihilation/creation operators.

In the references V. Bach, Generalized Hartree-Fock theory and the Hubbard model (Theorem 2.3), and J. P. Solovej, Many Body Quantum Mechanics (9.6 Lemma and 9.9 Theorem), the authors claim that (under suitable conditions) $\Gamma$ is diagonalizable by a Bogoliubov transform $W:\mathscr{H}\oplus \mathscr{H}^* \to \mathscr{H}\oplus \mathscr{H}^*$, so that $W^* \Gamma W = \operatorname{diag}(\lambda_1,\ldots,1-\lambda_1,\ldots)$. The main idea of the proof is that $\Gamma$ is diagonalizable with respect to an orthonormal basis, and that if $x\oplus \bar{y}$ is an eigenvector with eigenvalue $\lambda$, then $y\oplus \bar{x}$ is an eigenvector with eigenvalue $1-\lambda$. The proof is fine when $\lambda\ne 1/2$, since the two eigenvectors are orthogonal to each other. However, if $\lambda=1/2$, things become a little more difficult. Solovej resolves this in the case where the eigenspace of $\lambda = 1/2$ is even-dimensional, but I cannot see why that should be the case.

Question. Is there something I'm forgetting? If not, is there a way, or are there references, to complete the proof?

computer vision – OpenCV Decompose projection matrix

I got confused by the outputs of OpenCV's decomposeProjectionMatrix function.

I have the projection matrix and the camera matrix K, and I want to get the translation vector t and the rotation matrix R from the projection matrix.

As far as I know, the projection matrix has dimension 3×4 and equals K[R|t], in which t is a 3×1 vector.

cv2.decomposeProjectionMatrix returns R with dimension 3×3, which is correct, but the transVect it returns has dimension 4×1, not 3×1.

My question is: how do I get back the projection matrix from the function outputs?

documentation: https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
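If it helps, the 4×1 transVect appears to be the homogeneous camera centre C (the null vector of P) rather than the t of K[R|t]; under that assumption t = -R·C, and the round trip can be checked as below (a sketch; the numeric K, R, t are arbitrary test values):

import numpy as np
import cv2

# Build a test projection matrix from known K, R, t (arbitrary values).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, _ = cv2.Rodrigues(np.array([[0.1], [-0.2], [0.05]]))
t = np.array([[0.5], [-0.3], [2.0]])
P = K @ np.hstack([R, t])

K2, R2, transVect = cv2.decomposeProjectionMatrix(P)[:3]

# transVect is (apparently) the camera centre in homogeneous coordinates,
# which is why it is 4x1: P @ C = 0. Recover the t of [R|t] as t = -R @ C.
C = transVect[:3] / transVect[3]
t2 = -R2 @ C

P_rebuilt = K2 @ np.hstack([R2, t2])
print(np.allclose(P / P[-1, -1], P_rebuilt / P_rebuilt[-1, -1]))  # True, up to scale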