## Cover time in Markov Chain from transition matrix

Given a process on a graph $X_{n} = \{x_{1}, \ldots, x_{n}\}$, is there a way to obtain the cover time, starting at any state $x_{i}$, from the transition matrix $\mathbf{P}$?

I’ve obtained the expected number of steps to reach any state $x_{j}$ by solving the following linear system:

$$\mathbf{E}_{x} = (\mathbf{I}_{n} - \mathbf{P})^{-1} \mathbf{1}_{n}$$

But now I’m not sure how I could obtain the cover time directly from either $\mathbf{E}_{x}$ (which is a column vector of all the expected numbers of steps) or $\mathbf{P}$.
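As a concrete sanity check, here is a Python sketch (the 5-state chain and all names are my own illustration): expected hitting times to a fixed target come from solving the linear system on the sub-matrix with the target state removed, while the cover time, for which no comparably simple closed form is available, is estimated by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 5-state transition matrix (rows sum to 1) -- illustrative only.
P = rng.random((5, 5))
P /= P.sum(axis=1, keepdims=True)
n = P.shape[0]

def expected_hitting_times(P, target):
    """E[steps to reach `target`] from every other state: solve (I - Q) h = 1,
    where Q is P with the target row and column removed."""
    keep = [s for s in range(P.shape[0]) if s != target]
    Q = P[np.ix_(keep, keep)]
    h = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
    return dict(zip(keep, h))

def estimate_cover_time(P, start, trials=2000):
    """Monte Carlo estimate of the expected cover time from `start`."""
    n = P.shape[0]
    total = 0
    for _ in range(trials):
        state, visited, steps = start, {start}, 0
        while len(visited) < n:
            state = rng.choice(n, p=P[state])
            visited.add(state)
            steps += 1
        total += steps
    return total / trials

h = expected_hitting_times(P, target=0)
c = estimate_cover_time(P, start=0)
```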

Thank you very much.

## xna – Basic Projection & View matrix

Currently writing a camera component for FNA (basically XNA), and can’t for the life of me figure out why my matrix transform isn’t working as expected. This is how I calculate my view & projection matrices:

```
// projection
// WorldSize is something like 16x9, 20x10, etc. Basically the "simulated" world size.
// Zoom is just a float to zoom the camera.
var left = Zoom * -WorldSize.X / 2;
var right = Zoom * WorldSize.X / 2;
var bottom = Zoom * -WorldSize.Y / 2;
var top = Zoom * WorldSize.Y / 2;
const float near = 0;
const float far = 100;
var projection = Matrix.CreateOrthographicOffCenter(left, right, bottom, top, near, far);
```

and

```
// view
// Position is the position of my Camera, e.g. (10, 15), (6.51, 16.612), etc.
var position = new Vector3(Position, 0);
var target = position + Vector3.Forward;
var up = Vector3.Up;
var view = Matrix.CreateLookAt(position, target, up);

// Combine them
var combined = projection * view;
```

By every source I’ve checked, double-checked, and triple-checked, this should be correct. However, when I apply this matrix to my batch or effects, nothing is drawn at all:

```
// I would expect a white square to be rendered in the middle of the screen. Since WorldSize is
// 16x9 I would expect a 1x1 square to be clearly visible.
var batch = new SpriteBatch(Game.GraphicsDevice);
batch.Begin(/* the rest */, combined);
var texture = new Texture2D(Game.GraphicsDevice, 1, 1);
texture.SetData(new[] { Color.White });
batch.Draw(texture, Vector2.Zero, Color.White);
batch.End();

// Also tried just rendering lines, shows nothing
var effect = new BasicEffect(graphicsDevice)
{
    VertexColorEnabled = true,
    View = view,
    Projection = projection
};
effect.CurrentTechnique.Passes[0].Apply();
graphicsDevice.DrawUserPrimitives(PrimitiveType.LineStrip, vertices, 0, vertices.Length - 1);
```

I have checked so many sources and they all do exactly this. I even tried copying the matrix source code from a Java project I wrote some time back that I know works, and that didn’t work either, so I don’t think the matrix transforms are to blame. Does anyone know what I’m doing wrong?

## Singular value decomposition, matrix of singular vectors

Let’s say I have a matrix $A$ that is $M \times N$. When I compute the SVD, I get $P$ dominant singular values. How do I get the $M \times P$ matrix whose columns are the first $P$ singular vectors of $A$?
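In NumPy, for example (variable names are my own illustration), the singular values come back in descending order, so the matrix you want is just the first $P$ columns of $U$:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, P = 6, 4, 2
A = rng.random((M, N))

# full_matrices=False gives the thin SVD: U is M x min(M, N).
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# np.linalg.svd returns singular values in descending order,
# so the first P columns of U are the dominant left singular vectors.
U_P = U[:, :P]
```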

## c++ – Rcpp sparse CSC matrix class

This is a sparse matrix (`dgCMatrix`) class that extends the `Rcpp` namespace.

WHAT: This class is built from `Rcpp::IntegerVector` and `Rcpp::NumericVector` members, which are simple references to R objects. It fully supports reference-only, no-copy conversion of the `Matrix` package `dgCMatrix` class from R to C++ and vice versa.

WHY: I want read-only, no-copy access to an R `Matrix::dgCMatrix` in `double` type in C++, in contrast to RcppArmadillo `SpMat` and RcppEigen `SparseMatrix`, which are deep copies. I am definitely not trying to replace the excellent linear algebra and element-wise operations that `RcppArmadillo` and `RcppEigen` already offer.

I’m especially shaky on the `const_iterator` syntax. Everything seems to work, but it may not behave as expected (I’m only a few months into C++/Rcpp).

This is pretty bare-bones, but is there any functionality missing that you might expect of a read-only sparse matrix class? I figure the iterator is most important.

RcppSparse.h

```
#include <Rcpp.h>

namespace Rcpp {
class dgCMatrix {
public:
    IntegerVector i, p, Dim;
    NumericVector x;
    List Dimnames;

    // constructors
    dgCMatrix(IntegerVector& A_i, IntegerVector& A_p, NumericVector& A_x, int nrow) {
        i = A_i;
        p = A_p;
        x = A_x;
        Dim = IntegerVector::create(nrow, A_p.size() - 1);
    };
    dgCMatrix(IntegerVector& A_i, IntegerVector& A_p, NumericVector& A_x, int nrow, List& A_Dimnames) {
        i = A_i;
        p = A_p;
        x = A_x;
        Dim = IntegerVector::create(nrow, A_p.size() - 1);
        Dimnames = A_Dimnames;
    };
    dgCMatrix(S4 mat) {
        i = mat.slot("i");
        p = mat.slot("p");
        x = mat.slot("x");
        Dim = mat.slot("Dim");
        Dimnames = mat.slot("Dimnames");
    };

    // basic properties
    int nrow() { return Dim[0]; };
    int ncol() { return Dim[1]; };
    int rows() { return Dim[0]; };
    int cols() { return Dim[1]; };
    int n_nonzero() { return x.size(); };
    NumericVector& nonzeros() { return x; };
    double sum() { return Rcpp::sum(x); };

    // forward constant iterator
    class const_iterator {
    public:
        int index;
        const_iterator(dgCMatrix& g, int ind) : parent(g) { index = ind; }
        bool operator!=(const_iterator x) const { return index != x.index; };
        bool operator==(const_iterator x) const { return index == x.index; };
        bool operator<(const_iterator x) const { return index < x.index; };
        bool operator>(const_iterator x) const { return index > x.index; };
        const_iterator& operator++(int) { ++index; return (*this); };
        const_iterator& operator--(int) { --index; return (*this); };
        int row() { return parent.i[index]; };
        int col() { int j = 0; for (; j < parent.p.size(); ++j) if (parent.p[j] >= index) break; return j; };
        double& operator*() { return parent.x[index]; };
    private:
        dgCMatrix& parent;
    };

    // iterator constructors
    const_iterator begin(int j) { return const_iterator(*this, (int)0); };
    const_iterator end(int j) { return const_iterator(*this, i.size() - 1); };
    const_iterator begin_col(int j) { return const_iterator(*this, p[j]); };
    const_iterator end_col(int j) { return const_iterator(*this, p[j + 1]); };

    double at(int row, int col) const {
        for (int j = p[col]; j < p[col + 1]; ++j) {
            if (i[j] == row) return x[j];
            else if (i[j] > row) break;
        }
        return 0.0;
    }
    double operator()(int row, int col) { return at(row, col); };
    NumericVector operator()(int row, IntegerVector& col) {
        NumericVector res(col.size());
        for (int j = 0; j < col.size(); ++j) res[j] = at(row, col[j]);
        return res;
    };
    NumericVector operator()(IntegerVector& row, int col) {
        NumericVector res(row.size());
        for (int j = 0; j < row.size(); ++j) res[j] = at(row[j], col);
        return res;
    };
    NumericMatrix operator()(IntegerVector& row, IntegerVector& col) {
        NumericMatrix res(row.size(), col.size());
        for (int j = 0; j < row.size(); ++j)
            for (int k = 0; k < col.size(); ++k)
                res(j, k) = at(row[j], col[k]);
        return res;
    };

    // column access (copy)
    NumericVector col(int col) {
        NumericVector c(Dim[0], 0.0);
        for (int j = p[col]; j < p[col + 1]; ++j)
            c[i[j]] = x[j];
        return c;
    }
    NumericVector column(int c) { return col(c); }
    NumericMatrix cols(IntegerVector& c) {
        NumericMatrix res(Dim[0], c.size());
        for (int j = 0; j < c.size(); ++j) {
            res.column(j) = col(c[j]);
        }
        return res;
    }
    NumericMatrix columns(IntegerVector& c) { return cols(c); }

    // row access (copy)
    NumericVector row(int row) {
        NumericVector r(Dim[1], 0.0);
        for (int col = 0; col < Dim[1]; ++col) {
            for (int j = p[col]; j < p[col + 1]; ++j) {
                if (i[j] == row) r[col] = x[j];
                else if (i[j] > row) break;
            }
        }
        return r;
    }
    NumericMatrix rows(IntegerVector& r) {
        NumericMatrix res(r.size(), Dim[1]);
        for (int j = 0; j < r.size(); ++j) {
            res.row(j) = row(r[j]);
        }
        return res;
    }

    // colSums and rowSums family
    NumericVector colSums() {
        NumericVector sums(Dim[1]);
        for (int col = 0; col < Dim[1]; ++col)
            for (int j = p[col]; j < p[col + 1]; ++j)
                sums[col] += x[j];
        return sums;
    }
    NumericVector rowSums() {
        NumericVector sums(Dim[0]);
        for (int col = 0; col < Dim[1]; ++col)
            for (int j = p[col]; j < p[col + 1]; ++j)
                sums[i[j]] += x[j];
        return sums;
    }
    NumericVector colMeans() {
        NumericVector sums = colSums();
        for (int i = 0; i < sums.size(); ++i) sums[i] = sums[i] / Dim[0];
        return sums;
    };
    NumericVector rowMeans() {
        NumericVector sums = rowSums();
        for (int i = 0; i < sums.size(); ++i) sums[i] = sums[i] / Dim[1];
        return sums;
    };
};

// Rcpp::as
template <> dgCMatrix as(SEXP mat) { return dgCMatrix(mat); }

// Rcpp::wrap
template <> SEXP wrap(const dgCMatrix& sm) {
    S4 s(std::string("dgCMatrix"));
    s.slot("i") = sm.i;
    s.slot("p") = sm.p;
    s.slot("x") = sm.x;
    s.slot("Dim") = sm.Dim;
    s.slot("Dimnames") = sm.Dimnames;
    return s;
}
}
```

EXAMPLE USAGE:

In C++:

```
#include <RcppSparse.h>
// [[Rcpp::export]]
Rcpp::NumericVector Rcpp_colSums(Rcpp::dgCMatrix& mat){
    return mat.colSums();
}
```

In R:

```
library(Matrix)
mat <- rsparsematrix(100, 100, 0.5)
colSums <- Rcpp_colSums(mat)
```

## matrix – DivisionFreeRowReduction Method for RowReduce

My overall goal is to use a division-free algorithm to compute the null space of a matrix containing multivariate polynomials. For this there is, I believe, the method `DivisionFreeRowReduction`, which can also be used with `RowReduce`. However, it does not seem to actually perform division-free row reduction. Here are two examples:

```
M = {{x, x + y}, {y, 2*y}};
RowReduce[M, Method -> "DivisionFreeRowReduction"]
```

yields the identity matrix, whereas one might expect something like
$$\begin{pmatrix} x & x+y \\ y & 2y \end{pmatrix} \to \begin{pmatrix} x & x+y \\ xy & 2xy \end{pmatrix} \to \begin{pmatrix} x & x+y \\ 0 & xy - y^{2} \end{pmatrix}.$$

The same is even true over the integers:

```
M2 = {{2, 3}, {3, 4}};
RowReduce[M2, Method -> "DivisionFreeRowReduction"]
```

yields the identity matrix, so it seems to divide.

What is going on? Why does the division-free row reduction divide? How can one compute a null space using division-free methods?
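For what it’s worth, the cross-multiplication step I would expect can be sketched in a few lines of Python (my own illustration, not Mathematica’s internal method), here on the integer example `M2`:

```python
# Division-free elimination on the integer example M2 = [[2, 3], [3, 4]]:
# replace row 2 by pivot*row2 - below*row1, using no division at all.
M2 = [[2, 3], [3, 4]]
pivot, below = M2[0][0], M2[1][0]
M2[1] = [pivot * M2[1][j] - below * M2[0][j] for j in range(2)]
# M2 is now [[2, 3], [0, -1]] -- an echelon form reached without dividing,
# unlike the identity matrix that RowReduce returns.
```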

## linear algebra – Show convergence of a complicated fixed point iteration with matrix variable

The SVDs mentioned below are all the thin (compact) version.

Given a set of $m \times n$ full-rank ($r = \min\{m,n\}$) matrices $\{A_i\}_{i=1}^{N}$ and an $N \times N$ full-rank matrix $W$, first define $A_h = (A_1\ A_2\ \ldots\ A_N)$ and $A_v = (A_1^T\ A_2^T\ \ldots\ A_N^T)$, with thin SVDs $(U_h, \Sigma_h, V_h)$ and $(U_v, \Sigma_v, V_v)$ respectively.

The function $f(P, t)$ maps $P$ to its left singular vectors (the columns of $U$) corresponding to the least $t$ singular values.

I wonder why the following iteration converges. Given $Y_0 \in \mathbb{R}^{n \times b}$:

for $i$ from $1$ to $iter$:
$$X_i = U_h \Sigma_h^{-1} f(V_h^T (W \otimes Y_{i-1}), a)$$
$$Y_i = U_v \Sigma_v^{-1} f(V_v^T (W \otimes X_{i-1}), b)$$
end

where $\otimes$ is the Kronecker product.

Different initial values $Y_0$ converge to different results, but the algorithm always produces convergent $\{X_i\}$ and $\{Y_i\}$. I’m trying to explain this convergence behavior.

My first thought was to pull the $V_h^T$ out of $f$, since the SVD of a Kronecker product has a simple form, but I can’t figure out how to do it. Any help would be appreciated!

## algorithms – Check if graph is regular given adjacency matrix

I have the following problem:

Write a function
whose input is an adjacency matrix A of a graph G. The function returns true
if G is a regular graph and false otherwise.

I understand that a graph is regular if the degrees of all the vertices are the same.

Therefore, I have written the following code, which computes the degree (column sum) for each column:

```
def sumColumns(A):
    columns = []
    for i in range(0, len(A[0])):
        total = 0
        for j in range(0, len(A)):
            total += A[j][i]
        if total > 0:
            columns.append(total)
    return columns

def isRegularGraph(A):
    # Get list of degrees
    cd = sumColumns(A)

    if not len(cd) > 0 or cd[0] != cd[len(cd) - 1]:
        return False

    # Do comparisons from i to i - 1
    for i in range(0, len(cd) - 1):
        if cd[i] != cd[i + 1]:
            return False
    return True
```

My question is: do I also need to check row-wise whether the degrees are the same as they are column-wise?

Is there a better algorithm to do this?
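Since the adjacency matrix of an undirected graph is symmetric, row sums and column sums coincide, so a single pass over row sums suffices. A compact version along those lines (my own sketch):

```python
def is_regular(A):
    """True if every vertex of the graph with adjacency matrix A
    (a list of equal-length rows) has the same degree."""
    degrees = [sum(row) for row in A]  # row i sums to deg(vertex i)
    return all(d == degrees[0] for d in degrees)

# 4-cycle: every vertex has degree 2, so the graph is regular.
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]

# Path on 3 vertices: endpoint degrees (1) differ from the middle (2).
P3 = [[0, 1, 0],
      [1, 0, 1],
      [0, 1, 0]]
```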

## mathematics – Implementation of the basic matrix operations for embedded application in C++

I have been developing control software in C++, and to implement the control algorithms I need basic matrix operations: addition, subtraction, multiplication, and multiplication by a scalar.

Because this is a real-time application, I need to avoid dynamic memory allocation, so I decided to exploit C++ class templates with non-type parameters:

```
#include <cstdint>

template <class T, uint8_t ROWS, uint8_t COLUMNS>
class Matrix
{
public:
    T array[ROWS][COLUMNS];

    Matrix<T, ROWS, COLUMNS> operator+(const Matrix<T, ROWS, COLUMNS> &m) const
    {
        Matrix<T, ROWS, COLUMNS> result;

        for (uint8_t row = 0; row < ROWS; row++) {
            for (uint8_t column = 0; column < COLUMNS; column++) {
                result.array[row][column] = array[row][column] + m.array[row][column];
            }
        }

        return result;
    }

    Matrix<T, ROWS, COLUMNS> operator-(const Matrix<T, ROWS, COLUMNS> &m) const
    {
        Matrix<T, ROWS, COLUMNS> result;

        for (uint8_t row = 0; row < ROWS; row++) {
            for (uint8_t column = 0; column < COLUMNS; column++) {
                result.array[row][column] = array[row][column] - m.array[row][column];
            }
        }

        return result;
    }

    template <uint8_t N>
    Matrix<T, ROWS, N> operator*(const Matrix<T, COLUMNS, N> &m) const
    {
        Matrix<T, ROWS, N> result;

        for (uint8_t row = 0; row < ROWS; row++) {
            for (uint8_t column = 0; column < N; column++) {
                result.array[row][column] = 0;
                for (uint8_t element = 0; element < COLUMNS; element++) {
                    result.array[row][column] +=
                        array[row][element] * m.array[element][column];
                }
            }
        }

        return result;
    }

    friend Matrix<T, ROWS, COLUMNS> operator*(double k,
                                              const Matrix<T, ROWS, COLUMNS> &m)
    {
        Matrix<T, ROWS, COLUMNS> result;

        for (uint8_t row = 0; row < ROWS; row++) {
            for (uint8_t column = 0; column < COLUMNS; column++) {
                result.array[row][column] = k * m.array[row][column];
            }
        }

        return result;
    }

    friend Matrix<T, ROWS, COLUMNS> operator*(const Matrix<T, ROWS, COLUMNS> &m,
                                              double k)
    {
        return k * m;
    }
};
```

I have doubts about the decision to make the matrix storage itself, i.e. the `array` member, public in the `Matrix` class.

## Show that an invertible linear transformation is represented by a square matrix

Please point out any flaws in my argument.

*Proof.* Suppose $T: V \to W$ is an invertible linear transformation, let $B_{V} = \{v_{1},\ldots,v_{n}\}$ and $B_{W} = \{w_{1},\ldots,w_{m}\}$ be ordered bases of $V$ and $W$, and let $A = (a_{ij})_{m \times n}$ be the matrix representation of $T$. Since $T$ is invertible, $T$ is an isomorphism.

We use the following lemma:

If $T: V \to W$ is an isomorphism, then $\dim V = \dim W$.

*Proof.* Suppose $T: V \to W$ is an isomorphism and let $B = \{v_{1},\ldots,v_{n}\}$ be a basis of $V$. We now show that $S = \{T(v_{1}),\ldots,T(v_{n})\}$ is a basis of $W$.

$$\operatorname{span}(S) = \left\{ \sum_{i=1}^{n} a_{i}T(v_{i}) \,\middle|\, a_{i} \in \mathbb{F},\ T(v_{i}) \in S \right\}$$
$$= \left\{ T\!\left(\sum_{i=1}^{n} a_{i}v_{i}\right) \,\middle|\, a_{i} \in \mathbb{F},\ T(v_{i}) \in S \right\}$$
$$= \{ T(v) \mid v \in V \}$$
$$= \operatorname{im} T$$

Since $T$ is onto, $\operatorname{im} T = W$. Thus $S$ spans $W$.

Let $a_{1},\ldots,a_{n} \in \mathbb{F}$ be such that $\sum_{i=1}^{n} a_{i}T(v_{i}) = 0$. Then
$$\sum_{i=1}^{n} a_{i}T(v_{i}) = T\left(\sum_{i=1}^{n} a_{i}v_{i}\right) = 0.$$
Since $T$ is an isomorphism (in particular, injective), $\sum_{i=1}^{n} a_{i}v_{i} = 0$. And since $B$ is linearly independent, $a_{i} = 0$ for all $i \in \{1,\ldots,n\}$.

Thus $S$ is a basis of $W$. $\square$

Thus by the lemma, $\{T(v_{1}),\ldots,T(v_{n})\}$ is a basis of $W$, so $\dim V = \dim W$ and $A$ is a square matrix. $\square$

## How to generate a random matrix with arbitrary correlation between elements?

I would like to find a smart way to generate an $N \times N$ random matrix $M$ with arbitrary correlations:
$$\boxed{\langle M_{ij} M_{kl} \rangle = \tau_{ijkl}}$$
where the mean and variance of the elements are given by:
$$\langle M_{ij} \rangle = 0, \qquad \langle M_{ij}^{2} \rangle = \sigma^{2}.$$

The case I am interested in is actually a sub-problem of this. I would like to generate a matrix whose elements follow a normal distribution with mean $0$ and variance $1/N$, and whose elements are correlated in the following way:
$$\langle M_{ij} M_{ki} \rangle = \tau_{ijk}$$
When $\tau_{ijk} = \delta_{jk} N^{-1}$, I recover a symmetric matrix.
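One general (if memory-hungry) approach is to treat the $N^2$ entries as a single Gaussian vector with covariance $C_{(ij),(kl)} = \tau_{ijkl}$ and sample it through a matrix square root of $C$. A NumPy sketch under my own naming, using the symmetric special case $\tau_{ijk} = \delta_{jk} N^{-1}$ for concreteness:

```python
import numpy as np

N = 4
sigma2 = 1.0 / N
rng = np.random.default_rng(0)

# Covariance of the flattened matrix: C[(i,j),(k,l)] = <M_ij M_kl>.
# Here: variance sigma2 on the diagonal, plus <M_ij M_ji> = sigma2,
# i.e. the special case tau_{ijk} = delta_{jk}/N that forces symmetry.
C = np.zeros((N * N, N * N))
for i in range(N):
    for j in range(N):
        C[i * N + j, i * N + j] = sigma2
        C[i * N + j, j * N + i] = sigma2

# C is only positive SEMI-definite here (M_ij and M_ji are perfectly
# correlated), so use an eigendecomposition rather than plain Cholesky.
w, V = np.linalg.eigh(C)
L = V @ np.diag(np.sqrt(np.clip(w, 0, None)))

# L z has covariance V diag(w) V^T = C for standard normal z.
M = (L @ rng.standard_normal(N * N)).reshape(N, N)
```

For a strictly positive definite $\tau_{ijkl}$ one could use `np.linalg.cholesky(C)` directly instead of the eigendecomposition.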