linear algebra – Is simultaneous similarity of matrices independent from the base field?

Suppose that $F$ is a subfield of a field $G$ and, for
$n\times n$ matrices $A_1,\dots,A_m, B_1,\dots,B_m$
over $F$, there exists a matrix $T\in{\rm GL}_n(G)$
such that $T^{-1}A_iT=B_i$ for all $i$.

Does this imply that such a matrix $T$ can be chosen from ${\rm GL}_n(F)$?

Surely,

  • yes if $m=1$;
  • and yes if the field $F$ is infinite.
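
For the infinite case, one standard argument (my sketch, not part of the original post) runs through the determinant polynomial. The set
$$V=\{\,T\in F^{n\times n} : A_iT=TB_i \text{ for all } i\,\}$$
is an $F$-linear subspace whose extension of scalars to $G$ is the corresponding solution space over $G$. By hypothesis $\det$ does not vanish identically on that $G$-space, so $\det$ restricted to $V$ is a nonzero polynomial function; since $F$ is infinite, it takes a nonzero value at some $T\in V$, which is the required element of ${\rm GL}_n(F)$. (For $m=1$, one can instead use that the rational canonical form is unchanged by field extension.)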

linear algebra – Order of elements of a dihedral group

Consider the group $D_{15}$.

(a) Find $|D_{15}|$.

(b) How many elements $x$ in $D_{15}$ satisfy the equation $x^2 = e$, where $e$ is the identity element in $D_{15}$? Explain.

(c) What are the elements in $D_{15}$ with order 5?

(d) Find all possible orders of the elements in $D_{15}$.
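
Not part of the original question, but the counts are easy to verify by machine. Here is a brute-force sketch in C: it encodes an element of $D_{15}$ as a pair (rotation, reflection flag) and uses the standard dihedral relation (a reflection conjugates a rotation to its inverse) as the group law. The encoding and names are my own choices.

#include <stdio.h>

#define N 15  /* D_N: symmetries of a regular N-gon */

/* Element of D_N encoded as (r, s): rotate by r steps, then reflect if s = 1. */
typedef struct { int r, s; } elem;

/* Group law, using the dihedral relation: a reflection inverts a rotation. */
static elem mul(elem a, elem b)
{
    elem c;
    c.r = (a.r + (a.s ? N - b.r : b.r)) % N;
    c.s = a.s ^ b.s;
    return c;
}

/* Order of a: the smallest k >= 1 with a^k = identity. */
static int order(elem a)
{
    elem x = a;
    int k = 1;
    while (x.r != 0 || x.s != 0) { x = mul(x, a); k++; }
    return k;
}

int main(void)
{
    int total = 0, sq = 0, ord5 = 0;
    for (int s = 0; s <= 1; s++)
        for (int r = 0; r < N; r++) {
            elem a = { r, s };
            int k = order(a);
            total++;
            if (k <= 2) sq++;   /* x^2 = e iff the order of x divides 2 */
            if (k == 5) { ord5++; printf("order 5: rotation by %d\n", r); }
        }
    printf("|D_%d| = %d, solutions of x^2 = e: %d, elements of order 5: %d\n",
           N, total, sq, ord5);
}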

c – Linear probing 16 hashmap elements at a time using SIMD intrinsics

For my hashmap implementation, I’m caching the hash of each map element in an array of length NBUCKETS, where each entry corresponds to the element at that index in the hashmap. Assuming a hash consists of 64 bits, the first 57 bits decide the position (bucket) in the hashmap and the last 7 bits (i.e. H2(hash)) are used as control bytes for probing the table. More concretely, while finding an element in the map using linear probing, H2(hash) is compared for 16 elements at a time using SIMD intrinsics before actual keys are compared. Hence, each call to probe_group() probes 16 elements of the map at once.

Components of code where I think optimization is needed but don’t know how to do so (these have TODO tags in code):

  1. Populating the group array with 16 elements from hash_arr is expensive. Is there a way to load the last 7 bits of 16 consecutive hash values from hash_arr directly into a __m128i (i.e. ctrl), avoiding the temporary byte array group? (See the first sketch after the code below.)
  2. Checking which indices have bit 1 is expensive, because my current implementation applies (match >> i) & 1U for each index i from 0 to 15 inclusive. Is there a more efficient way to do this? (See the second sketch after the code below.)

Any suggestions on improving these components, as well as other parts of the code, will be deeply appreciated.

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

#define NBUCKETS 100
#define GROUP_SIZE 16

typedef char h2_t;

// return the low 7 bits of the hash (the H2 control byte)
static inline h2_t H2(size_t hash)
{
    return hash & 0x7f;
}

// If the ith element in `group` equals `hash`, the ith bit of the returned
// mask will be 1, otherwise it will be 0
static inline uint16_t probe_group(h2_t hash, h2_t *group)
{

    __m128i ctrl  = _mm_loadu_si128((__m128i*) group);
    __m128i match = _mm_set1_epi8(hash);
    return _mm_movemask_epi8(_mm_cmpeq_epi8(match, ctrl));
}

int main()
{
    // Dummy array of hashes. Each element corresponds to an element at that index in the hashmap.
    size_t hash_arr[NBUCKETS];
    for (int i = 0; i < NBUCKETS; i++)
        hash_arr[i] = i;

    // TODO Optimize this: Avoid using this array. Instead load the last 7 bits of 16 hash values directly into a `__m128i`.
    const int start_pos = 90;
    h2_t group[GROUP_SIZE];
    for (int i = 0; i < GROUP_SIZE; i++)
        group[i] = H2(hash_arr[(start_pos + i) % NBUCKETS]);

    // Element to be searched for in `group`.  ASCII for 'c' is 99
    h2_t find_ele = 'c';

    uint16_t match = probe_group(find_ele, group);

    // TODO Optimize this: Find a better way to check which indices have bit 1
    for (int i = 0; i < GROUP_SIZE; i++) {
        if ((match >> i) & 1U)
            printf("Element at index %d in group is equal to %cn", i, find_ele);
    }
}
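
On TODO 1, two directions seem worth considering; neither is from the original code. The simplest is the design change used by Swiss-table-style maps: store the 7-bit control bytes in their own contiguous array alongside the hashes, so the 16-byte load needs no conversion at all. Alternatively, on hardware with AVX-512F, the VPMOVQB instruction truncates eight 64-bit lanes to bytes, so two truncations build the group directly from hash_arr. A sketch, assuming size_t is 64 bits and that the 16 hashes do not wrap around NBUCKETS (the modulo case still needs separate handling); the function name is my own:

// Sketch (AVX-512F, compile with -mavx512f): build the control vector
// for the 16 hashes starting at `pos` without a temporary byte array.
static inline __m128i load_ctrl_avx512(const uint64_t *hash_arr, int pos)
{
    __m512i lo = _mm512_loadu_si512((const void *)(hash_arr + pos));
    __m512i hi = _mm512_loadu_si512((const void *)(hash_arr + pos + 8));
    // VPMOVQB: keep the low byte of each 64-bit lane (8 bytes per call).
    __m128i lo8 = _mm512_cvtepi64_epi8(lo);
    __m128i hi8 = _mm512_cvtepi64_epi8(hi);
    // Concatenate the two 8-byte halves, then mask down to 7 bits (H2).
    __m128i ctrl = _mm_unpacklo_epi64(lo8, hi8);
    return _mm_and_si128(ctrl, _mm_set1_epi8(0x7f));
}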
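
On TODO 2, the usual trick is to iterate only over the set bits instead of testing all 16 positions: count trailing zeros to find the lowest set bit, handle it, then clear it with match &= match - 1. With GCC or Clang this is __builtin_ctz (MSVC has _BitScanForward as the analogue). A sketch of the replacement loop:

// Sketch (GCC/Clang): visit only the set bits of `match`.
unsigned m = match;
while (m) {
    int i = __builtin_ctz(m);   // index of the lowest set bit
    printf("Element at index %d in group is equal to %c\n", i, find_ele);
    m &= m - 1;                 // clear the lowest set bit
}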

linear algebra – Math history research: a copy of “Zur relativen Wertbemessung der Turnierresultate”, eigenvector centrality by Edmund Landau


Solving a system of linear equations that has infinitely many variables (dynamic programming?)

I have a system of equations of the following form:
$$z(n)=c+\delta\big(p_1z(n+1)+p_2z(n-1)+(1-p_1-p_2)z(n)\big)\quad\text{if } n\geq 0$$
and
$$z(n)=\delta\big(p_1z(n+1)+p_2z(n-1)+(1-p_1-p_2)z(n)\big)\quad\text{if } n< 0$$

One interpretation of the system: you receive a reward of $c$ today if and only if your state variable $n$ is non-negative, and tomorrow’s reward is discounted by a factor $\delta\in(0,1)$.

And the state $n$ evolves as follows: it becomes $n+1$ with probability $p_1$, $n-1$ with probability $p_2$, and stays the same with probability $1-p_1-p_2$. $c$ is a fixed scalar.

How can I solve for $z(cdot)$ in this case? It is a system of simple linear equations but with infinitely many variables.

If this is not solvable, would it become solvable if I added boundary conditions to make it a finite problem? For example, restricting to $n\in\{-100,-99,\dots,99,100\}$, with some proper modification of the equations for $z(-100)$ and $z(100)$?
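
One standard route, sketched under the assumption $p_1, p_2 > 0$ (my outline, not a full solution): on each of the two regions the equation is a constant-coefficient linear recurrence, so the characteristic-root method applies. For $n \geq 0$ a constant particular solution satisfies $z = c + \delta z$, giving $z_p = c/(1-\delta)$. For the homogeneous part on either region, substituting $z(n) = \lambda^n$ yields
$$\delta p_1 \lambda^2 + \big(\delta(1-p_1-p_2) - 1\big)\lambda + \delta p_2 = 0,$$
which (since $\delta < 1$) has one root in $(0,1)$ and one root greater than $1$. Writing the general solution on each region with these two roots leaves four constants, pinned down by boundedness as $n \to \pm\infty$ (or by your boundary conditions at $n = \pm 100$ in the truncated version) together with consistency of the two regions near $n = 0$.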

multivariable calculus – Proof of the Formula for Linear Approximation With $n$ Variables?

The linear approximation of $f(\vec{x})$, where $\vec{x}$ has $n$ components, is given by $$f(\vec{x}) \approx f(\vec{a}) + \sum_{j=1}^n \frac{\partial f}{\partial x_j}(\vec{a})(x_j-a_j)$$ for $\vec{x}$ near $\vec{a}$. I understand this formula, but I don’t understand the proof. Can someone please explain it to me?
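
One standard route (my sketch, assuming $f$ is continuously differentiable near $\vec{a}$) reduces the statement to one variable along the segment from $\vec{a}$ to $\vec{x}$. Define $g(t) = f(\vec{a} + t(\vec{x}-\vec{a}))$ for $t \in [0,1]$. By the chain rule,
$$g'(t) = \sum_{j=1}^n \frac{\partial f}{\partial x_j}\big(\vec{a} + t(\vec{x}-\vec{a})\big)(x_j - a_j),$$
so
$$f(\vec{x}) - f(\vec{a}) = g(1) - g(0) = \int_0^1 g'(t)\,dt \approx g'(0) = \sum_{j=1}^n \frac{\partial f}{\partial x_j}(\vec{a})(x_j - a_j),$$
where the last step uses continuity of the partial derivatives: for $\vec{x}$ close to $\vec{a}$, $g'(t)$ stays close to $g'(0)$ on all of $[0,1]$.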

r – Relationship between logistic regression and linear regression

I’ve encountered a problem where I need to analyze the relationship between a movie’s length, its price, and its sales on a video streaming platform. I have two choices for quantifying sales as my dependent variable:

  1. whether or not a user ended up buying the movie
  2. selling rate (# of people who bought the movie / # of people who watched the trailer)

If I use the selling rate, I would essentially run a linear regression:
$$\text{selling rate} = \beta_0 + \beta_1\cdot\text{length} + \beta_2\cdot\text{price} + \beta_3\cdot\text{length}\times\text{price}$$

But if I’m asked to use option 1, where the response is a binary outcome, I assume I need to switch to logistic regression. How would the standard errors change? Would they be underestimated?

linear algebra – Recover approximate monotonicity of induced norms

Let $A$ be a square matrix with real entries.
Take any matrix norm $\|\cdot\|$ consistent with a vector norm.

Gelfand’s formula tells us that $\rho(A) = \lim_{n \rightarrow \infty} \|A^n\|^{1/n}$.

Moreover, from (1), for a sequence $(n_i)_{i \in \mathbb{N}}$ such that $n_i$ is divisible by $n_{i-1}$, we also know that the sequence $\|A^{n_i}\|^{1/n_i}$ is monotone decreasing and converges to $\rho(A)$. I am interested in what happens when this divisibility property does not hold.
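
(For the divisible case, the monotonicity is just submultiplicativity; my note, not from (1): if $m = kn$, then $\|A^{m}\|^{1/m} = \|(A^{n})^{k}\|^{1/(kn)} \leq \big(\|A^{n}\|^{k}\big)^{1/(kn)} = \|A^{n}\|^{1/n}$.)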

  1. If the matrix has non-negative entries, the general property seems to hold: for integers $m > n$, it is the case that $\|A^m\|^{1/m} \leq \|A^n\|^{1/n}$.

  2. If the matrix can have both positive and negative entries, this more general observation does not seem to hold. I am trying to understand why it fails, how bad the inequality can become, and whether it is possible to recover an inequality up to some function of $A$: $\|A^m\|^{1/m} \leq f(A)\cdot\|A^n\|^{1/n}$.

Any references to 1., or pointers for understanding 2. would be much appreciated.

(1) Yamamoto, Tetsuro. “On the extreme values of the roots of matrices.” Journal of the Mathematical Society of Japan 19.2 (1967): 173-178.

linear algebra – Homogeneous Equation with Complex Vector Solution: Converting to Real Functions

Solving the system
$$
\begin{array}{l}\frac{d x}{d t}=6 x-y \\ \frac{d y}{d t}=5 x+4 y\end{array}
$$

we get eigenvalues $\lambda_{1}=5+2 i$, $\lambda_{2}=5-2 i$. So the eigenvectors and corresponding solutions are:

$$
\mathbf{K}_{1}=\left(\begin{array}{c}1 \\ 1-2 i\end{array}\right), \quad \mathbf{X}_{1}=\left(\begin{array}{c}1 \\ 1-2 i\end{array}\right) e^{(5+2 i) t}
$$

$$
\mathbf{K}_{2}=\left(\begin{array}{c}1 \\ 1+2 i\end{array}\right), \quad \mathbf{X}_{2}=\left(\begin{array}{c}1 \\ 1+2 i\end{array}\right) e^{(5-2 i) t}
$$

Thus the general solution is:
$$
\mathbf{X}=c_{1}\left(\begin{array}{c}1 \\ 1-2 i\end{array}\right) e^{(5+2 i) t}+c_{2}\left(\begin{array}{c}1 \\ 1+2 i\end{array}\right) e^{(5-2 i) t}
$$

So converting the above solution to a real function via Euler’s formula:
$$
\begin{array}{l}e^{(5+2 i) t}=e^{5 t} e^{2 t i}=e^{5 t}(\cos 2 t+i \sin 2 t) \\ e^{(5-2 i) t}=e^{5 t} e^{-2 t i}=e^{5 t}(\cos 2 t-i \sin 2 t)\end{array}
$$

My text states to collect terms and replace $c_1 + c_2$ by $C_1$ and $(c_1 - c_2)i$ by $C_2$, the solution becoming $\mathbf{X}=C_{1} \mathbf{X}_{1}+C_{2} \mathbf{X}_{2}$, where
$$
\mathbf{X}_{1}=\left(\left(\begin{array}{l}1 \\ 1\end{array}\right) \cos 2 t-\left(\begin{array}{r}0 \\ -2\end{array}\right) \sin 2 t\right) e^{5 t}
$$

$$
\mathbf{X}_{2}=\left(\left(\begin{array}{r}0 \\ -2\end{array}\right) \cos 2 t+\left(\begin{array}{l}1 \\ 1\end{array}\right) \sin 2 t\right) e^{5 t}
$$

I seem to be making a mistake somewhere and don’t arrive at the same simplification. I’m looking for the steps that lead to the book’s form. I’ve tried a few times, multiplying the general solution across rows and fully expanding outside of its matrix form.
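
For what it’s worth, here is how the collection of terms can go (my sketch of the standard manipulation; write $\mathbf{K}_1 = \mathbf{B}_1 + i\mathbf{B}_2$ with $\mathbf{B}_1 = \left(\begin{smallmatrix}1\\1\end{smallmatrix}\right)$ and $\mathbf{B}_2 = \left(\begin{smallmatrix}0\\-2\end{smallmatrix}\right)$, so that $\mathbf{K}_2 = \mathbf{B}_1 - i\mathbf{B}_2$):
$$
\mathbf{X} = e^{5t}\Big[c_1(\mathbf{B}_1 + i\mathbf{B}_2)(\cos 2t + i\sin 2t) + c_2(\mathbf{B}_1 - i\mathbf{B}_2)(\cos 2t - i\sin 2t)\Big].
$$
Multiplying out and grouping like vector terms,
$$
\mathbf{X} = e^{5t}\Big[(c_1 + c_2)\big(\mathbf{B}_1\cos 2t - \mathbf{B}_2\sin 2t\big) + (c_1 - c_2)i\,\big(\mathbf{B}_2\cos 2t + \mathbf{B}_1\sin 2t\big)\Big],
$$
which is exactly $C_1\mathbf{X}_1 + C_2\mathbf{X}_2$ with $C_1 = c_1 + c_2$ and $C_2 = (c_1 - c_2)i$.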

linear algebra – What exactly is a vector transformation?

According to 3Blue1Brown’s Essence of Linear Algebra, one vector can be transformed into another vector while linearity is preserved; he shows this by rotating the basis vectors. In that picture the transformed basis vectors lose their orthogonality. My question is how we should visualize this: as a plane (an infinite 2-D sheet) stretching along a diagonal? Rotating the transformed basis vectors together with stretching fulfills this visual, except when everything is squashed onto a single line. But there are rotations and shears that keep orthogonality, I guess, so I am probably making a wrong assumption.
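
Not from the video, but a tiny numerical illustration of the point 3Blue1Brown makes: a linear map is completely determined by where it sends the basis vectors (its matrix columns), and a shear, for instance, sends them to non-orthogonal images. A sketch in C; the particular matrix is my arbitrary choice:

#include <stdio.h>

/* Apply the 2x2 matrix {{a, b}, {c, d}} to the vector (x, y). */
static void apply(double a, double b, double c, double d,
                  double x, double y, double out[2])
{
    out[0] = a * x + b * y;
    out[1] = c * x + d * y;
}

int main(void)
{
    /* A shear: i-hat stays at (1,0), j-hat is sent to (1,1). */
    double e1[2], e2[2];
    apply(1, 1, 0, 1, 1, 0, e1);  /* image of i-hat = first column  */
    apply(1, 1, 0, 1, 0, 1, e2);  /* image of j-hat = second column */

    printf("i-hat -> (%g, %g)\n", e1[0], e1[1]);
    printf("j-hat -> (%g, %g)\n", e2[0], e2[1]);

    /* Dot product of the images: zero iff orthogonality is kept. */
    printf("dot = %g (nonzero: the shear destroys orthogonality)\n",
           e1[0] * e2[0] + e1[1] * e2[1]);
}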