linear algebra – How to multiply polynomials quickly?

Let’s say I have a sequence $A$ of $N$ non-negative integers, from which $N$ polynomials are generated like this:

For example, if $N = 4$:

$$ P_1 (x) = (A_1)(A_1 + A_2 x)(A_1 + A_2 x + A_3 x^2)(A_1 + A_2 x + A_3 x^2 + A_4 x^3) $$

$$ P_2 (x) = (A_2)(A_2 + A_3 x)(A_2 + A_3x + A_4x^2) $$

$$ P_3 (x) = (A_3)(A_3 + A_4 x) $$

$$ P_4 (x) = (A_4) $$

Now, I want to find the product $P$ of these $N$ polynomials.
For example, here: $ P_1 (x) P_2 (x) P_3 (x) P_4 (x) $

My question is, can we do it in $O(N)$ or $O(N \log N)$ or $O(N \log x)$? If not, then what’s the best complexity possible for this? Obviously I want something better than $O(N^2)$.
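To pin down the definition, here is a sketch (names `prefix_polynomials` and `product_all` are mine, not from the question): a brute-force builder of the $N$ polynomials, and a product routine that always combines the two shortest factors first. With `np.convolve` replaced by an FFT-based multiply, this pairing strategy is the standard way to multiply many polynomials in roughly $O(T \log T \log N)$ time, where $T$ is the total degree.

```python
import heapq
import numpy as np

def prefix_polynomials(A):
    """Brute-force construction of the N polynomials: P_k is the product
    of the prefix-sum polynomials of A starting at index k (0-based)."""
    n = len(A)
    polys = []
    for k in range(n):
        prod = np.array([1.0])
        for j in range(k, n):
            # A[k] + A[k+1] x + ... + A[j] x^{j-k}; index i = coeff of x^i
            prod = np.convolve(prod, np.array(A[k:j + 1], dtype=float))
        polys.append(prod)
    return polys

def product_all(polys):
    """Multiply a list of polynomials, always combining the two shortest
    factors first (Huffman-style) so intermediate products stay small."""
    heap = [(len(p), i, p) for i, p in enumerate(polys)]
    heapq.heapify(heap)
    tie = len(polys)  # tie-breaker so arrays are never compared directly
    while len(heap) > 1:
        _, _, p = heapq.heappop(heap)
        _, _, q = heapq.heappop(heap)
        r = np.convolve(p, q)  # swap in an FFT multiply for large inputs
        heapq.heappush(heap, (len(r), tie, r))
        tie += 1
    return heap[0][2]

coeffs = product_all(prefix_polynomials([1, 2]))  # (1)(1+2x)·(2) = 2 + 4x
```

The brute-force builder is itself super-quadratic; it only serves to make the polynomials concrete for testing faster approaches.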

algebra precalculus – Shrinkage as a percentage, $100\%$ or $\%$ in this formula?

I have a question regarding calculating percentage in the following:

This shows the shrinkage as a percentage:
$$
\text{Saving percentage} = \frac{\text{SizeBeforeCompression} - \text{SizeAfterCompression}}{\text{SizeBeforeCompression}}\ \% \tag 1
$$

Example:

$\text{SizeBeforeCompression} = \text{65 536 bytes}$ and $\text{SizeAfterCompression} = \text{16 384 bytes}$.

The saving percentage is
$$
\text{Saving percentage} = \frac{\text{65 536} - \text{16 384}}{\text{65 536}}\ \% = 0{,}75\ \%
$$

Question:

$0{,}75\ \%$ isn’t correct (I guess?); we actually have to multiply by $100$ to find the right percentage, i.e. $0{,}75 \cdot 100\ \% = 75\ \%$. So isn’t it more correct to write $(1)$ as
$$
\text{Saving percentage} = \frac{\text{SizeBeforeCompression} - \text{SizeAfterCompression}}{\text{SizeBeforeCompression}} \cdot 100\ \% \tag 2
$$

?
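The arithmetic can be checked directly (this little check is an addition of mine, not from the post); the raw ratio is a fraction in $[0,1]$, and the factor $100$ converts it to percent:

```python
def saving_percentage(size_before, size_after):
    """Shrinkage as a percentage: the raw ratio lies in [0, 1], so it
    must be multiplied by 100 to be read as a percent."""
    return (size_before - size_after) / size_before * 100

print(saving_percentage(65536, 16384))  # 75.0
```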

linear algebra – Show that if $A$ is a symmetric $n\times n$ matrix then $q(x)=x^TAx=\sum_{i=1}^n\lambda_ix_i^2$

I found this on wiki: if $A$ is a symmetric $n\times n$ matrix with eigenvalues $\lambda_1,\ldots,\lambda_n$, then
$$q(x)=x^TAx=\sum_{i=1}^n\lambda_ix_i^2$$
I tried to prove this by writing $A$ as $PDP^T$, but it didn’t work out:
\begin{align*}
q(x)=&(x^TP)D(P^Tx)\\
=&
\begin{pmatrix}x_1&\dots&x_n\end{pmatrix}
\begin{pmatrix}
p_{11}&\dots&p_{1n}\\
\vdots&\ddots&\vdots\\
p_{n1}&\dots&p_{nn}
\end{pmatrix}
\begin{pmatrix}
\lambda_1&\dots&0\\
\vdots&\ddots&\vdots\\
0&\dots&\lambda_n
\end{pmatrix}
\begin{pmatrix}
p_{11}&\dots&p_{n1}\\
\vdots&\ddots&\vdots\\
p_{1n}&\dots&p_{nn}
\end{pmatrix}
\begin{pmatrix}x_1\\\vdots\\x_n\end{pmatrix}\\
=&
\begin{pmatrix}\sum_{i=1}^nx_ip_{i1}&\dots&\sum_{i=1}^nx_ip_{in}\end{pmatrix}
\begin{pmatrix}
\lambda_1&\dots&0\\
\vdots&\ddots&\vdots\\
0&\dots&\lambda_n
\end{pmatrix}
\begin{pmatrix}\sum_{i=1}^nx_ip_{i1}\\\vdots\\\sum_{i=1}^nx_ip_{in}\end{pmatrix}\\
=&\begin{pmatrix}\lambda_1\sum_{i=1}^nx_ip_{i1}&\dots&\lambda_n\sum_{i=1}^nx_ip_{in}\end{pmatrix}\begin{pmatrix}\sum_{i=1}^nx_ip_{i1}\\\vdots\\\sum_{i=1}^nx_ip_{in}\end{pmatrix}\\
=&\sum_{j=1}^n\lambda_j\left(\sum_{i=1}^n x_ip_{ij}\right)^2
\end{align*}

This is all I have so far. Could someone help me with this? Thank you.
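The last line of the derivation is in fact the statement, once $x$ is read in the eigenbasis coordinates $y = P^T x$ (so $y_j = \sum_i x_i p_{ij}$). A quick numerical sanity check of that identity (my addition, not part of the question):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2            # random symmetric matrix
lam, P = np.linalg.eigh(A)   # A = P diag(lam) P^T with P orthogonal

x = rng.standard_normal(4)
y = P.T @ x                  # coordinates of x in the eigenbasis

q_direct = x @ A @ x
q_eigen = np.sum(lam * y**2)  # sum of lambda_j * y_j^2
assert np.isclose(q_direct, q_eigen)
```

This illustrates that $q(x)=\sum_j\lambda_j y_j^2$ holds with $y=P^Tx$, which is exactly the expression the derivation above arrives at.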

Some tips on an algebraic graph theory (or graph algebra) problem

I’m looking for some tips on the following problem that I am facing. Imagine we have a tree $T$, i.e. an acyclic connected graph. We can label its edges using $\{0,1\}$. Note that these labelings are just functions $f:E\rightarrow\{0,1\}$, where $E$ denotes the set of edges of $T$. Say $T$ contains $q$ edges; then we can produce $2^q$ labelings. Denote by $L$ the set of all labelings. Given two elements of $L$, we may define their sum by adding corresponding labels modulo 2. Obviously, we obtain the group $(\mathbb{Z}/2\mathbb{Z})^q$.

Now, let’s label the edges of our tree using some injective function $F:E\rightarrow\{1,\ldots,q\}$. Let’s say the vertices of our tree form the set $V=\{0,\ldots,q\}$ (note that the set of vertices of any tree contains one more element than the set of its edges). For every function $f\in L$ we can define the following function $f':V\rightarrow\mathbb{Z}$ labeling the vertices. Put $f'(0)=0$ and then proceed inductively: taking an edge $ab$ such that $a$ has already been labeled, put $f'(b)=|f'(a)+(-1)^{f(ab)}F(ab)|$. I’m trying to prove that for some $f\in L$ the function $f'$ is injective.

The following picture demonstrates what we get for some tree with labeling $F$ (denoted in red). Using $\{+,-\}$ instead of $\{0,1\}$, I first built all possible labelings $L$ (the first row). After that I produced the corresponding labelings of the vertices (the second row). The lowest row contains the values of the obtained labelings. We see that most of them are injective (denoted in green).

[figure: a small tree with edge labeling $F$ in red, all sign labelings, and the resulting vertex labelings, with the injective ones marked in green]

Actually, this problem comes from a generalization of the Ringel–Kotzig conjecture that I made in my Russian article. There, 81 of 84 problems were solved; the exceptions are the well-known conjecture with its equivalent form and the problem I’m sharing here. I would be most grateful if you could offer some algebraic structures (expanding the group I’ve mentioned) that could be useful for solving the problem.
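For experimenting, the construction of $f'$ and the exhaustive search over sign labelings are easy to code. The following sketch (function names and the small path example are mine) enumerates all $2^q$ labelings and returns one making $f'$ injective, if any:

```python
from itertools import product

def vertex_labels(edges, F, signs):
    """Build f' from a sign labeling: f'(0) = 0 and, walking edges (a, b)
    with a already labeled, f'(b) = |f'(a) + (-1)^s * F(ab)|."""
    fp = {0: 0}
    for (a, b), s in zip(edges, signs):
        fp[b] = abs(fp[a] + (-1) ** s * F[(a, b)])
    return fp

def some_injective_labeling(edges, F):
    """Search all 2^q sign labelings for one making f' injective."""
    for signs in product((0, 1), repeat=len(edges)):
        fp = vertex_labels(edges, F, signs)
        if len(set(fp.values())) == len(fp):
            return signs
    return None

# A path 0-1-2-3 with F(e_i) = i + 1 (a made-up small example).
# Edges must be listed so the first endpoint is already labeled.
edges = [(0, 1), (1, 2), (2, 3)]
F = {e: i + 1 for i, e in enumerate(edges)}
print(some_injective_labeling(edges, F))  # (0, 0, 0): labels 0, 1, 3, 6
```

This is only a testing aid, of course; it does not suggest the algebraic structure asked for.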

abstract algebra – Integral closure of a monomial ideal

Recall that given an ideal $I$ in $R=k[x_1,\ldots,x_n]$, an element $r\in R$ is integral over $I$ if $r$ satisfies an equation of the form
$$r^m+a_1r^{m-1}+\ldots+a_{m-1}r+a_m=0,$$
where $a_i\in I^i$ for each $i=1,\ldots,m$. The set (actually, an ideal) of elements integral over $I$ is denoted by $\overline{I}$.

Goal: I want to prove the following result (taken from Villarreal’s “Monomial Algebras”): let $I$ be a monomial ideal of $R$. Then
$$\overline{I}=(x^a\mid x^{ma}\in I^m \text{ for some } m\geq 1).$$

Proof: I’m having trouble proving the $\subset$ inclusion. Consider $r=x^a\in\overline{I}$: by definition it satisfies an equation
$$r^n+a_1r^{n-1}+\ldots+a_{n-1}r+a_n=0,$$
where $a_i\in I^i$.

Now quoting: “since $I$ is monomial ideal one obtains $r^m\in I^m$ for some $m\geq 1$. Observing that $\overline{I}$ is a monomial ideal the asserted equality follows.”

I notice that for each $i=1,\ldots,n$ the element $a_ir^{n-i}$ belongs to $I$, and therefore $r^n\in I$ as well. But apart from this I don’t know how to continue: OK, $r^n=(x^a)^n\in I$, but I don’t have any constraint on which power $I^t$ it belongs to.
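For monomial ideals the right-hand condition is purely combinatorial: $x^{b}\in I^m$ iff the exponent vector $b$ componentwise dominates a sum of $m$ generator exponents. A small sketch checking this (the example $I=(x^2,y^2)$ and all names are mine, chosen to illustrate the statement):

```python
from itertools import combinations_with_replacement

def in_power(exp, gens, m):
    """x^exp ∈ I^m for the monomial ideal I = (x^g : g in gens):
    true iff exp dominates (componentwise) a sum of m generator
    exponent vectors, with repetition allowed."""
    for combo in combinations_with_replacement(gens, m):
        s = [sum(g[i] for g in combo) for i in range(len(exp))]
        if all(e >= si for e, si in zip(exp, s)):
            return True
    return False

def in_integral_closure(a, gens, max_m=6):
    """The right-hand condition of the statement: x^{ma} ∈ I^m for
    some m >= 1 (searched up to max_m)."""
    return any(in_power([m * ai for ai in a], gens, m)
               for m in range(1, max_m + 1))

# I = (x^2, y^2): the monomial xy satisfies (xy)^2 = x^2 y^2 ∈ I^2,
# so xy lies in the integral closure even though xy ∉ I.
gens = [(2, 0), (0, 2)]
print(in_power((1, 1), gens, 1))          # False: xy ∉ I
print(in_integral_closure((1, 1), gens))  # True
```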

Thanks in advance to anyone.

algebra precalculus – Am I wrong, or is the answer in the book wrong?

Given $S=1+3+5+\cdots+2017+2019$. Find $$\frac{1}{1010}S-1008$$

My attempt is:

$S=1+3+5+\cdots+2017+2019$

$S=\frac{2019-1}{2}(1+2019)$

$S=2038180$

Now I find
$$\frac{1}{1010}S-1008=\frac{1}{1010}\cdot 2038180-1008=1010$$

But the answer in the book is $2$. Where is my mistake, or is the answer in the book wrong? Help me please.
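The sum itself can be checked numerically (an addition of mine, not from the post); counting the terms of the arithmetic series is where a discrepancy can creep in, since $1,3,\ldots,2019$ has $1010$ terms:

```python
# Direct check of the sum: 1, 3, ..., 2019 has 1010 terms.
S = sum(range(1, 2020, 2))
print(S)                  # 1020100 = 1010**2 (sum of first n odd numbers is n^2)
print(S / 1010 - 1008)    # 2.0
```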

ac.commutative algebra – Characterization of algebraic integers generating a prime ideal

Let $\alpha$ be an algebraic integer and let $\mathcal{O}_{\mathbb{Q}(\alpha)}$ be the ring of integers of $\mathbb{Q}(\alpha)$.

Question: How can we characterize the algebraic integers $\alpha$ such that $\alpha\mathcal{O}_{\mathbb{Q}(\alpha)}$ is a prime ideal of $\mathcal{O}_{\mathbb{Q}(\alpha)}$?

Example: if $\alpha\in\mathbb{Z}$, then $\mathcal{O}_{\mathbb{Q}(\alpha)}=\mathbb{Z}$, so that $\alpha\mathcal{O}_{\mathbb{Q}(\alpha)}$ is a prime ideal iff $\alpha$ is a prime number (up to sign).
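As a concrete instance, one can test primality of $(\alpha)$ in a fixed ring such as the Gaussian integers $\mathbb{Z}[i]$ (note this is not quite the question’s setting when $\alpha\in\mathbb{Z}$, since then $\mathcal{O}_{\mathbb{Q}(\alpha)}=\mathbb{Z}$). This sketch (mine) uses the classical classification of Gaussian primes:

```python
def is_prime(n):
    """Trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def generates_prime_ideal(a, b):
    """Does alpha = a + b*i generate a prime ideal of Z[i]?
    Classical fact: yes iff N(alpha) = a^2 + b^2 is a rational prime,
    or alpha is a unit times a rational prime p with p ≡ 3 (mod 4)."""
    norm = a * a + b * b
    if is_prime(norm):
        return True
    p = max(abs(a), abs(b))
    return {abs(a), abs(b)} == {0, p} and is_prime(p) and p % 4 == 3

print(generates_prime_ideal(1, 1))  # True:  norm 2 is prime
print(generates_prime_ideal(3, 0))  # True:  3 ≡ 3 (mod 4) stays prime
print(generates_prime_ideal(5, 0))  # False: 5 = (2+i)(2-i) splits
```

Already this one field shows that the answer depends on how $\alpha$ (or its norm) factors, which is what makes the general characterization delicate.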

abstract algebra – An equality on the sum, the intersection and the product of ideals

We know that in the ring $\mathbb{Z}$ the following equality holds:
$$
(I+J)(I \cap J) = IJ
$$

for any ideals $I$ and $J$ in $\mathbb{Z}$. It can be interpreted as the fact that for any two integers $a$ and $b$,
$$
\mathrm{lcm}(a,b) \times \gcd(a,b) = ab.
$$
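The dictionary behind that interpretation, checked numerically (my addition): in $\mathbb{Z}$, $(a)+(b)=(\gcd(a,b))$, $(a)\cap(b)=(\mathrm{lcm}(a,b))$, and $(a)(b)=(ab)$.

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# (I+J)(I ∩ J) = IJ in Z becomes gcd(a, b) * lcm(a, b) == a * b.
for a, b in [(12, 18), (7, 5), (100, 40)]:
    assert gcd(a, b) * lcm(a, b) == a * b
```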

My question is this: can we generalize this equality to some broader contexts? For example, does this equality hold in an arbitrary PID (principal ideal domain) or UFD (unique factorization domain)? Does it hold in an arbitrary Dedekind domain, etc.?

My Ideas and Attempts:

  1. It remains true in any PID, as we can directly use the same proof as for the fact about the lcm and gcd of two integers.

  2. I do not think the statement holds in every UFD, but I am not able to provide a counterexample, and I’m hoping to get one from this question.

  3. Still, is it true that the equality holds for principal ideals in a UFD? (I haven’t proved the above claim.)

  4. Since the ring of integers in algebraic number theory is a generalization of the ring $\mathbb{Z}$ to number fields (finite extensions of $\mathbb{Q}$), does such an equality hold in Dedekind domains (or at least in the ring of integers $\mathcal{O}_K$ of a number field $K$ over $\mathbb{Q}$)?

I have done the calculation for some rings, for example the ring of integers $R = \mathbb{Z}[\sqrt{-5}]$. In the ring $R$,
$$(2) = (2, 1+\sqrt{-5})^2 =: \mathfrak{p}_1^2, $$
$$(3) = (3, 1+\sqrt{-5})(3, 2+\sqrt{-5}) =: \mathfrak{p}_2 \mathfrak{p}_2^\prime, $$
$$(5) = (5, \sqrt{-5})^2 =: \mathfrak{p}_3^2.$$

Then consider the ideals
$$ I = (3) \mathfrak{p}_1 = \mathfrak{p}_2 \mathfrak{p}_2^\prime \mathfrak{p}_1 $$
and
$$ J = (5) \mathfrak{p}_1 = \mathfrak{p}_3^2 \mathfrak{p}_1 . $$

Hence,
$$
I + J = \mathfrak{p}_1 \mathfrak{p}_2 \mathfrak{p}_2^\prime \mathfrak{p}_3,
$$

$$
I \cap J = \mathfrak{p}_1 \mathfrak{p}_2 \mathfrak{p}_2^\prime \mathfrak{p}_3^2.
$$

Thus,
$$
(I+J)(I \cap J) = \mathfrak{p}_1^2 (\mathfrak{p}_2 \mathfrak{p}_2^\prime)^2 \mathfrak{p}_3^3 = (450, 90\sqrt{-5}),
$$

which is not a principal ideal. (I am not sure about this.) Yet
$$
IJ = \mathfrak{p}_1^2 \mathfrak{p}_2 \mathfrak{p}_2^\prime \mathfrak{p}_3^2 = (30),
$$

which is a principal ideal. Hence the equality does not hold in $R$. This is very strange to me, since the ring of integers is a generalization of $\mathbb{Z}$.

Thank you in advance for your answers and sorry for the possible mistakes in this question.

linear algebra – Reference on classifying real subspaces of complex vector spaces (based on restricted complex structure)

Every complex vector space can also be seen as a real vector space. If we now choose a real subspace, it may not be a complex subspace (in particular, if it has odd real dimension).

If the complex vector space is equipped with an inner product (for example, a Hilbert space), we can restrict the imaginary unit (also known as the linear complex structure) to any real subspace using the orthogonal projection. We can then classify the types of real subspaces by the spectrum of this “restricted complex structure”. In particular, if the restricted complex structure squares to minus the identity, i.e., is itself a complex structure, the real subspace is also a complex subspace. In general, the spectrum encodes how being a complex subspace fails.

I worked this out for myself, but I’m confident that this is standard material in the linear algebra of complex vector spaces. However, the standard introductory textbooks that I checked do not discuss real subspaces of complex vector spaces or their classification in the above way.

Do you know of a standard reference that I could cite when discussing this (in particular, the above mentioned classification based on the spectrum of the restricted complex structure)?
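To make the construction concrete, here is a small numerical sketch (my own, assuming the standard identification of $\mathbb{C}^2$ with $\mathbb{R}^4$): it builds the matrix $J$ of multiplication by $i$, restricts it to a real subspace via orthogonal projection, and checks the two extreme cases.

```python
import numpy as np

# C^2 viewed as R^4 with basis (Re e1, Im e1, Re e2, Im e2);
# multiplication by i becomes the block-diagonal matrix J.
J = np.kron(np.eye(2), np.array([[0.0, -1.0], [1.0, 0.0]]))

def restricted_structure(V):
    """Matrix of the restricted complex structure on the real span of the
    columns of V: orthonormalize, then compress J to the subspace."""
    Q, _ = np.linalg.qr(V)
    return Q.T @ J @ Q

# A complex line (a genuine complex subspace): span of e1 and i*e1.
complex_line = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
R1 = restricted_structure(complex_line)
# R1^2 = -I: the restriction is itself a complex structure.
assert np.allclose(R1 @ R1, -np.eye(2))

# A totally real 2-plane: span of Re e1 and Re e2.
real_plane = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
R2 = restricted_structure(real_plane)
# The restriction vanishes: J maps the plane into its orthogonal complement.
assert np.allclose(R2 @ R2, np.zeros((2, 2)))
```

Intermediate spectra between these two extremes correspond to the “partially complex” subspaces the classification is about.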

linear algebra – Subgroup of Permutations leaving matrices invariant

I have two $d\times d$ matrices $A$ and $B$, and I am interested in finding the subgroup $G$ of the symmetric group that permutes the indices while leaving both invariant. In other words, all $\sigma\in S_d$ such that

$$ A_{\sigma(i)\sigma(j)} = A_{ij}, \qquad\text{and}\qquad B_{\sigma(i)\sigma(j)} = B_{ij}.$$
Or, if $P(\sigma)$ is the corresponding $d\times d$ permutation matrix, then

$$P(\sigma) A P(\sigma)^{-1} = A \qquad\text{and}\qquad P(\sigma) B P(\sigma)^{-1} = B.$$

To make the problem a lot simpler, let

$$ a=\begin{pmatrix}
1 & 1 & 1 & 1\\
1 & 1 & -1 & -1\\
1 & -1 & 1 & -1\\
1 & -1 & -1 & 1
\end{pmatrix},\qquad b=\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & -1
\end{pmatrix}.$$

Then I am more specifically interested in $A$ and $B$ matrices of the form

$$ A^{(n)} = a\otimes\dots\otimes a, \qquad B^{(n)} = b\otimes\dots\otimes b,$$
where $n$ is the number of tensor factors.
Ideally I want the elements (or better, generators) of $G$ in terms of cycles. For $n=1$ it’s easy to see that $G=\mathbb{Z}_2$, and there is only one non-trivial permutation. For $n=2$, I can show that the group is at least of order 72.

But I would like to be able to find these more systematically using Mathematica. The brute-force approaches I’ve tried are not going to work, as they scale extremely poorly (even $n=2$ is beyond reach, since $|S_{16}|=16!\approx 20\cdot 10^{12}$).

I welcome any idea.

PS: finding the subgroup leaving $B$ invariant is easy, and it leaves a much smaller set of permutations to check, but it’s still unmanageable.
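The question asks for Mathematica, but for reference here is a language-agnostic brute-force sketch in Python (mine, feasible only for tiny $d$, here the $n=1$ case); it spells out exactly the invariance condition $A_{\sigma(i)\sigma(j)}=A_{ij}$ being tested:

```python
import numpy as np
from itertools import permutations

def invariance_group(A, B):
    """All sigma in S_d with A[sigma(i), sigma(j)] = A[i, j] and the
    same for B. Pure brute force: only feasible for very small d."""
    d = A.shape[0]
    group = []
    for perm in permutations(range(d)):
        p = list(perm)
        # A[np.ix_(p, p)][i, j] == A[p[i], p[j]]
        if (np.array_equal(A[np.ix_(p, p)], A)
                and np.array_equal(B[np.ix_(p, p)], B)):
            group.append(perm)
    return group

# The n = 1 case from the question.
a = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1,  1, -1],
              [1, -1, -1,  1]])
b = np.diag([1, 1, 1, -1])

G = invariance_group(a, b)
print(G)  # the identity and the transposition swapping indices 1 and 2
```

For larger $n$ one would instead compute the stabilizer of $B^{(n)}$ first (as noted in the PS) and filter it by $A^{(n)}$, or hand the problem to dedicated permutation-group machinery rather than enumerating $S_{4^n}$.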