ag.algebraic geometry – Todd polynomials

Let $T_k(x_1,\ldots,x_n)$ be the Todd polynomials, $e_k(x_1,\ldots,x_n)$ the elementary symmetric polynomials, and $p_k(x_1,\ldots,x_n)$ the power sums of degree $k$.

We have the following generating formulas:
$$\sum_{k\geq 0}T_k(x_1,\ldots,x_n)t^k = \prod_{i=1}^n\frac{tx_i}{1-e^{-tx_i}}\,,$$
$$\sum_{k\geq 0}e_k(x_1,\ldots,x_n)t^k = \prod_{i=1}^n(1+tx_i)\,,$$
$$\sum_{k\geq 0}\frac{1}{k!}p_k(x_1,\ldots,x_n)t^k = \sum_{i=1}^ne^{tx_i}\,.$$
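For small cases, the generating product can be expanded with exact rational arithmetic to spot-check the first Todd polynomials against their known expressions $T_1 = e_1/2$, $T_2 = (e_1^2+e_2)/12$, $T_3 = e_1e_2/24$. A minimal sketch in Python (the evaluation point $x=(1,2)$ is an arbitrary choice of mine):

```python
from fractions import Fraction as Fr
from math import factorial

N = 4
# Taylor coefficients of g(u) = (1 - e^{-u})/u, so that u/(1 - e^{-u}) = 1/g(u)
g = [Fr((-1) ** k, factorial(k + 1)) for k in range(N + 1)]
# invert the power series g to get f with f * g = 1
f = [Fr(0)] * (N + 1)
f[0] = 1 / g[0]
for k in range(1, N + 1):
    f[k] = -sum(g[j] * f[k - j] for j in range(1, k + 1)) / g[0]

def mul(a, b):
    """Product of two power series truncated at degree N."""
    c = [Fr(0)] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                c[i + j] += ai * bj
    return c

xs = [1, 2]                       # evaluation point (x_1, x_2) = (1, 2)
todd = [Fr(1)] + [Fr(0)] * N
for x in xs:
    todd = mul(todd, [f[k] * x ** k for k in range(N + 1)])

e1, e2 = 1 + 2, 1 * 2
assert todd[1] == Fr(e1, 2)                # T_1 = e_1 / 2
assert todd[2] == Fr(e1 ** 2 + e2, 12)     # T_2 = (e_1^2 + e_2) / 12
assert todd[3] == Fr(e1 * e2, 24)          # T_3 = e_1 e_2 / 24
```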

There is an explicit relation between $(1/k!)p_k$ and $e_k$ given by Newton's identities, which can be expressed using generating series (see e.g. Wikipedia). Is there some similar expression for $T_k$ in terms of $e_k$ or $p_k$?

For example, suppose $X$ is a hyperkähler complex manifold. Replacing $x_i$ with $\alpha_i$, the roots of $c(TX)$, one can show that (see (3.13)):
$$\mathrm{td}(X) = \exp\Big(-2\sum_{n\geq 0} b_{2n}\,\mathrm{ch}_{2n}(TX)\Big)\,,$$

where $b_{2n}$ are the modified Bernoulli numbers. Is there perhaps a less explicit formula that holds without any assumptions on the geometry?

polynomials – Closed form for $\sum_{j=0}^{\infty}\binom{\alpha}{j}\binom{\beta}{j}x^j$

As stated, I wonder if there is a closed form for the generating function $F_{\alpha,\beta}(x):=\sum_{j=0}^{\infty}\binom{\alpha}{j}\binom{\beta}{j}x^j$, where $\alpha,\beta\in\mathbb{N}$. Calling this a generating function is slightly misleading, since $\binom{\alpha}{j}=0$ when $j>\alpha$, so this is really a finite sum. A few cases are known already: in the case $x=1$ we have that

$$F_{\alpha,\beta}(1)=\sum_{j=0}^{\infty}\binom{\alpha}{j}\binom{\beta}{j}=\binom{\alpha+\beta}{\alpha}$$

by the Chu–Vandermonde identity. The main approach I would take for this sort of problem is to use the recurrence relations for binomial coefficients to get a differential equation. In this case, this yields the equation

$$\left(x^{2}-x\right)F''\left(x\right)-\left(1+\left(\alpha+\beta-1\right)x\right)F'\left(x\right)+\alpha\beta F\left(x\right)=0$$

This equation is quite similar to an Euler differential equation, but the fact that the coefficients are polynomials with terms of varying degrees spoils that. I can't solve it, and Wolfram Alpha gives a useless answer in terms of one function that takes $9$ arguments and a separate function that takes $4$. This feels like the sort of problem that should have a nice solution, but fixing $\alpha$ and $\beta$ and looking at the polynomials generated, they do not seem to be very simple, nor do they have roots at rational numbers.
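The differential equation itself can be spot-checked numerically for small integer $\alpha,\beta$ (a sanity check, not a derivation):

```python
from math import comb

# F(a, b, x, d) = d-th derivative (d in {0, 1, 2}) of
# F_{a,b}(x) = sum_j C(a,j) C(b,j) x^j; the sum is finite since C(a,j) = 0 for j > a
def F(a, b, x, d=0):
    return sum(comb(a, j) * comb(b, j)
               * (j * (j - 1) if d == 2 else j if d == 1 else 1)
               * x ** max(j - d, 0)
               for j in range(max(a, b) + 1))

# check (x^2 - x) F'' - (1 + (a + b - 1) x) F' + a b F = 0 at sample points
for a, b in [(2, 3), (4, 4), (5, 2)]:
    for x in [0.3, 1.5, -0.7]:
        lhs = ((x ** 2 - x) * F(a, b, x, 2)
               - (1 + (a + b - 1) * x) * F(a, b, x, 1)
               + a * b * F(a, b, x))
        assert abs(lhs) < 1e-8
```

The same helper also confirms the Chu–Vandermonde evaluation, e.g. `F(2, 3, 1) == comb(5, 2)`.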

semidefinite programming – Non-negative bivariate polynomials in a rectangle

I have been working with non-negative univariate polynomials, and I found the following equivalent condition for checking whether a polynomial is non-negative:

The polynomial $g(x) = \sum_{r=0}^k y_rx^r$ satisfies $g(x) \geq 0$ for all $x \in (a, b)$ if and only if there exists a positive semidefinite matrix $X = (x_{ij})_{i,j=0,\ldots,k}$ such that
$$\sum_{\substack{i,j=0,\ldots,k \\ i+j=2q-1}}x_{ij}=0, \qquad q=1,\ldots, k,$$

$$\sum_{\substack{i,j=0,\ldots,k \\ i+j=2q}}x_{ij}=\sum_{m=0}^q\sum_{r=m}^{k+m-q}y_r \binom{r}{m} \binom{k-r}{q-m}a^{r-m}b^m, \qquad q=0,\ldots, k.$$

This result is found in
Bertsimas, D., & Popescu, I. (2005). Optimal inequalities in probability theory: A convex optimization approach. SIAM Journal on Optimization, 15(3), 780-804.
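One way to read the right-hand side of the even-degree conditions (my own observation, not a claim from the excerpt above) is as the $z^q$ coefficient of $(1+z)^k\,g\big(\frac{a+bz}{1+z}\big)$, the substitution that maps $[0,\infty)$ onto $[a,b)$. A quick exact-arithmetic check with an arbitrary sample $g$:

```python
from math import comb

# RHS of the even-degree condition, for coefficients y = (y_0, ..., y_k)
def rhs(y, a, b, q):
    k = len(y) - 1
    return sum(y[r] * comb(r, m) * comb(k - r, q - m) * a ** (r - m) * b ** m
               for m in range(q + 1) for r in range(m, k + m - q + 1))

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

# coefficients of (1+z)^k g((a+bz)/(1+z)) = sum_r y_r (a+bz)^r (1+z)^(k-r)
def substituted(y, a, b):
    k = len(y) - 1
    total = [0] * (k + 1)
    for r, yr in enumerate(y):
        term = [1]
        for _ in range(r):
            term = poly_mul(term, [a, b])        # multiply by (a + b z)
        for _ in range(k - r):
            term = poly_mul(term, [1, 1])        # multiply by (1 + z)
        for q in range(k + 1):
            total[q] += yr * term[q]
    return total

y, a, b = [3, -2, 5], 1, 4    # g(x) = 3 - 2x + 5x^2 on (1, 4); arbitrary sample
assert [rhs(y, a, b, q) for q in range(3)] == substituted(y, a, b)
```

In particular the $q=0$ and $q=k$ conditions reduce to $g(a)$ and $g(b)$ respectively.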

I was wondering whether this result (or any other, preferably using semidefinite programming) has been extended to bivariate polynomials; that is to say, whether one can find conditions for a bivariate polynomial $g(x,y)=\sum_{r=0}^k\sum_{s=0}^l a_{r,s}x^ry^s$ to be non-negative over some rectangle $(a,b)\times (c,d)$.

Thank you in advance.

Find all integer polynomials $f(p)$ that divide $2^p-2$ for every prime $p>2$

Find all polynomials $f(p)$ with integer coefficients that divide $2^p-2$ for every prime $p>2$.

From Fermat's little theorem we find that $2p, p, -2p, -p$ all work. But are there others, or can we prove that higher degrees won't work?
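A quick numeric check (not a proof) that the four candidates from Fermat's little theorem do divide $2^p-2$ for many odd primes:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

candidates = [lambda p: p, lambda p: 2 * p, lambda p: -p, lambda p: -2 * p]
odd_primes = [n for n in range(3, 500) if is_prime(n)]
for f in candidates:
    # 2^p - 2 = 2 (2^(p-1) - 1): p divides it by Fermat, 2 divides it trivially,
    # and since p is odd, 2p divides it as well
    assert all((2 ** p - 2) % f(p) == 0 for p in odd_primes)
```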

ac.commutative algebra – Discrete logarithm for polynomials

Let $p$ be a fixed small prime (I'm particularly interested in $p = 2$), and let $Q, R \in \mathbb{F}_p[X]$ be polynomials.

Consider the problem of determining the set of $n \in \mathbb{N}$ such that $X^n \equiv R$ in the ring $\mathbb{F}_p[X] / Q$. It is easy to see that this is either the empty set $\emptyset$, a singleton $\{ a \}$, or an arithmetic progression $\{ a + cm : m \in \mathbb{N} \}$.

In the special case where $Q$ is a primitive polynomial, $\mathbb{F}_p[X] / Q$ is a finite field, this is just the ordinary discrete logarithm problem, and there is an efficient quasi-polynomial-time algorithm for solving it.

What about when $Q$ is not a primitive polynomial? Can this more general problem be reduced to solving instances of the ordinary discrete logarithm, and therefore also be solved in quasi-polynomial time?

By the structure theorem for finitely generated modules over a PID, we can factor $\mathbb{F}_p[X]/Q$ as a direct sum of rings of the form $\mathbb{F}_p[X]/S^k$, where $S$ is an irreducible polynomial. If we could solve the discrete logarithm problem in each of these direct summands, then the Chinese remainder theorem would determine the solution in the original ring. Hence it suffices to consider the case where $Q$ is a power of an irreducible polynomial. But I can't see how to proceed any further, because 'power of an irreducible polynomial' is not the same thing as 'primitive polynomial'.

Motivation: when $p = 2$, this is equivalent to efficiently searching the output of a linear feedback shift register PRNG to determine where (if at all) a specific bit-string arises.
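To make the arithmetic-progression structure concrete, here is a minimal brute-force illustration for $p=2$, with polynomials encoded as Python ints (bit $i$ = coefficient of $X^i$). The modulus $Q = X^3+X+1$ and target $R$ are small examples of my choosing, not from the question:

```python
def polymulmod(a, b, q):
    """Carry-less product of a and b, reduced mod the polynomial q (over F_2)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a, b = a << 1, b >> 1
    dq = q.bit_length()
    while r.bit_length() >= dq:
        r ^= q << (r.bit_length() - dq)
    return r

Q = 0b1011   # X^3 + X + 1, irreducible (in fact primitive) over F_2
R = 0b110    # X^2 + X

sols, x = [], 1
for n in range(20):
    if x == R:
        sols.append(n)
    x = polymulmod(x, 0b10, Q)   # multiply by X
print(sols)   # [4, 11, 18]: the progression 4 + 7m, since X has order 7
```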

algebra precalculus – Solving polynomials with compound angle formulae

Solve $64x^6-96x^4+36x^2-3=0$. (Hint: consider expanding $\cos 6\theta$ in terms of $\cos \theta$.)

To solve this exercise I can simply derive an identity using de Moivre's theorem and then shuffle the equation around. It turns out that substituting $x=\cos \theta$ reduces the equation to $\cos 6\theta = \frac{1}{2}$.
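The reduction can be spot-checked numerically: with $x=\cos\theta$, the left-hand side collapses to $2\cos 6\theta - 1$:

```python
import math

# for x = cos(t):  64x^6 - 96x^4 + 36x^2 - 3 == 2*cos(6t) - 1,
# so the equation becomes cos(6t) = 1/2
for t in [0.1, 0.7, 1.3, 2.9]:
    x = math.cos(t)
    lhs = 64 * x ** 6 - 96 * x ** 4 + 36 * x ** 2 - 3
    assert abs(lhs - (2 * math.cos(6 * t) - 1)) < 1e-9
```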

However, I'm wondering: in the general case, is there any way to see this without the monster of a hint? In other words, given a polynomial, is there something I could look for in the coefficients that would suggest deriving a compound-angle formula may be a good idea?

polynomials – Extension of Coefficient to variable exponents

Say I have an expression that is polynomial in a variable, say $x$, in which some terms have exponents of $x$ that themselves involve variables. For example:

pol = a + b x^n + c x^(n + 1)

Is there a clean way to find the coefficient in front of any given term?

I could not get the built-in function Coefficient to work directly, namely

{Coefficient[pol, x, 0], Coefficient[pol, x, n], Coefficient[pol, x, n + 1]}

yields the output

{a, b + c x, c}

while I want it to yield

{a, b, c}

To fix this I used

Coefficient2[pol_, var_, exp_] := Coefficient[Coefficient[pol, var, exp], var, 0]

This was good enough to yield the correct results in my case, but it feels a bit nasty. Is there a better way?

gram schmidt – Problem with orthogonalizing the Laguerre polynomials

Alright, so I ran into a little problem while applying the Gram-Schmidt orthogonalization process to the functions $\{1,x,x^2,x^3,\ldots\}$ over $x\in(0,\infty)$ with weight function $\sigma(x)=e^{-x}$. As a reminder:

$$\langle x^n,x^m\rangle=\int_0^{\infty}x^n x^m e^{-x}\,dx=(n+m)!$$

The first three polynomials I obtained were:
$$\begin{aligned}
\phi_0 &= 1\\
\phi_1 &= x-1\\
\phi_2 &= x^2-4x-2
\end{aligned}$$

It's very strange to me that $\phi_2$ is orthogonal to $\phi_1$, but NOT orthogonal to $\phi_0$. Note that $\phi_1$ and $\phi_0$ are not only orthogonal to one another, but also each already normalized. Using these two for the orthogonalization process, you get $\phi_2$… Why is this happening?
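The stated inner products can be confirmed with exact arithmetic using $\langle x^n, x^m\rangle = (n+m)!$ from above (coefficient lists, lowest degree first):

```python
from math import factorial

# <p, q> with weight e^{-x} on (0, inf); p[i] is the coefficient of x^i
def ip(p, q):
    return sum(a * b * factorial(i + j)
               for i, a in enumerate(p) for j, b in enumerate(q))

phi0, phi1, phi2 = [1], [-1, 1], [-2, -4, 1]   # the polynomials obtained above
print(ip(phi2, phi1))   # 0: orthogonal to phi_1
print(ip(phi2, phi0))   # -4: indeed not orthogonal to phi_0
```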

polynomials – Direct proof monomial ideal torus fixed

I know that an ideal $I \subseteq K[x_1,\dots,x_n]$ is monomial if and only if it is closed under the action of the torus, i.e., $I$ is monomial if and only if it is fixed under the action $\varphi(x_i) = c_i x_i$ for all $1 \leq i \leq n$ and all $(c_1,\dots,c_n) \in (K^*)^n$.

I also know that, given an ideal $I \subseteq K[x_1,\dots,x_n]$, one can compute $\text{mon}(I)$, the largest monomial ideal contained in $I$, as described here. For anyone unfamiliar with this paper, the idea is to consider the ideal $t.I \subseteq K[x_1,\dots,x_n][t_1^{\pm},\dots,t_n^{\pm}]$, which is generated by the generators of $I$ but with each variable $x_i$ replaced by $t_i x_i$. Then
$$\text{mon}(I) = (t.I) \cap K[x_1,\dots,x_n].$$

The proof of this statement relies on the fact that $(t.I) \cap K[x_1,\dots,x_n]$ is a homogeneous ideal (w.r.t. a specific grading) and that a polynomial can only be homogeneous w.r.t. this grading if it is a monomial.

Now, I was wondering whether one could somehow directly prove that the ideal $(t.I) \cap K[x_1,\dots,x_n]$ is closed under the action of the torus, without the argument about homogeneity w.r.t. a certain grading. Has anyone seen a (direct) proof of this fact, or an idea of how it could be tackled?

Thanks already in advance for any help! 🙂

nt.number theory – Can factorization of very large numbers be aided by associating them with a series (described below) of quadratic polynomials?

My name is J. Calvin Smith. I graduated in 1979 with a Bachelor of Arts in Mathematics from Georgia College in Milledgeville, Georgia. My Federal career (1979-2012) in the US Department of Defense led me to learn, explore, and take courses in cryptologic mathematics and number theory for cryptology, which in turn led me to the problem (which I assume is still difficult) of factoring very large numbers. I came up with a series of quadratic polynomials associated with any number one is trying to factor (an example of the technique follows), and I want to find out whether this technique shows promise: whether it could help make factorization algorithmic and fast, reducing the order of magnitude of the problem.

It is most easily described by way of example. Let p = 1009 and q = 2003.

n = pq = 2021027.

In this situation we can effortlessly factorize $n$: we know what $p$ and $q$ are. What I want to do here is take what we know and use it to search for a method of factoring numbers of arbitrary size straightforwardly.

The largest integer less than the square root of $n$ is $1421$. Let us set a new variable $m$ to $1421$.

$1009 = m - 412$ and $2003 = m + 582$, so

$$n = m^2 + 170m - 239784.$$

The discriminant in the quadratic formula for this polynomial would be $B^2 - 4AC = 28900 + 959136 = 988036$, the square of $994$. Thus the quadratic formula gives us integer roots. This follows obviously from how we constructed the quadratic: as the product of two known factors.

But since $m^2 = 2019241$, we also have as a polynomial for $n$:

$$n = m^2 + 1786,$$

which cannot be factored using the quadratic formula.

But $1786 = m + 365 = 2m - 1056 = 3m - 2477$, and so on. Adding each of these expressions to $m^2$ produces a series of quadratic polynomials, almost all of which cannot be factored into integer roots.

Let us now look at $m^2 + 1786$, with or without the next few polynomials in the series thus constructed, and see whether we can determine/calculate ahead of time that we would be hitting the jackpot, so to speak, at the polynomial with the $170m$ term (i.e., the 171st quadratic in the series).

    B      C       B^2 - 4C
    0    1786      -7144   (no real square root)
    1     365      -1459   (no real square root)
    2   -1056       4228 = 65^2 + 3 = 66^2 - 128
    3   -2477       9917 = 99^2 + 116 = 100^2 - 83

In general here, the discriminant is $(B + 2842)^2 - 8084108$. (Completing the square in $B^2 + 5684B - 7144$ gives this form.) What I have not yet figured out, but suspect might be easy, is for which values of $B$ the discriminant becomes a perfect square, causing the quadratic formula to produce integer roots and lead us to the answer we want: the factorization. Further, will this approach scale nicely? I am hoping that discovering the perfect-square values of the discriminant's quadratic, regardless of the $n = pq$ chosen (with $p$ and $q$ unknown in that case), can be done algorithmically and easily.
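The search described above can be sketched directly: the $B$-th quadratic is $m^2 + Bm + C$ with $C = 1786 - mB$, and its discriminant is $B^2 - 4C = (B+2m)^2 - 4n$, so hitting a perfect square is a difference-of-squares condition on $4n$ (in this form the scan is essentially Fermat's factorization method). A minimal sketch for the worked example:

```python
import math

n, m = 2021027, 1421
r = n - m * m            # 1786, the constant term of the B = 0 quadratic
for B in range(5000):
    C = r - m * B
    D = B * B - 4 * C    # equals (B + 2*m)**2 - 4*n
    if D >= 0:
        s = math.isqrt(D)
        if s * s == D:
            r1, r2 = (-B + s) // 2, (-B - s) // 2   # integer roots of x^2 + Bx + C
            print(B, m - r1, m - r2)                # 170 1009 2003
            break
```

The scan stops at $B = 170$, the 171st quadratic, recovering the factors $1009$ and $2003$.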