arithmetic – Proof of associativity and commutativity for multiplication and addition of real numbers

This fundamental proof has been bothering me for a long time. I have seen the proofs on ProofWiki and other sites, but they use too much mathematical jargon. I would like a nice, intuitive proof using only basic arithmetic axioms and induction.
P.S. If it is not possible to prove this fact for the real numbers, at least give a proof for the natural numbers.
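
For the natural numbers, the induction can be made completely concrete. Here is a minimal sketch in Lean 4 (my own illustration, not taken from any of the proofs I have seen), using only the defining equations n + 0 = n and n + succ m = succ (n + m) together with induction; commutativity and associativity of multiplication follow the same pattern from its own two defining equations once these addition lemmas are available.

-- only the defining equations of + (Nat.add_zero, Nat.add_succ) and induction
open Nat

theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]

theorem succ_add' (m n : Nat) : succ m + n = succ (m + n) := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih, Nat.add_succ]

theorem add_assoc' (m n k : Nat) : (m + n) + k = m + (n + k) := by
  induction k with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, Nat.add_succ, Nat.add_succ, ih]

theorem add_comm' (m n : Nat) : m + n = n + m := by
  induction n with
  | zero => rw [Nat.add_zero, zero_add']
  | succ k ih => rw [Nat.add_succ, ih, succ_add']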
Thank You!

space complexity – Is there a polynomial sized arithmetic formula for iterated matrix multiplication?

I found an article on catalytic space, which describes how additional memory (which must be returned to its arbitrary initial state) can be useful for computation. There’s also an expository follow-up with some more details.

In particular, they describe a scheme for iterated matrix multiplication (for the purposes of this post, multiplying $n$ matrices of size $n \times n$) in log space, poly “catalytic space”, and polynomial time. The argument, to the best of my understanding, can be sketched as follows.

  1. Theorem 4 (second article) says any arithmetic formula (i.e. an arithmetic circuit with fan-out 1) of depth $d$ can be computed by a program of size $4^d$ (with all the previously mentioned space guarantees). Here, “program” is in the context of register machines, and the size is the number of instructions, which equals the runtime.

  2. Brent et al. 1973 proved that any arithmetic formula of size $s$ can be “balanced” to have depth $O(\log s)$, so combining with (1), it has a program of size $\mathrm{poly}(s)$.

  3. For some reason, I cannot find this last, implied claim in either of the articles: there is an arithmetic formula of size $s = \mathrm{poly}(n)$ for iterated matrix multiplication. This would imply the claim made by the papers, namely that IMM can be done in polynomial time within the other space bounds, but I can’t find the claim explicitly written out, which suggests I am missing something.

The smallest formula I can think of for iterated matrix multiplication is “divide and conquer” on the number of matrices, which results in size $n^{O(\log n)}$, and I don’t see any way to improve on this.
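
To spell out the recurrence behind that estimate (my own sketch, not taken from the papers): one entry of a product of $m$ matrices is an inner product of $2n$ entries of the two half-products, and a formula cannot share subformulas, so the size $F(m)$ of a single-entry formula satisfies

$$ F(m) = 2n \, F(m/2) + O(n), \qquad F(1) = 1, $$

which solves to $F(m) = n^{O(\log m)}$, i.e. $n^{O(\log n)}$ at $m = n$.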

The first linked article says “iterated matrix product can be computed transparently by polynomial size programs”, which would seem to follow by putting together (1), (2), and (3), but it references an old thesis I can’t find anywhere.

So either I’ve totally misread the argument, or there should exist a polynomial-size arithmetic formula for iterated matrix multiplication. Does anyone know of one?

list manipulation – How to implement fast series multiplication

I need to generalize the SeriesData object for my own purposes. One of the things I need to do is reimplement the multiplication of series.

I’ve made two attempts at this, and both of them are slower than the built-in SeriesData. Is there an algorithm with better timing than mine?

(* Two of my implementations *)
multSerList1[lists__] := 
  Array[Plus @@ Times @@@ (MapThread[Part, {{lists}, #}, 1] & /@ 
        Flatten[Permutations /@ IntegerPartitions[#, {3}], 1]) &, 
    Min[Length /@ {lists}], Length[{lists}]]

multSerList2[listFirst_, listRest__] := 
  Fold[Function[{a1, a2}, 
      Array[Inner[Times, Take[a1, #], Reverse[Take[a2, #]], Plus] &, 
        Min[Length /@ {listFirst, listRest}]]], listFirst, {listRest}]

To test this, I try to multiply the following three series together:

realExampleList = {
  List @@ Normal[Series[Exp[y x], {x, 0, 4}]], 
  List @@ Normal[Series[Log[1 + c x], {x, 0, 5}]],
  List @@ Normal[Series[PolyLog[2, -n x], {x, 0, 3}]]}

Then multSerList1 @@ realExampleList or multSerList2 @@ realExampleList both yield the same expanded product.

I can apply AbsoluteTiming to time my code: it takes 0.0002 s for the first one and 0.0001 s for the second one on my machine. Compare multiplying the SeriesData objects directly (with the Normal and List stripped off):

realExampleListSerData = {
  Series[Exp[y x], {x, 0, 4}], 
  Series[Log[1 + c x], {x, 0, 5}],
  Series[PolyLog[2, -n x], {x, 0, 3}]};

Simply doing Times @@ realExampleListSerData gets the answer in 0.00002 s, which is five times faster.

I need help implementing series multiplication that performs approximately as well as the built-in SeriesData multiplication.
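
For reference, here is one more direction I have sketched (assuming the series are represented as lists of terms, exactly as in realExampleList; I have not verified that it matches the built-in speed). ListConvolve[a, b, {1, -1}, 0] computes the full Cauchy product of two coefficient (or term) lists, and Take truncates each intermediate result to the order of the shorter factor:

(* sketch: pairwise Cauchy products via ListConvolve; Take keeps only
   the terms below the common truncation order *)
multSerConvolve[lists__] := 
  Fold[Take[ListConvolve[#1, #2, {1, -1}, 0], 
      Min[Length[#1], Length[#2]]] &, {lists}]

multSerConvolve @@ realExampleList should give the same list as the two implementations above.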

Thanks!

Turn vector equation into matrix multiplication

I have physical equations of motion that describe the dependence of one vector field on the components of another vector field. Without going into too much detail, my differential equations involve a double curl that mixes the vector components on one side of the equation. Furthermore, I would like to discretize the space on which these equations are defined. So far I have been able to set up the discretized equations:

$$
\frac{d}{dt} \begin{pmatrix}
\vdots\\
P_x(i)\\
P_y(i)\\
P_z(i)\\
\vdots
\end{pmatrix} = \begin{pmatrix}
\vdots\\
A_y(i)-A_y(i+1)\\
-A_x(i)+A_x(i+1)-a\,A_z(i)\\
b\,A_y(i)\\
\vdots
\end{pmatrix}\tag{1}
$$

where $a$ and $b$ are some arbitrary constants and $\vec{A}$ and $\vec{P}$ are the two vector fields whose relation I want to determine. The lattice coordinates in one dimension are written as $i$ (e.g. $\frac{d}{dt} P_x(7) = A_y(7)-A_y(8)$).

Is there any way to automatically turn this into a matrix multiplication? I would want the result to look something like

$$
\frac{d}{dt} \begin{pmatrix}
\vdots\\
P_x(i)\\
P_y(i)\\
P_z(i)\\
P_x(i+1)\\
P_y(i+1)\\
P_z(i+1)\\
\vdots
\end{pmatrix} = \begin{pmatrix}
\ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots \\
\ddots & 0 & 1 & 0 & 0 & -1 & 0 & \ddots\\
\ddots & -1 & 0 & -a & 1 & 0 & 0 & \ddots\\
\ddots & 0 & b & 0 & 0 & 0 & 0 & \ddots\\
\ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots\\
\ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots\\
\ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots\\
\ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots
\end{pmatrix}
\begin{pmatrix}
\vdots\\
A_x(i)\\
A_y(i)\\
A_z(i)\\
A_x(i+1)\\
A_y(i+1)\\
A_z(i+1)\\
\vdots
\end{pmatrix} \tag{2}
$$

You can already see from this example that there is some regularity: $P(i+1)$, $P(i+2)$, etc. will have the same entries in the matrix as $P(i)$, only at positions shifted accordingly.

So the question is: If I gave you the vector on the right-hand side of equation (1), could you give me the matrix on the right-hand side of equation (2)?
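
One direction I have been experimenting with, in case it is useful to an answerer (a minimal sketch; the lattice size nLat and the symbols Ax, Ay, Az are my own stand-ins for the actual fields): CoefficientArrays extracts the matrix of a system that is linear in a given list of variables.

(* sketch: recover the matrix mat with rhs == mat . vars from the linear
   right-hand side of (1); the last lattice site is left out to avoid
   worrying about boundary conditions *)
nLat = 4;
vars = Flatten[Table[{Ax[i], Ay[i], Az[i]}, {i, nLat}]];
rhs = Flatten[Table[
    {Ay[i] - Ay[i + 1], -Ax[i] + Ax[i + 1] - a Az[i], b Ay[i]}, 
    {i, nLat - 1}]];
mat = Normal[CoefficientArrays[rhs, vars][[2]]]; (* {constants, matrix} *)
MatrixForm[mat]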

algebraic number theory – Cubes obtained from multiplication tables

Consider this triangular array:

        1            = 2^3
      2   2          = 3^3
    3   4   3        = 4^3
  4   6   6   4      = 5^3
5   8   9   8   5    = 6^3

This is nothing but the multiplication table stacked in a triangular format, and the amazing thing is that it produces cubes. As an example, take 3 + 4 + 3 = 10: if we multiply this by 6 and add the successor of the base number (3 + 1 = 4), we get 60 + 4 = 64, which is 4^3. If we want to find the cube of 5, we add 4 + 6 + 6 + 4 = 20, multiply by 6 to get 120, and add 5 to get 125, which is 5^3.

Is this a standard known fact in number theory? And if so, why does it work?
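
For what it’s worth, here is the computation I did to convince myself that the pattern holds in general (so the question is really whether it has a standard name). Row $n$ of the triangle consists of the products $k(n+1-k)$ for $k = 1, \dots, n$, so its sum is the $n$-th tetrahedral number:

$$ \sum_{k=1}^{n} k(n+1-k) = (n+1)\sum_{k=1}^{n} k - \sum_{k=1}^{n} k^2 = \frac{n(n+1)(n+2)}{6}. $$

Multiplying by 6 and adding $n+1$ then gives

$$ n(n+1)(n+2) + (n+1) = (n+1)\bigl(n(n+2)+1\bigr) = (n+1)^3. $$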

opengl – Why does this order of quaternion multiplication not introduce roll into my FPS-style character controller?

I’m working on an OpenGL-based project (in C#), employing quaternions to rotate my camera. I first tried:

cameraOrientation = cameraOrientation * framePitch * frameYaw;

This accumulated an undesired roll in my camera controller, which made rotations unusable. I found a post on Stack Exchange which suggested this reordering of the operations:

cameraOrientation = framePitch * cameraOrientation * frameYaw;

This completely solved the accumulation of roll. While I’m comfortable with matrix multiplication, I can’t seem to understand why the reordering removes the roll accumulation. Does anybody have any articles or images that would help me grok what’s happening here?

It feels weird not to understand such a fundamental operation in my project.
Thanks!

list manipulation – How to generate all combinations of multiplication of multiple variables (raised to some powers)

I want to generate all the possible (commutative) products of a few variables, each raised to some fixed powers.

Let’s take the following example: I have three variables x, y, z. The list I want to generate should contain all these variables, their products of two or three of them, and all of these with any variable raised to power 2:

{x y z, x y, x z, y z, x, y, z, 
 x^2 y^2 z^2, x^2 y^2 z, x^2 y z^2, x y^2 z^2,
 x^2 y z, x y^2 z, x y z^2,
 x^2 y^2, y^2 z^2, x^2 z^2,
 x^2 y, x y^2, x^2 z, x z^2, y^2 z, y z^2,
 x^2, y^2, z^2}

Basically, all possible products of any number of the variables, where each variable can also take the power 2.

Is there an easier way that avoids nested Do loops?
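
One loop-free direction (a minimal sketch; the names vars and monomials are just for illustration): let each variable independently take exponent 0, 1, or 2 via Tuples, multiply each tuple out, and drop the empty product.

(* sketch: enumerate the exponent choices {1, v, v^2} for each variable,
   form all products, and remove the constant 1 (all exponents zero) *)
vars = {x, y, z};
monomials = DeleteCases[Times @@@ Tuples[{1, #, #^2} & /@ vars], 1]

This returns the same 26 monomials as the list above, though in a different order.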

Recursive multiplication

How can I get ASSY_EXT_QTY to be the product of the quantities along the whole path? I.e.:

LVL   QTY   ASSY_EXT_QTY
1     1     1
2     3     3    (1*3)
3     2     6    (1*3*2)
...

I thought of building a path of the quantities and then somehow multiplying them, but this seems far-fetched. This is the code:

select
  LEVEL,
  SYS_CONNECT_BY_PATH(CHILD_ITEM_NUMBER, '>') ROUTE,
  CASE LEVEL
    WHEN 1 THEN COMPONENT_QTY
    -- right now I'm doing this, but it is not enough:
    -- PRIOR only reaches the immediate parent level
    ELSE (PRIOR COMPONENT_QTY) * COMPONENT_QTY
  END ASSY_EXT_QTY
from
  (
    select
...
    FROM
...
    WHERE
...
  ) bom
START WITH FATHER_ITEM_NUMBER = :ASSY
CONNECT BY PRIOR CHILD_ITEM_ID = STR_ITEM_ID
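
One direction that might work (a hedged sketch, assuming Oracle 11gR2 or later; the table name BOM_STRUCTURE and its columns stand in for the elided subquery): a recursive CTE can carry the running product down the tree, which CONNECT BY with PRIOR cannot do across more than one level.

-- sketch: the recursive branch multiplies the parent's accumulated
-- quantity by the current row's quantity, giving the whole-path product
WITH bom (child_item_id, route, lvl, assy_ext_qty) AS (
  SELECT child_item_id,
         '>' || child_item_number,
         1,
         component_qty
  FROM   bom_structure
  WHERE  father_item_number = :ASSY
  UNION ALL
  SELECT s.child_item_id,
         b.route || '>' || s.child_item_number,
         b.lvl + 1,
         b.assy_ext_qty * s.component_qty
  FROM   bom_structure s
         JOIN bom b ON s.str_item_id = b.child_item_id
)
SELECT lvl, route, assy_ext_qty FROM bom;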

Thanks!

ag.algebraic geometry – Multiplication maps for big line bundles

In Birational Geometry of Algebraic Varieties, Kollár and Mori write that for a line bundle “being big is essentially the birational version of being ample” (page 67). Recall that a line bundle $L$ on a projective variety $X$ of dimension $d$ is big if

$$ \limsup_{n \to \infty} \dfrac{\dim H^0(X,L^n)}{n^d} \neq 0. $$

In other words, the rate of growth of the spaces of global sections is as big as possible. Big line bundles tend to exhibit behavior analogous to ample line bundles. I will give a couple of examples. In what follows, let $X$ be a variety over the complex numbers and let $L$ be a line bundle on $X$.

  1. Suppose $X$ is normal. If $L$ is ample, some power of $L$ defines an embedding in a projective space. Analogously, if $L$ is big, some power of $L$ defines a map

$$ \varphi_m \colon X \dashrightarrow \mathbb{P} H^0(X,L^m) $$

that is birational onto its image (Positivity in Algebraic Geometry I, page 139).

  2. If $L$ is ample, some power of $L$ is globally generated. On the other hand, if $L$ is big, some positive power of $L$ is generically globally generated; that is, the natural map

$$ H^0(X,L^m) \otimes \mathcal{O}_{X} \rightarrow L^m $$

is generically surjective (Positivity in Algebraic Geometry I, page 141).

Now, to get to my question, recall that if $L$ is ample, there exists a natural number $m$ such that the multiplication maps

$$ H^0(X,L^a) \otimes H^0(X,L^b) \rightarrow H^0(X,L^{a+b}) $$

are surjective for $a, b \geq m$ (Positivity in Algebraic Geometry I, page 32).

Question: Do big line bundles have a property analogous to the surjectivity of multiplication maps?

It is not clear to me what this property should be, but I would hope that these multiplication maps eventually have high rank in some suitable sense.

logic – Defining multiplication on the non-negative hyperreal numbers à la the Tarski/Eudoxus technique

Tarski defines multiplication as a ‘last step derivation/consequence’ of his axiomatization of the reals.

Can a similar program be carried out in the construction of the non-negative hyperreal numbers, by first axiomatizing $({}^{*}\mathbb{R}^{\ge 0},+)$ and then ‘layering in’ multiplication?

I added soft-question as a tag since I really know very little about the hyperreals, but I understand that there is an axiomatic approach to the theory.

My Work

I figure that the techniques found in

Constructing $(\Bbb N,+)$ via Peano function algebra duality.

might be applicable: we can represent $({}^{*}\mathbb{R}^{\ge 0},+)$ as a commutative algebra (monoid) of injective functions under composition, satisfying several axioms.