Range of linear transformation of a natural vector

Given a natural-number vector and its upper bound $\vec{x}=(x_1,x_2,\dotsc, x_n)^\top,\ \vec{X}=(X_1,X_2,\dotsc,X_n)^\top$, where $x_i,X_i\in\mathbb{N}\wedge x_i<X_i,\ i=1,2,\dotsc,n$, and given a natural-number transformation matrix $A\in\mathbb{N}^{m\times n}$, how many different vectors can $A\vec{x}$ produce? (Is there an equation involving $\vec{X}$ and $A$? Or is there an efficient algorithm to calculate the number?)

For example, with $i,j,k\in\mathbb{N}\wedge i<I\wedge j<J\wedge k<K$, how many distinct vectors can $(i+j,j+k)$ produce?
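No closed formula for general $A$ and $\vec{X}$ comes to mind, but for small bounds the count can at least be brute-forced. A minimal Python sketch for the $(i+j,j+k)$ example (the function name is mine):

```python
from itertools import product

def count_images(I, J, K):
    """Count distinct vectors (i+j, j+k) over 0 <= i < I, 0 <= j < J, 0 <= k < K."""
    return len({(i + j, j + k)
                for i, j, k in product(range(I), range(J), range(K))})

print(count_images(3, 4, 5))  # enumerates all I*J*K inputs
```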

linear algebra – matrices symmetric under horizontal + vertical flip

I encountered a real-valued matrix A that, if you reflect it across a horizontal midpoint and then across a vertical midpoint, you get A again. Is there a name for this type of matrix? It seems to be a generalization of symmetric matrices. Is there an interpretation in terms of its action on $\mathbb{R}^n$?
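For concreteness, the double-flip condition is easy to check numerically; a minimal numpy sketch (the function name is mine):

```python
import numpy as np

def is_double_flip_symmetric(A):
    """True if flipping A across its horizontal midpoint and then its
    vertical midpoint (equivalently, rotating it by 180 degrees) gives A back."""
    A = np.asarray(A)
    return np.array_equal(np.flipud(np.fliplr(A)), A)

print(is_double_flip_symmetric([[1, 2, 3],
                                [4, 5, 4],
                                [3, 2, 1]]))  # True
```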

vector spaces – If $(v_1, … , v_m)$ is linearly independent in $V$, then there exists a linear function $f: V\to W$ with $f(v_i)=w_i$ for arbitrary $w_i \in W$

I found this interesting theorem in a text on linear algebra but unfortunately I do not understand the given proof. Hopefully nothing is lost in the translation. If anything is unclear, please let me know.

Let $V$ and $W$ be vector spaces of finite dimension $n$ over a field $F$.

If $(v_1, … , v_m)$ is linearly independent in $V$, then there exists a linear function $f: V\to W$ with $f(v_i)=w_i$ for arbitrary $w_i \in W$; $i = 1, …, m$.

Proof:

We augment $(v_1, … , v_m)$ to a basis $(v_1, … , v_m, v_{m+1}, … , v_n)$ of $V$ and set for all $a_1, … , a_n \in F$: $$f\left(\sum_{i=1}^n a_iv_i\right)=\sum_{i=1}^m a_i w_i. $$ In particular, $f(v_i)=w_i$ holds for $i=1, …, m$ and $f(v_i)=0$ holds for $i=m+1, …, n$. Since there are uniquely determined $a_1, …, a_n$ for every $v \in V$ such that $v= \sum_{i=1}^n a_i v_i$ holds, $f$ is a well-defined function $f: V\to W$.

Let $v, v'\in V$ with $v= \sum_{i=1}^n a_i v_i$ and $v'= \sum_{i=1}^n a'_i v_i$: $$f(v+v')=f\left(\sum_{i=1}^n (a_i+a'_i) v_i\right)=\sum_{i=1}^m (a_i+a'_i) w_i=f(v)+f(v'). $$ Similarly we show that $f(av)=a f(v)$ holds for $a \in F$. Thus $f$ is a linear function.

I do not understand what is going on in the first two paragraphs of this proof. I know that every linearly independent set of vectors can be augmented until it is a basis (in this context), but why is this useful here? Why does the first equality hold?

The last part is clear since we are simply checking that the properties of a linear function hold.
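For a concrete instance of the construction (my own example, not from the text): take $V=\mathbb{R}^2$, $m=1$, $v_1=(1,0)^\top$, and augment with $v_2=(0,1)^\top$. For any target $w_1\in W$ the recipe gives $$f(a_1v_1+a_2v_2)=a_1w_1,$$ so $f(v_1)=w_1$ and the augmenting vector is sent to $0$. The coefficients $a_1,a_2$ are unique precisely because $(v_1,v_2)$ is a basis, which is what makes $f$ well defined.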

machine learning – Ratio of inter-class to intra-class scatter is being minimised by linear discriminant process

This question is complicated (at least for me, as I am new to machine learning and image processing).

I need help understanding why my iterative process is minimising the ratio of inter-class to intra-class scatter after Fisher–Rao discrimination, rather than maximising it.

Background:

I am implementing the following old paper.

https://ieeexplore.ieee.org/document/5746646/

The logic of the code where I am facing the problem is in Table 1 (iterative Fisher–Rao optimization) of the paper.

My code is in https://github.com/Sujata018/Image-Processing/blob/main/P2/classification/lda.py

In that iteration, J is supposed to be maximised, but I am getting the following output, which looks like it is being minimised:

iteration  1 J= 11.083135263169856  Jnew =  5.968339819774404
iteration  2 J= 5.968339819774404  Jnew =  4.657109480685861
iteration  3 J= 4.657109480685861  Jnew =  4.291975621553885
iteration  4 J= 4.291975621553885  Jnew =  4.170759974802839
iteration  5 J= 4.170759974802839  Jnew =  4.127241042274884
iteration  6 J= 4.127241042274884  Jnew =  4.11233043402457
iteration  7 J= 4.11233043402457  Jnew =  4.108385230771005
iteration  8 J= 4.108385230771005  Jnew =  4.108419371607428
Converged at J= 4.108419371607428

I need help understanding what is wrong here.
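For reference, here is a minimal sketch of the criterion I understand is supposed to be maximised, with the standard definitions of within- and between-class scatter (the names are mine; this is not the code from my linked repository). I wonder if I am accidentally maximising the reciprocal ratio (intra/inter) instead:

```python
import numpy as np

def fisher_criterion(X, y, w):
    """J(w) = (w^T S_b w) / (w^T S_w w) for a projection direction w,
    where S_b is the between-class (inter-class) scatter and
    S_w is the within-class (intra-class) scatter."""
    d = X.shape[1]
    mean_all = X.mean(axis=0)
    S_b = np.zeros((d, d))
    S_w = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        S_b += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        S_w += (Xc - mc).T @ (Xc - mc)
    return (w @ S_b @ w) / (w @ S_w @ w)
```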

trying to solve a linear equation system over $\mathbb{Z}_{11}$

Given this linear system over $\mathbb{Z}_{11}$:

$10=(3a^2-b)x-2y$

$by=2$

Find all the solutions to the linear system.

I have tried to figure out how to solve something like this. I know how to solve linear equations and polynomial equations, but when I come to something like this I am stuck. An example with a description would really help me learn how to handle questions like this.
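Since $\mathbb{Z}_{11}$ is finite, one way to explore such a system is brute force over all tuples (a sketch assuming $a,b,x,y$ all range over $\mathbb{Z}_{11}$; adjust which symbols are the unknowns as needed):

```python
# Enumerate all tuples over Z_11 satisfying both congruences.
p = 11
solutions = [(a, b, x, y)
             for a in range(p) for b in range(p)
             for x in range(p) for y in range(p)
             if ((3 * a * a - b) * x - 2 * y) % p == 10 and (b * y) % p == 2]
print(len(solutions), solutions[:5])
```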

sequences and series – How to solve linear recurrence relations with constant coefficients.

I. Definition:

A sequence satisfying a linear recurrence relation with constant coefficients is a sequence for which the current term is a linear combination of its predecessors.

Perhaps the most known example is the Fibonacci sequence:

$$F_n=F_{n-1}+F_{n-2}$$

Here are some other examples:

  • $x_n=a\,x_{n-1}$
  • $2x_{n+1}+3x_{n}-5x_{n-2}+6x_{n-3}=4^n+n$
  • $ax_n+bx_{n-1}+cx_{n-2}=0$

And some that are not:

  • not linear $x_{n+1}=\dfrac{x_n+1}{x_n-3}$
  • non-constant coefficients $x_{n+1}=(-1)^nx_n+2n\,x_{n-1}$
  • algorithmic cost type $T(n)=4T(\frac n2)+\ln(n)$ or $\,T(n^2)=T(n)+n$

More generally they can be written:

$$\sum\limits_{k=0}^p a_k\,x_{n+k}=f(n)$$

where the relation $\Big(\sum\limits_{k=0}^p a_k\,x_{n+k}=0\Big)$ is called the associated homogeneous equation, and $f(n)$ the RHS (right-hand side).

Of course, there is no real difference between for instance $x_n=ax_{n-1}+bx_{n-2}$ and $x_{n+2}=ax_{n+1}+bx_{n}$ or even $x_{n+1}-ax_n-bx_{n-1}=0$, it’s just a matter of presentation and starting index.

We will see that the “normalized” form allows us to define a characteristic equation that will help us solve the equation. For now just remember that it is a linear combination of some previous and subsequent terms of a sequence.

II. One term dependency:

Before getting into the general method for solving such equations, let's start with the $1$-dimensional case, where the current term depends only on a single predecessor.

A special case, the telescoping sum:

Among these, the easiest of all is the one below,

$$x_{n+1}=x_n+f(n)\iff x_{n+1}-x_n=f(n)$$

Notice that when we sum all these relations, only the beginning and ending terms remain, while the others cancel out.

$\require{cancel}\begin{array}{ccccc}
x_{n+1}&-&\cancel{x_n}&=&f(n)\\
+\cancel{x_{n}}&-&\cancel{x_{n-1}}&=&f(n-1)\\
+\cdots&&\cdots&&\cdots\\
+\cancel{x_{2}}&-&\cancel{x_{1}}&=&f(1)\\
+\cancel{x_{1}}&-&x_0&=&f(0)\\
\hline
x_{n+1}&-&x_0&=&\sum\limits_{i=0}^n f(i)\end{array}$

Which gives you an expression for the general term:

$$x_n=x_0+\sum\limits_{i=0}^{n-1} f(i)$$

Notice that when $f$ is a constant function, i.e. $x_{n+1}=x_n+a$, then we get the usual arithmetic sequence $x_n=x_0+\sum\limits_{i=0}^{n-1} a=x_0+na$.
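A quick numerical sanity check of this closed form (a sketch; the choice $f(i)=i^2$ is arbitrary):

```python
def solve_telescoping(x0, f, n):
    """Closed form x_n = x_0 + sum_{i=0}^{n-1} f(i) for x_{k+1} = x_k + f(k)."""
    return x0 + sum(f(i) for i in range(n))

# Cross-check against direct iteration.
f, x0 = (lambda i: i**2), 3
x = x0
for k in range(10):
    x = x + f(k)
assert x == solve_telescoping(x0, f, 10)
```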

The geometric sequence:

Then come sequences that involve only two terms, like $ax_n+bx_{n-1}=0$, in which $a,b\neq 0$. We can always put it in the reduced form

$$x_n=r\, x_{n-1}$$

This is the well-known geometric sequence with common ratio $r$, and it can easily be shown by induction that

$$x_n=r^n\,x_0$$

Note that we can also have a bigger gap than just $1$ between indices, as in $x_{n+2}=r\, x_n$, which solves in almost the same way, but with separate branches for even and odd indices: $\begin{cases}x_{2n}=r^n\, x_0\\x_{2n+1}=r^n\, x_1\end{cases}$

And we can generalize to any sequence satisfying $x_{n+p}=r\, x_n$: we just get $p$ branches, according to the initial values $x_0,\cdots,x_{p-1}$.

Introducing an RHS:

Assume we now have $$x_n-r\,x_{n-1}=f(n)$$

Notice that if we set $x_n=r^n\, u_n$ then we get:

$r^nu_n-r\cdot r^{n-1}u_{n-1}=f(n)\iff u_n-u_{n-1}=\dfrac{f(n)}{r^n}=g(n)$, and we have reduced the problem to the telescoping case seen above.
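The substitution can be checked numerically as well (a sketch; the choices of $r$ and $f$ are arbitrary):

```python
def solve_first_order(x0, r, f, n):
    """x_n for x_k = r*x_{k-1} + f(k): the substitution x_k = r**k * u_k
    turns the problem into the telescoping sum u_k - u_{k-1} = f(k)/r**k."""
    return r**n * (x0 + sum(f(i) / r**i for i in range(1, n + 1)))

# Cross-check against direct iteration.
r, f, x0 = 2.0, (lambda k: 3 * k + 1), 1.0
x = x0
for k in range(1, 11):
    x = r * x + f(k)
assert abs(x - solve_first_order(x0, r, f, 10)) < 1e-9
```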

III. The homogeneous equation:

A general solution is a sum of two solutions:

As with linear ODEs with constant coefficients, linear recurrence relations share the same notion of an associated homogeneous equation.

Assume that we have two solutions $x_n$ and $y_n$ of our equation (I show it with a $3$-dimensional equation, but this is just for clarity of presentation. It extends without any complications to an equation with more dependent terms):

$\begin{cases}a\,x_n+b\,x_{n-1}+c\,x_{n-2}=f(n)\\a\,y_n+b\,y_{n-1}+c\,y_{n-2}=f(n)\end{cases}$

Subtracting these two and setting $h_n=x_n-y_n$, we obtain that $h_n$ is a solution of the homogeneous equation:

$$a\,h_n+b\,h_{n-1}+c\,h_{n-2}=0$$

Therefore, in order to solve the general equation with RHS, it suffices to find a general solution $h_n$ of the homogeneous equation and one particular solution $\pi_n$ of the full equation to get all the solutions, and they will have the form:

$$x_n=\overbrace{\hphantom{mm}h_n\hphantom{mm}}^\text{homogeneous solution}+\overbrace{\hphantom{mm}\pi_n\hphantom{mm}\vphantom{h}}^\text{particular solution}$$

The characteristic equation:

We call the following polynomial the characteristic equation associated with the (homogeneous) linear recurrence relation with constant coefficients $\sum\limits_{k=0}^p a_k\,x_{n+k}=0$:

$$p(X)=\sum\limits_{k=0}^p a_k\,X^k$$

Here are some examples to show how it works when you need to shift the indices:

$\begin{array}{lll}
x_{n+1}=ax_n &\iff x_{n+1}-ax_n=0 &\iff p(X)=X-a\\
x_{n}=ax_{n-2} &\iff x_{n+2}-ax_n=0 &\iff p(X)=X^2-a\\
x_{n}=ax_{n-1}+bx_{n-2} &\iff x_{n+2}-ax_{n+1}-bx_n=0 &\iff p(X)=X^2-aX-b\\
ax_{n+1}+bx_{n}+cx_{n-3}=0 &\iff ax_{n+4}+bx_{n+3}+cx_n=0 &\iff p(X)=aX^4+bX^3+c
\end{array}$

In the rest of this post, we will denote the roots of this characteristic polynomial by $r_i$ and their multiplicities by $m_i$ (i.e. $m=1$ for a single root, $m=2$ for a double root, and so on).

As a special case, we will also adopt the convention $m=0$ if some particular value $r$ is not a root of the characteristic polynomial (i.e. $p(r)\neq 0$).

Finding a closed form for the sequence:

In this paragraph I’ll show the general method for solving the homogeneous equation without any justification; for the theoretical aspects, please refer to paragraph (VI).

Let us assume the characteristic polynomial has roots $r_i$ with multiplicities $m_i$ for $i=1,\dotsc,k$; then the solution is a sum of the following terms, one for each root $r_i$.

  • If $r$ is a single root (i.e. $m=1$) then we have a term in $\quad a\,r^n$
  • If $r$ is a double root (i.e. $m=2$) then we have a term in $\quad (an+b)\,r^n$
  • If $r$ is a triple root (i.e. $m=3$) then we have a term in $\quad (an^2+bn+c)\,r^n$
  • $\cdots$
  • If $r$ is a root of multiplicity $m$ then we have a term in $\,P(n)\,r^n$ where $P$ is a polynomial of degree $m-1$.

In case the roots are complex, but the initial terms and the coefficients of the equation are all real, the complex roots will in fact appear in conjugate pairs.

We can regroup $r=|r|e^{it}$ and $\bar r=|r|e^{-it}$ and use de Moivre's formula to transform $a\,r^n+b\,\bar r^n$ into

$$|r|^n\Big(\alpha\cos(nt)+\beta\sin(nt)\Big)$$

This is for single roots, but it extends similarly to roots of higher multiplicity; just replace $\alpha,\beta$ by polynomials of higher degree.
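Here is a small numerical sketch of this recipe for the case of distinct roots (repeated roots would need the extra $n^j\,r^n$ terms described above; the helper name is mine):

```python
import numpy as np

def homogeneous_solution(coeffs, initial):
    """Closed form for sum_k a_k x_{n+k} = 0, assuming distinct roots.
    coeffs = [a_0, ..., a_p]; initial = [x_0, ..., x_{p-1}]."""
    roots = np.roots(coeffs[::-1])           # np.roots wants highest degree first
    # Solve sum_i c_i * r_i**n = x_n for n = 0..p-1 (a Vandermonde system).
    V = np.vander(roots, len(roots), increasing=True).T
    c = np.linalg.solve(V, initial)
    return lambda n: (c * roots**n).sum().real

# Example: x_{n+2} = x_{n+1} + x_n (Fibonacci), x_0 = 0, x_1 = 1.
x = homogeneous_solution([-1, -1, 1], [0, 1])
print(round(x(10)))  # 55
```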

IV. Finding particular solutions:

RHS is of the form $P(n)a^n$:

Generally in the exercises, you will be asked to solve the full equation with an RHS of the form $P(n)$ or $P(n)\,a^n$, with $P$ some polynomial.

Notice that $P(n)$ can also be written $P(n)\,1^n$ with $a=1$, so we don’t need to solve this case separately; it follows exactly the same process.

  • If $a$ is not a root of the characteristic polynomial, then we can search for a particular solution of the form $Q(n)\,a^n$, where $Q$ is a general polynomial of the same degree as $P$.

  • If $a$ is a single root of the characteristic polynomial, then you have to raise the degree of $Q$, meaning we can search for a solution of the form $Q(n)\,a^n$ where $\deg(Q)=\deg(P)+1$.

In general, if $a$ is a root of the characteristic polynomial of multiplicity $m$, then we have to search for particular solutions of the form $Q(n)\,a^n$ where:
$$\deg(Q)=\deg(P)+m$$

Examples:

Assume $2$ is not a root and $\mathrm{RHS}=3\times2^n$; you can search for particular solutions $Q(n)\,2^n$ with $Q(n)=\alpha$.

Assume $1$ is not a root and $\mathrm{RHS}=3n^2+5$; you search for particular solutions with $Q(n)=\alpha\,n^2+\beta\,n+\gamma$.

Assume $-1$ is a root and $\mathrm{RHS}=(-1)^n$; you can search for particular solutions $(\alpha\,n+\beta)(-1)^n$.
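These recipes can be cross-checked with sympy's `rsolve` (a sketch assuming sympy is available; the equation is my example, chosen so that $2$ is a single root and the particular solution picks up the extra factor of $n$):

```python
from sympy import Function, rsolve, symbols

n = symbols('n', integer=True)
x = Function('x')

# x_{n+2} - 3 x_{n+1} + 2 x_n = 3*2^n: the characteristic polynomial
# X^2 - 3X + 2 has roots 1 and 2, so a = 2 is a single root and the
# particular solution has the form (alpha*n)*2^n.
print(rsolve(x(n + 2) - 3 * x(n + 1) + 2 * x(n) - 3 * 2**n, x(n),
             {x(0): 0, x(1): 1}))
```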

A little refinement:

In fact we can ignore the lower-degree terms of $Q(n)$, since they will cancel out: they are already part of the homogeneous equation's solution.

e.g. if $r$ is a double root, then the homogeneous equation has a general solution $(an+b)r^n$.

Assume $\mathrm{RHS}=(5n)\,r^n$; then we need $\deg(Q)=\deg(5n)+2=3$, therefore we should search for a particular solution

$$\require{cancel}Q(n)\,r^n=\big(\alpha\,n^3+\beta\,n^2+\underbrace{\cancel{\gamma\,n+\delta}}_\text{already a solution of the homogeneous eq.}\big)\,r^n$$

This little refinement often speeds up the solving of the exercise a bit.

RHS is a sum of such terms:

In this case, we proceed with exactly the same method for each term, and we search for the particular solution as a sum of the corresponding terms.

I presented an example in (V.)

RHS is $f(n)\,a^n$ where $f$ is not a polynomial:

In this case, unfortunately, the problem must be solved case by case.

Remember that, as we have seen in (II.), this will involve summing $\sum f(n)\left(\frac ar\right)^n$, which may or may not have a closed form depending on $f$.

If you have $\mathrm{RHS}=\ln(n)$, for instance, then you are out of luck, because this does not have a closed form.

On the other hand, if $f(n)=\cos(n)$, then you are happy to transform it into $\frac 12(e^{in}+e^{-in})$, and this is of the form $P(n)\,a^n$.

In fact, especially for this case, we can extend the search for particular solutions to $$Q_1(n)\cos(n)+Q_2(n)\sin(n)$$

where the polynomials $Q_1,Q_2$ follow exactly the same rules as in the regular case (with real roots).

V. Some examples and applications:

The Fibonacci sequence:

Let’s find a closed form for the famous Fibonacci sequence; it satisfies $\begin{cases}F_0=0\\F_1=1\\F_n=F_{n-1}+F_{n-2}\end{cases}$

Its characteristic equation is $r^2-r-1=0\iff r=\dfrac{1\pm\sqrt{5}}2$; the roots are generally denoted $\varphi,\bar \varphi$, where $\varphi$ is known as the golden ratio.

The solution is then $$F_n=a\,\varphi^n+b\,\bar\varphi^n$$

Applying the initial conditions gives $\begin{cases}F_0=a+b=0\\F_1=a\varphi+b\bar\varphi=1\end{cases}\iff a=-b=\frac 1{\sqrt{5}}$

$$F_n=\frac 1{\sqrt{5}}\Big(\frac{1+\sqrt{5}}2\Big)^n-\frac 1{\sqrt{5}}\Big(\frac{1-\sqrt{5}}2\Big)^n$$
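A quick check of this closed form against the recurrence (a sketch; floating point is fine for small $n$):

```python
from math import sqrt

phi, psi = (1 + sqrt(5)) / 2, (1 - sqrt(5)) / 2

def binet(n):
    """Closed form F_n = (phi**n - psi**n) / sqrt(5)."""
    return (phi**n - psi**n) / sqrt(5)

a, b = 0, 1                    # F_0, F_1
for n in range(20):
    assert round(binet(n)) == a
    a, b = b, a + b
```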

Lucas numbers:

The Lucas numbers are closely related to the Fibonacci numbers; the only difference is in the initial conditions: $L_0=2$ and $L_1=1$.

It solves in exactly the same way, and we get $a=b=1$ instead.

$$L_n=\varphi^n+\bar\varphi^n=\Big(\frac{1+\sqrt{5}}2\Big)^n+\Big(\frac{1-\sqrt{5}}2\Big)^n$$

Pisot numbers whose powers are almost integers:

When dealing with $2$-dimensional problems like this one, with two conjugate roots (from the square root) where one root has modulus $|\bar\varphi|<1$, we call the larger root a Pisot number. These numbers are remarkable in that their powers quickly get closer and closer to integer values.

Indeed, since the modulus is strictly less than $1$, we have $|\bar\varphi|^n\to 0$.

Since the Lucas numbers are integers (the initial terms are integers and each next term is a sum of integers, by induction), we get $L_n\sim \varphi^n$.

Similarly, the Fibonacci number $F_n$ can be found to be the rounded value of $\dfrac{\varphi^n}{\sqrt{5}}$.

Calculating $x^n+y^n$:

A direct application of linear recurrence relations is calculating $x^n+y^n$ whenever $S=x+y$ and $P=xy$ are given.

Since $x$ and $y$ are solutions of the sum and product quadratic polynomial,

$$r^2-Sr+P=0$$

Considering it as the characteristic equation for a sequence $u_n$, the sequence $u_n=x^n+y^n$ must satisfy

$$u_n=S\,u_{n-1}-P\,u_{n-2}$$

Since the initial terms are known, $\begin{cases}u_0=x^0+y^0=1+1=2\\u_1=x^1+y^1=x+y=S\end{cases}$

Calculating $u_n$ is then just a direct application of the recurrence relation.

Example:
  • calculate $(3+2\sqrt{2})^4+(3-2\sqrt{2})^4$

Processing the binomial expansion would surely work, but it would be tedious and prone to calculation and sign errors.

Instead, let’s call $x=3+2\sqrt{2}$ and notice that $y=3-2\sqrt{2}=\dfrac 1x$, therefore $S=x+y=6$ and $P=xy=1$.

$\begin{array}{l}
u_0=2\\
u_1=S=6\\
u_2=Su_1-Pu_0=6\times 6-2=34\\
u_3=Su_2-Pu_1=6\times 34-6=198\\
u_4=Su_3-Pu_2=6\times 198-34=1154
\end{array}$

Notice that since $u_0,u_1,S,P$ are all integers, $u_n$ is always an integer, despite its initial appearance.
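The same computation as a small reusable helper (a sketch; the name `power_sum` is mine):

```python
def power_sum(S, P, n):
    """u_n = x**n + y**n where x + y = S and x*y = P,
    computed via u_n = S*u_{n-1} - P*u_{n-2} with u_0 = 2, u_1 = S."""
    u_prev, u = 2, S
    for _ in range(n - 1):
        u_prev, u = u, S * u - P * u_prev
    return u_prev if n == 0 else u

print(power_sum(6, 1, 4))  # 1154, matching the table above
```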

An example with a bit of everything:

Let’s dive into a more complex example with multiplicities, complex roots and colliding RHS terms.

$$u_{n+6}-2u_{n+4}+8u_{n+3}-27u_{n+2}+32u_{n+1}-12u_n=2^n\cos(\tfrac{n\pi}2)+n-7+5^n$$

The characteristic polynomial is

$$x^6-2x^4+8x^3-27x^2+32x-12=(x-2i)(x+2i)(x-1)^3(x+3)=0$$
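This factorization can be sanity-checked numerically (a one-liner sketch using numpy):

```python
import numpy as np

# Roots of x^6 - 2x^4 + 8x^3 - 27x^2 + 32x - 12:
# expect +-2i, a triple root at 1, and -3.
print(np.roots([1, 0, -2, 8, -27, 32, -12]))
```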

  • $\pm2i=2\exp(\pm i\frac\pi2)$ are roots, so we will have terms in $2^n(a\sin(\frac{n\pi}2)+b\cos(\frac{n\pi}2))$, and since the RHS contains such a term of degree $0$ we will have to search for a particular solution of degree $1$. By the refinement option we can search for $n\,2^n(\alpha\sin(\frac{n\pi}2)+\beta \cos(\frac{n\pi}2))$.

  • $1$ is a root of multiplicity $3$, so we will have a term in $(cn^2+dn+e)\times 1^n$, and since the RHS contains a term $(n-7)\times1^n$ of degree $1$ we will have to search for a particular solution of degree $4$. By the refinement option we can search for $(\gamma n^4+\delta n^3)\times1^n$.

  • $-3$ is a root, so we will have a term in $f\,(-3)^n$; no RHS term here.

  • finally, the RHS contains $5^n$, but since $5$ is not a root of the characteristic equation we will only have to search for a particular solution of degree $0$, namely $\epsilon\, 5^n$.

To summarize:

$$h_n=2^n\Big(a\sin(\tfrac{n\pi}2)+b\cos(\tfrac{n\pi}2)\Big)+\Big(cn^2+dn+e\Big)+f(-3)^n$$

$$\pi_n=n\,2^n\Big(\alpha\sin(\tfrac{n\pi}2)+\beta\cos(\tfrac{n\pi}2)\Big)+(\gamma n^4+\delta n^3)+\epsilon\,5^n$$

Substituting $\pi_n$ into the recurrence relation gives

$2^n\Big((-296\beta+128\alpha)\sin(\frac{n\pi}2)+(-296\alpha-128\beta)\cos(\frac{n\pi}2)\Big)+\Big(480\gamma\,n+120\delta+1032\gamma\Big)+14848\,\epsilon\,5^n = 2^n\cos(\frac{n\pi}2)+(n-7)+5^n$

And we have to solve the system

$\begin{cases}
-296\beta+128\alpha = 0\\
-296\alpha-128\beta=1\\
480\gamma = 1\\
120\delta+1032\gamma = -7\\
14848\,\epsilon = 1
\end{cases}
\iff
\begin{cases}
\alpha = \frac{-37}{13000}\\
\beta= \frac{-2}{1625}\\
\gamma = \frac {1}{480}\\
\delta = \frac {-61}{800}\\
\epsilon = \frac {1}{14848}
\end{cases}$

VI. Theoretical justification of $\operatorname{Span}({r_i}^n)$:

The $2$-dimensional problem:

Let $r_1$ and $r_2$ be the roots of the characteristic equation; then they satisfy the sum-and-product quadratic,

$$x^2-(r_1+r_2)x+r_1r_2=0$$

Let us take advantage of this formulation to transform the $2$-dimensional problem into two $1$-dimensional problems, proceeding as below:

$u_{n+2}-(r_1+r_2)\,u_{n+1}+r_1r_2\,u_n=0\iff \underbrace{\big(u_{n+2}-r_1\,u_{n+1}\big)}_{v_{n+1}}-r_2\underbrace{\big(u_{n+1}-r_1\,u_n\big)}_{v_n}=0$

We get the system: $$\begin{cases}v_{n+1}-r_2\,v_n=0\\u_{n+1}-r_1\,u_n=v_n\end{cases}$$

  • The first one is homogeneous and solves to $v_n=(r_2)^n\,v_0$

  • For the second one, with RHS, we introduce $u_n=(r_1)^n\,U_n$, and we have seen in (II.) that it solves to

$$U_n=U_0+\sum\limits_{i=0}^{n-1}\dfrac{v_i}{{r_1}^i}=U_0+v_0\sum\limits_{i=0}^{n-1}\left(\dfrac{r_2}{r_1}\right)^i$$

We now face two cases:

  • if $r_1=r_2$ (the root has multiplicity $2$) then $\sum\limits_{i=0}^{n-1}\left(\dfrac{r_2}{r_1}\right)^i=\sum\limits_{i=0}^{n-1} 1=n$

  • if $r_1\neq r_2$ then $\sum\limits_{i=0}^{n-1}\left(\dfrac{r_2}{r_1}\right)^i=\dfrac{r_1}{r_2-r_1}\Big(\Big(\dfrac{r_2}{r_1}\Big)^n-1\Big)$

Gathering the results and regrouping the constant expressions involving $U_0,v_0,\frac{r_1}{r_1-r_2}$ under arbitrary constants $a,b$, we get:

$$\begin{array}{ll}
r_1 \text{ root of multiplicity }2 &\quad u_n=(an+b)\,{r_1}^n\\
r_1,\,r_2 \text{ distinct roots} &\quad u_n=a\,{r_1}^n+b\,{r_2}^n
\end{array}$$

The $3$-dimensional problem:

Similarly, when there are three roots $r_1,r_2,r_3$, we can transform the $3$-dimensional problem into three $1$-dimensional problems by expanding the characteristic equation as below,

$$(x-r_1)(x-r_2)(x-r_3)=x^3-(r_1+r_2+r_3)x^2+(r_1r_2+r_2r_3+r_3r_1)x-r_1r_2r_3$$

It leads to the system:

$$\begin{cases}w_{n+1}-r_3\,w_n=0\\v_{n+1}-r_2\,v_n=w_n\\u_{n+1}-r_1\,u_n=v_n\end{cases}$$

I do not detail all the calculations, but similarly, when $r$ is a triple root, where $\sum 1$ appeared in the previous case, now $\sum i$ appears, and we get terms in $n^2$.

$$\begin{array}{ll}
r_1 \text{ root of multiplicity }3 &\quad u_n=(an^2+bn+c)\,{r_1}^n\\
r_1 \text{ double root and }\,r_2 \text{ simple root} &\quad u_n=(an+b)\,{r_1}^n+c\,{r_2}^n\\
r_1,\,r_2,\,r_3 \text{ distinct roots} &\quad u_n=a\,{r_1}^n+b\,{r_2}^n+c\,{r_3}^n\end{array}$$

Generalisation, Newton’s identities:

The $n$-dimensional problem is no different: using Newton’s identities, which give relations between the roots and the coefficients of the characteristic polynomial, we can transform the $n$-dimensional problem into a system of $1$-dimensional problems.

While it is quite straightforward to establish the above-mentioned system, dealing with multiple roots may prove a bit tedious (but still feasible, I guess…). Anyway, it follows exactly the same pattern as in the $2$- and $3$-dimensional cases.

This is the reason why I present, in the next paragraph, an alternative method which is often preferred for solving the general case.

Solving with linear algebra:

We will change the notation a bit: instead of $\sum\limits_{k=0}^p a_k\,x_{n+k}=0$, let’s use $c_i=-\dfrac {a_i}{a_p}$ to get

$$x_{n+p}=\sum\limits_{k=0}^{p-1} c_k\,x_{n+k}$$

So we can write it in matrix form:

$$\begin{bmatrix}x_{n+p}\\x_{n+p-1}\\\vdots\\x_{n+1}\end{bmatrix} =
\begin{bmatrix}
c_{p-1}&c_{p-2}&\cdots&c_0\\
1&&\LARGE 0\\
&\ddots\\
\LARGE 0&&1&0\\
\end{bmatrix}
\begin{bmatrix}x_{n+p-1}\\x_{n+p-2}\\\vdots\\x_{n}\end{bmatrix}\iff X_{n+1}=C\, X_n$$

And the solution is obviously $X_n=C^n\, X_0$; what is less obvious, of course, is what $C^n$ looks like.

If we assume $C$ is diagonalisable with all distinct eigenvalues, then in a suitable basis $C=\operatorname{diag}(r_1,r_2,\cdots,r_p)$.

Then $C^n=\operatorname{diag}({r_1}^n,{r_2}^n,\cdots,{r_p}^n)$, and after multiplication with the initial terms $X_0$

we end up with the solution we already know: $$\sum\limits_{i=0}^{p-1} X_{0,i}\, {r_i}^n$$
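A small numerical sketch of this matrix formulation (the helper name is mine; Fibonacci serves as the test case):

```python
import numpy as np

def companion_term(c, x_init, n):
    """x_n for x_{n+p} = sum_k c_k x_{n+k}, computed as X_n = C**n X_0.
    c = [c_0, ..., c_{p-1}]; x_init = [x_0, ..., x_{p-1}]."""
    p = len(c)
    C = np.zeros((p, p))
    C[0, :] = c[::-1]                # top row: c_{p-1}, ..., c_0
    C[1:, :-1] = np.eye(p - 1)       # shifted identity below it
    X0 = np.array(x_init[::-1], dtype=float)   # X_0 = (x_{p-1}, ..., x_0)
    return (np.linalg.matrix_power(C, n) @ X0)[-1]

# Fibonacci: x_{n+2} = 1*x_{n+1} + 1*x_n.
print(companion_term([1, 1], [0, 1], 10))  # 55.0
```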

But the matrix does not necessarily have distinct eigenvalues; these can have various multiplicities $m_i$.

In this case, we can put the matrix in Jordan form and we have formulas to get the power of a matrix in its Jordan form.

Remember that the basis for polynomials of degree at most $m$ is not unique.

  • $\displaystyle 1,n,n^2,n^3,\cdots,n^m$ is a valid basis
  • $\displaystyle\binom{n}{0},\binom{n}{1},\binom{n}{2},\cdots,\binom{n}{m}$ is a valid basis as well

Therefore all these binomial coefficients you can see in the $J_{m_i}^n(r_i)$ in the link I provided above just lead to the form discovered in the previous paragraph:

$$(\alpha_0+\alpha_1n+\cdots+\alpha_{m-1}n^{m-1}){r_i}^n$$

linear algebra – Whether a subset of a vector space is a vector space

I started learning linear algebra and was following notes, but there are a few answers to exercise problems I can’t digest. The problem is: we are given the set of all vectors $(x_1, x_2, x_3, x_4) \in V_4$ and need to determine whether a given subset is a vector space or not:

  1. $x_1 = 1$

In this case the answer says it’s a vector space, but I think it’s not, as it will not contain the zero vector.

  2. $x_3^2 \geq 0$

In this case the answer says it’s not a vector space, but I think it is, as it’ll have the zero vector and seems to satisfy all the axioms.

I’ve attached an image of the notes; please correct me if my understanding is wrong. Thanks in advance.

Notes image

Linear algebra Gaussian elimination problem

I was reading Robert Beezer’s book on linear algebra, and in one part he proves Gaussian elimination using a constructive proof. The proof involves adding a new column and performing some operations on the matrix that don’t change its solution set but keep it row-reduced. The problem is that in the second paragraph he says that after adding a new column whose rows $r+1$ to $m$ are zero, the matrix is row-reduced, but he doesn’t specify how the new matrix meets the definition of reduced row echelon form. Am I missing something, or is the proof incomplete?

Is every differential 1-form a linear combination of closed forms?

Let $M$ be a smooth manifold. We know that the $C^\infty (M)$-module $\Omega^1 (M)$ is finitely generated, i.e. there exist $1$-forms $\{\alpha_1, \ldots, \alpha_k \}$ such that for any $1$-form $\omega$, we can write $\omega = \sum_{i=1}^k f_i \alpha_i$ for some $f_i \in C^\infty (M)$.

I’m wondering if the $\alpha_i$ can be chosen to be closed, or, furthermore, exact.

I’m guessing there must exist a counterexample, as I haven’t seen this result in any of the standard textbooks or online sets of notes, and it would make computations a little too easy. I’ve been toying around with this idea for a while but haven’t gotten any leads in either direction, except that this is trivially true in $\mathbb{R}^n$.

Have any of you seen this result or know a counterexample?

fa.functional analysis – Linear transport equation with Lipschitz conditions

Given the equation here, I would like to ask the following relaxed question:

Consider the PDE

$$\partial_t f(t,x) = \langle q(x), \nabla \rangle f(t,x) + p(x),$$

with Schwartz initial data $f(0,x) = f_0(x) \in \mathscr S(\mathbb R^n)$.

I am then wondering, if $q$ is Lipschitz and $p$ is Schwartz too:

Does there exist a solution $(t,x)\mapsto f(t,x)$ to this equation that decays faster than any polynomial in space $x$ at any fixed time $t>0$?