Homological algebra: conditionally convergent spectral sequences with exiting and entering differentials

I have to deal with unbounded filtrations and I want to use conditional convergence of spectral sequences and the results of

(1): J. Michael Boardman, Conditionally Convergent Spectral Sequences, March 1999 (http://hopf.math.purdue.edu/Boardman/ccspseq.pdf)

The article uses cohomological spectral sequences derived from the exact couple coming from a cochain complex $C$ and a decreasing filtration $F$ of $C$. The system of inclusions is $$A^s := H(F_s C) \leftarrow A^{s+1}$$ and the pages are denoted by $E^s_r$ for $s \in \mathbb{Z}$ and $r \in \mathbb{N}$ ($r$ is the page number and $s$ the "filtration degree"). The symbol $A^\infty$ denotes the limit and the symbol $A^{-\infty}$ the colimit. The symbol $RA^\infty$ denotes the right derived module of the limit. I basically work over $\mathbb{R}$.

The following are the two theorems (or parts of them) from (1) that interest me:

Theorem 6.1 (p. 19): Let $C$ be a filtered cochain complex. Suppose that
\begin{equation}\label{Eq:Exit}\tag{C1} E^s = 0 \quad \text{for all } s > 0. \end{equation}
If $A^\infty = 0$, then the spectral sequence converges strongly to $A^{-\infty}$.

Theorem 7.2 (p. 21): Let $f: C \rightarrow \bar{C}$ be a morphism of filtered cochain complexes and assume that $E^s$, resp. $\bar{E}^s$, converges conditionally to $A^{-\infty}$, resp. $\bar{A}^{-\infty}$.
Suppose, in addition, that
\begin{equation}\tag{C2} E^s = \bar{E}^s = 0 \text{ for all } s < 0. \end{equation}
If $f$ induces isomorphisms $E^\infty \simeq \bar{E}^\infty$ and $RE^\infty \simeq R\bar{E}^\infty$, then it induces an isomorphism $H(C) \simeq H(\bar{C})$.

Let me introduce the standard (degree-shifted) bigrading on $E_r$ and visualize $E_r^{s,d}$ as sitting at the point $(s, d)$ of the plane. The differentials are then
$$ d_r: E_r^{s,d} \rightarrow E_r^{s+r,\, d-r+1}. $$
These are my questions:

  1. How does Theorem 6.1 generalize if (C1) is replaced by the following condition of exiting differentials?
    $$ E_r \text{ sits in a half-plane, and if we fix a coordinate } (s, d), \text{ then all but finitely many } d_r \text{ starting at } (s, d) \text{ leave the half-plane.} $$

  2. How does Theorem 7.2 generalize if (C2) is replaced by the following condition of entering differentials?
    $$ E_r \text{ sits in a half-plane, and if we fix a coordinate } (s, d), \text{ then all but finitely many } d_r \text{ ending at } (s, d) \text{ start outside the half-plane.} $$

The author of (1) answers the questions as follows:

  1. On p. 19, Chapter 6, in parentheses just before Theorem 6.1:

    …the results generalize appropriately, since all the arguments can be carried out degreewise; the main difficulty is finding notation that helps rather than hinders the exposition.

  2. On p. 20, Chapter 7, in parentheses, a couple of paragraphs before Theorem 7.2:

    …the results remain valid when suitably modified, since all the arguments can be carried out degreewise; the difficulty is finding notation that helps rather than hinders.

How exactly do these theorems generalize? Has this been worked out somewhere? Thank you!

P.S. I come from differential geometry and I am not familiar with the proof techniques for spectral sequences; I use them simply as a black box.

Linear algebra – Confused with Dual Space and Annihilator in Axler

I am reading Axler's Linear Algebra Done Right.

I am massively confusing myself with the earlier parts of its Duality section (3.F), which introduce the dual space, the dual map, and the null space and range of the dual map.

I feel that I can still follow about 85% of the proofs, and the lemmas are full of relations like $\operatorname{null} T' = (\operatorname{range} T)^0$, but I have a hard time understanding what the point and the intuition are.

Is there a post somewhere on the Stack where someone helps a confused student like me better understand and gain clarity about dual spaces, the annihilator, and the whole set of results relating them to null spaces and ranges?

Any reference or reading material you could point me to with an intuitive explanation would be appreciated!

Linear algebra: what exactly is a cofactor, and how is the sign table derived?

Given

\begin{equation*}
\begin{bmatrix}
a_{11} & \dots & a_{1j} \\
\vdots & \ddots & \vdots \\
a_{i1} & \dots & a_{ij}
\end{bmatrix}
\end{equation*}

I know that the cofactor is $A_{ij} = (-1)^{i+j} M_{ij}$, where $M_{ij}$ is the minor of an element. But what does that mean? What does it mean to take the cofactor of a matrix? Is there a geometric visualization of this or something?

Second, where does the sign table below come from, and why do I have to use it when summing the cofactors along a row to find the determinant of the matrix?
\begin{equation*}
\begin{bmatrix}
+ & - & + \\
- & + & - \\
+ & - & +
\end{bmatrix}
\end{equation*}

I have a decent understanding of why $\Delta x = \Delta_1$, but it is the cofactor expansions and the table of signs that I do not understand (though I can carry them out and use them).
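
For concreteness, here is a minimal sketch (an arbitrary $3 \times 3$ matrix of my own choosing, not from the question) showing what the cofactor expansion along a row computes, compared against numpy's built-in determinant:

    import numpy as np

    # Arbitrary 3x3 example matrix (any square matrix works).
    A = np.array([[2.0, 1.0, 3.0],
                  [0.0, 4.0, 1.0],
                  [5.0, 2.0, 6.0]])

    def minor(A, i, j):
        """Determinant of the submatrix obtained by deleting row i and column j."""
        sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
        return np.linalg.det(sub)

    # Cofactor expansion along row i = 0: sum over j of a_{0j} * (-1)^(0+j) * M_{0j};
    # the factor (-1)^(i+j) is exactly the sign table entry at position (i, j).
    i = 0
    expansion = sum(A[i, j] * (-1) ** (i + j) * minor(A, i, j) for j in range(3))

    print(expansion, np.linalg.det(A))  # the two values agree up to rounding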

Linear algebra – Remainder operator on polynomials

Here I am considering the $\Bbb{R}$-vector space $E$ of all polynomials of degree $n \in \Bbb{Z}_+$ or less, and $f \colon E \to E$ the operator that maps $P \in E$ to the remainder of the Euclidean division of $XP$ by $A(X) = X^{n+1} + a_n X^n + \cdots + a_1 X + a_0$. I need to find the matrix of $f$ in the basis $\{1, X, \dots, X^n\}$.

I have already shown that $f$ is linear, and that if $P(X) = \sum_{k=0}^{n} b_k X^k$ then $$f(P) = \left( \sum_{k=1}^n (b_{k-1} - a_k b_n) X^k \right) - a_0 b_n.$$ But now I do not know how to proceed with the computation of $f(1)$, $f(X)$, etc.
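
In case it helps to see the pattern, here is a small sympy sketch (my own, for the concrete case $n = 2$ with symbolic coefficients $a_0, a_1, a_2$) that computes $f(1)$, $f(X)$, $f(X^2)$ by polynomial division and places their coordinate vectors as the columns of the matrix:

    import sympy as sp

    X = sp.symbols('X')
    n = 2
    a = sp.symbols(f'a0:{n + 1}')                    # a0, a1, a2
    A = X**(n + 1) + sum(a[k] * X**k for k in range(n + 1))

    basis = [X**k for k in range(n + 1)]             # {1, X, X^2}

    # Column j of the matrix of f is the coordinate vector of
    # f(X^j) = remainder of X * X^j divided by A, in the basis {1, X, ..., X^n}.
    columns = []
    for P in basis:
        r = sp.expand(sp.rem(sp.expand(X * P), A, X))
        columns.append([r.coeff(X, k) for k in range(n + 1)])

    M = sp.Matrix(columns).T                         # transpose so each list becomes a column
    sp.pprint(M)

The point is just that the columns are the coordinate vectors of the images of the basis elements: for $j < n$ the remainder is simply $X^{j+1}$, and only the last column involves the coefficients $a_k$.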

Nilpotent elements of a Lie algebra and unipotent groups

Let $k$ be a field of characteristic 0 (not necessarily algebraically closed), let $G$ be a connected split reductive group over $k$, and let $\mathfrak{g}$ be the Lie algebra of $G$.

Let $X \in \mathfrak{g}$ be a nilpotent element. Is there a unipotent subgroup $U$ of $G$ such that $X$ is contained in the Lie algebra of $U$?

If $k$ is algebraically closed, this is Theorem 5.1 of http://virtualmath1.stanford.edu/~conrad/249BW16Page/handouts/applgr.pdf.

Here is a rough idea for a proof in the general case, but I cannot make the details work.

By the result over an algebraically closed field there is a unipotent subgroup $U_{\overline{k}}$ of $G_{\overline{k}}$ such that $X$ is in the Lie algebra of $U_{\overline{k}}$. Via the exponential/logarithm there is an isomorphism between $U_{\overline{k}}$ and its Lie algebra, so there exists $x \in U_{\overline{k}}(\overline{k})$ which is the exponential of $X$.

Since $X$ is defined over $k$, I expect the same to be true for $x$ (as an element of $G$). Then, by Theorem 3.6 of Conrad's notes (very easy in characteristic 0), $x$ will be an element of a unipotent subgroup $U$ of $G$, and we would deduce that $X$ is contained in the Lie algebra of $U$.

Since I work in characteristic 0, I imagine there could be a much simpler approach. Also, I do not think the split reductive hypothesis is necessary (at least I do not use it in my "proof idea").

linear algebra: finding a closed form for the recurrence relations $a_n = n \, a_{n-1} + 1$ and $a_n = n \, a_{n-1} + n$

Consider the sequence defined by
$$
\begin{cases}
a_0 = 1 \\
a_n = n \, a_{n-1} + 1 & \text{if } n \ge 1
\end{cases}
$$

Find a closed form for $a_n$.

The second case is the following:
$$
\begin{cases}
a_0 = 1 \\
a_n = n \, a_{n-1} + n & \text{if } n \ge 1
\end{cases}
$$

Find a closed form for $a_n$.
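
Not part of the statement, but a short script that tabulates the first terms of both recurrences; seeing the numbers (and, for the first one, comparing them with a candidate closed form that would still need a proof by induction) is often enough to guess the answer:

    from fractions import Fraction
    from math import factorial

    def terms(extra, n_max=8):
        """First terms of a_0 = 1, a_n = n * a_{n-1} + extra(n)."""
        a = [1]
        for n in range(1, n_max + 1):
            a.append(n * a[n - 1] + extra(n))
        return a

    first = terms(lambda n: 1)    # a_n = n * a_{n-1} + 1
    second = terms(lambda n: n)   # a_n = n * a_{n-1} + n
    print(first)
    print(second)

    # Candidate for the first recurrence (a guess, to be proved by induction):
    # a_n = n! * (1/0! + 1/1! + ... + 1/n!)
    candidate = [factorial(n) * sum(Fraction(1, factorial(k)) for k in range(n + 1))
                 for n in range(len(first))]
    print(first == candidate)    # True for the terms computed here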

linear algebra – Inequality involving the inverse of the kernel matrix

Let $k(\cdot,\cdot)$ be a function that takes two vectors as input and produces a scalar as follows:
\begin{align}
\mathcal{k}(x, y) = \exp\left(-\frac{\|x - y\|_2^2}{2}\right)
\end{align}

where $\|x\|_2$ denotes the $2$-norm. Now let $x_1, \dots, x_m$ be $m$ vectors in $\mathbb{R}^d$. Let me define the $m \times m$ matrix $\mathbf{K}$ so that its entries are given as
\begin{align}
\mathbf{K}_{ij} = \mathcal{k}(x_i, x_j)
\end{align}

For any vector $x$, let me define the $m \times 1$ vector $\mathcal{K}_x$ as
\begin{align}
\mathcal{K}_x = \begin{bmatrix}
\mathcal{k}(x, x_1) \\ \vdots \\ \mathcal{k}(x, x_m)
\end{bmatrix}
\end{align}

Now I am curious whether the following inequality holds for all $x \in \mathbb{R}^d$:
\begin{align}
\mathcal{K}_x^T \mathbf{K}^{-1} \mathcal{K}_x \leq 1
\end{align}
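
This is not a proof, but a quick numerical experiment (random landmark points and random query points of my own choosing) that evaluates the quadratic form $\mathcal{K}_x^T \mathbf{K}^{-1} \mathcal{K}_x$ at many $x$ and reports the largest value seen:

    import numpy as np

    rng = np.random.default_rng(0)
    m, d = 10, 3

    def k(x, y):
        """Gaussian kernel k(x, y) = exp(-||x - y||_2^2 / 2)."""
        return np.exp(-0.5 * np.sum((x - y) ** 2))

    # Random points x_1, ..., x_m and the m x m kernel matrix K.
    X = rng.normal(size=(m, d))
    K = np.array([[k(xi, xj) for xj in X] for xi in X])
    K_inv = np.linalg.inv(K)

    # Evaluate K_x^T K^{-1} K_x at many random query points x.
    values = []
    for _ in range(1000):
        x = rng.normal(size=d)
        Kx = np.array([k(x, xi) for xi in X])
        values.append(Kx @ K_inv @ Kx)

    print(max(values))   # stays below 1 in this experiment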

Reference request: the dimension of the Zariski tangent space is bounded for a finitely generated algebra

Can anyone suggest a published reference for the following fact?

For a finitely generated algebra over an algebraically closed field, the dimension of the Zariski tangent space at the maximal ideals is bounded from above.

I cannot find it in the much-loved Stacks project.

linear algebra – Existence of the adjoint via the Riesz representation theorem

In Linear Algebra Done Right we have a theorem that states:

Riesz representation theorem: Suppose $V$ is finite-dimensional and $A$ is a linear functional on $V$. Then there is a unique vector $u$ such that for every $v$: $A(v) = \langle u, v \rangle$.

However, the existence of the adjoint transformation is then justified using this theorem:

$\langle Tv, w \rangle = \langle v, T^{*} w \rangle$

To see why the above definition makes sense, suppose $T \in \mathcal{L}(V, W)$. Fix $w \in W$. Consider the linear functional on $V$ that maps $v \in V$ to $\langle Tv, w \rangle$; this linear functional depends on $T$ and $w$. By the Riesz representation theorem, there is a unique vector in $V$ such that this linear functional is given by taking the inner product with it. We call this unique vector $T^{*}w$.

I do not understand how the two situations connect. First we did not have a transformation inside the inner product and now we do; how does the existence guarantee carry over? Not only that, but $v$ and $w$ may belong to spaces of different dimensions, and $T$ maps $V$ to $W$. How does the Riesz representation theorem still apply in this case? The two seem quite disconnected.
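
To make the quoted construction concrete, here is a small numpy sketch (standard real inner products, arbitrary matrices of my own choosing, with $\dim V \neq \dim W$) that builds $T^*w$ by evaluating the functional $v \mapsto \langle Tv, w \rangle$ on a basis of $V$, which is how the Riesz vector is found in coordinates, and then checks the defining identity:

    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 3, 4                      # dim V = 3, dim W = 4 (different on purpose)
    T = rng.normal(size=(m, n))      # T : V -> W as an m x n matrix

    def adjoint_vector(T, w):
        """Riesz vector of the functional v |-> <Tv, w>: its values on the basis e_1, ..., e_n."""
        n = T.shape[1]
        return np.array([np.dot(T @ np.eye(n)[:, i], w) for i in range(n)])

    v = rng.normal(size=n)
    w = rng.normal(size=m)

    Tstar_w = adjoint_vector(T, w)
    print(np.dot(T @ v, w), np.dot(v, Tstar_w))   # the two inner products agree
    print(np.allclose(Tstar_w, T.T @ w))          # and T* w coincides with T^T w here

In the sketch the Riesz step happens entirely inside $V$: the functional $v \mapsto \langle Tv, w \rangle$ lives on $V$, and $w$ only parametrizes it.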

linear algebra – Non-commuting diagonalizable matrices over R and C

I have the question: find diagonalizable matrices that do not commute (with entries in $\mathbb{R}$ and $\mathbb{C}$). But I think that diagonalizable matrices over $\mathbb{R}$ and $\mathbb{C}$ always commute, because the entries on the diagonal commute. What am I missing?
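
A quick numerical sketch (two small matrices picked by hand, each with distinct eigenvalues and therefore diagonalizable, but not in the same basis) that simply checks whether they commute:

    import numpy as np

    # Each matrix has distinct eigenvalues (1 and 2), hence is diagonalizable over R,
    # but the two are diagonalized by different bases.
    A = np.array([[1.0, 1.0],
                  [0.0, 2.0]])
    B = np.array([[1.0, 0.0],
                  [1.0, 2.0]])

    print(np.linalg.eigvals(A), np.linalg.eigvals(B))
    print(A @ B)
    print(B @ A)
    print(np.allclose(A @ B, B @ A))   # False: they do not commute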