Formula 3.1 gives the projection of a vector $v$ onto a subspace $E$ spanned by an orthogonal basis. I was wondering whether we need the basis $v_1, v_2, \dots, v_r$ to be orthogonal for this formula to work. If the basis is not orthogonal, where does the formula break down?
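Assuming Formula 3.1 is the usual term-by-term sum $\sum_i \frac{\langle v, v_i\rangle}{\langle v_i, v_i\rangle} v_i$: for a non-orthogonal basis that sum double-counts the overlap between basis vectors, and the general fix is to solve the normal equations with the Gram matrix $B^T B$. A small numerical sketch with toy vectors of my own choosing, assuming NumPy:

```python
import numpy as np

# Toy data (my own choice): project v onto E = span{b1, b2} in R^3.
v = np.array([1.0, 2.0, 3.0])
b1 = np.array([1.0, 0.0, 0.0])   # b1 and b2 are NOT orthogonal: b1 . b2 = 1
b2 = np.array([1.0, 1.0, 0.0])

# Term-by-term formula -- only valid when the basis is orthogonal:
naive = sum((v @ b) / (b @ b) * b for b in (b1, b2))

# General formula via the normal equations (Gram matrix), valid for any basis:
B = np.column_stack([b1, b2])
coeffs = np.linalg.solve(B.T @ B, B.T @ v)
proj = B @ coeffs

print(naive)   # approximately [2.5, 1.5, 0]: the overlap is double-counted
print(proj)    # approximately [1, 2, 0]: the true projection onto the xy-plane
```

When the basis is orthogonal, $B^T B$ is diagonal and the two computations coincide, which is exactly why the orthogonality hypothesis appears in the formula.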

# Tag: subspace

## real analysis – Show that the tangent hyperplane to a hypersurface is an $n$-dimensional affine subspace

We are given a hyperplane defined by:

$\nabla F(C) \cdot (X - C) = 0$, where $F(x) = f(x) - x_{n+1}$, $x \in \mathbb{R}^n$ and $f(x) = x_{n+1}$. The question is as follows:

My approach so far has been to expand the equation in terms of the coordinates of $\nabla F(C)$, $X$, and $C$, but that is where I get stuck. Note that $C = (c, f(c))$.
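For what it's worth, expanding in coordinates as described does finish the argument; a sketch, writing $X = (x, x_{n+1})$ and using $C = (c, f(c))$: since $F(x, x_{n+1}) = f(x) - x_{n+1}$, the gradient at $C$ is $\nabla F(C) = (\nabla f(c), -1)$, so

$$\nabla F(C)\cdot(X - C) = \nabla f(c)\cdot(x - c) - \bigl(x_{n+1} - f(c)\bigr) = 0,$$

equivalently $x_{n+1} = f(c) + \nabla f(c)\cdot(x - c)$. The solution set is therefore the graph of an affine map $\mathbb{R}^n \to \mathbb{R}$, hence an $n$-dimensional affine subspace of $\mathbb{R}^{n+1}$.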

## linear algebra – How do I find the subspace of $P_2$ from the following subsets?

One of the following is a subspace of $P_2$.

A. $W = \{ax^2+bx+c \mid a+b+c=1\}$

B. $W = \{ax^2+bx+c \mid a+b+c=0\}$

C. $W = \{ax^2+bx+c \mid a \ge 0\}$

D. $W = \{ax^2+bx+c \mid b, c \ge 0\}$

So it’s either B, C, or D, because the zero-vector condition fails for A. What I don’t understand is the difference between C and D. How does requiring $a \ge 0$ differ from requiring $b, c \ge 0$? Can someone please explain how this works? This concept is a bit new to me and I couldn’t find any examples of this particular case online.
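One way to see the issue concretely: sign conditions like $a \ge 0$ survive addition but not scaling by negative numbers, and this affects C and D in the same way. A quick numerical check with toy values of my own, representing $ax^2+bx+c$ by the tuple $(a, b, c)$:

```python
# Toy check (my own values): represent ax^2 + bx + c by the tuple (a, b, c).
def in_B(p):          # B: a + b + c = 0
    return sum(p) == 0

def in_C(p):          # C: a >= 0
    return p[0] >= 0

p = (1, -2, 1)                  # x^2 - 2x + 1: lies in both B and C
neg = tuple(-x for x in p)      # the scalar multiple (-1) * p

print(in_B(p), in_B(neg))   # True True  -> B is closed under this scaling
print(in_C(p), in_C(neg))   # True False -> C is not closed under scaling by -1
```

The same scaling test applied to the condition $b, c \ge 0$ of D fails in the same way, which is why a homogeneous linear condition (B) behaves differently from any inequality condition (C or D).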

## rt.representation theory – Invariant subspace for representations of compact quantum groups

Let $(A, \Delta)$ be a compact quantum group and $u \in M(B_0(H)\otimes A)$ a representation of $(A, \Delta)$. A closed subspace $H_1$ of $H$ is called invariant if $$(e\otimes 1)u(e\otimes 1) = u(e\otimes 1)$$ where $e \in B(H)$ is the orthogonal projection onto $H_1$.

How is the multiplication above defined? I.e., how does one multiply the element $e\otimes 1 \in B(H)\otimes A$ with an element of $M(B_0(H)\otimes A)$?

Or do we consider $M(B_0(H) \otimes A) \subseteq M(B(H)\otimes A)$ for this?

## linear algebra – Constructing invariant subspace of factor of minimal polynomial

Suppose $Z$ is an endomorphism of an $N$-dimensional complex vector space $V$, and that $U \subset V$ is a $Z$-invariant subspace of codimension $M$. Suppose also that $Z$ admits a cyclic vector $\phi \in V$.

Now the minimal polynomial of $Z$ is a degree-$N$ polynomial $f$ which factorizes as $f = gh$, where $h$ is the minimal polynomial of $Z\mid_U$, and $g$ has degree $M$.

Then $\ker h(Z) = U$, whilst $\ker g(Z) = W$ is another invariant subspace, of dimension $M$, which I am interested in. My question is: is there any other way to construct the space $W$ from $U$ and $Z$, without mentioning minimal polynomials, and without making further assumptions about coprimality of $g, h$ or anything like that?

(In particular, note that as $g,h$ are not necessarily coprime, we cannot characterize $W$ by thinking about a $Z$-invariant complement to $U$. I essentially want a natural generalization of this idea.)

This is a cross-post from here.
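A toy numerical instance of the setup above, my own example rather than anything from the post, assuming NumPy: take $Z$ a single $3\times 3$ nilpotent Jordan block, which has cyclic vector $e_3$, and $U = \operatorname{span}\{e_1\}$, of codimension $M = 2$. Then $f(t) = t^3$, $h(t) = t$, $g(t) = t^2$, and $W = \ker g(Z) = \ker Z^2$:

```python
import numpy as np

# Z: a single 3x3 nilpotent Jordan block (cyclic vector e3).
Z = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])

# Here f(t) = t^3, h(t) = t is the minimal polynomial of Z restricted to
# U = span{e1}, and g(t) = t^2, so W = ker g(Z) = ker Z^2.
g_of_Z = Z @ Z

# Null space via SVD: right singular vectors for (near-)zero singular values.
_, s, Vt = np.linalg.svd(g_of_Z)
rank = int(np.sum(s > 1e-10))
W_basis = Vt[rank:].T            # columns span W = span{e1, e2}

print(W_basis.shape[1])          # 2, matching the codimension M of U
```

Note that here $g$ and $h$ share the root $0$, so $W \supset U$ and $W$ is not a complement of $U$, illustrating the point made in the parenthetical above.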

## If $A$ has exactly one invariant one-dimensional subspace, then what can you say about $n$?

I have a simple question that I am confused by.

Let $A \in M_{n}(\mathbb{R})$ be such that $A A^{t}=I$. If $A$ has exactly one invariant one-dimensional subspace, then what can you say about $n$?

I am not sure how to approach this.
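A concrete instance to experiment with, my own example assuming NumPy: a rotation about the $z$-axis in $\mathbb{R}^3$ is orthogonal and, for a generic angle, fixes exactly one line. Invariant one-dimensional subspaces of a real matrix correspond to real eigenvectors:

```python
import numpy as np

# Rotation about the z-axis: orthogonal, and for a generic angle its only
# invariant line is the z-axis.
theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

print(np.allclose(A @ A.T, np.eye(3)))   # True: A A^t = I

# Invariant one-dimensional subspaces correspond to real eigenvectors.
eigvals, eigvecs = np.linalg.eig(A)
real_lines = [eigvecs[:, i].real for i in range(3)
              if abs(eigvals[i].imag) < 1e-9]
print(len(real_lines))                   # 1: only the z-axis
```

Experimenting with rotations in even dimensions (where a generic rotation has no real eigenvalue at all) is one way to build intuition about which $n$ are possible.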

## Linear subspace – meaning of $'$ in the example $p'(1)=p'(0)$

I need some help solving this; I'm a bit confused. Let $U=\{p\in P_3 \mid p'(1)=p'(0)\}$. Prove that $U$ is a linear subspace of $P_3$, using the usual closure condition $p + \alpha q \in U$ for $p, q \in U$. I'm confused by the condition $p'(1)=p'(0)$: I've never seen this type of notation, so I'm not sure what the $'$ means.
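Here the $'$ denotes the derivative: $p'$ is the polynomial $\frac{dp}{dt}$, so the condition compares the slope of $p$ at $t=1$ and $t=0$. A small numerical closure check with example values of my own, representing $p \in P_3$ by its coefficients:

```python
# Represent p in P3 by coefficients (a, b, c, d) for p(t) = a t^3 + b t^2 + c t + d.
def dp(p, t):
    a, b, c, _ = p               # p'(t) = 3a t^2 + 2b t + c
    return 3 * a * t**2 + 2 * b * t + c

def in_U(p):
    return dp(p, 1) == dp(p, 0)  # equivalent to 3a + 2b = 0

# Toy members of U (my own values) and a linear combination of them:
p = (2, -3, 5, 1)                # 3*2 + 2*(-3) = 0
q = (-2, 3, 0, 7)                # 3*(-2) + 2*3 = 0
alpha = 4
combo = tuple(pi + alpha * qi for pi, qi in zip(p, q))

print(in_U(p), in_U(q), in_U(combo))   # True True True: closed under p + alpha*q
```

The check works because $p \mapsto p'(1) - p'(0)$ is linear in the coefficients, which is exactly what the written proof has to exploit.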

## linear algebra – How to check if an affine subspace has all non-negative components efficiently?

Suppose that an $n$-dimensional affine space is defined with vectors $e_i=(0,\dots,1,\dots,0)$, $i=1,\dots,n$ (the $1$ stands in the $i$-th place), i.e.

$$\mathcal{A}=\left\{\alpha_1e_1+\alpha_2e_2+\dots+\alpha_ne_n \mid \sum_{k=1}^n\alpha_k=1\right\}$$

and there is an affine subspace of $\mathcal{A}$, e.g. $Ax+c$, where the columns of $A$ form a basis of the direction space parallel to $\mathcal{A}$ and $c$ is the corresponding shift vector of the affine subspace. The problem is that I want to check whether the affine subspace contains a point with all non-negative components, i.e. whether $Ax+c\ge0$ is solvable. If not, I want to find the closest point achieving that goal.

The non-negativity requirement in the affine space $\mathcal{A}$ is equivalent to the definition of a standard $n$-dimensional simplex $$\Delta^n=\left\{\alpha_1e_1+\alpha_2e_2+\dots+\alpha_ne_n \mid \sum_{k=1}^n\alpha_k=1,\; \forall \alpha_k\ge0 \right\}$$

From the geometric point of view, the problem is equivalent to checking whether a line, hyperplane, etc., intersects the simplex within the same affine space $\mathcal{A}$; if not, we want to find the closest point to the simplex. Illustration

Is there some simple and efficient way to do so?
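One standard approach, sketched here with hypothetical data and assuming SciPy is available: the question "is there $x$ with $Ax + c \ge 0$?" is a linear-programming feasibility problem, which LP solvers decide efficiently.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: does some x satisfy A x + shift >= 0?
# Rewrite as -A x <= shift and solve a zero-objective linear program.
A = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0, -1.0]])
shift = np.array([0.2, 0.2, 0.6])

res = linprog(np.zeros(A.shape[1]),          # zero objective: feasibility only
              A_ub=-A, b_ub=shift,
              bounds=[(None, None)] * A.shape[1])
print(res.success)   # True here: x = (0, 0) already gives A x + shift >= 0
```

If the LP is infeasible, the closest-point question becomes a small convex quadratic program, e.g. minimizing the distance from $Ax+c$ to the non-negative orthant, which generic QP solvers handle.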

## vector spaces – Checking whether a set of continuous functions is a linear subspace?

I really need help with this question!

Let $I = (a, b)$ be an interval.

(a) Check that the set $X$ of continuous functions $f : I \to \mathbb{R}$ is a linear subspace of the vector space $\mathbb{R}^I$, $X = C(I, \mathbb{R})$. (You can take for granted that $\mathbb{R}^I$ is a vector space.)

(b) Show that $\|f\|_1 = \int_a^b |f| \, dt$ defines a norm on $C(I, \mathbb{K})$.
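Part (b) can be sanity-checked numerically before writing the proof. A sketch with sample functions of my own choosing on $I = (0, 1)$, using grid quadrature and assuming NumPy; this is evidence, not a proof:

```python
import numpy as np

# Sample functions on I = (0, 1), discretized on a grid.
t = np.linspace(0.0, 1.0, 10001)

def norm1(y):
    z = np.abs(y)   # trapezoidal rule for the integral of |y| over the grid
    return float(np.sum((z[1:] + z[:-1]) / 2 * np.diff(t)))

f = np.sin(2 * np.pi * t)
g = t**2 - 0.5

print(norm1(f + g) <= norm1(f) + norm1(g) + 1e-9)   # triangle inequality: True
print(abs(norm1(-3 * f) - 3 * norm1(f)) < 1e-9)     # absolute homogeneity: True
```

The actual proof follows the same two pointwise facts, $|f+g| \le |f| + |g|$ and $|\lambda f| = |\lambda||f|$, plus continuity for definiteness ($\|f\|_1 = 0 \Rightarrow f = 0$).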

## ag.algebraic geometry – Generalization of: The dimension of a projective $\mathbb{F}$-variety equals the smallest codimension of a disjoint linear subspace

Let $\mathbb{F}$ be an algebraically closed field. Consider the following definition of the dimension of a (quasi)projective $\mathbb{F}$-variety, given in Harris *Algebraic Geometry: A First Course*:

It seems nonstandard to take this as the *definition* of the dimension of a variety, so I will think of it as a theorem that should be proven from a more conventional definition of dimension, e.g. the Krull dimension.

My question is this: Does Definition 11.2 generalize to structures other than a variety over an algebraically closed field? For example, what if $\mathbb{F}=\mathbb{R}$? Or $X$ is a manifold embedded in projective space? In these cases, does the dimension of $X$ still agree with the smallest codimension of a disjoint linear subspace?

Note that Definition 11.2 holds when the “irreducible” assumption is dropped, so this small generalization does hold.