If $A$ has exactly one invariant one dimensional subspace, then what can you say about $n$?

I have a trivial question that I am confused with

Let $A \in M_{n}(\mathbb{R})$ be such that $A A^{t}=I$. If $A$ has exactly one invariant one-dimensional subspace, then what can you say about $n$?

I am not sure how to approach this.
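Not an answer, but a concrete instance worth computing before guessing at $n$: a generic rotation of $\mathbb{R}^3$ is orthogonal and fixes exactly one line, its rotation axis. The angle below is an arbitrary choice for illustration.

```python
import numpy as np

# A concrete data point (not a proof): a generic rotation of R^3 satisfies
# A A^T = I and has exactly one invariant line, its rotation axis.
# Invariant one-dimensional subspaces are spanned by real eigenvectors,
# and their eigenvalues must be +-1 because A preserves lengths.
theta = 1.0  # arbitrary angle, not a multiple of pi
A = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
assert np.allclose(A @ A.T, np.eye(3))          # A is orthogonal

eigvals, eigvecs = np.linalg.eig(A)
# Count real eigenvalues: each real eigenvector spans an invariant line.
real_idx = [i for i in range(3) if np.isclose(eigvals[i].imag, 0.0)]
print(len(real_idx))  # prints: 1
```

The other two eigenvalues are the complex pair $e^{\pm i\theta}$, which give no real eigenvectors.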

Linear subspace – meaning of the ′ in, e.g., p′(1)=p′(0)

I need some help solving this; I'm a bit confused. Let $U=\{p\in P_3 \mid p′(1)=p′(0)\}$. Prove that $U$ is a linear subspace of $P_3$, using the usual operations, i.e. $p + \alpha q \in P_3$. I'm confused by the condition $p′(1)=p′(0)$; I've never seen this type of notation before, so I'm not sure what the ′ means.
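For context, the ′ denotes the derivative: $p'$ is the derivative of $p$. Writing $p(t)=a_0+a_1t+a_2t^2+a_3t^3$, the condition $p'(1)=p'(0)$ reduces to $2a_2+3a_3=0$, a single linear equation in the coefficients, which is why $U$ is a subspace. A quick numerical sanity check of closure, using two made-up polynomials that satisfy the condition:

```python
import numpy as np

# p in P3 as coefficients (a0, a1, a2, a3): p(t) = a0 + a1 t + a2 t^2 + a3 t^3.
# Then p'(t) = a1 + 2 a2 t + 3 a3 t^2, so p'(1) = p'(0) means 2 a2 + 3 a3 = 0.
def in_U(p, tol=1e-12):
    dp = lambda t: p[1] + 2.0 * p[2] * t + 3.0 * p[3] * t**2  # p'(t)
    return abs(dp(1.0) - dp(0.0)) < tol

# Two example polynomials with 2 a2 + 3 a3 = 0:
p = np.array([5.0, 2.0, 3.0, -2.0])
q = np.array([0.0, 1.0, -3.0, 2.0])
alpha = 7.5
print(in_U(p), in_U(q), in_U(p + alpha * q))  # prints: True True True
```

Since the condition is linear, any combination $p+\alpha q$ of members stays in $U$, which is exactly the subspace property.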

linear algebra – How to check if an affine subspace has all non-negative components efficiently?

Suppose that an $n$-dimensional affine space is defined with vectors $e_i=(0,\dots,1,\dots,0)$, $i=1,\dots,n$ (the $1$ stands in the $i$-th place), i.e.
$$\mathcal{A}=\left\{\alpha_1e_1+\alpha_2e_2+\dots+\alpha_ne_n \;\middle|\; \sum_{k=1}^n\alpha_k=1\right\}$$
and there is an affine subspace, e.g. $Ax+c$, in $\mathcal{A}$, where the columns of $A$ form a basis parallel to $\mathcal{A}$ and $c$ is the corresponding shift vector of the affine subspace. I want to check whether the affine subspace has all non-negative components, i.e. $Ax+c\ge0$ for some $x$. If not, it is desired to find the closest point that achieves that goal.

The non-negativity requirement in the affine space $\mathcal{A}$ is equivalent to the definition of the standard $n$-dimensional simplex $$\Delta^n=\left\{\alpha_1e_1+\alpha_2e_2+\dots+\alpha_ne_n \;\middle|\; \sum_{k=1}^n\alpha_k=1,\; \alpha_k\ge0\ \forall k \right\}$$
From the geometric point of view, the problem is equivalent to checking whether a line, hyperplane, etc., intersects the simplex in the same affine space $\mathcal{A}$; if not, we want to find the closest point to the simplex. (Illustration omitted.)

Is there some simple and efficient way to do so?
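One practical route, not necessarily the most efficient: the existence question "does some $x$ give $Ax+c\ge0$?" is a linear feasibility problem, so any LP solver settles it, and the closest-point question is a convex projection. A dependency-free sketch using alternating projections between the affine set $\{Ax+c\}$ and the non-negative orthant (the matrices below are made-up examples; since both sets are polyhedral, a zero distance gap implies they actually meet):

```python
import numpy as np

def check_nonneg(A, c, iters=1000, tol=1e-8):
    """Alternating projections between the affine set {A x + c : x}
    and the non-negative orthant. Returns (feasible, y), where y is
    (approximately) the point of the affine set closest to the orthant."""
    P = A @ np.linalg.pinv(A)              # orthogonal projector onto range(A)
    y = c.astype(float).copy()
    for _ in range(iters):
        z = np.maximum(y, 0.0)             # project onto the orthant
        y = c + P @ (z - c)                # project back onto {A x + c}
    return bool(np.min(y) >= -tol), y

# Feasible example: A x + c = (x + 0.5, x + 0.5) >= 0 whenever x >= -0.5.
ok, _ = check_nonneg(np.array([[1.0], [1.0]]), np.array([0.5, 0.5]))
print(ok)       # prints: True

# Infeasible example: (x - 2, -x - 1) >= 0 needs x >= 2 and x <= -1.
ok, y = check_nonneg(np.array([[1.0], [-1.0]]), np.array([-2.0, -1.0]))
print(ok, y)    # prints: False [-1.5 -1.5]
```

In the infeasible case, the returned $y$ is the point of the affine subspace closest to the orthant, which is the "closest point" asked for above (projecting $y$ onto the orthant gives the closest simplex-side point).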

vector spaces – Checking set of continuous function is a linear subspace?

I really need help with this question!

Let $I = (a, b)$ be an interval.

(a) Check that the set $X$ of continuous functions $f : I \to \mathbb{R}$ is a linear subspace of the vector space $\mathbb{R}^I$, $X = C(I, \mathbb{R})$. (You can take for granted that $\mathbb{R}^I$ is a vector space.)

(b) Show that $\|f\|_1 = \int_a^b |f| \, dt$ defines a norm on $C(I, \mathbb{K})$.
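For part (b), the three norm axioms can be sanity-checked numerically before proving them. A sketch with a Riemann sum on the made-up interval $(a,b)=(0,1)$ and two arbitrary continuous functions:

```python
import numpy as np

# Discretize ||f||_1 = integral_a^b |f| dt by a Riemann sum on (0, 1).
t = np.linspace(0.0, 1.0, 100001)
dt = t[1] - t[0]
norm1 = lambda f: float(np.sum(np.abs(f(t))) * dt)

f = lambda s: np.sin(2.0 * np.pi * s)   # arbitrary test functions
g = lambda s: s**2 - 0.5
c = -3.0

print(norm1(f) > 0.0)                                             # positivity
print(np.isclose(norm1(lambda s: c * f(s)), abs(c) * norm1(f)))   # homogeneity
print(norm1(lambda s: f(s) + g(s)) <= norm1(f) + norm1(g))        # triangle
```

All three print `True`; the actual proof uses the corresponding pointwise facts ($|f|\ge0$, $|cf|=|c||f|$, $|f+g|\le|f|+|g|$) plus monotonicity and linearity of the integral, and continuity of $f$ to get $\|f\|_1=0 \Rightarrow f=0$.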

ag.algebraic geometry – Generalization of: The dimension of a projective $\mathbb{F}$-variety equals the smallest codimension of a disjoint linear subspace

Let $\mathbb{F}$ be an algebraically closed field. Consider the following definition of the dimension of a (quasi)projective $\mathbb{F}$-variety, given in Harris's Algebraic Geometry: A First Course:

(Image of Definition 11.2 from Harris, not reproduced here.)

It seems nonstandard to take this as the definition of the dimension of a variety, so I will think of it as a theorem to be proven from a more conventional definition of dimension, e.g. the Krull dimension.

My question is this: Does Definition 11.2 generalize to structures other than a variety over an algebraically closed field? For example, what if $\mathbb{F}=\mathbb{R}$? Or if $X$ is a manifold embedded in projective space? In these cases, does the dimension of $X$ still agree with the smallest codimension of a disjoint linear subspace?

Note that Definition 11.2 holds when the “irreducible” assumption is dropped, so this small generalization does hold.

For each given subspace $W$ there is precisely one row-reduced echelon matrix that has $W$ as its row space

I'm wondering why this is proved without first assuming that the two row-reduced echelon matrices are different.
I would appreciate it if you could briefly explain the book's proof.
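Whatever the proof strategy, the statement itself is easy to observe computationally: reduce two different matrices with the same row space and the same RREF appears. A small hand-rolled Gauss–Jordan sketch (numerically naive, for illustration only; the matrices are made-up examples):

```python
import numpy as np

def rref(M, tol=1e-10):
    """Row-reduced echelon form via Gauss-Jordan elimination."""
    A = M.astype(float).copy()
    r = 0                                   # next pivot row
    for c in range(A.shape[1]):
        piv = r + np.argmax(np.abs(A[r:, c]))
        if abs(A[piv, c]) < tol:
            continue                        # no pivot in this column
        A[[r, piv]] = A[[piv, r]]           # swap pivot row into place
        A[r] /= A[r, c]                     # scale pivot to 1
        for i in range(A.shape[0]):
            if i != r:
                A[i] -= A[i, c] * A[r]      # clear the rest of the column
        r += 1
        if r == A.shape[0]:
            break
    return A

# Two different matrices with the same row space:
B = np.array([[1, 2, 3], [0, 1, 1]])
C = np.array([[1, 3, 4], [2, 5, 7]])        # rows are combinations of B's rows
print(np.allclose(rref(B), rref(C)))        # prints: True
```

Both reduce to the same matrix, consistent with the uniqueness theorem: the RREF is a canonical form determined by the row space alone.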

functional analysis – Intersection of two dense 1-codimensional subspace of a Banach space

Let $X$ be a Banach space, and let $Y, X_1, X_2$ be dense $1$-codimensional subspaces of $X$. Is $X_1 \cap X_2$ dense in $X$? Is $X_1 \cap Y$ dense in $X$?

This is an exercise on the Hahn–Banach theorem (Chapter 3 of Linear Analysis by Béla Bollobás). I tried to view $Y, X_1, X_2$ as kernels of linear functionals in order to apply the Hahn–Banach theorem or the hyperplane separation theorem, but it doesn't work.

Any help would be appreciated.

pr.probability – Expectation of Gaussian random vector over a Subspace

Suppose $\boldsymbol{x} \in \mathbb{R}^d$ is a Gaussian random vector, distributed as $\boldsymbol{x} \sim \mathcal{N}(\boldsymbol{\mu}, \sigma^2 \boldsymbol{I}_d)$, where $\boldsymbol{I}_d$ is the identity matrix in $\mathbb{R}^{d \times d}$. We also assume $\|\boldsymbol{\mu}\|_{\ell_2} = 1$. Given a subspace $\mathcal{R} \subset \mathbb{R}^d$, I want to upper bound the Euclidean norm $\|\mathbb{E}(\boldsymbol{x} \mid \mathcal{R})\|_{\ell_2}$.

Find dimension of the subspace

linear algebra – Intersection of a vector subspace with a cone

Given a set of vectors $S=\{v_1, v_2,\dots,v_d\} \subset \mathbb{R}^{N}$, $N>d$, is there any algorithm to decide whether there exists a vector with all coordinates strictly positive in the generated subspace $\langle S \rangle$?
I am aware of results like Farkas' lemma (and variants such as Gordan's theorem, etc.),
and of papers such as: Ben-Israel, Adi. "Notes on linear inequalities, I: The intersection of the nonnegative orthant with complementary orthogonal subspaces." Journal of Mathematical Analysis and Applications 9.2 (1964): 303–314.
But I am looking for an algorithm that decides yes or no.
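For what it's worth, the decision problem is exactly LP feasibility: $\langle S\rangle$ contains a strictly positive vector iff some $x$ solves $Vx \ge \mathbf{1}$ componentwise, where $V$ has the $v_i$ as columns, since any strictly positive vector can be rescaled so its minimum component is $\ge 1$. Any LP solver decides this. A dependency-free sketch using alternating projections (both sets are polyhedral, so the distance between them is attained and a tolerance test is meaningful; the example vectors are made up):

```python
import numpy as np

def has_strictly_positive(V, iters=5000, tol=1e-6):
    """Decide (numerically) whether the span of the columns of V contains
    a vector with all coordinates > 0, via alternating projections
    between the subspace range(V) and the polyhedron {y : y >= 1}."""
    P = V @ np.linalg.pinv(V)        # orthogonal projector onto range(V)
    y = np.ones(V.shape[0])
    for _ in range(iters):
        y = P @ np.maximum(y, 1.0)   # project onto {y >= 1}, then onto span
    return bool(np.min(y) >= 1.0 - tol)

# span{(1, 1, 1)} clearly contains a strictly positive vector:
print(has_strictly_positive(np.array([[1.0], [1.0], [1.0]])))  # prints: True
# span{(1, -1)} contains no strictly positive vector:
print(has_strictly_positive(np.array([[1.0], [-1.0]])))        # prints: False
```

This is only a heuristic sketch; for an exact yes/no answer in rational arithmetic one would run a simplex-type LP algorithm on the system $Vx \ge \mathbf{1}$, which terminates with either a feasible $x$ or a Farkas certificate of infeasibility.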