linear algebra – Given $f:\mathbb{R}^m \supset U \rightarrow \mathbb{R}^n$ differentiable. Relation between the rank of $f'$ and properties of the image of $f$?

Let $f:\mathbb{R}^m \supset U \rightarrow \mathbb{R}^n$ be a differentiable map, $U$ open, and $a \in U$. Are the following statements true?

  1. If $m > n$ and $f(U)$ is an open set in $\mathbb{R}^n$, then $\text{rank}\,f'_a = n$?

  2. If $m < n$ and $f(U)$ is an open set in $\mathbb{R}^m$, then $\text{rank}\,f'_a = m$?

I don't know how to extract information about the Jacobian of $f$ in order to determine its rank. I tried a proof by contradiction, as follows:

For problem 1: if $\text{rank}\,f'_a = k < n$, then the image $f'_a(\mathbb{R}^m)$ is a $k$-dimensional subspace of $\mathbb{R}^n$. As $h \rightarrow 0$ we have $\mathbb{R}^n \ni f(a + h) - f(a) \rightarrow f'_a(h)$, which lies in that $k$-dimensional subspace. But I don't see any reason forbidding $f(a + h) - f(a) \in \mathbb{R}^n$ from being close to a point of a $k$-dimensional subspace, so maybe the conclusions are wrong.

If they are wrong, could you give an example?
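Since part of the difficulty is extracting rank information from the Jacobian, here is a small numerical sketch for experimenting with candidate maps (the map `f` below is only a placeholder, not a proposed counterexample): it approximates the Jacobian at a point by central differences and reports its rank. It is a tool for building intuition only and says nothing about whether statements 1 and 2 hold.

```python
# Numerical experiment only: approximate the Jacobian of a map R^m -> R^n at a
# point by central differences and report its rank.  The map below is a
# placeholder, to be swapped for whatever candidate one wants to test.
import numpy as np

def numerical_jacobian(f, a, h=1e-6):
    """Central-difference approximation of the Jacobian of f at the point a."""
    a = np.asarray(a, dtype=float)
    m, n = a.size, np.asarray(f(a)).size
    J = np.empty((n, m))
    for i in range(m):
        e = np.zeros(m)
        e[i] = h
        J[:, i] = (np.asarray(f(a + e)) - np.asarray(f(a - e))) / (2 * h)
    return J

# Placeholder: a map R^2 -> R (so m = 2 > n = 1), evaluated at the origin.
f = lambda p: np.array([p[0] ** 2 + p[1] ** 2])
J = numerical_jacobian(f, np.array([0.0, 0.0]))
print(J, "rank:", np.linalg.matrix_rank(J, tol=1e-4))
```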

$X_n \xrightarrow{p} X \Rightarrow \exists\hspace{0.2em} X_{n_j} \xrightarrow{a.s.} X$

I need someone to check this proof.

Suppose that $X_n \xrightarrow{p} X$, take some decreasing sequence $\{\epsilon_m\}_{m \in \mathbb{N}}$ satisfying $\epsilon_m \rightarrow 0$, and define:

  • $A_{j,\epsilon}= \{ \omega \in \Omega : \vert X_j(\omega)-X(\omega)\vert <\epsilon\}$
  • $S_{n,\epsilon}= \bigcap_{j \geq n} A_{j,\epsilon}$
  • $S_{\epsilon}=\bigcup_{n \in \mathbb{N}}S_{n,\epsilon}=\liminf A_{j,\epsilon}$
  • $S:= \bigcap_{m \in \mathbb{N}}S_{\epsilon_m}$

It is easy to show that $S = \{ \omega \in \Omega : \lim_{j\to \infty}X_j(\omega) = X(\omega)\}$.

Now, fixing $m$, I can build an increasing $n : \mathbb{N} \to \mathbb{N}$ satisfying $$\mathbb{P}(\vert X_{n_j}-X\vert>\epsilon_m)\leq 2^{-j}, \quad \forall j\in \mathbb{N}.$$

Now, because $\sum_{j=1}^{\infty}\mathbb{P}(\vert X_{n_j}-X\vert>\epsilon_m)\leq\sum_{j=1}^{\infty}2^{-j}<\infty$, by the Borel–Cantelli lemma

$$\mathbb{P}(\limsup A_{j,\epsilon_m})=0 \Rightarrow \mathbb{P}(S_{\epsilon_m})=1,$$
so this shows that $\mathbb{P}(S_{\epsilon_m})=1$ does not depend on $m$.

Now $\mathbb{P}(\bigcap_{m=1}^{\infty}S_{\epsilon_m})=\lim_{m\to\infty}\mathbb{P}(S_{\epsilon_m})=1$ by continuity from above; this is because $$\epsilon < \epsilon' \Rightarrow S_{\epsilon} \subset S_{\epsilon'} \Rightarrow S_{\epsilon_{m+1}}\subset S_{\epsilon_{m}}, \quad \forall m \in \mathbb{N}.$$

I think that everything I used to prove this theorem is legitimate.
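As an illustration of the statement being proved (not as part of the proof), here is a small simulation sketch using the standard "typewriter" sequence, which converges in probability but not almost surely, while the dyadic subsequence converges almost surely. The construction is my own illustration, not taken from the question.

```python
# Simulation sketch only, not part of the proof.  The "typewriter" sequence is
# a standard illustration (my choice, not taken from the question): X_n is the
# indicator of a sliding dyadic interval on Omega = [0, 1).  Then
# P(|X_n - 0| > eps) = 2^(-floor(log2 n)) -> 0, so X_n -> 0 in probability,
# yet X_n(omega) = 1 infinitely often for every omega, so no a.s. convergence.
# Along n_j = 2^j, each fixed omega > 0 eventually falls outside [0, 2^(-j)).
from math import floor, log2

def X(n, w):
    """Value of the n-th typewriter variable at the sample point w in [0, 1)."""
    k = floor(log2(n))          # dyadic block index
    j = n - 2 ** k              # position of the interval inside block k
    return 1.0 if j / 2 ** k <= w < (j + 1) / 2 ** k else 0.0

# Along the subsequence n_j = 2^j the values are eventually 0 for each omega:
for w in (0.3, 0.01, 0.0007):
    print(w, [X(2 ** j, w) for j in range(1, 12)])
```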

isometry – Isometric isomorphism from $\mathbb{C} \rightarrow \mathbb{R}^2$.

$\mathbb{C} \cong \mathbb{R} \times \mathbb{R}$: the two are identical from a set-theoretical viewpoint, and even as vector spaces or groups. The difference appears only when considering $\mathbb{C}$ as a field, since $\mathbb{R} \times \mathbb{R}$ with componentwise multiplication is certainly not a field.
Both carry the same metric: by the complex modulus we get, for two complex numbers $z_1 = x_1 + iy_1$ and $z_2 = x_2 + i y_2$, the metric
\begin{equation}
|z_1-z_2| = \sqrt{(x_1-x_2)^2 + (y_1-y_2)^2}.
\end{equation}

Clearly this is the Euclidean metric, so the complex plane is a $2$-dimensional Euclidean space. I wonder whether one can construct an isometric isomorphism $\mathbb{C} \rightarrow \mathbb{R}^2$, and what this isomorphism looks like.
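For what it is worth, here is a quick numerical sanity check of the obvious candidate map $T(z) = (\operatorname{Re} z, \operatorname{Im} z)$, assuming the usual Euclidean metric on $\mathbb{R}^2$; it only checks distance preservation on random samples and proves nothing.

```python
# A small numerical sanity check (not a proof): under the identification
# T(z) = (Re z, Im z), the complex modulus distance |z1 - z2| coincides with
# the Euclidean distance in R^2 up to rounding error.  The candidate map T is
# my own choice (the obvious identification); it is not taken from the question.
import math, random

def T(z):
    return (z.real, z.imag)

def euclid(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

random.seed(0)
zs = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)]
worst = max(abs(abs(z1 - z2) - euclid(T(z1), T(z2))) for z1 in zs for z2 in zs)
print("largest discrepancy:", worst)   # should be about machine precision
```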

real analysis – Questions about a proof of $(\forall a)[a \in \mathbb{F} \rightarrow -(-a) = a]$

Proposition:

$(\forall a)(a \in \mathbb{F} \rightarrow -(-a) = a)$

proof.

Suppose that $a \in \mathbb{F}$. By the additive inverse property, $-a \in \mathbb{F}$ and $-(-a) \in \mathbb{F}$ is the additive inverse of $-a$; i.e. $$-(-a) + (-a) = e.$$ Since $-a$ is the additive inverse of $a$, $$(-a) + a = a + (-a) = e,$$ which also justifies that $a$ is the additive inverse of $-a$. From the uniqueness of additive inverses, we conclude that $-(-a) = a$.

Are we allowed to state that $-(-a) + (-a) = e$, implicitly using commutativity of the reals? I ask because I thought that we were supposed to state that $-(-a) + (-a) = (-a) + (-(-a)) = e$ by definition. Also, is the statement “also justifies that $a$ is the additive inverse of $-a$” simply saying that $(-a) + a = a + (-a) = e$ justifies that $a$ is the additive inverse of $-a$ as well as justifying that $-a$ is the additive inverse of $a$?

abstract algebra – Any morphism $\phi:G \rightarrow A$ to an abelian group $A$ factors uniquely through the projection $G \rightarrow G/C$

I'm doing Exercise 7(b) in the textbook Algebra by Saunders MacLane and Garrett Birkhoff. Could you please verify whether my attempt is fine or contains logical mistakes?


Let $G$ be a group and $C$ its commutator subgroup. Prove that any morphism $\phi: G \rightarrow A$ to an abelian group $A$ factors uniquely through the projection $G \rightarrow G/C$.


My attempt:

For $a,b \in G$, we have $aC, bC \in G/C$. It follows from $b^{-1}a^{-1}ba \in C$ that $C = (b^{-1}a^{-1}ba)C$. Then $(aC)(bC) = (ab)C = (ab)(b^{-1}a^{-1}ba)C = (ba)C = (bC)(aC)$. Hence $G/C$ is abelian.

Next we prove that $\phi(C) = \{1\}$. For $x = b^{-1}a^{-1}ba \in C$, we have $\phi(x) = \phi(b^{-1}a^{-1}ba) = \phi(b)^{-1} \phi(a)^{-1} \phi(b) \phi(a)$. On the other hand, $A$ is abelian and thus $\phi(a)^{-1} \phi(b) = \phi(b) \phi(a)^{-1}$. Hence $\phi(x) = 1$.

To sum up, we have $C \trianglelefteq G$, a group morphism $\phi:G \rightarrow A$, and $\phi(C) = \{1\}$. The result then follows from Theorem 26.


When is $(x_n,y_n) \rightarrow (x,y)$ equivalent to both $x_n \rightarrow x$ and $y_n \rightarrow y$?

Let $X,Y$ be metric spaces. What metric is used on the product space to get this statement?

$(x_n,y_n) \rightarrow (x,y)$ is equivalent to both $x_n \rightarrow x$ and $y_n \rightarrow y$
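For experimenting, here is a hedged sketch using one common candidate, the max metric $d\big((x,y),(x',y')\big) = \max\{d_X(x,x'),\, d_Y(y,y')\}$ (an assumption on my part, since the question is exactly which metric to use); numerically, the product distance goes to $0$ precisely when both coordinate distances do.

```python
# Hedged sketch: the "max" metric is one natural candidate for the product
# (an assumption on my part; the question asks which metric works).  Here
# X = Y = R with the usual metric, purely for illustration.
def d_X(a, b): return abs(a - b)
def d_Y(a, b): return abs(a - b)

def d_prod(p, q):
    """Candidate product metric: max of the coordinate distances."""
    return max(d_X(p[0], q[0]), d_Y(p[1], q[1]))

x, y = 2.0, -1.0
for n in (1, 10, 100, 1000):
    xn, yn = x + 1.0 / n, y + (-1.0) ** n / n ** 2
    print(n, d_X(xn, x), d_Y(yn, y), d_prod((xn, yn), (x, y)))
```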

nt.number theory – Prove that a ring homomorphism $f: \mathbb{Z}_m \rightarrow \mathbb{Z}_n$ is injective iff $m \mid n$ and $\gcd(n/m,m) = 1$

Prove that a ring homomorphism $f: \mathbb{Z}_m \rightarrow \mathbb{Z}_n$ is injective iff $m \mid n$ and $\gcd(n/m,m) = 1$.

I already did the proof that $f$ is injective under these hypotheses, but I have no idea about the first implication; in particular, I don't know how to show that $\gcd(n/m,m)=1$.
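A brute-force exploration over small moduli may help with intuition for the direction you are missing (it assumes ring homomorphisms need not be unital, since otherwise the only maps $\mathbb{Z}_m \rightarrow \mathbb{Z}_n$ are reductions; it is an experiment, not a proof).

```python
# Brute-force exploration (not a proof; it also assumes ring homomorphisms
# need not send 1 to 1).  Every hom Z_m -> Z_n is x -> x*e mod n for some e
# with e^2 = e and m*e = 0 in Z_n; the script checks for which (m, n) an
# injective one exists and compares with "m | n and gcd(n/m, m) = 1".
from math import gcd

def injective_hom_exists(m, n):
    for e in range(n):
        if (m * e) % n != 0 or (e * e) % n != e:
            continue                               # not a well-defined ring hom
        if len({(x * e) % n for x in range(m)}) == m:
            return True                            # x -> x*e is injective
    return False

mismatches = [(m, n)
              for m in range(1, 13) for n in range(1, 13)
              if injective_hom_exists(m, n) != (n % m == 0 and gcd(n // m, m) == 1)]
print("disagreements with the stated condition:", mismatches)   # expect []
```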

limits – Show that $\lVert (x,y) \rVert \rightarrow \infty$ implies $f(x,y) = x^2-4xy+4y^2+y^4-2y^3+y^2 \rightarrow \infty$

I am working on the following exercise:

Consider the function
$$f(x,y) = x^2-4xy+4y^2+y^4-2y^3+y^2.$$

Show that $$\lVert (x,y) \rVert \rightarrow \infty \quad \Longrightarrow \quad f(x,y) \rightarrow +\infty.$$

I do not see how I could prove this in a “nice way”. Is there any way to avoid a lot of case distinctions, i.e., to avoid considering each case like

$$x \rightarrow +\infty, \quad y \rightarrow -\infty$$
$$x \rightarrow -\infty, \quad y \rightarrow -\infty$$

and so on separately?
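Before looking for a clean argument, a purely numerical sanity check can at least confirm the claim: the sketch below (an exploration, not a proof) evaluates the minimum of $f$ on circles $\lVert (x,y)\rVert = R$ for growing $R$ and shows it growing without bound.

```python
# Numerical sanity check only, not a proof: the minimum of f on the circle
# ||(x, y)|| = R appears to grow without bound as R grows, which is exactly
# the statement to be shown.
import math

def f(x, y):
    return x**2 - 4*x*y + 4*y**2 + y**4 - 2*y**3 + y**2

for R in (10, 100, 1000, 10000):
    m = min(f(R * math.cos(t), R * math.sin(t))
            for t in (2 * math.pi * i / 100_000 for i in range(100_000)))
    print(R, m)
```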

combinatorics – Counting a walk $i \rightarrow j \rightarrow k \rightarrow j \rightarrow l \rightarrow j$ in a graph

This paper gives a procedure for counting redundant paths (which I will refer to as walks) in a graph using its adjacency matrix. As an exercise, I want to count only the walks of the form $i \rightarrow \color{blue}j \rightarrow k \rightarrow \color{blue}j \rightarrow l \rightarrow \color{blue}j$ from node $i$ to node $j$, with $i$, $j$, $k$, $l$ pairwise distinct. Also see this post.

Let $A$ be the adjacency matrix. The notation I use below is: “$\cdot$” for usual matrix multiplication, “$\odot$” for the element-wise matrix product, “$\text{d}(A)$” for the matrix with the same principal diagonal as $A$ and zeros elsewhere, and $S = A \odot A^T$.

The matrix counting walks $i \rightarrow \color{blue}j \rightarrow k \rightarrow \color{blue}j \rightarrow l \rightarrow \color{blue}j$ will have as its $(i, j)$ entry the sum over $k$ and $l$ of $a_{ij}\cdot a_{jk}\cdot a_{kj}\cdot a_{jl}\cdot a_{lj}$. I have found this to be:
$$
A \cdot (\text{d}(A^2))^2
$$

However, $i \rightarrow \color{blue}j \rightarrow k \rightarrow \color{blue}j \rightarrow l \rightarrow \color{blue}j$ also includes the following walks, which repeat undesired nodes and should be subtracted:
$$
\color{red}i \rightarrow j \rightarrow \color{red}i \rightarrow j \rightarrow l \rightarrow j \tag{1} $$

$$ \color{red}i \rightarrow j \rightarrow k \rightarrow j \rightarrow \color{red}i \rightarrow j \tag{2} $$
$$ i \rightarrow j \rightarrow \color{red}k \rightarrow j \rightarrow \color{red}k \rightarrow j \tag{3} $$
$$\color{red}i \rightarrow j \rightarrow \color{red}i \rightarrow j \rightarrow \color{red}i \rightarrow j \tag{4} $$

My calculations for $(1)$–$(4)$ are:
$$ S \cdot \text{d}(A^2) \tag{1} $$
$$ \text{d}(A^2) \cdot S \tag{2} $$
$$ A \cdot \text{d}(A^2) \tag{3} $$
$$ S \tag{4} $$

Every time one of $(1)$–$(3)$ is subtracted, $(4)$ is subtracted as well, since it is included in all three. Since it is not desired in the end, it is added back twice. Overall:
$$ A \cdot (\text{d}(A^2))^2 - S \cdot \text{d}(A^2) - \text{d}(A^2) \cdot S - A \cdot \text{d}(A^2) + 2S$$

However, this is wrong and gives incorrect counts, even negative ones. What am I missing here?
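One way to pin down the over- or under-counting is to compare the candidate expression against a direct enumeration of the admissible walks on a small random digraph; the sketch below does exactly that (the graph size and edge density are arbitrary choices of mine), printing the entrywise difference between the formula above and the brute-force count.

```python
# Debugging sketch: compare the candidate matrix expression with a direct
# enumeration of walks i -> j -> k -> j -> l -> j where i, j, k, l are
# pairwise distinct, on a small random digraph.
import numpy as np

rng = np.random.default_rng(42)
N = 6
A = (rng.random((N, N)) < 0.5).astype(int)
np.fill_diagonal(A, 0)                        # no self-loops

def d(M):
    return np.diag(np.diag(M))                # d(M): keep only the diagonal

S = A * A.T                                   # element-wise product A ⊙ A^T

# Candidate expression from the question:
expr = (A @ d(A @ A) @ d(A @ A)
        - S @ d(A @ A) - d(A @ A) @ S - A @ d(A @ A) + 2 * S)

# Direct enumeration of the admissible walks:
brute = np.zeros((N, N), dtype=int)
for i in range(N):
    for j in range(N):
        for k in range(N):
            for l in range(N):
                if len({i, j, k, l}) != 4:
                    continue                  # skip any repeated node
                brute[i, j] += A[i, j] * A[j, k] * A[k, j] * A[j, l] * A[l, j]

print("formula and brute force agree:", np.array_equal(expr, brute))
print(expr - brute)                           # entrywise discrepancy
```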

regular languages – $L = \{\alpha^i \beta^j \gamma^k \vert i,j,k \in \mathbb{N}_0, (i=1) \Rightarrow (j=k)\}$

I am asking this question here, because I am not allowed to comment on the thread that I am actually interested in, but maybe someone can still help me?

I already found an answer to the problem above (in the post linked to this question), but I still don't understand why I can't just use the single case $s = \alpha \beta^p \gamma^p$. I can show for that case that it doesn't satisfy the pumping lemma for regular languages. Isn't the point of a proof by contradiction that I only have to find one case that doesn't fit the hypothesis?

Actually, I am not even supposed to use the pumping lemma, but rather the closure properties of regular languages. That is where I started, rewriting $(i=1) \Rightarrow (j=k)$ as $i \neq 1 \vee j = k$, and then I wanted to use closure properties, as in the first answer in the post I linked.
(I was also thinking of using a regular expression? It seemed easier.)
But what if I can't just find that one word mentioned above to prove the language is not regular (without the pumping lemma)? I am confused.
I hope this makes sense; I am genuinely interested in understanding this problem.

Irregularity of $\{a^i b^j c^k \mid \text{if } i=1 \text{ then } j=k \}$