complexity theory – Nisan-Wigderson generator and $\mathbf P=\mathbf{BPP}$

In this set of notes it is claimed that:

If there exists a polynomial-time pseudorandom generator $G\colon\{0,1\}^{O(\log n)}\to\{0,1\}^{O(n)}$ that $1/10$-fools all $n^2$-size circuits, then $\mathbf P=\mathbf{BPP}$.

The proof of this is as follows: if a language $L$ is in $\mathbf{BPP}$, then there is a polynomial-time algorithm $A$ such that
$$
x\in L\implies \Pr(A(x,y)=1)>\frac23;\\
x\notin L\implies \Pr(A(x,y)=1)<\frac13,
$$

where the probabilities are taken over random strings $y$ of length polynomial in the length of $x$.

Then consider the following deterministic algorithm $B$ (a sketch in code follows the list):

  1. Enumerate all seeds $z\in\{0,1\}^{O(\log n)}$.
  2. For each $z$, call $A(x,G(z))$.
  3. Return $1$ if strictly more than half of the calls return $1$, otherwise return $0$.
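A minimal Python sketch of $B$, assuming hypothetical callables `A(x, r)` (the randomized algorithm) and `G(z)` (the generator) with the stated input and output lengths; nothing below comes from the notes themselves:

    from itertools import product

    def B(A, G, x, seed_len):
        """Deterministically simulate the BPP algorithm A on input x.

        A(x, r) -> 0/1 is the polynomial-time randomized algorithm and
        G(z) -> r stretches a seed z of length seed_len = O(log n) to a
        pseudorandom string of the length A expects; both are placeholders.
        """
        accepts = 0
        total = 0
        # Step 1: enumerate all 2^{O(log n)} = poly(n) seeds.
        for bits in product("01", repeat=seed_len):
            z = "".join(bits)
            # Step 2: run A on x with the pseudorandom string G(z).
            accepts += A(x, G(z))
            total += 1
        # Step 3: accept iff strictly more than half of the runs accept.
        return 1 if 2 * accepts > total else 0

Since there are only $2^{O(\log n)} = \mathrm{poly}(n)$ seeds and each call to $A$ and $G$ takes polynomial time, $B$ runs in polynomial time.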

It is clear that $B$ is deterministic and runs in polynomial time. However, I’m somewhat confused about the rigor of this proof. The proof is predicated on the idea that since $G$ is a pseudorandom generator, $A$ is unable to distinguish the output of $G$ from the uniform distribution; hence for an input $x\in L$ we would have $\Pr_{z\sim\{0,1\}^{O(\log n)}}(A(x,G(z))=1)>2/3-1/10>1/2$.

But by assumption $G$ is only able to fool circuits of size $n^2$. What if $L$ is not recognizable by circuits of size $n^2$ (or if $A$ cannot be represented by circuits of size $n^2$)?

ag.algebraic geometry – Mori fiber space in dimension $2$ over a point is $\mathbf P^2$

Let $k$ be an algebraically closed field. Let $X$ be a surface over $k$. Let $\pi\colon X \to S$ be an extremal contraction. It is well known that if $\dim(S) = 0$, then $X \cong \mathbf P^2$. I wonder if we can prove this using Mori’s result: a variety $X$ is isomorphic to projective space if and only if $-K_X$ is ample and every non-constant morphism $\mathbf P^1 \to X$ is very free.

logic – $\mathbf K1.1$ is sound and strongly complete w.r.t. frames for which $R$ is a partial function

(Blackburn et al.’s Modal Logic, Ex. 4.3.1): Let $1.1$ be the axiom $\Diamond p\to\Box p$. Show that $\mathbf K1.1$ is sound and strongly complete with respect to the class of all frames $(W,R)$ such that $R$ is a partial function.

Firstly, I am confused about the meaning of a partial function. After a quick Google search, I found out that a partial function $f$ from $X$ to $Y$ is essentially a function $f\colon S\to Y$ where $S\subseteq X$ (possibly $S = X$). In other words, we are basically allowing the function to be undefined on some values of the domain. That being said, we know $R\subseteq W\times W$. If $R$ is a partial function, then the idea is that we can have isolated worlds, right? In addition, one world can be mapped to at most one other (because $f\colon S\to Y$ is a function, so it is well-defined).

Just want to quickly verify my solution.

  1. Suppose the class of all frames $(W,R)$ in which $R$ is a partial function is denoted by $P$. Then, for every frame $\mathfrak F \in P$, we show $\mathfrak F\Vdash \Diamond p\to\Box p$. We have $\mathfrak F\Vdash \Diamond p\to\Box p$ if and only if $\mathfrak F,w\Vdash \Diamond p\to\Box p$ for all $w\in W$ (and all valuations). Consider an arbitrary $w\in W$. Suppose $\mathfrak F,w\Vdash \Diamond p$. Then there exists $v\in W$ such that $wRv$ and $\mathfrak F,v\Vdash p$. Moreover, for all $v' \in W$ with $v'\ne v$, we have $\lnot(wRv')$, because $R$ is a partial function. So every $R$-successor of $w$ is $v$ itself, and $p$ holds at $v$; hence $\mathfrak F,w\Vdash \Box p$. One direction is done!

  2. Next, I want to show that if $\mathfrak F\Vdash \Diamond p\to\Box p$ (for all frames in $P$), i.e. $\mathfrak F,w\Vdash \Diamond p\to\Box p$ for all $w\in W$, then $R$ is a partial function. This is easily done too, and I am skipping the proof.
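To convince myself of the frame correspondence concretely, here is a small brute-force sketch in Python (my own, not from the book): it enumerates every relation $R$ over a two-element set of worlds and checks that $\Diamond p\to\Box p$ is valid on the frame exactly when $R$ is a partial function.

    from itertools import combinations, product

    W = [0, 1]  # a tiny set of worlds
    PAIRS = [(u, v) for u in W for v in W]

    def is_partial_function(R):
        # each world has at most one R-successor
        return all(sum(1 for (u, _) in R if u == w) <= 1 for w in W)

    def axiom_valid_on_frame(R):
        # Diamond p -> Box p must hold at every world, under every valuation of p
        valuations = [set(c) for r in range(len(W) + 1) for c in combinations(W, r)]
        for V in valuations:  # V = the set of worlds where p is true
            for w in W:
                successors = [v for (u, v) in R if u == w]
                diamond_p = any(v in V for v in successors)
                box_p = all(v in V for v in successors)
                if diamond_p and not box_p:
                    return False
        return True

    for bits in product([0, 1], repeat=len(PAIRS)):
        R = {pair for pair, bit in zip(PAIRS, bits) if bit}
        assert axiom_valid_on_frame(R) == is_partial_function(R)
    print("axiom valid on a frame  <=>  R is a partial function")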

Another question I have is: where exactly do soundness and completeness come into play? When we say soundness, we want $\vdash$ to imply $\Vdash$; for completeness, it’s the other way round. Since $1.1$ is added to the axioms of $\mathbf K$ to obtain $\mathbf K1.1$, perhaps the first part (1) of my solution corresponds to proving soundness, i.e. checking that the axiom is valid in all frames of this particular type? If yes, then the other part (2) is probably completeness, but I do not really see how.

Could someone help me really understand if what I’m doing is right, and why I am doing it? I have difficulty understanding the bigger picture of completeness and soundness in modal logic. Thanks!

Any embedded smooth submanifold of $\mathbf R^n$ has smooth coordinate patches

I believe that more or less this type of question has already appeared, but when typing this question I could not find it. Nevertheless, this question concerns two types of definitions of (smooth) manifolds: one for submanifolds of $\mathbf R^n$ and the other for abstract manifolds. Obviously, these definitions are not equivalent in general, but I want to understand them in the situation of embedded smooth submanifolds of $\mathbf R^n$.

Let me recall these definitions for clarity.

Definition 1 (Munkres, etc.). A subspace $M \subset \mathbf R^n$ is a $k$-manifold of class $C^r$ if for any point $p \in M$, there are an open set $V \subset M$, an open set $U \subset \mathbf R^k$, and a continuous map $\alpha\colon U \to V$ such that

  1. $\alpha \in C^r$ (in the extended sense),
  2. $\alpha^{-1}\colon V \to U$ is continuous,
  3. $D\alpha$ has rank $k$ at every point of $U$.

Note that the continuity of $\alpha^{-1}$ in 2. is enough because $\alpha^{-1}$ is in fact of class $C^r$; see Thm 24.1 in Munkres’s book. So, basically, Definition 1 says that for any point $p \in M$, there is a diffeomorphism between an open set containing $p$ and an open subset of $\mathbf R^k$.

Definition 2 (Lee, etc.). A smooth manifold $M$ of dimension $k$ is a topological space equipped with a smooth structure, which basically says that there is a family of local charts $(U, \phi)$ of $M$, where $U$ is open in $M$ and $\phi$ is a homeomorphism from $U$ onto an open subset of $\mathbf R^k$, such that the transition maps between two overlapping charts are smooth. (I do not want to write down the details because this is more or less standard and appears in many textbooks.)

Let us now consider an embedded smooth submanifold $M \subset \mathbf R^n$ of dimension $k$. (By definition, this means that the inclusion $i\colon M \to \mathbf R^n$ is a smooth embedding, but here $M$ is a subset of $\mathbf R^n$ for simplicity.) My question is how to verify the three conditions in Definition 1. I hardly see how to construct $\alpha$ with enough regularity around a given point, since $\phi$ and $\phi^{-1}$ in Definition 2 are only continuous.
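For a concrete instance of what Definition 1 asks for (my own illustration, not from either book), take $M = S^1 \subset \mathbf R^2$ and any $p \neq (1,0)$:
$$ \alpha\colon (0, 2\pi) \to V = S^1 \setminus \{(1,0)\}, \qquad \alpha(t) = (\cos t, \sin t). $$
This $\alpha$ is $C^\infty$ in the extended sense, $\alpha^{-1}$ (an angle function) is continuous, and $D\alpha(t) = (-\sin t, \cos t)^T$ has rank $1$ everywhere, so all three conditions hold on this patch. My difficulty is how to produce such an $\alpha$ around every point of a general embedded $M$.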

algebraic stacks – cohomology of $B\mathbf{G}_m$?

I don't have a reference, but I believe that étale sheaves on Artin stacks admit all six operations (I could be wrong). In particular, writing $p\colon B\mathbf{G}_m \to \star$, we can define the compactly supported $\ell$-adic cohomology of $B\mathbf{G}_m$ as
$$ H^*_c(B\mathbf{G}_m, \mathbf{Q}_\ell) = \left( \lim_{n\to\infty} R^* p_!\, \mathbf{Z}/\ell^n\mathbf{Z} \right) \otimes_{\mathbf{Z}_\ell} \mathbf{Q}_\ell $$
and similarly $H^*$ using $p_*$ instead of $p_!$. The cup product gives a map $H^* \otimes H^*_c \to H^*_c$.

What is $H^*_c(B\mathbf{G}_m, \mathbf{Q}_\ell)$ (as an $H^*(B\mathbf{G}_m, \mathbf{Q}_\ell)$-module)? And what is the map $H^*_c \to H^*$ coming from $p_! \Rightarrow p_*$?

I'm a little confused because, since $H^* = \mathbf{Q}_\ell[x]$ with $|x| = 2$, Poincaré duality should give $H^*_c = y^{-1}\,\mathbf{Q}_\ell[y]$ with $|y| = -2$, in which case neither the module structure nor the map $H^*_c \to H^*$ is clear to me. The fact that there is cohomology in negative degrees is also strange.

matrices – How to find a 2D rotation matrix that rotates the vector $\mathbf{a}$ to $\mathbf{b}$

I have two unit vectors $\mathbf a$ and $\mathbf b$. I would like to find the rotation matrix that rotates $\mathbf a$ to $\mathbf b$. The formula I see online for a rotation matrix is

$$
\left(
\begin{matrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta \\
\end{matrix}
\right)
$$

And I can get the angle between $\mathbf a$ and $\mathbf b$ with

$$
\theta = \cos^{-1}(\mathbf{a} \cdot \mathbf{b})
$$

My problem is that this doesn't give me the direction of the rotation. For example, $\theta$ comes out as $\pi/2$ when the matrix should perhaps use $-\pi/2$.
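One way around this, sketched below in Python/NumPy (my own sketch, not from any particular source): recover the signed angle with `atan2`, feeding it the 2D cross product and the dot product, so the direction of rotation comes out correctly.

    import numpy as np

    def rotation_between(a, b):
        """Rotation matrix taking unit vector a to unit vector b in 2D."""
        a = np.asarray(a, dtype=float)
        b = np.asarray(b, dtype=float)
        cross = a[0] * b[1] - a[1] * b[0]  # z-component of the 3D cross product
        dot = a @ b
        theta = np.arctan2(cross, dot)     # signed angle from a to b
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s],
                         [s,  c]])

    a = np.array([0.0, 1.0])
    b = np.array([1.0, 0.0])
    R = rotation_between(a, b)
    print(np.allclose(R @ a, b))  # True; here theta = -pi/2, not +pi/2

For unit vectors the trigonometry can be skipped entirely: $\cos\theta = \mathbf a \cdot \mathbf b$ and $\sin\theta = a_1 b_2 - a_2 b_1$ already give the four matrix entries.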

at.algebraic topology – why is $\mathbb{S}^1$ a cogroup object in $\mathbf{Top}_\bullet$?

$\require{AMScd}$

This is a basic-level question, but these types of questions generally do not find answers on Stack Exchange.

I am trying to introduce myself to category theory and advanced algebraic topology. I have just learned about group and cogroup structures. My question is about cogroup structures.

An object with a group structure, $(G,\ \mu\colon G\times G\to G,\ u\colon \star\to G,\ i\colon G\to G)$, satisfies, among other properties, the following:

\begin{CD}
G \times \star @<{\mathrm{id} \times \star_G}<< G\\
@V{\mathrm{id} \times u}VV @VV{\mathrm{id}}V \\
G \times G @>{\mu}>> G
\end{CD}

Here $\star$ is the terminal object and $\star_G$ the unique morphism from $G$ to $\star$. This is the neutral-element law.

If I understood correctly, a cogroup object should satisfy the dual (inverted) diagram, i.e.

$\require{AMScd}$
\begin{CD}
G \coprod \star @>{\mathrm{id} \coprod \star_G}>> G \\
@A{\mathrm{id} \coprod u}AA @AA{\mathrm{id}}A \\
G \coprod G @<{\mu}<< G
\end{CD}

Now, take the category to be $\mathbf{Top}_\bullet$ and $G = (\mathbb{S}^1, 1)$, with the operation $\mu\colon \mathbb{S}^1 \to \mathbb{S}^1 \vee \mathbb{S}^1$ that wraps the circle around the wedge "half and half". The claim is that this is a cogroup structure on $\mathbb{S}^1$, but it seems to me that it is not.

The diagram above becomes

\begin{CD}
\mathbb{S}^1 \vee \star @>{\mathrm{id} \vee \star}>> \mathbb{S}^1 \\
@A{\mathrm{id} \vee u}AA @AA{\mathrm{id}}A \\
\mathbb{S}^1 \vee \mathbb{S}^1 @<{\mu}<< \mathbb{S}^1
\end{CD}

But $\star$ in $\mathbf{Top}_\bullet$ is both initial and terminal, and so $\mathbb{S}^1 \vee \mathbb{S}^1 \to \mathbb{S}^1 \vee \star \to \mathbb{S}^1$ would collapse the "right circle" of $\mathbb{S}^1 \vee \mathbb{S}^1$ to the basepoint of $\mathbb{S}^1$, and so the composite $\mathbb{S}^1 \to \mathbb{S}^1 \vee \mathbb{S}^1 \to \mathbb{S}^1 \vee \star \to \mathbb{S}^1$ would collapse the "right semicircle" of $\mathbb{S}^1$ to the basepoint of $\mathbb{S}^1$. Then it could not be the identity, as the other arrow of the diagram requires.

Now, I'm sure that I'm wrong, but where?

pr.probability – What is the expected value of $\mathbf x^H \mathbf x\, \mathbf x^H \mathbf y$?

Suppose $\mathbf x = (x_1, \dots, x_K)^T$ and $\mathbf y = (y_1, \dots, y_K)^T$, where $\mathbf x \sim \mathcal{CN}(\mathbf 0, \sigma_x^2 \mathbf I)$ (a complex normal vector) and $\mathbf y \sim \mathcal{CN}(\mathbf 0, \sigma_y^2 \mathbf I)$.

What is the expected value of $I = \mathbf x^H \mathbf x\, \mathbf x^H \mathbf y$?

I've been trying to estimate it in Matlab by generating complex normal random vectors ($\mathbf x$ and $\mathbf y$) and computing the value of $I$ 100,000 times; every time I run it I get very different results, especially when $K > 1$.
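For reference, here is a Python/NumPy version of this experiment (a sketch of my setup, assuming $\mathbf x$ and $\mathbf y$ are independent, which the model above suggests but does not state):

    import numpy as np

    rng = np.random.default_rng(0)
    K, trials = 4, 100_000
    sigma_x, sigma_y = 1.0, 1.0

    def cn(sigma2, size):
        # CN(0, sigma2 I): independent real/imag parts, each with variance sigma2/2
        s = np.sqrt(sigma2 / 2)
        return s * (rng.standard_normal(size) + 1j * rng.standard_normal(size))

    x = cn(sigma_x**2, (trials, K))
    y = cn(sigma_y**2, (trials, K))

    xhx = np.sum(np.abs(x)**2, axis=1)    # x^H x, one value per trial
    xhy = np.sum(np.conj(x) * y, axis=1)  # x^H y, one value per trial
    I = xhx * xhy

    print(I.mean())                # hovers near 0
    print(I.std() / np.sqrt(trials))  # Monte Carlo standard error

Under independence, $E[I] = E[\mathbf x^H \mathbf x\, \mathbf x^H]\, E[\mathbf y] = \mathbf 0$, so the run-to-run differences are just Monte Carlo error, which grows with $K$ because the variance of $I$ does.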

Show $R_{\mathbb{V}}$ is a linear operator, find its standard matrix, and find all vectors such that $R_{\mathbb{V}}\mathbf{v} = \mathbf{v}$

Let $\mathbb{V}$ be the linear subspace of $\mathbb{R}^4$ defined as
$$ \mathbb{V} = \left\{ \begin{bmatrix}
x_1 \\
x_2 \\
x_3 \\
x_4
\end{bmatrix} \in \mathbb{R}^4 : x_1 + x_2 + 2x_3 - x_4 = 0 \right\} $$

Let $P_{\mathbb{V}}\colon \mathbb{R}^4 \to \mathbb{R}^4$ be the orthogonal projection onto $\mathbb{V}$, and let $R_{\mathbb{V}}\colon \mathbb{R}^4 \to \mathbb{R}^4$ be the function
$$ \mathbf{x} \mapsto \mathbf{x} + 2(P_{\mathbb{V}}(\mathbf{x}) - \mathbf{x}) $$
for each vector $\mathbf{x} \in \mathbb{R}^4$. Show that $R_{\mathbb{V}}$ is a linear operator, find its standard matrix $[R_{\mathbb{V}}]$, and find all vectors $\mathbf{v} \in \mathbb{R}^4$ such that $[R_{\mathbb{V}}]\mathbf{v} = \mathbf{v}$.

I started by writing
$$ \mathbb{V} = \operatorname{span}\left\{ \begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} -2 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix} \right\} \Rightarrow A = \begin{bmatrix}
-1 & -2 & 1 \\
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix} $$

Then I computed $[P_{\mathbb{V}}] = A(A^TA)^{-1}A^T$ and worked from there. Is there a better way to solve this problem?
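For a quick numerical check of this route (my own sketch), using the same basis matrix $A$:

    import numpy as np

    # Columns of A form the basis of V found above
    A = np.array([[-1, -2, 1],
                  [ 1,  0, 0],
                  [ 0,  1, 0],
                  [ 0,  0, 1]], dtype=float)

    P = A @ np.linalg.inv(A.T @ A) @ A.T   # orthogonal projection onto V
    R = np.eye(4) + 2 * (P - np.eye(4))    # R_V = 2 P_V - I

    n = np.array([1.0, 1.0, 2.0, -1.0])    # normal vector of the hyperplane

    print(np.allclose(R @ A, A))           # vectors in V are fixed
    print(np.allclose(R @ n, -n))          # the normal is reversed
    print(np.allclose(R @ R, np.eye(4)))   # a reflection squares to I

A shortcut worth noting: since $\mathbb{V}$ is the hyperplane with normal $\mathbf n = (1,1,2,-1)^T$, one can write $[P_{\mathbb{V}}] = I - \mathbf{n}\mathbf{n}^T/(\mathbf n^T \mathbf n)$ and avoid inverting the $3\times 3$ matrix; the fixed vectors of $R_{\mathbb{V}}$ are then exactly the elements of $\mathbb{V}$.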

linear algebra – Let $\mathbf A$ be an $m \times n$ matrix and $\mathbf b \neq \mathbf O$. Prove that $\mathbf{Ax} = \mathbf b$ has a solution $\implies$ $\operatorname{rank}(\mathbf{A \mid b}) = \operatorname{rank}(\mathbf A)$

I have the following:

Proposition. Let $\mathbf A$ be an $m \times n$ matrix and $\mathbf b \neq \mathbf O$. Then the linear system $\mathbf{Ax} = \mathbf b$ has a solution $\implies$ $\operatorname{rank}(\mathbf{A \mid b}) = \operatorname{rank}(\mathbf A)$.

My attempt:

In what follows, I'll use $\mathbf R$ to denote $\operatorname{rref}(\mathbf A)$.

$(\Rightarrow)$

We prove this by contradiction.

Suppose $\mathbf{Ax} = \mathbf b$ has a solution and $\operatorname{rank}(\mathbf{A \mid b}) \neq \operatorname{rank}(\mathbf A)$.

We are required to consider two cases:

  1. $\operatorname{rank}(\mathbf{A \mid b}) > \operatorname{rank}(\mathbf A)$

  2. $\operatorname{rank}(\mathbf{A \mid b}) < \operatorname{rank}(\mathbf A)$

1)

Suppose $\operatorname{rank}(\mathbf{A \mid b}) > \operatorname{rank}(\mathbf A)$.

$\mathbf{A \mid b}$ is row equivalent to $\mathbf{R \mid b'}$, thus $\operatorname{rank}(\mathbf{A \mid b}) = \operatorname{rank}(\mathbf{R \mid b'})$.

$\mathbf{A}$ is row equivalent to $\mathbf R$, thus $\operatorname{rank}(\mathbf{A}) = \operatorname{rank}(\mathbf R)$.

Therefore we have $\operatorname{rank}(\mathbf{R \mid b'}) > \operatorname{rank}(\mathbf R)$.

It follows that there must be at least one row such that

$$ \mathbf{R \mid b'} =
\left( \begin{array}{cccc|c}
\cdots & \cdots & \cdots & \cdots & \cdots \\
\cdots & \cdots & \cdots & \cdots & \cdots \\
\cdots & \cdots & \cdots & \cdots & \cdots \\
0 & 0 & \cdots & 0 & a \\
\cdots & \cdots & \cdots & \cdots & \cdots \\
\end{array} \right) $$

where $a \neq 0$.

But that would mean that the linear system is inconsistent. Since $\mathbf{R \mid b'}$ is row equivalent to $\mathbf{A \mid b}$, it follows that the linear system $\mathbf{Ax} = \mathbf b$ is inconsistent too, which contradicts the fact that $\mathbf{Ax} = \mathbf b$ has a solution. Thus $\operatorname{rank}(\mathbf{A \mid b}) \not> \operatorname{rank}(\mathbf A)$.

2)

Suppose $\operatorname{rank}(\mathbf{A \mid b}) < \operatorname{rank}(\mathbf A)$. Let $\operatorname{rank}(\mathbf A) = m$.

$\mathbf{A \mid b}$ is row equivalent to $\mathbf{R \mid b'}$, thus $\operatorname{rank}(\mathbf{A \mid b}) = \operatorname{rank}(\mathbf{R \mid b'})$.

$\mathbf{A}$ is row equivalent to $\mathbf R$, thus $\operatorname{rank}(\mathbf{A}) = \operatorname{rank}(\mathbf R)$.

So we have $\operatorname{rank}(\mathbf{R}) = m$ and $\operatorname{rank}(\mathbf{R \mid b'}) < m$.

Consider the matrix $\mathbf{R \mid b'}$:
$$ \mathbf{R \mid b'} =
\left( \begin{array}{c|c}
\mathbf{r}_1 & b_1 \\
\mathbf{r}_2 & b_2 \\
\vdots & \vdots \\
\mathbf{r}_m & b_m \\
\vdots & \vdots \\
\end{array} \right) $$

Since $\operatorname{rank}(\mathbf{R \mid b'}) < m$, it follows that the row vectors of $\mathbf{R \mid b'}$ are linearly dependent, and the vector $\mathbf{r}_m$ can be written as a linear combination of the previous vectors. This is a contradiction, since we said that $\operatorname{rank}(\mathbf{A}) = \operatorname{rank}(\mathbf{R}) = m$, which implies that the row vectors $\mathbf{r}_i$ must be linearly independent. Thus $\operatorname{rank}(\mathbf{R \mid b'}) \not< m$, and since $\operatorname{rank}(\mathbf{A \mid b}) = \operatorname{rank}(\mathbf{R \mid b'})$, this implies $\operatorname{rank}(\mathbf{A \mid b}) \not< m$.

We have shown that both cases are impossible and, therefore, we can conclude that $\operatorname{rank}(\mathbf{A \mid b}) = \operatorname{rank}(\mathbf A)$. $\Box$
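As a quick numerical sanity check of the statement itself (not of the proof), here is a short NumPy sketch of mine that compares solvability with the rank condition on small random systems:

    import numpy as np

    rng = np.random.default_rng(1)

    # Check: Ax = b is solvable  <=>  rank(A|b) == rank(A)
    for _ in range(1000):
        m, n = rng.integers(1, 5, size=2)
        A = rng.integers(-2, 3, size=(m, n)).astype(float)
        if rng.random() < 0.5:
            b = A @ rng.integers(-2, 3, size=n)   # consistent by construction
            solvable = True
        else:
            b = rng.integers(-2, 3, size=m).astype(float)
            x, *_ = np.linalg.lstsq(A, b, rcond=None)
            solvable = np.allclose(A @ x, b)      # least-squares residual ~ 0
        aug = np.column_stack([A, b])
        ranks_equal = np.linalg.matrix_rank(aug) == np.linalg.matrix_rank(A)
        assert solvable == ranks_equal

In the inconsistent case one always sees $\operatorname{rank}(\mathbf{A \mid b}) = \operatorname{rank}(\mathbf A) + 1$, which matches the row $(0\ \cdots\ 0 \mid a)$, $a \neq 0$, appearing in case 1.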

Is this right?


Actually, the initial proposition required showing $\iff$, but as you can see the post would already be too long, so I tried to prove $\implies$ first.