Proving the following shrinking lemma for binary covers

Prove that a topological space $X$ is normal if and only if for each pair of open sets $U, V$ such that $U \cup V = X$, there exist open sets $U'$ and $V'$ such that $\overline{U'} \subset U$, $\overline{V'} \subset V$, and $U' \cup V' = X$.


$(\Rightarrow)$ Since $U \cup V = X$, we have $(X-U) \cap (X-V) = \varnothing$, so $X-U$ and $X-V$ are disjoint closed sets. Then by normality of $X$, there are disjoint open sets $W, Z$ with $X-U \subset W$ and $X-V \subset Z$, and an equivalent formulation of normality gives open sets $W', Z'$ containing $X-U$ and $X-V$ with $\overline{W'} \subset W$ and $\overline{Z'} \subset Z$. Now I am unsure how to show that $W' \cup Z' = X$, which is what I need to show in the problem.

$(\Leftarrow)$ Suppose the second part of the statement holds. Let $A$ be closed in $X$ and let $U$ be any open set containing $A$. Then the existence of an open set $U'$ containing $A$ such that $\overline{U'} \subset U$ implies $X$ is normal.

Comment: I am having difficulty understanding (why/if) $W' \cup Z' = X$ holds in the first direction. Also I am skeptical whether I started the second direction correctly by starting with a closed set, and an arbitrary open set containing this closed set. I also realize it may not be true that $U'$ contains the closed set $A$, which adds to my skepticism. Any ideas?
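For the first direction, one standard way around the difficulty (a sketch only) is to apply the "closed set inside an open set" form of normality twice in succession, rather than shrinking $W$ and $Z$ simultaneously:

```latex
% Since U \cup V = X, the closed set X - V lies inside the open set U,
% so normality gives an open set U' with
X - V \subset U' \subset \overline{U'} \subset U,
% and hence U' \cup V \supset (X - V) \cup V = X.
% Now X - U' is closed and contained in V, so normality again gives an
% open set V' with
X - U' \subset V' \subset \overline{V'} \subset V,
% and then U' \cup V' \supset U' \cup (X - U') = X.
```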

Proving a normal distribution equality

I proved that if $\xi \sim \mathcal{N}(0,1)$ then $\mathbb{E}\xi = 0$. How can I prove the following:

Let $\xi \sim \mathcal{N}(0, 1)$ and $a, b \in \mathbb{R}$. Prove that $b\xi + a \sim \mathcal{N}(a, b^2)$.
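As a numeric sanity check of the claim (not a proof; the particular values $a = 2$, $b = 3$ and the sample size are arbitrary choices):

```python
import random
import statistics

# Draw xi ~ N(0,1), form eta = b*xi + a, and compare the sample mean and
# variance of eta with a and b^2.
random.seed(0)  # fixed seed so the check is reproducible
a, b = 2.0, 3.0
xi = [random.gauss(0.0, 1.0) for _ in range(200_000)]
eta = [b * x + a for x in xi]

m = statistics.fmean(eta)      # should be close to a = 2
v = statistics.pvariance(eta)  # should be close to b^2 = 9
print(m, v)
```

The proof itself goes through the CDF: for $b > 0$, $P(b\xi + a \le x) = P(\xi \le (x-a)/b) = \Phi\big((x-a)/b\big)$, and differentiating in $x$ gives exactly the $\mathcal{N}(a, b^2)$ density; the case $b < 0$ is symmetric (and $b = 0$ degenerates to a point mass).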

algorithms – Proving that a preorder traversal of a rooted tree can be performed in linear time


Let $T(V, E)$ be a rooted tree with root $r$.

If $T$ has no other vertices, then the root by itself constitutes the preorder traversal of $T$.

If $\lvert V \rvert > 1$, let $T_1, T_2, \dots, T_k$ denote the subtrees of $T$ from left to right. The preorder traversal of $T$ first visits $r$ and then traverses the vertices of $T_1$ in preorder, then the vertices of $T_2$ in preorder, and so on until the vertices of $T_k$ are traversed in preorder.


How does one prove, using the above definition, that a preorder traversal of a rooted tree $T(V, E)$ can be computed in $O(\lvert V \rvert)$ time? Since $T$ is a tree, $\lvert E \rvert = \lvert V \rvert - 1$, and so showing that a preorder traversal algorithm simply visits the vertices and edges of $T$ a constant number of times and does constant work on each visit would do it. Obviously this is true, but how does one prove this formally?
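The inductive definition translates directly into a recursive procedure; a sketch in Python (the dictionary representation of the tree is an assumption for illustration, not part of the problem). Each vertex is output exactly once and each parent-child edge is scanned exactly once, which is the constant-work-per-vertex-and-edge argument in runnable form:

```python
def preorder(tree, root):
    """tree maps each vertex to its ordered list of children."""
    order = [root]                      # visit the root first ...
    for child in tree.get(root, []):    # ... then each subtree, left to right
        order.extend(preorder(tree, child))
    return order

# Example rooted tree:      r
#                          / \
#                         a   b
#                        / \
#                       c   d
tree = {"r": ["a", "b"], "a": ["c", "d"]}
print(preorder(tree, "r"))  # ['r', 'a', 'c', 'd', 'b']
```

A formal proof mirrors the code: by induction on $\lvert V \rvert$, the call for a tree with $n$ vertices does $O(1)$ work itself plus the work of the recursive calls, and every vertex is the root of exactly one call.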

Proving using a SAT solver that KB entails D

Suppose you’re given this KB:

$KB = \{A,\ A \Rightarrow B,\ A \Rightarrow C,\ (B \land C) \Rightarrow D\}$

How would you show, using a SAT solver, that $KB \models D$?
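The standard reduction: $KB \models D$ holds iff $KB \land \lnot D$ is unsatisfiable, so one converts $KB \land \lnot D$ to CNF and asks the solver for a model, expecting UNSAT. A minimal sketch in Python, with a brute-force loop standing in for a real SAT solver (with only four variables there are just 16 assignments to check):

```python
from itertools import product

# KB |= D  iff  KB AND (NOT D) is unsatisfiable.
def kb_and_not_d_satisfiable():
    for A, B, C, D in product([False, True], repeat=4):
        kb = (A                          # A
              and (not A or B)           # A => B
              and (not A or C)           # A => C
              and (not (B and C) or D))  # (B and C) => D
        if kb and not D:   # a model of KB AND NOT D would refute entailment
            return True
    return False

print("KB |= D:", not kb_and_not_d_satisfiable())  # KB |= D: True
```

With an actual solver the workflow is the same: encode each implication as a clause (e.g. $A \Rightarrow B$ becomes $\lnot A \lor B$), add the unit clause $\lnot D$, and read UNSAT as "KB entails D".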

discrete mathematics – Proving functions are one-to-one and onto

Given $f \circ g$ is a bijection
Given $g: A \to B$
Given $f: B \to C$

Prove or disprove: f is one-to-one, f is onto.

My answer:

$f$ is one-to-one since, no matter what, a value of $B$ will be inputted that maps to a value of $C$.
$f$ is onto because all the mapping requirements are met by the bijection $f(g(x))$; therefore $B$ always maps to a value of $C$.

Is my logic right? I also have no idea how I would go about proving this. Any help would be great!
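It may help to test both claims on a tiny finite example. The sketch below (the sets and maps are chosen purely for illustration) exhibits $f \circ g$ bijective while $f$ is onto but *not* one-to-one, which disproves the injectivity claim:

```python
# A, B, C and the maps g, f are arbitrary small choices for illustration.
A, B, C = {1}, {"a", "b"}, {"x"}
g = {1: "a"}                 # g: A -> B
f = {"a": "x", "b": "x"}     # f: B -> C, NOT one-to-one: f(a) = f(b)

fog = {x: f[g[x]] for x in A}           # f∘g = {1: 'x'}

# f∘g is a bijection A -> C: injective (distinct inputs, distinct outputs)
# and surjective (every element of C is hit).
is_bijection = len(set(fog.values())) == len(A) and set(fog.values()) == C
f_onto = set(f.values()) == C                    # forced by f∘g being onto
f_one_to_one = len(set(f.values())) == len(f)    # fails here
print(is_bijection, f_onto, f_one_to_one)  # True True False
```

The positive half does generalize: if $f \circ g$ is onto $C$, then every $c \in C$ is $f(g(a))$ for some $a$, so $f$ is onto. Injectivity of $f \circ g$ only forces $g$ to be injective, not $f$.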

mathematical optimization – Minimization problem in proving Fermat’s principle

I am trying to prove Fermat's principle of least time (minimal optical path length); the problem statement goes:

Minimize[{n1*d1*Sec[theta1] + n2*d2*Sec[theta2], d1*Tan[theta1] + d2*Tan[theta2] == d}, theta1]

I can't get a solution to this. The correct solution should satisfy n1*Sin[theta1] == n2*Sin[theta2] (Snell's law).

Can anyone help?
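As an independent check of the expected answer, here is the same minimization redone numerically in plain Python (the concrete values of n1, n2, d1, d2, d are arbitrary test choices). Parametrizing by the horizontal crossing point $x$ makes the constraint d1*Tan[theta1] + d2*Tan[theta2] == d hold automatically, since tan(theta1) = x/d1 and tan(theta2) = (d-x)/d2:

```python
import math

n1, n2, d1, d2, d = 1.0, 1.5, 1.0, 1.0, 1.0

def optical_path(x):
    # n1*d1*sec(theta1) + n2*d2*sec(theta2), rewritten in terms of x
    return n1 * math.hypot(d1, x) + n2 * math.hypot(d2, d - x)

# Ternary search: the optical path length is strictly convex in x on [0, d].
lo, hi = 0.0, d
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if optical_path(m1) < optical_path(m2):
        hi = m2
    else:
        lo = m1
x = (lo + hi) / 2

sin1 = x / math.hypot(d1, x)            # sin(theta1) at the minimizer
sin2 = (d - x) / math.hypot(d2, d - x)  # sin(theta2) at the minimizer
print(abs(n1 * sin1 - n2 * sin2) < 1e-6)  # Snell's law holds: True
```

This also suggests why the Mathematica call may be failing: with the constraint coupling theta1 and theta2, the minimization should be over both angles (or reduced to one variable by eliminating the constraint, as above).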

algorithms – Proving big-theta complexity with constants in $f(n)$

You can find the definitions of the big-$O$ notations on Wikipedia. They agree with your definition of big $\Theta$.

As for your actual question, the answer is actually quite subtle. Sometimes it is possible to find constants $c_1, c_2, n_0$ which work for all values of the parameters. For example, in your case, all $n \geq 1$ satisfy
$$1 \cdot n^{\max(a,b)} \leq n^a + n^b \leq 2 \cdot n^{\max(a,b)},$$

and so you can say that $n^a + n^b = \Theta(n^{\max(a,b)})$.

(As a side note, you can also say that $n^a + n^b = \Theta(n^a + n^b)$, or that $n^a + n^b = \Theta(n^a + 2n^b)$; but usually we aim at the simplest possible expression, or an expression of a particular form, say $n^\alpha \log^\beta n$.)
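A quick empirical spot-check of the two-sided bound (illustrative only; the real content is the algebraic inequality above, which holds for every $n \ge 1$):

```python
# Check  n^max(a,b) <= n^a + n^b <= 2 * n^max(a,b)  on a few sample points.
for a, b in [(1, 2), (2.5, 0.5), (3, 3)]:
    for n in [1, 2, 10, 1000]:
        dominant = n ** max(a, b)
        assert dominant <= n ** a + n ** b <= 2 * dominant
print("bounds hold on all sampled (a, b, n)")
```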

In other cases we are not so lucky. For example, $\log^k n = O(n)$ for all $k$, but the relevant constants depend on $k$. Sometimes this makes no difference, but in other cases it is important to know that the constants depend on $k$. When it does matter, we sometimes use the notation $O_k(n)$ to stress that the hidden constants depend on $k$. This is not completely standard, but pretty common. When the hidden constants are independent of some parameter, we can explicitly say so: $n^a + n^b = \Theta(n^{\max(a,b)})$, where the hidden constants are independent of $a,b$.

In your case, since you are only interested in an isolated estimate, you don't care about the dependence of the hidden constants on the parameters. So for you, $\log^k n = O(n)$ holds, even though the hidden constants depend on $k$, since the statement holds for any particular $k$. This is the usage in the master theorem, for example: the constants there depend on the parameters, but this usually makes no difference, because the parameters are fixed in any given application.

logic – Proving a result similar to the diagonal lemma/fixed-point theorem

As the title explains, I’m trying to solve the following exercise that was left for the reader in my lecture notes:

Show that for any two formulae $F(v_1)$ and $G(v_1)$ in $L_E$ (the language of arithmetic with exponentiation) with one free variable, there
exist sentences $X$ and $Y$ such that the sentences $(X ↔ G(\overline{⌜Y⌝}))$ and $(Y ↔ F(\overline{⌜X⌝}))$ are both true.

(where $⌜X⌝$ means the Gödel number of $X$, and $\overline{⌜X⌝}$ is the 'logical equivalent' of that Gödel number.)

This clearly resembles the diagonal lemma/fixed-point theorem, except instead of one self-referential sentence we’re supposed to have two sentences referring to each other.

I’ve tried playing around with different combinations of what you might take $X$ and $Y$ to be, attempting to adapt the proof of the diagonal lemma, but haven’t had much luck.

Any suggestions anyone could offer would be much appreciated.

(if I've missed out any important details, let me know and I'll add them!)

Proving $R/I \otimes_R M \cong_R M/IM$ using the UMP and a biadditive map, not exact sequences

I want to understand a paragraph of the proof that $R/I \otimes_R M \cong_R M/IM$ in Example (8) on pg. 370 of Dummit and Foote (third edition).

Here is the example from Dummit & Foote:

(two images of the example from Dummit & Foote, not reproduced here)

My questions are:

1- I do not understand exactly why we need the previous observation (as stated in the book) to conclude that the map $N \rightarrow (R/I) \otimes_R N$ defined by $n \mapsto 1 \otimes n$ is surjective.

2- Why do we need $IN$ to be in the kernel?

3- Why does $1 \otimes a_i n_i = a_i \otimes n_i$?

4- Why did the author define $f$ by that definition?

Could anyone help me answer those questions please?
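As a partial answer to question 4 (a sketch only; the notation may differ slightly from Dummit & Foote), $f$ is defined the way it is precisely so that the universal property of the tensor product applies:

```latex
% Start from the R-biadditive map
\varphi : R/I \times N \longrightarrow N/IN, \qquad
\varphi(r + I,\; n) = rn + IN.
% It is well defined: if r - r' \in I, then rn - r'n \in IN.
% Biadditivity plus the universal property of the tensor product then
% produce a unique R-module homomorphism
\Phi : (R/I) \otimes_R N \longrightarrow N/IN, \qquad
\Phi\big((r + I) \otimes n\big) = rn + IN,
% whose inverse is induced by n + IN \mapsto (1 + I) \otimes n.
```

Checking that the proposed inverse is well defined is exactly where one needs $IN$ to die in $(R/I) \otimes_R N$ (question 2): for $a \in I$, $1 \otimes an = (a + I) \otimes n = 0 \otimes n = 0$, which is also the computation behind question 3.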

real analysis – Proving that a function has primitives

Let $f$ be a differentiable function such that $\lim_{x\to\infty}\frac{f(x)}{x} = \lim_{x\to-\infty}\frac{f(x)}{x} = 0$. Prove that the function $g:\Bbb R\to\Bbb R$ admits primitives when $g$ is:
$g(x) = f'(1/x)$, for any $x \neq 0$.
I tried to integrate the function $g$ and to make the substitution $x := 1/x$, but it did not help me.