## inequality – Convergence in probability and almost sure convergence for maximal empirical processes

For any $$n$$, let $$X_1, \dots, X_n$$ be i.i.d. random variables on the probability space $$(\Omega, \mathcal{F}, \Pr)$$. Define
$$\mu_n(A) = \frac{1}{n} \sum_{i=1}^n 1_{\{X_i \in A\}}, \qquad \mu(A) = \Pr(X_1 \in A).$$

Consider any $$\mathcal{A} \subset \mathcal{F}$$, and define

$$g(X_1, \dots, X_n) := \sup_{A \in \mathcal{A}} |\mu_n(A) - \mu(A)|.$$

Prove that $$g(X_1, \dots, X_n)$$ converges in probability to $$0$$ if and only if $$g(X_1, \dots, X_n)$$ converges almost surely to $$0$$.

P.S.: This problem is posed as an exercise in the book “Combinatorial Methods in Density Estimation” by Luc Devroye and Gábor Lugosi (Exercise 3.2, with a hint to use the bounded difference inequality). I have tried but could not solve it. Hopefully someone can help. Thank you.
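Not a proof, but the phenomenon is easy to see numerically. A minimal Monte Carlo sketch, assuming (for illustration only) that $$\mathcal{A}$$ is the class of half-lines $$(-\infty, t]$$ and the $$X_i$$ are Uniform(0,1), so that $$g$$ is the Kolmogorov–Smirnov statistic:

```python
import numpy as np

rng = np.random.default_rng(0)

def sup_deviation(n):
    """sup_A |mu_n(A) - mu(A)| over A = {(-inf, t]} for n i.i.d. Uniform(0,1) draws.

    For this class, g is the Kolmogorov-Smirnov statistic, computed exactly
    from the order statistics.
    """
    x = np.sort(rng.uniform(size=n))
    hi = np.arange(1, n + 1) / n  # ECDF value just after each jump point
    lo = np.arange(0, n) / n      # ECDF value just before each jump point
    return max(np.max(np.abs(hi - x)), np.max(np.abs(lo - x)))

devs = [sup_deviation(n) for n in (100, 10_000)]
```

Here both convergence modes hold by Glivenko–Cantelli; the point of the exercise is that for an arbitrary class $$\mathcal{A}$$, convergence in probability already forces almost sure convergence, and the hinted bounded difference inequality gives the needed concentration of $$g$$ around its expectation.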

## bitcoincore development – What is the motivation behind Russell Yanofsky’s work to separate Bitcoin Core into independent node, wallet and GUI processes?

There are benefits to both users and developers in splitting Bitcoin Core into separate node, wallet and GUI processes.

As Alyssa Hertig outlines here, the benefit to users will be the ability to run the Bitcoin Core node on a different machine from the Bitcoin Core wallet, rather than being forced to run them on the same machine. A user could leave a node running continuously in the background but start and stop the wallet and the GUI as needed. It also opens up the prospect of using a different (i.e. non-Bitcoin-Core) GUI or wallet with the Bitcoin Core node.

For Bitcoin Core developers, Yanofsky highlights maintainability and security as the key advantages.

Process separation will make Bitcoin Core more easily maintainable, as it defines interfaces at process boundaries. Different parts of the code can interact by calling each other instead of sharing state. This helps code review by making it easier to identify dependencies between parts of the code. Defining boundaries in the codebase will also make code review more scalable, as reviewers will only need to understand one part of the codebase well rather than the interdependencies across the whole codebase.

From a security perspective, the wallet and node code could run with different privileges, and vulnerabilities should be harder to exploit since their impact will be limited to a single process. Inter-process communication (IPC) also makes new debugging tools available, such as the IPC_DEBUG environment variable to log all IPC calls.

There are some potential disadvantages that Yanofsky highlights. Inter-process communication is generally slower. IPC code can be tricky to write and may have bugs. Bad interfaces and unnecessary layers of abstraction can make it harder to implement new features. Features such as SPV (Simplified Payment Verification) that cross process boundaries will likely be more difficult to build.

Overall it seems clear the advantages outweigh the disadvantages. At the time of writing (August 2020) there are four remaining PRs to be reviewed and merged into Bitcoin Core and then Bitcoin Core should be multiprocess!

For more details on the process separation project see here.

## stochastic processes – On a degenerate SDE in the unit ball

This is a question about a diffusion process on the unit ball.

In this article, the author considers the following SDE in the closed unit ball $$E \subset \mathbb{R}^n$$:
\begin{align*} (1)\quad dX_t=\sqrt{2(1-|X_t|^2)}\,dB_t-cX_t\,dt, \end{align*}
where $$\{B_t\}_{t \ge 0}$$ is an $$n$$-dimensional Brownian motion, $$|\cdot|$$ denotes the Euclidean norm on $$\mathbb{R}^n$$ and $$c$$ is a nonnegative constant. We define an elliptic operator $$(\mathcal{A},\text{Dom}(\mathcal{A}))$$ by $$\text{Dom}(\mathcal{A})=C^2(\mathbb{R}^n)|_E$$ and
\begin{align*} \mathcal{A}f=(1-|x|^2)\,\Delta f-c\,x\cdot \nabla f,\quad f \in \text{Dom}(\mathcal{A}). \end{align*}
Then, standard results on martingale problems show that there exists a diffusion process $$\{X_t\}_{t \ge 0}$$ on $$E$$ such that
$$f(X_t)-f(x)-\int_{0}^{t}\mathcal{A}f(X_s)\,ds \quad(t \ge 0,\ x \in E)$$
is a martingale. Thus, the SDE $$(1)$$ possesses a solution. Furthermore, we can show that the solution is unique in distribution (by the way, pathwise uniqueness for (1) is a very deep problem).

My question is as follows.

If $$c=0$$, then $$\mathcal{A}$$ is a weighted Laplacian on $$E$$. However, we do not impose a Neumann boundary condition on $$\mathcal{A}$$. Thus, the operator $$\mathcal{A}$$ is not associated with a time-changed reflected Brownian motion on $$E$$, right? Indeed, there is no local time term in the display (1).

Even if $$c=0$$, is it difficult to describe the quadratic (Dirichlet) form of $$X$$? I am also interested in the fundamental solution (transition density) of $$X$$; I am not really sure that it exists…

I’m asking these questions to see what kind of diffusion process $$X$$ is.
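To get a rough feel for the process, here is a crude Euler–Maruyama sketch of (1) (not the martingale-problem construction; the projection at the boundary is an ad hoc fix for discretization overshoot, since the true diffusion coefficient degenerates exactly at $$|x|=1$$):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n=2, c=1.0, T=1.0, steps=2000):
    """Euler-Maruyama sketch for dX = sqrt(2(1 - |X|^2)) dB - c X dt, X_0 = 0."""
    dt = T / steps
    x = np.zeros(n)
    for _ in range(steps):
        diff = np.sqrt(2.0 * max(1.0 - float(x @ x), 0.0))  # degenerates at |x| = 1
        x = x + diff * np.sqrt(dt) * rng.standard_normal(n) - c * x * dt
        r = float(np.linalg.norm(x))
        if r > 1.0:  # crude projection back onto E: Euler steps can overshoot
            x = x / r
    return x

x_T = simulate()
```

The discretized paths stay inside the closed unit ball (up to the projection step), consistent with $$E$$ being invariant for the diffusion.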

## terminal – bash fork retry no child processes cpanel

Sorry for my English.

I am working on a Node.js test project on cPanel.

I used the cPanel terminal, ran the `nodemon index.js` command, and then got this error message:

```
jailshell: fork: retry: No child processes
jailshell: fork: retry: No child processes
jailshell: fork: retry: No child processes
jailshell: fork: retry: No child processes
jailshell: fork: retry: No child processes
```

How can I stop this error?
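Not a definitive answer, but `fork: retry: No child processes` typically means the shell has hit its per-user process limit, which is common in cPanel's restricted jailshell. A quick way to compare the limit against current usage, assuming a POSIX-ish shell:

```shell
# maximum number of processes this shell may create (may print "unlimited")
ulimit -u

# rough count of processes currently running as your user
ps -u "$(whoami)" | wc -l
```

If the count is at or near the limit, stopping stray processes or asking the host to raise the limit may help; note that `nodemon` itself spawns child processes, so plain `node index.js` may also be worth trying.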

## network – Running some processes and not others over VPN

I’m looking to secure some apps using an OpenVPN connection.
I want the apps not to work when the VPN isn’t active.
I can’t necessarily track which servers the apps are trying to access, so manually specifying routes from either the server or the client end is prohibitive. I also don’t want all traffic going over the VPN.

So here’s what I have so far:

1. Something needs to work at the application layer to ‘capture’ all traffic from an application.
2. Something else needs to work at the network layer to take all of that traffic and push it over the VPN.

And then I need a way to specify which apps and which interface, and be sure that if that interface is down or disconnected, no traffic flows.

So far I think this might be doable with `pf` (I found Murus, a GUI front-end), except that it doesn’t seem to deal with applications per se, but rather with networks and ports, which as stated above is problematic.

Then there’s `Little Snitch`, which deals with applications but is a binary go/no-go decision maker, rather than directing some traffic here and some traffic there.

That said, I did find a not-well-documented feature where it seems you can create a rule for a process in `Little Snitch` and hand it off to `pf`. So perhaps there’s a way to write a `pf` rule that then directs that traffic over the VPN.
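One pattern that may be worth exploring (a hypothetical sketch, not a tested config): `pf` cannot match on applications, but it can match on the socket-owning user, so the apps could be run as a dedicated local user (here called `vpnonly`) whose traffic is pinned to the VPN tunnel interface (here assumed to be `utun0`):

```
# /etc/pf.conf fragment (hypothetical)
# allow the dedicated user's traffic out only via the VPN tunnel...
pass out quick on utun0 proto { tcp, udp } user vpnonly keep state
# ...and drop everything else from that user, so nothing leaks when the VPN is down
block out quick proto { tcp, udp } user vpnonly
```

Because `quick` stops rule evaluation at the first match, ordering matters: the `pass` on `utun0` must come before the catch-all `block`. This still leaves the problem of launching the apps as that user, and it only constrains where their traffic may exit rather than doing full per-app routing.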

Open to suggestions.

## stochastic processes – Optimal rule for multiple stopping times for defect finding

Suppose a quality inspector is inspecting $$b$$ black and $$w$$ white gadgets. It is known in advance that in total $$d_b$$ black and $$d_w$$ white gadgets are defective. The gadgets come down an assembly line one by one. As each gadget passes, the inspector observes its color and chooses either to let the gadget pass or to use a device to detect whether the gadget is defective. But he can only use the device a total of $$n$$ times. What is the optimal stopping rule for using the inspection device so as to maximize the expected number of defective gadgets found?

=====

Suppose that at each pass the number of black gadgets already inspected with the device is $$i_b$$, of which $$f_b$$ were found defective. Then the probability that the current black gadget is defective is $$p_b=\frac{d_b-f_b}{b-i_b}$$. The symmetric formula holds for the white gadgets.

I have a conjecture for the explicit solution, which is a greedy algorithm, as follows and am seeking a proof.

At each pass, the inspector waits for a gadget of the color whose defect probability equals $$p:=\max(p_b, p_w)$$ and inspects it with the device, unless the number of device uses left exceeds the number of gadgets whose defect probability equals $$p$$.

I have set up the dynamic programming formulation but fail to see immediately either the proof or a counterexample to my conjecture.
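In case it helps with hunting for a counterexample, here is a small brute-force value iteration for the dynamic program (my own formulation, assuming the inspector sees each color before deciding and that passed gadgets reveal nothing; the belief state follows the same exchangeability logic as the $$p_b$$ formula above):

```python
from functools import lru_cache

def optimal_value(b, w, d_b, d_w, n):
    """Expected number of defectives found under an optimal inspection policy.

    Belief-state DP: given that i gadgets of a color were device-inspected and
    f of them were defective, each still-uninspected gadget of that color is
    defective with probability (d - f) / (total - i), by exchangeability.
    """
    @lru_cache(maxsize=None)
    def V(rb, rw, ib, fb, iw, fw, k):
        # rb, rw: gadgets of each color still to come; k: device uses left
        if k == 0 or rb + rw == 0:
            return 0.0

        def best(total, d, i, f, step):
            # optimal choice when a gadget of this color arrives;
            # step(i2, f2, k2) is the value of the resulting state
            p = (d - f) / (total - i)
            skip = step(i, f, k)
            if p > 0.0:
                inspect = (p * (1.0 + step(i + 1, f + 1, k - 1))
                           + (1.0 - p) * step(i + 1, f, k - 1))
            else:
                inspect = step(i + 1, f, k - 1)
            return max(skip, inspect)

        vb = best(b, d_b, ib, fb,
                  lambda i2, f2, k2: V(rb - 1, rw, i2, f2, iw, fw, k2)) if rb else 0.0
        vw = best(w, d_w, iw, fw,
                  lambda i2, f2, k2: V(rb, rw - 1, ib, fb, i2, f2, k2)) if rw else 0.0
        return (rb * vb + rw * vw) / (rb + rw)

    return V(b, w, 0, 0, 0, 0, n)
```

Exhaustively comparing this optimum with the greedy rule's expected value on small $$(b, w, d_b, d_w, n)$$ instances might turn up a counterexample or lend support to the conjecture.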

## hardware – When Intel / AMD chose their nanometer processes, why were the specific numbers 5, 7, 10, 14, 22, 32, 45, etc. chosen?

When looking at the roadmaps for the CPU manufacturing process

Intel Expects to Launch 10nm Chips in 2017

1. 10 µm – 1971
2. 6 µm – 1974
3. 3 µm – 1977
4. 1.5 µm – 1981
5. 1 µm – 1984
6. 800 nm – 1987
7. 600 nm – 1990
8. 350 nm – 1993
9. 250 nm – 1996
10. 180 nm – 1999
11. 130 nm – 2001
12. 90 nm – 2003
13. 65 nm – 2005
14. 45 nm – 2007
15. 32 nm – 2009
16. 22 nm – 2012
17. 14 nm – 2014
18. 10 nm – 2016
19. 7 nm – 2018
20. 5 nm – 2020
21. 3 nm – ~2022

Why are these numbers chosen specifically? I have looked around, and there are deviations, such as:

Samsung Electronics began mass production of 64 Gb NAND flash memory chips using a 20 nm process in 2010.[114]

TSMC first began 16 nm FinFET chip production in 2013.[115]

And many others.

Yet as far as Intel and AMD are concerned, they are both in lockstep. Is there something about these numbers that lends itself to the manufacturing process, or is the selection completely arbitrary?
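One pattern worth noting (a numerical observation, not an official rationale): successive nodes shrink by roughly a factor of 1/√2 ≈ 0.7, which halves the area of a feature and so roughly doubles transistor density each generation. A quick check against the list above:

```python
import math

# Intel process nodes (nm) from the roadmap above
nodes_nm = [10000, 6000, 3000, 1500, 1000, 800, 600, 350, 250, 180,
            130, 90, 65, 45, 32, 22, 14, 10, 7, 5]

ratios = [b / a for a, b in zip(nodes_nm, nodes_nm[1:])]
# geometric mean of the generation-to-generation shrink factor
avg_ratio = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
```

The individual ratios wobble (0.5 to 0.8), but the geometric mean sits close to 0.7, i.e. close to the area-halving factor.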

## stochastic processes – An integral involving a Lévy process with no positive jumps

Let $$L_t$$ be a Lévy process with no positive jumps, but such that $$L_t$$ is not strictly decreasing, i.e.,
$$L_t = \gamma t + \sigma B_t + J_t,$$
where $$B_t$$ is a Brownian motion, $$J_t$$ is a pure-jump process with only negative jumps, $$\gamma \ge 0$$, $$\sigma \ge 0$$, and at least one of $$\gamma$$ and $$\sigma$$ is nonzero. In this case, it is known that the one-dimensional distribution of $$L_t$$ is absolutely continuous.

Let $$p(t,x)$$ be the probability density function of $$L_t$$:
$$P(L_t \in B) = \int_B p(t,s)\,ds.$$

Fix $$a > 0$$.

Now for $$t > 0$$ and $$x < a$$, let
$$q(t,x) = \int_0^t \frac{a}{s}\, p(s,a)\, p(t-s,\, x-a)\, ds.$$
I have reason to believe that $$q(t,x) \to p(t,a)$$ as $$x \nearrow a$$, but I haven’t been able to prove it.
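Not a proof, but the claim can be checked numerically in the simplest case, standard Brownian motion ($$\gamma = 0$$, $$\sigma = 1$$, no jumps), where $$\frac{a}{s}\,p(s,a)$$ is the density of the first passage time to level $$a$$ and, if I am not mistaken, the reflection principle then gives $$q(t,x) = p(t, 2a-x) \to p(t,a)$$ as $$x \nearrow a$$. A sketch with the midpoint rule:

```python
import numpy as np

def p(t, x):
    # transition density of standard Brownian motion started at 0
    return np.exp(-x * x / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

def q(t, x, a, m=400_000):
    # midpoint rule for q(t,x) = int_0^t (a/s) p(s,a) p(t-s, x-a) ds
    s = (np.arange(m) + 0.5) * (t / m)
    return float(np.sum((a / s) * p(s, a) * p(t - s, x - a)) * (t / m))

a, t = 1.0, 1.0
errors = [abs(q(t, x, a) - p(t, a)) for x in (0.9, 0.99, 0.999)]
```

The gap $$|q(t,x) - p(t,a)|$$ shrinks steadily as $$x \nearrow a$$, consistent with the conjecture in this special case.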

## Should Online Processes Assume that the User has Access to a Printer?

As printing on paper is a rapidly vanishing activity, should online processes assume that the user has access to a printer?

As in ‘Print QR code and take to collection point’ – which I saw recently.

## reference request – Rate of convergence for point processes in Skorokhod J1 topology

The Skorokhod J1 topology makes the space $$D(0,1)$$ a metric space; see its definition at https://encyclopediaofmath.org/wiki/Skorokhod_topology

Assume we have, for each $$n \ge 0$$, a point process $$(X^n_t: (\Omega, P) \to \mathbb{Z}^+)_{t \in (0,1)}$$ (i.e., a stochastic process with positive integer values), e.g., a Poisson process. So each sample path of $$X^n$$ lies in $$D(0,1)$$.

Do we have reference materials which estimate the rate of convergence $$\rho_n$$ of point processes in the Prokhorov metric, i.e.,

$$\inf\{\epsilon>0: P(X^n \in A) \le P(X^0 \in A^{\epsilon})+\epsilon \ \ \forall A \subset D(0,1)\}=O(\rho_n) \to 0,$$

where $$A^{\epsilon}$$ denotes the $$\epsilon$$-neighborhood of $$A$$?

Edit: I know that Kubilius obtained a rate of convergence in the Prokhorov metric for the weak invariance principle (i.e., where the $$X^n$$ are real-valued processes without jumps, converging to Brownian motion):

Kubilius, K. Rate of convergence in the invariance principle for martingale difference arrays, Lith Math J. 34 (1994) pp 383–392, doi:10.1007/BF02336885