## Architecture – Sidecar processes in Unity?

I am a .NET Core web developer trying to learn Unity. In web development, if I want to perform more than one domain function at a time, I run it as a completely separate service (a disjoint microservice) or as a separate process (a sidecar). In .NET Core we have `IHostedService` and various other hosting abstractions, so creating small sidecar processes is easy, and in 3.0 even ASP.NET Core is effectively a sidecar process, because it is built on `IHostBuilder` for generic host support. My web development experience is mainly focused on distributed back-end systems.

While trying to learn Unity, I am trying to understand how networking works in it. I was looking at the `NetworkManager` documentation; it seems more useful for integrated host/client systems. I like that I can provide custom implementations of `NetworkManager`, and I think that will be of great help. I have also been reading about asynchronous programming in Unity and looking at its ECS job system. It seems useful, but it doesn't seem to be what I'm looking for either. I am looking for a background/sidecar process architecture that the Unity main thread can call as needed, but that otherwise does not interact with the Unity main thread at all; only its life cycle is tied to the Unity main thread.

In the asynchronous multiplayer game that I am learning to create, the game world will be a dedicated server that runs remotely, and the client will have to connect to it to load its internal state. Since the remote game state is mostly real-time, I was thinking of implementing a combination of real-time and cached network managers. Something like this:

```csharp
// Used for strongly consistent, real-time communication; talks to the remote
// server directly. The Unity main thread never calls this directly.
public class RealtimeStateNetworkManager { }

// Used for GameObjects. Tries to load from cache before making real-time
// networking calls. Cache items are instantiated when a GameObject makes its
// initial call, then background threads keep the cache updated during the
// lifecycle of the GameObject.
public class CachedNetworkManager : NetworkManager { }
```

As for the `CacheSidecarHost`, I want to run it off the main thread as a sidecar process so it doesn't affect the core performance of the game client. The main objective of the internal cache is to manage network synchronization so that client updates happen in real time once the cache has warmed up, while the cache updates its internal state separately.

The process would look like this:

My understanding of `NetworkManager` and its role may be wrong. My main goal is to understand how I could start and run the `CacheSidecarHost` as a background/sidecar process when the Unity main thread starts, so that the networking can be decoupled from the Unity main thread.

My hope is that decoupling the main network functionality from the Unity main thread will help people with poor network connections have a pleasant experience, since the network stack would be implemented in a more sustainable/modern way, ensuring that even they can adequately play a game that requires an internet connection.
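This is not Unity-specific, but the pattern described above can be sketched language-agnostically. Below is a minimal illustration in Python (all names such as `CacheSidecar` and `_fetch_remote` are hypothetical stand-ins, not Unity APIs) of a background worker whose lifecycle is tied to the main thread and which communicates only through a thread-safe queue and a lock:

```python
import queue
import threading
import time

class CacheSidecar:
    """Background worker that refreshes a cache off the main thread.

    A minimal sketch of the sidecar pattern: the main thread only ever
    reads the cache; all networking happens on the background thread.
    """

    def __init__(self):
        self._cache = {}
        self._requests = queue.Queue()
        self._lock = threading.Lock()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def stop(self):
        # Lifecycle is tied to the caller: stopping the host stops the sidecar.
        self._stop.set()
        self._thread.join()

    def get(self, key):
        # Non-blocking read from the main thread: serve the cached value
        # and ask the sidecar to (re)fetch in the background.
        self._requests.put(key)
        with self._lock:
            return self._cache.get(key)

    def _run(self):
        while not self._stop.is_set():
            try:
                key = self._requests.get(timeout=0.1)
            except queue.Empty:
                continue
            value = self._fetch_remote(key)  # stand-in for a real network call
            with self._lock:
                self._cache[key] = value

    def _fetch_remote(self, key):
        return f"state-for-{key}"  # hypothetical remote lookup

sidecar = CacheSidecar()
sidecar.start()
sidecar.get("player-1")         # first call warms the cache in the background
time.sleep(0.3)
print(sidecar.get("player-1"))  # served from the warmed cache
sidecar.stop()
```

In Unity the same shape is usually achieved with a long-lived background thread (or worker process) started alongside the main loop, with the main thread never blocking on it.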

## processes – zombie processes using Termux

I am testing one of my applications in the Termux terminal emulator on Android.
I want to generate some zombie processes.

On GNU/Linux, if I open a terminal emulator and type:

```shell
ruby -e '10.times { fork { exit! } } && sleep'
```

This will create 10 zombie processes. To verify the zombie processes, I type:

```shell
ruby -e 'pids = Dir["/proc/*"].select { |x| File.split(x)[1].then { |y| y.to_i.to_s == y } }
         zombies = pids.count { |p| File.read("#{p}/stat").split[2] == ?Z }
         puts "Active Processes: #{pids.size} (#{zombies} Zombies)"'
```

which prints output in the format:

```
Active Processes: 189 (10 Zombies)
```

But in Termux, I can't make it work. Even when I try to generate the processes, none are created. I have a total of only 11 processes shown in Termux!

What is different about Android? Is there any way to test my application using Termux? Or is there another application that would let me do this?
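For reference, the same experiment can be written in Python, assuming a Linux-style `/proc` (which Termux also exposes, subject to Android's restrictions): fork children that are never reaped, then count processes whose state field is `Z`.

```python
import os
import time

# Fork children that exit immediately; the parent never wait()s for them,
# so they stay in the process table as zombies.
for _ in range(10):
    if os.fork() == 0:
        os._exit(0)

time.sleep(0.5)  # give the children time to terminate

def count_zombies():
    # Same check as the Ruby one-liner: state field "Z" in /proc/<pid>/stat.
    zombies = 0
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/stat") as f:
                # the state field follows the parenthesised command name
                if f.read().rsplit(")", 1)[1].split()[0] == "Z":
                    zombies += 1
        except OSError:
            pass
    return zombies

print(f"Zombies: {count_zombies()}")
```

On stock Linux this reports at least the 10 zombies just created; whether it does the same under Termux is exactly the question above.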

## stochastic processes – convergence of the modulus of Brownian motion

I'm looking for a brief argument that for a Brownian motion started at some point $$x$$ we have
$$|B_t| \to \infty \quad \text{a.s.}$$ as $$t \to \infty$$.

I thought maybe the law of the iterated logarithm could give this. Also, I hope the statement is even true.

## node.js: FFmpeg conflict when running in different processes

I have a Node.js server that does some manipulation of audio files using the fluent-ffmpeg library, but there is a problem when concatenating some files using this function:

```javascript
ffmpeg('./public/uploads/' + tmpFilename)
    .input(path + newTmpName)
    .complexFilter([
    ])
    .on('end', () => {
        deleteArray.push(newTmpName);
        deleteArray.push(tmpFilename);

        deleteFiles(deleteArray).then(() => {
            parentPort.postMessage(newFilename);
        }).catch((error) => {
            console.log('error deleting the temporary files');
            throw new Error(error);
        });
    })
    .on('error', (error) => {
        console.log('error concatenating the parts: ' + error);
        throw new Error(error);
    })
```

It works well when I run a single instance, but if I test with several users (or even several tabs in the same browser), the output is generated incorrectly; there seems to be some kind of conflict between the running processes. From what I have researched, it should be possible to run several processes without conflicts.
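One common cause of this kind of symptom is concurrent jobs sharing the same temporary filenames, so one request reads or deletes another request's intermediate files. A minimal, language-agnostic sketch of the usual fix (shown here in Python; `unique_tmp_name` is a hypothetical helper, not a fluent-ffmpeg API) is to derive a collision-proof name per job:

```python
import uuid

def unique_tmp_name(prefix="part", suffix=".mp3"):
    # A collision-proof name per job, so concurrent jobs never read,
    # overwrite, or delete each other's intermediate files.
    return f"{prefix}-{uuid.uuid4().hex}{suffix}"

print(unique_tmp_name())  # e.g. "part-<32 hex chars>.mp3", unique per call
```

If each worker derives its `tmpFilename` and `newTmpName` this way, concurrent requests stop competing over the same paths.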

## machine learning: in Markov decision processes, why is R0 omitted?

I am in the process of learning MDPs, and a rather small thing is bothering me. Wherever I look, I see things presented in this order:

$$S_0, A_0, R_1, S_1, A_1, R_2, \ldots, S_t, A_t, R_{t+1}$$

My question is: why is $$R_0$$ omitted?

## stochastic processes: find the spectral measure of a stationary centered sequence with a covariance function

I am trying to find the spectral measure of a stationary centered sequence $$\{\xi_n\}_{n \in \Bbb Z}$$ with covariance function $$\gamma_n = a^{|n|}$$, $$a \in \Bbb C$$, $$|a| = 1$$. The spectral measure is the finite measure $$F$$ defined on the $$\sigma$$-algebra of Borel sets of the segment $$(-\pi, \pi)$$ such that $$\gamma_n = \int_{-\pi}^{\pi} e^{i \lambda n} \, F(d\lambda)$$, where $$\gamma_n$$ is the covariance function.
My solution so far: $$a^{|n|} = e^{i \alpha |n|}$$, $$\alpha \in (0, 2\pi)$$.
Thus, $$e^{i \alpha |n|} = \int_{-\pi}^{\pi} e^{i \lambda n} \, F(d\lambda) = \int_{0}^{2\pi} e^{i (\theta - \pi) n} \, F(d\theta)$$. I need to know how $$F$$ is defined in this case.

## How to kill all processes using a given GPU?

I use the CUDA toolkit to perform some calculations on my Nvidia GPUs in Windows 7 SP1 x64 Ultimate. How can I kill all processes that are using a given GPU, all at once?
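A hedged sketch of one way to do this, assuming `nvidia-smi` is on the PATH (its `-i` flag selects a GPU and `--query-compute-apps=pid` lists the PIDs of compute processes on it); `parse_pids` and `kill_all_on_gpu` are hypothetical helper names, not part of any toolkit:

```python
import os
import signal
import subprocess

def parse_pids(csv_text):
    # nvidia-smi's csv,noheader output is one PID per line
    return [int(tok) for tok in csv_text.split() if tok.isdigit()]

def pids_on_gpu(gpu_index):
    # -i selects one GPU; --query-compute-apps=pid limits output to PIDs
    out = subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index),
         "--query-compute-apps=pid", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_pids(out)

def kill_all_on_gpu(gpu_index):
    for pid in pids_on_gpu(gpu_index):
        # on Windows, os.kill terminates the target via TerminateProcess
        os.kill(pid, signal.SIGTERM)

# usage (not run here): kill_all_on_gpu(0)  # terminate everything on GPU 0
```

Whether graphics (non-compute) processes also show up depends on the driver; `nvidia-smi` on some Windows driver modes reports no per-process information at all.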

## probability theory – Karatzas and Shreve: extension of submartingale convergence to almost surely right-continuous processes

This is from Karatzas and Shreve's Brownian Motion and Stochastic Calculus.
In the remark preceding the following theorem, they say that the assumption of right continuity can be replaced by the assumption of right continuity for $$P$$-almost every sample path. How can this replacement be proved? I thought about replacing the process $$X$$ with its right-continuous modification $$\tilde{X}$$. However, first of all, I am not sure whether it remains a submartingale after taking its right-continuous version. Furthermore, even if it is a submartingale, how do we know that an almost sure limit of the right-continuous process will also be an almost sure limit of the original process? Here $$t \to \infty$$ runs along $$\mathbb{R}$$, so we cannot take the union of all the measure-zero null sets as we can when $$n \to \infty$$ along $$\mathbb{N}$$.

## architecture: how to optimally and safely isolate processes when building an operating system from scratch?

I am trying to build software in JavaScript that emulates an operating system. For all intents and purposes it is an operating system, even though it is written in JavaScript and runs on Node.js or in the browser. But the fact that it is JavaScript shouldn't matter; it could be in any language or on any platform.

What I'm stuck on is the beginning. I'm used to writing code in higher-level languages that run on top of an operating system, where the process architecture is already built, so I've never had to deal with it.

The general question is: How do I create a process architecture?

The specific question is: at a high level, how do I safely create isolated processes?

Basically I have this:

```javascript
var all_processes = []
var specific_process = -1

function set_process(index) {
    if (all_processes[index]) {
        throw new Error('Process taken')
    } else {
        specific_process = index
        all_processes[index] = {}
    }
}

function call_function(name, ...args) {
    all_processes[specific_process][name](...args)
}
```

But I imagine adding async and multi-threading, etc., all the sophisticated features of processes. I would like to know how you can keep processes isolated safely. How does the operating system do it? At some point something needs to create and manage the processes, but the processes themselves cannot communicate with each other (unless it is through a specific protocol). So it seems that, on one level, I need to pass the specific process into each function call, but then you lose the clean API of not having the process in every parameter. And if you do that, what stops code from taking a different process and passing it in (that is, how do you guarantee that only its own process can be accessed)? In addition, verifying the process on each call would add a performance overhead.

Essentially: how do you create safe processes, at least at a high level? What should I be looking at, if not a description of how to do it?

Wikipedia says:

Process isolation can be implemented with the virtual address space, where the address space of process A is different from the address space of process B, preventing A from writing to B.

However, I don't understand what this means, what exactly to do. How does it make my code look theoretically?

Maybe the sandboxes offer some information, but Wikipedia doesn't offer much, delegating the operating system without details:

A sandbox is implemented by running the software in a restricted operating system environment, thus controlling the resources (for example, file descriptors, memory, file system space, etc.) that a process can use.

The article on privileges does not reveal much either.

Most operating systems use the memory management hardware of a CPU to provide process isolation, using two mechanisms. First, privilege levels prevent untrusted code from manipulating the system resources that implement processes, for example, the memory management unit (MMU) or interrupt controllers. These mechanisms' non-trivial performance costs are largely hidden, since there is no widely used alternative approach to compare them against. Mapping from virtual to physical addresses can generate overhead of up to 10-30% due to exception handling, inline TLB lookup, TLB reloads, and maintenance of kernel data structures, such as page tables [29]. In addition, virtual memory and privilege levels increase the cost of communication between processes.

As a solution, they present software isolated processes (SIPs) and say:

The design and implementation of a SIP-based system is an important contribution of this work. A software isolated process is a collection of memory pages and a language safety mechanism that guarantees that the code in one process cannot access the pages of another process. A SIP replaces hardware memory protection with static verification of program safety. Singularity uses language safety and a fast communication mechanism built on channels [15] to enforce a system-wide invariant that neither the kernel nor any other process contains a reference into the object space of a given process. Because different process object spaces always reside in disjoint memory pages, memory reclamation is simple when processes terminate.

I don't understand how this could work at all. Although this is useful:

In addition, the system maintains the invariant that there is at most one pointer to an item in the exchange heap. When a process sends a message, it loses its reference to the message, which is transferred to the receiving process (similar to sending a letter by postal mail). Therefore, processes cannot use this heap as shared memory, and messages can be exchanged very efficiently by passing a pointer, not by copying.
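The ownership-transfer idea in that passage can be sketched in a few lines. This is a toy illustration in Python (the `Channel` class and the dict-as-object-space representation are hypothetical, not Singularity's actual mechanism): sending a message removes it from the sender's space before the receiver ever sees it, so no object is ever reachable from two "processes" at once.

```python
import queue

class Channel:
    """Message channel where sending transfers ownership of the message.

    At most one side ever holds a reference to a message, so the
    "exchange heap" is never usable as shared memory.
    """

    def __init__(self):
        self._q = queue.Queue()

    def send(self, holder, key):
        # The sender loses its reference before the receiver gains it,
        # like handing over a letter: the object is moved, never copied.
        msg = holder.pop(key)
        self._q.put((key, msg))

    def receive(self, holder):
        key, msg = self._q.get()
        holder[key] = msg
        return key

a_space = {"greeting": ["hello"]}  # process A's object space
b_space = {}                       # process B's object space
ch = Channel()
ch.send(a_space, "greeting")
ch.receive(b_space)
print(a_space, b_space)  # -> {} {'greeting': ['hello']}
```

In a real system the static verifier, not runtime `pop` calls, enforces that the sender cannot keep an alias to the sent object; the sketch only shows the invariant, not how it is checked.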

## Stochastic processes: what is the probability that the mouse eats the cheese?

I am doing an exercise on Markov chains.

A cheerful mouse moves in a maze. If at time $$n$$ it is in a room with $$k$$ horizontally or vertically adjacent rooms, then at time $$n+1$$ it will be in one of the $$k$$ adjacent rooms, choosing one at random, each with probability $$1/k$$. A fat and lazy cat stays in room $$3$$ all the time, and a piece of cheese waits for the mouse in room $$5$$. The mouse starts in room $$1$$. See the following figure:

The cat is not completely lazy: if the mouse enters the room inhabited by the cat, the cat will eat it. Also, if the mouse eats the cheese, it rests forever. Let $$X_n$$ be the position of the mouse at time $$n$$.

What is the probability that the mouse can eat the cheese?

From the graph, the transition matrix is as follows:

$$P = \begin{pmatrix} 0 & 1/2 & 0 & 1/2 & 0 \\ 1/2 & 0 & 1/2 & 0 & 0 \\ 0 & 1/2 & 0 & 1/2 & 0 \\ 1/3 & 0 & 1/3 & 0 & 1/3 \\ 0 & 0 & 0 & 1 & 0 \end{pmatrix}$$

So the probability that the mouse can eat the cheese is $$\mathbb{P}\left(\exists n \in \mathbb{N} : X_n = 5\right)$$ (reaching room $$5$$ before room $$3$$).

Could you please give me some clues for calculating this probability? Thank you very much!
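As a sanity check for a candidate answer, the first-step (hitting-probability) equations implied by this matrix can be solved directly with exact arithmetic; this sketch assumes room 3 (cat) and room 5 (cheese) are treated as absorbing:

```python
from fractions import Fraction as F

# h(i) = P(reach room 5 before room 3 | mouse currently in room i)
# Boundary conditions: h3 = 0 (eaten), h5 = 1 (cheese).
# First-step equations read off the transition matrix:
#   h1 = 1/2*h2 + 1/2*h4
#   h2 = 1/2*h1 + 1/2*h3          = 1/2*h1
#   h4 = 1/3*h1 + 1/3*h3 + 1/3*h5 = 1/3*h1 + 1/3
# Substituting h2 and h4 into the equation for h1:
#   h1 = 1/4*h1 + 1/6*h1 + 1/6
h1 = F(1, 6) / (1 - F(1, 4) - F(1, 6))
print(h1)  # -> 2/7
```

The substitution step is the whole method here: eliminate the transient rooms one by one until a single linear equation in $$h_1$$ remains.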