## postgresql: server process was terminated by signal 9: Killed: 9, but works with LIMIT

I am trying to execute queries on a large database without the connection to the server being dropped.

I am using Postgres 12.1 on a Mac with 16 GB of memory and approximately 40 GB of free disk space. The database is 78 GB according to `pg_database_size`, with the largest table at 20 GB according to `pg_total_relation_size`.

The error I get (from the log), regardless of which query fails, is:

``````
server process (PID xxx) was terminated by signal 9: Killed: 9
``````

In VS Code the error is `"lost connection to server"`.

Two examples that do not work are:

``````
UPDATE table
SET column = NULL
WHERE column = 0;
``````
``````
select columnA
from table1
where columnA NOT IN (
select columnB
from table2
);
``````

I can execute some of the queries (the previous one, for example) by adding a `LIMIT` of, say, 1,000,000.
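For instance, the second query above does complete when written with the limit attached (the same query from the question, nothing else changed):

```sql
select columnA
from table1
where columnA NOT IN (
    select columnB
    from table2
)
LIMIT 1000000;
```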

I suspected that I was running out of disk space due to temporary files, but with `log_temp_files = 0` set, the log shows no temporary files being written.

I tried increasing and decreasing `work_mem`, `maintenance_work_mem`, `shared_buffers` and `temp_buffers`. None of it helped; the behavior was almost the same.

I tried dropping all the indexes, which reduced the estimated cost of some of the queries, but the connection to the server was still cut.

What could my problem be, and how can I investigate it further?

In addition, I read that temporary files for queries are stored in `pgsql_tmp`. I checked that folder and it contains no files of significant size. Could the temporary files be stored elsewhere?

The log for the execution of a failed query looks like this:

``````
2020-02-17 09:31:08.626 CET (94908) LOG:  server process (PID xxx) was terminated by signal 9: Killed: 9
2020-02-17 09:31:08.626 CET (94908) DETAIL:  Failed process was running: update table
set columnname = NULL
where columnname = 0;

2020-02-17 09:31:08.626 CET (94908) LOG:  terminating any other active server processes
2020-02-17 09:31:08.626 CET (94919) WARNING:  terminating connection because of crash of another server process
2020-02-17 09:31:08.626 CET (94919) DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exi$
2020-02-17 09:31:08.626 CET (94919) HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2020-02-17 09:31:08.626 CET (94914) WARNING:  terminating connection because of crash of another server process
2020-02-17 09:31:08.626 CET (94914) DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exi$
2020-02-17 09:31:08.626 CET (94914) HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2020-02-17 09:31:08.629 CET (94908) LOG:  all server processes terminated; reinitializing
2020-02-17 09:31:08.698 CET (94927) LOG:  database system was interrupted; last known up at 2020-02-17 09:30:57 CET
2020-02-17 09:31:08.901 CET (94927) LOG:  database system was not properly shut down; automatic recovery in progress
2020-02-17 09:31:08.906 CET (94927) LOG:  invalid record length at 17/894C438: wanted 24, got 0
2020-02-17 09:31:08.906 CET (94927) LOG:  redo is not required
``````

## sleep wake – What is the apfsd process?

It shows up in Activity Monitor as "preventing sleep". I happen to review the list of processes preventing sleep quite frequently, and I don't remember seeing it before. I guess it might have something to do with the Apple File System, maybe a daemon started by launchd, but I can't find any references to it anywhere through Google searches.
I wonder if it is what is preventing my iMac (macOS 10.15.3) from sleeping, or whether that is just a red herring. If it is something APFS needs, I'm afraid to experiment with stopping the process for fear of corrupting the file system.

## reference request – Generalization of the ordinary generating function (relevant for branching / percolation processes)

Let $p$ be a probability distribution on the non-negative integers. The ordinary generating function associated with this distribution, $$G_p(x) = \sum_{k=0}^\infty p(k) x^k,$$ can be interpreted as follows. Let $K$ be drawn according to $p$, and then let $S \sim \text{Binomial}(K, x)$. Then $G_p(x)$ is the probability that $S = K$.

We can generalize this definition: let $K$ be drawn according to $p$, and then let $S \sim \text{Binomial}(K, x)$. Then $G_{p,a}(x)$ is the probability that $S \geq K - a$. It can be written explicitly as $$G_{p,a}(x) = \sum_{k=0}^\infty p(k) \sum_{m=0}^{a} \binom{k}{m} (1-x)^m x^{k-m}.$$
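As a quick sanity check, $G_{p,a}(x)$ can be evaluated numerically for a distribution with finite support (a small sketch; truncating the outer sum is only valid because $p$ has finite support here):

```python
from math import comb

def G(p, a, x):
    """Evaluate G_{p,a}(x) for a finitely supported distribution.

    p is a list with p[k] the probability of the integer k.
    math.comb(k, m) is 0 whenever m > k, so the inner sum is safe as written.
    """
    return sum(
        pk * sum(comb(k, m) * (1 - x) ** m * x ** (k - m) for m in range(a + 1))
        for k, pk in enumerate(p)
    )

p = [0.2, 0.5, 0.3]

# a = 0 recovers the ordinary generating function: 0.2 + 0.5*0.7 + 0.3*0.49 = 0.697
print(G(p, 0, 0.7))

# once a covers the whole support, S >= K - a always holds, so the value is 1
print(G(p, 2, 0.7))
```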

This quantity arises naturally in certain percolation problems (generalizing the role of $G_{p,0}$ in the study of the extinction probabilities of a branching process based on $p$).

Does this family of functions have a name or other applications?

## development process – Verifying my understanding of the spiral model

Hello software engineers,
I just need to verify my understanding of the spiral process model, since this model confuses me. As I understand it, the spiral model is similar to the waterfall model (that is, the activities are: requirements analysis, design, implementation and testing), but at each stage we do the following: first we perform a risk analysis in which we identify and resolve risks, then we carry out the phase itself (a requirements analysis phase, a design phase, etc.), and then we plan the next phase. Is my understanding correct?

## python – Decorator to instantiate the class in another process

Motivation

I want to run some heavy computational tasks in a separate process, so that they do not hog the GIL and can make effective use of a multi-core machine.

Where those tasks are pure functions, I would just use the `Pool` provided by `multiprocessing`. However, that does not work so well for tasks that maintain state. Take the example of a process that encrypts data on the fly and pumps it to a file. I would like the keys, the block-chaining parameters and the open file handle (which cannot be pickled and passed between processes) to live as internal state of some `EncryptedWriter` object. I would like to be able to use the public interface of that object completely transparently, but have the object itself reside in the external process.

Overview

To that end, this code creates a decorator `@process_wrap_object` that wraps a class. The new class will spawn an external process, which instantiates an object of the wrapped class. The external process then calls methods on that object in the required order and sends back the associated return values. A coordinating object living in the original process is responsible for forwarding those calls.

The function `process_wrap_object` is the decorator itself, which takes a class and returns a class.

The function `_process_wrap_event_loop` is what runs in the worker process; it is closely coupled to `process_wrap_object`.

Finally, the function `_process_disconnection_detector` just checks whether the coordinating `process_wrap_object` object has been destroyed, whether by normal garbage collection or because the main process crashed. In either case, it instructs the worker process to shut down cleanly.
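Stripped to its essentials, the division of labour between the coordinating side and the worker-side event loop looks roughly like this (a standalone sketch with a hypothetical `Counter` class, not the code under review):

```python
from multiprocessing import Process, Queue

class Counter:
    def __init__(self, start):
        self.value = start

    def add(self, n):
        self.value += n
        return self.value

def event_loop(cls, args, inq, outq):
    # Worker side: the object is built here and never leaves this process.
    obj = cls(*args)
    while True:
        name, call_args = inq.get()
        if name == "_close":
            break
        outq.put(getattr(obj, name)(*call_args))

if __name__ == "__main__":
    inq, outq = Queue(), Queue()
    worker = Process(target=event_loop, args=(Counter, (10,), inq, outq))
    worker.start()

    # Coordinating side: each "method call" is a put followed by a blocking get.
    inq.put(("add", (5,)))
    print(outq.get())  # 15
    inq.put(("_close", ()))
    worker.join()
```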

Warnings

Note that method calls are blocking, just like normal method calls. This means that, by itself, this wrapper will not speed anything up: it simply does the work elsewhere, with more overhead. It does, however, combine effectively with a main process that hands work off from a lighter-weight in-process thread.

Code

``````
import inspect
from functools import partial
from multiprocessing import Process, Queue, Pipe
from threading import Thread

CLOSE_CODE = "_close"


def _process_disconnection_detector(pipe, instruction_queue):
    """Watcher thread function that triggers the process to close if its partner dies"""
    try:
        pipe.recv()
    except EOFError:
        instruction_queue.put((CLOSE_CODE, (), {}))


def _process_wrap_event_loop(new_cls, instruction_queue, output_queue, pipe, *args, **kwargs):
    cls = new_cls.__wrapped__
    obj = cls(*args, **kwargs)

    disconnect_monitor = Thread(target=_process_disconnection_detector,
                                args=(pipe, instruction_queue))

    routines = inspect.getmembers(obj, inspect.isroutine)
    # Inform the partner class what instructions are valid
    output_queue.put([r[0] for r in routines if not r[0].startswith("_")])
    # and record them for the event loop
    routine_lookup = dict(routines)

    disconnect_monitor.start()

    while True:
        instruction, inst_args, inst_kwargs = instruction_queue.get()
        if instruction == CLOSE_CODE:
            break
        inst_op = routine_lookup[instruction]
        res = inst_op(*inst_args, **inst_kwargs)
        output_queue.put(res)

    disconnect_monitor.join()


def process_wrap_object(cls):
    """
    Class decorator which exposes the same public method interface as the original class,
    but the object itself resides and runs on a separate process.
    """
    class NewCls:
        def __init__(self, *args, **kwargs):
            self._instruction_queue = Queue()  # Queue format is ({method_name}, {args}, {kwargs})
            self._output_queue = Queue()  # Totally generic queue, will carry the return type of the method
            self._pipe1, pipe2 = Pipe()  # Just a connection to indicate to the worker process when it can close
            self._process = Process(
                target=_process_wrap_event_loop,
                args=(NewCls, self._instruction_queue, self._output_queue, pipe2) + tuple(args),
                kwargs=kwargs
            )
            self._process.start()

            routine_names = self._output_queue.get()

            assert CLOSE_CODE not in routine_names, "Cannot wrap class with reserved method name."

            for r in routine_names:
                self.__setattr__(
                    r,
                    partial(self.trigger_routine, routine_name=r)
                )

        def trigger_routine(self, routine_name, *trigger_args, **trigger_kwargs):
            self._instruction_queue.put((routine_name, trigger_args, trigger_kwargs))
            return self._output_queue.get()

        def __del__(self):
            # When the holding object gets destroyed,
            # tell the process to shut down.
            self._pipe1.close()
            self._process.join()

    for wa in ('__module__', '__name__', '__qualname__', '__doc__'):
        setattr(NewCls, wa, getattr(cls, wa))
    setattr(NewCls, "__wrapped__", cls)

    return NewCls
``````

Sample use:

``````
@process_wrap_object
class EncryptedWriter:
    def __init__(self, filename, key):
        """Details unimportant, perhaps self._file = File(filename)"""
    def write_data(self, data):
        """Details still unimportant, perhaps self._file.write(encrypt(data))"""

writer = EncryptedWriter(r"C:\Users\Josiah\Desktop\notes.priv", 4610)
writer.write_data("This message is top secret and needs some very slow encryption to secure.")
``````

I am looking for a general review of both the high-level approach and the particular implementation, with special interest in any subtleties around multiprocessing or correctly decorating and wrapping classes.

Any suggestions for additional functionality that would make this decorator noticeably more useful are also welcome. One feature that I am considering, but have not yet implemented, is explicit support for `__enter__` and `__exit__` so that it works with `with` blocks.

## Compute Engine instance that takes 60 seconds to process a request times out before responding

I have a Compute Engine VM instance running a Docker image. It has a server that performs expensive calculations, taking about 2 minutes per request.

The server responds perfectly to small requests. For the larger ones, it responds with the following:

``````
Connection Timed Out

Connection Timed Out
``````

The server in my Docker container does not return this, so I know that Compute Engine is intervening somehow and responding in its place.

Looking at the Stackdriver logs, I can see that my server receives the request and processes it, but before it finishes, I get the response above.

How do I prevent Compute Engine from interfering with my server's logic? Or at least, how can I increase the timeout limit?

Thank you.

## How to run a background process after getting user input in a bash shell

``````
#!/bin/bash
set -e

INPUT_NO_OF_PROCESS=$1
NO_OF_PROCESS="${INPUT_NO_OF_PROCESS:-1}"

#mkdir -p $DUMP_DIR
echo "Spawning processes=$NO_OF_PROCESS"

for i in $(seq 1 $NO_OF_PROCESS)
do
    # This command will expect a password and has to be spawned
    # into multiple processes after getting the password
done
``````

The bash script above cannot prompt for the password if it is started as a background process.

Is there any way to read the input first and only then send the processes to the background?
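One common pattern (a sketch, with `wc -c` standing in as a placeholder for the real password-expecting command) is to read the password once in the foreground, and only then fan out the background jobs, feeding each one the password on stdin:

```shell
#!/bin/bash

NO_OF_PROCESS="${1:-2}"

# Prompt only when attached to a terminal; otherwise read the password from
# stdin so the script also works non-interactively (e.g. piped input).
if [ -t 0 ]; then
    read -r -s -p "Password: " PASSWORD
    echo
else
    IFS= read -r PASSWORD || PASSWORD=""
fi

for i in $(seq 1 "$NO_OF_PROCESS"); do
    # Each background job receives the password on its stdin;
    # `wc -c` merely stands in for the real command here.
    printf '%s' "$PASSWORD" | wc -c &
done
wait
```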

## build process: reducing the total file size of several Unity games

Our product consists of many small independent games, some made in Unity and some in other languages. These games can be launched from a hub. The file size of the Unity games is quite large, since each game is built to work as a standalone game. Looking inside the folders, I can see that many of the files, folders and DLLs are the same, so I thought it might be possible to reuse them across all the games to reduce the file size of our installer.

So far, I have successfully moved the games' exe and _Data folders into one folder so that they can reuse MonoBleedingEdge and UnityPlayer.dll, but I was hoping to share even more resources than this. All the games are built with Unity 2019.1 and all use many of the same libraries and DLLs.

Does anyone have experience combining many Unity games like this? Is there any Unity configuration that helps enable shared resources?

## Tool for the vulnerability management process in a DevOps environment

We are an organization with approximately 10 teams that develop and run their own custom software (a lot of Java, but also everything else). We are currently adopting a DevOps culture, and I am looking for a tool to set up a transparent vulnerability management process. Here are some requirements:

• aggregate the findings of vulnerability scanners (for example, "software X uses JAR xyz, which has a CVE entry with a CVSS score of 7")
• provide a process in which each team can assess potential issues (for example, "issue XY is not critical because the critical Python library is only on the file system and not in active code")
• provide a resolution process ("the issue will be addressed in version X")
• provide dashboards with status information, allowing a centralized view
• some built-in best-practice process

The input sources would be:

• static source code analysis
• Docker image scanning
• OWASP Dependency-Check

I'm not looking for more scanners, but rather something to digest and evaluate their results: some kind of web-based workflow and reporting tool.

Thank you

Leif