I have a Go program that does its computation in many goroutines at once, each fetching its own data from Postgres. The number of goroutines depends on a previous result, so there can be hundreds of goroutines trying to query Postgres at the same time.
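For context, the fan-out is roughly this shape (a simplified sketch; the table and column names are placeholders for the real queries):

```go
package worker

import (
	"context"
	"database/sql"
	"log"
	"sync"
)

// processAll is a simplified sketch of the fan-out: one goroutine per work
// item, each running its own query against the shared *sql.DB pool.
func processAll(ctx context.Context, db *sql.DB, ids []int) {
	var wg sync.WaitGroup
	for _, id := range ids { // can be hundreds of ids, depending on the previous step
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			var payload string
			err := db.QueryRowContext(ctx,
				"SELECT payload FROM work WHERE id = $1", id).Scan(&payload)
			if err != nil {
				log.Printf("item %d: %v", id, err)
				return
			}
			// ... do the actual computation on payload ...
		}(id)
	}
	wg.Wait()
}
```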
Go's database/sql package lets you set a limit on the number of open connections, which keeps Postgres from running out of shared memory or free connections.
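This is the kind of cap I am setting today (the driver, DSN, and numbers are just examples; the real limit is whatever I guess is safely below Postgres's max_connections):

```go
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/lib/pq" // example driver; any Postgres driver works
)

func main() {
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/mydb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Hard-coded caps: once MaxOpenConns connections are in use, further
	// queries wait inside database/sql for one to be released.
	db.SetMaxOpenConns(90) // must stay below Postgres max_connections
	db.SetMaxIdleConns(10)
	db.SetConnMaxLifetime(5 * time.Minute)
}
```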
If I hard-code the limit at Postgres's maximum number of connections, I will run out of connections whenever anything else is connected. On the other hand, if I hard-code a number that is too low, the Go program's performance is unnecessarily limited.
What would be the best way to let the Go program use as many connections as possible without hitting the server's limit? I imagine this number varies depending on how many other services are connected to the database at any given time.
I am considering putting PgBouncer between the Go program and the database, in the hope that it will accept all of the Go program's connections, pass through as many as the server allows, and hold the rest until server connections are released. I am not sure PgBouncer actually works this way, but I am going to try it next.
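What I had in mind is something like this minimal pgbouncer.ini sketch (names and numbers are placeholders, auth settings omitted, and I have not verified that clients beyond default_pool_size really just wait):

```ini
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction   ; share server connections between clients per transaction
max_client_conn = 1000    ; accept all of the Go program's connections
default_pool_size = 90    ; actual connections opened to Postgres
```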
Is there perhaps another way to get a connection pool that blocks when there are no free connections on the server? Blocking, not rejecting, because a rejected connection would mean adding retry logic to my Go program.
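To illustrate what I mean by blocking rather than rejecting: something shaped like the fixed-size semaphore below, except that the effective size would need to track how many connections the server actually has free instead of a hard-coded n.

```go
package worker

// connGate is what I mean by "blocking": a fixed-size semaphore where
// callers wait for a free slot instead of getting an error.
type connGate chan struct{}

func newConnGate(n int) connGate { return make(connGate, n) }

func (g connGate) acquire() { g <- struct{}{} } // blocks while all slots are taken
func (g connGate) release() { <-g }

// Inside each worker goroutine:
//
//	gate.acquire()
//	defer gate.release()
//	row := db.QueryRowContext(ctx, "...", id)
```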