transactions – Is 8 the maximum number of incoming peer connections?

The reference here for running the bitcoin daemon does not really clarify what maxconnections means.

maxconnections=<n>  Maintain at most <n> connections to peers (default: 125)

Is this the maximum for all inbound and outbound peer connections combined, or just for one or the other? If I understand correctly, inbound connections are other nodes connecting to your node for information (e.g. transaction) propagation, while outbound connections are your bitcoin daemon connecting to other nodes for information.

Here’s another reference in one of the posted answers stating that 8 is the maximum number of outbound peer connections; it seems to imply that the number of outbound peer connections is not configurable, and that maxconnections only controls the number of inbound connections, not outbound. Is this true?

I don’t intend to be connected to by other peers (this decision is rather selfish, but that’s beside the point), so I’ve blocked port 8333. I believe blocking port 8333 will effectively only stop peers from connecting to me (stops inbound) and not stop me from connecting to peers (does not stop outbound). Is this right?

What implications does blocking port 8333 have for my bitcoin daemon’s ability to receive transactions? Does it get fewer transactions, or get them more slowly? I’ve been noticing that for relatively long stretches of time (e.g. 20 seconds or so), I sometimes see no transactions come through.
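
For reference, a minimal bitcoin.conf sketch of the setup described. The option names are from Bitcoin Core (whether your packaging reads the same file is an assumption), and the values are illustrative:

    listen=0           # refuse inbound connections entirely (alternative to firewalling 8333)
    maxconnections=40  # caps the total (inbound + outbound); outbound full-relay stays at 8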

ASUS ROG GL553VE (aka FX53VE-MS74) External Monitor Connection(s)

When I attach a 3840×1080 monitor capable of 144Hz through HDMI, all I am getting is 60Hz.

There is a USB 3.1 Gen 1 Type-C port, but I cannot confirm that DisplayPort functionality is implemented.

Is there a way to get a 144 Hz signal out of the GL553VE, given that its NVIDIA GeForce GTX 1050 Ti supports it?

optimization – MySQL Aborted Connections

We have a dedicated MariaDB 10.5 server (on OpenShift/Docker) with 32 GB of RAM and 16 CPUs, and for some reason we keep getting errors like:

(Warning) Aborted connection 15072644 to db: ‘unconnected’ user: ‘unauthenticated’ host: ‘connecting host’ (Out of memory.)

But when looking at Grafana, we don’t see any peak in memory usage.

I’ve tried raising max_allowed_packet from 128M to 512M.

But I think it made things worse. The server has 35 databases, each containing about 210 tables.

Here is the cnf file we use:

    skip-external-locking
    skip-name-resolve = 1
    innodb_file_per_table = 1

    innodb_flush_log_at_trx_commit = 2
    innodb_flush_method=O_DIRECT
    key_buffer_size         = 16M
    max_allowed_packet      = 512M
    thread_stack            = 192K
    thread_cache_size       = 64
    table_open_cache        = 4500
    join_buffer_size        = 1M
    max_connections         = 2500
    wait_timeout            = 100
    interactive_timeout     = 20
    innodb_buffer_pool_size = 24G 
    # Set .._log_file_size to 25 % of buffer pool size
    innodb_log_file_size    = 1G
    #innodb_log_buffer_size  = 512M
    # Remove the STRICT_TRANS_TABLES which was added as default by MariaDB After 10.2.4
    sql-mode="NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"

    tmp_table_size                 = 256M
    max_heap_table_size            = 256M
    #maximum size of a single resultset in the cache.
    query_cache_limit              = 2M
    #maximum amount of data that may be stored in the cache
    query_cache_size               = 0
    query_cache_type               = 0

    log_bin         = /logs/mysql/mysql-bin.log
    expire_logs_days    = 2
    max_binlog_size         = 100M

Any idea what we are doing wrong?
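
One thing worth checking is the worst-case memory budget implied by the cnf above. A rough back-of-the-envelope sketch: the per-connection formula is the common approximation, not an exact accounting, and the default buffer sizes are assumptions (verify them with SHOW GLOBAL VARIABLES on your server):

```python
GB, MB, KB = 1024**3, 1024**2, 1024

ram = 32 * GB
buffer_pool = 24 * GB                 # innodb_buffer_pool_size from the cnf

# Per-connection buffers. The first two come from the cnf above; the rest
# are typical MariaDB defaults (an assumption -- verify on your server).
per_connection = (1 * MB              # join_buffer_size
                  + 192 * KB          # thread_stack
                  + 2 * MB            # sort_buffer_size (default)
                  + 128 * KB          # read_buffer_size (default)
                  + 256 * KB)         # read_rnd_buffer_size (default)

max_connections = 2500
worst_case = buffer_pool + max_connections * per_connection
print(round(worst_case / GB, 1))      # 32.7 -- already over the 32 GB on the box
```

If all 2500 allowed connections were busy at once, the worst case exceeds physical RAM even before max_allowed_packet (which can be allocated per connection) is counted, which would explain "Out of memory" aborts without a visible sustained peak in Grafana.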

How can I handle more websocket connections in Python?

I have the following basic code, which connects to a websocket server and receives some data:

import websocket, json, time
import _thread as thread  # thread.start_new_thread lives in _thread on Python 3

def process_message(ws, msg):
    message = json.loads(msg)
    print(message)

def on_error(ws, error):
    print('Error', error)

def on_close(ws):
    print('Closing')

def on_open(ws):
    def run(*args):
        Subs = ()

        tradeStr = """{"method": "SUBSCRIBE", "params": %s, "id": 1}""" % (json.dumps(Subs))
        ws.send(tradeStr)

    thread.start_new_thread(run, ())

def Connect():
    websocket.enableTrace(False)
    ws = websocket.WebSocketApp("wss://myurl", on_message = process_message, on_error = on_error, on_close = on_close)
    ws.on_open = on_open
    ws.run_forever()

Connect()

Now, I would like to create more connections to different servers and receive data concurrently in the same script. I tried the following:

import websocket, json, threading
import _thread as thread

def run(url):

    def process_message(ws, msg):
        message = json.loads(msg)
        print(message)

    def on_error(ws, error):
        print('Error', error)

    def on_close(ws):
        print('Closing')

    def on_open(ws):
        def run(*args):
            Subs = ()

            tradeStr = """{"method": "SUBSCRIBE", "params": %s, "id": 1}""" % (json.dumps(Subs))
            ws.send(tradeStr)

        thread.start_new_thread(run, ())

    def Connect():
        websocket.enableTrace(False)
        ws = websocket.WebSocketApp(url, on_message = process_message, on_error = on_error, on_close = on_close)
        ws.on_open = on_open
        ws.run_forever()

    Connect()

threading.Thread(target=run, kwargs={'url': 'url1'}).start()
threading.Thread(target=run, kwargs={'url': 'url2'}).start()
threading.Thread(target=run, kwargs={'url': 'url3'}).start()

Now, this code works: I’m connecting to different URLs and streaming data from all of them. But it seems like a hacky solution to me, and I don’t know whether what I’m doing is bad practice. Each connection will send around 600-700 small JSON dictionaries, and I need to write every record to the db.
So my question is: is this implementation OK? Since it works with threads, can it create problems in the long run? Should I use another library such as Tornado? Any kind of advice is appreciated.
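
Thread-per-connection is fine for a handful of sockets; the part that usually causes trouble is every callback writing to the database independently. One common arrangement is to fan all threads into a single queue and do the DB writes from one consumer. A minimal sketch with simulated messages instead of live sockets (stream_messages stands in for your on_message callback; the names are illustrative):

```python
import json
import queue
import threading

def stream_messages(name, n, out_q):
    # stand-in for a websocket on_message callback: parse, then hand off
    for i in range(n):
        raw = json.dumps({"source": name, "seq": i})
        out_q.put(json.loads(raw))
    out_q.put(None)  # sentinel: this stream is finished

def consume(out_q, n_streams):
    # single consumer: the only place that would talk to the database
    done, records = 0, []
    while done < n_streams:
        item = out_q.get()
        if item is None:
            done += 1
        else:
            records.append(item)
    return records

q = queue.Queue()
urls = ("url1", "url2", "url3")  # illustrative placeholders
threads = [threading.Thread(target=stream_messages, args=(u, 5, q)) for u in urls]
for t in threads:
    t.start()
records = consume(q, n_streams=len(urls))
for t in threads:
    t.join()
print(len(records))  # 15
```

With real sockets, each on_message would just out_q.put(json.loads(msg)), and the consumer could batch inserts into the db instead of writing one row per message.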

networking – Trouble accessing windows file shares over multiple VPN connections

Our team connects to our corporate network over VPN from remote Windows 10 workstations, where we access file shares. We’re lifting and shifting some resources to Azure, and the easiest solution is another VPN hosted in Azure. However, when we connect to the Azure VPN, we lose access to file shares on the corporate network. As far as I know there are no address-space collisions between the two networks, and the corporate VPN stays connected; access to file shares just drops out until you disconnect the Azure VPN.

How do you configure Windows 10 to allow access to file shares through two different VPN connections simultaneously?
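
If both connections are Windows built-in VPN profiles, one thing to look at is split tunneling, so the Azure VPN only claims routes for the Azure address space. A sketch in PowerShell; the profile name "Azure" and the prefix are placeholders, and this assumes the built-in VPN client rather than a vendor client:

    # enable split tunneling on the Azure profile (placeholder name)
    Set-VpnConnection -Name "Azure" -SplitTunneling $true
    # route only the Azure address space (placeholder prefix) through it
    Add-VpnConnectionRoute -ConnectionName "Azure" -DestinationPrefix "10.1.0.0/16"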

server – Mirroring Connections – Ask Ubuntu

I’m not all too savvy in terms of servers, but I currently have an Ubuntu Server 20.04 machine and I’m trying to use it to connect to another server host. However, the host is currently flagging my IP as suspicious and blocking its connection. I’ve talked to the company, and apparently there’s nothing I can do about it.

I’m wondering if it’s possible to pass connections through a secondary Ubuntu server: connect to the host from there and have the information sent back through that secondary server, in order to bypass the suspicious flag. How would I do this? Sorry if this is a dumb question; I’m fairly new to server hosting. Any help is appreciated.
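
What you’re describing is essentially relaying traffic through a proxy. One common sketch, assuming you have SSH access to the secondary server (hostnames here are placeholders), is a SOCKS tunnel over SSH:

    # open a local SOCKS5 proxy on port 1080 that exits from the secondary server
    ssh -N -D 1080 user@secondary-server
    # then point the client at the tunnel, e.g.:
    curl --socks5-hostname localhost:1080 https://target-host/

Whether the destination then flags the secondary server’s IP as well is a separate question; this only changes which address the host sees.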

IP Addresses of inbound connections in bitcoin full node (TOR) looking as a localhost (127.0.0.1)

I run a bitcoin full node (mynode) through Tor, and I see that inbound connections appear only as localhost with varying ports, usually in the high range (examples: 127.0.0.1:35010, 127.0.0.1:58188, 127.0.0.1:38804, 127.0.0.1:56338).

My config setup for Tor in the bitcoin node is:

    proxy=127.0.0.1:9050
    listen=1
    bind=127.0.0.1
    onlynet=onion
    dnsseed=0
    dns=0

What are these inbound connections? Are they other people’s nodes, or some apps running on my mynode? And if they are other people’s nodes, why do they appear to come from my local address, and how?

outlook.com – “Proxy server refusing connections” for Microsoft Team Meeting Join Link

This is my first question here. I received a Microsoft Teams Meeting link from my company in my Outlook mail, which upon being clicked should ideally allow me to enter the meeting.

However, when I clicked the link to join the meeting from the Firefox browser, the message “The proxy server is refusing connections” was displayed.

I set my Firefox proxy settings to “Auto-detect” mode and also checked my LAN and device proxy settings, which seemed fine. Thereafter, I tried clicking the link again. I even cleared my Firefox cache, with no success.

Could I please have some suggestions as to why this “The proxy server is refusing connections” error might arise, and how to get rid of it?

Edit: Webex join-meeting links, however, work fine when clicked.

C# – closing a MySQL connection is not working: mysqli_connect(): (08004/1040): Too many connections

I’m getting this error in phpMyAdmin:

mysqli_connect(): (08004/1040): Too many connections 

The only script that is using this DB:

public static bool checkIp(string ip)
{
    Console.WriteLine("CHECKIP");

    try
    {
        string sql = " SELECT * FROM `Ip tables` ";
        MySqlConnection con = new MySqlConnection("host=hostname;user=username;password=password;database=database;");
        MySqlCommand cmd = new MySqlCommand(sql, con);
        con.Open();

        MySqlDataReader reader = cmd.ExecuteReader();

        while (reader.Read())
        {
            if (ip == reader.GetString("Ip"))
            {
                Console.WriteLine("Benvenuto, " + reader.GetString("Name"));
                con.Close();
                return true;
            }
        }
        con.Close();
        return false;
    }
    catch (SqlException exp)
    {
        throw new InvalidOperationException("Error", exp);
    }
}

Does this code close the connection correctly, or is something wrong?

docker – Nginx: how to increase backlog (net.core.somaxconn), without changing sysctl.conf? Want to allow many pending connections

I am not running a webapp, but rather a Machine Learning model which needs to provide real-time predictions.

I am using Nginx with Gunicorn, both running in a Docker container. The setup uses 4 Gunicorn workers with 1 thread each (hosting 4 copies of my model) and Nginx with 1 worker process.

At the moment, this setup returns 502 errors when my client sends a burst of requests to my server. I want to avoid this, even if it means longer response times for each request.

Things I have tried:

  • Increasing net.core.somaxconn from 128 to 2048: this alleviates the issue of 502s. However, I cannot change sysctl.conf in a production environment because my docker container runs in a non-privileged mode (I have no control over this, since it is controlled by another team).
  • Removing nginx altogether. This does work since I don’t receive traffic from the internet and I don’t have to serve static content, just ML predictions coming in as an HTTP POST request. But I want to avoid this as it is not recommended by Gunicorn.
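
For context, these are the knobs involved. The directive and flag names are real for nginx, Gunicorn, and Docker, but the values are illustrative, and every listen backlog is still capped by net.core.somaxconn at the kernel level, which is why raising only the application-side settings may not be enough:

    # nginx.conf: per-socket listen backlog
    listen 80 backlog=2048;

    # Gunicorn has its own listen backlog (the default is already 2048):
    #   gunicorn --backlog 2048 app:app

    # net.core.somaxconn is network-namespaced, so it can often be set per
    # container without --privileged, if your platform exposes the flag:
    #   docker run --sysctl net.core.somaxconn=2048 ...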

Would some of the folks here be able to help out?