Docker Compose ERROR: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network

I have Docker running on a CentOS VM.

All of a sudden, Docker stopped working.

When running docker-compose up, I get the following error every time:

Creating network "nginx-php" with the default driver
ERROR: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network

I tried uninstalling Docker and reinstalling it, but that didn’t help.

docker network ls

NETWORK ID          NAME                DRIVER              SCOPE
ab8bed8d0def        bridge              bridge              local
eaf1f1928b69        host                host                local
a1e3c9baf283        none                null                local
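One workaround I'm considering is pinning the network's subnet explicitly in docker-compose.yml, so Compose doesn't have to pick one from the default address pools. A minimal sketch (the web service, the nginx image, and the 172.28.0.0/16 range are placeholders; any range that is actually free on the VM would do):

version: "3.8"
services:
  web:
    image: nginx          # placeholder service, just to attach something to the network
    networks:
      - nginx-php
networks:
  nginx-php:
    ipam:
      config:
        - subnet: 172.28.0.0/16   # assumed free range; adjust to the environment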

ZFS Pool not auto mounting after upgrade to 20.04

The ZFS pool imported automatically on boot under Ubuntu 18.10.

After updating to 20.04, the pool is no longer imported on boot. I don't see any errors anywhere, but I might be missing something.

sudo zpool import pool works fine. Files are all there, no issues.

zpool upgrade shows no upgrades required.
zpool status shows everything is OK.

/etc/default/zfs (most of it cut, but I think the important settings are below):

# Run zfs mount -a during system start?
ZFS_MOUNT='yes'

# Run zfs unmount -a during system stop?
ZFS_UNMOUNT='yes'

The mountpoint is a location with the same name as the pool, so when I run
zpool import pool
the mount location is /pool.

/ is an ext4 volume on an SSD, so the system boots and operates fine apart from mounting the ZFS pool.

I am fully up to date via apt update/upgrade. I'm really at a dead stop and not sure what to look at next.
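Things I plan to check next, assuming the systemd ZFS units shipped by Ubuntu 20.04's zfsutils-linux (unit names may differ on other setups):

# Did the import/mount units run at boot?
systemctl status zfs-import-cache.service zfs-import.target zfs-mount.service zfs.target

# Re-register the pool in the cachefile that zfs-import-cache reads, then make sure the units are enabled.
sudo zpool set cachefile=/etc/zfs/zpool.cache pool
sudo systemctl enable zfs-import-cache.service zfs-import.target zfs-mount.service zfs.target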

application pool – Force process to run in 32-bit mode in IIS 8.5

I have an ASP.NET MVC application which is compiled and deployed using “Any CPU”.

In IIS Manager -> Application Pools -> My Application Pool -> Advanced Settings, I have set the "Enable 32-bit Applications" setting to true.

As far as I know, if that setting is true, the worker process is forced to run in 32-bit mode; if it is false, the application pool runs in 64-bit mode. So in my case the process should be running in 32-bit mode (as I have set that setting to true), but if I open Task Manager and check the w3wp.exe process, it is shown as w3wp.exe instead of w3wp*32.exe. Why? Does it mean the process is not running in 32-bit mode and is in fact running in 64-bit mode?

I am using IIS 8.5.9600.16385 on Windows Server 2012 R2 64-bit Standard.

sql server – Azure Elastic Pool – is it supported for MySQL?

Essentially, yes. An Azure Database for MySQL is similar to an Azure SQL Database Elastic Pool or an Azure SQL Database Managed Instance.

Within an Azure Database for MySQL server, you can create one or multiple databases. You can opt to create a single database per server to use all the resources, or to create multiple databases to share the resources. The pricing is structured per-server, based on the configuration of pricing tier, vCores, and storage (GB). For more information, see Pricing tiers.

https://docs.microsoft.com/en-us/azure/mysql/concepts-servers

An Azure SQL Database elastic pool is described similarly:

Azure SQL Database elastic pools are a simple, cost-effective solution
for managing and scaling multiple databases that have varying and
unpredictable usage demands. The databases in an elastic pool are on a
single server and share a set number of resources at a set price.

Elastic pools in Azure SQL Database enable SaaS developers to optimize
the price performance for a group of databases within a prescribed
budget while delivering performance elasticity for each database.

https://docs.microsoft.com/en-us/azure/azure-sql/database/elastic-pool-overview

Only Azure SQL Database has the feature of hosting lots of databases with separate physical resources on the same logical server. With Azure Database for MySQL, if you want two databases to have their own dedicated resources, you need two separate Azure Database for MySQL servers.

dnd 5e – Multiclass Spellcaster: Do the involved classes share the same pool of spell slots?

Yes, there is a shared pool between EK and Wizard

Yes, you should ignore the original spell slots altogether

In general, I think you are overthinking this. Once you multiclass into multiple classes that grant spell slots (Warlock is the exception), you ignore the individual classes' spell slot tables. Just reference the Multiclass Spellcaster: Spell Slots per Spell Level table; it shows how many slots you have.

Spells known are still gained normally. For each level you take you get the spells known as described by that class.

Any of the slots you have can be used for any of the spells you know and have prepared (provided the slot is of the spell's level or higher, of course).

Fire Bolt is a cantrip and cannot be cast using spell slots. (You can cast as many cantrips as you want per day, time allowing.)

That said, a spell you learned from your first level of EK can be cast using any of your multiclass spell slots.

Your hypothetical EK 6 / Wizard 5 could cast the magic missile they learned at 1st level using their single 4th-level spell slot. It would generate 6 darts instead of 3 because of the higher-level slot used.
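If you want to sanity-check where that single 4th-level slot comes from, here is the arithmetic spelled out as a quick sketch (slot numbers copied from the PHB Multiclass Spellcaster table, purely for illustration):

# Wizard levels count in full; Eldritch Knight (a third-caster) contributes a third of its levels, rounded down.
wizard, eldritch_knight = 5, 6
caster_level = wizard + eldritch_knight // 3      # 5 + 2 = 7

# Multiclass Spellcaster table row for caster level 7: slots per spell level.
slots = {1: 4, 2: 3, 3: 3, 4: 1}

print(caster_level, slots)                        # 7 {1: 4, 2: 3, 3: 3, 4: 1}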

c++ – Should I write custom allocators for STL containers to interface with my memory pool, or just overload the standard new and delete?

I want to write a custom memory allocator for learning. I'm tempted to have a master allocator that requests n bytes of RAM from the heap (via new). This would be followed by several allocator... adaptors? Each would interface with the master, requesting a block of memory to manage; these would be stack, linear, pool, slab allocators, etc.

The problem I have is whether I should write custom allocators (used through std::allocator_traits) to interface with these for the various STL containers, or whether I should just ignore the adaptor idea and simply overload new and delete to use a custom pool allocator.

What I'm interested in understanding is what tangible benefit I would gain from having separate allocators for STL containers. It seems like the default std::allocator calls new and delete as needed, so if I overload those to instead request from my big custom memory pool, I'd get all the benefit without the cruft of custom std::allocator code.

Or is this a matter where certain allocator models, like using a stack allocator for a std::deque, would work better than the default allocator? And if so, wouldn't the standard library implementation already specialise?

python 3.x – multiprocessing Pool apply_async return value

I want to get the value from whichever function finishes first. The other process should be terminated after the first result arrives.

I have code like this:

import multiprocessing
import cv2

def funct(num):
    # Load one of the two camera images depending on the argument.
    if num == 1:
        img = cv2.imread('cam_c_1.jpg', 0)
    if num == 2:
        img = cv2.imread('cam_c_2.jpg', 0)

    d = decode(img)  # function searching for some pattern in the picture (defined elsewhere)
    return d

p = multiprocessing.Pool(2)

def quit(arg):
    # Callback fired with the result of whichever worker finishes first;
    # terminate the pool so the slower worker is stopped.
    p.terminate()

for i in range(2):
    p.apply_async(funct, args=(i + 1,), callback=quit)
p.close()
p.join()

Without the loop it's not a problem. For example, this works:

res = p.apply_async(funct,args=(i+1,), callback=quit).get() 

But with the loop I don't know how to get the decoded value from whichever function produces output first.
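For illustration, here is a minimal, self-contained sketch of the pattern in question (slow_task and on_done are stand-in names for funct/decode and the callback): the callback stores whichever result arrives first and then terminates the pool.

import multiprocessing
import random
import time

def slow_task(num):
    # Stand-in for cv2.imread + decode: each worker takes a random amount of time.
    time.sleep(random.uniform(0.1, 1.0))
    return 'result from task %d' % num

if __name__ == '__main__':
    first_result = []                       # filled by the callback

    def on_done(result):
        first_result.append(result)         # keep whichever result arrives first
        pool.terminate()                    # stop the remaining worker

    pool = multiprocessing.Pool(2)
    for i in range(2):
        pool.apply_async(slow_task, args=(i + 1,), callback=on_done)
    pool.close()
    pool.join()
    print(first_result[0] if first_result else 'no result')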

bitcoin core – how does a node remove conflicting transactions from the memory pool?

Sorry for the long text. This is just my observation, and I was wondering if you could let me know whether I am correct or not.

Case 1)

Let's say minerA and minerB both have the same transaction in their mempool, since my transaction was propagated to the network before being included in a block.

Now, let's say minerA was trying to mine a block, and before it could, minerB solved its block and broadcast it to minerA.

Now minerA has to remove the conflicting transactions that were already confirmed in minerB's block. How will it remove those transactions from its memory pool? Will minerA look at minerB's block, get the txids of its transactions, check whether transactions with those txids are in its mempool, and remove them if so?

Case 2)

Let's say I have only 1 BTC. I send this 1 BTC to address B via minerA, and at the same time I send the same BTC to minerB. minerB will include its transaction in its mempool, then put it in a block and start mining on it. minerA will also include its transaction in its mempool, then put it in a block and start mining on it.

Let's say minerB solved its block quicker and broadcast it. What happens is that minerA stops mining its block, since minerB was quicker. minerA will look at minerB's block, see that it seems legit, and add it to its chain.

Right now, because minerB's block got solved, minerA needs to re-validate its mempool: it has to remove the conflicting transactions as explained in Case 1, and it also has to check whether, after applying minerB's block, the transactions remaining in its pool are still valid (the balances still exist). In this case, since I only had 1 BTC, it will check the transaction I sent to address B, see that I don't have any BTC anymore, and remove it.

NOTE: a miner will re-validate its mempool before it includes transactions in a new block and starts mining on it, right?

I guess miners remove transactions from the mempool if (a rough sketch of this follows the list):

  • the transaction is conflicting, as explained in Case 1

  • the transaction can't be included anymore because it is already invalid due to blocks mined at the same time, as explained in Case 2
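To make that concrete, here is a rough sketch of the pruning I am describing (Python pseudocode I made up, not Bitcoin Core's actual code), with the mempool kept as a map from txid to the set of outpoints (prev_txid, vout) that transaction spends:

def prune_mempool(mempool, new_block):
    # new_block is a list of dicts: {'txid': ..., 'spends': set of outpoints}.
    block_txids = {tx['txid'] for tx in new_block}
    block_spent = {outpoint for tx in new_block for outpoint in tx['spends']}

    for txid in list(mempool):
        if txid in block_txids:
            del mempool[txid]              # Case 1: already confirmed in the new block
        elif mempool[txid] & block_spent:
            del mempool[txid]              # Case 2: conflicts with (double-spends against) the new block
    return mempool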

What do you think I missed? Do you agree with my observations?

Why couldn't you use mmap to manage an RDBMS buffer pool – or the whole RDBMS itself, just like NoSQL stores?

Why couldn't you use mmap to manage an RDBMS buffer pool, or even the whole RDBMS itself, the way some NoSQL stores do?

I see that Cassandra, HBase, and Elasticsearch use mmap, so why not use the same strategy for an RDBMS buffer pool?
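For concreteness, this is the kind of access I have in mind; a toy Python illustration of letting the OS page cache do the buffering (demo.dat is a throwaway file created for the example, not how any particular database actually works):

import mmap

# Create a small throwaway "data file" so the example is self-contained.
with open('demo.dat', 'wb') as f:
    f.write(b'\x00' * 16384)

with open('demo.dat', 'r+b') as f, mmap.mmap(f.fileno(), 0) as m:
    page = m[0:8192]        # a "page read" is just a memory read (may fault in from disk)
    m[0:4] = b'PAGE'        # a "page write" dirties the OS page cache, no buffer pool involved
    m.flush()               # ask the kernel to write dirty pages back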

MySQL takes huge amounts of memory, ignoring the buffer pool size

I am facing a very strange problem: it looks like on startup MySQL takes far more memory than 4x the buffer pool size.

I am using an Ubuntu VM (4 cores, 8 GB) with MySQL 5.6.33. /etc/mysql/my.cnf is as below:

[client]
port        = 3306
socket      = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket      = /var/run/mysqld/mysqld.sock
nice        = 0

[mysqld]
user        = mysql
pid-file    = /var/run/mysqld/mysqld.pid
socket      = /var/run/mysqld/mysqld.sock
port        = 3306
basedir     = /usr
datadir     = /datafiles/mysql
tmpdir      = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
bind-address        = 0.0.0.0
key_buffer      = 128M
max_allowed_packet  = 128M
thread_stack        = 192K
thread_cache_size       = 8
wait_timeout        = 300
interactive_timeout = 600
max_connect_errors = 1000000
open-files-limit = 1024000
transaction-isolation   = READ-COMMITTED
myisam-recover-options  = BACKUP
max_connections        = 25000
query_cache_limit   = 1M
query_cache_size        = 4M
general_log_file        = /dblogs/audit/general_log.log
general_log             = OFF
log_error = /dblogs/error.log
slow_query_log = ON
slow_query_log_file = /dblogs/SLOW.log
long_query_time =2 
min_examined_row_limit = 5000
server-id       = 1
log_bin         = /dblogs/binarylogs/mysql-bin.log
expire_logs_days    = 10
max_binlog_size   = 10M
binlog_format       = MIXED
innodb_strict_mode = OFF
sql_mode = NO_ENGINE_SUBSTITUTION
innodb_file_format = barracuda
innodb_file_format_max = barracuda
innodb_file_per_table = 1
innodb_data_home_dir        = /datafiles/mysql
innodb_buffer_pool_size = 300M
innodb_buffer_pool_instances = 1
innodb_open_files = 6000
innodb_log_file_size = 512M
innodb_log_buffer_size = 64M
innodb_lock_wait_timeout    = 600
innodb_io_capacity  = 400
innodb_flush_method = O_DSYNC
innodb_flush_log_at_trx_commit = 2
innodb_write_io_threads = 2
innodb_read_io_threads = 2
innodb_log_files_in_group = 2
innodb_monitor_enable = all
join_buffer_size=256K
sort_buffer_size=256K

[mysqldump]
quick
quote-names
max_allowed_packet  = 16M

[mysql]
#no-auto-rehash # faster start of mysql but no tab completion

[isamchk]
key_buffer      = 16M

!includedir /etc/mysql/conf.d/

As per the config file, innodb_buffer_pool_size is set to 300M and innodb_buffer_pool_instances is set to 1 (no need for more, as the buffer pool is < 1G). The error log also confirms that the buffer pool is set to 300.0M.

ERROR.log

2020-09-24 01:50:06 4886 [Note] Plugin 'FEDERATED' is disabled.
2020-09-24 01:50:06 4886 [Note] InnoDB: Using atomics to ref count buffer pool pages
2020-09-24 01:50:06 4886 [Note] InnoDB: The InnoDB memory heap is disabled
2020-09-24 01:50:06 4886 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-09-24 01:50:06 4886 [Note] InnoDB: Memory barrier is not used
2020-09-24 01:50:06 4886 [Note] InnoDB: Compressed tables use zlib 1.2.8
2020-09-24 01:50:06 4886 [Note] InnoDB: Using Linux native AIO
2020-09-24 01:50:06 4886 [Note] InnoDB: Using CPU crc32 instructions
2020-09-24 01:50:06 4886 [Note] InnoDB: Initializing buffer pool, size = 300.0M
2020-09-24 01:50:06 4886 [Note] InnoDB: Completed initialization of buffer pool
2020-09-24 01:50:06 4886 [Note] InnoDB: Highest supported file format is Barracuda.
2020-09-24 01:50:06 4886 [Note] InnoDB: 128 rollback segment(s) are active.
2020-09-24 01:50:06 4886 [Note] InnoDB: Waiting for purge to start
2020-09-24 01:50:06 4886 [Note] InnoDB: 5.6.33 started; log sequence number 4529735
2020-09-24 01:50:06 4886 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
2020-09-24 01:50:06 4886 [Note]   - '0.0.0.0' resolves to '0.0.0.0';
2020-09-24 01:50:06 4886 [Note] Server socket created on IP: '0.0.0.0'.
2020-09-24 01:50:06 4886 [Note] Event Scheduler: Loaded 0 events
2020-09-24 01:50:06 4886 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.6.33-0ubuntu0.14.04.1-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)

What's more, show global variables like '%pool%'; also shows that the buffer pool is set properly:

+-------------------------------------+----------------+
| Variable_name                       | Value          |
+-------------------------------------+----------------+
| innodb_additional_mem_pool_size     | 8388608        |
| innodb_buffer_pool_dump_at_shutdown | OFF            |
| innodb_buffer_pool_dump_now         | OFF            |
| innodb_buffer_pool_filename         | ib_buffer_pool |
| innodb_buffer_pool_instances        | 1              |
| innodb_buffer_pool_load_abort       | OFF            |
| innodb_buffer_pool_load_at_startup  | OFF            |
| innodb_buffer_pool_load_now         | OFF            |
| innodb_buffer_pool_size             | 314572800      |
+-------------------------------------+----------------+

But after the service starts, if I check using free -h or top, MySQL is taking more than 75% of RAM:

             total       used       free     shared    buffers     cached
Mem:          7.5G       5.3G       2.1G        28K       2.8M        77M
-/+ buffers/cache:       5.3G       2.2G
Swap:           9G       1.2G       8.8G

top command output:

PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 8071 mysql     20   0 6982340 5.741g   9260 S   0.3 77.0   0:02.19 mysqld

I tried changing innodb_buffer_pool_size to 1G, 2G, and 3G and innodb_buffer_pool_instances to 1, 8, and 10, but irrespective of all these, when I start the MySQL service, 5-6 GB of the total RAM (7.5 GB) is taken by MySQL, and as soon as I stop the service, RAM usage drops to 224M.

What is this behaviour?
Did I misconfigure MySQL?
Why is the MySQL service taking more than 4x or 8x the buffer pool size?
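For reference, my rough back-of-envelope arithmetic from the my.cnf above for what the per-connection settings alone would allow on top of the buffer pool (a worst-case upper bound if every connection used its buffers at once, not necessarily what is allocated at startup):

KiB, MiB = 1024, 1024 * 1024

buffer_pool     = 300 * MiB                            # innodb_buffer_pool_size
per_connection  = 256 * KiB + 256 * KiB + 192 * KiB    # sort_buffer + join_buffer + thread_stack
max_connections = 25000

worst_case = buffer_pool + max_connections * per_connection
print(round(worst_case / (1024 * MiB), 1), 'GiB')      # ~17.1 GiB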