mysqldump – MySQL dump is missing routines and events

I am doing a mysqldump from MySQL 5.7 (Ubuntu 16.04) and importing it into MySQL 8.0 (Ubuntu 20.04); unfortunately, after the import the routines and events are missing.

Export:

/usr/bin/mysqldump -u $MUSER -p$MPASS dbname --single-transaction --routines  --quick --compact | gzip -9 > $DIR_BA/DB_$NOW.sql.gz

Import:

gunzip DB_210510.sql.gz
sed -i '1s/^/SET @OLD_FOREIGN_KEY_CHECKS = @@FOREIGN_KEY_CHECKS; SET FOREIGN_KEY_CHECKS = 0;/' DB_210510.sql
echo 'SET FOREIGN_KEY_CHECKS = @OLD_FOREIGN_KEY_CHECKS;' >> DB_210510.sql

mysql -u myuser -p dbname < DB_210510.sql 

What am I missing? The dump should contain events and routines according to the options.
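
For reference, a sketch of the export with both flags spelled out (same credentials and paths as above); in mysqldump, stored procedures/functions and scheduled events are controlled by separate options, --routines and --events:

# Hedged sketch: --routines dumps procedures/functions, --events dumps scheduled events.
/usr/bin/mysqldump -u $MUSER -p$MPASS dbname --single-transaction --routines --events --quick --compact | gzip -9 > $DIR_BA/DB_$NOW.sql.gz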

mysqldump – MySQL 8 cannot enable Master-Master Replication after rollback from GTID Replication

Can you help me with this issue, please?
I have rolled back from GTID replication to traditional replication, and at the same time I'm adding a new node to the cluster as a replica. There is about 3 TB of data to dump. I took the dump while GTID was on, but turned it off later; after the dump/restore completed I went through all the steps to turn on master-master replication, but I'm getting the following error:

IO Thread STOPPED (err: Got fatal error 1236 from master when reading data from binary log: 'Cannot replicate GTID-transaction when @@GLOBAL.GTID_MODE = OFF, at file /mnt/data/mysql_data/mysql_bin_logs/mysql-bin.000995, position 903927020.; the first event 'mysql-bin.000995' at 903927020, the last event read from '/mnt/data/mysql_data/mysql_bin_logs/mysql-bin.000995' at 903927099, the last byte read from '/mnt/data/mysql_data/mysql_bin_logs/mysql-bin.000995' at 903927099.')
SQL Thread RUNNING.

I'm using MySQL 8 Community edition installed on Red Hat 7.9 for both servers.

my.cnf on Master 2 Server:


interactive_timeout=60
default-authentication-plugin=mysql_native_password
bind-address = 0.0.0.0
datadir=/mnt/data/mysql_data
socket=/var/lib/mysql/mysql.sock
log-error=/mnt/data/mysql_logs/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
max_connections = 100000
binlog_expire_logs_seconds = 1209600
log-bin=/mnt/data/mysql_data/mysql_bin_logs/mysql-bin
server-id = 2
binlog_do_db = edxapp
binlog_do_db = edxapp_csmh

relay-log="/mnt/data/mysql_data/mysql_relay_log/relay-log"
auto-increment-offset = 2
slow_query_log = 1
slow-query_log_file = /mnt/data/mysql_logs/slow_mysql.log
long_query_time = 20

innodb-dedicated-server=on
read_only=OFF
super_read_only=OFF
tmpdir=/mnt/data/tmp

my.cnf on Master 1 Server:

default-authentication-plugin=mysql_native_password
bind-address = 0.0.0.0
datadir=/mnt/data/mysql_data
socket=/var/lib/mysql/mysql.sock
log-error=/mnt/data/mysql_logs/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
max_connections = 100000
binlog_expire_logs_seconds = 1209600
log-bin=/mnt/data/mysql_data/mysql_bin_logs/mysql-bin
server-id = 1
binlog_do_db = edxapp
binlog_do_db = edxapp_csmh
replicate-do-db = edxapp
replicate-do-db = edxapp_csmh
relay-log="/mnt/data/mysql_data/mysql_relay_log/relay-log"
auto-increment-offset = 1
slow_query_log = 1
slow-query_log_file = /mnt/data/mysql_logs/slow_mysql.log
long_query_time = 20
innodb_buffer_pool_size = 1300M
read_only=OFF
super_read_only=OFF
!include /etc/my.cnf.d/secrets-backup.cnf
tmpdir=/mnt/data/tmp
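
The error above says the source is still serving GTID transactions from mysql-bin.000995 while @@GLOBAL.GTID_MODE is OFF. A minimal, hedged sketch of what to check and of MySQL's documented one-step-at-a-time sequence for lowering gtid_mode (run on the servers involved; the waits between steps matter):

-- Read-only checks on both masters:
SHOW VARIABLES LIKE 'gtid_mode';
SHOW VARIABLES LIKE 'enforce_gtid_consistency';
SHOW BINARY LOGS;

-- gtid_mode can only move one step at a time; the documented order for turning
-- it off is ON -> ON_PERMISSIVE -> OFF_PERMISSIVE -> OFF, waiting at each step
-- until all GTID transactions already in the binary log have been replicated:
SET @@GLOBAL.gtid_mode = ON_PERMISSIVE;
SET @@GLOBAL.gtid_mode = OFF_PERMISSIVE;
SET @@GLOBAL.gtid_mode = OFF;

A hedged reading of the error: the replica is requesting events from a binary log section written while gtid_mode was still ON, so it will keep failing with error 1236 until it is repositioned to coordinates written after the switch to OFF, or until GTID mode is re-enabled on both sides.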

mysql – mysqldump: Got error: 1066: Not unique table/alias: ‘temp1’ when using LOCK TABLES

When I try to dump a whole database using

mysqldump -htest.com -uabc -p --column-statistics=0 abc > abc.sql

The following error occurred.

mysqldump: Got error: 1066: Not unique table/alias: 'temp1' when using LOCK TABLES

I have no problem dumping other databases on the same host; only this database has the problem.
What is the cause, and how can I fix it?
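
Since the error only shows up when mysqldump takes its default LOCK TABLES, one hedged way to narrow it down is to dump this database without table locking and see whether the failure disappears (same connection details as above; --single-transaction is the usual substitute for InnoDB, --skip-lock-tables simply turns the locking off):

# Sketch: consistent InnoDB snapshot instead of LOCK TABLES
mysqldump -htest.com -uabc -p --column-statistics=0 --single-transaction abc > abc.sql

# Or just skip the locking step (weaker consistency guarantees):
mysqldump -htest.com -uabc -p --column-statistics=0 --skip-lock-tables abc > abc.sql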

mysql – Can't find the exe of mysqldump

I'm just learning about MySQL after 6 years of focusing only on SQL Server (dumb me).

I'm studying how backups work there, but I can't find the mysqldump file to run the backup. In the command prompt it says the command is not recognized, and I can't find the file so that I can use its full path on the command line.

Is it something that has to be downloaded separately?

And also, while we're here: with SQL Server we create a physical file for the backup. With MySQL, if I'm not mistaken, we create a dump file that has the "commands" to insert the data and create the objects again, right?
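
mysqldump ships with the MySQL server and client packages rather than as a separate download; on Windows it normally lives in the server's bin directory, which is not added to PATH by default. A hedged sketch (the folder depends on the version and install path you chose):

REM Typical default location on Windows; adjust the version/path to your installation.
"C:\Program Files\MySQL\MySQL Server 8.0\bin\mysqldump.exe" -u root -p mydb > mydb_dump.sql

And yes, the output is a logical dump: a text file of SQL statements (CREATE TABLE, INSERT, and so on) that recreates the objects and data when replayed, rather than a binary backup file as in SQL Server.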

Use mysqldump for replication?

I have a large production database in Amazon RDS MySQL (billions of rows) that I want to set up as a master, replicating to an Aurora Serverless instance. Unfortunately, it looks like the only way to copy the data over is by using mysqldump, which is going to be a serious pain in itself.

However, when I read about setting up a slave with this method, it looks like it requires the master to be basically locked down for the whole time that mysqldump is running, in order to preserve the binlog position, which makes it far too expensive in lost business.

I have already tried DMS – it appears to be unreliable, either losing data or, worse, inserting spurious rows, and I don’t have the time or the insight to troubleshoot it. And as it turns out, snapshots cannot be restored into a serverless instance, for some reason.

Is there any way around these problems? I have seen mention of GTIDs, but so far I haven't found enough info to see whether they are useful in this situation.
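
For what it's worth, with InnoDB tables mysqldump does not have to keep the source locked for the whole run: --single-transaction reads from a consistent snapshot, and --master-data=2 (or, with GTIDs, --set-gtid-purged=ON) records where the replica should start. A hedged, illustrative sketch only; --master-data may need privileges that managed services such as RDS do not grant, and RDS documents its own procedures for external replication:

# Sketch: consistent snapshot without holding table locks for the duration;
# the binlog coordinates are written into the dump as a comment (--master-data=2).
mysqldump -h source-endpoint -u repl_user -p \
  --single-transaction --master-data=2 --routines --events --triggers \
  --databases mydb | gzip > seed_for_replica.sql.gz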

mysql – Consistent InnoDB dumps with mysqldump

The MySQL documentation recommends using --single-transaction when dumping InnoDB tables with mysqldump, in order to get a transactionally consistent snapshot of the database with minimal blocking of other transactions. Unfortunately, this only works if you can somehow prevent any other connection from executing a schema change during the dump:

https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html

While a --single-transaction dump is in process, to ensure a valid
dump file (correct table contents and binary log coordinates), no
other connection should use the following statements: ALTER TABLE,
CREATE TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE. A consistent
read is not isolated from those statements, so use of them on a table
to be dumped can cause the SELECT that is performed by mysqldump to
retrieve the table contents to obtain incorrect contents or fail.

What is a good way of ensuring that no schema changes happen during a dump? My use case is that I periodically (cronjob) back up my database every hour using the mysqldump utility.
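
One hedged option on MySQL 8.0 is the backup lock: LOCK INSTANCE FOR BACKUP blocks DDL (with a few documented exceptions) while still allowing normal DML, so holding it in a separate session for the duration of the hourly dump keeps schema changes out of the window. A minimal sketch, assuming the backup user has the BACKUP_ADMIN privilege and the locking session stays open until the dump finishes:

-- Session 1: take the instance-wide backup lock (blocks DDL, allows DML).
LOCK INSTANCE FOR BACKUP;

-- Session 2: run the usual dump while the lock is held, e.g.
--   mysqldump --single-transaction --routines --events mydb > hourly.sql

-- Session 1: release the lock once the dump has completed.
UNLOCK INSTANCE;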

I thought that maybe using mysqldump's default of --lock-tables instead would give me transactional consistency, but:

The --single-transaction option and the --lock-tables option are
mutually exclusive because LOCK TABLES causes any pending transactions
to be committed implicitly.

Is that saying that if someone runs mysqldump --lock-tables against an InnoDB database, it can actually cause an ongoing transaction in another session to no longer be transactional? I hope not; that seems insane. Elsewhere:

https://dev.mysql.com/doc/refman/8.0/en/lock-tables.html#lock-tables-and-transactions

LOCK TABLES and UNLOCK TABLES interact with the use of transactions as
follows:

LOCK TABLES is not transaction-safe and implicitly commits any active
transaction before attempting to lock the tables.

This makes sense within the context of a single DB session. If you are in the middle of a transaction in your session and do this, then it implicitly commits your open transaction first. It isn't clear what it does, if anything, to transactions in other sessions. I'm hoping that it wouldn't do anything, or rather that it would be consistently ordered with respect to any ongoing transactions in other sessions.

Am I just reading too much into the mysqldump docs, and does using --lock-tables achieve transactional consistency?

Are the mysqldump docs simply explaining that doing --single-transaction together with --lock-tables is nonsensical because --lock-tables would immediately and implicitly close the single transaction? (Couldn't the reverse order work? But maybe it would be pointless?)

Is mysqldump --lock-tables otherwise transactionally consistent?

Thanks!

mysqldump – All recent changes in a MySQL database were removed automatically; newly inserted data is not available

I have been using MySQL for the last two months. I insert new data and update previous data regularly. But when I opened phpMyAdmin today, I saw that all the changes I had made in the last 10 days have vanished.

For example, the data inserted over those 10 days is unavailable, and the updates I made to earlier data have also reverted to their previous versions.

The auto-increment field, however, still shows the next index number as if nothing had been deleted.
It seems that the database was restored to its version from 10 days ago.

Now, I need the data I inserted during those 10 days and the updates I made.

What could be causing this, and how can I get my data back?

Backing up a large database using mysqldump and CRON

I'm trying to back up a single MySQL database on my website using cron, but it's not working. The command is:

/usr/bin/mysqldump -uUSERNAME -pPASSWORD --databases DATABASENAME | gzip > /home/MYSITE/forumbackup/backup.$(date +"%d_%m_%Y_%H_%M_%S").sql.gz

There is no error message in the log files or in the default email account. I know the location of mysqldump is correct, as I checked with my host. I'm assuming gzip can be called as-is.

Any advice?
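
One thing worth checking once that exact command line is moved into a crontab: cron treats an unescaped % as the end of the command (everything after it is fed to the command as stdin), so the date format needs each % escaped, and without a stderr redirect any mysqldump error vanishes silently. A hedged sketch of the crontab line, keeping the placeholders from the question:

# In crontab, escape % as \% and capture errors somewhere visible.
30 2 * * * /usr/bin/mysqldump -uUSERNAME -pPASSWORD --databases DATABASENAME 2>> /home/MYSITE/forumbackup/backup_errors.log | gzip > /home/MYSITE/forumbackup/backup.$(date +\%d_\%m_\%Y_\%H_\%M_\%S).sql.gz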

mysql – mysqldump: unknown variable 'set-gtid-purged=OFF' in Laravel when running php artisan schema:dump

I'm using Laravel ^8.0. After a few migrations I'm trying to run php artisan schema:dump to generate an SQL file, but I'm getting the error below:

mysqldump: unknown variable 'column-statistics=0'
mysqldump: unknown variable 'set-gtid-purged=OFF'

   Symfony\Component\Process\Exception\ProcessFailedException

  The command "mysqldump  --user="${:LARAVEL_LOAD_USER}" --password="${:LARAVEL_LOAD_PASSWORD}" --host="${:LARAVEL_LOAD_HOST}" --port="${:LARAVEL_LOAD_PORT}" --skip-add-locks --skip-comments --skip-set-charset --tz-utc --set-gtid-purged=OFF "${:LARAVEL_LOAD_DATABASE}" --routines --result-file="${:LARAVEL_LOAD_PATH}" --no-data" failed.

Exit Code: 7(Unknown error)

Working directory: D:\laravel

Output:
================


Error Output:
================
mysqldump: unknown variable 'set-gtid-purged=OFF'

  at D:\laravel\vendor\symfony\process\Process.php:257
    253▕      */
    254▕     public function mustRun(callable $callback = null, array $env = []): self
    255▕     {
    256▕         if (0 !== $this->run($callback, $env)) {
  ➜ 257▕             throw new ProcessFailedException($this);
    258▕         }
    259▕
    260▕         return $this;
    261▕     }

  1   D:\laravel\vendor\laravel\framework\src\Illuminate\Database\Schema\MySqlSchemaState.php:142
      Symfony\Component\Process\Process::mustRun(Object(Closure))

  2   D:\laravel\vendor\laravel\framework\src\Illuminate\Database\Schema\MySqlSchemaState.php:145
      Illuminate\Database\Schema\MySqlSchemaState::executeDumpProcess(Object(Symfony\Component\Process\Process), Object(Closure))

I'm using a Docker MySQL container with the MySQL version below:

mysql Ver 8.0.22 for Linux on x86_64 (MySQL Community Server – GPL)
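
The "unknown variable" messages come from the mysqldump client binary that artisan invokes on the host, not from the MySQL 8 server in Docker: --set-gtid-purged and --column-statistics are options of MySQL 8's mysqldump, and a MariaDB (or much older MySQL) dump client rejects them. A hedged first check on the machine running php artisan:

# Which mysqldump is on the PATH that artisan uses, and what flavour/version is it?
mysqldump --version
# If this reports MariaDB or MySQL 5.x, that client does not know those options;
# putting a MySQL 8 client's bin directory first on PATH is one possible fix.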

mysql – MySQLdump leads to exceeding max connections. Skip-Quick as a solution?

Every night I do a full mysqldump of a 17 GB database in MySQL 5.7.32. This worked for years; now I am running some heavy insert load during the night, which caused the connections to rise to max_connections at exactly the backup time, leading to connection errors.

As the server has enough RAM, 64 GB (30 GB free), I increased max_connections from 150 to 300 as a first reaction.

However, looking at the dump command I found the option --quick (also enabled by default), which tells me that it is exporting row by row.

--single-transaction --routines --quick --compact

I am thinking of changing this to --skip-quick, but I hesitate to change it since I would need to verify the restore again, and that is very time consuming.

Looking at the connections over time, I also noticed that there are some interruptions around that time period. So maybe connections stack up because something blocks during the mysqldump?

The MySQL error log shows a large number of the following errors, although not at these time points but continuously throughout the day:

Aborted connection 63182018 to db: 'mydb' user: 'test' host: 'myhost' (Got an error reading communication packets)

How would you approach this problem?
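
One hedged way to see whether the dump itself makes connections pile up is to sample the server while the backup is running; if most threads are sitting in a waiting state during that window, the dump (or the locks it takes) is blocking them, rather than the inserts simply outrunning max_connections. A minimal sketch:

-- Run while the nightly dump is active:
SELECT COUNT(*) AS total_threads FROM information_schema.PROCESSLIST;
SELECT STATE, COUNT(*) AS threads
  FROM information_schema.PROCESSLIST
 GROUP BY STATE
 ORDER BY threads DESC;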
