migration – Migrate domain access field

I am currently looking for an example of how to migrate data into a domain access field, so that the domain for the page is selected under the Domain settings.

Does anyone have a hint on how to map the fields?

I tried the following:

    plugin: default_value
    default_value_callback: abc # ID of the domain record

field_domain_access: 123 # domain_id of the domain record
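For comparison, a fuller process-section sketch might look like the following. This assumes the Domain Access field stores the domain record's machine name as its target ID; `example_com` is a placeholder, not taken from a working migration:

```yaml
process:
  field_domain_access:
    plugin: default_value
    # Placeholder: the machine name of the domain record,
    # as shown under the Domain settings.
    default_value: example_com
```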

powershell – Microsoft Teams Migration using Graph API not able to maintain Teams/Channels tab ordering in destination

We are able to migrate the Teams and channel tabs using the Microsoft Graph API.
But the issue is that the tab ordering is not maintained in the destination environment.

Could you help me find a PowerShell script or Graph API call with which the Teams tab ordering can be maintained in the destination environment?

innodb – MySQL performance degraded after database migration?

I migrated my MySQL database from GCP to Azure (both 5.7), but it seems to have affected performance.

Server before migration: 2 VCPUS with 7.5GB memory
Server after migration: 2 VCPUS with 8GB memory

Both servers run, or ran, version 5.7 of the MySQL server. My database is currently around 6GB in size, growing by 100MB+ a day. It consists of only 32 tables, although a few of those tables contain millions of rows.

I read up on innodb_buffer_pool_size; GCP apparently sets it to around 80% of memory, which would make it 6GB. I have set innodb_buffer_pool_size on the new server to the same value.

Before updating this value (when I first noticed the decreased performance), innodb_buffer_pool_size was set to 0.1GB on the new server. I then updated it to the value the GCP server used, hoping it would help.

Following this documentation I was able to update the buffer pool size.

How did I check the innodb_buffer_pool_size initially?

-- returned 0.111...
SELECT @@innodb_buffer_pool_size/1024/1024/1024;

How did I update innodb_buffer_pool_size?

SET GLOBAL innodb_buffer_pool_size=6442450944;

I checked the resize status with this query,

-- returned 'Completed resizing buffer pool at 200920 13:46:20.'
SHOW STATUS WHERE Variable_name='InnoDB_buffer_pool_resize_status';
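One caveat worth noting: in MySQL 5.7, a SET GLOBAL change is lost on restart, so the same value should also go in the option file to persist. A minimal sketch, assuming a Linux-style config path:

```ini
# /etc/mysql/my.cnf (path varies by platform and packaging)
[mysqld]
innodb_buffer_pool_size = 6G
```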

I execute around 2 queries a second, peaking at 250k a day spread out. I can't be certain, but this usage shouldn't be enough to degrade performance this much?

How am I checking performance?

Below is a list of queries I ran and the time it takes the server to respond. I have tested these queries in Navicat, DataGrip, and the CLI with similar results.

I wasn’t sure what queries to include here to give as much information as possible, so if I haven’t included anything useful I can update it upon request.

-- Fetching 100k rows from a 3.1m rows table
-- Time took: 21.248s
SELECT * FROM `profile_connections` LIMIT 100000;

-- (SECOND TIME) Fetching 100k rows from a 3.1m rows table
-- Time took: 1.735s
SELECT * FROM `profile_connections` LIMIT 100000;

-- Fetching a random row from a 3.1m row table
-- Time took: 0.857s
SELECT * FROM `profile_connections` WHERE `id` = 2355895 LIMIT 1;

-- (SECOND TIME) Fetching a random row from a 3.1m row table 
-- Time took: 0.850s
SELECT * FROM `profile_connections` WHERE `id` = 2355895 LIMIT 1;

-- Fetching all rows from a 20 row table
-- Time took: 40.010s
SELECT * FROM `profile_types`;

-- (SECOND) Fetching all rows from a 20 row table
-- Time took: 0.850s
SELECT * FROM `profile_types`;

But at times I can run all of the above queries and get a response in 2 – 5 seconds. Performance seems to be hit or miss: there are huge differences in the time taken by the same query depending on when it is run, which I am currently struggling to diagnose.
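When identical queries swing between fast and slow like this, it can help to check whether each run is served from the buffer pool or from disk, and whether the access path changed after the migration. A diagnostic sketch using the table from the queries above:

```sql
-- Disk reads vs. logical read requests: Innodb_buffer_pool_reads rising
-- during a slow run points at cold pages being fetched from disk.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

-- Confirm the lookup still uses the primary key after the migration.
EXPLAIN SELECT * FROM `profile_connections` WHERE `id` = 2355895;
```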

I ran mysqltuner and got these performance metrics back:

(--) Up for: 47m 39s (38K q (13.354 qps), 1K conn, TX: 403M, RX: 63M)
(--) Reads / Writes: 50% / 50%
(--) Binary logging is disabled
(--) Physical Memory     : 7.8G
(--) Max MySQL memory    : 146.8G
(--) Other process memory: 0B
(--) Total buffers: 6.0G global + 954.7M per thread (151 max threads)
(--) P_S Max memory usage: 72B
(--) Galera GCache Max memory usage: 0B
(!!) Maximum reached memory usage: 21.9G (281.61% of installed RAM)
(!!) Maximum possible memory usage: 146.8G (1888.34% of installed RAM)
(!!) Overall possible memory usage with other process exceeded memory
(OK) Slow queries: 3% (1K/38K)
(OK) Highest usage of available connections: 11% (17/151)
(OK) Aborted connections: 0.67%  (9/1342)
(!!) name resolution is active : a reverse name resolution is made for each new connection and can reduce performance
(OK) Query cache is disabled by default due to mutex contention on multiprocessor machines.
(OK) Sorts requiring temporary tables: 0% (0 temp sorts / 41 sorts)
(OK) No joins without indexes
(OK) Temporary tables created on disk: 4% (82 on disk / 1K total)
(OK) Thread cache hit rate: 98% (17 created / 1K connections)
(OK) Table cache hit rate: 63% (667 open / 1K opened)
(OK) table_definition_cache(1400) is upper than number of tables(302)
(OK) Open file limit used: 1% (55/5K)
(OK) Table locks acquired immediately: 100% (1K immediate / 1K locks)
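The two memory warnings above follow directly from the configured limits: 6.0G of global buffers plus roughly 954.7M of per-thread buffers for each of the 151 allowed connections. A quick sketch reproducing mysqltuner's arithmetic (figures taken from the output above; treating the per-thread number as a flat worst case is an assumption):

```python
# Reproduce mysqltuner's "maximum possible memory" estimate to show
# why a 151-connection limit blows far past 8GB of RAM.
global_buffers_gb = 6.0          # innodb_buffer_pool_size etc.
per_thread_gb = 954.7 / 1024     # sort/join/read buffers per connection
max_connections = 151

max_possible_gb = global_buffers_gb + per_thread_gb * max_connections
print(f"{max_possible_gb:.1f} GB")  # matches the ~146.8G mysqltuner reports
```

Lowering max_connections or the per-thread buffer sizes is the usual way to bring this bound back under physical memory.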

Slow query logs
I run a lot of the same queries, so I’ve truncated it to include just a few.

# Time: 2020-09-20T16:45:04.230173Z
# User@Host: root(root) @  (  Id:     7
# Query_time: 1.022011  Lock_time: 0.000084 Rows_sent: 1  Rows_examined: 1058161
SET timestamp=1600620304;
SELECT @id := `id`,`item`
                    FROM `queue_items`
                    WHERE `processed_at` IS NULL AND `completed_at` IS NULL AND `confirmed` = '1' ORDER BY `id` ASC
                    LIMIT 1
                    FOR UPDATE;
# Time: 2020-09-20T16:45:09.676613Z
# User@Host: root(root) @  (  Id:     5
# Query_time: 1.198063  Lock_time: 0.000000 Rows_sent: 0  Rows_examined: 0
SET timestamp=1600620309;
# Time: 2020-09-20T16:45:22.938081Z
# User@Host: root(root) @  (  Id:     4
# Query_time: 5.426964  Lock_time: 0.000133 Rows_sent: 0  Rows_examined: 1
SET timestamp=1600620322;
UPDATE `queue_items` SET `completed_at` = '2020-09-20 16:45:17', `updated_at` = '2020-09-20 16:45:17' WHERE `id` = 1818617;
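As an aside, the Rows_examined: 1058161 against Rows_sent: 1 in the first log entry suggests the queue query scans the table rather than using an index. A speculative index covering its predicates (the column order is an assumption, not verified against this schema):

```sql
ALTER TABLE `queue_items`
  ADD INDEX `idx_queue_pending` (`confirmed`, `processed_at`, `completed_at`, `id`);
```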

migration – How to move users with homes encrypted using ecryptfs to new system instance (the same distro)?

From this topic I learned how to move users to a new system (just copy /etc/passwd, shadow, group, and gshadow). I tried it on Debian Buster, and it turned out that in my case it does not work correctly. I suppose it may be connected to the fact that some users' homes are encrypted using ecryptfs. I copied the mentioned files and mounted the original /home location in the new system, but after this operation the graphical session gets stuck on a black screen with a cursor instead of showing the login screen.
On the other hand, I am able to log in as a user with an encrypted home on a text console. The encrypted home is mounted and everything looks OK. Is it indeed connected to ecryptfs? What else should I copy, besides the four files mentioned, to fully migrate?

migration – Uncaught Error: Class ‘Redis’ errors on a site I’ve transferred to new hosting

I’ve been given a site that I need to install on my server. The site was made by someone else, and it seems to have Redis integration installed.

I get errors such as (paths altered/truncated in the error msg for privacy reasons):

Fatal error: Uncaught Error: Class ‘Redis’ not found in
wp-content/object-cache.php:732 Stack trace: #0
wp-content/object-cache.php(171): WP_Object_Cache->__construct() #1
wp-includes/load.php(638): wp_cache_init() #2 wp-settings.php(131):
wp_start_object_cache() #3 wp-config.php(94):
require_once(‘path…’) #4 wp-load.php(37):
require_once(‘pathgree…’) #5 wp-blog-header.php(13):
require_once(‘path…’) #6 index.php(17): require(‘path…’)
#7 {main} thrown in wp-content/object-cache.php on line 732

What’s really odd is that they didn’t give me the WP codebase, just the wp-content folders with the theme, plugins, and uploads.

So the entire wp codebase, wp-config etc are all defaults that I’ve just obtained from the current latest version at WordPress.org.

So if the wp-config is at its defaults, how can some Redis-like system come into play? I’ve never had this issue before, and I’ve transferred 101 pre-built WP sites between servers.

Can anyone assist?

backup – MYSQL Huge Data migration (5TB) from old server to new server

We have an old MySQL server containing a huge amount of data (~5TB), and we want to migrate to a new server in order to minimize costs and get rid of very old hardware.

My one and only idea is to use mysqldump for the migration, but I'm pretty sure that it's a poor and risky option with that much data.

Then someone on my team came up with the idea of using ETL tools, but we haven't gone into the deeper details and are not really sure whether ETL can actually help us.

Any ideas are always welcome.


database – MySQL 5.7 migration error

I have some issues regarding a migration from Ubuntu (Server 1) to MySQL on Windows (Server 2). First of all, I installed the MySQL migration tools on Server 2, and when I try to migrate the data, Server 1 does not migrate it; some of the views produced syntax errors and could not be created during the migration.

Then, when I try to migrate with MySQL from the migration tab, all tables, views, and routines are created successfully, but it generates errors like "MySQL server has gone away". So I double-checked those tables; they are big tables that carry a lot of data.

ntnsap.dtrtbl:Finished copying 55 rows in 0m02s
ntnsap.bgttbl:Copying 17 columns of 280637 rows from table ntnsap.bgttbl
ERROR: ntnsap.bgttbl:Failed copying 30617 rows
ERROR: ntnsap.dptmas:mysql_query(SELECT count(*) FROM dptmas): MySQL server has gone away
ntnsap.dptmas:Finished copying 0 rows in 0m00s
ERROR: ntnsap.dlrtbl:mysql_query(SELECT count(*) FROM dlrtbl): MySQL server has gone away
ntnsap.dlrtbl:Finished copying 0 rows in 0m00s
ERROR: ntnsap.tmptrt:mysql_query(SELECT count(*) FROM tmptrt): MySQL server has gone away
ntnsap.tmptrt:Finished copying 0 rows in 0m00s
ERROR: ntnsap.rbttbl:mysql_query(SELECT count(*) FROM rbttbl): MySQL server has gone away
ntnsap.rbttbl:Finished copying 0 rows in 0m00s
ERROR: ntnsap.tihfil:mysql_query(SELECT count(*) FROM tihfil): MySQL server has gone away
ntnsap.tihfil:Finished copying 0 rows in 0m00s
ERROR: ntnsap.lidtbl:mysql_query(SELECT count(*) FROM lidtbl): MySQL server has gone away
ntnsap.lidtbl:Finished copying 0 rows in 0m00s

So I ran some scripts like this:

SHOW VARIABLES LIKE 'sql_mode'; -- show the current SQL mode

SET sql_mode = ''; -- switch to non-strict mode so the data is copied without format checks

SELECT @@sql_mode; -- show the MySQL mode status

SET GLOBAL max_allowed_packet = 104857600; -- allow a maximum packet of 100MB for data transfer

After applying those scripts, the migration still does not run correctly; it simply loses the connection while copying large tables to Server 2. I have tried migrating the data from Server 1 to my PC, and that migration completed without any errors.
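One possible factor: SET GLOBAL max_allowed_packet only lasts until the server restarts and only applies to new connections, and "server has gone away" can also be caused by network timeouts on long copies. A speculative option-file change for both servers (the variable names are standard MySQL 5.7 settings; the timeout values are guesses to be tuned):

```ini
[mysqld]
max_allowed_packet = 100M
net_read_timeout   = 600
net_write_timeout  = 600
```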

Server 1 Specification
OS: Ubuntu 18.04
MySQL Server Ver: 5.7
Processor: Xeon E3400 Series

Server 2 Specification
OS: Windows Server 2012 R2
MySQL Server Ver: 5.7
Processor: Xeon E3400 Series

My PC Specification
OS: Windows 10 Pro 1909
MySQL Server Ver: 5.7
Processor: Corei7 10700 Series

Please advise.

Thank You


migration – Update existing node (not created by Migrate) via Migrate 8.5.x

Updating existing content was possible by changing the default behaviour of a migration with the system of record concept in Drupal 7.

In Drupal 8 this concept has been revised and is available via the overwrite_properties option in the destination section of the migration YAML:

  plugin: 'entity:node'
  # Define the entity properties that are to be updated.
  overwrite_properties:
    - title
    - field_foo
    - field_bar

seo – GSC: Big changes in coverage of HTTP property after HTTPS migration. Is this acceptable?

Some weeks ago, I migrated a 500k-page site from HTTP to HTTPS.

  • I implemented the appropriate 301 redirection
  • I removed the sitemap of the HTTP property within Google Search Console

Today I found some abrupt changes in the figures of the coverage within Google Search Console:

  • “Excluded: Page with redirect”: boosted from 116k to 500k
  • “Excluded: Duplicate, submitted URL not selected as canonical”: dropped from 500k to null
  • “Excluded: Discovered – currently not indexed”: dropped from 500k to null
  • “Error: Submitted URL seems to be a Soft 404”: dropped from 20k to null

I would like to believe that these changes are the expected ones and are good for the ranking of my pages. Any similar experience would be welcome.