partitioning – Ubuntu 20.04 install freezes on ‘Updates and other software’

I bought a Lenovo T450 (model 20BUS, x64) with Windows 10 Pro already installed: Intel i5-5300U CPU, 8 GB RAM, 224 GB SSD.

I am trying to install Ubuntu 20.04 alongside Windows to allow dual boot. I have read through multiple tutorials online, but none of them have worked so far. I made space on the SSD and created a bootable USB drive from the ISO. When I boot the live USB I get ACPI errors that flash on and off. Once it has booted I can try Ubuntu and use apps, and everything looks like it is working correctly.

When I try to install, the installer reaches the ‘Updates and other software’ page and then just spins for hours. I have tried everything on that page: Normal installation, Minimal installation, with and without the ‘other options’ boxes ticked. Each time it just spins until I try to move or close the window, at which point it tells me ‘Ubuntu 20.04 installer not responding’.

I have seen some other posts about an issue like this, but none of the solutions worked; mounting and unmounting my Windows partition did not help.

I’m lost right now and don’t know what else to try, so this is my last-ditch effort. I have found logs that might help someone figure out what is going on:

https://0bin.net/paste/Wo3tfRjX4Y2mX50I#Fjk5l6Xgm5-nAfnx+mtfIHqd1dwMhgDYo9IHbEptn41

partitioning – External Hard Drive “The backup GPT table is corrupt”

So I recently installed Ubuntu on my old laptop after using Windows 10.
I had been using an external hard drive (Seagate Backup Portable 4TB, formatted as NTFS) on Windows, and started using it on Ubuntu.
When I opened GParted, I was given the message:

The backup GPT table is corrupt, but the primary appears OK, so that will be used.

I found some answers (1) that looked similar to my issue and tried them, but they didn’t work.

Here is the result of sudo sfdisk -l /dev/sdb:

The backup GPT table is corrupt, but the primary appears OK, so that will be used.
Disk /dev/sdb: 3,65 TiB, 4000787029504 bytes, 7814037167 sectors
Disk model: BUP BK          
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: F33087C3-E03F-4A5A-A52A-2D5C30471082

Device      Start        End    Sectors  Size Type
/dev/sdb1      34     262177     262144  128M Microsoft reserved
/dev/sdb2  264192 7814035455 7813771264  3,7T Microsoft basic data

Partition 1 does not start on physical sector boundary.

and the result of gdisk’s p command:

Disk /dev/sdb: 7814037167 sectors, 3.6 TiB
Model: BUP BK          
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): F33087C3-E03F-4A5A-A52A-2D5C30471082
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 7814037133
Partitions will be aligned on 8-sector boundaries
Total free space is 3692 sectors (1.8 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              34          262177   128.0 MiB   0C01  Microsoft reserved ...
   2          264192      7814035455   3.6 TiB     0700  Basic data partition

I tried the following:

  1. sudo gdisk /dev/sdb followed by the v command, which resulted in the following:
Command (? for help): v

Problem: The secondary header's self-pointer indicates that it doesn't reside
at the end of the disk. If you've added a disk to a RAID array, use the 'e'
option on the experts' menu to adjust the secondary header's and partition
table's locations.

Problem: main GPT header's current LBA pointer (1) doesn't
match the backup GPT header's alternate LBA pointer(7814037166).

Problem: main GPT header's backup LBA pointer (7814037166) doesn't
match the backup GPT header's current LBA pointer (1).
The 'e' option on the experts' menu may fix this problem.

Problem: The backup partition table overlaps the backup header.
Using 'e' on the experts' menu may fix this problem.

Caution: Partition 1 doesn't begin on a 8-sector boundary. This may
result in degraded performance on some modern (2009 and later) hard disks.

Consult http://www.ibm.com/developerworks/linux/library/l-4kb-sector-disks/
for information on disk alignment.

Identified 4 problems!
  I used gdisk’s e command as instructed to try to fix this, but when I close and reopen gdisk and enter the v command I get the same result.

  2. I tried to write (w) the table to the disk, but that also didn’t work.

  3. I ran the disk error-checking option (in the hard drive’s properties) in Windows 10 and it reported no errors.

I am a little lost as to what to do; any help would be greatly appreciated.

PS: I am new to Linux so bear with me :), and let me know if any other info is needed.
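For anyone who wants to experiment before touching the real drive, the symptom can be reproduced harmlessly on a throwaway image file (a sketch; the image path is arbitrary, and sfdisk from util-linux works on plain files the same way it works on devices):

```shell
# Create a 64 MiB sparse image and write a GPT with one partition to it.
truncate -s 64M /tmp/gpt-demo.img
printf 'label: gpt\n,\n' | sfdisk /tmp/gpt-demo.img

# GPT keeps a backup header in the last sectors of the disk; shrinking
# and re-growing the image zeroes it, mimicking a corrupt backup table.
truncate -s 63M /tmp/gpt-demo.img
truncate -s 64M /tmp/gpt-demo.img

# The tools now fall back to the primary header, printing the same
# "backup GPT table is corrupt" warning GParted shows.
sfdisk -l /tmp/gpt-demo.img
```

On a real disk, sgdisk -e /dev/sdX (also from the gdisk package) is the non-interactive equivalent of the expert-menu e command described above: it relocates the backup structures to the actual end of the disk before writing.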

partitioning – SQL Server partitioned table: what happens when an update to a record changes its destination partition?

At my company we are facing performance issues on a big SQL Server DB (running on Azure, total size > 150 GB).

We have one big table (1.5 GB, 3 million records) on which several “select” queries are quite slow.

Those records have an “insert date”, and I know it is easy to make partitions based on date.

But I am exploring a different idea.

I have a “Status” column that can take different values: one of them is “Closed”, and I’ll collectively call all the others “Open”.
My idea is to add a persisted computed column whose value is either “Open” or “Closed”; I’ll call it OpenClosedStatus.

“Closed” tickets continue to accumulate and are rarely (if ever) read again; “open” tickets, by contrast, normally number fewer than 100, so having a dedicated partition for them should allow extremely fast reads.
Please note: the record is inserted once, selected potentially several hundred times and finally updated once (the update sets the “closed” status).

It is very complex to create proper indexes to speed up the many selects on “open” tickets, because we have hundreds of instances of different programs that read “open” tickets according to different criteria; I am therefore considering partitioning by OpenClosedStatus.

This idea implies that a ticket is always inserted into the “Open” partition and should then be moved to the “Closed” partition when its status is updated to closed. The OpenClosedStatus column will always be used in the WHERE clause of our queries, at least when searching for “open” tickets (thereby hitting the small partition).

I cannot find a reference for this “dynamic change” of partition on update of a record, so I am looking for advice for or against this atypical use of partitions.
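For what it’s worth, the setup being described can be sketched like this (all names are hypothetical; the partitioning column must be a PERSISTED computed column, and the table’s clustered index would then have to be created on the partition scheme):

```sql
-- Two partitions: 'Closed' and everything else ('Open').
CREATE PARTITION FUNCTION pfOpenClosed (varchar(6))
    AS RANGE LEFT FOR VALUES ('Closed');

CREATE PARTITION SCHEME psOpenClosed
    AS PARTITION pfOpenClosed ALL TO ([PRIMARY]);

-- The persisted computed column to be used as the partitioning key.
ALTER TABLE dbo.Ticket ADD OpenClosedStatus AS
    CASE WHEN Status = 'Closed' THEN 'Closed' ELSE 'Open' END
    PERSISTED NOT NULL;
```

With a partition-aligned clustered index on such a column, an UPDATE that changes the partitioning value moves the row to the other partition (internally a delete plus insert), which is exactly the “dynamic change” in question.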

partitioning – My pen drive is not detected in the boot menu for dual booting Ubuntu 20.04 and Windows 10, and disabling RST

I have a Dell G3 15 3590 laptop with a 9th-gen i5 processor, 8 GB of DDR4 RAM, and a 512 GB SSD. It came with Windows preinstalled and the BIOS mode is UEFI. I have disabled Fast Startup and Secure Boot, and made a 100000 MB unallocated region for Ubuntu. I created a bootable USB using the Rufus tool. When I chose GPT, the live USB was created, but after I rebooted my PC it was not showing in the boot menu. When I chose MBR instead, the disk was showing, and when I selected it two lines appeared:

    failed to create kernel channel, -22
    failed to parse event in tpm final event log

It then ran the disk check, but after that it was stuck on the Dell logo for a long time. When I repeated the process I was able to get into the setup, but after connecting to a network the next menu showed a warning about RST and asked me to restart; when I restarted, it booted directly into Windows. Please help! I want to dual boot my laptop.

partitioning – What are LUKS and LVM2 PV in gnome-disk?

After a standard installation of Linux Mint Debian Edition 4, this is how the hard drive is partitioned according to gnome-disk:

[screenshot: partition layout shown by gnome-disk]

Ext4 is the standard file system for current Linux systems, AFAIK. But what are the additional partitions, i.e. what are LUKS and LVM2 PV, and why is there 2.1 MB of unallocated space at the beginning? If I wanted to create a sparse, bootable clone of this system, would I have to recreate this exact partition structure, or would anything break if I just copied all files with cp to a new drive with a single Ext4 partition, plus the bootloader (i.e. the first 446 bytes) using dd?

Finally, what is the meaning behind gnome-disk’s (visually) vertical partitioning of 255 GB into 2 × 255 GB?
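Incidentally, the layering is often easier to see in a terminal than in gnome-disk; lsblk prints the LUKS container and the LVM volumes inside it as a tree (read-only, safe to run anywhere):

```shell
# Show each block device with its type (crypt = an opened LUKS mapping,
# lvm = a logical volume) and the filesystem it contains.
lsblk -o NAME,TYPE,FSTYPE,SIZE,MOUNTPOINT
```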

partitioning – MSI GE66 – Grub menu not showing after install (dual boot)

I just installed Kubuntu 20.04 on my MSI GE66 Raider laptop as a dual boot.

The bootloader should be installed in:

/dev/nvme0n1

Other available options are:

/dev/nvme0n1p3
/dev/nvme0n1p4
/dev/nvme0n1p6

The installation process completes without any error and it seems GRUB is installed in the right partition, but when I reboot my machine it goes straight into Windows 10.

What can I do to fix this?

I have already disabled Secure Boot.
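One quick check (a sketch, assuming the machine boots in UEFI mode): from the installed system or the live USB, efibootmgr shows whether an ‘ubuntu’ boot entry was registered in the firmware and where it sits in the boot order.

```shell
# efibootmgr only works when the current boot was via UEFI, so guard
# for that first; look for an 'ubuntu' entry and its BootOrder position.
if [ -d /sys/firmware/efi ]; then
    efibootmgr
else
    echo "not booted in UEFI mode"
fi
```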

Thanks
Fabio

partitioning – Do I need to keep the Microsoft Reserved partition if I’m installing Windows 10 for dual booting?

I bought a new computer with Ubuntu 18.04.2 LTS preinstalled, and I want to install Windows 10 for dual booting.
I have only one 256 GB SSD, so I need to free up some space for the installation. Instead of 2 partitions (the ESP for the system and the rest for Ubuntu) there are 3; the additional one is Microsoft Reserved.
I freed up some space and now my disk looks like this:
partition image

So now, if I install Windows 10 on the unallocated space, what will happen to the Microsoft Reserved partition? Do I need to delete it first and install in its place, or can I just install on the unallocated space?

partitioning – Transferring a New System to New Hardware?

I’m interested in porting my system to new hardware that I just purchased.

Can I make a disk image (ISO) and just drop it onto another computer’s hard drive (of course making sure it has GPT partitioning and an EFI system partition)?

Does the system adapt to the new hardware, or should I re-install the OS so that it has the appropriate drivers?
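A raw image copy carries the old partition table and sizes with it, so a file-level copy is often easier to adapt to a new disk. A minimal sketch of the idea on throwaway directories (on a real migration the source and target would be mounted filesystems, and rsync -aHAX is the usual tool for additionally preserving hard links and xattrs):

```shell
# Archive-mode copy preserving permissions and timestamps; cp -a is the
# coreutils equivalent of rsync's archive mode.
src=$(mktemp -d); dst=$(mktemp -d)
echo "root=/dev/sda2" > "$src/grub-example.cfg"
cp -a "$src/." "$dst/"
cat "$dst/grub-example.cfg"    # prints root=/dev/sda2
```

Either way, the bootloader still has to be reinstalled on the target disk afterwards, since copying files does not recreate the firmware’s EFI boot entries.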

index – Known pitfalls with MySQL partitioning

Are there any known pitfalls with MySQL partitioning, where server behavior differs a lot from the expected one?
I’ll try to explain what I mean. Let’s create a partitioned table and fill it with random data.

CREATE TABLE PartitionedTable (
  id int(11) NOT NULL AUTO_INCREMENT,
  col int,
  PRIMARY KEY (id),
  KEY(col)
) PARTITION BY RANGE(id) (
    PARTITION p0 VALUES LESS THAN (2000),
    PARTITION p1 VALUES LESS THAN (4000),
    PARTITION p2 VALUES LESS THAN (6000),
    PARTITION p3 VALUES LESS THAN (8000),
    PARTITION p4 VALUES LESS THAN MAXVALUE
    );

INSERT INTO PartitionedTable (col)
SELECT FLOOR(RAND() * 1000)  AS col FROM 
(select 0 union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) t,
(select 0 union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) t2, 
(select 0 union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) t3, 
(select 0 union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) t4 ;

Now let’s check the following queries (I’ll use MySQL 8.0 to be able to use EXPLAIN ANALYZE, but it seems the execution plan is the same for other MySQL Server versions).
The first query:

EXPLAIN ANALYZE SELECT col FROM PartitionedTable WHERE id >= 6000 AND id < 8000 ORDER BY col DESC LIMIT 1;

Execution plan:

-> Limit: 1 row(s)  (actual time=0.021..0.021 rows=1 loops=1)
    -> Filter: ((PartitionedTable.id >= 6000) and (PartitionedTable.id < 8000))  (cost=0.10 rows=1) (actual time=0.020..0.020 rows=1 loops=1)
        -> Index scan on PartitionedTable using col (reverse)  (cost=0.10 rows=1) (actual time=0.019..0.019 rows=1 loops=1)

Everything looks OK: the server found the necessary partition (you can see it if you run EXPLAIN without ANALYZE) and read just the last row from the index on the col column.

The second query (which requests the same data but in a bit different way):

EXPLAIN ANALYZE SELECT MAX(col) FROM PartitionedTable WHERE id >= 6000 AND id < 8000;

Execution plan:

-> Aggregate: max(PartitionedTable.col)  (actual time=0.938..0.938 rows=1 loops=1)
    -> Filter: ((PartitionedTable.id >= 6000) and (PartitionedTable.id < 8000))  (cost=400.26 rows=2000) (actual time=0.037..0.799 rows=2000 loops=1)
        -> Index range scan on PartitionedTable using PRIMARY  (cost=400.26 rows=2000) (actual time=0.035..0.570 rows=2000 loops=1)

This time the server found the same partition but decided to scan all the data inside it. That looks pretty inefficient and strange.

Link to dbfiddle: https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=f27c4b395b8f02a2972cc0ae306fe1cf
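One way to confirm that partition pruning itself is not the problem (a sketch, using MySQL’s explicit partition selection): name the partition directly and compare the plans.

```sql
-- Restrict the scan to p3 explicitly; if this still shows an index range
-- scan over ~2000 rows, the full read comes from how MAX() is planned,
-- not from missing partition pruning.
EXPLAIN ANALYZE
SELECT MAX(col) FROM PartitionedTable PARTITION (p3)
WHERE id >= 6000 AND id < 8000;
```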

Slow inserts in PostgreSQL when sharding using declarative partitioning and postgres_fdw?

We have been trying to partition a PostgreSQL database on Google Cloud using the built-in declarative partitioning and postgres_fdw, as explained here.

We are running the following commands:

Shard 1:

CREATE TABLE message_1 (
    id SERIAL,                                                                                        
    m_type character varying(20),
    content character varying(256) NOT NULL,
    is_received boolean NOT NULL,                                                              
    is_seen boolean NOT NULL,
    is_active boolean NOT NULL,
    created_at timestamp with time zone NOT NULL,
    room_no_id integer NOT NULL,
    sender_id integer NOT NULL
);

CREATE TABLE message_2 (
    id SERIAL,                                                                                        
    m_type character varying(20),
    content character varying(256) NOT NULL,
    is_received boolean NOT NULL,                                                              
    is_seen boolean NOT NULL,
    is_active boolean NOT NULL,
    created_at timestamp with time zone NOT NULL,
    room_no_id integer NOT NULL,
    sender_id integer NOT NULL
);

Shard 2:

CREATE TABLE message_3 (
    id SERIAL,                                                                                        
    m_type character varying(20),
    content character varying(256) NOT NULL,
    is_received boolean NOT NULL,                                                              
    is_seen boolean NOT NULL,
    is_active boolean NOT NULL,
    created_at timestamp with time zone NOT NULL,
    room_no_id integer NOT NULL,
    sender_id integer NOT NULL
);

CREATE TABLE message_4 (
    id SERIAL,                                                                                        
    m_type character varying(20),
    content character varying(256) NOT NULL,
    is_received boolean NOT NULL,                                                              
    is_seen boolean NOT NULL,
    is_active boolean NOT NULL,
    created_at timestamp with time zone NOT NULL,
    room_no_id integer NOT NULL,
    sender_id integer NOT NULL
);  

Source machine:

CREATE SERVER shard_1 FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'shard_1_ip', dbname 'shard_1_db', port '5432');
CREATE SERVER shard_2 FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'shard_2_ip', dbname 'shard_2_db', port '5432');

CREATE USER MAPPING for source_user SERVER shard_1 OPTIONS (user 'shard_1_user', password 'shard_1_user_password');
CREATE USER MAPPING for source_user SERVER shard_2 OPTIONS (user 'shard_2_user', password 'shard_2_user_password');

CREATE TABLE room (
    id SERIAL PRIMARY KEY,
    name character varying(20) NOT NULL,
    created_at timestamp with time zone NOT NULL,
    updated_at timestamp with time zone NOT NULL,
    is_active boolean NOT NULL
);

insert into room (
    name, created_at, updated_at, is_active
)
select
    concat('Room_', floor(random() * 400000 + 1)::int, '_', floor(random() * 400000 + 1)::int),
    i,
    i,
    TRUE
from generate_series('2019-01-01 00:00:00'::timestamp, '2019-4-30 01:00:00', '5 seconds') as s(i);

CREATE TABLE message (
    id SERIAL,                                                                                        
    m_type character varying(20),
    content character varying(256) NOT NULL,
    is_received boolean NOT NULL,                                                              
    is_seen boolean NOT NULL,
    is_active boolean NOT NULL,
    created_at timestamp with time zone NOT NULL,
    room_no_id integer NOT NULL,
    sender_id integer NOT NULL
) PARTITION BY HASH (room_no_id);

CREATE FOREIGN TABLE message_1
    PARTITION OF message
    FOR VALUES WITH (MODULUS 4, REMAINDER 1)
    SERVER shard_1;

CREATE FOREIGN TABLE message_2
    PARTITION OF message
    FOR VALUES WITH (MODULUS 4, REMAINDER 2)
    SERVER shard_1;

CREATE FOREIGN TABLE message_3
    PARTITION OF message
    FOR VALUES WITH (MODULUS 4, REMAINDER 3)
    SERVER shard_2;

CREATE FOREIGN TABLE message_4
    PARTITION OF message
    FOR VALUES WITH (MODULUS 4, REMAINDER 0)
    SERVER shard_2;

The problem we are facing is that when we try to insert data using the following query:

insert into message (
    m_type, content, is_received, is_seen, is_active, created_at, room_no_id, sender_id
)                                
select                                      
    'TEXT',                                                                                    
    CASE WHEN s.i % 2 = 0 THEN 'text 1'
        ELSE 'text 2'
    end,                                        
    TRUE,                      
    TRUE,                      
    TRUE,                        
    dr.created_at + s.i * (interval '1 hour'),
    dr.id,
    CASE WHEN s.i % 2 = 0 THEN split_part(dr.name, '_', 2)::int                                  
        ELSE split_part(dr.name, '_', 3)::int
    end
from room as dr, generate_series(0, 10) as s(i);

It takes nearly 1 hour 50 minutes to do so. When we don’t shard the table, the same insert takes around 8 minutes, so it is basically 14 times slower with sharding. Are we missing anything here, or are inserts really that slow when sharding with this method?
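One knob worth noting (a sketch; it requires PostgreSQL 14 or later on the source server): by default postgres_fdw sends a separate INSERT per row to each foreign server, and the batch_size option lets it send many rows per round trip, which is usually the dominant cost in exactly this kind of bulk load. The value 1000 below is illustrative.

```sql
-- Batch remote inserts instead of one row per round trip (PG 14+).
ALTER SERVER shard_1 OPTIONS (ADD batch_size '1000');
ALTER SERVER shard_2 OPTIONS (ADD batch_size '1000');
```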

Citus seems to perform better on inserts, as described in this video, so it seems a little odd to me that sharding would degrade performance by this much. It may well not perform as well as Citus, but why is performance so much worse?

Thanks in advance!!!