RAID HW controller suitable for a VM environment

I am currently running a high-end server with a 9271-8 and a single enterprise-grade SSD; I can get those results using CrystalDiskMark and … | Read the rest of https://www.webhostingtalk.com/showthread.php?t=1803607&goto=newpost

network attached storage – can I configure Synology DSM on a single disk and then migrate data to set up RAID 1 afterwards?

Given situation:

Synology with 2 x 1 TB (no more disks needed)

MyCloud with 2 x 3 TB (RAID level 1)

Objective:

I want to move the 3 TB disks into the Synology and keep them in sync (RAID 1).

When I swap in a disk, the Synology wants to install DSM and warns about data loss.

Question:

May I:

  • start by inserting just 1 x 3 TB into the Synology and installing DSM, agreeing to lose all data on that disk
  • then copy the data over from the other disk (using an HDD docking station)
  • then insert the second disk

Will that plan work?
What else should I consider?
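
For reference, DSM builds its data volumes on mdadm mirrors under the hood, so the final step (adding the second 3 TB disk) amounts to extending a one-disk mirror to two members. DSM normally drives this from Storage Manager; the shell sketch below is only illustrative, and the array and partition names are assumptions rather than values from this unit:

    # Check which md array holds the data volume
    cat /proc/mdstat
    # Add the second disk's data partition, then grow the mirror to two members
    mdadm --manage /dev/md2 --add /dev/sdb3
    mdadm --grow /dev/md2 --raid-devices=2
    # Wait for the resync to finish before trusting the redundancy
    cat /proc/mdstat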

ubuntu: how to resolve a DegradedArray event when a drive has been permanently removed from a RAID array?

When I installed my VPS a few months ago (Ubuntu 18.04), the default RAID configuration spanned 3 disks. I removed /dev/sdc from the array to create a new partition. The RAID array now looks like this:

~# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sat Nov 16 19:46:26 2019
        Raid Level : raid1
        Array Size : 305152 (298.00 MiB 312.48 MB)
     Used Dev Size : 305152 (298.00 MiB 312.48 MB)
      Raid Devices : 3
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Mon Mar 30 00:00:03 2020
             State : clean, degraded
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : 163-172-103-121:0
              UUID : b4acac7e:de2c1e5c:e43cc0ba:ad662e4a
            Events : 310

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       -       0        0        2      removed

As a result, the RAID array is still in sync on two disks, BUT I get a daily "DegradedArray Event" email.

I haven't found the correct way to fix it, i.e. mark RaidDevice 2 as permanently removed. And to be honest, I'm afraid of breaking something ;)

Please let me know how to proceed.

Thank you!
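
For reference, the usual way to make a deliberately removed member disappear from a mirror is to shrink the array's device count; a minimal sketch using the names from the mdadm output above (not a verified procedure for this particular VPS, so have backups before touching the array):

    # Tell md the array now has only two member slots
    mdadm --grow /dev/md0 --raid-devices=2
    # Verify: "Raid Devices" should now be 2 and the State should read "clean"
    mdadm -D /dev/md0

Once the array reports clean rather than degraded, the daily DegradedArray mails from mdadm's monitor should stop on their own.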

drive – Repair corrupt GPT table for hardware RAID

After a system reboot on my Odroid XU4, I have unfortunately lost the ability to mount my hardware RAID 0 array, and I'm a bit lost on how to diagnose the problem (and hopefully fix it). The drive is backed up every month, but data recovery is preferred if possible.

System information:

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.4 LTS
Release:    18.04
Codename:   bionic

fdisk gives the following output for the drive:

sudo fdisk /dev/sda

The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Disk /dev/sda: 7.3 TiB, 8001456963584 bytes, 15627845632 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: FF0E1EB3-7607-42A4-9D17-B24199DB2EC7

Device           Start         End     Sectors  Size Type
/dev/sda1         2048 15607421876 15607419829  7.3T Linux filesystem
/dev/sda2  15607422976 15627845598    20422623  9.8G Linux filesystem

I don't recall seeing a warning about the partition table before all this mess. I have found similar posts related to the GPT partition table in this forum (e.g. Correct a corrupted backup GPT table?), and most seem to suggest using gdisk to repair the partition table. I took a look at the disk using gdisk /dev/sda.

GPT fdisk (gdisk) version 1.0.3

Caution! After loading partitions, the CRC doesn't check out!
Warning! Main partition table CRC mismatch! Loaded backup partition table
instead of main partition table!

Warning! One or more CRCs don't match. You should repair the disk!

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: damaged

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************

Question:

Am I on the right track here in trying to get my disk up and running again? If so, would repairing the partition table using gdisk be destructive and lead to data loss?
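
For reference, since gdisk reports that the backup table is intact and has already loaded it, the usual recovery path is to verify that loaded table and write it back, which regenerates the damaged primary copy. A sketch only, assuming the backup table really is good and with the monthly backup on hand in case anything goes wrong:

    sudo gdisk /dev/sda
    # at the gdisk prompt:
    #   v   - verify the (backup) table gdisk has loaded
    #   w   - write it to disk, rebuilding both the main and backup GPT
    #   y   - confirm the write

Everything before the final 'w' is read-only, so the inspection itself is not destructive; only the write changes the disk, and it rewrites the partition table rather than the partition contents.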

RAID configuration HELP

Hi there,

I rented a new OVH server with 3 hard drives, and I want to use this layout:

– 2 drives in RAID 1
– 1 drive without RAID for ba … | Read the rest of https://www.webhostingtalk.com/showthread.php?t=1802553&goto=newpost

RAID 10 stripe size for VMware – 6 x 12G 10K 1.8TB disks, H730P Mini RAID controller

Below is our current server configuration. In a few weeks I will simulate disaster recovery by installing 7 new disks (1 hot spare) and restoring all virtual machines from backups.

Will I gain anything by setting the RAID stripe size to something larger than 64 KB? The RAID controller offers 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB, and 1 MB.

Any recommendation based on the specification below would be highly appreciated, thanks.

Hardware:

Dell R630
Dell H730P Mini RAID controller
2 x E5-2670 v3
512 GB RAM
6 x 12G 1.8TB 10K SAS disks

Software:

VMware ESXi 6.7 U3

Configuration:

RAID 10, 128 KB stripe size
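
One way to ground the stripe-size choice is to measure the array under the kind of I/O the VMs actually generate rather than guessing: small random I/O gains little from larger stripes, while large sequential transfers can. A sketch of a before/after comparison using fio from a Linux test VM on the datastore (fio, the file path and the job parameters are assumptions, not part of the original post):

    # Mixed random 4K I/O, roughly what general-purpose VMs look like
    fio --name=rand4k --filename=/tmp/fio.test --size=4G --direct=1 \
        --rw=randrw --bs=4k --iodepth=32 --numjobs=4 \
        --runtime=60 --time_based --group_reporting
    # Large sequential writes, e.g. backup or clone traffic
    fio --name=seq1m --filename=/tmp/fio.test --size=4G --direct=1 \
        --rw=write --bs=1m --iodepth=8 --numjobs=1 \
        --runtime=60 --time_based --group_reporting

Running the same two jobs before and after the rebuild with a different stripe size shows whether the change actually matters for this workload.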

hardware: RAID creation with certain requirements

I'm supposed to design a RAID with the following requirements:

  • storage capacity of at least 2 TB (2 000 GB)

  • At peak performance, the array must handle at least 500 random block reads and 1600 block writes per second;

  • the array resynchronization process should finish within 5 hours (resynchronization is needed when replacing a failed drive with a new one; this time does not include the time to physically replace the failed drive).

A block here is 8 KB.

I'm finding it difficult to come up with a good solution to this problem, but perhaps I'm misreading the task; please correct my train of thought if possible:

I have decided to opt for RAID 1, as it is reliable and easy to implement. Available HDDs offer about 1.51 MB/s for 4K write operations and a capacity of approximately 500 GB.

1600 block writes per second means writing 1600 × 8 KiB ≈ 13.1 MB every second; at 1.51 MB/s per drive, that works out to around ten 500 GB hard drives just to meet this requirement. Am I looking at the wrong hard drives, or am I calculating this incorrectly?
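
As a quick arithmetic check of the figures above (the 1.51 MB/s per-drive rate is the one quoted in the question; rounding up to whole drives is an assumption):

    # Required write throughput, and a rough drive count at 1.51 MB/s per drive
    echo "required: $(( 1600 * 8 )) KiB/s"                     # 12800 KiB/s ≈ 13.1 MB/s
    echo "drives:   $(( (1600 * 8 * 1024) / 1510000 + 1 ))"    # ≈ 9 drives of raw write throughput

For what it's worth, in a plain two-disk RAID 1 every write goes to both members, so mirroring alone does not add write throughput; reaching the figure above would mean striping the writes across drives (e.g. RAID 10) and mirroring on top of that.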

raid: how to improve NFS write speed to my home server (NAS) drives

Problem: I have slow write speeds on my NFS NAS. I would like to improve write speed on my home server.

My typical use is copying a 1–30 GB file from my Mac laptop to my Ubuntu server, into one of two places: 1) temporary storage – a folder on the server's system drive (SSD), or 2) permanent storage (HDD – RAID 6 – 5 x 6 TB drives – 5400 rpm).

Let's first tackle the SSD write speed:

The line in my exports file is:

 /home/username/Transfer   192.168.102.1/24(rw,insecure,no_subtree_check,all_squash,anonuid=1000,anongid=1000)

If I send a file and monitor the progress:

 MBP:home hugh$ rsync -va --progress /home/Source/ /Volumes/Destination/
    building file list ... 
    blah blah
    sent 1233757345 bytes  received 60 bytes  9753022.96 bytes/sec
    total size is 2487074514  speedup is 2.02

So I get 9.7 MB/s to the SSD. What can I add to or remove from my export line to improve this?
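
For reference, the usual knobs are on both ends of the wire. A sketch only: the specific values are assumptions to experiment with rather than known-good settings for this network, and async on the server trades crash safety for speed (writes are acknowledged before they hit disk):

    # /etc/exports on the server - same line with async added
    /home/username/Transfer   192.168.102.1/24(rw,async,insecure,no_subtree_check,all_squash,anonuid=1000,anongid=1000)
    # re-export without restarting the NFS server
    sudo exportfs -ra

    # On the Mac, mounting with larger transfer sizes (mount point is hypothetical)
    sudo mount -t nfs -o rw,rsize=65536,wsize=65536 server:/home/username/Transfer /Volumes/Destination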

Now for permanent storage:

The export line is:

 /mnt/Share/bigraid      192.168.102.1/24(rw,sync,insecure,no_subtree_check,all_squash,anonuid=1000,anongid=1000)

If I move something from the transfer folder to the RAID:

sent 1,359,312,494 bytes  received 38 bytes  209,125,004.92 bytes/sec
total size is 1,358,980,577  speedup is 1.00

So the RAID itself can write at ~210 MB/s. But if I move something from my laptop to the RAID:

sent 1099794896 bytes  received 48 bytes  1532815.25 bytes/sec
total size is 1099660496  speedup is 1.00

The speed is much slower (1.5 MB/s). I know I need to remove the "sync" option and wait for the RAID to finish its initial sync (it's brand new), so I'll report back when that changes.
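
For completeness, a sketch of the same change on the permanent-storage export: with neither option specified, modern Linux NFS servers default to sync, so async has to be spelled out, and the crash-safety trade-off mentioned above applies here too:

    /mnt/Share/bigraid      192.168.102.1/24(rw,async,insecure,no_subtree_check,all_squash,anonuid=1000,anongid=1000)
    sudo exportfs -ra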

If anyone has any recommendations on how to improve writing performance, I would be very grateful.

Thank you

[NEW] WAKAWAKA Offers | High IOPS SSD RAID 10 plans from $2.00 onwards, limited stock!

About CLOUDCONE

Here's a little introduction to the company: CloudCone, LLC is a cloud hosting service provider offering fully managed, hourly billed virtual private cloud servers, AnyCast DNS, and high-performance dedicated bare metal servers as our core services. We offer an unmatched stack of cloud services that work together to provide a scalable infrastructure for your online presence, alongside an international team of support engineers and in-house DevOps.

  • Deploy virtual machines
  • Start / restart / shut down virtual machines
  • One-click operating system reinstallation
  • Upload / download individual resources in one click
  • VNC control panel
  • SSH keys
  • Reset root passwords
  • Backups
  • Snapshots
  • One-click recovery mode
  • rDNS updates
  • Managed firewalls
  • Free AnyCast DNS management (up to 3 domains)
  • Detailed server statistics (Cloud View)
  • REST API

WAKAWAKA-M1

—————————-

  • 1 vCPU core
  • 1 GB of RAM
  • 20 GB RAID 10 SSD
  • 1 x IPv4 and 3 x IPv6
  • 1 TB of bandwidth
  • Free AnyCast DNS

$2.00/mo (billed $0.00269/hr)
Order here: https://app.cloudcone.com/compute/942/create

WAKAWAKA-M2

——————–

  • 1 vCPU core
  • 2 GB of RAM
  • 40 GB RAID 10 SSD
  • 1 x IPv4 and 3 x IPv6
  • 1 TB of bandwidth
  • Free AnyCast DNS

$4.00/mo (billed $0.00537/hr)
Order here: https://app.cloudcone.com/compute/943/create

WAKAWAKA-M3

——————–

  • 1 vCPU core
  • 4 GB of RAM
  • 80 GB RAID 10 SSD
  • 1 x IPv4 and 3 x IPv6
  • 1 TB of bandwidth
  • Free AnyCast DNS

$8.00/mo (billed $0.01075/hr)
Order here: https://app.cloudcone.com/compute/944/create

WAKAWAKA-M4

——————–

  • 2 vCPU cores
  • 8 GB of RAM
  • 120 GB RAID 10 SSD
  • 1 x IPv4 and 3 x IPv6
  • 2 TB bandwidth
  • Free AnyCast DNS

$16.00/mo (billed $0.02151/hr)
Order here: https://app.cloudcone.com/compute/945/create

NOTE: Add funds to cover the relevant plan before deploying.

Available plugins

1 Tbps dedicated Anti-DDoS protection: $2.50 per month
Content delivery network: $0.045 per GB (45 PoPs on 6 continents)
Additional IPv4: $1 per month

Support and care in the cloud

A team of support experts, qualified and enthusiastic enough to take on any task, is available 24 hours a day, 7 days a week. Our Cloud Associates and Cloud Engineers will work one-on-one with you to ensure all your concerns are addressed, and will not rest until your projects are online and running smoothly.

Gifts included

Advanced server metrics: free
AnyCast DNS
7-day money-back guarantee, no questions asked
IPv6 addresses
WHMCS reseller module

CloudCone mobile app

Do you want to experience the true accessibility and convenience of our brand? Use the Instant Support feature of our mobile app to quickly communicate with our support team anytime, anywhere. You can also access a detailed dashboard with your server overview, active tasks, and billing-related activities. It also includes three key functions: start, restart and shutdown. Download it for free from the Google Play Store and the Apple App Store!


Terms and Conditions

Acceptable Terms – Terms and Policies – CloudCone
Order while supplies last
Offline hours are not applicable.
Affiliates: Affiliate Program – CloudCone
Promotional plans are semi-managed (best-effort support for third-party scripts)

CloudCone Network

Data center / server location: Multacom, Los Angeles, USA
Looking Glass: Cloudcone Looking Glass – Mirror
Status page: CloudCone status

Network functions

200 Tier 1 multiple transit providers such as Amazon, Google Fiber, Japan Telecom, Etisalat, Hutchison, TATA Communications and China Telecom
BGP4 Best-Path routing
Custom routing policies per customer
Latency-based routing optimization
Redundant, diverse-path fiber connections to carriers including Level3, Cogent, Savvis, ACE, TATA and China Unicom
Automatic fault detection and redirection

For any questions, visit cloudcone.com and start a chat.
Thank you,
CloudCone, LLC

raid5: restoring RAID 5 with Windows 10 Storage Spaces

I am putting together the specification for a PC that will be used as a host for virtual machines.
I plan to use a 120 GB SSD for the operating system (Windows) and 3 x 1 TB SSDs in RAID 5 for the virtual machines.

I wonder if this is a good idea. What happens if the drive with the OS fails? Will it be possible to install Windows on a new drive and rebuild the RAID from the 3 SSDs? Or should I do RAID 1 with two 120 GB SSDs for Windows?