LinuxQuestions.org


ytd 11-16-2015 04:21 AM

Disk /dev/sda: 4799GB I cannot use all the 4.8TB space
 
Hello everyone,

I have a problem with using all the disk space.

Can someone help me, please?

server:~ # parted /dev/sda print
Model: FTS PRAID EP420i (scsi)
Disk /dev/sda: 4799GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos

Number  Start   End     Size    Type     File system     Flags
 1      1049kB  2146MB  2145MB  primary  ext3            boot, type=83
 2      2146MB  19.3GB  17.2GB  primary  linux-swap(v1)  type=82
 3      19.3GB  2196GB  2177GB  primary  ext3            type=83

server:~ # df -h
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/sda3       2.0T   19G   1.9T    1%  /
udev             16G  256K    16G    1%  /dev
tmpfs            16G  724K    16G    1%  /dev/shm
/dev/sda1       2.0G   66M   1.9G    4%  /boot
server:~ #

server:~ # cat /etc/SuSE-release
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 4
server:~ #

I want to create another partition with the rest of the disk space.
As you can see, the total space is 4799GB and I only have one big partition of 2177GB.

I'm not sure what to do. Maybe it's a limit in SUSE 11?
Or maybe it's because I'm using an msdos partition table, and that's why the rest of the space is unusable?
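
I guess I could check whether parted actually sees the rest as unallocated with something like this (just a sketch, I haven't run it yet):

server:~ # parted /dev/sda unit GB print free   # "Free Space" rows show unallocated gaps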

Please advise :)

ytd 11-16-2015 04:28 AM

I've found this article:

https://administratosphere.wordpress...the-2tb-limit/

But I don't want to change the partition table from msdos to GPT, as I would lose all the data, and I can't afford that. :(
Any advice?

ytd 11-16-2015 04:48 AM

Another article:

https://www.novell.com/support/kb/doc.php?id=7000331

If I went with yast2, could I change the label there without formatting?
I don't want to do something stupid and lose data...

berndbausch 11-16-2015 04:52 AM

Quote:

Originally Posted by ytd (Post 5450276)
I've found this article:

https://administratosphere.wordpress...the-2tb-limit/

But I don't want to change the partition table from msdos to GPT, as I would lose all the data, and I can't afford that. :(
Any advice?

Don't despair; ask uncle Google. There are many recipes for converting from MBR to GPT. It's risky but possible. Search for "convert mbr gpt".
There are also tools that do it automatically. I saw that AOMEI has this function; I have used that tool to resize my partitions more than once, but I think it only runs on Windows.
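
On the Linux side, gdisk (from the gptfdisk package) can do the conversion in place. A minimal sketch, assuming /dev/sda and a tested backup; note that GPT also needs a few free sectors at the end of the disk for its backup header:

server:~ # sgdisk --mbrtogpt /dev/sda

Or do it interactively: run "gdisk /dev/sda", review the GPT it builds in memory from the MBR, and only write it with 'w' once you are satisfied.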

descendant_command 11-16-2015 04:52 AM

Quote:

Originally Posted by ytd (Post 5450282)
I don't want to <snip> lose data...

Then don't have it in just one place!

wroom 11-16-2015 05:53 AM

Quote:

Originally Posted by ytd (Post 5450276)
Any advice?

Back up your raid with something like Clonezilla, or whatever backup method you feel sure about.

Then rethink your raid plans. You will need to make the raid volumes fit within the 2 TB limit.

This is because:
A) Your raid controller is limited to 2 TB volumes.
B) You want to use MSDOS partition tables, which are limited to 2 TB.

You haven't said anything about what drives you have, or how many, and not much about your design goals for the raid. But I guess it is speed & size you want.

You will need some scheme like making a bunch of hardware raid volumes (stripes/mirrors...) and then combining them with LVM, mdraid or another method. Since your hw-raid volumes stay within 2 TB, you can keep MSDOS partitioning. And that is a good thing when working with raid, since GPT partitioning writes a backup header at the end of the disk, effectively capping the disk volume from growing, and it also introduces other risks, because raid technology will often cut some sectors off the end of a disk to put its own volume ID block there.

Example: Say you have eight 500 GB disks. You can make four mirrors in your hw raid, and then partition and put a btrfs raid0 volume on those four hw-mirrored volumes.

Example 2: Make two raid0 volumes of 2 TB each on your hw raid controller, and then put them together in a btrfs volume.


The performance of a hybrid hw/sw raid will be very good. If you want blazing speed and size, and it does not hurt if your data happens to crumble to dust someday (because you run regular backups), then you can make hw raid stripes and stripe them together further, either with LVM, creating the logical volumes as stripes across the volume group, or with mdraid.
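
A minimal sketch of the LVM variant, assuming the two hw-raid volumes show up as /dev/sdb and /dev/sdc (hypothetical names):

server:~ # pvcreate /dev/sdb /dev/sdc
server:~ # vgcreate bigvg /dev/sdb /dev/sdc
server:~ # lvcreate -i 2 -I 64 -l 100%FREE -n bigdata bigvg   # 2 stripes, 64 KiB stripe size
server:~ # mkfs.ext4 /dev/bigvg/bigdata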

Remember: There are no safe shortcuts. There are no reliable ways of remaking your raid without first backing up your data.

wroom 11-16-2015 05:58 AM

Quote:

Originally Posted by ytd (Post 5450282)
Another article:

https://www.novell.com/support/kb/doc.php?id=7000331

If I went with yast2, could I change the label there without formatting?
I don't want to do something stupid and lose data...

You know the answer to your question already, don't you?

A: Don't do that.

You should:
Make backup.
Test backup.
Try solution.
Restore backup.
Verify restored backup.
Thoroughly test the new raid setup.

ytd 11-16-2015 07:15 AM

I can't do a backup. I mean... I don't want to test things on a production server.
It already has some DBs configured, it's too much work, and I don't want to start from scratch again. There is just too much configuration to do.

Are you sure my physical raid controller is limited to 2TB?

I don't want to use an msdos partition table. GPT is fine, but in my situation I need to convert from msdos to GPT in order to use the whole space (I don't care if there are 2 or 3 or 4 partitions, as long as I can use the whole space).

http://sp.ts.fujitsu.com/dmsp/Public...00i-ep420i.pdf says it supports 64TB, or am I not looking in the right place?

I have 5 disks of 1.2TB and I'm using RAID5.

We need to use at least RAID5, for data protection. We want neither RAID0 nor RAID1, even though RAID1 protects data, as it's just a mirror.
My "boss" says he wants RAID5 at least.

ytd 11-16-2015 07:38 AM

I'm not sure my thinking is right, but because the MSDOS partition table (label) limits partitions to 2TB, I cannot see the whole space.
The raid volume was already created "correctly" with the maximum capacity (4.8TB), but I only see 2TB of it because of the MSDOS partition table limit.

So I can't really do anything except reinstalling SUSE and using GPT or mdraid, as you said. But then again, it's kind of weird to use such a large disk space in raid5.

Anyway, there's a good tutorial here: https://www.youtube.com/watch?v=wL-BErhF_uM

But it's not 100% safe. It should work, but... I can't afford to play around, even if it looks safe.
I only have 20GB used out of 4TB, but then again, it's about the configuration... and cloning the system (Acronis / Clonezilla) takes time while the production server is down, so it's not a very good option for me.

Maybe just leave the "wasted" ~2.6TB of space and never use it... :( A sad option.

wroom 11-16-2015 08:34 AM

Quote:

Originally Posted by ytd (Post 5450323)
Are you sure my physical raid controller is limited to 2TB?

No, so I'm making the safe assumption.
Raid/disk controllers can be capped so that they only use a certain maximum size and leave the rest.
But in some cases a controller will "wrap" instead of "cap" the size, and the effect of that is minced meat of your data. So disk size capping is best tested before making assumptions.

Quote:

Originally Posted by ytd (Post 5450323)
http://sp.ts.fujitsu.com/dmsp/Public...00i-ep420i.pdf says it supports 64TB, or am I not looking in the right place?

You are then probably alright with regard to controller "capping".


Quote:

Originally Posted by ytd (Post 5450323)
I don't want to use an msdos partition table. GPT is fine, but in my situation I need to convert from msdos to GPT in order to use the whole space (I don't care if there are 2 or 3 or 4 partitions, as long as I can use the whole space).

Quote:

Originally Posted by ytd (Post 5450323)
I have 5 disks of 1.2TB and I'm using RAID5.

We need to use at least RAID5, for data protection. We want neither RAID0 nor RAID1, even though RAID1 protects data, as it's just a mirror.
My "boss" says he wants RAID5 at least.

If you are setting up a raid5 of 4.8TB using EXT3 as the filesystem, without UPS power and without battery backup on the raid controller cache, then you are asking for trouble.

Start by googling "raid5 write hole" and then do further research on how "well protected" data really is on a raid5 array of that size.

Use at least EXT4 as the filesystem.

Or, if this is a production server, use ZFS with the raid controller serving the drives in passthrough mode, and set up the drives in clusters of three using RAIDZ1. That of course implies you need six 1.2 TB drives to do it. This will make a ZFS pool of two RAIDZ1 arrays of 2.4 TB each, striped together into a total size of 4.8 TB.
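
A minimal sketch of that pool layout, with hypothetical device names for the six passthrough drives:

server:~ # zpool create tank raidz1 sdb sdc sdd raidz1 sde sdf sdg
server:~ # zfs create tank/data   # datasets are then carved out of the pool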

Or, let the raid controller serve two raid5 volumes and combine them into a BTRFS filesystem with metadata as raid1 and data as raid0 (best of three worlds).
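
A sketch of that combination, again with hypothetical names for the two raid5 volumes:

server:~ # mkfs.btrfs -m raid1 -d raid0 /dev/sdb /dev/sdc
server:~ # mount /dev/sdb /mnt/data   # mounting one member brings up the whole multi-device filesystem (run 'btrfs device scan' first if udev has not)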

A large raid5 without UPS power is doomed to fail.

Quote:

Originally Posted by ytd (Post 5450323)
I can't do a backup. I mean... I don't want to test things on a production server.
It already has some DBs configured, it's too much work, and I don't want to start from scratch again. There is just too much configuration to do.

According to your top post in this thread you have merely 20 GB of data on the raid.
A backup of that shouldn't be an issue.

Quote:

Originally Posted by ytd (Post 5450275)
Code:

server:~ # df -h
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/sda3       2.0T   19G   1.9T    1%  /
udev             16G  256K    16G    1%  /dev
tmpfs            16G  724K    16G    1%  /dev/shm
/dev/sda1       2.0G   66M   1.9G    4%  /boot


If you use Clonezilla, it will back up only the used data on the disk. It will also make a backup that is easy to restore to a clean disk to get a bootable system.
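
For reference, the core of a scripted Clonezilla disk save looks roughly like this (ocs-sr is Clonezilla's command-line front end; the image name is hypothetical and the options follow Clonezilla's documented examples):

server:~ # ocs-sr -q2 -j2 -z1p savedisk sles11-backup sda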


For a quick fix, I would do the following:
1)
Make a Clonezilla backup, so that we can restore to the current state.

2)
Shrink the root partition down to around 32 GiB.

3)
Move all partitions to the beginning of the drive.

4)
Make another backup of the disk volume.
In this case I would make an image backup of everything from the first sector to somewhere safely past the end of the last partition.

5)
Remove the large volume in the raid controller, and replace it with a smaller volume of some 48 GiB to 128 GiB. Make it raid5, or better yet raid1 or raid10.
This is your new bootable system volume.

6)
Restore the image backup made in step 4 to this new, smaller volume.

7)
Resize the root partition so the whole raid volume is used, then resize the root ext3 filesystem (see the sketch after this list).
At this step you should be able to boot up your system and check that everything is working.

8)
Make another volume in the raid controller. A big one taking all capacity that's left on the drives.
If you have to use raid5, then do as you wish.

9)
Partition this new big volume with GPT and add your big data filesystem.
Use at least ext4, or better yet btrfs, on this one (also sketched below).

This will let you set up a volume that is not capped by any MSDOS partition table, and still leave the current working system as it is, with its MSDOS partition table.
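
A rough sketch of steps 7 and 9, assuming the small system volume stays /dev/sda, the new big volume appears as /dev/sdb, and parted is recent enough to have resizepart (all names and partition numbers hypothetical):

server:~ # parted /dev/sda resizepart 3 100%     # step 7: grow the root partition
server:~ # resize2fs /dev/sda3                   # ...then grow the ext3 filesystem into it
server:~ # parted /dev/sdb mklabel gpt           # step 9: GPT label on the big volume
server:~ # parted /dev/sdb mkpart data 1MiB 100%
server:~ # mkfs.ext4 /dev/sdb1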

Does this solve the problem for you?

ytd 11-16-2015 11:58 PM

Too much work and too much downtime for my server.
I don't know why they needed so much space... we also have a SAN and we can allocate space from it via an FO HBA.

I'd rather leave the OS as it is...
tyvm for your effort.
I thought maybe I could find a safe and fast way to solve this.
Closed... :(

descendant_command 11-17-2015 12:24 AM

Quote:

Originally Posted by ytd (Post 5450754)
Too much work and too much downtime for my server.
I don't know why they needed so much space... we also have a SAN and we can allocate space from it via an FO HBA.

I'd rather leave the OS as it is...
tyvm for your effort.
I thought maybe I could find a safe and fast way to solve this.
Closed... :(

Wow.

wroom 11-17-2015 04:19 AM

Quote:

Originally Posted by descendant_command (Post 5450765)
Wow.

Indeed.

ytd 11-17-2015 04:33 AM

The following message appears on my server during boot:
"Your VDs that are not configured for write-back are temporarily running on write-through mode, This is caused by the battery being charged, missing or bad, Please allow battery to charge for 24 hours before evaluating battery for replacement."

I've already searched Google and found some answers; one of them is contradictory.

Solution from checkpoint.com:

This message can be ignored. The appliance functions properly.
https://supportcenter.checkpoint.com...tionid=sk75800


http://dl3.checkpoint.com/paid/e0/e0...8ec0e&xtn=.pdf

IBM: https://www-947.ibm.com/support/entr...d=migr-5076143

We are talking about the same server, which is brand new and already has more than a week of uptime.

LSI MegaRAID SAS-MFI BIOS
Version 6.19.05.0 (Build May 07, 2014)

FTS PRAID EP420i (scsi)

Shall I ignore this message?
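
Presumably I could also check the battery state with LSI's MegaCli utility, since the controller is MegaRAID based (assuming it's installed; the path and binary name vary):

server:~ # /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL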

wroom 11-17-2015 04:51 AM

Quote:

Originally Posted by ytd (Post 5450832)
The following message appears on my server during boot:
"Your VDs that are not configured for write-back are temporarily running on write-through mode, This is caused by the battery being charged, missing or bad, Please allow battery to charge for 24 hours before evaluating battery for replacement."

I've already searched Google and found some answers; one of them is contradictory.

Solution from checkpoint.com:

This message can be ignored. The appliance functions properly.
https://supportcenter.checkpoint.com...tionid=sk75800


http://dl3.checkpoint.com/paid/e0/e0...8ec0e&xtn=.pdf

IBM: https://www-947.ibm.com/support/entr...d=migr-5076143

We are talking about the same server, which is brand new and already has more than a week of uptime.

LSI MegaRAID SAS-MFI BIOS
Version 6.19.05.0 (Build May 07, 2014)

FTS PRAID EP420i (scsi)

Shall I ignore this message?

Go and ask your boss.
He/she will know best what to decide.

