Partitioning questions...
Hi all,
I am new to this and was wondering if someone could help me out. I have Ubuntu 12.04 installed on a custom-built computer, with a RAID 0 array across two 160 GB disks (512-byte sectors). Lately the system has not been performing as well as it should: applying system updates now takes an extended length of time (up to 20-30 minutes). I suspect the problem is I/O related, since it's things like updates that give me the most grief (they slow the system to a crawl, yet top shows no unusual memory or CPU utilization).
I have done some digging and found some curious items in my partition tables. sfdisk is showing two empty partitions (and gparted complains about partitions not on the disk): Code:
kbarlow@atlantia:/home/kbarlow$ sudo sfdisk -d /dev/mapper/isw_cacihdigbh_Volume0 Code:
kbarlow@atlantia:/home/kbarlow$ sudo fdisk -l /dev/mapper/isw_cacihdigbh_Volume0
I think I should be able to simply delete the two empty partitions (I suspect the installer had some difficulty when setting up the partitioning structure). However, I am wondering if I have a misconfiguration with respect to the I/O size values: most of the examples people have posted appear to have I/O sizes of 512 or 4096 (Advanced Format). I don't know whether these values are to be expected because of the RAID 0 configuration or not. Can anyone shed some light on this?
Thanks,
Keith |
As far as I know, p3 and p4 are not real partitions but empty slots for two more primary partitions. By default, four partition entries exist in the MBR, and you are using only two of them: p1 is a primary, bootable Linux partition, and p2 is an extended partition. The extended partition may contain additional logical entries, but it holds only one (p5, which is swap).
You cannot remove p3 and p4; they do not exist, they are just placeholders for two additional primary partitions. |
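For reference, an sfdisk -d dump of a table like the one described would look roughly like this (the start/size figures are invented for illustration; the point is the two `Id= 0` slots, which are the "empty partitions"):

```
# partition table of /dev/mapper/isw_cacihdigbh_Volume0
unit: sectors

/dev/mapper/isw_cacihdigbh_Volume0p1 : start=     2048, size=600000000, Id=83, bootable
/dev/mapper/isw_cacihdigbh_Volume0p2 : start=600002048, size= 24000000, Id= 5
/dev/mapper/isw_cacihdigbh_Volume0p3 : start=        0, size=        0, Id= 0
/dev/mapper/isw_cacihdigbh_Volume0p4 : start=        0, size=        0, Id= 0
/dev/mapper/isw_cacihdigbh_Volume0p5 : start=600004096, size= 23997952, Id=82
```

sfdisk always prints all four MBR entries, used or not, which is why the dump shows more "partitions" than fdisk does.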
pan64 is right. This is just the display format sfdisk uses; the partitions shown as empty simply don't exist, which is why fdisk doesn't list them.
To your actual problem, have you tried a disk benchmark or just a simple test with Code:
hdparm -Tt /dev/mapper/isw_cacihdigbh_Volume0p1 |
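hdparm -Tt only exercises reads. Since updates are write-heavy, a quick sequential-write check is also worth running; here is a sketch with dd (the scratch path is an assumption; put it on the filesystem you suspect):

```shell
# Write 256 MiB to a scratch file and flush it to disk before dd exits
# (conv=fdatasync), so the reported throughput reflects the drive
# rather than the page cache. Clean up the scratch file afterwards.
dd if=/dev/zero of=/tmp/ddtest.img bs=1M count=256 conv=fdatasync
rm -f /tmp/ddtest.img
```

Compare the MB/s figure dd prints against hdparm's read numbers; a large gap on writes points at the array rather than the filesystem.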
I installed iotop and have tried looking at what it reports; unfortunately, I haven't run it during a system update yet, so I need to examine that more closely. I can say that the activity indicator on my system appears to be a constant red instead of the usual flurry of activity. Maybe this is indicative of a pending hard drive failure? I ran fsck on the partition and it reported clean, but I am suspicious of that result since it completed almost instantly. I should check the SMART status of the drives, though I believe the BIOS would warn me on boot if there were a problem.
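On a fakeraid (dmraid/isw) setup, the mapped volume carries no SMART data, so the query has to go to the member disks directly. A sketch (the device names and the sample attribute line below are assumptions, not output from this system):

```shell
# Query each underlying disk, not the mapped volume (requires root):
#   sudo smartctl -H /dev/sda
#   sudo smartctl -A /dev/sda
# A nonzero raw value for Reallocated_Sector_Ct is an early failure sign.
# Parsing that raw value (last field) from a hypothetical smartctl -A line:
line='  5 Reallocated_Sector_Ct 0x0033 099 099 036 Pre-fail Always - 12'
echo "$line" | awk '/Reallocated_Sector_Ct/ {print "reallocated sectors:", $NF}'
```

Current_Pending_Sector and Offline_Uncorrectable are worth checking the same way; any of these climbing above zero matches the symptoms described here.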
I will give the hdparm command you suggested a try and see what it reports. I am glad to know the partition structure is normal. I take it those I/O size numbers not matching the sector sizes don't look alarming to you guys?
Thanks,
Keith |
@TobiSGD - Thanks! I took your suggestion and ran some benchmarks, and I definitely do have a problem. The read speed turned out OK:
Code:
kbarlow@atlantia:~$ sudo hdparm -Tt /dev/mapper/isw_cacihdigbh_Volume0p1 Code:
IOzone 3.405:
Thanks! |
[solved]
Running some diagnostics on my hard drives showed that one is in fact showing some health issues:
Code:
kbarlow@atlantia:~$ sudo ./HDSentinel
Marking the thread solved. Thanks for your help! |
Follow up
Just as a follow up for anyone who might be trying to diagnose similar errors.
Further diagnosis confirmed that the bad blocks were affecting the system's ability to read partition table information: after removing and recreating the partition, I received errors indicating the partition table did not exist. Ultimately, after splitting the RAID and removing the bad hard drive, the remaining good drive sustained about 55 MB/s on the same test that had yielded 2.5 MB/s throughput in the RAID 0 configuration. |