Thank you for your reply.
I was at work while writing the post, so I didn't have time to be more specific.
Here goes the whole story:
The machine in question is my home server/NAS. For some time it ran Ubuntu 11.x (I don't remember exactly whether it was .04 or .10), and on top of Ubuntu (using VMware) I had Windows 7 running, since unfortunately I do need some apps that only work under Windows. About two weeks ago I decided to switch to Proxmox VE (http://proxmox.com/products/proxmox-ve) and to put Ubuntu 12.04.1 and Windows 7 on it. And that's when it all began.
HW is:
A Gigabyte motherboard with a Phenom X6 at 3.2 GHz, 16 GB of RAM, and an integrated Radeon video card.
The motherboard has 6 SATA3 and 2 SATA2 ports, and in addition there's an LSI chip based RAID card (2 SAS / 8 SATA2 ports).
And now for the HDDs:
System (Proxmox, and the VMs): 3x Seagate 500GB (RAID0)
Storage (other stuff): 12x WD 2TB (RAID5) (waiting for the 13th drive to make it RAID6)
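When the 13th drive arrives, the plan is a standard mdadm reshape from RAID5 to RAID6; a minimal sketch of that (the md device and disk name below are placeholders, not my actual ones):
Code:
mdadm --add  /dev/md1 /dev/sdq1                       # add the 13th drive as a spare
mdadm --grow /dev/md1 --level=6 --raid-devices=13 \
      --backup-file=/root/md1-reshape.backup          # reshape RAID5 -> RAID6
cat /proc/mdstat                                      # monitor the reshape progress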
Info from Proxmox VE (this is based on Debian Squeeze):
Code:
root@szafran:/var# uname -a
Linux szafran 2.6.32-14-pve #1 SMP Tue Aug 21 08:24:37 CEST 2012 x86_64 GNU/Linux
root@szafran:/var# resize2fs
resize2fs 1.41.12 (17-May-2010)
Both RAIDs are software mdadm arrays, and both are assembled under Proxmox (Debian). On top of the RAID5 array there's LVM (one PV, one VG, two LVs). Ubuntu uses the LVs directly, since that's the fastest configuration to run under a VM.
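Roughly, the layering looks like this (a sketch typed from memory; the md numbers, disk letters and VG/LV names are placeholders, not necessarily my exact ones):
Code:
# software RAID, assembled on the Proxmox (Debian) host
mdadm --assemble /dev/md0 /dev/sd[a-c]1        # 3x 500GB Seagate, RAID0, system
mdadm --assemble /dev/md1 /dev/sd[d-o]1        # 12x 2TB WD, RAID5, storage
# LVM sits on top of the RAID5 array (one-time setup)
pvcreate /dev/md1
vgcreate vg_storage /dev/md1
lvcreate -L 16T -n lv_data  vg_storage         # handed to the Ubuntu VM as /dev/vdb
lvcreate -L 4T  -n lv_spare vg_storage         # handed to the Ubuntu VM as /dev/vdc
The two LVs are attached to the Ubuntu VM as virtio disks, which is why they show up as /dev/vdb and /dev/vdc inside the guest.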
To add to the first post: after a few hours with the 16TB fs mounted and copying data to it, it hung :/ The OS still said it was mounted etc., but the fs wasn't doing anything and wasn't responding to anything (on both Ubuntu and Proxmox/Debian). I had to reboot the physical machine.
And now some more mounting info (under Ubuntu VM):
Code:
root@NAS:/mnt# blockdev --setra 202752 /dev/vdb
root@NAS:/mnt# blockdev --setra 202752 /dev/vdc
root@NAS:/mnt# time mount -v -t ext4 /dev/vdb1 nowy
/dev/vdb1 on /mnt/nowy type ext4 (rw)
real 4m56.898s
user 0m0.000s
sys 4m55.698s
root@NAS:/mnt# time umount nowy
real 0m0.134s
user 0m0.000s
sys 0m0.016s
root@NAS:/mnt# time mount -v -t ext4 /dev/vdb1 nowy
/dev/vdb1 on /mnt/nowy type ext4 (rw)
real 4m56.916s
user 0m0.000s
sys 4m55.822s
root@NAS:/mnt# time umount nowy
real 0m0.133s
user 0m0.000s
sys 0m0.016s
root@NAS:/mnt# time mount -v -t ext4 /dev/vdc1 nowy2
/dev/vdc1 on /mnt/nowy2 type ext4 (rw)
real 0m14.379s
user 0m0.000s
sys 0m11.829s
root@NAS:/mnt# time umount nowy2
real 0m0.113s
user 0m0.000s
sys 0m0.012s
root@NAS:/mnt# time mount -v -t ext4 /dev/vdc1 nowy2
/dev/vdc1 on /mnt/nowy2 type ext4 (rw)
real 0m12.323s
user 0m0.000s
sys 0m11.729s
root@NAS:/mnt# time umount nowy2
real 0m0.446s
user 0m0.000s
sys 0m0.004s
vdb1 is 16TB with 9.7TB used.
vdc1 is 4TB and it's just free space.
Both fs were created using:
Code:
mkfs -O 64bit,extent,has_journal,uninit_bg,sparse_super,dir_index,large_file,flex_bg -t ext4 -T huge -b 4096 -v -m 0 -E stride=128,stripe_width=1536
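For reference, the geometry values come from the usual formula, assuming the array's 512 KiB chunk size:
Code:
# stride       = chunk size / fs block size = 512 KiB / 4 KiB = 128
# stripe_width = stride * number of drives  = 128 * 12 = 1536
(Strictly speaking, many guides count only the data disks for stripe_width, which for a 12-drive RAID5 would be 128 * 11 = 1408, but 1536 is what I used here.)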
As you can see from the above, even after bumping the read-ahead on the devices (202752 sectors, i.e. roughly 100 MB per device) and across repeated mounts, the mount times are horrible.
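For completeness, here's the read-ahead arithmetic and a quick way to double-check it (just a sketch):
Code:
# blockdev --setra takes the value in 512-byte sectors:
# 202752 sectors * 512 bytes = 103809024 bytes, i.e. roughly 100 MB per device
blockdev --getra /dev/vdb    # should report 202752
blockdev --getra /dev/vdc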
After those timed mounts I ran e2fsck -v -f on vdb1, and it still says the fs is clean.
I've read somewhere that there's a problem with long mount times, but that was supposed to affect the 3.4.x kernels, and since I'm working on 3.2.x it should be fine.