LinuxQuestions.org (/questions/)
-   Linux - Software (https://www.linuxquestions.org/questions/linux-software-2/)
-   -   Getting no space left on device when there appears to be 433g free (https://www.linuxquestions.org/questions/linux-software-2/getting-no-space-left-on-device-when-there-appears-to-be-433g-free-846048/)

phoenix1030 11-23-2010 11:07 AM

Getting no space left on device when there appears to be 433g free
 
I'm using a large XFS partition on a RAID volume built from seven 1 TB drives. I'm currently getting a "no space left on device" error when creating files or directories. However, I see the following output when checking space limitations.

[root@system]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hdd3              18G  775M   16G   5% /
/dev/hdd1              99M   15M   80M  15% /boot
tmpfs                 1.8G     0  1.8G   0% /dev/shm
/dev/mapper/raid-vol  4.9T  4.5T  433G  92% /mnt/raid/vol

[root@system]# df -i
Filesystem           Inodes IUsed IFree IUse% Mounted on
/dev/hdd3              4.6M   39K  4.6M    1% /
/dev/hdd1               26K    37   26K    1% /boot
tmpfs                  213K     1  213K    1% /dev/shm
/dev/mapper/raid-vol   1.7G  2.4M  1.7G    1% /mnt/raid/vol

[root@system]# pvdisplay
--- Physical volume ---
PV Name /dev/md1
VG Name raid
PV Size 5.46 TB / not usable 28.56 MB
Allocatable yes
PE Size (KByte) 32768
Total PE 178849
Free PE 18849
Allocated PE 160000

[root@thevault Videos]# vgdisplay
--- Volume group ---
VG Name raid
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 852
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 5.46 TB
PE Size 32.00 MB
Total PE 178849
Alloc PE / Size 160000 / 4.88 TB
Free PE / Size 18849 / 589.03 GB

[root@system]# lvdisplay
--- Logical volume ---
LV Name /dev/raid/vol
VG Name raid
LV Write Access read/write
LV Status available
# open 1
LV Size 4.88 TB
Current LE 160000
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 1280
Block device 253:0

Any assistance would be appreciated.

AlucardZero 11-23-2010 11:52 AM

Creating files or directories of what size, and where? And can you post the output of "mount"?

valen_tino 11-23-2010 12:26 PM

Are you using quotas on the server? If not, you may want to run e2fsck in single-user mode... assuming that you have a good backup of the system.

phoenix1030 11-23-2010 01:36 PM

The file I'm creating is a simple 4.0K test file, created with the touch command: touch testfile

[root@system ~]# mount
/dev/hdd3 on / type ext3 (rw)
/proc on /proc type proc (rw)
/sys on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/hdd1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
/dev/mapper/raid-vol on /mnt/raid/vol type xfs (rw,usrquota,grpquota)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/rpc_pipefs type rpc_pipefs (rw)
automount(pid3297) on /net type autofs (rw,fd=4,pgrp=3297,minproto=2,maxproto=4)
automount(pid3271) on /misc type autofs (rw,fd=4,pgrp=3271,minproto=2,maxproto=4)
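As an aside, the mount list above can be scanned programmatically for the condition that turns out to matter later in the thread: an XFS filesystem mounted without the inode64 option. A minimal sketch follows; the helper function and sample data are illustrative, not from the thread itself, and the parsing assumes the "dev on mountpoint type fstype (opts)" format printed by `mount` above.

```python
# Sketch: flag XFS mounts whose option list lacks inode64.
# Parses the "dev on mountpoint type fstype (opts)" lines printed
# by `mount`, as in the output above. Hypothetical helper, not a
# command from the thread.

def xfs_mounts_without_inode64(mount_output: str) -> list:
    """Return mount points of XFS filesystems mounted without inode64."""
    flagged = []
    for line in mount_output.splitlines():
        parts = line.split()
        # Expected shape: <dev> on <mountpoint> type <fstype> (<options>)
        if len(parts) < 6 or parts[1] != "on" or parts[3] != "type":
            continue
        mountpoint, fstype, opts = parts[2], parts[4], parts[5].strip("()")
        if fstype == "xfs" and "inode64" not in opts.split(","):
            flagged.append(mountpoint)
    return flagged

sample = """\
/dev/hdd3 on / type ext3 (rw)
/dev/mapper/raid-vol on /mnt/raid/vol type xfs (rw,usrquota,grpquota)
"""
print(xfs_mounts_without_inode64(sample))  # -> ['/mnt/raid/vol']
```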

phoenix1030 11-23-2010 03:48 PM

OK, so it looks like I've figured it out.

I added inode64 to the mount options in /etc/fstab:

/dev/raid/vol /mnt/raid/vol xfs defaults,inode64,usrquota,grpquota 0 0

I remounted the array and, poof, no more errors. I must admit I'm a little confused as to why I needed this option. As I understand XFS, 32-bit inode numbers should be usable on filesystems up to 16 TB, and my array is only 7 TB. Not to mention that df -i shows the majority of the inodes unused.

Here's the description of the option:
inode64
Indicates that XFS is allowed to create inodes at any location
in the filesystem, including those which will result in inode
numbers occupying more than 32 bits of significance. This is
provided for backwards compatibility, but causes problems for
backup applications that cannot handle large inode numbers.

If someone can explain why this option corrected the issue, I'd appreciate it. Thanks.
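The likely mechanism, as a sketch: without inode64, XFS only creates inodes at disk locations whose inode numbers fit in 32 bits, and an XFS inode number encodes the filesystem block holding the inode plus the inode's index within that block. Assuming the common defaults of that era, 4 KiB blocks and 256-byte inodes (neither geometry is confirmed anywhere in this thread), that confines new inodes to roughly the first 1 TiB of the volume; once that region fills with file data, inode allocation fails with ENOSPC even though df and df -i both show plenty free. The arithmetic:

```python
# Sketch of the 32-bit inode-number reach, assuming default XFS geometry
# (4 KiB blocks, 256-byte inodes -- assumptions, not values from the thread).
block_size = 4096   # bytes per filesystem block (assumed default)
inode_size = 256    # bytes per on-disk inode (assumed default)

inodes_per_block = block_size // inode_size     # 16 inodes per block
index_bits = inodes_per_block.bit_length() - 1  # 4 low bits hold the index

# With 32-bit inode numbers, only 32 - index_bits bits remain to address
# the block, so inodes can only live within this many bytes of the start:
reachable_bytes = (2 ** (32 - index_bits)) * block_size
print(reachable_bytes // 2**40, "TiB")  # -> 1 TiB
```

On a 4.9 TB volume that is 92% full, that first terabyte can easily be fully occupied by data, which matches the symptom above: touch fails while free space and free inodes both look abundant. inode64 lifts the restriction by letting XFS place inodes anywhere, at the cost of inode numbers wider than 32 bits.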

