Getting "no space left on device" when there appears to be 433G free
I'm using a large XFS partition on a RAID volume built from seven 1TB drives. I'm currently getting a "no space left on device" error when creating files or directories, yet when I check the space limitations I see the following output:
[root@system]# pvdisplay
--- Physical volume ---
PV Name /dev/md1
VG Name raid
PV Size 5.46 TB / not usable 28.56 MB
Allocatable yes
PE Size (KByte) 32768
Total PE 178849
Free PE 18849
Allocated PE 160000
[root@thevault Videos]# vgdisplay
--- Volume group ---
VG Name raid
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 852
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 5.46 TB
PE Size 32.00 MB
Total PE 178849
Alloc PE / Size 160000 / 4.88 TB
Free PE / Size 18849 / 589.03 GB
[root@system]# lvdisplay
--- Logical volume ---
LV Name /dev/raid/vol
VG Name raid
LV Write Access read/write
LV Status available
# open 1
LV Size 4.88 TB
Current LE 160000
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 1280
Block device 253:0
The file I'm trying to create is just a simple, empty test file made with the touch command: touch testfile
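For completeness, here's roughly how I was checking the filesystem when the error hit (the mount point is mine, /mnt/raid/vol; it defaults to the current directory here so others can try it anywhere):

```shell
# Check both kinds of "space" that can trigger ENOSPC on a filesystem:
# free blocks and free inodes. Point mnt at the affected mount point.
mnt=${1:-.}          # defaults to the current directory; mine is /mnt/raid/vol
df -h "$mnt"         # free blocks: shows plenty of space free in my case
df -i "$mnt"         # free inodes: df -i shows the majority unused too
```

Both looked healthy, which is what made the error so confusing.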
[root@system ~]# mount
/dev/hdd3 on / type ext3 (rw)
/proc on /proc type proc (rw)
/sys on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/hdd1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
/dev/mapper/raid-vol on /mnt/raid/vol type xfs (rw,usrquota,grpquota)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/rpc_pipefs type rpc_pipefs (rw)
automount(pid3297) on /net type autofs (rw,fd=4,pgrp=3297,minproto=2,maxproto=4)
automount(pid3271) on /misc type autofs (rw,fd=4,pgrp=3271,minproto=2,maxproto=4)
I remounted the array with the inode64 option and, poof, no more errors. I must admit I'm a little confused as to why I needed this option. As I understand XFS, 32-bit inode numbers should be able to cover a filesystem of up to 16TB, and my array is only 7TB. Not to mention that df -i shows the majority of the inodes unused.
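For anyone else who hits this, here's roughly what I ran (the mount point and device are mine; note that on some kernels inode64 cannot be applied via remount and needs a full unmount/mount instead):

```shell
# Remount the XFS filesystem allowing 64-bit inode numbers.
mount -o remount,inode64 /mnt/raid/vol

# If the remount is refused, unmount and mount again instead:
# umount /mnt/raid/vol
# mount -o inode64 /dev/mapper/raid-vol /mnt/raid/vol

# To make it stick across reboots, add inode64 to the options in /etc/fstab:
# /dev/mapper/raid-vol  /mnt/raid/vol  xfs  rw,usrquota,grpquota,inode64  0 0
```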
Here's the description of the option:
inode64
Indicates that XFS is allowed to create inodes at any location
in the filesystem, including those which will result in inode
numbers occupying more than 32 bits of significance. This is
provided for backwards compatibility, but causes problems for
backup applications that cannot handle large inode numbers.
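Trying to make sense of this, here's my back-of-the-envelope math on where a 32-bit inode number can actually point. It assumes the default mkfs.xfs geometry of 4 KiB blocks and 256-byte inodes (I haven't verified my filesystem uses those values; xfs_info would confirm):

```shell
# An XFS inode number encodes a location: a block number plus the inode's
# slot within that block. With 256-byte inodes in a 4 KiB block, 16 inodes
# fit per block, so 4 of the 32 bits go to the slot, leaving 28 bits for
# the block number.
block_size=4096
inode_size=256
inodes_per_block=$(( block_size / inode_size ))    # 16
slot_bits=4                                        # log2(16)
block_bits=$(( 32 - slot_bits ))                   # 28
reach_bytes=$(( (1 << block_bits) * block_size ))  # highest byte a 32-bit inode number can address
echo "$(( reach_bytes / 1024 / 1024 / 1024 / 1024 )) TiB"   # prints: 1 TiB
```

If that's right, then without inode64 new inodes can only be placed in the first 1 TiB of the filesystem, which would explain getting ENOSPC once that region fills up even with 433G free overall.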
If someone can explain why this option corrected the issue, I'd appreciate it. Thanks.