Zone says "no space left on device" in /tmp when there's plenty?
Solaris / OpenSolaris: This forum is for the discussion of Solaris, OpenSolaris, OpenIndiana, and illumos.
General Sun, SunOS and SPARC related questions also go here. Any Solaris fork or distribution is welcome.
I have an M3000 running a few zones. On the zone, when I try to extract a ~3GB file in /tmp, GNU tar runs for a while and says "No space left on device" (Sun tar says "HELP - extract write error"). But df says there's 8GB free in /tmp!
A .tar.gz file is not just an archive. It's a compressed archive!
Unless you know the size of the uncompressed archive, which can be many times bigger than the 3+ GB compressed file, you cannot tell how big the uncompressed version will be.
Example:
Code:
$ ls -lh binutils-2.20.tar.gz
-rw-r----- 1 druuna internet 23M Oct 16 2009 binutils-2.20.tar.gz
$ gunzip binutils-2.20.tar.gz
$ ls -lh binutils-2.20.tar
-rw-r----- 1 druuna internet 123M Oct 16 2009 binutils-2.20.tar
More than five times the size.
Another thing: when you gunzip a file you end up with only the .tar version, but while decompressing, both files are present until the end. So you actually need roughly seven times the compressed size in my example.
You have to find a partition/disk with more space to gunzip that file.....
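As an aside, you don't have to guess at the uncompressed size: gzip can report it from the file's trailer without extracting anything. A minimal sketch using a small stand-in archive (the thread's real file would work the same way):

```shell
# Create a small compressed archive to inspect.
printf 'hello world\n' > sample.txt
tar cf sample.tar sample.txt
gzip sample.tar                 # produces sample.tar.gz, removes sample.tar

# gzip -l lists compressed size, uncompressed size, and ratio
# without decompressing the file.
gzip -l sample.tar.gz
```

One caveat: gzip stores the uncompressed size in a 32-bit field, so for archives whose uncompressed size exceeds 4 GB (like the ~4.5 GB one in this thread) the reported size wraps around and is misleading.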
It's about 4.5 GB uncompressed and unpacked, so it will fit.
Also, I unpack with:
Code:
gunzip < filename.tar.gz | tar xvf -
which removes the need to have a .tar file ever present.
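For what it's worth, GNU tar (though not the old Sun tar) can do the same thing in one step with its built-in gzip support; a small self-contained sketch:

```shell
# Build a small test archive, then extract it with GNU tar's -z flag,
# which runs the decompression internally -- no intermediate .tar file.
echo "demo" > demo.txt
tar czf demo.tar.gz demo.txt
rm demo.txt                     # remove the original so extraction is visible

tar xzf demo.tar.gz             # equivalent to: gunzip < demo.tar.gz | tar xf -
```

Either form keeps only the .tar.gz and the extracted files on disk.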
So what I think I'm running into is:
a) the host has ~19GB swap
b) the zone is limited to ~12G virtual address space
c) tmp is mounted on RAM like always
d) 'df' in a zone shows the host swap size for tmpfs filesystems (although shouldn't that include my 16GB of RAM?)
e) when I try to fill up /tmp I get denied at 12G due to (b) instead of at 19G
f) I should increase capped-memory.swap or mount /tmp with option size=NNNNm where NNNN is something like 8192, to limit it to 8192m which will show in df.
Am I right/wrong?
Last edited by AlucardZero; 08-18-2010 at 03:06 PM.
I think gunzip decompresses the whole file first and then pipes that to tar.
You actually need even more than I mentioned in post #4: if you have the compressed version (3 GB), the un-tarred .tar (4.5 GB), and the extracted files (another 4.5 GB), you need at least 12 GB of free space (I'm not sure whether both the .tar.gz and the .tar are present the whole time).
The command you use lets you do everything in an elegant one-liner instead of first gunzipping the file, then extracting the archive, and possibly re-gzipping it afterwards (but that is what happens under the hood). Disregard, I'm wrong!
Hi, "gunzip < filename.tar.gz" puts the data on STDOUT, not on disk. | pipes it to tar. "-f -" makes tar read from STDIN. The .tar file never exists on disk; only the .tar.gz and the {unpacked and uncompressed files} do.
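This is easy to verify with a small demonstration, assuming throwaway file names of my choosing: build a .tar.gz, extract it through the pipe, and confirm that no intermediate .tar ever appears on disk.

```shell
# Build a small test .tar.gz (stand-in for the real archive).
echo "payload" > file.txt
tar cf - file.txt | gzip > archive.tar.gz
rm file.txt                            # remove the original before extracting

# Extract through the pipe: gunzip writes the tar stream to stdout,
# and "tar xvf -" reads it from stdin. Nothing is staged on disk.
gunzip < archive.tar.gz | tar xvf -
ls                                     # archive.tar.gz and file.txt, no archive.tar
```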
Distribution: Solaris 11.4, Oracle Linux, Mint, Debian/WSL
Quote:
c) tmp is mounted on RAM like always
tmpfs is virtual memory based (i.e. RAM + swap space), not RAM only.
Quote:
d) 'df' in a zone shows the host swap size for tmpfs filesystems (although shouldn't that include my 16GB of RAM?)
df shows how much free swap is available unless /tmp is capped with size=xxxx.
Quote:
e) when I try to fill up /tmp I get denied at 12G due to (b) instead of at 19G
yes.
Quote:
f) I should increase capped-memory.swap or mount /tmp with option size=NNNNm where NNNN is something like 8192, to limit it to 8192m which will show in df.
Limiting /tmp capacity would just make things worse. Increasing capped-memory.swap might help. Extracting elsewhere, like on a ZFS filesystem, might help even more. How many files are there in the archive? What is their average size? tmpfs isn't good for storing a very large number of small files.
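Those last two questions can be answered without extracting anything: tar's "t" mode lists the archive's contents from the stream. A small sketch with a stand-in archive (the same pipe form works with Sun tar, which lacks GNU tar's -z):

```shell
# Stand-in for the real 3 GB archive: a tiny .tar.gz with a known layout.
mkdir -p pkg
echo a > pkg/a.txt
echo b > pkg/b.txt
tar czf pkg.tar.gz pkg

# Count the entries without extracting:
gunzip < pkg.tar.gz | tar tf - | wc -l

# Verbose listing shows each file's size, so you can eyeball the average:
gunzip < pkg.tar.gz | tar tvf -
```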
Quote:
Originally Posted by AlucardZero
Hi, "gunzip < filename.tar.gz" puts the data on STDOUT, not on disk. | pipes it to tar. "-f -" makes tar read from STDIN. The .tar file never exists on disk; only the .tar.gz and the {unpacked and uncompressed files} do.
The .tar file doesn't exist on disk as a file, but it does use disk space anyway. The pipe lives in virtual memory, and that virtual memory is using tmpfs space. Not only the pipe, but also the input and output buffers used by gunzip and the input buffer used by tar, and this might be larger than what you expect.
Limiting /tmp capacity would just make the things worse.
My thinking is that it would prevent confusion. As long as I set capped-memory.swap large enough, limiting /tmp's size to, say, 8GB would mean it would behave as "df" would lead you to believe: 8GB capacity and full at 8GB, instead of 19GB capacity and full at 12GB.
I also advised the user who initially complained about this to consider /var/tmp instead (which is ZFS; the whole zone is except the default tmpfs filesystems).
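For anyone landing here later, the two fixes discussed above look roughly like this. This is a sketch only, assuming a hypothetical zone named "myzone" and Solaris 10/11 zonecfg syntax; adjust the sizes to your own RAM and swap:

```
# From the global zone: raise the zone's swap cap (needs a zone reboot).
global# zonecfg -z myzone
zonecfg:myzone> select capped-memory
zonecfg:myzone:capped-memory> set swap=16g
zonecfg:myzone:capped-memory> end
zonecfg:myzone> commit
zonecfg:myzone> exit

# Or, inside the zone, cap /tmp itself so df reports the real limit --
# /etc/vfstab entry for tmpfs with a size option (takes effect on remount):
swap    -    /tmp    tmpfs    -    yes    size=8192m
```

With the size= option set, df on /tmp shows the 8 GB cap instead of the zone's free swap, which avoids the confusion that started this thread.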