Freeing up disk space after removing a file with 'rm'
Hey there.
I was recently trying to cut down disk space usage on a Fedora 10 machine. I had thought at the time that if I deleted the file with rm and then restarted the application that writes to it, the space would be freed, but apparently not. Reading around, it seems I may have needed to stop the process first and then delete the file to actually free up the space.
So my question is, now that I've removed that file, is there any way for me to free up the space without restarting the whole machine? The file was rather large and I don't want to leave that big a chunk of disk in use, but I would like to avoid restarting the server just to clear up space from one file.
The file in question was a log file from syslog-ng. After restarting syslog-ng a new version of the file in question was created but the disk space still was being used.
Let me know if there is a better place to ask this question.
Files work a bit differently in Linux than in Windows. In Linux, a file's data isn't actually freed until its link count is zero and no process holds it open. IOW, if you delete it while it's open, that instance of the file will continue to exist (without a directory entry) until every process that has it open is ended - even if you create a new one using some other method. Note that this is only the case if you delete it externally (e.g. manually) to the process that has it open. If the process closes the file and deletes it, it's gone.
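A quick way to see this behavior from the shell (a minimal sketch; the file name and sizes are made up):
Code:
```shell
# Create a 10 MB file, keep it open with a background reader, then delete it.
f=/tmp/unlink_demo.$$
dd if=/dev/zero of="$f" bs=1M count=10 status=none
tail -f "$f" >/dev/null 2>&1 &   # this process now holds the file open
holder=$!
rm "$f"                          # the directory entry is gone...
df /tmp                          # ...but the 10 MB is still allocated
kill "$holder"                   # once the last holder exits, the space is freed
```
Run `df` again after the `kill` and the available blocks come back.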
Rebooting has nothing to do with it, BTW.
I think I understood that beforehand, but it doesn't quite line up with what happened, unless I'm not following exactly. I deleted the file and then stopped syslog-ng, which was the process writing to it. I thought this would clear up the space, but it did not. Based on what I interpret from your post, it should have.
So is that not the case? If not is there a way for me to get rid of it?
To the other poster, I tried 'sync' but still have the same amount of space showing.
If I read you right, you got Quakeboy's info backwards; stop syslog-ng FIRST, and then delete the file, and finally, restart syslog-ng. That should do it.
Sasha
EDIT.. maybe I'm missing something here too
Sasha, I'm not quite sure what happens in the case where a program is appending data to a file that is externally deleted. You'd probably have to look at the syslog-ng code to see how it works. In any case, if syslog-ng isn't running, there is no reason the file shouldn't be gone once you execute "rm", unless some other program also has the same logfile open. Generally, it's best to only remove archived logfiles, and not the active ones.
I can't remember the options, but can't you use lsof to see what files are open? You must still have a process using that file; until the link count reaches zero and no process has it open, the file will keep taking up space on the filesystem. The ">" trick will simply remove the data in the file, leaving the OS nothing to clean up later, so this might work - unless the process remembers where in the file it is appending to and always does a seek.
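For reference, here is roughly how you can find and reclaim such a file without restarting the writer. This is a Linux-specific sketch: `lsof +L1` lists open files whose link count is zero, and the `/proc/PID/fd` links can reopen the underlying file even after it has been unlinked. The PID and fd number in the comments are made up.
Code:
```shell
# Find deleted-but-still-open files:
#   lsof +L1
# Suppose it shows PID 1234 holding the deleted log on file descriptor 5;
# truncating through /proc frees the blocks without stopping the process:
#   : > /proc/1234/fd/5
# Self-contained demonstration using this shell's own descriptor:
f=/tmp/held_log.$$
dd if=/dev/zero of="$f" bs=1M count=5 status=none
exec 9<>"$f"            # hold the file open on fd 9
rm "$f"                 # unlink it; the 5 MB is still allocated
: > "/proc/$$/fd/9"     # truncate the orphaned file through /proc
exec 9>&-               # close the descriptor; nothing left to free
```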
That's why I said if something is writing, it gets more complicated.
What happens when a file is large enough to have more than one extent? If the writing process is now adding bytes to indirect blocks, does the ">" clear the indirect blocks, leaving you with a sparse file and more free blocks? It should, but I haven't written a program to test that. Maybe I can do that today.
Quote:
So, my guess would be that "> <fileid>" replaces the file's contents with null?
With bash, sh and some other shells, ">" empties the file, equivalent to
Code:
echo -n "" > file
With csh, no. That's just one more reason to dislike csh :-)
But - leaving the question of sparse files aside - for ordinary appending, the ">" will clear disk space.
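The same thing can be shown with plain shell tools (a minimal sketch; the file name is made up):
Code:
```shell
log=/tmp/app.log.$$
yes x | head -c 1048576 > "$log"   # ~1 MB of data
: > "$log"                         # truncate in place; the blocks are freed
wc -c < "$log"                     # the file is now 0 bytes
rm "$log"
```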
Let's take this perl script:
Code:
#!/usr/bin/perl
# Append 4096 "x" characters to ./t every 10 seconds.
open(I, ">>", "./t") or die "open: $!";
select(I); $| = 1; select(STDOUT);   # autoflush so each write hits the file right away
while (1) {
    print I "x" x 4096;
    print "Block written\n";
    sleep 10;
}
That writes 4096 "x"'s to a file every 10 seconds. I chose 4096 because that's the disk block size on this box; therefore each write shows up immediately in "ls -l" and "df".
If you let that run a bit, "t" will grow and available disk space will of course decrease.
Code:
$ ls -l t; df
-rw-r--r-- 1 apl apl 32768 Nov 18 08:22 t
Filesystem 512-blocks Used Available Capacity Mounted on
/dev/disk0s2 155629664 100008768 55108896 65% /
....
$ ls -l t; df
-rw-r--r-- 1 apl apl 53248 Nov 18 08:23 t
Filesystem 512-blocks Used Available Capacity Mounted on
/dev/disk0s2 155629664 100006232 55111432 65% /
Now do the ">"
Code:
$ > t
$ ls -l t; df
-rw-r--r-- 1 apl apl 4096 Nov 18 08:23 t
Filesystem 512-blocks Used Available Capacity Mounted on
/dev/disk0s2 155629664 100006136 55111528 65% /
See? "t" has gone to zero (and had another 4096 bytes written to it) and available blocks have increased.
Nothing different happens if you use seek instead:
Code:
#!/usr/bin/perl
# Same as before, but explicitly seek to end-of-file before each append.
open(I, ">", "./t") or die "open: $!";
while (1) {
    seek I, 0, 2;    # whence 2 = SEEK_END: position at the current end of file
    print I "x" x 4096;
    print "Block written\n";
    sleep 10;
}
Something different DOES happen if you use this:
Code:
#!/usr/bin/perl
# Write each 4096-byte block at an explicit absolute offset.
open(I, ">", "./t") or die "open: $!";
$x = 0;
while (1) {
    $mypos = $x * 4096;
    seek I, $mypos, 0;   # whence 0 = SEEK_SET: absolute position $mypos
    print I "x" x 4096;
    print "Block $x written\n";
    $x++;
    sleep 10;
}
Code:
$ ls -l t; df
-rw-r--r-- 1 apl apl 16384 Nov 18 08:36 t
Filesystem 512-blocks Used Available Capacity Mounted on
/dev/disk0s2 155629664 100006208 55111456 65% /
If you do an "od" while that is running, you'll see the expected
Code:
$ od -c t
0000000 x x x x x x x x x x x x x x x x
*
0030000
Now zero it out:
Code:
$ > t
$ ls -l t; df
-rw-r--r-- 1 apl apl 0 Nov 18 08:36 t
Filesystem 512-blocks Used Available Capacity Mounted on
/dev/disk0s2 155629664 100006176 55111488 65% /
$ ls -l t; df
-rw-r--r-- 1 apl apl 20480 Nov 18 08:36 t
Filesystem 512-blocks Used Available Capacity Mounted on
/dev/disk0s2 155629664 100006216 55111448 65% /
What happens here is that the file goes to zero and available space increases, but then when the writer writes again, it's back to a large size instantly. But look at the available space - NOT back to what it was, and "od" shows why:
Code:
$ od -c t
0000000 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
*
0040000 x x x x x x x x x x x x x x x x
*
That's an example of the "sparse file" I mentioned earlier. Those nul bytes don't really exist on disk.
I hope this helps. I'll be writing this up at more length in a website article I will publish later today or tomorrow.
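You can create and inspect a sparse file directly, too (a sketch; the file name is made up). "ls -l" reports the logical size, while "du" reports the blocks actually allocated:
Code:
```shell
s=/tmp/sparse_demo.$$
dd if=/dev/zero of="$s" bs=1 count=1 seek=1048575 status=none  # write 1 byte at offset 1 MB - 1
ls -l "$s"    # logical size: 1048576 bytes
du -k "$s"    # allocated: only a few KB - the hole costs nothing
rm "$s"
```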