
LinuxQuestions.org (/questions/)
-   Linux - General (https://www.linuxquestions.org/questions/linux-general-1/)
-   -   After deleting lots of large files, free space only increases partially (https://www.linuxquestions.org/questions/linux-general-1/after-deleting-lots-of-large-files-free-space-only-increases-partially-4175491045/)

kubuntu-man 01-12-2014 02:58 PM

After deleting lots of large files, free space only increases partially
 
Hi,

I have just deleted 71 GB of files.
Free space before: 117 GB
Free space after: 126 GB

So, instead of gaining 71 GB of additional free space, I gained only 9 GB. I have double-checked that I really deleted 71 GB and that free space increased by only 9 GB.

I also tried running sync, but it had no effect.

This is not the first time this has happened. When it happens, I can free the space by unmounting and then re-mounting the filesystem. In those cases, unmounting takes several seconds instead of being immediate.

Nowadays, I cannot easily unmount and remount the filesystem, because it is permanently busy with my video recorder, the ownCloud server for my family, and a few more services I did not have before. And I don't want to get up at 3 o'clock at night just to unmount and remount :-)
(No, the 'at' utility won't do, because one of the services needs a manual state check to find a good moment when it can be shut down.)

So far, I have only noticed this behaviour when deleting a large amount of data. On the other hand, I am not sure whether it happens in other cases too and the difference is just not noticeable.

The filesystem was created with 0% reserved for root (mkfs -m 0). According to fsck -f, the filesystem is not corrupted, and according to an extended S.M.A.R.T. self-test, the hardware is also OK.

So here are my 2 questions:
1. What is happening here? Why is space freed at umount instead of at delete?
2. Is there something I can do to trigger freeing the space without unmounting?

suicidaleggroll 01-12-2014 05:10 PM

It sounds like you're deleting files that are currently open by running processes. In this case, you're simply deleting the reference to the file, but the space won't be freed until the process is stopped or restarted.

sag47 01-12-2014 10:10 PM

Quote:

Originally Posted by suicidaleggroll (Post 5096834)
It sounds like you're deleting files that are currently open by running processes. In this case, you're simply deleting the reference to the file, but the space won't be freed until the process is stopped or restarted.

I agree with this assessment.

Code:

lsof | grep '(deleted)'
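If open-but-deleted files are the cause, the space they still hold can also be totalled from lsof's SIZE/OFF column (the 7th field in the default output). This is a rough sketch, not exact accounting - that column can show offsets rather than sizes for some descriptor types:

```shell
# Sum the sizes of files that are deleted but still held open.
# Assumes the default lsof column layout, where field 7 is SIZE/OFF.
lsof -nP 2>/dev/null | awk '
    / \(deleted\)/ { total += $7 }
    END { printf "space still held: %.1f GiB\n", total / 1024^3 }'
```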

lleb 01-12-2014 11:00 PM

Also, did you do this via the GUI or via the CLI?

kubuntu-man 01-13-2014 01:46 PM

It does not matter if I delete the files in the GUI or in the terminal. I know about the recycle bin ;-)

I forgot to mention in my question that I also checked that none of the files were open.
Not by any program - and I also stopped the NFS server (no Samba server is installed). And even if I had not checked, I think it's not very likely to have open files totalling 62 GB ;-)

But indeed, I had copied the files to another PC via NFS before deleting them. If the files were still open after stopping the NFS server, that would be a bug, wouldn't it?
On the other hand ... why does umount work without complaining that the filesystem is busy, and just take unusually long? This means there must be some sort of cleanup going on.

But I have noticed something else:
Yesterday, I could not do the umount/remount cycle because of the running services (my statement that "umount is working" was from the previous cases when the problem occurred).
This morning, the space was freed - without any user action. So it looks like the space is freed with some delay. Maybe it's the same cleanup I suspect is going on during umount.
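One way to observe this delayed reclamation without unmounting is to log free space over time (the /bigdata mount point is assumed here; adjust to the real one):

```shell
# Log available space on /bigdata once a minute; the timestamps show
# when the kernel actually returns the freed blocks.
while true; do
    printf '%s  %s\n' "$(date '+%F %T')" \
        "$(df -h --output=avail /bigdata | tail -n 1 | tr -d ' ')"
    sleep 60
done
```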

jefro 01-13-2014 05:45 PM

lleb asked a good question; he could not have known.

Just for grins, boot to a live CD and see what it says.

sundialsvcs 01-13-2014 09:05 PM

I suspect it would be more accurate to say that there is, indeed, quite a bit of "background activity going on," which blocked the umount request until it could complete.

You are, after all, deleting "71 gigabytes worth of" disk files here ... and simple reasoning says that it's going to take quite a bit of time to readjust all of the relevant data structures. While the filesystem, quite graciously, might not oblige any given user process to wait around while it (the filesystem) "gets its paperwork in order," it certainly would have to block any request to unmount the device.

docbop 01-13-2014 09:14 PM

Here's what an old Linux Journal article said.....

>>>>
rm is the command used, in Linux terminology, to unlink a file. What this means is that the directory entry for the file is removed. A side effect (and the effect that we generally expect) is that the file is deleted. But this may not be the case.

The Linux file system makes it possible for a file to have more than one name or directory entry. The ln command allows you to create these additional names or links. If these links are hard links, links created with the ln command without the -s option, you have a file that can be accessed by these multiple names.

By using the rm command on one of these names, you only delete the name, not the actual file. When the last name pointing to the file is removed, the file is finally removed.

<<<<
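The quoted behaviour is easy to demonstrate in a throwaway directory (the file names below are made up for the example):

```shell
# rm removes a name, not the data: the file survives as long as
# another hard link (or an open file descriptor) still points to it.
tmp=$(mktemp -d)
echo "payload" > "$tmp/original"
ln "$tmp/original" "$tmp/second-name"   # second hard link, same inode

stat -c '%h' "$tmp/original"            # link count is now 2

rm "$tmp/original"                      # removes one name only
cat "$tmp/second-name"                  # prints "payload" - data intact

rm "$tmp/second-name"                   # last link gone: space reclaimed
rmdir "$tmp"
```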

kubuntu-man 01-14-2014 01:08 PM

@jefro
I did not mean to disrespect lleb, I just wanted to say that I had already thought about this.

As the umount/remount cycle (usually with an fsck in between) always helped, and the space was freed overnight this time, I would say that booting a live CD will probably not give more information. Additionally, the PC needs to run nearly 24/7 because of all the services, and it is always difficult to find a good moment to do this. Except, maybe, 3 o'clock at night ...

@sundialsvcs
When deleting so much data, I take care to choose a moment when the disk is not (or not very) busy. Formerly, I ensured that the disk was completely idle.
Since I have all my new services running, I can no longer totally ensure this. But still, I choose a moment with the lowest possible I/O load.
Indeed, I do not know whether the filesystem driver still handles user processes' I/O with higher priority than the "internal paperwork".
BTW: the umount was never really blocked (as it is when there are open files), it was only very slow.

@docbop
Yes, I know about the relation between directory entry and inodes. Also that an inode can have multiple hard links.
But if multiple hard links were the problem, neither the umount/remount cycle nor waiting overnight would have helped. I have no services, cron jobs, or scripts that perform 'rm' on my /bigdata filesystem.

So far, the "internal paperwork" explanation from sundialsvcs appears the most reasonable one to me.

selfprogrammed 01-20-2014 04:25 PM

This discussion has gone on as if your FS were ext2.
What if the FS is ext4 with journalling enabled?
Ext4 also has delayed allocation. You can do your own web search; I got 94,000 hits.
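Whether the filesystem in question really is ext4 with a journal can be checked without unmounting. A sketch, using the /bigdata mount point from earlier posts; /dev/sdXN is a placeholder for the real device, and tune2fs needs root:

```shell
# Report the filesystem type of the mount point.
findmnt -no FSTYPE /bigdata

# Dump the superblock feature list; 'has_journal' indicates a
# journalled ext3/ext4 filesystem. Replace /dev/sdXN accordingly.
tune2fs -l /dev/sdXN | grep -i 'features'
```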


All times are GMT -5.