-   Linux - Kernel
-   -   cannot delete a file that has been memory mapped until all dirty pages are written

undernet 07-31-2013 10:30 PM

cannot delete a file that has been memory mapped until all dirty pages are written
I am memory-mapping files from Java, which wraps the mmap kernel function. It all works fine, except that when I close my program down and try to delete the memory-mapped file, the delete hangs for ages until all the dirty pages are written to the file. So if I memory-map a 25 GB file, do a load of writes (resulting in loads of dirty memory pages that map to the file), close the program down, then try to delete the file, the kernel will prevent the delete from happening until all 25 GB of dirty pages are written out, and this causes any program using the drive to hang until it has finished. The computer will not shut down if I ask it to; it just hangs on the Fedora shutdown logo and I have to flip the power switch manually. When I do, after restarting, my SSD is frozen because of the power loss during writing, and I have to disconnect the SATA power cable to reset it. Highly annoying!

I would like to be able to delete the file immediately, with any unfinished dirty-page writes simply discarded from memory, since the file they would be paged out to no longer exists. Is there a kernel parameter that gives deletes preference like this?
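For reference, the Java side of this is a MappedByteBuffer obtained from FileChannel.map(), which wraps mmap(). Here is a minimal sketch (file name and sizes are my own, scaled down from the 25 GB case) that dirties one page per 4 KiB written, which is exactly the state that piles up as writeback work:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapDirtyPages {
    public static void main(String[] args) throws IOException {
        Path path = Path.of("mapped-demo.bin"); // hypothetical file name
        long size = 4L << 20;                   // 4 MiB for the demo, not 25 GB
        try (FileChannel ch = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, size);
            // Each write below dirties a separate 4 KiB page; the kernel
            // writes those pages back asynchronously, or when forced.
            for (long pos = 0; pos < size; pos += 4096) {
                buf.put((int) pos, (byte) 1);
            }
            buf.force(); // flush dirty pages now, instead of at delete/shutdown time
        }
        System.out.println(Files.size(path)); // prints 4194304
    }
}
```

Calling force() periodically spreads the writeback cost over the program's lifetime rather than concentrating it at exit.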

jailbait 08-01-2013 05:34 PM

Do you issue a munmap() call before you delete the file?

Steve Stites

undernet 08-02-2013 05:55 PM

Turns out there is no way to make sure a memory-mapped file is unmapped in Java. See this 11-year-old bug.

The only workaround is to make all references to the memory-mapped file null, then call System.gc() in Java and pray that the garbage collector runs.
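The workaround described above looks roughly like this in code (a sketch with a placeholder file name; closing the FileChannel alone does not release the mapping):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class UnmapWorkaround {
    public static void main(String[] args) throws IOException {
        Path path = Path.of("unmap-demo.bin"); // hypothetical file name
        MappedByteBuffer buf;
        try (FileChannel ch = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            buf.put(0, (byte) 42);
        }
        // Closing the channel does NOT unmap the buffer. Drop every
        // reference and ask the GC to run; only when the buffer is
        // collected does its cleaner call munmap().
        buf = null;
        System.gc(); // a request, not a guarantee
        Files.deleteIfExists(path);
        System.out.println(Files.exists(path)); // prints false
    }
}
```

System.gc() is only a hint, which is why the bug report calls this unreliable: if the collector does not run, the mapping (and its dirty pages) outlives the program's last reference to it.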

sundialsvcs 08-05-2013 11:06 AM

It does not surprise me in the slightest that, if you have dirtied hundreds or thousands of pages in a memory-mapped disk file, the operating system is going to make sure that all of those physical disk writes actually get done!

If what you want instead is a RAM disk, such that you really don't care whether physical disk writes ever occur, then you can have that too... and you can mmap() it as well.

undernet 08-05-2013 01:55 PM

How can you create a RAM disk for the file if the file is bigger than RAM itself? That's why I am memory-mapping the file: it will not all fit in RAM. If it did, I would just perform writes to the RAM disk and then write the whole RAM disk out to the SSD once every few minutes or so, in case of power failure.

sundialsvcs 08-06-2013 10:16 AM

Most commonly, you map a portion of the file in a "sliding window" approach.
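A sliding-window sketch of that idea (window size, file size, and file name here are my own demo values): map one region, work within it, flush it, then move the window, so only a bounded amount of the file is mapped and dirty at any moment.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SlidingWindow {
    static final long WINDOW = 1L << 20; // 1 MiB window for the demo

    public static void main(String[] args) throws IOException {
        Path path = Path.of("window-demo.bin"); // hypothetical file name
        long fileSize = 4 * WINDOW;             // could be far larger than RAM
        try (FileChannel ch = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            for (long off = 0; off < fileSize; off += WINDOW) {
                MappedByteBuffer win = ch.map(FileChannel.MapMode.READ_WRITE, off, WINDOW);
                win.put(0, (byte) 1); // work within the current window only
                win.force();          // bound the dirty pages before sliding on
            }
        }
        System.out.println(Files.size(path)); // prints 4194304
    }
}
```

The drawback, as the Java bug above implies, is that each discarded window is only unmapped when the garbage collector gets around to it.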

However, it all comes down to your algorithm of choice. If you are truly accessing "that much data," and truly accessing all of it, then in the end you are going to pay the price of all those disk writes, whether they come from memory-mapped I/O or from paging caused by the virtual-memory subsystem.

You don't describe what you are actually doing... what your algorithm is... so it is impossible to speculate how the algorithm might be improved upon. But I would hazard an almost-certain guess that it could be, using hash tables or other data structures that permit addressing a very large name space while storing only the portion that is used. If you're waiting noticeable seconds or minutes for a bunch of pending writes, then you're beating up the computer pretty good, such that it's going to have bruises and a bad attitude. ;)

undernet 08-06-2013 07:49 PM

It's funny that you mention hash tables, because that's exactly what my program is: a disk-based hash table, which I am currently stress-testing. The file is split into 4 KB buckets matching the page size of the OS and SSD. When a write is performed, the key is hashed to the right bucket and the key/value pair is inserted. The 4 KB bucket is then written to mapped memory for the OS to page out to disk when it feels like it. It seems the deletes were taking ages because the SSD I was using was not up to scratch. I was using a Samsung SSD 840 (not the Pro version), and it would just lock up under heavy writes for ages; I was also using the XFS file system, which has issues with deletes. I have since switched to an Intel SSD 320 with ext4 and it's much better: I can delete the file soon after the program exits.
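The key-to-bucket step described above might look roughly like this (a sketch; the hash function and names are my guesses, not the poster's actual code):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class BucketIndex {
    static final int BUCKET_SIZE = 4096; // matches the OS/SSD page size

    // Map a key to the byte offset of its 4 KiB bucket in the file.
    static long bucketOffset(byte[] key, long numBuckets) {
        int h = Arrays.hashCode(key);                       // placeholder hash
        long bucket = Math.floorMod((long) h, numBuckets);  // wrap into the table
        return bucket * BUCKET_SIZE;                        // bucket-aligned offset
    }

    public static void main(String[] args) {
        long off = bucketOffset("some-key".getBytes(StandardCharsets.UTF_8), 1L << 20);
        System.out.println(off % BUCKET_SIZE); // prints 0: offsets are page-aligned
    }
}
```

Because a good hash scatters keys uniformly, consecutive inserts land on essentially random pages of the file, which is the access pattern discussed below.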

The issue I'm having now is slowdowns on large files. If I insert 100 million entries into the 4 KB buckets of a 2 GB file, it all goes fine: I can do about 200k inserts per second from start to finish. But if I do the same number of inserts with the same data on a 20 GB file, I get a gradual slowdown after about 5 million inserts, and it just gets worse from there. iotop says my inserts are happening at 100 MB/sec with disk I/O near 100% when I first start, but 8 million inserts down the line my inserts are only 10 MB/sec with I/O still near 100%, which I can't get my head around. It's not as if my data-writing patterns change during the insert, as each page being written is mostly random due to the hash function. The only thing I can think of that changes is the Linux kernel's paging patterns, but I don't have time to figure out what's going on inside the kernel paging thread.

I will try using a random-access file instead of mapped memory and see if I get the same issues. In theory it should be slower, due to the explicit OS write calls, but bypassing the OS virtual-memory paging system might give me more consistent performance on large files.
