Usually, to delete/remove a file from the Linux terminal, we use the rm command (delete files), the shred command (securely delete a file), the wipe command (securely erase a file), or the secure-delete toolkit (a collection of secure file deletion tools).
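In their simplest forms (assuming the wipe and secure-delete packages are installed; the filename is just a placeholder), those look like:

Code:
rm file.txt          # unlink the file
shred -u file.txt    # overwrite the data, then remove the file
wipe file.txt        # securely erase (from the wipe package)
srm file.txt         # from the secure-delete toolkit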
Any of the above utilities will do for relatively small files. But what if we want to delete/remove a huge file or directory, say of about 100-200 GB? That may not be as easy as it seems, both in the time it takes to remove the file (I/O scheduling) and in the amount of RAM consumed while carrying out the operation.
In this tutorial, we will explain how to efficiently and reliably delete huge files/directories in Linux.
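As a rough sketch of the kind of approach involved (my example, not necessarily the exact commands the tutorial settles on), you can lower the delete's I/O priority with ionice, and for a directory holding millions of files use the rsync empty-directory trick, which is often faster than a plain rm -rf:

Code:
# delete one huge file at idle I/O priority (path is a placeholder)
ionice -c 3 rm -v /path/to/hugefile

# empty a huge directory by syncing it against an empty one
mkdir /tmp/empty
ionice -c 3 rsync -a --delete /tmp/empty/ /path/to/hugedir/
rmdir /path/to/hugedir /tmp/empty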
Occasionally, while dealing with files in the Linux terminal, you may want to clear the content of a file without opening it in a command-line editor. How can this be achieved? In this article, we will go through several different ways of emptying file content with the help of some useful commands.

Caution: before we proceed to the various methods, note that because everything in Linux is a file, you must always make sure that the file(s) you are emptying are not important user or system files. Clearing the content of a critical system or configuration file could lead to a fatal application or system error.
With that said, below are means of clearing file content from the command line.
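For example (a sketch; access.log is just a placeholder name), all of the following truncate a file to zero length:

Code:
> access.log                 # plain shell redirection (bash/sh)
: > access.log               # same effect, using the : no-op builtin
truncate -s 0 access.log     # coreutils truncate
cat /dev/null > access.log   # the classic form
echo -n "" > access.log      # echo without the trailing newline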
For new users, be sure the files are not something necessary or important.
BTW, you do make backups of your system, right? Faults or an errant operation can interrupt your daily activity, and a restore will get you back to the state things were in on the backup's creation date.
I like to grandfather my backups. Grandfathering is when you rotate your backups and store the grandfather securely off site, or somewhere safe, so that you can at least restore to the state of the grandfather's creation date. I have used a month-old restore to save customers from errant operations. Sure, a daily should suffice, but if something happens to the daily then you are out of luck restoring from a damaged backup.
Better to be safe and to ensure a valid backup.
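For anyone who wants to try it, here is a minimal sketch of that grandfather-father-son rotation using tar; the paths and retention periods are illustrative assumptions, not a recommendation:

Code:
#!/bin/bash
set -e
dest=/backup                      # hypothetical backup directory
stamp=$(date +%F)                 # e.g. 2017-05-28

tar -czf "$dest/daily-$stamp.tar.gz" /home /etc

# promote Sunday's daily to weekly, and the 1st's daily to monthly
[ "$(date +%u)" = 7 ]  && cp "$dest/daily-$stamp.tar.gz" "$dest/weekly-$stamp.tar.gz"
[ "$(date +%d)" = 01 ] && cp "$dest/daily-$stamp.tar.gz" "$dest/monthly-$stamp.tar.gz"

# prune: keep ~7 dailies and ~5 weeklies; the monthlies (the
# grandfathers) get copied off site instead of deleted
find "$dest" -name 'daily-*'  -mtime +7  -delete
find "$dest" -name 'weekly-*' -mtime +35 -delete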
Hope this helps.
Have fun & enjoy!
I've seen all sorts of read and write tests on various filesystems, but I don't think I've ever seen one include a file-deletion metric. As files get larger and larger, maybe we ought to find out whether some filesystems delete faster than others.
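Nothing rigorous, but a quick way to put a number on it yourself (the mount point is hypothetical; repeat once per filesystem you care about):

Code:
mkdir -p /mnt/test/deltest && cd /mnt/test/deltest
for i in $(seq 1 100000); do : > "f$i"; done   # create 100k empty files
cd /mnt/test
sync && time rm -rf deltest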
Sometimes huge log files are caused by a process that is holding the file open even though the file is no longer linked (it has been deleted, but its space is not freed). Once you kill that process, it will release the disk resources and the disk space will be restored. You can check with lsof +L1 as root; look at the file size, a 0 in the NLINK column, the PID, and the path.
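A sketch of what that looks like in practice; the PID and file descriptor below are made up to match the example output, and the /proc truncation trick is an alternative to killing the process outright:

Code:
sudo lsof +L1
# COMMAND   PID USER  FD  TYPE DEVICE  SIZE/OFF NLINK   NODE NAME
# myapp    1234 root   4w  REG  253,0  98765432     0 131078 /var/log/app.log (deleted)

# reclaim the space without killing PID 1234 by truncating the
# still-open file through /proc (fd 4 taken from the output above)
sudo sh -c ': > /proc/1234/fd/4'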
Also note that the "file size" entry in the directory is usually updated only when the file is closed. (If it were updated more often, this would effectively double the I/O load with very little added benefit.) The internal file-table entry contains up-to-date information but the directory will be stale (when a file is being actively written to).
For ext2/3/4, the file size isn't even in the directory. It's in the file's inode, and a stat() call to get information from that inode will use the kernel's in-core copy of the inode, which is updated in real time as the file is written. Any filesystem that allows hard links has to do it that way. What's the alternative -- seek out all of the (possibly hundreds of) directory entries for a file and update them all? Yes, FAT variants, which do not support hard links, do store the file size in the directory entry.
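Easy to see for yourself on ext4 (a throwaway demo; /tmp/growing is a scratch file): keep a single file descriptor open, write to it slowly, and poll the size with stat, which never waits for a close:

Code:
# writer appends 1 MiB per second to one open fd for a minute
( for i in $(seq 60); do head -c 1M /dev/zero; sleep 1; done ) > /tmp/growing &

stat -c '%s bytes' /tmp/growing   # run it a few times: the size
                                  # climbs long before the file is closed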
Data that is still in unflushed stdio buffers, of course, has not yet been written to the file, at least as far as the kernel is concerned, and so would not be reflected in the file size until the buffer is flushed.
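If you need the reported size to track every write for such a program, coreutils stdbuf can shrink or disable the stdio buffer (my_logger here is a hypothetical long-running writer):

Code:
my_logger > /tmp/app.log             # size can lag by up to one stdio buffer
stdbuf -o0 my_logger > /tmp/app.log  # unbuffered: size tracks each write()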
Perhaps you were thinking about NFS or other remote filesystems. There, the view from different client machines could indeed be inconsistent while one client was writing to the file.
Understanding the UNIX/Linux file system:
Part I - Understanding Linux filesystems
Part II - Understanding the Linux superblock
Part III - An example of surviving a Linux filesystem failure
Part IV - Understanding filesystem inodes
Part V - Understanding filesystem directories
Part VI - Understanding UNIX/Linux symbolic (soft) and hard links
Part VII - Why isn't it possible to create hard links across file system boundaries?