Old 05-14-2017, 10:46 AM   #1
onebuck
Moderator
 
Registered: Jan 2005
Location: Central Florida 20 minutes from Disney World
Distribution: Slackware®
Posts: 13,925
Blog Entries: 44

Rep: Reputation: 3159
How to Delete HUGE (100-200GB) Files in Linux


Hi,

I know seasoned GNU/Linux users may find this material old hat, but for new users it can be helpful:

How to Delete HUGE (100-200GB) Files in Linux
Quote:
Usually, to delete/remove a file from the Linux terminal, we use the rm command (delete files), the shred command (securely delete a file), the wipe command (securely erase a file), or the secure-deletion toolkit (a collection of secure file-deletion tools).
We can use any of the above utilities to deal with relatively small files. What if we want to delete/remove a huge file or directory, say of about 100-200GB? This may not be as easy as it seems, in terms of the time taken to remove the file (I/O scheduling) as well as the amount of RAM consumed while carrying out the operation.
In this tutorial, we will explain how to efficiently and reliably delete huge files/directories in Linux.
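As a rough sketch of the kind of approach the linked tutorial covers (the paths are placeholders): run the delete in the idle I/O scheduling class so it doesn't starve other work, and for huge directory trees sync from an empty directory instead of using rm -rf.
Code:
# Delete a huge file at idle I/O priority (-c 3 = idle class; effective
# with I/O schedulers such as CFQ/BFQ):
ionice -c 3 rm -f /path/to/hugefile

# Empty a huge directory tree; rsync from an empty directory is often
# faster than rm -rf (the trailing slashes matter):
mkdir -p /tmp/empty
ionice -c 3 rsync -a --delete /tmp/empty/ /path/to/hugedir/
rmdir /path/to/hugedir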
You may also find this helpful: 5 Ways to Empty or Delete a Large File Content in Linux
Quote:
Occasionally, while dealing with files in the Linux terminal, you may want to clear the content of a file without opening it in any of the Linux command-line editors. How can this be achieved? In this article, we will go through several different ways of emptying file content with the help of some useful commands.
Caution: Before we proceed to the various ways, note that because in Linux everything is a file, you must always make sure that the file(s) you are emptying are not important user or system files. Clearing the content of a critical system or configuration file could lead to a fatal application or system failure.
With that said, below are ways of clearing file content from the command line.
For new users: make sure the files you are emptying are not necessary or important.
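For example, any of these will empty a file in place (access.log is just a placeholder name):
Code:
> access.log                  # shell redirection truncates the file to zero
: > access.log                # same effect; works where a bare > is rejected
cat /dev/null > access.log    # the classic form
truncate -s 0 access.log      # coreutils truncate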

BTW, you do make backups of your system, right? Faults or errant operations can interrupt your daily activity, and a restore will get you back to the state your system was in on the backup's creation date.

I like to grandfather my backups. Grandfathering means rotating your backups and storing the oldest (grandfather) copy securely off site, or somewhere safe, so that you can at least restore the system to the grandfather's creation date. I have used a month-old restore to save customers from errant operations. Sure, a daily backup should suffice, but if something happens to the daily copy then you are out of luck restoring from a damaged backup.
Better safe than sorry: verify that your backups are valid.
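A minimal sketch of that kind of rotation, assuming dated tarballs and hypothetical /backup paths (run daily from cron):
Code:
#!/bin/sh
# Grandfather-father-son rotation: daily tarball (son), weekly copy
# (father), monthly copy to be stored off site (grandfather).
DATE=$(date +%F)
tar -czf /backup/daily/home-$DATE.tar.gz /home
[ "$(date +%u)" -eq 7 ] && cp /backup/daily/home-$DATE.tar.gz /backup/weekly/
[ "$(date +%d)" = "01" ] && cp /backup/daily/home-$DATE.tar.gz /backup/offsite/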

Hope this helps.
Have fun & enjoy!


Last edited by onebuck; 05-28-2017 at 09:21 AM. Reason: typo
 
Old 05-15-2017, 09:40 PM   #2
jefro
Moderator
 
Registered: Mar 2008
Posts: 21,982

Rep: Reputation: 3625
I've seen all sorts of read and write tests on various filesystems, but I don't think I've ever seen one include a file-deletion metric. As files get larger and larger, maybe we ought to learn whether one filesystem has an edge in deleting.
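A crude way to measure it yourself on a given filesystem (testfile is a placeholder; fallocate needs filesystem support, otherwise fall back to dd):
Code:
# Create a ~100GB test file, flush caches, then time the unlink:
fallocate -l 100G testfile
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches
time rm testfile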
 
Old 05-21-2017, 03:35 AM   #3
LeoRW
LQ Newbie
 
Registered: May 2017
Posts: 7

Rep: Reputation: Disabled
Sometimes huge log files are caused by a process that is holding the file open even though it is no longer linked (i.e., the file has been deleted). Once you kill that process, it will release the disk resources and the space the file used will be reclaimed. You can check with lsof +L1 as root: look at the file size, a 0 in the NLINK column, the PID, and the path.
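For example (PID and FD below are placeholders to be read off the lsof output):
Code:
# List open files whose on-disk link count is zero (deleted but held open):
sudo lsof +L1
# If the process can't be killed, truncating the file through /proc also
# reclaims the space while leaving the process running:
sudo sh -c ': > /proc/PID/fd/FD'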

Last edited by LeoRW; 05-21-2017 at 03:44 AM.
 
Old 05-21-2017, 07:42 AM   #4
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,659
Blog Entries: 4

Rep: Reputation: 3941
Quote:
Originally Posted by LeoRW
Sometimes huge log files are caused by a process that is holding the file open even though it is no longer linked (i.e., the file has been deleted). Once you kill that process, it will release the disk resources and the space the file used will be reclaimed. You can check with lsof +L1 as root: look at the file size, a 0 in the NLINK column, the PID, and the path.
Also note that the "file size" entry in the directory is usually updated only when the file is closed. (If it were updated more often, this would effectively double the I/O load with very little added benefit.) The internal file-table entry contains up-to-date information but the directory will be stale (when a file is being actively written to).

Last edited by sundialsvcs; 05-21-2017 at 07:43 AM.
 
Old 05-21-2017, 08:28 AM   #5
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: Rocky Linux
Posts: 4,779

Rep: Reputation: 2212
Quote:
Originally Posted by sundialsvcs
Also note that the "file size" entry in the directory is usually updated only when the file is closed. (If it were updated more often, this would effectively double the I/O load with very little added benefit.) The internal file-table entry contains up-to-date information but the directory will be stale (when a file is being actively written to).
For ext2/3/4, the file size isn't even in the directory. It's in the file's inode, and a stat() call to get information from that inode will use the kernel's in-core copy of the inode, which is updated in real time as the file is written. Any filesystem that allows hard links has to do it that way. What's the alternative -- seek out all of the (possibly hundreds of) directory entries for a file and update them all?? Yes, FAT variants, which do not support hard links, do store the file size in the directory entry.
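A quick way to watch this in action (paths are placeholders):
Code:
# The size reported by stat() tracks writes in real time, since it comes
# from the in-core inode rather than any directory entry:
dd if=/dev/zero of=/tmp/growing bs=1M count=2048 &
watch -n1 stat -c %s /tmp/growing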

Data that is still in unflushed stdio buffers, of course, has not yet been written to the file, at least as far as the kernel is concerned, and so would not be reflected in the file size until the buffer is flushed.

Perhaps you were thinking about NFS or other remote filesystems. There, the view from different client machines could indeed be inconsistent while one client was writing to the file.

Last edited by rknichols; 05-21-2017 at 08:40 AM. Reason: Add, "Perhaps you were thinking ..."
 
Old 05-21-2017, 08:38 AM   #6
onebuck
Moderator
 
Registered: Jan 2005
Location: Central Florida 20 minutes from Disney World
Distribution: Slackware®
Posts: 13,925

Original Poster
Blog Entries: 44

Rep: Reputation: 3159
Member response

Hi,

I should have linked this earlier: Linux File System
Quote:
Understanding UNIX/Linux file system:
Part I <- Understanding Linux filesystems
Part II <- Understanding Linux superblock
Part III <- An example of surviving a Linux filesystem failure
Part IV <- Understanding filesystem Inodes
Part V <- Understanding filesystem directories
Part VI <- Understanding UNIX/Linux symbolic (soft) and hard links
Part VII <- Why isn’t it possible to create hard links across file system boundaries?
Hope this helps.
Have fun & enjoy!
 
  

