Linux - Server This forum is for the discussion of Linux Software used in a server related context.


Old 06-24-2011, 11:49 AM   #1
LQ Newbie
Registered: May 2004
Posts: 9

Rep: Reputation: 0
Extremely Large Metadata size for directory on an ext4 filesystem

I am running CentOS 5.5 with a 14T ext4 volume. We are sharing out a few sub-directories via NFS. Our customer was doing some penetration testing with our web app that writes to one of those NFS shares. We are not sure if they did something to cause the metadata to grow so large or if it is corrupt. Here is the listing:

drwxrwxr-x 1 owner owner 470M Jun 24 18:15 temp.bad

I suppose the metadata could actually be that large; however, we have been unable to perform any operations on that directory to determine whether it is simply loaded with files or corrupted. We have not run an fsck on the volume because we would need to schedule downtime for our customers to do so. Has anyone come across this before? Do you have any recommendations for examining this issue in more detail? Thanks.


Volume in question:
/dev/sdb1 ext4 13T 1.7T 11T 14% /data01
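One way to tell whether temp.bad is simply packed with entries, without waiting on the sort that makes a plain "ls" hang on huge directories, is "ls -f", which streams entries in readdir() order. A minimal sketch on a throwaway directory; on the real system the path would be /data01/temp.bad:

```shell
# Sketch: count entries in a huge directory without sorting, which is what
# makes plain "ls" stall. Demonstrated on a throwaway directory; on the
# real system the path would be /data01/temp.bad.
demo=$(mktemp -d)
for i in 1 2 3 4 5; do touch "$demo/file$i"; done
# "ls -f" disables sorting and implies -a, so "." and ".." are counted too:
count=$(ls -f "$demo" | wc -l)
echo "$count"   # 5 files + "." + ".." = 7
rm -rf "$demo"
```

A huge entry count would also explain the 470M directory inode: ext4 does not shrink a directory after its entries are deleted, so the size reflects the largest number of entries it ever held.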
Old 06-25-2011, 05:10 AM   #2
Registered: Feb 2008
Location: Rome, Italy
Distribution: OpenSuSE 11.x, vectorlinux, slax, Sabayon
Posts: 206
Blog Entries: 2

Rep: Reputation: 45
Some questions:
  1. how many files are there in the directory?
  2. what command did you use to determine the size of the directory? Could you post the output of "ls -lsd <dir>"?

And a suggestion: you could try running a read-only fsck with the "-n" option. This will tell you whether there are any problems, but be careful and test it on another filesystem first! I tried it on my ext3 /boot and it worked fine.
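Taking the advice to test on another filesystem first, the read-only check can be rehearsed safely on a scratch ext4 image (all names below are temporary and hypothetical). On the real system the target would be /dev/sdb1; note that fsck -n against a mounted filesystem can report spurious errors, so treat its output as advisory:

```shell
# Rehearsal of the suggested read-only fsck on a throwaway ext4 image.
export PATH="$PATH:/sbin:/usr/sbin"
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=16 status=none  # 16 MiB scratch file
mkfs.ext4 -q -F "$img"       # -F: target is a regular file, not a block device
fsck.ext4 -n "$img"          # -n: open read-only, answer "no" to every prompt
status=$?                    # exit status 0 means no errors were found
rm -f "$img"
```

The same "fsck.ext4 -n /dev/sdb1" invocation would apply to the real volume, ideally with it unmounted or mounted read-only.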
Old 06-27-2011, 05:47 AM   #3
Senior Member
Registered: Apr 2003
Location: Germany
Distribution: openSuSE 42.1_64+Tumbleweed-KDE, Mint 17.3
Posts: 4,057

Rep: Reputation: Disabled
What were the parameters during creation of the 14 TB file system? (AFAIR there is a default of 5 % of the file system reserved for the journal up to an upper limit of (...darn, memory fails here )).
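For what it's worth (hedged, from memory): the 5% default is the reserved-blocks percentage for the superuser (mke2fs -m 5), which is separate from the journal; the journal itself defaults to at most 128 MiB. Either way, the creation-time parameters are recorded in the superblock and can be read back without downtime. A sketch on a scratch image, all paths temporary; on the real volume the argument would be /dev/sdb1:

```shell
# The creation-time parameters are recorded in the ext4 superblock, and
# dumpe2fs can print them without unmounting anything. Demonstrated on a
# scratch image; on the real system the argument would be /dev/sdb1.
export PATH="$PATH:/sbin:/usr/sbin"
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=16 status=none
mkfs.ext4 -q -F "$img"
# "Reserved block count" reflects the -m percentage; journal details appear
# in the journal-related header fields when a journal exists.
dumpe2fs -h "$img" 2>/dev/null | grep -i 'reserved block count'
reserved=$(dumpe2fs -h "$img" 2>/dev/null | grep -ci 'reserved block count')
rm -f "$img"
```

"tune2fs -l /dev/sdb1" prints much the same header if dumpe2fs is unavailable.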


Tags: centos55, ext4

