LinuxQuestions.org
Welcome to the most active Linux Forum on the web.
LinuxQuestions.org > Forums > Linux Forums > Linux - Server
Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
Old 06-24-2011, 10:49 AM   #1
mdpolaris
LQ Newbie
 
Registered: May 2004
Posts: 9

Rep: Reputation: 0
Extremely Large Metadata size for directory on an ext4 filesystem


I am running CentOS 5.5 with a 14T ext4 volume. We are sharing out a few sub-directories via NFS. Our customer was doing some penetration testing with our web app that writes to one of those NFS shares. We are not sure if they did something to cause the metadata to grow so large or if it is corrupt. Here is the listing:

drwxrwxr-x 1 owner owner 470M Jun 24 18:15 temp.bad

I suppose the metadata could actually be that large; however, we have been unable to perform any operations on that directory to determine whether it is simply loaded with files or is corrupted. We have not run an fsck on the volume because we would need to schedule downtime for our customers to do so. Has anyone come across this before? Do you have any recommendations for examining this issue in more detail? Thanks.

Mike

Volume in question:
/dev/sdb1 ext4 13T 1.7T 11T 14% /data01
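A quick way to gauge whether the directory is simply packed with entries is to count them without the sorting and per-file stat() calls that make a plain `ls` hang on huge directories. Worth noting: an ext2/3/4 directory inode grows as entries are added but never shrinks when they are deleted, so a 470M directory may just have held tens of millions of files at some point. A sketch (the `temp.bad` path is the one from the listing above; adjust as needed):

```shell
# Count entries without sorting them; "ls -f" disables sorting and
# implies -a, so "." and ".." are included in the count.
dir=temp.bad                # path assumed from the listing above
ls -f "$dir" | wc -l

# GNU find alternative: print one byte per entry, count bytes
# (excludes "." and "..", never builds a name list in memory).
find "$dir" -mindepth 1 -maxdepth 1 -printf x | wc -c
```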
 
Old 06-25-2011, 04:10 AM   #2
clvic
Member
 
Registered: Feb 2008
Location: Rome, Italy
Distribution: OpenSuSE 11.x, vectorlinux, slax, Sabayon
Posts: 206
Blog Entries: 2

Rep: Reputation: 45
Some questions:
  1. How many files are there in the directory?
  2. What command did you use to determine the size of the directory? Could you post the output of "ls -lsd <dir>"?

And a suggestion: you could try running a read-only fsck using the "-n" option. This will tell you whether there is any problem, but be careful and test it on another filesystem first! I tried it on my /boot with ext3 and it worked fine.
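Sketched concretely, with the device and mount point taken from the first post (both are assumptions about the poster's layout). Note that `fsck -n` output on a *mounted* filesystem can include spurious errors, so treat it as a hint, not a verdict:

```shell
# Allocated size of the directory inode itself (first column is the
# block count), without touching its contents:
ls -lsd /data01/temp.bad      # path assumed from the first post

# Read-only filesystem check: -n answers "no" to every repair prompt,
# so nothing is modified. Expect some noise while the volume is
# mounted; an unmounted check is still the definitive test.
fsck.ext4 -n /dev/sdb1
```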
 
Old 06-27-2011, 04:47 AM   #3
JZL240I-U
Senior Member
 
Registered: Apr 2003
Location: Germany
Distribution: openSuSE Tumbleweed-KDE, Mint 21, MX-21, Manjaro
Posts: 4,629

Rep: Reputation: Disabled
What parameters were used when the 14 TB file system was created? (AFAIR there is a default of 5% of the file system reserved for the journal, up to an upper limit of (...darn, memory fails here)).
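The creation-time parameters can be read back from the superblock with no downtime. (For what it's worth, the 5% default is the reserved-block percentage rather than the journal; the journal has its own, much smaller, size limit.) A sketch, again assuming the device from the first post:

```shell
# Dump the superblock (read-only, safe on a mounted filesystem) and
# pull out the fields relevant to the question: block size, reserved
# block count, journal inode, and the feature flags (has_journal,
# dir_index, etc.).
tune2fs -l /dev/sdb1 | grep -Ei 'block size|reserved block|journal|features'
```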
 
  

