Old 10-25-2016, 07:43 PM   #16
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,126

Rep: Reputation: 4120

No, and it is more aimed at large files, not your scenario. All filesystems are trying to accommodate this.
Given you are prepared to zip the data, (read) performance is not an issue?

I haven't used reiser in a long while - I prefer filesystems with current support.
As to your query re bytes-per-inode (ext4), let the mkfs worry about that - use the "-T small" parameter and it will adjust inode size and ratio for you. Have a look in /etc/mke2fs.conf for options.
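For instance, something along these lines (untested sketch - substitute your real device for the /dev/sdXN placeholder before running anything):

Code:
# see the predefined usage types and their inode_ratio / inode_size values
grep -A20 '\[fs_types\]' /etc/mke2fs.conf

# build an ext4 filesystem tuned for lots of small files
mkfs.ext4 -T small /dev/sdXN

# confirm the inode count and size it chose
tune2fs -l /dev/sdXN | grep -i inode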

Last edited by syg00; 10-25-2016 at 07:45 PM. Reason: last sentence
 
1 member found this post helpful.
Old 10-26-2016, 06:00 AM   #17
MichelCote
LQ Newbie
 
Registered: Oct 2009
Location: Laval Québec
Distribution: CentOS 6
Posts: 20

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by syg00 View Post
No, and it is more aimed at large files, not your scenario. All filesystems are trying to accommodate this.
Given you are prepared to zip the data, (read) performance is not an issue?

I haven't used reiser in a long while - I prefer filesystems with current support.
As to your query re bytes-per-inode (ext4), let the mkfs worry about that - use the "-T small" parameter and it will adjust inode size and ratio for you. Have a look in /etc/mke2fs.conf for options.
Thanks for the great answer.

And yes, read performance is not a problem; at least I haven't seen any issues yet.

Well, I've decided after a night's sleep. I'm backing the files up and I'll reformat the partition using ext4 and the -T small parameter. As for tar-ing the files each day, I've thought of something else... Since the files in each day's directory are json files constructed from much bigger xml files, from which I strip anything I don't actually need, I'm thinking of testing the creation of one json file containing the whole day's worth of transactions. Then the web page (HTML/AngularJS) will deal with the information. I know I can create the single json file, but I'm not sure the JavaScript on the client side will be able to handle that amount of information. If that fails, I still have the tar-ing idea: probably un-tar the day's file, when needed by the client application, into a temp folder that gets cleared out by a cron job.
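Roughly what I have in mind, as a sketch only (untested; the /data/transactions and /tmp/day-cache paths are just placeholders, and the merge step assumes jq is installed):

Code:
# option 1: merge one day's json files into a single json array
DAY=/data/transactions/2016-10-25
jq -s '.' "$DAY"/*.json > "$DAY.json"

# option 2: tar up the day's directory instead
tar -czf "$DAY.tar.gz" -C "$DAY" .

# when the client application needs that day, un-tar it into a temp folder...
mkdir -p /tmp/day-cache/2016-10-25
tar -xzf "$DAY.tar.gz" -C /tmp/day-cache/2016-10-25

# ...and let a daily cron job clear out anything older than a day
find /tmp/day-cache -mindepth 1 -mtime +1 -delete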

So I'm currently in the process of backing up. Will let you all know how it goes.

Thanks.
 
Old 10-26-2016, 01:25 PM   #18
MichelCote
LQ Newbie
 
Registered: Oct 2009
Location: Laval Québec
Distribution: CentOS 6
Posts: 20

Original Poster
Rep: Reputation: 0
Well so far so good guys...

mkfs.ext4 -T small /dev/sda9 worked great.

Now I have 7,864,320 inodes instead of 1,966,080, so four times as many...
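For anyone checking the same thing on their own box, this is how I read the numbers (adjust the mount point and device for your setup; /data here is just a placeholder):

Code:
# free and used inodes per filesystem
df -i /data

# or read the count straight from the superblock
tune2fs -l /dev/sda9 | grep 'Inode count'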

Pair that with my two options of either creating a single json file for each day or tar-ing the files for each day, and I think I'll be fine.

Thanks for all the great help from everyone.
 
Old 10-26-2016, 05:11 PM   #19
Lsatenstein
Member
 
Registered: Jul 2005
Location: Montreal Canada
Distribution: Fedora 31 and Tumbleweed (Gnome versions)
Posts: 311
Blog Entries: 1

Rep: Reputation: 59
XFS is the way to go.

https://www.centos.org/forums/viewtopic.php?t=51114

From RedHat

6.9. Migrating from ext4 to XFS
A major change in Red Hat Enterprise Linux 7 is the switch from ext4 to XFS as the default filesystem. This section highlights the differences which may be encountered when using or administering an XFS filesystem.

Note
The ext4 file system is still fully supported in Red Hat Enterprise Linux 7 and may be selected at installation if desired. While it is possible to migrate from ext4 to XFS it is not required.


https://access.redhat.com/documentat...e/ch06s09.html


I have been using xfs on SSDs and 3-terabyte drives for over a year, without problems.
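If you want to try it, creating an XFS filesystem is no harder than ext4 (rough sketch; the device and mount point are placeholders, and you need the xfsprogs package on CentOS):

Code:
# create the filesystem
mkfs.xfs /dev/sdXN

# mount it and check its geometry / inode settings
mount /dev/sdXN /mnt/data
xfs_info /mnt/data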
 
Old 10-27-2016, 06:15 AM   #20
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
Xfs is preferable when you can't predict how many inodes you will need.

What happens is xfs adds more inodes when you run out, without needing any intervention. It does have a small tendency to scatter the inodes around, which can impact performance - but it is relatively minor.

A bigger performance impact depends on how the files are organized. Directory searches (and deletes) can get slow if all the files are in one directory.
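One common way around that (a generic sketch, not tied to your layout; /data/transactions and /incoming are placeholders) is to fan the files out into one subdirectory per day rather than keeping everything in a single flat directory:

Code:
# file each day's data under YYYY/MM/DD so no single directory gets huge
DAY=$(date +%Y/%m/%d)
mkdir -p "/data/transactions/$DAY"
mv /incoming/*.json "/data/transactions/$DAY/"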
 
Old 10-27-2016, 07:32 AM   #21
MichelCote
LQ Newbie
 
Registered: Oct 2009
Location: Laval Québec
Distribution: CentOS 6
Posts: 20

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by jpollard View Post
Xfs is preferable when you can't predict how many inodes you will need.

What happens is xfs adds more inodes when you run out, without needing any intervention. It does have a small tendency to scatter the inodes around, which can impact performance - but it is relatively minor.

A bigger performance impact depends on how the files are organized. Directory searches (and deletes) can get slow if all the files are in one directory.
Thanks, Leslie and jpollard,

I'll certainly keep this in mind, as I'm sure I'll have to add more to the server down the line.

Thanks
 
Old 10-27-2016, 08:20 AM   #22
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,659
Blog Entries: 4

Rep: Reputation: 3941
Also, you should (IMHO) always install the system with "LVM = Logical Volume Management" installed. (With Ubuntu, it is a standard option available during installation.) This would have allowed you to increase the available disk space simply by adding another "physical volume" and adding it to the appropriate storage pool. Linux applications would perceive a single mount-point that had suddenly grown bigger. They would not perceive that the storage was actually being provided by multiple devices, nor how the data was being distributed among those devices.
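The growth step would look roughly like this (sketch only - the volume group, logical volume, size and device names are placeholders, and the -r flag assumes the filesystem supports online resize):

Code:
# add a new disk to the pool
pvcreate /dev/sdb
vgextend vg_data /dev/sdb

# grow the logical volume and resize the filesystem in one step
lvextend -r -L +500G /dev/vg_data/lv_data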

Last edited by sundialsvcs; 10-27-2016 at 08:22 AM.
 
1 member found this post helpful.
Old 10-27-2016, 08:35 AM   #23
MichelCote
LQ Newbie
 
Registered: Oct 2009
Location: Laval Québec
Distribution: CentOS 6
Posts: 20

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by sundialsvcs View Post
Also, you should (IMHO) always install the system with "LVM = Logical Volume Management" installed. (With Ubuntu, it is a standard option available during installation.) This would have allowed you to increase the available disk space simply by adding another "physical volume" and adding it to the appropriate storage pool. Linux applications would perceive a single mount-point that had suddenly grown bigger. They would not perceive that the storage was actually being provided by multiple devices, nor how the data was being distributed among those devices.
Thanks sundial,

Will keep this in mind as well. Feeling less and less noobish... lol
 
Old 11-01-2016, 08:49 AM   #24
Lsatenstein
Member
 
Registered: Jul 2005
Location: Montreal Canada
Distribution: Fedora 31 and Tumbleweed (Gnome versions)
Posts: 311
Blog Entries: 1

Rep: Reputation: 59
Hello Michel,

I have been a Fedora user for the past 10 years. I recently installed SUSE's Tumbleweed. I have also run CentOS, Ubuntu, etc.

Red Hat, SUSE, and their derivatives have selected XFS for /home. In fact I use xfs for all partitions. Perhaps it is because I read that xfs is more resilient than ext4 (good recovery after power failure, so some statements claim). I am using xfs with an SSD and with spinning hard disks.

Can I, as a workstation user, tell the difference between ext4 and xfs for performance or recovery? I think not. Have I lost data after a power-failure crash? No.
 
Old 11-01-2016, 01:59 PM   #25
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
I've never lost data with ext2, 3, or 4, even after a crash. Files that were partially written at the time, sure - but xfs has the same problem.

The only real advantages xfs has are automatically extending the inode list and a maximum file size of 8 exbibytes. Both are extent-based filesystems, with ext4 providing upward compatibility from ext2 and 3. Both ext4 and xfs use journals. The main limitations of ext4 are a fixed number of inodes (set at creation time) and a maximum of 16TB for file and volume size.

On Linux, xfs has been quite reliable (the older Irix-based xfs tended to lose free blocks on a crash).
 
Old 11-02-2016, 06:26 AM   #26
MichelCote
LQ Newbie
 
Registered: Oct 2009
Location: Laval Québec
Distribution: CentOS 6
Posts: 20

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by Lsatenstein View Post
Hello Michel,

I have been a Fedora user for the past 10 years. I recently installed SUSE's Tumbleweed. I have also run CentOS, Ubuntu, etc.

Red Hat, SUSE, and their derivatives have selected XFS for /home. In fact I use xfs for all partitions. Perhaps it is because I read that xfs is more resilient than ext4 (good recovery after power failure, so some statements claim). I am using xfs with an SSD and with spinning hard disks.

Can I, as a workstation user, tell the difference between ext4 and xfs for performance or recovery? I think not. Have I lost data after a power-failure crash? No.
Quote:
Originally Posted by jpollard View Post
I've never lost data with ext2, 3, or 4, even after a crash. Files that were partially written at the time, sure - but xfs has the same problem.

The only real advantages xfs has are automatically extending the inode list and a maximum file size of 8 exbibytes. Both are extent-based filesystems, with ext4 providing upward compatibility from ext2 and 3. Both ext4 and xfs use journals. The main limitations of ext4 are a fixed number of inodes (set at creation time) and a maximum of 16TB for file and volume size.

On Linux, xfs has been quite reliable (the older Irix-based xfs tended to lose free blocks on a crash).
Hello guys,

Thanks for the input. I've pretty much made up my mind to try xfs the next time I add a partition on my servers.

Thanks again for your facts and opinions.
 
  



