Linux - Newbie
This Linux forum is for members that are new to Linux. Just starting out and have a question? If it is not in the man pages or the how-tos, this is the place!
If by "the mount" you mean the mount point itself (e.g. /opt), it doesn't matter what filesystem type the mount point lives on. If your "/" partition is ext3, you can have a "/external" mount point under it and mount any filesystem type there. So /external can hold a Reiser filesystem (or a number of others), and that should take care of the 10TB problem. Unless I'm wrong, once you start reading and writing under /external, it follows the rules of that filesystem; the filesystem of the mount point's parent directory doesn't matter.
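A minimal sketch of the above (device names are hypothetical and the commands need root):

```shell
# Root filesystem is ext3; create a mount point on it for a different fs type.
mkdir -p /external

# /dev/sdb1 is a hypothetical partition formatted with XFS (or ReiserFS, etc.).
mount -t xfs /dev/sdb1 /external

# Files under /external now follow that filesystem's rules, not ext3's.
findmnt /external     # shows the mounted source and its fstype
df -T / /external     # compare the two filesystem types side by side
```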
Also, it sounded like you will be mounting a filesystem remotely. I don't think that NFS (the normal Unix/Linux way to mount a remote filesystem) has any limit on the size of the actual filesystem. But if it's important to you, according to this, NFSv3 and above can handle file sizes larger than 4GB.
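For reference, a remote NFS mount looks like this (server name and export path are hypothetical; needs root):

```shell
# Mount an export from a hypothetical server "fileserver" over NFSv3.
mkdir -p /mnt/storage
mount -t nfs -o vers=3 fileserver:/srv/storage /mnt/storage

# Or make it persistent via /etc/fstab:
# fileserver:/srv/storage  /mnt/storage  nfs  vers=3,rw  0 0
```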
ext3 *does* allow 16 TiB volumes. The problem is that that's only on architectures which support it, such as Alpha. And in any case, if you need 8 TiB now, I don't think a 16 TiB ceiling is a good plan for the future.
Ext4 could do OK, but I still wouldn't use it on a serious production machine without first doing my own intensive tests. I have been using it without any problem, but I am not the kind of user who thinks "if it works for me, then it works, period, and the rest of you are all wrong." It is known to cause problems for some people, and in any case it's nowhere near as mature as ext2/3.
I dislike reiserfs for many reasons; maybe that's just me, though. reiser4 is different, but seeing how uncertain its future is, I wouldn't even put it among the possible choices if I planned to maintain a machine for a long time. That's my view anyway, not an invitation to a debate.
XFS has a max volume size of 16 exabytes, if my memory serves correctly. It would probably be my choice, for the simple reason that it's the only one left once we discard the rest of the stable filesystems. BUT be sure you use a good UPS: if you can't guarantee the stability of the power source, then by all means forget about XFS.
More modern filesystems like btrfs and zfs won't have this problem either. The trouble is that zfs for Linux is unstable (and it's a FUSE filesystem anyway), and btrfs is unstable and its on-disk format can change at any random moment, which is nothing you'd want on a serious server. However, zfs on Solaris is supposed to be stable, as is the FreeBSD port (though I haven't tried that myself). A good thing about both btrfs and zfs is that you can add a disk to the storage pool without any problem at any given moment, besides other really interesting features like live snapshots and many more.
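To illustrate the pool-growing and snapshot features on btrfs (hypothetical device and mount point; needs root):

```shell
# Add a new disk to an existing btrfs pool mounted at /data,
# then rebalance the data across all devices.
btrfs device add /dev/sdc /data
btrfs balance start /data

# Take a read-only live snapshot of the data.
btrfs subvolume snapshot -r /data /data/.snapshots/today
```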
If I mount from a 32-bit client to a 64-bit server, is there any problem with disk reads/writes?
If you mount it through NFS, I don't think there is a problem. But I could be wrong about that; probably someone else will have a better idea. Individual file sizes could be a problem if you move files between filesystems, though. I've never had a problem myself, but I don't normally deal with 10TB systems.
Filesystems behave the same regardless of the CPU or the host system, even when accessed locally rather than over the network. So there's no problem sharing a given filesystem with 32- or 64-bit clients from a 32- or 64-bit server, and it doesn't matter if the server is built on an entirely different architecture either.
But even if that were an issue (which it IS NOT), when you share over the network it's the network protocol that counts, as nuwen52 stated. If you mount something via NFS or CIFS/Samba, that protocol is the only thing your local system sees. Your local system has no idea what the underlying filesystem is; it might not even be a single disk, but an array of disks or some other virtual storage facility.
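You can see this from the client side (mount point is hypothetical):

```shell
# The client reports the network protocol as the filesystem type,
# not the server's on-disk format.
df -T /mnt/storage      # TYPE column shows e.g. "nfs", "nfs4" or "cifs"
stat -f /mnt/storage    # filesystem stats as the client sees them
```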
There are two issues here: the partition table type and the filesystem itself.
The default MSDOS partition table can describe at most 4G (2^32) sectors of 512 B each, since it uses 32-bit descriptors.
That means it cannot create partitions bigger than 2TB.
For bigger partitions, or to partition larger drives, you need to switch to GPT and its supporting tools. GPT uses 64-bit descriptors, so it can describe 4G x 2TB of space. It will take drives a long time to reach that size.
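The 2TB ceiling follows directly from the descriptor width; a quick sanity check in bash arithmetic:

```shell
#!/bin/bash
# 32-bit sector count times 512-byte sectors = MSDOS partition table ceiling.
msdos_max=$(( (1 << 32) * 512 ))
echo "MSDOS max: ${msdos_max} bytes"              # 2199023255552 bytes

# The same figure expressed in TiB.
echo "MSDOS max: $(( msdos_max / 1024**4 )) TiB"  # 2 TiB
```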
With the partition table limit removed, filesystems can grow well beyond 2TB. Be careful not to conclude that 2TB is a limit of some filesystem when it's really imposed by the size of the partition that the filesystem sits on.
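Switching a large drive to GPT might look like this (hypothetical device /dev/sdb; destructive, needs root):

```shell
# Write a GPT label, then create one partition spanning the whole drive.
parted /dev/sdb mklabel gpt
parted -a optimal /dev/sdb mkpart primary xfs 0% 100%

# Format the new partition with a filesystem that handles multi-TB volumes.
mkfs.xfs /dev/sdb1
```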