01-30-2014, 11:38 PM | #1
alemoo | Member | Registered: Nov 2013 | Location: Sudbury, Ontario | Distribution: SuSE 13.1 | Posts: 34
Transferring Files from an NTFS Drive to an Ext4 Drive
Hello,
I have two 2TB disks.
Same make and model.
One is NTFS and has all my data on it.
I want to copy its contents over to the other drive, which will be formatted as an ext4 filesystem.
The NTFS drive is 98% full.
I'm guessing that dragging and dropping all the folders onto the ext4 drive results in an out-of-disk-space error before the process completes because the NTFS overhead is just too large?
I'm wondering if there's a way to strip all the ACLs, timestamps, and any other unnecessary overhead from the files to make everything fit. Could dd do that? Probably not, as it would copy the filesystem headers too?
For simplicity's sake:
NTFS = /dev/sda
EXT4 = /dev/sdb
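(For reference, a quick way to compare what each filesystem reports; a sketch, where the mount points are placeholders for wherever the two drives are mounted:)
Code:
# Placeholder mount points; substitute wherever the two drives are mounted
df -h /mnt/ntfs /mnt/ext4

# Assumes the ext4 filesystem sits directly on /dev/sdb, as above
sudo tune2fs -l /dev/sdb | grep -i reserved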
Last edited by alemoo; 01-30-2014 at 11:40 PM.
01-31-2014, 12:10 AM | #2
Senior Member | Registered: Jan 2008 | Location: Urbana IL | Distribution: Slackware, Slacko | Posts: 3,716
Well, the dd command actually makes a raw mirror of sda1, the complete size of that partition, and then writes it to the other partition, so you would end up with two NTFS filesystems.
Be careful with the dd command; there is no going back.
If I were you, I'd open a terminal. I mount all my drives at /mnt/sda1 and /mnt/sda2; copy or move things over any way you want, but when you're out of space, you're out of space.
If both are mounted, you could tar it up and compress at the same time, creating the tarball directly on the other drive and then extracting it there; see the sketch below.
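A cleaned-up version of that sequence (a sketch; it assumes the source is mounted at /mnt/sda1 and the destination at /mnt/sda2, as above):
Code:
# Create a compressed tarball of the source tree, written directly to the destination drive
cd /mnt
tar czf /mnt/sda2/sda1.tar.gz sda1

# Then unpack it on the destination
cd /mnt/sda2
tar xzf sda1.tar.gz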
01-31-2014, 01:39 AM | #3
Moderator | Registered: Dec 2009 | Location: Germany | Distribution: Whatever fits the task best | Posts: 17,148
Most likely your problem is that the formatting tool for ext4 by default reserves 5% of the space for the root user, making the usable space for normal users smaller than on the NTFS disk. You can use the tune2fs tool to change that.
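For example (a sketch; it assumes the ext4 filesystem was created directly on /dev/sdb, as in the OP's layout):
Code:
# Show the current reserved block count (assumes ext4 is directly on /dev/sdb)
sudo tune2fs -l /dev/sdb | grep -i 'reserved block count'

# Reduce the reserved space to 0% (reasonable for a pure data disk)
sudo tune2fs -m 0 /dev/sdb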
01-31-2014, 02:03 AM | #4
jpollard | Senior Member | Registered: Dec 2012 | Location: Washington DC area | Distribution: Fedora, CentOS, Slackware | Posts: 4,908
Quote:
Originally Posted by alemoo
Hello,
I have two 2TB disks.
Same make and model.
One is NTFS and has all my data on it.
I want to copy its contents over to the other drive, which will be formatted as an ext4 filesystem.
The NTFS drive is 98% full.
I'm guessing that dragging and dropping all the folders onto the ext4 drive results in an out-of-disk-space error before the process completes because the NTFS overhead is just too large?
I'm wondering if there's a way to strip all the ACLs, timestamps, and any other unnecessary overhead from the files to make everything fit. Could dd do that? Probably not, as it would copy the filesystem headers too?
The ACLs (unless you have something REALLY complex) don't take up any space. And dd will NOT help you, as you have to copy each individual file.
The foundation of the ACLs is just the owner, group, and world access bits, and all of those are stored in the inode; no separate access control lists are required. The "filesystem headers" are the inodes, which already include the dates and access modes, so nothing needs to be stripped.
You should be able to get a complete copy by doing a "cp -rp /mnt/. /dest" as root, assuming that /dev/sda is mounted on /mnt and /dev/sdb is mounted on /dest. Another way is "tar -cf - -C /mnt . | tar -xf - -C /dest" (and read the man pages on tar/cp to double-check what the options are doing); both are spelled out below.
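Spelled out (a sketch under the same mount-point assumptions; check against the man pages before running on real data):
Code:
# Recursive copy preserving permissions and timestamps
# ('.' copies the contents, not the mount point itself)
cp -rp /mnt/. /dest/

# Equivalent tar pipeline; -C switches directory on each side of the pipe
tar -cf - -C /mnt . | tar -xf - -C /dest

# Afterwards, confirm the space each filesystem reports
df -h /mnt /dest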
You MAY have to tune the reserved space (as indicated above, with tune2fs) to ensure enough space is available (use df to verify). Since this is a data disk, you can set the reserved space to 0 if you want. The reserve is primarily aimed at shared system/user filesystems (such as /tmp and /var/tmp), so that root processes can keep working even when users can no longer add data. It gives the administrator some workspace to get logged in and possibly recover from an out-of-space condition without having system processes fail.
You might even find that you have more free space on the destination, because ext4 with extents doesn't require as much metadata as NTFS.
Quote:
For simplicity's sake:
NTFS = /dev/sda
EXT4 = /dev/sdb
01-31-2014, 04:50 PM | #5
jefro | Moderator | Registered: Mar 2008 | Posts: 22,177
Try ext3.
None of the NTFS-specific stuff will copy; the transfer is file by file, with no filesystem-level metadata carried over.
You might be running into a calculation issue.
You might also consider compression of some sort.
01-31-2014, 05:16 PM | #6
jpollard | Senior Member | Registered: Dec 2012 | Location: Washington DC area | Distribution: Fedora, CentOS, Slackware | Posts: 4,908
Quote:
Originally Posted by jefro
Try ext3.
None of the NTFS-specific stuff will copy; the transfer is file by file, with no filesystem-level metadata carried over.
You might be running into a calculation issue.
You might also consider compression of some sort.
I think the ext3 metadata is a bit larger than that of ext4, especially given ext4's use of extents.
02-05-2014, 02:43 AM | #7
alemoo | Member | Registered: Nov 2013 | Location: Sudbury, Ontario | Distribution: SuSE 13.1 | Posts: 34 | Original Poster
Awesome, I'll check out tune2fs and maybe even consider just compressing the entire backup drive.
02-05-2014, 04:49 AM | #8
Moderator | Registered: Dec 2009 | Location: Germany | Distribution: Whatever fits the task best | Posts: 17,148
Quote:
Originally Posted by alemoo
Awesome, I'll check out tune2fs and maybe even consider just compressing the entire backup drive.
Keep in mind that compressing the drive will lead to larger files and unnecessary overhead when the data is not very compressible, as with video or audio files (unless you are dealing with uncompressed media).
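A quick way to check how compressible a particular file really is (a sketch; sample.mkv is just a placeholder name):
Code:
# sample.mkv is a placeholder; compare the original size against gzip's output size
ls -l sample.mkv
gzip -c sample.mkv | wc -c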
02-05-2014, 11:16 AM | #9
alemoo | Member | Registered: Nov 2013 | Location: Sudbury, Ontario | Distribution: SuSE 13.1 | Posts: 34 | Original Poster
Yeah, I wonder if compression is something I should go for, considering one drive is all videos. I've got four 2TB drives, all the same make and model. One is all videos; another is music, artwork, and miscellaneous stuff. The other two are supposed to be offline backups in case something blows up.
02-05-2014, 04:41 PM | #10
jefro | Moderator | Registered: Mar 2008 | Posts: 22,177
jpollard may be correct in a way, but I meant the NTFS filesystem metadata as opposed to the overhead on ext3.
Simply tarring the files could recover a bit.
Maybe I should have suggested ext2, or ext3 without a journal.
02-06-2014, 06:38 AM | #11
jpollard | Senior Member | Registered: Dec 2012 | Location: Washington DC area | Distribution: Fedora, CentOS, Slackware | Posts: 4,908
Both ext2 and ext3 require more pointers for data than ext4, and btrfs takes less than that (at least as I understand it). Extents require less metadata because they allow variable-sized storage: a single extent is basically just a starting block plus a length. If the entire file is contiguous, it requires only one extent, thus only one pointer. There is a maximum extent length, so there are always some pointers, but far fewer intermediate ones (pointers to pointer blocks).
Ext3 without a journal is ext2; the two (ext2/3) are interchangeable. Ext4 is also interchangeable until the first extent is allocated; after that, if you try to downgrade, you get a corrupted filesystem...
A tar file would have the least overhead, but at the cost of very slow access. And even then, a tar file on ext2/3 will carry more metadata overhead than one on ext4 with extents.
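You can see extents directly with filefrag from e2fsprogs (a sketch; the path is a placeholder). A fully contiguous file reports a single extent:
Code:
# Show the extent layout of a file on an ext4 filesystem (placeholder path)
filefrag -v /mnt/sdb1/bigfile.iso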
02-06-2014, 03:49 PM | #12
jefro | Moderator | Registered: Mar 2008 | Posts: 22,177
But the journal would consume space. Ext3 without a journal isn't fully compatible with ext2 in all senses and uses. Ext2 is still a pretty good choice over ext3 for speed, for most users.
Compression, in almost any of the common forms, should help.
Btrfs is still a space-hungry filesystem, but it has on-the-fly compression. ZFS may be better than btrfs in some ways, but I use btrfs.
Even using qemu's qcow2 might be a choice, if an odd one.
I might be tempted to use a squashfs.
Compressing the data before transfer may also improve speed when using apps like tar with some compression. You can test speeds and decide what level to compress to for the overall effect. Defaults tend to be in the middle; on a 1-9 scale, for example, the default falls around the midpoint. You can set it higher or lower depending on how well your system compresses.
Depending on the situation, you might be able to use rsync and just let it run.
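For the squashfs and rsync ideas, rough sketches (the paths are placeholders, and mksquashfs comes from squashfs-tools):
Code:
# Paths are placeholders; pack the source tree into a compressed, read-only squashfs image
mksquashfs /mnt/sda1 /mnt/sdb1/backup.squashfs -comp xz

# Later, browse the image through a loop mount
mkdir -p /mnt/view
mount -t squashfs -o loop /mnt/sdb1/backup.squashfs /mnt/view

# Or copy file by file with rsync, preserving attributes and hard links
rsync -aH /mnt/sda1/ /mnt/sdb1/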
Last edited by jefro; 02-06-2014 at 03:56 PM.
02-06-2014, 08:47 PM | #13
jpollard | Senior Member | Registered: Dec 2012 | Location: Washington DC area | Distribution: Fedora, CentOS, Slackware | Posts: 4,908
The journal is the only difference between ext2 and ext3. The journal only provides better (and faster) error recovery after a system crash or power failure.
An fsck repair of an ext2 filesystem can take days, depending on how many files and directories you have.
I use ext4 for everything now. Ext3 was a good extension to ext2; the journal added better reliability. But ext4 is faster than ext3 on writes (ext2 is a bit faster still), faster than ext2 or ext3 on reads, and has better reliability as well.
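That difference shows up directly in the tools: adding a journal is all it takes to turn ext2 into ext3 (a sketch; unmount the filesystem first, and /dev/sdX1 is a placeholder):
Code:
# /dev/sdX1 is a placeholder device; add a journal to an ext2
# filesystem, effectively converting it to ext3
sudo tune2fs -j /dev/sdX1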
02-06-2014, 10:26 PM | #14
jefro | Moderator | Registered: Mar 2008 | Posts: 22,177
I think we pretty much agree that the normal choice for most people, on a new distro with space to spare, would be ext4 or one of the newer filesystems. We've lost our OP, so we don't know what happened.
02-09-2014, 11:13 PM | #15
alemoo | Member | Registered: Nov 2013 | Location: Sudbury, Ontario | Distribution: SuSE 13.1 | Posts: 34 | Original Poster
Sorry, I'm still here. I've been ridiculously busy this past week. From the responses here it looks like I've got a bit of studying ahead of me to learn more about filesystems. The backup drive is something I don't intend to access directly, so performance is not a concern. If my main drive fails, I'll just buy another drive and then copy the backup drive onto the new one.
A coworker also suggested XFS... never even heard of it before.