LinuxQuestions.org
Welcome to the most active Linux Forum on the web.
Linux - Newbie This Linux forum is for members that are new to Linux.
Just starting out and have a question? If it is not in the man pages or the how-to's this is the place!

Old 01-30-2014, 11:38 PM   #1
alemoo
Member
 
Registered: Nov 2013
Location: Sudbury, Ontario
Distribution: SuSE 13.1
Posts: 34

Rep: Reputation: Disabled
Transferring Files from an NTFS Drive to an Ext4 Drive


Hello,

I have two 2TB disks.
Same make and model.
One is NTFS and has all my data on it.
I want to copy its contents over to the other drive, which will be formatted as an ext4 filesystem.
The NTFS drive is 98% full.
I'm guessing that dragging and dropping all the folders onto the ext4 drive results in an out-of-disk-space error before the process completes because the NTFS overhead is just too large?

I'm wondering if there's a way I can strip all the ACLs, timestamps, and any other unnecessary overhead from the files in order to make this fit, maybe using dd? Probably not, as that would copy the filesystem headers too?

For simplicity's sake:
NTFS = /dev/sda
EXT4 = /dev/sdb

Last edited by alemoo; 01-30-2014 at 11:40 PM.
 
Old 01-31-2014, 12:10 AM   #2
Drakeo
Senior Member
 
Registered: Jan 2008
Location: Urbana IL
Distribution: Slackware, Slacko,
Posts: 3,716
Blog Entries: 3

Rep: Reputation: 483
The dd command actually makes a mirror of sda1, the complete size of that partition, and then writes that image to the other partition, so you would end up with two NTFS partitions. Be careful with the dd command; there is no going back.

If I were you, I'd open a terminal and mount both drives (I mount all of mine under /mnt, e.g. /mnt/sda1 and /mnt/sdb1), then copy or move the files any way you want. But when you're out of space, you're out of space.

If both are mounted you could tar and compress at the same time:

tar czf /mnt/sdb1/sda1.tar.gz -C /mnt sda1

Now you have a compressed tarball on the ext4 drive. To extract it later: cd /mnt/sdb1 && tar xzvf sda1.tar.gz
 
Old 01-31-2014, 01:39 AM   #3
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Germany
Distribution: Whatever fits the task best
Posts: 17,148
Blog Entries: 2

Rep: Reputation: 4886
Most likely your problem is that the ext4 formatting tool by default reserves 5% of the space for the root user, making the usable space for normal users smaller than on the NTFS disk. You can use the tune2fs tool to change that.
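A sketch of what that looks like, demonstrated on a scratch image file so nothing real is touched (on the actual disk you would run the same tune2fs commands against the ext4 partition; /dev/sdb1 in this thread's layout, but the partition name is an assumption):

```shell
# Build a small ext4 filesystem inside a regular file (no root needed)
dd if=/dev/zero of=/tmp/ext4_demo.img bs=1M count=16 status=none
mkfs.ext4 -q -F /tmp/ext4_demo.img

# mkfs reserves 5% of blocks for root by default
tune2fs -l /tmp/ext4_demo.img | grep 'Reserved block count'

# Drop the reservation to 0% (fine for a pure data disk; keep the
# default on the root filesystem)
tune2fs -m 0 /tmp/ext4_demo.img
tune2fs -l /tmp/ext4_demo.img | grep 'Reserved block count'
```

On a 2TB drive the default 5% is roughly 100GB, which alone can explain a 98%-full NTFS disk not fitting.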
 
Old 01-31-2014, 02:03 AM   #4
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,908

Rep: Reputation: 1513
Quote:
Originally Posted by alemoo View Post
Hello,

I have two 2TB disks.
Same make and model.
One is NTFS and has all my data on it.
I want to copy its contents over to the other drive, which will be formatted as an ext4 filesystem.
The NTFS drive is 98% full.
I'm guessing the reason why dragging and dropping all folders into the ext4 drive results in an out-of-disk space error before the entire process completes, is because the NTFS overhead is just too large?

I'm wondering if there's a way I can strip all the ACL's and timestamps and any other unnecessary overhead from the files in order to make this fit using dd? Probably not as this would copy the filesystem headers too?
The ACLs (unless you have something REALLY complex) don't take up any space. And dd will NOT help you, as you have to copy each individual file.

The foundation ACLs are just the owner, group, and world access bits, and all of them are stored in the inode, so no separate access control lists are required. The "filesystem headers" are the inodes, which already include the dates and access modes, so nothing needs to be done there.

You should be able to get a complete copy by doing a "cp -rp /mnt/. /dest" as root, assuming that /dev/sda is mounted on /mnt and /dev/sdb is mounted on /dest. Another way is "tar -cf - -C /mnt . | tar -xf - -C /dest" (note the "." telling the first tar what to archive; and read the man pages for tar/cp to double-check what the options are doing).
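A toy run of the pipe-through-tar copy, using scratch directories in /tmp in place of the real mount points (all paths here are made up for the demonstration):

```shell
# Set up a small source tree and an empty destination
src=/tmp/tar_demo_src; dst=/tmp/tar_demo_dst
rm -rf "$src" "$dst"
mkdir -p "$src/music" "$dst"
echo "hello" > "$src/music/track.txt"

# The first tar writes an archive of $src to stdout; the second
# unpacks it into $dst. Permissions and timestamps survive, and
# no intermediate archive file is needed.
tar -cf - -C "$src" . | tar -xf - -C "$dst"

# Verify the trees match
diff -r "$src" "$dst" && echo "copies match"
```

The same pipeline works for the real transfer by substituting the two mount points for $src and $dst.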

You MAY have to tune the reserved space (as indicated above, with tune2fs) to ensure enough space is available (use df to verify). Since this is a data disk, you can set the reserve to 0 if you want. The reserve is primarily aimed at shared system/user filesystems (such as /tmp and /var/tmp) so that root processes can keep working even when users can no longer add data. It gives the administrator some workspace to get logged in and possibly recover from an out-of-space condition without having system processes fail.

You might even find that you have more free space on the destination, because ext4 with extents doesn't require as much metadata as NTFS.
Quote:

For simplicity's sake:
NTFS = /dev/sda
EXT4 = /dev/sdb
 
Old 01-31-2014, 04:50 PM   #5
jefro
Moderator
 
Registered: Mar 2008
Posts: 22,177

Rep: Reputation: 3645
Try ext3.

All the NTFS stuff won't copy; the transfer will be file by file, with no filesystem-based metadata.

You might be running into a calculation issue.

You might consider compression of some sort.
 
Old 01-31-2014, 05:16 PM   #6
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,908

Rep: Reputation: 1513
Quote:
Originally Posted by jefro View Post
Try ext3.

All the ntfs stuff won't copy. It will be only file by file and no filesystem based metadata.

You might be running into a calculation issue.

You might consider a compression of some sort.
I think the ext3 metadata is a bit larger than that of ext4, especially given ext4's use of extents.
 
Old 02-05-2014, 02:43 AM   #7
alemoo
Member
 
Registered: Nov 2013
Location: Sudbury, Ontario
Distribution: SuSE 13.1
Posts: 34

Original Poster
Rep: Reputation: Disabled
Awesome, I'll check out tune2fs and maybe even consider just compressing the entire backup drive.
 
Old 02-05-2014, 04:49 AM   #8
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Germany
Distribution: Whatever fits the task best
Posts: 17,148
Blog Entries: 2

Rep: Reputation: 4886
Quote:
Originally Posted by alemoo View Post
Awesome, I'll check out tune2fs and maybe even consider just compressing the entire backup drive.
Keep in mind that compressing the drive will lead to larger files and unnecessary overhead when the data doesn't compress well, as is the case with video or audio files (unless you're dealing with uncompressed formats).
 
Old 02-05-2014, 11:16 AM   #9
alemoo
Member
 
Registered: Nov 2013
Location: Sudbury, Ontario
Distribution: SuSE 13.1
Posts: 34

Original Poster
Rep: Reputation: Disabled
Yeah, I wonder if compression is something I should go for, considering one drive is all videos. I've got four 2TB drives, all the same make and model. One is all videos. Another is music, artwork, and misc stuff. The other two are supposed to be offline backups in case something blows up.
 
Old 02-05-2014, 04:41 PM   #10
jefro
Moderator
 
Registered: Mar 2008
Posts: 22,177

Rep: Reputation: 3645
jpollard may be correct in a way, but I meant the NTFS filesystem metadata as opposed to the overhead of ext3.

Simply tarring the files could recover a bit of space.

Maybe I should have suggested ext2, or ext3 without a journal.
 
Old 02-06-2014, 06:38 AM   #11
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,908

Rep: Reputation: 1513
Both ext2 and ext3 require more pointers for data than ext4, and btrfs takes less than that (at least as I understand it). Extents require less metadata because they allow variable-sized storage: a single extent is basically just a starting block plus a length, so if the entire file is contiguous it requires only one extent, and thus only one pointer. There is a maximum extent length, so large files still need some pointers, but a lot fewer intermediate ones (pointers to pointer blocks).
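A back-of-envelope for the pointer savings, assuming the typical 4 KiB block size and classic 4-byte ext2/3 block pointers, and ignoring the 12 direct inode pointers for simplicity (an illustration, not exact on-disk accounting):

```shell
# ext2/3 indirect addressing: one pointer block holds 1024 pointers
BLOCK=4096; PTR=4
PTRS_PER_BLOCK=$((BLOCK / PTR))          # 1024
FILE_BYTES=1073741824                    # a contiguous 1 GiB file
DATA_BLOCKS=$((FILE_BYTES / BLOCK))      # 262144 data blocks
echo $((DATA_BLOCKS / PTRS_PER_BLOCK))   # prints 256 indirect pointer blocks
```

So ext2/3 spends about 256 blocks (1 MiB) just on pointers for that file, while a handful of ext4 extent records (each a start block plus a length) can describe the same data.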

Ext3 without a journal is ext2; the two (ext2/3) are interchangeable. Ext4 is also interchangeable with them, but only until the first extent is allocated; after that, if you try to downgrade you get a corrupted filesystem...

A tar file would have the least overhead, but at the cost of very slow access. And if the tar file is on ext2/3 there will be more metadata overhead than on ext4 with extents.
 
Old 02-06-2014, 03:49 PM   #12
jefro
Moderator
 
Registered: Mar 2008
Posts: 22,177

Rep: Reputation: 3645
But the journal would consume space. Ext3 without a journal isn't fully compatible with ext2 in all senses and uses. Ext2 is still a pretty good choice over ext3 where speed matters, for most users.


Compression, in almost any of the common forms, should help.

Btrfs is still a space-hungry filesystem, but it has on-the-fly compression. ZFS may be better than btrfs in some ways, but I use btrfs.

Even using QEMU's qcow2 might be a choice, if an odd one.

I might be tempted to use a squashfs.

Compressing the data before transfer may also help speed when using tools like tar with compression. You can test speeds and decide what level to compress to for the best overall effect. Defaults tend to sit in the middle of the scale; on a 1-9 scale such as gzip's, the default is 6. You can set it higher or lower depending on how well your system compresses.
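For example, comparing two gzip levels on the same input (the input here is synthetic and highly repetitive; real media files will show much smaller differences):

```shell
# Make a 1 MiB test file of repetitive data
head -c 1048576 /dev/zero > /tmp/level_demo

# Level 1 trades compression ratio for speed; level 9 the reverse.
# wc -c reports the compressed size of each.
gzip -1 -c /tmp/level_demo | wc -c
gzip -9 -c /tmp/level_demo | wc -c
```

Running both against a sample of your actual data is the quickest way to decide whether the extra CPU time of a higher level pays off.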



Depending on the situation, you might be able to use rsync and just let it run.

Last edited by jefro; 02-06-2014 at 03:56 PM.
 
Old 02-06-2014, 08:47 PM   #13
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,908

Rep: Reputation: 1513
The journal is the only difference between ext2 and ext3; it provides better (and faster) error recovery after a system crash or power failure.

fsck repair of an ext2 filesystem can take days, depending on how many files/directories you have.

I use ext4 for everything now. Ext3 was a good extension of ext2; the journal added better reliability. But ext4 is faster than ext3 on writes (ext2 is a bit faster still), faster than both on reads, and has better reliability as well.
 
Old 02-06-2014, 10:26 PM   #14
jefro
Moderator
 
Registered: Mar 2008
Posts: 22,177

Rep: Reputation: 3645
I think we pretty much agree that the normal choice for most people, on a new distro with space available, would be ext4 or one of the newer filesystems. We lost our OP, so we don't know what happened.
 
Old 02-09-2014, 11:13 PM   #15
alemoo
Member
 
Registered: Nov 2013
Location: Sudbury, Ontario
Distribution: SuSE 13.1
Posts: 34

Original Poster
Rep: Reputation: Disabled
Sorry, I'm still here. I've been ridiculously busy this past week. From the responses here it looks like I've got a bit of studying ahead of me to learn more about filesystems. The backup drive is something I don't intend to access directly...so performance is not a concern. If my main drive fails, I'll just buy another drive and copy the backup drive onto it.

A coworker also suggested XFS...I'd never even heard of it before.
 
  

