Linux - Newbie
This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's, this is the place!
I am running Red Hat 9.0 and I am trying to copy a file that is 3GB from a Windows 2000 server to the Red Hat machine. When the file copies, it takes up all 40GB of the hard drive on the Linux machine. I've tried the command "split -b 650000000 /mnt/windowsshare/filename /backups/filename" and it just created about 25 650MB files until the hard drive was filled up.
I am more familiar with Windows than Linux. Can Linux recognize files larger than 2GB properly?
Linux, or the filesystems commonly used with Linux, can handle big files. It depends on the type of filesystem you use, and on this page there is a table comparing the maximum file and filesystem sizes of different filesystems.
The split command should work, so I'm guessing the file size is being reported wrongly by the Windows file system.
That chart says the maximum file size on an ext3 filesystem (the filesystem I am using) is 2GB, so that may be part of my problem. But that doesn't seem to explain why the split didn't work right. I believe the file size is being reported properly; the file is only taking up 3GB of space on the Windows machine.
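Before matching your setup against that table, it helps to confirm which filesystem the target partition actually uses. A minimal check, assuming GNU df is installed (it was standard on Red Hat 9), looks like this; point it at the /backups partition from this thread on the real machine, "/" is used below only so the command runs anywhere:

```shell
# Print the filesystem type backing a mount point; the "Type" column
# shows e.g. ext3, xfs, or reiserfs. Substitute /backups (the partition
# discussed in this thread) for "/" on the actual machine.
df -T /
```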
Linux can handle up to 16 TB with certain filesystems. I used XFS and ReiserFS because they are much better than ext3. I use XFS for most of my partitions.
I have never seen this problem. It could be that your Windows system is not calculating the correct size, or that smbclient is not.
The only problem I had was copying a 4 GB file through Samba, which could only handle 2 GB per file at the time. I used the split command and it made four 1 GB files.
BTW, how many bytes did smbclient calculate for the file that you are trying to copy?
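For reference, the split-and-rejoin round trip mentioned above looks like this. The sketch below is scaled down to a small demo file so it runs anywhere; for 1 GB pieces you would use -b 1073741824 on the real file:

```shell
# Create a small demo file standing in for the large one.
dd if=/dev/urandom of=/tmp/big.bin bs=1024 count=10 2>/dev/null
# Split it into 4 KB pieces -> /tmp/big.part.aa, .ab, .ac
split -b 4096 /tmp/big.bin /tmp/big.part.
# The suffixes sort alphabetically, so a glob rejoins them in order.
cat /tmp/big.part.* > /tmp/big.rejoined
# Verify the rejoined copy is byte-for-byte identical.
cmp -s /tmp/big.bin /tmp/big.rejoined && echo "pieces reassemble cleanly"
```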
When I mount the share with smbmount and do an ls -l of the directory it is mounted in, the file is reported as a crazy huge file at 18 exabytes. If I connect with the smbclient command and do an ls -l, it reports the correct size.
So smbclient is working correctly for me but smbmount is not. Does anyone know of a way I can use the smbclient command in a script instead of smbmount? Will something like this work?:
cd /backup
smbclient //machine/share
split -b 650000000 /mnt/share/bigfile.ext
exit
#rest of script................
OK, I've figured out how to script the smbclient command; here it is:
smbclient //machine/share -U username%password -c "lcd /backup; get filename"
I've tested it and it works on small files. What I'm worried about is that it will hang when I run it on the large 3GB file. Does anyone know how I could pipe the above command into something like split, which would allow me to break the file up into smaller portions that can be transferred by smbclient?
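One possible approach, as a sketch rather than a tested answer: some smbclient builds accept "-" as the local filename in "get", meaning write the file to stdout, and split accepts "-" to read from stdin. Whether your Samba version supports the stdout form is an assumption you should verify with "help get" inside an smbclient session before relying on it:

```shell
# ASSUMPTION: your smbclient build accepts "-" as the local name in
# "get" (write to stdout). If it does, the real pipeline would be:
#
#   smbclient //machine/share -U username%password \
#       -c 'get bigfile.ext -' | split -b 650000000 - /backup/bigfile.part.
#
# Local stand-in for the stream, showing that split reads stdin when
# given "-" and never needs the whole file on disk at once:
dd if=/dev/zero bs=1024 count=9 2>/dev/null | split -b 4096 - /tmp/stream.part.
# 9216 bytes in 4096-byte pieces -> .aa (4096), .ab (4096), .ac (1024)
ls /tmp/stream.part.aa /tmp/stream.part.ab /tmp/stream.part.ac
```

The pieces can later be moved individually and rejoined with cat, as earlier in the thread.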