Linux - Newbie
I am running RedHat 9.0 and I am trying to copy a file that is 3GB from a Windows 2000 server to the RedHat machine. When the file copies, it takes up all 40GB of the hard drive on the Linux machine. I've tried the command "split -b 650000000 /mnt/windowsshare/filename /backups/filename" and it just created about 25 650MB files until the hard drive was filled up.
I am more familiar with Windows than Linux. Can Linux recognize files larger than 2GB properly?
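For reference, here is how split behaves on a small test file, and how the pieces can be rejoined with cat afterwards (a sketch with made-up paths, not your actual share):

```shell
# Make a 1 MB test file (hypothetical paths for illustration).
head -c 1000000 /dev/urandom > /tmp/bigfile

# Split into 300 KB pieces; split appends suffixes to the given prefix,
# producing /tmp/bigfile.part.aa, .ab, .ac, .ad
split -b 300000 /tmp/bigfile /tmp/bigfile.part.

# Rejoin the pieces and verify the round-trip was lossless.
cat /tmp/bigfile.part.* > /tmp/bigfile.rejoined
cmp /tmp/bigfile /tmp/bigfile.rejoined && echo "files match"
```

So the command itself is fine in principle; the question is why it kept producing pieces past the real size of the source file.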
Linux, or rather the filesystems commonly used with Linux, can handle big files. It depends on the type of filesystem you use; on this page there is a table describing the limits of different filesystems when it comes to maximum file and filesystem size.
The split command should work, so I'm guessing the problem may be that the file's size is being reported wrong by the Windows file system.
That chart says the maximum file size on an ext3 filesystem (the filesystem I am using) is 2GB, so that may be part of my problem. But that doesn't seem to explain why the split didn't work right. I believe the file size is being reported properly; the file is only taking up 3GB of space on the Windows machine.
When I mount the share with smbmount and do an ls -l of the directory it is mounted in, the file is reported as a crazy huge 18 exabytes. If I connect with the smbclient command and do an ls -l, it reports the correct size.
So smbclient is working correctly for me but smbmount is not. Does anyone know of a way I can use the smbclient command in a script instead of smbmount? Will something like this work?:
split -b 650000000 /mnt/share/bigfile.ext
#rest of script................
OK, I've figured out how to script the smbclient command; here it is:
smbclient //machine/share -U username%password -c "lcd /backup; get filename"
I've tested it and it works on small files; what I'm worried about is that it will hang when I run it on the large 3GB file. Does anyone know how I could pipe the above command to something like split, which would allow me to split the file up into smaller portions that can be transferred by smbclient?
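One idea worth trying (a sketch, untested here, and it assumes your smbclient build accepts "-" as the local filename to mean stdout, so check your smbclient man page first): stream the transfer straight into split, so the whole file never has to land on disk in one piece:

```shell
# Hypothetical sketch: stream the remote file into 650 MB pieces.
# "get filename -" writing to stdout is an assumption about this
# smbclient version -- verify it in your man page before relying on it.
# split reads from stdin when given "-" as its input file.
smbclient //machine/share -U username%password \
    -c "get filename -" \
  | split -b 650000000 - /backups/filename.part.
```

If that works, the pieces could later be rejoined with `cat /backups/filename.part.* > filename`.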