Has anyone tried using ssh/sftp/scp to transfer files as large as 2 GB on a daily basis without encountering any problems? Can these tools handle files this big?
Thanks.
I sometimes use sshfs to do DVD rips. Sometimes I create dvd.iso backups with dd and scp the resulting image to various devices on my network. These files average 10 GB; other than waiting several minutes, there is no problem.
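In case it's useful, here is a minimal sketch of that kind of workflow; the drive device, hostname and destination path are just placeholders for whatever your setup uses:
Code:
# read the disc into an ISO image (assumes the drive shows up as /dev/sr0,
# and a reasonably recent coreutils for status=progress)
dd if=/dev/sr0 of=~/dvd.iso bs=1M status=progress
# copy the multi-gigabyte image to another machine on the network
scp ~/dvd.iso user@otherbox:/srv/backups/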
Quote:
Originally Posted by ilesterg
Hmmm, that's a quick one. I still wonder why the old folks at my client built this custom app just to transfer files. OK then, thanks.
Perhaps for historical reasons. It was not long ago that nearly all POSIX software (and software based upon POSIX libs) would bork at anything nearing the 2 GB limit. Some things at 1 GB! I still deal with remote servers with such behavior on a daily basis, though less often every year.
The background is that my client no longer wants to add configuration to this old transfer method, which is written in some old Unix code that I can't comprehend (most of it, anyway). But I am quite hesitant to propose using ssh/sftp/scp instead of this app because I don't know why they didn't use ssh/sftp/scp at the time, and I can't be sure how long the scripts have been there, living in the dark. So the only theory I have is that the files were quite big even back then. Hence, the question.
These days 2 GB is small...
One of the reasons people used to have issues is that 2 GB = 2^31 bytes, which is just past the positive limit of a signed 32-bit number (http://www.tsm-resources.com/alists/pow2.html).
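You can see that boundary with plain shell arithmetic; this is only an illustration of the 32-bit signed limit, not anything specific to scp or sftp:
Code:
# largest positive value a signed 32-bit integer can hold
echo $(( (1 << 31) - 1 ))   # 2147483647, i.e. 2 GiB minus one byte
# 2 GiB itself is one past that limit and does not fit in a signed 32-bit field
echo $(( 1 << 31 ))         # 2147483648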
I would ask. I would also be ready to do a test to prove that sftp would work for a file of that size (or rsync over ssh: you can resume an interrupted or incomplete transfer, and it can be much faster, though only if you are updating existing files in place).
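For what it's worth, here is roughly how such a test could look; the file size, hostname and paths are only placeholders:
Code:
# create a 3 GB sparse test file, comfortably past the old 2 GB boundary
truncate -s 3G /tmp/bigfile.test
sha256sum /tmp/bigfile.test
# push it with scp (sftp behaves the same way) and compare checksums on the far end
scp /tmp/bigfile.test user@remotehost:/tmp/
ssh user@remotehost sha256sum /tmp/bigfile.test
# or rsync over ssh, which can resume a partial transfer
rsync --partial --progress -e ssh /tmp/bigfile.test user@remotehost:/tmp/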
IF the software on these machines is modern, ssh should serve. If you are talking about legacy systems, they may well be running software that will NOT manage a file that large.
It appears that the guys who know are there, not here.
Or, if they are not there either, there may be no one left who remembers WHY, and they only want to avoid breaking what works; hence the reluctance to make any modifications. That would then become an exercise in either education or politics, neither of which I would want to delve into here.
Quote:
These days 2 GB is small...
One of the reasons people used to have issues is that 2 GB = 2^31 bytes, which is just past the positive limit of a signed 32-bit number (http://www.tsm-resources.com/alists/pow2.html).
I thought that was a limitation of Windows FAT32 filesystems?
Also, it's weird that the limit would be 2 GB instead of 4 GB (is there any reason why a file's size would be negative?).
I think at the time the original coders used signed ints by default. I'm sure I remember seeing that problem come up, but it did vary over time.
It was probably the usual thing: when systems were written for PCs, nobody expected to hit the limits that fast, not entirely unlike Y2K. And of course, once that's in the system it can have knock-on effects if you try to change it, because the ecosystem around it expects the same.
It really depends on the actual software, not necessarily the filesystem code.
If you already wanted files of 2 GB, you were likely going even bigger, so 4 GB (32-bit unsigned) was really only a band-aid solution.
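If you want to sanity-check a particular box, something like this is a quick first look (assuming a reasonably modern getconf; the 3 GB size is arbitrary, and the real test is feeding the legacy application itself a file past the boundary):
Code:
# bits used for file sizes on this filesystem: 64 means files beyond 2 GB are fine
getconf FILESIZEBITS /
# create a sparse file past 2 GB and see whether the local tools cope with it
truncate -s 3G big.test && ls -lh big.test && rm big.test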