ssh/sftp/scp file transfer of huge files
Hi everyone,
Has anyone tried using ssh/sftp/scp on a daily basis to transfer files as huge as 2 GB without encountering any problems? Can these handle files this big? Thanks.
I use NFS mounts on LAN and rsync with --bwlimit=xxxx to avoid saturation.
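For example, something like this (host and paths made up for illustration; note that rsync's --bwlimit value is in KiB/s unless you give it a suffix):
Code:
# cap the transfer at roughly 5 MB/s so it doesn't saturate the LAN
rsync -av --bwlimit=5000 /data/bigfile.bin user@backuphost:/backup/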
Ask
Perhaps for historical reasons. It was not long ago that nearly all POSIX software (and software based on POSIX libs) would bork at anything nearing the 2G limit. Some things at 1G! I still deal with remote servers with such behavior on a daily basis, though less often every year.
The background is that my client no longer wants to add configurations to this old transfer method, which is written in some old Unix code I can't comprehend (most of it, anyway). But I am quite hesitant to propose using ssh/sftp/scp instead of this app, because I don't know why they didn't use ssh/sftp/scp back then. Then again, I can't be sure how long the scripts have been there, living in the dark. So the only theory I have is that the files were quite big even back then. Hence, the question.
These days 2 GB is small...
One of the reasons people used to have issues is that 2G = 2^31, the positive limit if you use signed 32-bit numbers (http://www.tsm-resources.com/alists/pow2.html).
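You can see the boundary for yourself in bash (which does its arithmetic in 64 bits, so nothing wraps here):
Code:
echo $(( 2**31 - 1 ))   # 2147483647 -- the largest positive signed 32-bit value, one byte short of 2 GiB
echo $(( 2**31 ))       # 2147483648 -- a 2 GiB file size already overflows a signed 32-bit int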
And again
I would ask. I would also be ready to run a test to prove that sftp would work for a file of that size. (Or rsync over ssh: you can resume an interrupted or incomplete transfer, and it's much faster -- though only if you are updating text files in-place.)
IF the software on these machines is modern, ssh should serve. If you are talking legacy systems, they may well be running software that will NOT manage a file that large. It appears that the guys who know are there, not here. Or, if they are not there either, there may be no one left who remembers WHY, and they want only to not break what works -- thus the reluctance to make any modifications. That would then become an exercise in either education or politics, neither of which I would want to delve into here.
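For a quick sanity check, something like this would do -- the hostname and paths here are just placeholders:
Code:
# create a test file just past the 2 GB mark (~2.5 GB of zeros)
dd if=/dev/zero of=/tmp/bigtest.bin bs=1M count=2500
# push it over plain scp
scp /tmp/bigtest.bin user@remotehost:/tmp/
# or over rsync (which uses ssh as its transport by default); --partial lets an interrupted transfer resume
rsync -av --partial /tmp/bigtest.bin user@remotehost:/tmp/
If the file arrives intact (compare md5sum output on each end), the 2 GB worry is settled for those hosts.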
Quote:
Also, it's weird that the limit would be 2 GB instead of 4 GB (is there any reason why a file size would be negative)?
I think at the time the original coders used signed ints by default. I'm sure I remember seeing that problem come up, but it did vary over time.
It was probably the normal thing that when systems were written for PCs, nobody expected to hit the limits that fast - not entirely unlike Y2K ;) And of course, once that's in the system, it could have knock-on effects if you tried to change it, because the ecosystem around it expected the same. It really depends on the actual software, not necessarily the filesystem code. If you were already wanting files of 2 GB, you were likely going for even bigger, so 4 GB (32-bit unsigned) was really only a band-aid solution. See also https://en.wikipedia.org/wiki/Year_2038_problem - the next big related problem.
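Out of curiosity, GNU date will show you exactly where that same signed 32-bit limit lands when applied to time_t:
Code:
date -u -d @2147483647
# Tue Jan 19 03:14:07 UTC 2038 -- the last second a signed 32-bit time_t can represent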