LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - Software (https://www.linuxquestions.org/questions/linux-software-2/)
-   -   ssh/sftp/scp file transfer of huge files (https://www.linuxquestions.org/questions/linux-software-2/ssh-sftp-scp-file-transfer-of-huge-files-4175557972/)

ilesterg 11-04-2015 11:37 AM

ssh/sftp/scp file transfer of huge files
 
Hi everyone,

Has anyone tried using ssh/sftp/scp to transfer files as large as 2 GB on a daily basis without running into problems? Can these tools handle files that big?

Thanks.

HMW 11-04-2015 12:02 PM

Quote:

Originally Posted by ilesterg (Post 5444595)
Hi everyone,

Has anyone tried using ssh/sftp/scp to transfer files as large as 2 GB on a daily basis without running into problems? Can these tools handle files that big?

Short answer: Yes. I have. No problem whatsoever.

Best regards,
HMW
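For what it's worth, a transfer like this is easy to sanity-check. A minimal sketch, assuming a hypothetical remote host "backup01" (the network steps are shown commented out so the local part stands alone; the sparse test file takes no real disk space):

```shell
# Create a 2 GiB sparse test file instantly (allocates no real disk space)
dd if=/dev/zero of=/tmp/bigfile.bin bs=1 count=0 seek=2G 2>/dev/null
stat -c '%s bytes' /tmp/bigfile.bin

# Push it over SSH (hypothetical host "backup01"):
# scp /tmp/bigfile.bin user@backup01:/data/

# Verify integrity by comparing checksums on both ends:
sha256sum /tmp/bigfile.bin
# ssh user@backup01 sha256sum /data/bigfile.bin
```

If the two checksums match, the 2 GB transfer was byte-for-byte correct.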

ilesterg 11-04-2015 12:41 PM

Quote:

Originally Posted by HMW (Post 5444611)
Short answer: Yes. I have. No problem whatsoever.

Best regards,
HMW

Hmmm, that's a quick one. I still wonder why the old folks at my client built this custom app just to transfer files. OK then, thanks.

schneidz 11-04-2015 12:48 PM

Quote:

Originally Posted by ilesterg (Post 5444595)
Hi everyone,

Has anyone tried using ssh/sftp/scp to transfer files as large as 2 GB on a daily basis without running into problems? Can these tools handle files that big?

Thanks.

I use sshfs sometimes to do DVD rips. Sometimes I create dvd.iso backups with dd and scp the resulting image to various devices on my network. These files average 10 GB. Other than waiting several minutes, there is no problem.
Quote:

Originally Posted by ilesterg (Post 5444642)
Hmmm, that's a quick one. I still wonder why the old folks at my client built this custom app just to transfer files. OK then, thanks.

status quo ?

chrism01 11-05-2015 01:47 AM

Quote:

Short answer: Yes. I have. No problem whatsoever.
me too :)

Emerson 11-05-2015 02:23 AM

I use NFS mounts on LAN and rsync with --bwlimit=xxxx to avoid saturation.

wpeckham 11-05-2015 05:33 AM

Ask
 
Perhaps for historical reasons. It was not long ago that nearly all POSIX software (and software built on POSIX libs) would bork at anything nearing the 2 GB limit. Some things at 1 GB! I still deal with remote servers showing such behavior on a daily basis, though less often every year.

ilesterg 11-05-2015 09:41 AM

Background: my client no longer wants to add configurations to this old transfer method, which is written in some old Unix code that I mostly can't comprehend. But I'm hesitant to propose using ssh/sftp/scp instead of this app, because I don't know why they avoided ssh/sftp/scp at the time. Then again, I can't be sure how long the scripts have been there, living in the dark. So the only theory I have is that the files were quite big even back then. Hence the question.

chrism01 11-05-2015 11:26 PM

These days 2 GB is small...
One of the reasons people used to have issues is that 2 GB = 2^31 bytes, i.e. the positive limit if you use signed 32-bit numbers (http://www.tsm-resources.com/alists/pow2.html).
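The arithmetic, for reference (the note about off_t being signed is my addition, not from the thread):

```shell
# Largest file offset representable in a signed 32-bit off_t
# (off_t is signed so that lseek() can return -1 on error and
# accept negative offsets for relative seeks):
echo $(( (1 << 31) - 1 ))   # 2147483647, i.e. 2 GiB minus one byte

# An unsigned 32-bit counter would only double that, to 4 GiB minus one:
echo $(( (1 << 32) - 1 ))   # 4294967295
```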

wpeckham 11-06-2015 04:38 AM

And again
 
I would ask. I would also be ready to run a test proving that sftp works for a file that size. (Or rsync over ssh: you can resume an interrupted or incomplete transfer, and it's much faster -- though only when you are updating existing files in-place.)

IF the software on these machines is modern, ssh should serve. If you are talking legacy systems, they may well be handling software that will NOT manage a file that large.

It appears that the guys who know are there, not here.
Or, if they are not there either, there may be no one there who remembers WHY and they want only to not break what works, thus the reluctance to make any modifications. That would then become an exercise in either education or politics, neither of which I would want to delve into here.

schneidz 11-06-2015 06:28 AM

Quote:

Originally Posted by chrism01 (Post 5445418)
These days 2 GB is small...
One of the reasons people used to have issues is that 2 GB = 2^31 bytes, i.e. the positive limit if you use signed 32-bit numbers (http://www.tsm-resources.com/alists/pow2.html).

I thought that was a limitation of Windows FAT32 filesystems?

Also, it's weird that the limit would be 2 GB instead of 4 GB (is there any reason why a file size would need to be negative?)

chrism01 11-08-2015 11:15 PM

I think at the time the original coders used signed ints by default. I'm sure I remember seeing that problem come up, but it varied over time.

It was probably the usual thing: when systems were written for PCs, nobody expected to hit the limits that fast - not entirely unlike Y2K ;) And of course, once that's in the system, changing it could have knock-on effects, because the ecosystem around it expected the same.

It really depends on the actual software, not necessarily the filesystem code.
If you already wanted 2 GB files, you were likely going for even bigger, so 4 GB (32-bit unsigned) was really only a band-aid solution.

See also https://en.wikipedia.org/wiki/Year_2038_problem - the next big related problem.
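The 2038 rollover is the same 2^31 limit applied to time_t; with GNU date you can see the exact moment:

```shell
# Signed 32-bit time_t overflows 2^31 - 1 seconds after the Unix epoch:
date -u -d @2147483647
# Tue Jan 19 03:14:07 UTC 2038
```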

