Old 08-27-2012, 08:37 AM   #1
NetProgrammer
Member
 
Registered: Aug 2012
Posts: 30

Rep: Reputation: Disabled
Is faster data transfer possible?


Hi all!
I need your kind help urgently.
I'm writing a program to send some data over a socket in Linux.
In fact, 2 MByte of data must be transferred in just 40 ms (milliseconds).
All of the NICs and the switch are 100 Mbit.
At this speed, 2 MByte of data should transfer in about 200 ms, but in my experiment it took 600 ms.
Is that normal, or can I speed it up with some tricks?

Waiting for your guidance.
Regards
 
Old 08-27-2012, 08:49 AM   #2
dugan
LQ Guru
 
Registered: Nov 2003
Location: Canada
Distribution: distro hopper
Posts: 11,219

Rep: Reputation: 5309
Would compressing the data before sending it work?

Also, you have checked that this is mathematically possible, right? (I haven't; it's early).
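If compression does turn out to be acceptable, here is a minimal sketch of the compress-before-send idea using zlib (the socket descriptor and buffer are placeholders, the receiver would need a matching decompression step, and the compression time itself counts against the 40 ms budget):
Code:
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <zlib.h>

/* Compress "data" and push it out on an already-connected TCP socket.
 * Sketch only: real code must loop on partial send()s and tell the peer
 * how many compressed bytes to expect before inflating. */
int send_compressed(int sockfd, const unsigned char *data, size_t len)
{
    uLongf comp_len = compressBound(len);        /* worst-case compressed size */
    unsigned char *comp = malloc(comp_len);
    if (!comp)
        return -1;

    /* Z_BEST_SPEED trades ratio for CPU time, which matters under a tight time budget */
    if (compress2(comp, &comp_len, data, len, Z_BEST_SPEED) != Z_OK) {
        free(comp);
        return -1;
    }

    ssize_t sent = send(sockfd, comp, comp_len, 0);
    free(comp);
    return sent < 0 ? -1 : 0;
}
Whether this helps at all depends on how compressible the 2 MB actually is; already-compressed or random data will not shrink.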

Last edited by dugan; 08-27-2012 at 08:50 AM.
 
Old 08-27-2012, 08:59 AM   #3
NetProgrammer
Member
 
Registered: Aug 2012
Posts: 30

Original Poster
Rep: Reputation: Disabled
Thank you for your reply.
If I compress the data before sending, it costs time! I don't want that.
Yes, the mathematical computations relating the switch speed and the transfer time are correct.
 
Old 08-27-2012, 10:16 AM   #4
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Rep: Reputation: 2142
Quote:
Originally Posted by NetProgrammer View Post
In fact, 2 MByte of data must be transferred in just 40 ms (milliseconds).
All of the NICs and the switch are 100 Mbit.
At this speed, 2 MByte of data should transfer in about 200 ms.
As you calculated, it can't be done on that link. 2MB in 40ms is 400 Mbit/s, which is four times higher than the theoretical maximum speed of a 100 Mbit/s connection. You need to move to Gigabit or start doing compression.
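For anyone who wants to sanity-check those numbers, a throwaway back-of-the-envelope calculation (it ignores Ethernet/IP/TCP framing overhead, so real transfers will always be somewhat slower than these floors):
Code:
#include <stdio.h>

int main(void)
{
    const double payload_mbit = 2.0 * 8.0;           /* 2 MByte expressed in Mbit */
    const double links[]      = { 100.0, 1000.0 };   /* link speeds in Mbit/s */

    for (int i = 0; i < 2; i++) {
        double min_ms = payload_mbit / links[i] * 1000.0;
        printf("%4.0f Mbit/s link: at least %5.1f ms for 2 MByte\n", links[i], min_ms);
    }
    return 0;   /* prints roughly 160 ms and 16 ms respectively */
}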
 
2 members found this post helpful.
Old 08-27-2012, 02:14 PM   #5
NetProgrammer
Member
 
Registered: Aug 2012
Posts: 30

Original Poster
Rep: Reputation: Disabled
Thank you so much, dear friend.
Of course I have to use Gigabit solutions.
But I'm concerned about the overhead in my experiment.
I mean, if a 100 Mbit connection that should transfer 2 MByte of data in about 200 ms does not perform as expected, then maybe even on a Gigabit link it will take longer than 40 ms.
Therefore I would like to optimize everything (various buffers and so on) and minimize the overhead as much as possible in 100 Mbit mode, and then switch to a Gigabit connection.
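For what it's worth, the first buffer usually worth examining on the sending side is the kernel's socket send buffer; a minimal sketch, with an illustrative size and a placeholder descriptor:
Code:
#include <stdio.h>
#include <sys/socket.h>

int tune_send_buffer(int sockfd)
{
    int sndbuf = 512 * 1024;    /* illustrative value, not a recommendation */
    if (setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) < 0)
        return -1;

    socklen_t len = sizeof(sndbuf);
    getsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
    printf("effective SO_SNDBUF: %d bytes\n", sndbuf);
    return 0;
}
On Linux the kernel typically doubles the requested value and caps it at net.core.wmem_max, which is why the sketch reads the effective value back.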

Still waiting for your suggestions and guidance.
 
Old 08-28-2012, 09:12 AM   #6
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,649
Blog Entries: 4

Rep: Reputation: 3934
The bottom line here is that "the numbers don't lie." You're running out of milliseconds. You must use faster hardware, and pragmatically you also must use compression (hardware or software), and, again pragmatically, you're going to have to find a way to relax that timing requirement. It's not good enough that an arrow sometimes hits the target. Every arrow must hit that target, "even in the rain."

Also bear in mind that you can be quite certain that you are now "doing a thing already done." Pause what you're doing, put thoughts of sockets and such aside, and look at what you're doing in the context of others who have already approached the same business problem. How did they do it?
 
Old 08-28-2012, 09:58 AM   #7
ProtoformX
Member
 
Registered: Feb 2004
Location: Canada
Distribution: LFS SVN
Posts: 334

Rep: Reputation: 34
Quote:
Originally Posted by sundialsvcs View Post
The bottom line here is that "the numbers don't lie." You're running out of milliseconds. You must use faster hardware, and pragmatically you also must use compression (hardware or software), and, again pragmatically, you're going to have to find a way to relax that timing requirement. It's not good enough that an arrow sometimes hits the target. Every arrow must hit that target, "even in the rain."

Also bear in mind that you can be quite certain that you are now "doing a thing already done." Pause what you're doing, put thoughts of sockets and such aside, and look at what you're doing in the context of others who have already approached the same business problem. How did they do it?
I totally agree with the first part, but not with the second. The reason is this: yes, of course almost everything we are likely to do has been done before, but has it been done properly? Never look at other people's code; you will pick up the same bugs and problems their code has. I realize true hackers are few and getting fewer as the years go by, because we are stuck in a world where HLLs and common libraries "solve" almost all our problems but don't necessarily solve YOUR problem. Once that is realized, we work around the problem rather than reimplementing it. While this has worked so far, it is also a direct cause of why we need CPUs four times faster than they actually need to be: we waste cycles doing stupid things rather than proper things.

The best way to do things, in my opinion, is to reinvent the wheel (not every time, but as often as necessary). If the code doesn't fit, don't force it; rewrite it!
Devs call it saving time; I call it laziness. I have written tons of custom routines, and I have also rewritten most of my code whenever I saw the possibility of making it faster. It almost always worked the way I intended, though there were a few times where things didn't turn out or I made the problem more complex.

What I am saying is: who cares if it's mathematically impossible to get 2 MB in 40 ms on a 100 Mbit line? We know it's not possible, but let's see what actually is possible, and then see whether our code can reach that. That's how real progress is made: by trying the stuff you know to be impossible and seeing what you can get out of it. Then use what you learn on a faster connection. That is how real hackers progress, and this is not how programming should be done:
Code:
#include "everything_but_the_kitchen_sink.h"
#include "the_kitchen_sink.h"

int main(void)
{
    printf("HAI ZOMG I HAS TEH CODES IN TEH COMPUTERZ\n");
    return 0;
}

Last edited by ProtoformX; 08-28-2012 at 10:02 AM.
 
Old 08-28-2012, 12:39 PM   #8
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Rep: Reputation: 2142
Quote:
Originally Posted by ProtoformX View Post
I totally agree with the first part, but not with the second. The reason is this: yes, of course almost everything we are likely to do has been done before, but has it been done properly? Never look at other people's code; you will pick up the same bugs and problems their code has. I realize true hackers are few and getting fewer as the years go by, because we are stuck in a world where HLLs and common libraries "solve" almost all our problems but don't necessarily solve YOUR problem. Once that is realized, we work around the problem rather than reimplementing it. While this has worked so far, it is also a direct cause of why we need CPUs four times faster than they actually need to be: we waste cycles doing stupid things rather than proper things.

The best way to do things, in my opinion, is to reinvent the wheel (not every time, but as often as necessary). If the code doesn't fit, don't force it; rewrite it!
Devs call it saving time; I call it laziness. I have written tons of custom routines, and I have also rewritten most of my code whenever I saw the possibility of making it faster. It almost always worked the way I intended, though there were a few times where things didn't turn out or I made the problem more complex.

What I am saying is: who cares if it's mathematically impossible to get 2 MB in 40 ms on a 100 Mbit line? We know it's not possible, but let's see what actually is possible, and then see whether our code can reach that. That's how real progress is made: by trying the stuff you know to be impossible and seeing what you can get out of it. Then use what you learn on a faster connection. That is how real hackers progress, and this is not how programming should be done:
Code:
#include "everything_but_the_kitchen_sink.h"
#include "the_kitchen_sink.h"

int main(void)
{
    printf("HAI ZOMG I HAS TEH CODES IN TEH COMPUTERZ\n");
    return 0;
}
I'm still not seeing the point. We know beyond a shadow of a doubt that a 100 Mbit connection will never work without compression. Sure he could play with it and tune things to run as fast as possible, but no matter WHAT he does it still will never be enough. He must switch to gigabit to accomplish his goal, and as soon as he switches to gigabit none of the tuning he did for the 100 Mbit connection will apply anymore; he's going to have to redo all of the buffer sizes and timing tweaks he made to get the 100 Mbit connection running as fast as possible.

We also don't know if ANY of that will even be necessary. He might swap to a gigabit connection, and instantly be below his 40ms threshold without ANY tweaks, at which point all of the tuning work he put into optimizing the 100 Mbit connection will be wasted time.

He HAS to switch to gigabit to accomplish his goal, so he should do that first. After he makes the switch he can re-assess the situation to see if any tuning is needed, and if so, he can explore it then. Otherwise he's just putting the cart before the horse.
 
Old 08-28-2012, 01:30 PM   #9
Celyr
Member
 
Registered: Mar 2012
Location: Italy
Distribution: Slackware+Debian
Posts: 321

Rep: Reputation: 81
You have to move away from Ethernet if you want a hard bound on delivery time (because of CSMA/CD), so have a look at Token Ring. The easiest thing, though, is to relax the time constraints.
 
Old 08-28-2012, 01:50 PM   #10
ProtoformX
Member
 
Registered: Feb 2004
Location: Canada
Distribution: LFS SVN
Posts: 334

Rep: Reputation: 34
Quote:
Originally Posted by suicidaleggroll View Post
I'm still not seeing the point. We know beyond a shadow of a doubt that a 100 Mbit connection will never work without compression. Sure he could play with it and tune things to run as fast as possible, but no matter WHAT he does it still will never be enough. He must switch to gigabit to accomplish his goal, and as soon as he switches to gigabit none of the tuning he did for the 100 Mbit connection will apply anymore; he's going to have to redo all of the buffer sizes and timing tweaks he made to get the 100 Mbit connection running as fast as possible.

We also don't know if ANY of that will even be necessary. He might swap to a gigabit connection, and instantly be below his 40ms threshold without ANY tweaks, at which point all of the tuning work he put into optimizing the 100 Mbit connection will be wasted time.

He HAS to switch to gigabit to accomplish his goal, so he should do that first. After he makes the switch he can re-assess the situation to see if any tuning is needed, and if so, he can explore it then. Otherwise he's just putting the cart before the horse.
Agreed, but it would be way better to develop the fastest way to send the data on lower-end hardware first; that encourages optimal code. We know it will never work, and that's okay; the idea behind using the 100 Mbit connection is to see how good he can get his code. If you switch over to 1 Gig and start fresh, all you are doing is encouraging laziness ("the hardware can handle it, don't worry about it"). This is what is wrong with IT R&D today: if the hardware does it, who cares, right? Why do we need 8-core CPUs running at 4-6 GHz just to show us a PNG file? I can tell you why: because people don't know how to hack anymore. It's always "oh, it's good enough, it runs, doesn't it? Why should I optimize my code?"
 
Old 08-28-2012, 01:56 PM   #11
NetProgrammer
Member
 
Registered: Aug 2012
Posts: 30

Original Poster
Rep: Reputation: Disabled
Dear friends,
I can't thank you enough for your discussion of my problem.
In fact, I would like any one of you to tell me about a similar experience of your own.
I explained that I know I have to use Gigabit or higher.
But my question was, and still is: "Is it normal for 2 MByte of data to take 600 ms over a 100 Mbit connection, or is something wrong?" Based on the mathematical calculations, such a transfer should take about 200 ms. I think maybe my code is not efficient enough.
I just want to learn your tricks for transferring data over TCP/IP.
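For concreteness, the usual suspects when a bulk TCP send runs well below wire speed are Nagle's algorithm interacting with small writes, send() returning after writing only part of the buffer, and measuring over the wrong interval. A rough sketch that addresses all three (every name here is a placeholder, not taken from the actual program):
Code:
#include <stdio.h>
#include <time.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

ssize_t send_all(int sockfd, const char *buf, size_t len)
{
    int one = 1;
    /* disable Nagle so the final partial segment is not held back */
    setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    size_t total = 0;
    while (total < len) {                       /* send() may accept less than asked */
        ssize_t n = send(sockfd, buf + total, len - total, 0);
        if (n <= 0)
            return -1;
        total += (size_t)n;
    }

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("queued %zu bytes in %.1f ms\n", total, ms);
    return (ssize_t)total;
}
Note that send() returning only means the data has been queued by the kernel, so timing on the sender alone can be misleading; measuring at the receiver (or after an application-level acknowledgement) is more honest. And none of this can beat the roughly 160 ms wire-speed floor on a 100 Mbit link.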

Dear Celyr, your suggestion is impressive! Thank you.

Thank you all again.
 
Old 08-28-2012, 01:56 PM   #12
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Rep: Reputation: 2142
Quote:
Originally Posted by ProtoformX View Post
Agreed, but it would be way better to develop the fastest way to send the data on lower-end hardware first; that encourages optimal code. We know it will never work, and that's okay; the idea behind using the 100 Mbit connection is to see how good he can get his code. If you switch over to 1 Gig and start fresh, all you are doing is encouraging laziness ("the hardware can handle it, don't worry about it"). This is what is wrong with IT R&D today: if the hardware does it, who cares, right? Why do we need 8-core CPUs running at 4-6 GHz just to show us a PNG file? I can tell you why: because people don't know how to hack anymore. It's always "oh, it's good enough, it runs, doesn't it? Why should I optimize my code?"
Because 90% of the time the budget won't allow programmers to reinvent the wheel every time they have to do something. Sure it would be nice, but productivity would drop like a rock. It's just not realistic.

Last edited by suicidaleggroll; 08-28-2012 at 01:57 PM.
 
Old 08-29-2012, 08:37 AM   #13
ProtoformX
Member
 
Registered: Feb 2004
Location: Canada
Distribution: LFS SVN
Posts: 334

Rep: Reputation: 34
Quote:
Originally Posted by suicidaleggroll View Post
Because 90% of the time the budget won't allow programmers to reinvent the wheel every time they have to do something. Sure it would be nice, but productivity would drop like a rock. It's just not realistic.
A down-to-earth programmer can do it if they want; it doesn't have to be a 100%-optimized, squeeze-every-last-cycle-out-of-the-CPU type of deal.
Hell, just rearranging the instructions/code can drastically improve performance; it's not always about the best algorithm, and you don't always have to reinvent the wheel.

Look at an older piece of code: Wolfenstein 3D. For its time it was pretty good, but I could rewrite it in the same language Carmack wrote it in (ASM), on the same hardware, and it would be better than his. Why? Am I smarter than John? Hell no! But I do have a trick up my sleeve. I know, as he did, which instructions take the most time to execute, and I know the shortcuts he used, but I also know something he never considered when he wrote Wolf3D: the order of the instructions. By reordering them I can reduce the instruction count while making the code faster, because in total I waste the same number of cycles he did, or perhaps a few less, and the instructions that the reordering moves out of the loop no longer cost anything per iteration, so little or no execution time is wasted.
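To illustrate the kind of reordering being described, here is the same idea sketched in C rather than assembly: work that does not change inside the loop is hoisted out, so the loop body executes fewer instructions per iteration even though the total work is the same.
Code:
#include <stddef.h>

void scale_naive(float *dst, const float *src, size_t n, float gain, float offset)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * (gain * 0.5f) + offset;   /* gain * 0.5f sits inside the loop */
}

void scale_hoisted(float *dst, const float *src, size_t n, float gain, float offset)
{
    const float g = gain * 0.5f;                    /* computed once, before the loop */
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * g + offset;
}
Modern compilers perform much of this hoisting automatically at -O2, which is part of why the payoff of hand-scheduling has shrunk since the Wolf3D days.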
 
Old 08-29-2012, 08:52 AM   #14
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Rep: Reputation: 2142
Quote:
Originally Posted by ProtoformX View Post
A down-to-earth programmer can do it if they want
Tell that to the boss who has to pay him to re-write code that already works and already gets the job done in the time required.

Last edited by suicidaleggroll; 08-29-2012 at 09:13 AM.
 
Old 08-29-2012, 10:07 AM   #15
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,649
Blog Entries: 4

Rep: Reputation: 3934
Quote:
Originally Posted by suicidaleggroll View Post
Tell that to the boss who has to pay him to re-write code that already works and already gets the job done in the time required.
In this possibly "truly performance critical" situation, I would oblige my subordinates to furnish me with a formal proposal and to formally justify it both with me and with other stakeholders ... my approval first, then other business units. I would weigh all of the factors, including business risk, the potential impact of (inevitable) CPU architecture evolution, and so on.

Business experience would predispose me to give heavy credence to the tenet that I learned decades ago in The Elements of Programming Style:
Quote:
"Don't 'diddle' code to make it faster: find a better algorithm."
My subordinate, or my colleague or even my superior, is going to have to conform to that because the technology, and therefore the enterprise, will be (harshly) ruled by that for years to come. Your coding-wizardry won't mean squat if the code won't run on an Intel (n+1)86 microprocessor, for instance.

Your acknowledged(!) technical brilliance, whoever you are, might win! Or, it might not. But, out of my obligation to my own business responsibility to the enterprise, and in full acknowledgement of yours, I am going to put you and your idea through "the bloody wringer from hell." And you should help. So that, if it goes down for whatever reason, it will do so only on paper. We do not want to discover any problem whatsoever at 60,000 feet and Mach Two.

---
Our business goal is not necessarily to answer the question, "Is faster data transfer possible?" That is merely what appears to be the statement of the problem directly in front of us on this particular two-lane road, which, in turn, is merely what appears to be the necessary way to do what we are here to do: "to get this package to Peoria." Therefore, our business goal is ... "to get this package to Peoria," reliably and safely and predictably, even in the rain. This time, next time, and the time after that.

Last edited by sundialsvcs; 08-29-2012 at 10:10 AM.
 
  

