Using IPCS and Performance of Shared Memory vs. Socket
I had built some executables on a Linux 2.4 kernel and ran them on SUSE 9 (Linux 2.6), because the Linux 2.4 build does not support POSIX shared memory. The IPC mechanisms through shared memory and sockets are both working, but I noticed two things:
(1) ipcs was not showing shared memory while the executables were running. Shouldn't there be something listed, since shared-memory segments are being mapped? All I get is an empty table:

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status

(2) A performance measurement of sockets vs. shared memory on 40 MB was actually at par running on the same system (SUSE 9 with Linux 2.6). Given some reported figures that say POSIX shared memory is 20x faster, I assumed I would see a huge performance difference. However, the numbers do not reflect that. For 40 MB, dif1 = 2 seconds and dif2 = 3 seconds for shared memory, versus dif1 = 2-3 seconds and dif2 = 2-3 seconds for sockets. See my crude measurement below. Any comment as to whether I am expecting too much out of shared-memory IPC?

time(&start1);
start2 = start1;
// execute something
time(&end1);
dif1 = difftime(end1, start1);

// round trip measured through a callback acknowledging the data got across:
time(&end2);
dif2 = difftime(end2, start2);

Thanks, kl |
I'm no kernel guru (*far* from it), but I suspect that shared memory can be *slightly* faster than sockets. Here's why:
I suspect that the kernel will notice that the connection is local and use a kernel-space buffer for the data you send/recv directly, instead of sending it out through eth0/whatever. "But wait, this sounds just like shared memory," you say. Not exactly: your program still has to switch to kernel mode to stuff the data into the buffer or read it back out. With shared memory, you just go at it. That being said, there may be a smarter implementation (which I'm dying to hear about, yet I'm too lazy to "use the source, Luke" :D), but this is the one that's obvious to me. |