
LinuxQuestions.org (/questions/)
-   Linux - General (https://www.linuxquestions.org/questions/linux-general-1/)
-   -   Why so many Random Segmentation Faults (https://www.linuxquestions.org/questions/linux-general-1/why-so-many-random-segmentation-faults-219028/)

vda 08-17-2004 09:41 PM

Why so many Random Segmentation Faults
 
For several years now I have been chasing seg fault issues in Linux, and I have read most of the posts available. With more than 25 years of working with programs, memory, drives, hardware, pointers, paged memory, and so on ad nauseam, I am not satisfied with the general solution that seems to be the standard answer to this problem: buy a new computer with more memory and a bigger, newer processor. On the other hand, my seg fault problems only seem to occur when swap space is used. If I buy a new computer with more memory, I won't need the swap space, so I will not get the seg faults. Right? Right, until I manage to fill it up!

So when I look at my problems and the numerous other posts involving this issue, that does look like the standard solution. But wait a minute: all that does is hide the fact that the Linux swap system (since 2.4.?) does not seem to work consistently. So what is it that has been fixed in the swap system? Or what is it that still needs to be fixed? In any case, my machine should run Linux reliably even if it needs swap space. I have shuffled drives, memory, etc., and nothing changes. I am generally convinced the problem is indeed the software, and the kernel guys need to address the issue.

After all is said and done, the reason for having swap space in the first place is so you can run program combinations that will not fit into real memory. It should work as well with 16 MB as it does with 1 GB; a lot slower, but it should still work. My system is running two swap partitions of 512 MB each. The problem occurs when compiling large systems such as gcc, glibc, etc. Large Perl scripts also seem to produce a lot of random results, indicating to me that Perl is possibly trapping the seg fault. I have seen mainframes run with as little as 4K of real memory, so please do not tell me that Linux is unable to function reliably on a Pentium machine with 32 MB. It may not be optimal, but it should work.
A related question is whether something in gcc/glibc/etc. is perhaps also using swap space, overwriting page space and causing this. Or perhaps the warm-boot process is fouling the swap space. gcc always seems to be involved in this problem, so perhaps... In any case, the OS should be the only user of its swap space; nothing else should ever be allowed to touch it, period.

Suggestions as to how config settings, or software in general, could be causing this are welcome. I am aware of the hardware issues and have convinced myself that hardware is not the problem; in general, changing the hardware only masks the real issue. In the end, I need Linux to run well enough that I can max out a brand-new server with no fear of ever getting a seg fault related to swap partitions, paging, or heavy use of the swap system.

Tinkster 08-18-2004 01:40 AM

Hi, and welcome to LQ.

In all those years of chasing segfaults, did it
ever occur to you that this might be a hardware problem?
I've been making heavy use of swap over the
years and have compiled quite a few huge projects,
and the only times I've ever encountered spontaneous
segfaults in random applications were with bad
RAM, on two different motherboards.


Cheers,
Tink

