Is sudden, constant seg faulting a sign of dying hardware?
All of a sudden I'm getting constant segmentation faults. Apps are exiting willy-nilly for apparently no reason. Firefox is the worst, but I suppose that's because I use it the most.
It didn't used to be like this. Even versions of distros that I used to run without trouble now give me the same problem.
So I'm wondering if dying hardware could be the cause. I recently replaced a bad RAM module, but memtest86+ shows no errors now.
If it is a memory issue, try swapping banks - sometimes that helps (don't ask me why, it just sometimes does).
Usually has to do with gold contacts on memory modules and tin sockets in the mobos. Gold will actually cause tin to corrode faster than normal (due to a natural galvanic reaction) creating a thin layer of oxidation that prevents good contact. Reseating the modules periodically scrapes away that thin layer of corrosion.
If the errors started appearing soon after you replaced the RAM module, then the cause could be that the new stick is not running at the same speed as the old stick, or that its speed is not supported by the mobo. Basically if you have two sticks of different speed installed, the faster one will have to keep waiting for the slower one to catch up (the chain is only as strong as its weakest link, and all that) and that can lead to instability. Personally, although you can have different RAM speeds installed simultaneously, I would not recommend it, for that reason.
That said, are there any messages in your logs (eg, /var/log/messages)?
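To go looking for those messages, a quick sketch (log paths vary by distro - Debian-based systems use /var/log/syslog rather than /var/log/messages):

```shell
# The kernel logs each userspace segfault; check the ring buffer first,
# then the syslog file, keeping only the most recent hits.
dmesg | grep -i segfault
grep -i segfault /var/log/messages 2>/dev/null | tail -n 20
```

If the same app segfaults at the same instruction pointer every time, suspect the binary; if the addresses wander, suspect the hardware.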
Quote:
Originally Posted by J.W.
If the errors started appearing soon after you replaced the RAM module, then the cause could be that the new stick is not running at the same speed as the old stick, or that its speed is not supported by the mobo. Basically if you have two sticks of different speed installed, the faster one will have to keep waiting for the slower one to catch up (the chain is only as strong as its weakest link, and all that) and that can lead to instability. Personally, although you can have different RAM speeds installed simultaneously, I would not recommend it, for that reason.
That said, are there any messages in your logs (eg, /var/log/messages)?
Correct me if I am wrong - I thought the motherboard sets the speed, so that if bank1 has 400MHz RAM in place and bank0 only 333MHz, then both banks would operate at 333MHz, deliberately avoiding the issues you mentioned.
Putting slower memory into faster motherboards causes issues, but putting faster memory into slower motherboards does not.
I am thinking of the 'old' SDRAM days - 133s could easily operate in 100s mainboards, and they would run at the 100s' speed - not the other way around, though.
Well again, things can only operate as fast as the slowest component, whether that's the mobo or the RAM, and having components that run at different speeds may increase the chances of instability. It's not like it's a guaranteed recipe for disaster or anything, but it certainly increases the chances. Along those lines, if you put faster RAM into a mobo that can't support it, you are paying for performance that you cannot use. Again, not a major deal, but IMHO there's not much point in it. Just my 2 cents
But things don't run at different speeds - that is the point I am trying to make. If you have a 400MHz and a 333MHz bank, that does not mean that one bank runs at 400 and the other at 333...
The motherboard negotiates a speed at which all banks will run.
Same with every piece of hardware in your computer...
If you have an AGP card that does 8x AGP and your motherboard only does 4x AGP, that's what you will get.
If you have an AMD Athlon rated at 2400 but your motherboard will only run it at 2000, that is what you will get.
If you have a USB 2.0 stick and only USB 1.1 on your motherboard, that is the speed you will be working with when transferring data.
In every case mentioned above, reversing things can and will cause issues, though.
But the point is: memory bank0 and bank1 will always work at the same speed, never at different speeds.
I think you're missing my point. Yes, a 400MHz RAM stick in a mobo that can only support 333MHz RAM will be forced to slow down and run at 333MHz, but that is sort of like driving your car with the parking brake on. At least as I see it, there would be no point in buying a product that is designed to run at a certain performance level if it would be impossible for me to actually use it at that level, ie, putting a 400MHz stick in a 333MHz mobo means I'm paying for performance I simply cannot use. Personally, I recommend using the fastest RAM the mobo can support, but not buying RAM that is faster than what the mobo can support. In this case, putting a 400MHz stick into a 333MHz mobo is an automatic and unavoidable 17% performance reduction, and at least for me there's no benefit in that.
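For what it's worth, the 17% figure is just the ratio of the two clocks:

```shell
# Headroom lost running a 400MHz stick at 333MHz:
# (400 - 333) / 400 * 100 = 16.75%, i.e. roughly 17%.
awk 'BEGIN { printf "%.2f%%\n", (400 - 333) / 400 * 100 }'   # prints 16.75%
```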
J.W., you are wrong, and your example is wrong too. A much better example is a sports car that can reach 133 MPH, but the law limits it to 100 MPH on a particular road. The sports car can easily travel at 100 MPH. A normal car with a top speed of 100 MPH will be OK running at 100 MPH, but the speedometer will not show a steady 100 MPH.
Faster memory will always last longer than memory of equal speed. It is like a 240-volt incandescent bulb being used in the US. The bulb will last 10 years instead of a few months.
Electro - what you are saying is exactly the same thing as I am saying, namely that although a piece of equipment is capable of running at a faster speed, it is being limited to operating at a slower speed. Maybe my analogy wasn't ideal but I think my point is clear - the equipment is being held to run at a slower speed, that's all.
As for your comment on 240V lightbulbs, sorry, I have no idea what that has to do with a discussion about RAM and mobos. Please stay on topic, thanks.
J.W., faster memory is not being limited. The point I am trying to make is that faster memory chips are better than memory chips that merely match the bus speed or clock.
I was not off topic with the light bulb example. I was stating that faster memory will last longer when it is clocked slower.
Another example shows the effects of memory speed versus bus speed. Take two identical video cards that have 455 MHz memory chips and a bus speed of 450 MHz. Overclock one so its bus equals the speed of the memory chips, and keep the second card at its original clock. The overclocked card will produce artifacts while the second card does not. The artifacts are hardware errors caused by the characteristics of the memory chips. People who have overclocked their video cards will agree, but others who have not will be skeptical.
I'll start by saying that DDR RAM is rated at double the FSB clock. DDR400 runs at a max of 200MHz, DDR333 runs at a max of 166MHz, and DDR266 runs at a max of 133MHz.
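The mapping is just the marketing number divided by two, since DDR transfers on both clock edges:

```shell
# DDR marketing names are double the actual I/O clock.
for n in 266 333 400; do
    echo "DDR$n -> $(( n / 2 ))MHz"
done
```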
I'd suggest checking the bios and/or jumper settings on the board. If the system is forced to run at 200MHz FSB then DDR333 RAM will most likely generate errors. It will probably also be quite warm to touch.
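Besides poking through the BIOS screens, you can ask the SMBIOS tables what the board actually negotiated versus what the module is rated for. A sketch using dmidecode (needs root; the "Configured Clock Speed" field only appears with newer SMBIOS data, so older boards may report rated speed only):

```shell
# Rated module speed vs the speed the board actually configured.
# A module rated faster than its configured speed is normal; a module
# configured faster than the others support is worth investigating.
sudo dmidecode --type memory | grep -E '^[[:space:]]*(Speed|Configured)'
```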
Given that RAM is fairly cheap these days I do question the wisdom in buying DDR333 RAM.
Oh, and don't trust memtest! I've had bad RAM in which memtest didn't detect any errors, even after 8 hours of checking. However, Windows 2003 Server kept BSODing with that stick of RAM. I swapped it into another machine and the problem followed the RAM to that machine.
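If memtest86+ keeps coming up clean, a userspace tester run under the live kernel can sometimes catch what it misses, since it exercises the RAM under the same conditions your apps crash in. A sketch using the memtester utility (package name is `memtester` on most distros; the 75% sizing is an assumption to leave headroom for the running system):

```shell
# Size the test to ~75% of currently free RAM so the system itself
# isn't starved (MemFree in /proc/meminfo is reported in kB).
free_kb=$(awk '/MemFree/ {print $2; exit}' /proc/meminfo)
mb=$(( free_kb * 3 / 4 / 1024 ))
sudo memtester "${mb}M" 3   # 3 full passes over the allocated region
```

Like your Windows experience suggests, errors that only show up under a loaded OS are exactly the kind this approach is suited to.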
Last edited by giblet1973; 09-28-2006 at 08:07 AM.