Old 12-15-2008, 06:43 PM   #31
onebuck
Moderator
 
Registered: Jan 2005
Location: Central Florida 20 minutes from Disney World
Distribution: Slackware®
Posts: 13,923
Blog Entries: 44

Rep: Reputation: 3158

Hi,

Quote:
Originally Posted by Quakeboy02 View Post
I have to tell you guys that reading this thread would make a person think twice about building their own system. There is simply too much FUD being peddled in this thread. Put the motherboard in, put a fan on the case, and go. It really isn't any harder than that. But, to those who say it is, I challenge you to show me articles about computers that have died due to poor case ventilation. There should be loads of them. There aren't. It's really not that complicated guys!
Classify it as you want, but there is more to designing a good system than just placing fans within a case. A person should think out any design, be it a computer or a whirligig. There are loads of problems associated with poor design, planning and maintenance of systems, be they computers or anything else for that matter.

If you have the money to throw at a design then by all means do it. But I for one don't want a design that will cost me my money or my time. Yes, there are loads of problems with poorly designed and built systems; I get them in my shop all the time.

Sure, if you want to be simplistic then by all means go ahead. It's your money. But if you have customers that you support and that support is poor, then your repeat customers will be few. Yes, poor air transfer will cause you problems. Maybe not immediately, but you will develop issues because of poor design, especially if the customer's ambient temperatures are not a good fit for it.

As far as complications go, no, it's not that complicated, but you should be aware of the potential problems with improper air transfer caused by not sizing the system properly. Some common sense won't hurt either!

FUD? Great sales tool if you buy into that method of thought.
 
Old 12-15-2008, 06:59 PM   #32
jlinkels
LQ Guru
 
Registered: Oct 2003
Location: Bonaire, Leeuwarden
Distribution: Debian /Jessie/Stretch/Sid, Linux Mint DE
Posts: 5,195

Rep: Reputation: 1043
There are some non-relevant statements being posted in this thread (is that called FUD??). The messages of Quake and Jiml8 are technically relevant though. As far as I understand, Jim is an engineer; that might have something to do with it.

Let's first look at how to keep components cool. Components producing heat, mainly chips, have to transfer their heat to the environment. Without a heat sink, two thermal resistances (Rth) are relevant: from the junction (the die) to the case, Rth j-c, and, most importantly, the resistance from case to ambient, Rth c-a. The higher the resistance, the higher the temperature difference between ambient and die. Just like Ohm's law, the temperature difference is Rth * P, where Rth is the series combination Rth j-c + Rth c-a and P is the power dissipated in the chip.

If Rth c-a is too high, it must be decreased. This can be done by attaching a heat sink to the case. An additional Rth is added, case to heat sink, Rth c-h. The heat sink itself has a much lower Rth h-a than the case of the chip. Typical figures: Rth c-a around 30 K/W, while Rth c-h + Rth h-a can be 1-5 K/W for passive cooling.
Reference: http://en.wikipedia.org/wiki/Thermal...in_electronics
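To put some numbers on that, here is a minimal sketch in Python; the Rth and power figures are purely illustrative assumptions, not datasheet values.

Code:
# Minimal sketch of the thermal-resistance arithmetic above; all Rth values
# (K/W) and the dissipated power are illustrative assumptions.

def die_temp(t_ambient_c, power_w, *rth_k_per_w):
    """Die temperature = ambient + P * (sum of the series thermal resistances)."""
    return t_ambient_c + power_w * sum(rth_k_per_w)

P_CHIP = 3.0        # W dissipated in a small chip (assumed)
T_CASE_AIR = 35.0   # degC of the air inside the case (assumed)

# Bare chip: Rth j-c + Rth c-a
print(die_temp(T_CASE_AIR, P_CHIP, 1.0, 30.0))       # ~128 degC, far too hot
# Passive heat sink: Rth j-c + Rth c-h + Rth h-a
print(die_temp(T_CASE_AIR, P_CHIP, 1.0, 0.5, 4.0))   # ~51.5 degC
# Forced airflow over the heat sink (Rth h-a ~ 0.2 K/W)
print(die_temp(T_CASE_AIR, P_CHIP, 1.0, 0.5, 0.2))   # ~40 degC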

To avoid enormous heat sinks it might be necessary to force an airflow over the heat sink to reduce Rth h-a even further; a typical figure is 0.2 K/W. This is exactly how the CPU, the northbridge and the GPU(s) are cooled. Some clever designs from A-brands have a duct mounted on top of the CPU heat sink which funnels the hot air from the CPU directly to the outside of the case.

If not, this heat goes into the ambient, which is the inside of the case. It is correct that this heat has to be transferred out of the case. In earlier days, the power supply fan had sufficient capacity to provide this displacement: air from the case was sucked into the power supply, went over its heat sinks and was blown outside. For modern power-hogging computers, additional fans at the front and rear of the case might be needed to replace the heated air with sufficient cool air.

The generic computer cases manufactured for clone and DIY builders are a nightmare from a thermal design point of view. The interface cards are stacked horizontally in a corner of the case where there is no airflow at all. The hard disks are also stacked horizontally, with even less space in between and no airflow either. Even if you mount a fan at the front of the case with the airflow directed at the interface cards, you'll only create turbulence. Some heat will be removed that way, but it is far from ideal. If you mount a fan at the rear of the box, with some luck air is sucked in at the front and a laminar airflow is created over the disks. This effect is totally nullified if you mount a fan at both the front and the rear.

Mounting hard disk coolers is an inelegant brute-force approach to transferring heat from the hard disk to the air in the case. There is no thermal design whatsoever involved, but by blowing sufficient air there will certainly be some cooling effect. Hopefully this hot air joins the airflow through the box; otherwise you simply have hot hard disks in a balloon of hot air trapped in the case.

Heat transfer from the GPUs by forced cooling is usually just as inelegant. The heat coming off the heat sink is simply blown onto the card and the components neighbouring the graphics card, causing lots of turbulence again. Hopefully some air occasionally passing by will carry this heat to the outside as well.

Solutions are not difficult, but they are generally hard to implement if they are not an integral part of the design. One good solution I have seen on graphics cards is a duct which directly funnels the hot air from the heat sink through the slot plate to the outside.

A better solution would have been to create small slots in the chassis between the slots for the interface cards and put a large fan on the front of the case. This fan should produce a large (in terms of dimensions, not necessarily volume) airflow which flows laminarly over the interface cards and leaves the case at the rear. Non-PC computer equipment is often designed like this, with airflow from bottom to top in a 19" case and the cards placed upright, stacked from left to right.

Hard disks should ideally have a heat sink integrated in the case, so a small laminar airflow would suffice to cool these. The hard disks should be placed in a sub cage with a fan on one end so air could flow freely over the disks. Obviously the hot air has to be blown outside the case and not inside.

In the current situation, fans are attached locally to heat sources and transfer heat by turbulence and blow the air into the case. More fans are added to the front and the rear to get the air out of the case and add more chaos to the turbulence.

If the case is opened, it doesn't necessarily mean that the computer overheats. All the heated air is in contact with the room, so there is a fair chance that enough turbulence is present to provide sufficient heat exchange with the room. This is of course not true for real designs where a laminar airflow is directed carefully over the components. The Compaq Deskpro EN was such a design, where the CPU cooler was passive and cooling depended on laminar airflow through the case.

Since there is so much turbulence in the case instead of laminar airflow, huge deviations in cooling capacity can result. This makes the entire discussion about air pressure and air density, depending on whether air is blown in or sucked out, purely academic.

Unfortunately, that means that Jim's method of measuring the temperature of critical components and adjusting the airflow accordingly is one of the better ways to control the temperature.
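On Linux you can do that measuring without extra hardware: the kernel exposes whatever temperature sensors it knows about under /sys/class/hwmon. A minimal sketch (which sensors actually show up depends entirely on your motherboard and the drivers you have loaded):

Code:
# Minimal sketch: print every hwmon temperature sensor the kernel exposes.
# Sensor availability (CPU, chipset, disks, ...) depends on hardware/drivers.
import glob, os

for temp_file in sorted(glob.glob("/sys/class/hwmon/hwmon*/temp*_input")):
    hwmon_dir = os.path.dirname(temp_file)
    try:
        chip = open(os.path.join(hwmon_dir, "name")).read().strip()
        label_file = temp_file.replace("_input", "_label")
        label = (open(label_file).read().strip()
                 if os.path.exists(label_file)
                 else os.path.basename(temp_file))
        millideg = int(open(temp_file).read().strip())
        print(f"{chip:12s} {label:12s} {millideg / 1000.0:5.1f} degC")
    except (OSError, ValueError):
        pass  # sensor vanished or is unreadable; skip it

Watch those readings while you experiment with fan placement and you will quickly see which changes actually help.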

jlinkels
 
Old 12-15-2008, 08:03 PM   #33
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
Quote:
Originally Posted by jlinkels View Post
There are some non-relevant statements being posted in this thread (is that called FUD??).
FUD is an acronym for Fear, Uncertainty and Doubt. When people start picking at the nits on the nuts of gnats, FUD is certain to follow unless they quantify the size of the nits and the impact they have on said gnats. Unfortunately we see the same FUD being spread on the motorcycle boards when people start talking about which oil to use.

Quote:
Let's first look at how to keep components cool. Components producing heat, mainly chips, have to transfer their heat to the environment. Without a heat sink, two thermal resistances (Rth) are relevant: from the junction (the die) to the case, Rth j-c, and, most importantly, the resistance from case to ambient, Rth c-a. The higher the resistance, the higher the temperature difference between ambient and die. Just like Ohm's law, the temperature difference is Rth * P, where Rth is the series combination Rth j-c + Rth c-a and P is the power dissipated in the chip.
OK, even my eyes have now rolled up past my eyelids. This is probably all true, but pretty much irrelevant overkill for the subject at hand, and I just don't care. You don't need an engineering degree to bolt parts in a case and have a computer that lasts a long time. My message will continue to be: "Put it together, put a fan on the case, start it up, enjoy." If it needs to be more complicated than that, then the business of selling individual pieces to consumers will die. It's clearly a thriving business, so the simpler case(sic) must be the one that matters.

Last edited by Quakeboy02; 12-15-2008 at 08:05 PM.
 
Old 12-15-2008, 08:58 PM   #34
jiml8
Senior Member
 
Registered: Sep 2003
Posts: 3,171

Rep: Reputation: 116
Quote:
Originally Posted by Quakeboy02 View Post
OK, even my eyes have now rolled up past my eyelids. This is probably all true, but pretty much irrelevant overkill for the subject at hand, and I just don't care. You don't need an engineering degree to bolt parts in a case and have a computer that lasts a long time. My message will continue to be: "Put it together, put a fan on the case, start it up, enjoy." If it needs to be more complicated than that, then the business of selling individual pieces to consumers will die. It's clearly a thriving business, so the simpler case(sic) must be the one that matters.
This is fine for a basic system, when the case is large enough and has enough empty space in it to accommodate the likely inefficiencies. One integrated motherboard, one processor, a couple of memory sticks, one hard drive, maybe (MAYBE!) a separate video card, maybe a NIC, one DVD drive...and away you go.

If you start to get much more than that in the box, you need to start being careful.
 
Old 12-15-2008, 10:35 PM   #35
arnuld
Member
 
Registered: Dec 2005
Location: Punjab (INDIA)
Distribution: Arch
Posts: 211

Original Poster
Rep: Reputation: 30
Quote:
Originally Posted by pixellany View Post

... SNIP...

"died due to poor ventilation"? Well, not specifically, but I've certainly had the following:
* Shutdown (2)
  Solution: Remove and reinstall CPU with new thermal compound.

Any one of these would eventually lead to failure of some component.

Well, once I opened my cabinet to clean the MOBO and the heat sink on the processor, and I found that the heat sink was totally glued to the processor beneath it. When I applied a little force, both the CPU and the heat sink came out together while the socket was still locked :\ . There is a lock on the socket: you pull up a small lever to unlock it, and you can only pull up the lever if the heat sink has been removed. Well, what the heck, the green liquid on the AMD64 CPU, which AMD calls a cooling agent, worked like glue and stuck the heat sink and the CPU together, so I was unable to separate the heat sink from the CPU and therefore unable to unlock the socket. Once they were out, I cleaned off the cooling agent with petrol, cleaned lots of dust from the fan and the heat sink, then unlocked the socket, placed the CPU and heat sink back on it, and it has been working for the last year.

I asked myself many times, did I do something wrong? But then I remembered my friend who owns a cyber-cafe near a main road of our market, where all of the dust gets down inside his Pentium 4 machine running on an Intel MOBO. In 2007, after 3 years of continuous running (daily 10 AM to 9 PM), he removed his heat sink and saw that the cooling agent had completely dried up; like a dry leaf it came off without even being touched. He just cleaned his MOBO, put it back together, and is still using it as before, no problems.
 
Old 12-15-2008, 10:57 PM   #36
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
I had typed up a long comparison of the cooling needs for the various components in a computer and managed to lose it. However the one thing that stood out was the magic number of +122F/50C for operating or ambient temps on the various components (depending on the device). Some are higher, and I can't even find a temp warning for NVIDIA's 9800 GTX+, but 122F seems to be the magic number for silicon. It's hard to imagine that even a primitive cooling system wouldn't be able to keep the ambient air in even the most crowded case at well below 122F.

Would a lower temp than 122F be better? Common sense would say yes, but then common sense rarely is. If by "better" you mean less likely to exceed the maximum rating, then yes, lower is better. If you mean that 70F is dramatically better than 95F for the ambient case temperature - I'd have to see some proof.
 
Old 12-15-2008, 11:29 PM   #37
lazlow
Senior Member
 
Registered: Jan 2006
Posts: 4,363

Rep: Reputation: 172
Quake

There is a HUGE difference between case temps and the temps of components. The lower the temperature of the case, the faster you can pull heat out of those components. Also, any "comparison" of the cooling needs of various components would be essentially worthless. While the P4 CPUs used to be the hottest thing in the case a few years ago, today it is usually the northbridge or the GPU that is hottest. For example, take a look at the Atom 270-based desktops (as well as many other newer CPUs): the CPU is passively cooled but the northbridge still requires a fan. If you look at some of the newer 10K drives they are also pretty high on the hot list, while most drives would be reasonably low on such a list. I would think the fact that almost any data center is cooled well below 95F would be proof. Companies do not spend that kind of money without good reason.
 
Old 12-16-2008, 06:53 AM   #38
salasi
Senior Member
 
Registered: Jul 2007
Location: Directly above centre of the earth, UK
Distribution: SuSE, plus some hopping
Posts: 4,070

Rep: Reputation: 897
Quote:
Originally Posted by Quakeboy02 View Post
Would a lower temp than 122F be better? Common sense would say yes, but then common sense rarely is.
If you believe something like MIL217, the US military standard for the assessment of the reliability of electronic designs, you will believe that cooler is better; you will believe that the Arrhenius equation applies and that the difficulty of exceeding the energy barrier (i.e., the difficulty of a failure occurring) roughly halves per 10 degrees C of temperature rise.

The trouble is, having done professional work in this area, I can tell you that, in practice, MIL217 is wrong (but then, it doesn't say that it isn't): it can or does work well for components that wear out (think fans and electrolytic capacitors) but not for semiconductors (processors, chipsets).

However, unless thermal shock is an issue (and in this context and in this way, it isn't) cooler, down to 'room temp' at least, is never worse and is sometimes better.

(Cooler may be worse if the cooling system induces thermal shock and cooler may be worse if you go sub-zero...but discussing liquid nitrogen cooling systems is out of scope.)
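For what it's worth, here is a minimal sketch of that rule of thumb (a doubling of failure rate per 10 C rise is the rough MIL217-style figure; per the caveats above, real semiconductors are messier):

Code:
# Rule-of-thumb sketch: relative failure rate vs temperature, assuming a
# doubling per 10 degC rise. Reference temperature is an arbitrary 25 degC.

def relative_failure_rate(temp_c, ref_temp_c=25.0):
    return 2 ** ((temp_c - ref_temp_c) / 10.0)

for t in (25, 35, 45, 55, 65):
    print(f"{t} degC -> {relative_failure_rate(t):4.1f}x the 25 degC rate")
# 25 -> 1.0x, 35 -> 2.0x, 45 -> 4.0x, 55 -> 8.0x, 65 -> 16.0x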

Quote:
If you mean that 70F is dramatically better than 95F for the ambient case temperature - I'd have to see some proof.
No, it's not dramatic, although the difference is more than a factor of two for some components. See previous.

Quote:
For example, take a look at the Atom 270-based desktops (as well as many other newer CPUs): the CPU is passively cooled but the northbridge still requires a fan.
True, but you seem to have chosen the least typical example. The CPU is a new energy-efficient design, specifically and carefully designed for the best 'bang per buck' on a 'units of computation per amount of heat dissipated' basis. Intel, controversially, decided to save money by reusing an existing and old chipset. It is an old design and didn't even get a die shrink, which would have helped its thermals. Most new CPUs get new chipsets, so that isn't a normal situation; Intel's new chipset isn't due until sometime midway through '09 (IIRC). Until then their low-power (in computation and dissipation) part will continue to be very close in dissipation, at a system level, to, e.g., a Pentium 2140, and the 2140 is significantly better in computing power terms.

Really, until the new chipset is available, there isn't much of an argument for the Atom; the poor power performance of the chipset makes that much difference to the system overall.

(Apologies to the OP; I can't see any of this helping you!)


Quote:
If you look at some of the newer 10K drives they are also pretty high on the hot list, while most drives would be reasonably low on such a list.
Look also at the progress with Raptors/VelociRaptors; over the years, they have moved from performance approaching that of decent SCSI drives at a real power (and, of course, size) penalty, to drives that, while not the lowest in power dissipation, are at the top of the mainstream rather than lying outside it. Note also that drives tend to deal with their power dissipation in a different way from most components; most components need air flowing directly around them to have a decent chance of getting rid of heat, whereas drives, being bolted directly to the case, get rid of a significant fraction of their power by conduction. (In a lesser way, this is also true of the step-down MOSFETs on the motherboard, but as they can only conduct heat to relatively local areas of the motherboard, it isn't the same situation as with drives.)
 
Old 12-16-2008, 10:44 AM   #39
jiml8
Senior Member
 
Registered: Sep 2003
Posts: 3,171

Rep: Reputation: 116
Quote:
Originally Posted by salasi View Post
Look also at the progress with Raptors/VelociRaptors; over the years, they have moved from performance approaching that of decent SCSI drives at a real power (and, of course, size) penalty, to drives that, while not the lowest in power dissipation, are at the top of the mainstream rather than lying outside it. Note also that drives tend to deal with their power dissipation in a different way from most components; most components need air flowing directly around them to have a decent chance of getting rid of heat, whereas drives, being bolted directly to the case, get rid of a significant fraction of their power by conduction. (In a lesser way, this is also true of the step-down MOSFETs on the motherboard, but as they can only conduct heat to relatively local areas of the motherboard, it isn't the same situation as with drives.)
Many drives these days are NOT bolted directly to the case, but are isolation mounted for noise reduction purposes. The drives in my system are mounted like that, except for the system drive which is hard-mounted in a drive cooler.
 
Old 12-16-2008, 12:06 PM   #40
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
Quote:
Originally Posted by lazlow View Post
I would think the fact that almost any data center is cooled well below 95F would be proof. Companies do not spend that kind of money without good reason.
lazlow,

You've missed the point entirely about 95F. Go back to my second post, the one on cooling. See the formula that discusses cooling. Run the numbers and you will see very quickly that you can't actually get a DT (delta temperature, i.e. the difference between incoming airflow and outgoing airflow) of zero with passive cooling. What you can do easily is get DT down to 20F. If you take a comfortably warm house to be 75F and add 20F, you get 95F. This 95F is on the inside of the case, not the room temperature. Considering that 122F ambient is the average limit, I don't see any reason why 95F isn't a good target for ambient air inside a computer case.
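If anyone wants to play with the numbers, here is a rough sketch of that DT estimate; the wattage and airflow figures are assumptions, and a fan's rated CFM generally overstates what it actually moves once it is in a case:

Code:
# Minimal sketch: steady-state air temperature rise through a case,
# delta_T = P / (rho * cp * volumetric_flow). Wattage and CFM are assumptions.
CFM_TO_M3S = 0.000471947   # 1 cubic foot per minute in m^3/s
RHO_AIR = 1.2              # kg/m^3, near sea level
CP_AIR = 1005.0            # J/(kg*K)

def delta_t_c(power_w, airflow_cfm):
    return power_w / (RHO_AIR * CP_AIR * airflow_cfm * CFM_TO_M3S)

dt = delta_t_c(300, 60)    # ~300 W of heat, ~60 CFM actually moved through
print(f"{dt:.1f} C rise  ({dt * 1.8:.1f} F)")   # roughly 8.8 C / 15.8 F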

Let's not lose sight of the fact that hundreds of thousands of cases, motherboards, CPUs, disks, etc. are sold every year on eBay and even at Best Buy. Given that this remains a thriving business, the cooling situation cannot possibly be as dire as some of you are intimating. Put the thing together and enjoy it. All the rocket science has already been done and judged good.
 
Old 12-16-2008, 12:10 PM   #41
jlinkels
LQ Guru
 
Registered: Oct 2003
Location: Bonaire, Leeuwarden
Distribution: Debian /Jessie/Stretch/Sid, Linux Mint DE
Posts: 5,195

Rep: Reputation: 1043
I forgot to mention this in my previous post, but of course it is a shame that in the last decade no significant large-scale improvement has been made in the power efficiency of CPUs, GPUs and chipsets.

Think about this: over the past 6 years processors have become 10-20 times more powerful (that is 4 times 1.5 years, so the increase in processing power is about 2^4 according to Moore's law), but dissipation has hardly increased. If I remember correctly, in 2002 an AMD64 Athlon dissipated around 60-70 watts, and that is about equal to what we see in the latest dual-core CPUs. Am I far off?

It also means that if development had focused only on power saving, we would now run a 2002-class processor at about 6 watts. I think this is very much what the VIA C7 and the Intel Atom 245 do; the Atom maybe even a bit less than 6 watts. Obviously it is possible to save energy, but as long as buyers accept 200 watts for a moderately powerful system, no one feels obliged to do so for the mainstream market.
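A quick back-of-the-envelope check on that claim (the 2002 wattage is from memory, as said above, so treat the numbers as rough):

Code:
# Back-of-the-envelope sketch of the argument above: performance doubling
# every 1.5 years and a 2002-era CPU dissipating 60-70 W (figure from memory).
years = 6
doublings = years / 1.5          # 4 doublings in 6 years
speedup = 2 ** doublings         # 2^4 = 16x, i.e. the "10-20 times" above
for watts_2002 in (60, 70):
    print(f"{watts_2002} W / {speedup:.0f} = {watts_2002 / speedup:.1f} W "
          "for 2002-class performance")
# 3.8-4.4 W, the same ballpark as the ~6 W guessed above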

What I want to say is that it is disgusting how much Intel/AMD/Nvidia have been focusing on speed and on marketing faster CPUs and GPUs, maybe pushed by the ever-increasing demands of some bloated OSes, without thinking about the use of natural resources and the production of CO2. What more are our computing demands compared to 2002? Transparent windows? Anti-virus software?

Is it a coincidence that mainboards with the Atom emerged when the oil price headed toward $100? Again, it seems to be a matter of money and marketing rather than common sense regarding the degradation of the environment. And, of course, my own electricity bill. I am afraid that now that the oil price is back to $47 and development budgets will be cut because of the recession, we will have to wait much longer for energy-saving processors and chipsets.

Too bad that only in Europe is it possible to impose strict government rules regarding energy consumption, like the phase-out of incandescent lamps in favor of energy-saving light bulbs. I wish the EU would ban energy-hogging computer equipment as well. With a potential market of 700 million people, maybe that would motivate those companies to focus a bit more on efficiency.

And it would also help tremendously with the cooling problems described in this thread.

jlinkels
 
Old 12-16-2008, 01:56 PM   #42
lazlow
Senior Member
 
Registered: Jan 2006
Posts: 4,363

Rep: Reputation: 172
Quake

First, no I did not miss your earlier post. I just do not agree with it. The ability of heat-dissipating devices (CPU coolers, northbridge coolers, etc.) to move heat drops off as the temperature differential becomes smaller. For instance, when my case temp is 28C my temps are 48C CPU and 44C northbridge. Now if I allow my case temp to jump to 33C (a relatively small change), my temps change significantly: 56C CPU and 48C northbridge (right next to the incoming airflow). While I have not tested it, I would imagine that if I allowed my case temp to rise to your proposed 35C (95F), both my CPU and my northbridge would be at (or rapidly approaching) their rated thermal limits (60C and 55C). Just for the record: 70 CFM in and 55 CFM out on my main system. I would also like to point out that one of the dangers of using information found on the internet (your earlier link) is that it can be very dated. Case in point:

Quote:
The governing principle in fan selection is that any given fan can only deliver one flow at one pressure in a given system.
Either your link is massively outdated (by years) or it is poorly informed. Variable-speed fans and fan controllers have been available for years (10? more?).

As to your earlier point that 100cfm is enough, I would go 100-150cfm just depending on the specific system. With the current direction of equipment, that number is almost certainly dropping.
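If you want to turn that into a rough sizing estimate, here is a sketch; the wattages are assumptions, and remember that the air a case actually moves is well below the fans' combined rating:

Code:
# Minimal sketch: airflow needed to hold the case air to a target rise above
# room temperature, inverting delta_T = P / (rho * cp * flow).
# The wattages below are assumptions, not measurements.
CFM_TO_M3S = 0.000471947
RHO_AIR, CP_AIR = 1.2, 1005.0

def cfm_needed(power_w, target_delta_t_c):
    return power_w / (RHO_AIR * CP_AIR * target_delta_t_c) / CFM_TO_M3S

for watts in (150, 250, 350):
    print(f"{watts} W at a 5 C rise -> {cfm_needed(watts, 5):.0f} CFM actually moved")
# roughly 53, 88 and 123 CFM respectively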

Salasi

While the Atom is one of the more extreme examples, and I agree Intel should get a smack upside the head for using that chipset, it is not the only example (as I mentioned above). AMD's 45-watt CPUs are in the same boat: you can put on a good cooler (like a Thermalright SI-128) and run them without a fan, except that the northbridge is still too hot (it needs a fan). While I do not know Intel's line all that well, it is my understanding that they have the same general situation.
 
Old 12-16-2008, 02:13 PM   #43
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
Quote:
Originally Posted by lazlow View Post
Quake

First, no I did not miss your earlier post. I just do not agree with it.
You have a right to your opinion, but you should be bringing some numbers with you, such as figures for heat-related failures of consumer-built machines.

Quote:
The ability of heat-dissipating devices (CPU coolers, northbridge coolers, etc.) to move heat drops off as the temperature differential becomes smaller. For instance, when my case temp is 28C my temps are 48C CPU and 44C northbridge. Now if I allow my case temp to jump to 33C (a relatively small change), my temps change significantly: 56C CPU and 48C northbridge (right next to the incoming airflow).
33C - 28C = 5C = 9F. That's not a relatively small change.

Quote:
Just for the record 70cfm in and 55cfm out, on my main system.
Are you saying that you have two fans, one moving air in and one moving air out? Wouldn't those figures be limiting, rather than additive? IOW, it's doubtful that you're actually getting 125cfm of airflow. It's even problematic whether you're getting 55cfm of airflow, though possible.

Quote:
I would also like to point out that one of the dangers of using information found on the internet (your earlier link) is that it can be very dated.
Have the laws of physics from whence the formulas on that page came changed, or are the formulas simply wrong?

I do understand that you are strongly wedded to your ideas. What I don't understand is how you can justify them in the real world. Heat-related problems simply don't happen that often in the real world with off-the-shelf parts. To quote a Wendy's commercial: "Where's the beef?"
 
Old 12-16-2008, 03:44 PM   #44
lazlow
Senior Member
 
Registered: Jan 2006
Posts: 4,363

Rep: Reputation: 172
No, the laws of physics have not changed, nor did I say that they had. However, in any paper like this certain assumptions are made, and the further real conditions are from those assumed conditions, the less relevant the provided information is. Since that paper appears to be well over a decade old, the amount of heat generated in a case has continually increased over the last decade (at least until relatively recently), the sources of that heat have shifted, and fan technology has changed drastically, I am suggesting that you read it with all of that in mind.

No, the total flow through the case is certainly not in the 100-150 CFM range, but 99.9% of people have no way of measuring actual throughput. What people can do is look at the rated output of their fans. In my case I would suspect my actual throughput is in the low 60s CFM, with the 55 CFM fan moving more air than its rating because of the positive pressure and the 70 CFM fan moving less because of the same pressure. This effect is not limited to multi-fan setups; a single fan in a case will certainly not move the amount of air it is rated at either. Even the link you provided gave the basic idea of series/parallel operation as well as its advantages and limitations.
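Here is a crude sketch of that balance, modelling each fan with a straight-line pressure/flow curve and assuming (purely for illustration) equal maximum static pressure for both fans and no case leakage:

Code:
# Crude sketch: net flow with a 70 CFM intake and a 55 CFM exhaust fan.
# Linear fan curves, equal max static pressure (assumed), no leakage.
Q_IN, Q_OUT = 70.0, 55.0      # rated free-air CFM from the post above

# With positive case pressure at a fraction x of the fans' max pressure,
# the intake delivers Q_IN*(1 - x) and the exhaust Q_OUT*(1 + x).
# Steady state: Q_IN*(1 - x) = Q_OUT*(1 + x)
x = (Q_IN - Q_OUT) / (Q_IN + Q_OUT)
flow = Q_IN * (1 - x)
print(f"pressure fraction {x:.2f}, net flow ~{flow:.0f} CFM")   # ~62 CFM

Under those (very rough) assumptions you end up in the low 60s CFM, which is the same ballpark as the estimate above.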

Relative to your 25F (95-70), 9F is less than half of what you suggested. I guess it is a matter of opinion, but I would consider less than half to be relatively small.

Talk to virtually anybody who works on personal systems (I do) and try to tell them that heat-related problems are a rare thing. They will probably laugh at you. As far as how Dell (or anybody who sells pre-built machines) sets up their machines goes, their goal is to make the system last just long enough for the user to feel they have gotten good value from the product (repeat sales), not to build a long-lasting product that they can only sell you once (which would apply to almost any product).

Last edited by lazlow; 12-16-2008 at 03:52 PM.
 
Old 12-16-2008, 04:27 PM   #45
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
Quote:
Originally Posted by lazlow View Post
Talk to virtually anybody who works on personal systems (I do) and try to tell them that heat-related problems are a rare thing. They will probably laugh at you. As far as how Dell (or anybody who sells pre-built machines) sets up their machines goes, their goal is to make the system last just long enough for the user to feel they have gotten good value from the product (repeat sales), not to build a long-lasting product that they can only sell you once (which would apply to almost any product).
Can you, or anyone out there, quantify this, or are we stuck in the Fear, Uncertainty, Doubt cycle? How many machines are you talking about? Is it 50%, 10%, 1%, 0.1% or some even smaller percentage of the total motherboard/CPU combos installed by Joe Schmoe? So far, this thread has produced only "I'm sure", "I'll bet", or "talk to anyone" as a quantity.
 
  

