LinuxQuestions.org > Forums > Linux Forums > Linux - Distributions > Slackware
Slackware: This Forum is for the discussion of Slackware Linux.
Old 08-08-2017, 10:38 AM   #61
the3dfxdude
Member
 
Registered: May 2007
Posts: 730

Rep: Reputation: 358

Quote:
Originally Posted by upnort View Post
Good point. Heat means energy consumption, which is something I want to limit. Reducing energy consumption is one reason I began using onboard graphics many years ago.

Yesterday I noticed modern discrete fanless graphic cards that the heat sinks are huge, as cwizardone mentioned. Looking a tad bit deeper I notice that larger PSUs are required/recommended even when the card is fanless. My guess is even when the graphics card is fanless a decent chassis fan is a must.

As I do not do anything that stresses graphics chips, I probably would not need to worry about the heat as critically as others. Nonetheless, a consideration point.
Power and heat go hand-in-hand. When looking at a particular model number, find the power dissipation and you'll know roughly how much cooling you'll need. Even with a passive heat sink some airflow is required, but probably nothing more than a typical case can provide. So if you do get the higher-end video card, you'll need an appropriate power supply and case fans. I find case fans more reliable, quieter, and easier to replace. I'd have to admit, though, I've been fanless (both Nvidia and ATI) since 2000. You'd have to spend a little more to get a good GPU, and of course don't try to shortcut and cheap out. But naturally, the passively cooled options are not going to be as expensive as the highest-end stuff that requires localized heat removal.

I wish I could give a recommendation on what to try, but I've not bought a discrete video card in 7 years now. I did purchase an Intel J1900 board in the last couple of years and it works fine. It is a completely fanless system.
 
Old 08-08-2017, 10:51 AM   #62
Regnad Kcin
Member
 
Registered: Jan 2014
Location: Beijing
Distribution: Slackware 64 -current
Posts: 663

Rep: Reputation: 460
I built two J1900 boxes. One drives an ELISA plate reader and the other is on the desk of a colleague. The colleague's unit is an ASRock board and the one in the lab is a Biostar. Both work fine, but the ASRock is by far the nicer unit.
 
Old 08-08-2017, 02:10 PM   #63
enorbet
Senior Member
 
Registered: Jun 2003
Location: Virginia
Distribution: Slackware = Main OpSys
Posts: 4,784

Rep: Reputation: 4434
Of course it is a given that "power and heat go hand-in-hand," and it is also true that it is rather silly to own a 1000 hp muscle car for a simple urban commute. However, if you commonly need to do long hauls with lots of cargo, it's ridiculous to buy a Geo Metro for that. Apologies for the car analogies, but they do apply and we can all relate. Similarly, if you replace a 6-cylinder 200 hp engine with a 400 hp V8, you're flirting with disaster if you don't also replace or upgrade the radiator.

Back in the world of computers as engines, it is possible to get by for a time "running in the red zone," but know that you are depending mostly on luck. 60C = 140F, which is the human threshold of pain. Most will pull away quickly by 130F, and just about everyone but severely disturbed masochists will immediately draw back from anything over 60C/140F. Certainly silicon doesn't feel pain and can happily operate above that level, but consider that there is a reason Crays were liquid cooled and most large server farms rely on liquid cooling. In Ham Radio there is an old cliche that "a dollar's worth of antenna is worth ten dollars of amplifier," and likewise mission-critical computing recognizes that a dollar in cooling is worth much more in hardware, not to mention the cost of inaccuracy or downtime. This is especially true if any mechanical devices still exist in your PC, such as hard drives and optical drives.
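As a quick check of the figure above (F = C × 9/5 + 32; the division is exact for 60C, so shell integer arithmetic is enough):

```shell
# Verify the conversion quoted above: F = C * 9/5 + 32
c=60
f=$(( c * 9 / 5 + 32 ))
echo "${c} C = ${f} F"   # prints: 60 C = 140 F
```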

The very best fans one can buy generally cost under $40 USD and very good smaller ones can cost as low as 10 bucks. They draw very little current since electric motors are one of the most efficient engines in existence. The cost/benefit ratio is extremely favorable. Professional Gamblers aren't really gamblers. They're just really good at risk assessment. True Gamblers are more often than not Losers. You might be lucky but just know that if you have some agenda that doesn't include risk reduction you're playing against The House.

Knowing the risks and planning accordingly is never unwise.

Last edited by enorbet; 08-08-2017 at 02:12 PM.
 
2 members found this post helpful.
Old 08-08-2017, 06:05 PM   #64
the3dfxdude
Member
 
Registered: May 2007
Posts: 730

Rep: Reputation: 358
Quote:
Originally Posted by enorbet View Post
Back in the world of computers as engines, it is possible to get by for a time "running in the red zone," but know that you are depending mostly on luck. 60C = 140F, which is the human threshold of pain.
Actually I'm quite aware of the limitations of ICs, having worked with thermal analysis firsthand. At the consumer level, knowing the required power dissipation (usually published for performance parts like video cards) is a good idea when building your own. However, as a consumer you don't manufacture the card (at least I don't), so some amount of trust needs to be developed. For somebody who doesn't know what that means for cooling requirements, do a little comparison: pick out a device, find some card manufacturers, and compare their heat sinks and fans. You'll get a relative sense of who's short-changing you with an ill-equipped part. I've personally never had an issue picking a good one.

Now on the other hand, I did see one Nvidia device once (15+ years ago) where someone had issues with their machine freezing in games. Passive cooling was probably more common on the really cheap stuff back then. Well, I took a look at the card and found about the oddest heat sink I'd ever seen. Because that seemed so suspect, I put a fan on it, ran the game, and then no more problem...
 
Old 08-08-2017, 06:11 PM   #65
Richard Cranium
Senior Member
 
Registered: Apr 2009
Location: McKinney, Texas
Distribution: Slackware64 15.0
Posts: 3,858

Rep: Reputation: 2225
Quote:
Originally Posted by the3dfxdude View Post
Now on the other hand, I did see one Nvidia device once (15+ years ago) where someone had issues with their machine freezing in games. Passive cooling was probably more common on the really cheap stuff back then. Well, I took a look at the card and found about the oddest heat sink I'd ever seen. Because that seemed so suspect, I put a fan on it, ran the game, and then no more problem...
I had an SLI setup a couple of years ago. While playing a game fairly heavy in the 3D department, the game would slow down drastically after about 5 minutes of play. It turns out that one of the two cards had a sticky fan. Normal use wouldn't overheat the card, but that game would. One card would shut down due to overheating, and the other one really couldn't handle the game's requirements by itself.
 
Old 08-08-2017, 06:38 PM   #66
the3dfxdude
Member
 
Registered: May 2007
Posts: 730

Rep: Reputation: 358
Quote:
Originally Posted by Richard Cranium View Post
I had an SLI setup a couple of years ago. While playing a game fairly heavy in the 3D department, the game would slow down drastically after about 5 minutes of play. It turns out that one of the two cards had a sticky fan. Normal use wouldn't overheat the card, but that game would. One card would shut down due to overheating, and the other one really couldn't handle the game's requirements by itself.
Yeah, that's why I go the passive route. Your card was spec'd to rely on active cooling for the higher clock speeds, and then you've got to keep the fans clean. I go for the really big heat sink, and case fans if needed. Easier to clean, easier to replace. Good ones stay pretty quiet. And usually the heat sink is spec'd well, since that's how they tested it. I'd probably have to go water cooling to keep myself happy if I ever build a "muscle" computer, but I'm not really interested in builds like that.
 
Old 08-08-2017, 09:54 PM   #67
upnort
Senior Member
 
Registered: Oct 2014
Distribution: Slackware
Posts: 1,893

Original Poster
Rep: Reputation: 1161
Everybody here has been a huge help. An enjoyable thread! I learned a bit too, which is always nice.

I think I decided on a board:

ASUS Z170-K LGA 1151.

Has 2 PS/2 ports, 6 SATA III ports, 2 standard PCI slots, and DVI-D, HDMI, VGA. Yeah, I forgot in my original post that I have a PCI capture card. That PCI slot requirement eliminated many boards.

For another $20 I could go with an i5-6500, but I am going with an i5-6400 Skylake 2.7 GHz with Intel HD Graphics 530. Might seem somewhat "low end" spec wise, but will be oodles faster than the current rig and will be nicer on the electric bill.

I am increasing RAM to 16 GB in case I start running more than two VMs concurrently. The quad core will help with that too.

The budget AMD APUs were tempting. After more research I got the feeling I would prefer more CPU muscle for VMs and compiling.

Curiously, the WD hard drive in my current system is SATA III but the mobo is SATA II. I will see a faster hard drive response too. (SSDs are a topic for another thread!)
 
2 members found this post helpful.
Old 08-08-2017, 10:40 PM   #68
Richard Cranium
Senior Member
 
Registered: Apr 2009
Location: McKinney, Texas
Distribution: Slackware64 15.0
Posts: 3,858

Rep: Reputation: 2225
Quote:
Originally Posted by upnort View Post
Curiously, the WD hard drive in my current system is SATA III but the mobo is SATA II. I will see a faster hard drive response too.
Well, the head seek time won't change. You'll see faster data transfer rates for sure, but initial reads shouldn't be that different.

Hey, if I'm lying, you'll be happy.
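The seek-versus-transfer distinction is easy to demonstrate. A rough sketch (the file path comes from mktemp and is throwaway; for a real drive the usual tool is `hdparm -t /dev/sdX`, and note this toy version reads a freshly written scratch file, so the page cache will flatter the number):

```shell
# Time a 64 MB sequential read -- this measures transfer rate, not seek time.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 2>/dev/null   # create a scratch file
t0=$(date +%s%N)                                     # nanoseconds, GNU date
dd if="$f" of=/dev/null bs=1M 2>/dev/null            # sequential read
t1=$(date +%s%N)
elapsed_ms=$(( (t1 - t0) / 1000000 ))
echo "read 64 MB in ${elapsed_ms} ms"
rm -f "$f"
```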
 
Old 08-08-2017, 11:22 PM   #69
upnort
Senior Member
 
Registered: Oct 2014
Distribution: Slackware
Posts: 1,893

Original Poster
Rep: Reputation: 1161
Quote:
Hey, if I'm lying, you'll be happy.
I remember when I had SATA II drives on SATA I ports and updated to a SATA II motherboard. Dramatic increase, no, but still noticeable, especially with file transfers between drives.
 
Old 08-09-2017, 03:33 AM   #70
cwizardone
LQ Veteran
 
Registered: Feb 2007
Distribution: Slackware64-current with "True Multilib" and KDE4Town.
Posts: 9,097

Rep: Reputation: 7276
Quote:
Originally Posted by upnort View Post
....I think I decided on a board:

ASUS Z170-K LGA 1151.......
Very good choice!

Anything to be gained (or lost) by going with the newer Intel Z270 chipset?
 
Old 08-09-2017, 10:03 AM   #71
upnort
Senior Member
 
Registered: Oct 2014
Distribution: Slackware
Posts: 1,893

Original Poster
Rep: Reputation: 1161
Quote:
Anything to be gained (or lost) by going with the newer Intel Z270 chipset?
About $80 higher price and too new (cutting edge)?
 
Old 09-26-2017, 11:23 PM   #72
upnort
Senior Member
 
Registered: Oct 2014
Distribution: Slackware
Posts: 1,893

Original Poster
Rep: Reputation: 1161
I finally got my new ASUS Z170-K motherboard installed.

The previous ASUS M3N78-EM motherboard is now in the LAN server. For some oddball reason, network traffic to client systems now seems snappier from the server. Same drives, same 1 Gbps. Another 4 GB of RAM, so perhaps more disk caching is the difference.

The SATA III drives now finally run at full SATA III speeds. I'm running the EFI in legacy mode. I have no motivation to reformat the hard drives with an EFI partition.

The new on-board HD Graphics 530 is much faster than the previous on-board NVidia 8300. Having two more cores in the CPU is nice. Should make a noticeable change with VMs.

A few things I have not resolved.

1) I seem unable to get any fan or temperature sensor info. The UEFI seems to already control fan speed, thus I don't know that I am missing anything other than not having a nice conky display. Any ideas how to get lm-sensors and pwmconfig working on this motherboard? (After surfing the web for info I tried loading the nct6775 module. No change in the sensors-detect output.) Is this a problem with 14.2 and the 4.4 kernel? I tried a recent LiveSlak ISO with a 4.9 kernel and fared no better.

sensors command output:
Code:
asus-isa-0000
Adapter: ISA adapter
cpu_fan:        0 RPM

coretemp-isa-0000
Adapter: ISA adapter
Physical id 0:  +26.0°C  (high = +84.0°C, crit = +100.0°C)
Core 0:         +24.0°C  (high = +84.0°C, crit = +100.0°C)
Core 1:         +24.0°C  (high = +84.0°C, crit = +100.0°C)
Core 2:         +21.0°C  (high = +84.0°C, crit = +100.0°C)
Core 3:         +21.0°C  (high = +84.0°C, crit = +100.0°C)
The coretemp module is loaded automatically on boot.

2) Not a bug or nuisance, but I do not see any Tux logos when I boot. Same hard drives. Same Slackware 14.2 64-bit. Would have been nice to see 4 Tuxes.

3) Is there a way to save and restore the UEFI config? flashrom? I haven't thought about this kind of thing in years, but the CMOS battery that came with the motherboard failed me twice. I replaced it, but the loss of settings meant redoing everything from scratch. I did not notice anything in the UEFI interface to save/restore, but I can't see the end of my nose either.
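On question 3, one hedged possibility: flashrom's internal programmer can read a backup image of the firmware flash on many boards, though chip support varies and a raw image restore may or may not bring back the NVRAM-stored settings. A sketch (the backup filename is made up):

```shell
# Sketch: back up the firmware flash with flashrom (needs root on real hardware).
# Guarded so it degrades gracefully when flashrom is missing or the chip
# is unsupported.
backup=uefi-backup.rom
if command -v flashrom >/dev/null 2>&1; then
    flashrom -p internal -r "$backup" || echo "read failed (unsupported chip or not root?)"
    # Later, to restore the saved image:
    #   flashrom -p internal -w "$backup"
else
    echo "flashrom not installed"
fi
```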

Otherwise I am quite happy with the new board. Thanks to all for the help!
 
Old 09-27-2017, 02:55 AM   #73
Didier Spaier
LQ Addict
 
Registered: Nov 2008
Location: Paris, France
Distribution: Slint64-15.0
Posts: 11,057

Rep: Reputation: Disabled
Quote:
Originally Posted by upnort View Post
2) Not a bug or nuisance, but I do not see any Tux logos when I boot. Same hard drives. Same Slackware 14.2 64-bit. Would have been nice to see 4 Tuxes.
A framebuffer is needed to draw the small animals. If you have "vga = normal" in /etc/lilo.conf that would explain why you don't see them.
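For reference, the relevant fragment of /etc/lilo.conf (773 is just the common 1024x768, 256-color example from Slackware's stock lilo.conf; pick any supported framebuffer mode, and remember to rerun lilo after editing):

```
# /etc/lilo.conf -- console video mode
# vga = normal    # plain text console: no framebuffer, no Tux logos
vga = 773         # 1024x768 x 256-color framebuffer console
```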

Oh, and to be picky: s/UEFI/firmware/. Your firmware is UEFI-able, but you are using it in Legacy aka BIOS mode using its compatibility support module aka CSM.
 
Old 09-27-2017, 10:03 AM   #74
upnort
Senior Member
 
Registered: Oct 2014
Distribution: Slackware
Posts: 1,893

Original Poster
Rep: Reputation: 1161
Regarding the sensors, I discovered the nct6775 module does work, but the acpi_enforce_resources=lax boot option is needed. I haven't decided whether tinkering with fan speeds is worth the bother -- the system is silent and the Asus Q-Fan feature seems to handle fan speeds almost exactly like pwmconfig.
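Spelled out as a config fragment, assuming GRUB2 with /etc/default/grub (regenerate grub.cfg afterwards with `grub-mkconfig -o /boot/grub/grub.cfg`, reboot, then `modprobe nct6775` and rerun `sensors`):

```
# /etc/default/grub -- pass the workaround on the kernel command line
GRUB_CMDLINE_LINUX="acpi_enforce_resources=lax"
```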

Quote:
A framebuffer is needed to draw the small animals. If you have "vga = normal" in /etc/lilo.conf that would explain why you don't see them.
No such option in my GRUB config. That I don't see the critters is just a curiosity and not a show stopper. As I shared, it's the same drive, just moved to a new motherboard. I run with logo.nologo, but with this new system I wanted to see four Tuxes at least once, so I had temporarily disabled that boot option. I wanted to say, with a British accent, "There are FOUR Tuxes!"
 
Old 10-21-2017, 12:12 AM   #75
upnort
Senior Member
 
Registered: Oct 2014
Distribution: Slackware
Posts: 1,893

Original Poster
Rep: Reputation: 1161
For the three people, or fewer, who might be interested: a summary of installing and configuring the new motherboard.
 
1 member found this post helpful.
  

