LinuxQuestions.org
Slackware: This Forum is for the discussion of Slackware Linux.
Old 10-11-2016, 02:24 AM   #1
aikempshall
Member
 
Registered: Nov 2003
Location: Bristol, Britain
Distribution: Slackware
Posts: 900

Rep: Reputation: 153
Solid State Drives


I'll shortly be upgrading from 14.1 to 14.2 and at the same time will install a Solid State Drive (SSD). The result will be that I have /dev/sda (128G SSD) and /dev/sdb (1000G HDD). I will also increase the RAM to 16G.

My current thoughts are to have:
  1. Three partitions on the SSD, where -
    1. /dev/sda1 (SSD) contains /boot, /root, /etc, /usr, etc., plus mount points for /home and /var;
    2. /dev/sda2 (SSD) contains the contents of the home directory, mounted on the /home mount point of /dev/sda1;
    3. /dev/sda3 (SSD) is a swap partition - I shouldn't need it much with 16G of RAM, and I have no requirement for hibernation. I could try the system with swap turned off and see what happens!
  2. Two partitions on the HDD, where -
    1. /dev/sdb1 (HDD) contains the contents of /var, mounted on the /var mount point of /dev/sda1;
    2. /dev/sdb2 (HDD) contains bulk data (pictures, videos, etc.) that's not used very often. Ideally I would like to be able to mount this on a mount point in my home directory - not sure if I can do this.


In setting up the drives I will be using fdisk. Will it be acceptable to go with whatever fdisk suggests for heads and sectors, as long as the first partition starts at sector 2048 and I size the partitions in increments of gigabytes?

Of course the other option is to plunge into LVM.


Thanks in anticipation.

Alex
 
Old 10-11-2016, 02:58 AM   #2
Mark Pettit
Member
 
Registered: Dec 2008
Location: Cape Town, South Africa
Distribution: Slackware 15.0
Posts: 619

Rep: Reputation: 299
I have a similar setup, which I built only a week or so back. I moved some of the /var stuff off, but left most of it on the SSD. I moved the SlackBuilds cache off.

But I left my /home on the SSD, especially to take advantage of the speed for web browsing (Firefox) and email (Thunderbird). I moved my multimedia (movies, FLACs, MP3s, etc.) off to the HD, via soft-link.

I would chuck swap totally. Honestly, with 16GB you will be fine. (I have the same and I also chucked it.)
 
Old 10-11-2016, 08:31 AM   #3
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 8,792

Rep: Reputation: 6656
If you want to move /var over just to minimize writes to the SSD from the logs, there really is no need. SSDs are now quite capable of writing many orders of magnitude more data than the average person would need before they fail (and I mean almost completely rewriting the contents of the drive every day for several years). If you want to read a fun article, check out this final article in a series by techreport, where they attempt to cause SSDs to fail by writing too much data to the drives (they link all the previous articles in the series there as well). The first drive didn't fail until 700TB had been written (that's almost 400GB per day for 5 years).

As to your question on mounting /dev/sdb2 under a folder in your home folder, that will work without issue. Just make sure the folder is created; you can mount any partition to any location on any drive. It's one of the great features of Linux. You can either mount it manually via the mount command or automatically using an entry in your /etc/fstab.
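A minimal sketch of both approaches (the user name, mount point, and filesystem type here are assumptions for illustration):

```shell
# One-off manual mount (as root); the mount point just needs to exist first.
mkdir -p /home/alex/media      # hypothetical mount point inside the home directory
mount /dev/sdb2 /home/alex/media

# Or mount it automatically at every boot via a line in /etc/fstab, e.g.:
# /dev/sdb2   /home/alex/media   ext4   defaults   1   2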

Depending on your normal memory usage, as Mark Pettit suggested, you could probably get away with not creating a swap. Personally, I couldn't, because I occasionally use more than my amount of RAM (32GB), but I know my use-cases are not normal. Even then, if you really needed swap, you could create a swap file and turn it on (using swapon) when necessary (like when compiling a really big program). This could simplify your partition scheme slightly.
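As a rough sketch of that swap-file approach (the path and size are arbitrary examples; everything here runs as root):

```shell
# Create a 4GiB file backed by real blocks (sparse files are unsuitable for swap).
dd if=/dev/zero of=/swapfile bs=1M count=4096
chmod 600 /swapfile    # swap contents should not be world-readable
mkswap /swapfile       # write the swap signature
swapon /swapfile       # enable it for the big compile...
swapoff /swapfile      # ...and turn it off again afterwards
```

The nice part is that the file can be deleted (or resized and re-mkswap'd) whenever your needs change, which a swap partition can't easily do.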

As to fdisk, you're correct in that you can just use the defaults. It has supported partition alignment for many versions.
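If you want to double-check the result afterwards, the alignment can be verified like this (the device name is just an example):

```shell
# Start sectors should be multiples of 2048 (i.e. 1MiB-aligned)
# when created with modern fdisk defaults.
fdisk -l /dev/sda

# parted can check a partition's alignment directly (here, partition 1):
parted /dev/sda align-check optimal 1
```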

You may also want to look into changing the scheduler from the default noop (first in, first out) to deadline (it prioritizes reads over writes, so your system could be more responsive during heavy disk activity). While noop tends to perform higher in benchmarks, deadline tends to give better real-world performance. You can do this by creating a rule in /etc/udev/rules.d/ (I called mine 55-ssd-scheduler.rules) that contains the following:

Code:
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"
For more information on that, see this post and for even more information on SSDs and Linux/Slackware, see the rest of the thread. There's some good info in there.
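After dropping a rule like that into place, you can confirm which scheduler is active and re-apply the udev rules without rebooting (sda assumed to be the SSD):

```shell
# The active scheduler is the one shown in square brackets,
# e.g. "noop [deadline] cfq".
cat /sys/block/sda/queue/scheduler

# Reload and re-trigger udev rules (as root):
udevadm control --reload
udevadm trigger --subsystem-match=block
```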

Good luck! SSDs are a great addition to a system.
 
Old 10-11-2016, 03:24 PM   #4
onebuck
Moderator
 
Registered: Jan 2005
Location: Central Florida 20 minutes from Disney World
Distribution: Slackware®
Posts: 13,923
Blog Entries: 44

Rep: Reputation: 3158
Member response

Hi,

I believe that the default scheduler for Slackware is [cfq], not [noop].

I set my SSD scheduler to [noop] and find no issues. If you are using a large database then there is possibly some advantage to [deadline];
Code:
From '/etc/rc.d/rc.local':
#!/bin/sh
#
# /etc/rc.d/rc.local:  Local system initialization script.
#
# Put any local startup commands in here.  Also, if you have
# anything that needs to be run at shutdown time you can
# make an /etc/rc.d/rc.local_shutdown script and put those
# commands in there.
#
#08-20-12 gws 14:38
#set minimum swappiness
#
echo 1 > /proc/sys/vm/swappiness

#08-20-12 gws
#sets scheduler for SSD to 'noop'
#SSD=(device IDs of all SSDs: see note below)
#Note: information revised from the ArchWiki.
#
#List the by-id symlinks and their targets, then place the SSD IDs
#in the bash array 'SSD=( )' below:
#
#ls -l /dev/disk/by-id
#lrwxrwxrwx 1 root root  9 Aug 19 11:27 ata-OCZ-AGILITY3_OCZ-C93VFN4X0532CVMP -> ../../sda

#Wed Nov 19 13:43:36 CST 2014 GWS change to new drive
SSD=(ata-Crucial_CT256MX100SSD1_14270C86314F)

declare -i i=0
while [ "${SSD[$i]}" != "" ]; do
  NODE=`ls -l /dev/disk/by-id/${SSD[$i]} | awk '{ print $NF }' | sed -e 's/[/\.]//g'`
  echo noop > /sys/block/$NODE/queue/scheduler
  i=i+1
done

#08-20-12 gws 14:45
#get some additional gain by setting up a write-back cache
hdparm -W1 /dev/sda   #where x= a,b,c,d...

#09-20-12:16:14 gws bumblebee
if [ -x /etc/rc.d/rc.bumblebeed ]; then
     /etc/rc.d/rc.bumblebeed start
fi
I only set up the SSD devices and leave the global setting as [cfq] for the rotational drives.

Hope this helps.
Have fun & enjoy!
 
1 member found this post helpful.
Old 10-11-2016, 04:21 PM   #5
Emerson
LQ Sage
 
Registered: Nov 2004
Location: Saint Amant, Acadiana
Distribution: Gentoo ~amd64
Posts: 7,661

Rep: Reputation: Disabled
Interesting. My SSD has noop and my HDD has cfq and I have done nothing to set it up this way. Is my kernel smart enough to do it by itself?
 
Old 10-11-2016, 05:03 PM   #6
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 8,792

Rep: Reputation: 6656
Quote:
Originally Posted by onebuck View Post
I believe that the default scheduler for Slackware is [cfq] not [noop].
Not since 14.2. Well, the default scheduler is still CFQ, but udev overrides that for SSDs. When Slackware switched from udev to eudev, it gained the file /lib/udev/rules.d/60-persistent-storage.rules, which contains:

Code:
# do not edit this file, it will be overwritten on update

# enable in-kernel media-presence polling
ACTION=="add", SUBSYSTEM=="module", KERNEL=="block", ATTR{parameters/events_dfl_poll_msecs}=="0", \
  ATTR{parameters/events_dfl_poll_msecs}="2000"

# forward scsi device event to corresponding block device
ACTION=="change", SUBSYSTEM=="scsi", ENV{DEVTYPE}=="scsi_device", TEST=="block", ATTR{block/*/uevent}="change"

# watch metadata changes, caused by tools closing the device node which was opened for writing
ACTION!="remove", SUBSYSTEM=="block", KERNEL=="loop*|nvme*|sd*|vd*|xvd*", OPTIONS+="watch"

# set noop on solid state drives
SUBSYSTEM=="block", ACTION=="add", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"
The pertinent part is the last rule: it will automatically set any drive that does not spin (including thumbdrives) to noop.

Really, noop is a decent selection for SSDs, but if you ever happen to have your drive go crazy with writes (whether from copying, compiling, swapping, etc), deadline will prioritize reads over writes, so it is less likely to cause you hangups in your usage of the machine. However, depending on your intended use of that machine, that may not be desirable.
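If you want to experiment before settling on a udev rule, the scheduler can also be switched on the fly (as root; sda is assumed here, and the setting only lasts until reboot):

```shell
echo deadline > /sys/block/sda/queue/scheduler
cat /sys/block/sda/queue/scheduler    # deadline should now be the one in brackets
```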

Quote:
Originally Posted by Emerson View Post
Interesting. My SSD has noop and my HDD has cfq and I have done nothing to set it up this way. Is my kernel smart enough to do it by itself?
Actually, the kernel doesn't do any of it. You can even set deadline or cfq as the default elevator in the kernel or on your bootloader's append line, and the udev rule mentioned above will override it.
 
2 members found this post helpful.
Old 10-12-2016, 04:09 AM   #7
aikempshall
Member
 
Registered: Nov 2003
Location: Bristol, Britain
Distribution: Slackware
Posts: 900

Original Poster
Rep: Reputation: 153
So in general I'm on the right track.

In respect of the discussion regarding noop, deadline and cfq, I didn't realise such things existed. I had to look them up on the web for an explanation.

Quote:
Originally Posted by bassmadrigal View Post
Not since 14.2. Well, the default scheduler is still CFQ, but udev overwrites that for SSDs. When Slackware switched from udev to eudev, it gained the file /lib/udev/rules.d/60-persistent-storage.rules
I already have 14.2 installed in a VirtualBox machine where the appropriate rule appears in the file 60-block.rules.

I'm now leaning towards leaving the default setup as is, in respect of scheduling, whilst taking on board the following and monitoring what happens:

Quote:
Originally Posted by bassmadrigal View Post
You may also want to look into changing the scheduler from the default noop (first in, first out) to deadline (it prioritizes reads over writes, so your system could be more responsive during heavy disk activity). While noop tends to perform higher in benchmarks, deadline tends to give better real-world performance.
Thanks for all the comments; they've been most helpful and have raised my confidence considerably.

Alex
 
Old 10-12-2016, 06:05 AM   #8
kjhambrick
Senior Member
 
Registered: Jul 2005
Location: Round Rock, TX
Distribution: Slackware64 15.0 + Multilib
Posts: 2,159

Rep: Reputation: 1512
aikempshall --

Yes, sounds like you're on track.

I've been meaning to send my $0.02, but I've been crazy-busy at work.

In addition to the excellent stuff bassmadrigal posted, there is also 'stuff' you can optimize in /etc/fstab and in /etc/sysctl.conf.

You might want to add the noatime option to the entries in /etc/fstab for the partitions on your SSD (maybe even add it to all your partitions, unless you run mutt as a mail reader).

If you're running a pure SSD Box and your swap partition(s) are on the SSD, you would want to modify your /etc/sysctl.conf ( see below ).

And, since you mentioned VMs, if you host Virtual Machines, you might want to research the sysctl: vm.vfs_cache_pressure

Finally, there was another fairly recent, longish thread here on LQ about SSD tuning: Fine Tuning a new SSD for Slackware. ... LOTS of juicy tidbits there

HTH and have fun !

-- kjh

#
# I have a 'pure-SSD' Laptop ( swap is on SSD Partitions ) and I run VMWare Workstation 12.x on my Slackware 14.2 System so BOTH of the following sysctl settings apply:
#
Code:
# cat /etc/sysctl.conf
#
# kjh was here.  Docs are in /usr/src/linux-4.4.14/Documentation/sysctl/vm.txt
#
# for SSD ...
#
# from https://www.kernel.org/doc/Documentation/sysctl/vm.txt
#
# This control is used to define how aggressive the kernel will swap
# memory pages.  Higher values will increase aggressiveness, lower values
# decrease the amount of swap.  A value of 0 instructs the kernel not to
# initiate swap until the amount of free and file-backed pages is less
# than the high water mark in a zone.
# 
# The default value is 60.
# 
vm.swappiness=0
#
# for VMWare ...
#
# from https://www.kernel.org/doc/Documentation/sysctl/vm.txt
#
# This percentage value controls the tendency of the kernel to reclaim
# the memory which is used for caching of directory and inode objects.
# 
# At the default value of vfs_cache_pressure=100 the kernel will attempt to
# reclaim dentries and inodes at a "fair" rate with respect to pagecache and
# swapcache reclaim.  Decreasing vfs_cache_pressure causes the kernel to prefer
# to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will
# never reclaim dentries and inodes due to memory pressure and this can easily
# lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
# causes the kernel to prefer to reclaim dentries and inodes.
# 
# Increasing vfs_cache_pressure significantly beyond 100 may have negative
# performance impact. Reclaim code needs to take various locks to find freeable
# directory and inode objects. With vfs_cache_pressure=1000, it will look for
# ten times more freeable objects than there are.
#
vm.vfs_cache_pressure=50
#
# I don't run the mutt mail reader so noatime won't hurt me in /etc/fstab
#
Code:
# cat /etc/fstab
#
# B60830 - kjh removed nodiratime on ext4 partitions ( thanks to onebuck on LQ )
#          see:  http://www.linuxquestions.org/questions/slackware-14/fine-tuning-a-new-ssd-for-slackware-4175587710/page4.html#post5597800
#
/dev/sda1        swap             swap        defaults                 0   0
/dev/sdb1        swap             swap        defaults                 0   0
/dev/sda3        /                ext4        defaults,noatime         1   1
/dev/sda2        /boot            ext4        defaults,noatime         1   2
/dev/sdb2        /home            ext4        defaults,noatime         1   2
/dev/sdd1        /opt             ext4        defaults,noatime         1   2
# /dev/cdrom     /mnt/cdrom       auto        noauto,owner,ro,comment=x-gvfs-show 0   0
# /dev/fd0       /mnt/floppy      auto        noauto,owner             0   0
/dev/sdc2        /boot/efi        vfat        defaults,noatime,noauto  0   0
/dev/sdc4        /win10           ntfs-3g     defaults,noatime,noauto  0   0
devpts           /dev/pts         devpts      gid=5,mode=620           0   0
proc             /proc            proc        defaults                 0   0
tmpfs            /dev/shm         tmpfs       defaults                 0   0
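One small note on /etc/sysctl.conf: it is only read at boot, so to apply changed values immediately (as root):

```shell
sysctl -p /etc/sysctl.conf                    # load settings from the file now
sysctl vm.swappiness vm.vfs_cache_pressure    # print the current values
```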
 
1 member found this post helpful.
Old 10-12-2016, 08:39 AM   #9
onebuck
Moderator
 
Registered: Jan 2005
Location: Central Florida 20 minutes from Disney World
Distribution: Slackware®
Posts: 13,923
Blog Entries: 44

Rep: Reputation: 3158
Member response

Hi,
Quote:
Originally Posted by bassmadrigal View Post
Not since 14.2. Well, the default scheduler is still CFQ, but udev overwrites that for SSDs. When Slackware switched from udev to eudev, it gained the file /lib/udev/rules.d/60-persistent-storage.rules which contains:
Thanks for the update. I need to continue picking through 14.2.
I've procrastinated on my new Laptop purchase, so I guess that I will bite the e-bullet and get it now. I did get a new 480GB SSD at a great price and I am waiting to install it in the new Dell Laptop. I still use my Dell XPS702: Intel i7, 16GB RAM, 240GB SSD, 320GB secondary HD, Optimus & Intel GPUs. I would like to get the new XPS but that's a lot more than I want to spend. I'll back down to a Dell Inspiron with a 15" screen. I will wait to find a Dell factory-certified refurbished unit at a reduced cost. A great deal if you find the right one.

<snip>

Too many things on my TODO list and I need to start hitting it hard.

Have fun & enjoy!
 
Old 10-12-2016, 09:06 AM   #10
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 8,792

Rep: Reputation: 6656
Quote:
Originally Posted by onebuck View Post
I did get a new 'SSD' 480GB at a great price and I am waiting to install on that with the new Dell Laptop. I still use my Dell XPS702 Intel i7 16MB, 240GB SSD, 320GB HD secondary, Optimus & Intel GPUs. I would like to get the new XPS but that's a lot more than what I want to spend. I'll back down to the Dell Inspiron with a 15" screen. I will wait to find a Dell refurbished factory certified at a reduced cost. A great deal if you find the right one.
I know this is bumping the off-topic portion, so for that, I apologize (and I'll refrain if desired), but I really should look at getting a new laptop. Mine is an old, low-end Asus eeePC circa 2011 and it is seriously showing its age. How have you felt about your dual-card setup? Based on my previous usage of bumblebee (which is extremely outdated), I was not impressed and would rather not need to run certain commands to ensure things run properly (i.e., on the right video card). Have things improved, or do you think it's best to avoid dual-video-card laptops for Linux?
 
Old 10-12-2016, 12:34 PM   #11
onebuck
Moderator
 
Registered: Jan 2005
Location: Central Florida 20 minutes from Disney World
Distribution: Slackware®
Posts: 13,923
Blog Entries: 44

Rep: Reputation: 3158
Member response

Hi,

Both the Intel and the Optimus have their pluses. Most times I will use the Intel, just for the power savings. When plugged in and needing the Optimus, I am sure to use my Laptop cooler to keep things cool on the LapDesk. Bumblebeed is getting long in the tooth but still works for my needs.

I am looking at the Dell Inspiron 17 at a refurbished-certified price. It has the Intel chipset and still provides 1600 x 900, so good enough for my needs and a good price.

I am spoiled by my Dell XPS702 17" Laptop with dual drive bays, but it's getting old. It's still a usable Laptop, but I am needing/wanting a new one anyway.

Have fun & enjoy!
 
Old 10-15-2016, 08:40 AM   #12
Martinus2u
Member
 
Registered: Apr 2010
Distribution: Slackware
Posts: 497

Rep: Reputation: 119
Quote:
Originally Posted by aikempshall View Post
In respect to the discussion regarding noop, deadline and cfq I didn't realise that such things existed. Had to look them up on the web for an explanation.
And then there is BFQ (which needs to be patched into the kernel, as it is not standard). The author compares it to CFQ, NOOP and Deadline in two YouTube videos:

For SSD: https://www.youtube.com/watch?v=1cjZeaCXIyM

For HDD: https://www.youtube.com/watch?v=ZeNbS0rzpoY

On his web site he has more detailed statistics and stuff.

http://algo.ing.unimo.it/people/paolo/disk_sched/
 
1 member found this post helpful.
Old 10-16-2016, 04:04 AM   #13
Regnad Kcin
Member
 
Registered: Jan 2014
Location: Beijing
Distribution: Slackware 64 -current .
Posts: 663

Rep: Reputation: 460
My working machine has a couple of 250GB SSD drives. I use one for the operating systems (Slackware and Win7), and another primarily for holding a large bioinformatics dataset. Having the bioinformatics databases on the SSD saves a lot of time. I have a 1TB and a 2TB drive in NTFS format for backup and for easy sharing with Windows. I have used default settings for the SSDs since the beginning and have used this machine for almost 3 years now with no issues.

A few months ago I added an ASUS Mini (Nvidia) graphics card, which handles multi-monitoring better than the stock Intel display on the motherboard, but it means I get to set the Nvidia kernel modules up each time I update -current. Nouveau is OK, but just sorta OK. The Nvidia drivers suit me better.

I still need Windows sometimes for reading files that some folks send me that just won't open right with LibreOffice, and for running VPN. I am hoping to learn how to do the VPN in Linux.

I have built some other systems for the lab using SSD drives. There have been zero problems so far, but I do like the Samsung SSDs better than the other brands I have tried.

This system works great for me. The motherboard is an ITX, and I have the whole thing, power supply and all, fastened to a wooden base. The whole thing drops into a luggage-like laptop case, so I have a luggable machine. I do provide a swap drive, but I have 8GB of memory and the swap drive is seldom used except when doing some large compiles.

I had tried putting an SSD into an old Fujitsu laptop. It then ran so fast that the laptop would get hot and shut itself down.

Heat dissipation and power supplies seem to be the bottleneck in faster processing now that I am using SSDs. I have burned up several low- and mid-wattage power supplies. Now I have a Great Wall 1000W supply and it works like a champ, amazingly better than some more famous main-brand power supplies that are RIP in the junk box.

I have tried some different heat-dissipation devices and also fanless power supplies. Those will burn up eventually with my older-generation 4-core i7 chip. I have built some 2-core i7s and some Bay Trail 4-thread machines that run fine on the silent fanless power supplies, but my main machine demands lots of cooling when it is running at full tilt boogie.

Last edited by Regnad Kcin; 10-16-2016 at 04:15 AM.
 
2 members found this post helpful.
Old 10-16-2016, 08:57 AM   #14
kjhambrick
Senior Member
 
Registered: Jul 2005
Location: Round Rock, TX
Distribution: Slackware64 15.0 + Multilib
Posts: 2,159

Rep: Reputation: 1512
Interesting info, Regnad Kcin.

I would never have imagined that running a faster drive could cause overheating in the CPU (or GPU), but it does kinda-sorta make sense if the other subsystems are running on the verge of overheating ...

Thanks. One to keep in the back of my mind when troubleshooting overheating issues!

-- kjh
 
Old 10-16-2016, 11:02 AM   #15
Regnad Kcin
Member
 
Registered: Jan 2014
Location: Beijing
Distribution: Slackware 64 -current .
Posts: 663

Rep: Reputation: 460
Laptops don't really have very good cooling, and it ran a lot faster with the SSD.

I don't use a laptop these days, as I am so used to an instantaneous-response machine that the waiting and waiting gets me upset.

I remember, though, using an Epson QX-10 with 2 floppies, and I thought it was awesome sitting there buzzing at me.
 
1 member found this post helpful.
  

