Old 01-10-2020, 01:02 PM   #16
273
LQ Addict
 
Registered: Dec 2011
Location: UK
Distribution: Debian Sid AMD64, Raspbian Wheezy, various VMs
Posts: 7,680

Rep: Reputation: 2373

My point was to leave your /home on the fast SSD and then just mount the spinning rust in a directory within that home.
There's really no need to put your entire /home on spinning rust instead and make every application that keeps a settings file in your home directory spin up a hard drive. I say "spinning up" because not having /home on the HDD means the HDD can spin down, which saves power and reduces noise too.
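For illustration, mounting the HDD inside a home directory could look something like this (the device name, label and mount point are made up, so substitute your own):
Code:
sudo mkdir -p /home/mike/data            # a directory on the SSD-backed /home
sudo mount /dev/sdb1 /home/mike/data     # mount the HDD there for this session
# to make it permanent, add a line like this to /etc/fstab:
#   LABEL=datafs  /home/mike/data  ext4  defaults,noatime  0  2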
 
Old 01-10-2020, 01:33 PM   #17
JeremyBoden
Senior Member
 
Registered: Nov 2011
Location: London, UK
Distribution: Debian
Posts: 1,947

Rep: Reputation: 511
Repetitive spin-up/spin-down of hard disks will seriously shorten their lifespan.
 
1 member found this post helpful.
Old 01-10-2020, 01:37 PM   #18
273
LQ Addict
 
Registered: Dec 2011
Location: UK
Distribution: Debian Sid AMD64, Raspbian Wheezy, various VMs
Posts: 7,680

Rep: Reputation: 2373
Quote:
Originally Posted by JeremyBoden
Repetitive spin-up/spin-down of hard disks will seriously shorten their lifespan.
Repetitive being the operative word. How often is your home partition read?
Edit: Sorry, I meant in relation to all the data on the large HDD.

Last edited by 273; 01-10-2020 at 01:40 PM.
 
Old 01-10-2020, 03:37 PM   #19
Mikech
Member
 
Registered: Jan 2006
Location: USA
Distribution: Mint
Posts: 90

Original Poster
Rep: Reputation: 24
273 and Jeremy

Thanks for your input, guys. That's very thoughtful of you. It never occurred to me to mount my data drive inside my home. DUH!!! I wish I had thought of it. I think I can still do it.

273, you are a funny guy! Hard drives don't rust. Unless your water cooling breaks, but maybe not even then. I believe the case is stainless?

Jeremy, I have never had a Western Digital drive fail on me, not at home nor at work. Ever. Not since they first became available, so long ago that I can't remember when. Back when they were much bigger.

I still have every 3.5 inch hard drive I have ever used (shhh - don't tell them at work - they frown on that sort of thing even though none of it is classified). Some of them are almost fifteen years old, and occasionally I need to plug one in and they still work. Not for my own data, but for my wife's music, because she won't do backups (I do them for her surreptitiously). So spinning up and down does not wear them out.

Believe me, if they don't fail for me then they are not going to fail for a normal user either, because I abuse them a LOT! I have had Seagate drives fail, and after the last one I will never buy another Seagate drive.

By the way, the so-called "studies" listed in the HowtoGeek link are anecdotal and are not consistent with scientific rigor. The second study even admits "Given our limited sample size, I wouldn’t read too much into exactly how many writes each drive handled." Until studies are performed with statistically valid results, by actual, unbiased scientists, I cannot assume that SSDs can match the reliability of a Western Digital hard drive. My rant about WD hard drives is also anecdotal (although with a lot more than 6 drives), but it's my experience, so I will continue to rely on those hard drives. I asked our network administrator how often he has to replace hard drives for our thousands of users and he said it is a fairly rare occurrence (all Western Digital). They even got rid of all of our UPSes because the failure rate and data loss did not justify the expense of replacing the batteries.

This latest computer is so incredibly fast that it doesn't matter what I use. Simulations I run at work (that can take up to 20 minutes on the Windoze10 Dell desktop) now run in seconds on my home computer. Gimp, which used to take several minutes to load with all of its plugins, now loads instantly. Literally too fast to time. And I have not even started to overclock it.


System: Host: q-Z390-AORUS-PRO-WIFI Kernel: 5.0.0-37-generic x86_64 bits: 64 compiler: gcc v: 7.4.0 Desktop: Linux Mint 19.3 Tricia
Machine: Type: Desktop System: Gigabyte product: Z390 AORUS PRO WIFI
CPU: Topology: 6-Core model: Intel Core i5-9600K bits: 64 type: MCP arch: Kaby Lake rev: D L2 cache: 9216 KiB
Memory: 30.36 GiB used: 3.42 GiB (11.3%) Init: systemd v: 237 runlevel: 5 Compilers: gcc: 7.4.0 alt: 7 Client: Unknown python3.6 client inxi: 3.0.32
 
1 member found this post helpful.
Old 01-10-2020, 04:09 PM   #20
Mikech
Member
 
Registered: Jan 2006
Location: USA
Distribution: Mint
Posts: 90

Original Poster
Rep: Reputation: 24
CORRECTION

The Google study was done by real scientists and is not anecdotal. But after reading the actual paper, I don't see how anyone can conclude that SSDs are as reliable as hard drives. Just this one statement alone argues against that:

"While flash drives offer lower field replacement rates than hard disk drives,they have a significantly higher rate of problems that can impact the users such as un-correctable errors."

Hey guys, I am the user and I do not want to be impacted! So I will continue to use hard drives for the things that matter.
 
Old 01-10-2020, 04:34 PM   #21
Mikech
Member
 
Registered: Jan 2006
Location: USA
Distribution: Mint
Posts: 90

Original Poster
Rep: Reputation: 24
For those of you interested in computer reliability, this is a talk at Stanford from the primary author of the Google study. I suspect the Google names are there only because they provided the data that she analyzed.

https://www.youtube.com/watch?v=60OmhRJ0CUA

Of course you may draw your own conclusion from it, but for me the bottom line is that SSDs are substantially less reliable than hard drives. Predictive-maintenance drive replacement is a practice I understand very well, and it is not a major factor in my decision making. Mission-critical systems are very conservative and replace drives even if they have not failed.

Last edited by Mikech; 01-10-2020 at 05:50 PM.
 
Old 01-11-2020, 12:03 AM   #22
rnturn
Senior Member
 
Registered: Jan 2003
Location: Illinois (SW Chicago 'burbs)
Distribution: openSUSE, Raspbian, Slackware. Previous: MacOS, Red Hat, Coherent, Consensys SVR4.2, Tru64, Solaris
Posts: 2,803

Rep: Reputation: 550
Quote:
Originally Posted by Mikech
I do know that things get mounted in Fstab but in looking at mine there are only two entries full of hieroglyphics and I have all 6 sata drives working and mounted on logon. Is there a secret fstab somewhere?
Those hieroglyphics are likely the UUIDs assigned to the drives when you installed Linux. Distribution installers seem to prefer to use these (for some reason) by default for mounting the operating system filesystems. There's probably a dialog in the installer that lets you select other mounting options (I know openSUSE has it). If the UUIDs bug you, take a look at creating labels for the filesystems you're mounting. If you're using ext4 on those filesystems, issue:
Code:
sudo tune2fs -L labelname /dev/sdXN   # See tune2fs(8) for more
then you can configure them in "/etc/fstab" using something like "LABEL=slack142 / ...", "LABEL=homefs /home ...", etc. instead of "UUID=hieroglyphics ...". (I find labels far easier to remember than UUIDs.) If you're using another filesystem -- like btrfs, xfs, etc. -- there may be utilities that allow you to label those filesystems.
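For example, an /etc/fstab using labels might end up looking something like this (the label names and mount points are only illustrative):
Code:
# /etc/fstab - illustrative entries using labels instead of UUIDs
LABEL=rootfs   /      ext4  defaults,noatime  0  1
LABEL=homefs   /home  ext4  defaults,noatime  0  2
LABEL=datafs   /data  ext4  defaults,noatime  0  2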

Q: You have disks that are being mounted (automagically?) but aren't specified in "/etc/fstab"?

Q: Are some of the disks connected via USB?

I'd get the mounting information using "mount" and create records for them in "/etc/fstab". Can you post the contents of your "/etc/fstab" and the output you receive from "mount" (or "sudo mount")? In my experience, USB drives occasionally get different device names after reboots. That's why I use the filesystem labels so I won't have to care about that happening.
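A few standard commands that show what is currently mounted and which labels/UUIDs each filesystem carries (mount, lsblk and blkid all ship with util-linux):
Code:
mount | grep '^/dev'                          # block devices currently mounted, and where
lsblk -o NAME,FSTYPE,LABEL,UUID,MOUNTPOINT    # labels and UUIDs, including unmounted drives
sudo blkid                                    # the same information, one line per filesystem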

Quote:
... I am a health physicist ...
I still recall getting back a physics homework assignment where I was to calculate the safe exposure for some isotope, and the professor added a note next to my answer that read "I'd be dead!". Probably a Good Thing that Physics was not my major.

Quote:
... back in the 70s I had to program in machine language to realign the peak drift on a gamma spectroscope using a PDP 11 so I had to learn machine language).
I'll bet we ran across the same textbook back in those days: "The Minicomputer In The Laboratory". All about using PDP-11s for lab data acquisition, etc. (Still have a copy stashed away in storage.)

Cheers...
 
1 member found this post helpful.
Old 01-11-2020, 07:28 AM   #23
JeremyBoden
Senior Member
 
Registered: Nov 2011
Location: London, UK
Distribution: Debian
Posts: 1,947

Rep: Reputation: 511
To see just the basic mount information (hiding "virtual" in-memory file systems) use
Code:
df -h
or
Code:
findmnt -D
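If tmpfs entries still clutter the output, these variants narrow it down (standard df/findmnt options; --real needs a reasonably recent util-linux):
Code:
df -h -x tmpfs -x devtmpfs    # exclude in-memory filesystem types
findmnt -D --real             # df-style output, real on-disk filesystems only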
 
Old 01-11-2020, 09:20 AM   #24
273
LQ Addict
 
Registered: Dec 2011
Location: UK
Distribution: Debian Sid AMD64, Raspbian Wheezy, various VMs
Posts: 7,680

Rep: Reputation: 2373
Mikech: I refer to hard drives as "spinning rust" because once upon a time the medium used to store the magnetic charge was a ferrous or ferric oxide (can never recall which).

Edit: Regarding reliability of media: have a backup, because all media fail, and verify that backup, preferably on another machine. You must expect everything to fail at some point and have redundancy in place. Always have a backup, and that goes for everything in life.
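One possible way to verify a backup, as a sketch only (the paths are made up; rsync and sha256sum are assumed to be installed):
Code:
rsync -avcn /home/ /mnt/backup/home/    # -c compares checksums, -n (dry run) only reports differences
sha256sum -c /mnt/backup/home.sha256    # or check against checksums generated earlier, ideally on another machine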

Last edited by 273; 01-11-2020 at 09:23 AM.
 
Old 01-11-2020, 09:36 AM   #25
hazel
LQ Guru
 
Registered: Mar 2016
Location: Harrow, UK
Distribution: LFS, AntiX, Slackware
Posts: 7,601
Blog Entries: 19

Rep: Reputation: 4456
Quote:
Originally Posted by 273
Mikech: I refer to hard drives as "spinning rust" because once upon a time the medium used to store the magnetic charge was a ferrous or ferric oxide (can never recall which).
I think it would have been ferrosoferric oxide Fe3O4, also known as magnetite or lodestone.
 
1 member found this post helpful.
Old 01-11-2020, 09:43 AM   #26
273
LQ Addict
 
Registered: Dec 2011
Location: UK
Distribution: Debian Sid AMD64, Raspbian Wheezy, various VMs
Posts: 7,680

Rep: Reputation: 2373
Quote:
Originally Posted by hazel
I think it would have been ferrosoferric oxide Fe3O4, also known as magnetite or lodestone.
Thanks, that seems about right. Referring to your signature, who's blinding whom with science?
 
Old 01-11-2020, 09:48 AM   #27
hazel
LQ Guru
 
Registered: Mar 2016
Location: Harrow, UK
Distribution: LFS, AntiX, Slackware
Posts: 7,601
Blog Entries: 19

Rep: Reputation: 4456
Quote:
Originally Posted by 273
Thanks, seems about right. Referring to your signature who's blinding whom with science?
Chemistry is easy to understand. It's basically just cookery. Different oxides of iron are like cakes with different amounts of sugar and fat in them. But electronics and computer science are a closed book to me. I know more or less how to make a computer do what I want but I have no idea how it does it. Every time I try to read that stuff, my head starts to spin.
 
Old 01-11-2020, 10:16 AM   #28
273
LQ Addict
 
Registered: Dec 2011
Location: UK
Distribution: Debian Sid AMD64, Raspbian Wheezy, various VMs
Posts: 7,680

Rep: Reputation: 2373
Quote:
Originally Posted by hazel
Chemistry is easy to understand. It's basically just cookery. Different oxides of iron are like cakes with different amounts of sugar and fat in them. But electronics and computer science are a closed book to me. I know more or less how to make a computer do what I want but I have no idea how it does it. Every time I try to read that stuff, my head starts to spin.
Fair enough, cookery has always been a mystery to me :~)
 
Old 01-11-2020, 02:42 PM   #29
Mikech
Member
 
Registered: Jan 2006
Location: USA
Distribution: Mint
Posts: 90

Original Poster
Rep: Reputation: 24
Thanks again for everyone's help.

Rnturn and Jeremy.

Thank you; those were very helpful tips. I was wrong. Because I could see the other drives in the home menu and could click on them, I assumed they were mounted. They were not. When I clicked on them, they were being mounted in the background, transparently to the user. Nice feature, but dangerous for some poor sap who thinks they are mounted and then can't understand why his cron job failed. If I hadn't figured this out, all of my automatic backups would have failed, because the script assumes they are mounted at startup. I assume that fstab has to be processed first, otherwise Linux won't find the script to run.
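A backup script can protect itself against exactly that situation by checking the mount point before it runs. A minimal sketch (the paths are made up; mountpoint(1) ships with util-linux, and mounting needs root or a "user" option in fstab):
Code:
#!/bin/sh
# Refuse to back up unless the destination drive is really mounted
SRC=/home/mike/data
DEST=/mnt/backupdrive
if ! mountpoint -q "$DEST"; then
    mount "$DEST" || { echo "backup target not mounted, aborting" >&2; exit 1; }
fi
rsync -a --delete "$SRC/" "$DEST/data/"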

I'm starting to feel almost like an expert now. I did learn a great deal. The real joy is that I have to return the motherboard because of a defective USB 3.0 20-pin header. I would fix it myself, since it is just a mechanical failure, but the board is under warranty. So I may get to try out all this new knowledge again real soon. It shouldn't come to that, but you never know. The system may not be happy about a new motherboard and I may have to start over. It's happened before. Fortunately I have lots of computers to use in the meantime.

Rnturn: No, I don't recall that book. I always worked with the documents and manuals from DEC and the spectroscope. I had no formal computer education other than programming in FORTRAN, which was starting to become a requirement for science majors.

273:

I still think referring to them as spinning rust is hilarious. I like your sense of humour.

Hazel:

I am the opposite: I know how the computer works (it's just a big box of on/off switches with a timer and a traffic cop), but I am still struggling with Linux after all these years, plus I am really old and can't remember stuff.

Last edited by Mikech; 01-11-2020 at 06:28 PM.
 
Old 01-12-2020, 04:28 PM   #30
Hermani
Member
 
Registered: Apr 2018
Location: Delden, NL
Distribution: Ubuntu
Posts: 261
Blog Entries: 3

Rep: Reputation: 113

Quote:
Originally Posted by Mikech
"While flash drives offer lower field replacement rates than hard disk drives,they have a significantly higher rate of problems that can impact the users such as un-correctable errors."

Hey guys, I am the user and I do not want to be impacted! So I will continue to use hard drives for the things that matter.
If you want your data to survive, what you need is not a hard drive or SSD but a backup solution or strategy.

Quote:
More than 20% of flash drives develop uncorrectable errors in a four year period, 30-80% develop bad blocks and 2-7% of them develop bad chips. In comparison, previous work on HDDs reports that only 3.5% of disks in a large population developed bad sectors in a 32 months period.
So the rate of uncorrectable errors on an SSD is an order of magnitude higher than on hard drives, but hard drives also develop bad blocks. This means you can't trust those spinning platters either - a fact that we all know. How often have we all heard the ticking sound of a failed hard drive, or the slow, whining spin-up of another drive that has broken down...

I now use btrfs RAID 5 on the spinning platters that /home calls home. And all data is also backed up to BD-R "just in case" (and that case will come, sooner or later). Please don't rely on spinning platters alone; I've been bitten by that mistake before.
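A sketch of that kind of setup, with made-up device names (note that btrfs RAID 5 still has known caveats such as the write hole, which is one reason metadata is often kept at raid1 even when data is raid5):
Code:
sudo mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd   # data striped with parity, metadata mirrored
sudo mount /dev/sdb /home                                      # any member device can be named in the mount
sudo btrfs filesystem df /home                                 # shows the data/metadata profiles in use
sudo btrfs scrub start /home                                   # periodic scrubs catch silent corruption early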
 
1 member found this post helpful.
  

