[SOLVED] How to change the file system mount points with new hard drives.
My point was to leave your /home on the fast SSD and then just mount the spinning rust in a directory within that home.
There really is no need to put your entire /home onto spinning rust instead; doing that would mean every application with a settings file in your home spins up the hard drive. I type "spinning up" because keeping /home off the HDD means the HDD can spin down, which saves power and reduces noise too.
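A sketch of what that arrangement might look like as a single fstab entry (the device name, username, and mount point here are hypothetical; check your own device with lsblk or blkid):

```
# /etc/fstab -- mount the data HDD inside the home directory on the SSD
# (device name and path are examples only; substitute your own)
/dev/sdb1  /home/yourname/data  ext4  defaults,noatime  0  2
```

The mount point directory (/home/yourname/data in this sketch) has to exist before the mount will succeed.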
Thanks for your input, guys. That's very thoughtful of you. It never occurred to me to mount my data drive inside my home. DUH!!! I wish I had thought of it. I think I can still do it.
273, you are a funny guy! Hard drives don't rust. Unless your water cooling breaks, but maybe not even then. I believe the case is stainless?
Jeremy, I have never had a Western Digital drive fail on me, not at home nor at work. Ever. Not since they first became available so long ago I can't remember when. Back when they were much bigger.
I still have every 3.5 inch hard drive I have ever used (shhh - don't tell them at work - they frown on that sort of thing even though none of it is classified). Some of them are almost fifteen years old, and occasionally I need to plug one in and they still work. Not for my own data but for my wife's music, because she won't do backups (I do them for her surreptitiously). So spinning up and down does not wear them out.
Believe me if they don't fail for me then they are not going to fail for a normal user either because I abuse them a LOT! I have had Seagate drives fail and after the last one I will never buy another Seagate drive.
By the way, the so-called "studies" listed in the HowtoGeek link are anecdotal and not consistent with scientific rigor. The second study even admits "Given our limited sample size, I wouldn't read too much into exactly how many writes each drive handled." Until studies are performed with statistically valid results, by actual unbiased scientists, I cannot assume that SSDs can match the reliability of a Western Digital hard drive. My rant about WD hard drives is also anecdotal (although with a lot more than 6 drives), but it's my experience, so I will continue to rely on those hard drives. I asked our network administrator how often he has to replace hard drives for our thousands of users and he said it is a fairly rare occurrence (all Western Digital). They even got rid of all of our UPSes because the failure rate and data loss did not justify the expense of replacing the batteries.
This latest computer is so incredibly fast that it doesn't matter what I use. Simulations I run at work (that can take up to 20 minutes on the Windoze10 Dell desktop) now run in seconds on my home computer. Gimp which used to take several minutes to load with all of its plugins now loads instantly. Literally too fast to time. And I have not even started to overclock it.
The google study was done by real scientists and is not anecdotal. But after reading the actual paper I don't see how anyone can conclude that SSDs are as reliable as hard drives. Just this one statement alone argues against that:
"While flash drives offer lower field replacement rates than hard disk drives, they have a significantly higher rate of problems that can impact the users such as un-correctable errors."
Hey guys, I am the user and I do not want to be impacted! So I will continue to use hard drives for the things that matter.
For those of you interested in computer reliability, this is a talk at Stanford from the primary author of the Google study. I suspect the Google names are there only because they provided the data that she analyzed.
Of course you may draw your own conclusion from it, but for me the bottom line is that SSDs are substantially less reliable than hard drives. Predictive-maintenance drive replacement is a factor I understand very well, and it is not a major factor in my decision making. Mission-critical systems are very conservative and replace drives even if they have not failed.
Quote:
Originally Posted by Mikech
I do know that things get mounted in Fstab but in looking at mine there are only two entries full of hieroglyphics and I have all 6 sata drives working and mounted on logon. Is there a secret fstab somewhere?
Those hieroglyphics are likely the UUIDs assigned to the drives when you installed Linux. Distribution installers seem to prefer to use these by default (for some reason) for mounting the operating system filesystems. There's probably a dialog in the installer that lets you select other mounting options (I know openSUSE has one). If the UUIDs bug you, take a look at creating labels for the filesystems you're mounting. If you're using ext4 on those filesystems, issue:
Code:
sudo tune2fs -L labelname /dev/sdXN # See tune2fs(8) for more
then you can configure them in "/etc/fstab" using something like "LABEL=slack142 / ...", "LABEL=homefs /home ...", etc. instead of "UUID=hieroglyphics ...". (I find labels far easier to remember than UUIDs.) If you're using another filesystem -- like btrfs, xfs, etc. -- there may be utilities that allow you to label those filesystems.
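For reference, a few of those labeling utilities for other filesystems (a command sketch only; the device names and mount point are placeholders, the label "homefs" is an example, and these generally need root):

```
sudo xfs_admin -L homefs /dev/sdXN             # XFS (filesystem must be unmounted)
sudo btrfs filesystem label /mnt/point homefs  # Btrfs (by mount point or device)
sudo fatlabel /dev/sdXN HOMEFS                 # FAT (from dosfstools)
```

After labeling, the same "LABEL=homefs ..." style of fstab record works for these filesystems as well.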
Q: You have disks that are being mounted (automagically?) but aren't specified in "/etc/fstab"?
Q: Are some of the disks connected via USB?
I'd get the mounting information using "mount" and create records for them in "/etc/fstab". Can you post the contents of your "/etc/fstab" and the output you receive from "mount" (or "sudo mount")? In my experience, USB drives occasionally get different device names after reboots. That's why I use the filesystem labels so I won't have to care about that happening.
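Two commands that gather that mounting information in an fstab-ready form (both are part of util-linux; the columns chosen here are just the fields an fstab record needs):

```shell
# Show currently mounted real filesystems: source device, mount point,
# filesystem type, and mount options.
findmnt --real -o SOURCE,TARGET,FSTYPE,OPTIONS

# Show every block device with its label and UUID, mounted or not,
# which is handy when writing LABEL= or UUID= entries.
lsblk -o NAME,FSTYPE,LABEL,UUID,MOUNTPOINT
```

Unlike the raw "mount" output, findmnt lines up the fields in columns, so it is easier to copy from.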
Quote:
... I am a health physicist ...
I still recall getting back a physics home assignment where I was to calculate the safe exposure for some isotope where the professor added a note next to my answer that read "I'd be dead!". Probably a Good Thing that Physics was not my major.
Quote:
... back in the 70s I had to program in machine language to realign the peak drift on a gamma spectroscope using a PDP 11 (so I had to learn machine language).
I'll bet we ran across the same textbook back in those days: "The Minicomputer In The Laboratory". All about using PDP-11s for lab data acquisition, etc. (Still have a copy stashed away in storage.)
Mikech: I refer to hard drives as "spinning rust" because once upon a time the medium used to store the magnetic charge was a ferrous or ferric oxide (can never recall which).
Edit: Regarding reliability of media: have a backup, because all media fail, and verify that backup, preferably on another machine. You must expect everything to fail at some point and have redundancy in place. Always have a backup, and that goes for everything in life.
Quote:
Originally Posted by 273
Mikech: I refer to hard drives as "spinning rust" because once upon a time the medium used to store the magnetic charge was a ferrous or ferric oxide (can never recall which).
I think it would have been ferrosoferric oxide Fe3O4, also known as magnetite or lodestone.
Thanks, seems about right. Referring to your signature, who's blinding whom with science?
Chemistry is easy to understand. It's basically just cookery. Different oxides of iron are like cakes with different amounts of sugar and fat in them. But electronics and computer science are a closed book to me. I know more or less how to make a computer do what I want but I have no idea how it does it. Every time I try to read that stuff, my head starts to spin.
Quote:
Originally Posted by hazel
Chemistry is easy to understand. It's basically just cookery. Different oxides of iron are like cakes with different amounts of sugar and fat in them. But electronics and computer science are a closed book to me. I know more or less how to make a computer do what I want but I have no idea how it does it. Every time I try to read that stuff, my head starts to spin.
Fair enough, cookery has always been a mystery to me :~)
Thank you; those were very helpful tips. I was wrong: because I could see the other drives in the home menu and could click on them, I assumed they were mounted. They were not. When I clicked on them, they were being mounted in the background, transparently to the user. Nice feature, but dangerous for some poor sap who thinks they are mounted and then can't understand why his cron job failed. If I hadn't figured this out, all of my automatic backups would have failed, because the script assumes they are mounted at startup. I assume that fstab has to run first, otherwise Linux won't find the script to run.
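That cron-job pitfall can also be guarded against inside the backup script itself. A minimal sketch (the mountpoint utility is part of util-linux; the function name and backup paths are hypothetical):

```shell
#!/bin/sh
# Abort a backup unless the target filesystem is actually mounted,
# so we never write into an empty mount-point directory by mistake.
require_mount() {
    if ! mountpoint -q "$1"; then
        echo "error: $1 is not mounted, skipping backup" >&2
        return 1
    fi
}

# Hypothetical usage at the top of the backup script:
# require_mount /home/mikech/data || exit 1
# rsync -a /home/mikech/data/ /backup/data/
```

With that check in place, the script fails loudly instead of silently backing up an empty directory when the drive was never mounted.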
I'm starting to feel almost like an expert now. I did learn a great deal. The real joy is that I have to return the motherboard because of a defective USB 3.0 20-pin header. I would fix it myself, since it is just a mechanical failure, but the board is under warranty. So I may get to try out all this new knowledge again real soon. It shouldn't come to that, but you never know: the system may not be happy about a new motherboard and I may have to start over. It's happened before. Fortunately I have lots of computers to use in the meantime.
Rnturn: No I don't recall that book. I always worked with the documents and manuals from DEC and the spectroscope. I had no formal computer education other than programming in FORTRAN that was starting to become a requirement for science majors.
273:
I still think referring to them as spinning rust is hilarious. I like your sense of humour.
Hazel:
I am the opposite: I know how the computer works (it's just a big box of on/off switches with a timer and a traffic cop), but I am still struggling with Linux after all these years, plus I am really old and can't remember stuff.
Quote:
Originally Posted by Mikech
"While flash drives offer lower field replacement rates than hard disk drives, they have a significantly higher rate of problems that can impact the users such as un-correctable errors."
Hey guys, I am the user and I do not want to be impacted! So I will continue to use hard drives for the things that matter.
If you want your data to survive, what you need is not a hard drive or SSD but a backup solution or strategy.
Quote:
More than 20% of flash drives develop uncorrectable errors in a four year period, 30-80% develop bad blocks and 2-7% of them develop bad chips. In comparison, previous work on HDDs reports that only 3.5% of disks in a large population developed bad sectors in a 32 months period.
So the rate of uncorrectable errors on an SSD is an order of magnitude higher than on hard drives, but hard drives develop bad blocks too. This means that you also can't trust those spinning platters - a fact that we all know. How often haven't we all heard the ticking sound of a failed hard drive, or the very slow, whining spin-up of another drive breaking down...
I now use btrfs RAID 5 on the spinning platters my /home calls home. And all data is also backed up to BD-R "just in case" (and that case will come, sooner or later). Please don't rely on spinning platters alone; I've been bitten in the butt before by that mistake.