Linux - Newbie
This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's this is the place!
Distribution: Cinnamon Mint 20.1 (Laptop) and 20.2 (Desktop)
Posts: 1,672
Rep:
Quote:
Plugged it back in after installation. It no longer appears.
So, where did it appear BEFORE the installation? Was it seen by a different OS before you unplugged it and installed Manjaro?
What is the ID of the drive you disconnected; sata_1, sata_2 or what? (That's the motherboard ID, not what's reported by the OS.)
As above, what is the ID of the drive Manjaro is installed on?
I reckon your Manjaro system, including the boot loader, is all on the one drive (the only one you had attached during install). You'd therefore need to mount the drive you had disconnected (which the system doesn't know about) and add it back into fstab.
The elusive HDD does not appear in the bootloader.
Before the installation it was visible alongside the C: drive, on a Windows install.
So how would I mount this drive that can't be found, and add it to fstab?
Would this be done through the BIOS, or with the system fully booted?
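It's done with the system fully booted, not in the BIOS. A rough sketch of the usual steps (the /dev/sdb1 device name, mount point, and UUID below are hypothetical; check the actual output of lsblk/blkid on your machine):

```shell
# Identify the disk and its partitions (device names are hypothetical --
# check the lsblk output for your actual device):
lsblk -f
sudo blkid /dev/sdb1    # note the UUID and filesystem type

# Create a mount point and test-mount the partition:
sudo mkdir -p /mnt/data
sudo mount /dev/sdb1 /mnt/data

# If that works, add a line like this to /etc/fstab so it mounts at
# every boot (substitute your real UUID and filesystem type, e.g.
# ntfs for an old Windows data drive):
# UUID=1234-ABCD  /mnt/data  ntfs  defaults  0  2

# Check the fstab entry without rebooting:
sudo mount -a
```

If `mount -a` reports no errors and the drive's contents appear under the mount point, the entry will be applied on the next boot as well.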
Quote:
So the port made a difference to the HDD's existence? Strange; it must be registered there at a deep level.
Yup! You can't just plug disks back in any old where after building a system, as you'll screw up the path to the disk.
Similarly, if you have a multi-disk RAID set, you can't just swap a couple of the disks in the RAID around and expect the RAID to function; you've screwed up the hardware configuration. The RAID software knows it's wrong, and you can't mount the RAID volume till the disks are back in the correct positions.
Well, most raid configurations put configuration information on the disks/partitions used.
That information is SUPPOSED to permit any ordering of the physical disks to allow the RAID to be rebuilt.
This is what happens when a connection quits working for some reason - plug the disk into a different port and the controller reassembles the raid.
That is also one of the advantages of software RAID: plug the disk into any controller, and the system should be able to reassemble the RAID set.
That is what happens with LVM. First the partitions are identified by the kernel, then the software uses the UUIDs of the various partitions that have LVM raid to assemble the volume.
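As a concrete illustration of that on-disk metadata (a hedged sketch; the device name below is hypothetical), Linux software RAID with mdadm writes a superblock to each member disk, so the array can be reassembled no matter which ports the members end up on:

```shell
# Each member of an md array carries a superblock recording the
# array UUID and the disk's role; port/cabling order doesn't matter:
sudo mdadm --examine /dev/sdb1   # hypothetical member device

# Scan all block devices and reassemble any arrays whose members
# are found, wherever they happen to be plugged in:
sudo mdadm --assemble --scan

# Confirm the array came back together:
cat /proc/mdstat
```

This is why a software RAID survives a port shuffle that would confuse a simple hardware controller keyed to physical slots.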
Quote:
Well, most raid configurations put configuration information on the disks/partitions used.
That information is SUPPOSED to permit any ordering of the physical disks to allow the RAID to be rebuilt.
I've only ever dealt with hardware RAIDs: HP SmartArrays, Sun T3s, StorEdge 3000s, etc. Yup, old stuff. It never worked for me at 03:00 in the morning when I got called out and managed to mix up a couple of disks.
I agree, each disk has information written to it on a small system partition which defines where the disk sits in the RAID. Software RAIDs? I have no real experience, so I'll take your word for it.
Play Bonny!
Most of my hardware RAID experience was with NetApp and some old Sun products. In those, it didn't matter where you put the disks in the rack. The RAID systems used up to about 10 MB of each disk to hold the information; "up to" because the NetApp devices also booted from them. The Sun RAID stored data in NVRAM and on disk, so it could rebuild onto hot spares quickly.
At one level or another, all RAIDs are software driven: the software runs in the RAID controller, and then presents an image of a disk/partition rather than a POD.
The problem I've had with hardware RAIDs is that the recovery is a bit out-of-control. In the simple RAID controllers, SOMETHING has to direct the controller, and the software that does that has not been portable, nor available for all systems that may be connected to the controller. That makes it hard to recover from failures. The one I have (disabled) is a MegaRAID driver. The first disks I put in (for testing) got marked, and would not work at all when connected to a non-MegaRAID controller (something about the identification caused problems) until AFTER I overwrote the first 5 MB of the disks; they worked fine then.
The NetApp systems I used (some of their smallest) were all based on BSD... and they presented a network connection rather than a hard disk connection. So they counted (at least by me) as a software raid.
My current disk setup uses btrfs for RAID support (RAID 1 right now), but the system has to look for the UUIDs of the disks to identify which partitions to use and how to use them. Linux can't guarantee the device identification, as it does its scans in parallel, and which disk spins up first determines the name given. Yes, udev rules can be used to keep the names stable, but you are still depending on data on the disk, like the disk serial number, to assign a name, and that name still won't tell you how the device is physically plugged in...
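The UUID-based approach described above can be sketched like this (a hedged example; the device names, mount point, and UUID are all hypothetical, and mkfs destroys any existing data):

```shell
# Create a btrfs RAID 1 filesystem across two disks (hypothetical
# device names -- this wipes both disks):
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

# Both members share a single filesystem UUID; querying either
# device node reports the same one:
sudo blkid /dev/sdb

# Mount by UUID in /etc/fstab, so it doesn't matter which member
# the kernel happened to enumerate first (substitute the real UUID):
# UUID=1234abcd-hypothetical  /mnt/pool  btrfs  defaults  0  0

# At mount time btrfs locates all members carrying that UUID:
sudo mount /mnt/pool
```

That way the pool assembles correctly even if /dev/sdb and /dev/sdc swap names between boots.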
I used to like the old Solaris naming: /dev/dsk/c<x>t<y>d<z>s<n> Where c stood for controller, t for target, d for disk, s for slice (a SCSI target could have up to 16 disks attached, though usually only one was). You could always know which physical disk had what function that way. Oh well, I've accepted that the Linux way is more flexible...
Quote:
used to like the old Solaris naming: /dev/dsk/c<x>t<y>d<z>s<n> Where c stood for controller, t for target, d for disk, s for slice (a SCSI target could have up to 16 disks attached, though usually only one was). You could always know which physical disk had what function that way.
Ah, Yes... Happy days.
The CnTnDn identification was always good for identifying which disk had failed, especially in the 16 disk rack in the front of an Enterprise 450.
Invariably, when the system had been built, the SCSI cards hadn't been set up properly to allow the disk-associated LED to show the duff disk. The disks therefore weren't always where you thought, but were still in groups of four. We always used to run a read/analyse on one of the disks either side of the faulty one (ready light dims/flickers, good disks' LEDs solid) to identify the culprit.