[SOLVED] second hard drive showing first drive / when mounted
On my current system I have drives sda, sdb, and sdc: 1 TB, 1.5 TB, and 1.5 TB respectively.
SATA drive sda is where I installed the OS. The other SATA drives, sdb and sdc, are from an old system where they were a Linux software RAID mirror, something I set up to play with and learn from. Before adding sdb and sdc to my current system, I stopped the mirror and removed the partitions. I then added the drives to my current system, used cfdisk to create one partition on each, and mounted the new partitions. When I run ls on a new drive, I see bin/, etc/, and so on listed. Is this normal, or should I have created the new partitions as Logical instead of Primary?
Is there a step I missed to get my drives back to a non-RAID state?
Deleting and then recreating partitions doesn't do anything to the underlying data, including the RAID metadata, so when you mount /dev/sd[bc] everything reappears. If you run mkfs on the partition you created on each of those drives (before mounting), you should be OK. Note that this will destroy all of that data, so be sure before you proceed.
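A minimal sketch of that cleanup, assuming the old mirror members are /dev/sdb1 and /dev/sdc1 and any auto-assembled array shows up as /dev/md0 (all three device names are assumptions; check yours with lsblk and /proc/mdstat first):

```shell
# Stop any array the kernel auto-assembled from the old members
# (the md device name is an assumption; check /proc/mdstat).
sudo mdadm --stop /dev/md0

# Erase the RAID superblock from each former member partition.
sudo mdadm --zero-superblock /dev/sdb1
sudo mdadm --zero-superblock /dev/sdc1

# Put a fresh filesystem on each new partition.
# WARNING: this destroys all remaining data on those drives.
sudo mkfs.ext4 /dev/sdb1
sudo mkfs.ext4 /dev/sdc1
```

After that, mounting a partition should show only an empty filesystem (just lost+found for ext4), not the old install.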
Assignment of /dev block device names is done by the kernel and/or udev, and mounting the disks is up to you. I don't fully understand your problem; please clarify.
I added the drives to my current system and used cfdisk to create one partition on each and then mounted the new drives. I then ran ls and saw bin/, etc/, and so on.
How did you add them, what did you create with cfdisk, and what can be seen in bin?
How are your drives mounted now?
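For anyone following along, those questions can be answered with a few read-only commands (nothing here modifies the disks):

```shell
lsblk -f          # tree of drives/partitions with filesystem type, label, mountpoint
blkid             # filesystem and RAID signatures detected on each device
cat /proc/mdstat  # any md (software RAID) arrays the kernel has assembled
findmnt /dev/sdb1 # where (and whether) a given partition is mounted
```

If blkid or /proc/mdstat still reports linux_raid_member on sdb1/sdc1, the old RAID metadata is still present.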
Thank you all for posting. I was able to solve my issue as explained below:
Computer #1 was set up with one drive. Computer #2 was set up with three drives in a RAID configuration.
Computer #2 was partitioned as / on RAID 1 and /home on RAID 5. One drive failed on that system.
I then decided to add the other two drives to computer #1 and do a fresh install. I disabled the mirror on the two drives before moving them to the new system and then used cfdisk to repartition each drive, but what I actually did was just increase the size of the first partition to consume the entire drive. According to what I have found, and also the previous post by syg00, the data was still there, and what I was seeing was the / of my previous install. That makes more sense now.
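syg00's point, that rewriting only the partition table leaves the data behind it intact, can be demonstrated safely on a file-backed image instead of a real disk (the file name and offsets here are arbitrary; a real filesystem's superblock sits past the first sector, e.g. ext4's starts at byte 1024):

```shell
# Create a 1 MiB image and plant a marker where a filesystem
# superblock would live (byte 1024, as for ext4).
truncate -s 1M demo.img
printf 'OLDDATA' | dd of=demo.img bs=1 seek=1024 conv=notrunc status=none

# "Repartition": wipe only the first sector, where the MBR
# partition table lives.
dd if=/dev/zero of=demo.img bs=512 count=1 conv=notrunc status=none

# The data past sector 0 is untouched -- this prints OLDDATA.
dd if=demo.img bs=1 skip=1024 count=7 status=none; echo
```

The same thing happened with the old mirror: cfdisk rewrote the partition table, but the filesystem (and RAID metadata) sitting after it was never overwritten.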
I then learned a lot about drives and switched to gdisk for my partitioning, based on what I had learned about the BIOS and drive geometry.
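For reference, a sketch of that gdisk-style approach using its scriptable companion sgdisk (the device name /dev/sdb is an assumption, and this destroys everything on that drive):

```shell
# Destroy any old MBR/GPT structures on the drive.
sudo sgdisk --zap-all /dev/sdb

# Create a GPT with one Linux filesystem partition (type 8300)
# spanning the whole drive (0:0 = default start and end).
sudo sgdisk -n 1:0:0 -t 1:8300 /dev/sdb

# Put a filesystem on the new partition.
sudo mkfs.ext4 /dev/sdb1
```

Unlike only resizing an existing partition, --zap-all plus mkfs guarantees no stale filesystem or RAID metadata survives.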