[SOLVED] Building a Backblaze storage pod need help!
I have followed the specs they give exactly, and I have almost completed the build, but I am stuck trying to mount the RAIDs. There are 3 RAID6 arrays, and I receive the same error for each one:
Code:
mount /dev/md0 /mnt/raid0
mount: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
This is what I have done (I'll just show one, since they are identical except for which drives make up each RAID, of course):
Code:
mdadm --detail /dev/md0
/dev/md0:
Version : 00.90
Creation Time : Thu Mar 4 14:50:11 2010
Raid Level : raid6
Array Size : 19046800448 (18164.44 GiB 19503.92 GB)
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Raid Devices : 15
Total Devices : 15
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Mar 8 10:47:33 2010
State : clean
Active Devices : 15
Working Devices : 15
Failed Devices : 0
Spare Devices : 0
Chunk Size : 64K
UUID : 138552f3:a4c26ab1:6083a234:87308e9b (local to host idnyrec-linux)
Events : 0.10
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
2 8 32 2 active sync /dev/sdc
3 8 48 3 active sync /dev/sdd
4 8 64 4 active sync /dev/sde
5 8 80 5 active sync /dev/sdf
6 8 96 6 active sync /dev/sdg
7 8 112 7 active sync /dev/sdh
8 8 128 8 active sync /dev/sdi
9 8 144 9 active sync /dev/sdj
10 8 160 10 active sync /dev/sdk
11 8 176 11 active sync /dev/sdl
12 8 192 12 active sync /dev/sdm
13 8 208 13 active sync /dev/sdn
14 8 224 14 active sync /dev/sdo
mkfs.jfs /dev/md0
mkfs.jfs version 1.1.12, 24-Aug-2007
Warning! All data on device /dev/md0 will be lost!
Continue? (Y/N) y
Format completed successfully.
19046800448 kilobytes total disk space.
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sdo[14] sdn[13] sdm[12] sdl[11] sdk[10] sdj[9] sdi[8] sdh[7] sdg[6] sdf[5] sde[4] sdd[3] sdc[2] sdb[1] sda[0]
19046800448 blocks level 6, 64k chunk, algorithm 2 [15/15] [UUUUUUUUUUUUUUU]
unused devices: <none>
mdadm --detail --scan --verbose > /etc/mdadm.config
mkdir /mnt/raid0
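One thing worth double-checking: on Debian, mdadm normally reads its configuration from /etc/mdadm/mdadm.conf, not /etc/mdadm.config, so the scan output written above may simply be ignored at boot. A hedged sketch of what I would try instead:
Code:
# Debian's mdadm looks in /etc/mdadm/mdadm.conf by default;
# append the scanned array definitions there
mdadm --detail --scan --verbose >> /etc/mdadm/mdadm.conf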
I edited /etc/fstab and added the line:
/dev/md0 /mnt/raid0 jfs defaults 0 2
Then, when I go to mount, I get the message:
Code:
mount /dev/md0 /mnt/raid0
mount: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
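When mount fails with this generic message, the kernel usually logs the real reason; checking the ring buffer right after the failed attempt narrows it down. A minimal sketch, assuming the jfs module is built for this kernel:
Code:
# retry the mount, then read the last kernel messages for the actual jfs error
mount /dev/md0 /mnt/raid0
dmesg | tail -n 20
# confirm the jfs driver is actually available and loaded
modprobe jfs
lsmod | grep jfs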
I see that in your first post now, I did not scroll down far enough.
The next logical step in fault isolation would be to format a non-RAID partition as jfs and see if you can mount it; any partition on any disk would do.
As far as I know, jfs is not used very often, so I wonder if there is something wrong with the driver for this file system.
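A minimal sketch of that test, assuming a spare, data-free partition (/dev/sdp1 here is a hypothetical name; substitute whatever is actually free):
Code:
# WARNING: destroys any data on the test partition (hypothetical /dev/sdp1)
mkfs.jfs /dev/sdp1
mkdir -p /mnt/jfstest
mount -t jfs /dev/sdp1 /mnt/jfstest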
The only thing I can recommend is to approach this step by step. You have demonstrated that jfs is a mountable file system. Can you build a RAID1 with jfs? No? Can you do it with xfs? Yes? Can you build a RAID5 with jfs? And so on.
Unfortunately, I don't have any other helpful ideas, sorry.
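A hedged sketch of that step-by-step approach, using two hypothetical spare partitions (adjust the device names to whatever is actually unused on the pod):
Code:
# build a small two-member RAID1 from spare partitions (hypothetical names)
mdadm --create /dev/md9 --level=1 --raid-devices=2 /dev/sdp1 /dev/sdq1
mkfs.jfs /dev/md9
mkdir -p /mnt/raidtest
mount -t jfs /dev/md9 /mnt/raidtest
# if that works, repeat with xfs, then RAID5/RAID6 with more members,
# to find the exact combination where mounting starts to fail
umount /mnt/raidtest
mdadm --stop /dev/md9
mdadm --zero-superblock /dev/sdp1 /dev/sdq1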
Hi, I am new to Linux. I am trying to partition a RAID array of 15 x 1.5 TB drives. I was trying to use fdisk, but it would not let me use the full size, and then I read that it can only handle 2 TB partitions. I then tried gparted, but it doesn't show my RAID array as a device.
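For what it's worth, the 2 TB ceiling comes from the MBR (msdos) partition table that fdisk wrote at the time; parted with a GPT label can go larger. A sketch, assuming the array shows up as /dev/md0:
Code:
# put a GPT label on the array, then one partition spanning all of it
parted /dev/md0 mklabel gpt
parted /dev/md0 mkpart primary 0% 100%
# alternatively, skip partitioning entirely and run mkfs directly on /dev/md0,
# as was done earlier in the thread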
Hey, I'm new to Linux. I'm running Debian 2.6.26-21lenny4 and am trying to mount a 15-disk RAID6 array, made up of 1.5 TB drives, formatted with jfs. After building and formatting the array, I try to mount it and receive this error:
Code:
mount: wrong fs type, bad option, bad superblock on /dev/md2,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
I was able to build and mount the RAID6 arrays successfully, following exactly the same steps, using only 13 of the drives; but when I try 14 or 15 of the drives, I receive that error message.
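The size pattern here is suggestive: with 1.5 TB members, a 13-drive RAID6 comes to about 15 TiB of usable space, while 14 or 15 drives pushes the block device past 16 TiB, the ceiling for block devices and filesystems on 32-bit kernels. That cause is an assumption on my part, not something confirmed in the thread, but it is easy to check:
Code:
# a 32-bit kernel caps block devices / filesystems at 16 TiB (2^32 * 4 KiB pages)
uname -m                     # i686 means 32-bit, x86_64 means 64-bit
# array size in 1 KiB blocks; anything above 17179869184 is over 16 TiB
grep md2 /proc/partitions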
Please post the ENTIRE output of dmesg and the contents of the /var/log/dmesg file (if present in your distribution). If it is not there, reboot your machine, log in as root, and capture the output of dmesg.
The kernel has a buffer for its messages; as new ones appear, the older ones get pushed out. I think this is a controller / multiplexer problem, and to solve it the 'early' initialization messages might be helpful.
Or try to read and understand them yourself. It's the fastest way to solve problems. Linux speaks, and what it says is meaningful.
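A quick way to preserve those messages before they scroll out of the buffer, assuming root access:
Code:
# save the kernel ring buffer right after the failed mount attempt
dmesg > /root/dmesg-after-mount.txt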