External Firewire hard disk drive - mount glitches
My apologies for the length of this post. I wanted to include as much detail as possible up front.
I am suddenly having problems with an external IEEE-1394 hard disk that worked on an older distro.
I previously used Fedora Core 1 with the latest RPM-packaged 2.4 kernel. I used the well-known rescan-scsi-bus.sh utility to detect the drive, after which I could mount it and back up my internal drive's data without any problems.
Since then I have upgraded to FC2, "uname -a" as follows:
Linux lithium 2.6.8-1.521 #1 Mon Aug 16 09:01:18 EDT 2004 i686 athlon i386 GNU/Linux
I use a generic PCI FireWire card and a generic HD enclosure housing a 160 GB Maxtor drive. I have partitioned it as follows (output from "fdisk -l"):
Disk /dev/sda: 163.9 GB, 163928604672 bytes
255 heads, 63 sectors/track, 19929 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       14590   117194143+  83  Linux
/dev/sda2           14591       19929    42885517+   c  W95 FAT32 (LBA)
In summary, the partition I would like to use for backing up my system is /dev/sda1, an ext2 filesystem (same errors encountered with ext3 as well).
I ordinarily run a short backup script as root:
mount /dev/sda1 /mnt/1394
rsync -a --delete-after --exclude=/mnt --exclude=/dev --exclude=/proc --exclude=/sys /* /mnt/1394
This used to work like a charm, but since upgrading I get some weird errors that suggest a problem with writing to the disk. An example error follows:
recv_generator: mkdir "/mnt/1394/var/www/icons/small" failed: No such file or directory
stat "/mnt/1394/var/www/icons/small" failed: No such file or directory
Again, I get many, many thousands of these scrolling past me (perhaps even one for every file on the machine).
In the end, rsync reports
rsync error: some files could not be transferred (code 23) at main.c(633)
To see what was going on, I did some manual tests. I found that after running the script I could no longer mount the partition manually: mount complained that I "must specify the filesystem type", and when I specified "-t ext2" I got
mount: wrong fs type, bad option, bad superblock on /dev/sda1,
or too many mounted file systems
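In case it helps anyone reproduce this: one way I know of to check whether the superblock is still intact, without mounting at all, is dumpe2fs from e2fsprogs. A sketch on a throwaway file-backed image (substitute /dev/sda1 on the real system; the image path is made up):

```shell
# Create a small file-backed ext2 filesystem to demonstrate on
# (no root or real device needed).
dd if=/dev/zero of=/tmp/demo.img bs=1M count=64 2>/dev/null
/sbin/mkfs.ext2 -F -q /tmp/demo.img

# -h prints only the superblock; a healthy ext2 superblock shows
# magic number 0xEF53 and filesystem state "clean".
/sbin/dumpe2fs -h /tmp/demo.img 2>/dev/null | grep -iE 'magic|state'
```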
There are only a handful of mounted filesystems, as evidenced by the output of "df -h":
If I reboot the system and power-cycle the HD enclosure, I still cannot mount the partition. However, deleting and remaking the partition (with fdisk, writing the partition table in between), followed by "/sbin/mkfs.ext2 /dev/sda1", allows me once again to mount the (now empty) partition. FYI, the output from "mkfs.ext2" is as follows:
mke2fs 1.35 (28-Feb-2004)
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
14663680 inodes, 29298535 blocks
1464926 blocks (5.00%) reserved for the super user
First data block=0
895 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
This filesystem will be automatically checked every 25 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
As stated, I can mount the freshly made filesystem manually and even write files to it, unmount, remount, and see the files. However, rsync fails as before. After the failure, running fsck on the partition reports a superblock error, and directing it to use a backup superblock returns the same error (I didn't write down the exact wording).
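In case the exact incantation matters: e2fsck can be pointed at a backup superblock with -b, and the whole procedure can be rehearsed safely on a file-backed image instead of the real disk. A sketch (the image path and sizes are made up; block 8193 is the first backup for a 1 KiB block size, just as 32768 is for the 4 KiB case in the mke2fs output above):

```shell
# Build a small throwaway ext2 filesystem in a plain file, forcing
# a 1 KiB block size so the backup superblock lands at block 8193.
dd if=/dev/zero of=/tmp/fsck-demo.img bs=1M count=64 2>/dev/null
/sbin/mkfs.ext2 -F -q -b 1024 /tmp/fsck-demo.img

# Simulate superblock damage: the primary superblock lives 1024
# bytes into the device, so zero out that 1 KiB block.
dd if=/dev/zero of=/tmp/fsck-demo.img bs=1024 seek=1 count=1 \
   conv=notrunc 2>/dev/null

# Repair from the first backup superblock; e2fsck exits 1 when it
# has corrected errors, so don't treat that as a failure.
/sbin/e2fsck -f -y -b 8193 -B 1024 /tmp/fsck-demo.img || true
```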
I am thus at a loss. The problem seems to occur whenever a large amount of data is written to the drive, so I cannot back up my disk. I am unsure whether this is a hardware problem or a software bug.