Linux - Hardware. This forum is for Hardware issues.
Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?
I run OpenSuSE on my little lappy here so I just opened up my can o' system tools and formatted this beast with ext3... first I used gparted to do the job... but after formatting the drive failed to mount... I opened "computer" and double clicked on the "USB drive" and I get "Unable to mount location, Can't mount file."
Strange, I thought... so I used qtparted... same results... so I used terminal and ran
Code:
# mkfs.ext3 /dev/sdb1
Same bleeding results!!!
so now I am sitting here thinking to myself... this is a replacement for another Seagate Barracuda drive (450GB model)... and THAT drive worked fine! So I grabbed that drive and, using Acronis Migrate Easy, I copied the partition from my 450GB drive to the new drive... SUCCESS!!! I was then able to mount the drive easily!
so next I used gparted to resize the partition to the full 1.5TB... FAIL!
the drive is back to "Unable to mount location, Can't mount file."
Now I have created a folder in media (/media/MVARCHIVE/) and using the following command:
Code:
# mount /dev/sdb1 /media/MVARCHIVE/
I can mount the drive as root, and access it, and add my crap to it!
so... why can't I just MOUNT the frikin' drive as a normal user? why do I have to mount it as root in terminal? Is this fixable? or am I just hosed?
Well, typically normal users *can't* mount stuff unless they are given explicit permission via an fstab entry. These days there is also some primitive policy control but I just have it deactivated and I really can't recall the details.
Anyway, the polite thing to do is to create an fstab entry for that external disk. If you rely on various other tools to magically mount things rather than using the lowest level system tools you have (like 'mount'), you also run the risk of encountering more bugs.
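As a minimal sketch of such an entry — assuming the device node (/dev/sdb1) and mount point (/media/MVARCHIVE) mentioned earlier in this thread — the `user` option is what lets a non-root user mount it:

```
# /etc/fstab -- sketch only; device node and mount point assumed from this thread
/dev/sdb1   /media/MVARCHIVE   ext3   noauto,user   0   0
```

With a line like that in place, a normal user can run `mount /media/MVARCHIVE` without needing root.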
ok... that makes sense... but why did it work (automount) on a smaller drive of the same make? also why can I create a smaller partition on the same drive and it successfully automounts?
As I already suggested, it may be a problem with the particular automount tools you are using. I really have no idea because not enough information was presented to make a valid deduction.
Be careful adding a line in /etc/fstab for external storage devices. I suggest setting a label for the drive or partition, so you can mount by label instead of by device node.
Automounting is not perfect.
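As a sketch of the label approach (the label name MVARCHIVE and device node /dev/sdb1 are assumed from this thread, not prescribed): the labelling commands would be run as root on the real system, so they appear here only as comments, and the block just prints the resulting fstab line.

```shell
# On the real system (as root), you would label the ext3 filesystem and
# mount it by label, e.g.:
#   e2label /dev/sdb1 MVARCHIVE
#   mount LABEL=MVARCHIVE /media/MVARCHIVE
# The matching /etc/fstab entry, referring to the label instead of the
# device node (which can change between hot-plugs):
printf '%s\n' 'LABEL=MVARCHIVE  /media/MVARCHIVE  ext3  noauto,user  0  0'
```

Mounting by label means the entry keeps working even if the disk comes up as /dev/sdc next time it is plugged in.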
Last edited by Electro; 11-14-2008 at 02:12 AM.
Reason: I thought I put not perfect, it is corrected now
I agree with you, I actually am hesitant to screw around with fstab anyway... I prefer my tools to behave in an expected manner and automount just isn't...
I plan to try several other distros to see if this is a distro-specific problem... sorry pinniped, I just don't buy the whole automount tools being broken or defective... I have two drives here, both Seagate Barracuda ST3***AS models (meaning they have identical firmware and controller models); the only physical difference between these drives is the storage density, and the only logical difference is the media density. Heck, they use the same logical layout and bit offset! So this is not a physical defect.
If I create a partition UNDER 1000 GB I can automount fine... the only time I experience a problem is when I create a partition OVER 1000 GB... you know... I have this problem on a raid set I have... 1.7 TB... it does the same thing... I never realized it because I DO use an fstab entry for that one, only because it's an internal raid set (md0) and the physical identifier will never change.
Ok, changing tack... pinniped, I DO agree with your assessment: something is wrong with the automount tools... anyone know of any that work on partitions greater than a terabyte?
I corrected my last post. I suggest not depending on automounting.
I also suggest not buying Seagate hard drives. I suggest Hitachi and Western Digital hard drives because they do not have as many problems as Seagate hard drives.
Not all USB to SATA/IDE converters support hard drives larger than 1 terabyte.
IcoNyx;
Most likely your problem is that your kernel was not compiled with large drive support; most off-the-shelf distros, except for server versions, aren't built with this option enabled. Look in your /boot/config-(whatever-your-kernel-ver) for CONFIG_LBD=y. If it's not there you will have to either rebuild your kernel or make do with a smaller partition.
Good luck!
PS: I would favor a smaller partition, just because a disk check on a huge drive takes a looong time. :-)
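The check above is a one-line grep; this sketch runs it against a sample config line so it is self-contained (on a real system you would grep the actual /boot/config file for your running kernel):

```shell
# Real usage (path depends on your distro's kernel package):
#   grep '^CONFIG_LBD=' "/boot/config-$(uname -r)"
# Self-contained demonstration against a sample config line:
config_line='CONFIG_LBD=y'
if printf '%s\n' "$config_line" | grep -q '^CONFIG_LBD=y'; then
    echo "large block device support enabled"
else
    echo "CONFIG_LBD not set; rebuild the kernel or use a smaller partition"
fi
```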
Sorry for the lack of response... Issue resolved... sort of...
I ended up reformatting the drive using Acronis... that fixed it.
Unfortunately I have now gone back to Debian... Love the Deb... And my issue is back... now I know formatting this beast is not the fix... it's a band-aid. Also, I have absolutely NO idea why formatting it with one utility as opposed to another would resolve the issue when the partition table and journal should be the same no matter what... right?
But there you have it: format with enough partitioning tools and you will eventually fix it!
Quote:
Originally Posted by rcbpage
IcoNyx;
Most likely your problem is that your kernel was not compiled with large drive support; most off-the-shelf distros, except for server versions, aren't built with this option enabled. Look in your /boot/config-(whatever-your-kernel-ver) for CONFIG_LBD=y. If it's not there you will have to either rebuild your kernel or make do with a smaller partition.
Good luck!
PS: I would favor a smaller partition, just because a disk check on a huge drive takes a looong time. :-)
rcb
Nope, sorry man, I already checked for this before I posted the first time. My kernels have all supported large volumes... sort of... this has been an issue for me since before I got a single 1500GB drive. A while back I was running a mass storage controller with seven 300s and they worked GREAT... except as an internal raid set (md0) I tended to mount using fstab and leave it connected permanently... as an external drive, I want to be able to pull the drive out and connect it willy-nilly... i.e. hot plug. Sometimes it's handy when you need to move a mass of binaries from one PC to another and back "sneaker-net" style.
*Sigh* I have the feeling this one's going to haunt me for a LOOONG time.