Nautilus shows 457 MB available on an empty 298 GB hard drive?
Hello,
I added a Seagate 320 GB hard drive to use for Samba storage on my Ubuntu 7.1 file server. I can't work out why the drive reports 298 GB in size, yet the drive properties in the Nautilus browser at /media/bigdrive show only 457.8 MB of space available. Attempting to add files to this new drive quickly ends in "out of space", which can't be right. The drive was originally formatted on a Windows machine with the manufacturer's utility as FAT32; then I realized an ext3 partition would be better for backup purposes, so I reformatted it with the GNOME partition editor from another machine on the network. This problem has me stumped. Does the forum have any suggestions on how I might troubleshoot/solve it? Thank you. mg92865
fdisk output for the drive in question:
Code:
mike@blackbox:~$ sudo fdisk -l
Password:

Disk /dev/sda: 41.1 GB, 41110142976 bytes
255 heads, 63 sectors/track, 4998 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        1216     9767488+  83  Linux
/dev/sda2            1217        2432     9767520   83  Linux
/dev/sda3            2433        4256    14651280    b  W95 FAT32
/dev/sda4            4257        4998     5960115   82  Linux swap / Solaris

Disk /dev/sdb: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       14087   113153796   83  Linux |
Unmount any mounted partitions then use cfdisk to have your way with the disk.
Code:
sudo cfdisk /dev/sdb |
Results of cfdisk
Thanks for the suggestion. After reading up on the cfdisk command, I attempted this. The program came back with a fatal error: it was unable to open the drive.
Does this mean the drive should just go back to Seagate, or has it simply been formatted incorrectly? Thanks, mg92865 |
Okay, first I neglected to answer the first question in your post. When you purchased the 320 gigabyte drive, the manufacturer used 1000 x 1000 x 1000 bytes to equal 1 gigabyte (GB). However, when your computer tells you the capacity of the drive, it uses 1024 x 1024 x 1024 bytes, a unit properly called a gibibyte (GiB). The difference between GB and GiB is whether 1 "gigabyte" means 1 billion bytes or 1,073,741,824 bytes. So a drive sold as 320 GB actually holds 320,072,933,376 bytes (per your fdisk output), which works out to about 298 GiB, the figure your computer reports.
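The arithmetic can be double-checked with shell integer math, using the byte count from the fdisk output above:

```shell
# fdisk reported this many bytes for the "320 GB" drive.
bytes=320072933376
# Dividing by 1024 three times converts bytes to binary gigabytes (GiB).
echo "$((bytes / 1024 / 1024 / 1024)) GiB"   # prints "298 GiB"
```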
Now on to your problem accessing the second disk in your Ubuntu machine. I don't think that your disk is broken, and here is why: when you are learning a new system, try something, and get an error, the chances that you made a mistake are much better than the chances that new hardware is broken. Yes? First make sure that /dev/sdb1 isn't mounted anywhere, using the mount command:
Code:
sudo mount
If /dev/sdb1 shows up in the output, unmount it with sudo umount /dev/sdb1. Then recreate the ext3 filesystem on the partition:
Code:
sudo mkfs -t ext3 /dev/sdb1
Finally, create a mount point if one doesn't exist and mount the new filesystem:
Code:
sudo mkdir -p /mnt/sdb1
sudo mount -t ext3 /dev/sdb1 /mnt/sdb1 |
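After mounting, one quick way to confirm the partition is usable at its full size (a sketch; the exact mount point depends on where you mounted it):

```shell
# Show size, used, and available space for mounted filesystems.
# After a successful mkfs and mount, the new partition's full size
# should appear here (minus the 5% of blocks mke2fs reserves for root).
df -h
```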
Thank you for your comments. Yes, errors on my part seem to go hand in hand with progress in Linux. Here is where I am in resolving this issue.
I took the drive out, went back to the DOS-based Seagate tools, and ran the long and the short tests; all seems to work okay. When formatted as NTFS, the drive mounts properly under Windows XP Home as a full 298 GB. I then installed the drive in the Ubuntu box and formatted it as ext3 with the command you supplied:
Code:
mike@blackbox:~$ sudo mkfs -t ext3 /dev/sdb1
mke2fs 1.40-WIP (14-Nov-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
39075840 inodes, 78142160 blocks
3907108 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
2385 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
mike@blackbox:~$
Then I performed:
Code:
sudo mkdir test
sudo mount /dev/sdb1 test
Rerunning the sudo mount command shows that the drive is mounted. I am wondering if I am just confused about where I am, because this is not the boot drive. Thank you for helping me confirm that the drive works and can be mounted. I will be doing a little additional reading to see if I can understand mount points on a second hard drive. mg92865 |
Don't get discouraged. I took a long time getting started in Linux. You will reach a point where you don't have to look up every single thing. Then you can start to have fun.
In Ubuntu I think you will find that the new disk partition will automatically be mounted at /media/sdb1 unless you tell Ubuntu to do something else with it. A mount point is just a directory. The directory doesn't have to be empty. If you mount a file system/partition on a directory that has files in it you won't be able to see those files until you unmount the file system. If you start to get discouraged remember that there was a time that you didn't know squat about Windows but now it is familiar to you. Linux is the same way. :) |
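For the follow-up reading on mount points: to mount the partition at the same place automatically at every boot, a line like this in /etc/fstab is a typical sketch (the device /dev/sdb1 and the directory /media/bigdrive are assumptions from this thread; adjust both to your setup, and the directory must already exist):

```
# device     mount point       type  options   dump  fsck-order
/dev/sdb1    /media/bigdrive   ext3  defaults  0     2
```

After editing the file, sudo mount -a mounts everything listed in fstab without rebooting, which is a handy way to test the entry.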
No real problem with the learning curve, though there certainly is one.
Anyway, I now have the drive in the Linux server; I can see the drive and copy files to and from it. So it looks like the help here has solved the problem. Case closed. Thanks again for the assistance. mg92865 |