Red Hat: This forum is for the discussion of Red Hat Linux.
egh... I've been trying to install RH9, and about 7/8 of the way through the first disc I get "rpmdb fatal region error; run recovery" for every package that should be installed after that, even after I change discs. In graphical mode it doesn't show that I'm getting these errors, but it's pretty obvious when a 50 MB package takes less than a second to install and the progress bar doesn't move at all, and when I quit out there are many lines of this error. In the text install, though, the second it goes wrong the error appears for every package that should install after that.
I mediachecked the disc and it passed fine, and I've re-burned the ISO multiple times. I have no idea what to do... I've been trying to fix this for several days now and have searched all over the internet. The Red Hat server is being upgraded, so I can't check their site for anything either :\
Please help, thanks.
--Pat
Oh, and also: after the installation I can log in as root, but I can't do much from there, or I don't know what to do.
Long-shot possibility: could it be that you are out of physical disk space, either on the drive itself or in an individual partition? In other words, suppose you've created a separate partition for /usr that's only, say, 100 MB. Since /usr lives in its own partition, if it were to fill up it has nowhere else to write to, and things could get hung up. Dunno if that applies to your situation, but it's conceivable. How did you partition your drive(s)?
Also, when you begin the re-install, do you reformat the partitions during the partitioning step? You should; if you don't, whatever data existed on them will remain as-is, and those old files can end up cluttering things. If you've been throwing massive error messages during multiple failed installs and not starting with a clean slate by reformatting, it's possible that all those useless log files will accumulate, and each successive installation attempt will start with less and less unused space. As I said, though, this is a fairly speculative scenario. Also note: if you've created /home on its own partition and it contains data you want to save, do not reformat it or you'll lose whatever is on it.
Otherwise, if the MD5 sums were verified for each CD and each one also passed the media check, then there should be no reason to doubt their integrity. -- J.W.
misc: how do I tell if my hard drive is bad or my RAM is bad?
JW: I re-partitioned and formatted my hard drive so there is a good amount of space on each partition, and this still happens. The partitioning and formatting phase moves more quickly than I'd expect for a 60 GB hard drive, though. It takes less than 5 minutes to 'format' it, and from what I remember formatting takes ages.
Bad blocks are detected, marked, and skipped during file system creation if you enable the check. No need to get a new hard drive as long as the number of bad blocks isn't growing daily due to hardware failure. Check your file systems regularly and keep backups of your data.
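If you want to see what that check actually does without touching a real disk, here's a sketch that runs it against a file-backed image instead (badblocks and mke2fs come from e2fsprogs; the image path is just an example):

```shell
# Sketch only: exercise the bad-block check against a file-backed image
# instead of a real disk. /tmp/disk.img is an example path.
dd if=/dev/zero of=/tmp/disk.img bs=1M count=8 2>/dev/null

# Read-only surface scan; on real hardware this would be /dev/hda2.
badblocks -s /tmp/disk.img

# The same scan folded into file system creation, which is roughly what
# the installer's "check for bad blocks" option does. -F forces mke2fs
# to accept a plain file instead of a block device.
mke2fs -q -F -c /tmp/disk.img
```

Any blocks badblocks flags would be recorded in the file system's bad-block inode so they are never allocated.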
well, since I have bad blocks I can't install Red Hat 9... that's the original problem and I don't know how to fix it... I've been researching like crazy but haven't found anything, so help would be greatly appreciated
Does it refuse to install when it finds bad blocks?
Sorry, it seems I'm not up to date on the exact installer behaviour with regard to bad blocks. I don't have a bad HDD to test it with. The bad blocks check has even been removed in Fedora Core 1: https://bugzilla.redhat.com/bugzilla....cgi?id=109442
You may need to partition your hard disk drive manually, use mke2fs -c, or leave out the space that is bad.
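The mke2fs route can be sketched like this; the names are examples, and the demo runs against a file-backed image so nothing real gets formatted:

```shell
# Sketch of "use mke2fs -c", demonstrated on a file-backed image;
# on your machine the target would be /dev/hda2, not an image file.
dd if=/dev/zero of=/tmp/hda2.img bs=1M count=8 2>/dev/null

# One option is to let mke2fs scan for bad blocks itself (-c).
# Another is to generate the list first with badblocks -o, then hand
# it to mke2fs with -l so those blocks are never allocated.
badblocks -o /tmp/hda2.bad /tmp/hda2.img
mke2fs -q -F -l /tmp/hda2.bad /tmp/hda2.img
```

The two-step form is handy because you keep the bad-block list around and can compare it after the next scan to see whether the disk is deteriorating.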
well, the part where there are bad blocks, I think, is where it doesn't install. mke2fs says it won't run because hda2 is mounted. Is there a way to unmount it or something? I tried leaving out the space that is bad, but this still doesn't work and still messes up at the same part... I still haven't checked the RAM chips, so I'm going to do that now
pat - during the format, as misc stated, you want to be sure you are checking for bad blocks. At the risk of telling you something you already know: certain sections of a hard drive may be flawed from the manufacturing process, or may become unusable over time (basically as a result of wear and tear). Realistically this is relatively rare, not something to worry about, and the amount of unusable space is usually next to zero. However, since a disk needs to store a one or a zero for each bit of data, if the magnetic media is damaged in such a way that data cannot be stored on it, you obviously want to avoid writing anything to that section of the disk. The format utility can inspect each section of the disk, and if any section fails the integrity test, it marks that section as bad so that it won't be used.
If you format and check for bad blocks, that's all you should have to do. Reboot and restart the installation process, and you should be good. It seems unlikely that your RAM is a factor here. On the other hand, if you reformat but still run into (new) bad blocks, then it's possible that your disk does need to be replaced. In my experience, only very new or very old disks have problems, so if it's brand new you might have just gotten stuck with a lemon, and if it's very old it may need to be put out to pasture. In the former case, bring it back to the retailer and exchange it; in the latter, just buy a new one.
To unmount /dev/hda2, format it with ext3, and then remount it, you would do the following as root:
1. umount /dev/hda2
2. mkfs -t ext3 /dev/hda2
3. mount /dev/hda2 /<mountpoint> (where <mountpoint> is your exact mountpoint)
Then something is accessing it -- even doing a directory listing on it counts as an 'access' and will generate the message you see. To avoid this, exit any programs you are running and go to the shell prompt (i.e., a terminal session). You cannot be in any directory that physically exists on /hda2. In other words, move to a directory on /hda1 or /hda3 or whatever. Then unmount /hda2, etc, and you should be good.
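As a concrete sketch, using the same /dev/hda2 naming from earlier in the thread (on a machine without that device the umount line simply reports failure):

```shell
# Sketch: getting past "device is busy" (device name is an example).
cd /                      # leave any directory that lives on the partition
umount /dev/hda2 2>/dev/null \
  || echo "still busy (or no such device on this machine)"

# Optional: see which processes are holding it open
# (fuser is in the psmisc package; lsof works too if installed):
# fuser -vm /dev/hda2

pwd                       # sanity check: your shell should now be at /
```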
If you still have problems, then do not even start your Window Manager (meaning the graphical interface). Just login, stay at the dollar prompt, and run the command from there. -- J.W.
I've been running the command from there... Linux won't install all the way, so I can't even get to any graphical interface. So I'm either booting into part of a Linux installation, or booting with "linux rescue" from the install disc. From the prompt, how do I make sure that I'm not in any /hda2 directory, or change directories?
First, did you restart the installation process completely from the beginning, with a reformat that checked for bad blocks? That should definitely resolve the issue you described in your original post. If it does, then just continue with the installation normally. (Honestly, having to manually unmount a partition and then reformat it is NOT part of a typical initial Linux installation.)
How did you partition your drive when you were installing Redhat? As I indicated before, if you are in any directory in the mountpoint that lives on /dev/hda2, you will not be able to unmount hda2 (because it is busy). It would be very helpful if you provided a complete description of your partitioning scheme, either by posting your fstab or doing a df.
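For reference, the two commands being asked for here; both are read-only and safe to run from a rescue shell:

```shell
# A snapshot of the partition layout.
df -h                 # mounted file systems, sizes, and free space
cat /etc/fstab        # the static mount table (partitions and mountpoints)
```

Posting the output of both makes it much easier for others to spot a too-small or oddly mounted partition.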
To be perfectly honest if you are having this much trouble with bad blocks, your disk drive looks suspect. Seriously, installing Redhat is pretty straightforward, and should only take about 45 minutes or so, with the entire partitioning/formatting phase only maybe 3 or 4 minutes max. If you continually run into hardware error messages, then if I were you I'd consider chucking that drive and getting a new one. -- J.W.