Red Hat: This forum is for the discussion of Red Hat Linux.
I'm running Red Hat Enterprise Linux AS release 3 (Taroon Update 6) on my
main server. Since August, the /home1 and /home2 partitions have been
using LVM on RAID5, with snapshot volumes used for tape dumps.
The actual filesystems are ext3, which is journaled.
Twice now, I've gotten the following message on the console:
Message from syslogd@puma at Tue Nov 15 04:02:26 2005 ...
puma kernel: journal commit I/O error
Message from syslogd@puma at Tue Nov 15 04:02:26 2005 ...
puma kernel: Assertion failure in journal_flush_Rsmp_e2f189ce() at journal.c:1356: "!journal->j_checkpoint_transactions"
Message from syslogd@puma at Tue Nov 15 04:02:30 2005 ...
puma kernel: Kernel panic: Fatal exception
FYI, at 4:02am the current snapshot is removed, a new one is created, and
a tape dump is begun. The effects of this error are that (1) I can no
longer "lvremove" the snapshot volumes, because the system claims they
are in use when they really are not; and (2) I can't do tape dumps,
because the "sync" system call hangs.
These can both be fixed by rebooting, but I don't want to be rebooting my
server in order to do tape dumps. I have two questions:
1. Should I be reporting this kernel error to anyone? If so, to whom and how?
2. Any suggestions about how to avoid this or fix it?
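For reference, the 4:02am rotation described above might be sketched as below. The volume group, LV, and snapshot names (vg00, home1, SShome1) and the tape device are placeholder assumptions, not taken from the poster's setup, and each command is echoed rather than executed so the sequence can be reviewed before use on a real system.

```shell
#!/bin/sh
# Hypothetical sketch of a nightly snapshot-and-dump rotation.
# All names (vg00, home1/home2, SShome1/SShome2, /dev/nst0) are
# placeholder assumptions. "run" echoes each command instead of
# executing it.
run() { echo "+ $*"; }

VG=vg00
for LV in home1 home2; do
    SNAP="SS${LV}"
    # Remove yesterday's snapshot; if that fails, skip this volume
    # rather than stacking a new snapshot on a stuck one.
    run lvremove -f "/dev/${VG}/${SNAP}" || continue
    # Re-create the snapshot from the live volume.
    run lvcreate -s -L 2G -n "${SNAP}" "/dev/${VG}/${LV}"
    # Level-0 dump of the frozen snapshot to tape.
    run dump -0 -f /dev/nst0 "/dev/${VG}/${SNAP}"
done
```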
I got the same error messages on Red Hat ES3 with kernel 2.4.21-47;
with kernel 2.4.21-32 this problem did not appear.
I had created a snapshot of the root filesystem
and got the following errors while mounting it:
[root@nccdev01 bin]# mount /dev/Volume00/root_clone /root_clone
mount: block device /dev/Volume00/root_clone is write-protected, mounting read-only
Message from syslogd@nccdev01 at Tue Jan 9 16:07:00 2007 ...
nccdev01 kernel: journal commit I/O error
Message from syslogd@nccdev01 at Tue Jan 9 16:07:00 2007 ...
nccdev01 kernel: Assertion failure in journal_flush_Rsmp_e3ba0c6d() at journal.c:1356: "!journal->j_checkpoint_transactions"
Message from syslogd@nccdev01 at Tue Jan 9 16:07:00 2007 ...
nccdev01 kernel: Kernel BUG at journal:1356
Message from syslogd@nccdev01 at Tue Jan 9 16:07:00 2007 ...
nccdev01 kernel: invalid operand: 0000
Message from syslogd@nccdev01 at Tue Jan 9 16:07:00 2007 ...
nccdev01 kernel: Kernel panic: Fatal exception
In my case, the workaround that avoided the kernel panic was to mount the snapshot with "-t ext2":
[root@nccdev01 bin]# mount -t ext2 /dev/Volume00/root_clone /root_clone
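A note on why this helps: mounting with "-t ext2" means the ext3 journal on the write-protected snapshot is never touched, and journal replay is where the journal_flush assertion fires. On kernels whose ext3 supports the "noload" mount option, mounting ext3 read-only without loading the journal may be another route; treat that as an assumption to verify against your kernel. A dry-run sketch (commands are echoed, not executed):

```shell
# Dry-run sketch: "run" echoes each command instead of executing it.
run() { echo "+ $*"; }

# The workaround from the post: mount the snapshot as ext2 so the
# ext3 journal is never read or replayed.
run mount -t ext2 -o ro /dev/Volume00/root_clone /root_clone

# Possible alternative (assumption: this kernel's ext3 supports
# "noload"): stay on ext3 but skip loading the journal.
run mount -t ext3 -o ro,noload /dev/Volume00/root_clone /root_clone
```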
Kernel 2.4.21-47.EL, lvm-1.0.8-14 from the current CentOS 3.8, latest updates. I also tried removing the original volume (not possible while a snapshot of it is open), "vgchange -an" (disallowed on an active snapshot), and so on. Finally I let the snapshot fill up so it would become inactive, but that didn't work either. Here's what I'm left with:
[root@garlic /]# lvdisplay /dev/vg00/SSlvol4
--- Logical volume ---
LV Name /dev/vg00/SSlvol4
VG Name vg00
LV Write Access read only
LV snapshot status INACTIVE destination for /dev/vg00/lvol4
LV Status NOT available
LV # 10
# open 1
LV Size 2 GB
Current LE 64
Allocated LE 64
snapshot chunk size 64 KB
Allocated to snapshot 100.00% [511 MB/511 MB]
Allocated to COW-table 1 MB
Allocation next free
Read ahead sectors 1024
Block device 58:9
[root@garlic /]# lvdisplay /dev/vg00/lvol4
--- Logical volume ---
LV Name /dev/vg00/lvol4
VG Name vg00
LV Write Access read/write
LV snapshot status source of
/dev/vg00/SSlvol4 [INACTIVE]
LV Status available
LV # 4
# open 0
LV Size 2 GB
Current LE 64
Allocated LE 64
Allocation next free
Read ahead sectors 1024
Block device 58:3
Advice on how to whack these two LVs is greatly appreciated. Thank you.
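Not an authoritative answer, but here is a sequence sometimes tried for a wedged LVM1 snapshot, using the names from the listing above. It is shown dry-run (commands are echoed, not executed), and there is no guarantee it clears the stale open count on LVM1 under a 2.4 kernel without a reboot:

```shell
# Dry-run sketch: "run" echoes each command instead of executing it.
run() { echo "+ $*"; }

# Make sure nothing still has the snapshot mounted.
run umount /dev/vg00/SSlvol4

# Deactivate and force-remove the snapshot first; the origin cannot
# be removed while a snapshot of it exists.
run lvchange -a n /dev/vg00/SSlvol4
run lvremove -f /dev/vg00/SSlvol4

# With the snapshot gone, lvol4 is an ordinary LV again and can be
# removed (or simply kept).
run lvremove -f /dev/vg00/lvol4

# If the "# open 1" count persists, the kernel is still holding a
# reference; in practice that has usually meant a reboot, after
# which the volume group can be rescanned and reactivated cleanly.
run vgscan
run vgchange -a y vg00
```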