Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
The request to reboot refers mainly to people who have just installed another kernel. I assume you have not installed another kernel.
Can you first check whether the vfat module exists or is built in, by looking at your current kernel config found in /boot/config-<kernel-version>? Mine shows:
# DOS/FAT/NT Filesystems
CONFIG_VFAT_FS=y
If yours is =m, then try running modprobe with root powers first.
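To make both checks concrete, here is a sketch, assuming a Debian-style /boot/config-<version> file (adjust the path if your distro names it differently):

```shell
# Show how vfat was built: =y means built into the kernel, =m means a
# loadable module.
grep 'CONFIG_VFAT_FS' "/boot/config-$(uname -r)"

# If it reports CONFIG_VFAT_FS=m, load the module (run as root):
modprobe vfat

# Verify the module is now loaded:
lsmod | grep vfat
```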
I just tested it, and a newly created (under Linux) vfat partition on my USB stick is working fine. This is the only kernel upgrade I've done on this install:
linux-image-3.16.0-4-amd64:amd64 (3.16.7-ckt11-1+deb8u3, 3.16.7-ckt20-1+deb8u3)
I did that upgrade on the 25th of January. Have you done an upgrade since then without rebooting?
Basically it claims that the bootloader has not truly updated and booted the new kernel, ignoring the fact that you appear to have only one kernel, although you did say you had done a kernel upgrade.
One way to check would be to just run the kernel version command:
Code:
uname -r
Secondly, the poster ran manual bootloader commands (assuming GRUB) with root powers:
Code:
update-grub
grub-install /dev/sda
changing sda to whichever drive's MBR you are using. Although I am reluctant to suggest re-embedding GRUB in the MBR, by doing it we at least eliminate any bootloader issues.
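If you want to see which drive actually has GRUB embedded before re-installing anything, one rough check (my suggestion, not something anyone in this thread has run) is to look for the GRUB marker in a disk's first sector:

```shell
# Read the MBR (first 512 bytes) of a disk and look for the GRUB marker.
# Run as root; replace /dev/sda with each candidate drive in turn.
dd if=/dev/sda bs=512 count=1 2>/dev/null | strings | grep -i grub
```

If the grep prints GRUB, that disk's MBR has GRUB embedded; no output means it probably does not.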
Let me know your thoughts if this does not appeal to you
As far as the old bug goes, the dmesg log seems to be different.
uname gave the following result, so I'm still guessing it's just 1 kernel, as expected.
Code:
# uname -r
3.16.0-4-amd64
I also updated grub as you suggested; doing that gave me some errors regarding a null disk, although I don't think those are relevant:
Code:
# grub-install /dev/sdi
Installing for i386-pc platform.
grub-install: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
grub-install: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
grub-install: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
grub-install: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
I couldn't get the mount command to work. Still the same error and the same dmesg output.
EDIT:
The trend indeed seems to be kernel issues. Regarding the bootloader, I'm fairly sure GRUB is being used; after all, that's what it says at the boot selection screen on bootup. I also checked for error or failure messages like the person in that post had, but I couldn't find any that I can imagine relating to this problem.
Not sure what the danger of re-embedding grub is. If it could help, I'm willing to try it, though.
sdi means you have attempted to use the MBR of the ninth drive, unless I am mistaken?
How many drives do you have? Not partitions, but actual drives available at boot-up, please.
Code:
blkid
run as root might help. Most people either dual-boot with Windows on the first drive and Linux on the second, or have Linux on the first drive, so I was expecting sda or sdb.
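As a quicker way to count actual drives rather than partitions, lsblk with -d only shows whole disks (a suggestion of mine; blkid above lists partitions too):

```shell
# -d lists whole disks only, without their partition children.
lsblk -d -o NAME,SIZE,TYPE,MODEL

# Fallback: the kernel's own list of block devices, if lsblk is unavailable.
ls /sys/block
```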
I do indeed have 9 drives:
- 5 actual hard drives that form a RAID6 data array
- 4 USB drives that run in a RAID1 config to run the OS (planning to change this to actual drives somewhere in the future), although at the moment it seems like one of those 4 is out; I will try to fix it when I'm at home
- the 1 USB drive I'm trying to mount, sdh at the moment
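Since one of the RAID1 members seems to be out, /proc/mdstat will show the degraded state directly (a sketch; the array name and member count here are examples, not taken from this system):

```shell
# /proc/mdstat lists every md array; an underscore in the [UUU_] status
# marks a missing or failed member, e.g. a 4-way raid1 running on 3 drives.
cat /proc/mdstat

# Show only the member-status brackets:
grep -oE '\[[U_]+\]' /proc/mdstat
```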
For some reason I decided it was a good idea to try to install grub on the USB drive I'm trying to mount... so no, I'm not sure about this. I do recall having some challenges installing grub in such a way that my 4-drive USB RAID1 could boot even with only 1 USB drive present, so grub should be installed on all 4, I think. This setup, however, does sound like it could be where the problem is.
EDIT:
I am starting to suspect that somehow the data RAID might be messing things up, especially as that RAID was migrated from an earlier system and might indeed be causing some sort of bootloader/kernel conflict. I will try disconnecting the RAID later today to see if that solves it.