LinuxQuestions.org (/questions/)
-   Linux - Server (https://www.linuxquestions.org/questions/linux-server-73/)
-   -   Routine boot time fsck not happening - Ubuntu Server 12.04 (https://www.linuxquestions.org/questions/linux-server-73/routing-boot-time-fsck-not-happening-ubuntu-server-12-04-a-4175493189/)

taylorkh 01-30-2014 04:56 PM

Routine boot time fsck not happening - Ubuntu Server 12.04
 
I am running Ubuntu 12.04 Server (32-bit) on an old Dell PowerEdge 400SC with 5 large hard drives. It gets booted up and shut down frequently, as I use it for backup storage and only bring it up when needed. The drives are supposed to be checked by fsck every 30 boots (I believe that is the default when Ubuntu creates an ext4 file system). With Ubuntu 10.04 Server it did in fact do that - and sometimes I had to wait a LONG time for all the drives to be checked, as they all seemed to be on the same schedule. With 12.04 this is not happening.
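
As an aside, the every-30-mounts schedule is stored in each file system's superblock and can be changed with tune2fs. Something along these lines should stagger the counts so the drives do not all come due on the same boot (the device names and counts below are only examples for my box):

Code:

sudo tune2fs -c 25 /dev/sdb1    # check after 25 mounts
sudo tune2fs -c 30 /dev/sdc1    # check after 30 mounts
sudo tune2fs -c 35 /dev/sdd1    # check after 35 mounts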

If I ssh into the server I see the following:
Quote:

[ken@taylor12 ~]$ ssh taylor10
Welcome to Ubuntu 12.04.4 LTS (GNU/Linux 3.8.0-35-generic i686)

* Documentation: https://help.ubuntu.com/

System information as of Thu Jan 30 17:16:08 EST 2014

System load: 0.13 Processes: 157
Usage of /: 16.9% of 15.62GB Users logged in: 0
Memory usage: 11% IP address for eth0: 192.168.0.110
Swap usage: 0%

Graph this data and manage this system at:
https://landscape.canonical.com/

*** /dev/sdd1 will be checked for errors at next reboot ***

Last login: Thu Jan 30 16:58:14 2014 from 192.168.0.112

I rebooted and then reconnected, and received the same message. I must manually unmount and fsck the file system to clear it.
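
For reference, the manual procedure is roughly this (/dev/sdd1 is the device named in the message above; the file system has to be unmounted before it can be checked):

Code:

sudo umount /dev/sdd1        # cannot be mounted while it is checked
sudo e2fsck -f /dev/sdd1     # -f forces a full check even if marked clean
sudo mount /dev/sdd1         # remount using the /etc/fstab entry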

I found some discussion about this dated 2011, but for the most part the thread explained how to do the job manually (which I already know how to do). I cannot find anything in Launchpad which sounds like this - although I am not a very good Launchpad searcher.

A look at the offending file system with tune2fs -l shows:
Quote:

Mount count: 53
Maximum mount count: 30

so a check SHOULD be forced. Stranger still... I have checked the file systems on the other 4 large drives and they ALL show mount count > maximum mount count, yet they are not listed as "will be checked for errors at next reboot", nor are they checked on reboot.
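
Here is roughly how I checked the counts on all five drives (the device names are only examples for my box):

Code:

for dev in /dev/sd[a-e]1; do
    echo "== $dev =="
    sudo tune2fs -l "$dev" | grep -i 'mount count'
done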

Any ideas?

TIA,

Ken

rknichols 01-31-2014 12:48 PM

What is in field 6 of the /etc/fstab lines for those mounts? A "0" in that field means they will not be examined at boot time.
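
For example, with entries like these (the UUIDs and mount points are just placeholders), the first file system is never checked at boot while the second is checked whenever fsck decides it is due. By convention the root file system uses 1 and everything else uses 2:

Code:

# <file system>   <mount point>  <type>  <options>  <dump>  <pass>
UUID=xxxxxxxx     /backup1       ext4    defaults   0       0
UUID=yyyyyyyy     /backup2       ext4    defaults   0       2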

taylorkh 01-31-2014 01:38 PM

Thanks rknichols! I believe you nailed it. I changed the 6th field to 1 and now the drives are checked at boot time when appropriate.

Ken

