Strange behavior with new SATA III drive
I don't know whether this is a Slackware problem per se.
I updated my HTPC to Slackware 14.0 and moved the system to a new Western Digital WD10EZEX 1 TB SATA 3 drive; I have a second drive of the same model for cloning. Although the ASUS M3N78-EM motherboard only supports SATA 2, I nonetheless noticed speed improvements with the newer SATA 3 drive.
Before installing the new 1 TB drives I performed a full wipe with dd if=/dev/zero and ran a smartctl long self-test on each; neither reported any problems or errors. All seems well except for a single, annoying anomaly.
After a shutdown/reboot, Xbmc (10.1) takes about 20 seconds to load. On any subsequent launch Xbmc starts immediately, as long as I don't reboot or shut down. Even when I manually flush the kernel cache, Xbmc still starts normally, within 5 seconds. The oddball behavior appears only after a reboot/shutdown.
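For reference, by "flush the kernel cache" I mean the standard drop_caches knob; a minimal sketch (needs root to actually write the file, hence the guard):

```shell
# Flush the pagecache, dentries, and inodes (run as root).
sync
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches
else
    echo "need root to write /proc/sys/vm/drop_caches" >&2
fi
```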
By shutdown I mean the system drops to standby power so it can wake itself for scheduled recordings.
The Xbmc log shows no related errors or warnings but does show a repeatable 16 second pause early in the log. The 16 second gap never appears thereafter or when using the older hard drive.
Uncertain whether the problem was the new hard drive, I temporarily restored my previous Western Digital WD6400AAKS 640 GB SATA 2 drive (same Slackware 14.0). Amazingly, Xbmc then always started in about 5 seconds, even after a reboot/shutdown. Swapping back to the new 1 TB drive, Xbmc again took about 20 seconds to load after a reboot/shutdown. I swapped to the older drive once more and Xbmc started normally.
I see the same results with the second "cloned" drive, somewhat affirming the anomaly is related to the drives.
Although SATA 3 devices auto-negotiate down to SATA 2, and my dmesg log indicates the link comes up at 3.0 Gbps, I nonetheless tried a jumper across pins 5 and 6, which limits the drive to 3.0 Gbps. No change in dmesg or Xbmc.
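For anyone wanting to double-check the negotiated speed on their own box, something like this shows it (the sysfs path is an assumption; host/port numbers and availability vary by kernel):

```shell
# Grep the kernel log for the negotiated SATA link speed.
dmesg | grep -i 'SATA link up' || true
# Newer kernels also expose it in sysfs (path may not exist everywhere).
for f in /sys/class/ata_link/link*/sata_spd; do
    { [ -r "$f" ] && printf '%s: %s\n' "$f" "$(cat "$f")"; } || true
done
```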
On a whim I recompiled Xbmc on the HTPC rather than my normal office system. No change.
I'm at a loss how to resolve the problem short of buying new drives or using the older drive (which has 360 GB less capacity). I don't want to autostart Xbmc because I don't always use Xbmc when starting the system.
Any ideas? :scratch:
P.S. Yes, I'm aware Xbmc 12.0 is available. Not interested at the moment. :)
How did you partition the drive?
How did you move the system to the new drive?
My first guess is that the drive is not aligned correctly and you're not using 4 KB sector boundaries.
I used gparted to create the partitions, which defaults to using MiB alignment rather than cylinder alignment. I used the defaults.
All partitions are ext4 except the data (videos) partition, which is xfs.
/dev/sda1 /boot (ext4)
/dev/sda3 /home (ext4)
/dev/sda5 /usr/local (ext4)
/dev/sda6 /tmp (ext4)
/dev/sda7 /var (ext4)
/dev/sda8 / (ext4)
/dev/sda9 data (xfs)
After creating the partitions on the new drive I copied the files from the old drive to the new drive. I did not use dd because I wanted to increase some of the partition sizes on the newer, larger drive and perform some nominal file/directory tweaking. I performed the updates using partedmagic. The entire operation was fast except copying files from the data partition, because many of the files are large video files.
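In case it matters, an archive-style copy (cp -a, or rsync -a) is what preserves ownership, permissions, and timestamps across a move like this. A throwaway demo of the idea (temp directories stand in for the mounted old/new partitions):

```shell
# Temp dirs stand in for the mounted old and new partitions.
src=$(mktemp -d)
dst=$(mktemp -d)
echo "sample file" > "$src/file.txt"
chmod 640 "$src/file.txt"

# -a = archive: recurse and preserve mode, ownership, and timestamps.
cp -a "$src/." "$dst/"

cat "$dst/file.txt"          # -> sample file
stat -c '%a' "$dst/file.txt" # -> 640
```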
I haven't noticed any other problem with the new drive. To have only one app behave oddly is a head scratcher. The xbmc files are on an ext4 partition (sda8).
/dev/sda8 / ext4 defaults,noatime 0 1
All partitions are using a 4096 block size, except sda1 (/boot), which is a small 128M partition and uses a 1024 block size:
tune2fs -l /dev/sdaX | grep "Block size"
Block size: 4096
fdisk does not complain about sector boundaries:
fdisk -lu /dev/sda
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x0006861b
Device Boot Start End Blocks Id System
/dev/sda1 2048 264191 131072 83 Linux
/dev/sda2 264192 2361343 1048576 82 Linux swap
/dev/sda3 2361344 44304383 20971520 83 Linux
/dev/sda4 44304384 1953523711 954609664 5 Extended
/dev/sda5 44306432 46403583 1048576 83 Linux
/dev/sda6 46405632 48502783 1048576 83 Linux
/dev/sda7 48504832 50601983 1048576 83 Linux
/dev/sda8 50604032 79964159 14680064 83 Linux
/dev/sda9 79966208 1953523711 936778752 83 Linux
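A quick sanity check on those start sectors: with 512-byte logical sectors, alignment to the 4096-byte physical sectors just means every start sector is divisible by 8. All nine partitions pass:

```shell
# Start sectors from the fdisk output above; a partition is aligned to
# the 4 KiB physical sectors iff its start is divisible by 8 (8 x 512 = 4096).
for start in 2048 264192 2361344 44304384 44306432 46405632 48504832 50604032 79966208; do
    if [ $((start % 8)) -eq 0 ]; then
        echo "start $start: aligned"
    else
        echo "start $start: MISALIGNED"
    fi
done
```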
The block size on the older drive is also 4096.
Please explain why sector boundaries would cause such an anomaly. :)
Okay, but I'm only seeing this behavior with one single app, not the entire drive. :scratch:
hdparm shows the new drive performing faster:
hdparm -tT /dev/sda (new drive)
Timing cached reads: 2122 MB in 2.00 seconds = 1061.63 MB/sec
Timing buffered disk reads: 540 MB in 3.01 seconds = 179.69 MB/sec
hdparm -tT /dev/sdb (old drive)
Timing cached reads: 2128 MB in 2.00 seconds = 1064.01 MB/sec
Timing buffered disk reads: 334 MB in 3.00 seconds = 111.18 MB/sec
The strange part is that the 16-second delay disappears on subsequent starts, even after flushing the Linux cache.