LinuxQuestions.org


crions 11-14-2018 03:46 PM

Slackware Kernel Panic with extra PCI Sata controller sata_sil24 PCI40010
 
I have recently installed the following board in a Slackware 14.1 system:

IOCrest SATA II 4 x PCI RAID Host Controller Card SY-PCI40010
https://www.amazon.com/IOCrest-SATA-.../dp/B002R0DZZ8

I have previously used this board with Ubuntu 16 (no problems; same board and processor, 32-bit).
The main use is as an option to add extra hard drives (no RAID is used).


Those are the hard drives:

Code:

Disk /dev/sda: 320.1 GB, 320072933376 bytes
/dev/sda1      617334858  625142447    3903795  82  Linux swap
/dev/sda2  *          63  617334857  308667397+  83  Linux
Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
/dev/sdb1              1  4294967295  2147483647+  ee  GPT
Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
/dev/sdc1              1  4294967295  2147483647+  ee  GPT
Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes
/dev/sdd1              1  4294967295  2147483647+  ee  GPT
Disk /dev/sde: 2000.4 GB, 2000398934016 bytes
/dev/sde1              63  3907029167  1953514552+  83  Linux

This is the board:

Code:

03:00.0 RAID bus controller: Silicon Image, Inc. SiI 3124 PCI-X Serial ATA Controller (rev 02)
        Subsystem: Silicon Image, Inc. Device 7124
        Flags: bus master, stepping, 66MHz, medium devsel, latency 64, IRQ 19
        Memory at febffc00 (64-bit, non-prefetchable) [size=128]
        Memory at febf0000 (64-bit, non-prefetchable) [size=32K]
        I/O ports at ec00 [size=16]
        Expansion ROM at feb00000 [disabled] [size=512K]
        Capabilities: [64] Power Management version 2
        Capabilities: [40] PCI-X non-bridge device
        Capabilities: [54] MSI: Enable- Count=1/1 Maskable- 64bit+
        Kernel driver in use: sata_sil24

What is driving me crazy is this:
without any hard drive SATA cable connected to the controller board (4 ports),
the computer boots up, and after I connect the hard drive to port 0 the sde drive
shows up nicely... dmesg result:

Code:

[  26.893348] ADDRCONF(NETDEV_UP): eth0: link is not ready
[  27.627039] r8169 0000:01:00.0: eth0: link up
[  27.627192] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[  37.762021] eth0: no IPv6 routers present
[  72.282736] ata1: exception Emask 0x10 SAct 0x0 SErr 0x0 action 0xe frozen
[  72.282742] ata1: irq_stat 0x00800080, device exchanged
[  72.282752] ata1: hard resetting link
[  82.283017] ata1: softreset failed (timeout)
[  82.283023] ata1: hard resetting link
[  84.426024] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 0)
[  84.426532] ata1.00: ATA-8: ST2000DM001-9YN164, CC4B, max UDMA/133
[  84.426534] ata1.00: 3907029168 sectors, multi 16: LBA48 NCQ (depth 31/32)
[  84.427028] ata1.00: configured for UDMA/100
[  84.427034] ata1: EH complete
[  84.427154] scsi 2:0:0:0: Direct-Access    ATA      ST2000DM001-9YN1 CC4B PQ: 0 ANSI: 5
[  84.427801] sd 2:0:0:0: [sde] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[  84.427804] sd 2:0:0:0: [sde] 4096-byte physical blocks
[  84.427845] sd 2:0:0:0: [sde] Write Protect is off
[  84.427848] sd 2:0:0:0: [sde] Mode Sense: 00 3a 00 00
[  84.427865] sd 2:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[  84.729871]  sde: sde1
[  84.730703] sd 2:0:0:0: [sde] Attached SCSI disk

But if I reboot the computer with the SATA drive attached (sde, sdf or sdg), Slackware does not boot up!!!
I need this machine to restart with no headaches... And I'm wondering why Ubuntu does not crash...

The kernel panic is:
VFS: Unable to mount root fs on unknown-block(8,2)
Attachment 28966

ehartman 11-14-2018 04:19 PM

Quote:

Originally Posted by crions (Post 5926127)
I have recently installed the following board in a Slackware 14.1 system:

IOCrest SATA II 4 x PCI RAID Host Controller Card SY-PCI40010

The kernel panic is:
VFS: Unable to mount root fs on unknown-block(8,2)
Attachment 28966

The kernel cannot find its root fs at (8,2), i.e. /dev/sda2. With a drive connected and powered on the new board at boot, the board seems to take precedence over your onboard controller, so its disk probably becomes sda to the kernel and your original root partition is no longer where /dev/sda2 points.

I actually have the same problem on an HP workstation with an external eSATA port (whose controller is on the motherboard): the disk connected to it must NOT be switched on when rebooting the system. Luckily that is much easier to ensure with an external disk.
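For reference, (8,2) is the major/minor number pair of the block device the kernel tried to mount; you can see which node it corresponds to with ls, e.g. (illustrative output):

Code:

ls -l /dev/sda2
# brw-rw---- 1 root disk 8, 2 ... /dev/sda2    <- major 8, minor 2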

mrmazda 11-14-2018 07:01 PM

What is the root= parameter on your Slack bootloader's kernel cmdline? If it's a device name, switching it to LABEL or UUID should avoid the device enumeration usurpation that almost assuredly is assigning sda instead of sde to the PCI card when it has a connected and powered HD during POST. There may be a BIOS change you can make to keep the onboard SATA ports prioritized over the PCI card. Rebuilding the initrd with the Silicon Image module explicitly excluded might also help, as might using the PCI card's own setup utility.
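If it helps, blkid will show the UUID (and LABEL, if one is set) to use, e.g. (output shape is illustrative):

Code:

/sbin/blkid /dev/sda2
# /dev/sda2: UUID="...." TYPE="ext4"   (plus LABEL="..." if a label is set)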

crions 11-15-2018 06:50 AM

ehartman: Thank you for the clarification on enumeration (8,2). Unfortunately this machine will be a server, so I can't be reconnecting the drives after each power failure.

mrmazda: Thank you for the insight.
Quote:

What is the root= parameter on your Slack bootloader's kernel cmdline? If it's a device name, switching it to LABEL or UUID should avoid the device enumeration usurpation that almost assuredly is assigning sda instead of sde to the PCI card when it has a connected and powered HD during POST. There may be a BIOS change you can make to keep the onboard SATA ports prioritized over the PCI card. Rebuilding the initrd with the Silicon Image module explicitly excluded might also help, as might using the PCI card's own setup utility.
The motherboard BIOS (DEL) and the card's own BIOS (CTRL-S) don't see each other, and they have the same config as when running Ubuntu. No RAID in use.
Removing sata_sil24 from the initrd seems like a good idea; can you point me to a site where I can read how to do that?


See my LILO config below; no fancy parameters on kernel loading.

Code:

# LILO configuration file
# generated by 'liloconfig'
#
# Start LILO global section
# Append any additional kernel parameters:
append=" vt.default_utf8=0"
boot = /dev/sda

# Boot BMP Image.
# Bitmap in BMP format: 640x480x8
  bitmap = /boot/cedalion.bmp
# Menu colors (foreground, background, shadow, highlighted
# foreground, highlighted background, highlighted shadow):
  bmp-colors = 255,0,255,0,255,0
# Location of the option table: location x, location y, number of
# columns, lines per column (max 15), "spill" (this is how many
# entries must be in the first column before the next begins to
# be used.  We don't specify it here, as there's just one column.
  bmp-table = 60,6,1,16
# Timer location x, timer location y, foreground color,
# background color, shadow color.
  bmp-timer = 65,27,0,255

# Standard menu.
# Or, you can comment out the bitmap menu above and
# use a boot message with the standard menu:
#message = /boot/boot_message.txt

# Wait until the timeout to boot (if commented out, boot the
# first entry immediately):
prompt
# Timeout before the first entry boots.
# This is given in tenths of a second, so 600 for every minute:
timeout = 60
# Override dangerous defaults that rewrite the partition table:
change-rules
  reset
# VESA framebuffer console @ 640x480x64k
vga = 785
# Normal VGA console
#vga = normal
# Ask for video mode at boot (time out to normal in 30s)
#vga = ask
# VESA framebuffer console @ 1024x768x64k
#vga=791
# VESA framebuffer console @ 1024x768x32k
#vga=790
# VESA framebuffer console @ 1024x768x256
#vga=773
# VESA framebuffer console @ 800x600x64k
#vga=788
# VESA framebuffer console @ 800x600x32k
#vga=787
# VESA framebuffer console @ 800x600x256
#vga=771
# VESA framebuffer console @ 640x480x64k
#vga=785
# VESA framebuffer console @ 640x480x32k
#vga=784
# VESA framebuffer console @ 640x480x256
#vga=769
# End LILO global section
# Linux bootable partition config begins
image = /boot/vmlinuz
  root = /dev/sda2
  label = Linux
  read-only
# Linux bootable partition config ends


onebuck 11-15-2018 07:02 AM

Member Response
 
Hi,

You can look at: http://docs.slackware.com/start

Hope this helps.
Have fun & enjoy!
:hattip:

crions 11-15-2018 08:29 AM

Solution with persistent naming
 
After these few tips I went in quest of the persistent naming option. This is something Ubuntu does quite a lot with hardware (sometimes to my annoyance).

So I went to the Slackware Documentation Project:
https://docs.slackware.com/howtos:sl...sistent_naming

And fixed the boot by passing the UUID of sda2 to the boot process (initrd), without recompiling the kernel.
Needed to reinstall LILO with the new initrd and voilà...

Now Slackware is booting properly and all drives are mapped with no problems.

This is my NEW fstab using UUID:

Code:

#/dev/sda1
UUID=59de16b2-b48f-4423-bbe2-31b17f76a281      swap                    swap    defaults        0  0
#/dev/sda2
UUID=11d6ab52-f72d-453e-bd65-6ccca3dd3308      /                      ext4    defaults        1  1
#/dev/sdb1
UUID=6ee59670-bf12-4cec-8741-ff0c911e4736      /crions/backup/hd_sdb  ext4    defaults        0  0
#/dev/sdc1
UUID=ca9919f8-0cd8-451e-a624-5bffa36d7216      /crions/backup/hd_sdc  ext4    defaults        0  0
#/dev/sdd1
UUID=39243492-f1cf-4976-96a2-b73721a3dd25      /crions/backup/hd_sdd  ext4    defaults        0  0
#/dev/sde1
UUID=b6f21a88-9086-4bf3-bf39-980c89ce1fc6      /crions/backup/hd_sde  ext4    defaults        0  0
#/dev/cdrom      /mnt/cdrom      auto        noauto,owner,ro,comment=x-gvfs-show 0  0
/dev/fd0        /mnt/floppy      auto        noauto,owner    0  0
devpts          /dev/pts        devpts      gid=5,mode=620  0  0
proc            /proc            proc        defaults        0  0
tmpfs            /dev/shm        tmpfs      defaults        0  0

This was the initrd build command:
Code:

mkinitrd -c -k 3.2.29 -f ext4 -r "UUID=11d6ab52-f72d-453e-bd65-6ccca3dd3308" -m usbhid:ehci-hcd:uhci-hcd:mbcache:jbd2:ext4 -u -o /boot/initrd.gz
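The LILO part was just pointing the image stanza at the new initrd and re-running lilo; roughly like this (a sketch based on the config posted earlier, not the exact updated stanza):

Code:

# in /etc/lilo.conf, the Linux image stanza gains an initrd line:
image = /boot/vmlinuz
  initrd = /boot/initrd.gz
  root = /dev/sda2
  label = Linux
  read-only

# then write the updated boot loader to disk:
/sbin/lilo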
Thank you all for the tips :-)

ehartman 11-15-2018 08:30 AM

Quote:

Originally Posted by crions (Post 5926313)
ehartman: Thank you for the clarification on enumeration (8,2). Unfortunately this machine will be a server, so I can't be reconnecting the drives after each power failure.

I didn't expect you to <grin>.

Others have already mentioned using a kernel and initrd without the module for the new board. If that module gets loaded AFTER the disk with the root fs has been found and mounted, there's no problem anymore.
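On Slackware that would look roughly like this (only a sketch, and it assumes sata_sil24 is built as a module rather than into your kernel): keep the driver out of the initrd and load it only after the root fs is mounted, e.g. from /etc/rc.d/rc.modules.local:

Code:

# initrd that only needs the onboard controller / root fs (module list as above)
mkinitrd -c -k 3.2.29 -f ext4 -r /dev/sda2 \
  -m usbhid:ehci-hcd:uhci-hcd:mbcache:jbd2:ext4 -u -o /boot/initrd.gz

# load the card's driver only after the system is up
/sbin/modprobe sata_sil24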

An even more radical solution would be to connect your original root disk to the new board (and adjust the others accordingly), so that the kernel would find your "old" disks on the new board and the "to be added" ones on the motherboard controllers.

But you seem to have marked this SOLVED, so you managed to get your server working again;
congratulations!

