Linux - Newbie
11-04-2016, 07:32 AM | #1
LQ Newbie | Registered: May 2016 | Posts: 3
Linux behavior if hardware stops responding
This question may seem ambiguous, but what I want to understand is how the Linux kernel (any distribution) behaves when a piece of hardware stops responding. I have Debian booted in a VM. If I put one of my emulated devices to sleep and then issue a read/write call on that device, what are the general principles Linux uses to handle such a scenario?
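To make the scenario concrete: from user space, a plain read() on a device whose driver has stopped responding will typically just block. Below is a minimal sketch of one way to guard such a call with a timeout, using poll(); the device node /dev/mydev is hypothetical and used only for illustration:

Code:
/* Sketch: guard a read on a possibly-unresponsive device with poll().
 * /dev/mydev is a hypothetical device node used for illustration. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <poll.h>

int main(void)
{
    int fd = open("/dev/mydev", O_RDONLY | O_NONBLOCK);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct pollfd pfd = { .fd = fd, .events = POLLIN };

    /* Wait up to 5 seconds for the device to become readable. */
    int ready = poll(&pfd, 1, 5000);
    if (ready < 0) {
        perror("poll");
    } else if (ready == 0) {
        fprintf(stderr, "device did not respond within 5s\n");
    } else {
        char buf[256];
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n < 0)
            perror("read");
        else
            printf("read %zd bytes\n", n);
    }

    close(fd);
    return 0;
}

Note that this only protects the caller at the poll() stage; a driver stuck in an uninterruptible sleep inside read() itself can still leave the process blocked in D state.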
11-04-2016, 07:57 AM | #2
schneidz
LQ Guru | Registered: May 2005 | Location: boston, usa | Distribution: fedora-35 | Posts: 5,326
My understanding is that the Linux kernel is modular, so only that particular module would stop responding if you, e.g., unplugged the sound card. A monolithic kernel would kernel-panic/blue-screen.
11-04-2016, 08:04 AM | #3
BW-userx
LQ Guru | Registered: Sep 2013 | Location: Somewhere in my head. | Distribution: Slackware (15 current), Slack15, Ubuntu Studio, MX Linux, FreeBSD 13.1, Win10 | Posts: 10,342
Quote:
Originally Posted by schneidz
My understanding is that the Linux kernel is modular, so only that particular module would stop responding if you, e.g., unplugged the sound card. A monolithic kernel would kernel-panic/blue-screen.
I think you got your kernels mixed up; Linux is a monolithic kernel:
Quote:
From Stack Overflow, "Why is Linux called a monolithic kernel?" (accepted answer, Nov 27, 2009):
stackoverflow.com/questions/1806585/why-is-linux-called-a-monolithic-kernel
A monolithic kernel is a kernel where all services (file system, VFS, device drivers, etc.) as well as core functionality (scheduling, memory allocation, etc.) are a tight-knit group sharing the same space. This directly opposes a microkernel.
A microkernel prefers an approach where core functionality is isolated from system services and device drivers (which are basically just system services). For instance, VFS (virtual file system) and block device file systems (e.g. minixfs) are separate processes that run outside of the kernel's space, using IPC to communicate with the kernel, other services and user processes. In short, if it's a module in Linux, it's a service in a microkernel, indicating an isolated process.
Do not confuse the term modular kernel to be anything but monolithic. Some monolithic kernels can be compiled to be modular (e.g. Linux); what matters is that the module is inserted into and runs from the same space that handles core functionality (kernel space).
The advantage to a microkernel is that any failed service can be easily restarted, for instance, there is no kernel halt if the root file system throws an abort. This can also be seen as a disadvantage, though, because it can hide pretty critical bugs (or make them seem not-so-critical, because the problem seems to continuously fix itself). It's seen as a big advantage in scenarios where you simply can't conveniently fix something once it has been deployed.
The disadvantage to a microkernel is that asynchronous IPC messaging can become very difficult to debug, especially if fibrils are implemented. Additionally, just tracking down a FS/write issue means examining the user space process, the block device service, VFS service, file system service and (possibly) the PCI service. If you get a blank on that, it's time to look at the IPC service. This is often easier in a monolithic kernel. GNU Hurd suffers from these debugging problems (reference). I'm not even going to go into checkpointing when dealing with complex message queues. Microkernels are not for the faint of heart.
The shortest path to a working, stable kernel is the monolithic approach. Either approach can offer a POSIX interface, where the design of the kernel becomes of little interest to someone simply wanting to write code to run on any given design.
I use Linux (monolithic) in production. However, most of my learning, hacking or tinkering with kernel development goes into a microkernel, specifically HelenOS.
Edit
If you got this far through my very long-winded answer, you will probably have some fun reading the 'Great Torvalds-Tanenbaum debate on kernel design'. It's even funnier to read in 2013, more than 20 years after it transpired. The funniest part was Linus' signature in one of the last messages:
Linus "my first, and hopefully last flamefest" Torvalds
Obviously, that did not come true any more than Tanenbaum's prediction that x86 would soon be obsolete.
NB:
When I say "Minix", I do not imply Minix 3. Additionally, when I mention The HURD, I am referencing (mostly) the Mach microkernel. It is not my intent to disparage the recent work of others.
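To illustrate the quoted point that a Linux module is inserted into and runs in kernel space (rather than as an isolated user-space service, as a microkernel would have it), here is a minimal hello-world loadable module; this is a generic sketch assuming the standard kbuild setup, not code from this thread:

Code:
/* Minimal loadable kernel module: once loaded with insmod, this code
 * runs in kernel space, sharing the address space of the monolithic
 * kernel rather than running as a separate process. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

static int __init hello_init(void)
{
    pr_info("hello: loaded into kernel space\n");
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("hello: unloading\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Illustrative hello-world module");

Because the module shares the kernel's address space, a bug in it (say, a NULL dereference) can oops or panic the whole kernel; in a microkernel, the equivalent service could simply be restarted.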
Last edited by BW-userx; 11-04-2016 at 08:08 AM.
1 member found this post helpful.
11-04-2016, 08:04 AM
|
#4
|
LQ Guru
Registered: Sep 2013
Location: Somewhere in my head.
Distribution: Slackware (15 current), Slack15, Ubuntu studio, MX Linux, FreeBSD 13.1, WIn10
Posts: 10,342
|
Quote:
Originally Posted by schneidz
my understanding is that the linux kernel is modular so that particular module would stop responding if you, e.g., unplug the sound card.
a monolithic kernel would kernel-panic/blue screen.
|
I think you got your kernels mixed up Linux is a
Quote:
architecture - Why is Linux called a monolithic kernel? - Stack Overflow
stackoverflow.com/questions/1806585/why-is-linux-called-a-monolithic-kernel
Nov 27, 2009 - 216
down vote
accepted
A monolithic kernel is a kernel where all services (file system, VFS, device drivers, etc) as well as core functionality (scheduling, memory allocation, etc.) are a tight knit group sharing the same space. This directly opposes a microkernel.
A microkernel prefers an approach where core functionality is isolated from system services and device drivers (which are basically just system services). For instance, VFS (virtual file system) and block device file systems (i.e. minixfs) are separate processes that run outside of the kernel's space, using IPC to communicate with the kernel, other services and user processes. In short, if it's a module in Linux, it's a service in a microkernel, indicating an isolated process.
Do not confuse the term modular kernel to be anything but monolithic. Some monolithic kernels can be compiled to be modular (e.g Linux), what matters is that the module is inserted to and runs from the same space that handles core functionality (kernel space).
The advantage to a microkernel is that any failed service can be easily restarted, for instance, there is no kernel halt if the root file system throws an abort. This can also be seen as a disadvantage, though, because it can hide pretty critical bugs (or make them seem not-so-critical, because the problem seems to continuously fix itself). It's seen as a big advantage in scenarios where you simply can't conveniently fix something once it has been deployed.
The disadvantage to a microkernel is that asynchronous IPC messaging can become very difficult to debug, especially if fibrils are implemented. Additionally, just tracking down a FS/write issue means examining the user space process, the block device service, VFS service, file system service and (possibly) the PCI service. If you get a blank on that, its time to look at the IPC service. This is often easier in a monolithic kernel. GNU Hurd suffers from these debugging problems (reference). I'm not even going to go into checkpointing when dealing with complex message queues. Microkernels are not for the faint of heart.
The shortest path to a working, stable kernel is the monolithic approach. Either approach can offer a POSIX interface, where the design of the kernel becomes of little interest to someone simply wanting to write code to run on any given design.
I use Linux (monolithic) in production. However, most of my learning, hacking or tinkering with kernel development goes into a microkernel, specifically HelenOS.
Edit
If you got this far through my very long-winded answer, you will probably have some fun reading the 'Great Torvalds-Tanenbaum debate on kernel design'. It's even funnier to read in 2013, more than 20 years after it transpired. The funniest part was Linus' signature in one of the last messages:
Linus "my first, and hopefully last flamefest" Torvalds
Obviously, that did not come true any more than Tanenbaum's prediction that x86 would soon be obsolete.
NB:
When I say "Minix", I do not imply Minix 3. Additionally, when I mention The HURD, I am referencing (mostly) the Mach microkernel. It is not my intent to disparage the recent work of others.
|
Last edited by BW-userx; 11-04-2016 at 08:08 AM.
|
|
1 members found this post helpful.