Linux - Newbie
This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's this is the place!
You can sometimes even load two of them, if both are installed. You can also have them compiled into the kernel binary, though that can cause problems depending on the cards (they must not use the same shared memory or control register addresses).
If you look at the configuration files for most distributions, you will see that they have all the common video card drivers selected. It is the system init that loads the drivers for the identified hardware. Usually loading is done in the initrd, though it can be delayed until after the real root is mounted and the final init program runs.
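A quick way to see the result of that probing on your own box (a sketch, assuming lspci from the pciutils package and lsmod from kmod are installed; the guards just skip the step if a tool is missing):

```shell
# Show which kernel driver got bound to the VGA device:
if command -v lspci >/dev/null 2>&1; then
    lspci -k | grep -A 3 -i vga || true
fi

# List the modules currently loaded into the running kernel:
if command -v lsmod >/dev/null 2>&1; then
    lsmod | head
fi
```

The "Kernel driver in use:" line in the lspci output names the module the kernel actually picked for that card.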
Ah, now I'm thoroughly confused. Every time I've gone to "install" video drivers in Linux, I've needed the kernel source. That always suggested to me that if the source is needed, the driver is being compiled into the kernel itself. I suspect that is now not necessarily the case.
Modules exist independently of the kernel, yes? Does this mean I could have an ATI module and an Nvidia module that I could shuffle around onto the right machines?
Previous post was in response to @jpollard - but yes, the modules are supplemental to (rather than "independent" of) the kernel, and yes you can have both present. Simply a matter of loading the correct one.
That sounds reasonable, but I think before I get in too deep, I should learn more about the Linux kernel. Coming from Windows land, I'm used to drivers being separate entities from the kernel: you install them, they sit in a folder somewhere, and they get called when needed (or at boot, or whatever). But the sense I'm getting from Linux is that they're much closer to the heart of the OS, hence the need for compiling modules and the general level of depth when installing devices.
If I end up with a kernel module, is that going to be like a file, or a tar.gz that I could copy to a usb drive and move from machine to machine?
Windows programmer at work, Linux at home guy here.
For video cards you have two kinds of drivers: the open source ones, like nouveau (for nvidia) and radeon (for ati), and the closed-source ones from nvidia and ati (the companies themselves).
The "driver" portion of Windows/Linux is identical in function between the two kernels. Linux "probes" the hardware at startup to see which driver to load, or can be told which driver to load. Windows does essentially the same thing and will additionally offer to go download the "stock" driver for the hardware. The "stock" drivers are the ones that the respective companies have submitted to Microsoft for inclusion in the Windows install media.
So, the Linux kernel modules are (mostly) all of the drivers and driver support code for all of the hardware that Linux supports. On my Slackware Linux machines at home I have the full kernel modules installed, which includes open source drivers for both ati and nvidia cards, and the kernel handles figuring out which card I have and loading the correct driver.
Linux modules, or drivers, aren't really any "closer" to the kernel than are Windows drivers, generally speaking, it is just that they're more "exposed" to the user. This is a consequence of the difference between an open source OS and a closed source OS.
EDIT: On Linux the kernel modules (drivers) can be compiled into the kernel, compiled as "modules" where they're simply files in a directory, or not compiled at all, which means that kernel/Linux installation would not support the devices for which the modules were not compiled. IIRC, kernel module file names end in .ko, so you could do a 'locate .ko' and figure out where they're stored. I'm at work right now and don't remember off the top of my head.
Kernel modules (drivers, filesystems, encryption, network,...) are all files stored in /lib/modules/<kernelversion>/kernel/...
All modules have the extension ".ko" for "kernel object". They are compiled/linked against a specific kernel version (using the appropriate headers and System.map for the target kernel) so that the module can be loaded into kernel memory and connected to the kernel data structures that let it be used.
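You can poke at that tree yourself. A sketch, assuming a standard distro layout under /lib/modules (the "nouveau" lookup at the end is just an example module and may print nothing on machines that don't ship it):

```shell
# Modules for the running kernel live under /lib/modules/<version>/kernel/,
# sorted by subsystem (drivers, fs, crypto, net, ...):
KVER=$(uname -r)
MODDIR="/lib/modules/$KVER/kernel"
echo "module tree: $MODDIR"

# Every module file ends in .ko (possibly compressed, e.g. .ko.xz or .ko.zst);
# count the GPU drivers shipped for this kernel, if the tree is present:
if [ -d "$MODDIR/drivers/gpu" ]; then
    find "$MODDIR/drivers/gpu" -name '*.ko*' | wc -l
fi

# modinfo prints a module's metadata, including the exact kernel version
# it was linked against ("vermagic"):
if command -v modinfo >/dev/null 2>&1; then
    modinfo nouveau 2>/dev/null | grep -E '^(filename|vermagic)' || true
fi
```

The vermagic line is why a .ko built for one kernel version generally won't load on another: the loader checks it against the running kernel.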
Each of these could be linked directly into the kernel, but that would bloat the kernel a LOT and use up a fair amount of memory (well over 10 MB). Most modules are not in use on any given system, or even at any given time. The base generic kernel is usually about 5 MB.
The advantage of modules is that they can be replaced: the active module is unloaded and the replacement loaded, so the system can be updated without a reboot. SOME modules can't be unloaded (they may be in constant use after they get loaded), and replacing those does require a reboot.
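That replace-without-reboot cycle looks roughly like this (needs root, and "e1000e" is only an example module name; a module that is still in use will refuse to unload):

```shell
# Unload the active module; this fails if anything still uses it:
modprobe -r e1000e

# (install the rebuilt .ko under /lib/modules/$(uname -r)/..., then refresh
# the dependency map so modprobe can find the new file)
depmod -a

# Load the replacement and confirm it is back:
modprobe e1000e
lsmod | grep e1000e
```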
Performance-wise, the only thing saved by compiling in is some boot time, which isn't all that significant. If all needed drivers are compiled in, then there is no need for the module loader either, which saves some memory. I believe a number of embedded configurations use this just to eliminate the real need for an initrd, though an initrd is still useful for storing applications for the embedded system.
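The built-in vs. module choice is made per driver in the kernel's .config file. A fragment might look like this (these option names exist in mainline kernels, but which ones you set depends entirely on your hardware):

```
# "y" = built into the kernel image, "m" = built as a loadable .ko module,
# "is not set" = no support compiled at all.
CONFIG_DRM_NOUVEAU=m
CONFIG_E1000E=y
# CONFIG_DRM_RADEON is not set
```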
As for the open source drivers vs the closed source drivers: the major difference is that the company providing the closed-source driver can put custom/proprietary functions in it. The open source drivers won't have those, and sometimes (when the information is unavailable from the vendor) the driver has to be reverse engineered, which can take quite a while and be very error prone. When the information is available, the open source drivers usually match what the vendor provides, and they assure that the driver won't disappear if the vendor decides to stop providing the proprietary driver or drops support for the board.
At the risk of getting yelled at - my goal is to have a machine that never changes. No updates, no new software. This will be a machine I configure once to do one thing and that thing will never change. Security is no concern to me for this application. Essentially, if compiling things into the kernel makes it boot faster and the key loss is modular upgrade-ability, that's an easy choice to make.
I wonder, then, if I should just have two different kernels, because the video card will be the only thing that changes from machine to machine.
I have a whole boatload of research in front of me. I'm very curious to see how small I could make the kernel for my application.