LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Slackware (https://www.linuxquestions.org/questions/slackware-14/)
-   -   Do you use alternative kernels? (https://www.linuxquestions.org/questions/slackware-14/do-you-use-alternative-kernels-4175490033/)

moisespedro 01-03-2014 03:06 PM

Do you use alternative kernels?
 
I am trying to have my system as fast as possible and was searching for custom kernels like zen-kernel, for example. But I didn't find any reason to use them; they don't seem to make that much of a difference. What do you think?

ReaperX7 01-03-2014 03:19 PM

Kernels won't make your system run any faster than the hardware will allow. You could strip a kernel to barebones minimum trying to reduce the memory footprint, but that's about it.

To get a system to run faster you have to do things like overclocking, replacing older hardware with newer and faster versions, and maybe adding more RAM, a faster hard drive, or faster CPU.

Didier Spaier 01-03-2014 03:24 PM

Fast doing what? Please give some examples of tasks you'd like to accelerate, or you could receive pointless advice.

Generally speaking, I don't think you'll gain much by customizing your kernel, and fast hardware can be a more important factor than software optimization.

PS ReaperX7 was faster ;)

moisespedro 01-03-2014 03:40 PM

Ok, let me explain myself better: I often see people, such as the ones developing or using these kernels, saying they are optimized/faster/whatever. It seems that isn't the case.

astrogeek 01-03-2014 03:55 PM

I know your question is not about Gentoo, but be careful to not fall into this trap...

HOLY COW I'M TOTALLY GOING SO FAST OH F***.

There is no magic incantation like "-OMG speed=150%". Just do the things that you understand, one by one, and gauge the result yourself on your own system.

As stated earlier, kernel-wise you can reduce the memory footprint and remove unneeded modules, but generally there are not any dramatic speed gains to be made there.

*** EDIT ***

I was not familiar with the "zen kernel" so I did a quick search and found a lot of 404 pages, including what appears to be the ZenKernel home page on the Ubuntu wiki, which says:

Quote:

OBSOLETE, NEW WEBSITE

This document is currently obsolete, the new one can be found on the new Zen kernel home page at: http://zen-kernel.org/tutorials/dist...u-installation
... and leads to another 404...

So unless I missed something, the ZenKernel, whatever it was, is no more...

moisespedro 01-03-2014 04:13 PM

Here

ReaperX7 01-03-2014 04:14 PM

Zen is supposed to be some kind of universal desktop kernel for everyday usage. To be honest, it's not really that great. It's often best to stick with the kernel provided by your distribution, or one you've built yourself.

moisespedro 01-03-2014 04:16 PM

There is Liquorix too (and many others), but it seems to be worse than the stock kernel:
http://www.phoronix.com/scan.php?pag...uorix_32&num=1

moisespedro 01-03-2014 04:22 PM

Oh, and by the way, I am testing Gentoo (what a funny webpage lol) but I am still totally confused (I am not a very skilled Linux user, but whatever). I like the idea behind it and I like compiling stuff, but I don't know if it is worth it, and it is definitely not as simple as Slackware.

metaschima 01-03-2014 04:27 PM

A new kernel compiled for your processor will boost performance a bit. Another thing I have found to increase performance is recompiling glibc, glib, and glib2 using '-march=native' in the SlackBuilds.

moisespedro 01-03-2014 04:30 PM

I am running a recompiled kernel and I am not seeing much difference. And I don't feel comfortable enough to recompile glibc, glib or glib2.

TobiSGD 01-03-2014 05:35 PM

It depends. A different kernel can run faster, but usually not because of better optimization; rather, a newer kernel may contain bug fixes that speed up the system. For example, kernels 3.11 and earlier had a bug in the ondemand CPU governor that was fixed in 3.12. Under certain circumstances and with specific benchmarks this bugfix could speed up the system by up to 90%.
But usually you will get a much better performance increase by compiling your applications for your specific CPU or GPU. Don't expect wonders from that either.
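
If you want to check which governor your machine is actually using (and therefore whether the ondemand bug even applies to you), the cpufreq sysfs interface will tell you. A quick sketch using the standard sysfs paths, one entry per core:

Code:

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor    # shows e.g. "ondemand"
# as root, switch that core to the "performance" governor to sidestep ondemand entirely:
echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor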

metaschima 01-03-2014 06:19 PM

Quote:

Originally Posted by moisespedro (Post 5091432)
I am running a recompiled kernel and I am not seeing much difference. And I don't feel comfortable enough to recompile glibc, glib or glib2.

It's simple, just run 'lftp' to mirror the Slackware directory you want, for example:

Code:

lftp -c 'open ftp://mirrors.usc.edu/pub/linux/distributions/slackware/slackware64-14.1/source/l/glib/; mirror'
Make sure the Slackware version is right. Then edit the SlackBuild so that '-march=native' is added for your architecture (or for all architectures if you are not sure). Then run the SlackBuild as root, wait for it to finish, and run 'upgradepkg --reinstall' on the package that is created.
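
Roughly, the whole thing looks like this (only a sketch; the SLKCFLAGS variable name and the /tmp output path are just how the stock SlackBuilds I've seen do it, so check your copy):

Code:

# after the lftp mirror above, the glib SlackBuild files are on disk locally
# edit glib.SlackBuild so the compiler flags include -march=native, e.g.
#   SLKCFLAGS="-O2 -fPIC -march=native"     (variable name may differ per script)
chmod +x glib.SlackBuild
./glib.SlackBuild                           # run as root
upgradepkg --reinstall /tmp/glib-*.txz      # output path/name depend on the build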

aus9 01-03-2014 07:07 PM

moisespedro

I don't use Slackware.

If you are going to quote an old link about Liquorix kernels, as per post number 8, you may not have observed that it appears to be dated 27 March 2012.

rant starts.....giggles

and that is the point why I use and will continue to use it. Altho on Debian sid

reason

If there is a kernel security update or patch required I have always found that Steven Barrett AKA damentz does a great job of pumping out updates very quickly

Now look at your repo for Slackware and tell me: what is the kernel version?

I will attempt to show it via web pages ok

at time of writing this rant.....forgive me as I don't have slack installed to check YMMV

slackware
http://slackbuilds.org/mirror/slackw...s/VERSIONS.TXT
claims 3.10.17 for 32 bit

liquorix
http://liquorix.net/debian/pool/main/l/linux-liquorix/
claims 3.12-6 for 32 bit

sorry if I offend anyone

rant ends

hitest 01-03-2014 07:15 PM

I usually use the kernel that ships with Slackware, but I have compiled my own kernel before. Alien Bob (one of our lead Slackware developers) has a good kernel compilation guide that works very well:

http://alien.slackbook.org/dokuwiki/doku.php?id=linux:kernelbuilding&s[]=kernel&s[]=compile
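
For a rough idea of what such a build involves, here is a sketch starting from the shipped config (file names vary by Slackware version, and the linked guide covers the initrd and lilo details properly):

Code:

cd /usr/src/linux
cp /boot/config .config               # Slackware ships its kernel configs in /boot
make oldconfig                        # answer prompts for any options new to this source
make menuconfig                       # trim or adjust to taste
make -j4 bzImage modules              # adjust -j to your CPU count
make modules_install
cp arch/x86/boot/bzImage /boot/vmlinuz-custom
cp System.map /boot/System.map-custom
# then add an entry for /boot/vmlinuz-custom to /etc/lilo.conf and run lilo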

moisespedro 01-03-2014 07:35 PM

Quote:

Originally Posted by aus9 (Post 5091508)
moisespedro

I don't use Slackware.

If you are going to quote an old link about Liquorix kernels, as per post number 8, you may not have observed that it appears to be dated 27 March 2012.

rant starts.....giggles

and that is the point why I use and will continue to use it. Altho on Debian sid

reason

If there is a kernel security update or patch required I have always found that Steven Barrett AKA damentz does a great job of pumping out updates very quickly

Now look at your repo for Slackware and tell me: what is the kernel version?

I will attempt to show it via web pages ok

at time of writing this rant.....forgive me as I don't have slack installed to check YMMV

slackware
http://slackbuilds.org/mirror/slackw...s/VERSIONS.TXT
claims 3.10.17 for 32 bit

liquorix
http://liquorix.net/debian/pool/main/l/linux-liquorix/
claims 3.12-6 for 32 bit

sorry if I offend anyone

rant ends

I noticed it was an almost two-year-old benchmark; I just used it as an example and it was the only one I found. I got curious about custom kernels and just wanted to know if they were worth it. And that is right; for x86_64 Slackware too, it runs kernel version 3.10.17.

TobiSGD 01-03-2014 08:42 PM

Quote:

Originally Posted by aus9 (Post 5091508)
now look at your repo for Slackware and tell me what is the kernel version?

Since you don't use Slackware it will be forgiven that you don't know that in Slackware's release versions kernels are never updated (well, almost never) ;).
It is the responsibility of the user to keep track of security updates and install newer kernels. But the user is not all alone: if you look in /testing/source in the repository you will see that there are configuration files for the 3.12 kernel.

moisespedro 01-03-2014 08:54 PM

I am not crazy about security, and I trust Volkerding and Linux enough not to get crazy about new kernel versions.

aus9 01-03-2014 09:09 PM

TobiSGD
Fair enough, although it strikes me that compiling takes time and is hardware dependent, and hence we're back to the original theme.


Quote:

I am trying to have my system as fast as possible
You can go faster with less bloat

KDE and Gnome/variants are more bloated than others
XFCE is a decent compromise
I use Enlightenment but I have a decent video card etc

Next, look at what services you have running, either on startup or continuously.
Naturally my web browser with its features is a bit bloated compared to lighter ones
(Chrome without disabling some features is more intensive than Firefox without disabling some features).
YMMV

moisespedro 01-03-2014 09:11 PM

I am using XFCE; I can't stand Gnome 3, KDE 4 (KDE 3 was tolerable), Unity or any other bloated WM. I thought about using a *box WM but I don't really know, it might be a bit too much.

enorbet 01-03-2014 09:12 PM

I am a recovering speed freak... well, I don't mean from any stupid chemicals (unless that includes nitromethane), but as a teen I was an obsessed hot rodder, and by the time I got a P133 to effectively become a P200 I was hooked on overclocking. I would overclock your phone given the chance (I'd probably draw the line at toasters, but hey... maybe we can shave 20 secs off that time LOL). It is possible I will tear up if someone says Celeron 300A in conversation, but then that's the kind of friends I have. For a time I was also a rabid "kernel shaver". I said "was" because the release of the 2.6 kernel changed all that.

The 2.4 kernel had 3400 kloc and it wasn't too hard to get the bzImage down under 1.4 MB, where it would fit on a floppy; and because it didn't unload modules like the 6000 kloc 2.6 kernel, it was well worth shaving it down to be essentially "embedded", i.e. only existing hardware supported and no hardware supported that was not on the system. This improved boot times, and because the kernel, the actual kernel, was accessed sometimes many times a second, it improved the speed of the whole system. This has since been virtualized; no longer does the entire actual kernel swing like some dead weight.

The only remaining speed advantage to a small kernel is boot times, assuming you're no longer using a 2.4 kernel.

I am quite involved in audio recording, so another reason I did and still do compile custom kernels is to create a realtime, low-latency beast, essential for crucial timings in track mixing and overdubbing, especially after many "takes". This speed boost is still available and is rightfully mentioned in Alien Bob's "How To" for custom kernel building. That, and selecting your (nearly) exact CPU rather than accepting i486 or i586 for an i7 or whatever, has a nice cumulative effect, making the system smooth and responsive.

Presently there are zero kernel speed boosts available to any would-be kernel hot rodders that are not mentioned in Bob's How To; and only go low latency if your hardware is known to be solid. I still overclock some, but it is all so pedestrian and expected now that it has largely lost its renegade thrill. However, as a hangover perhaps, I still don't trust stock coolers and always get a monstrous cooler. Heat is the Enemy!

ReaperX7 01-03-2014 09:26 PM

A newer kernel might offer some benefits, but beware that there are cases where slower operation times actually turn out to have better stability and overall reliability, and in some cases work as the manufacturer intended. Optimal performance means simply that: the operation in question runs at its optimal speed, regardless of how fast or slow that is.

When you think about getting a new kernel to build for your system, plan carefully and see what's available. Right now on LFS I'm running 3.12.3, which is from the stable line. It's stable and runs my hardware as I need it. It's built using the defconfig method to auto-detect the architecture and most basic hardware components, with additions through menuconfig as required by the LFS/BLFS books and my hardware's driver requirements. It's very trimmed down as far as kernels go, however, so my memory footprint is very small.
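
In rough terms the flow is just this (a sketch; the menuconfig choices are obviously system-specific):

Code:

cd /path/to/linux-3.12.3         # unpacked kernel source tree
make defconfig                   # default config for the detected architecture
make menuconfig                  # add what LFS/BLFS and your hardware require
make -j4
make modules_install
cp arch/x86/boot/bzImage /boot/vmlinuz-3.12.3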

moisespedro 01-03-2014 09:29 PM

I thought about trying to compile the kernel with only the things my machine needs (reading everything) but I don't think it is worth it. I even thought about trying to tweak my boot time by customizing the rc files, but I only turn on my machine once a day so it is not that big of a deal. The only thing I do is try to use lightweight apps, but only if I like the experience. Navigating through something like "links" is as lightweight as it can be, but that doesn't mean I am gonna use it. I think it is better to act this way, otherwise I would go crazy thinking about every option I could change to improve my system. Which is kind of funny to realize, considering I am setting up a Gentoo box now. Anyways, this made me lol.

enorbet 01-03-2014 09:42 PM

Just a comment for the record - Hot Rodders sneer at "what the manufacturer intended".

At least half the performance AND safety features of modern gasoline-powered vehicles are directly due to such sneering. PC manufacturers, like car manufacturers, don't like "hobbyists" looking under the hood. They tend to hide the nuts 'n bolts and never admit that they need substantial cooling or can benefit from tweaking. If it weren't for overclockers, your typical OEM PC would still be using a single 80mm fan, if that, and there would be no accommodation for fan or CPU throttling or many other performance and convenience features. Manufacturers were forced to catch up.

Read, experiment, take notes, be careful and do what you will.

fsauer 01-04-2014 01:12 AM

Just as a reminder: sometimes a new kernel is a necessity. On my Zenbook UX31 all kernels >3.9.x failed, until it was operating again under 3.12.1. I learned the hard way to install new kernels alongside the old one, instead of updating in place :)

enorbet 01-04-2014 02:54 AM

Oh geez, here we go again.... what is this talk of "bloat" when referring to Linux? Unlike Windows, just having lots of stuff installed does not constitute bloat. One can trim down KDE to the nubs and just shut off services right and left. Is the default KDE Plasma Desktop more resource-heavy than Blackbox? Of course... than XFCE or even Enlightenment? Yes, by default. But one can pile up services and features in those as well. If you need or want those services, who is qualified to call that bloat? If you don't need or want those, just shut them off.

Besides, RAM and hard drives are cheap, so why not get all the features you can use?... even explore what you might like to use. If you look at the capability built into even just KDE's run command, most people would be shocked (if you don't have KDE installed so that you can see this for yourself by clicking on the little wrench icon next to the Run dialog, just look here for a shadow of a clue). If you see what Plasmoids can do now, most people would be shocked. Can you use those features? You're never going to find out unless you try.

In real life a dune buggy can run circles around a Lincoln Town Car, but not in the PC world; and which would you rather have for a long trip?

aus9 01-04-2014 06:26 AM

enorbet

Excuse me, but as I am the one to have said "more bloated",
I was not implying Linux was more bloated than Windows, so please spare me your ranting.
I am the only one allowed to rant here.....cos I said I was ranting, but you have not prefaced your comments to show it's a rant

Quote:

which would you rather have for a long trip?
I prefer Toyota.....giggles

solarfields 01-04-2014 07:13 AM

Quote:

HOLY COW I'M TOTALLY GOING SO FAST OH F***.
astrogeek, this is hilarious

enorbet 01-04-2014 01:16 PM

Quote:

Originally Posted by aus9 (Post 5091759)
enorbet

Excuse me, but as I am the one to have said "more bloated",
I was not implying Linux was more bloated than Windows, so please spare me your ranting.
I am the only one allowed to rant here.....cos I said I was ranting, but you have not prefaced your comments to show it's a rant


I prefer Toyota.....giggles

That's because I wasn't ranting, and I never thought you were saying Linux is more bloated than Windows, though perhaps my exasperation at mythology shows through :P It's just amazing how long such myths persist. I find references to bloat most commonly come from Gentoo and Arch users who revel in minimalistic installs. In truth, those distros make that work pretty well for them, as long as you stick to the right repositories and don't try to make them be something they are not meant to be, such as really good at compiling from source.

What chaps me (though astrogeek said it in a much more hilarious way) is that Arch, Gentoo and other minimalists persist in believing that minimalism or CFLAGS settings make their distro appreciably faster. It is commendable that Gentoo has managed to continue to exist when its whole original reason for being was proven in benchmark after benchmark to be a false assumption.

Especially since the Linux kernel became truly modularized, the system cares not how much stuff is installed or capable of running, only what services are actually active. It is possible to have a Linux install that spans hundreds of gigabytes, with 50 GB or more of installed apps and a kernel of 5 MB, and on the very same box another distro (or the same one pared down) at 15-20 GB with a 2 MB kernel, and they will feel absolutely identical in speed, with the minor exception of boot times.

If it makes you feel slim and trim to keep things minimal, that's perfectly valid; just don't try to imply that there is some performance advantage to that, or that someone else's system is "bloated" because it uses more hard drive space, having more stuff installed. That is a non sequitur.

metaschima 01-04-2014 01:31 PM

I have tested with a recompiled glibc and there is a real difference. The difference is not great, but it is noticeable because so many things depend on glibc. The same goes for the kernel.

Don't expect huge differences, but the difference is there.

Now, I would NOT go so far as to recompile every single package, because that is too tedious. However, I think the benefit of recompiling a few key packages outweighs the cost of a few minutes compile time. On my new machine even the kernel compiles in 5 to 10 minutes. I think it is time well spent :)

astrogeek 01-04-2014 02:19 PM

Quote:

Originally Posted by solarfields (Post 5091777)
astrogeek, this is hilarious

I can't take credit for it - I saw it linked in another thread here some time ago. Now every time I see talk of optimizing for speed I think of it. It is hilarious, and often so close to the mark!

ReaperX7 01-04-2014 03:05 PM

You could also rebuild every Slackware package with the -O3 optimization level rather than the default -O2, but you risk serious instability issues if you do.

metaschima 01-04-2014 03:49 PM

Quote:

Originally Posted by ReaperX7 (Post 5091951)
You could also rebuild every Slackware package with the -O3 optimization level rather than the default -O2, but you risk serious instability issues if you do.

That is not a good idea, and the costs would outweigh the benefits, especially stability-wise. Using '-march=native' is safe, while '-O3' is not guaranteed to be. With so many packages to rebuild the time used would not be worth the minor performance increase.

ReaperX7 01-04-2014 05:20 PM

Exactly. Often you'll find that running correctly and stably doesn't involve factoring speed into the equation.

grothen 01-04-2014 06:18 PM

:) so do you stick with the default kernel or do you upgrade with new releases?

moisespedro 01-04-2014 06:29 PM

Quote:

Originally Posted by grothen (Post 5092015)
:) so do you stick with the default kernel or do you upgrade with new releases?

If there is no need I don't upgrade it

qweasd 01-04-2014 08:17 PM

I tend to kernel-hop until the system stops having issues. Although, to be fair, sometimes I just upgrade because the new features sound awesome (I doubt I get any detectable benefit from them). Thankfully, Pat updates configs quite often. As it stands, I've been using 3.10.17 since late October, and it's been highly stable, so I won't upgrade until a truly grave security or stability issue is discovered, such as a critical ext4 bug.

ReaperX7 01-05-2014 02:49 AM

I tend to stick to the kernel I build my SVN version of LFS with, which is usually the latest stable kernel. After that, it is only recompiled as needed to add modules or features. When I used Slackware, I stuck to whatever the -current tree kernel was.

Claudiu.Ionel 01-05-2014 04:12 AM

Quote:

Originally Posted by moisespedro (Post 5091379)
I am trying to have my system as fast as possible and was searching for custom kernels like zen-kernel, for example. But I didn't find any reason to use them; they don't seem to make that much of a difference. What do you think?

You can make your system run at its best if you use all the hardware you have in it. So your system will always be as fast as your hardware is, or slower if there are some bugs in the system. Generally you would change the kernel (the heart of your system) if you want to use some new feature that has been implemented in it. Let's say, if you run some new fancy laptop you should use a kernel >= 3.2.
Also, you should update your kernel if there are important security concerns, bugs, or incompatibilities with something...
A good place to learn more about the kernel is its home: https://www.kernel.org/. Be sure to read the changelogs if you are interested in seeing what bugs have been fixed and other stuff.

brianL 01-05-2014 05:16 AM

I'm usually satisfied with the generic kernel. I've only tampered with it once, before Slack64 came out, to get it to "see" more than 3.2 GB of RAM.

rogan 01-06-2014 06:51 PM

I always "roll my own" and usually the latest of the series that comes with
Slackware. Sometimes there are some speed benefits if you include only drivers
and options necessary for your machine (and use).

Even the "generic" kernel is a pretty big beast with frame-pointers, tracers and
lots of debug stuff. Very useful if something goes bad and you have to find out
what happened.

I don't want that functionality. I usually just do a
make allnoconfig && make menuconfig and fill in the stuff that's necessary.
If I don't need modules I make it monolithic. These days with the nouveau driver
I don't see any real reason not to.
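
In sketch form that approach is roughly the following (all the machine-specific work happens in the menuconfig step):

Code:

cd /usr/src/linux
make allnoconfig                 # start with everything disabled
make menuconfig                  # enable CPU type, filesystems, disk/net/video support, etc.
make -j4 bzImage                 # monolithic, so no modules to build or install
cp arch/x86/boot/bzImage /boot/vmlinuz-mono
# add a lilo.conf entry for it and keep the stock kernel as a fallback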

Is there a difference? Well, I've done some tests. I found a small benchmarking
program here: http://www.unix.com/source/bm.zip. If you want to run it, just unpack it
and execute ./Run; it takes about an hour. Don't even try to compile anything in there.
The code is ancient and gcc barfs immediately. Some of the tests won't complete
correctly, so I've included the ones that do.
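
Running it amounts to this (the unpacked directory name is a guess; check after unzipping):

Code:

wget http://www.unix.com/source/bm.zip
unzip bm.zip
cd bm            # or whatever directory the zip actually creates
./Run            # takes about an hour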

Tests on 'generic' (with the deadline scheduler):

BYTE UNIX Benchmarks (Version 3.11)
System -- Linux 3.10.17 #1 SMP Wed Oct 23 2013
x86_64 AMD Phenom(tm) 9650 Quad-Core Processor AuthenticAMD
GNU/Linux
Start Benchmark Run: Mon Jan 6 2014.

Dhrystone 2 without register variables 12299645.3 lps (10 secs, 6 samples)
Dhrystone 2 using register variables 12471500.5 lps (10 secs, 6 samples)
C Compiler Test 979.3 lpm (60 secs, 3 samples)
Dc: sqrt(2) to 99 decimal places 119317.6 lpm (60 secs, 6 samples)
Recursion Test--Tower of Hanoi 117369.4 lps (10 secs, 6 samples)
System Call Overhead Test 2569626.8 lps (10 secs, 6 samples)
Pipe Throughput Test 1544249.7 lps (10 secs, 6 samples)
Process Creation Test 9651.5 lps (10 secs, 6 samples)
File Read (10 seconds) 4786429.0 KBps (10 secs, 6 samples)
File Write (10 seconds) 1324905.0 KBps (10 secs, 6 samples)
File Copy (10 seconds) 122986.0 KBps (10 secs, 6 samples)
File Read (30 seconds) 4776556.0 KBps (30 secs, 6 samples)
File Write (30 seconds) 1162625.0 KBps (30 secs, 6 samples)
File Copy (30 seconds) 84572.0 KBps (30 secs, 6 samples)

Tests on a monolithic kernel (deadline scheduler):

BYTE UNIX Benchmarks (Version 3.11)
System -- Linux 3.10.25-mono #1 SMP Mon Jan 6 2014
x86_64 AMD Phenom(tm) 9650 Quad-Core Processor AuthenticAMD
GNU/Linux
Start Benchmark Run: Mon Jan 6 2014.

Dhrystone 2 without register variables 12222226.7 lps (10 secs, 6 samples)
Dhrystone 2 using register variables 12313789.8 lps (10 secs, 6 samples)
C Compiler Test 1004.3 lpm (60 secs, 3 samples)
Dc: sqrt(2) to 99 decimal places 129666.7 lpm (60 secs, 6 samples)
Recursion Test--Tower of Hanoi 129083.8 lps (10 secs, 6 samples)
System Call Overhead Test 2633972.7 lps (10 secs, 6 samples)
Pipe Throughput Test 1574751.0 lps (10 secs, 6 samples)
Process Creation Test 9394.2 lps (10 secs, 6 samples)
File Read (10 seconds) 5439652.0 KBps (10 secs, 6 samples)
File Write (10 seconds) 1368310.0 KBps (10 secs, 6 samples)
File Copy (10 seconds) 125975.0 KBps (10 secs, 6 samples)
File Read (30 seconds) 5509959.0 KBps (30 secs, 6 samples)
File Write (30 seconds) 1202322.0 KBps (30 secs, 6 samples)
File Copy (30 seconds) 84547.0 KBps (30 secs, 6 samples)

The main difference seems to be in the memory subsystem (files get cached
pretty much instantly with 4 GB of RAM). Read speeds are significantly higher.

I guess I'll stick with this one.

narz 01-06-2014 10:03 PM

I rebuild my kernel with the CK/BFS patch. I don't know that it performs any better than the generic kernel, but the BFS scheduler is designed for better interactivity and lower latencies, which I think is important. I also make my kernel preemptible and turn off dynamic ticks, as recommended by Con Kolivas.
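
For reference, the relevant .config lines look roughly like this (option names vary a bit between kernel versions, BFS itself comes from the out-of-tree -ck patchset rather than mainline, and the 1000 Hz tick is just the value commonly recommended alongside it):

Code:

CONFIG_PREEMPT=y          # "Preemptible Kernel (Low-Latency Desktop)"
CONFIG_HZ_1000=y          # 1000 Hz tick, commonly paired with BFS
CONFIG_HZ_PERIODIC=y      # periodic ticks, i.e. dynamic ticks (NO_HZ) disabled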

lems 01-06-2014 10:50 PM

I built the 3.12.6 kernel. It boots a bit faster than the -huge kernel I used previously, and YouTube videos with pipelight run more smoothly than before. They weren't really choppy, but the image would stand still for a short period of time occasionally. I don't know if it's because I set CONFIG_PREEMPT (I chose low-latency desktop) or because of the newer kernel, but it's a nice improvement.

jtsn 01-07-2014 10:51 PM

For a standard Slackware kernel there are some optimizations you could do:

1. Set the CPU family to match your actual CPU instead of CONFIG_M486 or CONFIG_MPENTIUMIII.

2. If you're on a desktop, you could rebuild the kernel with CONFIG_SCHED_AUTOGROUP enabled. It doesn't give you more throughput, but should result in a more responsive user experience. Same goes for CONFIG_PREEMPT.

3. If you're on Slackware 32-bit (for any reason), you could install a 64-bit kernel. On machines with more than 1 GB of RAM this results in better performance.

4. You could replace in-kernel drivers with better performing external drivers, like r8168, nvidia and fglrx.

That's it basically. On a modern PC there is no point in building a "minimal kernel" or using weird compiler flags.
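
For points 1 and 2, the corresponding .config entries look roughly like this (names as in the 3.x kernels; the Core 2 line is only an example, pick the processor family entry that actually matches your CPU):

Code:

# CONFIG_M486 is not set
CONFIG_MCORE2=y             # example only: use the entry for your CPU family
CONFIG_SCHED_AUTOGROUP=y    # automatic per-session task grouping
CONFIG_PREEMPT=y            # low-latency desktop preemption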

ReaperX7 01-07-2014 10:56 PM

Number 4 is going to be a good debate due to the politics of those drivers. However, official drivers from an OEM source will often carry better scheduling and timing than the normal open-source derivatives, at least until the latter catch up.

jtsn 01-07-2014 11:01 PM

Quote:

Originally Posted by ReaperX7 (Post 5094003)
Number 4 is going to be a good debate due to the politics of those drivers.

Well, at least the r8168 driver for the RTL8168/8111 PCIe NICs is just a plain GPLv2 FOSS driver, and it performs way better than the in-kernel r8169 driver (originally written for an age-old PCI controller and then heavily customized).

ReaperX7 01-07-2014 11:19 PM

Very true.

There are several other drivers from other OEMs as well, several of which are luckily on SlackBuilds.org. One is a RaLink driver, I think, and a few others are various printer drivers, last I looked.

