Slackware: This forum is for the discussion of Slackware Linux.
Is there a reason why the huge kernel does not include the Hyper-V features built in, but only as modules?
This could be an impediment for Slackware on Azure :-)
No, seriously, it would be nice if the huge kernel would just work on Hyper-V without needing a workaround.
Kernels load modules when they determine they are dealing with a workload that requires them. That's not true just of the virtualization modules; all the Linux kernel modules insert and extract in a dizzying maelstrom of functionality.
I'm not familiar with Azure (no worries, I'm not familiar with a lot of stuff), but what I'm referencing is widely known, and unless Azure is just a virtualization platform for Microsoft (it is a Microsoft product, right?) it will account for this process.
Quote:
Kernels load modules when it determines it's dealing with a workload that requires it. That's not true just of the virtualization modules, all the linux kernel modules insert and extract in a dizzying maelstrom of functionality.
Hm, but when you need a module to access the storage (the hard disk), then autoloading at boot does not solve anything, so it would be nice to have at least the storage module built into the kernel.
Which storage module? There are quite a few available to the kernel, and you don't want it to load them all.
That's why it waits for a workload and configures itself on the fly.
If you know ahead of time that you'll need a specific module (like the one that lets you mount a SAN and boot your instance), you build the module into the initramfs and then reference it when you boot the node; as soon as the system comes out of the initramfs it reverts to the normal behavior (insert/unload as needed). The module for SATA/PATA hard drives is included in the initramfs by default, so when the kernel detects a workload (which can be a request to access a resource) it loads the driver and accesses the resource (the SAN, the hard drive).
To understand clearly why it needs to be this way, consider how much memory the kernel would require in its heap to load every driver it has access to (tens of thousands). It would render kernel processing times unmanageable; it would probably end up running kernel code from swap, which isn't going to work no matter how fast your SSD is...
Quote:
Which storage module? There are quite a few available to the kernel, and you don't want it to load them all.
That's why it waits for workload and configures in line.
If you know ahead of time you'll need a specific module (like the one that lets you mount a SAN and boot your instance) you build the module into the initramfs and then reference it when you boot the node [...]
With 14.1 you cannot boot Slackware after an installation: you need to boot the Slackware CD, mount the hard disk, create an initrd, and add it to lilo; then you can start to work.
I think this should not be required with the huge kernel.
Checking the config of -current, it seems the required modules are still compiled as modules for the huge kernel. I wonder what the reason for this is. I think it should just work, and the two or three modules should be added to the huge kernel.
edit:
I saw your edit too late. Obviously you did not get the question right, so please stop explaining things about memory registers; the question is about the Hyper-V storage module, thanks.
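For reference, the install-CD workaround described above can be sketched roughly like this. This is an untested sketch: the root device, filesystem, and the 14.1 kernel version 3.10.17 are examples, and the Hyper-V module names hv_vmbus/hv_storvsc are my assumption about which modules the huge kernel leaves out.

```shell
# Simplified sketch of the install-CD workaround (assumes root on
# /dev/sda1 with ext4 and the Slackware 14.1 kernel 3.10.17).
mount /dev/sda1 /mnt
chroot /mnt            # the commands below run inside the chroot

# Build an initrd carrying the Hyper-V bus and storage modules:
mkinitrd -c -k 3.10.17 -f ext4 -r /dev/sda1 \
         -m hv_vmbus:hv_storvsc -o /boot/initrd.gz

# Add an "initrd = /boot/initrd.gz" line to the kernel stanza in
# /etc/lilo.conf, then reinstall the boot loader:
lilo
```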
Quote:
with 14.1 you can not boot Slackware after an installation, you need to use the Slackware CD, mount the hd, create an initrd, add it to lilo, than you can start to work.
I never ran into that, though I generally just roll a platform and a couple of virtual machines along on current. I don't do nightly builds/deployments (by any stretch of the imagination) so I may have missed it.
RE: The contents of Huge:
I was editing my prior post while you were responding, sorry. I referenced memory management in the last paragraph. The only thing I'd add is that, to the best of my understanding, it's not uncommon for a module to be "present" (i.e. in the initrd image) and available to the kernel while unloaded, i.e. not currently interacting with the kernel code, and as such producing no memory overhead or doing any work. The modules you want loaded by default, the Intel and AMD virtualization modules, are included in every Linux kernel initrd for x86 architecture I'm familiar with; you just don't see them loaded unless there is a workload they are needed for.
edit:
OK, so why don't I just explain this part to you: it's not Slackware's kernel; take your issue up with Linus Torvalds.
I have noticed that vsftpd is not compatible with the crypt() implementation in glibc 2.17 and newer. Please consider applying the following patch to vsftpd in Slackware to fix this issue.
Question: The -current hplip package includes a /usr/lib/systemd directory. Is this directory required even if we do not use systemd? Same for the gvfs package.
Thanks. I'm already testing gvfs-1.28.2 for myself, and with "--disable-libsystemd-login" I do not get those systemd files. Might be worth testing for the gvfs-1.26 package in -current. For hplip I could not find such an option. Will do some more testing...
Quote:
Question: The -current hplip package includes a /usr/lib/systemd directory. Is this directory required even if we do not use systemd? Same for the gvfs package.
Good question DarkVision.
I wondered the same thing, but it can't hurt having them around, especially since they're all isolated in /usr/lib/systemd/{system,user}/ (?)
It's also interesting to see which apps run as background daemons ...
I still run wicd from /extra/ and it also has a /systemd/system/ entry.
I understand why wicd needs to run as a daemon.
OTOH, I installed evince as a dependency for some other SBo.
Why does a freakin' document viewer need to run as a daemon?
This isn't Windows and evince isn't Acrobat; processes start quickly on Linux, and I've never directly run evince!
Anyhow, this is on my reading list ... someday ...
Wouldn't the following be a better solution, as the above still has the ability to fall through to later code should crypt() return NULL:
Code:
p_crypted = crypt(str_getbuf(p_pass_str), p_spwd->sp_pwdp);
if (!p_crypted)
  return 0;
if (!vsf_sysutil_strcmp(p_crypted, p_spwd->sp_pwdp))
  return 1;
Anyway, looks like this ought to be fixed upstream and not at the distro level.
GazL --
You're right, the patch should be applied by the vsftpd maintainers.
Looking at the patch submitted by nixi in context, the code is isolated to function vsf_sysdep_check_auth( ) which is within an #ifndef VSF_SYSDEP_HAVE_PAM block.
See below.
Function vsf_sysdep_check_auth( ) checks for an entry in /etc/shadow, and if that does not exist it looks in /etc/passwd.
Your suggestion would bypass the existing /etc/passwd test if there's no valid entry in /etc/shadow, changing the existing functionality:
Quote:
Function vsf_sysdep_check_auth( ) checks for an entry in /etc/shadow and if that does not exist it looks in /etc/passwd
Your suggestion would bypass the existing /etc/passwd test if there's no valid entry in /etc/shadow changing the existing functionality
Yes, that was what I was getting at. On a system with shadow passwords, if crypt() returns NULL it will most likely be because the password entry in /etc/shadow is in an invalid or unsupported format. As the password field in /etc/passwd will be the character 'x' on a shadow-enabled box, I can see absolutely no point in falling through to the non-shadow code, only to run crypt() again against /etc/passwd, have it return NULL again, and eventually fall through to the return 0 at the end of the function. IMO, if crypt() can't handle the shadow password field, then access should be denied, and it should not go on to check against the field in /etc/passwd. But that's just my take on it. YMMV.
And after I started looking at the vsftpd code (I couldn't help myself), maybe the best way to fix this issue and more would be to patch vsf_sysutil_strcmp( ) in sysutil.c so that it detects NULL pointers and returns 'not equal'...
Function vsf_sysutil_strcmp( ) is simply a wrapper around strcmp( ) so there is an opportunity for a sanity-check there ( otherwise what's the point of vsf_sysutil_strcmp( ) ? )
That would fix the issue at hand and possibly others as well.
Anyhow... vsftpd won't even build as-is on a -current 64-bit + multilib system!
This is because the shell script vsf_findlibs.sh locates 32-bit libraries in /usr/lib/ rather than native 64-bit libraries in /usr/lib64/.
The strings /lib/ and /usr/lib/ are hard-coded in the vsf_findlibs.sh script... (sigh)...
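A quick-and-dirty workaround might be to rewrite those hard-coded paths with sed. This is an untested sketch: the `locate_library` lines below are a stand-in file of the shape I recall from vsf_findlibs.sh, not the real script, so verify against your own source tree before applying.

```shell
# Demonstrated on a stand-in file rather than the real vsf_findlibs.sh.
printf 'locate_library /usr/lib/libcap.so.1\nlocate_library /lib/libnsl.so.1\n' > findlibs-demo.sh

# Rewrite /usr/lib/ first, then bare /lib/; the first substitution
# leaves /usr/lib64/, which the second pattern no longer matches.
sed -i -e 's|/usr/lib/|/usr/lib64/|g' -e 's|/lib/|/lib64/|g' findlibs-demo.sh
cat findlibs-demo.sh
```

The ordering of the two expressions matters: doing the bare `/lib/` substitution first would mangle `/usr/lib/` into `/usr/lib64/` twice over.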
I've got a deadline looming and I've got no time to fix vsf_findlibs.sh right now
-- kjh
Patch that could fix NULL pointers in vsf_sysdep_check_auth( ) and elsewhere too: