Slackware — This forum is for the discussion of Slackware Linux (LinuxQuestions.org).
Is there a way to get makepkg to use all available CPUs, the way you can with make's -j $(nproc) option?
No, makepkg is a shell script, so it runs in a single shell (/bin/sh), and a single shell runs on a single CPU core. Make is a compiled program, so it can do a bit more.
While I believe you are correct that makepkg cannot use more than a single core, it is not correct that shell scripts cannot run commands in parallel.
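For example (a minimal sketch, not taken from makepkg itself): a plain /bin/sh script can keep several cores busy by backgrounding external commands and waiting for them:

```shell
#!/bin/sh
# Demo: the shell itself is single-threaded, but each backgrounded
# external command is its own process, so the kernel is free to
# schedule them on different CPUs.
mkdir -p /tmp/parjob && cd /tmp/parjob || exit 1
for i in 1 2 3 4; do
    dd if=/dev/zero of=file$i bs=1k count=64 2>/dev/null
done
for i in 1 2 3 4; do
    gzip -f file$i &    # each gzip may land on a different core
done
wait                    # the script blocks here until all four finish
ls file*.gz
```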
In Slackware -current, there is a new makepkg option:
Code:
--threads <number> For xz/plzip compressed packages, set the max
number of threads to be used for compression. Only has an
effect on large packages. For plzip, the default is equal to
the number of CPU threads available on the machine. For xz,
the default is equal to 2 (due to commonly occurring memory
related failures when using many threads with multi-threaded
xz compression).
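Under the hood this presumably maps to the compressors' own multi-threading (xz's -T option, plzip's default). You can see the same mechanism with xz directly; a sketch (paths and thread count are arbitrary, and the gain only shows on large archives):

```shell
# Build a tiny tarball and compress it with up to 2 xz threads,
# mirroring what makepkg's --threads default of 2 would request.
printf 'hello slackware\n' > /tmp/demo.txt
tar cf /tmp/demo.tar -C /tmp demo.txt
xz -T2 -f /tmp/demo.tar     # -T2: allow up to two compression threads
xz -t /tmp/demo.tar.xz      # integrity-check the result
```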
Of course a script CAN make use of parallel options in the programs it executes, but the script itself is purely single-threaded. Only external commands (not shell builtins) get the chance to run on another CPU core, or on several at once.
So I found the /sbin/makepkg script and looked at it, and decided I should not try to figure out if and where GNU parallel could speed it up -- I would just break it, lol -- but I look forward to speeding up the compression with the --threads option! I wonder if that -current /sbin/makepkg would work on the 14.2-stable branch... I upgraded the number of CPUs on a cloud instance just to build a kernel and a Slackware package, and downgraded the moment it was through so I wouldn't pay too much, and I noticed that, with all those CPUs, the kernel compiled faster than the Slackware package was created. I have it scripted so I don't accidentally leave the upgraded instance on, draining the bank; but I'm going to change the script to downgrade right after the kernel compile and run makepkg back on 1 CPU. And I'll check out --threads. Thanks for the information. Slack On!
Sure, but as far as I understand, the question was not about invoking makepkg in parallel in order to make many packages at once, but rather whether it can perform some of its internal operations in parallel. Per the comparison with Make, Make is invoked once but can compile many source files simultaneously. The fact that makepkg is a shell script does not prevent it from doing something analogous to that, which is what post #2 implied.
Last edited by montagdude; 01-07-2019 at 03:21 PM.
Excuse me, but what would be the advantages of a parallelized makepkg?
Just for the record, my box spends 9 hours compiling Qt5, then needs only a couple of minutes to package it.
As if I care whether it creates the package in 1 minute or 3, after spending 9 hours with the fans at max.
Last edited by ZhaoLin1457; 01-07-2019 at 03:51 PM.
I think it is more about using multiple threads when compressing the package. Sure, in the grand scheme of things it may only be a fraction of the compile time, but savings are savings. I wish the linker could use multiple threads too... when you have 15 cores idle and 1 core at 100%, it's a little frustrating when it seems to go on forever.
With 64 CPUs and 58 GB RAM, for an 8 MB kernel:
Code:
Kernel compiled in 262 seconds!
Slackware package built in 4015 seconds
When including the modules in the Slackware package, makepkg becomes the bottleneck. I think the --threads option is going to help that latter number!
I just connect to a virtual machine running slackware64-14.2 and can allocate CPU/RAM resources for the time I'm connected. In the end it was only a couple of bucks to save 9 hours of time (less than I would have spent on coffee over such a period).
It will only help in the compression phase, and not even that much when you create .txz packages; xz isn't that good at multi-threading (it doesn't even support it when DEcompressing, see the man page). Creating .tlz files may be much faster, although less compressed, now that Pat has included plzip (parallel lzip) in -current.
The archiver tar, which does the bulk of the work, is not multi-threaded.
BTW: on my system I've built and installed lbzip2, a parallel bzip2 utility, and that helps speed up that kind of compression.
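Because lbzip2 is flag-compatible with bzip2, it can be swapped into a tar pipeline as a drop-in. A sketch (falling back to plain bzip2 where lbzip2 isn't installed; paths are arbitrary):

```shell
# Pick the parallel compressor when available, else plain bzip2.
if command -v lbzip2 >/dev/null 2>&1; then COMP=lbzip2; else COMP=bzip2; fi
mkdir -p /tmp/lbdemo && echo data > /tmp/lbdemo/f
tar cf - -C /tmp lbdemo | "$COMP" -9 > /tmp/lbdemo.tar.bz2
bzip2 -t /tmp/lbdemo.tar.bz2    # lbzip2 output stays bzip2-compatible
```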
I seem to recall Pat adding lbzip2 to -current as well. Yes... yes he did:
Code:
+--------------------------+
Wed Apr 11 20:38:06 UTC 2018
a/lbzip2-2.5-x86_64-1.txz: Added.
a/pkgtools-15.0-noarch-7.txz: Rebuilt.
explodepkg: support parallel bzip2 (lbzip2). Thanks to ruario.
installpkg: support parallel bzip2 (lbzip2). Thanks to ruario.
makepkg: support parallel bzip2 (lbzip2). Thanks to ruario.
pkgdiff: added tool to compare the file contents of two packages.
Thanks Everybody: I made an instance for current, and tried a build: lbzip2 squeezed some more seconds out of it!
32 CPUs / 28.8 GB RAM:
Code:
Slackware package /tmp/kernel-slabc-4.20.0-x86_64-sib4.tbz created.
Kernel compiled in 500 seconds!
Slackware package built in 2 seconds!
The resultant package was 55 MB.
To be fair, I was writing this thread while waiting on makepkg to finish; that particular 4015-second job created an 803 MB package, which was a mistake and unfair to makepkg. It finished in time for me to report the time. When I looked into it, I had installed packages intended for another device onto the developer instance, and I think they were interfering. I removed them with removepkg and slackpkg clean-system, and a subsequent makepkg was still a few hundred seconds longer than make. But not after building in -current with lbzip2. Wow, 2 seconds! It sped up my heart rate too.
I did notice that I had to pass PKGTYPE="-.tbz" to makepkg for it to succeed. Thanks, Slackware.