Slackware: This forum is for the discussion of Slackware Linux.
xf86-video-sis and various other drivers are probably not going to be maintained, since Mesa3D dropped many of the drivers tied to DRI1 support. It's a sad state of affairs that many non-mainstream video cards are now reduced to the vesa/modesetting driver because the developers are unwilling to bring those drivers up to modern specs, even just for EXA.
Yeah, that's too bad. I'm considering learning how to write graphics drivers to keep several of my computers working.
If you think that running an old version of Mesa is possible, I'll help in some way.
I already had readline here (lots of rebuilds for that one).
I just got rcs queued up.
I'm going to leave bsd-games alone for now. It's not obvious to me from some casual searching where the correct homepage/download link is, and while I could grab the archive from another distribution, I'd rather not do that without knowing that what they have is official (as well as it can be).
Procps-ng's pkill does work a bit more effectively, though, and has more options for killing a process tree. It will be good to have pkill if alternative init systems can be imported into Slackware more easily to supplant killall5.
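For illustration, here is a small sketch of the kind of selection pkill allows that killall5 (which just signals everything) cannot express. It assumes pkill from procps-ng is on the PATH; the sleep process is just a stand-in for a real daemon.

```shell
#!/bin/sh
# Sketch: pkill can select processes by attributes, e.g. every direct
# child of a given parent PID, rather than signalling all processes.
sleep 300 &
child=$!

# Signal every direct child of this shell whose command name is "sleep":
pkill -P $$ -x sleep

# Reap the child; after this the PID no longer exists.
wait "$child" 2>/dev/null || true
```

pkill can similarly select by process group (`-g`) or session (`-s`), which is the sort of flexibility killall5 lacks.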
I have been using procps-ng for some time already instead of procps.
At least the new top shows which process is the parent of each process.
Apart from that, a user will not notice the difference.
I would like to see it actually adopted.
Not really. No.
By the way, it turns out that there are games that use it, so my first statement (about the lack of usage) is incorrect.
I wanted to try 0ad (SDL2 support) and made some changes to the script at SBo.
Erik should merge them at some point, if he doesn't consider the requirement of enet >= 1.3 (which is incompatible with the old 1.2) a deal-breaker.
And since "/run" is a tmpfs, these subdirectories will disappear at shutdown and should therefore be re-created at boot time (in rc.S).
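For illustration, the re-creation could look like this in rc.S. This is only a sketch: the directory names and 0700 modes are taken from the lvm case discussed in this thread, and RUNDIR is a variable only so the fragment can be dry-run outside a real boot (on a real system it is simply /run).

```shell
#!/bin/sh
# Sketch of an rc.S fragment: /run is a tmpfs, so any subdirectories
# that daemons expect must be re-created on every boot.
RUNDIR="${RUNDIR:-/run}"

# -p: no error if it already exists; -m: mode of the created directory.
mkdir -p -m 0700 "$RUNDIR/lvm"
mkdir -p -m 0700 "$RUNDIR/lock/lvm"
```

Any other daemon that expects its own subdirectory of /run would get an analogous line here.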
Will do. Thank you.
Well, see if this patch will apply to whatever version you have (it's generated on 2.02.114) and then see if it fixes the problem properly; if so, I'll just carry it as a distro patch. My experience with upstream is that bug reports are ignored, even with patches.
The point is, the current design already puts a tmpfs on /run by default, but places a second tmpfs on /dev/shm. Linking /dev/shm to /run/shm avoids the duplicate tmpfs mounts, so udev only mounts once. The init script that mounts the tmpfs creates all the appropriate directories in /run; anything using shared memory then uses that single tmpfs, whose contents are flushed completely when /run is unmounted, so you also avoid yet another redundant tmpfs unmount.
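A rough sketch of the layout described above. ROOT is a scratch prefix so the fragment can be exercised without touching a live system; on a real system it would be empty, and the tmpfs mount itself (mount -t tmpfs tmpfs /run) would already have happened earlier in the init sequence.

```shell
#!/bin/sh
# Sketch: one tmpfs on /run serves shared memory too; /dev/shm becomes
# a symlink into it, so udev does not need to mount a second tmpfs.
# ROOT points at a scratch tree here; on a real system it would be "".
ROOT="${ROOT:-$(mktemp -d)}"

# Shared memory needs a world-writable, sticky directory:
mkdir -p -m 1777 "$ROOT/run/shm"

# Replace the /dev/shm mount point with a symlink into /run:
mkdir -p "$ROOT/dev"
ln -sfn /run/shm "$ROOT/dev/shm"
```

With this in place, everything under /run, shared memory included, vanishes in one go when the single tmpfs is unmounted at shutdown.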
Your patch did not work, because "dmeventd.c" wants to create its pid file in "/run/lvm/" as well. I moved the "mkdir" into the main function and now it works.
Code:
--- LVM2.2.02.100/daemons/dmeventd/dmeventd.c.orig 2013-08-13 17:44:43.000000000 +0700
+++ LVM2.2.02.100/daemons/dmeventd/dmeventd.c 2014-12-09 10:53:24.744482356 +0700
@@ -1967,6 +1967,9 @@
if (setenv("LC_ALL", "C", 1))
perror("Cannot set LC_ALL to C");
+ /* Create rundir */
+ mkdir(DEFAULT_DM_RUN_DIR, 0700);
+
if (_restart)
restart();
I added an "ls" in "rc.S" for testing and it's ok now:
Code:
/run/lock/lvm/:
total 0
drwx------ 2 root root 40 Dec 9 18:16 .
drwx------ 3 root root 60 Dec 9 18:16 ..
/run/lvm/:
total 4
drwx------ 2 root root 100 Dec 9 18:16 .
drwxr-xr-x 6 root root 140 Dec 9 18:16 ..
prw------- 1 root root 0 Dec 9 18:16 dmeventd-client
prw------- 1 root root 0 Dec 9 18:16 dmeventd-server
-rw------- 1 root root 5 Dec 9 18:16 dmeventd.pid
Much appreciated; I've updated the patch and pushed it into my pending queue.