LinuxQuestions.org
Slackware This Forum is for the discussion of Slackware Linux.

Old 11-04-2023, 04:29 PM   #31
Didier Spaier
LQ Addict
 
Registered: Nov 2008
Location: Paris, France
Distribution: Slint64-15.0
Posts: 11,058

Rep: Reputation: Disabled

Quote:
Originally Posted by Petri Kaukasoina View Post
Both cleaning /tmp with rm in /etc/rc.d/rc.S and using a tmpfs get rid of stale xfce ICE sockets only if you reboot.
Code:
23:08:07 up 562 days,  7:58,  8 users,  load average: 0.00, 0.00, 0.00
Correct, however... how many people run xfce on a machine that has been up for more than a year?

[ot]: you made me realize that in my VPS (server without X) I still have .ICE-unix and .X11-unix in /tmp. I will investigate why[/ot].

EDIT: created by /etc/rc.d/rc.S. Well, these empty directories won't hurt, I assume, though I do not need them.

Last edited by Didier Spaier; 11-04-2023 at 05:12 PM.
 
Old 11-04-2023, 05:27 PM   #32
henca
Member
 
Registered: Aug 2007
Location: Linköping, Sweden
Distribution: Slackware
Posts: 969

Rep: Reputation: 656
Quote:
Originally Posted by Didier Spaier View Post
however... how many people run xfce on a machine that has been up for more than a year?
(Raises hand)

Well, the machine I am using xfce on while writing this has an uptime of only 177 days since the last power outage, but I have run xfce on machines with uptimes of more than a year. I have also run xfce on machines with long uptimes where different users were logged in to different sessions on different virtual terminals; that is, the first user gets DISPLAY :0 on virtual terminal 7, the second user gets DISPLAY :1 on virtual terminal 8, and the third user gets DISPLAY :2 on virtual terminal 9.
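That multi-user layout maps directly onto socket names under /tmp: each display :N listens on /tmp/.X11-unix/XN, while ICE sockets in /tmp/.ICE-unix are named after the PID of the session that created them. A small sketch of the display-to-socket mapping (the helper function is mine, not from the thread):

```shell
# Each X display :N has a UNIX socket /tmp/.X11-unix/XN.  ICE sockets
# are instead named after the creating session's PID, which is what
# makes PID wraparound dangerous.
x_socket() {          # $1 = display number (0, 1, 2, ...)
  echo "/tmp/.X11-unix/X$1"
}
x_socket 0            # the first user's display :0 on vt7
x_socket 2            # the third user's display :2 on vt9
```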

But it doesn't take an entire year for the PIDs to wrap around on my machine:

Code:
root       289  0.0  0.0      0     0 ?        S<   May11   0:00 [bioset]
root       290  0.0  0.0      0     0 ?        S<   May11   0:00 [bioset]
henca      474  0.0  0.0 211688  7948 ?        S    Aug27   0:21 orage
henca      601  0.0  0.0  90528  1848 ?        S    Oct27   0:01 xterm -class UXTerm -title uxterm -u8
root       617  0.0  0.0  37964   680 ?        Ss   May11   0:01 /sbin/udevd --daemon
henca      630  0.0  0.0  21908   476 pts/98   Ss+  Oct27   0:00 -tcsh
root       682  0.0  0.0      0     0 ?        S<   May11   0:00 [kpsmoused]
Looking for some more processes I find:

Code:
henca     7051  0.0  0.0  87720   516 ?        S    Jul19   0:00 xterm -class UXTerm -title uxterm -u8
henca     7065  0.0  0.0  21440   324 pts/52   Ss+  Jul19   0:00 -tcsh
henca     7245  0.0  0.0  89700  2348 ?        S    Jul09   0:01 xterm -class UXTerm -title uxterm -u8
henca     7251  0.0  0.0 419924   404 ?        Sl   Aug12   0:00 /usr/libexec/gvfsd-burn --spawner :1.16 /org/gtk/gvfs/exec_spaw/2
Looking at PID 7065 created July 19 and PID 7245 created July 09, it seems as if the PIDs might need no more than 10 days to wrap around. So how many times do you think the PID counter has wrapped since I started xfce?

Code:
henca    24902  0.0  0.0 258688  3652 ?        Sl   May11   0:16 xfce4-session
It doesn't happen often, but sometimes a user gets thrown out when trying to log in to X (with kdm or tdm). At a second attempt it usually works. I hadn't thought much about this, but now I think I understand what the problem is, thanks to selfprogrammed.
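How fast PIDs wrap depends on the kernel's PID ceiling and the system's fork rate; on Linux the ceiling can be read (and raised) via sysctl. A quick check, assuming a Linux /proc:

```shell
# kernel.pid_max is the value after which PIDs wrap back to low numbers.
# Raising it (e.g. sysctl -w kernel.pid_max=4194304) makes collisions
# with stale /tmp/.ICE-unix/<pid> sockets rarer, but cannot prevent them.
pid_max=$(cat /proc/sys/kernel/pid_max)
echo "PIDs wrap around after reaching $pid_max"
```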

regards Henrik
 
Old 11-04-2023, 05:49 PM   #33
rkelsen
Senior Member
 
Registered: Sep 2004
Distribution: slackware
Posts: 4,456
Blog Entries: 7

Rep: Reputation: 2560
So we've established that this is an XFCE bug?

My main box here has 11 files in /tmp/.ICE-unix after being rebooted only a few times in the last 18 months.

I don't use XFCE, so OP's problem doesn't happen here.
 
Old 11-05-2023, 03:27 AM   #34
Petri Kaukasoina
Senior Member
 
Registered: Mar 2007
Posts: 1,794

Rep: Reputation: 1473
Quote:
Originally Posted by henca View Post
it seems as if the PIDs might need no more than 10 days to wrap around.
On my main box it takes less than 4 hours to wrap around. (cron runs a sh script examining what machines are up in the local network...)
 
1 members found this post helpful.
Old 11-05-2023, 04:02 AM   #35
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 21,879

Rep: Reputation: 7317
If true, this rests on a bunch of assumptions, each of which can be wrong:
1. /tmp is cleaned up automatically upon reboot (sometimes it is not)
2. /tmp/.ICE-unix and/or /tmp/.X11-unix is removed when the X session terminates (sometimes it is not)
3. there are no "alien" /tmp/.ICE-unix and/or /tmp/.X11-unix files/dirs in /tmp (sometimes there is something)
4. we have only one X session
5. we can probably add the assumption that PIDs are always increasing

None of them causes this error by itself; it only appears when all of these assumptions fail together => do not assume anything.

By the way, this is not only Slackware-related.

Last edited by pan64; 11-05-2023 at 04:34 AM.
 
Old 11-05-2023, 05:21 AM   #36
henca
Member
 
Registered: Aug 2007
Location: Linköping, Sweden
Distribution: Slackware
Posts: 969

Rep: Reputation: 656
Quote:
Originally Posted by rkelsen View Post
So we've established that this is an XFCE bug?

My main box here has 11 files in /tmp/.ICE-unix after being rebooted only a few times in the last 18 months.

I don't use XFCE, so OP's problem doesn't happen here.
The bug in XFCE is that it doesn't even attempt to clean up the sockets it created at exit. Even well written desktop environments might leave such sockets behind if they are unable to exit cleanly. Examples of such cases are:
  • A power outage
  • Someone killing the process with "kill -KILL" or "kill -9"
  • The process bugs out with something like a segfault

The fix in rc.S will handle the situation with unclean shutdowns like a power outage.

Since XFCE always leaves those sockets behind, I blame XFCE.

If some other desktop environment like KDE segfaulted and left those sockets behind, I would blame that desktop environment for the bug that caused the segfault.

If someone kills a process with "kill -9" giving it no possibility to clean up I would blame that someone.

A power outage might break things like file systems and data consistency; having a UPS might be useful.

If some user presses the reset or power button on a computer I would blame that user for sockets left behind.
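Henrik's point about unclean exits is why the boot-time cleanup works no matter who is to blame: at boot no session exists yet, so everything under the socket directories is stale by definition. A minimal sketch of such a cleanup (the helper function and its directory argument are my own, not the actual rc.S code):

```shell
# Boot-time sketch: wipe both socket directories and recreate them
# world-writable with the sticky bit, as X and ICE clients expect.
# The base-directory argument exists only so the function is testable.
clean_x_tmp() {
  base=${1:-/tmp}
  for d in "$base/.ICE-unix" "$base/.X11-unix"; do
    rm -rf "$d"
    mkdir -p "$d"
    chmod 1777 "$d"
  done
}
clean_x_tmp "$(mktemp -d)"   # demo run against a scratch directory
```

At boot this is safe precisely because nothing can be holding the sockets; the same wholesale removal would be wrong while sessions are running.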

regards Henrik
 
2 members found this post helpful.
Old 11-05-2023, 05:29 AM   #37
Petri Kaukasoina
Senior Member
 
Registered: Mar 2007
Posts: 1,794

Rep: Reputation: 1473
On one machine here there were lots of sockets in /tmp/.ICE-unix. I created the file /etc/cron.hourly/ice and made it executable with chmod +x /etc/cron.hourly/ice.
Code:
#!/bin/sh
find /tmp/.ICE-unix -maxdepth 1 -type s ! -newerat now-30mins ! -exec fuser -s '{}' \; -delete
Cron runs it once an hour; it deletes sockets in /tmp/.ICE-unix that are older than 30 minutes and not currently in use. It removed all the stale sockets and left the one xfce was using.
 
4 members found this post helpful.
Old 11-05-2023, 06:17 AM   #38
rkelsen
Senior Member
 
Registered: Sep 2004
Distribution: slackware
Posts: 4,456
Blog Entries: 7

Rep: Reputation: 2560
Quote:
Originally Posted by henca View Post
The bug in XFCE is that it doesn't even attempt to clean up the sockets it created at exit.
So that's really the heart of the OP's problem. XFCE should do something about this; although more than four solutions have been posted in this thread, each of them only targets the symptoms.

The reason I've got a few stale sockets lying around is most probably that I've had a few situations where I was forced to do the three-finger salute (Ctrl-Alt-Backspace) to restart X... pushing the limits of my hardware...
 
Old 11-05-2023, 09:12 AM   #39
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 21,879

Rep: Reputation: 7317
Quote:
Originally Posted by Petri Kaukasoina View Post
On one machine here there were lots of sockets in /tmp/.ICE-unix. I created the file /etc/cron.hourly/ice and made it executable with chmod +x /etc/cron.hourly/ice.
Code:
#!/bin/sh
find /tmp/.ICE-unix -maxdepth 1 -type s ! -newerat now-30mins ! -exec fuser -s '{}' \; -delete
Cron runs it once an hour; it deletes sockets in /tmp/.ICE-unix that are older than 30 minutes and not currently in use. It removed all the stale sockets and left the one xfce was using.
The correct way would be to check whether they are really in use. That is not a problem if you have only one living session (or just a few); otherwise it probably won't work properly. And I don't know whether it answers post #7.

I would rather try to check whether a PID is already in use and take the next one, though that is not easily implementable. Another approach would probably be to chroot all the X environments, so that each has its own tmp dir.
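pan64's first idea can at least be probed from a script: an ICE socket carries its creator's PID in its name, so one can test whether that PID still exists. A sketch (the helper and the example path are mine; kill -0 only checks existence and sends no signal):

```shell
# Test whether the PID embedded in an ICE socket name still exists.
# This is only a heuristic: a PID owned by another user can report
# failure even though it is alive, and a recycled PID looks "alive"
# even though the original owner is long gone -- which is exactly
# the wraparound bug under discussion.
pid_alive() {
  kill -0 "$1" 2>/dev/null
}
sock=/tmp/.ICE-unix/12345        # hypothetical stale socket
if ! pid_alive "${sock##*/}"; then
  echo "$sock looks stale"
fi
```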
 
Old 11-05-2023, 09:56 AM   #40
allend
LQ 5k Club
 
Registered: Oct 2003
Location: Melbourne
Distribution: Slackware64-15.0
Posts: 6,374

Rep: Reputation: 2750
Could not resist passing on the conclusion from an old Fedora thread
Quote:
15th November 2011, 09:48 AM
It isn't a bug.

IF you want it erased on boot, then mount it as tmpfs.

But you do have to remember, some jobs (usually just user jobs) will use /tmp and expect files to be there when they resume.

But it is up to you.

It is a bug in libICE which is masked by clearing /tmp/. See my original post.
 
1 members found this post helpful.
Old 11-05-2023, 10:01 AM   #41
Petri Kaukasoina
Senior Member
 
Registered: Mar 2007
Posts: 1,794

Rep: Reputation: 1473
Quote:
Originally Posted by Petri Kaukasoina View Post
Code:
find /tmp/.ICE-unix -maxdepth 1 -type s ! -newerat now-30mins ! -exec fuser -s '{}' \; -delete
Quote:
Originally Posted by pan64 View Post
The correct way would be to check if they are really in use. It is not a problem if you have only one [living] session (or just a few), otherwise [probably] it won't work properly.
I don't know whether your comment was about my one-liner or about how libICE.so should be used in xfce. Anyway, that one-liner does check whether the socket is really in use, via the 'fuser' command, and does not depend on how many sessions there are.

You can see all active UNIX domain sockets with 'netstat -apAunix'; for example, 'netstat -apAunix | grep /tmp/.X11-unix' will show the corresponding streams and the owner of the socket (PID and program name). 'fuser /tmp/.X11-unix/X0' will show the PID as well.

'netstat -lpAunix' will show only the listening sockets.
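Putting those commands together, a script can classify each socket the same way the cron one-liner does (the function name and its directory argument are mine, for illustration):

```shell
# List each ICE socket and whether any process still has it open,
# using fuser exactly as the cron job does.
report_ice_sockets() {
  dir=${1:-/tmp/.ICE-unix}
  for s in "$dir"/*; do
    [ -S "$s" ] || continue        # skip non-sockets and empty globs
    if fuser -s "$s" 2>/dev/null; then
      echo "$s: in use"
    else
      echo "$s: stale"
    fi
  done
}
report_ice_sockets
```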

Last edited by Petri Kaukasoina; 11-05-2023 at 10:51 AM.
 
1 members found this post helpful.
Old 11-05-2023, 04:19 PM   #42
rkelsen
Senior Member
 
Registered: Sep 2004
Distribution: slackware
Posts: 4,456
Blog Entries: 7

Rep: Reputation: 2560
Quote:
Originally Posted by allend View Post
Could not resist passing on the conclusion from an old Fedora thread
Quote:
It is a bug in libICE which is masked by clearing /tmp/. See my original post.
The hole gets deeper.

If it's a bug in libICE, then why does it only happen to XFCE users?
 
Old 11-06-2023, 05:34 AM   #43
commandlinegamer
Member
 
Registered: Dec 2007
Posts: 163

Rep: Reputation: 51
Quote:
Originally Posted by volkerdi View Post
X does clean them up with a proper exit, but they are left behind with a crash or exit via keyboard zap.
Interesting - I often close X using Ctrl-Alt-BkSpace before shutting down the machine.


FWIW, I added lines to rc.local to remove the ICE-unix sockets a couple of years back, because occasionally logging in failed on the first attempt.
 
Old 11-06-2023, 05:48 PM   #44
selfprogrammed
Member
 
Registered: Jan 2010
Location: Minnesota, USA
Distribution: Slackware 13.37, 14.2, 15.0
Posts: 635

Original Poster
Rep: Reputation: 154
I love the discussion here. This is what I had hoped for: a solution for the system, not just for the OP.

*** tmpfs
As a solution for the system, this does not pass even preliminary consideration. It merely misuses a side effect of tmpfs to cover the problem up further. In addition, it ties up main memory without getting any benefit from that memory. If the same clearing at shutdown were accomplished by any other means, it would serve equally well (badly, but as well as tmpfs does). The one-liner that clears /tmp/.ICE-unix does exactly that, and its cost is so low as to be incomparable.

*** Do we blame XFCE4, or Xorg?
I run XFCE4.
I shut down almost exclusively using the desktop menu shutdown buttons, so a three-finger-salute shutdown cannot be blamed.
I got many stale sockets, in spite of shutting down the system from the desktop.
I found reports from Ubuntu and OpenBSD with identical symptoms, so the same thing must be happening there.

The startxfce4 script, as its last action, starts the /etc/xdg/xfce4/xinitrc script.
That script, as its last action, launches xfce4-session.
Neither script does anything after launching, as if they never expect to get the chance to finish or clean up.

From my reading of the rc.6 script, I doubt that XFCE4 is being given the chance to shut down cleanly.
The rc.6 script just kills any process that is left. A process killed that way does not get a chance to run any cleanup script (AFAIK). There might be a way to trigger a run-this-when-I-die handler, but I doubt any such thing has been set up (AFAIK).

I think part of the problem is that the desktop is never killed cleanly earlier in the sequence. It seems (not verified) to be left to the mass extinction of all processes that happens in the rc scripts, and that leaves a few things uncleaned, like these stale sockets. MAYBE. There are many scripts and much code not yet examined.
But this is probably what Slackware, Ubuntu and OpenBSD have in common.

Does that desktop menu shutdown actually shut down any part of Xorg?
That Xorg can shut down cleanly is evident from its log.
I have some doubt that any of the scripts or services that started Xorg is ever given a clean shutdown.

I did verify that it is Xorg that creates the ICE socket. I do not know whether there is anywhere in Xorg that would remove it; that could be quite a search to find.
Because Xorg creates the socket, I would look to Xorg to clean it up afterwards.
Xorg does log shutdown events, so it does manage to run a shutdown. That may be the only good place to attach a fix tied specifically to desktop shutdown. It is probably hard-coded rather than scripted, so not very easy to modify or patch.

The question is whether there is anywhere in the XFCE scripts that a shutdown cleanup could be attached. Is XFCE even given any chance to clean up after itself?

There are many ways to shut down the system, and some of them are by their nature unclean.
Cleanup of the ICE and X11 directories will therefore have to exist; a few lines in the rc scripts suffice for that.

I think what we are discussing is how much further we can get in making XFCE4 or Xorg clean up after themselves, too.

Whether, and when, the upstream Xorg maintainers will take any action is questionable.


*** My script
My script seems to work, but where I called it from was wrong.
It does not work when called from rc.K (misled by the name, I put it there, but it never gets run from there).
It needs to be called from rc.6; being called right after all the processes are killed gives the best effect. On my last shutdown it removed ALL the stale sockets.
I have seen no harm when it is called arbitrarily from the command line.

I post this update only as an alternative for the discussion. Some people may need the script, for example run weekly by cron (or, by the sound of it, much more often than that).
To some degree, the choice of solution may be driven by which one interferes least with the users' needs. This is one reason I oppose tmpfs: it wastes resources.


The patch to rc.6:
Code:
--- orig-sw15.0/rc.6	2022-06-18 12:02:59.000000000 -0500
+++ rc.6	2023-11-05 20:53:50.172932689 -0600
@@ -208,8 +208,14 @@
   echo "Sending all processes the SIGKILL signal."
   /sbin/killall5 -9 $OMITPIDS
 fi
 
+# Clean up old sockets:
+# This is most effective if done after Xorg has been killed.
+if [ -x /etc/rc.d/rc.clean_sockets ]; then
+  /etc/rc.d/rc.clean_sockets
+fi
+
 # Try to turn off quota.
 if /bin/grep -q quota /etc/fstab ; then
   if [ -x /sbin/quotaoff -a -z "$container" ]; then
     echo "Turning off filesystem quotas."

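The patch invokes /etc/rc.d/rc.clean_sockets, which the post does not show. A minimal sketch of what such a script might contain (my guess at its behavior, not the actual script): since rc.6 runs it immediately after killall5 -9, no process can still hold any of these sockets, so everything can go.

```shell
#!/bin/sh
# Hypothetical /etc/rc.d/rc.clean_sockets (the real one is not shown in
# the post).  Runs right after killall5 -9, so every socket left in
# these directories is orphaned by construction.
clean_sockets() {                 # directories passed as arguments
  for dir in "$@"; do
    [ -d "$dir" ] && rm -f "$dir"/*
  done
  return 0
}
clean_sockets /tmp/.ICE-unix /tmp/.X11-unix
```

Unlike the boot-time approach, this keeps the directories themselves (and their permissions) intact and removes only their contents.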
*** I wish to thank everyone for their attention to this problem.
I will shortly be buried in getting out a software release. I will try to continue investigating as time permits.

Last edited by selfprogrammed; 11-06-2023 at 10:31 PM.
 
Old 11-07-2023, 12:06 AM   #45
_peter
Member
 
Registered: Sep 2014
Location: paris
Distribution: slackware
Posts: 314

Rep: Reputation: Disabled
this is from 2009-07-01 11:03:33 UTC
https://gitlab.freedesktop.org/xorg/...ice/-/issues/1
 
1 members found this post helpful.
  

