LinuxQuestions.org
Old 06-26-2011, 05:15 PM   #1
taylorkh
Senior Member
 
Registered: Jul 2006
Location: North Carolina
Distribution: CentOS 6, CentOS 7 (with Mate), Ubuntu 16.04 Mate
Posts: 2,127

Rep: Reputation: 174
How to close "screen" session from a script?


I am using screen to start a LONG running script on my server over an ssh connection. This works fine. I can see that the script is continuing to run after I disconnect from the screen session using Ctrl-A d. When the script is complete I can reconnect to the screen session and manually terminate it.

I would like to be able to terminate the screen session at the end of the script. I tried issuing an exit command at the end of the script. That did NOT work. Any ideas?

TIA,

Ken
 
Old 06-26-2011, 05:18 PM   #2
Tinkster
Moderator
 
Registered: Apr 2002
Location: earth
Distribution: slackware by choice, others too :} ... android.
Posts: 23,067
Blog Entries: 11

Rep: Reputation: 928
You could tack a "kill -9 PID" onto the end of your script if you
can uniquely identify the screen instance you're firing up :}


Cheers,
Tink

Last edited by Tinkster; 06-26-2011 at 05:19 PM.
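Tink's suggestion might be sketched as below. This is a dry run that just prints the two commands, since the session name "wipe" and the script path are placeholders: with -dmS the session gets a unique, findable name, and a quit command as the last line of the script kills exactly that instance, so no PID hunting is needed at all.

```shell
#!/bin/sh
# Assumed names: session "wipe", script /path/to/wipe.sh.
SESSION=wipe
SCRIPT=/path/to/wipe.sh

# -d -m: start screen detached; -S: give the session a findable name
START="screen -dmS $SESSION $SCRIPT"

# put this as the LAST line of wipe.sh so the session ends itself
STOP="screen -S $SESSION -X quit"

printf '%s\n%s\n' "$START" "$STOP"
```

Note that when screen is started with a command like this, the session normally ends on its own as soon as the command exits, so the explicit quit is mostly a belt-and-braces step.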
 
Old 06-26-2011, 05:45 PM   #3
taylorkh
Original Poster
Thanks Tinkster,

I think that would be rather easy with one script running. My plan is to initiate 4 parallel scripts, so I will have to do some investigation. If I KNEW which script would end last, I could have it iterate and kill all "screen" processes. Or I could run the scripts serially; I am not sure which would be faster. Basically I am using dd if=/dev/urandom of=(partition I want to clear)/bigfile to do a quick and dirty wipe of free space. Well, at least quick compared to using sfill -l -l, which I computed would take 12 DAYS! I guess I need to fill up more of the 5 TB of storage on my server with crap so I have less free space to wipe.

Ken
 
Old 06-26-2011, 07:06 PM   #4
Tinkster
Moderator
You could conceivably fire off all processes, check them in a while loop with
a 1-second sleep, and kill screen once they're all gone ....
something like
Code:
/path/to/script1 &
/path/to/script2 &

...
# after the last of the four
while pgrep "script1|script2|..." >/dev/null 2>&1
do
  sleep 1
done
# all of the scripts have exited; now kill the screen session
pkill -9 SCREEN

No guarantee for completeness above ;D


Cheers,
Tink
 
Old 06-26-2011, 07:55 PM   #5
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,358

Rep: Reputation: 2751
An alternative to the above would be to utilise the $! special variable (http://tldp.org/LDP/abs/html/internalvariables.html) and track the actual PIDs of the created processes.
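A minimal sketch of the $! approach, with short sleeps standing in for the four wipe scripts (the script paths and the session name in the final comment are placeholders):

```shell
#!/bin/sh
# $! expands to the PID of the most recently backgrounded job,
# so each PID can be collected as its job is launched.
pids=""
for n in 1 2 3 4; do
    sleep 1 &                # placeholder for /path/to/scriptN
    pids="$pids $!"
done

# wait blocks until each specific PID exits
for pid in $pids; do
    wait "$pid"
done

done_msg="all jobs finished"
echo "$done_msg"
# ...and only now tear down the screen session, e.g.:
# screen -S wipe -X quit    # "wipe" is an assumed session name
```

Unlike the pgrep loop, this never mismatches on an unrelated process with a similar name, because it waits on the exact PIDs it started.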
 
Old 06-27-2011, 08:40 AM   #6
taylorkh
Original Poster
Thanks folks! I allowed two processes to run overnight (14 hours) and based on the progress they made I think I will have to run sequentially. That will make things easier.

Ken
 
Old 06-27-2011, 12:25 PM   #7
Tinkster
Moderator
Out of curiosity: is this path to be wiped on an external USB device?
 
Old 06-27-2011, 01:47 PM   #8
taylorkh
Original Poster
In answer to your curiosity... The system consists of:

A Dell Poweredge 400 SC server (Pentium 4; 2.33 GHz; 3 GB RAM) with the following hard drives (all SATA):

Western Digital Caviar Black 1 TB
Western Digital Caviar Green 1 TB
Western Digital Caviar Green 1 TB
Western Digital Caviar Green 2 TB

The green drives are low power consumption and a little slow, but not that slow. I think the limiting factor is the CPU, which dd consumes at 100%.

I have had better results creating a file of, say, 1 GB with dd if=/dev/urandom and then copying that 1 GB file to the drive to be wiped from within a loop (cp to an incremented file name each time) until it runs out of space.

Ken
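The copy loop Ken describes might look like this sketch. The paths, file names, and the 3-copy demo cap are assumptions; on the real box the loop would simply run until cp fails with "no space left on device":

```shell
#!/bin/sh
# One urandom-filled seed file, created once (the slow part)...
SEED=/tmp/random.seed
DEST=/tmp/wipe-demo          # stands in for the partition being wiped
mkdir -p "$DEST"
dd if=/dev/urandom of="$SEED" bs=1M count=1 2>/dev/null

# ...then cheap copies under incrementing names until the disk fills.
i=0
while cp "$SEED" "$DEST/bigfile.$i" 2>/dev/null; do
    i=$((i + 1))
    [ "$i" -ge 3 ] && break  # demo cap; remove to run until ENOSPC
done
echo "wrote $i copies"
```

The point of the split is that urandom is only read once; the rest of the fill is plain disk-to-disk copying, which doesn't peg the CPU.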
 
Old 06-27-2011, 02:16 PM   #9
Tinkster
Moderator
You could (to help the CPU) use a decent blocksize with dd ...

dd if=/dev/urandom of=/... bs=4096


Of course the problem may be the urandom usage.


[edit]
Scrap that - slow as. From urandom's man page:
Quote:
The kernel random-number generator is designed to produce a small
amount of high-quality seed material to seed a cryptographic pseudo-
random number generator (CPRNG). It is designed for security, not
speed, and is poorly suited to generating large amounts of random data.
Users should be very economical in the amount of seed material that
they read from /dev/urandom (and /dev/random); unnecessarily reading
large quantities of data from this device will have a negative impact
on other users of the device.
So really, you probably just want to generate a 4096- or 2048-
byte file, and slap that over the devices in the loop ;}
[/edit]



Cheers,
Tink

Last edited by Tinkster; 06-27-2011 at 02:24 PM.
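Tink's small-seed idea could be sketched like this: read urandom exactly once for a 4096-byte seed, then append that block over and over, which keeps the slow kernel RNG out of the hot loop entirely (the file names and the 4-iteration demo cap are assumptions):

```shell
#!/bin/sh
# One 4 KiB read from urandom -- the only expensive step.
dd if=/dev/urandom of=/tmp/seed.bin bs=4096 count=1 2>/dev/null

# Append the same block repeatedly; in practice the loop would
# run until the target filesystem is full.
: > /tmp/fill-demo
for n in 1 2 3 4; do
    cat /tmp/seed.bin >> /tmp/fill-demo
done
echo "filled $(wc -c < /tmp/fill-demo | tr -d ' ') bytes"
```

The trade-off is that the output is a repeating 4 KiB pattern rather than unique random data, which is fine for a quick-and-dirty free-space wipe but weaker than a full random overwrite.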
 
Old 06-27-2011, 06:05 PM   #10
taylorkh
Original Poster
Thanks! Good point. urandom is really slow. I had at one time played with copying various-size files of random data to fill free space. Larger files seemed to be faster.

Ken
 
  

