LinuxQuestions.org
Programming This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.

Old 02-23-2006, 10:42 AM   #1
Thinking
Member
 
Registered: Oct 2003
Posts: 249

Rep: Reputation: 30
fail-safe update program: how much of a problem is the file system?


hiho@ll

Situation:
A directory with many files (usually PHP scripts). The scripts are in use all the time: they get executed (mostly by command-line php) and exit a few seconds later. A script can also be executed several times concurrently, i.e. one script can be running more than once at any moment (with "ps" you would see many php processes using the same .php script).

So I have files that are read very often, all the time.

Problem:
Updating one or more of these scripts.

Questions:
Q1: How can I update the scripts without affecting the runtime environment at all (no failures; if that's not possible, see Q2)?
Q2: How can I update the scripts while affecting the runtime environment as little as possible (minimum failures)?
Q3: I think the biggest problem will be this:
script1.php is opened by a php process, which begins reading it; then the update process opens the same file for writing (I expect the update to be a simple cp -f).

Q3.1: Can the cp command update the script while the php process has it open for reading?
Q3.2: In which situations will cp -f fail to update it?

Once cp -f starts writing into the file, the php process may read a mixture of the old and the new script, or it may hit the end of the file early because cp -f hasn't finished yet (e.g. with a big file), and php will raise a parse error. So how can I be sure that the update (the copy) succeeds and the running scripts aren't affected? Hmm, I could use the "mv" command instead, which should be much faster, no? Or do I get the same problem there, i.e. a script can't be executed because it is currently being replaced by mv?

Goal:
A fast, fail-safe update program that can update many servers at the same time.
 
Old 02-23-2006, 11:14 AM   #2
marozsas
Senior Member
 
Registered: Dec 2005
Location: Campinas/SP - Brazil
Distribution: SuSE, RHEL, Fedora, Ubuntu
Posts: 1,508
Blog Entries: 2

Rep: Reputation: 68
I think a description of how Unix/Linux deals with this scenario will help answer your questions.

When a process opens a file for reading, the file's content goes into a buffer, and the process keeps reading from that buffer, not from the disk itself.

When a process opens a file for writing, the data is written into a buffer as well; when the process closes the file, the buffer is written back to disk.

If one process opens a file for reading while another opens the same file for writing, the reading process is not affected by the writer. As soon as the writing process closes the file, every read started after that point gets the new content.

So I think it is safe to install new versions of the php scripts "on the fly".

The description above is not valid for databases of any kind; it holds only for regular files.
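The guarantee that matters most for this scenario comes from inodes: a process that already has a file open keeps reading the old inode even after the name is pointed at a new file with mv. A minimal sketch you can run in any POSIX shell (file names are made up for the demo):

```shell
dir=$(mktemp -d)             # throwaway directory for the demo
echo "old" > "$dir/f"
exec 3< "$dir/f"             # a "running script" holds the file open on fd 3

echo "new" > "$dir/f.tmp"    # write the replacement under a temporary name
mv "$dir/f.tmp" "$dir/f"     # the name now points at a new inode

cat <&3                      # the already-open reader still sees: old
cat "$dir/f"                 # a fresh open sees: new
exec 3<&-                    # close fd 3
rm -rf "$dir"
```

The already-open descriptor keeps the old inode alive until it is closed, so the reader never sees a half-written mixture.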
 
Old 02-23-2006, 11:21 AM   #3
Thinking
Member
 
Registered: Oct 2003
Posts: 249

Original Poster
Rep: Reputation: 30
Sounds very good!

thx marozsas

but:

Q1: What about big files? Or, to ask it differently: how much of a file gets buffered, and what happens if the file is very big? (I don't actually have that problem, it's just out of interest.)
Q2: Would this mean it also works with rsync, so I don't need to do all of this myself? Or does rsync do something (for my specific scenario) that would cause a problem?

thx!!!
 
Old 02-23-2006, 11:01 PM   #4
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,415

Rep: Reputation: 2785
The trick is to write the new version to disk under a different name (e.g. new_<old_name>), then use the mv cmd (e.g. mv new_<old_name> <old_name>), which just changes the entry in the directory (the name-to-inode mapping). That takes a fraction of a second.
Always keep previous versions, e.g. <old_name>_YYYYMMDD, so you can back out a broken script using the same trick.
Use rsync to move the files between machines, but not for the renaming part.
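The recipe above as a shell sketch (directory and file names are made up for the example; a real deploy script would point at the live script directory):

```shell
dir=$(mktemp -d)                          # stand-in for the real script dir
echo 'old code' > "$dir/script1.php"      # the live script
echo 'new code' > "$dir/new_script1.php"  # freshly deployed version

# keep a dated copy so a broken script can be backed out later
cp "$dir/script1.php" "$dir/script1.php_$(date +%Y%m%d)"

# mv on the same filesystem only rewrites the directory entry:
# every php process sees either the old inode or the new one, never a mix
mv "$dir/new_script1.php" "$dir/script1.php"

cat "$dir/script1.php"                    # prints: new code
rm -rf "$dir"
```

Note that this only stays atomic if new_script1.php is on the same filesystem as the target; across filesystems, mv falls back to a copy-and-delete.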
 
Old 02-24-2006, 06:18 AM   #5
marozsas
Senior Member
 
Registered: Dec 2005
Location: Campinas/SP - Brazil
Distribution: SuSE, RHEL, Fedora, Ubuntu
Posts: 1,508
Blog Entries: 2

Rep: Reputation: 68
Quote:
Q1: What about big files? Or, to ask it differently: how much of a file gets buffered, and what happens if the file is very big? (I don't actually have that problem, it's just out of interest.)
The buffer lives on disk too, not only in RAM; it uses the swap space. Files that have been closed in the buffer are committed to the filesystem by the sync command. All disk operations go through the buffer, and the filesystem is very robust.
Even when overwriting a really big file that takes forever to finish, the "previous" content remains available until the very last microsecond before the operation is committed.

Quote:
Q2: Would this mean it also works with rsync, so I don't need to do all of this myself? Or does rsync do something (for my specific scenario) that would cause a problem?
None that I can imagine.
Well, in fact, I can think of one.
I am assuming these php scripts are standalone, or at least that, if they work together, the API does not change. I mean: a change in the logic of one of these scripts must not affect any other script that depends on it.
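On the rsync side: by default rsync writes each changed file under a temporary name and then renames it into place, which is the same mv trick, and --delay-updates postpones all those renames until the end of the transfer so the whole batch flips over together. A sketch with made-up local paths (a real deployment would use user@host:/path as the destination):

```shell
src=$(mktemp -d); live=$(mktemp -d)       # stand-ins for staging and live dirs
echo 'new version' > "$src/script1.php"
echo 'old version' > "$live/script1.php"

# each file is staged under a temporary name and renamed into place;
# --delay-updates holds all renames until the whole transfer is done
rsync -a --delay-updates "$src/" "$live/"

cat "$live/script1.php"                   # prints: new version
rm -rf "$src" "$live"
```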

Have faith in Linux, my brother

Cheers,
 
  



