Programming: This forum is for all programming questions.
The question does not have to be directly related to Linux, and any language is fair game.
Ok, so I'm a python newbie, and am attempting to code a file locking method in a class. The class is no big deal, the method is no big deal, and in fact, doing the actual file locking is no big deal, as it's fairly similar to how I would do it in perl.
My questions are more for the logic of the whole deal.
The easiest thing to do would be to simply say:
Code:
fcntl.flock(fileHandle, fcntl.LOCK_SH)
but my questions are these:
* How long does it try to get a lock?
* What happens if there is an error with the file or something else? How long will it attempt to gain the lock then?
* Could this possibly hang the code if there is some weird error with the file, or will it simply try for a set amount of time and then raise an IOError, or what?
I'd really hate to include a file locking mechanism in the code, and then have it freeze due to some error or another.
So, that leads me down THIS mindset:
Try to lock the file using LOCK_NB (which makes the call try once and, if it fails, return immediately by raising an IOError), and loop a set number of times. If it fails every time, produce an error message; otherwise, keep going as normal.
Quite frankly, I'm not even sure how to code this in python... I have a perl version of that very thing which works well, but I'm at a loss as to how to do this in python. Anyone feel like hookin' me up with some insight and/or code? I don't mind writing the code myself, but I want to make sure that I'm putting myself into the right python mindset. (=
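For what it's worth, the retry scheme described above might be sketched like this in python (a rough sketch, not from the original post; the function name, retry count, and sleep interval are all made up for illustration):

```python
import errno
import fcntl
import time

def lock_with_retries(file_handle, max_tries=10, delay=0.1):
    """Try to grab an exclusive lock, retrying a set number of times.

    Returns True once the lock is held, False if every attempt failed.
    """
    for _ in range(max_tries):
        try:
            # LOCK_NB makes flock() return immediately instead of blocking;
            # if the lock is held elsewhere it raises IOError/OSError.
            fcntl.flock(file_handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return True
        except IOError as exc:
            # EAGAIN/EWOULDBLOCK just mean "someone else holds the lock";
            # anything else is a real error worth propagating.
            if exc.errno not in (errno.EAGAIN, errno.EWOULDBLOCK):
                raise
            time.sleep(delay)
    return False
```

The caller would then do something like `if lock_with_retries(fh): ... else: report the error`, and release with `fcntl.flock(fh, fcntl.LOCK_UN)` when done.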
flock(fd, op)
Perform the lock operation op on file descriptor fd (file objects providing a fileno() method are accepted as well). See the Unix manual flock(2) for details. (On some systems, this function is emulated using fcntl().)
According to W. Richard Stevens, the fcntl() function called with F_SETLKW (the blocking version) waits 'until the lock can be granted', which could indeed be forever. I think you're better off going with the non-blocking option, like what you suggested.
Seems silly to me that the function would go "until a lock was granted", which could inevitably hang the program. Does this mean that people generally don't use locking, or that people simply accept the fact that their programs might hang, or what?
I think the 'until a lock can be granted' is important for the "must write" types of things - remember that file locking is for multiple processes, so the whole program shouldn't die (and theoretically you could have a 'monitor' process that handled timeout issues for hung children). Having the non-blocking option lets you work around that (in C, you would normally get back an error code return that told you WHY the call failed, to support error handling - my guess is that the Python return should do the same).
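On that last point about the error code telling you WHY the call failed: Python does carry the C errno through on the raised exception. A small demonstration (using two file objects on the same file in one process, which flock() treats as independent lock holders, to simulate two competing processes):

```python
import errno
import fcntl
import tempfile

# Two independent file objects on the same file are treated as separate
# lock holders by flock(), so this simulates two competing processes.
path = tempfile.mkstemp()[1]
first = open(path, "w")
second = open(path, "w")

fcntl.flock(first, fcntl.LOCK_EX)  # the first "process" takes the lock

try:
    fcntl.flock(second, fcntl.LOCK_EX | fcntl.LOCK_NB)
    lock_error = None
except IOError as exc:
    lock_error = exc.errno  # the errno tells you WHY the call failed

if lock_error in (errno.EAGAIN, errno.EWOULDBLOCK):
    print("lock refused: someone else already holds it")
```

EAGAIN/EWOULDBLOCK means the lock is simply held by someone else; any other errno would indicate a genuine error with the file, which is exactly the distinction the error-handling logic needs.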
Looks to me like os.open may be a better choice for you. Mind if I ask what this is for? I've rarely heard of people even using file locking in python, generally because there is a better solution.
The reason for the file locking is for dealing with text files from a web script. If one opens the file and writes to it, I don't want to have to worry about overwriting the changes from another copy of the same script.
I'm curious as to the better solution, and in the meantime, I'll look at os.open. (=
--- edit ---
Looks like os.open simply handles the opening and closing of files without the buffering and overhead of the built-in open function. Not really what I need. (=
Last edited by TheLinuxDuck; 08-04-2003 at 05:00 PM.
Originally posted by TheLinuxDuck: Strike: Better solution? What is the better solution?
The reason for the file locking is for dealing with text files from a web script. If one opens the file and writes to it, I don't want to have to worry about overwriting the changes from another copy of the same script.
The better solution is often: don't use files in this way. Either design the system so that one can't possibly have two requests to the same file at the same time, or don't use files for storing the data.
Originally posted by Strike The better solution is often: don't use files in this way. Either design the system so that one can't possibly have two requests to the same file at the same time, or don't use files for storing the data.
I haven't a clue how one would go about designing the system so that two requests don't come at the same time. To me, that sounds more complicated and impractical than file locking. I would appreciate some insight into this.
TLD: the 'better solution' IMO is to use a database backend for your web data. This will allow you to leverage the database's locking, etc. and focus on your presentation of data to the outside world. Python has built-in database capability (shelves or something) as well as interfaces to full-featured RDBMSes like PostgreSQL.
Hmm.. I'm just thinking that it's not very practical to put all data into a database. For example, keeping a config file for a script in a database isn't very practical... and it is possible to have web-based editing of that config file. Another thing is html code. Though I could possibly conceive of putting the html code into a database, that just seems inappropriate to me, especially when html code could be edited either via a web-based front-end or via the command line (which, of course, defeats locking anyway). I just don't see using a DB as a viable option for every type of data.
Bear in mind that my system already will be heavily implementing MySQL DBs for a lot of data, so using a DB isn't a foreign concept to me.. as I said, I just see certain things as being inappropriate for a DB.
Gotcha - makes sense. By the way, I looked up the fcntl.flock() in Programming Python this morning @ home, and the write-up there confirms what I got from Stevens - the blocking version could in theory wait forever for the lock to be available. Also, the scheme is that readers (shared locks) have preference over writers (exclusive locks).
Second thought - why would putting HTML into the database be so bad? Large text chunks are supported by most (if not every) major DB, and it does solve the file-locking and (for fully ACID compliant DBs) any conflicts over editing order.
I agree that thinking of config files and HTML files as chunks that go into a database is a bit nontraditional and perhaps it might even seem wrong, but when you think about it, it isn't all that weird. Stuka already hit all the major points: it solves the file locking problem, and there are datatypes that support huge text chunks (even huge binary chunks, or BLOBs). There were two more things I wanted to mention that it does for you: it tends to make things a bit faster (the database can cache requests and whatnot), and it reduces the amount of thinking you have to do (you don't have to remember how file locking works, and you are already doing things with a database, so you will already have that domain covered). Plus, file locking is nasty and low-level, whereas database querying is much more high-level.
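To make the "documents as database rows" idea concrete, here is a minimal sketch using the sqlite3 module from the modern Python standard library (which postdates this thread; the table and column names are made up for illustration):

```python
import sqlite3

# Hypothetical schema for illustration: one row per stored document.
conn = sqlite3.connect(":memory:")  # a real file path in actual use
conn.execute("CREATE TABLE documents (name TEXT PRIMARY KEY, body TEXT)")

# Concurrent writers are serialized by the database itself,
# so no explicit flock() calls are needed.
conn.execute(
    "INSERT OR REPLACE INTO documents (name, body) VALUES (?, ?)",
    ("index.html", "<html><body>Hello</body></html>"),
)
conn.commit()

body = conn.execute(
    "SELECT body FROM documents WHERE name = ?", ("index.html",)
).fetchone()[0]
```

The point is that the last-write-wins and lock-timeout headaches move out of your script and into the database engine.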
Ok, I can see putting html files into a database, but that of itself opens up a can of worms: Having a frontend to edit/update/delete/etc html documents in the database.
Writing that is much more complex than writing a file locking system and using the text/html-editing capabilities of the many fine pieces of software around. But having to worry about locking the html files isn't such a big deal, because the chances of me implementing a web-based frontend for it are very slim. (= Especially in this particular case, since the htmldocs are being created in Dreamweaver.
Quote:
strike said: you don't have to remember how file locking works.... Plus, file locking is nasty and low-level...
Your statements confuse me a bit. I say this because file locking is very easy to do, if you don't worry about the potential 'hang issue' (and the potential for a lock to be lost, as I've seen some online docs indicate is possible). I mean, what could be simpler than opening a file and then calling flock on that filehandle? And as I do in a custom perl package, the openFile/editFile methods simply do the locking for me, so it's very simple and easy to use. Besides, I would consider building query strings much nastier than writing a file locking implementation.
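A Python analogue of that perl openFile/editFile wrapper might be a context manager, so callers never touch flock() directly (a hypothetical helper, not from the original post; `locked_open` is a made-up name):

```python
import fcntl
from contextlib import contextmanager

@contextmanager
def locked_open(path, mode="r"):
    """Open a file and hold an advisory lock for the duration of the block.

    Hypothetical helper, analogous to the perl openFile/editFile wrappers:
    callers never have to touch flock() directly.
    """
    fh = open(path, mode)
    try:
        # Shared lock for plain reads, exclusive lock for anything else.
        lock_type = fcntl.LOCK_SH if mode == "r" else fcntl.LOCK_EX
        fcntl.flock(fh, lock_type)
        yield fh
    finally:
        fcntl.flock(fh, fcntl.LOCK_UN)
        fh.close()
```

Usage would look like `with locked_open("app.conf", "a") as fh: fh.write("key = value\n")`, and the lock is released even if the body raises.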
As for DB caching and such as a benefit, the file system also caches.
I hope that I am not simply being closed-minded, but I don't really see the benefit in the case of config files, and possibly html docs.