Linux - General
This Linux forum is for general Linux questions and discussion. If it is Linux related and doesn't seem to fit in any other forum, then this is the place.
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
I am trying to figure out a totally odd behavior of the ext3 filesystem on Ubuntu 9.10.
There is a Korn shell script, part of which does the following in a loop:
while ((1)); do
    mv dir1/file dir2
    if [[ ! -r dir2/file ]]; then
        echo "ERROR"
        ls -l dir1/* dir2/*
        exit 1
    else
        echo "OK"
    fi
done
Since dir2/file should always exist after the mv, and I do not run the mv asynchronously with "&", the script should never hit the "ERROR" branch. The odd thing is that it does, and quite randomly (no pattern at all). Even stranger, when it does hit the ERROR case, the ls -l output shows that the file is in dir2 and is readable! I tried using the "-e" test instead of "-r" - no luck.
I have never seen anything like this in 10 years of programming. The same script works fine on Fedora 11, yet it fails on Ubuntu.
Any ideas on how to solve this would be greatly appreciated.
It may be that the filesystem is telling you the write is done when it is merely scheduled, and your error check runs before the data has actually been synced to disk (i.e. it is still in the buffer).
Try adding a "sleep 1" between the mv line and the if line so the script pauses a second before checking for the file's existence.
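A bounded sketch of that suggestion (the directory names and the three-iteration cap are made up for illustration; the real script loops forever):

```shell
#!/bin/sh
# Demo: move a file between two directories, pausing before the check
# so a buffered write has time to settle. Paths are illustrative only.
mkdir -p demo_dir1 demo_dir2
touch demo_dir1/file

i=0
while [ "$i" -lt 3 ]; do
    mv demo_dir1/file demo_dir2/
    sleep 1                        # give the filesystem a moment
    if [ ! -r demo_dir2/file ]; then
        echo "ERROR"
        exit 1
    fi
    echo "OK"
    mv demo_dir2/file demo_dir1/   # reset for the next pass
    i=$((i + 1))
done
rm -rf demo_dir1 demo_dir2
```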
I could use something like this, though I would still like to get to the core of the problem rather than sticking "sleep 1" between every other line to handle potentially related cases.
The core of the problem may be a difference in hardware between your Fedora and Ubuntu systems causing slower writes on the latter (e.g. does one have SCSI and the other ATA, or a USB external drive?).
Or it could be the I/O workload - if one system was doing heavy disk reads/writes for some reason (e.g. a database) and the other wasn't, then I/O might be blocked.
The thing is, ext3 is a "cooked" filesystem, so any write you do goes into a buffer before it actually lands on disk. The point I was trying to make is that your script may be reaching the check before the data has been flushed from the buffer to disk. That would explain the randomness you mentioned.
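If a blanket "sleep 1" feels too blunt, one alternative is to poll for the file with a bounded number of retries and only fail if it never appears. A sketch (wait_for_file is a made-up helper, not a standard tool, and the paths are illustrative):

```shell
#!/bin/sh
# wait_for_file PATH [TRIES]: poll until PATH is readable, retrying
# once per second, up to TRIES attempts (default 10). Returns 0 on
# success, 1 if the file never showed up.
wait_for_file() {
    path=$1
    tries=${2:-10}
    while [ "$tries" -gt 0 ]; do
        [ -r "$path" ] && return 0
        sleep 1
        tries=$((tries - 1))
    done
    return 1
}

# Demo usage with illustrative directory names.
mkdir -p demo_dir1 demo_dir2
touch demo_dir1/file
mv demo_dir1/file demo_dir2/
if wait_for_file demo_dir2/file 5; then
    echo "OK"
else
    echo "ERROR"
fi
rm -rf demo_dir1 demo_dir2
```

This way the common case costs nothing extra, and the script only waits when the file genuinely is not visible yet.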
This makes sense, although I am using the same hardware except for the video card.
Do you know if ext4 is any better in this regard?
Thanks for taking the time to explain!
It's not really a matter of which filesystem you use - they are all buffered (unless you use something like OCFS2, which is designed for Oracle databases and does direct I/O instead). In general, buffering is a good thing: it lets most tasks complete more quickly by reporting the write as soon as it reaches the buffer in memory (very fast) rather than waiting until the buffer is flushed to disk (comparatively slow, since the data has to go from electronic speed to mechanical speed). For most purposes this is desirable behavior.
ext4 actually has a delayed commit built in for performance reasons, so it wouldn't be "better" in the sense you mean - in fact, there is a chance of data loss if the system loses power before the write completes. However, there are other reasons, mostly performance and scalability, why people consider ext4 "better".
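For what it's worth, if you want to rule buffering in or out, you can force the kernel to flush dirty buffers with sync(1) right after the mv and see whether the error still occurs. A sketch (the paths are made up for the demo):

```shell
#!/bin/sh
# Move a file, force a flush with sync, then check readability.
# demo_src/demo_dst are illustrative names, not from the original script.
mkdir -p demo_src demo_dst
touch demo_src/file
mv demo_src/file demo_dst/
sync                      # ask the kernel to flush dirty buffers to disk
if [ -r demo_dst/file ]; then
    echo "file visible after sync"
fi
rm -rf demo_src demo_dst
```

sync waits for the flush to be scheduled system-wide, so it is a heavy hammer; but as a diagnostic it tells you whether the visibility problem disappears once everything is on disk.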