LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Programming (https://www.linuxquestions.org/questions/programming-9/)
-   -   Ftruncate (https://www.linuxquestions.org/questions/programming-9/ftruncate-433581/)

Flesym 04-11-2006 06:19 AM

Actually I don't understand your problem with sed. An example:
Code:

$ echo -e "line 1\nline 2\nline 3" > theFile.txt
$
$ cat theFile.txt
line 1
line 2
line 3
$ sed -i '1d' theFile.txt
$
$ cat theFile.txt
line 2
line 3

???

muha 04-11-2006 06:20 AM

Flesym made a good suggestion with using two files ... but:
to delete from line 1 up to the delimiter \n (I think):
Code:

sed -i '1,/\\n$/d' theFile.txt
assuming the \n is always at the end of a line.

Flesym 04-11-2006 06:22 AM

[double Post, sorry]

Deepak Inbasekaran 04-11-2006 06:42 AM

Quote:

Another more efficient way would be to use two (or even more) files: Begin by filling the first file until it reaches 1MB; then open the second file and keep on writing to this until it is full. Now blank the first file again and log to this for the next 1MB and so on... This way you can review at least your desired last 1MB and won't have to worry about performance, because you only append and truncate (both are cheap file operations), so there is no buffer and no additional string operation needed. But of course this has to fit into your software design
I am using this method right now, but I got review comments from my manager: after I fill up the two files, I create a new one and delete the old one, so the insertion of a single record can be followed by the loss of 1MB of data (though we have 1MB more at hand). It still sounds a bit odd to him, and that's the reason I am trying other methods. Also, my system generates a lot of messages at very short intervals.
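As a sketch, the two-file scheme quoted above might look like this in C. The file names and the 1MB limit are illustrative, not from the thread:
Code:

```c
/* Two-file log rotation sketch: append to the current file until it
 * reaches LOG_LIMIT bytes, then truncate the other file and switch to
 * it. At any time the last LOG_LIMIT bytes of history are preserved. */
#include <stdio.h>

#define LOG_LIMIT (1024 * 1024)   /* 1 MB per file (illustrative) */

static const char *log_names[2] = { "log.0", "log.1" };
static int current = 0;

void log_msg(const char *msg)
{
    FILE *fp = fopen(log_names[current], "a");
    if (!fp)
        return;
    fputs(msg, fp);
    long size = ftell(fp);        /* position == file size in append mode */
    fclose(fp);

    if (size >= LOG_LIMIT) {
        current = 1 - current;               /* switch to the other file */
        fp = fopen(log_names[current], "w"); /* "w" truncates it */
        if (fp)
            fclose(fp);
    }
}
```

Both operations here are cheap appends or truncations, which is the point of the approach; the cost is that up to one full file of older history is dropped at each switch.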

Deepak Inbasekaran 04-11-2006 06:54 AM

Quote:

sed -i '1,/\\n$/d' theFile.txt
This wipes out the entire file.

muha 04-11-2006 07:05 AM

True, sorry :D Since the pattern never matches, the range runs to the end of the file, so that would be the entire file ...

Deepak Inbasekaran 04-11-2006 07:07 AM

Yeah, good thing I tried it on a prototype and not on the actual thing :D

primo 04-11-2006 07:14 PM

Why not use logrotate(8) ?

Deepak Inbasekaran 04-11-2006 10:47 PM

Quote:

Why not use logrotate(8) ?
Yeah, I thought of that too ... logrotate removes old files after a regular time interval, right (say once a day)? So in case no new log files are being generated in my system, all my log files get older with time and may be deleted, but I may need them for debugging (1MB of log data is needed all the time). That is the reason I didn't opt for it, but maybe you could correct me if I am wrong. Exactly how does logrotate operate?

chrism01 04-11-2006 10:56 PM

logrotate does anything you want; it won't delete anything unless you tell it to. Typically you move the files to another dir, e.g. an archive dir, and gzip them.
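For reference, a hypothetical logrotate stanza that triggers on size rather than on the schedule alone (the path and limits are illustrative):
Code:

```
# rotate when the log exceeds 1MB; keep the 5 most recent
# archives, compressed; don't complain if the log is absent
/var/log/myapp.log {
    size 1M
    rotate 5
    compress
    missingok
}
```

With the `size` directive, rotation happens whenever logrotate runs and finds the file over the limit, so quiet periods don't age out logs you still need.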

Deepak Inbasekaran 04-12-2006 01:14 AM

Finally, I have settled on this concept:
Quote:

I will keep 5 files, each of 200KB, and I have a circular buffer of 200KB. My messages go to that buffer, and from it I write to a file after the buffer gets filled, so it will be a 200KB write to the file each time rather than writing to the file for every message. After filling up the 5 files, I delete the first, create a new one, rename the others, and carry on doing this continuously.
So how does this concept sound? Any issues that any of you could find?
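The delete-and-rename step of that scheme could be sketched in C like this (the file count and the name pattern are assumptions for illustration):
Code:

```c
/* Rotation step for an N-file log scheme: remove the oldest file,
 * shift each remaining file one slot down (log.i -> log.(i+1)),
 * leaving log.0 free for the next buffer flush. */
#include <stdio.h>

#define NFILES 5   /* illustrative: five 200KB files */

void rotate_logs(void)
{
    char from[32], to[32];

    snprintf(to, sizeof to, "log.%d", NFILES - 1);
    remove(to);                       /* drop the oldest file */

    for (int i = NFILES - 2; i >= 0; i--) {
        snprintf(from, sizeof from, "log.%d", i);
        snprintf(to, sizeof to, "log.%d", i + 1);
        rename(from, to);             /* shift log.i to log.(i+1) */
    }
    /* the next 200KB buffer flush creates a fresh log.0 */
}
```

Since `rename(2)` only rewrites directory entries, the rotation itself costs no data copying, which keeps the per-flush overhead down.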

Flesym 04-12-2006 05:04 AM

I wouldn't wait to write until the buffer is full. That way you may lose the last 200KB during a crash. There is nothing wrong with appending some text to the end of a file, at least as long as this doesn't happen several thousand times per second. The rest sounds good.

Deepak Inbasekaran 04-12-2006 05:24 AM

Yeah, the problem is my system generates thousands of messages in a short time, so I prefer the buffer approach rather than writing to the file each time. Also, I plan to have some code that writes the data in the buffer out to a file before a crash.

chrism01 04-12-2006 06:54 PM

If you can predict the crash like that, you can probably prevent it ....

Deepak Inbasekaran 04-12-2006 11:54 PM

That's why I have a small buffer size, so that the data lost won't be much.
