Now I am doing some data analysis and I cannot help but generate about 369.5 MB of data every 20 seconds (luckily my experiments last only 60 seconds). The problem is that I stream this data to my analysis software, which is proprietary (Mathematica), and the software that generates the data (also proprietary) adds some header tags that Mathematica does not like.
So I open the file with a text editor, but most of the time it simply crashes and does not complete the job. Does anyone know an efficient way of removing these tags? I mean, simply open the "top" of the file and take out the tags?
Your files seem quite large, so I don't know what would be efficient. If you wish to remove N bytes from the beginning of a file, you can use tail. The command might look like this: tail -c +100 FILE1 > FILE2 (this would remove 99 bytes and create a second file).
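Something like this, as an untested sketch (the 16-byte header and the file names are just placeholders; substitute your actual header length, which you can find with a hex dump of the first few hundred bytes):

```shell
# Build a sample file with a 16-byte header followed by real data
printf 'HEADERTAG 12345\n' >  sample.dat   # 16 bytes of header
printf 'real data here\n'  >> sample.dat

# tail -c +N prints from byte N onward, so +17 skips bytes 1-16
tail -c +17 sample.dat > stripped.dat

cat stripped.dat   # -> real data here
```

Since tail just streams bytes, this should cope fine with multi-hundred-megabyte files where a text editor chokes.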
If editing a large file crashes your editors, you can split the file into smaller chunks, edit the first one, and merge them again. For example, split -b 100m FILE chunk would create 100 MB files called chunkaa, chunkab, and so on. They can be merged with cat chunkaa chunkab chunkac > FILE. Alternatively, to get one small chunk and one big chunk, use head to extract data from the beginning and tail to get the rest, but be careful not to corrupt the data by extracting overlapping sections or leaving out a middle part.
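A tiny demonstration of both approaches (the 2-byte and 3-byte sizes are toy values standing in for 100m; the file names are made up):

```shell
# Approach 1: split into fixed-size chunks, then reassemble
printf 'abcdef' > big.dat
split -b 2 big.dat chunk              # creates chunkaa, chunkab, chunkac
cat chunkaa chunkab chunkac > rebuilt.dat
cmp big.dat rebuilt.dat && echo identical

# Approach 2: non-overlapping head/tail split
# head takes the first N bytes; tail -c +(N+1) takes everything after them
head -c 3  big.dat > top.dat          # bytes 1-3
tail -c +4 big.dat > rest.dat         # bytes 4 onward
cat top.dat rest.dat > rebuilt2.dat
cmp big.dat rebuilt2.dat && echo identical
```

The off-by-one to watch in approach 2: if head takes N bytes, tail must start at byte N+1, or you duplicate or drop a byte at the seam.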
Simple scripts can be written to edit files or data streams if the format is known. I use sed for simple tasks and perl for more complicated ones. Read the man and info pages for more information.
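For instance, if the header turns out to be whole lines at the top of the file, sed can drop them by line number or by pattern; this is a sketch with a made-up three-line header:

```shell
# Sample file: three header lines followed by data
printf 'TAG1\nTAG2\nTAG3\ndata line 1\ndata line 2\n' > raw.dat

# Delete lines 1-3 (or use a pattern, e.g. sed '/^TAG/d')
sed '1,3d' raw.dat > clean.dat

cat clean.dat
```

Since sed works as a stream filter, you could also put it directly in the pipeline between the generator and Mathematica instead of rewriting the file on disk.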