Your files seem quite large, so I don't know how efficient this will be. If you want to remove N bytes from the beginning of a file, you can use tail. The command might look like this: tail -c +100 FILE1 > FILE2. With tail, -c +K means "start output at byte K", so this keeps byte 100 onward — removing the first 99 bytes — and writes the result to a second file.
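A quick sketch of this, using small hypothetical file names (sample.bin, trimmed.bin) so the byte counts are easy to check:

```shell
# Create a 1000-byte test file.
head -c 1000 /dev/zero > sample.bin

# Keep everything from byte 100 onward, dropping the first 99 bytes.
tail -c +100 sample.bin > trimmed.bin

# Verify the result: bytes 100 through 1000 remain, i.e. 901 bytes.
wc -c < trimmed.bin
```

The same command works on a multi-gigabyte file; tail just streams the data, so it never needs to hold the whole file in memory.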
If editing a large file crashes your editor, you can split the file into smaller chunks, edit the first one, and merge them again. This: split -b 100m FILE chunk would create 100 MiB files called chunkaa, chunkab, and so on. They can be merged back with cat chunkaa chunkab chunkac > FILE. To get one small chunk and one big chunk instead, use head -c to extract data from the beginning and tail -c to get the rest — but make sure the offsets line up exactly: overlapping ranges duplicate bytes at the boundary and a gap drops them, either of which corrupts the file.
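Both approaches can be sketched end to end. This demo uses tiny sizes (a 1 KiB file, 300-byte chunks) and hypothetical names (big.bin, rebuilt.bin) so the round trip is easy to verify; with real files you would use -b 100m as above:

```shell
# Make a 1 KiB test file from random data.
head -c 1024 /dev/urandom > big.bin

# Split into 300-byte pieces: chunkaa, chunkab, chunkac, chunkad.
split -b 300 big.bin chunk

# Reassemble in order; the shell expands the glob alphabetically,
# which matches split's aa, ab, ac... suffix order.
cat chunk?? > rebuilt.bin

# cmp is silent and exits 0 when the files are byte-identical.
cmp big.bin rebuilt.bin && echo "split/cat round trip OK"

# Alternative: one small piece and one big piece with head and tail.
# head -c N takes bytes 1..N; tail -c +N+1 starts at byte N+1,
# so the two pieces meet with no overlap and no gap.
head -c 300 big.bin > part1
tail -c +301 big.bin > part2
cat part1 part2 > rebuilt2.bin
cmp big.bin rebuilt2.bin && echo "head/tail round trip OK"
```

The cmp check at the end is worth keeping in any real workflow: it is a cheap way to prove you reassembled the file without corrupting it.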
Simple scripts can be written to edit files or data streams if the format is known. I use sed for simple tasks and perl for more complicated ones. Read the man and info pages for more information.
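As a small illustration of the sed approach — the file name and contents here are made up — a substitution can be streamed through a file of any size without opening it in an editor:

```shell
# Hypothetical data file for the demo.
printf 'id,name\n1,alice\n2,bob\n' > data.csv

# sed reads the file line by line, so memory use stays constant
# no matter how large the input is.
sed 's/alice/carol/' data.csv > data_fixed.csv
```

perl -pe works the same way for edits that need real regular-expression power or arithmetic; GNU sed also has -i for in-place editing, though on huge files redirecting to a new file as above is safer.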