You are misunderstanding how file I/O works.
When a process opens a file, the open file description carries a file offset: the position where its *next* read or write will occur. In your script, the shell has $FILE open for the redirection. After the shell has written 1024 bytes, its offset is 1024, so its next write starts at byte 1024. Truncating the file does not reset that offset. The shell's next write therefore creates a sparse file: bytes 0-1023 become a hole that reads back as zeros, and tcpdump's redirected output continues at byte 1024 and beyond. (This is a simplification; I've omitted some details for the sake of brevity.) Let's confirm with some file sizes:
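You can reproduce the effect yourself without tcpdump. A minimal sketch (the file name demo.log is just an example; `: > demo.log` stands in for whatever truncates the log):

```shell
exec 3> demo.log        # open demo.log for writing; fd 3's offset starts at 0
printf 'AAAAAAAA' >&3   # write 8 bytes; the offset is now 8
: > demo.log            # truncate the file to 0 bytes, as a rotation script would
printf 'BBBB' >&3       # fd 3 still writes at offset 8, leaving a hole before it
exec 3>&-               # close fd 3
od -c demo.log          # bytes 0-7 read back as NUL bytes
```

The truncation never touches fd 3's offset, so the second write lands past the hole.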
-rw-r--r-- 1 root root 113K Jul 23 23:14 tcpdump.log
-rw-r--r-- 1 root root 32K Jul 23 23:09 tcpdump.log.1
-rw-r--r-- 1 root root 48K Jul 23 23:10 tcpdump.log.2
-rw-r--r-- 1 root root 64K Jul 23 23:11 tcpdump.log.3
-rw-r--r-- 1 root root 80K Jul 23 23:11 tcpdump.log.4
-rw-r--r-- 1 root root 96K Jul 23 23:12 tcpdump.log.5
-rw-r--r-- 1 root root 112K Jul 23 23:13 tcpdump.log.6
Ok, that seems to agree with the theory. Now, let's check if these are sparse files:
od -b tcpdump.log
It will reveal that indeed all the initial bytes are zeros:
0000000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000
0240000 061 070 065 067 040 ...
Yup, that is also as predicted.
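You can also confirm sparseness by comparing apparent size against allocated blocks; a quick demo with a throwaway file (the name sparse.dat is arbitrary, and the allocated size depends on your filesystem):

```shell
truncate -s 1M sparse.dat                  # 1 MiB apparent size, nothing written
printf 'data\n' >> sparse.dat              # append 5 bytes after the hole
ls -l sparse.dat                           # apparent size: just over 1 MiB
du -k sparse.dat                           # allocated size: a few KiB at most
stat -c '%s bytes, %b blocks' sparse.dat   # same comparison in one line
```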
The astute observer will also notice that the tcpdump.log.# file sizes are not exact multiples of 1024, but jump according to the I/O buffer size and when it is flushed. On my system, the file size delta between consecutive tcpdump.log.# files is 16K (some runs show 32K or even 64K deltas, depending upon timing, and whether or not -l is used with tcpdump).
As I have been trying to tell you, you cannot force another process to change or reset its internal file offset. I'll state it again: two unrelated processes CANNOT write to the same file at the same time, as the results are undefined. I hope you now understand this basic lesson.
So you need a different approach.
One approach is to write your own read-n-rotate program, one that reads its STDIN, copies it to an output file, and rotates that output file every N bytes. This can easily be done in C, Perl, or your language of choice. This way, you control the open and close of the output files.
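A minimal sketch of such a rotator in shell, built around `head -c` (the function name rotate and its SIZE/PREFIX arguments are my invention):

```shell
#!/bin/sh
# rotate SIZE PREFIX: copy stdin into PREFIX.0, PREFIX.1, ...
# each at most SIZE bytes; this loop opens and closes every output file itself
rotate() {
    size=$1 prefix=$2 n=0
    # head -c reads at most $size bytes per call; an empty output file means EOF
    while head -c "$size" > "$prefix.$n" && [ -s "$prefix.$n" ]; do
        n=$((n + 1))
    done
    rm -f "$prefix.$n"   # drop the empty file created on the final iteration
}

# demo: 10 bytes of input split into 4-byte files
printf '0123456789' | rotate 4 chunk
wc -c chunk.*
```

If GNU coreutils is available, `split --bytes` does essentially the same job for a stream, though without control over when each file is closed and renamed.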
Another approach is to use tcpdump's -w, -C, and -W options to create N files, each of a specified maximum size. tcpdump will automatically close one file and open the next file for output. Using -W lets you create a ring of files (e.g. 1, 2, 3, 1, 2, 3, ...). When tcpdump has finished writing a file, it advances to the next one. You are then safe to read the finished file with tcpdump -r, and you can delete it too. How do you know when it is safe to read file N? Answer: after tcpdump has created file (N+1) mod W, where W is the maximum number of files you specified with -W. This is how you manage a ring buffer with one writer and one reader (known as the producer/consumer problem in computer science).
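For example (the interface name and sizes are placeholders; note -C counts file size in units of 1,000,000 bytes, and the exact suffix format can vary by tcpdump version):

```shell
# write a ring of 3 capture files of at most ~1 MB each (needs root)
tcpdump -i eth0 -C 1 -W 3 -w capture.pcap
# tcpdump fills capture.pcap0, capture.pcap1, capture.pcap2, then wraps around
# a file tcpdump has finished is safe to read (and then delete):
tcpdump -r capture.pcap1
```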