When an application allocates and frees memory, it is actually going through application-level code (the heap allocator), which turns to the operating system only as necessary to obtain large chunks of memory for its pool of available storage. The allocator then suballocates those large chunks to satisfy individual requests.
Calling into the operating system for more storage is expensive, so the allocator holds on to what it already has. Usually, these chunks are not "given back" to the operating system until the application ends, or until a chunk is found to be completely free (and, perhaps, has been free for some time).
Also remember that the storage an application obtains is virtual: pages it holds but is not touching can be paged out, so that storage really isn't being "wasted." Application storage-management routines are designed to work well with virtual storage implementations.
The application behavior you are seeing is by design.