Well, one way might be to make a singly linked list out of all free blocks. Suppose we have:
    struct free_block {
        struct free_block *next;
    };

    struct free_block *free_head;
To free a block, we simply do:
    struct free_block *f = (void *)block_to_free;
    f->next = free_head;
    free_head = f;
To allocate a block, we do:
    struct block_struct *b;
    if (free_head) {
        b = (void *)free_head;
        free_head = free_head->next;
    } else {
        // make more memory somehow
    }
That's a simple way to do it. There are likely more efficient ways to do this if locality is an issue.
Also, I don't know how scalable your application needs to be (specifically, how many files you expect to have). Consider the case of having 100,000 files. To find one, you have to traverse, on average, 50,000 links (which might take quite a while due to cache misses and/or seek times). In other words, it takes linear time to find files using this approach.
By contrast, modern filesystems and databases use more efficient data structures called B-trees that can find things in logarithmic time. This means that, with 100,000 files, it would only take, say, 17 steps to find a file. I recommend reading up on binary search trees and B-trees if you haven't learned about those yet. They will change your life.
Also, watch out for endianness. If you don't know what that is, look it up.
You might actually be better off using an existing database solution. For this type of thing, SQLite and tdb (trivial database) are worth looking into.