When journaling, there are two things a filesystem can journal:
data <- self-explanatory
metadata <- data about the data
ext3 journals only metadata (by default)
ReiserFS journals both
I don't know exactly how ext2 & 3 lay data out, but legacy filesystems use file allocation tables (FAT), where each file is stored as a chain of clusters linked through the table. This can lead to looong chains, and therefore long seek times and fragmentation. Sound familiar to any windoze people?
Also, most filesystems allocate disk space in fixed-size blocks.
A common default is (I believe) 1024 bytes. So a script that's only 40 lines long still takes up a full 1024 bytes of disk space. Likewise, a 1025-byte file takes up 2048 bytes.
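To make that rounding concrete, here's a tiny Python sketch of block-granular allocation. The 1024-byte block size is just the default figure from above; real filesystems vary:

```python
# Illustration only: files always occupy a whole number of blocks.
def blocks_needed(file_size, block_size=1024):
    """Whole blocks a file occupies (ceiling division)."""
    return -(-file_size // block_size)

def space_used(file_size, block_size=1024):
    return blocks_needed(file_size, block_size) * block_size

print(space_used(1000))  # a ~1000-byte script still occupies 1024 bytes
print(space_used(1025))  # one byte over the block -> 2048 bytes on disk
```

The gap between file size and space used is the "internal fragmentation" that tail packing (below) claws back.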
ReiserFS will actually pack multiple small files into the same block (tail packing), saving space. Furthermore, for very small files ReiserFS stores the data and metadata together, whereas most filesystems keep the metadata separate. ReiserFS also uses a fast balanced-tree algorithm to keep the file tree as short and efficient as possible. Faster seek times.
But this also makes the on-disk layout sort of a data puree: different methods for different files...
All of this makes ReiserFS well suited to small-file manipulation and serving. (Think web sites with all their little text files and GIFs. Whereas you save no space or time serving ISOs.)
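Here's a rough back-of-the-envelope comparison in Python. These are idealized numbers, not ReiserFS's actual on-disk accounting, but they show why sharing blocks among small files pays off:

```python
import math

BLOCK = 1024  # assumed block size, as above

def one_block_each(sizes):
    # classic allocation: every file rounds up to whole blocks
    return sum(math.ceil(s / BLOCK) * BLOCK for s in sizes)

def tail_packed(sizes):
    # idealized tail packing: small files share blocks back to back
    return math.ceil(sum(sizes) / BLOCK) * BLOCK

small_files = [100] * 50            # fifty 100-byte files (icons, snippets)
print(one_block_each(small_files))  # 51200 bytes
print(tail_packed(small_files))     # 5120 bytes -- a tenth of the space
```

For one big ISO the two numbers are nearly identical, which is the "you save no space serving ISOs" point above.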
Well, that covers most things, but what the heck is journaling?
Well, instead of holding data in a buffer while waiting to write it to disk, a journaled filesystem first writes the data (for ReiserFS, the metadata too) to a temporary journal area on disk, then finds a proper home for it, indexes it, and writes it to its final location.
Finally, it (optionally) checks the result against the journal.
Think of it as if I were going to tell you something.
I would first tell you what I was going to say.
Then I would say it.
Then I would tell you what I just said.
If the power is cut, the only info lost is whatever was being written to the journal itself. Everything else is recoverable, and therefore fsck needn't run and fix so much.
i.e. if you heard me tell you what I was going to say, you could fill it in later even if I was interrupted.
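If it helps, here's a toy write-ahead journal in Python. It's just the tell/say/tell-again idea from above, nothing like ReiserFS's real on-disk format:

```python
# 1) record the intent in the journal, 2) do the write, 3) mark it done.
# After a crash, anything journaled but not marked done gets replayed.

journal = []   # stands in for the journal area on disk
disk = {}      # stands in for the main filesystem area

def journaled_write(path, data):
    entry = {"path": path, "data": data, "committed": False}
    journal.append(entry)        # "tell you what I'm going to say"
    disk[path] = data            # "say it"
    entry["committed"] = True    # "tell you what I just said"

def replay_after_crash():
    for entry in journal:
        if not entry["committed"]:
            disk[entry["path"]] = entry["data"]  # finish interrupted writes
            entry["committed"] = True

journaled_write("/etc/motd", b"hello")
# simulate a crash mid-write: intent was journaled, main area never updated
journal.append({"path": "/tmp/x", "data": b"half", "committed": False})
replay_after_crash()
print(disk["/tmp/x"])  # b'half' -- recovered from the journal, no full fsck
```

Replay only has to scan the (small) journal, which is why recovery is so much faster than a full fsck of the disk.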
Also, I think the next version of ReiserFS (Reiser4) will use a modular plug-in architecture, so you can program how it handles particular files. Say you are splitting DNA and want it stored a certain way for your database: you simply write a plug-in and boom, the data is preparsed in storage. This could also be used for in-house encryption, etc...
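To show the shape of the idea (and only the shape: every name below is made up, not Reiser4's actual API), a file plug-in could look something like this in Python:

```python
# Hypothetical plug-in hooks: transform data on the way to disk and back.
class FilePlugin:
    def store(self, data: bytes) -> bytes:   # called before writing
        return data
    def load(self, data: bytes) -> bytes:    # called after reading
        return data

class ScramblePlugin(FilePlugin):
    """Stand-in for 'in-house encryption': shift each byte on store."""
    def store(self, data):
        return bytes((b + 13) % 256 for b in data)
    def load(self, data):
        return bytes((b - 13) % 256 for b in data)

storage = {}  # stands in for the filesystem

def write(path, data, plugin=FilePlugin()):
    storage[path] = plugin.store(data)

def read(path, plugin=FilePlugin()):
    return plugin.load(storage[path])

write("genes.db", b"GATTACA", plugin=ScramblePlugin())
print(read("genes.db", plugin=ScramblePlugin()))  # b'GATTACA'
```

The point is that the transformation happens inside the storage layer, so applications never have to know the data was preparsed or scrambled on disk.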
Whew! Corrections are appreciated, but I think that's a decent lay description.
A final note: when updating ReiserFS you must also update the ReiserFS code in the kernel. This is difficult because the kernel code generally lags behind the filesystem code. I think that's also down to its relative youth; once ReiserFS matures a bit and is incorporated into a distro it won't be a problem, but currently it's developing too quickly to make it through the six-month wait between kernel releases.