Well, this is a pretty open-ended and subjective question; you didn't say, so I'll assume x86-based.
By OS filesystem backup, do you mean a file-level backup (allowing subsequent incremental backups) or a monolithic backup of the entire filesystem? Do you want a canned solution (either open source or paid commercial), or are you willing to do it yourself?
So, it depends. The best method is highly dependent on your situation (are you backing up just one system or a thousand?) and your goals (ease of use, commercial support, file-level recovery, bare-metal restoration after a failure, cost?).
Amanda/Zmanda and Bacula are two backup systems I've tried, but they're real overkill for the half dozen systems I have at home. As I remember, both worked well. There are lots of others besides these two.
At work we back up about six hundred Linux systems with IBM's Tivoli Storage Manager. It's expensive. And since TSM does file-level backups and restores, it is unsuitable for bare-metal disaster recovery. But it allows our users (thousands of them) to do self-service file restores.
For bare-metal restoration we use tar. Each filesystem is first "snapped" (if it's on LVM), then tar'd across the network to a system that archives all of the backups. If a full system restoration is needed, it is done in rescue mode: the disk is prep'd, the filesystems are tar'd back across the network, and the bootloader is rewritten to the boot disk if necessary. It's cheap, simple, and effective, but not really state of the art. The Linux dump and restore commands are also an option. (Both tar and restore let you extract individual files if you have to.)
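To make the tar round trip concrete, here's a minimal local sketch of the backup/restore cycle described above. The directory names and the tarball path are illustrative stand-ins; in practice the archive stream is piped over ssh to the archive host rather than written locally, e.g. `tar -cpf - -C /mnt/snap . | ssh backuphost 'cat > /backups/host1/root.tar'` (hostname and paths are my assumptions, not specific values).

```shell
# Stand-ins for the snapped filesystem and the freshly prepped restore target:
snap=$(mktemp -d)     # pretend this is the mounted LVM snapshot
restore=$(mktemp -d)  # pretend this is the new filesystem, in rescue mode
echo "important data" > "$snap/etc-motd"

# "Back up": archive the snapshot, one tarball per filesystem.
# -c create, -p preserve permissions, -C change into the source dir first.
tar -cpf /tmp/root.tar -C "$snap" .

# "Restore": unpack onto the new filesystem, again preserving permissions.
tar -xpf /tmp/root.tar -C "$restore"

cat "$restore/etc-motd"   # → important data
```

The `-C` flag keeps the archive paths relative, so the same tarball can be restored to whatever mount point the rescue environment uses.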
One thing that is highly desirable is the ability to snapshot your filesystems prior to backup, so they are not changing while you back them up. Our /boot filesystem is not on LVM, but nothing writes to it while the backup is taking place; all other filesystems are on LVM logical volumes.
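The snapshot step might look something like the following. This is an illustrative fragment only: the volume group and LV names ("vg0", "root"), the snapshot size, and the backup host are all assumptions, and the commands require root on a real LVM setup.

```shell
# Create a 2 GiB copy-on-write snapshot of the root LV (size only needs to
# hold the changes written while the backup runs):
lvcreate --size 2G --snapshot --name root-snap /dev/vg0/root

# Mount the snapshot read-only and back up from the frozen view:
mkdir -p /mnt/snap
mount -o ro /dev/vg0/root-snap /mnt/snap
tar -cpf - -C /mnt/snap . | ssh backuphost 'cat > /backups/host1/root.tar'

# Tear the snapshot down once the backup completes, before it fills up:
umount /mnt/snap
lvremove -f /dev/vg0/root-snap
```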