As a current user of RH9 and someone who has built LFS systems numerous times, I think it's fair to offer a few suggestions here.
An LFS system in the cookie-cutter format is really kind of pointless other than for educational purposes (i.e. you REALLY want to understand your system). With what you learn from building an LFS system you will know how to properly dissect any Linux distro you come across, and that can be helpful.
It sounds to me, however, like you are really looking for slightly more modern packages and better performance on your system. In this respect an LFS system can be very, very fast, but actually building an LFS system into the equivalent of a modern distribution takes a LOT of time. Once you're done you don't necessarily have an easily re-creatable system either; if you want one, you need to invest a lot more time building an installer and making a proper distro out of it that can be installed without going through the entire recompile procedure each time (or at least automating it).
I've found LFS is useful for performance on dedicated-purpose systems. You probably don't want to bother with this for your desktop system, but if you need a router that will compete with most Ciscos out there... a trimmed LFS might just do the trick.
If you want these advantages on a desktop system, there are a few things you can do with RH9 that will get you closer to where you want to go.
1. Compile a custom kernel. The optimization benefits of everything else are debatable, but a properly tuned kernel can make all the difference in the world. At the time I'm posting this the 2.4.21 kernel has been released, and I've had no trouble using a completely patch-free build of it... the improvements in system responsiveness and "feeling quick" are amazing. If you're like me, you've often looked at complex tasks on Windows systems compared to Linux and gasped at Linux's speed... but then opened a Linux app on your desktop and wondered how the hell it can take so long. As of 2.4.21 this has been seriously improved (especially if you use IDE drives).
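For reference, a 2.4-era build on Red Hat goes roughly like this (the source path, version number, and bootloader config file are assumptions; adjust them for your own setup):

```shell
# Rough sketch of a 2.4.x kernel build; paths and version are assumed
cd /usr/src/linux-2.4.21
make menuconfig            # select your exact CPU type, drop drivers you don't need
make dep bzImage modules   # 2.4 kernels still need the "dep" step
make modules_install
cp arch/i386/boot/bzImage /boot/vmlinuz-2.4.21
cp System.map /boot/System.map-2.4.21
# then add an entry to /boot/grub/grub.conf (or lilo.conf and re-run lilo) and reboot
```

Keeping the old kernel's bootloader entry around means a bad config just costs you a reboot, not a rescue disk.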
2. hdparm. Make sure you use hdparm to tune your disks... if not, you may not actually be getting their full performance. I've experimented with this for fun, and I've noticed odd things I haven't had time to investigate (and certainly never tried on any real production box): using hdparm to tell an old drive it's UDMA when it actually isn't seems to significantly improve performance, instead of breaking everything as you'd expect.
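A minimal sketch of the tuning loop, assuming your disk is /dev/hda (run as root, and benchmark before and after so you can see whether a change actually helped):

```shell
hdparm /dev/hda           # show the drive's current settings
hdparm -tT /dev/hda       # baseline: cached and buffered read timings
hdparm -d1 -c1 /dev/hda   # enable DMA and 32-bit I/O support
hdparm -tT /dev/hda       # re-run the benchmark and compare to the baseline
```

The -tT numbers bounce around a bit, so run them two or three times each and compare averages.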
3. Memory. Remember your Linux system doesn't like to use virtual memory until it has no choice, and you take a performance hit when the system turns to swap because you're running low on RAM. Make sure you have as much memory in your machine as possible. On Linux this can result in a significantly bigger performance boost than a processor upgrade!
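A quick way to check whether you're actually dipping into swap (the column layout of `free` is standard; the rule of thumb in the comment is just my own):

```shell
free -m
# If the "used" figure on the Swap: line stays above zero during normal
# desktop use, more RAM will likely buy you more than a faster CPU would.
```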
4. Updates. Red Hat is dog slow on updates unless they are bug-fix releases. For my server or business workstation this is good... for my home desktop, I no like.
On my desktops at home I install apt from apt.freshrpms.net, which comes preconfigured to use their apt repository; it's reasonably large and usually quite a bit more cutting-edge than Red Hat's. Just install the apt rpm from the site, run `apt-get update` to sync your package lists with the current contents of the repository, then `apt-get upgrade`.
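Spelled out, the whole setup is three commands (the rpm filename is a placeholder pattern, since the exact version on the site changes):

```shell
rpm -ivh apt-*.rpm   # install whatever apt rpm you downloaded from apt.freshrpms.net
apt-get update       # refresh the package lists from their repository
apt-get upgrade      # pull in everything that has a newer version available
```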
I think what you meant to ask might simply have been: does the optimization from custom compiling have a big enough performance impact to make a difference? You'll likely hear a lot of conflicting answers on this... I've found it can matter in time-critical applications where the most minute on-paper gain is critical, but gains that are only visible on paper don't generally make a noticeable impact on a desktop system.