Rsync is great, and if you have two servers in geographically separated locations, rsync'ing them over the internet might be sufficient. But if both servers are in a single datacenter and it burns to the ground, well, an offsite backup would be worth its weight in gold.
Also, if both servers you are rsyncing are internet-connected and both are exploited, having a known-good backup that is offline would be a very good thing.
Call me overly paranoid if you will, but if you end up with no backup for vital data, you will wish you had taken additional steps to ensure you had a clean copy of it.
I can tell you firsthand: I had two servers. One was a database server hosting our accounting system's database; the second was its backup, similar to your rsync scenario. Both servers had RAID 5 arrays with hot spares.
The backup server had multiple drives fail at the same time, and the backup data was lost... OUCH!!! But the production server was still live. I replaced all the failed drives in the backup server's array and got it back online, but then I had to back up ALL the data from the live system. I kicked off the backup job at 5:00 as I left the office. I came in the following morning to discover the database server had gone down HARD, with a multiple-drive failure in its RAID 5 array as well. Keep in mind, the first server to fail was my backup, and since the DB server failed during the night, its backup never completed. So GAME OVER: the database was GONE, and the partial backup was useless.
As luck would have it, just before starting the backup job I had copied the entire database to my workstation before going home. Had I not taken that extra step, our company would most likely have gone out of business, or at the very least would have been hurting badly for a long time. The system was still down for about two days while I repaired the server, reloaded the OS, restored the database, and got everything working again.
On the brighter side, management FINALLY started to listen to my complaints about our server hardware and backup situation. Since that day last July I have gotten almost ALL new servers and ALL new workstations, and I am working toward a complete overhaul of the datacenter. These are all things I had been requesting for the last couple of years and had been denied time after time. The offsite backup solution was also approved in that budget, along with many other items I had been asking for. I'm rather annoyed that it took such a disaster to finally get their attention, but things are MUCH improved now, and I can sleep better at night not worrying about a FrankenServer (read: a home-built server with off-the-shelf parts that, when it goes down, you can't find exact replacements for) being in a failed state in the morning.
I had never before witnessed a RAID 5 array lose three drives at the same time, let alone TWO separate arrays with that type of failure on two separate servers within a week of each other.
I was literally, physically ill when that DB server crashed. It was a rough couple of weeks between the failure of the first server, the recovery of the second, and my own recovery afterwards.
So maybe you can get a little insight into my paranoia. If it would have taken only a $10,000 investment to have proper offsite backups, and you forgo that expense, lose all your data, and your company goes out of business because of it, was $10k really that much to spend on insurance? The data recovery specialists I spoke with wanted $20k to recover the data from the failed array, and said they couldn't tell whether the data was any good until after they recovered it. "Penny wise, pound foolish," as they say.
You have to determine what your data is worth and whether your backup solution is sufficient to protect it.
I hope I have given you a little food for thought that will help make your backup solution as robust as it can possibly be, taking into account all possible scenarios.
Best of luck !