A (hardware) RAID controller may have onboard cache memory, which can be used for two different functions:
- Read-ahead cache to speed up read operations
- Writeback cache to speed up write operations
Read-ahead caching means a program running on the RAID controller CPU monitors read requests from the host OS and issues its own speculative read requests to cache the contents of sectors near the sector(s) being requested.
This program has to make educated guesses about which sectors to read next; sometimes it guesses correctly (cache hits), while at other times the next read request is for a completely different sector (cache misses). The effectiveness of read-ahead caching therefore varies greatly with the read-ahead algorithm used and the actual server load (many vs. few threads/tasks, sequential vs. random disk access, etc.).
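To make the load dependence concrete, here is a toy Python model of a read-ahead cache (not actual controller firmware; the window size and workloads are made up for illustration). On every read it prefetches the next few sectors, which pays off for a sequential workload and almost never for a random one:

```python
import random

class ReadAheadCache:
    """Toy model of a read-ahead cache: prefetches sectors after each read."""

    def __init__(self, window=8):
        self.window = window   # how many sectors to prefetch past each read
        self.cache = set()     # sector numbers currently held in cache RAM
        self.hits = 0
        self.misses = 0

    def read(self, sector):
        if sector in self.cache:
            self.hits += 1     # served from cache RAM, no disk I/O needed
        else:
            self.misses += 1   # must go to disk for this sector
        # Guess that the host will keep reading sequentially and
        # prefetch the next `window` sectors.
        self.cache.update(range(sector + 1, sector + 1 + self.window))

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

# Sequential workload: the guess is right almost every time.
seq = ReadAheadCache()
for s in range(1000):
    seq.read(s)

# Random workload over a large disk: prefetched sectors are rarely requested.
random.seed(0)
rnd = ReadAheadCache()
for _ in range(1000):
    rnd.read(random.randrange(1_000_000))

print(f"sequential hit rate: {seq.hit_rate():.2f}")
print(f"random hit rate:     {rnd.hit_rate():.2f}")
```

The same prefetch algorithm yields a near-perfect hit rate on the sequential workload and a near-zero one on the random workload, which is why the benefit of read-ahead depends so heavily on the access pattern.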
Writeback caching means write requests are reported to the host as completed even though the data is only stored in cache RAM and has not yet been written to the disk(s). The data is written to disk at a later time, for instance when the controller is less busy or when the read/write heads of the disks are closer to the relevant sectors. Often the controller is able to consolidate multiple write requests into fewer, more efficient I/O operations.
This is likely to have a significant positive effect on write performance regardless of server load, but there's a certain risk associated with it: if there's a power failure or a sudden reboot, the contents of the cache RAM may not get flushed to disk at all, and the result could be file systems or databases in an inconsistent/broken state.
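The consolidation step can be sketched in a few lines of Python (a simplified illustration, not how any particular controller implements it): queued writes to individual sectors are merged into contiguous runs, so the eventual flush needs far fewer disk operations than the host issued write requests.

```python
def coalesce(sectors):
    """Merge dirty sector numbers into contiguous (start, length) runs,
    mimicking how a writeback cache consolidates queued writes."""
    runs = []
    for s in sorted(set(sectors)):
        if runs and s == runs[-1][0] + runs[-1][1]:
            # Sector continues the current run: extend it by one.
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)
        else:
            # Gap in sector numbers: start a new run.
            runs.append((s, 1))
    return runs

# Ten individual write requests, acknowledged immediately...
dirty = [100, 101, 102, 500, 501, 103, 104, 502, 900, 105]
# ...later flushed to disk as just three contiguous I/O operations.
print(coalesce(dirty))  # [(100, 6), (500, 3), (900, 1)]
```

Until that flush happens, the data in `dirty` exists only in cache RAM, which is exactly the window in which a power failure can leave the on-disk state inconsistent.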
Some RAID controllers have battery-backed cache RAM to mitigate the risks normally associated with writeback caching: the battery preserves the cache contents across a power failure, so pending writes can still be flushed to disk once power returns.