Today a customer discovered, to her unhappy surprise, that the Dell PERC 6/i Integrated RAID controller (and perhaps others) doesn't report RAID status properly, and she probably lost a full array as a result.
When a drive fails, the drive light goes red/amber, and so does the front-panel display that shows the status of the entire server. Responsible system admins make it a point to eyeball their servers regularly, looking for the "all is well" cool blue indicator.
This front panel reflects not just RAID status but everything else -- memory, power supplies, fans, etc. -- and is a great early-warning system even if SNMP traps or alert emails or whatever aren't working right.
But what it doesn't reflect is the status of virtual drives, and this is seriously bad news.
If a failed drive is inadvertently replaced with the wrong kind, be it SATA instead of SAS (and many RAID controllers talk to both) or a drive that's size-incompatible (just slightly smaller than the old one), the RAID controller will adopt the drive with "Ready" status and turn off the front-panel amber indicator, even though the new drive won't rebuild the array.
It's devastatingly bad design to indicate "all is well" when RAID arrays are in degraded status.
The poor customer who obtains a stack of the wrong drives, dutifully swaps them in as drives fail, and is seduced into thinking all is well when the lights go green (on the drives) and blue (on the server) may well lose everything if the wrong two drives fail in succession.
We found a different server that had been limping along with a degraded array and a bad drive.
Thankfully we caught this second server before it was truly gone and were able to replace the bogus drive with a proper one. We've ordered a stack of additional spares just to avoid this in the future.
The moral of the story is that you can't be sure a RAID rebuild has actually happened until you query the controller directly for virtual-drive status, either in the controller BIOS (which requires downtime) or with vendor-provided utilities under the host operating system.
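As one possible approach, a minimal sketch only: the PERC 6/i is an LSI MegaRAID part, so on a Linux host the LSI MegaCli utility can report virtual-drive state without taking the machine down. The install path, the -NoLog flag, and the exact "State" wording in the output vary by MegaCli version, so treat the path and the parsing below as assumptions to adapt rather than a definitive implementation.

    #!/usr/bin/env python
    # Sketch: ask a MegaRAID-family controller for virtual-drive state.
    # Assumes LSI's MegaCli binary lives at the path below; adjust the
    # path and the output parsing to match your MegaCli version.

    import subprocess
    import sys

    MEGACLI = "/opt/MegaRAID/MegaCli/MegaCli64"   # assumed install path

    def virtual_drive_states():
        """Return the 'State' line for each virtual drive on all adapters."""
        out = subprocess.check_output(
            [MEGACLI, "-LDInfo", "-Lall", "-aAll", "-NoLog"],
            universal_newlines=True)
        return [line.strip() for line in out.splitlines()
                if line.strip().startswith("State")]

    def main():
        states = virtual_drive_states()
        for s in states:
            print(s)
        # Exit non-zero if any virtual drive is not Optimal, so the
        # script can run from cron and trigger an alert.
        bad = [s for s in states if "Optimal" not in s]
        sys.exit(1 if bad else 0)

    if __name__ == "__main__":
        main()

Run from cron, a check along these lines would have flagged the degraded array long before a second drive had a chance to fail, regardless of what the front panel was showing.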