The storage industry updates the old idiom "lies, damned lies and statistics" to something new: lies, damned lies and the storage industry!
Tbh, after dabbling in the big-storage field for my work, I would trust a sleazy sub-$1000 used-car dealer more than the storage industry. Just some pointers here:
ZFS:
Intermittent hardware failure is almost guaranteed to nuke your data; for one thing, ZFS never drops to read-only mode.
Performance sucks; it's the worst design performance-wise UNLESS you need single-user sequential access. In RAIDZ a single read will usually touch *all* drives in the vdev. Seriously, ALL drives. That ruins random IOPS completely. Your other choice is mirroring, which still costs you half of your storage and half of your performance.
Don't believe the hype: ZFS is not a silver-bullet solve-it-all. The claim that ZFS outperforms everything and anything is pure fiction; it simply lacks the IOPS capability (see the rough numbers below).
For sequential, one-user-at-a-time access it does deliver huge throughput though, so if that's your workload and you don't mind risking your data, go with it!
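To put the IOPS complaint in rough numbers, here's a back-of-envelope sketch. The drive count, per-drive IOPS and vdev layouts are made-up assumptions for illustration, not benchmarks:

    # Rough model: a random read in RAIDZ touches every drive in the vdev,
    # so each RAIDZ vdev delivers roughly one drive's worth of random-read IOPS.
    # In a pool of mirrors, every drive can serve reads independently.
    # All numbers are illustrative assumptions, not measurements.
    DRIVES = 12              # total drives in the pool (assumed)
    IOPS_PER_DRIVE = 150     # random-read IOPS of one 7200 rpm drive (assumed)

    def raidz_read_iops(vdevs):
        # each RAIDZ vdev ~ one drive's worth of random-read IOPS
        return vdevs * IOPS_PER_DRIVE

    def mirror_read_iops(drives):
        # each mirrored drive can serve a different read at the same time
        return drives * IOPS_PER_DRIVE

    print("2 x 6-drive RAIDZ2 vdevs :", raidz_read_iops(2), "IOPS")        # ~300
    print("6 x 2-way mirror vdevs   :", mirror_read_iops(DRIVES), "IOPS")  # ~1800

Same twelve drives, wildly different random-read ceilings under this simple model.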
ZFS does have good SSD caching (L2ARC) though, and it's extremely effective. The problem is that it takes weeks to warm up to any sensible capacity. Weeks upon weeks. Yes, it really takes that long to warm. Once it is warm, though, it serves the most commonly used data immensely fast.
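A crude estimate of why the warm-up drags on: the L2ARC feed is throttled (l2arc_write_max defaults to roughly 8 MB/s) and only caches data as it is about to fall out of ARC, so the effective fill rate is usually far below that cap. The cache size and the 0.5 MB/s effective rate below are made-up figures, just to show the scale:

    # Crude L2ARC warm-up estimate; all figures are illustrative assumptions.
    CACHE_SIZE_GB = 800

    for label, rate_mb_s in [("feed cap (~8 MB/s)", 8.0),
                             ("plausible effective rate (0.5 MB/s)", 0.5)]:
        days = CACHE_SIZE_GB * 1024 / rate_mb_s / 86400.0
        print("%-38s -> %.1f days to fill" % (label, days))
    # cap: ~1.2 days; effective: ~19 days -- i.e. weeks, as described above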
So ZFS does have its pros too! :) But for a multi-user setup? If you can handle the IOPS loss and the data risk, you might be able to use it.
Older Adaptec cards, a couple of generations old:
It's almost as if they have a timer: the more time passes since release, the more the card "sleeps". Reviews at launch reported brilliant performance, yet years later, with far more modern drives, the same card didn't even reach half of the numbers from those release-time reviews. The card was SATA II capable.
SATA II Promise cards:
Drive fails? Sorry, no go: your system won't boot and you cannot diagnose the issue. Please try to guess which drive failed.
Further, in JBOD mode you cannot even read a drive's serial number by any normal method.
RocketRAID, LSI MegaRAID + newer AMD FX CPU/chipset:
RocketRAIDs lose their connection to drives every now and then for no apparent reason; not sure if this happens on Intel motherboards as well.
LSI MegaRAID won't let the system boot: udevd kills the modprobe on timeout. Apparently this happens only on newer AMD FX platforms.
Legacy RAID:
Way, way too slow recheck/resync times, and too little flexibility for today's huge drives. Otherwise very stable though, with decent performance.
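To make "way too slow" concrete, here's a naive best-case rebuild estimate; the drive size and sustained speed are assumed example figures, and real rebuilds on a live array run much slower:

    # Naive full-resync time: the whole replacement drive must be rewritten.
    # Drive size and sustained speed are assumed example figures.
    DRIVE_TB = 8
    SUSTAINED_MB_S = 120     # average sustained rebuild speed (assumed)

    seconds = DRIVE_TB * 1e12 / (SUSTAINED_MB_S * 1e6)
    print("best-case rebuild: %.1f hours" % (seconds / 3600))   # ~18.5 hours

And that is with the array otherwise idle; under production load the window stretches a lot further, which is a long time to be exposed to a second failure.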
Btier:
Nukes data, corrupting the OS, if you have a very high load; I would assume a block move from one tier to another fails?
The same happens if you try to manually move A LOT of blocks from tier to tier.
Very nice idea though; it just needs some more work.
SSD Caching:
All the usual methods (Flashcache, EnhanceIO, etc.) are bad; you're lucky to get two magnetic drives' worth of performance out of your expensive SSD cache. It's better to just add more magnetic drives to the array and use the SSDs completely separately.
They all assume you have infinite write speed, so eventually the SSD cache turns into an SSD bottleneck; the way to fix it is to secure-erase the SSD and then re-enable caching.
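A toy model of that bottleneck effect: in a write-back cache every cached write funnels through the SSD first, so once the SSD's sustained write speed degrades (stale blocks, no working TRIM, spare area exhausted) it drops below the backing array and caps the whole thing. All figures are illustrative assumptions:

    # Toy model of an SSD cache turning into the bottleneck.
    # All figures are illustrative assumptions, not measurements.
    ARRAY_WRITE_MB_S = 600      # e.g. 6 magnetic drives x ~100 MB/s streaming
    SSD_FRESH_MB_S = 450        # freshly secure-erased SSD, sustained writes
    SSD_DEGRADED_MB_S = 150     # same SSD once it is full of stale blocks

    def effective_write(array_mb_s, cache_mb_s):
        # write-back caching: everything goes through the cache device first
        return min(array_mb_s, cache_mb_s)

    print("fresh cache   :", effective_write(ARRAY_WRITE_MB_S, SSD_FRESH_MB_S), "MB/s")
    print("degraded cache:", effective_write(ARRAY_WRITE_MB_S, SSD_DEGRADED_MB_S), "MB/s")

Which is exactly why the secure-erase-and-re-enable trick "fixes" it for a while.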
SSDs:
You still need to leave 15%+ of the space empty (over-provisioning) so the firmware can handle wear leveling, TRIM, etc. discreetly in the background. Firmwares differ hugely: one SSD might need 25%+ while another is fine with 5% for this purpose.
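A tiny helper to show what that over-provisioning advice means when partitioning a drive; the percentages are the rough range mentioned above, not vendor specifications:

    # Leave part of the SSD unpartitioned as over-provisioning spare area.
    # The percentages are the rough range discussed above, not vendor numbers.
    def usable_gb(raw_gb, overprovision_pct):
        return raw_gb * (1 - overprovision_pct / 100.0)

    for pct in (5, 15, 25):
        print("%d%% spare on a 480 GB SSD -> partition only %.0f GB" %
              (pct, usable_gb(480, pct)))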
Trim/Discard in many SSD cache packages doesn't actually work.
They also fail. They fail a lot. In fact, just don't use them, OK?
If you plan to use them in any scenario other than as a traditional disk/RAID member, they tend to fail super quickly.
As a single desktop drive, or perhaps RAID0 on a server, or even RAID5, they work insanely well though. I haven't tested them as a journal-only device.