
Looking for system upgrade advice, to SSD or not to SSD...

Jason Knight · Sep 15, 2018

Poll (10 votes · Closed):

$900: 1TB SSD, 2x 6tb RAID 1, 12tb HDD (70%)

$700: 4x 6tb HDD RAID 5 SATA 6gb/s (20%)

$1100: 4x 6tb HDD RAID 5 SAS 12gb/s (10%)

Not our usual question here, given that this is more of a dev site, but since we have a general advice area for non-programming topics, I'd be interested in some input on this.

The machine in question is my media center / network storage rig. It does double duty for gaming and NAS since it's the machine that's easiest to just leave on 24/7, and it's in the room with the most stable power: when the fuses blow, that room stays on most of the time. Living in NH, power isn't the most stable/reliable thing, hence it also being on a UPS. This is separate from my low-power multi-display workstation that I do... well, work on, but this rig is where all my long-term network storage (on a RAID 1 mirror) lives.

Current specs in terms of CPU and RAM are fine: an i7 4770k with 24 gigs of RAM. It's just not worth the money to switch over to even a step sideways in tech yet, despite it being a six-year-old CPU (and I have a spare CPU and mobo just in case).

What REALLY needs changing is storage, and if I list out the configuration and the hours it becomes apparent WHY.

1TB WD Caviar Black, 60% full, 78,905 hours

This has been my go-to as a boot drive since it was the fastest standard HDD of its day and it never failed me, but with NINE YEARS of continuous uptime it's time for it to retire. I didn't care that it was a slow old SATA 2 drive, for the simple fact that I don't care how long this machine takes to boot/reboot since it almost never does! Which is why I don't understand the whole "use an SSD for fast boots" malarkey. Go get a coffee while waiting ONCE a day.

2x 4TB Seagate M000 in Raid 1 (mirrored), 40% full, 42,453 hours

This is set up as a network share and is where I keep all the business-related stuff I work on. These have almost five years of continuous 24/7 uptime on them. Given the mission-critical nature of these, they should probably get the boot as well.

4TB Seagate M0000, 67% full, 47,830 hours

Games, OS ISOs, downloads

4TB Seagate M005, 93% full, 10,420 hours

Movie and audio storage, again a LAN-facing network share.

2tb Hibachi, 81% full, 72,022 hours

Anything high-fragmentation -- browser cache, torrents, downloads, etc. This disk is hammered pretty hard and, well... 72K hours is over 8 years of, again, continuous uptime. AKA "replace BEFORE it fails."

That's a LOT of uptime for consumer level drives. That trusty old 1tb WD is ready to be put out to pasture, as is that 2tb hibachi deathstar. I'd also like to proactively replace the RAID array.

My original plan was for two 6tb drives to replace the RAID array, a 12tb drive for mass storage replacing the scratch drive and media drive, and a 1tb SSD... coming to around $900 USD.

But I just cannot justify the $170 to $180 per TB of that SSD, particularly when the entire config works out to 17tb of total free space at $52.94 a TB...

So it got me thinking... IF I'm going to have a RAID array, just build a freaking RAID array! It's not like this ASRock Fatal1ty Z87 Killer mobo doesn't support it.

As such, I'm looking at four 6tb 7200 RPM 256 meg cache Seagate IronWolf or Enterprise Capacity drives. Not sure yet which I favor, and the price difference is negligible. In RAID 5 that gives me 18tb, all my storage would have single-drive failure tolerance same as the RAID 1 would have given me, and it drops the total price to around $700 -- that's around $39 per TB, spitting distance from the cost per TB of the single 12tb drive I was considering at $35/tb.

6tb drives are the optimal price to capacity right now at $29/tb.
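
Just to sanity-check the cost math, a quick Python back-of-the-envelope (prices are the rough figures above; the 17tb is what I counted as free space in my original plan, and the $420 line is the ~$35/tb single 12tb drive for comparison):

```python
# Back-of-the-envelope cost/capacity comparison (approximate prices from above).

def raid5_usable(drives, size_tb):
    """RAID 5 usable capacity: one drive's worth of space goes to parity."""
    return (drives - 1) * size_tb

plans = {
    # name: (total price USD, usable TB)
    "SSD + RAID1 + 12tb": (900, 17),                    # original plan, ~17tb free space
    "4x 6tb RAID 5 SATA": (700, raid5_usable(4, 6)),    # 18tb usable
    "4x 6tb RAID 5 SAS":  (1100, raid5_usable(4, 6)),   # same capacity, pricier drives
    "single 12tb HDD":    (420, 12),                    # the ~$35/tb drive, for reference
}

for name, (price, usable_tb) in plans.items():
    print(f"{name:>20}: {usable_tb:>3} TB usable, ${price / usable_tb:.2f}/TB")
```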

Even better, sequential reads would be faster than any SATA SSD I could put in the rig! Though writes would still be at normal HDD speeds because... reasons (parity access).
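
(For the curious, the "reasons" is the classic RAID 5 small-write penalty: a partial-stripe write has to read the old data and old parity, then write the new data and new parity, so roughly four disk I/Os per logical write. A rough sketch with a guessed per-drive IOPS figure, just to show the shape of it:)

```python
# RAID 5 small-write penalty: each partial-stripe write costs ~4 disk I/Os
# (read old data, read old parity, write new data, write new parity).

def raid5_random_write_iops(drives: int, iops_per_drive: float) -> float:
    """Very rough random-write ceiling for a RAID 5 array."""
    return drives * iops_per_drive / 4

# Guessing ~100 random IOPS per 7200 RPM drive (ballpark, not a measurement):
print(raid5_random_write_iops(4, 100))  # -> 100.0, i.e. no better than one drive
```

Big sequential writes that fill whole stripes dodge most of that penalty, but onboard RAID still does the parity math in software.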

... and I'd still have two drive connectors free, meaning I can make the low-hours 4tb the new scratch space and have one free for... the future.

I also like the idea because I have a rabid distrust of SSDs -- probably because I've replaced too many broken ones for other people. Since SSDs became the "norm", I've seen ten times as many failures from people needing hardware help as I ever saw with standard HDDs. EVEN from brands I'd expect to be on the ball, like Samsung and Crucial -- the brands constantly pimped as the best performing and most reliable!

But adding that all up got me thinking... what about going SAS? It ends up $400 more, bringing me past the price of my original plan, and it comes with some heavier-duty configuration headaches, but I would get 12gb/s throughput and 10K RPM enterprise-class drives -- though I'm NOT convinced the performance increase is sufficient to justify the cost jump.

Only issue is that a good controller needs a PCIe x4 slot, which would drop my video card (GTX 1070) to x8 since there are only 16 lanes available... I know the performance difference of running a video card at x16 is more placebo than fact, but still...

Though I could just go totally nutters and go with a RAID 10 array -- NOT financially viable.

I dunno, I'm having trouble deciding, so that's why I'm asking for opinions. Attaching the relevant poll. I can afford any of the choices, but is it worth the extra money for either of the other two? I'm leaning towards the cheapest option because it looks like the fewest headaches, highest reliability, and, well... it's cheaper. I mean, in theory that SHOULD give me sequential reads twice that of the best SATA SSD, given the read speeds of 7200 RPM 6tb drives. Close enough?
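
Ballparking that "twice a SATA SSD" claim (the throughput numbers below are guesses, not measurements; real results depend on the drives and how good the onboard RAID is):

```python
# Rough sequential-read estimate for a 4-drive RAID 5 vs a single SATA SSD.
HDD_SEQ_MBPS = 230   # guessed sequential throughput of a 7200 RPM 6tb drive
SSD_SEQ_MBPS = 550   # roughly the SATA 6gb/s ceiling for a good SSD
DRIVES = 4

# Big reads stripe across every spindle, but each disk skips its share of
# parity, so realistic throughput lands between (n-1)x and nx a single drive:
low, high = (DRIVES - 1) * HDD_SEQ_MBPS, DRIVES * HDD_SEQ_MBPS
print(f"RAID 5: ~{low}-{high} MB/s vs SATA SSD: ~{SSD_SEQ_MBPS} MB/s")
```

So "twice" is the optimistic end of that range, but even the pessimistic end is comfortably past a single SATA SSD.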

I do have the money for any of these three options... so it really comes down to which is the best bang for my buck. Explaining why YOU would favor one over the other choices will help too.