
Should I use an SSD cache?

Tiggerlator

Member
Joined
Oct 27, 2022
Messages
6
Likes
7
My PC gets turned on in the morning and is on all day till I go to bed. I have 3x NVMe SSDs in it now (512GB, 2TB, 4TB). I use an external slot-in USB3 caddy for a 4TB rust spinner, for my movies and some backups. I don't use any standby settings on my PC at all. My PC has a custom loop cooling the CPU and GPU, btw.

IMO, use all NVMe in your PC if you can; they are fast and consume 10W at most(?)
 

khensu

Active Member
Joined
Dec 23, 2022
Messages
167
Likes
232
Location
Colorado
At what point is it better to have hard drives set to spin down versus staying on?

I have a relatively large library of approximately 20TB, across 4 drives in a 4-bay USB caddy. Each drive can be switched on independently. I only switch them on when I am listening to music and currently have Windows set such that they don't spin down. I am still unclear as to whether they will last longer this way or whether I am better off setting them to spin down. What is the general consensus?

Thanks in advance.
If the switch controls power to the drive, you may be better off leaving them on and letting power management spin them down. Spin up/down is likely less taxing on hard drives than actual power off/on.
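For reference, on Windows the idle timeout behind that spin-down can be set from a script; a minimal sketch calling the built-in powercfg utility from Python, with the timeout values (20 minutes on AC, 5 on battery) purely as examples:

```python
# Minimal sketch: set the Windows disk idle spin-down timeout via the
# built-in powercfg utility. Timeout values here are examples only.
import subprocess

def set_disk_spindown(minutes_ac: int, minutes_dc: int) -> None:
    """Spin down idle disks after the given timeouts (0 = never)."""
    subprocess.run(
        ["powercfg", "/change", "disk-timeout-ac", str(minutes_ac)],
        check=True,
    )
    subprocess.run(
        ["powercfg", "/change", "disk-timeout-dc", str(minutes_dc)],
        check=True,
    )

if __name__ == "__main__":
    set_disk_spindown(minutes_ac=20, minutes_dc=5)
```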
 

Chr1

Addicted to Fun and Learning
Forum Donor
Joined
Jul 21, 2018
Messages
847
Likes
645
If the switch controls power to the drive, you may be better off leaving them on and letting power management spin them down. Spin up/down is likely less taxing on hard drives than actual power off/on.
Thanks. I generally turn the PC off at night and usually only power the drives on for around 8-10 hours max. Figure it is still probably better for them than continuous spin up/down. No problem with noise as I generally play my music pretty loud. I do use SSDs for downloading and playing my most recent music. Will upgrade to all SSD and use the current spinners as backup when I can afford to, as I have my current backup across a ton of very old small-capacity drives from yesteryear...
 

Jinjuku

Major Contributor
Forum Donor
Joined
Feb 28, 2016
Messages
1,279
Likes
1,180
My understanding is that the SSD cache is used for front-ending large spinning arrays, where a pre-fetch algorithm runs and does its best to determine, ahead of the request, what's going to be needed. Not that the array is spinning disks down.
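For illustration, a toy version of that idea (not any vendor's actual algorithm): a front-end cache that notices sequential reads and warms the next few blocks from the backing array before they are requested:

```python
# Toy illustration (not any vendor's algorithm): a front-end cache that
# detects sequential reads and pre-fetches the next few blocks from the
# backing array before they are requested.
from collections import OrderedDict

class PrefetchCache:
    def __init__(self, backing, capacity=1024, window=8):
        self.backing = backing        # dict-like: block number -> data
        self.cache = OrderedDict()    # LRU order, oldest entry first
        self.capacity = capacity
        self.window = window          # blocks to warm on a sequential streak
        self.last_block = None

    def _store(self, block):
        self.cache[block] = self.backing[block]
        self.cache.move_to_end(block)
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used

    def read(self, block):
        if block not in self.cache:          # miss: go to the slow array
            self._store(block)
        if self.last_block is not None and block == self.last_block + 1:
            # Sequential streak: pre-fetch ahead of the next request.
            for b in range(block + 1, block + 1 + self.window):
                if b in self.backing and b not in self.cache:
                    self._store(b)
        self.last_block = block
        return self.cache[block]
```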
 

Berwhale

Major Contributor
Forum Donor
Joined
Aug 29, 2019
Messages
3,973
Likes
4,979
Location
UK
so I am wondering if an SSD cache would eliminate latency (from spun down drives). What do you think?

It can remove the latency for writes (assuming the write policy accepts a cache write as a commit), but for reads it will only do so if the requested data happens to be in the cache. The likelihood of this will depend on the ratio of HDD capacity to cache capacity and your usage profile.
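One way to get a feel for how much that ratio and profile matter is a quick simulation; a rough sketch, with the file counts, cache size, and Zipf-style access skew all assumed rather than measured:

```python
# Back-of-envelope sketch: estimate the read hit ratio of an LRU cache as
# a function of cache-to-library size, assuming Zipf-skewed accesses.
# All figures below are illustrative, not measurements.
import random
from collections import OrderedDict

def hit_ratio(num_files=20000, cache_files=500, accesses=100000, skew=1.0):
    """Fraction of reads served from an LRU cache under skewed access."""
    weights = [1 / (rank + 1) ** skew for rank in range(num_files)]
    cache, hits = OrderedDict(), 0
    for f in random.choices(range(num_files), weights=weights, k=accesses):
        if f in cache:
            hits += 1
            cache.move_to_end(f)              # refresh recency
        else:
            if len(cache) >= cache_files:
                cache.popitem(last=False)     # evict least recently used
            cache[f] = True
    return hits / accesses

print(f"estimated read hit ratio: {hit_ratio():.1%}")
```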
 

Berwhale

Major Contributor
Forum Donor
Joined
Aug 29, 2019
Messages
3,973
Likes
4,979
Location
UK
I have NVMe drives in the 'cache' M.2 slots of both of my Synology NAS units. However, they are both configured as data volumes and store my most accessed data (essentially, the M.2 slots just give me space for 2 more data drives in my DS920+ and DS420+).

In my opinion, there is little point in using fast NVMe drives to cache HDDs in a NAS that is already limited by its 1GbE interfaces (even when two 1GbE interfaces are aggregated, as mine are).
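A quick back-of-envelope check of why the link is the bottleneck (all throughput figures are ballpark assumptions):

```python
# Ballpark comparison of link speed vs. drive throughput (assumed figures).
def mb_per_s(gbit: float) -> float:
    return gbit * 1000 / 8          # line rate, ignoring protocol overhead

links = {"1GbE": 1, "2x 1GbE (aggregated)": 2, "10GbE": 10}
drives = {"7200rpm HDD (sequential)": 200, "SATA SSD": 550, "NVMe Gen3": 3500}

for name, gbit in links.items():
    print(f"{name:22} ~{mb_per_s(gbit):6.0f} MB/s")
for name, rate in drives.items():
    print(f"{name:25} ~{rate:6.0f} MB/s")
# A single modern HDD already saturates 1GbE (~125 MB/s), so on that link
# an NVMe cache mainly helps small random I/O, not bulk transfer speed.
```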

I have a plan to fill the 2nd M.2 slot in each of my NAS with a 2TB NVMe PCI-e Gen3 drive when they hit £50 in the UK. At this point, I will move my music collection to the new drives.
 

voodooless

Grand Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
10,426
Likes
18,431
Location
Netherlands
In my opinion, there is little point in using fast NVMe drives to cache HDDs in a NAS that is already limited by its 1GbE interfaces (even when two 1GbE interfaces are aggregated, as mine are).
Get 10GbE :cool:. I just upgraded to 10GbE on my tiny Proxmox server, and now I can read at 1.2 GB/s over the network from my NVMe drive over SMB.
I have a plan to fill the 2nd M.2 slot in each of my NAS with a 2TB NVMe PCI-e Gen3 drive when they hit £50 in the UK. At this point, I will move my music collection to the new drives.
You should be only a few pounds off by now. Just be aware that the TBW (terabytes written) endurance may vary wildly, especially for the budget models.
 
OP

Digby

Major Contributor
Joined
Mar 12, 2021
Messages
1,632
Likes
1,561
However, they are both configured as data volumes and store my most accessed data (essentially, the M.2 slots just give me space for 2 more data drives in my DS920+ and DS420+).
When you say most accessed, do you manually assess this or is some programme working this out for you?
 

Multicore

Major Contributor
Joined
Dec 6, 2021
Messages
1,790
Likes
1,968
Which is about 2-3 times the price of an HDD of similar size. I can afford it, but this isn't data that demands high speeds; it'd just be nice to access without latency.
It can be 10 seconds or so. For a lot of purposes that's really not acceptable.

As for an SSD cache, it won't work unless the files you want to access are in it. How do you arrange that? What architecture do you use that can make that likely?
 

Berwhale

Major Contributor
Forum Donor
Joined
Aug 29, 2019
Messages
3,973
Likes
4,979
Location
UK
When you say most accessed, do you manually assess this or is some programme working this out for you?

I store stuff like home drives, documents and installation binaries on the NVMe-hosted volumes. My media sits on HDD. To be honest, this has more to do with the size of the volumes than it does with latency or speed of access (the reason for waiting for 2TB NVMe drives is that I have over 1TB of FLACs).

I actually encourage HDD spin-down in both my NAS. I have a primary NAS (DS920+) and a secondary NAS (DS420+). Both NAS have identically sized HDDs and NVMe drives in them. Each drive hosts a single volume (I do not use RAID), and each volume hosts one or more shares. The shares are synchronized from the primary to the secondary NAS once per day. Important data is also synchronized to cloud storage and occasionally to a USB drive, which is kept offline at other times.
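For anyone wanting to reproduce this pattern outside Synology's own tooling, a hypothetical sketch of the once-a-day mirror using rsync over SSH (the share names and destination path are made up for illustration):

```python
# Hypothetical stand-in for the once-a-day primary -> secondary share sync.
# (Synology's own Shared Folder Sync is the real mechanism; this just shows
# the mirror-with-delete pattern using rsync over SSH. Names are made up.)
import subprocess

SHARES = ["music", "documents", "homes"]    # example share names
DEST = "secondary-nas:/volume1"             # assumed SSH host and base path

def mirror(share: str) -> None:
    # -a preserves attributes; --delete makes the target a true mirror,
    # removing files that no longer exist on the primary.
    subprocess.run(
        ["rsync", "-a", "--delete",
         f"/volume1/{share}/", f"{DEST}/{share}/"],
        check=True,
    )

for share in SHARES:
    mirror(share)
```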

This setup has the following benefits...

1. Single drives (i.e. non-RAID) can spin up and down as and when required, saving power and runtime in both NAS.

2. I don't need to worry about getting the right type of drives for RAID (CMR vs SMR) - I can buy the cheapest possible drives from a £/TB perspective; this usually means shucking a drive from a USB enclosure.

3. Upgrading is cheaper: I don't have to replace a whole set of drives, I can just add a single drive to each NAS. E.g. at the moment I have the following drive config: 512GB NVMe, spare M.2 slot, 4TB HDD, 6TB HDD, 6TB HDD, spare HDD slot. To upgrade media storage on HDD, I will put another drive in the spare slot in each NAS (probably an 8TB or 10TB), migrate the share(s) from the 4TB drive to the new drive, and retire the 4TB drive (which will be 4 or 5 years old by this point). This will free up another HDD slot for the next upgrade.
 

Berwhale

Major Contributor
Forum Donor
Joined
Aug 29, 2019
Messages
3,973
Likes
4,979
Location
UK
Get 10Gbe :cool:. I just upgraded to 10Gbe on my tiny proxmox server, and now I can read with 1.2 GB/sec over the network from my NVME drive over SMB.

You should be only a few points off by now. Just be aware that the TBW may vary wildly, especially for the budget models.

I know, I purchased 6x 1.1PB (usable @ 3:1 DRR) Dell PowerStore 9200T all-flash arrays this year in my day job; they have 4x 100GbE ports on them :)

At home, I have a bunch of 1GbE managed switches that would be expensive to upgrade to 10GbE and a lot of CAT5e that may or may not be up to the job.
 
OP

Digby

Major Contributor
Joined
Mar 12, 2021
Messages
1,632
Likes
1,561
3. Upgrading is cheaper: I don't have to replace a whole set of drives, I can just add a single drive to each NAS. E.g. at the moment I have the following drive config: 512GB NVMe, spare M.2 slot, 4TB HDD, 6TB HDD, 6TB HDD, spare HDD slot. To upgrade media storage on HDD, I will put another drive in the spare slot in each NAS (probably an 8TB or 10TB), migrate the share(s) from the 4TB drive to the new drive, and retire the 4TB drive (which will be 4 or 5 years old by this point). This will free up another HDD slot for the next upgrade.
Might be a thankless task, but now that there are huge 16TB/18TB drives out there, have you weighed up cost per TB and wattage (elec costs) used over the year? Maybe you could get it down to one NAS with two 18TB drives? The 16TB drive I tried made an awful racket though; dunno if this factors into your usage considerations, but I think the newer (helium?) drives are pretty noisy compared to older (6TB and under) tech.
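A rough sketch of that cost comparison (every price and wattage here is an assumption; substitute real figures):

```python
# Rough cost comparison: several small drives vs. two big ones.
# Every price and wattage below is an assumption; plug in your own.
KWH_PRICE = 0.30  # £/kWh, illustrative UK rate

def annual_electricity(n_drives: int, watts_each: float,
                       hours_per_day: float = 8) -> float:
    """Yearly cost in £ for the drives while spinning."""
    return n_drives * watts_each * hours_per_day * 365 / 1000 * KWH_PRICE

configs = {
    "mixed 4-6TB drives": {"n": 6, "watts": 6, "capex": 660, "tb": 32},
    "2x 18TB drives":     {"n": 2, "watts": 7, "capex": 560, "tb": 36},
}

for name, c in configs.items():
    print(f"{name:20} £{c['capex'] / c['tb']:5.1f}/TB capex, "
          f"~£{annual_electricity(c['n'], c['watts']):.0f}/yr electricity")
```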
 

Barrelhouse Solly

Senior Member
Joined
Aug 13, 2020
Messages
383
Likes
364
I have a little under a TB of music on a NAS and I have no latency problem. If you have a network, the router may support connecting a USB drive and allowing an SMB or DLNA connection. I did that for several years before getting the NAS.
 

pseudoid

Master Contributor
Forum Donor
Joined
Mar 23, 2021
Messages
5,221
Likes
3,569
Location
33.6 -117.9
You can 'report' me if this is OT:
...As speedy solid-state drives began to rise as a realistic alternative to the decades-old hard drive for consumers, early adopters worried that their new SSDs would fail faster and more frequently. But that might not be the case now, if it ever was. According to the latest Drive Stats report from online backup service Backblaze, its SSDs are failing at a consistently lower rate than old-fashioned hard drives. If the data can be replicated, it means that one of the last advantages (or perceived advantages) of the hard drive is disappearing.
...Backblaze tested its SSDs performing the same tasks it previously assigned to hard drives after a full company transition in 2018. This includes serving as boot drives, primary storage, temporary SMART storage, et cetera. After four years of aggregated data, the SSDs from a variety of manufacturers are showing a failure rate of 1.05%, well below the 1.83% failure rate of hard drives over the previous four-year period. Dramatically, the SSDs showed a 0.00% failure rate in their first year of service compared to 0.66% for hard drives. Even the most pessimistic projections show the SSDs outperforming hard drives for reliability by a considerable margin.

link
[Chart: Backblaze annualized failure rates, HDD vs SSD (202308_HDDvSSDBackblaze.jpg)]

Boy, have we come a looong way!:)
 

mrbungle

Active Member
Joined
Mar 7, 2021
Messages
177
Likes
175
Location
Boston
Quite an unusual setup, why do you have 4 SSDs in RAID 0?
Sorry, RAID 10 it is. But with all the backups, I'm not sure I need the redundancy at home.
 

Berwhale

Major Contributor
Forum Donor
Joined
Aug 29, 2019
Messages
3,973
Likes
4,979
Location
UK
Might be a thankless task, but now that there are huge 16TB/18TB drives out there, have you weighed up cost per TB and wattage (elec costs) used over the year? Maybe you could get it down to one NAS with two 18TB drives? The 16TB drive I tried made an awful racket though; dunno if this factors into your usage considerations, but I think the newer (helium?) drives are pretty noisy compared to older (6TB and under) tech.

I have 2 NAS because I want at least two copies of all my data. The drives that I have now were mostly purchased when 6TB was the sweet spot for £/TB, the exception being the last one I bought to replace a 3TB drive in my secondary that would shortly have been unable to hold all the contents of the 6TB drive in the primary (I don't have to have exactly the same capacity drives in each NAS, as long as the smallest one is big enough to hold the data that is replicated).

Since my drives are often spun down (especially the ones in the secondary), I'm not so concerned with the number or capacity of drives. It would be wasteful to replace the 4 & 6TB drives before a reasonable amount of use (5 or 6 years).

I did consider consolidation of all the drives in the primary onto a large single drive (16 or 18TB) and filling the secondary with the older drives, but I don't need that much storage at the moment, so couldn't justify the cost to myself.
 

Berwhale

Major Contributor
Forum Donor
Joined
Aug 29, 2019
Messages
3,973
Likes
4,979
Location
UK
Sorry, RAID 10 it is. But with all the backups, I'm not sure I need the redundancy at home.

RAID is an availability technology. So if you are prepared for data to be unavailable whilst hardware is replaced and the data restored, then you don't need RAID (or the associated expense or loss of capacity).
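The arithmetic behind that trade-off, with illustrative numbers: without RAID, downtime after a drive failure is roughly the time it takes to restore from backup.

```python
# Illustrative availability math: without RAID, a failed drive means the
# data is offline for the whole restore-from-backup. Figures are assumed.
def restore_hours(data_tb: float, throughput_mb_s: float) -> float:
    return data_tb * 1_000_000 / throughput_mb_s / 3600

# e.g. restoring a full 6TB drive over 1GbE at ~110 MB/s effective:
print(f"~{restore_hours(6, 110):.0f} hours offline")   # about 15 hours
# With RAID, the volume keeps serving (degraded) during the rebuild, so
# what you are buying is availability, not data safety.
```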
 

mrbungle

Active Member
Joined
Mar 7, 2021
Messages
177
Likes
175
Location
Boston
RAID is an availability technology. So if you are prepared for data to be unavailable whilst hardware is replaced and the data restored, then you don't need RAID (or the associated expense or loss of capacity).
Initially I thought I'd use it as a semi-backup (I know RAID is not a backup) together with AWS when I set it up, but then realized a second local disk backup is cheaper and much faster than a full recovery from S3 Glacier. It will take a while until I fill it up, so I'll keep it as RAID for now.
 

TK750

Active Member
Joined
Apr 19, 2021
Messages
230
Likes
414
Location
UK
Why is that?
There's an abundance of NAND flash currently: supply is large but demand has plummeted. As a result, a lot of producers have started to scale back production. Most estimates suggest prices are very close to bottoming out, or just about have, with some increase likely at some point in Q4.
 