This is part 2 of a series of lessons learned and worked examples based on a 400+ TB VSAN solution I helped a partner engineer. I'm comparing high-capacity VSAN nodes with standard configurations. This post focuses on calculating the IOPS required to de-stage writes from the caching layer, then examines the implications on hardware choices.
Calculating IOPS Required
The caching layer soaks up the spikes of data and IOPS that occur during normal operations (for example, a periodic dump of meteorological images). When using low-density (IOPS/TB) storage, the rate at which data can be de-staged from the caching layer to the persistence layer becomes very important. For a burst like that example, it's important to know the average rate at which data is being written to the storage. Though the caching layer can soak up the spikes, the persistence layer ultimately needs to keep up with the data being de-staged to it. Keep in mind that policies might dictate more than a single copy of data be written. Let's consider 1 TB ingested every day. This is a round number I picked, and it has profound implications on the worked example; every situation is different, so you should examine the specifics of yours. Some rough back-of-the-napkin math:
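Converting the daily ingest rate to a sustained write rate (decimal units, 1 TB = 10^6 MB):

```latex
\frac{1\,\text{TB/day}}{86{,}400\,\text{s/day}} = \frac{10^{6}\,\text{MB}}{86{,}400\,\text{s}} \approx 11.6\,\text{MB/s}
```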
That’s 11.6 MB/s for a single copy. Since the policy dictates two copies, that’s 23.2 MB/s.
Implications of Block Size
To calculate the IOPS needed to handle that de-staging throughput, consider the block size being written. This is actually an interesting advantage of VSAN: operating systems typically use 4 KB blocks, VMFS is traditionally 1 MB, but the VSAN 6.0 filesystem (VirstoFS) can write in 4 MB chunks while using 512-byte allocation units. Why is this significant? VirstoFS can track block changes at 512-byte granularity, so it can create very small, performant snapshots. But imagine if it could only write 512-byte blocks:
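At 512-byte blocks, the 23.2 MB/s de-stage rate works out to:

```latex
\frac{23.2\,\text{MB/s}}{512\,\text{B/IO}} \approx 45{,}300\ \text{IOPS}
```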
You can immediately see that block size has another profound impact on the IOPS for this solution. This is something that needs to be examined a bit further and probably tested on the final solution. If 4 KB blocks are used, for example, we'd need around 5,800 IOPS; 1 MB blocks would require only 23.2 IOPS. And if 4 MB blocks are used…
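At 4 MB blocks:

```latex
\frac{23.2\,\text{MB/s}}{4\,\text{MB/IO}} = 5.8\ \text{IOPS}
```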
That is a big difference.
Great, we now know that for our hypothetical 1TB/day ingest rate, we only need 5.8 IOPS?! Even with our low IOPS/TB density storage, that’s still handled by a single drive at 100 IOPS. Imagine if we were using 4KB blocks:
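At 4 KB blocks, with drives that deliver 100 IOPS each:

```latex
\frac{23.2\,\text{MB/s}}{4\,\text{KB/IO}} = 5{,}800\ \text{IOPS} \quad\Rightarrow\quad \frac{5{,}800\ \text{IOPS}}{100\ \text{IOPS/drive}} = 58\ \text{drives}
```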
We’d need 58 drives just for IOPS! With Large Form Factor (LFF) drives, we can only fit 12 per 2U host, which would mean:
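With 12 drives per host:

```latex
\left\lceil \frac{58\ \text{drives}}{12\ \text{drives/host}} \right\rceil = \lceil 4.83 \rceil = 5\ \text{hosts}
```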
Note: ⌈ ⌉ is the “ceiling” function, which represents rounding up to the next integer.
As it is, the real issue we face is the retention rate for our data. If it needs to be held for a year, then we need:
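The per-drive usable capacity isn't stated here, but the totals in the text imply roughly 730 / 176 ≈ 4.15 TB usable per LFF drive, so the capacity math looks something like:

```latex
1\,\text{TB/day} \times 365\,\text{days} \times 2\ \text{copies} = 730\,\text{TB}, \qquad \left\lceil \frac{730\,\text{TB}}{\sim 4.15\,\text{TB/drive}} \right\rceil = 176\ \text{drives}, \qquad \left\lceil \frac{176}{12} \right\rceil = 15\ \text{hosts}
```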
So we need only 1 drive to meet the IOPS requirement, but 176 drives housed in 15 hosts to meet the capacity requirement. Capacity wins, and we'll have excess IOPS. Keep in mind that in this case we need more disks for capacity (176) than we would even if we needed 5,800 IOPS (58 drives). Oh, and how many IOPS do we end up with?
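The sizing logic above can be sketched in a few lines of Python. This is a hypothetical helper, not anything from VSAN tooling; the drive figures (IOPS, usable TB per drive, capacity slots per host) are assumptions chosen to match the worked example.

```python
import math

def size_tier(destage_mbps, block_bytes, drive_iops,
              usable_tb_per_drive, drives_per_host,
              tb_per_day=1, retention_days=365, copies=2):
    """Return (drives, hosts, total_iops): take the larger of the
    IOPS-driven and capacity-driven drive counts, then derive hosts."""
    iops_needed = destage_mbps * 1e6 / block_bytes
    drives_for_iops = math.ceil(iops_needed / drive_iops)
    capacity_tb = tb_per_day * retention_days * copies   # 730 TB here
    drives_for_capacity = math.ceil(capacity_tb / usable_tb_per_drive)
    drives = max(drives_for_iops, drives_for_capacity)
    hosts = math.ceil(drives / drives_per_host)
    return drives, hosts, drives * drive_iops

# LFF tier: 100 IOPS/drive, ~4.15 TB usable (assumed), 12 drives per 2U host
print(size_tier(23.2, 4_000_000, 100, 4.15, 12))   # → (176, 15, 17600)
```

So the LFF tier ends up with 17,600 IOPS against a 5.8 IOPS de-stage requirement.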
Now let’s compare that to using Small Form Factor (SFF) 10K RPM drives at 140 IOPS/drive.
For capacity requirements:
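Again, the per-drive usable capacity isn't stated; the totals in the text imply roughly 730 / 879 ≈ 0.83 TB usable per SFF drive, and about 21 capacity slots per 2U host:

```latex
\left\lceil \frac{730\,\text{TB}}{\sim 0.83\,\text{TB/drive}} \right\rceil \approx 879\ \text{drives}, \qquad \left\lceil \frac{879}{21} \right\rceil = 42\ \text{hosts}
```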
Again, for each project we have to meet both capacity and IOPS requirements, so we choose the larger of the two numbers and go with 879 drives housed in 42 hosts. This solution will also give us an excess of IOPS:
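That excess is:

```latex
879\ \text{drives} \times 140\ \text{IOPS/drive} = 123{,}060\ \text{IOPS}
```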
Just to re-emphasize: this is a measure of the rate at which the persistence layer can absorb writes de-staged from the cache layer and service read-cache misses.
At this point, we can see why a higher-density solution is more cost effective: using LFF drives requires 27 fewer hosts (15 versus 42), and either 27 or 54 fewer licenses, depending on whether one or two CPU sockets are populated per host (which in turn depends on compute requirements).
Up next: Flexibility of Purchased Capacity
Image: math chalkboard, by Kim Manley Ort (Flickr)