Input/Output Operations Per Second, or IOPS (pronounced eye-ops), is the most common metric for benchmarking the performance of storage systems such as hard disk drives, solid-state drives, and storage area networks. It measures the number of read/write operations a device can complete in a second.
The most common performance characteristics are as follows:
[table id=72/]
(source: https://en.wikipedia.org/wiki/IOPS)
There are plenty of IOPS performance claims published by vendors and manufacturers. Most of these figures are measured under the most favorable conditions, so they shouldn't be relied upon too heavily: they rarely match the actual workloads that companies run on a daily basis. Many performance claims are based on a 4K block size, while real-world workloads typically use much larger blocks.
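To see why small-block benchmarks flatter an IOPS number, remember that throughput is just IOPS multiplied by block size. A quick sketch (the drive figures here are hypothetical, not vendor data):

```python
def throughput_mb_s(iops: float, block_size_kb: float) -> float:
    # Throughput = IOPS x block size; KB -> MB via /1024
    return iops * block_size_kb / 1024

# A hypothetical drive benchmarked at 10,000 IOPS with 4K blocks:
print(f"{throughput_mb_s(10_000, 4):.0f} MB/s")   # ~39 MB/s
# The same 10,000 IOPS with 64K blocks would move 16x the data:
print(f"{throughput_mb_s(10_000, 64):.0f} MB/s")  # 625 MB/s
```

In other words, a headline IOPS number tells you little unless the block size used to produce it is attached.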
Calculating IOPS
Calculating IOPS depends on a few factors, and latency is one of them. Latency is a measure of the time delay for an input/output (I/O) operation. For a spinning disk, the main contributors are:
Spindle Speed (RPM) – Enterprise-level storage rotation speeds are most commonly 10,000 and 15,000 RPM.
Seek Time – How long it takes the read/write head to move to the needed track on the platter.
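Spindle speed determines the average (rotational) latency directly: on average, the target sector is half a rotation away from the head. A quick sketch:

```python
def avg_rotational_latency_ms(rpm: int) -> float:
    # One full rotation takes 60,000 ms / RPM; on average the target
    # sector is half a rotation away from the read/write head.
    return 60_000 / rpm / 2

print(avg_rotational_latency_ms(15_000))  # 2.0 ms
print(avg_rotational_latency_ms(10_000))  # 3.0 ms
print(avg_rotational_latency_ms(7_200))   # ~4.17 ms
```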
Newer SSD drives have significantly better IOPS performance than their traditional hard disk drive counterparts. This topic could fill an entirely separate post. A lack of moving parts, among many other things, drastically improves their performance. As with most things, an increase in performance usually results in an increase in price.
Here’s a basic formula to estimate a disk’s IOPS range: divide 1 by the sum of the average latency and the average seek time, expressed in seconds. In other words, *IOPS ≈ 1 / (average latency in s + average seek time in s)*, or equivalently *1000 / (average latency in ms + average seek time in ms)* when working in milliseconds.
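Here is a minimal sketch of that formula in code. The 2 ms rotational latency follows from a 15,000 RPM spindle speed as computed above; the 4 ms average seek time is a typical spec-sheet figure, used here only as an example:

```python
def estimated_iops(avg_latency_ms: float, avg_seek_ms: float) -> float:
    # IOPS ~= 1 / (average latency + average seek time), in seconds
    return 1000 / (avg_latency_ms + avg_seek_ms)

# A 15,000 RPM drive: 2 ms rotational latency (from the spindle speed)
# plus an assumed 4 ms average seek time.
print(f"{estimated_iops(2.0, 4.0):.0f} IOPS")  # ~167 IOPS
```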
The basic formula above applies to a single disk. When using multiple disks in an array, the calculation changes, and a RAID configuration changes it further: each RAID level carries a write penalty, since a single logical write can require multiple physical operations, so the read/write mix of your workload matters.
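As a rough illustration of how RAID changes the math, here is a sketch using the commonly cited write-penalty values; the drive count, per-disk IOPS, and read/write mix below are hypothetical:

```python
# Commonly cited RAID write penalties: each logical write costs this
# many physical I/Os (e.g. RAID 5: read data + read parity, write both).
WRITE_PENALTY = {"RAID 0": 1, "RAID 1": 2, "RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def functional_iops(disk_iops: float, disks: int, read_pct: float, raid: str) -> float:
    raw = disk_iops * disks
    write_pct = 1 - read_pct
    # Reads cost one physical I/O each; writes are amplified by the penalty.
    return raw * read_pct + (raw * write_pct) / WRITE_PENALTY[raid]

# Hypothetical: eight 167-IOPS 15K drives, a 70/30 read/write mix, RAID 5
print(f"{functional_iops(167, 8, 0.70, 'RAID 5'):.0f} IOPS")  # ~1035 IOPS
```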
Your IOPS needs will depend on a myriad of factors: the read/write mix, block size, and access patterns of your workload, among others.