Performance Consistency
In our Intel SSD DC S3700 review, Anand introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result it needed some additional testing to demonstrate that. The reason SSDs don't deliver consistent IO latency is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying that work can result in higher peak performance at the expense of much lower worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.
To generate the data below, we take a freshly secure erased SSD and fill it with sequential data. This ensures that all user-accessible LBAs have data associated with them. Next, we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. We run the test for just over half an hour, nowhere near as long as our steady state tests but long enough to give a good look at drive behavior once all spare area fills up.
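For readers who want a rough idea of what this workload looks like in practice, here is a minimal Python sketch of the fill-then-random-write sequence. It is only an approximation: the device path is a placeholder, it runs single-threaded through the OS page cache rather than at QD=32 with unbuffered IO, and our actual testing uses Iometer.

```python
# Simplified approximation of the consistency-test workload (NOT the actual
# Iometer setup). Assumptions: DEVICE is a disposable drive or test file you
# can safely overwrite; no O_DIRECT/alignment handling; effectively QD=1.
import os
import random
import time

DEVICE = "/dev/sdX"      # hypothetical target -- change before running
BLOCK = 4096             # 4KB transfers
DURATION = 2000          # seconds, matching the ~half-hour test window

fd = os.open(DEVICE, os.O_WRONLY)
size = os.lseek(fd, 0, os.SEEK_END)      # total user-accessible span in bytes
lba_count = size // BLOCK

# Step 1: sequential fill so every user-accessible LBA holds data.
os.lseek(fd, 0, os.SEEK_SET)
fill = os.urandom(1024 * 1024)           # 1MB chunks keep the fill pass sane
for _ in range(size // len(fill)):
    os.write(fd, fill)

# Step 2: 4KB random writes across all LBAs using incompressible data.
start = time.time()
while time.time() - start < DURATION:
    os.lseek(fd, random.randrange(lba_count) * BLOCK, os.SEEK_SET)
    os.write(fd, os.urandom(BLOCK))      # fresh random data defeats compression
os.close(fd)
```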
We record instantaneous IOPS every second for the duration of the test and then plot IOPS vs. time to generate the scatter plots below. Within each set of graphs, every drive is plotted on the same scale. The first two sets use a log scale for easy comparison, while the last set uses a linear scale that tops out at 40K IOPS to better visualize the differences between drives.
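As an illustration of how the per-second samples become scatter plots, here is a short Python/matplotlib sketch. The iops_log.csv file and its column layout are assumptions for the example; only the log/linear scales and the 40K IOPS cap come from the description above.

```python
# Turn per-second IOPS samples into log- and linear-scale scatter plots.
# Assumes a hypothetical iops_log.csv with "second,iops" rows from the test.
import csv
import matplotlib.pyplot as plt

seconds, iops = [], []
with open("iops_log.csv") as f:
    for row in csv.reader(f):
        seconds.append(int(row[0]))
        iops.append(int(row[1]))

fig, (log_ax, lin_ax) = plt.subplots(1, 2, figsize=(12, 4))

log_ax.scatter(seconds, iops, s=4)
log_ax.set_yscale("log")            # log scale, as in the first two sets
log_ax.set_xlabel("Time (s)")
log_ax.set_ylabel("IOPS")

lin_ax.scatter(seconds, iops, s=4)
lin_ax.set_ylim(0, 40000)           # linear scale capped at 40K IOPS
lin_ax.set_xlabel("Time (s)")
lin_ax.set_ylabel("IOPS")

plt.tight_layout()
plt.savefig("consistency_scatter.png")
```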
The high-level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, we vary the percentage of the drive that gets filled/tested depending on the amount of spare area we're trying to simulate. The buttons are labeled with the user capacity the drive would have been advertised with had the SSD vendor decided to use that specific amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's best to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, and then delete the partition. Finally, this method of creating spare area works on the drives we've tested here, but not all controllers are guaranteed to behave the same way.
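If you want to work out how much spare area a given partition size leaves, a quick back-of-the-envelope calculation is all it takes. The raw NAND capacity and the example partition sizes below are hypothetical placeholders, not measured values for this drive.

```python
# Back-of-the-envelope helper for sizing a partition to simulate extra spare
# area. All capacities here are hypothetical examples; substitute your own.
GIB = 1024 ** 3   # binary gigabyte (raw NAND is usually specified this way)
GB = 1000 ** 3    # decimal gigabyte (advertised user capacity)

def spare_area_pct(raw_nand_bytes, partition_bytes):
    """Spare area as a percentage of raw NAND left outside the partition."""
    return (raw_nand_bytes - partition_bytes) / raw_nand_bytes * 100

raw_nand = 256 * GIB                       # hypothetical raw NAND capacity
for user_cap_gb in (250, 225, 200):        # example target user capacities
    partition = user_cap_gb * GB
    print(f"{user_cap_gb}GB partition -> "
          f"{spare_area_pct(raw_nand, partition):.1f}% spare area")
```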
The first set of graphs shows the performance data over the entire 2,000-second test period. In these charts you'll notice an early period of very high performance followed by a sharp drop-off. What you're seeing is the drive allocating new blocks from its spare area, eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
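To see why performance falls off a cliff, it helps to put rough numbers on that read-modify-write penalty. The block geometry and valid-page count below are made up purely for illustration; real figures vary by drive and workload.

```python
# Illustrative (made-up numbers): once free blocks run out, the controller
# must relocate a block's still-valid pages before it can reuse the block,
# so each host write costs several NAND writes.
PAGES_PER_BLOCK = 256    # hypothetical NAND block geometry
valid_pages = 200        # hypothetical valid pages in the block being recycled

freed_pages = PAGES_PER_BLOCK - valid_pages   # room reclaimed for new host data
nand_writes = valid_pages + freed_pages       # relocated data + host data
write_amplification = nand_writes / freed_pages
print(f"write amplification ~= {write_amplification:.2f}x")   # ~4.57x in this example
```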
The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.
As expected, IO consistency is mostly similar to the regular EVO. The only difference appears to be in steady-state behavior, where the 2.5" EVO exhibits more up-and-down movement whereas the EVO mSATA is more consistent. This might be due to the latest firmware update, which changed some TurboWrite algorithms: it seems that TurboWrite kicks in on the 2.5" EVO every once in a while to boost performance (our EVO mSATA has the latest firmware, but the 2.5" EVO was tested with the original firmware).
Increasing the OP (over-provisioning) on the EVO mSATA results in noticeably better performance but also causes some odd behavior. After about 300 seconds, the IOPS repeatedly drops to 1,000 until it evens out after 800 seconds. I am not sure exactly what is happening here, but I have asked Samsung to check whether this is normal and to provide an explanation. My educated guess would be TurboWrite (again), because the drive seems to be reorganizing blocks to bring performance back to its peak level. If the controller focuses too heavily on reorganizing existing blocks of data, the latency for incoming writes will increase (and IOPS will drop).
TRIM Validation
To test TRIM, I first filled all user-accessible LBAs with sequential data and then tortured the drive with 4KB random writes (100% LBA space, QD=32) for 60 minutes. After the torture I TRIM'ed the drive (quick format in Windows 7/8) and ran HD Tach to make sure TRIM is functional.
Surprisingly, it's not. The write speed should be around 300MB/s for the 250GB model based on our Iometer test, but here the performance is only 100-150MB/s for the earliest LBAs. Sequential writes do restore performance slowly, but even after a full drive write pass the performance has not fully recovered.
Samsung SSD 840 EVO mSATA Resiliency - Iometer Sequential Write
Drive | Clean | Dirty (40 min torture) | After TRIM
Samsung SSD 840 EVO mSATA 120GB | 180.4MB/s | 69.3MB/s | 126.2MB/s
At first I thought this was an error in our testing, but I was able to duplicate the issue with our 120GB sample using Iometer instead of HD Tach (i.e. a 60-second sequential write run in Iometer). Unfortunately I ran out of time to test this issue more thoroughly (e.g. whether a short period of idling helps), but I'll be sure to run more tests once I get back to my testbed.
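For anyone wanting to reproduce a similar check without Iometer, a timed sequential write is easy to script. This Python sketch uses a placeholder device path, writes through the OS page cache (so the numbers will be optimistic unless you add unbuffered IO), and only mimics the spirit of the 60-second run described above.

```python
# Rough stand-in for a 60-second sequential write throughput check.
# Assumptions: DEVICE is a disposable drive; no O_DIRECT handling, so the
# page cache will inflate the result somewhat.
import os
import time

DEVICE = "/dev/sdX"        # hypothetical target -- must be a disposable drive
CHUNK = 1024 * 1024        # 1MB sequential transfers
DURATION = 60              # seconds, mirroring the Iometer run length

buf = os.urandom(CHUNK)
fd = os.open(DEVICE, os.O_WRONLY)
written = 0
start = time.time()
while time.time() - start < DURATION:
    written += os.write(fd, buf)
os.fsync(fd)               # flush cached data before computing throughput
elapsed = time.time() - start
os.close(fd)

print(f"sequential write: {written / elapsed / 1e6:.1f} MB/s")
```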