Greg Schulz, founder of the technology consulting firm StorageIO, spoke with SearchSMBStorage.com Assistant Editor John Hilliard about SSD trends, including the latest developments in the technology and which industries are taking a hard look at putting them to work.
Let’s start with the basics – mechanical hard disk drives are available at relatively low cost, and I can build storage arrays with terabytes of data with a proven technology. Why should an organization switch from what works?
The hard disk drive has been around for 55 years… solid-state has been around for many decades, but something important is happening here: disk drives are used, and have traditionally been used, for storing data, but also for supporting I/Os. But spinning drives haven’t kept up on a performance basis the way they have on a space-capacity basis. Organizations of all sizes should be adding solid-state to address I/O problems. Right now, I/O issues are resolved by throwing lots of hard drives at the problem, which means underutilization as well as more complexity and cost. If your organization has an I/O bottleneck that is currently being resolved by throwing hard drives at it, you are a candidate for solid-state.
Let’s talk about performance gains with SSDs. What makes them work faster, and how much faster are we talking about?
Where that speed boost comes in is that solid state is a form of memory. Solid-state, particularly NAND flash -- single-level cell (SLC) and multi-level cell (MLC) -- is persistent, unlike DRAM, [where] when you turn it off, [you lose data]. Where the speed comes from -- you’re not waiting for that disk head to rotate around, with the mechanical delays associated with it -- [with SSDs,] it’s random access memory, just like the RAM in your computer. You’re accessing data at the speed of memory. That’s where the biggest speed boost comes into play.
[Based on rough estimates], a disk drive can do a couple hundred IOPS or many megabytes per second of throughput. A typical solid-state flash drive should be in the tens of thousands of IOPS. Bandwidth should be pretty comparable [for] throughput… [but] just like a fast hard drive, a solid-state drive needs to be paired with a fast controller and a fast interface that doesn’t slow things down. If you’re using [up to hundreds] of disk drives today… to achieve a certain level of performance, you should be looking at solid state.
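The gap Schulz describes can be made concrete with rough arithmetic. The figures below are illustrative placeholders taken from his "couple hundred" and "tens of thousands" estimates, not vendor benchmarks:

```python
# Rough, illustrative IOPS comparison (assumed figures, not benchmarks).
HDD_IOPS = 200      # a fast spinning drive: a couple hundred random IOPS
SSD_IOPS = 30_000   # a typical flash SSD: tens of thousands of IOPS

# How many hard drives would have to be striped together just to match
# the random I/O of a single SSD?
drives_needed = -(-SSD_IOPS // HDD_IOPS)  # ceiling division
print(drives_needed)  # -> 150
```

That ratio is why "throwing hard drives at the problem" leads to the underutilization mentioned earlier: the extra spindles are bought for IOPS, not capacity.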
Are typical storage devices today compatible with SSDs, or would I have to switch additional hardware as well?
All the pieces need to be there. Typically speaking, you could put a fast solid-state [drive] into the slowest storage controller and see some improvement. However, you might not be fully utilizing that capability until all the pieces -- the faster interface, the faster controller -- are [in place].
Not all storage systems are able to fully utilize [SSDs]. Are they fully compatible? Yes, you can plug them in. But how well can that storage system fully exploit the capabilities of that storage? While many of them are compatible, it’s more than simply plug-and-play to realize the full benefit and capabilities.
How do you address SSD compatibility with existing technology?
There are so many different types of solid-state: there’s single-level cell (SLC), which has a longer duty cycle and better durability than the lower-cost multi-level cell (MLC). But what is it you’re looking to do? Is it a read problem, or is it a write challenge you’re looking to overcome? Is it sequential for throughput or is it random -- what is the particular issue in the configuration?
Ask your vendor to show you reference architecture, benchmarks that actually show how their systems perform compared to other systems, given different workloads. If a vendor just says, “Yes, we support solid-state, so we’re faster,” I’d ask them some more questions, like, “Okay, show me your benchmarks, show me for IOPS, show me for throughput, show me for transactions, show me how much you can improve on the latency on a given workload."
How do they use the solid-state? What do they do for durability? What do they do for survivability? How does it integrate with snapshots and other functionalities, or are they simply putting it into a slot where a disk drive would have occupied?
What is known about the cons of SSDs? What is the reliability over the long term? Basically, we know how to break an HDD – so how do we break an SSD?
It’s actually tougher today than it was a couple of years ago to break a NAND flash [drive], particularly a multi-level cell [SSD] -- it used to be easy: use it, wear it out. Multi-level cells do wear out; that’s where you get terms like duty cycle, duration, things like that. But there’s a growing confidence that -- if it is SLC [single-level cell], which most enterprise products have -- they have a good duty cycle and very good durability. Vendors are working very diligently around MLC to reduce the costs. And what they’re doing is enhancing their controller algorithms so that they can maximize wear leveling, to prevent individual cells from repeatedly being used, which would wear them out. They are doing a lot of different things, and the confidence [in SSDs] is increasing. That’s probably the biggest drawback: they will wear out.
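The wear-leveling idea Schulz mentions can be sketched in a toy model. Real controllers do this in firmware with far more sophistication (block remapping, over-provisioning, garbage collection); this is only a simplified illustration of the principle of always directing writes to the least-worn cell:

```python
# Toy model of wear leveling: spread program/erase cycles across cells
# so no single cell exhausts its erase budget prematurely.
# This is an illustrative sketch, not how any real controller works.
class WearLeveledStore:
    def __init__(self, num_cells, max_erases):
        self.erase_counts = [0] * num_cells
        self.max_erases = max_erases

    def write(self):
        # Direct each write to the least-worn cell.
        cell = min(range(len(self.erase_counts)),
                   key=self.erase_counts.__getitem__)
        if self.erase_counts[cell] >= self.max_erases:
            raise RuntimeError("all cells worn out")
        self.erase_counts[cell] += 1
        return cell

store = WearLeveledStore(num_cells=4, max_erases=3)
for _ in range(8):
    store.write()
print(store.erase_counts)  # -> [2, 2, 2, 2]: wear spread evenly
```

Without leveling, the same eight writes hammered at one cell would exhaust its three-erase budget; spreading them keeps every cell well under its limit.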
A number of companies have recently released midmarket SSD storage arrays, but who is taking a serious look at SSDs? Are there specific industries or applications that would benefit more from adopting SSDs than others?
Solid-state [drives] have been around for a while; they’re expensive, so consequently they gravitate toward those very high-profile, very I/O-intensive workloads. The reality is, the smallest organization can benefit from some amount of [flash], if you’re doing anything where productivity or time is money. It could be something like a Microsoft SQL [Server] application, a small Oracle database, SharePoint, a file system, Exchange -- anywhere there is a concentrated amount of I/O activity that is a barrier. That’s where you’re seeing vendors bring capabilities down into the SMBs, SMEs, or even some of the high-end SOHO-type products, introducing small solid-state drives as part of NAS appliances.
After months of historic flooding struck Thailand, a major hard disk manufacturing hub, media reports indicate that prices for the equipment are going up. How does that affect the SSD market?
With the flooding, a lot of those [hard disk] factories have been disrupted, [along with] the supply chain. So there are going to be shortages of hard drives. Does that mean that solid-state can step in? It can, but here’s the catch: the price of a hard drive is going to go up, and low hard drive prices have so far been the barrier to adopting solid-state. If the price of a hard drive goes up high enough, organizations might look and say, “Why don’t we buy solid-state?”
But here’s the thing: for organizations that are using lots of hard drives grouped together, aggregated together, striped together, to achieve a certain level of performance, now would actually be a good time to go in there and do some I/O consolidation. Put some solid-state drives in, mirror-protect them, move those I/O activities off those hard drives, and redeploy those hard drives to keep yourself running, particularly if there’s going to be a shortage and higher prices on the drives until things stabilize. So it could actually prompt some good, proactive best practices [in] I/O consolidation.
This was first published in November 2011