Most storage area networks (SANs) have a section in their marketing material entitled "Thin Provisioning," but...
what is it, how does it work and why does it matter? Put simply, thin provisioning lets an application or server believe it has all the storage it needs, while you purchase only enough disk capacity to keep pace with total consumed storage. That alone can be a compelling benefit. While all SANs make cross-environment storage utilization far more efficient than islands of direct-attached storage (DAS), thin provisioning can further help with both the utilization of storage and the budgeting of hardware purchases.
Take a standard Windows file share, stored on a network-attached storage (NAS) device -- either a true NAS or an area of a unified storage platform. You want to allocate sufficient space for immediate storage requirements, plus perhaps 10% per annum for growth. A 1 TB file storage area plus five years' worth of cumulative 10% growth works out to around 1.6 TB of required space.
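The growth figure above is simple compound growth; the arithmetic can be sketched as follows (function name is illustrative, not from any sizing tool):

```python
# Projected capacity for a share growing at a compound annual rate.
def projected_capacity(base_tb: float, annual_growth: float, years: int) -> float:
    """Return the capacity (TB) needed after `years` of compound growth."""
    return base_tb * (1 + annual_growth) ** years

# The article's example: 1 TB growing 10% per year for five years.
needed = projected_capacity(1.0, 0.10, 5)
print(f"{needed:.2f} TB")  # roughly 1.61 TB
```

Note that compound growth, not five flat 10% increments (which would give 1.5 TB), is what yields the article's 1.6 TB figure.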
In the days of direct-attached disk, you would have purchased around 2 TB of disk space and then waited five years for the storage to fill up and the server to go out of normal support. That example yields 600 GB to 1 TB of wasted space that isn't available for other applications to use. That's a lot of money spent on disk that does nothing more than spin inside a server for years, generating heat that has to be dissipated by ever larger air conditioning units. A typical fully loaded shelf of disks has a thermal rating of 1,500 BTU per hour -- about 0.44 kW, for those on the metric system.
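The metric equivalent quoted above comes from the standard conversion of 1 BTU/hr to roughly 0.293 W; a quick sketch:

```python
# Convert a thermal rating in BTU per hour to kilowatts.
BTU_PER_HOUR_TO_WATTS = 0.29307107  # standard conversion factor

def btu_hr_to_kw(btu_per_hour: float) -> float:
    return btu_per_hour * BTU_PER_HOUR_TO_WATTS / 1000.0

# The article's fully loaded disk shelf:
print(f"{btu_hr_to_kw(1500):.2f} kW")  # about 0.44 kW
```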
Of course, there is never simply one file share, and the whole point of a SAN/NAS is to consolidate storage from many or all servers onto a single platform to increase the efficiency of storage utilization. (Whether that platform is replicated to another location as part of a disaster recovery (DR) process is irrelevant here.) As you provide file shares to new projects, the project teams invariably ask for an arbitrary amount of space and never use anything near it. It's far easier to let project managers and server administrators think they have everything they asked for, and simply add physical disk space in a just-in-time manner.
Perhaps the best way of making the case for thin provisioning is to look at a database server. Windows likes fixed logical unit number (LUN) sizes, and while most SAN vendors have software-based methods of growing LUNs on the fly, there's still a need for the server and storage admins to have a conversation when new storage is required. Without thin provisioning, the storage administrator has to add the storage, configure it for use, and then tell the server administrator which LUN(s) can be grown and by how much.
With thin provisioning implemented, an Exchange administrator can be given, up front, all the space ultimately needed for all 50 stores they may want to use, and the server administrator never needs to take the server down to add space or reconfigure anything. So long as storage utilization is properly projected, the storage administrator can purchase additional disk shelves as and when necessary. If the Exchange server had directly attached disks, either terabytes of disk space would go unused for several years or the entire server would have to be shut down so that new disks could be added, significantly impacting service availability.
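The bookkeeping behind this can be sketched in a few lines: LUNs are granted their full logical size up front, but physical blocks are drawn from the pool only as data is actually written. This is a hypothetical illustration of the concept, not any vendor's API; all names are invented.

```python
# Minimal model of a thin-provisioned storage pool (illustrative only).
class ThinPool:
    def __init__(self, physical_gb: int):
        self.physical_gb = physical_gb   # disk actually purchased
        self.consumed_gb = 0             # disk actually written
        self.luns = {}                   # name -> [logical_gb, written_gb]

    def provision(self, name: str, logical_gb: int) -> None:
        # Logical size may exceed physical capacity (oversubscription):
        # the server sees all the space it asked for, up front.
        self.luns[name] = [logical_gb, 0]

    def write(self, name: str, gb: int) -> None:
        logical_gb, written_gb = self.luns[name]
        if written_gb + gb > logical_gb:
            raise ValueError("write exceeds the LUN's logical size")
        if self.consumed_gb + gb > self.physical_gb:
            # The failure mode the next section warns about.
            raise OSError("pool out of physical space")
        self.luns[name][1] += gb
        self.consumed_gb += gb

pool = ThinPool(physical_gb=1000)
pool.provision("exchange-store-01", logical_gb=2000)  # oversubscribed 2:1
pool.write("exchange-store-01", 300)
print(pool.consumed_gb)  # 300 -- only written data consumes real disk
```

The key property is visible in the last lines: the LUN's logical size (2,000 GB) exceeds the pool's physical size (1,000 GB), yet only the 300 GB actually written consumes purchased disk.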
It's not all rosy in the thin provisioning garden, however. The main drawback is that careful, ongoing monitoring becomes critical. If you run out of physical storage before you run out of "visible" storage, your application will fail with a write error. The server administrators won't know what's wrong because, as far as they can see, there is plenty of storage available. This leads to a great deal of unnecessary troubleshooting, not to mention potential animosity within the IT department.
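In practice this means alerting on physical pool utilization, not on the logical free space the servers see. A minimal sketch, with thresholds that are illustrative assumptions rather than any vendor's defaults:

```python
# Classify physical pool utilization so disk can be ordered well
# before applications hit write errors. Thresholds are illustrative.
def pool_status(consumed_gb: float, physical_gb: float,
                warn: float = 0.70, crit: float = 0.90) -> str:
    ratio = consumed_gb / physical_gb
    if ratio >= crit:
        return "CRITICAL"   # order and install disk shelves now
    if ratio >= warn:
        return "WARNING"    # start the purchasing process
    return "OK"

print(pool_status(950, 1000))  # CRITICAL
```

The warning threshold should be set far enough below the critical one to cover your procurement lead time, since "add more disk" can take weeks once purchasing is involved.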
Another drawback is the attitude of management and the capital expenditure (Capex) practices that may be in force within an organization. If a company requires all expenditure for a given project to be committed upfront, it will be difficult to change processes so that capital items can be purchased as if they were operational expenditure (Opex). One thing is essential, though: if the system is bad and will cost you money, you need to change the system, not the technology.
Depending on the platform used, a final drawback may be space reclamation. Suppose project A uses an amount of disk space, project B takes over the space provisioned for A, and the files relevant to project A are archived to slower disk. You may find that project A's space is never given back to the storage pool for reallocation; from day one, project B is already carrying the physical space project A was consuming when it closed down.
In modern networked storage environments, thin provisioning has become an essential part of keeping initial procurement costs down. It can also support charging departments for their storage on a gigabyte-consumed basis, rather than treating storage as one huge shared pool that remains the poor relation, forever on the losing end of interdepartmental budget bickering.
About the author: Mark Arnold, MCSE+M, Microsoft MVP, is the principal consultant with LMA Consulting LLC, a Philadelphia, PA-based private messaging and storage consultancy. Mark assists customers in designs of SAN-based Exchange implementations. You can contact him at email@example.com.