To avoid IT service delivery disruptions due to a lack of resources, capacity, performance or availability, small and midsized
businesses (SMBs) can put a capacity planning and management program into place. Specifically, a storage capacity plan enables proactive steps to be taken to forecast what storage performance, availability, capacity and energy requirements will be needed in the future.
A storage capacity plan can be very simple, or extremely complex and detailed using models or sophisticated projections. However, in both cases, the fundamental essence of a capacity plan is to align storage resource needs with growth and business plans.
Details about resource use might include what percentage of servers or storage is busy and at what times, for how long, how capacity is used and how much data is moved in a given time frame. Additional details would include response time or some other measure of productivity, along with availability and comparisons to historical usage patterns and forecasts. Start simple with a capacity plan and have clear objectives for what is to be accomplished.
Leverage the various tools available, including modeling, reporting and forecasting tools, as well as one of the most popular forecasting tools: the spreadsheet. Look at performance, response time and availability, in addition to resource space utilization and corresponding energy usage.
For active work, look at how many IOPS, how much bandwidth, and how many files and videos can be processed to a given response time level per watt of energy used. For idle or inactive data, or offline storage, look at how much capacity is supported in a given footprint per watt of energy used. Plan across different IT infrastructure resource management domains, such as servers, networks, storage and I/O, along with facilities (power, cooling and floor space).
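Those two energy-efficiency ratios can be computed directly once you have measurements from your own monitoring tools. A minimal sketch, using hypothetical numbers for illustration:

```python
def active_efficiency(iops: float, watts: float) -> float:
    """IOPS delivered per watt for active storage."""
    return iops / watts

def idle_efficiency(capacity_tb: float, watts: float) -> float:
    """Usable capacity (TB) supported per watt for idle or offline storage."""
    return capacity_tb / watts

# Hypothetical example: an active shelf doing 12,000 IOPS at 400 W,
# and an archive shelf holding 96 TB at 150 W.
print(active_efficiency(12_000, 400))  # 30.0 IOPS per watt
print(idle_efficiency(96, 150))        # 0.64 TB per watt
```

Tracking these ratios over time shows whether upgrades actually improve work done, or data retained, per unit of energy.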
Establish a baseline of normal performance for your environment and its different applications. For example, if you know the typical IOPS and throughput rates for various devices, the common error rates, and the average queue depths and response times, you can use those to make quick comparisons when things change or when you have to investigate performance issues.
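The baseline comparison itself can be very simple. A sketch, assuming hypothetical baseline figures gathered from your own history:

```python
# Hypothetical baseline of "normal" values; populate from your own monitoring history.
baseline = {"iops": 8_000, "avg_queue_depth": 4.0, "response_ms": 5.0}

def deviations(current: dict, baseline: dict, tolerance: float = 0.25) -> list:
    """Return (metric, current, normal) tuples deviating more than
    `tolerance` (as a fraction) from the baseline."""
    flagged = []
    for metric, normal in baseline.items():
        delta = abs(current[metric] - normal) / normal
        if delta > tolerance:
            flagged.append((metric, current[metric], normal))
    return flagged

# A response-time jump from 5 ms to 12 ms gets flagged; IOPS within 25% does not.
print(deviations({"iops": 9_000, "avg_queue_depth": 4.5, "response_ms": 12.0}, baseline))
```

The 25% tolerance is an illustrative choice; tune it to what counts as a meaningful change in your environment.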
Take into consideration planned and seasonal workload increases, for example holiday shopping periods, summer travel or other events that cause a surge in activity for your systems. Part of establishing your baseline is to gain insight at different levels of what servers and devices are the top talkers generating traffic with various levels of detail, including specific LUNs or volumes, ports, protocols, and even applications on servers.
A byproduct of performing an assessment of available storage resources may be the discovery of unused or unallocated storage, as well as data that may be eligible for archiving. The result can be that additional storage capacity can be recovered for near-term growth. Part of forecasting is to look at previous trends and issues as well as how previous forecasts have aligned with actual usage. Also, assess business and application growth rates. Aligning future growth plans, current activity and past trends can help formulate future upgrade and capacity needs.
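The recoverable capacity from such an assessment is simply unused allocation plus data eligible for archiving. A minimal sketch with hypothetical figures:

```python
def reclaimable_tb(allocated: float, used: float, archive_eligible: float) -> float:
    """Capacity (TB) that could be recovered for near-term growth:
    unused allocation plus data eligible for archiving."""
    return (allocated - used) + archive_eligible

# Hypothetical: 100 TB allocated, 70 TB actually used, 15 TB archive-eligible.
print(reclaimable_tb(100, 70, 15))  # 45 TB recoverable
```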
Use the information obtained from assessments along with growth plans and previous usage information to put together a forecast. The forecast can be as simple as taking current usage (space and performance), assuming current response time objectives are being met and applying growth factors to them.
The tricky part, which has resulted in capacity planning being called part science and part black magic, is to determine what growth rates to use as well as applying "uplift" factors. An uplift factor is a value to account for peak activity, unanticipated growth and buffer space. Certainly a capacity plan and forecast can be much more sophisticated, relying on detailed models and analysis maintained by in-house or external sources. The level of detail will vary depending on your environment and specific needs.
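The simple form of that forecast (current usage, a compound growth rate, and an uplift buffer) fits in a few lines. A sketch with hypothetical growth and uplift values; the real numbers come from your own trends and business plans:

```python
def forecast(current_tb: float, annual_growth: float, uplift: float, years: int) -> float:
    """Project future capacity need: compound annual growth applied to
    current usage, plus an uplift buffer for peaks and unanticipated growth."""
    return current_tb * (1 + annual_growth) ** years * (1 + uplift)

# Hypothetical: 50 TB used today, 30% annual growth, 20% uplift, 2-year horizon.
print(round(forecast(50, 0.30, 0.20, 2), 1))  # 101.4 TB
```

The same arithmetic is easy to carry in a spreadsheet; the hard part, as noted above, is choosing defensible growth and uplift values.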
Your level of focus for a particular area of interest will have a bearing on what metrics and tools you need to perform your required tasks. Metrics can be obtained from operating system-based tools and utilities, along with those from storage systems and third-party storage management software vendors. They can also be taken from purpose-built protocol and interface analyzers, fault injectors, sniffers, taps and probes combined with event correlation, analysis and reporting tools.
Additional tools are available from vendors including Agilent Technologies, Akorri Inc., BMC Software Inc., Brocade, Cisco Systems Inc., Demand Technology Software Inc., EMC Corp., Emulex Corp., Finisar Corp., Hewlett-Packard Co., HyperIO, IBM Corp., IntelliMagic, LeCroy Corp., LSI Corp., Microsoft Corp., MonoSphere Inc., NetApp Inc., NetQoS Inc., NetScout Systems Inc., Network Instruments, QLogic Corp., Symantec Corp., TeamQuest Corp., Tek-Tools Inc. and Wireshark.
Sources for storage performance-related benchmarks include the Storage Performance Council (SPC) and the Transaction Processing Performance Council (TPC) full-disclosure reports, along with Standard Performance Evaluation Corporation (SPEC) and Microsoft ESRP reports.
Read more about server, storage and network capacity planning and capacity management in Chapter 10 of my book "Resilient Storage Networks" and in my new book "The Green and Virtual Data Center."
About this author: Greg Schulz is founder and senior analyst with the IT infrastructure analyst and consulting firm StorageIO.