In planning to acquire new servers, aim to balance maximum capacity per server with the lowest possible total cost of ownership (TCO). “Buy the cheapest” is good basic advice for industry-standard servers that all share the same processor hardware. However, the cheapest option is not necessarily the smallest when consolidation is an important factor and cost considerations extend beyond initial price to ongoing TCO over the life of the server.
- Start with the capacity requirements for all the apps that are going to be housed on the server. This includes the processor, memory, network I/O, and storage requirements for each.
- Evaluate server configurations and form factors that meet the current requirements plus future capacity requirements. Also consider the immediate and ongoing costs of the solutions over a five-year lifecycle.
- Compare the TCO of multiple solutions that meet capacity requirements: for example, a blade solution vs. rack mount, or many lower-capacity servers vs. fewer higher-capacity servers.
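The comparison in the last bullet can be sketched as simple arithmetic. The sketch below compares the five-year TCO of two hypothetical configurations that meet the same capacity requirement; all prices and cost figures are illustrative assumptions, not vendor pricing.

```python
# Hypothetical 5-year TCO comparison of two configurations that meet the
# same capacity requirement. All figures are illustrative assumptions.

def five_year_tco(unit_price, units, annual_power_cooling, annual_support):
    """Initial purchase price plus five years of ongoing facility and support costs."""
    return units * (unit_price + 5 * (annual_power_cooling + annual_support))

# Option A: eight lower-capacity rack servers
tco_a = five_year_tco(unit_price=5_000, units=8,
                      annual_power_cooling=900, annual_support=600)

# Option B: two higher-capacity servers
tco_b = five_year_tco(unit_price=15_000, units=2,
                      annual_power_cooling=1_800, annual_support=1_200)

print(f"Option A 5-year TCO: ${tco_a:,}")   # $100,000
print(f"Option B 5-year TCO: ${tco_b:,}")   # $60,000
```

Note that the option with the higher unit price wins here because ongoing costs dominate over a five-year lifecycle.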
Applications are the intersection point between the strategic and operational goals of the enterprise and IT. Business needs should inform application requirements that drive current and future capacity requirements.
The business measures for the hardware infrastructure are unit cost of capacity plus cost of mitigating risk.
- For capacity planning, current capacity requirements plus expected growth in capacity requirements need to be considered.
- For risk management, the business criticality of the business process and data used by the application need to be considered. For example, high uptime requirements may be met through more redundancy, which will increase cost per unit of capacity.
- Note also that total cost of capacity needs to include facilities costs, such as power and cooling costs.
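The bullets above reduce to a cost-per-unit-of-capacity figure. A minimal sketch, using assumed costs and VM capacity, of how redundancy for a business-critical workload raises that unit cost:

```python
# Illustrative cost per unit of capacity (here, per VM), including facilities
# costs, and the effect of redundancy. All figures are assumptions.

def cost_per_vm(server_cost, facilities_cost, vms_per_server, redundancy_factor=1.0):
    """Total cost (hardware + power/cooling) divided by usable VM capacity.
    A redundancy_factor > 1 means extra hardware is bought purely for failover."""
    total_cost = (server_cost + facilities_cost) * redundancy_factor
    return total_cost / vms_per_server

standard = cost_per_vm(server_cost=20_000, facilities_cost=8_000, vms_per_server=40)
critical = cost_per_vm(server_cost=20_000, facilities_cost=8_000, vms_per_server=40,
                       redundancy_factor=2.0)  # e.g. a failover pair

print(f"Standard workload: ${standard:,.0f} per VM")  # $700
print(f"Critical workload: ${critical:,.0f} per VM")  # $1,400
```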
Server Capacity = Processors, Cores, Threads and Memory
The capacity of server CPUs is a function of the number of parallel processors, cores and threads that are operating plus the amount of addressable memory available to the processors. Here are the fundamentals when evaluating server performance:
- Speed. Over two decades of distributed computing development, server processor speeds (megahertz to gigahertz) and word sizes (16-bit to 32-bit to 64-bit) grew exponentially. Deciding between two server options was largely a matter of speed comparison.
- Processor Count. Recent processor generations have boosted performance through parallel processing: adding more concurrent processors (2, 4, 8, 16) to the motherboard as well as more processor cores to each processor.
- Threads. With multithreading, more than one instruction thread can be processed at the same time. This means that a single physical core can act as two logical processors.
- 64-bit. The use of 64-bit operating systems (including 64-bit hypervisors for virtualization) has also significantly increased the amount of memory that can be addressed by an application.
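The fundamentals above multiply together: sockets, cores per socket, and threads per core determine the logical processor count the operating system sees. A minimal sketch, with hypothetical example counts:

```python
# How socket, core, and thread counts combine into logical processors,
# per the fundamentals listed above. Example counts are hypothetical.

def logical_processors(sockets, cores_per_socket, threads_per_core):
    """Logical processor count visible to the OS or hypervisor."""
    return sockets * cores_per_socket * threads_per_core

# e.g. a 2-socket, quad-core server with 2-way multithreading
print(logical_processors(sockets=2, cores_per_socket=4, threads_per_core=2))  # 16

# The 64-bit point: a 32-bit OS can address at most 2**32 bytes,
# i.e. 4 GiB, while 64-bit addressing removes that practical ceiling.
print(2**32 // 2**30)  # 4 (GiB addressable by a 32-bit OS)
```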
Server consolidation is often more about doing the same with less than doing more with less. It is an important strategic consideration, but it is never the trigger for server purchase.
The typical triggers for server purchase are:
- Replacement of server hardware that has reached end-of-life.
- Adding new servers to host new applications (i.e. new hardware for a new enterprise application).
- Adding new servers to boost capacity to meet increased CPU demand (i.e. adding new servers to a database cluster to meet increasing transactional demand).
In a recent survey, refresh was cited as the reason for purchase more than twice as often as new applications or the need for additional CPU capacity. In 81% of cases, replacement is for server equipment five or more years old.
Capacity: That was Then, This is Now
In 2006 Info-Tech found a significant number of enterprises, small and large, beginning to explore server virtualization. That study found that the most economical configuration for hosting virtual servers was dual processor/dual core servers.
By the latter half of 2010, enterprises that acquired servers for virtualization in 2006 will likely be looking at server refresh, both for hosting existing VMs and possibly for migrating additional workloads to a virtual platform.
Current generation servers, with a higher density of processor cores and threads and more memory, can handle significantly higher workloads. Intel’s Xeon Processor-based Server Refresh Savings Estimator predicts that five older-processor servers, each hosting 7 VMs, could be replaced by a single Xeon 5500 quad-core server with capacity for 76 VMs.
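The estimator figures quoted above can be checked with back-of-the-envelope math: total VMs to migrate versus the rated capacity of the new server.

```python
# Back-of-the-envelope consolidation check using the figures quoted above:
# 5 older servers at 7 VMs each vs. one newer server rated for 76 VMs.
import math

old_servers, vms_per_old = 5, 7
new_server_capacity = 76

total_vms = old_servers * vms_per_old                            # 35 VMs to migrate
new_servers_needed = math.ceil(total_vms / new_server_capacity)  # 1 server
headroom = new_server_capacity - total_vms                       # 41 VMs spare

print(new_servers_needed, headroom)  # 1 41
```

The spare capacity is what makes it possible to also virtualize larger workloads during a refresh, not just re-host the existing VMs.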
In refreshing VM-hosting servers, not only can more workloads be virtualized on fewer physical servers, but larger workloads can also be virtualized, such as apps requiring multiple processors and large memory blocks.
It’s time to review past evaluations of server candidates for virtualization. Applications that you previously red-flagged as poor candidates because of high capacity requirements could well be hosted on today’s higher capacity servers. Use our Virtual Candidate Assessment Tool to map the capacity requirements of various servers that are consolidation candidates.