In planning to acquire new servers, aim to balance maximum capacity per server with the lowest possible TCO. “Buy the cheapest” is sound basic advice for industry standard servers, which all share the same processor hardware. However, cheapest is not necessarily smallest when consolidation is an important factor and cost considerations extend beyond initial price to ongoing TCO over the life of the server.

  1. Start with the capacity requirements for all the apps that will be housed on the server. This includes the processor, memory, network I/O, and storage requirements for each.
  2. Evaluate server configurations and form factors that meet the current requirements plus future capacity requirements. Also consider the immediate and ongoing costs of the solutions over a five-year lifecycle.
  3. Compare the TCO of multiple solutions that meet capacity requirements. For example, compare a blade solution against rack mount, or many lower-capacity servers against fewer higher-capacity servers.
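The comparison in step 3 can be sketched as a small calculation. All prices, power draws, and VM densities below are illustrative assumptions, not vendor figures; the point is that ongoing facilities and support costs over a five-year lifecycle can reverse a purchase-price ranking.

```python
# Hedged sketch: compare five-year TCO per hosted workload for two hypothetical
# configurations that both meet the same total VM-hosting requirement.

def five_year_tco(purchase_price, annual_power_cooling, annual_support, years=5):
    """Initial price plus ongoing facilities and support costs over the lifecycle."""
    return purchase_price + years * (annual_power_cooling + annual_support)

configs = {
    # name: (unit price, annual power+cooling, annual support, VMs/server, servers)
    "many_small_racks": (4_000, 900, 400, 10, 8),
    "few_large_blades": (12_000, 1_500, 700, 40, 2),
}

for name, (price, power, support, vms, count) in configs.items():
    total = count * five_year_tco(price, power, support)
    workloads = count * vms
    print(f"{name}: total TCO ${total:,}, ${total / workloads:,.0f} per VM workload")
```

With these assumed numbers, the fewer-but-larger configuration wins on TCO per workload even though each unit costs three times as much up front.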

Applications are the intersection point between the strategic and operational goals of the enterprise and IT. Business needs should inform application requirements that drive current and future capacity requirements.

The business measures for the hardware infrastructure are unit cost of capacity plus cost of mitigating risk.

  • For capacity planning, current capacity requirements plus expected growth in capacity requirements need to be considered.
  • For risk management, the business criticality of the business process and data used by the application need to be considered. For example, high uptime requirements may be met through more redundancy, which will increase cost per unit of capacity.
  • Note also that total cost of capacity needs to include facilities costs, such as power and cooling costs.
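The business measure described above, unit cost of capacity plus the cost of mitigating risk, can be expressed as a simple sketch. The figures are illustrative assumptions; the example shows how a high-uptime requirement met through redundancy raises cost per unit of capacity even though usable capacity is unchanged.

```python
# Hedged sketch of cost per unit of capacity, including facilities costs and
# the premium paid for redundant hardware held to meet uptime requirements.

def cost_per_unit_capacity(server_cost, facilities_cost, capacity_units,
                           redundant_servers=0):
    """Total cost (hardware plus power/cooling, including idle redundant
    servers) divided by the usable capacity units serving workloads."""
    servers_paid_for = 1 + redundant_servers
    total = servers_paid_for * (server_cost + facilities_cost)
    return total / capacity_units

# Baseline tier: one server delivering 20 capacity units
baseline = cost_per_unit_capacity(10_000, 5_000, 20)
# High-uptime tier: an N+1 standby doubles cost for the same usable capacity
high_uptime = cost_per_unit_capacity(10_000, 5_000, 20, redundant_servers=1)
print(baseline, high_uptime)
```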

Server Capacity = Processors, Cores, Threads and Memory

The capacity of server CPUs is a function of the number of parallel processors, cores and threads that are operating plus the amount of addressable memory available to the processors. Here are the fundamentals when evaluating server performance:

  1. Speed. Over two decades of distributed processing development, server processor clock speeds (megahertz to gigahertz) and word widths (16-bit to 32-bit to 64-bit) grew exponentially. Deciding between two server options became largely a matter of speed comparison.
  2. Processor Count. Recent processor generations have boosted performance through parallel processing: adding more concurrent processors (2, 4, 8, 16) to the motherboard as well as more processor cores to each processor.
  3. Threads. With multithreading, more than one instruction thread can be processed at the same time, so a single physical core can act as two logical processors.
  4. 64-bit. The use of 64-bit operating systems (including 64-bit hypervisors for virtualization) has also significantly increased the amount of memory that can be addressed by an application.
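The fundamentals above multiply together: total logical processors are the product of sockets, cores per socket, and hardware threads per core. The counts in this sketch are illustrative (for example, a two-socket quad-core server with two-way multithreading).

```python
# Hedged sketch: logical processor count as the product of the parallelism
# dimensions described above (processors x cores x threads).

def logical_processors(sockets, cores_per_socket, threads_per_core):
    """Logical processors visible to the OS or hypervisor."""
    return sockets * cores_per_socket * threads_per_core

# One multithreaded core presents itself as 2 logical processors
print(logical_processors(1, 1, 2))  # 2
# A two-socket, quad-core, two-way multithreaded server
print(logical_processors(2, 4, 2))  # 16
```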

Server consolidation is often more about doing the same with less than doing more with less. It is an important strategic consideration, but it is never itself the trigger for a server purchase.

The typical triggers for server purchase are:

  • Replacement of server hardware that has reached end-of-life.
  • Adding new servers to host new applications (e.g. new hardware for a new enterprise application).
  • Adding new servers to boost capacity to meet increased CPU demand (e.g. adding new servers to a database cluster to meet increasing transactional demand).

In a recent survey, refresh was cited as the reason for purchase more than twice as often as new applications or the need for additional CPU capacity. In 81% of cases, replacement is for server equipment five or more years old.

Capacity: That was Then, This is Now

In 2006 Info-Tech found a significant number of enterprises, small and large, beginning to explore server virtualization. That study found that the most economical configuration for hosting virtual servers was dual processor/dual core servers.

In the latter half of 2010, enterprises that acquired servers for virtualization around 2006 will likely be looking to server refresh, both for hosting existing VMs and possibly for migrating additional workloads to a virtual platform.

Current generation servers, with a higher density of processor cores and threads and more memory, can handle significantly higher workloads. Intel’s Xeon Processor-based Server Refresh Savings Estimator predicts that 5 servers with older processors, each hosting 7 VMs, could be replaced by a single Xeon 5500 quad core server with capacity for 76 VMs.
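The refresh arithmetic above can be sketched directly. The VM densities are the figures quoted from the savings estimator; the function itself is an illustrative assumption, not Intel's tool.

```python
# Hedged sketch: how many new servers are needed to absorb an existing VM
# population, given the VM density of the old and new hardware.
import math

def refresh_server_count(old_servers, vms_per_old_server, vms_per_new_server):
    """Round up, since a fraction of a physical server cannot be purchased."""
    total_vms = old_servers * vms_per_old_server
    return math.ceil(total_vms / vms_per_new_server)

# 5 older servers hosting 7 VMs each = 35 VMs, well within one 76-VM server
print(refresh_server_count(5, 7, 76))  # 1
```

Note the headroom: the single replacement server absorbs the 35 existing VMs with capacity left over for larger or additional workloads, which is the point made in the next paragraph about refreshing VM hosts.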

In refreshing VM-hosting servers, not only can more workloads be virtualized on fewer physical servers, but larger workloads can be virtualized as well, such as apps requiring multiple processors and large memory blocks.

It’s time to review past evaluations of server candidates for virtualization. Applications that were previously red-flagged because of high capacity requirements could well be hosted on today’s higher capacity servers. Use our Virtual Candidate Assessment Tool to map the capacity requirements of the various servers that are consolidation candidates.


When replacing end-of-life servers and/or adding new servers there are a range of tactical choices to be made. What is the most cost effective configuration? Are blades the future or will traditional rack mount do? How many processors/cores and how much memory per server is optimal?

These tactical decisions should be informed by a solid strategic context. Server consolidation and virtualization mean that an x86/x64 server is no longer a stand-alone silo running a single Windows or Linux application. It is now a single processing node in something variously called a utility infrastructure or an internal cloud.

The transition from an infrastructure of distributed server silos to a consolidated infrastructure will be evolutionary, progressing as servers come up for refresh or new servers need to be added. Against a consolidation strategy, tactical choices need to be made to meet immediate needs while putting in place the appropriate building blocks for the cloud.

Blade servers are typically used in models for the “processing layer” of the consolidated utility infrastructure. Blades are the right choice for certain situations; however, rack mounted and perhaps even stand-alone servers can also fit into this picture.

  • Think of servers as nodes of processing capacity. The server is only one layer of a multi-part consolidated infrastructure along with storage, networking, and virtualization. The role of the server in this consolidated infrastructure is to provide processing capacity to host current and future application workloads.
  • Measure capacity in processors, cores, threads, and memory. The workload capacity of individual servers is a function of multiple processor cores plus addressable memory and I/O bandwidth. Significant progress has been made in processing capability that has exponentially increased the workload capacity of modern servers.
  • Now is the time to evaluate (and re-evaluate) server capacity. Server capacity has grown an order of magnitude since early 2009. In evaluating various server configurations and form factors, seek to balance maximum capacity for consolidation with the lowest possible TCO.

Make Server Consolidation a Continuing Strategic Goal

Save capital spend and improve infrastructure efficiency by making server consolidation a continuing strategic goal:

  • Server consolidation is simply about doing the required compute work with fewer physical servers using shared resources (such as storage). Fewer physical servers mean less hardware to purchase, less physical complexity to manage, and less space required.
  • Server virtualization is a critical enabler of consolidation (multiple app workloads sharing physical servers). Info-Tech has seen one-time and ongoing hardware acquisition cost savings of 40% to 75% in consolidation projects leveraging virtualization.
  • Non-virtualized servers are also part of the consolidation picture. For example, increasing processing capacity of physical servers reduces the number of concurrent clustered servers required for high performance data processing.
  • Consolidated servers form a utility infrastructure where individual physical servers are units of processing capacity that are managed as a pool to provision enterprise applications.

Consolidation has its biggest impact on industry standard x86/x64 servers – by far where most server acquisition takes place. Improvement in x86/x64 architecture is the tide that raises all boats (whether those server boats come from IBM, Cisco, Dell, HP, Sun, or others). Unlike proprietary server platforms that are based on different processor designs (e.g. Sun SPARC, IBM Power, or Intel/HP Itanium), industry standard servers are all based on the same processors from Intel or AMD.

Consolidation enablers, such as server virtualization and shared network storage, are a relatively recent development (within the past decade) for this class of server (originally intended for distributed stand-alone processing).

The server acquisition strategy should look at total cost of a particular server configuration and form factor against capacity requirements and current and future workload consolidation (through virtualization).

Whether it is called consolidated utility infrastructure or an internal cloud, there are three laws that govern investment:

  1. Alignment is software. Applications are the intersection point between the strategic and operational goals of the enterprise and IT. All investments need to be considered first in how they enable the applications that enable the business.
  2. Hardware is capacity. Applications are provisioned with processing, memory, and storage resources derived from the underlying hardware. The important business measure is cost per unit of capacity, plus the value-added cost per unit of capacity for risk mitigation and service levels.
  3. Management is the differentiator. Software that efficiently manages the utility infrastructure for business processes is a key value add. Management software can also provide visibility into the infrastructure for compliance and performance monitoring purposes.

What does this mean for server acquisition?

  1. Start with application requirements, current and future.
  2. Model capacity requirements based on application needs plus reserve capacity.
  3. Base acquisition on maximizing capacity at lowest TCO per workload.
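The three steps above can be sketched as a minimal sizing calculation. The application requirements, the 25% reserve factor, and both candidate configurations are illustrative assumptions used only to show the mechanics.

```python
# Hedged sketch of the acquisition steps: sum application requirements, add
# reserve headroom, then size each candidate configuration and rank by TCO
# per hosted workload.
import math

apps = [  # hypothetical current + planned workloads (capacity units each)
    {"name": "erp", "units": 8},
    {"name": "web", "units": 4},
    {"name": "bi",  "units": 6},
]

reserve = 0.25  # assumed 25% headroom for future growth
required = sum(a["units"] for a in apps) * (1 + reserve)  # 22.5 units

candidates = [  # (name, capacity units per server, five-year TCO per server)
    ("small", 8, 20_000),
    ("large", 24, 45_000),
]

for name, units, tco in candidates:
    servers = math.ceil(required / units)  # round up to whole servers
    per_workload = servers * tco / len(apps)
    print(f"{name}: {servers} server(s), ${per_workload:,.0f} TCO per workload")
```

Under these assumptions the larger configuration hosts all workloads on one server at a lower TCO per workload, illustrating step 3's "maximize capacity at lowest TCO per workload" rather than prescribing a particular form factor.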

Vendors will likely not differentiate around capacity, but rather on management and how the servers fit into the larger consolidated infrastructure.
