System Center 2012 is Microsoft’s unified systems management platform for hybrid IT environments: it manages private and public clouds, physical and virtual servers, and both Microsoft and third-party products. How well it delivers on that last claim is debatable and, I would postulate, not fully realized in this release of System Center.

For large organizations with Microsoft environments, System Center 2012 can be a solid investment in centralized and automated management. It bundles together a number of components that were previously sold separately, which simplifies the licensing but also makes it more expensive.

A core component of the upgrade is System Center’s Virtual Machine Manager (VMM). This component is the driving force of Microsoft’s strategic vision for System Center, as it claims the ability to manage virtual machines (VMs) on VMware and Citrix hypervisors, as well as Hyper-V VMs, within a single console. However, management functionality for these third-party server virtualization platforms is basic. If the majority of your virtual infrastructure runs on VMware or Citrix, you will certainly still want to use vSphere or XenServer for managing your VMs.

VMM can now also be used to manage VMs deployed in the Azure public cloud, and SP1 (expected to be released soon) includes a new Service Provider Framework API that extends management to other third-party public cloud providers.

A strong benefit of VMM for business users is its service template design capability, paired with System Center’s App Controller component. Together they enable the VMs that cooperate to deliver a service to be bundled into a service template that delegated end users can then deploy on demand. This application self-service is an exciting new feature: it reduces the communication gap between the business and IT by letting business users interact directly with the resources they require.
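
To make the concept concrete, here is a minimal, purely illustrative Python sketch of what a service template amounts to: a named bundle of VM roles that a delegated user can deploy as a unit. All class and field names here are hypothetical; this is not VMM’s actual object model or API.

```python
from dataclasses import dataclass, field

@dataclass
class VMRole:
    # One tier of a service, e.g. a web front end or a database server.
    name: str
    base_image: str        # hypothetical reference to a base VM template
    instance_count: int = 1

@dataclass
class ServiceTemplate:
    # A named bundle of VM roles that together deliver one service;
    # delegated users deploy the whole bundle rather than individual VMs.
    name: str
    roles: list = field(default_factory=list)

    def deploy(self, requested_by: str) -> str:
        total_vms = sum(role.instance_count for role in self.roles)
        return f"{self.name}: {total_vms} VM(s) provisioned for {requested_by}"

# Example: a two-tier line-of-business application deployed on demand.
crm = ServiceTemplate("CRM", [VMRole("web", "w2k8r2-iis", 2),
                              VMRole("db", "w2k8r2-sql", 1)])
print(crm.deploy("business-analyst"))
```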

Deployment of System Center is quite complex. Before investing in System Center 2012, consider your environment’s needs and weigh the associated costs of System Center licensing, and of the infrastructure necessary to implement it, against the value you will receive from deploying it.

For more information, see Decide if Microsoft System Center 2012 is Right for the Enterprise.


Assessment is a critical step between the planning and execution of a virtual server implementation. It has three equally important components: infrastructure assessment, business assessment, and operational assessment. The goal of the infrastructure (technical) assessment is to inventory the current infrastructure and identify virtualization candidates. It has two primary components:

  • Discovery. Build an inventory of all current servers, network connections, and storage. This inventory should also cover where components are in their lifecycle, along with hosted applications, operating system configuration, and licensing.
  • Collection of data on current utilization. Key to identifying virtualization candidates is building a utilization profile for each server. The four components of the profile are CPU, memory, disk I/O, and network. This data should be gathered over a period of time so that variability can be captured and a meaningful average baseline utilization calculated (a minimal sampling sketch follows this list).
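
As a sketch of what this data collection involves, the following Python example samples the four profile components and reduces them to a mean-plus-variability baseline. It assumes the third-party psutil package as a stand-in for a real monitoring agent, and the sampling interval and count are illustrative only.

```python
from statistics import mean, stdev

import psutil  # assumption: third-party package (`pip install psutil`)

def sample_utilization(interval: float = 1.0) -> dict:
    """One sample: CPU %, memory %, and disk/network throughput over `interval`."""
    d0, n0 = psutil.disk_io_counters(), psutil.net_io_counters()
    cpu = psutil.cpu_percent(interval=interval)  # blocks for `interval` seconds
    d1, n1 = psutil.disk_io_counters(), psutil.net_io_counters()
    return {
        "cpu_pct": cpu,
        "mem_pct": psutil.virtual_memory().percent,
        "disk_Bps": ((d1.read_bytes + d1.write_bytes)
                     - (d0.read_bytes + d0.write_bytes)) / interval,
        "net_Bps": ((n1.bytes_sent + n1.bytes_recv)
                    - (n0.bytes_sent + n0.bytes_recv)) / interval,
    }

def build_profile(samples: list) -> dict:
    """Reduce a series of samples to a baseline (mean) plus variability (stdev)."""
    return {key: {"mean": mean(s[key] for s in samples),
                  "stdev": stdev(s[key] for s in samples)}
            for key in samples[0]}

# Illustrative only: a real assessment samples every few minutes for weeks.
profile = build_profile([sample_utilization() for _ in range(5)])
print(profile)
```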

The key deliverables of this assessment will be:

  • A complete inventory of the current server and storage infrastructure.
  • A utilization profile (CPU, memory, disk I/O, and network) for each server.

Armed with the above information, appropriate candidates for virtualization can be identified. Note that this is an analysis of candidates from a technical point of view – i.e. can the server run on a virtual machine within acceptable limits? Whether a candidate should be virtualized will also depend on combining these findings with the results of a business and operational assessment for virtual server implementation.

How to Conduct a Technical Virtualization Assessment

  1. Inventory Current Server/Storage Infrastructure. Start by building a complete inventory of the current x86 server infrastructure as well as any external storage (e.g. a storage area network array) used by these servers (a record sketch follows this list). This inventory should include:
  • Current Hardware Configuration. This includes the processor type, number of processors, speed (in GHz), memory configuration and size, storage configuration and size, and the speed and type of network connection.
  • Current Software Configuration. This includes the current operating system type and version as well as the application software installed on the server. Ask whether the current software installation can be licensed and supported in a virtual environment, and record the key user and/or owner of each application.
  • Current Role of the Server. What is the current role of the server hardware within data center operations? Roles will likely include software development and testing server, staging server, parallel or DR failover server, and production server.
  • Age of Server/Current Place in Lifecycle. If hardware lifecycle planning is used, indicate where the server is in its lifecycle (years to replacement).
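
The following is a minimal sketch of what one inventory record might capture, covering the four bullets above. Field names and example values are hypothetical; in practice this data would come from monitoring or asset-management tooling rather than hand-built records.

```python
from dataclasses import dataclass

@dataclass
class ServerInventoryRecord:
    # Hardware configuration
    hostname: str
    cpu_model: str
    cpu_count: int
    cpu_ghz: float
    memory_gb: int
    storage_gb: int
    nic_speed_mbps: int
    # Software configuration
    os_version: str
    applications: tuple
    virtualizable_licensing: bool  # can the stack be licensed/supported on a VM?
    app_owner: str
    # Role and lifecycle
    role: str                      # e.g. "production", "staging", "dev/test", "DR"
    years_to_replacement: float

record = ServerInventoryRecord(
    "web01", "Xeon E5504", 2, 2.0, 16, 300, 1000,
    "Windows Server 2008 R2", ("IIS",), True, "jdoe", "production", 1.5)
print(record)
```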

As noted above, it may be possible to draw on existing infrastructure monitoring software to build this inventory. Another source might be an audit recently carried out for another strategic initiative, such as disaster recovery planning.

  2. Measure Resource Utilization over Time. The inventory called for in step one is a fairly static measure of the current environment. However, assessing resource utilization requires more than a point-in-time snapshot of the environment. The next step is to measure utilization over time in four key areas:
  • Processor Utilization. Applications running on a virtual machine act as if they have complete control of a host processor, when in fact they are allocated a share of processor cycles by the host hypervisor. An ideal candidate for virtualization will use only a fraction of the available processing cycles of a single physical processor – less than 50% is good, less than 25% excellent (these thresholds are reused in the rating sketch under the final step). Utilization may be as little as 5% for some applications.
  • Memory Utilization. As with processor capacity, memory is shared among the virtual machines on a host server. However, where processing is dynamically apportioned to VMs as needed, each virtual machine is assigned a fixed share of physical memory (see Figure 2). It is therefore very important to understand how much memory a running application and its operating system actually require.

Figure 2: Memory Utilization in Server Virtualization (not all assigned memory is utilized)
  • Storage Resources. Virtual machines store their operating system and application software on virtual hard disks, which are held locally or, more likely, on a storage area network array, and accessed via a virtualized SCSI interface. In assessing the disk resource requirements of a candidate, it is necessary to look at both required storage capacity and disk I/O utilization.
  • Network Resources. Virtual machines on a virtualization host share the network bandwidth available to that host. The host may be connected to the network via a Gigabit Ethernet NIC, for example, while an individual VM has a virtual 100 Mbps adapter based on its share of the host adapter. If a candidate requires a consistently high-bandwidth connection, say 500 Mbps, it will need to be placed on a host whose other VMs have consistently low bandwidth requirements, such as 10 Mbps (see the placement sketch after this list).
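
The placement arithmetic behind the network example above can be sketched as follows. The 20% headroom reserve is an assumed safety margin, not a figure from the assessment, and a real placement decision would weigh CPU, memory, and disk alongside bandwidth.

```python
def fits_on_host(host_mbps: int, existing_vm_mbps: list, candidate_mbps: int,
                 headroom: float = 0.2) -> bool:
    """Check whether a candidate VM's sustained bandwidth demand fits on a host,
    reserving `headroom` (20% by default, an assumed margin) for bursts."""
    usable = host_mbps * (1 - headroom)
    return sum(existing_vm_mbps) + candidate_mbps <= usable

# A 500 Mbps candidate fits on a Gigabit host whose other VMs need ~10 Mbps each...
print(fits_on_host(1000, [10, 10, 10], 500))   # True
# ...but not on a host already carrying another high-bandwidth VM.
print(fits_on_host(1000, [500, 10, 10], 500))  # False
```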

Typically, utilization is measured over a period of a month, although the window can be longer depending on the application.

  3. Compile Results for Analysis. A specialized assessment tool, such as the virtualization candidates report generator for Microsoft Operations Manager (MOM), will generate candidate lists based on specific parameters, such as maximum processor and memory utilization. Those using more manual methods and non-specialized tools can build a spreadsheet where the key findings are compiled and servers compared side by side.
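
For those taking the manual route, a short script can stand in for the spreadsheet. The sketch below rates each server using the CPU thresholds quoted in step two (under 50% good, under 25% excellent), applies an assumed memory ceiling as an additional parameter, and writes the compiled rows to CSV for side-by-side comparison. Server names and figures are hypothetical.

```python
import csv

def rate_candidate(avg_cpu_pct: float, avg_mem_pct: float,
                   max_mem_pct: float = 75.0) -> str:
    """Rate a server against the CPU thresholds from the assessment
    (<25% excellent, <50% good); the memory ceiling is an assumed parameter."""
    if avg_mem_pct > max_mem_pct:
        return "poor"
    if avg_cpu_pct < 25:
        return "excellent"
    if avg_cpu_pct < 50:
        return "good"
    return "poor"

# Hypothetical compiled findings, one row per inventoried server.
servers = [
    {"hostname": "web01", "avg_cpu_pct": 12.0, "avg_mem_pct": 40.0},
    {"hostname": "db01",  "avg_cpu_pct": 45.0, "avg_mem_pct": 70.0},
    {"hostname": "hpc01", "avg_cpu_pct": 85.0, "avg_mem_pct": 90.0},
]
for s in servers:
    s["rating"] = rate_candidate(s["avg_cpu_pct"], s["avg_mem_pct"])

with open("candidates.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(servers[0]))
    writer.writeheader()
    writer.writerows(servers)
```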
