That Microsoft is making Windows free for consumer tablet and smartphone makers is historic. It doesn't mean Redmond is on the verge of giving away the store – especially for the enterprise – but it does further signify a shift away from a Windows centrism that threatens Microsoft with irrelevance.

It was about 40 years ago that young Bill Gates upset the early microcomputer hobbyist community by packaging and selling software. The idea that software is something you buy has been sacred to Microsoft to this day.

Further to Microsoft orthodoxy is the idea that one particular type of software, the computer operating system, is something sold and licensed for a particular device. That started with MS-DOS for PCs in the 80s, then Windows for PCs in the 90s, and Windows Server into the new millennium.

Call it a Windows first policy – a policy that made Microsoft billions but has been an impediment to modern trends in virtualization, cloud computing, and mobility.

In the mobile world in particular, Microsoft has missed the boat: only a minuscule number of devices run Windows, while Google's free Android dominates, followed by Apple's iOS. Meanwhile, sales of the venerable PC (and laptop) are cratering.

So now Microsoft is talking "Cloud first and Mobile first" and moving away from Windows first. Following the March 24 announcement of Office (Word, Excel, and PowerPoint) for the iPad, we now get news that Windows will be free for some mobile devices.

These moves are limited. For example, the free Windows applies only to consumer devices with screens of nine inches or smaller, and Windows was only costing $5 to $15 per device before. But they do underscore Microsoft's intention that the so-called post-PC era will not be the post-Microsoft era. The question is whether this is too little, too late.

Meanwhile, businesses are grappling with Microsoft licensing that remains Windows centric and device centric. Though there have been some changes to bend to trends such as cloud and virtualization, a clean break with the past has not been made.

Info-Tech clients have saved money and aggravation by working through the resulting maze that is Microsoft licensing. Check out the infographic below for our Microsoft licensing solution (click on the image for a larger view and to go to our project blueprint). In spite of cloud first and mobile first, this story isn't due to get easier (or cheaper) any time soon.

[Infographic: Info-Tech's Microsoft licensing review]


Software defined storage is the key to the data center of the future and (insert vendor name here) has been the leader in software defined storage for years.

This is a common thread in messaging from a number of storage vendors in recent weeks. IBM points to SAN Volume Controller (SVC) as evidence of their long-term leadership in software defined storage (for more, see the Info-Tech blog from IBM Edge, "Addition by Subtraction? IBM and Software Defined Storage Commoditization"). HP points to StoreVirtual as part of a "six year lead in software defined storage."

And now NetApp is talking this week at their Industry Analyst Summit about their established leadership in software defined storage, centered around Clustered ONTAP, the latest version of their core storage operating system, Data ONTAP. Through node clustering, ONTAP is going beyond individual NetApp arrays to manage storage as a service across multiple storage devices, including other vendors' arrays and even commodity server hardware (with a product called ONTAP Edge).

NetApp CEO Tom Georgens told the assembled industry analysts that NetApp isn't going after software defined storage but, rather, with Clustered ONTAP, "the software defined storage story is coming to us."

Software defined is virtually new 

They are all correct, each in their own way, when they claim to have years of leadership in software defined storage. We just didn’t call it software defined storage years ago. We called it storage virtualization. What is new is that developments from cloud to flash to big data are forcing storage vendors out of their comfortable (and high margin) boxes.

One reason “software defined” is the new buzzword is that “virtualization” was too associated with server virtualization and got murky when applied to networks and storage. There are several different ways that storage can be, and is, virtualized. So, instead of the virtual data center (VDC), we’re hearing about the software defined data center or SDDC.

Over the past decade we’ve seen server virtualization turn processing into something that can be managed as a service on open standard (x86) hardware. But the storage side of the house has fought against such commoditization, insisting on hardwiring their unique intellectual property into storage arrays. I would argue that they have fought against the trend of “software defined” for more than a decade.

Storage virtualization has been around this whole time but I’ve typically seen it as a tactical device to ease customer transition from the other guys’ arrays to your own. I wrote about this seven years ago, comparing storage virtualization to a competitive neutron bomb – kill your enemies but leave their assets for your use (see the link at the end of this blog).

It always was the software, stupid

As a term, “software defined” carries its own fuzziness. I mean storage (at least beyond the most basic definition of a spinning disk, tape, or flash) has always been software defined. Disk arrays have something called a controller. A controller is nothing more than a processor running an operating system and software that manages and serves up the storage capacity on the disks. Controllers are often based on the same x86 processors found in other computers and servers (and can be virtualized just as easily as any x86 server).
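To make that point concrete, here is a minimal, illustrative sketch (in Python, with class and file names of my own invention) of a "controller" serving fixed-size blocks out of an ordinary file on commodity hardware. It is a toy, not any vendor's product, but it shows that the storage service itself is just software running on a general-purpose processor.

```python
# Illustrative sketch only: a toy "controller" serving block storage from an
# ordinary file, to show that the storage service itself is just software.
BLOCK_SIZE = 4096  # assumed block size for this toy example


class ToyController:
    """Serves fixed-size blocks from a backing file on any commodity host."""

    def __init__(self, path, num_blocks):
        self.path = path
        # Pre-allocate the backing "disk" as a sparse file.
        with open(path, "wb") as f:
            f.truncate(num_blocks * BLOCK_SIZE)

    def write_block(self, block_no, data):
        assert len(data) <= BLOCK_SIZE
        with open(self.path, "r+b") as f:
            f.seek(block_no * BLOCK_SIZE)
            f.write(data.ljust(BLOCK_SIZE, b"\0"))

    def read_block(self, block_no):
        with open(self.path, "rb") as f:
            f.seek(block_no * BLOCK_SIZE)
            return f.read(BLOCK_SIZE)


if __name__ == "__main__":
    ctrl = ToyController("toy_volume.img", num_blocks=256)
    ctrl.write_block(3, b"hello, software defined storage")
    print(ctrl.read_block(3).rstrip(b"\0"))
```

Everything a real controller adds on top of this – caching, RAID, replication, deduplication – is likewise software layered over commodity processors and media.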

But at the end of the day, the storage business has been about selling and supporting those boxes of spinning disk. The differentiating software needs to be hardwired into the box to justify why “enterprise” storage needs to be so expensive.

I’m reminded a little of how Apple made waves a few years ago when they started to base the Mac on x86 processors. For years, Apple fans had been convinced that Apple hardware was special, one of the reasons why it was more expensive than commodity PCs. So how could the Mac be based on the same commodity hardware and still be special?

It was never about the hardware. The uniqueness of the Mac was its user interface and operating system. The Mac is software. The fact that you can still only use the Mac on an Apple device is enforced exclusivity; Mac OS could run on any old x86 PC.

Forced out of the box

NetApp storage is defined by Data ONTAP. The value of ONTAP is in how it inserts an abstraction layer between the port and the spinning disk (or solid state flash). Essentially, ONTAP virtualizes the storage inside the NetApp array, enabling all kinds of nifty features from deduplication to snapshots and replication.
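As an illustration of why that abstraction layer matters, here is a small sketch of block-level deduplication done purely in software by hashing block contents. It is a simplified stand-in for the idea, not NetApp's implementation, and the class and names are hypothetical.

```python
# Illustrative sketch only: block-level deduplication via content hashing,
# one of the features an abstraction layer between ports and disks makes
# possible. This is not NetApp's implementation.
import hashlib


class DedupStore:
    def __init__(self):
        self.blocks = {}   # content hash -> block data (stored once)
        self.volume = []   # logical block number -> content hash

    def write(self, data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        # Identical blocks are stored only once.
        self.blocks.setdefault(digest, data)
        self.volume.append(digest)

    def read(self, block_no: int) -> bytes:
        return self.blocks[self.volume[block_no]]


store = DedupStore()
for block in [b"A" * 4096, b"B" * 4096, b"A" * 4096]:
    store.write(block)

print(len(store.volume), "logical blocks,", len(store.blocks), "physical blocks")
# -> 3 logical blocks, 2 physical blocks
```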

But NetApp recognizes that to survive and thrive it has to take its version of software defined storage beyond the box with the stylized blue N on it. Forces compelling this move include the success of server virtualization and cloud computing. Customers are comfortable with the idea of compute being abstracted and consumed as a service from a grid of servers in the data center or in the cloud. Why can't data center based storage also be abstracted as a service from aggregate resources?

Clustered ONTAP 8.2 includes a raft of new features specifically about managing storage as a service across those aggregate resources. These include storage virtual machines (SVMs) for matching specific storage pools with specific workloads and quality of service features for setting SLAs for specific workloads.
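For a feel of what a per-workload quality of service limit does, here is a generic token-bucket throttle sketch. Rate limiting of this kind is a common way to cap one workload's I/O so it cannot starve others; it is illustrative only and says nothing about how ONTAP's QoS is actually built.

```python
# Illustrative sketch only: a simple token-bucket throttle showing the idea
# behind per-workload QoS limits (e.g. capping IOPS for one workload so it
# cannot starve another). Not NetApp's implementation.
import time


class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: float):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False


# Cap a hypothetical "dev/test" workload at roughly 500 IOPS.
dev_test_limit = TokenBucket(rate_per_sec=500, burst=500)
io_admitted = sum(dev_test_limit.allow() for _ in range(10_000))
print(f"I/Os admitted in the first instant: {io_admitted}")
```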

[Diagram: Clustered ONTAP storage virtual machines (SVMs)]

This is the right way to go, but it is going to pose challenges for NetApp. They are going to have to manage the shift in focus from a hardware company to a software and services company. They haven't had to worry as much about valuing the software and hardware as separate layers when it was all in one box.

You can see this struggle right now with the ONTAP Edge product (basically ONTAP running on a virtual machine on any VMware host server). NetApp is limiting the size of the Edge storage appliance and targeting it mainly at the small branch office use case. They still want you to buy that box with the N on it and are not quite ready to set ONTAP free.

Note: Hard to believe the Neutron Bomb article was seven years ago. Have a look and see how much things have (or have not) changed. Note that IBM SVC is mentioned as is LeftHand SAN/IQ which is now HP StoreVirtual. See Why Virtualization is Like a Neutron Bomb at InfoStor.


Over the past seven years or so I’ve seen machine virtualization grow from a neat trick for server consolidation to a platform for agile data center management. Throughout that time there has never been doubt about who is number one in this game. But I’ve also been impressed that VMware has never been complacent about their leader status.

An interesting story has not been so much VMware's leadership but whether any other player would ever be a serious and legitimate alternative. Five years ago it was really no contest. Today there is a contest, particularly from Microsoft. But in tracking the progress of competitors we shouldn't take VMware's leadership position for granted. VMware doesn't.

In short, let’s give VMware their due.

Areas of Leadership

In focusing on a competitive landscape we often look at feature parity. Back in the day there were things that VMware did that nobody else did. Things like being able to move a running virtual machine from one host to another and being able to increase the number of VMs that could comfortably share a host machine through memory sharing.

But VMware can rightly claim that, while the competition can add “me too” features, that doesn’t mean that they do it better.  In memory sharing and “over commit”, for example, the competition can claim progress, but VMware has a larger slate of capabilities including memory compression and transparent page sharing. In memory management, VMware is clearly ahead.
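To illustrate the idea behind transparent page sharing, here is a toy sketch that hashes guest memory pages and counts how many physical copies would be needed if identical pages were shared. The page contents and VM counts are made up; this is the underlying concept, not how VMware implements the feature.

```python
# Illustrative sketch only: the idea behind transparent page sharing --
# identical memory pages across VMs are detected (here by hashing) and
# backed by a single physical copy. Not VMware's implementation.
import hashlib


def shared_footprint(vm_pages):
    """vm_pages: list of lists of page contents (bytes), one list per VM."""
    unique = {hashlib.sha256(p).hexdigest() for vm in vm_pages for p in vm}
    total = sum(len(vm) for vm in vm_pages)
    return total, len(unique)


# Three VMs booted from the same OS image share most of their pages.
os_pages = [bytes([i]) * 4096 for i in range(100)]
vms = [os_pages + [(b"unique-%d" % n) * 10] for n in range(3)]

total, physical = shared_footprint(vms)
print(f"{total} guest pages backed by {physical} physical pages")
```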

Another example is storage management. In a shared environment, storage management is critical. VMware is not the only vendor that has storage management in their portfolio, but VMware is the only one that has built APIs (vSphere APIs for Array Integration or VAAI) to integrate virtual management with the native management of storage arrays. The degree to which storage vendors support VAAI is a differentiating feature in our storage vendor landscapes.

VMware can, and does, point to other areas where they continue to show leadership. In securing virtual infrastructure, for example, VMware has vShield application, data, network, and endpoint security. These have recently been amalgamated under the banner of vCloud Networking and Security 5.1.

Good Enough Might Be Good Enough

Does all this mean that we think VMware is the only and obvious choice for virtualization in your infrastructure? Of course not. You don’t always need to go with best in class. Sometimes good enough is good enough. As noted above, an interesting story has been about whether the competition has been good enough.

A year ago I blogged on how Microsoft is VMware’s only real competitive threat. I still hold to this position. Microsoft has continued to get traction for Hyper-V. The main reason they have not been a champion in our Vendor Landscapes has been the slow general availability release of Hyper-V 3.0 and System Center 2012. Microsoft has a tendency to talk about a product as if it is in general use a year or more before the fact. Only now is it coming together in actual product.

Citrix XenServer has always scored well in our feature-by-feature comparisons with VMware, but it has struggled for market share. Citrix no longer positions XenServer as a general replacement for VMware, instead targeting it at areas where Citrix has existing strength, such as application and desktop virtualization and service provider clouds.

In the meantime, VMware continues to do what they have always done: focus on where virtualization is going next and innovate to remain the market leader. This includes cloud, of course, as well as the fully software defined data center (servers, networks, and storage).


When faced with the task of building a "system," the only way to go is to build the most appropriate solution for the situation. In some cases, that may be a fat architecture; in others, it may be a thin architecture; or it may be a little "chubby" client (a hybrid of both), but the main point is to build the most appropriate solution.

Fat clients will not automatically be replaced by thin clients. Either approach has its share of positive and negative attributes. For a fuller discussion, please see the article, What are the pros and cons of fat and thin architectures, and will thin replace fat in the future?

The trend across many businesses regardless of industry is a move towards thin client systems, primarily because thin client systems can support on-demand and other Internet-based applications with relatively little administrative or technical support. If you want to operate in a thin client environment, you’ll need to make sure that your network resources are extremely robust and you have some form of guaranteed uptime since thin clients can’t do a lot of work when the network is down.

Many of the advantages of taking a thin client approach revolve around cost savings. Workstations running the thin client do not need to meet the heavy system requirements that the application itself may have. Because of this, it is possible to outfit the workplace with low-cost computers that do not have the fastest, newest processors, lots of memory, and storage space. Only the computers that are actually running the application need to be expensive, state-of-the-art machines. By contrast, the cost of the computers needed to run a fat client can be a negative factor: in fat client situations, the desktops do need to be state-of-the-art, multi-processor, high-RAM machines, which in large enterprise situations can have massive cost implications. This cannot be ignored when deciding whether to go fat or thin.
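A back-of-envelope calculation makes the hardware-cost argument concrete. All the figures below are hypothetical placeholders, not quoted prices, and the back-end server cost for the thin case is an assumption.

```python
# Back-of-envelope sketch only; all prices are hypothetical placeholders,
# not quoted figures. It illustrates the hardware-cost argument above.
seats = 1000
fat_workstation = 1400    # hypothetical high-spec desktop, per seat
thin_terminal = 400       # hypothetical low-spec client, per seat
server_for_thin = 60000   # hypothetical back-end capacity for 1000 thin users

fat_total = seats * fat_workstation
thin_total = seats * thin_terminal + server_for_thin

print(f"Fat client hardware:  ${fat_total:,}")
print(f"Thin client hardware: ${thin_total:,}")
```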

There are also cost savings in license fees. Not every user of the application needs to connect to it at the same time, so instead of paying a license fee to have the application installed on every computer (and sit idle), you pay a license fee for every simultaneous connection to the application. Keep in mind that not all software vendors offer this option, so you will need to confirm that this licensing model is available when choosing software.
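The same kind of rough arithmetic applies to concurrent-use licensing. The user counts and price below are hypothetical, purely to show the shape of the saving.

```python
# Illustrative sketch only: per-device vs concurrent-use licensing.
# The user counts and license price are hypothetical.
total_users = 500
peak_concurrent_users = 120
price_per_license = 300  # hypothetical

per_device_cost = total_users * price_per_license
concurrent_cost = peak_concurrent_users * price_per_license

print(f"Per-device licensing:     ${per_device_cost:,}")
print(f"Concurrent-use licensing: ${concurrent_cost:,}")
```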

Additionally, there are savings to be had with respect to time, which also leads to cost savings and increased productivity. When a new version of an application is released, or if there is a maintenance upgrade, there is no need to install the fix, patch, update, or upgrade on every workstation. Only the computers running the application need to have the software installed. The thin clients on the workstations connect just as easily to the new version of the system as they did to the old. In a large organization, this means greatly reduced installation and deployment time, which can save hundreds of hours. Downtime is also reduced since multiple thin clients can access the one upgraded version and get back to work as soon as that upgrade is completed.
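A quick, hypothetical calculation shows where those hundreds of hours come from; the workstation count and per-machine install times below are assumptions, not measurements.

```python
# Back-of-envelope sketch only; the workstation count and per-machine install
# times are hypothetical, but they show where "hundreds of hours" comes from.
workstations = 1000
minutes_per_desktop_install = 20   # hypothetical touch time per fat client
application_servers = 4
minutes_per_server_upgrade = 60    # hypothetical

fat_hours = workstations * minutes_per_desktop_install / 60
thin_hours = application_servers * minutes_per_server_upgrade / 60

print(f"Upgrading every fat client: ~{fat_hours:.0f} hours of touch time")
print(f"Upgrading the servers only: ~{thin_hours:.0f} hours")
```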

Thin clients can run just as easily on laptops, tablets, desktops, smartphones, and a host of other devices such as smartboards, all with virtually no dependence on the actual OS of the device. This means that key personnel can access the application while out of the office, from various locations (whether on the other side of the facility, or the other side of the world), which can be especially useful in the case of EHR systems or simply for emergencies where your staff needs to be connected.

Not everything about thin clients is perfect; there are some disadvantages that must be weighed when deciding on a direction. As mentioned above, thin clients do require a stable network connection, whether that is the local network or the Internet. If a router fails or the connection is disturbed for any reason, work can often come to a grinding halt. Responsiveness is also sometimes an issue: even the fastest connections are not faster than a local machine. Internet lag time and network transmission speed affect the thin client application. There is always some delay as information is transmitted over the network, and this delay grows as the distance to the servers increases (particularly for Internet traffic).
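The distance penalty can be estimated from simple physics. The sketch below assumes signals propagate at roughly two-thirds the speed of light in fiber and ignores routing, queuing, and processing overheads, so real-world round trips will be worse.

```python
# Rough sketch only: why distance to the servers adds unavoidable delay.
# Assumes propagation at roughly 2/3 the speed of light in fiber and
# ignores routing, queuing, and processing overheads.
def min_round_trip_ms(distance_km: float) -> float:
    speed_km_per_ms = 200.0  # ~2/3 of c, expressed in km per millisecond
    return 2 * distance_km / speed_km_per_ms


for label, km in [("same city", 50), ("cross-country", 4000), ("intercontinental", 12000)]:
    print(f"{label:>16}: at least {min_round_trip_ms(km):.1f} ms per round trip")
```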

If you are a globally distributed organization and the servers (and thus the application) are located in a different country, then besides the lag time due to distance, you may be faced with local laws and regulations that apply to the location of the application but not to that of the client. You may end up in a situation where certain aspects of the application are dictated to you rather than being under your control.

Also, unless thin client environments are properly load balanced with redundant failover, they tend to create a single point of failure, which can be catastrophic for a business that does not have proper contingencies in place.

It is not unreasonable to predict that thin client computing is the future for business, especially as thin client technology continues to advance at a pace that begins to remove, or at least water down, the disadvantages. For now, as I mentioned before, every organization needs to weigh the advantages and disadvantages against its own needs before taking a step in either direction.


As organizations increase the number of VMs they run per host and move virtualization into production workloads, they require greater management capabilities. Vendors included in Info-Tech Research Group's Vendor Landscape all provide solutions to ease management and have moved their development focus into utility infrastructure.

Citrix and VMware, both ranked as Champions in the report, also receive Info-Tech’s two Vendor Landscape Awards. VMware receives the Trend Setter award for its addition of innovative features that enable increased control over network and storage resources and capabilities.  Citrix wins the Best Overall Value award for its robust solution offered at a much lower cost than many competitors.

Microsoft's recent development on Hyper-V and Windows Server 2012 has greatly improved Microsoft's offering, making it a closer competitor to market leader VMware. While the initial cost of licensing Windows Server is high, Hyper-V comes at no additional cost to customers, whereas other solutions require a license for the server software plus the cost of licensing the virtualization product.

Red Hat and Oracle, while not offering as strong a feature set as competitors, provide a cost effective solution to organizations looking at implementing a virtual environment without the extraneous features.

For all the details, see Info-Tech’s Vendor Landscape: Server Virtualization.
