The end of HP’s venerable midrange EVA storage array was nigh when HP acquired 3Par back in 2010. Though HP management made clear that 3Par was the new flagship of HP storage, EVA continued to exist. HP has finally made it official that EVA will no longer be sold after January 31, 2014.

EVA was the flagship of HP midrange storage for many years. Devotion within HP to the EVA was strong. But the platform was starting to show its age and was not the feature innovator it once was.

Why did the EVA last this long? HP will say that it was out of respect for the legacy customers that had invested and would continue to invest in EVA storage. I think a more likely explanation is that, for all its compelling features, 3Par just cost too darned much for midsized enterprises that previously bought EVA.

I offer two examples that are indicative of the situation a mere 12 months ago.

  1. A midsized firm was in the market for a new midrange SAN storage array. They were formerly HP customers and owned EVA storage. They were looking at a variety of solutions including NetApp and Dell Compellent. Why, I asked, not HP? Answer: EVA no longer suited their needs and 3Par wasn’t even on the table because of cost.
  2. An oil company was looking to build an internal private cloud on converged servers and storage. They were deciding between HP and IBM and leaning toward HP. In looking at the architectures being evaluated I was surprised to find the HP solution was based on EVA storage rather than 3Par. Again, cost was an issue. EVA made the HP proposal more competitive.

But a lot has changed in 12 months. What finally drove the last nail in the coffin of EVA was the 3Par 7000 series. The 3Par 7400 is a midrange offering more palatable to the midsize enterprise budget. What’s more, HP debuted 3Par Online Import software to aid migration from EVA to 3Par through the EVA management interface.

So in the 3Par 7000, HP has a real contender to replace EVA in the midrange as well as the means to get EVA customers over to 3Par smoothly. With that in place, it is no surprise that HP will stop selling the last two EVA models, the EVA P6530 and the EVA P6550.

I can report that since Christmas I have seen a resurgence of interest in 3Par among midsized enterprise customers. Customers that were “done with EVA” are considering 3Par now. It may not win every deal, but 3Par is definitely back on the table. It is a key piece of HP’s converged infrastructure and cloud infrastructure story.

Maybe what is a surprise is that it took this long.

HP will continue to offer support for the EVAs for five years, until January 31, 2019, and customers will be able to purchase additional hardware and software upgrades for three years, to January 31, 2017.

Info-Tech will be revisiting its midsize to large enterprise storage array vendor landscape this fall. If you’ve had experiences, good or bad, with the acquisition of midrange storage from HP, NetApp, Dell, IBM, or any other vendor, we’d be interested in hearing from you.


HP’s recent Industry Analyst Summit in Boston was a success if its goal was to demonstrate that top management had their act together after months of turmoil and controversy. The 250 invited analysts saw plenty of unity of vision, a focus on continued innovation, and a chief executive who demonstrated confidence and keen understanding of the challenges and opportunities ahead.

It had been a turbulent two years since analysts had last been invited to an event like this. The lag time in itself was indicative of HP’s trials. The summits were a regular springtime occurrence in Boston for years, but in 2011 the event was moved to San Francisco by new CEO Léo Apotheker. Apotheker was hired by HP after the controversial resignation of former CEO Mark Hurd.

That 2011 San Francisco Summit was a broader affair. While the focus of these events was traditionally enterprise systems, this one added personal systems, with plenty of talk about HP’s tablet and the WebOS operating system. There was also talk of HP’s plans to acquire British software company Autonomy.

Before the end of that same year, the HP tablet was discontinued after only weeks on the market, WebOS was slated to be spun off, and plans were announced to spin off personal systems. In September 2011, the HP Board removed Apotheker, and board member and former eBay exec Meg Whitman took over; a year later, HP took a nearly $9 billion write-down on the Autonomy purchase.

The Analyst Summits were scheduled and then cancelled twice in 2012 as Whitman got to work on getting HP back on track. At this year’s event, Whitman reiterated that the turnaround is not a project that can be completed in one year, but said they have made a good start.

Personal Systems were there again and had some interesting things to show in the mobile space, but I am primarily interested in convergence and virtualization. Here I saw a reiteration of HP’s leadership in convergence. HP execs rightly noted that the company was first to promote converged systems where servers, networks, storage, virtualization and management come together in a unified system.

HP has solid products at each layer of the converged systems layer cake as well as in aspects of the software defined infrastructure (especially software defined networking). They are investing more in R&D to further innovate on convergence. It was noted that the current iteration of convergence – for example stacks of blade servers, disk arrays, and switches – will not be able to cope with future requirements to store and process mountains of data.

A big part of HP’s future innovations roadmap is Project Moonshot. Moonshot is a server architecture project built around the processors normally associated with smartphones, tablets, and netbooks (such as ARM and Intel’s Atom). Moonshot is shooting to create servers that consume up to 89% less energy, take up 94% less space, and cost 63% less than traditional x86 servers – a hyperscale platform ready for those mountains of data HP says are coming from everything from social media to ubiquitous sensors in the global Internet of Things.

I was impressed with the focus and collaboration of the various parts of HP’s portfolio, and leadership, under the mantra of “One HP.” There are no guarantees here. The challenges of the future remain, and the competition is fierce, especially in industry standard architectures where the dreaded (by tier one vendors) term “commodity” is often heard.

HP as a company and a product portfolio has been there all along in spite of the turmoil at the top. It was gratifying, then, to see that HP’s leadership team has found themselves again.


A difficulty in analyzing the mid-market or the mid-range in any area of information technology is that the mid-range often doesn’t work as one uniform category. This year, for example, we decided to subdivide the mid-range storage landscape into two. Now we’re doing it again for this year’s backup software Vendor Landscapes (VL).

We’re calling the two VLs homogeneous and heterogeneous backup.

  1. Homogeneous backup focuses on vendors that provide backup primarily for Windows and Linux systems. It is homogeneous in that these are all industry standard x86 systems. Typically, customers are at the small to mid-sized end of the SME spectrum. Champions: [withheld].
  2. Heterogeneous backup focuses on products typically in the mid-sized to enterprise space that support a range of architectures. While x86 remains a critical component here, we are also looking at support for proprietary UNIX systems up to mainframes. Champions: [withheld].

As with our unified storage array landscapes, we find that solutions in the small to mid-range come from multiple antecedents and tend to overlap in terms of market coverage. In storage, for example, you have traditional enterprise solutions, typically based on Fibre Channel networking, that have come down-market to the mid-range. Then there are the iSCSI and NAS players that started at the smaller end and grew up-market.

Similarly, in backup, there are products that began in larger heterogeneous enterprises as far back as the 1980s and then there are more recent entrants that catered to the smaller, primarily Windows-based, end of the market. When x86 servers became a data center staple the former big iron titles expanded their reach. The former Windows backup titles expanded their capacity. Now they’re all playing in the mid-range and there is considerable overlap for potential mid-range customers.

[Figure 1: backup software]

Treating the mid-range as one market can be problematic for product differentiation, particularly for vendors that have multiple product offerings. In storage, if Dell is a leader, is it for Dell EqualLogic, is it for Dell Compellent, or is it for both? In backup, is Symantec being evaluated for Backup Exec or for NetBackup?

On the other hand, there are vendors that have one product whose sweet spot is precisely in the middle of the mid-range, right in that overlap zone of small-to-mid and mid-to-large. CommVault is such a vendor and product in the mid-range backup space. Its lineage is in Windows backup, but it has grown up to take on the enterprise titles at the larger end.

[Figure 2: backup software]

We hope having two backup VLs rather than one will improve the clarity of our industry view. If that isn’t enough, we’ve recently published a third VL on virtual infrastructure backup. Effective backup of virtual machines is becoming critical as more server infrastructure is virtualized. In addition to the big system/little system predecessors to modern backup, there is also a group of players that come from a pure-play virtual backup realm (Veeam, Vizioncore, PHD Virtual).

For more information, please see:

 


When faced with the task of building a “system,” the only way to go is to build the most appropriate solution for the situation. In some cases, that may be a fat architecture; in some cases, it may be a thin architecture; or it may be a little “chubby” client (a hybrid of both), but the main point remains: build the most appropriate solution.

Fat clients will not automatically be replaced by thin clients. Either approach has its share of positive and negative attributes. For a fuller discussion, please see the article, What are the pros and cons of fat and thin architectures, and will thin replace fat in the future?

The trend across many businesses, regardless of industry, is a move towards thin client systems, primarily because thin client systems can support on-demand and other Internet-based applications with relatively little administrative or technical support. If you want to operate in a thin client environment, you’ll need to make sure that your network resources are extremely robust and that you have some form of guaranteed uptime, since thin clients can’t do much work when the network is down.

Many of the advantages of taking a thin client approach revolve around cost savings. Workstations running the thin client do not need to meet the demanding system requirements of the application itself. Because of this, it is possible to outfit the workplace with low-cost computers that do not have the fastest, newest processors, lots of memory, and large storage. Only the computers that actually run the application need to be expensive, state-of-the-art machines. The flip side is the cost of the computers needed to run a fat client: in that scenario the desktops do need to be state-of-the-art, multi-processor, high-RAM machines, which in large enterprise situations can have massive cost implications. This cannot be ignored when making the decision to go fat or thin.

There are also cost savings in license fees. Not every user of the application needs to connect to it at the same time. So instead of paying a license fee to have the application installed on every computer (where it may sit idle), you pay a license fee for every simultaneous connection to the application. Keep in mind that not all software vendors offer this option, so you will need to confirm that this licensing model is available when choosing software.
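To make the concurrent-licensing math concrete, here is a back-of-the-envelope sketch. All of the numbers (seat counts, peak concurrency, and per-license prices) are hypothetical placeholders, not vendor pricing; where the break-even point falls depends entirely on your own usage pattern and price list.

```python
# Rough comparison of per-installation vs. concurrent-use licensing.
# All figures below are hypothetical placeholders, not vendor pricing.

def per_seat_cost(total_workstations: int, price_per_seat: float) -> float:
    """Every workstation with the application installed needs its own license."""
    return total_workstations * price_per_seat

def concurrent_cost(peak_concurrent_users: int, price_per_connection: float) -> float:
    """Only simultaneous connections to the centrally hosted application are licensed."""
    return peak_concurrent_users * price_per_connection

if __name__ == "__main__":
    workstations = 500        # total staff with access to the application
    peak_users = 120          # highest number logged in at the same time
    seat_price = 400.0        # per-installation license (hypothetical)
    connection_price = 650.0  # per-concurrent-connection license (hypothetical)

    fat = per_seat_cost(workstations, seat_price)
    thin = concurrent_cost(peak_users, connection_price)
    print(f"Per-seat licensing:   ${fat:,.0f}")
    print(f"Concurrent licensing: ${thin:,.0f}")
    print(f"Savings with concurrent model: ${fat - thin:,.0f}")
```

With these made-up figures the concurrent model comes out well ahead, but the comparison flips if most of your users are logged in most of the time.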

Additionally, there are savings to be had with respect to time, which also lead to cost savings and increased productivity. When a new version of an application is released, or if there is a maintenance upgrade, there is no need to install the fix, patch, update, or upgrade on every workstation. Only the computers running the application need to have the software installed. The thin clients on the workstations connect just as easily to the new version of the system as they did to the old. In a large organization, this greatly reduces installation and deployment time and can save hundreds of hours. Downtime is also reduced, since multiple thin clients can access one upgraded version and get back to work as soon as that upgrade is completed.

Thin clients can run just as easily on laptops, tablets, desktops, smartphones, and a host of other devices such as smartboards, all with virtually no dependence on the actual OS of the device. This means that key personnel can access the application while out of the office, from various locations (whether on the other side of the facility, or the other side of the world), which can be especially useful in the case of EHR systems or simply for emergencies where your staff needs to be connected.

Not everything about thin clients is perfect; there are some disadvantages which must be weighed when deciding on a direction. As mentioned above, thin clients do require a stable network connection, whether that is the local network or the Internet. If a router fails or the connection is disrupted for any reason, work can often come to a grinding halt. Responsiveness is also sometimes an issue. Even the fastest connections are not faster than a local machine. Internet lag time and network transmission speed affect the thin client application. There is always some delay as information is transmitted over the network, and this delay grows as the distance to the servers increases (particularly for Internet traffic).
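To put the distance point in rough numbers, here is a minimal sketch of propagation delay alone, assuming signals in optical fiber travel at roughly two-thirds the speed of light. The distances are illustrative, and real latency is higher: routes are not straight lines, and every router, firewall, and the application itself add time on top of this floor.

```python
# Back-of-the-envelope round-trip propagation delay over fiber.
# Real-world latency is higher: paths are not straight lines and
# switching, queuing, and server processing all add time.

SPEED_OF_LIGHT_KM_S = 300_000  # approximate speed of light in vacuum (km/s)
FIBER_FACTOR = 2 / 3           # light in fiber travels at roughly 2/3 of c

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time in milliseconds for the given one-way distance."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return one_way_s * 2 * 1000

for label, km in [("Same campus", 1), ("Same region", 500),
                  ("Cross-continent", 4000), ("Intercontinental", 10000)]:
    print(f"{label:16s} ~{km:>6} km: at least {round_trip_ms(km):6.1f} ms per round trip")
```

Even the idealized floor for an intercontinental round trip is on the order of 100 ms, which is why application placement matters for thin client responsiveness.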

If you are a globally distributed organization, and the servers (and thus the application) are located in a different country, then besides the lag time due to distance, you may be faced with local laws and regulations that apply to the location of the application but not to that of the client. You may end up in a situation where certain aspects of the application are dictated to you rather than being under your control.

Also, unless thin client deployments are properly load balanced with redundant failover, they tend to create a single point of failure, which can be catastrophic for a business that does not have proper contingencies in place.

It is not unreasonable to predict that thin client computing is the future for business, especially as thin client technology continues to advance at a pace that begins to remove, or at least water down, the disadvantages. For now, as I mentioned before, every organization needs to weigh the advantages and disadvantages in terms of its needs before taking the step towards one or the other.


There has been a lot of buzz about a new concept emerging in the network community: software defined networking (SDN). SDN is glamorized as the network’s latest push towards a more streamlined and cost-efficient solution compared to the physical infrastructure currently dominating the floors of IT departments. Promoters are trumpeting this advancement as an innovation marvel, much as virtualization was for servers. In fact, a key component of SDN is bringing networks into a virtual environment. Despite the hype giving SDN plenty of visibility, many are still confused about the underlying concept of SDN, the possible complications, and the business value of having an SDN network. Visit Info-Tech’s solution set Prepare for Software Defined Networking (SDN) to guide you through fact and fiction.

SDN is essentially a network architecture where the management, mapping, and control of traffic flow are removed from the individual network devices and centralized in a software controller. This separation is said to increase performance, network visibility, and simplicity, provided it is constructed correctly. However, given SDN’s infancy, a sufficient number of use cases and proofs of concept have yet to emerge in the SDN space, leaving organizations wondering whether there are any revenue-generating or cost-saving opportunities. How can they make a sound decision on SDN? It may be too early to make a final decision, but they can start crafting the case and investigating the early movers in the SDN space.
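As a purely conceptual illustration of that separation (this is not OpenFlow or any vendor’s actual API; every class and method name below is invented for the sketch), the idea is that forwarding devices keep only a simple match/action table, while a central controller with a network-wide view computes and installs the rules:

```python
# Conceptual sketch of the SDN control/data-plane split.
# Everything here is invented for illustration; it is not OpenFlow
# or any vendor API, just the shape of the idea.

class Switch:
    """Data plane: holds a flow table and forwards packets by looking it up."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # (src, dst) -> output port

    def install_rule(self, match, out_port):
        self.flow_table[match] = out_port

    def forward(self, src, dst):
        port = self.flow_table.get((src, dst))
        if port is None:
            return f"{self.name}: no rule, ask controller"
        return f"{self.name}: {src} -> {dst} out port {port}"

class Controller:
    """Control plane: holds the network-wide view and decides all paths centrally."""
    def __init__(self, switches):
        self.switches = switches  # inventory of managed devices (the global view)

    def provision_path(self, src, dst, hops):
        # hops is a list of (switch, output_port) pairs computed from the topology
        for switch, port in hops:
            switch.install_rule((src, dst), port)

# Usage: the controller, not the individual switches, decides how traffic flows.
s1, s2 = Switch("edge-1"), Switch("core-1")
ctrl = Controller([s1, s2])
ctrl.provision_path("10.0.0.5", "10.0.1.9", [(s1, 3), (s2, 7)])
print(s1.forward("10.0.0.5", "10.0.1.9"))
print(s2.forward("10.0.0.5", "10.0.1.9"))
```

The point of the sketch is only the division of labor: the devices stay simple and programmable, and the intelligence that used to be distributed across them lives in one place.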

Be prepared to see a shift in networking paradigms because of SDN: hardware to software, physical to virtual, proprietary to commodity. Naturally, this will throw traditional networking staff off their game. But do not worry: current SDN solutions are still at “Version 1,” and future versions may see solutions become friendlier to traditional network practices and concepts. With the attention it is getting from the media and established network leaders, SDN technologies will likely (and hopefully) evolve to mainstream deployment states.

Realize SDN is here. Understand where it came from and how it can help your business. Remember to wait for the SDN space to settle and mature before implementing SDN in your organization. After all, you wouldn’t want your child driving your multi-million dollar car.
