It’s time to get out of traditional endpoint management. It used to be the only way: desktop PCs were purchased, set up, provisioned, managed, and supported by IT. Frequent hands-on time with every PC was time-consuming, but there was no reasonable alternative.
Today it’s different. The typical desktop endpoint has been replaced by a variety of devices: laptops, tablets, and smartphones are all used for work. Many of them are owned by users rather than IT, and are no longer connected to the corporate network with a blue cable.
IT’s arsenal has also changed. Technologies like desktop virtualization, cloud services, and enterprise mobility management make hands-on time with every endpoint unnecessary.
These changes have slowly made endpoint management less desirable, for both users and IT, in comparison with the alternatives. The vast majority (80%) of the IT professionals we asked agreed that “IT spends more time than necessary managing endpoints.” Like the proverbial frog in slowly boiling water, many IT departments have stuck with increasingly onerous management practices rather than exploring an exit plan.
Our project blueprint, Stop Managing Endpoints, walks through the process of creating a pilot test for getting out of (or at least reducing) endpoint management, including putting together a business case. Spoiler alert: in most cases, the total cost of sticking with endpoint management is higher than that of the alternatives.
The end goal should be to treat IT as a utility. Like the thin client of ancient history, any endpoint, stationary or mobile, user-owned or company-owned, should be able to access the applications and services its user needs. And just as the power company doesn’t need to manage the light bulbs receiving its electricity, IT doesn’t need to manage the endpoints receiving IT services.
RIP endpoint management. It was fun while it lasted, but the world has moved on.
The mantra from industry leaders is that agility and innovation must be key objectives for IT leaders. But most IT organizations score poorly in those two areas; they are neither agile nor innovative. The road to Utopia is long and difficult. The important question is, given a limited capacity to change, where and how should IT departments focus their efforts to increase agility or innovation?
Leaving innovation for a future blog post, let’s consider the factors that inhibit agility. Architectural standards, complex processes, complex and rigid applications, sunk costs, limited internal capability, and convoluted procurement practices all slow change in the interest of strong control, easy integration, and risk minimization. “Quick and dirty” or “cheap and cheerful” are often the routes to faster solutions, but they are typically distasteful to IT practitioners.
Increased agility may require a loosening of the traditional approaches used in selecting and deploying IT solutions. But because this deviation from “safe” solutions can increase real risk, it makes little sense to shift away from traditional controls and standards unless the need for flexible solutions trumps concerns about controlling risk and minimizing unit costs. In general, flexibility conflicts with control, and IT leaders must identify the situations where the bias to control must shift to a bias to flexibility.
So how can we readily determine where IT must be agile and where traditional approaches are good enough? Parts of every organization are very stable and unchanging; others are in a state of rapid change and uncertainty. Control is the appropriate approach for the former, and agility is essential and appropriate for the latter. New products and services evolve through up to three stages: time to market, time to volume, and time to profit. As a new product is introduced and becomes successful (or not), the organization needs to focus on different objectives over this developmental period. And these different objectives drive specific and different priorities for the selection of technology and support processes.
Time to Market = Speed.
The first phase, time to market, involves the initial introduction of a service or product. Being able to deliver the product ahead of the competition is an essential requirement for competitive advantage. Demand is uncertain, so investment in production capacity and support technology must be constrained. The nature of the product or service may have to be modified based on initial market experience. So the two key characteristics of any system change in this initial phase are fast and cheap.
At this stage of product launch, flexibility and adaptability are essential. Launching a support application (or an enhancement to an existing one) must be done quickly. IT must be prepared to cobble together “quick and dirty” solutions that enable the product to be launched fast and modified based on actual customer behavior and preferences. The supporting processes and technologies may have to diverge from existing architectural and technical standards.
IT should not hold back the launch or refinement of the product. If the product fails in the marketplace, it is withdrawn, and the small investments can be written off. In the initial stages of any product launch or major product change, flexibility is job one. Of course, adherence to standards, the ability to scale, and support for efficient operation are desirable considerations, but speed and flexibility trump all.
Time to Volume = Scale.
The second phase, time to volume, addresses the challenges created by a product that generates high demand. The organization’s ability to meet demand is enabled (but not guaranteed) when the technology used for the solution can be scaled to the projected volumes of customers and business transactions. Delays in expanding application capacity, networks, servers, storage, access devices, or the number of simultaneous users supported can significantly dampen demand. The focus of this phase is fast and scalable. At this stage, the IT organization has to be flexible in terms of infrastructure capacity and application performance.
Time to Profit = Optimized.
In the third stage, when market demand has been confirmed and the basic organizational delivery capacity is in place, the organization moves to stabilize the product and its supporting processes, and focuses on making the product profitable. Once a product or service reaches this state, agility is no longer the primary requirement. The solution implemented during the Time to Market phase may have to be modified or replaced.
IT staff tend to plan solutions that address the challenge of Time to Profit even when the product or service is in the Time to Market or Time to Volume stage. IT can frustrate the organization if it delays basic, scalable solutions in the interest of control and standards. Before approaching a new requirement in conventional ways, determine whether the service is in early-stage development and deployment and requires a more flexible approach.
Agility is essential at the early stages of launching new products and services. IT organizations that are seen as obstacles to time to market will increasingly see themselves excluded from business planning and will experience the reality and challenges of shadow IT. Take action now with our projects to Make IT More Responsive & Agile and Deploy Changes More Rapidly by Going Agile.
Info-Tech is launching our first Live Collaboration on Choose the Right Development Tools for Big Data. Live Collaborations are a chance to share best practices with your peers and our analysts through an interactive video conference. Join us on December 11 at 2:00 PM EST for a Live Collaboration and gain valuable insight on your next big IT project.
Choose the Right Development Tools for Big Data is a new Guided Implementation Blueprint aimed at helping application development managers use the right tools for handling big data. Organizations are increasingly examining big data as a means of analyzing vast amounts of data rapidly. This applies not only to BI initiatives but also to real-time commerce activities driven by real-time consumer patterns and behavior.
Much of the literature around big data focuses on architectural concepts such as divide and conquer or entity relationships. Little attention is given to the actual tools beyond generic programming languages like Java. This limits an application development manager’s ability to provide development tools that maximize productivity. Developer productivity is the first complexity vector in Big Data tool selection; getting it wrong can easily lead to increased maintenance costs and the future derailment of an important business initiative. Now is the time to consider the right tools or tool chain to ease the development and maintenance burden on IT.
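To make the productivity vector concrete, here is a minimal sketch of the boilerplate a plain-Java approach demands for even a trivial aggregation. It assumes Hadoop’s standard MapReduce API (the dominant Big Data framework as of this writing); the class and variable names are illustrative.

```java
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Even a trivial word count requires two hand-written classes (plus a job
// driver, omitted here). Higher-level tools exist largely to hide this.
public class WordCount {

    public static class TokenMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            // Emit (token, 1) for every whitespace-separated token in the line.
            for (String token : line.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable total = new IntWritable();

        @Override
        protected void reduce(Text word, Iterable<IntWritable> counts,
                              Context context)
                throws IOException, InterruptedException {
            // Sum the 1s emitted by the mappers for this word.
            int sum = 0;
            for (IntWritable c : counts) {
                sum += c.get();
            }
            total.set(sum);
            context.write(word, total);
        }
    }
}
```

The same aggregation is a one-line query in a higher-level tool such as Hive or Pig. That gap, multiplied across a real project, is the maintenance cost the productivity vector is about.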
A second complexity vector in Big Data tool selection is integration. Legacy applications were not built with Big Data designs in mind, so from a development perspective, tool bridging becomes part of the roadmap into Big Data projects. This introduces additional complexity around legacy test automation and harnessing, which in turn complicates deployment and release because of the dependencies among components.
The final complexity vector in Big Data tool selection is a meta-project issue: communication. Big Data can disrupt existing architectures, so communication and impact analysis are imperative. But how do we go about discussing these concepts? Classic data flows aren’t enough. We now need to talk about metadata and master data and strive for effective multi-domain communication.
Big Data represents some interesting possibilities. Jumping into it without thinking through the complexity vectors can result in significant pain later on. Better to plan this out now and improve development velocity and quality over time as the organization learns.
Cloud backup is a hot topic right now, and it certainly seems attractive. But things are not always as they seem. Vendors promise rock-bottom prices and a pay-as-you-go model that will mitigate the costs. As attractive as those pennies-per-gigabyte prices may seem, it’s critical to dig deeper into where the cloud will save, and where it will cost, over the long term.
For our new cloud backup project blueprint, Create a Game Plan to Implement Cloud Backup the Right Way, we spoke with implementers who had very different takes on the cost savings potential of cloud backup. A CTO for an archdiocese, for example, noted that by moving to a cloud provider he was able to exceed expectations for recovery point and recovery time objectives while saving money in the process. An IT manager for a global professional association anticipated an astonishing savings of 50 to 75% on the TCO of his backup as a result of moving to the cloud.
The above results must come with the standard disclaimer “results not typical.” A third client, with an international marketing firm, had a different story for us. He said that after crunching the numbers, cloud backup just didn’t make sense. He didn’t see compelling savings in the cloud and, in fact, believed the cloud would come in at a higher cost than the on-premises solutions he was considering.
So what gives?
Our first two examples were small shops with limited staff and little room in the budget. They didn’t have a lot of data to back up, and the cloud offered them benefits that would otherwise have over-extended them financially. One was able to use the cloud as an off-site disaster recovery tier, something he would have been unable to build on his own. The other calculated that he could cut the cost of renting a local data center and transporting tapes, and turn the savings into upgrading his network pipe, something that had benefits across IT.
Our third example came from a mature international organization with a significant volume of data to move. Its IT leader calculated that while the cloud might reduce his up-front costs, those costs didn’t go away altogether. Rather, they were redistributed across the life of his contract.
As attractive as those pennies-per-gigabyte storage costs are, it is critical to consider all the pieces over the long term. While the cloud can help you avoid purchasing a new storage array, it’s unlikely you will get away without any capital expenditure at all. You also have to factor in network upgrades, as well as the costs of training, migration, and getting your security protocols up to snuff.
Essentially, when deciding whether or not to go to the cloud, you will face two major deltas that should play a big role in your decision. The first is the gap between the CAPEX costs of each solution. For some organizations, avoiding a significant capital expenditure might justify a larger TCO over the long term: they’re able to secure an ongoing budget for backup, but any new expenditure takes a lot more work to get approved. Other organizations, though, need to take a longer view.
That’s where the second delta comes into play. The TCO gap over a four- or five-year term can make a big difference to your organization. Your cloud costs may be minuscule now, but it is quite possible that the cumulative cost of cloud storage will outstrip your on-premises solution. Remember, with a cloud model you’re paying for each gigabyte of data every month. And don’t forget about the cost of bringing that data back if you’re ever unlucky enough to be in a true disaster recovery scenario.
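To see how that second delta behaves, here is a back-of-the-envelope sketch. Every figure in it (the per-gigabyte rate, the growth rate, the on-premises costs) is an illustrative assumption, not a vendor quote; the point is the shape of the curves, not the specific numbers.

```java
// Back-of-the-envelope TCO comparison over a five-year term.
// All figures below are illustrative assumptions, not vendor quotes.
public class BackupTcoSketch {
    public static void main(String[] args) {
        double dataGb = 20_000;          // assumed: 20 TB protected today
        double monthlyGrowth = 0.02;     // assumed: data grows 2% per month
        double cloudPerGbMonth = 0.10;   // assumed: $0.10 per GB per month

        double onPremCapex = 60_000;     // assumed: new array plus software
        double onPremOpexMonth = 1_500;  // assumed: power, space, tape, admin

        double cloudTotal = 0;
        double onPremTotal = onPremCapex; // the CAPEX delta hits on day one
        for (int month = 1; month <= 60; month++) {
            cloudTotal += dataGb * cloudPerGbMonth; // pay for every GB, every month
            onPremTotal += onPremOpexMonth;
            dataGb *= 1 + monthlyGrowth;
            if (month % 12 == 0) {
                System.out.printf("Year %d: cloud $%,.0f vs. on-prem $%,.0f%n",
                        month / 12, cloudTotal, onPremTotal);
            }
        }
    }
}
```

On these assumed numbers, cloud starts far cheaper but its cumulative total overtakes the on-premises total in year four; halve the data volume and it never does within the term. That sensitivity is exactly why both deltas have to be run against your own data before signing anything.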
Apple announced new iPads on Tuesday, reigniting the iOS versus Android debate. Much is being made of Apple’s relative decline in market share against Android devices. But simple market share numbers give an incomplete picture of the continuing strength of Apple in the battle against Android.
Google launched the Android Open Source Project in 2007, a few months after Apple debuted the first iPhone. As an open source project, the OS was free to any device maker that wanted to compete with the iPhone (and later the iPad). Open source was a volatile weapon early in the smartphone wars, released with the intent of creating multiple alternatives to prevent Apple from monopolizing the mobile space. Mission accomplished: today, Android phones account for something like 80% of the worldwide smartphone market. In tablets, Android shipments are passing Apple’s this year.
Google prevented Apple from dominating, but a winning market share does not make Google the winner, or even the leader, in the mobile wars. Using open source as a weapon always had a chance of backfiring. The 80% share of the smartphone market is deceptive. Seven versions of stock Android are currently in circulation, and upgrades are left to the whim of carriers and OEMs; by contrast, over 90% of iOS users upgrade to the latest version within 30 days. Android is still a fragmented nightmare, and that’s before taking into account the 1,000+ different devices running their own versions of Android. A graphic from Open Signal illustrates the scale of this fragmentation.
So let’s not forget that we are comparing a single set of devices (iPhones and iPads, most running the same operating system) to over a thousand other devices that only started with the same operating system. It’s a bit like saying that all independently owned restaurants put together outnumber McDonald’s restaurants. True, but it doesn’t say much about the success of McDonald’s.
Apps are a whole other issue. Open source was the strategy to get Android out there as widely as possible, but it’s a hassle for developers to support all these fragments. By letting the OS fragment, Google created a market that is adversarial for developers, and it sacrificed app revenue in the process. It shows: by app revenue, Apple is far more successful than Google.
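What that hassle looks like in practice: a minimal sketch of the per-version branching that fragmentation forces on Android developers. The API names (Build.VERSION.SDK_INT, Notification.Builder) are real Android framework calls of this era; the surrounding app code is assumed, so this compiles against the Android SDK rather than running standalone.

```java
import android.app.Notification;
import android.content.Context;
import android.os.Build;

// Sketch: one feature, three code paths, because the installed base
// spans many OS versions at once.
public class NotificationSketch {
    public static Notification buildUpdateNotice(Context context) {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN) {
            // API 16+: Notification.Builder.build() is available.
            return new Notification.Builder(context)
                    .setContentTitle("Update available")
                    .setSmallIcon(android.R.drawable.stat_notify_sync)
                    .build();
        } else if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) {
            // API 11-15: same builder, but only getNotification() exists.
            return new Notification.Builder(context)
                    .setContentTitle("Update available")
                    .setSmallIcon(android.R.drawable.stat_notify_sync)
                    .getNotification();
        } else {
            // Pre-API 11: Notification.Builder doesn't exist at all.
            Notification n = new Notification();
            n.icon = android.R.drawable.stat_notify_sync;
            return n;
        }
    }
}
```

An iOS developer targeting the latest release reaches over 90% of users within a month; an Android developer carries branches like these, or a compatibility library, indefinitely.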
It’s also a different story within businesses. Fragmentation and openness helped Android gain ground in the consumer battle, but they are liabilities in a business environment. Most of the ground that BlackBerry has been losing in business has been taken over by Apple. Recent figures from Good Technology put iOS at 72% of device activations, with almost all app activations running on iOS. Without even trying, Apple has become the default source for mobile devices and apps in the workplace. Android is still a major (and growing) player in business, especially through bring your own device, but iOS is the current winner on the business front of the mobile wars.