Darin Stahl
Principal Consulting Analyst
Info-Tech Research Group

A Business Continuity Plan (BCP) is a complex project that touches all aspects of the organization, and yet often has few or no dedicated resources. It’s work that you hope to get done in between other projects, or at home at night after the kids go to bed. It’s no surprise so many organizations struggle with BCP.

Please join me and subject matter experts on Thursday, April 24, at 4 p.m. EDT for a webinar “Developing a Business Continuity Plan: Should it be IT or the Business?” Project ownership is just one of the challenges of Business Continuity Planning we’ll be discussing in this webinar.

Go here to register for this webinar.
(Video replay will be available at this link after the event)

Info-Tech Research Group webinars occur during the early weeks of our research projects. Attendees will weigh in on several key polls and will be able to pose questions to the group. We want to work closely with our members and potential members as we build out our research to ensure we are thoroughly meeting your needs.



If you chargeback for IT, ensure that customer choices can actually save them and you money.

IT departments are increasingly being challenged to provide financial transparency and demonstrate IT’s value. In general, willingness to spend money to buy services is considered a reflection of perceived value, the conventional supply and demand equation. If a department spends $100,000 on a new application, the assumption is that the system will provide at least $100,000 of value.

The challenge is that in most organizations the cost of services is not visible, and, more significantly, the decision to consume IT resources typically has no financial repercussions on the consuming department.

I don’t normally recommend a complex chargeback mechanism for IT expenses. The data collection and allocation process typically takes significant effort for IT. And while the costs of IT may be more transparent, the re-charged departments can do little to change what IT costs them. There is generally no relationship between actual usage and chargebacks. The process achieves little in the way of cost reduction or demand management. It is a tax more than a manageable expense to the consuming departments.

But we do want to reduce or increase usage when it makes financial sense to the business. So here’s a thought about chargeback approaches.

  1. Identify IT services that are candidates for direct chargeback. They must meet the following criteria:
    1. They are optional for the user department (for example, enabling an employee to access a design tool, or providing an employee with a higher performance laptop). If they are compulsory, the business unit has no opportunity to reduce demand.
    2. The cost of the service to IT varies somewhat in proportion to the number of subscriptions (for example, a per named user charge for software licensing, or the purchase of an additional device). If this is not the case, reduced consumption may have little impact on overall organizational costs.
    3. The total cost of the service is material to the IT budget. Don’t bother with lower cost services.
  2. Modify the current IT chargeback formula to reflect two components. First, establish variable charges for services that meet the criteria above. The charges should enable IT to offset the real costs for the service. Second, establish fixed re-charges for services that don’t meet the criteria above. A typical re-charge is based on number of departmental users or end-user devices supported.
  3. If appropriate, document the costs that constitute the shared component. But don’t waste time on a granular chargeback mechanism for services over which the business units have no control, or whose costs are fixed in the near term regardless of usage.
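The two-component formula in step 2 can be sketched as follows. All service names, rates, and user counts here are hypothetical examples, not figures from any real chargeback model:

```python
# Sketch of a two-component chargeback: variable charges for optional,
# usage-driven services, plus a flat re-charge for shared services.
# All rates and counts below are hypothetical.

def department_charge(variable_subscriptions, fixed_rate_per_user, user_count):
    """Sum per-subscription charges for optional services, then add a
    fixed allocation based on the department's headcount."""
    variable = sum(rate * count for rate, count in variable_subscriptions.values())
    fixed = fixed_rate_per_user * user_count
    return variable + fixed

# Example: a department with 40 users and two optional services.
subs = {
    "design_tool_license": (150.0, 5),  # $150 per named user, 5 subscribers
    "high_perf_laptop":    (60.0, 3),   # $60/month premium, 3 devices
}
charge = department_charge(subs, fixed_rate_per_user=25.0, user_count=40)
# variable = 150*5 + 60*3 = 930; fixed = 25*40 = 1000; total = 1930.0
```

The point of the split is visible in the math: the department can shrink the first term by cutting subscriptions, while the second term stays predictable and cheap for IT to administer.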

The result allows the business units to make decisions on the usage of at least some IT services based on their value relative to cost. And IT can avoid investing in tracking mechanisms where the information provides no incentive for reducing overall organizational IT costs.


Metrics can be incredibly valuable, but also incredibly easy to get wrong. Of the plethora of available IT metrics, risk metrics are probably the most difficult to get right, mainly because risk is tricky to define and therefore tough to measure.

You Can’t Manage What You Can’t Measure

Defining the word ‘risk’ itself is actually quite simple. In a nutshell, risk is the likelihood of a negative event occurring multiplied by the magnitude of the loss that could result from that event. Sounds easy enough, right? Not so fast…

Risk must be measured in a way that is meaningful to the organization. A good risk metric demands that you aggregate numerous measurements in order to create a piece of intelligence that’s actually useful to both IT and the business. As I mentioned in an earlier blog post on root cause analysis, accurate measurement means asking the right questions:

  1. What problem is the risk metric addressing?
  2. Which decision point(s) does the risk metric support?
  3. What context should the metric take into account?

Issues arise when these questions are not rigorously applied to each and every risk metric under consideration for deployment.
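One lightweight way to enforce that rigor is to attach the three answers to every metric definition and refuse to deploy a metric that can’t answer all of them. This is a hypothetical sketch, not a prescribed tool; the field names simply mirror the questions above:

```python
# Hypothetical sketch: keep the three questions attached to every
# risk metric, and flag any metric that leaves one unanswered.
from dataclasses import dataclass

@dataclass
class RiskMetric:
    name: str
    problem: str   # What problem is the risk metric addressing?
    decision: str  # Which decision point(s) does it support?
    context: str   # What context should it take into account?

    def is_deployable(self) -> bool:
        # Deployable only if every question has a non-blank answer.
        return all(s.strip() for s in (self.problem, self.decision, self.context))

m = RiskMetric(
    name="security incident trend",
    problem="gauge effectiveness of the security program",
    decision="continue or cut specific program elements",
    context="executives also want failures measured",
)
# m.is_deployable() → True; a metric with any blank answer would fail the check
```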

Metric Overboard!

The inclusion or exclusion of certain loss events (say, an asteroid wiping out your data center) is going to determine whether or not the risk metric itself has any meaning. Take the example of using a very basic risk formula to help determine which precautions should be taken in the IT disaster recovery plan:

Risk = % chance of loss event × $ loss magnitude

Sure, a meteor strike would probably have the highest dollar cost in terms of magnitude, but the likelihood of such an event is about as close to zero as you can get without falling in. I’m oversimplifying for dramatic effect here, but I hope you can see that such a risk metric utterly fails to take into account the three questions about problem, decision, and context.
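The formula plays out clearly with even rough numbers. All probabilities and dollar figures below are illustrative, invented for the example:

```python
# Oversimplified expected-loss ranking (% chance x $ magnitude),
# with illustrative numbers only, showing why raw magnitude misleads.
events = {
    "meteor strike":         (1e-9, 500_000_000),  # huge loss, ~zero likelihood
    "disk array failure":    (0.05,   2_000_000),
    "extended power outage": (0.10,     500_000),
}
expected_loss = {name: p * loss for name, (p, loss) in events.items()}
ranked = sorted(expected_loss, key=expected_loss.get, reverse=True)
# The meteor strike ranks last despite the largest magnitude:
# disk failure 100,000 > power outage 50,000 > meteor 0.5
```

Even this toy version makes the earlier point: without the problem, decision, and context questions, a metric built on the wrong set of events produces a tidy number that answers nothing.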

Choosing Metrics That Matter

Here are a few takeaways to think about as you plan risk metrics for your organization.

1. Always consider your audience. Metrics for risk are high-level enough that they are probably being communicated to executives and others of that ilk. At lower levels, it’s all about IT managers and domain-specific professionals or support staff. Meet with these stakeholders on a regular basis to understand which metrics are the most important to them. For example:

[Table: example metrics by audience type]

2. Determine the suitability of each risk metric. This is where the three questions come into play. Once you’ve settled on the audience you’re delivering metrics reporting to, apply the questions accordingly. Let’s use security performance trends as an example:

  • Problem – how to determine effectiveness of the security program
  • Decision – whether or not to continue with certain program elements
  • Context – executives believe it’s just as important to measure failures

3. Tie risk metrics back to business objectives. Business objectives could include maximizing IT investments, using risk management to adapt to changing regulations, or a host of other business concerns. Whatever the objectives are, your risk metrics are going to be rendered useless if they’re not aligned with corporate strategic initiatives.

Obviously there’s a lot more to risk metrics than what I’ve talked about here. Please refer to the links below for more in-depth guidance.

Related Info-Tech Resources



Lean IT, the extension of lean manufacturing principles to the development of IT products and services, is a good approach to IT process improvement. But care must be taken not to go overboard force-fitting manufacturing process improvement methodology into IT.

Lean manufacturing principles have been around for more than half a century, but their application to IT is relatively recent. The central focus of Lean IT is the elimination of waste – work that adds no value to a product or service. Anything that requires rework or slows down decision making is “waste,” and a bug or an incident is a “defect.”

Info-Tech recently engaged a client in a World Class Operations IT Strategy workshop (see infographic below). They were getting into Lean IT in a big way. Though Lean IT wasn’t the focus of the workshop, here are some of my observations:

  1. Frameworks such as COBIT and ITIL would identify improvement areas within an IT department, and Lean would provide the methodology on how to close the gaps. Lean doesn’t do an assessment on, say, Change Management or Disaster Recovery practices. For example, routine maintenance/upgrades/patches would follow a PDCA (Plan-Do-Check-Act) cycle, with formal documentation at each step in the cycle.
  2. Lean comes to us from manufacturing, so the terminology and thought process are all designed to drive out waste and defects. In my view it seemed a bit of a stretch how industrial concepts were being applied in IT, but these were early days of Lean for this client. There are, of course, a lot of Japanese terms thrown around (if you want to come across as any kind of expert), such as Muda (waste), Kaizen (improvement), and Kanban (a high-efficiency production methodology).
  3. IT organization design and staff seating arrangements follow the core workflows and frequency of touchpoints. If an application specialist is frequently contacted by the Service Desk, the specialist will likely sit in the neighbouring cube, not with the other application specialists.
  4. Lean IT is all about ideas being shared from all directions (especially from the levels closest to the work) and requires a motivated IT team to truly get the benefits of the “philosophy”. With this specific client, it wasn’t the case, and Lean IT initiatives were often seen as more paperwork and/or rework.
  5. There are some neat tools that Lean prescribes, some of which may be useful in Root Cause Analysis or Post-Implementation Reviews. “5 Whys”, etc.
  6. On a darker note, Lean IT is usually driven by a corporate Lean initiative, and, as with religion, the corporate group may have concerns about any discussion of other framework orthodoxies such as ITIL or COBIT. I actually had to get the Lean group’s blessing that COBIT wasn’t saying anything disagreeable to Lean, to the point that the final deliverable had to be scrubbed of most references to COBIT (and any that remained were softened by statements on how COBIT is merely complementary to Lean).

All in all, I found Lean IT to be a useful approach to process improvement. But care should be taken to not make it the One True Religion. If an organization appoints a Lean Czar at the executive level, going down the path of (6) above becomes likely, which may actually apply blinkers/filters to the IT group’s view of the world.

For more on Info-Tech’s World Class Operations IT Strategy project and workshop click on the infographic below:



By now, you’ve likely heard that a serious vulnerability has been reported in the commonly deployed OpenSSL cryptographic library.  The bug puts widespread SSL/TLS encryption at risk of failing to properly protect encrypted data, potentially exposing usernames, passwords, and other content transferred over the encrypted link.

This is a serious matter, being addressed as an emergency by IT professionals around the world.  A few questions you may be asking yourself include:

  • As a provider of IT services, is the security of any of those services at risk due to the bug?
  • As a consumer of IT services (or from your customers’ perspective), is any information at risk due to the bug?
  • What can and should I do in either of these cases?

From the IT service provider standpoint, the answer is (unfortunately) probably a yes – more than two-thirds of all internet-facing websites run on a platform that includes the OpenSSL library, and that is before counting internal-facing web services.  Suffice it to say that this is a serious matter, and is worth every organization investigating further.

Organizations should read the material available at heartbleed.com to understand the problem in greater depth.  After confirming the state of OpenSSL usage within the organization, and checking to see if the version used in each case is affected by the bug, “[r]ecovery from this leak requires patching the vulnerability, revocation of the compromised keys and reissuing and redistributing new keys.”

From a pragmatic standpoint, Info-Tech advises focusing on externally-facing services (e.g., web servers, mail servers, SSL VPN services, etc.) first, as those are potentially at risk from an external attack.  Once these have been remediated, focus can turn to the inside of the organization, where risks may crop up from web management consoles of a myriad of devices including network components, printers, and more.

From the IT consumer standpoint, the answer is again an unfortunate yes.  Many commonly-used social media sites and consumer-focused applications (such as e-banking) were subject to the vulnerability, and there’s no way to determine whether or not the vulnerability was exploited.  As such, once the services have been fixed, it is necessary for consumers of each service to change passwords in order to ensure that any data that might have been exposed is no longer accessible to an attacker.

Info-Tech advises individuals to take a look at The Heartbleed Hit List: The Passwords You Need to Change Right Now to determine the status of their favorite sites, and Info-Tech further advises organizations that have been affected to inform their customers and users that a change of password is warranted – again, after the vulnerability has been patched and potentially compromised SSL/TLS certificates have been replaced.

Finally, individuals should consider their password management practices more generally.  If, for example, someone used the same password for Tumblr (one of many at-risk sites that have since remediated the vulnerability) as they use for online banking or internal network access, it is possible that an attacker has already sniffed out that password.  As such, Info-Tech recommends changing any passwords that were the same as any affected services, as well as recommending a better general practice of avoiding re-use of passwords that grant access into sensitive applications or systems.