IBM is leveraging the API economy and opening itself up as-a-service. The IBM Cloud Marketplace gives customers a single location from which to purchase IBM services and acquire third-party services, and it is focused on bringing together business, developers, and IT. IBM is also providing easier access to its GBS professionals through the marketplace: SOWs and extensive contracts are removed, allowing business value to be achieved faster and with fewer restrictions. IBM’s BlueMix Platform-as-a-Service gives developers a rich set of IBM components that can be assembled into new applications, and IBM is providing advanced Enterprise 2.0 services to help manage services both on and off premise.

The BlueMix beta is IBM’s push into a Cloud development platform. It contains approximately 100 different services that can be composed into applications and solutions, including Integration, Rules, RapidApps, BlueInsight (Big Data analytics), and Worklight. In the future we can expect to see Watson cognitive computing services become part of this new service mix.

IBM is going beyond just offering the platform: it is building physical BlueMix Garages, the first of which is opening in San Francisco’s Galvanize community. The garage provides a collaborative environment in which application development uses BlueMix in highly disciplined agile processes, giving developers maximum opportunity for creativity and success as they turn new ideas into products.

An example of a cool innovation that likely came out of a garage, perhaps an alpha BlueMix Garage, is the connected car that was at the show. At Impact 2014, a RallyFighter car built by Local Motors was turned into a connected car. The car takes advantage of MQTT (a lightweight messaging protocol), the Internet of Things, SmartCloud, Node-RED, BlueMix Cloud Foundry, Big Data Streams, Hadoop, and Cloudant to connect mobile devices and the car. For more information, check out the blog post and videos of how it was built on the Local Motors web site. The car is a good example of how IBM technology came together into a physical application, fully mobile enabled (it’s a car!), in less than two months.
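To make the MQTT piece concrete, here is a minimal sketch of how a vehicle might publish telemetry to an MQTT broker using the Python paho-mqtt client. The broker address, topic name, and payload fields are hypothetical illustrations, not details from the Impact demo.

```python
# Minimal MQTT telemetry publisher (sketch; broker and topic names are hypothetical)
import json
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.com"  # assumption: any reachable MQTT broker
TOPIC = "car/telemetry"             # hypothetical topic name

client = mqtt.Client()
client.connect(BROKER_HOST, 1883)   # 1883 is the standard unencrypted MQTT port
client.loop_start()                 # handle network traffic on a background thread

while True:
    # Publish a small JSON payload; MQTT keeps per-message overhead tiny,
    # which is why it suits constrained, intermittently connected devices.
    payload = json.dumps({"speed_kmh": 88, "fuel_pct": 42, "ts": time.time()})
    client.publish(TOPIC, payload, qos=1)  # qos=1: at-least-once delivery
    time.sleep(5)
```

A subscriber (a mobile app, or a Node-RED flow) would simply subscribe to the same topic on the broker to receive the stream.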

Exposing IBM as a service means IBM has to come together. It was evident at this year’s Impact that the Software Group, Systems Group, and Global Business and Technology Services are working with each other to deliver value to IBM customers. The value proposition is that IBM has it all: hardware, software, and business services. Exposing all of it as a service, whether on SoftLayer, in the Marketplace, or on BlueMix, delivers value faster and better than before, and this integrated approach improves the opportunity for value compared to when each group worked in isolation toward its own objectives. IBM stated that the quick wins it hopes to provide customers are in the 3 to 6 month time frame: no more 12 to 24 month projects before you get value. If it takes more than 3 to 6 months to deliver a new capability in today’s economy, the opportunity is lost.

I will admit that blue runs through my blood. I was an IBMer for 10 years, and I enjoyed my time with Global Business Services. I also saw the “division” across the lines of business, so I am thrilled to see IBM working together better and smarter for the benefit of its customers. I was there when former CEO Sam Palmisano launched the e-business on demand “journey”. Even though IBM launched it, the entire industry has been on that journey, and I believe the “on demand” is here and now, through the implementation of Cloud, Mobile, the Internet of Things, and the digital economy. I also think we are still on the journey, and time will tell where it takes us through the increased availability of as-a-Service offerings, the drive for more information and innovation, and the needs of the now generation.


Impact 2014

This year’s IBM Impact conference was all about seeing the results of IBM’s MobileFirst initiatives and the Cloud. Last year at this time, I remember an analyst asking the IBM panel, “When are you going to get serious about the Cloud?” Their response was that they were, but it was evident that their focus was on the private Cloud. Fitting, given IBM’s big business customers, including financial institutions, insurance companies, and healthcare providers, that are either averse to putting their data into a shared computing service or regulated in a way that prevents them from using the public Cloud.

If IBM was serious about MobileFirst, it had to get serious about the Cloud. To support the new initiative, IBM recently bought SoftLayer to pave its way to the public Cloud. The public Cloud, accessible from mobile devices anywhere and at any time, is helping to deliver solutions that take a MobileFirst approach. Since the acquisition, many of IBM’s software products have found their way into the Cloud, offering lower-cost rental alternatives to ownership. This is the benefit of the Cloud, and very telling of today’s generation. During a presentation by Daimler Automotive on its Car2Go business, the presenters stated that “access trumps ownership” for today’s generation: they would rather have access to a car than the hassle and expense of owning one. Just as a car depreciates with every turn of the key and maintenance increases its TCO, Cloud technology raises the debate of owning an asset versus renting a space or employing a service.

The same can be said for on-premise enterprise software and systems. Isn’t access to the computing capability better than owning that capability? Owning it doesn’t provide any benefit; all it provides is another asset that needs to be managed and maintained. Steve Mills, SVP and Group Executive, IBM Software and Systems, reminded us during the Day 2 general session that the total cost of ownership extends far past the initial purchase, and that the cost of care and feeding alone will exceed the purchase price. If that’s the case, and clear benefits cannot be identified for purchases, it only makes economic sense to rent in as-a-service models instead of buying.

IBM is finally giving its customers these options and opening up new markets where IBM was not traditionally a consideration. Scott Megill, CEO of Coriell Life Sciences, winner of IBM’s SmartCamp and global entrepreneur of the year awards, and a general session presenter, built his company’s solution in four months on IBM technology using a lean startup budget, stating: “IBM isn’t only for giant implementations anymore. The cost model has shifted with IBM’s acquisition of SoftLayer. Emerging and mid-market companies like ours can now get access to technologies at a pricing structure that maps with our size and growth.”

Some assembly no longer required. IBM is also addressing the needs of the Now generation. Its patterns-based PureApplication System technology has found its way to the Cloud. The PureApplication Service is a set of pre-configured, pre-assembled, pre-integrated IBM solutions, knit together with common deployment and configuration options in the Cloud. What used to take hours to days now takes a few clicks, which is a significant advancement. With these new IBM offerings, the generation of Now can spend less time installing and configuring, and more time doing.

IBM has come a long way in a year, moving towards providing cost-effective Cloud solutions for mid-size and startup businesses. The MobileFirst message continued to come through in the solutions presented at IBM Impact, in live demos on the main stage and in breakouts. That was a gutsy move in a conference center with weak internet connectivity and 9,000 people trying to tweet and update statuses with the news they were hearing. In the end, though, it was a move we appreciated, because we saw the real thing, not slideware. Thank you for the risk you took… but next year, you may want to work on the internet bandwidth for live demos.


The line between Data Integration and Application Integration is no longer fuzzy; it has finally vaporized in the Cloud. Fitting.

The emergence of the Cloud, including SaaS, IaaS, and growing PaaS adoption, has brought with it standardized interfaces at the application layer, otherwise known as APIs. Traditional integration vendors that have identified their products with Data or Application integration are no longer making that distinction in the Cloud; they are simply talking about integration. The line is disappearing as the Platform-as-a-Service war heats up.

Informatica, a vendor associated with Data Integration in the on-premise world, just released the next version of Informatica Cloud, Winter 2014, which delivers process, service, and data integration in one Cloud package. Process integration, including human-centric workflows, is part of the Winter 2014 release, as is industry-standard service integration of RESTful and Web Services APIs. Informatica has also announced new ERP adapters, which will unlock back-office ERP data for Cloud consumption.

The Informatica Cloud leverages its unique Vibe “map once, run anywhere” virtual data machine, allowing integration mappings to move between the Cloud and on-premise without code changes. Informatica didn’t leave its jewels on the ground either: it brought its data quality and profiling capabilities into the Cloud as well. Sounds like a comprehensive data and integration platform-as-a-service.

Pervasive, now owned by Actian, dropped the data integration moniker a while ago and now simply thinks of itself as an integration solution provider. Actian’s new “Invisible Integration” capabilities allow for easy construction of integrations in the Cloud between standard APIs.

This week at Dreamforce, Salesforce1 was announced. Salesforce has brought its multiple PaaS offerings under “1” platform, creating a unified platform for building next-generation apps using new APIs and advanced capabilities for integration with apps, data, processes, devices, and social networks. Salesforce, a successful product and company born in the Cloud, continues to build out and expand its PaaS offerings.

IBM’s acquisition of SoftLayer and its recent understanding of the importance of the public Cloud (guess they finally saw the light?), coupled with the IBM software business’s focus on platform, will put pressure on the PaaS market for new, unique, and innovative integration solutions in the Cloud.

Any discussion about the Cloud wouldn’t be complete without mentioning Amazon Web Services. Once primarily an IaaS vendor, AWS, which has always supported open standard APIs, now offers data warehousing, data integration, service integration, simple workflow, and messaging services. AWS is looking more and more like a PaaS provider.

Why are we seeing these trends? Cloud is big, Cloud is hot, and Cloud has been a growth area for many vendors, with higher growth than they are experiencing with on-premise offerings. Growth areas get investment money. Investment money seeds new projects, products, and innovations. Vendors stay competitive.

So I wonder when the line between Application and Data Integration is going to disappear on-premise. Maybe when the apps do.


At last year’s IBM Edge in Orlando, I said that IBM was looking to increase investment in storage while simultaneously aligning and focusing its then complex storage portfolio.

Was I wrong?

Well, I was at least half right. Instead of simplifying, IBM went out and bought another storage vendor, Texas Memory Systems (TMS), adding another product to its already complex portfolio. In addition to the initial purchase, IBM has committed to investing one billion dollars, one sixth of its annual R&D budget, in flash research and development in systems and software. So it has definitely increased investment in storage.

“We’ve tripled the size of the FlashSystem development team since the acquisition of TMS,” said Jan Janick, Vice President of Flash Systems and Technology at IBM. Moreover, IBM has committed two billion dollars thus far to its PureSystems research and development. The addition of TMS RamSan flash arrays, rebranded by IBM as FlashSystems, is the answer to what many believed was a missing component in IBM’s portfolio.

In my opinion, the biggest differentiator for IBM is its ability to move data off the array. In a recent Info-Tech solution set on how to Evaluate the Role of Solid State in the Next Storage Purchase, I point out the importance of understanding your organization’s requirements for data movement off of the all-flash array. If you’re just trying to provide consistently ultra high performance storage for a specific application, an all-flash array may be fine. But if you’re looking for a broader deployment with unpredictable workloads or data that degrades in value over time, eventual movement of data off the array is critical to keeping down total cost of ownership.

IBM accomplishes off-array movement by putting FlashSystem behind IBM’s SAN Volume Controller (SVC), enabling movement of data from FlashSystem to slower, more cost-effective storage. IBM calls this the FlashSystem Solution. The FlashSystem Solution also enables storage services such as snapshots and replication, while adding only about 100 microseconds of latency compared to FlashSystem without SVC. Real-time Compression (RtC) can also be enabled on SVC, maximizing usable FlashSystem capacity (although the additional impact of RtC on latency has not yet been published). This improves the overall value of flash within the context of the larger system; it’s all about the economics of flash.

So, what does this mean for the future of IBM? As I said, it has added complexity to its portfolio. Compound this with future plans to add unified capabilities to XIV, and with the many updates to its other storage products (such as V7000 Unified, SONAS, and N-series), and its storage solutions all start to overlap considerably in features, functionality, and management. In the long run, however, IBM has set itself up to simplify by taking the storage media out of the equation; or rather, it will move much of the value and margins from hardware to software (of course, the 1s and 0s have to get stored somewhere).

IBM argues that, right now, it is ahead of the competition in Software Defined Storage (read: storage virtualization) with SVC. While many vendors, including IBM, have developed the capability to abstract and virtualize underlying storage and add storage services, the key to what IBM calls Software Defined Storage 2.0 is industry-led openness. IBM has invested heavily in OpenStack: it is the number two contributor (with 250 employees contributing) behind Red Hat, and it has also made significant contributions to OpenDaylight for Software Defined Networking, which led to significant community-sourced innovation in this space.

Nonetheless, most people’s reaction to this push for openness is, “Why are you supporting the commoditization of storage? You’re a storage vendor.” The answer is that the value proposition is in the software and data services. By supporting open initiatives, IBM in a sense enables the extension of its Storwize platform to OpenStack, so that others can deploy applications directly on IBM’s platform. Thus, through the OpenStack GUI, organizations can now leverage IBM’s storage services capabilities (snapshots, replication, data movement, Real-time Compression).

The next step for IBM, and where it really plans to deliver value, is all about the data. With what it calls Software Defined Storage 3.0, data storage will be simplified by delivering a single protected pool of data with all the automated management occurring on the back end, observing patterns in the data to dynamically create policy on the fly and match data with the right storage medium (solid state, spinning disk, or tape). Further, because moving large volumes of data across the network is inefficient, it will intelligently move applications to the data. By making it easier for others to use its platform, IBM positions itself to capitalize on its strong support services capabilities and its strong portfolio of analytics software, from which it can then derive its margins.

While storage virtualization… ahem, Software Defined Storage, really isn’t new, IBM has pushed the boundaries in terms of where it is headed. By abstracting storage services away from the hardware and opening them up, IBM will simplify customers’ decisions about what storage hardware to buy from IBM. We are still pretty far out from this, however, and it will initially be of most benefit to large organizations leveraging OpenStack or to partners looking to develop new services through integration. Nonetheless, it will change the way we purchase and utilize storage in the future. The only question, I suppose, is whether the software portfolio, where the value resides, will be equally complex to navigate.



Why is everyone making a big deal about big data?

Lots of reasons! However, one of the most critical is that there is a general need for resources with big data skills who can work with emerging big data technologies.

Currently, there are the really smart data scientist roles: people who look for solutions to problems inside massive amounts of data. There are also infrastructure roles: people who are responsible for setting up and maintaining information in technologies such as distributed, columnar, document, graph, and geospatial databases.

However, one of the biggest skills gaps seems to fall between these two extremes. There is a need for people who can write MapReduce jobs to look for patterns and data that are then fed into the data scientists’ algorithms. These roles may also require data integration, integrity, and quality management across big data repositories, operational data stores, and data warehouses. Depending on the maturity of the big data technologies and the market, combined with the size of a given organization’s big data projects, these roles may be handled by one person or by several.
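As a rough illustration of the kind of work this middle role involves, here is a minimal Hadoop Streaming-style mapper and reducer in Python that counts occurrences of an event code in log records. The field layout and column position are hypothetical; a real job would match the organization’s own data.

```python
# mapper.py - emits "event_code\t1" for each input record (Hadoop Streaming style)
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split(",")   # assumption: comma-delimited records
    if len(fields) > 2:
        event_code = fields[2]              # hypothetical column holding the event code
        print(f"{event_code}\t1")
```

```python
# reducer.py - sums the counts per event code (streaming input arrives sorted by key)
import sys

current_key, count = None, 0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t")
    if key != current_key:
        if current_key is not None:
            print(f"{current_key}\t{count}")
        current_key, count = key, 0
    count += int(value)
if current_key is not None:
    print(f"{current_key}\t{count}")
```

With Hadoop Streaming, these two scripts would be passed as the -mapper and -reducer arguments of the streaming jar, which is exactly the kind of plumbing the vendor tooling discussed below aims to hide.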

Data technology vendors to the rescue!

Vendors including Informatica, IBM, Actian, HP, SAP, and many others are providing technologies that help reduce the learning curve by letting developers use well-established data management and integration environments, such as Informatica’s PowerCenter or IBM’s new BIG SQL, to work with complex big data technologies.

This week, Informatica released the Vibe™ virtual data machine. It underlies Informatica’s core data management and integration products and essentially gives you the ability to map once, run anywhere. Similar to the Java model of write once, run anywhere, Vibe is an idea that is long overdue: or should I say the implementation is long overdue, because the idea has been around for a long time. As a former enterprise integration architect, I saw huge potential to re-use mapping logic without having to re-write it in each integration tool. For example, the mapping between two data objects in an ETL tool should be able to be re-used in an ESB that maps the same two objects, as the sketch below illustrates.
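To illustrate the “map once, run anywhere” idea conceptually (this is not Informatica’s actual API, just a sketch with hypothetical names), the snippet below defines a single field mapping and reuses it both in a batch ETL-style pass and in an ESB-style message handler.

```python
# Conceptual sketch of "map once, run anywhere": one mapping, two runtimes.
# All names and record layouts are hypothetical, not Informatica's Vibe API.

def map_customer(source: dict) -> dict:
    """Single mapping between a source record and a target object."""
    return {
        "customer_id": source["CUST_NO"],
        "full_name": f'{source["FIRST_NM"]} {source["LAST_NM"]}'.strip(),
        "email": source.get("EMAIL_ADDR", "").lower(),
    }

# ETL-style use: apply the mapping to a batch of extracted rows.
def run_batch(rows):
    return [map_customer(r) for r in rows]

# ESB-style use: apply the same mapping to one message at a time.
def on_message(message: dict) -> dict:
    return map_customer(message)
```

The point is simply that the mapping logic lives in one place, and whichever engine invokes it, batch or message-driven, gets the same result.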

Vibe provides this capability, and it further supports my point of view that the lines between application and data integration are disappearing, leaving us with simply integration. Vibe takes this even further by allowing for changes to the underlying data technologies: not only can you map once and run anywhere in any of Informatica’s integration tools, you can also use the same mapping regardless of whether the data objects are stored in Hadoop, an Oracle RDBMS, IBM DB2, HBase, or elsewhere. This means that business rules in data profiling tools used for quality checking in operational data stores can also be used to run quality checks on Hadoop data sources. This is a significant reuse capability that will help improve data quality, integration, and integrity regardless of the persistence technology.

Vibe reduces the skills gap by allowing users with Informatica PowerCenter experience to build the equivalent of MapReduce jobs much faster than even experienced MapReduce technologists can. This is a common trend in the market: IBM has made a similar move with the introduction of BIG SQL, providing an ANSI SQL-like language for querying big data databases and eliminating the need for complex MapReduce coding, as sketched below. SAP is also helping technologists familiar with its BusinessObjects suite shorten the big data learning curve by providing connectors to Hadoop and other big data database technologies.
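For a rough sense of what that looks like in practice, here is a hedged sketch of running a familiar SQL aggregate over a Hadoop-backed table from Python. It assumes a Big SQL (or other SQL-on-Hadoop) endpoint reachable through the ibm_db driver; the host, port, table, and column names are all hypothetical.

```python
# Sketch: querying a Hadoop-backed table with plain SQL instead of MapReduce.
# Connection details, table, and columns are hypothetical.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=bigsql;HOSTNAME=bigsql-head.example.com;PORT=51000;"
    "PROTOCOL=TCPIP;UID=analyst;PWD=secret;",
    "", ""
)

sql = """
    SELECT event_code, COUNT(*) AS occurrences
    FROM weblogs              -- table defined over files in HDFS
    GROUP BY event_code
    ORDER BY occurrences DESC
"""

stmt = ibm_db.exec_immediate(conn, sql)
row = ibm_db.fetch_assoc(stmt)
while row:
    print(row["EVENT_CODE"], row["OCCURRENCES"])
    row = ibm_db.fetch_assoc(stmt)

ibm_db.close(conn)
```

Compare this with the mapper/reducer pair earlier: the query expresses the same aggregation in a few declarative lines, which is the learning-curve shortcut these vendors are selling.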

IBM, Informatica, SAP: all big players with traditionally expensive solutions, prohibitive for small to mid-sized businesses, right? Not so much. IBM is working with the appropriate authorities to make BIG SQL a standard, and Informatica will be doing the same with Vibe: it wants the Vibe Virtual Data Machine (VDM) to become as pervasive as the Java Virtual Machine (JVM). The industry as a whole has an opportunity to benefit from these unique innovations, which are also shortening the big data learning curve.

IBM also has community and express offerings for many of its middleware products. SAP has been moving down market with its entire product stack for some time now, making further progress in the SMB space.

Informatica is announcing an Express version of PowerCenter this week. PowerCenter Express is Informatica’s flagship product in a deployment model that meets the needs of departments and SMBs, at a much more reasonable price point than the enterprise version. This will bring the capabilities of Vibe and PowerCenter to a new market segment, likely the one most in need of big data utilization accelerators.

So if you are trying to learn how to implement big data in your IT environment but don’t have the skills in your organization, look for a vendor that can join you on your big data journey. Having a partner to share ideas with, and one that will provide support as you go through the wilderness, makes the journey a little less scary, and they will learn as much from you as you will from them.
