Info-Tech is launching our first Live Collaboration on Choose the Right Development Tools for Big Data. Live Collaborations are a chance to share best practices with your peers and our analysts through an interactive video conference. Join us on December 11 at 2:00 PM EST and gain valuable insight for your next big IT project. REGISTER HERE
Choose the Right Development Tools for Big Data is a new Guided Implementation Blueprint aimed at helping application development managers select the right tools for handling big data. Organizations are increasingly turning to big data as a means of analyzing vast amounts of data rapidly. This applies not only to BI initiatives, but also to real-time commerce activities driven by real-time consumer patterns and behavior.
Much of the literature around big data focuses on architectural benefits such as divide-and-conquer processing or entity relationships. Little attention is given to the actual tools beyond generic programming languages like Java. This limits an application development manager's ability to provide development tools that maximize productivity. Developer productivity is the first complexity vector in Big Data tool selection; neglecting it can easily lead to increased maintenance costs and future derailment of an important business initiative. Now is the time to consider the right tools, or tool chain, to ease the development and maintenance burden on IT.
A second complexity vector in Big Data tool selection is integration. Legacy applications were not built with Big Data designs in mind. From a development perspective, bridging between tools now becomes part of the roadmap for Big Data projects. However, this introduces additional complexity around legacy test automation and test harnessing, which in turn complicates deployment and release because of dependencies among components.
The final complexity vector in Big Data tool selection is a meta-project issue: communication. Big Data can disrupt existing architectures, so communication and impact analysis are imperative. But how do we go about discussing these concepts? Classic data flows aren't enough. We now need to talk about metadata and master data, and strive for effective multi-domain communication.
Big Data presents some interesting possibilities. Jumping in without thinking through these complexity vectors can result in significant pain later on. It is better to plan now and improve development velocity and quality over time as the organization learns.