Reasons why big data projects work
It seems to be part of human nature to look for reasons why a project or proposal cannot possibly work, and identifying these risks is indeed important when something is implemented for the first time. Once enough experience is available, however, the better approach is to follow best practice, in other words, to learn from that experience. With big data, we have now reached this point.
The real work begins once management, heads of IT and subject-matter experts have realized that the analysis of huge streams of data – i.e., big data – can deliver significant advantages to their company’s business model.
As in other central projects such as business integration, business intelligence and now big data, one of the first fundamental steps is to define the project’s intended results in terms of the company’s overall goals. Governance and support from top management are also very important, as they can prevent or alleviate resistance that could hinder the project. Ideally, big data projects should be embedded in the enterprise’s overall IT strategy – in other words, tied to the specific business goals that strategy serves.
Experience ranks higher than size and strength
Generally speaking, seasoned IT employees implement big data projects based on their experience with their company’s own IT, but they usually lack expertise in big data analytics. It is therefore wise to begin with small projects so that employees can gain the experience they need. New functions can then be added gradually in order to compare the results against expectations and, if necessary, adjust the strategy or the big data approach accordingly. There is no universal big data approach, but every enterprise can find the one approach that matches its business model. It is therefore key to determine an enterprise’s concrete goals and expectations for big data early in the use case so that the maximum value added is achieved.
Introducing big data step by step also protects investments. If Hadoop technologies are applied as a supplement to an existing data warehouse system – a so-called DWH offload – an enterprise can migrate smoothly to future-proof big data technology while protecting its current investments in traditional database technologies. The data warehouse can then gradually evolve into a Hadoop-based big data platform until the company’s traditional database solutions can be replaced for specific applications.
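The DWH-offload pattern can be sketched in a few lines: historical, rarely queried rows are exported from the relational warehouse into flat files that a Hadoop cluster could then ingest (for example into Hive), leaving only the recent "hot" data in the warehouse. The following is a minimal Python illustration, using sqlite3 as a stand-in for the warehouse and a CSV file as a stand-in for HDFS; the table name, columns and cutoff are hypothetical, not part of any specific product.

```python
import csv
import os
import sqlite3
import tempfile

def offload_old_rows(conn, archive_path, cutoff_year):
    """Export rows older than cutoff_year to a CSV archive, then delete them
    from the warehouse table. Returns the number of rows offloaded."""
    cur = conn.execute(
        "SELECT id, year, amount FROM sales WHERE year < ?", (cutoff_year,)
    )
    rows = cur.fetchall()
    with open(archive_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "year", "amount"])  # header for later ingestion
        writer.writerows(rows)
    conn.execute("DELETE FROM sales WHERE year < ?", (cutoff_year,))
    conn.commit()
    return len(rows)

# Demo with an in-memory warehouse holding a hypothetical "sales" fact table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, year INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [(1, 2019, 100.0), (2, 2022, 250.0), (3, 2020, 80.0), (4, 2023, 400.0)],
)
archive = os.path.join(tempfile.mkdtemp(), "sales_archive.csv")
moved = offload_old_rows(conn, archive, cutoff_year=2022)
remaining = conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]
print(moved, remaining)  # 2 rows offloaded, 2 rows remain "hot"
```

In a real deployment the CSV export would be replaced by a transfer tool or a columnar format, but the division of labor is the same: the warehouse keeps serving current reporting workloads while historical data moves to the cheaper, scalable platform.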
We have noticed that many enterprises with large IT departments initially try to implement big data on their own. The reason: the technologies are open source and thus available free of charge for initial experiments. Such projects allow employees to become familiar with the technology and to recognize the benefits for their own company, but this does not automatically mean that implementation will address concrete business requirements. In any case, a trial-and-error strategy is more expensive than planning with support from external know-how. Partners with a proven track record are important sources of support for new technologies such as Hadoop or SAP HANA, because they bring years of experience in business intelligence and analytics, data warehouse solutions and the hosting of large IT infrastructures, in addition to big data Hadoop expertise.
For many major enterprises, big data is an evolution of existing analytics requirements based on the company’s business figures and KPIs. Mapping these requirements onto huge and constantly growing data volumes is not a problem for IT providers with experience in DWH and big data Hadoop. Such experts have the skills to master the many facets of Hadoop technology, whether a big data solution runs locally, on-premise or in the cloud.