Let’s talk a bit about agile BI development.
Please do not mistake it for Agile BI, which is basically about self-service, business-driven BI and the ability to make rapid decisions. Agile BI development, in contrast, is about being effective and flexible, satisfying users, and delivering high-quality results on time and on budget when developing and maintaining a BI environment. Take a look at the Agile Manifesto.
Do you want to learn more? Take a look at slides from our seminar “Agile BI Development Through Automation”, held in London on May 19th, 2016.
My relationship with agile is a little bit difficult. I really admire experienced guys like Kent Beck, Martin Fowler, and the other Fathers of Agile. Their focus on quality, flexibility, and the ability to deliver good software systems is highly admirable. A lot of articles have been written about agile principles, agile methodologies, and how to put them into practice, so there is no need for me to rephrase them.
But what I don’t like is how we treat those great agile principles in the real world of large enterprises, and especially in BI environments! Agile has become an excuse for chaos, weak documentation, and bad software architecture. It has also become a way to cover up late and over-budget delivery.
During my many years in business I have seen both great projects and really desperate ones. Have you ever heard about The Chaos Report? Its authors, The Standish Group, focus on gathering and analyzing all the data about software projects. And they have been doing it for quite a long time :) You need to be a member to access the most recent reports and the whole Chaos Knowledge Center, but take a look at this report from 2013 [in PDF]. You can see several important things:
1) 43% of all projects were challenged (late, over budget, and/or with less than the required features and functions) and 18% failed (canceled prior to completion or delivered and never used). That is much better than ten years ago, but it still sucks!
2) When we assess large and small projects separately, we can see a completely different picture. For smaller projects (less than $1 million in labor costs) the failure rate was 4% and only 20% of projects were challenged.
So what does that mean? Planning big but acting small makes sense. Intensive user involvement makes sense. No surprises so far! And one must admit that agile forces us to behave like this. So a big plus from my point of view.
Unfortunately, I’m not aware of any exact numbers related only to BI projects. My experience has been that BI projects are in general less successful. But, as I said, I have no proof of that except what I have seen. What I know for sure is that when you look at BI development teams you will typically see several serious issues.
The test-driven approach is one of the cornerstones of any agile methodology. If you want to deliver often, you need to be able to test and deploy your solution very effectively. But in BI environments it is quite rare to see automated testing in place. The good news is that you can change that very easily. You just need to get a solid tool, or even better a set of tools (static code analysis, unit and integration testing, and system testing at the very least), and start using them consistently in your development process. I encourage you to take a look at Manta Checker, which can help you with static analysis and the enforcement of policies and code standards for your team. We roll it out with a rich set of rules representing best practices for all the supported technologies.
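Automated testing in BI is easier to start than it looks. As a minimal sketch, assuming your transformation logic can be exercised against a throwaway database, here is a self-contained unit test in Python against SQLite (the table and column names are hypothetical): it loads known staging data, runs the transformation, and asserts on the result.

```python
import sqlite3

# Hypothetical transformation under test: load a dimension table from
# staging, keeping only the most recent row per customer_id.
TRANSFORM_SQL = """
CREATE TABLE dim_customer AS
SELECT customer_id, name, updated_at
FROM stg_customer AS s
WHERE updated_at = (
    SELECT MAX(updated_at) FROM stg_customer WHERE customer_id = s.customer_id
);
"""

def run_test():
    conn = sqlite3.connect(":memory:")  # isolated, disposable test database
    conn.execute("CREATE TABLE stg_customer (customer_id INTEGER, name TEXT, updated_at TEXT)")
    conn.executemany(
        "INSERT INTO stg_customer VALUES (?, ?, ?)",
        [(1, "Alice", "2016-01-01"),
         (1, "Alice B.", "2016-03-01"),  # newer record for customer 1
         (2, "Bob", "2016-02-15")],
    )
    conn.executescript(TRANSFORM_SQL)
    rows = dict(conn.execute("SELECT customer_id, name FROM dim_customer"))
    # One row per customer, and the latest name must win.
    assert rows == {1: "Alice B.", 2: "Bob"}
    return rows

result = run_test()
```

A test like this runs in milliseconds, so it can be executed on every commit; the same pattern scales to whole suites of transformation rules.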
Everyone is using Git today, am I right? Or at least SVN or something like it. People use branches, merging, automated diffs, and several other things we love so much about modern version control systems. But not in BI, folks. I see it again and again: all the scripts (DDL, stored procedures, etc.) and critical documents are stored on a shared filesystem. Code is written in a way that makes it impossible to apply changes and later roll them back easily. Data is excluded from standard versioning processes. There is no easy and quick fix, but it is essential to start using such tools and to rethink BI release processes to make them more agile.
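To illustrate the apply-and-roll-back idea, here is a minimal sketch of a reversible release process in Python with SQLite (the migration names and scripts are hypothetical): every schema change ships as an up/down pair, and a tracking table records what has been applied.

```python
import sqlite3

# Hypothetical migrations: each schema change is an (id, apply, rollback)
# triple, so every release can be applied and undone deterministically.
MIGRATIONS = [
    ("001_create_dim_customer",
     "CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, name TEXT);",
     "DROP TABLE dim_customer;"),
    ("002_create_fact_order",
     "CREATE TABLE fact_order (order_id INTEGER PRIMARY KEY, customer_id INTEGER);",
     "DROP TABLE fact_order;"),
]

def migrate(conn, target):
    """Apply migrations up to `target`, skipping those already applied."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (name TEXT PRIMARY KEY)")
    applied = {n for (n,) in conn.execute("SELECT name FROM schema_version")}
    for name, up, _down in MIGRATIONS[:target]:
        if name not in applied:
            conn.executescript(up)
            conn.execute("INSERT INTO schema_version VALUES (?)", (name,))

def rollback(conn, steps=1):
    """Undo the last `steps` applied migrations, most recent first."""
    applied = {n for (n,) in conn.execute("SELECT name FROM schema_version")}
    for name, _up, down in reversed(MIGRATIONS):
        if steps and name in applied:
            conn.executescript(down)
            conn.execute("DELETE FROM schema_version WHERE name = ?", (name,))
            steps -= 1

conn = sqlite3.connect(":memory:")
migrate(conn, target=2)    # release both changes
rollback(conn, steps=1)    # undo only the most recent one
tables = {n for (n,) in conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'")}
```

The point is not this particular runner but the discipline: once every change is a small, versioned, reversible script in Git, branching, diffing, and release rollbacks come almost for free.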
I mentioned it in the previous point: we have real issues with deployment in BI. In other arenas you can see teams doing Continuous Delivery or at least Continuous Integration; it is a must-have standard for software development. But not in BI. Rapid, automated build, test, and deploy is seen very rarely. Again, it is not easy to automate things, but reducing manual labor would help a lot, as it is the biggest source of errors and inefficiency.
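The build-test-deploy gate itself can be very simple. Here is a minimal sketch, with hypothetical placeholder steps standing in for your real build, test, and deploy scripts; the only rule enforced is that a failing step stops the release.

```python
# Hypothetical pipeline steps; in practice each would shell out to your
# real scripts (collect DDL, run ETL tests, push to the target environment).
def build():
    return True   # e.g. compile ETL packages and collect DDL scripts

def test():
    return True   # e.g. run unit and integration tests on a test schema

def broken_test():
    return False  # a deliberately failing step, for illustration

def deploy():
    return True   # e.g. release to the target environment

def run_pipeline(steps):
    """Run steps in order and stop at the first failure, so a broken
    build can never reach production."""
    for step in steps:
        if not step():
            return "failed at " + step.__name__
    return "deployed"

status = run_pipeline([build, test, deploy])
failed = run_pipeline([build, broken_test, deploy])
```

Even this toy gate captures the essential property of Continuous Integration: deployment is a consequence of passing tests, not of someone remembering to run them.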
Going agile doesn’t mean you should stop documenting software, even if some pretend otherwise. Documentation is essential for understanding and sharing critical ideas about your software over the long term. But there are plenty of ways to make your life easier. Today’s IDEs (Integrated Development Environments) like Visual Studio, IDEA, and Eclipse provide very advanced ways to look at code and analyze it. These tools can show you different types of diagrams to help you understand code faster. Your focus stays on capturing the more abstract, high-level ideas; the rest is generated from the code. In BI this approach is not so common, which is unfortunate, because there are definitely tools out there that can help. There are several great IDEs for writing SQL code with almost all the necessary features. And if you want to see the big picture of what your code does across several different technologies, you can use Manta Flow together with your favorite metadata manager. Or use Manta Flow alone, if your interest is just in code.
How many times have I heard it said: “Let’s go and refactor it!”? The whole idea of refactoring is worth hundreds of books (and most of them have been written already). One of the most crucial things to know before you refactor something is what else will be impacted by the change. Are you going to change a table? A column? Some logic in a stored procedure? Different types of changes have different levels of impact. And the biggest issue is that in a BI environment it is quite hard to identify the impact of planned changes, and to do it quickly. Most metadata management tools ignore custom code, so your data lineage is not complete and impact analysis does not work as you would expect. But there are tools out there that can help, and Manta Flow is one of them. With complete lineage, refactoring stops being an issue.
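To make the problem concrete, here is a rough sketch of the simplest possible impact scan (the script contents are hypothetical): given a set of SQL scripts, find every one that references the object you plan to change. A real lineage tool parses the code properly and follows data flows; a regex scan like this only gives a first, rough estimate.

```python
import re

# Hypothetical SQL scripts; in practice these would be read from your
# version-controlled script directory.
SCRIPTS = {
    "load_orders.sql": "INSERT INTO fact_order SELECT * FROM stg_order;",
    "report_revenue.sql": "SELECT customer_id, SUM(amount) FROM fact_order GROUP BY customer_id;",
    "load_customers.sql": "INSERT INTO dim_customer SELECT * FROM stg_customer;",
}

def impacted_scripts(table, scripts):
    """Return the names of all scripts that mention `table` as a whole word."""
    pattern = re.compile(r"\b" + re.escape(table) + r"\b", re.IGNORECASE)
    return sorted(name for name, sql in scripts.items() if pattern.search(sql))

# Planning to change fact_order? These scripts need a look first.
hits = impacted_scripts("fact_order", SCRIPTS)
```

Text matching over-reports (comments, similarly named objects) and misses indirect dependencies through views or dynamic SQL, which is exactly why proper lineage analysis is worth having.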
Some may complain that my focus here has been more on tools and techniques than on people and processes. And they are right. But when it comes to topics like requirements, specifications, and project management, I can’t see any significant difference between BI and non-BI projects.
So, to say it again: there are plenty of agile principles and techniques that are great, and it is a shame we have not been able to implement them for BI projects yet. Several weeks ago I met an experienced architect from IBM who said it right: “Agile is fragile!” But the reason is not the agile methodology itself; it is the way we put it into practice. And that is even more true in BI environments.
Be sure to follow us on Twitter and submit your questions using the form on the right.