An Awkward Moment: Metadata In Next Generation Information Architecture

About a week ago I attended a great event in Las Vegas – Informatica World 2015. Unfortunately, I had no time for the casino (shame on me, I know), but even so, the whole event was really good, full of visions and inspirational as always.

The common topics were (not surprisingly) big data, analytics, cloud, and all the other things packaged together under the label Next Generation Information Architecture. There is no need to be precise about definitions at this point. The main question is: what has happened that we have started talking about Next Generation Information Architecture at all? And you know what? It is not about technology.

Did you know that in most organizations the BI department reports to the CFO? And it makes sense. For many years, data was used almost exclusively by finance – there are plenty of regulations to comply with, and owners expect their P&L reports every week. I have heard the same complaint from marketing and sales so many times: “We have great risk profiles of our customers, but nothing that can be used easily for marketing or sales!”

But something has changed, right? Today we are using data not just for compliance, but more and more for business. And amazing things have started to happen. Old data warehouses are now expensive and slow, not agile enough to cope with volatile business requirements, and they lack much of the information needed to support the growth you need. So new approaches and technologies have emerged and penetrated the old BI world. Today our heads are full of big data, analytics of every kind, data lakes, agile, embedded, and self-service BI. We are excited about finding a needle in a haystack and leveraging the data. Big teams are crunching data every day to understand customers better, optimize operations, and do other great things. We have found a new toy to play with, and it is going to be a business game changer.

Everybody is happy now, but take a look at what else has been happening. We have had real issues with all the custom code in our old data warehouses, and we have not been very good at managing it. We have become slow and far from agile. And now we deploy new technologies almost every single day, dozens of applications are being created to process data, and there is more custom code in our BI than ever before. We have lost our understanding of what exactly is happening with our data; we are unable to explain how this or that critical customer attribute was computed, or where a number in a report came from. Oh yes, sorry, we can do it – but only by hand, manually. And with all those programs in BI maintenance, it is going to be a real nightmare. That means slooow and expeeeensive. And no self-service BI solution will help you here. If anything, they make it worse, because business staff will start to use their own numbers and values without any governance, and everything will quickly collapse. Trust will be lost. It is always the same when it comes to a critical software system – it starts out easy, but things very quickly turn out to be complicated.

An Awkward Moment

And because we think this is a real issue that will become very painful in a year or two, during one of Informatica World’s key presentations we asked some smart, experienced guys from a huge consulting company, in front of a full audience, the following two questions:

1] Do you believe that up-to-date, correct, and complete metadata is needed to run Next Generation Information Architecture?
And the answer was simply “Yes, it is and it will be crucial!”

2] How do you plan to collect metadata from custom code that is basically everywhere in Next Generation Information Architecture?
And the answer was (hold your breath): “Yes, that is a real issue, and we believe the best way to solve it is to document your code appropriately.”

WTF? Do you seriously mean that? Have you ever in your entire life met even one programmer who documented her code “appropriately”? I have spent many years as a developer, an architect (quite a good one, I believe :)), and later a project manager, and I have learned one thing – when it comes to documentation, you can bet any amount you want that your docs are not up-to-date, complete, or even correct. And if there is any documentation at all, usually no one really understands it except the original developer (and sometimes not even her). Take it or leave it, but please stop lying to yourself. Doing boring, repetitive tasks manually is always the road to hell. So what to do?

It is not so hard. You need to deploy a solution that automatically generates the important parts of your documentation from your BI environment, so that people only have to document the high-level conceptual issues – the kind of thing that still can only be documented by humans. I would start with a good metadata manager (like the one from Informatica or other vendors) to analyze and document data structures, and package it together with Manta Tools to handle your custom code. Without Manta Tools, your custom code will remain a black box. With Manta Tools you get interactive, always up-to-date documentation of what your custom code is actually doing, in both the old and the new BI world. And that is just a small piece of what we can do for you.
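To make the idea concrete, here is a toy sketch of what “generating documentation automatically from custom code” means in principle: scanning SQL scripts and deriving table-level lineage, instead of asking developers to write it down by hand. All script and table names below are invented for illustration, and a regex pass like this only hints at the idea – real lineage tools such as Manta parse full SQL grammars, not patterns.

```python
import re
from collections import defaultdict

# Hypothetical custom ETL scripts whose data flows nobody ever documented.
sql_scripts = {
    "load_customers.sql": """
        INSERT INTO dw.customers
        SELECT c.id, c.name, r.risk_score
        FROM staging.crm_customers c
        JOIN staging.risk_profiles r ON r.customer_id = c.id;
    """,
    "build_report.sql": """
        INSERT INTO reports.weekly_pnl
        SELECT * FROM dw.customers;
    """,
}

TARGET_RE = re.compile(r"INSERT\s+INTO\s+([\w.]+)", re.IGNORECASE)
SOURCE_RE = re.compile(r"(?:FROM|JOIN)\s+([\w.]+)", re.IGNORECASE)

def extract_lineage(scripts):
    """Map each target table to the set of source tables it is loaded from."""
    lineage = defaultdict(set)
    for name, sql in scripts.items():
        sources = SOURCE_RE.findall(sql)
        for target in TARGET_RE.findall(sql):
            lineage[target].update(sources)
    return lineage

# Print a simple "always up-to-date" lineage report straight from the code.
for target, sources in sorted(extract_lineage(sql_scripts).items()):
    print(f"{target} <- {', '.join(sorted(sources))}")
```

The point is not the twenty lines of Python; it is that the lineage report is regenerated from the code itself every time it runs, so it cannot drift out of date the way hand-written documentation inevitably does.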

Just go check out our demos or our case studies. You will be surprised by what is possible.
