A few months ago I had one of those “Freaky Moments” during a pre-sales presentation to a prospect. At the very beginning of the presentation, we showed a few examples of what we had done for other clients and how it benefited them. When pitching prospects who run Teradata, it usually goes like this:
Us: “At company XXX we rescued their ongoing metadata project by extracting metadata from more than 500 Teradata BTEQ scripts and loading it into their metadata management tool. We shortened the time needed for this task from 4 months to just 2 weeks and automated the extraction so they would never again lose data lineage hidden in custom code. Have you experienced similar issues with data lineage hidden in your custom code?”
Them: “Yes, that is exactly how it is. We have a lot of custom code in Teradata, and our tools are not able to show us what is going on inside. Our business guys force us to make changes in the warehouse all the time, citing a volatile market, flexibility, and time-to-market. So impact analysis, what-if analysis, and hunting for errors all take a lot of manual labor.”
That is usually a great start for us. They are on board and nothing can stop us 🙂 But not this time. We started the usual way: “At company….” The reaction was: “That is an issue we have with our old data warehouse. Terrible. But we are building a new one.” My first thought was “GREAT! A migration from old to new, an even bigger opportunity for us!” (I love migrations and upgrades; they are always very stressful and at least five times more expensive than originally planned. But more on that topic in another post.) And then they continued: “So it doesn’t bother us anymore, because we have a great ETL tool now.” WTF? How would any ETL tool help them manage custom code in Teradata? But it became very clear in a few seconds. The idea of those architects was very simple – all transformations and data processing would be executed by the ETL tool. No SQL code anymore, only drawing pictures in the “great graphical interface” of that superb new ETL tool (by the way, they did choose a really great ETL tool).
What do you think? Is it smart? No way.
First – you have one of the greatest and fastest database platforms ever, and you are not going to use its power? Not very smart! Second – drawing and modeling anything other than very simple transformation logic is a nightmare. It is much more effective to put those things into code (I wrote about this in another article). People who do not program always think the same way:
“Code is a mess and no one understands what’s there.”
“Maintenance is expensive and everything is slow.”
But hey – code is not the criminal here. It is the victim. Please accept that you have a lot of requirements for your data warehouse and your BI. You want complex stuff. Every problem has its inherent complexity, and there is no secret way around it. Try to get rid of code and use those diagrams, and you will end up with hundreds of thousands of them, maybe millions. It will be a mess again, and it will be even slower to change anything and more expensive to maintain. The only thing you really need is to take control of your custom code and make it an integral part of your BI governance and information management. And we can definitely help you with that. Our Manta Flow and Manta Checker products are designed for exactly this purpose.
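What does “taking control of custom code” mean in practice? At its core, lineage extraction maps which tables each SQL statement reads and which it writes. A real lineage tool needs a full SQL parser that understands the whole grammar, BTEQ control statements, views, volatile tables, and so on – but a deliberately naive sketch of the core idea might look like this (the regexes and table names here are purely illustrative, not how any production tool works):

```python
import re

def extract_lineage(sql: str) -> dict:
    """Toy lineage extraction for a single INSERT ... SELECT statement:
    the INSERT target is the table written; tables named after FROM or
    JOIN are the tables read. Real SQL needs a real parser."""
    target = re.search(r"INSERT\s+INTO\s+([\w.]+)", sql, re.IGNORECASE)
    sources = re.findall(r"(?:FROM|JOIN)\s+([\w.]+)", sql, re.IGNORECASE)
    return {
        "writes": target.group(1) if target else None,
        "reads": sorted(set(sources)),  # deduplicate and order for stable output
    }

# Illustrative statement with made-up table names
sql = """
INSERT INTO dw.sales_agg
SELECT s.region, SUM(s.amount)
FROM stg.sales s
JOIN ref.region r ON s.region_id = r.id
GROUP BY s.region
"""

print(extract_lineage(sql))
# → {'writes': 'dw.sales_agg', 'reads': ['ref.region', 'stg.sales']}
```

Run this over every script in the warehouse and you get a graph of table-to-table dependencies – the raw material for the impact analysis and what-if analysis those prospects were doing by hand.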
And how did our presentation end? We spent an hour trying to educate the guys on the other side, but we failed. They blindly believe that a new data warehouse, and especially a new ETL tool, is going to solve all their problems. So we gave up. When we left the meeting, I realized that they are simply not customers for us. Not yet. They need to figure it out and fail the same way I have failed many times before – closing my eyes so as not to see the ugly and unpleasant things in my software systems, or even in my life. Good luck, guys, and see you in a year or two.