MANTA 3.19: Dynamic SQL Support, Fully-Automated Collibra, Service API & More…

The new MANTA 3.19 is here and it will leave you with exactly the same great feeling as the first sip of pumpkin spice latte on a rainy fall afternoon.* 

In this version, we took a close look at our integration with Collibra and fully automated the whole process. Before, when we integrated MANTA with Data Governance Center, it required an initial setup that was tailored to fit each customer. But now it’s all part of the product, automatically ready to connect to your DGC!

This next new feature is a big deal for our partners who work with systems that MANTA doesn’t support, usually ETL tools. These tools can contain SQL that our partners need to parse in order to understand their customers’ BI environments. With the new “MANTA Service API”, our partners can now connect MANTA to their own solutions, have it crunch all the code in the customer databases that their tools can’t read, and then pull the results back to provide their customers with detailed and accurate data lineage.

So with the new “MANTA Service API” and Public API we introduced in our last release, you can now use MANTA’s SQL-analyzing superpower ANYWHERE. You’re welcome.

We sped up the analysis processes as well, especially in the DB2 connector. And now, when you export to IBM InfoSphere Information Governance Catalog, you can see the SQL source code right in the window.

MANTA does static code analysis, and one of its handicaps has always been dynamic SQL. In 3.19, we have taken steps toward speeding up the process of analyzing dynamic SQL: MANTA is now able to recognize and read your dynamic SQL patterns, although some specification is needed from time to time.
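To make the problem concrete, here is a minimal, purely illustrative Python sketch (all names are made up, and this is not MANTA’s actual mechanism): the target table of the INSERT is assembled at runtime, so a static analyzer cannot see it, but a user-supplied specification of the possible values turns the dynamic statement back into a finite set of static ones.

```python
def build_insert(region: str) -> str:
    """Assemble an INSERT whose target table is only known at runtime."""
    table = f"SALES_{region.upper()}"  # table name built from a variable
    return f"INSERT INTO {table} SELECT * FROM STG_SALES WHERE region = '{region}'"

def enumerate_statements(regions):
    """With the possible values specified, every concrete statement
    (and therefore its lineage) can be enumerated statically again."""
    return [build_insert(r) for r in regions]

for stmt in enumerate_statements(["emea", "apac"]):
    print(stmt)
```

This is the essence of the “some specification is needed” caveat: the pattern is recognized automatically, but the set of values it can take sometimes has to come from you.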

Last but not least there have been a few improvements affecting Informatica PowerCenter integrations. For example, MANTA can now easily read what database IFPC is connected to, which significantly decreases the amount of manual work required in the initial setup, saving many hours of valuable time on MANTA x Informatica integrations.

*We have not verified this claim; it’s just based on my personal experience. Please, don’t sue us.

Also, if you have any questions, just let us know!

So, You Are Planning a GDPR Solution without Data Lineage?

September 21, 2017

How complete data lineage can help you with GDPR compliance projects. Or, more precisely, how they cannot survive without it. 

For the last couple of months, anytime I open my browser, it’s all over the internet! GDPR everywhere. Our partners and customers see it the same way and often come to us asking what role data lineage from MANTA plays in this whole GDPR boom. Let’s have a look.

The General Data Protection Regulation (GDPR) takes effect on May 25th, 2018. So it’s about time to find out if this regulation affects you or not. The GDPR will apply to any company that stores personal information of EU Citizens, so chances are this is your company as well. And because the fine is up to €20 million or 4% of the company’s revenue (whichever amount is higher), we all might want to avoid paying that money.

Data Lineage for the GDPR?!

Although data lineage by itself won’t make you compliant (you still need a Certified Data Protection Officer (DPO), consent from EU citizens in your database, and a “few” other things), it can solve a large portion of your GDPR worries.

End-to-end data lineage can give you overall insight into ALL of your data. Here are the things that quality data lineage can tick off your 20-something-long GDPR to-do list:

1. Know your data: The GDPR requires you to know not only where data is being stored, but also why, how, and when it has been shared with other systems, both externally and internally. This includes knowing where your data is, where each record in your CRM or business database comes from, and where it is held in your data warehouse.

Luckily for you, MANTA can crunch all the SQL code in your database and, with the provided custom mapping, create not only a technical but also a business data lineage map that anyone can swiftly maneuver through.

2. Give individuals the “right to be forgotten” (RTBF): This is something anyone can request from your company as of May 25th, 2018, and you must comply without delay, certainly within 30 days. But how are you going to do that when your customers’ data is scattered across different databases? Data lineage from MANTA can take you back to all the records where a customer’s data is being stored, so you can be sure you have erased it from every one of them.

3. Data portability for everyone: Anyone can also request a copy of all the data your company stores about them. ALL their data, including stored e-mails, purchasing and payment history from different databases, and so on. With full data lineage, you can allocate your resources much more effectively and save money on developing such solutions.
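The RTBF item above boils down to a reachability question over the lineage graph: starting from the column where a piece of personal data enters, find every downstream place it lands. Here is a hedged sketch with a toy graph and invented table names (a real lineage graph would come from a tool, not a hand-written dictionary):

```python
from collections import deque

# Toy lineage graph: each source column maps to the downstream
# columns its data flows into. All names are hypothetical.
LINEAGE = {
    "crm.customers.email": ["dwh.dim_customer.email"],
    "dwh.dim_customer.email": ["mart.newsletter.email", "export.csv_feed.email"],
    "mart.newsletter.email": [],
    "export.csv_feed.email": [],
}

def all_locations(start: str) -> set:
    """Breadth-first walk: every column a personal-data field reaches."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        queue.extend(LINEAGE.get(node, []))
    return seen

print(sorted(all_locations("crm.customers.email")))
```

Erase the field from every location in the result and the RTBF request is satisfied; miss one branch of the graph and it is not.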

Better Safe Than Sorry

If you are struggling with the GDPR and trying to prepare your company as much as possible so that you are in the safe zone when the GDPR takes effect, data lineage is something you should definitely consider. An average implementation of MANTA in your data warehouse usually takes one or two days. Depending on the length of your usual purchasing process, you might want to give us a call tomorrow, next week, or next month at the latest. Each company’s needs are different, and it’s better to start in time so you can be safe rather than sorry.

Not sure how it works? Try our online demo and make sure you know all supported technologies and 3rd party solutions. Any questions on how MANTA can be useful when complying with the GDPR? Just ask us at manta@getmanta.com!

A Metadata Map Story: How We Got Lost When Looking for a Meeting Room

September 1, 2017

You may think that I have gone crazy after reading the title above or hope that our blog is finally becoming a much funnier place. But no, I am not crazy and this is not a funny story. [LONG READ]

It is, surprisingly, a metadata story. A few months ago, when visiting one of our most important and hottest prospects, we arrived at the building (a super large finance company with a huge office), signed in and passed through security, called our main contact there, shook hands with him, and entered their private office space with thousands of work desks and chairs, plus many restrooms, kitchens, paintings, and also meeting rooms.

The Ghost of Blueberry Past

A very important meeting was ahead of us, with the main business sponsor who had significant influence over the MANTA purchasing process. Our main agenda was to discuss business cases involving metadata and the role of Manta Flow. So we followed our guide and I asked where we were going. “The blueberry meeting room”, he replied. We stopped several times, checking our current position on the map and trying to figure out where to go next. (It is a really super large office space.) After 10 minutes, we finally got very close, at least according to the map. Our meeting room should have been, as we read it on the map, straight and to the left. But it was not! We ran all over the place, looking around every corner, checking the name printed on every meeting room door, but nothing. We were lost.

Fortunately, there was a big group of people working in the area, so we asked those closest to us. Several guys stood up and started to chat with us about where that room could be. Some of them started to search for the room for us. And luckily, there was one smart and knowledgeable woman who actually knew the blueberry meeting room very well and directed us to it. In 20 seconds, we were there with the business sponsor, although we were a few minutes late. Uffff.

That’s a Suggestive Question, Sir!

Our gal runs a big group of business and BI analysts who work with data every single day – they do impact and what-if analyses for the initial phase of every data-related project in the organization. They also do plenty of ad-hoc analyses whenever something goes wrong. You know, to answer those tricky management questions like:

“How did it happen that we didn’t approve this great guy for a loan five months ago?”

or

“Tell me if there is any way a user with limited access can see any reports or run any ad-hoc queries on sensitive and protected data that should be invisible to her.”

And I knew that, like many organizations out there, they had very bad documentation of the environment, either non-existent or obsolete (which is even worse), most of it in Excel sheets that had been manually created for compliance reasons and uploaded to the SharePoint portal. And luckily for us, they had recently started a data governance project with only one goal – to implement Informatica Metadata Manager and build a business glossary and an information catalog with a data lineage solution in it. It seemed to be a perfect time for us, with our unique ability to populate IMM with detailed metadata extracted from various types of programming code (Oracle, Teradata, and Microsoft SQL in this particular environment).

Just Be Honest with Yourself: Your Map Is Bad

So I started my pitch about the importance of metadata for every organization, how critical it is to cover the environment end-to-end, and the serious limitations IMM has regarding programming code, which is widely used there to move and transform data and to implement business logic. But things went wrong. Our business sponsor was very reluctant to believe the story, being pretty OK with what they had as a metadata portal. (Tell me, how can anyone call SharePoint with several manually created and rarely updated Excel sheets a metadata portal? I don’t understand!) She asked us repeatedly to show her precisely how we could increase their efficiency. And she was not satisfied with my answers based on our results with other clients. I was lost for the second time that day.

And as I desperately tried to convince her, I told her the story of how we got lost and mixed it with our favorite “metadata is like a map, programming code is like a tricky road” comparison. “It is great that you even have a map,” I told her. “This map helped us quickly get very close to the room and saved us a lot of time. But even when we were only 40 meters from our target, we spent another 10 minutes, the very same amount of time needed to walk all the way from the front desk to that place, looking for our room. Only because your great map was not good enough for the last complex and chaotic 5% of our trip. And what is even worse, others had to help us, so we wasted not only our time but also theirs. So this missing piece of the map multiplied our effort and decreased our efficiency. And now think about what happens when 40% to 50% of your metadata map is missing, which is the portion of logic you have hidden here inside various kinds of programming code invisible to IMM. Do you really want to ignore it? Or do you really want to track it and maintain it manually?”

And that was it! We got her. The rest of our meeting was much nicer and smoother. Later, when we left, I realized once again how important a good story is in our business. And understandability, urgency and relevance for the customer are what make any story a great one.

And what happened next? We haven’t won anything yet, it is still an open lead, but now nobody has doubts about MANTA. They are struggling with IMM a little bit. So we are waiting and trying to assist them as much as possible, even with technologies that are not ours. Because in the end it does not matter if we load our metadata into IMM or any other solution out there. As long as there is any programming code there, we are needed.

This article was originally published on Tomas Kratky’s LinkedIn Pulse.

HELP! How Can I Include OFSAA in My Data Lineage?

August 30, 2017

Lately, we have written a lot about how MANTA can help you comply with all kinds of regulations, but risk and compliance go hand-in-hand for our customers.

They often use a risk management tool called Oracle Financial Services Analytical Application (OFSAA) and ask us if MANTA works together with it. The answer is, YES! Read on to find out how to use MANTA together with OFSAA to really get the hang of risk management.

RISKY BUSINESS?

Let’s start with a story from the field. We use a credit card company in this example, but this story really does apply to a wide range of financial products. Our story starts with a customer who applied for a credit card, but the application was denied by the company.

Then he returned half a year later and applied again. The second time, he got it. But how was this possible? The credit card company then had to manually go through all the data and current calculations, and it took them months to find out what had caused the problem – during the 6 months between the two requests the financial company had changed the algorithms for calculating creditworthiness. So, being able to give the customer an explanation required a lot of work, stress, and time! Wouldn’t all of this have been much easier if the company had had MANTA?

DO IT THE EASY WAY…

How could the company have solved the problem using MANTA and OFSAA? MANTA provides information about stored procedures and data handling quickly and accurately. It not only provides data lineage but is able to compare current revisions with historical ones. Instead of manually going through all the past algorithm records, they could have had MANTA solve the problem for them automatically. The company could have easily looked at the algorithm used on the date the customer first applied in just a few clicks. And OFSAA helps MANTA effectively get all the information it needs.

OFSAA MAKES IT EVEN BETTER!

OFSAA works as a system above your Oracle database, using its logic to generate SQL code. MANTA and SQL code are good friends, so it’s easy for MANTA to take the information about your financial services from OFSAA and add it to the data lineage. The outcome is the detailed end-to-end data lineage that MANTA provides by parsing your Oracle database and adding mapping and OFSAA scripts. And the best part is that OFSAA and MANTA “speak” the same language, so the entire process is FAST, saving you time and money you can spend on – well, anything better than manually searching through scripts and stored procedures.

Don’t do “risky business”. Go ahead and fill in this form to get a free trial to have a look at MANTA yourself.

MANTA + Scalefree = Data Vault Heaven

August 15, 2017

We have an exciting announcement for all of you! MANTA has teamed up with Scalefree, and exciting things are headed your way! The guys at Scalefree are real pros at building information systems and data vaults, offering a full range of BI solutions.

Data Vault 2.0 is a system of business intelligence comprised of 3 (plus one) pillars that are required to successfully build an enterprise data warehouse system:

  • A flexible model: designed especially for data warehousing, the Data Vault model is very flexible, easy to extend, and can span across multiple environments, integrating all data from all enterprise sources in a fully auditable and secure model.
  • An agile methodology: because the Data Vault model is easy to extend (with near-zero or zero change impact to existing entities), successful projects choose the Data Vault 2.0 methodology, which is based on Scrum, CMMI Level 5, and other best practices.
  • A reference architecture: spanning the enterprise data warehouse across multiple environments and integrating batch-driven data, near-real-time and actual real-time data streams, and unstructured data sets.

Furthermore, the agile methodology also includes best practices for the actual implementation of the Data Vault model, for deriving the target structures (in many cases, dimensional models, but not limited to those), and for the implementation of the architecture. All implementation patterns have been fine-tuned for high performance over more than 20 years and successfully used to process up to 3 petabytes of data in a U.S. government context (defense/security).

While adapting better to changes than pretty much any other architecture, Data Vault is braced for “Big Data” and “NoSQL”. This provides the customer with the same level of efficiency, now and in ten years, erasing all worries about the rapidly growing amount of data in your business.

As one of Scalefree’s founders, Dan Linstedt is now establishing his concept on the market. In the training, exclusively certified by him, customers can learn the why, what, and how of Data Vault 2.0.

And where does MANTA come in?

When you want to build a truly perfect data vault model, having strong data lineage is essential. Complete end-to-end lineage gives you insight into the structure and all procedures inside your data warehouse. Now that you know exactly where your data comes from and what data flows it goes through to get all the way to the end table, you can create an accurate data vault model that is applicable in many ways.

Have we piqued your curiosity about how to build and use a data vault in your business? Then be sure to check out the Scalefree Data Vault 2.0 Boot Camps and save yourself a seat. Upcoming training programs will be held in Brussels, New York City (with Dan Linstedt), Vienna, Oslo, Dublin, Santa Clara CA, and Frankfurt.

Manta Goes Public with Its API!

Nowadays, every app, tool and solution needs to be connected to everything else. And MANTA is ready to join the club. 

You Asked for It

Here at MANTA HQ, we’ve been buried in customer requests to add various integration possibilities to Manta Flow. You asked for it! As of version 3.18, MANTA has a public REST API. This new feature, together with multi-level data lineage, gives users the option to use MANTA with all kinds of technologies.

Through the public API, you can connect MANTA to any custom tool or app and let that tool work with MANTA’s data. How exactly? Take a look at this example:

Let’s say you have your own quality monitoring tool that monitors critical elements of data lineage for you. You could let MANTA export an Excel file and then manually go through all the values, find out what their sources are, and look for changes by hand. But now, thanks to the public API, you can do all this automatically using your own tool!

Put an End to Boring Manual Reports

The tool can call MANTA’s API, automatically pull out all the critical elements of data lineage, and report the changes found. Now, you can automatically monitor all changes that occur to your data during a given time period, saving you and your company hours of manual labor spent pouring data from MANTA into your own tool.
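The change-report step can be sketched in a few lines of Python. This is a hedged illustration with toy data: in practice, each snapshot would be parsed from the JSON your tool pulls from the API (we leave the exact endpoints to the product documentation), but the comparison logic is the same.

```python
def diff_lineage(previous: dict, current: dict) -> dict:
    """Return elements whose set of upstream sources changed between runs."""
    changed = {}
    for element, sources in current.items():
        if set(previous.get(element, [])) != set(sources):
            changed[element] = sorted(sources)
    return changed

# Two snapshots of critical-element lineage (hypothetical names; in
# practice each dict would come from the API's JSON response).
yesterday = {"report.revenue": ["dwh.fact_sales.amount"]}
today = {"report.revenue": ["dwh.fact_sales.amount", "dwh.fx_rates.rate"]}

print(diff_lineage(yesterday, today))
```

Run it on a schedule and anything the function returns is a lineage change worth a second look.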

And there are many, many other ways you can use our new API!

To learn more about the capabilities of our solution, try a live demo, ask for a trial, or drop us a line at manta@getmanta.com.

Return of the Metadata Bubble

July 27, 2017

The bubble around metadata in BI is back – with all its previous sins and even more just around the corner. [LONG READ]

In my view, 2016 and 2017 are definitely the years for metadata management and data lineage specifically. After the first bubble 15 years ago, people were disappointed with metadata. A lot of money was spent on solutions and projects, but expectations were never met (usually because they were not established realistically, as with any other buzzword at its start). Metadata fell into damnation for many years.

But if you look around today, visit a few BI events, and read some blog posts and comments on social networks, you will see metadata everywhere. How is it possible? Simply because metadata has been reborn through the bubble of data governance associated with big data and analytics hype. Could you imagine any bigger enterprise today without a data governance program running (or at least in its planning phase)? No! Everyone is talking about a business glossary to track their Critical Data Elements, end-to-end data lineage is once again the holy grail (but this time including the Big Data environment), and we get several metadata-related RFPs every few weeks.

Don’t get me wrong, I’m happy about it. I see proper metadata management practice as a critical denominator for the success of any initiative around data. With huge investments flowing into big data today, it is even more important to have proper governance in place. Without it, the only outcomes of big (and small) data analytics would be no additional revenue, chaos, and lost money. My point is that even if everything looks promising on the surface, I feel a lot of enterprises have taken the wrong approach. Why?

A) No Numbers Approach

I have heard so often that you can’t demonstrate with numbers how metadata helps an organisation. I couldn’t disagree more. Always start to measure efficiency before you start a data governance/metadata project. How many days does it take, on average, to do an impact analysis? How long does it take, on average, to do an ad-hoc analysis? How long does it take to get a new person on board – a data analyst, data scientist, developer, or architect? How much time do your senior people spend analysing incidents and errors from testing or production and correcting them? My advice is to focus on one or two important teams and gather data for at least several weeks, or better yet, months. If you aren’t doing it already, you should start immediately.

You should also collect as many “crisis” stories as you can. Such as when a junior employee at a bank mistyped an amount in a source system and a bad $1,000,000 transaction went through. A group of three then spent three weeks tracking it from its source to all its targets and making corrections. Or when a finance company refused to give a customer a big loan and he came back to complain five months later. What a surprise when they ran simulations and found out that they were ready to approve his application. A group of two spent another five weeks trying to figure out what exactly had happened, finally discovering that the risk algorithm in use had been changed several times over the preceding few months. When you factor in the bad publicity related to such an incident, your story is more than solid.

Why all this? Because the numbers let you build a business case before a project and demonstrate the efficiency improvements after it. Together with those well-known, terrifying stories that cause so much trouble for your organisation, they will be your “never want it to happen again” memento.

B) Big Bang Approach

I saw several companies last year that started too broad and expected too much in a very short time. When it comes to metadata and data governance, your vision must be complex and broad, but your execution should be “sliced” – the best approach is simply to move step by step. Data governance usually needs some time to demonstrate its value in reduced chaos and better understanding between people in a company. It is tempting to spend a budget quickly, to implement as much functionality as possible, and to hope for great success. In most cases, however, it becomes a huge failure. Many good resources are available online on this topic, so I recommend investing your time to read and learn from others’ mistakes first.

I believe that starting with the several critical data elements used most often is the best strategy. Define their business meaning first, then map your business terms to the real world and use an automated approach to track your data elements at both the business and technical levels. When the first small set of your data elements is mapped, do your best to show its value to others (see the previous section about how to measure efficiency improvements). With that success behind you, your experience with other data sets will be much smoother and easier.

C) Monolithic Approach

You collect all your metadata and data governance related requirements from both business and technical teams, include your management and other key stakeholders, prepare a wonderful RFP, and share it with all the vendors from the top right of the Gartner Data Governance quadrant (or the Forrester Wave if you like it more). You meet well-dressed salespeople and pre-sales consultants, see amazing demonstrations and marketing papers, hear a lot of promises about how all your requirements will be met, pick a solution you like, implement it, and earn your credit. Prrrrr! Wake up! Marketing papers lie most of the time (see my other post on this subject).

Your environment is probably very complex, with hundreds of different and sometimes very old technologies. Metadata and data governance is primarily an integration initiative. To succeed, business and IT have to be put together – people, systems, processes, technologies. You can see how hard it is, and you may already know it! To be blunt, there is no single product or vendor covering all your needs. Great tools are out there for business users with compliance perspectives, such as Collibra or Data3Sixty; more big-data-friendly information catalogs, such as Alation, Cloudera Navigator, or Waterline Data; and technical metadata managers, such as IBM Governance Catalog, Informatica Metadata Manager, Adaptive, or ASG. Each of them, of course, overlaps with the others. Smaller vendors then focus on specific areas not covered well by the other players – such as MANTA, with its unique ability to turn your programming code into both technical and business data lineage and integrate it with other solutions.

Metadata is not an easy beast to tame. Don’t make it worse by falling into the “one-size-fits-all” trap.

D) Manual Approach

I meet a lot of large companies ignoring automation when it comes to metadata and data governance, especially with big data. Almost everyone builds a metadata portal today, but in most cases it is only a very nice information catalog (the same sort you can buy from Collibra, Data3Sixty, or IBM) without proper support for automated metadata harvesting. The “how to get metadata in” problem is solved in a different way. How? Simply by setting up a manual procedure: whoever wants to load a piece of logic into the DWH or data lake has to provide the associated metadata describing its meaning, structures, logic, data lineage, etc. Do you see how tricky this is? On the surface, you will have a lot of metadata collected, but every bit of that information is not reality – it is a perception of reality, and only as good as the person who entered it. What is worse is that it will cost you a lot of money to keep it synchronised with the real logic through all the updates, upgrades, etc. The history of engineering tells us one fact clearly: any documentation created and maintained manually, especially documentation that is not an integral part of your code/logic, is out of date the very moment it is created.

Sometimes there is a different reason for harvesting metadata manually – typically when you choose a promising DG solution, but it turns out that a lot is missing. Such as when your solution of choice cannot extract metadata from programming code and you end up with an expensive tool without the important pieces of your business and transformation logic inside. Your only chance is to analyse everything remaining by hand, and that means a lot of expense and a slow and error-prone process.

Most of the time I see a combination of A), C), and D), and in rare cases also B). Why is that? I do not know. I have plenty of opinions, but none of them have been substantiated. One thing is for sure: we are doing our best to kill metadata yet again. This is something I am not ready to accept. Metadata is about understanding, about context, about meaning. Companies like Google and Apple have known it for a long time, which is why they win. The rest of the world is still behind, with compliance and regulations being the most important factors in why large companies implement data governance programs.

I am asking every single professional out there to fight for metadata: to explain that measuring is necessary and easy to implement, that small steps are much safer and easier to manage than a big bang, that an ecosystem of integrated tools provides greater coverage of requirements than a huge monolith, and that automation is possible.

Tomas Kratky is the CEO of MANTA and this article was originally published on his LinkedIn Pulse. Let him know what you think on manta@getmanta.com.

MANTA 3.18: We Are Going Public… With Our API! (And More!)

Manta Flow introduces new API, complete DB2 and Netezza in IMM, and detailed business lineage transformations.

This month we went all out. We sat down and worked hard to bring you MANTA 3.18 as soon as possible, because it wouldn’t have been fair of us to have kept these amazing features to ourselves. Come and join the ride!

Growing Integration Capital

Up until now, MANTA has had a standard API, but as of this release it has a public REST API, which gives users many more options. Through the public API, you can connect MANTA to any app and let it run impact analyses to get data lineage information in CSV or JSON for use in custom analyses.

Speaking of connecting, MANTA can now read both of the previously mentioned IBM databases in IMM and IGC. DB2 and Netezza users, now you can enjoy data lineage at its finest in your own data management solution. And while we were at it, we also improved our Oracle, MSSQL, and Teradata connectors.

Deeper into Business Lineage

Another life-changing feature, one we already introduced in our last release, is business lineage. This time we went back to it and added business lineage transformations. From now on, your business team will see not only where the data is coming from but also what happens to it along the way. This makes MANTA’s business lineage as detailed as physical lineage, but in more businessperson-friendly language.

Last but not least, we have made a few tweaks and fixes to our native visualization and improved export for IBM InfoSphere Information Governance Catalog 11.5. We’ll be more than happy if you let us know what you think!

One Small Step for MANTA, One Big Leap for Mankind

June 30, 2017

Tomas Kratky explores his vision behind MANTA’s new capability to visualize business & logical lineage.

We just recently published a blog post announcing one new feature – MANTA now works not only with physical lineage but with business and logical lineage as well. I was shocked by the intensity of the feedback we got from our customers and partners – they were confused. MANTA has a clear vision: to provide users with the most detailed, accurate, and fully automated data lineage from programming code. We do it because all data-driven organizations need it, because others are afraid to do it, and because we are smart.

New Levels of Lineage

But now we have announced business lineage and everyone has been asking what that means. Is MANTA moving towards being a more general metadata or data governance solution? NOT AT ALL! So why the business and logical lineage? Let me explain a little bit more.

MANTA offers capabilities not covered by other players, capabilities very much needed in any data-intensive environment. But MANTA is not a metadata manager or information catalog. There are other, better-equipped vendors for that, like IBM, Informatica, Collibra, Alation, Adaptive, etc. This means that, with some exceptions, MANTA alone does not meet all of a customer’s metadata-related requirements. But other metadata solutions, when selected, purchased, and deployed by a customer, also fail to meet several critical needs related to metadata accuracy and completeness, especially regarding data processing logic hidden inside programming code. This leads to an inevitable conclusion: MANTA is usually served together with other tool(s).

MANTA: Born To Integrate

Simply said, we live and die with great integrations. We have many prospects out there, since almost everyone will need us sooner or later, but to fully demonstrate our value, we need smooth integration with existing data governance / metadata solutions. We originally started with more technically oriented tools like Informatica Metadata Manager, so physical lineage was the best option. But now more and more customers have Collibra, IBM Information Governance Catalog, Alation, Data3Sixty, or Axon, and they want to see lineage there. But those solutions are not designed to capture and visualize large amounts of data processing metadata. They tend to slow down or even crash with the millions of processing steps you have in your environment.

Automate or Drown

Some vendors in this space don’t even offer automated harvesting capabilities. Some of them do, but in a limited way. So I very often see customers trying to build simple business-level lineage manually. And this is where our unique features come into play. MANTA still harvests physical technical metadata from your programming code but is now also able to use existing business or logical mappings to prepare a different perspective: simplified, with easier-to-understand names and descriptions, but still accurate, complete, and fully automated. It allows us to easily integrate with all the not-so-technical solutions mentioned above. It means less wasted effort and fewer stressful moments for our customers and more prospects for MANTA. I see it as a win-win situation.
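Conceptually, overlaying business mappings onto harvested physical lineage can be sketched as a simple renaming pass. The mapping table and edge format below are assumptions for illustration, not MANTA's internal representation:

```python
# Hypothetical glossary mapping physical column names to business terms.
business_names = {
    "dw.fact_sales.amount": "Sales Amount",
    "mart.revenue.total": "Total Revenue",
}

# Physical lineage edges harvested from code, as (source, target) pairs.
physical_edges = [
    ("dw.fact_sales.amount", "mart.revenue.total"),
]

def to_business_lineage(edges, names):
    """Rename physical columns to business terms; keep unmapped ones as-is."""
    return [(names.get(s, s), names.get(t, t)) for s, t in edges]

print(to_business_lineage(physical_edges, business_names))
# -> [('Sales Amount', 'Total Revenue')]
```

The point is that the business view stays derived from the automatically harvested physical graph, so it remains accurate without anyone maintaining it by hand.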

This article was originally published on Tomas Kratky’s LinkedIn Pulse.

MANTA Introduces Connectors for IBM Netezza and DB2

June 27, 2017

MANTA is swimming deeper into the world of IBM. 

We’ve already mentioned both IBM DB2 and IBM Netezza in our introductory article to the latest version, but maybe it’s time to explain how it all works. Take a look at the picture:

MANTA is great at understanding the logic hidden in programming code, and it can parse:

  • NZPLSQL scripts, stored procedures, and more
  • DB2 scripts, stored procedures, and more
  • Other technologies you might have in your BI

After the initial parsing, MANTA reconstructs the lineage and visualizes it (take a look at the DB2 screenshot!) or pushes it into a third-party metadata solution such as Informatica Metadata Manager (along with other technologies). “But I’ve purchased IBM IGC with my Netezza/DB2 databases!” Say no more, we’ve got you covered.

Get a Boost for Your Information Governance Catalog

Our goal was to create a seamless way to push complete lineage into IGC. MANTA now connects to it naturally and appears there as a new metamodel (called, unsurprisingly, MantaModel). If some of your lineage is missing or hidden in Netezza or DB2 scripts and stored procedures, MANTA is the ultimate solution for your problem.

Take a look at how smooth the integration is (Oracle is used in the video, but it works the same way for DB2 and Netezza). We strongly recommend watching it full screen.

Interested? Then you should know there’s a 30-day free trial and assisted pilot, if your organization requires one. Get in touch with us at manta@getmanta.com or use this form.
