All Topics

MANTA + Scalefree = Data Vault Heaven

August 15, 2017

We have an exciting announcement for all of you! MANTA has teamed up with Scalefree, and exciting things are headed your way! The guys at Scalefree are real pros at building information systems and data vaults, offering a full range of BI solutions.

Data Vault 2.0 is a business intelligence system comprising three (plus one) pillars that are required to successfully build an enterprise data warehouse:

  • A flexible model: designed especially for data warehousing, the Data Vault model is very flexible, easy to extend, and can span across multiple environments, integrating all data from all enterprise sources in a fully auditable and secure model.
  • An agile methodology: because the Data Vault model is easy to extend (with near-zero or zero change impact to existing entities), successful projects choose the Data Vault 2.0 methodology, which is based on Scrum, CMMI Level 5, and other best practices.
  • A reference architecture: spanning the enterprise data warehouse across multiple environments and integrating batch-driven data, near-real-time and actual real-time data streams, and unstructured data sets.

Furthermore, the agile methodology also includes best practices for the actual implementation of the Data Vault model, for deriving the target structures (in many cases, dimensional models, but not limited to those), and for the implementation of the architecture. All implementation patterns have been fine-tuned for high performance over more than 20 years and successfully used to process up to 3 petabytes of data in a U.S. government context (defense/security).

While adapting to change better than virtually any other architecture, Data Vault is also braced for “Big Data” and “NoSQL”. This gives you the same level of efficiency now and in ten years, erasing all worries about the rapidly growing amount of data in your business.

As the inventor of Data Vault and one of Scalefree’s founders, Dan Linstedt is now establishing his concept on the market. In training exclusively certified by him, customers can learn the why, what, and how of Data Vault 2.0.

And where does MANTA come in?

When you want to build a truly perfect data vault model, having strong data lineage is essential. Complete end-to-end lineage gives you insight into the structure and all procedures inside your data warehouse. Now that you know exactly where your data comes from and what data flows it goes through to get all the way to the end table, you can create an accurate data vault model that is applicable in many ways.

Have we aroused your curiosity about how to build and use a data vault in your business? Then be sure to check out the Scalefree Data Vault 2.0 Boot Camps and save yourself a seat. Upcoming training programs will be held in Brussels, New York City (with Dan Linstedt), Vienna, Oslo, Dublin, Santa Clara CA, and Frankfurt.

Manta Goes Public with Its API!

Nowadays, every app, tool and solution needs to be connected to everything else. And MANTA is ready to join the club. 

You Asked for It

Here at MANTA HQ, we’ve been buried with customer requests to add various integration possibilities to Manta Flow. You asked for it! As of version 3.18, MANTA has a public REST API. This new feature, together with multi-level data lineage, gives users the option to use MANTA with all kinds of technologies.

Through the public API, you can connect MANTA to any custom tool or app and let it work with MANTA’s data. How exactly? Take a look at this example:

Let’s say you have your own quality monitoring tool that monitors critical elements of data lineage for you. You could let MANTA export an Excel file and then manually go through all the values, find out what their sources are, and look for changes by hand. But now, thanks to the public API, you can do all of this automatically with your own tool!

Put an End to Boring Manual Reports

The tool can call MANTA’s API, automatically pull out all the critical elements of data lineage, and report the changes found. Now, you can automatically monitor all changes that occur to your data during a given time period, saving you and your company hours of manual labor spent pouring data from MANTA into your own tool.
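In practice, such a monitoring tool can be a short script: fetch the current upstream sources of each critical element through the API and diff them against the last stored snapshot. Here is a minimal Python sketch of the idea; the endpoint URL, path, and response shape are hypothetical illustrations, not MANTA's documented API:

```python
import json
import urllib.request

MANTA_API = "http://manta.example.com/api"  # hypothetical base URL


def fetch_sources(element):
    """Ask MANTA for the upstream sources of one critical data element.
    Endpoint path and response shape are assumptions for illustration."""
    with urllib.request.urlopen(f"{MANTA_API}/lineage/{element}/sources") as resp:
        return json.load(resp)["sources"]


def diff_sources(previous, current):
    """Report which sources were added or removed between two snapshots."""
    prev, curr = set(previous), set(current)
    return {"added": sorted(curr - prev), "removed": sorted(prev - curr)}


# Yesterday's snapshot vs. what fetch_sources("dwh.fact_sales") would return today:
changes = diff_sources(["stage.orders", "stage.customers"],
                       ["stage.orders", "stage.customers_v2"])
# changes == {"added": ["stage.customers_v2"], "removed": ["stage.customers"]}
```

Run on a schedule, the `diff_sources` report replaces the manual Excel comparison entirely.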

And there are many, many other ways you can use our new API!

To learn more about the capabilities of our solution, try a live demo, ask for a trial, or drop us a line.

Return of the Metadata Bubble

July 27, 2017

The bubble around metadata in BI is back – with all its previous sins and even more just around the corner. [LONG READ]

In my view, 2016 and 2017 are definitely the years for metadata management and data lineage specifically. After the first bubble 15 years ago, people were disappointed with metadata. A lot of money was spent on solutions and projects, but expectations were never met (usually because they were not established realistically, as with any other buzzword at its start). Metadata fell into damnation for many years.

But if you look around today, visit a few BI events, or read some blog posts and comments on social networks, you will see metadata everywhere. How is this possible? Simply because metadata has been reborn through the bubble of data governance associated with the big data and analytics hype. Could you imagine any bigger enterprise today without a data governance program running (or at least in its planning phase)? No! Everyone is talking about a business glossary to track their Critical Data Elements, end-to-end data lineage is once again the holy grail (but this time including the Big Data environment), and we get several metadata-related RFPs every few weeks.

Don’t get me wrong, I’m happy about it. I see proper metadata management practice as a critical denominator for the success of any initiative around data. With huge investments flowing into big data today, it is even more important to have proper governance in place. Without it, chaos and lost money – not additional revenue – would be the only outcome of big (and small) data analytics. My point is that even if everything looks promising on the surface, I feel a lot of enterprises have taken the wrong approach. Why?

A) No Numbers Approach

I have heard so often that you can’t demonstrate with numbers how metadata helps an organisation. I couldn’t disagree more. Always start measuring efficiency before you start a data governance/metadata project. How many days does it take, on average, to do an impact analysis? How long does it take, on average, to do an ad-hoc analysis? How long does it take to get a new person onboard – a data analyst, data scientist, developer, architect, etc.? How much time do your senior people spend analysing incidents and errors from testing or production and correcting them? My advice is to focus on one or two important teams and gather data for at least several weeks, or better yet, months. If you aren’t doing it already, you should start immediately.

You should also collect as many “crisis” stories as you can. Such as when a junior employee at a bank mistyped an amount in a source system and a bad $1,000,000 transaction went through. A group of three then spent three weeks tracking it from its source to all its targets and making corrections. Or when a finance company refused to give a customer a big loan and he came to complain five months later. What a surprise when they ran simulations and found out that they were ready to approve his application. A group of two spent another five weeks trying to figure out what exactly had happened, finally discovering that the risk algorithm in use had been changed several times over the last few months. When you factor in the bad publicity related to such an incident, your story is more than solid.

Why all this? Because your numbers let you build a business case up front and, compared with the numbers after the project, demonstrate the efficiency improvements – while those well-known, terrifying stories that caused your organisation so much trouble become your “never want it to happen again” memento.

B) Big Bang Approach

I saw several companies last year that started too broad and expected too much in a very short time. When it comes to metadata and data governance, your vision must be complex and broad, but your execution should be “sliced” – the best approach is simply to move step by step. Data governance usually needs some time to demonstrate its value in reduced chaos and better understanding between people in a company. It is tempting to spend a budget quickly, implement as much functionality as possible, and hope for great success. In most cases, however, it becomes a huge failure. Many good resources are available online on this topic, so I recommend investing your time to read and learn from others’ mistakes first.

I believe the best strategy is to start with the several critical data elements that are used most often. Define their business meaning first, then map your business terms to the real world and use an automated approach to track your data elements at both a business and a technical level. When the first small set of your data elements is mapped, do your best to show their value to others (see the previous section on how to measure efficiency improvements). With that success, your experience with other data sets will be much smoother and easier.

C) Monolithic Approach

You collect all your metadata and data governance related requirements from both business and technical teams, include your management and other key stakeholders, prepare a wonderful RFP, and share it with all the vendors in the top right of the Gartner Data Governance quadrant (or the Forrester Wave if you like it more). You meet well-dressed salespeople and pre-sales consultants, see amazing demonstrations and marketing papers, hear a lot of promises about how all your requirements will be met, pick a solution you like, implement it, and earn your credit. Prrrrr! Wake up! Marketing papers lie most of the time (see my other post on this subject).

Your environment is probably very complex, with hundreds of different and sometimes very old technologies. Metadata and data governance is primarily an integration initiative. To succeed, business and IT have to be brought together – people, systems, processes, technologies. You can see how hard it is, and you may already know it! To be blunt, there is no single product or vendor covering all your needs. Great tools are out there for business users with compliance perspectives, such as Collibra or Data3Sixty; more big-data-friendly information catalogs, such as Alation, Cloudera Navigator, or Waterline Data; and technical metadata managers, such as IBM Governance Catalog, Informatica Metadata Manager, Adaptive, or ASG. Each one of them, of course, overlaps with the others. Smaller vendors then focus on specific areas not covered well by the other players – such as MANTA, with its unique ability to turn your programming code into both technical and business data lineage and integrate it with other solutions.

Metadata is not an easy beast to tame. Don’t make it worse by falling into the “one-size-fits-all” trap.

D) Manual Approach

I meet a lot of large companies ignoring automation when it comes to metadata and data governance. Especially with big data. Almost everyone builds a metadata portal today, but in most cases it is only a very nice information catalog (the same sort you can buy from Collibra, Data3Sixty, or IBM) without proper support for automated metadata harvesting. The “how to get metadata in” problem is solved in a different way. How? Simply by setting up a manual procedure – whoever wants to load a piece of logic into the DWH or data lake has to provide the associated metadata describing its meaning, structures, logic, data lineage, etc. Do you see how tricky this is? On the surface, you will have a lot of metadata collected, but every bit of that information is not reality – it is a perception of reality, and only as good as the information input by a person. What is worse, it will cost you a lot of money to keep it synchronised with the real logic through all updates, upgrades, etc. The history of engineering tells us one fact clearly – any documentation created and maintained manually, especially documentation that is not an integral part of your code/logic, is out of date the very moment it is created.

Sometimes there is a different reason for harvesting metadata manually – typically when you choose a promising DG solution but it turns out that a lot is missing. Such as when your solution of choice cannot extract metadata from programming code and you end up with an expensive tool missing the important pieces of your business and transformation logic. Your only chance is to analyse everything remaining by hand, and that means a lot of expense and a slow, error-prone process.

Most of the time I see a combination of A), C), and D), and in rare cases B) as well. Why is that? I do not know. I have plenty of opinions, but none of them have been substantiated. One thing is for sure: we are doing our best to kill metadata, yet again. This is something I am not ready to accept. Metadata is about understanding, about context, about meaning. Companies like Google and Apple have known this for a long time, which is why they win. The rest of the world is still behind, with compliance and regulations being the most important factors why large companies implement data governance programs.

I am asking every single professional out there to fight for metadata – to explain that measuring is necessary and easy to implement, that small steps are much safer and easier to manage than a big bang, that an ecosystem of integrated tools provides greater coverage of requirements than a huge monolith, and that automation is possible.

Tomas Kratky is the CEO of MANTA, and this article was originally published on his LinkedIn Pulse. Let him know what you think.

MANTA 3.18: We Are Going Public… With Our API! (And More!)

Manta Flow introduces new API, complete DB2 and Netezza in IMM, and detailed business lineage transformations.

This month we went all out. We sat down and worked hard to bring you MANTA 3.18 as soon as possible, because it wouldn’t have been fair of us to have kept these amazing features to ourselves. Come and join the ride!

Growing Integration Capital

Up until now, MANTA has had a standard API, but as of this version it also has a public REST API, which gives users many more options. Through the public API, you can connect MANTA to any app and let it run impact analyses, getting data lineage information in CSV or JSON to use in custom analyses.
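For example, once your app has the JSON edges from an impact analysis, flattening them into CSV for a spreadsheet or a downstream tool takes only a few lines of Python. The edge shape below is a hypothetical illustration, not MANTA's documented response format:

```python
import csv
import io


def lineage_to_csv(edges):
    """Flatten JSON lineage edges (source/target dicts) into CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator="\n")
    writer.writerow(["source", "target"])  # header row
    for edge in edges:
        writer.writerow([edge["source"], edge["target"]])
    return buf.getvalue()


# A single edge as it might arrive from an impact-analysis response:
report = lineage_to_csv([
    {"source": "stage.orders.amount", "target": "dwh.fact_sales.amount"},
])
# report == "source,target\nstage.orders.amount,dwh.fact_sales.amount\n"
```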

Speaking of connecting, MANTA can now read both of the previously mentioned IBM databases in IMM and IGC. DB2 and Netezza users, now you can enjoy data lineage at its finest in your own data management solution. And while we were at it, we also improved our Oracle, MSSQL, and Teradata connectors.

Deeper into Business Lineage

Another life-changing feature, which we introduced in our last release, is business lineage. This time we went back to it and added business lineage transformations. From now on, your business team will not only see where the data is coming from but also what happens to it along the way. This makes MANTA’s business lineage as detailed as physical lineage, but in more businessperson-friendly language.

Last but not least, we have made a few tweaks and fixes to our native visualization and improved export for IBM InfoSphere Information Governance Catalog 11.5. We’ll be more than happy if you let us know what you think!

One Small Step for MANTA, One Big Leap for Mankind

June 30, 2017

Tomas Kratky explores his vision behind MANTA’s new capability to visualize business & logical lineage.

We just recently published a blog post announcing one new feature – MANTA now works not only with physical lineage but with business and logical lineage as well. I was shocked by the intensity of the feedback we got from our customers and partners – they were confused. MANTA has a clear vision to provide users with the most detailed, accurate, and fully automated data lineage from programming code. We do it because all data-driven organizations need it, because others are afraid to do it, and because we are smart.

New Levels of Lineage

But now we have announced business lineage and everyone has been asking what that means. Is MANTA moving towards being a more general metadata or data governance solution? NOT AT ALL! So why the business and logical lineage? Let me explain a little bit more.

MANTA offers capabilities not covered by other players, capabilities very much needed in any data-intensive environment. But MANTA is not a metadata manager or an information catalog. There are other, better-equipped vendors for that, like IBM, Informatica, Collibra, Alation, Adaptive, etc. This means that, with some exceptions, MANTA alone does not meet all of a customer’s metadata-related requirements. But other metadata solutions, when selected, purchased, and deployed by a customer, also fail to meet several critical needs related to metadata accuracy and completeness, especially regarding data processing logic hidden inside programming code. This leads to an inevitable conclusion – MANTA is usually served together with other tools.

MANTA: Born To Integrate

Simply said, we live and die by great integrations. We have many prospects out there, since almost everyone will need us sooner or later, but to fully demonstrate our value, we need smooth integration with existing data governance / metadata solutions. We originally started with more technically oriented tools like Informatica Metadata Manager, so physical lineage was the best option. But now more and more customers have Collibra, IBM Information Governance Catalog, Alation, Data3Sixty, or Axon, and they want to see lineage there. Those solutions, however, are not designed to capture and visualize large amounts of data processing metadata. They tend to slow down or even crash with the millions of processing steps you have in your environment.

Automate or Drown

Some vendors in this space don’t even offer automated harvesting capabilities. Some of them do, but in a limited way. So I very often see customers trying to build simple business-level lineage manually. And this is where our unique features come into play. MANTA still harvests physical technical metadata from your programming code but is now also able to use existing business or logical mappings to prepare a different perspective – simplified, with easier-to-understand names and descriptions, but still accurate, complete, and fully automated. This allows us to easily integrate with all the not-so-technical solutions mentioned above. It means less wasted effort and fewer stressful moments for our customers, and more prospects for MANTA. I see it as a win-win situation.

This article was originally published on Tomas Kratky’s LinkedIn Pulse.

MANTA Introduces Connectors for IBM Netezza and DB2

June 27, 2017

MANTA is swimming deeper into the world of IBM. 

We’ve already mentioned both IBM DB2 and IBM Netezza in our introductory article on the latest version, but maybe it’s time to explain how it all works. Take a look at the picture:

Manta is great at understanding logic hidden in programming code and it can parse:

  • NZPLSQL scripts, stored procedures, and more
  • DB2 scripts, stored procedures, and more
  • Other technologies you might have in your BI

After the initial parsing, Manta reconstructs the lineage and visualizes it (take a look at the DB2 screenshot!) or pushes it into a 3rd party metadata solution – such as Informatica Metadata Manager (along with other technologies). “But I’ve purchased IBM IGC with my Netezza/DB2 databases!” Say no more, we’ve got you covered.

Get a Boost for Your Information Governance Catalog

Our goal was to create a seamless way to push complete lineage into IGC. Manta is now able to naturally connect to it and is simply present as a new metamodel (called, unsurprisingly, MantaModel). If some of your lineage is missing or hidden in Netezza or DB2 scripts and stored procedures, Manta is the ultimate solution for your problem.

Take a look at how smooth the integration is (Oracle is used in the video, but it works the same way for DB2 and Netezza). We strongly recommend watching it full screen.

Interested? Then you should know there’s a 30-day free trial and an assisted pilot, if your organization requires one. Get in touch with us or use this form.

Trust Your Lineage, All of It

Can you trust your physical, business and logical lineage? Manta introduces support for many different levels of data lineage abstraction. 

When it comes to data, trust is always key, and getting a complete overview of the data flows in your system is necessary to get that trust back. Mapping a complex BI system at multiple levels of abstraction is almost impossible, and while many different tools provide physical (technical), business, and logical lineage, that lineage is only good when it is complete. Like, totally complete. Now, different types of lineage won’t mess things up anymore.

Physical Lineage Is the Key

At Manta, we’re good at getting detailed and accurate physical data lineage from the logic hidden in your programming code. Never mind SQL overrides, manually defined procedures, stored procedures – MANTA will just map it all. On top of that, we are now able to include different levels of lineage, each backed by the original physical lineage, so it’s 100% accurate. And it’s all fully automatic, so there’s no manual labor necessary. Here is how it works:

First, MANTA does what it is best in the world at – mapping detailed and accurate physical data lineage from the logic hidden in your programming code.

Second, it loads an external mapping between physical and logical or business objects (such as a business name mapped to a specific table/column) from the available sources and ties it to the rendered physical lineage.

Third, it uses mapped objects (business, logical) to transform existing physical lineage so it is better aligned with the objects provided (i.e. no detailed technical transformations for business objects but rather simplified descriptions). The result is accurate, trustworthy lineage of any kind – based in reality and yet useful for everyone who needs to understand the specific level of abstraction.
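Conceptually, the third step is a relabel-and-collapse pass over the physical edge list: rename each endpoint to its business object and drop edges that fold into a single object. A minimal Python sketch of the idea; the table/column names and the flat name-mapping are hypothetical illustrations, not MANTA's actual implementation:

```python
def to_business_lineage(physical_edges, mapping):
    """Relabel physical column-level edges with business names and drop
    edges whose endpoints collapse into the same business object."""
    business = set()
    for src, dst in physical_edges:
        b_src, b_dst = mapping.get(src, src), mapping.get(dst, dst)
        if b_src != b_dst:  # skip intra-object hops (pure technical detail)
            business.add((b_src, b_dst))
    return sorted(business)


# Two physical hops collapse into one business-level flow:
edges = [("stg.cust.name", "dwh.dim_customer.name"),
         ("dwh.dim_customer.name", "rpt.sales.customer")]
names = {"stg.cust.name": "Customer Name",
         "dwh.dim_customer.name": "Customer Name",
         "rpt.sales.customer": "Customer (Sales Report)"}
print(to_business_lineage(edges, names))
# [('Customer Name', 'Customer (Sales Report)')]
```

The simplification is what makes the result readable for business users while each surviving edge is still grounded in a real physical flow.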

Any Lineage, Any Source

It does not matter which technology your physical lineage is from (check out our list of supported technologies!). It also does not matter how you provide the initial business/logical to physical object mapping – it could be:

  • your favorite business glossary
  • a data modelling platform
  • plain ol’ Excel spreadsheets
  • virtually any other structured data format
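Whatever the source, the mapping boils down to pairs of physical identifiers and business names. For instance, a plain CSV export from a spreadsheet can be parsed into a lookup table in a couple of lines; this is a hypothetical sketch, and the column headers are made up for illustration:

```python
import csv
import io


def load_mapping(csv_text):
    """Parse 'physical,business' rows (e.g. exported from Excel)
    into a physical-name -> business-name lookup dict."""
    return {row["physical"]: row["business"]
            for row in csv.DictReader(io.StringIO(csv_text))}


sheet = "physical,business\nstg.cust.name,Customer Name\n"
mapping = load_mapping(sheet)
# mapping == {"stg.cust.name": "Customer Name"}
```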


From an Excel Sheet to a Data Governance Solution

Our main goal is to connect and share information with other tools and solutions in our customers’ BI. That’s why we are not only able to pull metadata from other tools, but MANTA can easily push everything back into 3rd party solutions. Want an example? At one of our successful implementations, MANTA:

  1. Loaded business lineage mapping from Collibra’s Data Governance Center
  2. Combined it with complete physical lineage from actual code
  3. Pushed lineage back to Collibra, ensuring that the lineage was complete and functional

Additionally, MANTA is always capable of visualizing everything in its own visualization tool – feel free to take a look at our introductory video:

To learn more about MANTA, simply get in touch and ask for a 30-day free trial!

The Dark Side of the Metadata & Data Lineage World

June 10, 2017

You wouldn’t believe it, but there is a dark side to the metadata & data lineage world as well. Tomas Kratky digs deep and explains how you can get into trouble. 

It has been a wonderful spring this year, hasn’t it? The first months of 2017 were hot for us. Data governance, metadata, and data lineage are everywhere. Everyone is talking about them, everyone is looking for a solution. It’s an amazing time. But there is also the other side, the dark side.

The Reality of Metadata Solutions

As we meet more and more large companies, industry experts and analysts, investors, and data professionals, we see a huge gap between their perception of reality and reality itself. What am I talking about? The end-to-end data lineage ghost. With data being used to make decisions every single day, with regulators like FINRA, the Fed, the SEC, the FCC, and the ECB requesting reports, and with initiatives like BCBS 239 or GDPR (the new European data protection regulation), proper governance and a detailed understanding of the data environment is a must for every enterprise. And E2E (end-to-end) data lineage has become a great symbol of this need. Every metadata/data governance player on the market is talking about it, and their marketing is full of wonderful promises (in the end, that is the main purpose of every marketing leaflet, isn’t it?). But what’s the reality?

The Automated Baby Beast

The truth is that E2E data lineage is a very tough beast to tame. Just imagine how many systems and data sources you have in your organization, how much data processing logic, how many ETL jobs, how many stored procedures, how many lines of programming code, how many reports, how many ad-hoc Excel sheets, etc. It is huge. Overwhelming!

If your goal is to track every single piece of data and to record every single processing step, every “hop” of data flow through your organization, you have a lot of work to do. And even if you split your big task into smaller ones and start with selected data sets (so-called “critical data elements”) one by one, it can still be so exhausting that you will never finish or even really start. And now data governance players have come in with gorgeous promises packaged in one single word – AUTOMATION.

The promise itself is quite simple to explain – their solutions will analyze all data sources and systems, every single piece of logic, extract metadata from them (so-called metadata harvesting), link it up (so-called metadata stitching), store it, and make it accessible to analysts, architects, and other users through best-in-class, award-winning user interfaces. And all of this through automation. No manual work necessary, or just a little bit. It is so tempting that you open up to it; you want to believe. And so you buy the tool. And then the fun part starts.

The Machine Built to Fail

Almost nothing works as expected. But somehow you progress with the help of hired (and usually overpriced) experienced consultants. Databases (tables, columns) are there, your nice, graphically created ETL jobs are there, your first simple reports too, but hey! There is something missing! Why? Simply because you used a nasty, complex SQL statement in your beautiful Cognos report. And you used another one when you were not satisfied with the performance of one Informatica PowerCenter job. And hey! Here the lineage is completely broken! Why is THAT? Hmmm, it seems you decided to write some logic inside stored procedures instead of drawing a terrifying ETL workflow, simply because it was so much easier with all those advanced Oracle features. OK, I believe you’ve got it. Different kinds of SQL code (and not just SQL but also Java, C, Python, and many others) are everywhere in your BI environment. Usually, there are millions and millions of lines of code. And unfortunately (at least for all metadata vendors), programming code is super tough to analyze and extract the necessary metadata from. But without it, there is no E2E data lineage.

At this moment, marketing leaflets hit the wall of reality. As of today, we have met a lot of enterprises but only very few solutions capable of automated metadata extraction from SQL programming code. So what do most big vendors (or the big system integrators implementing their solutions) usually do in this situation? Simply finish the rest of the work manually. Yes, you heard me! No automation anymore. Just good old manual labor. But you know what – it can be quite expensive. For example, a year ago we helped one of our customers reduce the time needed to “finish” their metadata project from four months to just one week! They were ready to invest the time of five smart guys, four months per person, to manually analyze hundreds and hundreds of BTEQ scripts, extract metadata from them, and store it in the existing metadata tool. In the United States, we typically meet clients with several hundred thousand database scripts and stored procedures. That’s sooo many! Who is going to pay for that? The vendor? The system integrator? No, you know the answer. In most cases, the customer is the one who has to pay for it.

Know Your Limits

I have been traveling a lot over the last few weeks and have met a lot of people, mostly investors and industry analysts, but also a few professionals. And I was amazed by how little they know about the real capabilities and limitations of existing solutions. Don’t get me wrong, I think those big guys do a great job. You can’t imagine how hard it is to provide a really easy-to-use metadata or data governance solution. There are so many different stakeholders, needs, and requirements. I admire those big ones. But that should not mean we close our eyes and pretend those solutions have no limitations. They have limitations, and fortunately the big guys, or at least some of them, have finally realized that it is much better to provide an open API and allow third parties like Manta to integrate and fill the existing gaps. I love the way IBM and Collibra have opened their platforms, and I feel that others will soon follow.

How can you protect yourself as a customer? Simply conduct proper testing before you buy. Programming code in BI is so often ignored, maybe because it is very low-level and typically not the main topic of discussion among C-level guys. (But there are exceptions – just recently I met a wonderful CDO of a huge US bank who knows everything about the scripts and code they have inside their BI. It was so enlightening, after quite a long time.) It is also very hard to choose a reasonable subset of the data environment for testing. But you must do it properly if you want to be sure about the software you are going to buy. With proper testing, you will realize much sooner that there are limitations, and you will start looking for a solution to them up front, not in the middle of your project, already behind schedule and with a C-level guy breathing down your neck.

It is time to admit that marketing leaflets lie in most cases (oh, sorry, they just add a little bit of color to the truth), and you must properly test every piece of software you want to buy. Your life will be much easier and success stories of nicely implemented metadata projects won’t be so scarce.

Originally published on Tomas Kratky’s LinkedIn profile.

MANTA + Informatica

MANTA completes Informatica to form a comprehensive metadata management platform. But how precisely does the bond work? [VIDEO BELOW ↓ ]

Trust Through Understanding

Our product allows you to trust the data in your BI environment because it specializes in cracking SQL code and fills the gaps in metadata management solutions, whether those solutions were deployed to fulfill compliance regulations or to form the backbone of your data governance efforts. The Informatica Metadata Management solution has a rudimentary capability to parse SQL, but, in our experience with customers over the years, blind spots remain here and there.

MANTA connects to IMM through XConnect (native API plugin), and enriches the metadata model of IMM with missing pieces of data lineage:

  • BTEQ scripts, stored procedures, views, and macros from Teradata
  • PL/SQL scripts, stored procedures, packages, and more, including DB links, from Oracle DB & Exadata
  • T-SQL scripts, stored procedures, and more, including linked servers, from Microsoft SQL Server, Sybase (now SAP ASE), and PDW
  • NZPLSQL scripts, stored procedures, and more from IBM Netezza
  • DB2 scripts, stored procedures, and more from IBM DB2
  • SQL overrides from Informatica PowerCenter, Cognos and Microsoft SQL Server Reporting Services

It fills in the gaps in data flows and allows our customers to get end-to-end data lineage (including those pesky indirect data flows).
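To illustrate what column-level lineage derived from SQL looks like, here is a deliberately simplified sketch. It is not MANTA’s actual parser (a real lineage parser handles full SQL grammars, expressions, subqueries, and dialect quirks); it only shows the idea of turning one `INSERT INTO ... SELECT` pattern into source-to-target column edges:

```python
import re

def toy_lineage(sql: str):
    """Extract column-level lineage edges from a simple
    INSERT INTO target (cols) SELECT cols FROM source statement.
    This toy covers just one statement shape for illustration."""
    m = re.match(
        r"INSERT\s+INTO\s+(\w+)\s*\(([^)]*)\)\s*"
        r"SELECT\s+(.*?)\s+FROM\s+(\w+)",
        sql, re.IGNORECASE | re.DOTALL)
    if not m:
        return []
    target, tgt_cols, src_cols, source = m.groups()
    tgt_cols = [c.strip() for c in tgt_cols.split(",")]
    src_cols = [c.strip() for c in src_cols.split(",")]
    # One lineage edge per target column: source.col -> target.col
    return [(f"{source}.{s}", f"{target}.{t}")
            for s, t in zip(src_cols, tgt_cols)]

edges = toy_lineage(
    "INSERT INTO dim_customer (cust_id, cust_name) "
    "SELECT id, name FROM stg_customer")
print(edges)
# [('stg_customer.id', 'dim_customer.cust_id'),
#  ('stg_customer.name', 'dim_customer.cust_name')]
```

Chained across thousands of scripts, procedures, and SQL overrides, edges like these are what add up to end-to-end lineage.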

Here is a live shot of MANTA + IMM; fullscreen viewing is recommended.

Beyond Lineage

MANTA’s ultimate goal is to understand the semantics of the code it analyzes. If you have ever thought about advanced performance tuning, real data protection analysis, automated business lineage extraction, or migration of your code base to a different platform, MANTA is exactly what you need.

For customers who do not have Informatica Metadata Manager but do enjoy the advanced ETL capabilities of Informatica PowerCenter, we can also provide data lineage in our own visualization or in IBM IGC. MANTA’s open API allows customers to push metadata to third-party tools such as Collibra, Adaptive, Alation, Axon, and others.

Faster and Cheaper

The impressive part is how fast MANTA works. It can map a BI environment at a speed no human workforce can match, which means you save quite a lot of money on man-hours.

If you would like a more detailed explanation of MANTA + Informatica, and how it can help with your current situation, be sure to contact one of our specialists! They can discuss your situation in detail and give you a better idea of how MANTA can help with your data governance efforts.

MANTA 3.17: Say Hello to Netezza, Business Data Lineage, and Much More

It’s a bird! It’s a plane! It’s MANTA Flow 3.17 coming your way!

When we said (and we always say this) that we would expand the list of our supported technologies, we weren’t lying. MANTA 3.17 comes with support for IBM Netezza and IBM DB2, provides business data lineage, and much more!

The biggest news among big news is that MANTA now supports Netezza. As always, pioneers are wanted and needed (ask for your trial). The connector can do everything we taught it to, so if anyone out there has a Netezza solution and wants their data lineage all nice and clean, we’ve got you covered.

This release also marks the debut of a DB2 connector, now (almost) in stock. It’s no secret that we still have some plans for that one, so stay tuned. And because the Parser Team rocks, we have improved all of our parsers; we can proudly say that every new version of MANTA brings more and more improvements.

Did you know that complete business data lineage is now a thing? Yes, it is! And MANTA can provide that for you!

MANTA combines the physical metadata from its existing connectors with business terms provided by the user, and from that, MANTA Flow can give you end-to-end business data lineage. We are the first to provide data lineage that suits the needs of your business users as well as your BI team, built from information your company already owns.

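Conceptually, combining physical lineage with business terms can be pictured as relabeling physical lineage edges through a user-supplied glossary. The data shapes below are assumptions for illustration, not MANTA’s real metadata model:

```python
# Physical column-level lineage edges, as a lineage scanner might emit them.
physical_edges = [
    ("stg_customer.id", "dim_customer.cust_id"),
    ("dim_customer.cust_id", "report.customer_key"),
]

# User-supplied glossary mapping physical columns to business terms.
glossary = {
    "stg_customer.id": "Customer ID (staging)",
    "dim_customer.cust_id": "Customer ID",
    "report.customer_key": "Customer ID (reporting)",
}

def business_lineage(edges, terms):
    """Relabel each physical edge with business terms, dropping
    self-loops and duplicates so only business-level flow remains."""
    out = []
    for src, dst in edges:
        s, d = terms.get(src, src), terms.get(dst, dst)
        if s != d and (s, d) not in out:
            out.append((s, d))
    return out

print(business_lineage(physical_edges, glossary))
# [('Customer ID (staging)', 'Customer ID'),
#  ('Customer ID', 'Customer ID (reporting)')]
```

The point of the sketch: no new information is invented; the business view is derived entirely from metadata the company already owns.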
And as usual, that’s not it. We have added some cool features to MANTA, e.g. searching in source code, and much more.
